
Python keras.callbacks.CSVLogger: typical usage and code examples


This article collects typical usage examples of the Python method keras.callbacks.CSVLogger. If you are wondering what callbacks.CSVLogger does, how to call it, or what real usage looks like, the curated code samples below should help. You can also explore other usage examples from the same module, keras.callbacks.

Below are 22 code examples of callbacks.CSVLogger, sorted by popularity by default.
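Before diving into the examples, it helps to know what CSVLogger actually does: after every epoch it appends one CSV row containing the epoch index and the logged metric values. The following is a minimal, dependency-free sketch of that behavior (the `MiniCSVLogger` class is a hypothetical stand-in for illustration, not the actual Keras implementation):

```python
import csv
import io

class MiniCSVLogger:
    """Simplified sketch of keras.callbacks.CSVLogger: write a header on the
    first epoch, then append one row per epoch with the logged metrics.
    (Hypothetical helper for illustration only.)"""

    def __init__(self, stream, separator=","):
        self.stream = stream
        self.separator = separator
        self.writer = None

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        if self.writer is None:
            # Fix the column order from the first epoch's metric names.
            fieldnames = ["epoch"] + sorted(logs)
            self.writer = csv.DictWriter(self.stream, fieldnames=fieldnames,
                                         delimiter=self.separator)
            self.writer.writeheader()
        self.writer.writerow({"epoch": epoch, **logs})

buf = io.StringIO()
logger = MiniCSVLogger(buf)
logger.on_epoch_end(0, {"loss": 0.9, "val_loss": 1.1})
logger.on_epoch_end(1, {"loss": 0.5, "val_loss": 0.7})
print(buf.getvalue())
```

In real code you simply pass `CSVLogger('log.csv')` in the `callbacks` list of `fit`/`fit_generator`, as every example below shows.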

Example 1: resume_train

# Required import: from keras import callbacks
# or: from keras.callbacks import CSVLogger
def resume_train(self, category, pretrainModel, modelName, initEpoch, batchSize=8, epochs=20):
    self.modelName = modelName
    self.load_model(pretrainModel)
    refineNetflag = True
    self.nStackNum = 2

    modelPath = os.path.dirname(pretrainModel)

    trainDt = DataGenerator(category, os.path.join("../../data/train/Annotations", "train_split.csv"))
    trainGen = trainDt.generator_with_mask_ohem(graph=tf.get_default_graph(), kerasModel=self.model,
                                                batchSize=batchSize, inputSize=(self.inputHeight, self.inputWidth),
                                                nStackNum=self.nStackNum, flipFlag=False, cropFlag=False)

    normalizedErrorCallBack = NormalizedErrorCallBack("../../trained_models/", category, refineNetflag, resumeFolder=modelPath)

    csvlogger = CSVLogger(os.path.join(normalizedErrorCallBack.get_folder_path(),
                                       "csv_train_" + self.modelName + "_" + str(
                                           datetime.datetime.now().strftime('%H:%M')) + ".csv"))

    self.model.fit_generator(initial_epoch=initEpoch, generator=trainGen, steps_per_epoch=trainDt.get_dataset_size() // batchSize,
                             epochs=epochs, callbacks=[normalizedErrorCallBack, csvlogger])

Author: yuanyuanli85 | Project: FashionAI_KeyPoint_Detection_Challenge_Keras | Lines: 24

Example 2: train

# Required import: from keras import callbacks
# or: from keras.callbacks import CSVLogger
def train():
    data = load_train_data()
    data = data.reshape((data.shape[0], data.shape[1], data.shape[2], 1))
    data = data.astype('float32') / 255.0
    # model selection
    if args.pretrain: model = load_model(args.pretrain, compile=False)
    else:
        if args.model == 'DnCNN': model = models.DnCNN()
    # compile the model
    model.compile(optimizer=Adam(), loss=['mse'])
    # use callback functions
    ckpt = ModelCheckpoint(save_dir + '/model_{epoch:02d}.h5', monitor='val_loss',
                           verbose=0, period=args.save_every)
    csv_logger = CSVLogger(save_dir + '/log.csv', append=True, separator=',')
    lr = LearningRateScheduler(step_decay)
    # train
    history = model.fit_generator(train_datagen(data, batch_size=args.batch_size),
                                  steps_per_epoch=len(data) // args.batch_size, epochs=args.epoch, verbose=1,
                                  callbacks=[ckpt, csv_logger, lr])
    return model

Author: husqin | Project: DnCNN-keras | Lines: 25

Example 3: get_callbacks

# Required import: from keras import callbacks
# or: from keras.callbacks import CSVLogger
def get_callbacks(model_file, initial_learning_rate=0.0001, learning_rate_drop=0.5, learning_rate_epochs=None,
                  learning_rate_patience=50, logging_file="training.log", verbosity=1,
                  early_stopping_patience=None):
    callbacks = list()
    callbacks.append(ModelCheckpoint(model_file, monitor='val_acc', save_best_only=True, verbose=verbosity, save_weights_only=True))
    # callbacks.append(ModelCheckpoint(model_file, save_best_only=True, save_weights_only=True))
    callbacks.append(CSVLogger(logging_file, append=True))
    if learning_rate_epochs:
        callbacks.append(LearningRateScheduler(partial(step_decay, initial_lrate=initial_learning_rate,
                                                       drop=learning_rate_drop, epochs_drop=learning_rate_epochs)))
    else:
        callbacks.append(ReduceLROnPlateau(factor=learning_rate_drop, patience=learning_rate_patience,
                                           verbose=verbosity))
    if early_stopping_patience:
        callbacks.append(EarlyStopping(verbose=verbosity, patience=early_stopping_patience))
    return callbacks

Author: wcfzl | Project: 3D-CNNs-for-Liver-Classification | Lines: 18
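Examples 3 and 11 pass a `step_decay` function into `LearningRateScheduler` via `functools.partial`, but the function itself is defined elsewhere in those projects. A plausible sketch of such a step-decay schedule (hypothetical; the projects' actual implementations may differ), using the same `initial_lrate`/`drop`/`epochs_drop` parameter names the examples bind with `partial`:

```python
import math
from functools import partial

def step_decay(epoch, initial_lrate=0.0001, drop=0.5, epochs_drop=10):
    """Multiply the learning rate by `drop` once every `epochs_drop` epochs."""
    return initial_lrate * math.pow(drop, math.floor(epoch / epochs_drop))

# Same pattern as the example: fix the hyperparameters with partial,
# leaving only `epoch` for LearningRateScheduler to supply each epoch.
schedule = partial(step_decay, initial_lrate=0.001, drop=0.5, epochs_drop=10)
lrs = [schedule(e) for e in (0, 9, 10, 25)]
```

With these settings the rate stays at 0.001 for the first ten epochs, then halves every ten epochs thereafter.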

Example 4: main

# Required import: from keras import callbacks
# or: from keras.callbacks import CSVLogger
def main(rootdir, case, results):
    train_x, train_y, valid_x, valid_y, test_x, test_y = get_data(args.dataset, case)
    input_shape = (train_x.shape[1], train_x.shape[2])
    num_class = train_y.shape[1]

    if not os.path.exists(rootdir):
        os.makedirs(rootdir)

    filepath = os.path.join(rootdir, str(case) + '.hdf5')
    saveto = os.path.join(rootdir, str(case) + '.csv')
    optimizer = Adam(lr=args.lr, clipnorm=args.clip)
    pred_dir = os.path.join(rootdir, str(case) + '_pred.txt')

    if args.train:
        model = creat_model(input_shape, num_class)
        early_stop = EarlyStopping(monitor='val_acc', patience=15, mode='auto')
        reduce_lr = ReduceLROnPlateau(monitor='val_acc', factor=0.1, patience=5, mode='auto', cooldown=3., verbose=1)
        checkpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_best_only=True, mode='auto')
        csv_logger = CSVLogger(saveto)
        if args.dataset == 'NTU' or args.dataset == 'PKU':
            callbacks_list = [csv_logger, checkpoint, early_stop, reduce_lr]
        else:
            callbacks_list = [csv_logger, checkpoint]
        model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=['accuracy'])
        model.fit(train_x, train_y, validation_data=[valid_x, valid_y], epochs=args.epochs,
                  batch_size=args.batch_size, callbacks=callbacks_list, verbose=2)

    # test
    model = creat_model(input_shape, num_class)
    model.load_weights(filepath)
    model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=['accuracy'])
    scores = get_activation(model, test_x, test_y, pred_dir, VA=10, par=9)
    results.append(round(scores, 2))

Author: microsoft | Project: View-Adaptive-Neural-Networks-for-Skeleton-based-Human-Action-Recognition | Lines: 37

Example 5: train

# Required import: from keras import callbacks
# or: from keras.callbacks import CSVLogger
def train(model, data, args):
    """
    Training a CapsuleNet
    :param model: the CapsuleNet model
    :param data: a tuple containing training and testing data, like `((x_train, y_train), (x_test, y_test))`
    :param args: arguments
    :return: The trained model
    """
    (x_train, y_train), (x_test, y_test) = data

    log = callbacks.CSVLogger(args.save_dir + '/log.csv')
    checkpoint = callbacks.ModelCheckpoint(args.save_dir + '/weights-{epoch:02d}.h5', monitor='val_capsnet_acc',
                                           save_best_only=False, save_weights_only=True, verbose=1)
    lr_decay = callbacks.LearningRateScheduler(schedule=lambda epoch: args.lr * (args.lr_decay ** epoch))

    model.compile(optimizer=optimizers.Adam(lr=args.lr),
                  loss=[margin_loss, 'mse'],
                  loss_weights=[1., args.lam_recon],
                  metrics={'capsnet': 'accuracy'})

    def train_generator(x, y, batch_size, shift_fraction=0.):
        train_datagen = ImageDataGenerator(width_shift_range=shift_fraction,
                                           height_shift_range=shift_fraction)
        generator = train_datagen.flow(x, y, batch_size=batch_size)
        while 1:
            x_batch, y_batch = generator.next()
            yield ([x_batch, y_batch], [y_batch, x_batch])

    model.fit_generator(generator=train_generator(x_train, y_train, args.batch_size, args.shift_fraction),
                        steps_per_epoch=int(y_train.shape[0] / args.batch_size),
                        epochs=args.epochs,
                        shuffle=True,
                        validation_data=[[x_test, y_test], [y_test, x_test]],
                        callbacks=snapshot.get_callbacks(log, model_prefix=model_prefix))

    model.save_weights(args.save_dir + '/trained_model.h5')
    print('Trained model saved to \'%s/trained_model.h5\'' % args.save_dir)
    return model

Author: vinojjayasundara | Project: textcaps | Lines: 41

Example 6: train

# Required import: from keras import callbacks
# or: from keras.callbacks import CSVLogger
def train(train_set, val_set, cfg, config_name, resume, model_path):
    if not (model_path is None):
        if resume:
            print("Loading compiled model: " + model_path)
            model = keras.models.load_model(model_path, compile=True)
        else:
            print("Loading uncompiled model: " + model_path)
            model = keras.models.load_model(model_path, compile=False)
            model = compile_model(model, cfg["model"])
    else:
        print("Loading the network..")
        model = load_model(cfg["model"])

    csv_logger = CSVLogger('checkpoint/' + config_name +
                           '-training.log', append=resume)
    save_ckpt = ModelCheckpoint("checkpoint/weights.{epoch:02d}-{val_loss:.2f}" + config_name + ".hdf5", monitor='val_loss',
                                verbose=1,
                                save_best_only=True,
                                period=1)
    early_stopping = EarlyStopping(monitor='val_loss',
                                   min_delta=0,
                                   patience=5,
                                   verbose=0, mode='auto')
    lr_schedule = ReduceLROnPlateau(
        monitor='val_loss', factor=0.1, patience=3, verbose=1, mode='auto', min_lr=10e-7)

    callback_list = [save_ckpt, early_stopping, lr_schedule, csv_logger]

    print("Start the training..")
    model.fit_generator(train_set,
                        epochs=cfg["nb_epoch"],
                        callbacks=callback_list,
                        validation_data=val_set,
                        workers=cfg["workers"],
                        use_multiprocessing=cfg["use_multiprocessing"],
                        shuffle=True
                        )

Author: qlemaire22 | Project: speech-music-detection | Lines: 43

Example 7: train

# Required import: from keras import callbacks
# or: from keras.callbacks import CSVLogger
def train(self, category, batchSize=8, epochs=20, lrschedule=False):
    trainDt = DataGenerator(category, os.path.join("../../data/train/Annotations", "train_split.csv"))
    trainGen = trainDt.generator_with_mask_ohem(graph=tf.get_default_graph(), kerasModel=self.model,
                                                batchSize=batchSize, inputSize=(self.inputHeight, self.inputWidth),
                                                nStackNum=self.nStackNum, flipFlag=False, cropFlag=False)

    normalizedErrorCallBack = NormalizedErrorCallBack("../../trained_models/", category, True)

    csvlogger = CSVLogger(os.path.join(normalizedErrorCallBack.get_folder_path(),
                                       "csv_train_" + self.modelName + "_" + str(datetime.datetime.now().strftime('%H:%M')) + ".csv"))

    xcallbacks = [normalizedErrorCallBack, csvlogger]

    self.model.fit_generator(generator=trainGen, steps_per_epoch=trainDt.get_dataset_size() // batchSize,
                             epochs=epochs, callbacks=xcallbacks)

Author: yuanyuanli85 | Project: FashionAI_KeyPoint_Detection_Challenge_Keras | Lines: 17

Example 8: train_gan

# Required import: from keras import callbacks
# or: from keras.callbacks import CSVLogger
def train_gan(dataf):
    gen, disc, gan = build_networks()
    # Uncomment these, if you want to continue training from some snapshot.
    # (or load pretrained generator weights)
    #load_weights(gen, Args.genw)
    #load_weights(disc, Args.discw)
    logger = CSVLogger('loss.csv')  # yeah, you can use callbacks independently
    logger.on_train_begin()  # initialize csv file
    with h5py.File(dataf, 'r') as f:
        faces = f.get('faces')
        run_batches(gen, disc, gan, faces, logger, range(5000))
    logger.on_train_end()

Author: forcecore | Project: Keras-GAN-Animeface-Character | Lines: 16

Example 9: train_model

# Required import: from keras import callbacks
# or: from keras.callbacks import CSVLogger
def train_model(self, train_generator, steps_per_epoch=None, epochs=1, validation_generator=None,
                validation_steps=None, workers=1, use_multiprocessing=False, shuffle=True, initial_epoch=0,
                save_history=False, save_model_per_epoch=False):
    saved_items_dir = os.path.join(os.path.dirname(__file__), os.pardir, 'saved_items')
    if not os.path.exists(saved_items_dir):
        os.makedirs(saved_items_dir)

    callbacks = []
    if save_history:
        history_file = os.path.join(saved_items_dir, 'history')
        csv_logger = CSVLogger(history_file, append=True)
        callbacks.append(csv_logger)
    if save_model_per_epoch:
        save_model_file = os.path.join(saved_items_dir, 'bidaf_{epoch:02d}.h5')
        checkpointer = ModelCheckpoint(filepath=save_model_file, verbose=1)
        callbacks.append(checkpointer)

    history = self.model.fit_generator(train_generator, steps_per_epoch=steps_per_epoch, epochs=epochs,
                                       callbacks=callbacks, validation_data=validation_generator,
                                       validation_steps=validation_steps, workers=workers,
                                       use_multiprocessing=use_multiprocessing, shuffle=shuffle,
                                       initial_epoch=initial_epoch)
    if not save_model_per_epoch:
        self.model.save(os.path.join(saved_items_dir, 'bidaf.h5'))
    return history, self.model

Author: ParikhKadam | Project: bidaf-keras | Lines: 31

Example 10: pretrain

# Required import: from keras import callbacks
# or: from keras.callbacks import CSVLogger
def pretrain(self, x, y=None, optimizer='adam', epochs=200, batch_size=256, save_dir='results/temp'):
    print('...Pretraining...')
    self.autoencoder.compile(optimizer=optimizer, loss='mse')

    csv_logger = callbacks.CSVLogger(save_dir + '/pretrain_log.csv')
    cb = [csv_logger]
    if y is not None:
        class PrintACC(callbacks.Callback):
            def __init__(self, x, y):
                self.x = x
                self.y = y
                super(PrintACC, self).__init__()

            def on_epoch_end(self, epoch, logs=None):
                if int(epochs / 10) != 0 and epoch % int(epochs / 10) != 0:
                    return
                feature_model = Model(self.model.input,
                                      self.model.get_layer(
                                          'encoder_%d' % (int(len(self.model.layers) / 2) - 1)).output)
                features = feature_model.predict(self.x)
                km = KMeans(n_clusters=len(np.unique(self.y)), n_init=20, n_jobs=4)
                y_pred = km.fit_predict(features)
                # print()
                print(' ' * 8 + '|==> acc: %.4f, nmi: %.4f <==|'
                      % (metrics.acc(self.y, y_pred), metrics.nmi(self.y, y_pred)))

        cb.append(PrintACC(x, y))

    # begin pretraining
    t0 = time()
    self.autoencoder.fit(x, x, batch_size=batch_size, epochs=epochs, callbacks=cb)
    print('Pretraining time: %ds' % round(time() - t0))
    self.autoencoder.save_weights(save_dir + '/ae_weights.h5')
    print('Pretrained weights are saved to %s/ae_weights.h5' % save_dir)
    self.pretrained = True

Author: XifengGuo | Project: DEC-keras | Lines: 37

Example 11: get_callbacks

# Required import: from keras import callbacks
# or: from keras.callbacks import CSVLogger
def get_callbacks(model_file, initial_learning_rate=0.0001, learning_rate_drop=0.5, learning_rate_epochs=None,
                  learning_rate_patience=50, logging_file="training.log", verbosity=1,
                  early_stopping_patience=None):
    callbacks = list()
    callbacks.append(ModelCheckpoint(model_file, save_best_only=True))
    callbacks.append(CSVLogger(logging_file, append=True))
    if learning_rate_epochs:
        callbacks.append(LearningRateScheduler(partial(step_decay, initial_lrate=initial_learning_rate,
                                                       drop=learning_rate_drop, epochs_drop=learning_rate_epochs)))
    else:
        callbacks.append(ReduceLROnPlateau(factor=learning_rate_drop, patience=learning_rate_patience,
                                           verbose=verbosity))
    if early_stopping_patience:
        callbacks.append(EarlyStopping(verbose=verbosity, patience=early_stopping_patience))
    return callbacks

Author: ellisdg | Project: 3DUnetCNN | Lines: 17

Example 12: train

# Required import: from keras import callbacks
# or: from keras.callbacks import CSVLogger
def train(model, epochs, patience, output_path, nproc, train_obj, val_obj):
    """
    :param model: model to train (must be compiled)
    :type model: Model
    :param epochs: max number of epochs to train.
    :type epochs: int
    :param patience: Stop after this many epochs if val. loss doesn't decrease
    :type patience: int
    :param output_path: paths to save weights and logs
    :type output_path: str
    :param nproc: number of processors for training
    :type nproc: int
    :param train_obj: DataGenerator object for training
    :type train_obj: DataGenerator
    :param val_obj: DataGenerator object for validation
    :type val_obj: DataGenerator
    :return: model, history object
    """
    if nproc == 1:
        use_multiprocessing = False
    else:
        use_multiprocessing = True

    # Callbacks for training and validation
    ES = EarlyStopping(monitor='val_loss', min_delta=1e-3, patience=patience, verbose=1, mode='min',
                       restore_best_weights=True)
    CK = ModelCheckpoint(output_path + 'weights.h5', monitor='val_loss', verbose=1, save_best_only=True,
                         save_weights_only=False,
                         mode='min')
    csv_name = output_path + 'training_log.csv'
    LO = CSVLogger(csv_name, append=False)
    callbacks = [ES, CK, LO]

    train_history = model.fit_generator(generator=train_obj, validation_data=val_obj, epochs=epochs,
                                        use_multiprocessing=use_multiprocessing, max_queue_size=10, workers=nproc,
                                        shuffle=True, callbacks=callbacks, verbose=1)
    return model, train_history

Author: devanshkv | Project: fetch | Lines: 41

Example 13: initialize_parameters

# Required import: from keras import callbacks
# or: from keras.callbacks import CSVLogger
def initialize_parameters():
    mnist_common = mnist.MNIST(mnist.file_path,
                               'mnist_params.txt',
                               'keras',
                               prog='mnist_mlp',
                               desc='MNIST example'
                               )
    # Initialize parameters
    gParameters = candle.finalize_parameters(mnist_common)
    csv_logger = CSVLogger('{}/params.log'.format(gParameters))
    return gParameters

Author: ECP-CANDLE | Project: Benchmarks | Lines: 15

Example 14: initialize_parameters

# Required import: from keras import callbacks
# or: from keras.callbacks import CSVLogger
def initialize_parameters():
    mnist_common = mnist.MNIST(mnist.file_path,
                               'mnist_params.txt',
                               'keras',
                               prog='mnist_cnn',
                               desc='MNIST CNN example'
                               )
    # Initialize parameters
    gParameters = candle.finalize_parameters(mnist_common)
    csv_logger = CSVLogger('{}/params.log'.format(gParameters))
    return gParameters

Author: ECP-CANDLE | Project: Benchmarks | Lines: 15

Example 15: get_callbacks

# Required import: from keras import callbacks
# or: from keras.callbacks import CSVLogger
def get_callbacks(arguments):
    if arguments.net.find('caps') != -1:
        monitor_name = 'val_out_seg_dice_hard'
    else:
        monitor_name = 'val_dice_hard'

    csv_logger = CSVLogger(join(arguments.log_dir, arguments.output_name + '_log_' + arguments.time + '.csv'), separator=',')
    tb = TensorBoard(arguments.tf_log_dir, batch_size=arguments.batch_size, histogram_freq=0)
    model_checkpoint = ModelCheckpoint(join(arguments.check_dir, arguments.output_name + '_model_' + arguments.time + '.hdf5'),
                                       monitor=monitor_name, save_best_only=True, save_weights_only=True,
                                       verbose=1, mode='max')
    lr_reducer = ReduceLROnPlateau(monitor=monitor_name, factor=0.05, cooldown=0, patience=5, verbose=1, mode='max')
    early_stopper = EarlyStopping(monitor=monitor_name, min_delta=0, patience=25, verbose=0, mode='max')

    return [model_checkpoint, csv_logger, lr_reducer, early_stopper, tb]

Author: lalonderodney | Project: SegCaps | Lines: 17

Example 16: train_model

# Required import: from keras import callbacks
# or: from keras.callbacks import CSVLogger
def train_model(self, file_list, labels, n_fold=5, batch_size=16, epochs=40, dim=224, lr=1e-5, model='ResNet50'):
    model_save_dest = {}
    k = 0
    kf = KFold(n_splits=n_fold, random_state=0, shuffle=True)

    for train_index, test_index in kf.split(file_list):
        k += 1
        file_list = np.array(file_list)
        labels = np.array(labels)
        train_files, train_labels = file_list[train_index], labels[train_index]
        val_files, val_labels = file_list[test_index], labels[test_index]

        if model == 'Resnet50':
            model_final = self.resnet_pseudo(dim=224, freeze_layers=10, full_freeze='N')
        if model == 'VGG16':
            model_final = self.VGG16_pseudo(dim=224, freeze_layers=10, full_freeze='N')
        if model == 'InceptionV3':
            model_final = self.inception_pseudo(dim=224, freeze_layers=10, full_freeze='N')

        adam = optimizers.Adam(lr=lr, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
        model_final.compile(optimizer=adam, loss=["mse"], metrics=['mse'])
        reduce_lr = keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.50, patience=3, min_lr=0.000001)
        early = EarlyStopping(monitor='val_loss', patience=10, mode='min', verbose=1)
        logger = CSVLogger('keras-5fold-run-01-v1-epochs_ib.log', separator=',', append=False)
        checkpoint = ModelCheckpoint(
            'kera1-5fold-run-01-v1-fold-' + str('%02d' % (k + 1)) + '-run-' + str('%02d' % (1 + 1)) + '.check',
            monitor='val_loss', mode='min',
            save_best_only=True,
            verbose=1)
        callbacks = [reduce_lr, early, checkpoint, logger]

        train_gen = DataGenerator(train_files, train_labels, batch_size=32, n_classes=len(self.class_folders), dim=(self.dim, self.dim, 3), shuffle=True)
        val_gen = DataGenerator(val_files, val_labels, batch_size=32, n_classes=len(self.class_folders), dim=(self.dim, self.dim, 3), shuffle=True)
        model_final.fit_generator(train_gen, epochs=epochs, verbose=1, validation_data=(val_gen), callbacks=callbacks)

        model_name = 'kera1-5fold-run-01-v1-fold-' + str('%02d' % (k + 1)) + '-run-' + str('%02d' % (1 + 1)) + '.check'
        del model_final
        f = h5py.File(model_name, 'r+')
        del f['optimizer_weights']
        f.close()
        model_final = keras.models.load_model(model_name)
        model_name1 = self.outdir + str(model) + '___' + str(k)
        model_final.save(model_name1)
        model_save_dest[k] = model_name1

    return model_save_dest

# Hold out dataset validation function

Author: PacktPublishing | Project: Intelligent-Projects-Using-Python | Lines: 52

Example 17: train_model

# Required import: from keras import callbacks
# or: from keras.callbacks import CSVLogger
def train_model(self, train_dir, val_dir, n_fold=5, batch_size=16, epochs=40, dim=224, lr=1e-5, model='ResNet50'):
    if model == 'Resnet50':
        model_final = self.resnet_pseudo(dim=224, freeze_layers=10, full_freeze='N')
    if model == 'VGG16':
        model_final = self.VGG16_pseudo(dim=224, freeze_layers=10, full_freeze='N')
    if model == 'InceptionV3':
        model_final = self.inception_pseudo(dim=224, freeze_layers=10, full_freeze='N')

    train_file_names = glob.glob(f'{train_dir}/*/*')
    val_file_names = glob.glob(f'{val_dir}/*/*')
    train_steps_per_epoch = len(train_file_names) / float(batch_size)
    val_steps_per_epoch = len(val_file_names) / float(batch_size)

    train_datagen = ImageDataGenerator(horizontal_flip=True, vertical_flip=True, width_shift_range=0.1, height_shift_range=0.1,
                                       channel_shift_range=0, zoom_range=0.2, rotation_range=20, preprocessing_function=pre_process)
    val_datagen = ImageDataGenerator(preprocessing_function=pre_process)
    train_generator = train_datagen.flow_from_directory(train_dir,
                                                        target_size=(dim, dim),
                                                        batch_size=batch_size,
                                                        class_mode='categorical')
    val_generator = val_datagen.flow_from_directory(val_dir,
                                                    target_size=(dim, dim),
                                                    batch_size=batch_size,
                                                    class_mode='categorical')
    print(train_generator.class_indices)
    joblib.dump(train_generator.class_indices, f'{self.outdir}/class_indices.pkl')

    adam = optimizers.Adam(lr=lr, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
    model_final.compile(optimizer=adam, loss=["categorical_crossentropy"], metrics=['accuracy'])
    reduce_lr = keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.50, patience=3, min_lr=0.000001)
    early = EarlyStopping(monitor='val_loss', patience=10, mode='min', verbose=1)
    logger = CSVLogger(f'{self.outdir}/keras-epochs_ib.log', separator=',', append=False)
    model_name = f'{self.outdir}/keras_transfer_learning-run.check'
    checkpoint = ModelCheckpoint(
        model_name,
        monitor='val_loss', mode='min',
        save_best_only=True,
        verbose=1)
    callbacks = [reduce_lr, early, checkpoint, logger]

    model_final.fit_generator(train_generator, steps_per_epoch=train_steps_per_epoch, epochs=epochs, verbose=1, validation_data=(val_generator), validation_steps=val_steps_per_epoch, callbacks=callbacks,
                              class_weight={0: 0.012, 1: 0.12, 2: 0.058, 3: 0.36, 4: 0.43})
    #model_final.fit_generator(train_generator, steps_per_epoch=1, epochs=epochs, verbose=1, validation_data=(val_generator), validation_steps=1, callbacks=callbacks)

    del model_final
    f = h5py.File(model_name, 'r+')
    del f['optimizer_weights']
    f.close()
    model_final = keras.models.load_model(model_name)
    model_to_store_path = f'{self.outdir}/{model}'
    model_final.save(model_to_store_path)
    return model_to_store_path, train_generator.class_indices

# Hold out dataset validation function

Author: PacktPublishing | Project: Intelligent-Projects-Using-Python | Lines: 53

Example 18: train

# Required import: from keras import callbacks
# or: from keras.callbacks import CSVLogger
def train(model, data, args):
    """
    Training a 3-level DCNet
    :param model: the 3-level DCNet model
    :param data: a tuple containing training and testing data, like `((x_train, y_train), (x_test, y_test))`
    :param args: arguments
    :return: The trained model
    """
    # unpacking the data
    (x_train, y_train), (x_test, y_test) = data
    row = x_train.shape[1]
    col = x_train.shape[2]
    channel = x_train.shape[3]

    # callbacks
    log = callbacks.CSVLogger(args.save_dir + '/log.csv')
    tb = callbacks.TensorBoard(log_dir=args.save_dir + '/tensorboard-logs', histogram_freq=int(args.debug))
    checkpoint = callbacks.ModelCheckpoint(args.save_dir + '/weights-{epoch:02d}.h5', monitor='val_capsnet_acc',
                                           verbose=1)
    lr_decay = callbacks.LearningRateScheduler(schedule=lambda epoch: args.lr * (args.lr_decay ** epoch))

    # compile the model
    # Notice the four separate losses (for separate backpropagations)
    model.compile(optimizer=optimizers.Adam(lr=args.lr),
                  loss=[margin_loss, margin_loss, margin_loss, margin_loss, 'mse'],
                  loss_weights=[1., 1., 1., 1., args.lam_recon],
                  metrics={'capsnet': 'accuracy'})
    #model.load_weights('result/weights.h5')

    """
    # Training without data augmentation:
    model.fit([x_train, y_train], [y_train, y_train, y_train, y_train, x_train], batch_size=args.batch_size, epochs=args.epochs,
              validation_data=[[x_test, y_test], [y_test, y_test, y_test, y_test, x_test]], callbacks=[log, tb, checkpoint, lr_decay])
    """

    # Training with data augmentation
    def train_generator(x, y, batch_size, shift_fraction=0.):
        train_datagen = ImageDataGenerator(width_shift_range=shift_fraction,
                                           height_shift_range=shift_fraction)  # shift up to 2 pixels for MNIST
        generator = train_datagen.flow(x, y, batch_size=batch_size)
        while 1:
            x_batch, y_batch = generator.next()
            yield ([x_batch, y_batch], [y_batch, y_batch, y_batch, y_batch, x_batch[:, :, :, 0:1]])

    # Training with data augmentation. If shift_fraction=0., also no augmentation.
    model.fit_generator(generator=train_generator(x_train, y_train, args.batch_size, args.shift_fraction),
                        steps_per_epoch=int(y_train.shape[0] / args.batch_size),
                        epochs=args.epochs,
                        validation_data=[[x_test, y_test], [y_test, y_test, y_test, y_test, x_test[:, :, :, 0:1]]],
                        callbacks=[log, tb, checkpoint, lr_decay])

    # Save model weights
    model.save_weights(args.save_dir + '/trained_model.h5')
    print('Trained model saved to \'%s/trained_model.h5\'' % args.save_dir)
    plot_log(args.save_dir + '/log.csv', show=True)

    return model

Author: ssrp | Project: Multi-level-DCNet | Lines: 62

Example 19: train

# Required import: from keras import callbacks
# or: from keras.callbacks import CSVLogger
def train(model, data, args):
    """
    Training a 3-level DCNet
    :param model: the 3-level DCNet model
    :param data: a tuple containing training and testing data, like `((x_train, y_train), (x_test, y_test))`
    :param args: arguments
    :return: The trained model
    """
    # unpacking the data
    (x_train, y_train), (x_test, y_test) = data
    row = x_train.shape[1]
    col = x_train.shape[2]
    channel = x_train.shape[3]

    # callbacks
    log = callbacks.CSVLogger(args.save_dir + '/log.csv')
    tb = callbacks.TensorBoard(log_dir=args.save_dir + '/tensorboard-logs', histogram_freq=int(args.debug))
    checkpoint = callbacks.ModelCheckpoint(args.save_dir + '/weights-{epoch:02d}.h5', monitor='val_capsnet_acc',
                                           verbose=1)
    lr_decay = callbacks.LearningRateScheduler(schedule=lambda epoch: args.lr * (args.lr_decay ** epoch))

    # compile the model (margin loss for classification plus mse reconstruction loss)
    model.compile(optimizer=optimizers.Adam(lr=args.lr),
                  loss=[margin_loss, 'mse'],
                  loss_weights=[1., args.lam_recon],
                  metrics={'capsnet': 'accuracy'})
    #model.load_weights('result/weights.h5')

    """
    # Training without data augmentation:
    model.fit([x_train, y_train], [y_train, x_train], batch_size=args.batch_size, epochs=args.epochs,
              validation_data=[[x_test, y_test], [y_test, x_test]], callbacks=[log, tb, checkpoint, lr_decay])
    """

    # Training with data augmentation
    def train_generator(x, y, batch_size, shift_fraction=0.):
        train_datagen = ImageDataGenerator(width_shift_range=shift_fraction,
                                           height_shift_range=shift_fraction)  # shift up to 2 pixels for MNIST
        generator = train_datagen.flow(x, y, batch_size=batch_size)
        while 1:
            x_batch, y_batch = generator.next()
            yield ([x_batch, y_batch], [y_batch, x_batch])

    # Training with data augmentation. If shift_fraction=0., also no augmentation.
    model.fit_generator(generator=train_generator(x_train, y_train, args.batch_size, args.shift_fraction),
                        steps_per_epoch=int(y_train.shape[0] / args.batch_size),
                        epochs=args.epochs,
                        validation_data=[[x_test, y_test], [y_test, x_test]],
                        callbacks=[log, tb, checkpoint, lr_decay])

    # Save model weights
    model.save_weights(args.save_dir + '/trained_model.h5')
    print('Trained model saved to \'%s/trained_model.h5\'' % args.save_dir)
    plot_log(args.save_dir + '/log.csv', show=True)

    return model

Author: ssrp | Project: Multi-level-DCNet | Lines: 62

示例20: train

​点赞 4

# 需要导入模块: from keras import callbacks [as 别名]

# 或者: from keras.callbacks import CSVLogger [as 别名]

def train(model, data, args):
    """
    Training a CapsuleNet
    :param model: the CapsuleNet model
    :param data: a tuple containing training and testing data, like `((x_train, y_train), (x_test, y_test))`
    :param args: arguments
    :return: The trained model
    """
    # unpacking the data
    (x_train, y_train), (x_test, y_test) = data

    # callbacks
    log = callbacks.CSVLogger(args.save_dir + '/log.csv')
    tb = callbacks.TensorBoard(log_dir=args.save_dir + '/tensorboard-logs',
                               batch_size=args.batch_size, histogram_freq=args.debug)
    checkpoint = callbacks.ModelCheckpoint(args.save_dir + '/weights-{epoch:02d}.h5', monitor='val_capsnet_acc',
                                           save_best_only=True, save_weights_only=True, verbose=1)
    lr_decay = callbacks.LearningRateScheduler(schedule=lambda epoch: args.lr * (0.95 ** epoch))

    # compile the model
    model.compile(optimizer=optimizers.Adam(lr=args.lr),
                  loss=[margin_loss, 'mse'],
                  loss_weights=[1., args.lam_recon],
                  metrics={'capsnet': 'accuracy'})

    """
    # Training without data augmentation:
    model.fit([x_train, y_train], [y_train, x_train], batch_size=args.batch_size, epochs=args.epochs,
              validation_data=[[x_test, y_test], [y_test, x_test]], callbacks=[log, tb, checkpoint, lr_decay])
    """

    # Begin: Training with data augmentation ---------------------------------------------------------------------#
    def train_generator(x, y, batch_size, shift_fraction=0.):
        train_datagen = ImageDataGenerator(width_shift_range=shift_fraction,
                                           height_shift_range=shift_fraction,
                                           horizontal_flip=True)  # shift up to 2 pixels for MNIST
        generator = train_datagen.flow(x, y, batch_size=batch_size)
        while 1:
            x_batch, y_batch = generator.next()
            yield ([x_batch, y_batch], [y_batch, x_batch])

    # Training with data augmentation. If shift_fraction=0., no augmentation is applied.
    model.fit_generator(generator=train_generator(x_train, y_train, args.batch_size, args.shift_fraction),
                        steps_per_epoch=int(y_train.shape[0] / args.batch_size),
                        epochs=args.epochs,
                        validation_data=[[x_test, y_test], [y_test, x_test]],
                        callbacks=[log, tb, checkpoint, lr_decay])
    # End: Training with data augmentation -----------------------------------------------------------------------#

    model.save_weights(args.save_dir + '/trained_model.h5')
    print('Trained model saved to \'%s/trained_model.h5\'' % args.save_dir)

    from utils import plot_log
    plot_log(args.save_dir + '/log.csv', show=True)

    return model

Author: XifengGuo, Project: CapsNet-Fashion-MNIST, Lines: 58
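The two-input/two-output yield pattern used by train_generator in the example above can be isolated in a minimal, framework-free sketch. The batch data below is hypothetical; the point is only the shape of what is yielded:

```python
# Sketch of the capsule-network generator pattern: the model takes
# (image, label) as its inputs and is trained to predict
# (label, reconstructed image), so each batch is yielded as
# ([x_batch, y_batch], [y_batch, x_batch]).
def train_generator(batches):
    for x_batch, y_batch in batches:
        yield ([x_batch, y_batch], [y_batch, x_batch])

batches = [([[0.1, 0.2]], [[0, 1]])]  # one hypothetical (x, y) batch
inputs, targets = next(train_generator(batches))
print(inputs)   # [[[0.1, 0.2]], [[0, 1]]]
print(targets)  # [[[0, 1]], [[0.1, 0.2]]]
```

The inputs and targets are simply the same two arrays in swapped order, which is why the Keras model in the example declares two inputs and two outputs.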

Example 21: test_stop_training_csv

Likes: 4

# Required import: from keras import callbacks [as alias]
# Alternatively: from keras.callbacks import CSVLogger [as alias]

def test_stop_training_csv(tmpdir):
    np.random.seed(1337)
    fp = str(tmpdir / 'test.csv')
    (X_train, y_train), (X_test, y_test) = get_test_data(num_train=train_samples,
                                                         num_test=test_samples,
                                                         input_shape=(input_dim,),
                                                         classification=True,
                                                         num_classes=num_classes)
    y_test = np_utils.to_categorical(y_test)
    y_train = np_utils.to_categorical(y_train)
    cbks = [callbacks.TerminateOnNaN(), callbacks.CSVLogger(fp)]
    model = Sequential()
    for _ in range(5):
        model.add(Dense(num_hidden, input_dim=input_dim, activation='relu'))
    model.add(Dense(num_classes, activation='linear'))
    model.compile(loss='mean_squared_error',
                  optimizer='rmsprop')

    def data_generator():
        i = 0
        max_batch_index = len(X_train) // batch_size
        tot = 0
        while 1:
            if tot > 3 * len(X_train):
                yield np.ones([batch_size, input_dim]) * np.nan, np.ones([batch_size, num_classes]) * np.nan
            else:
                yield (X_train[i * batch_size: (i + 1) * batch_size],
                       y_train[i * batch_size: (i + 1) * batch_size])
            i += 1
            tot += 1
            i = i % max_batch_index

    history = model.fit_generator(data_generator(),
                                  len(X_train) // batch_size,
                                  validation_data=(X_test, y_test),
                                  callbacks=cbks,
                                  epochs=20)
    loss = history.history['loss']
    assert len(loss) > 1
    assert loss[-1] == np.inf or np.isnan(loss[-1])

    values = []
    with open(fp) as f:
        for x in reader(f):
            values.append(x)

    assert 'nan' in values[-1], 'The last epoch was not logged.'
    os.remove(fp)

Author: hello-sea, Project: DeepLearning_Wavelet-LSTM, Lines: 51
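Example 21 pairs CSVLogger with TerminateOnNaN. The core of that second callback's check can be sketched with the stdlib alone — a simplification for illustration, not Keras's actual implementation:

```python
import math

# Simplified version of the check TerminateOnNaN performs on the batch
# loss: training should stop once the loss becomes NaN or infinite.
def should_terminate(loss):
    return math.isnan(loss) or math.isinf(loss)

print(should_terminate(0.25))          # False
print(should_terminate(float('nan')))  # True
print(should_terminate(float('inf')))  # True
```

Because CSVLogger writes its row in on_epoch_end, the epoch in which the loss went bad is still logged — which is exactly what the test's final assertion verifies.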

Example 22: test_CSVLogger

Likes: 4

# Required import: from keras import callbacks [as alias]
# Alternatively: from keras.callbacks import CSVLogger [as alias]

def test_CSVLogger(tmpdir):
    np.random.seed(1337)
    filepath = str(tmpdir / 'log.tsv')
    sep = '\t'
    (X_train, y_train), (X_test, y_test) = get_test_data(num_train=train_samples,
                                                         num_test=test_samples,
                                                         input_shape=(input_dim,),
                                                         classification=True,
                                                         num_classes=num_classes)
    y_test = np_utils.to_categorical(y_test)
    y_train = np_utils.to_categorical(y_train)

    def make_model():
        np.random.seed(1337)
        model = Sequential()
        model.add(Dense(num_hidden, input_dim=input_dim, activation='relu'))
        model.add(Dense(num_classes, activation='softmax'))
        model.compile(loss='categorical_crossentropy',
                      optimizer=optimizers.SGD(lr=0.1),
                      metrics=['accuracy'])
        return model

    # case 1, create new file with defined separator
    model = make_model()
    cbks = [callbacks.CSVLogger(filepath, separator=sep)]
    model.fit(X_train, y_train, batch_size=batch_size,
              validation_data=(X_test, y_test), callbacks=cbks, epochs=1)

    assert os.path.isfile(filepath)
    with open(filepath) as csvfile:
        dialect = Sniffer().sniff(csvfile.read())
    assert dialect.delimiter == sep
    del model
    del cbks

    # case 2, append data to existing file, skip header
    model = make_model()
    cbks = [callbacks.CSVLogger(filepath, separator=sep, append=True)]
    model.fit(X_train, y_train, batch_size=batch_size,
              validation_data=(X_test, y_test), callbacks=cbks, epochs=1)

    # case 3, reuse of CSVLogger object
    model.fit(X_train, y_train, batch_size=batch_size,
              validation_data=(X_test, y_test), callbacks=cbks, epochs=1)

    import re
    with open(filepath) as csvfile:
        output = " ".join(csvfile.readlines())
        assert len(re.findall('epoch', output)) == 1

    os.remove(filepath)
    assert not tmpdir.listdir()

Author: hello-sea, Project: DeepLearning_Wavelet-LSTM, Lines: 55
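For readers who want to see the file layout CSVLogger produces without running a training job, here is a stdlib-only approximation: a header row of 'epoch' plus the metric keys in sorted order, then one row of values per epoch. This mimics the callback's output format and is not the Keras implementation itself:

```python
import csv
import io

# Approximate the CSV layout written by keras.callbacks.CSVLogger:
# a header row of 'epoch' plus the sorted metric keys, then one row
# of metric values per epoch.
def format_log(epoch_logs, sep=','):
    keys = sorted(epoch_logs[0].keys())
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=['epoch'] + keys, delimiter=sep)
    writer.writeheader()
    for epoch, logs in enumerate(epoch_logs):
        writer.writerow(dict(logs, epoch=epoch))
    return buf.getvalue()

history = [{'loss': 0.92, 'val_loss': 0.88},
           {'loss': 0.41, 'val_loss': 0.45}]
print(format_log(history))
```

Passing sep='\t' reproduces the tab-separated layout that Example 22's Sniffer-based assertion checks for.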

Note: The keras.callbacks.CSVLogger examples above were collected from source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by their respective developers; copyright remains with the original authors. Please consult each project's license before using or redistributing the code, and do not reproduce without permission.


