
Two Solutions for TensorFlow Training That Gets Slower and Slower

2020-02-15 21:20:46

1 Solutions

[Solution 1]

Build the model-loading code (the Saver) at the global level, i.e. outside the TensorFlow session.

'''Load the model structure: the crucial step'''
saver = tf.train.Saver()
'''Create the session'''
with tf.Session() as sess:
    for i in range(STEPS):
        '''Run one training step'''
        _, loss_1, acc, summary = sess.run(
            [train_op_1, train_loss, train_acc, summary_op],
            feed_dict=feed_dict)
        '''Save the model'''
        saver.save(sess, "./model/path", global_step=i)

[Solution 2]

Building on Solution 1, also construct the model (the graph ops) outside the session, so no ops are added to the graph during training.

'''Predictions'''
train_logits = network_model.inference(inputs, keep_prob)
'''Loss'''
train_loss = network_model.losses(train_logits)
'''Optimization'''
train_op = network_model.train(train_loss, learning_rate)
'''Accuracy'''
train_acc = network_model.evaluation(train_logits, labels)
'''Model inputs'''
feed_dict = {inputs: x_batch, labels: y_batch, keep_prob: 0.5}
'''Load the model structure'''
saver = tf.train.Saver()
'''Create the session'''
with tf.Session() as sess:
    for i in range(STEPS):
        '''Run one training step'''
        _, loss_1, acc, summary = sess.run(
            [train_op_1, train_loss, train_acc, summary_op],
            feed_dict=feed_dict)
        '''Save the model'''
        saver.save(sess, "./model/path", global_step=i)

2 Timing Tests

Running the training program with the different approaches yields very different training times. If the graph structure is rebuilt or reloaded on every training step, new ops keep accumulating in the default graph, so each step takes slightly longer than the last. The more steps you train, the slower training becomes, and eventually the graph blows up (exhausts memory) and training terminates.
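The per-step timings shown in the logs below can be collected with a simple wall-clock wrapper around each training step. A minimal sketch, assuming a hypothetical run_step() callable standing in for the sess.run(...) call:

```python
import time

def run_step():
    """Hypothetical stand-in for one sess.run(...) training step."""
    time.sleep(0.01)

def timed_training(steps):
    """Run `steps` training steps and record the wall-clock cost of each."""
    costs = []
    for i in range(steps):
        start = time.time()
        run_step()
        cost = time.time() - start
        costs.append(cost)
        print("step: %d, time cost: %s" % (i, cost))
    return costs

costs = timed_training(3)
```

If the recorded costs trend upward step after step, as in the first log below, something inside the loop is adding ops to the graph; roughly constant costs, as in the second log, indicate the graph is fixed before the session starts.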

[Time accumulating per step]

2019-05-15 10:55:29.009205: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
step: 0, time cost: 1.8800880908966064
step: 1, time cost: 1.592250108718872
step: 2, time cost: 1.553826093673706
step: 3, time cost: 1.5687050819396973
step: 4, time cost: 1.5777575969696045
step: 5, time cost: 1.5908267498016357
step: 6, time cost: 1.5989274978637695
step: 7, time cost: 1.6078357696533203
step: 8, time cost: 1.6087186336517334
step: 9, time cost: 1.6123006343841553
step: 10, time cost: 1.6320762634277344
step: 11, time cost: 1.6317598819732666
step: 12, time cost: 1.6570467948913574
step: 13, time cost: 1.6584930419921875
step: 14, time cost: 1.6765813827514648
step: 15, time cost: 1.6751370429992676
step: 16, time cost: 1.7304580211639404
step: 17, time cost: 1.7583982944488525

[Time stable per step]

2019-05-15 13:03:49.394354: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 7048 MB memory) -> physical GPU (device: 1, name: Tesla P4, pci bus id: 0000:00:0d.0, compute capability: 6.1)
step: 0, time cost: 1.9781079292297363
loss1: 6.78, loss2: 5.47, loss3: 5.27, loss4: 7.31, loss5: 5.44, loss6: 6.87, loss7: 6.84
Total loss: 43.98, accuracy: 0.04, steps: 0, time cost: 1.9781079292297363
step: 1, time cost: 0.09688425064086914
step: 2, time cost: 0.09693264961242676
step: 3, time cost: 0.09671926498413086
step: 4, time cost: 0.09688210487365723
step: 5, time cost: 0.09646058082580566
step: 6, time cost: 0.09669041633605957
step: 7, time cost: 0.09666872024536133
step: 8, time cost: 0.09651994705200195
step: 9, time cost: 0.09705543518066406
step: 10, time cost: 0.09690332412719727