Forward propagation + loss function + learning rate + backpropagation + session

The following TensorFlow 1.x program ties these pieces together: it defines a small two-layer network, a clipped binary cross-entropy loss, an Adam optimizer with learning rate 0.001, and trains on a simulated dataset inside a session.
import tensorflow as tf
from numpy.random import RandomState

batch_size = 8

# Network parameters: a 2-3-1 fully connected network.
w1 = tf.Variable(tf.random_normal([2, 3], stddev=1, seed=1))
w2 = tf.Variable(tf.random_normal([3, 1], stddev=1, seed=1))

# Placeholders for inputs and labels; None lets the batch size vary.
x = tf.placeholder(tf.float32, shape=(None, 2), name="x-input")
y_ = tf.placeholder(tf.float32, shape=(None, 1), name="y-input")

# Forward propagation.
a = tf.matmul(x, w1)
y = tf.sigmoid(tf.matmul(a, w2))

# Clipped binary cross-entropy loss; clipping keeps log() away from 0.
cross_entropy = -tf.reduce_mean(
    y_ * tf.log(tf.clip_by_value(y, 1e-10, 1.0))
    + (1 - y_) * tf.log(tf.clip_by_value(1 - y, 1e-10, 1.0)))

# Backpropagation: Adam optimizer with learning rate 0.001.
train_step = tf.train.AdamOptimizer(0.001).minimize(cross_entropy)

# Simulated dataset: 128 samples; label is 1 when x1 + x2 < 1.
rdm = RandomState(1)
X = rdm.rand(128, 2)
Y = [[int(x1 + x2 < 1)] for (x1, x2) in X]
with tf.Session() as sess:
    init_op = tf.global_variables_initializer()
    sess.run(init_op)

    # Print the parameter values before training.
    print(sess.run(w1))
    print(sess.run(w2))
    print("\n")

    # Train the model.
    STEPS = 12000
    for i in range(STEPS):
        # Cycle through the dataset in batches of 8 (128 is divisible by 8).
        start = (i * batch_size) % 128
        end = start + batch_size
        sess.run(train_step, feed_dict={x: X[start:end], y_: Y[start:end]})
        if i % 1000 == 0:
            total_cross_entropy = sess.run(cross_entropy, feed_dict={x: X, y_: Y})
            print("After %d training step(s), cross entropy on all data is %g" % (i, total_cross_entropy))

    # Print the parameter values after training.
    print("\n")
    print(sess.run(w1))
    print(sess.run(w2))
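For reference, the loss defined above is the standard clipped binary cross-entropy. Here is a minimal NumPy sketch of the same computation (illustrative only, not part of the training script; the function name is my own):

import numpy as np

# Illustrative NumPy equivalent of the clipped cross-entropy defined above.
def binary_cross_entropy(y_true, y_pred, eps=1e-10):
    # Clip predictions away from 0 and 1 so log() never sees 0.
    p = np.clip(y_pred, eps, 1.0)
    q = np.clip(1.0 - y_pred, eps, 1.0)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(q))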
Output of the 12,000-step run: the initial (untrained) values of w1 and w2 are printed first, then the cross entropy on all data, which decreases steadily and converges to about 0.608, and finally the optimized w1 and w2:
[[-0.8113182 1.4845988 0.06532937]
[-2.4427042 0.0992484 0.5912243 ]]
[[-0.8113182 ]
[ 1.4845988 ]
[ 0.06532937]]
After 0 training step(s), cross entropy on all data is 1.89805
After 1000 training step(s), cross entropy on all data is 0.655075
After 2000 training step(s), cross entropy on all data is 0.626172
After 3000 training step(s), cross entropy on all data is 0.615096
After 4000 training step(s), cross entropy on all data is 0.610309
After 5000 training step(s), cross entropy on all data is 0.608679
After 6000 training step(s), cross entropy on all data is 0.608231
After 7000 training step(s), cross entropy on all data is 0.608114
After 8000 training step(s), cross entropy on all data is 0.608088
After 9000 training step(s), cross entropy on all data is 0.608081
After 10000 training step(s), cross entropy on all data is 0.608079
After 11000 training step(s), cross entropy on all data is 0.608079
[[ 0.08924854 0.51599807 1.7538922 ]
[-2.2377944 -0.20479864 1.0734867 ]]
[[-0.49589315]
[ 0.4026622 ]
[-1.0064225 ]]
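Since y is a sigmoid output in (0, 1), a quick way to sanity-check the trained network is to threshold it at 0.5 and compare against the labels. A sketch, assuming it is placed inside the with tf.Session() block above, after the training loop and before the session closes:

import numpy as np

# Hypothetical accuracy check; run inside the session after training.
pred = sess.run(y, feed_dict={x: X})  # predicted probabilities, shape (128, 1)
acc = np.mean((pred > 0.5).astype(int) == np.array(Y))
print("accuracy on the training data: %g" % acc)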