Code Functions Explained
The tf.random.truncated_normal() function
tf.random.truncated_normal generates random values from a normal distribution, but the distribution is truncated: any value that falls more than 2 standard deviations (2 * stddev) from the mean is discarded and re-drawn.
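To make the truncation rule concrete without requiring TensorFlow, here is a NumPy sketch of the same behavior: draw from a normal distribution and redraw any sample outside 2 standard deviations. This is an illustrative re-implementation, not TensorFlow's actual code.

```python
import numpy as np

def truncated_normal(shape, mean=0.0, stddev=1.0, rng=None):
    """Rejection-sample a normal distribution, redrawing values that
    land more than 2 * stddev from the mean (the same truncation rule
    as tf.random.truncated_normal)."""
    rng = np.random.default_rng() if rng is None else rng
    samples = rng.normal(mean, stddev, size=shape)
    mask = np.abs(samples - mean) > 2 * stddev
    while mask.any():
        samples[mask] = rng.normal(mean, stddev, size=mask.sum())
        mask = np.abs(samples - mean) > 2 * stddev
    return samples

vals = truncated_normal((10000,), rng=np.random.default_rng(0))
print(vals.min(), vals.max())  # both guaranteed to lie within [-2, 2]
```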
The zip() function
zip() takes iterables as arguments, packs their corresponding elements into tuples, and returns an iterator over those tuples. If the iterables have different lengths, the result is as long as the shortest one. The * operator does the reverse of zip, unpacking a zipped sequence back into separate iterables.
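A short example of both directions, zipping and unpacking with *:

```python
# zip pairs up corresponding elements; the result follows the shortest iterable
pairs = list(zip([1, 2, 3], ['a', 'b']))
print(pairs)  # → [(1, 'a'), (2, 'b')]

# The * operator reverses the process, unpacking the pairs back apart
nums, letters = zip(*pairs)
print(nums, letters)  # → (1, 2) ('a', 'b')
```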
# TF1-style graph code; under TensorFlow 2, replace the first import with
# import tensorflow.compat.v1 as tf; tf.disable_eager_execution()
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

# Synthetic data: y = 5x + 10 plus uniform noise in [-5, 5)
train_x = np.linspace(-5, 3, 50)
train_y = train_x * 5 + 10 + np.random.random(50) * 10 - 5
plt.plot(train_x, train_y, 'r.')
plt.grid(True)
plt.show()
# Placeholders for a single (x, y) training sample
X = tf.placeholder(dtype=tf.float32)
Y = tf.placeholder(dtype=tf.float32)

# Model parameters, initialized from a truncated normal distribution
w = tf.Variable(tf.random.truncated_normal([1]), name='Weight')
b = tf.Variable(tf.random.truncated_normal([1]), name='bias')

# Linear model and mean-squared-error loss
z = tf.multiply(X, w) + b
cost = tf.reduce_mean(tf.square(Y - z))

learning_rate = 0.01
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
init = tf.global_variables_initializer()
training_epochs = 20
display_step = 2
with tf.Session() as sess:
    sess.run(init)
    loss_list = []
    for epoch in range(training_epochs):
        # Stochastic gradient descent: one update per training sample
        for (x, y) in zip(train_x, train_y):
            sess.run(optimizer, feed_dict={X: x, Y: y})
        if epoch % display_step == 0:
            loss = sess.run(cost, feed_dict={X: x, Y: y})
            loss_list.append(loss)
            print('Iter:', epoch, ' Loss:', loss)
    # Variables can be fetched without a feed_dict
    w_, b_ = sess.run([w, b])
    print(" Finished ")
    print("W:", w_, " b:", b_, " loss:", loss)

# Plot the fitted line against the training data
plt.plot(train_x, train_x * w_ + b_, 'g-', train_x, train_y, 'r.')
plt.grid(True)
plt.show()
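Since the data was generated as y = 5x + 10 plus zero-mean noise, the fitted parameters should land near w = 5 and b = 10. A quick cross-check with NumPy's closed-form least-squares fit (np.polyfit), using a fixed seed for reproducibility, confirms the target values without needing TensorFlow:

```python
import numpy as np

np.random.seed(0)  # fixed seed so the check is reproducible
train_x = np.linspace(-5, 3, 50)
train_y = train_x * 5 + 10 + np.random.random(50) * 10 - 5

# Degree-1 polynomial fit returns [slope, intercept]
w_hat, b_hat = np.polyfit(train_x, train_y, 1)
print(w_hat, b_hat)  # close to 5 and 10
```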