
Python Study Notes: Logistic Regression

2019-11-14 17:15:34
Source: reprinted (contributed by a reader)
# -*- coding: utf-8 -*-
"""
Created on Wed Apr 22 17:39:19 2015

@author: 90Zeng
"""

import numpy
import theano
import theano.tensor as T
import matplotlib.pyplot as plt

rng = numpy.random
N = 400      # number of samples
feats = 784  # dimensionality of each sample
D = (rng.randn(N, feats), rng.randint(size=N, low=0, high=2))
training_steps = 10000

# Declare Theano symbolic variables
x = T.dmatrix("x")
y = T.dvector("y")

# Initialize the weights randomly
w = theano.shared(rng.randn(feats), name="w")
# Initialize the bias to 0
b = theano.shared(0.0, name="b")
print("Initial model:")
print(w.get_value(), b.get_value())

# Construct Theano expression graph
p_1 = 1 / (1 + T.exp(-T.dot(x, w) - b))            # probability that target = 1
prediction = p_1 > 0.5                             # prediction thresholded at 0.5
xent = -y * T.log(p_1) - (1 - y) * T.log(1 - p_1)  # cross-entropy loss
loss_avg = xent.mean()
cost = xent.mean() + 0.01 * (w ** 2).sum()         # cost to minimize (with L2 penalty)
gw, gb = T.grad(cost, [w, b])                      # gradients of the cost w.r.t. w and b

# Compile
train = theano.function(
    inputs=[x, y],
    outputs=[prediction, loss_avg],
    updates=((w, w - 0.1 * gw), (b, b - 0.1 * gb)),
)
predict = theano.function(
    inputs=[x],
    outputs=prediction,
)

# Train
err = []
for i in range(training_steps):
    pred, er = train(D[0], D[1])
    err.append(er)

print("Final model:")
print(w.get_value(), b.get_value())
print("target values for D:", D[1])
print("prediction on D:", predict(D[0]))

# Plot the average loss over the first 1000 iterations
# (use a new name so the symbolic variable x is not shadowed)
x_axis = range(1000)
plt.plot(x_axis, err[0:1000])
plt.show()
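The update rules `w - 0.1 * gw` and `b - 0.1 * gb` rely on `T.grad` to differentiate the cost symbolically. Worked out by hand from the definitions above (sigmoid output `p_1`, mean cross-entropy, plus the L2 penalty `0.01 * (w ** 2).sum()`), the gradients it computes are:

\[
p = \sigma(Xw + b), \qquad
\frac{\partial \text{cost}}{\partial w} = \frac{1}{N} X^{\top}(p - y) + 0.02\,w, \qquad
\frac{\partial \text{cost}}{\partial b} = \frac{1}{N} \sum_{i=1}^{N} (p_i - y_i).
\]

The clean form of these gradients is a consequence of pairing the sigmoid with the cross-entropy loss: the sigmoid's derivative cancels against the loss's denominator, leaving just the residual \(p - y\).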

The plot shows how the loss changes with the number of iterations. Running the code produces:
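Theano builds this whole pipeline symbolically, but the same model can be trained with plain NumPy, which makes the mechanics explicit. The sketch below mirrors the script above (same data shapes, learning rate 0.1, L2 weight 0.01) using the hand-derived gradients; the weights start at zero rather than random values to avoid saturating the sigmoid on the first step, and the variable names are my own, not from the original.

```python
import numpy as np

rng = np.random.default_rng(0)
N, feats = 400, 784
X = rng.standard_normal((N, feats))           # inputs, like D[0]
y = rng.integers(0, 2, size=N).astype(float)  # random 0/1 targets, like D[1]

w = np.zeros(feats)  # start at zero so the sigmoid is not saturated
b = 0.0
lr, l2 = 0.1, 0.01

def avg_cost(w, b):
    """Mean cross-entropy plus L2 penalty, as in the Theano `cost`."""
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    xent = -y * np.log(p) - (1 - y) * np.log(1 - p)
    return xent.mean() + l2 * (w ** 2).sum()

err = []
for step in range(1000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # probability that target = 1
    grad_w = X.T @ (p - y) / N + 2 * l2 * w  # gradient of cost w.r.t. w
    grad_b = (p - y).mean()                  # gradient of cost w.r.t. b
    w -= lr * grad_w
    b -= lr * grad_b
    err.append(avg_cost(w, b))
```

Because there are more features (784) than samples (400), the model can drive the loss down even on random labels, which is why the plotted curve decreases steadily.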

