In deep learning, data, models, parameters, nonlinearities, forward-propagation prediction, and backward parameter updates via partial derivatives are the foundations of the field. What exactly are these basics? How do they work? And how do you implement them in Python? That is what this section covers.
The sigmoid activation function
<code>import numpy as np
import matplotlib.pyplot as plt
import h5py
import sklearn
import sklearn.datasets
import sklearn.linear_model
import scipy.io

def sigmoid(x):
    """
    Compute the sigmoid of x

    Arguments:
    x -- A scalar or numpy array of any size.

    Return:
    s -- sigmoid(x)
    """
    s = 1/(1+np.exp(-x))
    return s
</code>
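As a quick sanity check (a minimal sketch, assuming the sigmoid defined above is in scope): sigmoid maps 0 to 0.5 and squashes large negative and positive inputs toward 0 and 1.

<code># Sanity check for sigmoid (printed values are approximate)
x = np.array([-10., 0., 10.])
print(sigmoid(x))  # ~[4.54e-05, 0.5, 0.99995]
</code>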
The ReLU activation function
<code>def relu(x):
    """
    Compute the relu of x

    Arguments:
    x -- A scalar or numpy array of any size.

    Return:
    s -- relu(x)
    """
    s = np.maximum(0, x)
    return s
</code>
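Likewise for relu (a minimal check using the function above): negative inputs are clipped to zero and positive inputs pass through unchanged.

<code># ReLU zeroes out negatives and keeps positives
x = np.array([-2., -1., 0., 1., 2.])
print(relu(x))  # [0. 0. 0. 1. 2.]
</code>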
Initializing the layer parameters
Initializing the layer parameters simply means initializing the weights and biases between the layers of the network model.
<code>def initialize_parameters(layer_dims):
    """
    Arguments:
    layer_dims -- python array (list) containing the dimensions of each layer in our network

    Returns:
    parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
                    Wl -- weight matrix of shape (layer_dims[l], layer_dims[l-1])
                    bl -- bias vector of shape (layer_dims[l], 1)

    Tips:
    - For example: the layer_dims for the "Planar Data classification model" would have been [2,2,1].
      This means W1's shape was (2,2), b1 was (2,1), W2 was (1,2) and b2 was (1,1). Now you have to generalize it!
    - In the for loop, use parameters['W' + str(l)] to access Wl, where l is the iterative integer.
    """
    np.random.seed(3)
    parameters = {}
    L = len(layer_dims)  # number of layers in the network

    for l in range(1, L):
        # Xavier-style scaling: divide by the square root of the previous layer's width
        parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l-1]) / np.sqrt(layer_dims[l-1])
        parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))

        assert parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l-1])
        assert parameters['b' + str(l)].shape == (layer_dims[l], 1)

    return parameters
</code>
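A short usage sketch makes the resulting shapes concrete (the [2, 4, 1] layer sizes here are arbitrary, chosen just for illustration). The 1/sqrt(layer_dims[l-1]) scaling keeps the variance of activations from blowing up or vanishing as depth grows.

<code># Inspect the shapes produced for a hypothetical 2-4-1 network
params = initialize_parameters([2, 4, 1])
for name, value in sorted(params.items()):
    print(name, value.shape)
# W1 (4, 2), W2 (1, 4), b1 (4, 1), b2 (1, 1)
</code>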
Forward propagation (FP)
The process of going from the network's input to its final output is called the forward pass. Forward propagation involves three parts: the input, the network's intermediate parameters, and the output. The code below chains them for a three-layer network:
<code>def forward_propagation(X, parameters):
    """
    Implements forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.

    Arguments:
    X -- input dataset, of shape (input size, number of examples)
    parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3"

    Returns:
    A3 -- the sigmoid output of the last layer
    cache -- intermediate values needed for backward propagation
    """
    # retrieve parameters
    W1 = parameters["W1"]
    b1 = parameters["b1"]
    W2 = parameters["W2"]
    b2 = parameters["b2"]
    W3 = parameters["W3"]
    b3 = parameters["b3"]

    # LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
    Z1 = np.dot(W1, X) + b1
    A1 = relu(Z1)
    Z2 = np.dot(W2, A1) + b2
    A2 = relu(Z2)
    Z3 = np.dot(W3, A2) + b3
    A3 = sigmoid(Z3)

    cache = (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3)

    return A3, cache
</code>
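A smoke test helps confirm the plumbing (a sketch with made-up data; the layer sizes [2, 4, 3, 1] match the three weight matrices the function expects):

<code># Forward pass on random data: 2 features, 5 examples
np.random.seed(1)
X = np.random.randn(2, 5)
parameters = initialize_parameters([2, 4, 3, 1])
A3, cache = forward_propagation(X, parameters)
print(A3.shape)  # (1, 5): one probability per example
</code>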
Backward propagation (BP)
Backpropagation solves the network's optimization problem: the discrepancy between the output layer's result and the true labels is propagated backwards to adjust the parameters layer by layer, and this learning process is repeated iteratively. Note that for a sigmoid output trained with cross-entropy loss, the output-layer error simplifies to dZ3 = A3 - Y, which is where the code below starts.
<code>def backward_propagation(X, Y, cache):
    """
    Implement the backward propagation for the three-layer network above.

    Arguments:
    X -- input dataset, of shape (input size, number of examples)
    Y -- true "label" vector (containing 0 if cat, 1 if non-cat)
    cache -- cache output from forward_propagation()

    Returns:
    gradients -- A dictionary with the gradients with respect to each parameter,
                 activation and pre-activation variables
    """
    m = X.shape[1]
    (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache

    dZ3 = A3 - Y  # output-layer error: sigmoid + cross-entropy simplifies to A3 - Y
    dW3 = 1./m * np.dot(dZ3, A2.T)  # matrix product
    db3 = 1./m * np.sum(dZ3, axis=1, keepdims=True)

    dA2 = np.dot(W3.T, dZ3)
    dZ2 = np.multiply(dA2, np.int64(A2 > 0))  # element-wise product; the 0/1 mask is the ReLU derivative
    dW2 = 1./m * np.dot(dZ2, A1.T)
    db2 = 1./m * np.sum(dZ2, axis=1, keepdims=True)

    dA1 = np.dot(W2.T, dZ2)
    dZ1 = np.multiply(dA1, np.int64(A1 > 0))
    dW1 = 1./m * np.dot(dZ1, X.T)
    db1 = 1./m * np.sum(dZ1, axis=1, keepdims=True)

    gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,
                 "dA2": dA2, "dZ2": dZ2, "dW2": dW2, "db2": db2,
                 "dA1": dA1, "dZ1": dZ1, "dW1": dW1, "db1": db1}

    return gradients
</code>
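Because backprop bugs are silent, it is worth checking one analytic gradient against a centered finite difference. This sketch assumes the functions above plus compute_cost from the end of this article; the index [0, 0] and epsilon are arbitrary choices.

<code># Numerical check of a single entry of dW3
eps = 1e-7
np.random.seed(1)
X = np.random.randn(2, 5)
Y = (np.random.rand(1, 5) > 0.5).astype(float)
parameters = initialize_parameters([2, 4, 3, 1])

A3, cache = forward_propagation(X, parameters)
grads = backward_propagation(X, Y, cache)

# Perturb W3[0, 0] up and down and difference the resulting costs
parameters["W3"][0, 0] += eps
cost_plus = compute_cost(forward_propagation(X, parameters)[0], Y)
parameters["W3"][0, 0] -= 2 * eps
cost_minus = compute_cost(forward_propagation(X, parameters)[0], Y)

numeric = (cost_plus - cost_minus) / (2 * eps)
print(numeric, grads["dW3"][0, 0])  # should agree to several decimal places
</code>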
Updating the model parameters (weights w, biases b)
<code>def update_parameters(parameters, grads, learning_rate):
    """
    Update parameters using gradient descent

    Arguments:
    parameters -- python dictionary containing your parameters:
                    parameters['W' + str(i)] = Wi
                    parameters['b' + str(i)] = bi
    grads -- python dictionary containing your gradients for each parameter:
                    grads['dW' + str(i)] = dWi
                    grads['db' + str(i)] = dbi
    learning_rate -- the learning rate, scalar.

    Returns:
    parameters -- python dictionary containing your updated parameters
    """
    n = len(parameters) // 2  # number of layers in the neural network

    # Update rule for each parameter
    for k in range(n):
        parameters["W" + str(k+1)] = parameters["W" + str(k+1)] - learning_rate * grads["dW" + str(k+1)]
        parameters["b" + str(k+1)] = parameters["b" + str(k+1)] - learning_rate * grads["db" + str(k+1)]

    return parameters
</code>
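The update rule itself is just w ← w − α·dw applied to every weight and bias. Checking it by hand on a single scalar (made-up numbers):

<code># One gradient-descent step on a scalar weight: w = 0.5, dw = 0.2, lr = 0.1
w, dw, lr = 0.5, 0.2, 0.1
print(w - lr * dw)  # 0.48: the weight moves opposite the gradient's sign
</code>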
Predicting with forward propagation
The network runs a forward pass, and any prediction whose probability exceeds the 0.5 threshold is set to 1.
<code>def predict(X, y, parameters):
    """
    This function is used to predict the results of an n-layer neural network.

    Arguments:
    X -- data set of examples you would like to label
    y -- true labels, used only to report accuracy
    parameters -- parameters of the trained model

    Returns:
    p -- predictions for the given dataset X
    """
    m = X.shape[1]
    p = np.zeros((1, m), dtype=int)  # np.int was removed in NumPy 1.24; use the builtin int

    # Forward propagation
    a3, caches = forward_propagation(X, parameters)

    # convert probabilities to 0/1 predictions
    for i in range(0, a3.shape[1]):
        if a3[0, i] > 0.5:
            p[0, i] = 1
        else:
            p[0, i] = 0

    print("Accuracy: " + str(np.mean((p[0, :] == y[0, :]))))

    return p
</code>
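The per-example loop is easy to read, but the same thresholding can be done in one vectorized line (an equivalent rewrite, not the original; X and parameters are reused from the smoke test above):

<code># Vectorized 0/1 thresholding at 0.5
a3, _ = forward_propagation(X, parameters)
p = (a3 > 0.5).astype(int)
</code>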
Computing the cost function
Taking the cross-entropy loss (Cross Entropy Loss) as the example, the cost function is computed as follows:
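$$J = -\frac{1}{m}\sum_{i=1}^{m}\left[ y^{(i)}\log a^{(i)} + \left(1-y^{(i)}\right)\log\left(1-a^{(i)}\right) \right]$$

where $a^{(i)}$ is the network's sigmoid output for example $i$, $y^{(i)}$ is its true label, and $m$ is the number of examples; the logprobs line in the code below accumulates exactly these two terms.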
<code>def compute_cost(a3, Y):
    """
    Implement the cost function

    Arguments:
    a3 -- post-activation, output of forward propagation
    Y -- "true" labels vector, same shape as a3

    Returns:
    cost -- value of the cost function
    """
    m = Y.shape[1]
    logprobs = np.multiply(-np.log(a3), Y) + np.multiply(-np.log(1 - a3), 1 - Y)
    cost = 1./m * np.nansum(logprobs)  # nansum ignores NaNs from 0 * log(0) terms

    return cost
</code>
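With every piece in place, training is just forward pass, cost, backward pass, update, repeated. Below is an end-to-end sketch on made-up toy data; the target, layer sizes, learning rate, and iteration count are all illustrative assumptions, not part of the original article.

<code># End-to-end training sketch, assuming all functions above are in scope
np.random.seed(1)
X = np.random.randn(2, 200)                    # 2 features, 200 examples
Y = (X[0:1, :] * X[1:2, :] > 0).astype(float)  # toy binary target

parameters = initialize_parameters([2, 4, 3, 1])
learning_rate = 0.1
for i in range(2000):
    A3, cache = forward_propagation(X, parameters)  # forward pass
    cost = compute_cost(A3, Y)                      # cross-entropy cost
    grads = backward_propagation(X, Y, cache)       # backward pass
    parameters = update_parameters(parameters, grads, learning_rate)
    if i % 500 == 0:
        print("iteration %d: cost %.4f" % (i, cost))

predict(X, Y, parameters)  # prints accuracy on the training data
</code>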
Conclusion
Through this article you should have gained a fresh understanding of deep learning's foundational building blocks: data, models, parameters, nonlinearities, forward-propagation prediction, backward parameter updates via partial derivatives, and so on. In everyday study, it is not enough to simply know that tf.sigmoid gives you a nonlinearity; understanding the underlying code deepens our grasp of deep learning.
Finally, thank you for following 钱多多先森, a fellow learner focused on AI, CV, digital tech, and personal finance. Follow me, and let's grow together.