
Neural Network Character Recognition in MATLAB: Recognizing Digits with a Neural Network


The programming exercise in week 4 of Andrew Ng's Machine Learning course is to implement a neural network in MATLAB that recognizes the digits in an image. The full set of digits to be recognized is shown below:

[Figure: sample of the handwritten digits in the dataset]

Each digit is a 20×20-pixel image. Treating each pixel as one input unit gives 400 inputs, and adding the extra bias unit the network needs brings the total to 401 input units. The training data X provided with the exercise is a 5000×400 matrix (5000 examples, one per row).

The exercise calls for a single hidden layer with 25 units. The hidden layer also gets a bias unit, so it passes 26 values on to the output layer.

The final output is a 10-dimensional vector whose entries are the probabilities that the input is each of the digits 0-9 (since MATLAB has no index 0, the exercise maps the digit 0 to label 10, while 1-9 keep their own labels). The entry with the highest probability is the recognized digit.
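As a minimal illustration of that label convention, a predicted label can be mapped back to the digit it represents with a single mod, which is exactly what the display loop in the script below does:

label = 10;              % the label the exercise uses for the digit 0
digit = mod(label, 10);  % gives 0; labels 1-9 are unchanged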

The structure of the neural network is as follows:

[Figure: network architecture with input layer, hidden layer, and output layer]

As the figure shows, besides the input values the network contains two parameter matrices, Theta1 and Theta2.

Theta1 holds the weights on the edges from the input layer to the hidden layer and is a 25×401 matrix; Theta2 holds the weights on the edges from the hidden layer to the output layer and is a 10×26 matrix.
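A quick sanity check after loading the pre-trained weights shipped with the exercise (ex3weights.mat) should confirm these shapes:

load('ex3weights.mat');   % provides Theta1 and Theta2
size(Theta1)              % expected: 25 401
size(Theta2)              % expected: 10 26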

To keep the values in each layer on a normalized scale, the output of every layer is passed through the sigmoid activation function, which squashes it into the range (0, 1).
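The predict function below relies on a sigmoid helper. A minimal vectorized version, consistent with the sigmoid.m provided in the exercise, looks like this:

function g = sigmoid(z)
%SIGMOID Compute the sigmoid of each element of z
g = 1.0 ./ (1.0 + exp(-z));
end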

Once the network is set up, the first step is to train it on the training data to obtain the weights Theta1 and Theta2; after that it can be used for prediction. (In this exercise the weights are already pre-trained and are simply loaded from ex3weights.mat.)

The main MATLAB script is as follows:

%% Machine Learning Online Class - Exercise 3 | Part 2: Neural Networks

%  Instructions
%  ------------
%
%  This file contains code that helps you get started on the
%  linear exercise. You will need to complete the following functions
%  in this exercise:
%
%     lrCostFunction.m (logistic regression cost function)
%     oneVsAll.m
%     predictOneVsAll.m
%     predict.m
%
%  For this exercise, you will not need to change any code in this file,
%  or any other files other than those mentioned above.
%

%% Initialization
clear; close all; clc

%% Setup the parameters you will use for this exercise
input_layer_size  = 400;   % 20x20 Input Images of Digits
hidden_layer_size = 25;    % 25 hidden units
num_labels = 10;           % 10 labels, from 1 to 10
                           % (note that we have mapped "0" to label 10)

%% =========== Part 1: Loading and Visualizing Data =============
%  We start the exercise by first loading and visualizing the dataset.
%  You will be working with a dataset that contains handwritten digits.
%

% Load Training Data
fprintf('Loading and Visualizing Data ...\n')

load('ex3data1.mat');
m = size(X, 1);

% Randomly select 100 data points to display
sel = randperm(size(X, 1));
sel = sel(1:100);

displayData(X(sel, :));

fprintf('Program paused. Press enter to continue.\n');
pause;

%% ================ Part 2: Loading Parameters ================
%  In this part of the exercise, we load some pre-initialized
%  neural network parameters.

fprintf('\nLoading Saved Neural Network Parameters ...\n')

% Load the weights into variables Theta1 and Theta2
load('ex3weights.mat');

%% ================= Part 3: Implement Predict =================
%  After training the neural network, we would like to use it to predict
%  the labels. You will now implement the "predict" function to use the
%  neural network to predict the labels of the training set. This lets
%  you compute the training set accuracy.

pred = predict(Theta1, Theta2, X);

fprintf('\nTraining Set Accuracy: %f\n', mean(double(pred == y)) * 100);

fprintf('Program paused. Press enter to continue.\n');
pause;

%  To give you an idea of the network's output, you can also run
%  through the examples one at a time to see what it is predicting.

% Randomly permute examples
rp = randperm(m);

for i = 1:m
    % Display
    fprintf('\nDisplaying Example Image\n');
    displayData(X(rp(i), :));

    pred = predict(Theta1, Theta2, X(rp(i), :));
    fprintf('\nNeural Network Prediction: %d (digit %d)\n', pred, mod(pred, 10));

    % Pause
    fprintf('Program paused. Press enter to continue.\n');
    pause;
end

The predict function is as follows:

function p = predict(Theta1, Theta2, X)
%PREDICT Predict the label of an input given a trained neural network
%   p = PREDICT(Theta1, Theta2, X) outputs the predicted label of X given the
%   trained weights of a neural network (Theta1, Theta2)

% Useful values
m = size(X, 1);
num_labels = size(Theta2, 1);

% You need to return the following variables correctly
p = zeros(size(X, 1), 1);

% ====================== YOUR CODE HERE ======================
% Instructions: Complete the following code to make predictions using
%               your learned neural network. You should set p to a
%               vector containing labels between 1 and num_labels.
%
% Hint: The max function might come in useful. In particular, the max
%       function can also return the index of the max element; for more
%       information see 'help max'. If your examples are in rows, then you
%       can use max(A, [], 2) to obtain the max for each row.
%

% Add the bias column and propagate forward through both layers
X = [ones(m, 1) X];                 % m x 401
predictZ = X * Theta1';             % hidden-layer weighted sums, m x 25
predictZ = sigmoid(predictZ);       % hidden-layer activations
predictZ = [ones(m, 1) predictZ];   % add the hidden-layer bias unit, m x 26
predictZZ = predictZ * Theta2';     % output-layer weighted sums, m x 10
predictY = sigmoid(predictZZ);      % output-layer activations (class scores)

% The predicted label is the index of the largest output activation
[mp, imp] = max(predictY, [], 2);
p = imp;

% =========================================================================

end
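As a usage sketch (assuming ex3data1.mat and ex3weights.mat are on the MATLAB path, exactly as in the script above):

load('ex3data1.mat');     % X (5000x400) and y (5000x1)
load('ex3weights.mat');   % Theta1 (25x401) and Theta2 (10x26)
pred = predict(Theta1, Theta2, X);
fprintf('Training Set Accuracy: %f\n', mean(double(pred == y)) * 100);
digits = mod(pred, 10);   % map label 10 back to the digit 0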

A screenshot of the final run:

[Screenshot: training set accuracy reported by the neural network script]

Finally, a comparison with logistic regression:

Logistic regression needs a separate one-vs-all classifier for each digit, so ten classifiers have to be trained, each running for 50 iterations. The result:

[Screenshot: training output and accuracy of the one-vs-all logistic regression classifiers]

The neural network is clearly more accurate than logistic regression, and the prediction code is also much more concise, with noticeably fewer lines. That is exactly where the strength of the neural network lies.
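For reference, the one-vs-all training loop from the first part of the exercise looks roughly like the sketch below. It assumes the course-provided fmincg optimizer and the lrCostFunction you implement yourself; the 50-iteration limit matches the setting mentioned above.

function all_theta = oneVsAll(X, y, num_labels, lambda)
%ONEVSALL Train num_labels logistic regression classifiers, one per class
m = size(X, 1);
n = size(X, 2);
all_theta = zeros(num_labels, n + 1);
X = [ones(m, 1) X];                       % add the bias column

options = optimset('GradObj', 'on', 'MaxIter', 50);
for c = 1:num_labels
    initial_theta = zeros(n + 1, 1);
    % Train the classifier that separates class c from all other classes
    all_theta(c, :) = fmincg(@(t) lrCostFunction(t, X, (y == c), lambda), ...
                             initial_theta, options);
end
end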


