
Building and Simulating Neural Network Models Independently in Matlab


I tried to create a neural network to estimate y = x^2. So I created a fitting neural network and gave it some samples for input and output. I tried to build this network in C++, but the result is different from what I expected.

With the following inputs:


0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 -1 -2 -3 -4 -5 -6 -7 -8 -9 -10 -11 -12 -13 -14 -15 -16 -17 -18 -19 -20 -21 -22 -23 -24 -25 -26 -27 -28 -29 -30 -31 -32 -33 -34 -35 -36 -37 -38 -39 -40 -41 -42 -43 -44 -45 -46 -47 -48 -49 -50 -51 -52 -53 -54 -55 -56 -57 -58 -59 -60 -61 -62 -63 -64 -65 -66 -67 -68 -69 -70 -71


and the following outputs:


0 1 4 9 16 25 36 49 64 81 100 121 144 169 196 225 256 289 324 361 400 441 484 529 576 625 676 729 784 841 900 961 1024 1089 1156 1225 1296 1369 1444 1521 1600 1681 1764 1849 1936 2025 2116 2209 2304 2401 2500 2601 2704 2809 2916 3025 3136 3249 3364 3481 3600 3721 3844 3969 4096 4225 4356 4489 4624 4761 4900 5041 1 4 9 16 25 36 49 64 81 100 121 144 169 196 225 256 289 324 361 400 441 484 529 576 625 676 729 784 841 900 961 1024 1089 1156 1225 1296 1369 1444 1521 1600 1681 1764 1849 1936 2025 2116 2209 2304 2401 2500 2601 2704 2809 2916 3025 3136 3249 3364 3481 3600 3721 3844 3969 4096 4225 4356 4489 4624 4761 4900 5041


I used the fitting tool network, with matrix rows. Training is 70%, validation is 15%, and testing is 15% too. The number of hidden neurons is 2. Then on the command line I wrote this:

purelin(net.LW{2}*tansig(net.IW{1}*inputTest+net.b{1})+net.b{2})

Other information:

My net.b{1} is: -1.16610230053776 1.16667147712026

My net.b{2} is: 51.3266249426358

And net.IW{1} is: 0.344272596370387 0.344111217766824

net.LW{2} is: 31.7635369693519 -31.8082184881063

When my inputTest is 3, the result of this command is 16, while it should be about 9. If I've made an error somewhere, please let me know. Thanks.
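For reference, plugging the reported weights into the command above (the variable names below are just placeholders for the values listed in the question) and leaving the input unscaled does reproduce a value of about 16:

%# quick check with the reported weights, no input/output scaling applied
IW1 = [0.344272596370387; 0.344111217766824];  %# net.IW{1}, assumed 2x1 (2 hidden neurons, 1 input)
b1  = [-1.16610230053776; 1.16667147712026];   %# net.b{1}
LW2 = [31.7635369693519, -31.8082184881063];   %# net.LW{2}, 1x2
b2  = 51.3266249426358;                        %# net.b{2}

inputTest = 3;
purelin(LW2*tansig(IW1*inputTest + b1) + b2)   %# gives roughly 16, as reported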

Edited: I found the link Neural network in MATLAB, which describes a problem like mine, but with one small difference: in that problem the input and output ranges are the same, while in mine they are not. That answer says I need to scale the results, but I don't know how to scale mine. Any ideas?

1 Answer

#1



You are right about scaling. As was mentioned in the linked answer, the neural network by default scales the input and output to the range [-1,1]. This can be seen in the network processing functions configuration:


>> net = fitnet(2);

>> net.inputs{1}.processFcns
ans = 
    'removeconstantrows'    'mapminmax'

>> net.outputs{2}.processFcns
ans = 
    'removeconstantrows'    'mapminmax'

The second preprocessing function applied to both input/output is mapminmax with the following parameters:


>> net.inputs{1}.processParams{2}
ans = 
    ymin: -1
    ymax: 1

>> net.outputs{2}.processParams{2}
ans = 
    ymin: -1
    ymax: 1

to map both into the range [-1,1] (prior to training).


This means that the trained network expects input values in this range, and outputs values also in the same range. If you want to manually feed input to the network, and compute the output yourself, you have to scale the data at input, and reverse the mapping at the output.

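As a minimal sketch of that manual computation for a single test value, assuming the original training vectors are still available as inputs and outputs (placeholder names, not from the post), the mapminmax settings can be recreated from the data and wrapped around the one-liner from the question:

%# recover the input/output scaling settings from the training data
[~, inSettings]  = mapminmax(inputs,  -1, 1);   %# inputs: placeholder for the training input vector
[~, outSettings] = mapminmax(outputs, -1, 1);   %# outputs: placeholder for the training target vector

inputTest = 3;
inScaled  = mapminmax('apply', inputTest, inSettings);   %# map the raw input into [-1,1]
outScaled = purelin(net.LW{2}*tansig(net.IW{1}*inScaled + net.b{1}) + net.b{2});
result    = mapminmax('reverse', outScaled, outSettings) %# map the output back to the original scale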

One last thing to remember is that each time you train the ANN, you will get different weights. If you want reproducible results, you need to fix the state of the random number generator (initialize it with the same seed each time). Read the documentation on functions like rng and RandStream.

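For instance, one way to do that (just a sketch, assuming x and y hold the training data) is to seed the generator immediately before creating and training the network:

%# fix the RNG state so the initial weights (and the random data split) are reproducible
rng(0, 'twister');       %# any fixed seed will do
net = fitnet(2);
net = train(net, x, y);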

Also note that if you are dividing the data into training/testing/validation sets, you must use the same split each time (this is probably also affected by the randomness aspect I mentioned).
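If you want the split itself to be deterministic regardless of the seed, one option is to switch from 'dividerand' to explicit indices with 'divideind'; the index ranges below are only an illustration, assuming x holds the input samples:

%# explicit, fixed train/validation/test split
N = numel(x);
net.divideFcn = 'divideind';
net.divideParam.trainInd = 1:round(0.70*N);                %# first 70% for training
net.divideParam.valInd   = round(0.70*N)+1:round(0.85*N);  %# next 15% for validation
net.divideParam.testInd  = round(0.85*N)+1:N;              %# last 15% for testing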


Here is an example to illustrate the idea (adapted from another post of mine):


%%# data
x = linspace(-71,71,200);            %# 1D input
y_model = x.^2;                      %# model
y = y_model + 10*randn(size(x)).*x;  %# add some noise

%%# create ANN, train, simulate
net = fitnet(2);                     %# one hidden layer with 2 nodes
net.divideFcn = 'dividerand';
net.trainParam.epochs = 50;
net = train(net,x,y);
y_hat = net(x);

%%# plot
plot(x, y, 'b.'), hold on
plot(x, x.^2, 'Color','g', 'LineWidth',2)
plot(x, y_hat, 'Color','r', 'LineWidth',2)
legend({'data (noisy)','model (x^2)','fitted'})
hold off, grid on

%%# manually simulate network
%# map input to [-1,1] range
[~,inMap] = mapminmax(x, -1, 1);
in = mapminmax('apply', x, inMap);

%# propagate values to get output (scaled to [-1,1])
hid = tansig( bsxfun(@plus, net.IW{1}*in, net.b{1}) ); %# hidden layer
outLayerOut = purelin( net.LW{2}*hid + net.b{2} );     %# output layer

%# reverse mapping from [-1,1] to original data scale
[~,outMap] = mapminmax(y, -1, 1);
out = mapminmax('reverse', outLayerOut, outMap);

%# compare against MATLAB output
max( abs(out - y_hat) )        %# this should be zero (or in the order of `eps`)

I opted to use the mapminmax function, but you could have done that manually as well. The formula is a pretty simple linear mapping:

y = (ymax-ymin)*(x-xmin)/(xmax-xmin) + ymin;
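Applied to the input range from the question ([-71,71] mapped onto [-1,1]), the formula and its inverse look like this for a single value:

%# forward: map a raw input value from [-71,71] into [-1,1]
xmin = -71; xmax = 71;
xScaled = (1-(-1))*(3 - xmin)/(xmax - xmin) + (-1)       %# = 3/71, about 0.0423

%# reverse: the inverse mapping recovers the original value
xBack = (xmax-xmin)*(xScaled - (-1))/(1 - (-1)) + xmin   %# = 3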

(screenshot: plot of the noisy data, the x^2 model, and the fitted network output)

