How can I predict outputs from new inputs with a neural network after fitting the data?

I used the Neural Network fitting tool (nftool) to train on my data and got outputs for each target that I supplied to the network. Those outputs are well within the error range and give a good fit. But now I want to predict outputs for input samples that were not included in the data set I previously provided to nftool. Please tell me how I can do that. The new input samples are within the range of the training set.
3 comments
sidra muqaddas on 26 Oct 2016
How do I predict the output from a new input after the training is done (using code, not nntoolbox variables)?
sidra muqaddas on 26 Oct 2016
x = [0 1 0; 0 1 0; 1 1 0; 1 1 1; 0 1 1; 1 1 1; 0 1 1; 1 1 0]; % three samples (columns) from the input training data
t = [0 0 1; 1 0 0; 0 1 0; 0 0 0; 0 0 0; 0 0 0]; % three samples (columns) from the target training data
[ni, N] = size(x); % ni = number of input neurons
[no, N] = size(t); % no = number of output neurons
nh = 8; % number of neurons in the hidden layer
wih = 0.01*randn(nh, ni+1); % weight matrix (input to hidden layer; last column holds the biases)
who = 0.01*randn(no, nh+1); % weight matrix (hidden to output layer; last column holds the biases)
c = 0;
while c < 1000
    c = c + 1;
    for i = 1:N
        % forward pass
        for j = 1:nh
            netj(j) = wih(j,1:end-1)*x(:,i) + wih(j,end);
            outj(j) = tansig(netj(j));
        end
        for k = 1:no
            netk(k) = who(k,1:end-1)*outj' + who(k,end);
            outk(k) = 1/(1 + exp(-netk(k))); % logsig
            delk(k) = outk(k)*(1 - outk(k))*(t(k,i) - outk(k));
        end
        % back propagation
        for j = 1:nh
            s = 0;
            for k = 1:no
                s = s + who(k,j)*delk(k);
            end
            delj(j) = (1 - outj(j)^2)*s; % tansig derivative is 1 - outj^2, not outj*(1 - outj)
        end
        for k = 1:no
            for l = 1:nh
                who(k,l) = who(k,l) + 0.5*delk(k)*outj(l);
            end
            who(k,nh+1) = who(k,nh+1) + delk(k); % bias update
        end
        for j = 1:nh
            for ii = 1:ni
                wih(j,ii) = wih(j,ii) + 0.5*delj(j)*x(ii,i);
            end
            wih(j,ni+1) = wih(j,ni+1) + delj(j); % bias update
        end
    end
end
h = tansig(wih*[x; ones(1,N)]); % hidden-layer outputs for all training samples
y = logsig(who*[h; ones(1,N)]); % network outputs
y = round(y); e = t - y; % threshold the outputs and compute the training error
csr = [0 1 0 0 0 0 1 0]'; % new input to the network: current sensor reading (one column = one sample)
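To actually predict for the new reading, the same forward pass can be applied to csr using the trained weights (a sketch completing the snippet above):
hnew = tansig(wih*[csr; 1]); % hidden activations for the new input
ynew = round(logsig(who*[hnew; 1])) % predicted output for the current sensor reading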


Accepted Answer

Greg Heath on 29 Jun 2014
Incorrect understanding:
Generalization: the ability to perform well on nontraining data.
Overfitting: the number of training equations, Ntrneq, not being sufficiently larger than the number of unknown weights, Nw; this can be a cause of DECREASED generalization.
Mitigation: increase Ndof, and/or use validation stopping (the default), and/or use regularization (e.g., TRAINBR).
Insufficient information:
size(input) = [ I N ] = [ ? ? ]
size(target) = [ O N ] = [ ? ? ]
default number of training examples Ntrn = N-2*round(0.15*N) = ?
number of training equations Ntrneq = Ntrn*O
reference mean-square errors
MSEtrn00 = mean(var(trntarget',1)) % Biased
MSEtrn00a = mean(var(trntarget',0))% DOF adjusted
MSEval00 = mean(var(valtarget',1)) % Unbiased
MSEtst00 = mean(var(tsttarget',1)) % Unbiased
number of hidden nodes, H = ?
number of unknown weights Nw = (I+1)*H+(H+1)*O = ?
number of estimation degrees of freedom Ndof = Ntrneq-Nw = ?
normalized mean-square errors
SSEtrn = sse(trntarget-trnoutput)
MSEtrn = SSEtrn/Ntrneq % mse(trntarget-trnoutput)
MSEtrna = SSEtrn/Ndof
NMSEtrn = MSEtrn/MSEtrn00
NMSEtrna = MSEtrna/MSEtrn00a
NMSEval = MSEval/MSEval00
NMSEtst = MSEtst/MSEtst00
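Filled in as code, those quantities look like this (a sketch; trntarget, trnoutput, etc. come from dividing your own data, and H must be replaced by your hidden-layer size):
[I, N] = size(input); [O, ~] = size(target);
Ntrn = N - 2*round(0.15*N); % default 0.70/0.15/0.15 data division
Ntrneq = Ntrn*O; % number of training equations
H = 10; % hidden nodes (placeholder; use your own value)
Nw = (I+1)*H + (H+1)*O; % number of unknown weights
Ndof = Ntrneq - Nw; % estimation degrees of freedom
MSEtrn00 = mean(var(trntarget', 1)); % biased reference MSE
MSEtrn00a = mean(var(trntarget', 0)); % DOF-adjusted reference MSE
SSEtrn = sse(trntarget - trnoutput);
MSEtrn = SSEtrn/Ntrneq; % same as mse(trntarget - trnoutput)
MSEtrna = SSEtrn/Ndof;
NMSEtrn = MSEtrn/MSEtrn00
NMSEtrna = MSEtrna/MSEtrn00a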
1 comment
Atiyo Banerjee on 29 Jun 2014
Well, thanks for the information, it is really helpful to me. I used the LM algorithm with 12 neurons in 1 hidden layer, and it simulated fairly well and could mimic the parabolic behavior of the system. But some of the results are inconsistent with the experimental data that I had with me. Maybe the values are falling into local minima traps and the learning is not that good for the system. I referred to a research paper on absorption that did the same experimental fitting and used 1200 data sets for training. In my situation, I had about 150 static data sets of 4 different variables. At first I was getting repetitive values in the simulation results; then I corrected it by re-initializing the weights and retraining the network.
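One systematic way to do that weight re-initialization is to retrain from several random starting points and keep the best result (an illustrative sketch, not code from the thread; x and t are the training data):
bestperf = Inf;
for trial = 1:10
    net = fitnet(12); % 12 hidden neurons; trained with Levenberg-Marquardt by default
    net = configure(net, x, t);
    net = init(net); % fresh random weights for each trial
    [net, tr] = train(net, x, t);
    if tr.best_vperf < bestperf % keep the net with the lowest validation MSE
        bestperf = tr.best_vperf;
        bestnet = net;
    end
end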


More Answers (1)

Greg Heath on 28 Jun 2014
newoutput = net(newinput)
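In context: after training (for example with fitnet/train, or after exporting net from nftool to the workspace), the network object is callable on any new input matrix. A minimal end-to-end sketch (the variable names are illustrative):
net = fitnet(10); % the same architecture nftool builds
net = train(net, input, target); % input is I-by-N, target is O-by-N
newinput = input(:,1); % any I-by-M matrix of new samples works
newoutput = net(newinput) % evaluate the trained network on new data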
Thank you for formally accepting my answer.
Greg
4 comments
Atiyo Banerjee on 5 Jul 2014
Thank you so much for your conceptual reply. I am still searching for deeper concepts on the problem-solving ability of neural networks. And yes, there always remains the question of whether the system is over-determined, under-determined, or well-determined. The problem with using mse as the performance function is that it gives the mean deviation averaged over the observations, but does not tell us about the individual deviations or the number of observations that have deviated from optimum performance. In that case, keeping the weights to a necessary value would solve the problem.
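For what it's worth, the individual deviations behind a single mse number can be inspected directly (a sketch; net, x and t as defined during training):
y = net(x); % outputs for all samples
e = t - y; % O-by-N matrix of individual deviations
msetotal = mean(e(:).^2); % the single number mse() reports
mseperoutput = mean(e.^2, 2) % mean-square error of each output row
nbad = sum(abs(e) > 0.1, 2) % per output: how many samples deviate by more than a tolerance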
Greg Heath on 26 Oct 2016
You can always superimpose output plots (red) over target plots (blue) to obtain a better understanding of what causes errors.
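For example (a sketch; net, x and t as defined during training):
y = net(x); % network outputs
plot(t', 'b'); hold on % targets in blue
plot(y', 'r'); hold off % outputs in red
title('targets (blue) vs. outputs (red)')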

