Why scale data to [-1,1]?

1 view (last 30 days)
Andre on 15 Jan 2015
Answered: Greg Heath on 18 Jan 2015
What are the differences between normalizing features to [0,1], [-1,1], or [-5,5] with the NN minmax function?

Accepted Answer

Greg Heath on 18 Jan 2015
The purpose of normalization is to keep the inputs to the transfer functions as close as possible to the middle of the so-called 'active region'. For example, Warren Sarle posted experimental results in the comp.ai.neural-nets FAQ indicating that, in general, you can do no better than using bipolar inputs, outputs, and transfer functions.
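For intuition, here is a minimal sketch (assuming the toolbox functions mapminmax and tansig, with made-up random features; the scaled values are passed straight through tansig just to visualize its active region):

x = 50*rand(3, 100);                 % made-up raw features, 3 variables x 100 samples
xb = mapminmax(x, -1, 1);            % bipolar [-1,1], the toolbox default
xu = mapminmax(x,  0, 1);            % unipolar [0,1]
xw = mapminmax(x, -5, 5);            % wide [-5,5]
% tansig is roughly linear on [-1,1] and saturates beyond about +/-2,
% so [-1,1] values stay in the active region while [-5,5] values
% mostly land in the flat tails where gradients vanish.
mean(abs(tansig(xb(:))))             % moderate values: inside the active region
mean(abs(tansig(xw(:))))             % close to 1: mostly saturated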
Nevertheless, it is easier in MATLAB to use unit-sum unipolar [0,1] coding for classification targets because of the functions vec2ind and ind2vec.
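For example (a small sketch of that target coding; the class labels are made up):

ind = [3 1 2 3];             % class indices for four samples
T   = full(ind2vec(ind));    % 3x4 unit-sum [0,1] target matrix, one column per sample
ind2 = vec2ind(T)            % recovers 3 1 2 3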
My interpretation of 'better' is faster and/or more accurate. Obviously, any such result is machine dependent, so, given what you know now, you can run your own speed and accuracy tests on your own machine.
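A rough sketch of such a test (assuming fitnet and the toolbox's simplefit_dataset; timings and errors will differ on your machine):

[x, t] = simplefit_dataset;              % small built-in regression set
for range = {[0 1], [-1 1], [-5 5]}
    r  = range{1};
    xn = mapminmax(x, r(1), r(2));       % rescale the inputs only
    net = fitnet(10);
    net.inputs{1}.processFcns = {};      % disable the built-in mapminmax so the net sees our scaling
    net.trainParam.showWindow = false;
    rng(0)                               % same initial weights and data division on every run
    tic, net = train(net, xn, t); sec = toc;
    fprintf('[%g,%g]: %.2f s, MSE %.3g\n', r(1), r(2), sec, mse(net, t, net(xn)))
end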
You also have to take into account how the weights are initialized, which means understanding the functions init, initwb, and initnw.
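A quick sketch of where those settings live on a toolbox network object (assuming fitnet; the data here is made up, only to give the layers their sizes):

x = rand(3, 50); t = rand(1, 50);     % made-up data, used only for sizing
net = configure(fitnet(10), x, t);    % fix layer sizes so the weights exist
net.initFcn                           % 'initlay': initialize layer by layer
net.layers{1}.initFcn                 % 'initnw': the Nguyen-Widrow rule
net = init(net);                      % redraw the initial weights and biases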
However, before you start, see my post "Nonsaturating Initial Weights" in comp.ai.neural-nets.
Hope this helps.
Thank you for formally accepting my answer
Greg

More Answers (0)

