Results when using a time invariant net

As a first attempt we want to see whether a time invariant net can do the time series prediction
job. Certainly we do not expect this to work, since a time invariant net does not store any
information about past patterns internally, i.e. its output depends solely on the current input.
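
To make this point concrete, the following small Python sketch (not MemBrain code, just an
illustration) shows why a memoryless net cannot distinguish two points of a time series that
happen to have the same value but a different history:

    # Illustrative sketch: a time invariant net is a pure function of the
    # current input and keeps no internal state between time steps.
    import numpy as np

    def time_invariant_net(x, W1, b1, W2, b2):
        # one tanh hidden layer, tanh output, no memory of earlier inputs
        h = np.tanh(W1 @ x + b1)
        return np.tanh(W2 @ h + b2)

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(4, 1)), rng.normal(size=4)
    W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)

    # The same input value at two different time steps yields the same output,
    # so the net cannot tell a rising part of the series from a falling one.
    print(time_invariant_net(np.array([0.3]), W1, b1, W2, b2))
    print(time_invariant_net(np.array([0.3]), W1, b1, W2, b2))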


Nevertheless, we want to give it a try to verify this assumption and to be able to
compare the results with those of the time variant nets later on.



We build the following net.





The input neuron has the activation function IDENTICAL, while for all other neurons we select the
activation function TAN HYP.

For the output neuron we set the data normalization range to -0.6 .. 0.6.
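
Outside of MemBrain, this setup roughly corresponds to the following Python sketch. The hidden
layer size and weights are placeholders only; the actual topology is the one drawn above and
stored in MackeyGlassTimeInvariant.mbn. The normalization range keeps the targets well inside
the output range of the tanh activation:

    import numpy as np

    def normalize(series, lo=-0.6, hi=0.6):
        # map the raw series values linearly into the range -0.6 .. 0.6
        s_min, s_max = series.min(), series.max()
        return lo + (series - s_min) * (hi - lo) / (s_max - s_min)

    def forward(x, W1, b1, W2, b2):
        # input neuron: IDENTICAL (x is passed through unchanged)
        h = np.tanh(W1 @ x + b1)        # hidden neurons: TAN HYP
        return np.tanh(W2 @ h + b2)     # output neuron: TAN HYP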


Note:

If you do not want to draw the net yourself you can also load the example net from file:

MackeyGlassTimeInvariant.mbn


As the teacher we select the 'RPROP' teacher. The only settings we change with respect to the
defaults are 'Lesson Pattern Selection', which we set to 'Ordered', and the option
'Reset Net Before Every Lesson', which we additionally enable, as shown in the following screen shot.
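
The name RPROP stands for resilient propagation. As a rough illustration of the idea (shown here
in the simplified iRPROP- form, which may differ in detail from MemBrain's implementation): only
the sign of the gradient is used, and every weight keeps its own adaptive step size.

    import numpy as np

    def rprop_step(w, grad, prev_grad, step,
                   eta_plus=1.2, eta_minus=0.5,
                   step_min=1e-6, step_max=50.0):
        sign_change = grad * prev_grad
        # gradient kept its sign -> grow the step, sign flipped -> shrink it
        step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
        step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
        grad = np.where(sign_change < 0, 0.0, grad)   # no update right after a sign flip
        w = w - np.sign(grad) * step                  # update uses only the sign
        return w, grad, step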




The option 'Ordered' advises the teacher to present the patterns to the net in the order of the
lesson. This is certainly important if we want the net to learn rules about the order of the patterns.
Learning such ordering rules between patterns is actually the core goal when training nets to predict
time series.


The check mark in the box 'Reset Net Before Every Lesson' instructs the teacher to reset all
activations stored in the neurons and links right before every lesson run during training.

This ensures a defined state at the start of every new lesson run.
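
In terms of a training loop, the two settings correspond roughly to the following pseudo-code
(the net object and its methods are hypothetical and not MemBrain's API):

    def train(net, lesson, epochs):
        for epoch in range(epochs):
            # 'Reset Net Before Every Lesson': start from a defined state
            net.reset_activations()
            # 'Ordered': present the patterns in lesson order, never shuffled
            for inp, target in lesson:
                out = net.forward(inp)
                net.accumulate_gradient(out, target)
            # RPROP typically applies one batch update per run through the lesson
            net.apply_rprop_update()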


Note: Neither of the two settings above is actually required for time invariant nets. However, we also
want to train time variant nets later on in this example, and there we will need these adjustments to be
in place.


Now we do the following.


Open the Pattern Error Viewer and the Lesson Error Viewer and start the training. When asked whether
to randomize the net, choose 'Yes'.


You should see something similar to the following in the Lesson Error Viewer and the Pattern Error Viewer.





What we see here is the output of the net during the training phase. We can see that
the net is not even able to approximate the training data set, so there is no point in
checking its reaction to the validation data set.


We will first have to improve our net, which in this case means turning it into a time
variant net.


Continue with time variant net version #1



 

Copyright © 2003 - 2007, Thomas Jetter