When I run the net or evaluate the net error, a strange thing happens - the results change (on the same data).
I have checked, and it happens in the attached examples, such as the Mackey Glass series.
Why is that?
Shouldn't the results be exactly the same?
The worst case is when a binary activation function is used; the results are then completely different.
Here is a link to an example:
http://www.youtube.com/watch?v=-ptn_QSx ... e=youtu.be
Try the following: From MemBrain's main menu select 'Net' - 'Analyse Net' and take a look at the analysis summary of the net.
You will probably find the statement 'The network is time variant'.
This means that your net calculates its output not only on the basis of its inputs but also takes its internal state into consideration.
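The state dependence described above can be sketched in plain Python (this is a conceptual illustration, not MemBrain's actual internals; the single-neuron class, weights, and inputs are made up):

```python
# Minimal sketch of why a time variant net gives different results on
# identical inputs: a loopback link feeds the previous activation back in,
# so the output depends on the net's internal state, not only on the input.

class TimeVariantNeuron:
    def __init__(self, w_in=0.8, w_loop=0.5):
        self.w_in = w_in        # weight on the external input
        self.w_loop = w_loop    # weight on the loopback (previous activation)
        self.state = 0.0        # internal state: the last activation

    def think_step(self, x):
        self.state = self.w_in * x + self.w_loop * self.state
        return self.state

    def reset(self):
        # Corresponds to 'Net' -> 'Reset Net': clear all stored activations.
        self.state = 0.0

net = TimeVariantNeuron()
first = [net.think_step(x) for x in (1.0, 1.0)]
second = [net.think_step(x) for x in (1.0, 1.0)]  # same data, different results
net.reset()
third = [net.think_step(x) for x in (1.0, 1.0)]   # reproducible after reset
print(first == second)  # False
print(first == third)   # True
```

Resetting before each run is exactly what makes the outputs reproducible: the net then always starts from the same internal configuration.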
If you post your net here in the forum I can tell you what elements cause it to be time variant.
PS: I moved this post to another subforum since it is not related to a specific NN project.
So, to obtain the same result, would I have to execute 'Think Step' as many times as there are delay steps?
Edit: I just realized that this would not work.
During thinking, the net remembers its last state, i.e. the state after the last pattern was applied during teaching.
So how can such a network be validated and employed?
klubow wrote:So how can such a network be validated and employed?
Since you use a time variant network to predict a time series, the validation is ideally performed during (i.e. in parallel with) training.
Have you worked through the Time Series Prediction Tutorial that is part of the MemBrain examples? The one that uses the Mackey Glass time series as its subject?
In this tutorial, many steps are described that are essential for training and validating time variant nets. Most importantly, note the teacher settings 'Ordered' and 'Reset Net before Every Lesson'.
If you do not want to validate your nets during training, the following ideas could help:
Try the following:
Each time before executing 'Think On Lesson' in the Lesson Editor, execute the main menu command <Net><Reset Net>. This will set the activations of all neurons to 0 and also reset all activation spikes stored in loopback links or links with length > 1.
You will then see that the net results become reproducible, since the net always starts from the same internal configuration.
The same effect can be achieved by either reloading the net from file before the 'Think On Lesson' or by clicking the 'Undo' button between the 'Think On Lesson' commands.
Another idea is to build a combined validation lesson from the teaching lesson and the original validation lesson, i.e. append the original validation lesson to the teaching lesson (Lesson Editor's menu <Lesson Files><Append...>). If you now perform a 'Think On Lesson', the net is first confronted with the training lesson part and thus will be prepared correctly when it comes to thinking on the validation lesson part. If you want 100% reproducibility, you will still have to perform a 'Reset Net' before each 'Think On Lesson' as described above. However, the 'Reset Net' will probably not have much of an effect anymore; depending on your net architecture, it might even have no effect at all.
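The combined-lesson idea can be sketched in plain Python (an illustration of the state-priming effect only, not MemBrain code; the StatefulNet class, weights, and lesson values are made up):

```python
# Sketch of appending the validation lesson to the training lesson: running
# the training part first primes the internal state of a time variant net,
# so the validation part is then evaluated in the correct temporal context.

class StatefulNet:
    def __init__(self):
        self.state = 0.0
    def think_step(self, x):
        self.state = 0.6 * x + 0.4 * self.state
        return self.state
    def reset(self):
        self.state = 0.0

def run_lesson(net, lesson):
    return [net.think_step(x) for x in lesson]

training = [0.1, 0.5, 0.9]
validation = [0.7, 0.3]

# Validation alone: starts from a reset state, so the temporal context is wrong.
net = StatefulNet(); net.reset()
alone = run_lesson(net, validation)

# Combined lesson: run the training part first, keep only the tail as the
# validation output.
net = StatefulNet(); net.reset()
combined = run_lesson(net, training + validation)
primed = combined[len(training):]

print(alone != primed)  # priming with the training part changes the outputs
```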
Finally, if you want to employ your net in a productive way, you should have a look at the scripting feature of MemBrain. This lets you automate the steps you want to implement. The scripting examples in the download area of the MemBrain homepage also deal with the Mackey Glass time series example and compare different nets automatically.
But in the case of a time invariant network with no loops (input -> hidden layer 1 -> hidden layer 2 -> output), shouldn't the results be exactly the same?
After teaching (with resetting of the net before every lesson), when you perform 'Reset Net' and then 'Think On Lesson', you do not get the same results.
I also checked the Mackey Glass example (MackeyGlassTimeInvariant.mbn) - exactly the same thing happens.
klubow wrote:But in case of time invariant networks with no loops (input -> hidden layer 1 -> hidden layer 2 -> output), results should be exactly the same?
Yes, in this case each 'Think On Lesson' and each 'Evaluate Net Error' should produce exactly the same results, independently of whether a 'Reset Net' is performed or not.
klubow wrote:After teaching (with resetting of the net before every lesson), when you perform 'Reset Net' and then 'Think On Lesson', you do not get the same results.
Sorry, I may have used confusing wording in my last post, so let me clarify:
'Evaluate Net Error' is the function that calculates the net error over the currently adjusted 'Net Error Lesson'. On a time invariant net, this function (as well as 'Think On Lesson') should produce exactly the same results with each execution, as long as the Net Error Lesson does not change.
However, the very first time after teaching has ended, the net error reported by 'Evaluate Net Error' might differ from the one determined at the end of the teaching process if:
1.) The Net Error Lesson is the same as the teaching lesson
2.) The teacher has the flag set: 'Use On-The-Fly Net Error Calculation'
The reason for this is that the 'on-the-fly net error calculation' is an approach to speed up the teaching process by summing up the net error already during the teach run over the lesson (as opposed to performing a separate net error calculation run after each teach run). This saves time but leads to a slightly incorrect net error value, since the net may have been changed while the error was being summed up.
If either 1.) or 2.) above does NOT hold, then 'Evaluate Net Error' should not change anything in the results after a teacher run (on a time invariant net).
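The effect of summing the error while the net is still changing can be sketched in plain Python (a conceptual illustration only, not MemBrain code; the one-weight model, learning rate, and lesson data are made up):

```python
# Sketch of why an on-the-fly net error differs from a separate evaluation
# run: in online mode the weight changes after every single pattern, so the
# on-the-fly sum mixes errors produced by many different intermediate nets.

def predict(w, x):
    return w * x

def online_epoch_with_otf_error(w, lesson, lr=0.1):
    otf_error = 0.0
    for x, target in lesson:
        err = target - predict(w, x)
        otf_error += err ** 2      # summed while the net keeps changing
        w += lr * err * x          # online update after each pattern
    return w, otf_error

def evaluate_net_error(w, lesson):
    # Separate evaluation run: the (now fixed) final net over all patterns.
    return sum((target - predict(w, x)) ** 2 for x, target in lesson)

lesson = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w, otf = online_epoch_with_otf_error(0.0, lesson)
final = evaluate_net_error(w, lesson)
print(otf != final)  # the two error values generally differ
```

The separate evaluation run measures one single net (the final one) against all patterns, which is why it is the correct value, while the on-the-fly sum is only an approximation.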
If you observe different behaviour, please post a complete example here, including the net and lesson files as well as your teacher settings file (the latter can be generated via a button on the Teacher Manager dialog). I can then try to reproduce the behaviour and hopefully explain it sufficiently.
klubow wrote:What I meant was that there is a difference between the data when teaching is finished and the data you get when you then perform 'Think On Lesson'.
See attached file.
Many thanks for the data and the clear video!
What you can observe here is exactly the effect of the setting 'Use On-The-Fly Net Error Calculation' for the selected teacher:
In your example the flag is set for the selected teacher, and the Net Error Lesson is the same as the training lesson (see the indicators 'Net Error Lesson' = 1 and 'Currently Edited (Training) Lesson' = 1 in the Lesson Editor).
Thus, the teacher calculates the net error 'On-The-Fly' during training (because you allow it to), which means it does not perform a finalizing net error calculation run over the lesson after each teach run. I.e., the data points you see in the pattern error window are not all taken from the same net:
Your teacher is configured to run in online mode (i.e. not in batch mode). This means it makes changes to the weights and thresholds in the net after EACH SINGLE DATAPOINT during training.
That means the data you see during teaching is the reaction of the net to the single training data points during the teach process. However, it is not valid for all data points at the same time.
The data of the Mackey Glass time series is quite smooth; there are no massive jumps in value from one point to the next. This means that teaching one of the data points in most cases causes the net to also react reasonably well to the next one during time series prediction. Still, when the next one is trained, the net becomes worse on the previous ones again. However, you do not see this in the net error graph, since you are working in online mode and allow on-the-fly calculation of the graph (and the net error value). Thus, when you finally calculate all values in one shot, without modifying the net between the calculations, you see what the net has really learned over all data patterns.
I.e., in your case you will have to deactivate the setting 'Use On-The-Fly Net Error Calculation', so that every teach run on the lesson is followed by a separate validation run.
In the given example the timing overhead introduced by this does not represent a problem.
Does this clarify the issue for you?