RPROP Training: Nodal Values Displayed: Understanding

Have specific questions about how to work with certain MemBrain features? Not sure about which checkbox in MemBrain has which effects? Want to know if a certain functionality is available in MemBrain or not?

Your questions will be appreciated here!
MrCreosote
Posts: 55
Joined: Wed 21. Jul 2010, 18:43

RPROP Training: Nodal Values Displayed: Understanding

Post by MrCreosote » Thu 29. Jul 2010, 19:16

A few questions about interpreting the displayed nodal values:

First, verify what values are being shown:
  • Inputs: Max Lesson values used for Normalization
  • Hiddens: Activation - this is the value output by that node
  • Outputs: Activation - this is the value output by that node (i.e., the predictions)
My Lesson file has 72 columns and 2700 rows. I have one layer of 20 hiddens, and another layer of 6 hiddens, and a single output. I have about 900 connections. When training, the node values update about once a second.

What values are being updated every second?

Since RPROP is a Batch method, what values are actually shown every second? To calculate an output, one runs a pattern through the net and gets an output. So might the values shown be those of the last pattern of the Lesson, updated after each pass through the patterns (epoch)?
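To make the "batch" point concrete, here is a simplified, hypothetical sketch (not MemBrain's implementation) of how a batch method such as RPROP behaves: every pattern is run through the net each epoch, the error gradient is accumulated over all patterns, and the weights are adjusted only once per epoch. The function name and parameters are illustrative; the canonical RPROP additionally zeroes the gradient after a sign flip, which is omitted here for brevity.

```python
import numpy as np

def rprop_epoch(w, grad_sum, prev_grad, step,
                eta_plus=1.2, eta_minus=0.5,
                step_min=1e-6, step_max=50.0):
    """One RPROP weight update from the gradient accumulated over a whole epoch.

    w         -- current weights
    grad_sum  -- gradient summed over ALL patterns of the lesson (the batch)
    prev_grad -- accumulated gradient from the previous epoch
    step      -- per-weight step sizes
    """
    sign_change = grad_sum * prev_grad
    # Grow the step where the gradient kept its sign, shrink it where it flipped.
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    # Move each weight against the sign of its accumulated gradient.
    w = w - np.sign(grad_sum) * step
    return w, step
```

So while the net's activations change with every pattern that is run through it, the weights themselves change only once per epoch.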

What values are shown when the Teach Lesson button is pressed?

When training, the values update about every second. Some nodes are constant at 0 or 1. Nodes that are changing, do so by small amounts.

When training is halted, the last set of values observed while training remain.

When Teach Lesson is pressed, a new set of values appear for about a second (call these the momentary values, MV) and then the previous values (PV) return.

What is confusing is that the MV are very different from the PV. In particular, a node that was stuck at PV = 1 might show an MV of only 0.1.

This leads to a number of questions about the MVs, the main one being: Why are the MV and PV so different?

I was hoping that if a hidden node was stuck at 0 or 1, it could be eliminated, because it is simply passing a constant to the next layer.

Admin
Site Admin
Posts: 438
Joined: Sun 16. Nov 2008, 18:21

Re: RPROP Training: Nodal Values Displayed: Understanding

Post by Admin » Fri 6. Aug 2010, 10:06

Below are some answers; I hope this clarifies things.
MrCreosote wrote:First, verify what values are being shown:

* Inputs: Max Lesson values used for Normalization
* Hiddens: Activation - this is the value output by that node
* Outputs: Activation - this is the value output by that node (i.e., the predictions)
Values shown below nodes are always the current activation values of the neuron. This holds true for all types of neurons; there is no difference between input/hidden/output here.
The only difference is that inputs and outputs can optionally use normalization limits to transform their internally used activation values into user-defined ranges. In that case the transformed activations are displayed, i.e. the activation values converted to the user-defined range.

In most cases the activation value is what appears at the output of a node (always in the internal number range, i.e. not converted to the user normalization range).
However, the output can also differ from the activation, depending on the settings of the neuron. MemBrain's default settings always create neurons whose activation and output are identical.
See MemBrain help section <Neurons in MemBrain><Neuron Model And Operation> for more details on this.
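As an illustration of the normalization described above, here is a hypothetical sketch (not MemBrain source code; the function name and the internal range used in the example are assumptions) of the linear mapping from a neuron's internal activation range onto user-defined normalization limits for display:

```python
def to_user_range(act, internal_lo, internal_hi, user_lo, user_hi):
    """Linearly transform an activation from the internal range to the user range."""
    frac = (act - internal_lo) / (internal_hi - internal_lo)
    return user_lo + frac * (user_hi - user_lo)

# Example: an internal activation of 0.25 in [0, 1], displayed with
# user-defined normalization limits [-10, 10]:
shown = to_user_range(0.25, 0.0, 1.0, -10.0, 10.0)  # -> -5.0
```

The neuron's internal state is unchanged; only the displayed number is converted.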
MrCreosote wrote:What values are being updated every second?
During training MemBrain performs a view update every time the pattern that is currently selected in the lesson editor is reached. Try this: during training, open the lesson editor and change the selected pattern using the up/down arrows on the RHS. You will notice that the net is now always updated in the state in which this particular pattern is applied. Note that this does not influence the training.
Also try this: during training, ensure that the main window (net editor) has the focus. Now press and hold the SHIFT key. While the SHIFT key is down, press and hold the left mouse button and move the mouse. This lets you navigate through the drawing area. Since this action forces the net to be re-drawn continuously, you can also see the other patterns appear on the net for a very short time during training. This is not a useful feature in itself; I mention it only to demonstrate that all patterns are applied to the net during training. MemBrain performs an automatic view update only when the currently applied pattern matches the pattern selected in the lesson editor (and only if the option <View><Update View during Teach> is enabled).
MrCreosote wrote:What values are shown when the Teach Lesson button is pressed?
Same as above: The pattern selected in the lesson editor should be shown.
MrCreosote wrote:When training, the values update about every second. Some nodes are constant at 0 or 1. Nodes that are changing, do so by small amounts.
As mentioned above, this may only be the case for the pattern you are currently observing (as selected in the lesson editor). The nodes might still be changing significantly for other patterns.
MrCreosote wrote:When training is halted, the last set of values observed while training remain.
Again, try the "SHIFT + mouse-button move" after the training. If the last pattern applied during training is not the same as the pattern selected in the lesson editor, then the net view will be updated and you will see the net as it really stands (i.e. with the last applied pattern). Note that this is not merely a question of how the net is displayed by MemBrain. It is what the net currently IS: the neurons in the net currently HAVE the displayed activations as part of their properties. You can even save the net in this state and the activations will be saved, too; they are an integral part of the network state at that very moment.
MrCreosote wrote:When Teach Lesson is pressed, a new set of values appear for about a second (call these the momentary values, MV) and then the previous values (PV) return.
The MV reflect the pattern you have currently selected in the lesson editor (as described above). The PV reflect the values the net holds after the training has stopped. In your experiment, something must cause an additional view update after the training has stopped, so that the real state of the net is shown rather than the one from the last update during teach.
Which pattern causes the state 'PV' depends on your teacher settings (pattern selection method) and on the method of your validation (On-The-Fly or through a separate validation lesson).
You shouldn't care about the PVs too much anyway...
MrCreosote wrote:I was hoping that if a hidden node was stuck at 0 or 1, it could be eliminated, because it is simply passing a constant to the next layer.
That would only hold true if that hidden node passed on a constant value for all patterns. However, it is very unlikely that NN training would result in such a hidden unit: whatever capacity is there is normally also used. You should determine your number of hidden units by experiment, with properly separated training and validation lessons, i.e. find the optimum number of hidden units by observing how well the net generalizes.
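To illustrate the condition above with a hypothetical helper (the function name and tolerance are illustrative assumptions, not a MemBrain API): a hidden unit would only be a candidate for removal if its activation is near-constant across ALL patterns of the lesson, not just the one currently displayed.

```python
def is_constant_unit(activations_per_pattern, tol=1e-3):
    """Return True if a unit's activation barely varies over the whole lesson.

    activations_per_pattern -- one activation value per lesson pattern
    tol                     -- maximum allowed spread to count as 'constant'
    """
    lo = min(activations_per_pattern)
    hi = max(activations_per_pattern)
    return (hi - lo) <= tol

print(is_constant_unit([1.0, 1.0, 0.9995]))  # constant within tolerance -> True
print(is_constant_unit([1.0, 0.1, 0.9]))     # varies across patterns    -> False
```

A unit that looks frozen for the displayed pattern may still vary like the second example once all patterns are taken into account.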

Regards,
Thomas
Thomas Jetter
