RPROP Training: Nodal Values Displayed: Understanding
Posted: Thu 29. Jul 2010, 19:16
A few questions about interpreting the displayed nodal values:
First, verify what values are being shown:
- Inputs: Max Lesson values used for Normalization
- Hiddens: Activation - this is the value output by that node
- Outputs: Activation - this is the value output by that node (i.e., the predictions)
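If "Activation" is simply a node's squashed net input, it can be sketched as below. The logistic activation function and all the numbers are assumptions for illustration only; the simulator's actual activation function may differ:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# One hidden node with two incoming links. Its displayed "Activation"
# would be the squashed weighted sum of its inputs plus its bias.
inputs  = [0.5, 0.8]        # normalized input activations (hypothetical)
weights = [1.2, -0.7]       # incoming link weights (hypothetical)
bias    = 0.1

net = sum(w * x for w, x in zip(weights, inputs)) + bias
activation = sigmoid(net)   # the value a node display would show
```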
What values are being updated every second?
Since RPROP is a batch method, what values are actually shown every second? To calculate an output, one runs a single pattern through the net. So could the displayed values be those for the last pattern of the Lesson, updated after each pass through the patterns (i.e., each epoch)?
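For intuition on that question, here is a minimal sketch of plain RPROP on a single sigmoid unit. This is not the simulator's actual implementation, and the data is made up; it only illustrates the batch property: a forward pass happens for every pattern, but the weights move once per epoch, so a display sampled during training would show the activation of whichever pattern ran last.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sign(x):
    return (x > 0) - (x < 0)

def rprop_step(delta, grad, prev_grad):
    # RPROP uses only the SIGN of the accumulated gradient; the per-weight
    # step size grows on consistent signs and shrinks when the sign flips.
    if grad * prev_grad > 0:
        return min(delta * 1.2, 50.0)
    if grad * prev_grad < 0:
        return max(delta * 0.5, 1e-6)
    return delta

# Toy "Lesson": 4 patterns of (input, target) for one sigmoid unit.
lesson = [(0.0, 0.1), (1.0, 0.8), (2.0, 0.9), (3.0, 0.95)]

w = b = 0.0
delta_w = delta_b = 0.1
prev_gw = prev_gb = 0.0
losses = []
last_activation = None

for epoch in range(200):
    gw = gb = loss = 0.0
    for x, t in lesson:                  # a forward pass for EVERY pattern...
        y = sigmoid(w * x + b)
        e = y - t
        loss += e * e
        gw += e * y * (1.0 - y) * x      # ...but gradients only accumulate
        gb += e * y * (1.0 - y)
        last_activation = y              # a display refreshed here would show
                                         # the LAST pattern's activation
    losses.append(loss)
    # weights change only once per epoch, after the whole Lesson:
    delta_w = rprop_step(delta_w, gw, prev_gw)
    delta_b = rprop_step(delta_b, gb, prev_gb)
    w -= sign(gw) * delta_w
    b -= sign(gb) * delta_b
    prev_gw, prev_gb = gw, gb
```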
What values are shown when the Teach Lesson button is pressed?
While training, the values update about once per second. Some nodes stay constant at 0 or 1; the nodes that do change, change only by small amounts.
When training is halted, the last set of values observed during training remains displayed.
When Teach Lesson is pressed, a new set of values appears for about a second (call these the momentary values, MV) and then the previous values (PV) return.
What is confusing is that the MV are very different from the PV. In particular, a node that was stuck at PV = 1 might show an MV as small as 0.1.
This leads to a number of questions about the MVs, the main one being: Why are the MV and PV so different?
I was also hoping the following: if a hidden node is stuck at 0 or 1, could it be eliminated, since it is simply passing a constant to the next layer?
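On that last point: if a hidden node's activation really is constant across all patterns (not just in one display snapshot), its contribution to each downstream node is a fixed amount, indistinguishable from a bias term, so in principle the node can be folded away. A hypothetical sketch of that pruning step (the weights and biases are made-up numbers):

```python
# Folding a constant hidden node into the next layer's biases.
# If hidden node h always outputs the constant c, each downstream node j
# receives the fixed contribution out_weights[j] * c, which can simply
# be absorbed into that node's bias before deleting h and its links.

c = 1.0                          # the stuck node's constant activation
out_weights = [0.4, -1.3, 0.7]   # links from the stuck node to 3 downstream nodes
biases      = [0.0,  0.2, -0.5]  # current biases of those downstream nodes

new_biases = [b + w * c for b, w in zip(biases, out_weights)]
```

The caveat is the MV observation above: a node showing PV = 1 during training but MV = 0.1 when Teach Lesson is pressed is evidently not constant over the whole Lesson, so it could not safely be eliminated this way.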