Backpropagation of Error
- The MLP is cool, but how do you set the weights?
- No one sets the weights manually.
- With a single perceptron, you can use the perceptron learning rule (see the first sketch after this list).
- What's really cool is that an MLP can be trained by the backpropagation of error.
- You train the system in a supervised manner.
- That is, you have training instances whose correct outputs are known.
- You put those instances through the system and record the error (the difference between the desired output and the actual output).
- You then move the weights in a direction that will reduce the error.
- You backpropagate the error.
- So if the desired value is higher than the actual value and the inputs are positive, you increase the weights.
- The actual maths behind this is complex, and there are a few variants of the rule (a minimal sketch appears after this list).
- You then go through another training cycle, and repeat.
- Eventually, if you have enough hidden neurons, this will converge on a set of weights that closely approximates the input-output pairs.
- It's a universal learner: given enough hidden neurons, an MLP can approximate essentially any input-output mapping.
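
For the single-perceptron case, here is a minimal sketch of the perceptron learning rule. The step activation, the AND dataset, the learning rate, and the function name `train_perceptron` are illustrative assumptions, not anything fixed by the notes above.

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=20):
    """Perceptron learning rule with a step activation; targets are in {0, 1}."""
    w = np.zeros(X.shape[1])    # one weight per input
    b = 0.0                     # bias term
    for _ in range(epochs):     # repeated training cycles
        for x, target in zip(X, y):
            prediction = 1.0 if np.dot(w, x) + b > 0 else 0.0
            error = target - prediction   # 0 if correct, +/-1 if wrong
            w += lr * error * x           # desired > actual with positive input -> weight goes up
            b += lr * error
    return w, b

# Illustrative data: logical AND, which is linearly separable, so the rule converges.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)
w, b = train_perceptron(X, y)
print(w, b)
```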
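
For the MLP itself, here is a minimal sketch of backpropagation with one hidden layer, assuming sigmoid activations and a squared-error loss; as noted above there are several variants, and the network size, learning rate, and XOR training set are just illustrative choices.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Supervised training set: instances with known outputs (XOR, which a single
# perceptron cannot learn but a small MLP can).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
b2 = np.zeros(1)
lr = 0.5

for epoch in range(5000):                  # repeated training cycles
    # Forward pass: put the instances through the system.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Record the error (actual output minus desired output).
    err = out - y

    # Backward pass: propagate the error back through the network via the chain rule.
    d_out = err * out * (1 - out)          # error signal at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)     # error backpropagated to the hidden layer

    # Move each weight in the direction that reduces the error.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))   # typically ends up close to the targets [0, 1, 1, 0]
```

Each cycle computes the outputs, records the error, pushes that error back through the hidden layer, and nudges every weight in the direction that reduces the error, which is the loop described in the list above.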