Backprop
- MLPs are cool, but what makes them popular are the learning
rules that let them learn to approximate
a function represented by data.
- There are different rules, but the original and most widely
used is the backpropagation of error, or simply backprop.
- The backprop
wiki has a full explanation, but
here's a synopsis.
- Remember that each input to and output
from a perceptron is weighted by its connection.
- The algorithm changes these weights to better
approximate the data.
- To do so, run the MLP and get the output. It's a supervised algorithm,
so it needs to know the correct answer or answers.
- Using the difference between the predicted and the actual
answer, change the weights of the output layer. Then propagate
that error back and do the same for the previous layer
(and again for any earlier layers), as sketched below.
- Keep running the algorithm until it's right on all the answers
or you get tired.
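
Here's a minimal sketch of that loop in NumPy: a two-layer MLP with sigmoid units trained on XOR. The network size, learning rate, stopping threshold, and the XOR toy data are illustrative assumptions, not something from the notes above.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Toy supervised data: inputs X and the known answers y (XOR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Connection weights (and biases) for the hidden and output layers.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)
lr = 0.5  # learning rate (assumed value)

for epoch in range(10000):
    # Run the MLP and get the output (forward pass).
    h = sigmoid(X @ W1 + b1)       # hidden-layer activations
    out = sigmoid(h @ W2 + b2)     # predicted answers

    # Difference between the predicted and the actual answers.
    err = out - y

    # Change the output-layer weights first...
    d_out = err * out * (1 - out)  # error scaled by sigmoid derivative
    grad_W2 = h.T @ d_out
    grad_b2 = d_out.sum(axis=0)

    # ...then propagate the error back and change the hidden-layer weights.
    d_h = (d_out @ W2.T) * h * (1 - h)
    grad_W1 = X.T @ d_h
    grad_b1 = d_h.sum(axis=0)

    # Nudge every weight a small step against its gradient.
    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

    # Stop once the network is (nearly) right on all the answers.
    if np.all(np.abs(err) < 0.05):
        break

print(np.round(out, 2))  # should approach [[0], [1], [1], [0]]
```

The stopping check stands in for "until it's right on all answers"; in practice you would usually watch a loss on held-out data instead of exact answers on the training set.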