Multi-Layer Perceptrons with Backpropagation
- For a long time, one of the most popular machine learning approaches was
the multi-layer perceptron trained via backpropagation.
- A perceptron takes its inputs, adds them up, and passes the sum through
a function (a minimal sketch appears after this list).
- A common example of the function is a linear one, but step functions and
sigmoids are also common, and you can use others.
- It turns out you can only represent linearly separable functions with a
single perceptron, or even a single layer of perceptrons (XOR is the
classic counterexample).
- However, if you put together three layers of these (input, hidden, and
output), you can approximate any function to an arbitrary degree of precision.
- Note that the connections have weights: each input is multiplied by its
connection's weight before the inputs are summed.
- A supervised mechanism, backpropagation of error, allows the system to
learn the function: errors at the output are propagated backwards through
the layers to adjust the weights (see the XOR training sketch after this list).
- This is historically important and pretty effective in practice.
- You can overfit, but people have looked into this; common remedies include
holding out validation data, early stopping, and regularization.
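
To make the perceptron bullet concrete, here is a minimal sketch (the
function and variable names are illustrative, not from the notes): each
input is multiplied by its weight, the products are summed with a bias,
and the sum goes through an activation such as a step or sigmoid.

```python
import numpy as np

def step(z):
    """Step activation: output 1 if the weighted sum is non-negative, else 0."""
    return np.where(z >= 0, 1.0, 0.0)

def sigmoid(z):
    """Sigmoid activation: a smooth, differentiable squashing function."""
    return 1.0 / (1.0 + np.exp(-z))

def perceptron(inputs, weights, bias, activation=step):
    """Multiply each input by its weight, sum the products (plus a bias),
    and pass the sum through the activation function."""
    return activation(np.dot(inputs, weights) + bias)

# Example: with these weights and bias, the perceptron computes logical AND.
x = np.array([1.0, 1.0])
print(perceptron(x, weights=np.array([1.0, 1.0]), bias=-1.5))  # -> 1.0
```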
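
And a sketch of the backpropagation point: a small network with one hidden
layer trained on XOR, which a single layer of perceptrons cannot represent.
The hidden-layer size, learning rate, and epoch count are arbitrary
illustrative choices, and sigmoid units with a squared-error loss are
assumed; with most random seeds this converges, but results can vary.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR is not linearly separable, so one hidden layer is needed.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Weights and biases for the input->hidden and hidden->output layers.
W1 = rng.normal(size=(2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))
b2 = np.zeros((1, 1))

lr = 0.5
for epoch in range(10000):
    # Forward pass: weighted sums followed by sigmoid activations.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the output error back through the layers
    # (gradients of a squared-error loss with respect to each layer's sum).
    err_out = (out - y) * out * (1 - out)
    err_hid = (err_out @ W2.T) * h * (1 - h)

    # Gradient-descent weight updates.
    W2 -= lr * h.T @ err_out
    b2 -= lr * err_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ err_hid
    b1 -= lr * err_hid.sum(axis=0, keepdims=True)

print(out.round(3))  # should be close to [[0], [1], [1], [0]]
```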