Multi-layer Perceptron
- A very common machine learning algorithm is the multi-layer
  perceptron (MLP), which learns via backpropagation of error (backprop).
- A perceptron is a reasonable approximation of a neuron. (The whole
MLP is not a reasonable biological model, nor is backprop.)
- A perceptron takes a vector of inputs, each multiplied by its own
  weight, so what the perceptron sees is a weighted vector of inputs
  (see the sketch after this list).
- It sums the weighted inputs, passes the sum through a function, and
  produces a single output.
- The choice of function is important; a simple one is a step (or
  threshold) function with a binary output.
- Linear functions are also used, but sigmoid functions are probably
  the most common. Both have real-valued output.
- A single layer of these can only learn linearly separable functions
  (XOR, for example, is not linearly separable, so one layer cannot
  learn it).
- If you take the outputs from the perceptrons in one layer and feed
  them as inputs to another layer, the network can approximate
  essentially any function (a hand-weighted XOR example follows this
  list).
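The weighted-sum-plus-activation step is small enough to write out directly. Below is a minimal sketch in Python with NumPy; the weights, bias, and function names are illustrative assumptions (hand-picked so the perceptron computes logical AND), not a reference implementation.

    import numpy as np

    def step(z):
        # Threshold activation: fires (outputs 1) once the weighted sum reaches 0.
        return 1.0 if z >= 0 else 0.0

    def sigmoid(z):
        # Sigmoid activation: real-valued output squashed into (0, 1).
        return 1.0 / (1.0 + np.exp(-z))

    def perceptron(x, w, b, activation=step):
        # Weight the inputs, sum them (plus a bias), pass the sum through an activation.
        return activation(np.dot(w, x) + b)

    # Hand-picked weights and bias so this perceptron computes logical AND.
    w = np.array([1.0, 1.0])
    b = -1.5
    for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
        print(x, perceptron(np.array(x, dtype=float), w, b))   # 0.0, 0.0, 0.0, 1.0

AND is linearly separable, which is why one perceptron with a step activation can represent it.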
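And a minimal sketch of the layering point, under the same assumptions (Python/NumPy, weights chosen by hand rather than learned by backprop): a hidden layer of step perceptrons feeding one output perceptron computes XOR, which no single layer can represent.

    import numpy as np

    def step(z):
        # Vectorised threshold activation.
        return (z >= 0).astype(float)

    # Hand-chosen weights: one hidden unit acts like OR, the other like AND,
    # and the output unit fires for "OR but not AND", i.e. XOR.
    W_hidden = np.array([[1.0, 1.0],
                         [1.0, 1.0]])
    b_hidden = np.array([-0.5, -1.5])
    w_out = np.array([1.0, -1.0])
    b_out = -0.5

    def mlp(x):
        h = step(W_hidden @ x + b_hidden)              # layer 1: hidden perceptrons
        return step(np.array([w_out @ h + b_out]))[0]  # layer 2: output perceptron

    for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
        print(x, mlp(np.array(x, dtype=float)))        # 0.0, 1.0, 1.0, 0.0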