Connectionist Systems
- Connectionist systems are commonly called neural networks.
- They are inspired by biological neural function, but are really just
parallel computation paradigms.
- There are a host of connectionist systems including
self-organising maps, ART maps, recurrent MLPs, and
radial basis function (RBF) networks.
- The most widely known is the multi-layer perceptron
(MLP).
- This can take advantage of a learning algorithm known as
backpropagation of error (or simply backprop) to do supervised
learning.
- Supervised learning is when the system is presented with the
correct answer (the target) along with each input item. The other
primary type of learning is unsupervised, where no targets are given.
- A perceptron just takes numeric input from several sources.
- It integrates those values (as a weighted sum) and applies a
transfer function to generate output. (An example is a step
function, which merely sends output iff the summed input is above
a threshold value.)
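The perceptron above can be sketched in a few lines of Python. The weights and threshold here are hypothetical values chosen purely for illustration (they happen to make the unit behave like logical AND):

```python
def perceptron(inputs, weights, threshold):
    # Integrate: weighted sum of the numeric inputs.
    total = sum(x * w for x, w in zip(inputs, weights))
    # Transfer: step function -- fire (1) iff the sum exceeds the threshold.
    return 1 if total > threshold else 0

# Illustrative weights/threshold: behaves like AND on binary inputs.
print(perceptron([1, 1], [0.4, 0.4], 0.5))  # 0.8 > 0.5, so it fires: 1
print(perceptron([1, 0], [0.4, 0.4], 0.5))  # 0.4 < 0.5, so it stays off: 0
```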
- MLPs have layers of perceptrons. The first layer gets input
from the environment (the task). Subsequent layers get input
from their prior layers. Connections are weighted. The output
of the system is the output of the last layer.
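The layered, feed-forward structure can be sketched as follows. This is a minimal illustration, not a standard library API: the function names and the sigmoid transfer function (a smooth alternative to the step function) are assumptions for the example, and the weight values are arbitrary:

```python
import math

def sigmoid(x):
    # A smooth transfer function, commonly used in place of a hard step.
    return 1.0 / (1.0 + math.exp(-x))

def layer_output(inputs, weights):
    # weights: one row per perceptron in the layer; each perceptron
    # takes the weighted sum of its inputs and applies the transfer function.
    return [sigmoid(sum(x * w for x, w in zip(inputs, row)))
            for row in weights]

def mlp_forward(inputs, layers):
    # The first layer gets input from the environment; each subsequent
    # layer gets input from the prior layer's output. The last layer's
    # output is the output of the system.
    for weights in layers:
        inputs = layer_output(inputs, weights)
    return inputs

# Illustrative weights: 2 inputs -> 2 hidden perceptrons -> 1 output.
hidden_weights = [[0.2, -0.4], [0.7, 0.1]]
output_weights = [[0.5, -0.3]]
print(mlp_forward([1.0, 0.5], [hidden_weights, output_weights]))
```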
- With backprop the system is presented with an input, and
calculates an output. If there is an error in the output, the
weights between the perceptrons are adjusted, in proportion to their
contribution to that error, to improve subsequent results.
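One cycle of backprop can be sketched as below. This is a toy example under stated assumptions: a tiny 2-2-1 network with sigmoid units, squared error, and OR as the training task, all chosen for illustration only. The weight updates follow the classic delta-rule form of backprop (output error times the transfer function's slope, propagated backwards through the weights):

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

random.seed(0)
# Hypothetical 2-input, 2-hidden, 1-output network with random weights.
w_hid = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b_hid = [random.uniform(-1, 1) for _ in range(2)]
w_out = [random.uniform(-1, 1) for _ in range(2)]
b_out = random.uniform(-1, 1)
lr = 0.5  # learning rate

# Supervised data: each input comes with its answer (here, logical OR).
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

def forward(x):
    h = [sigmoid(sum(xi * wi for xi, wi in zip(x, w_hid[j])) + b_hid[j])
         for j in range(2)]
    y = sigmoid(sum(hj * wj for hj, wj in zip(h, w_out)) + b_out)
    return h, y

def total_error():
    return sum((t - forward(x)[1]) ** 2 for x, t in data)

before = total_error()
for _ in range(2000):
    for x, t in data:
        h, y = forward(x)
        # Output delta: error times the sigmoid's slope at the output.
        d_out = (t - y) * y * (1 - y)
        # Hidden deltas: error propagated back through the output weights.
        d_hid = [d_out * w_out[j] * h[j] * (1 - h[j]) for j in range(2)]
        # Adjust weights to reduce the error on this item.
        for j in range(2):
            w_out[j] += lr * d_out * h[j]
            b_hid[j] += lr * d_hid[j]
            for i in range(2):
                w_hid[j][i] += lr * d_hid[j] * x[i]
        b_out += lr * d_out
after = total_error()
print(before, after)  # the error shrinks as the weights are adjusted
```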
- It has been shown that given enough perceptrons in a hidden
layer, any continuous function can be approximated to an arbitrary
degree of precision by an MLP (the universal approximation theorem).