Problems of Symbolic AI
- Two properties often demanded of AI systems are important but
  distracting:
- Turing completeness: just because a system is Turing complete does
  not mean that it is a good basis for an AI.
- Logical completeness: just because a system can represent things
  logically does not mean that it is a good basis for an AI.
- A system needs both of these capabilities (people have them), but it
  also needs to do a lot of other things. (A system that is Turing
  complete yet in no way intelligent is sketched below.)
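A minimal illustration of the first point (my sketch, not from the notes): Rule 110 is a one-dimensional cellular automaton proven to be Turing complete, yet running it involves no representation, learning, or reasoning. Universality on its own buys none of the other capabilities an AI needs.

```python
# Rule 110: Turing complete, yet nothing here resembles intelligence.
RULE_110 = {  # next state of a cell, keyed by (left, centre, right)
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(cells):
    """Apply Rule 110 once, wrapping around at the edges."""
    n = len(cells)
    return [RULE_110[(cells[i - 1], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

row = [0] * 31 + [1] + [0] * 31   # a single live cell in the middle
for _ in range(16):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```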
- Cyc was a truly huge knowledge-based system. One lesson to take
  from its lack of intelligence is that you cannot just fill up the
  KB by hand: the system needs to learn its knowledge. (See the toy
  KB sketch below.)
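A toy sketch of the Cyc lesson (my illustration; the facts and the rule are invented, and Cyc's actual machinery is far richer): a hand-filled knowledge base with one forward-chaining rule answers exactly what was typed in, or what follows from it, and nothing else. Every gap has to be patched by hand, which is the scaling problem a learning system avoids.

```python
# Hand-authored facts: the system knows only what someone typed in.
facts = {("isa", "tweety", "bird"), ("isa", "bird", "animal")}
rules = [
    # transitivity of isa: (isa, X, Y) and (isa, Y, Z) imply (isa, X, Z)
    lambda fs: {("isa", x, z)
                for (_, x, y1) in fs for (_, y2, z) in fs if y1 == y2},
]

def forward_chain(facts, rules):
    """Apply every rule repeatedly until no new facts appear."""
    while True:
        new = set().union(*(rule(facts) for rule in rules)) - facts
        if not new:
            return facts
        facts = facts | new

kb = forward_chain(facts, rules)
print(("isa", "tweety", "animal") in kb)  # True: derivable from the KB
print(("isa", "rex", "animal") in kb)     # False: nobody entered rex
```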
- Learning is critical. People are learning all the time.
- Cognitive models like ACT and Soar can learn, but they cannot
  readily learn new symbols (the symbol grounding problem).
- People can ground symbols.
- Neural systems probably can too; a toy example is sketched below.
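A minimal sketch of grounding (my illustration; the symbol "bright" and the toy pixel data are invented): a perceptron learns to attach a symbol to raw sensory vectors, so the symbol's meaning rests on data rather than only on other symbols.

```python
import random

def brightness_example():
    """Return (pixels, label): label 1 if the 4-pixel patch is 'bright'."""
    pixels = [random.random() for _ in range(4)]
    return pixels, 1 if sum(pixels) / len(pixels) > 0.5 else 0

weights, bias, lr = [0.0] * 4, 0.0, 0.1
for _ in range(2000):                        # online perceptron updates
    x, label = brightness_example()
    predicted = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
    error = label - predicted                # -1, 0, or +1
    weights = [w + lr * error * xi for w, xi in zip(weights, x)]
    bias += lr * error

x, label = brightness_example()
score = sum(w * xi for w, xi in zip(weights, x)) + bias
print("symbol emitted:", "bright" if score > 0 else "dark", "| true:", label)
```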
- Most AI systems work on trivial problems, in very simple domains,
  or with highly preprocessed data.
- People do not.
- People learn unexpected associations and have a vast range
of mechanisms for solving problems.
- These are frequently shaped by the environment they are in. (The
  behaviour of an ant seems complex, but the complexity comes from
  its environment, not from the ant; see the sketch below.)
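A sketch of the ant point (my illustration; the grid, obstacles, and movement rule are invented): the agent's control rule is trivial, yet its trace through a random obstacle field looks intricate. The apparent complexity of the behaviour comes from the environment, not from the agent.

```python
import random

random.seed(1)
SIZE = 12
blocked = {(random.randrange(SIZE), random.randrange(SIZE)) for _ in range(30)}
start, goal = (0, 0), (SIZE - 1, SIZE - 1)
blocked -= {start, goal}

pos, path, visited = start, [start], {start}
while pos != goal and len(path) < 200:
    x, y = pos
    # trivial rule: prefer an unvisited free neighbour nearest the goal
    options = [(((nx, ny) in visited, abs(goal[0] - nx) + abs(goal[1] - ny)),
                (nx, ny))
               for nx, ny in [(x + 1, y), (x, y + 1), (x - 1, y), (x, y - 1)]
               if 0 <= nx < SIZE and 0 <= ny < SIZE and (nx, ny) not in blocked]
    if not options:
        break                      # walled in: nowhere left to go
    pos = min(options)[1]
    visited.add(pos)
    path.append(pos)

for y in range(SIZE):              # '#' obstacle, '*' path, '.' free cell
    print("".join("#" if (x, y) in blocked else
                  "*" if (x, y) in path else "."
                  for x in range(SIZE)))
```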