Conversational Agents: definitions and a brief history
- Conversational agents need to:
  - take in input. This can be text, speech, or even
    gestures and full-fledged sensing. (The Turing Test
    allows only text.)
  - store information. A conversation involves much
    more than a single input-output turn, so information
    needs to be carried across turns.
  - generate output. Text (or speech, gesture, or even
    full-fledged action) needs to be produced for the user.
  - do something, at least for agents of any complexity.
    If there is going to be a conversation, it has to be
    about something: retrieving information, operating in
    a virtual environment, or even controlling a real robot.
- One of the advantages of conversation is that you can get
  something wrong and then correct it through the conversation itself.
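The four requirements above can be sketched as a minimal dialogue loop. Everything here is invented for illustration (the toy timetable, the city names, the `make_agent` function); it is only meant to show the shape: take input, store state across turns, do something with a task model, and generate output.

```python
import re

# A toy "task": a timetable the agent can consult (invented data).
TIMETABLE = {"boston": "10:15", "avon": "11:40"}

def make_agent():
    state = {"last_city": None}  # information stored across turns

    def turn(user_input):
        # 1. take in input (text only, like the Turing Test)
        words = re.findall(r"[a-z]+", user_input.lower())
        # 2. store information: remember the most recent city mentioned
        for city in TIMETABLE:
            if city in words:
                state["last_city"] = city
        city = state["last_city"]
        if city is None:
            return "Which city do you mean?"
        # 3. do something: consult the timetable
        # 4. generate output
        return f"The train to {city.capitalize()} leaves at {TIMETABLE[city]}."

    return turn
```

Because state persists across turns, a follow-up like "No, I meant Avon." corrects the earlier turn through the conversation itself, and a later "When does it leave?" still works from the stored city.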
- History (Brief and Incomplete)
  - Eliza (a chatterbot). A pattern-matching psychoanalyst.
    It doesn't really talk about anything, but tries to fool
    the user into thinking it knows what's going on.
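Eliza's pattern matching can be sketched in a few lines. The rules below are illustrative, not Weizenbaum's original script: each rule is a regular expression plus a response template, and captured fragments are echoed back after "reflecting" pronouns (I becomes you, my becomes your, and so on), which is what creates the illusion of understanding.

```python
import re

# First/second-person swaps so echoed fragments read naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my", "yours": "mine"}

# Ordered (pattern, template) rules; "{0}" is filled with the
# reflected first captured group. Invented for illustration.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]
DEFAULT = "Please go on."  # the content-free fallback

def reflect(fragment):
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance):
    for pattern, template in RULES:
        m = pattern.match(utterance.strip())
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return DEFAULT
```

Note that nothing here models meaning: "I need my mother" gets "Why do you need your mother?" purely by string surgery, and anything unmatched gets the fallback, which is exactly why Eliza isn't really talking about anything.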
  - Winograd's SHRDLU (1972). A simple system about a blocks
    world.
  - Loebner Prize. A yearly Turing Test with a large cash
    prize ($100,000?) for a program that passes it. Since no
    one is close, the contest is run on restricted sub-topics.
    This is a general AI issue, and particularly a Natural
    Language one: you can succeed in a restricted domain
    but do very badly in open domains.
  - Wilks (Sheffield). His group won the Loebner competition
    in 1999 (?). One observation from this work: to fool the
    user, it is best to keep control of the conversation
    as much as possible, since this lets the system keep
    the conversation on things it knows about.
  - Trains (Allen at Rochester). James Allen has a strong
    group at the University of Rochester that has been
    developing conversational systems for over 10 years.
    Trains is one of the best (if not the best) conversational
    systems; it works to help schedule freight train
    deliveries.