Can Robots Learn Language the Way Children Do?
Stephen E. Levinson
ECE, Beckman UIUC
ABSTRACT: Speech recognition machines are in use in more and more devices and
services. Airlines, banks, and telephone companies provide
information to customers via spoken queries. You can buy hand-held
devices, appliances, and PCs that are operated by spoken commands.
And, for around $100, you can buy a program for your laptop that will
transcribe speech into text. Unfortunately, automatic speech
recognition systems are quite error prone, and they do not understand
the meanings of spoken messages in any significant way. I argue that to
do so, speech recognition machines would have to possess the same
kinds of cognitive abilities that humans display. Engineers have
been trying to build machines with human-like abilities to think
and use language for nearly 60 years without much success. Are all
such efforts doomed to failure? Maybe not. I suggest
that if we take a radically different approach, we might succeed.
If, instead of trying to program machines to behave intelligently,
we design them to learn by experiencing the real world in the same
way a child does, we might solve the speech recognition problem
in the process. This is the ambitious goal of the research now being
conducted in my laboratory. To date, we have constructed three
robots that have attained some rudimentary visual navigation and
object manipulation abilities, which they can perform under spoken
command.

Stephen E. Levinson was born in New York City on September 27,
1944. He received the B. A. degree in Engineering Sciences from
Harvard in 1966, and the M. S. and Ph.D. degrees in Electrical
Engineering from the University of Rhode Island, Kingston, Rhode
Island in 1972 and 1974, respectively. From 1966-1969 he was a
design engineer at Electric Boat Division of General Dynamics in
Groton, Connecticut. From 1974-1976 he held a J. Willard Gibbs
Instructorship in Computer Science at Yale University. In 1976,
he joined the technical staff of Bell Laboratories in Murray
Hill, NJ where he conducted research in the areas of speech
recognition and understanding. In 1979 he was a visiting researcher
at the NTT Musashino Electrical Communication Laboratory in Tokyo,
Japan. In 1984, he held a visiting fellowship in the
Engineering Department at Cambridge University. In 1990, Dr.
Levinson became head of the Linguistics Research Department at
AT&T Bell Laboratories where he directed research in Speech
Synthesis, Speech Recognition and Spoken Language Translation.
In 1997, he joined the Department of Electrical and Computer
Engineering of the University of Illinois at Urbana-Champaign
where he teaches courses in Speech and Language Processing and
leads research projects in speech synthesis and automatic language
acquisition. Dr. Levinson is a member of the Association for
Computing Machinery, a fellow of the Institute of Electrical and
Electronics Engineers, and a fellow of the Acoustical Society of
America. He is a founding editor of the journal Computer Speech
and Language and a former member and chair of the Industrial
Advisory Board of the CAIP Center at Rutgers University. He is the
author of more than 80 technical papers and holds seven patents.
His new book is entitled "Mathematical Models for Speech Technology".
Last modified: Fri Aug 8 11:38:17 CST 2006