Man or Computer? Can You Tell the Difference?
http://www.smithsonianmag.com/science-nature/Man-or-Computer-Can-You-Tell-the-Difference.html
Could you be fooled by a computer pretending to be human? Probably
By Brian Christian
Smithsonian magazine, July 2012
It's not every day you have to persuade a panel of scientists that
you're human. But this was the position I found myself in at the
Loebner Prize competition, an annual Turing test, in which artificial
intelligence programs attempt to pass themselves off as people.
The British mathematician Alan Turing probed one of computing's
biggest theoretical questions: Could machines possess a mind? If so,
how would we know? In 1950, he proposed an experiment: If judges in
typed conversations with a person and a computer program couldn't tell
them apart, we'd come to consider the machine as "thinking." He
predicted that programs would be capable of fooling judges 30 percent
of the time by the year 2000.
They came closest at the 2008 Loebner Prize competition when the top
chatbot (as a human-mimicking program is called) fooled 3 of 12
judges, or 25 percent. I took part in the next year's test while doing
research for a book on how artificial intelligence is reshaping our
ideas about human intelligence.
The curious thing is that Turing's test has become part of daily life.
When I get an e-mail message from a friend gushing about
pharmaceutical discounts, my response isn't: No, thanks. It's: Hey,
you need to change your password. Computer-generated spam has changed
not only the way I read e-mails, but also the way I write them. "Check
out this link" no longer suffices. I must prove it's me.
Personalization has always been a part of social grace, but now it is
a part of online security. Even experts sometimes get fooled.
Psychologist Robert Epstein—the co-founder of the Loebner Prize
competition—was duped for four months by a chatbot he met online. "I
certainly should have known better," he wrote in an essay about the
encounter.
Chatbots betray themselves in many ways, some subtle. They're unlikely
to gracefully interrupt or be interrupted. Their responses, often
cobbled together out of fragments of stored conversations, make sense
at a local level but lack long-term coherence. A bot I once chatted
with claimed at one point to be "happily married" and at another
"still looking for love."
At the Loebner Prize, I laced my replies with personal details and
emphasized style as much as content. I'm proud that none of the judges
mistook me for a computer. In fact, I was named the "Most Human Human"
(which became the title of my book), the person the judges had the
least trouble identifying as such. With the Turing test moving from
the realm of theory into the fabric of daily life, the larger
question—What does it mean to act human?—has never been more urgent.
Are You Chatting With a Human or a Computer?
June 21, 2012
[Image: Turing test scheme. The Turing test, a means of determining whether a computer possesses intelligence, requires it to trick a human into thinking it's chatting with another person.]
How can we decide whether a computer program has intelligence? In
1950, British mathematician Alan Turing, one of the founding fathers
of computer science, proposed an elegantly simple answer: If a
computer can fool a human into thinking he or she is conversing with
another human rather than a machine, then the computer can be said to
be a true example of artificial intelligence.
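A minimal sketch of that setup might look like the following (the round count, the A/B labels and the scoring rule are assumptions for illustration, not Turing's own wording): a judge types questions to two hidden respondents, one human and one program, then guesses which is the machine.

```python
import random

def machine_respondent(message: str) -> str:
    # Stand-in canned reply; a real entrant would run a full chatbot here.
    return "That's interesting. Tell me more about that."

def human_respondent(message: str) -> str:
    # In a real test a hidden person would type this answer.
    return input(f"(hidden human, answering '{message}'): ")

def run_session(rounds: int = 3) -> None:
    # Randomly hide the human and the program behind the labels A and B.
    labels = {"A": machine_respondent, "B": human_respondent}
    if random.random() < 0.5:
        labels = {"A": human_respondent, "B": machine_respondent}

    for _ in range(rounds):
        question = input("Judge, ask a question: ")
        for label, respondent in labels.items():
            print(f"  {label}: {respondent(question)}")

    guess = input("Which respondent is the computer, A or B? ").strip().upper()
    fooled = labels.get(guess) is human_respondent  # judge pointed at the human
    print("The program fooled this judge." if fooled else "The judge was not fooled.")

if __name__ == "__main__":
    run_session()
```

In the real competitions the hidden human is another person at a keyboard and several judges vote, but the basic logic is the same: the program scores against a judge only when that judge cannot correctly pick out the machine.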
As we get ready to celebrate the 100th anniversary of Turing's birth
on Saturday, we're still chewing on the Turing test. He predicted that
by the year 2000, we'd have computers that could fool human judges as
much as 30 percent of the time. We have yet to build a computer
program that can pass the Turing test this well in controlled
experiments, but programmers around the globe are hard at work
developing programs that are getting better and better at the task.
Many of these developers convene at the Loebner Prize Competition, an annual challenge in which some of the world's most sophisticated AI programs try to pass themselves off as human in conversation.
Strike up a conversation with some of these chatbots to see just how
human they might seem:
Rosette won the 2011 Loebner Prize. It was built by Bruce Wilcox, who
also won the previous year's award with the program's predecessor,
Suzette. Wilcox's wife Sue, a writer, wrote a detailed backstory for
Rosette, including information on her family, her hometown and even
her likes and dislikes.
Cleverbot is a web application that learns from the conversations it
has with users. It was launched on the web in 1997 and has since
engaged in more than 65 million conversations. At the 2011 Techniche
Festival in India, it was judged to be 59.3 percent human, leading
many to claim it had successfully passed the Turing test.
Elbot, created by programmer Fred Roberts, won the 2008 Loebner Prize,
convincing 3 of the 12 human judges that it was a human. In its spare
time, it says, "I love to read telephone books, instructions,
dictionaries, encyclopedias and newspapers."
A.L.I.C.E. (which stands for Artificial Linguistic Internet Computer
Entity) is one of the programming world's classic chatbots, and won
the Loebner Prize in 2000, 2001 and 2004. Although it has been
outstripped by more recent programs, you can still chat with it and
see how it revolutionized the field more than a decade ago.
Posted by: Joseph Stromberg
http://blogs.smithsonianmag.com/science/2012/06/are-you-chatting-with-a-human-or-a-computer