Wednesday, January 28, 2015

Killer Robots?

Friend or Enemy?

Recently a group of scientists, two actors and at least one visionary entrepreneur released a letter titled "Research Priorities for Robust and Beneficial Artificial Intelligence."

The idea was that artificial intelligence (AI) has great potential but "it is important to reap its benefits while avoiding potential pitfalls."

The group that released the letter -- modestly named the Future of Life Institute -- advocated "making AI more capable, but also maximizing the social benefit of AI."

Decoding the Letter's Meaning

At least two prominent supporters of the letter followed it up with specific and scary-sounding warnings.

". . . full artificial intelligence could lead to the end of the human race," said Stephen Hawking, the brilliant British scientist.

Hawking's reasoning:  Computers learn faster than people.  "Humans, who are limited by slow biological evolution, couldn't compete and would be superseded," he was quoted as saying in a BBC article.

AI is "our biggest existential threat," claimed Elon Musk in the same piece.  Musk, the Tesla and SpaceX founder, has given $10 million to the Future of Life Institute.

Musk believes that, over time, computers will learn to design other computers.  By the time this process is in its fifth generation or so, he says, computers could favor the survival of digital beings over that of humans.


In this month's edition of Wired, another computer scientist cried foul on the whole business.

"The press accepts these claims with the same gullibility it displayed during Apple Siri's launch and hails arrival of 'human like' computing as a fait accompli," wrote Roman Ormandy, who also has worked in AI and is the founder of something called Embody Corp.

Ormandy's argument is that the human brain and human learning are vastly more complex and subtle than the central processing units of digital machines.

"For the last 50 years," he writes, "AI researchers have promised to deliver intelligent computers, which always seem to be five years in the future."

Ormandy sees great potential for AI, particularly in the development of sensors to assist people with physical ailments.  Indeed, Hawking, an ALS patient, is now benefiting from Intel developments that allow him to speak.

What to Make of It All

Computer work can be rewarding or frustrating, but it seems like a big leap to assume that a computer could form an intention to make a human happy or angry.

Even after computers have learned to solve the Sunday crossword puzzle, can we really expect them to become wise and creative?  I enjoy crosswords myself, but they have not led me to more complex reasoning or new ideas.  Alas.

Obviously it would be good if artificial intelligence were developed to ennoble humanity and improve life on earth.  But who would decide what the appropriate research should be?

The concept of thoughtful computers was originated by authors and screenwriters.  Its danger is being promoted by at least one billionaire who is spending much more of his own money on bringing space tourism to rich humans than on thwarting the development of demon robots -- and this in a world beset by poverty, disease and warfare.

It might be well to question whether any of these people is equipped to channel AI toward "maximizing (its) social benefit."

End Note:

This killer robot panic seems to have overtaken much of the tech world.  Just yesterday I read a New Yorker profile of one of its rising stars.  He specializes in what he calls "virality."  This means that he has figured out how to attract millions of readers (if you want to call them that) to short posts and lists composed of information that critics say his staff has cribbed from other people's research.

Anyway, in the interview, the fellow shared his concern about the future of smart computers: "We'll soon get to a point where AI fully surpasses us.  When you think about what asymptotic growth looks like, there's no way humans are going to be able to keep up."

"Asymptotic" is a big word, and my guess is that most of his readers don't know what it means.

It occurred to me that humans who flock to his sites may be hastening the process he fears by making mankind dumber as robots grow smarter.
