Runaway Artificial Intelligence?

February 3, 2006

Originally published in The Futurist March-April 2006. Reprinted on KurzweilAI.net February 3, 2006.

This article is a response to Ray Kurzweil’s feature in The Futurist, Reinventing Humanity. You can also read other responses to Kurzweil’s article by Terry Grossman, John Smart, Damien Broderick, and Richard Eckersley. Ray Kurzweil’s response to Eckersley’s comments is also available.


Some years ago, I reviewed Kurzweil’s earlier book, The Age of Spiritual Machines, for the Foresight Nanotech Institute’s newsletter. Shortly thereafter I met him in person at a Foresight event; he had read the review and told me, "Of all the people who reviewed my book, you were the only one who said I was too conservative!"

The Singularity is Near is very well researched, and I think that in general, Kurzweil’s predictions are about as good as it’s possible to get for things that far in advance. I still think he’s too conservative in one specific area: Synthetic computer-based artificial intelligence will become available well before nanotechnology makes neuron-level brain scans possible in the 2020s.

Existing technologies like functional MRI are already beginning to give us a high-level functional block diagram of the brain’s processes. At the same time, the hardware needed to run a strong artificial intelligence is, by most estimates, here now, though it’s still pricey.

Existing AI software techniques can build programs that are experts in any well-defined field. The breakthroughs necessary for such programs to learn for themselves could easily happen within the next decade, one or two decades earlier than Kurzweil predicts.

Kurzweil finesses the issue of runaway AI by proposing a pathway in which machine intelligences are patterned after human brains, so that they would have our morals and values built in. Indeed, this would clearly be the wise and prudent course. Unfortunately, it seems all too likely that a shortcut exists without that kind of safeguard. Corporations already use huge computer systems for data mining and decision support that employ sophisticated algorithms no human manager understands. It’s a very short step to having such a system make better decisions than the managers do, as far as the corporation’s bottom line is concerned.

The Singularity may mean different things to different people. To me, it is that point where intelligences significantly greater than our own control so many of the essential processes that figure in our lives that mere humans can’t predict what happens next. This future may be even nearer than Ray Kurzweil has predicted.

© 2006 J. Storrs Hall. Reprinted with permission.