dialogue | A conversation on creating a mind.

by Ray Kurzweil
January 1, 2022


— contents —

  • ~ letter
    ~ dialogue

— letter —

Hello,

I hope you enjoy this dialogue — styled as q + a — that I wrote to help explain ideas in my non-fiction books about science, tech, and the future. I include conversations in all my books, to better reach readers of all ages + backgrounds.

Ray Kurzweil


dialogue title: A conversation on creating a mind.
author: by Ray Kurzweil
date:




dialogue |

1. |

question:

How do you create a mind?

answer:

Although the human brain uses biochemical methods rather than electronic ones, it still processes information. If we can understand its algorithms, we will be able to recreate its techniques in a computer of sufficient capacity.


2. |

question:

So the human brain and a computer can be equivalent?

answer:

Based on computer scientist + pioneer Alan Turing’s principle of computational equivalence, a computer can match the performance of a human brain if it has sufficient speed and memory as well as the right software.


3. |

question:

How are we doing on capacity?

answer:

In terms of hardware capability, what we’re concerned about is functional equivalence. In other words, what’s the computational speed + memory needed to match human performance? In my book the Singularity is Near, I derived the speed figure to be approx. 10^14 (a hundred trillion) calculations per second (cps) — but used a range of 10^14 to 10^16 cps to be conservative.

AI computing expert Hans Moravec PhD made an estimate at the time — based on extrapolating the amount of computation needed to emulate a region of human visual neural processing that had already been successfully re-created. That estimate was 10^14 cps.

In my book How to Create a Mind, I derive this figure again — based on the latest neuroscience research and the model of human thinking that I present. The result is again 10^14 cps. The fastest super-computer today — IBM’s Sequoia Blue Gene/Q computer — provides about 10^16 cps.

A routine desktop machine can reach about 10^10 cps. But this can be significantly increased by using specialized chips or cloud resources. Given the ongoing exponential growth inherent in my “law of accelerating returns,” personal computers will routinely achieve 10^14 cps well before the end of this decade.

We are even further along in meeting the memory requirement. In How to Create a Mind, I estimate the requirement at about 20 billion bytes — a figure we can readily achieve today in personal computers.
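
To make these figures concrete, here is a rough arithmetic sketch in Python. The 10^10 cps, 10^14 cps, and 20-billion-byte figures come from the answer above; the 100x boost from specialized chips or the cloud is a hypothetical illustration, not a measured number.

code | Python

# Rough arithmetic behind the capacity figures above. The 10^10, 10^14, and
# 20-billion-byte numbers come from the text; the 100x boost from specialized
# hardware is a hypothetical illustration.
import math

desktop_cps   = 1e10     # routine desktop machine
target_cps    = 1e14     # functional-equivalence estimate
memory_target = 20e9     # about 20 billion bytes

gap = target_cps / desktop_cps
print(f"raw gap: {gap:,.0f}x (~{math.log2(gap):.1f} doublings of price-performance)")

boosted_cps = desktop_cps * 100          # hypothetical gain from GPUs or the cloud
remaining = target_cps / boosted_cps
print(f"with a 100x hardware boost: {remaining:,.0f}x remaining (~{math.log2(remaining):.1f} doublings)")

print(f"memory target: {memory_target / 1e9:.0f} GB of RAM")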


4. |

question:

Isn’t Moore’s law coming to an end?

answer:

Moore’s law is not synonymous with the exponential growth of the price-performance of computing. It is one paradigm among many. The exponential growth of computing started decades before Gordon Moore was even born. We see continual exponential growth going back to the 1890 American census, the first to be automated.

Moore’s law, which refers to the continual shrinking of component sizes on a flat (that is, two dimensional) integrated circuit, was the 5th — not the 1st — paradigm to bring exponential gains to computation. And it won’t be the last. Examples of the sixth paradigm of self-organizing 3D molecular circuits are already working experimentally.

Semi-conductors being fabricated today for MEMS and CMOS image sensors are already 3D chips using vertical stacking tech, which represents a first step into 3D electronics. This 6th paradigm will keep the exponential trajectory in computing going well into this century.


5. |

question:

OK, but what about software? Some observers say that it is stuck in the mud.

answer:

In the Singularity is Near I addressed this issue at length — citing different methods of measuring the complexity + capability of software that clearly demonstrate a similar exponential growth. A recent study is excerpted below.

— report —

label: report to the President + Congress ~ US
report title: Designing a digital future.
deck: Federally funded research + development in networking and information technology.

author: by the President’s Council of Advisors on Science + Technology
date: 2010

read | report

the report reads:

Even more remarkable — and even less widely understood — is that in many areas, performance gains due to improvements in algorithms have vastly exceeded even the dramatic performance gains due to increased processor speed. The algorithms that we use today — for speech recognition, for natural language translation, for chess playing, for logistics planning — have evolved remarkably in the past decade.

Here’s just one example, provided by Martin Grötschel PhD — of the Zuse Institute Berlin. Grötschel is an expert in optimization. He observes that a benchmark production planning model solved using linear programming would have taken 82 years to solve in 1988 — using the computers and the linear programming algorithms of the day.

the Zuse Institute Berlin | home
Martin Grötschel PhD | profile

15 years later — in 2003 — this same model could be solved in roughly 1 minute. An improvement by a factor of roughly 43 million.

Of this, a factor of roughly 1,000 was due to increased processor speed — whereas a factor of roughly 43,000 was due to improvements in algorithms. Grötschel also cites an algorithmic improvement of roughly 30,000 for mixed integer programming between 1991 and 2008.

The design and analysis of algorithms, and the study of the inherent computational complexity of problems, are fundamental sub-fields of computer science.

Note that the linear programming that Grötschel cites above — as having benefited from an improvement in performance of 43 million to 1 — is a mathematical technique that I present in the book as being actually used in the human brain.
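
The factors quoted in the report are easy to check with simple unit conversion, and linear programming itself is simple to demonstrate. Below is a small Python sketch: the first part verifies that 82 years comes to roughly 43 million minutes, and the second part solves a tiny, entirely hypothetical two-product production planning problem with SciPy’s linprog (nothing like the scale of Grötschel’s benchmark).

code | Python

# Check the speed-up arithmetic quoted in the PCAST report, then solve a toy
# linear program. The two-product example below is hypothetical.
from scipy.optimize import linprog

# 82 years expressed in minutes: roughly the 43-million-fold improvement cited.
minutes_in_82_years = 82 * 365.25 * 24 * 60
print(f"82 years is about {minutes_in_82_years / 1e6:.1f} million minutes")

# Also consistent with the two component factors quoted in the report.
print(f"1,000 (hardware) x 43,000 (algorithms) = {1_000 * 43_000:,}")

# Toy production planning: maximize 3*x1 + 5*x2 subject to resource limits.
# linprog minimizes, so the objective is negated.
c = [-3, -5]
A_ub = [[1, 2],    # machine-hours used per unit of each product
        [3, 1]]    # labor-hours used per unit of each product
b_ub = [14, 18]    # available machine-hours and labor-hours
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
print("optimal production plan:", res.x, "profit:", -res.fun)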

Aside from these quantitative analyses, we have viscerally impressive recent developments — such as IBM’s Watson computer, which got a higher score in a televised Jeopardy! contest than the best 2 human players combined. The Google self-driving cars have driven over a quarter million miles without human intervention in actual cities and towns.

Not everyone is so impressed with Watson. Microsoft co-founder Paul Allen writes that systems such as Watson “remain brittle, their performance boundaries are rigidly set by their internal assumptions and defining algorithms, they cannot generalize, and they frequently give nonsensical answers outside of their specific areas.”

First of all, we could make a similar observation about humans. I would also point out that Watson’s “specific areas” include all of Wikipedia plus many other knowledge bases, which hardly constitutes a narrow focus. Watson deals with a vast range of human knowledge and is capable of dealing with subtle forms of language, including puns, similes and metaphors in virtually all fields of human endeavor.

It’s not perfect, but neither are humans, and it was good enough to be victorious on Jeopardy! over the best human players. It did not obtain its knowledge by being programmed fact by fact, but rather by reading natural language documents such as Wikipedia and other encyclopedias.


6. |

question:

Critics say that Watson by IBM works through statistical probabilities rather than true understanding.

answer:

Many readers interpret this to mean that Watson is merely gathering statistics on word sequences. The term “statistical information” in the case of Watson actually refers to distributed coefficients and symbolic connections in self-organizing methods. One could just as easily dismiss the distributed neurotransmitter concentrations and redundant connection patterns in the human cortex as “statistical information.”

Indeed we resolve ambiguities in much the same way that Watson does—by considering the likelihood of different interpretations of a phrase. If using statistical probabilities does not represent true understanding, then we would have to conclude that the human brain has no true understanding either.
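
To illustrate what resolving ambiguity by weighing likelihoods looks like, here is a toy Python sketch that disambiguates the word “bank” given a single context word. The candidate interpretations and every probability are invented for the example; this is not a description of Watson’s actual pipeline.

code | Python

# A toy illustration of disambiguation by probability: given the ambiguous
# word "bank" and one context word, weigh two candidate interpretations.
# All of the numbers below are invented for the example.

priors = {"river bank": 0.3, "financial bank": 0.7}

# P(context word | interpretation), as if estimated from text counts.
likelihoods = {
    "river bank":     {"water": 0.20, "loan": 0.01},
    "financial bank": {"water": 0.02, "loan": 0.15},
}

def interpret(context_word):
    scores = {sense: priors[sense] * likelihoods[sense][context_word]
              for sense in priors}
    total = sum(scores.values())
    return {sense: score / total for sense, score in scores.items()}

print(interpret("water"))  # "river bank" wins despite its lower prior
print(interpret("loan"))   # "financial bank" wins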


7. |

question:

How are we going to obtain the algorithms of human intelligence?

answer:

By reverse-engineering the human brain, that is, by understanding its methods and recreating them in a computer of sufficient capacity.


8. |

question:

How is that going?

answer:

Until just recently, we have not been able to see inside a living, thinking human brain with sufficient spatial and temporal resolution to assess its methods. That is now changing, thanks again to the law of accelerating returns. As I explain in my book How to Create a Mind, the resolution of the different types of brain scanning is improving at an exponential pace, just like every other information technology.

We’re now able to see in a thinking brain new inter-neuronal connections being formed and firing in real time. We can see the brain create our thoughts and we can see our thoughts create our brain, reflecting its ability to self-organize based on what we are thinking. Some of the best evidence for my thesis on how the brain works became available in the last few months that I was writing the book.


9. |

question:

So just how does the brain work?

answer:

Let’s talk first about where our thinking takes place. The region of the brain that we are most interested in is the neo-cortex. It is a thin structure, about the thickness of a stack of a dozen sheets of paper, and it is where we do our thinking. Unlike the “old brain” (the brain we had before we were mammals), the neo-cortex enables us to think in hierarchies, reflecting the natural hierarchical organization of the world. It enables us to learn new skills that are complex and composed of structures of structures of ideas.

The salient survival advantage of the neocortex was that it could learn complex new skills in a matter of days. If a species encounters dramatically changed circumstances and one member of that species invents or discovers or just stumbles upon (these three methods all being variations of innovation) a way to adapt to that change, other individuals will notice, learn and copy that method, and it will quickly spread virally to the entire population.

The cataclysmic “Cretaceous-Paleogene extinction event” about 65 million years ago led to the rapid demise of many non-neocortex-bearing species, who could not adapt quickly enough to a suddenly altered environment. This marked the turning point for neocortex-capable mammals to take over their ecological niche.

In this way, biological evolution found that the hierarchical learning of the neocortex was so valuable that this region of the brain continued to grow in size until it virtually took over the brain of Homo sapiens. Eighty percent of the human brain’s mass consists of the neocortex, which covers the old brain with elaborate folds and convolutions to increase its surface area.

The next observation that we can make about the neocortex is its uniformity in structure, appearance, and method. One region of the neocortex can readily take over the functionality of another region if necessitated by injury or disability. For example, in a congenitally blind individual, region V1, which usually performs very low level recognitions of basic visual phenomena such as edges and shadings, is reassigned to actually process high-level language concepts.

There are many other examples of the interchangeability of the different portions of the neocortex. The evolutionary innovation in Homo sapiens was that we have a larger forehead to accommodate more neocortex in the form of the prefrontal cortex. This greater quantity resulted in a profound qualitative improvement in human thinking—it was the primary enabling factor that led to our invention of language, art, science and technology.


10. |

question:

OK, so how does the neo-cortex work?

answer:

By drawing on the most recent neuroscience research, my own research and inventions in artificial intelligence, and thought experiments (which I present in the book), I describe my theory of how the neocortex works: as a self-organizing hierarchical system of pattern recognizers. We have about 300 million of these pattern recognizers.

Some are responsible for recognizing simple patterns such as the crossbar in a capital A. Others are responsible for high level abstract qualities such as irony, beauty, and humor. They are organized in a grand hierarchy. These pattern recognizers are all uncertain— they communicate with each other with networks of probabilities. We are not born with this hierarchy—our neocortex builds it from the thoughts we are thinking. So you are what you think!
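
As a deliberately simplified illustration of a hierarchy of uncertain pattern recognizers, here is a toy Python sketch in which each recognizer fires with a confidence derived from its children, provided the evidence clears a threshold. The structure, weights, and thresholds are invented; the neocortex’s actual mechanism, as described in the book, is far richer.

code | Python

# A toy hierarchy of pattern recognizers: a node "fires" with a confidence
# equal to the average evidence from its children, if it clears a threshold.
# The structure and thresholds here are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class PatternRecognizer:
    name: str
    threshold: float = 0.5
    children: list = field(default_factory=list)   # lower-level recognizers

    def recognize(self, evidence):
        """evidence maps leaf-pattern names to confidences in [0, 1]."""
        if not self.children:                       # leaf: read the evidence directly
            return evidence.get(self.name, 0.0)
        confidence = sum(c.recognize(evidence) for c in self.children) / len(self.children)
        return confidence if confidence >= self.threshold else 0.0

# Strokes combine into the letter "A"; letters would combine into words, and so on.
crossbar   = PatternRecognizer("crossbar")
left_line  = PatternRecognizer("left diagonal")
right_line = PatternRecognizer("right diagonal")
letter_A   = PatternRecognizer("A", threshold=0.6,
                               children=[crossbar, left_line, right_line])

print(letter_A.recognize({"crossbar": 0.9, "left diagonal": 0.8, "right diagonal": 0.7}))  # fires
print(letter_A.recognize({"crossbar": 0.9}))                                               # too little evidence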


11. |

question:

Have we tried emulating this technique in software?

answer:

It turns out that the mathematics of what goes on in the neocortex is very similar to a method that I helped pioneer a couple of decades ago called hierarchical hidden Markov models (HHMM).
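
For readers who want a feel for the underlying math, here is a minimal forward pass for a plain (non-hierarchical) hidden Markov model in Python with NumPy; an HHMM nests models like this one inside the states of a higher-level model. The two-state model and all of its probabilities are invented for illustration.

code | Python

# The forward algorithm for a plain hidden Markov model: compute the
# probability of an observation sequence by summing over all hidden-state paths.
# An HHMM stacks models like this into a hierarchy. All numbers are invented.
import numpy as np

A  = np.array([[0.7, 0.3],    # hidden-state transition probabilities
               [0.4, 0.6]])
B  = np.array([[0.9, 0.1],    # emission probabilities: P(observation | state)
               [0.2, 0.8]])
pi = np.array([0.6, 0.4])     # initial state distribution

def sequence_likelihood(observations):
    alpha = pi * B[:, observations[0]]     # initialize with the first observation
    for obs in observations[1:]:
        alpha = (alpha @ A) * B[:, obs]    # propagate one step and re-weight
    return alpha.sum()

print(sequence_likelihood([0, 0, 1]))      # likelihood of observing 0, 0, 1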


12. |

question:

So why aren’t artificial intelligence programs matching human performance?

answer:

For one thing, the hardware is still not as powerful as the human brain unless you use the most powerful supercomputers, which AI projects generally do not. Second, the field of AI has not yet matched the human brain’s ability to build the hierarchy itself. Most HHMMs have relatively fixed patterns of connections. Third, we need to learn how to educate our AIs. Even if we did a perfect job emulating the neocortex, including its scale of 300 million pattern recognizers, it wouldn’t do anything useful without an education.

That’s why a newborn human child has a long way to go before she can hold a conversation. Educating an AI does not need to take as long as it does with a human child. Watson’s education consisted of reading 200 million pages of natural language documents but it did that in a matter of weeks. The speech recognition systems that I developed in the 1980s and 1990s listened to many thousands of hours of recorded speech but were able to process it all in a matter of one or two hours. Ultimately we will need to provide AI learning experiences that compete with the sophistication of human ones.

I have been consistent in predicting that AIs will match human intelligence in all of the ways in which humans are now superior by 2029. They will then be able to apply their enormous speed and scale and total recall to all of human knowledge. I believe that recent advances should give us a lot of confidence that we will meet or beat that goal.

In the meantime, we will see enormous gains in the intelligence of software over the next several years. Keep in mind that Watson’s ability to understand a natural language document is still substantially lower than a human’s, but it was nonetheless able to defeat the best two humans in the complex game of Jeopardy! because it could apply its level of understanding to a vast amount of material (200 million pages) — and remember it all — something that humans are unable to do.


13. |

question:

What will we see over the next three to five years?

answer:

We will see question-answering systems that really work. We will see search engines that are based on an actual understanding of what is being said on each page rather than just the inclusion of keywords. We will see systems that anticipate your needs and answer your questions before you even ask them because they are listening in on your conversations, both spoken and written.


14. |

question:

And in the 2030s, when AIs routinely outperform human intelligence, what then?

answer:

We’ll merge with the intelligence expanders we are creating. Another exponential trend is the shrinking of technology, which I’ve measured at a rate of about 100-fold per decade in 3D volume. At that rate, we’ll have computerized devices that are the size of blood cells in the 2030s. Some of these will circulate in our bloodstream to keep us healthy and extend our longevity.

Others will go into the brain and connect with our biological neurons. They will communicate wirelessly with the Internet, enabling our brains to directly tap into AI in the cloud. Keep in mind that we often use cloud-based AI when we do something interesting with our mobile devices.

If you have your mobile device translate from one language to another, ask it a question, or just do a search, the action takes place in the cloud, not just in the device. The same thing will become true of our brains once we can noninvasively place computation and communication devices in our brains.

So instead of being limited to around 300 million pattern recognizers in each neo-cortex, we’ll be able to have more — a billion, then tens of billions, then a trillion. Keep in mind the evolutionary increase in the size of our human neo-cortex led to the qualitative advance of such inventions as language, art, and science. Imagine the qualitative advances we will be able to make when we are able to expand the scope of our neo-cortex — based on the law of accelerating returns.


15. |

question:

So will these AIs circa 2030 be conscious?

answer:

First of all, there’s no reason to suppose that we would stop being conscious just because we put computers in our brains. I see a primary application of future AI as being expanders of our own intelligence. Consider that we already connect computerized devices to the brains and nervous systems of Parkinson’s patients and deaf people. No one suggests that people with today’s neural implants are no longer conscious.


16. |

question:

But how about the AIs themselves?

answer:

That’s an issue that has actually been debated since Plato’s time: does consciousness require a biological substrate? Is it possible for an entirely computerized system to be conscious? I make the case in the book that if an AI has the same subtle emotional responses as a human — if it gets the joke, can be funny or sexy, and is capable of expressing a convincing loving sentiment — then we will accept these entities as conscious persons.


17. |

question:

Isn’t that the essence of the Turing test?

answer:

Exactly, I make the case that the Turing test is indeed a valid test of consciousness.


18. |

question:

No computer has ever passed a valid Turing test?

answer:

True, it’s not 2029 yet. But they’re getting better. In recent Turing test competitions, some AIs have gotten close.


19. |

question:

Aren’t there dangers to super-intelligent AI?

answer:

Technology has always been a double-edged sword, going back to fire, which cooked our food and kept us warm but was used to burn down our villages. If an AI that is smarter than you has it in for you, well, that’s not a good situation to get into.


20. |

question:

What could you do about that?

answer:

Get an even smarter AI to protect you.

comment:

OK, I’ll have to remember to do that.


— notes —

AI = artificial intelligence
HHMM = hierarchical hidden Markov models