talk title: Touching the future.
author: by Ray Kurzweil
talk date: 2017
published: July 1, 2022

presented by Singularity grp.
tag line: Together we can impact a billion people.


TALK
Touching the future.
by Ray Kurzweil

I thought I would give you some background on how I got into what I’m doing. I’ve had some involvement with the G and the R of GNR: genetics, nano-tech, and robotics.

G stands for genetics, which is another word for bio-tech. And R is for robotics, which really refers to artificial intelligence. I encountered the idea of exponential growth in 1981, when I was studying technology trends, trying to time my technology projects. I started with the common wisdom that you can’t predict the future. But I discovered that the price/performance and capacity of information technology follow a very predictable path, an exponential one.
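To make that concrete, here is a minimal sketch of the arithmetic behind such a predictable exponential path. The 18-month doubling time is my illustrative assumption, not a figure from the talk:

```python
# Minimal sketch of an exponential price/performance trend.
# The 18-month doubling time is an illustrative assumption,
# not a figure quoted in the talk.

def price_performance(years: float, doubling_time_years: float = 1.5,
                      baseline: float = 1.0) -> float:
    """Project price/performance after `years`, assuming it doubles
    every `doubling_time_years` (compound exponential growth)."""
    return baseline * 2 ** (years / doubling_time_years)

for y in (0, 5, 10, 20):
    print(f"year {y:>2}: {price_performance(y):,.0f}x the baseline")
```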

Peter Diamandis MD and I were talking about that. I’ve had some involvement with bio-tech. But let me share with you how I encountered artificial intelligence (AI) over half a century ago.

I fancied myself an inventor when I was 5 years old. My grandmother showed me this mechanical typewriter and a book that she wrote on that typewriter, titled One Life is Not Enough. The memoir told the story of her mother, my great-grandmother, starting the first school that provided higher education for girls in Europe. It was in 1868.

The school was taken over by her daughter, my grandmother, who then became the first woman in Europe to get a PhD in chemistry. She tells the story of the school, and of her own life. So she showed me this book and the typewriter. But as a 5-year-old I was much more interested in the typewriter; that was a magical machine to me. And it inspired me to become an inventor.

When I was 12 years old I discovered the computer, which most people had not even heard of; there were only a dozen computers in New York City. I immediately had the idea that you could simulate reality, as well as thinking and intelligence, in a computer. I thought about that for a couple of years.

When I was 14 years old, in 1962, I wrote to the leaders of the 2 opposing schools of thought in artificial intelligence: the symbolic school and the connectionist school. The bifurcation of artificial intelligence into these two warring camps started at about that time.

Marvin Minsky PhD was considered the head of the symbolic school. He’s been called the ‘father of artificial intelligence.’ It’s actually not well known that he invented the neural net in the early 1950s. But he had come to reject that idea. He expounded the idea that you could define rules that would describe every intelligent activity.

So I went to visit him. I wrote him a letter and he invited me up. He spent all day with me, as if he had nothing else to do. He was a consummate educator. That started a mentorship that lasted over half a century, until his passing over a year ago.

He said: ‘Where are you going now?’ And I said, well, I’m going to go see professor Frank Rosenblatt PhD at Cornell. And he said: ‘Oh, don’t waste your time with that.’ But I went anyway. Rosenblatt was publicizing a machine called the ‘perceptron.’ It was really the first popular neural net.

And he was making fantastic claims for it. He said: ‘It can’t quite do this now — but it’s gonna be able to translate languages, cure diseases, and recognize speech.’ It couldn’t do any of those things. And that led to a backlash which was why Minsky and others were critical of it.

So I went up there. I brought printed letters in different type fonts, and it could recognize them as long as they were in Courier 10. In any other type style it didn’t work. He said: ‘But don’t worry. We can take the output of one perceptron, and feed it in as the input to another one. Feed the output of that to another perceptron. And keep adding layers. As we add more and more layers, it will get smarter and smarter and generalize. Then we’ll be able to do all these things.’

platform: Medium
blog: Towards Data Science
story title: Rosenblatt’s perceptron, the first modern neural network
deck: A quick introduction to deep learning for beginners.

read | story

I said: ‘That’s really interesting. Have you tried that? How did that work out?’ He said: ‘Well, I haven’t tried it yet, but it’s high on our research agenda.’ He died 9 years later, in 1971, never having tried that idea. Things didn’t move quite as quickly in the 1960s as they do today. Culturally, things moved pretty quickly in the 1960s, but when it came to computers things were still pretty slow.
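In modern terms, the stacking Rosenblatt described looks something like this. A minimal sketch of my own; the random weights are just to show the wiring, and nothing here is Rosenblatt’s actual hardware:

```python
import numpy as np

def perceptron(x, w, b):
    """A single perceptron: a weighted sum pushed through a hard threshold."""
    return (x @ w + b > 0).astype(float)

# 'Adding layers' as Rosenblatt described it: the outputs of one bank
# of perceptrons become the inputs to the next. The weights here are
# random, just to show the wiring; a real network would learn them.
rng = np.random.default_rng(0)
x = rng.random(10)                      # a toy 10-pixel "image"
h1 = perceptron(x, rng.standard_normal((10, 4)), rng.standard_normal(4))
h2 = perceptron(h1, rng.standard_normal((4, 3)), rng.standard_normal(3))
out = perceptron(h2, rng.standard_normal((3, 1)), rng.standard_normal(1))
print(out)  # the top-level perceptron's verdict
```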

In 1969 Minsky wrote a book with Seymour Papert PhD called Perceptrons. It proved a theorem that a perceptron, a neural net, couldn’t solve a particular problem called the ‘connectedness problem.’ The problem was on the book’s cover: 2 maze-like images, one fully connected, one not.

book: Perceptrons
deck: An introduction to computational geometry.
author: by Marvin Minsky PhD + Seymour Papert PhD
date: 1969

visit | book

Humans could solve this problem easily; it would take a minute or two. But the book proved a theorem that perceptrons could not solve it, and it wasn’t a matter of making them more advanced. Minsky proved mathematically that a perceptron inherently couldn’t solve this fairly simple problem that humans could.
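The textbook stand-in for this single-layer limitation is XOR; the book’s own example was connectedness, but the lesson is the same. A minimal sketch showing the perceptron learning rule failing to converge on it:

```python
import numpy as np

# XOR: no straight line separates the 1s from the 0s, so a single
# perceptron can never learn it and the learning rule never converges.
# (The book's actual example was the connectedness problem; XOR is the
# standard classroom stand-in for the same single-layer limitation.)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

w, b = np.zeros(2), 0.0
for epoch in range(1000):
    errors = 0
    for xi, yi in zip(X, y):
        pred = float(xi @ w + b > 0)
        if pred != yi:                 # the perceptron learning rule
            w += (yi - pred) * xi
            b += (yi - pred)
            errors += 1
    if errors == 0:
        break
print("converged" if errors == 0 else "never converged")  # never converged
```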

The book was very successful, and it killed all the funding for neural nets for 25 years, something that Minsky told me he regretted shortly before he died, because he saw the surge of success in neural nets. I wrote in my 1999 book The Age of Spiritual Machines that the theorem only applied to single-level neural nets.

So the theorem didn’t apply to Rosenblatt’s idea of multi-layer neural nets; the limitations didn’t apply to that. A few decades after Rosenblatt died, people tried multi-layer neural nets. They could only go 3 or 4 layers deep, and that was not because of computational problems. It was because of a mathematical problem. For the mathematicians in the audience, it has to do with falling into local minima, or failing to keep the error surface convex.

For math reasons, when we went beyond 3 or 4 levels the information disintegrated. And 3- or 4-level neural nets were a little bit smarter than one level, but they still couldn’t do very much. The big accusation against the AI field just 5 or 6 years ago was: ‘You guys can’t even tell the difference between a dog and a cat.’

That’s actually a pretty subtle and difficult discrimination. It turns out that the difference between a dog and a cat is at level 15, and a 3-level neural net is not going to be able to do that. We couldn’t go beyond 3 levels because of this math problem.

About 5 or 6 years ago that math problem was solved by a group of mathematicians, including some I work with at Google, and now we can go to any number of levels. And indeed, then we could tell the difference between a dog and a cat. With 100-level neural nets, the best programs can distinguish between thousands of different categories, and do it better than humans.
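As a rough sketch of what a many-level network looks like in today’s tools (an illustrative toy, not anyone’s production code; in modern terms, ingredients like ReLU activations and careful initialization are among what lets signals survive dozens of layers):

```python
import torch
import torch.nn as nn

# A minimal, illustrative many-level classifier. The depth, width, and
# class count are arbitrary stand-ins, not the talk's actual systems.
def deep_classifier(depth: int = 100, width: int = 64,
                    n_classes: int = 1000) -> nn.Module:
    layers = [nn.Flatten(), nn.LazyLinear(width)]
    for _ in range(depth):
        # ReLU activations are one of the modern ingredients that let
        # gradients pass through many layers without dying out.
        layers += [nn.Linear(width, width), nn.ReLU()]
    layers.append(nn.Linear(width, n_classes))
    return nn.Sequential(*layers)

model = deep_classifier()
logits = model(torch.randn(1, 3, 32, 32))   # a dummy 32x32 RGB image
print(logits.shape)                          # torch.Size([1, 1000])
```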

This has led to the surge of interest in artificial intelligence. So 2 things really account for this tremendous surge of interest in AI: the solving of this math problem, and the law of accelerating returns, because we have more and more powerful computers, more memory, and lots of data. That’s one of the reasons I’m at Google.

There’s still a problem though. There’s a motto in the field that ‘life begins at a billion examples.’ We have a billion examples of some things, like pictures of dogs and cats. But there are a lot of things we don’t have that kind of data about. That’s actually the principal difference now between AI and human intelligence. Humans can learn from a small amount of information. Your significant other or your boss tells you something once, maybe twice. You might actually learn from that.

That’s something these neural nets are not able to do. We don’t always have a billion examples. I’d say the big research challenge now is to find a way around that. DeepMind, in winning the go competition, created its own data. There were about a million moves of master go games online, so they trained it on those million moves. Now that’s not a billion, that’s only a million, and it created a fair go player.

But even an average amateur could beat that program. Then they created an unlimited amount of data by having the program play itself, kind of similar to how humans might actually think through a problem on their own. We can annotate each move, because we know which side of the simulated game would win, and there are other ways to evaluate moves. Now it could create an unlimited amount of annotated examples. It just kept improving until it soared past the best players in the world.
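Here is a toy version of that self-play idea, using tic-tac-toe and random play. The game and the policy are my stand-ins for illustration, not DeepMind’s system:

```python
import random

# Self-play data generation in miniature: play games against yourself,
# then annotate every position with the eventual outcome. This yields
# an effectively unlimited supply of labeled training examples.
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def self_play_game():
    """Play one random game; return its positions and the result."""
    board, player, positions = ["."] * 9, "X", []
    while winner(board) is None and "." in board:
        move = random.choice([i for i, s in enumerate(board) if s == "."])
        board[move] = player
        positions.append("".join(board))
        player = "O" if player == "X" else "X"
    return positions, winner(board) or "draw"

# Every position in every game becomes one annotated training example.
dataset = []
for _ in range(10_000):
    positions, result = self_play_game()
    dataset.extend((pos, result) for pos in positions)
print(len(dataset), "annotated examples from 10,000 self-play games")
```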

So that’s one approach. I have another approach, which I think is actually how humans do it. That same year, 1962, I wrote a paper about how I thought the human brain worked. I didn’t have much to go on; there was very little neuro-science. There was one neuro-scientist who had something to say that was of interest: Vernon Mountcastle MD.

At that time we knew about different regions of the neo-cortex. There’s the one in the back of the head that the optic nerve spills into; that one can tell me that this is a straight line. There’s the fusiform gyrus up here, which can recognize faces. We know that because if you knock it out through injury or stroke, people can’t recognize faces, although they will re-learn that skill using a different region of the brain. There’s the famous frontal cortex, which enables us to do language, music, and humor.

Since these regions do such different things, the common wisdom was that they must be using different algorithms, different methods. But Vernon Mountcastle did autopsies of the neo-cortex in different regions, and they all looked the same. They had the same repeating pattern, and that pattern didn’t seem to depend on how old the individual was; it stayed the same throughout life. He said: ‘neo-cortex is neo-cortex.’

So I had that clue. I described the brain as a series of modules, each of which is pretty much the same. Each module can learn a pattern, and it actually learns a pattern as a linear sequence. Even if we’re learning a pattern like a chair, you might think: well, that’s a 3-dimensional object, it’s not a linear sequence. But we actually are able to use these linear sequences to understand complicated patterns.

And the modules are organized in a hierarchy. That was the magic of the neo-cortex: you could actually understand hierarchies, and the world is organized hierarchically. That’s why evolution devised the neo-cortex 200 million years ago, with mammals. Trees have a trunk, which has branches, which have more branches, which lead to leaves. There’s a natural hierarchical structure to the world.
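To make the module-and-hierarchy picture concrete, here is a toy sketch of my own devising, not Kurzweil’s simulation code: each module recognizes one linear sequence, and a higher module fires on a sequence of the lower modules’ outputs:

```python
# Toy illustration (my sketch, not Kurzweil's simulations): each module
# recognizes one linear sequence, and higher modules recognize
# sequences made of the names that lower modules report upward.

class Module:
    def __init__(self, name, sequence):
        self.name = name          # what this module reports upward
        self.sequence = sequence  # the linear pattern it recognizes

    def recognize(self, tokens):
        """Fire if our sequence appears, in order, within `tokens`."""
        it = iter(tokens)
        return all(any(t == want for t in it) for want in self.sequence)

# Low-level modules see raw strokes; the high-level module sees only
# the names that the low-level modules emit.
leg   = Module("leg",   ["vertical-stroke"])
seat  = Module("seat",  ["horizontal-stroke"])
chair = Module("chair", ["leg", "seat", "leg"])

strokes = ["vertical-stroke", "horizontal-stroke", "vertical-stroke"]
layer1 = [m.name for m in (leg, seat, leg) if m.recognize(strokes)]
print(chair.recognize(layer1))  # True: the hierarchy saw a "chair"
```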

With the ability to understand the world hierarchically, we can actually invent new behaviors. So pre-mammalian animals didn’t have a neo-cortex. They had fixed behaviors, but they were very well evolved for their ecological niches.

Mammals came along. The first ones were basically rodents, and they could invent new behaviors. That didn’t help them that much, because the environment changed very slowly. It might take 50,000 years for there to be an environmental change that required a new behavior, and the non-mammalian animals, over that period of time, could actually evolve a new fixed behavior.

So the mammals, which were tiny, kind of stayed out of the way for 135 million years. But then something happened 65 million years ago: a sudden catastrophic change to the environment. If you go down to a layer of rock that reflects 65 million years ago, geologists will explain that it shows a sudden violent change to the environment. We see it all around the globe. We call it the Cretaceous extinction event, because that’s when the dinosaurs went extinct. That’s when 75% of all the animal species went extinct. And that’s when mammals took over.

Mammals’ bodies got bigger, and their brains got bigger even faster, taking up a larger fraction of their bodies. The neo-cortex, which is the outer layer of the brain, got bigger and developed these curvatures. In a human brain it’s now about 80% of the brain. It’s where the action is; that’s where we do our thinking.

Something else happened 2 million years ago. If you remember, we were walking around without these big foreheads. If you look at other primates, they have a slanted brow; they were doing a very good job of being primates. But evolution figured: ‘Well, this neo-cortex is pretty good stuff. How can we get more of it?’

And evolution created the frontal cortex. What did we do with it? We were already doing a very good job of being primates, so we put it at the top of the hierarchy. The hierarchy is like a pyramid, so as you add to it, even adding 20% more neo-cortex could double or triple the number of levels in the hierarchy.

That was the enabling factor for us to invent language, art, and music. Every human culture ever discovered has music; no primate cultures have music, humor, or technology. Along with this came the opposable appendage, the thumb, so that we could take our imagination of what we could do with the world and actually make changes. We created tools that were sophisticated enough to create new tools, and we had a whole evolutionary process of technology.

I had this model. I submitted it to a science contest, got to meet President Lyndon Johnson, and started thinking about thinking for 50 years. My book How to Create a Mind, from 2012, articulates this. But now we have an explosion of neuro-science evidence to support it. The European brain reverse-engineering project has identified a repeating pattern of modules of about a hundred neurons each. There are about 300 million of these modules, out of the 30 billion neurons we have in the neo-cortex. We can actually see them connecting themselves into hierarchies.

We also see that there’s no plasticity, no change, within each module; it stays fixed throughout life. But the pattern that each module learns, and the kind of hierarchy that self-organizes, is created by our own thinking. So that’s the model I have. I’ve been creating simulations of these models. They’re not perfect models of the neo-cortex yet, but we’re learning more and more about how the brain works. The models are good enough to begin to understand language.

Because of these self-organizing neural nets, the connectionist school has really taken over. We’ve discovered that we really can’t define the world in terms of rules, because the world is too flexible and the rules become brittle. You try to fix one thing and it breaks 2 other things. There’s a complexity ceiling when you try to define things in terms of rules, whereas we can define much more subtle and complicated ways of doing things with these connectionist systems.

AI is basically amplifying our technology, amplifying our intelligence. There’s been controversy recently about promise vs. peril. But there’s been promise vs. peril with every technology. I think we go through 3 phases in contemplating exponential tech.

The first is delight: ‘Wow, this has the potential to overcome the age-old problems of humanity.’ Then alarm that these technologies also have the potential to cause great harm. And finally, where I think we should end up, is cautious optimism. We have a moral imperative to continue improving exponential technologies, because only these will alleviate the suffering in the world.

We’ve made tremendous progress. This is the wealthiest, healthiest, most peaceful time in human history. But there’s still a lot of suffering to go around. It’s only continued advance in these technologies that will get us where we want — while we contain the peril.

A good example of our ability to actually do that is in bio-tech. We’ve had the guidelines from the Asilomar conference for four decades, and we’re now seeing the benefits. You’ve heard that we can fix a broken heart. Not yet from romance, that’s going to take more advances in virtual reality, but from a heart attack. We can grow organs.

I’m involved with a company where we’re actually growing organs (kidneys, lungs, and hearts) and installing them successfully in animals. This is coming to a human near you soon. These are just a couple of examples. It’s a trickle of impact today; it’ll be a flood in 10 years. The number of people harmed by abuse of bio-tech, intentional or accidental, has so far been zero, because of the Asilomar guidelines.

That doesn’t mean we can cross it off the list, because the technology keeps getting more sophisticated. Consider, for example, the recent advent of CRISPR. We have to keep re-inventing the guidelines. But it’s a good model for how we can do exactly what we want: yield the promise while we contain the peril.

We just had our first conference on AI ethics. I believe similar ideas will succeed there, but it’s not automatic. I think that’s the great challenge for humanity: to continue to yield the promise, and apply it to address human problems, while we keep the technology safe and contain the peril.

I’m personally optimistic. I think you have to be an optimist to be an entrepreneur, which is what I’ve been my whole life. You’d never start any project if you were aware of all the challenges you’ll encounter. But only optimism can succeed in the world. Optimism is not idle speculation about the future; it’s a self-fulfilling prophecy. And that’s what we’re trying to foster here.

— end of talk —


— notes —

AI = artificial intelligence