<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	xmlns:media="http://search.yahoo.com/mrss/"
	>

<channel>
	<title>essays &#8211; the Kurzweil Library</title>
	<atom:link href="https://www.writingsbyraykurzweil.com/essays/feed" rel="self" type="application/rss+xml" />
	<link>https://www.writingsbyraykurzweil.com</link>
	<description>Tracking breakthroughs in tech, science, and world progress.</description>
	<lastBuildDate>Thu, 11 Sep 2025 13:05:11 +0000</lastBuildDate>
		<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
		<item>
		<title>the Law of Accelerating Returns.</title>
		<link>https://www.writingsbyraykurzweil.com/the-law-of-accelerating-returns</link>
		<comments>https://www.writingsbyraykurzweil.com/the-law-of-accelerating-returns#respond</comments>
		<pubDate>Wed, 01 Jan 2025 06:00:15 +0000</pubDate>
								<dc:creator>Ray Kurzweil</dc:creator>
		
		
				<category><![CDATA[essays]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[IMAGE label :: essay: the Law of Accelerating Returns. author: by Ray Kurzweil date: January 2001 An introduction. You will get $40 trillion just by reading this essay and understanding what it says. For complete details, see below. (It&#8217;s true that authors will do just about anything to keep your attention, but I&#8217;m serious about [&#8230;]]]></description>
			<content:encoded><![CDATA[<p>IMAGE</p>
<hr />
<p>label ::</p>
<p>essay: the Law of Accelerating Returns.<br />
author: by Ray Kurzweil<br />
date: January 2001</p>
<hr />
<p>An introduction.</p>
<p>You will get $40 trillion just by reading this essay and understanding what it says. For complete details, see below. (It&#8217;s true that authors will do just about anything to keep your attention, but I&#8217;m serious about this statement. Until I return to a further explanation, however, do read the first sentence of this paragraph carefully.)</p>
<p>The future is not like the present.</p>
<p>Now back to the future &#8212; it&#8217;s widely misunderstood. Our forebears expected the future to be pretty much like their present, which had been pretty much like their past. Although exponential trends did exist a thousand years ago, they were at that very early stage where an exponential trend is so flat that it looks like no trend at all. So their lack of expectations was largely fulfilled. Today, in accordance with the common wisdom, everyone expects continuous tech progress and the social repercussions that follow. But the future will be far more surprising than most observers realize: few have truly internalized the implications of the fact that the rate of change itself is accelerating.</p>
<p>The intuitive linear view vs. the historical exponential view.</p>
<p>Most long range forecasts of technical feasibility in future time periods dramatically underestimate the power of future technology because they are based on what I call the &#8220;intuitive linear&#8221; view of technological progress rather than the &#8220;historical exponential view.&#8221; To express this another way, it isn&#8217;t the case that we&#8217;ll experience 100 years of progress in the 21st century. Rather we&#8217;ll witness on the order of twenty thousand years of progress &#8212; at today&#8217;s rate of progress, that is.</p>
<p>This disparity in outlook comes up frequently in a variety of contexts, for example, the discussion of the ethical issues that Bill Joy raised in his controversial Wired magazine feature story, </p>
<p>HALF</p>
<p>label ::</p>
<p>publication: Wired<br />
story: Why the future doesn&#8217;t need us.<br />
author: by Bill Joy<br />
date: April 2000</p>
<p>read :: story<br />
http://www.wired.com/wired/archive/8.04/joy.html</p>
<p>HALF</p>
<p>Bill and I have been frequently paired in a variety of venues as pessimist and optimist respectively. Although I&#8217;m expected to criticize Bill&#8217;s position, and indeed I do take issue with his prescription of relinquishment, I nonetheless usually end up defending Joy on the key issue of feasibility. Recently a Nobel Prize winning panelist dismissed Bill&#8217;s concerns, exclaiming that, &#8220;we&#8217;re not going to see self-replicating nanoengineered entities for a hundred years.&#8221; I pointed out that 100 years was indeed a reasonable estimate of the amount of technical progress required to achieve this particular milestone <em>at today&#8217;s rate of progress</em>. </p>
<p>But because we&#8217;re doubling the rate of progress every decade, we&#8217;ll see a century of progress&#8211;<em>at today&#8217;s rate</em>&#8211;in only 25 calendar years.</p>
<p>When people think of a future period, they intuitively assume that the current rate of progress will continue for future periods. However, careful consideration of the pace of technology shows that the rate of progress is not constant, but it is human nature to adapt to the changing pace, so the intuitive view is that the pace will continue at the current rate. Even for those of us who have been around long enough to experience how the pace increases over time, our unexamined intuition nonetheless provides the impression that progress changes at the rate that we have experienced recently. From the mathematician&#8217;s perspective, a primary reason for this is that an exponential curve approximates a straight line when viewed for a brief duration. So even though the rate of progress in the very recent past (e.g., this past year) is far greater than it was ten years ago (let alone a hundred or a thousand years ago), our memories are nonetheless dominated by our very recent experience. It is typical, therefore, that even sophisticated commentators, when considering the future, extrapolate the current pace of change over the next 10 years or 100 years to determine their expectations. This is why I call this way of looking at the future the &#8220;intuitive linear&#8221; view.</p>
<p>But a serious assessment of the history of technology shows that technological change is exponential. In exponential growth, we find that a key measurement such as computational power is multiplied by a constant factor for each unit of time (e.g., doubling every year) rather than just being added to incrementally. Exponential growth is a feature of any evolutionary process, of which technology is a primary example. One can examine the data in different ways, on different time scales, and for a wide variety of technologies ranging from electronic to biological, and the acceleration of progress and growth applies. Indeed, we find not just simple exponential growth, but &#8220;double&#8221; exponential growth, meaning that the rate of exponential growth is itself growing exponentially. These observations do not rely merely on an assumption of the continuation of Moore&#8217;s law (i.e., the exponential shrinking of transistor sizes on an integrated circuit), but are based on a rich model of diverse technological processes. What it clearly shows is that technology, particularly the pace of technological change, advances (at least) exponentially, not linearly, and has been doing so since the advent of technology, indeed since the advent of evolution on Earth.</p>
<p>I emphasize this point because it is the most important failure that would-be prognosticators make in considering future trends. Most technology forecasts ignore altogether this &#8220;historical exponential view&#8221; of technological progress. That is why people tend to overestimate what can be achieved in the short term (because we tend to leave out necessary details), but underestimate what can be achieved in the long term (because the exponential growth is ignored).</p>
<h2>The Law of Accelerating Returns</h2>
<p>We can organize these observations into what I call the law of accelerating returns as follows:</p>
<ul>
<li>Evolution applies positive feedback in that the more capable methods resulting from one stage of evolutionary progress are used to create the next stage. As a result, the rate of progress of an evolutionary process increases exponentially over time.</li>
<li>Over time, the &#8220;order&#8221; of the information embedded in the evolutionary process (i.e., the measure of how well the information fits a purpose, which in evolution is survival) increases.</li>
<li>A correlate of the above observation is that the &#8220;returns&#8221; of an evolutionary process (e.g., the speed, cost-effectiveness, or overall &#8220;power&#8221; of a process) increase exponentially over time.</li>
<li>In another positive feedback loop, as a particular evolutionary process (e.g., computation) becomes more effective (e.g., cost effective), greater resources are deployed toward the further progress of that process. This results in a second level of exponential growth (i.e., the rate of exponential growth itself grows exponentially).</li>
<li>Biological evolution is one such evolutionary process.</li>
<li>Technological evolution is another such evolutionary process. Indeed, the emergence of the first technology creating species resulted in the new evolutionary process of technology. Therefore, technological evolution is an outgrowth of&#8211;and a continuation of&#8211;biological evolution.</li>
<li>A specific paradigm (a method or approach to solving a problem, e.g., shrinking transistors on an integrated circuit as an approach to making more powerful computers) provides exponential growth until the method exhausts its potential. When this happens, a paradigm shift (i.e., a fundamental change in the approach) occurs, which enables exponential growth to continue.</li>
</ul>
<p>If we apply these principles at the highest level of evolution on Earth, the first step, the creation of cells, introduced the paradigm of biology. The subsequent emergence of DNA provided a digital method to record the results of evolutionary experiments. Then, the evolution of a species who combined rational thought with an opposable appendage (i.e., the thumb) caused a fundamental paradigm shift from biology to technology. The upcoming primary paradigm shift will be from biological thinking to a hybrid combining biological and nonbiological thinking. This hybrid will include &#8220;biologically inspired&#8221; processes resulting from the reverse engineering of biological brains.</p>
<p>If we examine the timing of these steps, we see that the process has continuously accelerated. The evolution of life forms required billions of years for the first steps (e.g., primitive cells); later on, progress accelerated. During the Cambrian explosion, major paradigm shifts took only tens of millions of years. Later on, humanoids developed over a period of millions of years, and Homo sapiens over a period of only hundreds of thousands of years.</p>
<p>With the advent of a technology-creating species, the exponential pace became too fast for evolution through DNA-guided protein synthesis and moved on to human-created technology. Technology goes beyond mere tool making; it is a process of creating ever more powerful technology using the tools from the previous round of innovation. In this way, human technology is distinguished from the tool making of other species. There is a record of each stage of technology, and each new stage of technology builds on the order of the previous stage.</p>
<p>The first technological steps&#8211;sharp edges, fire, the wheel&#8211;took tens of thousands of years. For people living in this era, there was little noticeable technological change in even a thousand years. By 1000 A.D., progress was much faster and a paradigm shift required only a century or two. In the nineteenth century, we saw more technological change than in the nine centuries preceding it. Then in the first twenty years of the twentieth century, we saw more advancement than in all of the nineteenth century. Now, paradigm shifts occur in only a few years time. The World Wide Web did not exist in anything like its present form just a few years ago; it didn&#8217;t exist at all a decade ago.<br />
<img decoding="async" src="https://www.writingsbyraykurzweil.com/images/chart01.jpg" alt="" vspace="10" /></p>
<p>The paradigm shift rate (i.e., the overall rate of technical progress) is currently doubling (approximately) every decade; that is, paradigm shift times are halving every decade (and the rate of acceleration is itself growing exponentially). So, the technological progress in the twenty-first century will be equivalent to what would require (in the linear view) on the order of 200 centuries. In contrast, the twentieth century saw only about 25 years of progress (again at today&#8217;s rate of progress) since we have been speeding up to current rates. So the twenty-first century will see almost a thousand times greater technological change than its predecessor.<br />
<img decoding="async" src="https://www.writingsbyraykurzweil.com/images/chart02.jpg" alt="" vspace="10" /></p>
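<p>The arithmetic above can be checked with a crude discrete model (mine, not an exact restatement of the essay&#8217;s): take the year-2000 pace as one &#8220;year of progress at today&#8217;s rate&#8221; per calendar year, and double the pace each decade going forward (halve it each decade going back).</p>

```python
# Crude discrete check of the doubling-per-decade arithmetic (my own model,
# not the essay's exact one). The year-2000 rate is defined as 1 "year of
# progress at today's rate" per calendar year.

def progress(rates):
    """Total progress over consecutive decades, in years-at-the-2000-rate.
    rates: one rate multiplier per 10-year block."""
    return sum(10 * r for r in rates)

# 21st century: the ten decades after 2000 run at rates 2, 4, ..., 2**10
c21 = progress(2 ** k for k in range(1, 11))

# 20th century: counting back from 2000, rates 1, 1/2, ..., 1/2**9
c20 = progress(2 ** -k for k in range(10))

print(round(c21), round(c20, 1), round(c21 / c20))
```

This simple model yields roughly 20,000 years for the twenty-first century, about 20 years for the twentieth, and a ratio of about 1,000&#8211;the same order of magnitude as the figures in the text.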
<h2>The Singularity Is Near</h2>
<p>To appreciate the nature and significance of the coming &#8220;singularity,&#8221; it is important to ponder the nature of exponential growth. Toward this end, I am fond of telling the tale of the inventor of chess and his patron, the emperor of China. In response to the emperor&#8217;s offer of a reward for his new beloved game, the inventor asked for a single grain of rice on the first square, two on the second square, four on the third, and so on. The Emperor quickly granted this seemingly benign and humble request. One version of the story has the emperor going bankrupt as the 63 doublings ultimately totaled 18 million trillion grains of rice. At ten grains of rice per square inch, this requires rice fields covering twice the surface area of the Earth, oceans included. Another version of the story has the inventor losing his head.</p>
<p>It should be pointed out that as the emperor and the inventor went through the first half of the chess board, things were fairly uneventful. The inventor was given spoonfuls of rice, then bowls of rice, then barrels. By the end of the first half of the chess board, the inventor had accumulated one large field&#8217;s worth (4 billion grains), and the emperor did start to take notice. It was as they progressed through the second half of the chessboard that the situation quickly deteriorated. Incidentally, with regard to the doublings of computation, that&#8217;s about where we stand now&#8211;there have been slightly more than 32 doublings of performance since the first programmable computers were invented during World War II.</p>
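<p>The chessboard figures above are easy to verify directly (a quick check of mine, not part of the original tale):</p>

```python
# Quick check of the chessboard arithmetic: one grain on the first square,
# doubling on each of the remaining 63 squares.
total = 2 ** 64 - 1        # grains after all 64 squares
first_half = 2 ** 32 - 1   # grains after the first 32 squares

print(f"{total:,}")        # "18 million trillion" grains
print(f"{first_half:,}")   # about 4 billion grains, "one large field's worth"
```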
<p>This is the nature of exponential growth. Although technology grows in the exponential domain, we humans live in a linear world. So technological trends are not noticed as small levels of technological power are doubled. Then seemingly out of nowhere, a technology explodes into view. For example, when the Internet went from 20,000 to 80,000 nodes over a two year period during the 1980s, this progress remained hidden from the general public. A decade later, when it went from 20 million to 80 million nodes in the same amount of time, the impact was rather conspicuous.</p>
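<p>A small sketch makes the point concrete: both episodes show the same exponential growth rate, but linear intuition sees only the absolute jumps.</p>

```python
# Same growth factor (4x over two years) in both episodes from the paragraph
# above, but wildly different absolute jumps -- which is all linear intuition
# notices.
for start, end in [(20_000, 80_000), (20_000_000, 80_000_000)]:
    print(f"{end / start:.0f}x growth, {end - start:,} new nodes")
```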
<p>As exponential growth continues to accelerate into the first half of the twenty-first century, it will appear to explode into infinity, at least from the limited and linear perspective of contemporary humans. The progress will ultimately become so fast that it will rupture our ability to follow it. It will literally get out of our control. The illusion that we have our hand &#8220;on the plug,&#8221; will be dispelled.</p>
<p>Can the pace of technological progress continue to speed up indefinitely? Is there not a point where humans are unable to think fast enough to keep up with it? With regard to unenhanced humans, clearly so. But what would a thousand scientists, each a thousand times more intelligent than human scientists today, and each operating a thousand times faster than contemporary humans (because the information processing in their primarily nonbiological brains is faster) accomplish? One year would be like a millennium. What would they come up with?</p>
<p>Well, for one thing, they would come up with technology to become even more intelligent (because their intelligence is no longer of fixed capacity). They would change their own thought processes to think even faster. When the scientists evolve to be a million times more intelligent and operate a million times faster, then an hour would result in a century of progress (in today&#8217;s terms).</p>
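<p>The speed-up arithmetic in the two paragraphs above checks out: at a million-fold speed-up, one wall-clock hour holds roughly a century of subjective work.</p>

```python
# Checking the million-fold speed-up claim: how many subjective years fit
# into one wall-clock hour?
hours_per_year = 24 * 365
subjective_years = 1_000_000 / hours_per_year
print(round(subjective_years))  # ~114 subjective years, i.e., about a century
```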
<p>This, then, is the Singularity. The Singularity is technological change so rapid and so profound that it represents a rupture in the fabric of human history. Some would say that we cannot comprehend the Singularity, at least with our current level of understanding, and that it is impossible, therefore, to look past its &#8220;event horizon&#8221; and make sense of what lies beyond.</p>
<p>My view is that despite our profound limitations of thought, constrained as we are today to a mere hundred trillion interneuronal connections in our biological brains, we nonetheless have sufficient powers of abstraction to make meaningful statements about the nature of life after the Singularity. Most importantly, it is my view that the intelligence that will emerge will continue to represent the human civilization, which is already a human-machine civilization. This will be the next step in evolution, the next high level paradigm shift.</p>
<p>To put the concept of Singularity into perspective, let&#8217;s explore the history of the word itself. Singularity is a familiar word meaning a unique event with profound implications. In mathematics, the term implies infinity, the explosion of value that occurs when dividing a constant by a number that gets closer and closer to zero. In physics, similarly, a singularity denotes an event or location of infinite power. At the center of a black hole, matter is so dense that its gravity is infinite. As nearby matter and energy are drawn into the black hole, an event horizon separates the region from the rest of the Universe. It constitutes a rupture in the fabric of space and time. The Universe itself is said to have begun with just such a Singularity.</p>
<p>In the 1950s, John von Neumann was quoted as saying that &#8220;the ever accelerating progress of technology&#8230;gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.&#8221; In the 1960s, I. J. Good wrote of an &#8220;intelligence explosion,&#8221; resulting from intelligent machines designing their next generation without human intervention. In 1986, Vernor Vinge, a mathematician and computer scientist at San Diego State University, wrote about a rapidly approaching technological &#8220;singularity&#8221; in his science fiction novel, <em>Marooned in Realtime.</em> Then in 1993, Vinge presented a paper to a NASA-organized symposium which described the Singularity as an impending event resulting primarily from the advent of &#8220;entities with greater than human intelligence,&#8221; which Vinge saw as the harbinger of a run-away phenomenon.</p>
<p>From my perspective, the Singularity has many faces. It represents the nearly vertical phase of exponential growth where the rate of growth is so extreme that technology appears to be growing at infinite speed. Of course, from a mathematical perspective, there is no discontinuity, no rupture, and the growth rates remain finite, albeit extraordinarily large. But from our <em>currently</em> limited perspective, this imminent event appears to be an acute and abrupt break in the continuity of progress. However, I emphasize the word &#8220;currently,&#8221; because one of the salient implications of the Singularity will be a change in the nature of our ability to understand. In other words, we will become vastly smarter as we merge with our technology.</p>
<p>When I wrote my first book, <em>The Age of Intelligent Machines</em>, in the 1980s, I ended the book with the specter of the emergence of machine intelligence greater than human intelligence, but found it difficult to look beyond this event horizon. Now having thought about its implications for the past 20 years, I feel that we are indeed capable of understanding the many facets of this threshold, one that will transform all spheres of human life.</p>
<p>Consider a few examples of the implications. The bulk of our experiences will shift from real reality to virtual reality. Most of the intelligence of our civilization will ultimately be nonbiological, which by the end of this century will be trillions of trillions of times more powerful than human intelligence. However, to address often expressed concerns, this does not imply the end of biological intelligence, even if thrown from its perch of evolutionary superiority. Moreover, it is important to note that the nonbiological forms will be derivative of biological design. In other words, our civilization will remain human, indeed in many ways more exemplary of what we regard as human than it is today, although our understanding of the term will move beyond its strictly biological origins.</p>
<p>Many observers have nonetheless expressed alarm at the emergence of forms of nonbiological intelligence superior to human intelligence. The potential to augment our own intelligence through intimate connection with other thinking mediums does not necessarily alleviate the concern, as some people have expressed the wish to remain &#8220;unenhanced&#8221; while at the same time keeping their place at the top of the intellectual food chain. My view is that the likely outcome is that on the one hand, from the perspective of biological humanity, these superhuman intelligences will appear to be their transcendent servants, satisfying their needs and desires. On the other hand, fulfilling the wishes of a revered biological legacy will occupy only a trivial portion of the intellectual power that the Singularity will bring.</p>
<p>Needless to say, the Singularity will transform all aspects of our lives, social, sexual, and economic, which I explore herewith.</p>
<h2>Wherefrom Moore&#8217;s Law</h2>
<p>Before considering further the implications of the Singularity, let&#8217;s examine the wide range of technologies that are subject to the law of accelerating returns. The exponential trend that has gained the greatest public recognition has become known as &#8220;Moore&#8217;s Law.&#8221; Gordon Moore, one of the inventors of integrated circuits, and then Chairman of Intel, noted in the mid 1970s that we could squeeze twice as many transistors on an integrated circuit every 24 months. Given that the electrons have less distance to travel, the circuits also run twice as fast, providing an overall quadrupling of computational power.</p>
<p>After sixty years of devoted service, Moore&#8217;s Law will die a dignified death no later than the year 2019. By that time, transistor features will be just a few atoms in width, and the strategy of ever finer photolithography will have run its course. So, will that be the end of the exponential growth of computing?</p>
<p>Don&#8217;t bet on it.</p>
<p>If we plot the speed (in instructions per second) per $1000 (in constant dollars) of 49 famous calculators and computers spanning the entire twentieth century, we note some interesting observations.</p>
<h2>Moore&#8217;s Law Was Not the First, but the Fifth Paradigm To Provide Exponential Growth of Computing</h2>
<p>Each time one paradigm runs out of steam, another picks up the pace<br />
<img decoding="async" src="https://www.writingsbyraykurzweil.com/images/chart03.jpg" alt="" vspace="10" /></p>
<p>It is important to note that Moore&#8217;s Law of Integrated Circuits was not the first, but the fifth paradigm to provide accelerating price-performance. Computing devices have been consistently multiplying in power (per unit of time) from the mechanical calculating devices used in the 1890 U.S. Census, to Turing&#8217;s relay-based &#8220;Robinson&#8221; machine that cracked the Nazi enigma code, to the CBS vacuum tube computer that predicted the election of Eisenhower, to the transistor-based machines used in the first space launches, to the integrated-circuit-based personal computer which I used to dictate (and automatically transcribe) this essay.</p>
<p>But I noticed something else surprising. When I plotted the 49 machines on an exponential graph (where a straight line means exponential growth), I didn&#8217;t get a straight line. What I got was another exponential curve. In other words, there&#8217;s exponential growth in the rate of exponential growth. Computer speed (per unit cost) doubled every three years between 1910 and 1950, doubled every two years between 1950 and 1966, and is now doubling every year.</p>
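<p>The shrinking doubling times above can be sketched numerically (the era boundaries are illustrative, taken loosely from the text): when the doubling time itself contracts, the logarithm of speed grows faster than any straight line.</p>

```python
# Sketch of the "double exponential" observation: accumulate doublings across
# eras with shrinking doubling times (illustrative spans: ~1910-1950, 1950-1966,
# 1966-2000).
import math

speed = 1.0
for span_years, doubling_time in [(40, 3), (16, 2), (34, 1)]:
    speed *= 2 ** (span_years / doubling_time)

doublings = math.log2(speed)
print(round(doublings, 1))  # ~55 doublings, vs. 30 had the 3-year pace held for 90 years
```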
<p>But where does Moore&#8217;s Law come from? What is behind this remarkably predictable phenomenon? I have seen relatively little written about the ultimate source of this trend. Is it just &#8220;a set of industry expectations and goals,&#8221; as Randy Isaac, head of basic science at IBM contends? Or is there something more profound going on?</p>
<p>In my view, it is one manifestation (among many) of the exponential growth of the evolutionary process that is technology. The exponential growth of computing is a marvelous quantitative example of the exponentially growing returns from an evolutionary process. We can also express the exponential growth of computing in terms of an accelerating pace: it took ninety years to achieve the first MIPS (million instructions per second) per thousand dollars, now we add one MIPS per thousand dollars every day.</p>
<p>Moore&#8217;s Law narrowly refers to the number of transistors on an integrated circuit of fixed size, and sometimes has been expressed even more narrowly in terms of transistor feature size. But rather than feature size (which is only one contributing factor), or even number of transistors, I think the most appropriate measure to track is computational speed per unit cost. This takes into account many levels of &#8220;cleverness&#8221; (i.e., innovation, which is to say, technological evolution). In addition to all of the innovation in integrated circuits, there are multiple layers of innovation in computer design, e.g., pipelining, parallel processing, instruction look-ahead, instruction and memory caching, and many others.</p>
<p>From the above chart, we see that the exponential growth of computing didn&#8217;t start with integrated circuits (around 1958), or even transistors (around 1947), but goes back to the electromechanical calculators used in the 1890 and 1900 U.S. Census. This chart spans at least five distinct paradigms of computing, of which Moore&#8217;s Law pertains to only the latest one.</p>
<p>It&#8217;s obvious what the sixth paradigm will be after Moore&#8217;s Law runs out of steam during the second decade of this century. Chips today are flat (although it does require up to 20 layers of material to produce one layer of circuitry). Our brain, in contrast, is organized in three dimensions. We live in a three dimensional world, why not use the third dimension? The human brain actually uses a very inefficient electrochemical, digitally controlled analog computational process. The bulk of the calculations are done in the interneuronal connections at a speed of only about 200 calculations per second (in each connection), which is about ten million times slower than contemporary electronic circuits. But the brain gains its prodigious powers from its extremely parallel organization <em>in three dimensions</em>. There are many technologies in the wings that build circuitry in three dimensions. Nanotubes, for example, which are already working in laboratories, build circuits from hexagonal arrays of carbon atoms. One cubic inch of nanotube circuitry would be a million times more powerful than the human brain. There are more than enough new computing technologies now being researched, including three-dimensional silicon chips, optical computing, crystalline computing, DNA computing, and quantum computing, to keep the law of accelerating returns as applied to computation going for a long time.</p>
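<p>The brain figures above imply a simple back-of-envelope capacity estimate (the figures are the essay&#8217;s; the multiplication is mine):</p>

```python
# Back-of-envelope brain capacity implied by the paragraph above:
# ~100 trillion interneuronal connections, each performing ~200 calc/sec.
connections = 100e12   # ~10**14 connections
rate = 200             # calculations per second in each connection
print(f"{connections * rate:.0e} calc/sec")  # 2e+16 calculations per second
```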
<p>Thus the (double) exponential growth of computing is broader than Moore&#8217;s Law, which refers to only one of its paradigms. And this accelerating growth of computing is, in turn, part of the yet broader phenomenon of the accelerating pace of any evolutionary process. Observers are quick to criticize extrapolations of an exponential trend on the basis that the trend is bound to run out of &#8220;resources.&#8221; The classical example is when a species happens upon a new habitat (e.g., rabbits in Australia), the species&#8217; numbers will grow exponentially for a time, but then hit a limit when resources such as food and space run out.</p>
<p>But the resources underlying the exponential growth of an evolutionary process are relatively unbounded:</p>
<ol>
<li>(i) The (ever growing) order of the evolutionary process itself. Each stage of evolution provides more powerful tools for the next. In biological evolution, the advent of DNA allowed more powerful and faster evolutionary &#8220;experiments.&#8221; Later, setting the &#8220;designs&#8221; of animal body plans during the Cambrian explosion allowed rapid evolutionary development of other body organs such as the brain. Or to take a more recent example, the advent of computer assisted design tools allows rapid development of the next generation of computers.</li>
<li>(ii) The &#8220;chaos&#8221; of the environment in which the evolutionary process takes place and which provides the options for further diversity. In biological evolution, diversity enters the process in the form of mutations and ever changing environmental conditions. In technological evolution, human ingenuity combined with ever changing market conditions keep the process of innovation going.</li>
</ol>
<p>The maximum potential of matter and energy to contain intelligent processes is a valid issue. But according to my models, we won&#8217;t approach those limits during this century (but this will become an issue within a couple of centuries).</p>
<p>We also need to distinguish between the &#8220;S&#8221; curve (an &#8220;S&#8221; stretched to the right, comprising very slow, virtually unnoticeable growth&#8211;followed by very rapid growth&#8211;followed by a flattening out as the process approaches an asymptote) that is characteristic of any specific technological paradigm and the continuing exponential growth that is characteristic of the ongoing evolutionary process of technology. Specific paradigms, such as Moore&#8217;s Law, do ultimately reach levels at which exponential growth is no longer feasible. Thus Moore&#8217;s Law is an S curve. But the growth of computation is an ongoing exponential (at least until we &#8220;saturate&#8221; the Universe with the intelligence of our human-machine civilization, but that will not be a limit in this coming century). In accordance with the law of accelerating returns, paradigm shift, also called innovation, turns the S curve of any specific paradigm into a continuing exponential. A new paradigm (e.g., three-dimensional circuits) takes over when the old paradigm approaches its natural limit. This has already happened at least four times in the history of computation. This difference also distinguishes the tool making of non-human species, in which the mastery of a tool-making (or using) skill by each animal is characterized by an abruptly ending S shaped learning curve, versus human-created technology, which has followed an exponential pattern of growth and acceleration since its inception.</p>
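<p>A toy model can illustrate the claim above (my own sketch, not the essay&#8217;s model): treat each paradigm as a logistic S curve, give successive paradigms exponentially larger ceilings, and their sum keeps overall capability growing roughly exponentially even as each individual curve flattens.</p>

```python
# Toy model: a cascade of S curves (logistics) whose ceilings grow 10x per
# paradigm; the sum approximates a continuing exponential.
import math

def logistic(t, ceiling, midpoint):
    """One paradigm's S curve: slow start, rapid growth, flattening at ceiling."""
    return ceiling / (1 + math.exp(-(t - midpoint)))

def capability(t, paradigms=5):
    # paradigm k: ceiling 10**k, centered at t = 10*k
    return sum(logistic(t, 10 ** k, 10 * k) for k in range(paradigms))

for t in (10, 20, 30, 40):
    print(t, round(capability(t), 1))  # roughly 10x per "era"
```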
<h2>DNA Sequencing, Memory, Communications, the Internet, and Miniaturization</h2>
<p>This &#8220;law of accelerating returns&#8221; applies to all of technology, indeed to any true evolutionary process, and can be measured with remarkable precision in information-based technologies. There are a great many examples of the exponential growth implied by the law of accelerating returns in technologies as varied as DNA sequencing, communication speeds, electronics of all kinds, and even in the rapidly shrinking size of technology. The Singularity results not from the exponential explosion of computation alone, but rather from the interplay and myriad synergies that will result from manifold intertwined technological revolutions. Also, keep in mind that every point on the exponential growth curves underlying this panoply of technologies (see the graphs below) represents an intense human drama of innovation and competition. It is remarkable therefore that these chaotic processes result in such smooth and predictable exponential trends.</p>
<p>For example, when the human genome scan started fourteen years ago, critics pointed out that given the speed with which the genome could then be scanned, it would take thousands of years to finish the project. Yet the fifteen-year project was nonetheless completed slightly ahead of schedule.<br />
<img decoding="async" src="https://www.writingsbyraykurzweil.com/images/chart04.jpg" alt="" vspace="10" /></p>
<p><img decoding="async" src="https://www.writingsbyraykurzweil.com/images/chart05.jpg" alt="" vspace="10" /></p>
<p><img decoding="async" src="https://www.writingsbyraykurzweil.com/images/chart06.jpg" alt="" vspace="10" /></p>
<p>Of course, we expect to see exponential growth in electronic memories such as RAM.<br />
<img decoding="async" src="https://www.writingsbyraykurzweil.com/images/chart07.jpg" alt="" vspace="10" /></p>
<h2>Notice How Exponential Growth Continued through Paradigm Shifts from Vacuum Tubes to Discrete Transistors to Integrated Circuits</h2>
<p>However, growth in magnetic memory is not primarily a matter of Moore&#8217;s Law, but includes advances in mechanical and electromagnetic systems.<br />
<img decoding="async" src="https://www.writingsbyraykurzweil.com/images/chart08.jpg" alt="" vspace="10" /></p>
<p>Exponential growth in communications technology has been even more explosive than in computation and is no less significant in its implications. Again, this progression involves far more than just shrinking transistors on an integrated circuit, but includes accelerating advances in fiber optics, optical switching, electromagnetic technologies, and others.<br />
<img decoding="async" src="https://www.writingsbyraykurzweil.com/images/chart09.jpg" alt="" vspace="10" /></p>
<p><img decoding="async" src="https://www.writingsbyraykurzweil.com/images/chart10.jpg" alt="" vspace="10" /></p>
<p>Notice the cascade of smaller &#8220;S&#8221; curves</p>
<p>Note that in the above two charts we can actually see the progression of &#8220;S&#8221; curves: the acceleration fostered by a new paradigm, followed by a leveling off as the paradigm runs out of steam, followed by renewed acceleration through paradigm shift.<br />
<img decoding="async" src="https://www.writingsbyraykurzweil.com/images/chart11.jpg" alt="" vspace="10" /></p>
<p><img decoding="async" src="https://www.writingsbyraykurzweil.com/images/chart12.jpg" alt="" vspace="10" /></p>
<p>The following two charts show the overall growth of the Internet based on the number of hosts. These two charts plot the same data, but one is on an exponential axis and the other is linear. As I pointed out earlier, whereas technology progresses in the exponential domain, we experience it in the linear domain. So from the perspective of most observers, nothing was happening until the mid-1990s when seemingly out of nowhere, the World Wide Web and email exploded into view. But the emergence of the Internet into a worldwide phenomenon was readily predictable much earlier by examining the exponential trend data.<br />
<img decoding="async" src="https://www.writingsbyraykurzweil.com/images/chart13.jpg" alt="" vspace="10" /></p>
<p><img decoding="async" src="https://www.writingsbyraykurzweil.com/images/chart14.jpg" alt="" vspace="10" /></p>
<p>Notice how the explosion of the Internet appears to be a surprise on the linear chart, but was perfectly predictable on the exponential chart</p>
<p>Ultimately we will get away from the tangle of wires in our cities and in our lives through wireless communication, the power of which is doubling every 10 to 11 months.<br />
<img decoding="async" src="https://www.writingsbyraykurzweil.com/images/chart15.jpg" alt="" vspace="10" /></p>
<p>Another technology that will have profound implications for the twenty-first century is the pervasive trend toward making things smaller, i.e., miniaturization. The salient implementation sizes of a broad range of technologies, both electronic and mechanical, are shrinking, also at a double exponential rate. At present, we are shrinking technology by a factor of approximately 5.6 per linear dimension per decade.<br />
<img decoding="async" src="https://www.writingsbyraykurzweil.com/images/chart16.jpg" alt="" vspace="10" /></p>
<p><img decoding="async" src="https://www.writingsbyraykurzweil.com/images/chart17.jpg" alt="" vspace="10" /></p>
<p><img decoding="async" src="https://www.writingsbyraykurzweil.com/images/chart18.jpg" alt="" vspace="10" /></p>
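<p>As a quick arithmetic aside, a factor of 5.6 per linear dimension per decade implies a yearly linear shrink of about 1.19x, and a volumetric shrink of roughly 176x per decade:</p>

```python
# The essay's rate: technology shrinks ~5.6x per linear dimension per decade.
linear_per_decade = 5.6

linear_per_year = linear_per_decade ** (1 / 10)   # implied yearly linear factor
volume_per_decade = linear_per_decade ** 3        # implied volumetric factor

print(round(linear_per_year, 2))   # 1.19
print(round(volume_per_decade))    # 176
```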
<h2>The Exponential Growth of Computation Revisited</h2>
<p>If we view the exponential growth of computation in its proper perspective as one example of the pervasiveness of the exponential growth of information-based technology, that is, as one example of many of the law of accelerating returns, then we can confidently predict its continuation.<br />
<img decoding="async" src="https://www.writingsbyraykurzweil.com/images/chart19.jpg" alt="" vspace="10" /></p>
<p>In the accompanying sidebar, I include a simplified mathematical model of the law of accelerating returns as it pertains to the (double) exponential growth of computing. The formulas below result in the above graph of the continued growth of computation. This graph matches the available data for the twentieth century through all five paradigms and provides projections for the twenty-first century. Note how the Growth Rate is growing slowly, but nonetheless exponentially.</p>
<h2>The Law of Accelerating Returns Applied to the Growth of Computation</h2>
<p>The following provides a brief overview of the law of accelerating returns as it applies to the double exponential growth of computation. This model considers the impact of the growing power of the technology to foster its own next generation. For example, with more powerful computers and related technology, we have the tools and the knowledge to design yet more powerful computers, and to do so more quickly.</p>
<p>Note that the data for the year 2000 and beyond assume neural net connection calculations as it is expected that this type of calculation will ultimately dominate, particularly in emulating human brain functions. This type of calculation is less expensive than conventional (e.g., Pentium III / IV) calculations by a factor of at least 100 (particularly if implemented using digitally controlled analog electronics, which would correspond well to the brain&#8217;s digitally controlled analog electrochemical processes). A factor of 100 translates into approximately 6 years (today) and less than 6 years later in the twenty-first century.</p>
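<p>The &#8220;factor of 100 translates into approximately 6 years&#8221; conversion can be checked with a quick calculation; the roughly one-year doubling time for price-performance is taken from the data discussed later in this essay:</p>

```python
import math

# Assumed from the essay's data: computational price-performance was doubling
# roughly once a year by the late 1990s.
doubling_time_years = 1.0

# A 100x cost advantage is worth log2(100) doublings of progress.
doublings = math.log2(100)
years_of_progress = doublings * doubling_time_years

print(round(years_of_progress, 1))   # 6.6 -- "approximately 6 years"
```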
<p>My estimate of brain capacity is 100 billion neurons times an average 1,000 connections per neuron (with the calculations taking place primarily in the connections) times 200 calculations per second. Although these estimates are conservatively high, one can find higher and lower estimates. However, even much higher (or lower) estimates by orders of magnitude only shift the prediction by a relatively small number of years.</p>
<p>Some prominent dates from this analysis include the following:</p>
<ul>
<li>We achieve one Human Brain capability (2 * 10^16 cps) for $1,000 around the year 2023.</li>
<li>We achieve one Human Brain capability (2 * 10^16 cps) for one cent around the year 2037.</li>
<li>We achieve one Human Race capability (2 * 10^26 cps) for $1,000 around the year 2049.</li>
<li>We achieve one Human Race capability (2 * 10^26 cps) for one cent around the year 2059.</li>
</ul>
<p>The model considers the following variables:</p>
<ul>
<li>V: Velocity (i.e., power) of computing (measured in CPS/unit cost)</li>
<li>W: World Knowledge as it pertains to designing and building computational devices</li>
<li>t: Time</li>
</ul>
<p>The assumptions of the model are:</p>
<ul>
<li>(1) V = C1 * W</li>
</ul>
<p>In other words, computer power is a linear function of the knowledge of how to build computers. This is actually a conservative assumption. In general, innovations improve V (computer power) by a multiple, not in an additive way. Independent innovations multiply each other&#8217;s effect. For example, a circuit advance such as CMOS, a more efficient IC wiring methodology, and a processor innovation such as pipelining all increase V by independent multiples.</p>
<ul>
<li>(2) W = C2 * Integral (0 to t) V</li>
</ul>
<p>In other words, W (knowledge) is cumulative, and the instantaneous increment to knowledge is proportional to V.</p>
<p>This gives us:</p>
<ul>
<li>W = C1 * C2 * Integral (0 to t) W</li>
<li>W = C1 * C2 * C3 ^ (C4 * t)</li>
<li>V = C1 ^ 2 * C2 * C3 ^ (C4 * t)</li>
<li>(Note on notation: a^b means a raised to the b power.)</li>
</ul>
<p>Simplifying the constants, we get:</p>
<ul>
<li>V = Ca * Cb ^ (Cc * t)</li>
</ul>
<p>So this is a formula for &#8220;accelerating&#8221; (i.e., exponentially growing) returns, a &#8220;regular Moore&#8217;s Law.&#8221;</p>
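<p>The closed form above can be checked numerically: integrating assumption (2) with assumption (1) substituted in produces a V that multiplies by a constant factor over every equal time interval, i.e., a single exponential. A minimal sketch (the constants are illustrative, not fitted to any data):</p>

```python
import math

C1, C2 = 0.5, 0.1                 # illustrative constants, not fitted to data
dt = 1e-4
sample_every = 100_000            # record one sample per 10 time units

W = 1.0                           # initial "world knowledge"
trace = []
for i in range(400_001):
    V = C1 * W                    # assumption (1): V = C1 * W
    if i % sample_every == 0:
        trace.append(V)
    W += C2 * V * dt              # assumption (2): dW/dt = C2 * V

# Exponential growth: V multiplies by the same factor over each interval.
ratios = [trace[k + 1] / trace[k] for k in range(len(trace) - 1)]
expected = math.exp(C1 * C2 * 10)      # analytic per-interval factor, ~1.649
print([round(r, 3) for r in ratios], round(expected, 3))
```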
<p>As I mentioned above, the data shows exponential growth in the rate of exponential growth. (We doubled computer power every three years early in the twentieth century, every two years in the middle of the century, and close to every one year during the 1990s.)</p>
<p>Let&#8217;s factor in another exponential phenomenon, which is the growing resources for computation. Not only is each (constant cost) device getting more powerful as a function of W, but the resources deployed for computation are also growing exponentially.</p>
<p>We now have:</p>
<ul>
<li>N: Expenditures for computation</li>
<li>V = C1 * W (as before)</li>
<li>N = C4 ^ (C5 * t) (Expenditure for computation is growing at its own exponential rate)</li>
<li>W = C2 * Integral(0 to t) (N * V)</li>
</ul>
<p>As before, world knowledge is accumulating, and the instantaneous increment is proportional to the amount of computation, which equals the resources deployed for computation (N) times the power of each (constant cost) device.</p>
<p>This gives us:</p>
<ul>
<li>W = C1 * C2 * Integral(0 to t) (C4 ^ (C5 * t) * W)</li>
<li>W = C1 * C2 * (C3 ^ (C6 * t)) ^ (C7 * t)</li>
<li>V = C1 ^ 2 * C2 * (C3 ^ (C6 * t)) ^ (C7 * t)</li>
</ul>
<p>Simplifying the constants, we get:</p>
<ul>
<li>V = Ca * (Cb ^ (Cc * t)) ^ (Cd * t)</li>
</ul>
<p>This is a double exponential&#8211;an exponential curve in which the rate of exponential growth is growing at a different exponential rate.</p>
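<p>Numerically integrating this extended model shows the qualitative difference: with a single exponential, log(V) gains equal increments over equal time intervals; here the increments themselves keep growing, which is the double exponential. A minimal sketch (constants again illustrative, not fitted):</p>

```python
import math

C1, C2, C5 = 0.5, 0.1, 0.05       # illustrative constants, not fitted to data
dt = 1e-4
sample_every = 100_000            # record one sample per 10 time units

W = 1.0
log_v = []
for i in range(400_001):
    t = i * dt
    V = C1 * W                    # V = C1 * W, as before
    N = math.exp(C5 * t)          # N: exponentially growing resources
    if i % sample_every == 0:
        log_v.append(math.log(V))
    W += C2 * N * V * dt          # dW/dt = C2 * N * V

# The per-interval gains in log(V) grow over time: the growth rate itself grows.
increments = [log_v[k + 1] - log_v[k] for k in range(len(log_v) - 1)]
print([round(x, 2) for x in increments])   # strictly increasing
```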
<p>Now let&#8217;s consider the real-world data for actual calculating devices and computers during the twentieth century:</p>
<ul>
<li>CPS/$1K: Calculations Per Second for $1,000</li>
</ul>
<p>Twentieth-century computing data matches:</p>
<ul>
<li>CPS/$1K = 10^(6.00*((20.40/6.00)^((Year-1900)/100))-11.00)</li>
</ul>
<p>We can determine the growth rate over a period of time:</p>
<ul>
<li>Growth Rate = 10^((LOG(CPS/$1K for Current Year) &#8211; LOG(CPS/$1K for Previous Year))/(Current Year &#8211; Previous Year))</li>
<li>Human Brain = 100 Billion (10^11) neurons * 1000 (10^3) Connections/Neuron * 200 (2 * 10^2) Calculations Per Second Per Connection = 2 * 10^16 Calculations Per Second</li>
<li>Human Race = 10 Billion (10^10) Human Brains = 2 * 10^26 Calculations Per Second</li>
</ul>
<p>These formulas produce the graph above.</p>
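<p>The fitted curve and the capacity estimates above are enough to reproduce the milestone dates listed earlier. A minimal sketch using a simple year-by-year search; the formula and constants are taken directly from this section:</p>

```python
def cps_per_1000_dollars(year: float) -> float:
    """The essay's fitted curve for twentieth-century price-performance."""
    return 10 ** (6.00 * (20.40 / 6.00) ** ((year - 1900) / 100) - 11.00)

HUMAN_BRAIN = 2e16   # cps: 1e11 neurons * 1e3 connections/neuron * 200 calc/sec
HUMAN_RACE = 2e26    # cps: 1e10 human brains

def first_year_reaching(target_cps: float, dollars: float = 1000.0) -> int:
    """First calendar year at which the curve buys target_cps for the budget."""
    year = 1900
    while cps_per_1000_dollars(year) * (dollars / 1000.0) < target_cps:
        year += 1
    return year

print(first_year_reaching(HUMAN_BRAIN))          # 2024 -- "around the year 2023"
print(first_year_reaching(HUMAN_BRAIN, 0.01))    # 2038 -- "around the year 2037"
print(first_year_reaching(HUMAN_RACE))           # 2050 -- "around the year 2049"
print(first_year_reaching(HUMAN_RACE, 0.01))     # 2060 -- "around the year 2059"
```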
<p>Already, IBM&#8217;s &#8220;Blue Gene&#8221; supercomputer, now being built and scheduled to be completed by 2005, is projected to provide 1 million billion calculations per second (i.e., one billion megaflops). This is already one twentieth of the capacity of the human brain, which I estimate at a conservatively high 20 million billion calculations per second (100 billion neurons times 1,000 connections per neuron times 200 calculations per second per connection). In line with my earlier predictions, supercomputers will achieve one human brain capacity by 2010, and personal computers will do so by around 2020. By 2030, it will take a village of human brains (around a thousand) to match $1000 of computing. By 2050, $1000 of computing will equal the processing power of all human brains on Earth. Of course, this only includes those brains still using carbon-based neurons. While human neurons are wondrous creations in a way, we wouldn&#8217;t (and don&#8217;t) design computing circuits the same way. Our electronic circuits are already more than ten million times faster than a neuron&#8217;s electrochemical processes. Most of the complexity of a human neuron is devoted to maintaining its life support functions, not its information processing capabilities. Ultimately, we will need to port our mental processes to a more suitable computational substrate. Then our minds won&#8217;t have to stay so small, being constrained as they are today to a mere hundred trillion neural connections each operating at a ponderous 200 digitally controlled analog calculations per second.</p>
<h2>The Software of Intelligence</h2>
<p>So far, I&#8217;ve been talking about the hardware of computing. The software is even more salient. One of the principal assumptions underlying the expectation of the Singularity is the ability of nonbiological mediums to emulate the richness, subtlety, and depth of human thinking. Achieving the computational capacity of the human brain, or even villages and nations of human brains will not automatically produce human levels of capability. By human levels I include all the diverse and subtle ways in which humans are intelligent, including musical and artistic aptitude, creativity, physically moving through the world, and understanding and responding appropriately to emotion. The requisite hardware capacity is a necessary but not sufficient condition. The organization and content of these resources&#8211;the software of intelligence&#8211;is also critical.</p>
<p>Before addressing this issue, it is important to note that once a computer achieves a human level of intelligence, it will necessarily soar past it. A key advantage of nonbiological intelligence is that machines can easily share their knowledge. If I learn French, or read War and Peace, I can&#8217;t readily download that learning to you. You have to acquire that scholarship the same painstaking way that I did. My knowledge, embedded in a vast pattern of neurotransmitter concentrations and interneuronal connections, cannot be quickly accessed or transmitted. But we won&#8217;t leave out quick downloading ports in our nonbiological equivalents of human neuron clusters. When one computer learns a skill or gains an insight, it can immediately share that wisdom with billions of other machines.</p>
<p>As a contemporary example, we spent years teaching one research computer how to recognize continuous human speech. We exposed it to thousands of hours of recorded speech, corrected its errors, and patiently improved its performance. Finally, it became quite adept at recognizing speech (I dictated most of my recent book to it). Now if you want your own personal computer to recognize speech, it doesn&#8217;t have to go through the same process; you can just download the fully trained patterns in seconds. Ultimately, billions of nonbiological entities can be masters of all human- and machine-acquired knowledge.</p>
<p>In addition, computers are potentially millions of times faster than human neural circuits. A computer can also remember billions or even trillions of facts perfectly, while we are hard pressed to remember a handful of phone numbers. The combination of human level intelligence in a machine with a computer&#8217;s inherent superiority in the speed, accuracy, and sharing ability of its memory will be formidable.</p>
<p>There are a number of compelling scenarios to achieve higher levels of intelligence in our computers, and ultimately human levels and beyond. We will be able to evolve and train a system combining massively parallel neural nets with other paradigms to understand language and model knowledge, including the ability to read and model the knowledge contained in written documents. Unlike many contemporary &#8220;neural net&#8221; machines, which use mathematically simplified models of human neurons, some contemporary neural nets are already using highly detailed models of human neurons, including detailed nonlinear analog activation functions and other relevant details. Although the ability of today&#8217;s computers to extract and learn knowledge from natural language documents is limited, their capabilities in this domain are improving rapidly. Computers will be able to read on their own, understanding and modeling what they have read, by the second decade of the twenty-first century. We can then have our computers read all of the world&#8217;s literature&#8211;books, magazines, scientific journals, and other available material. Ultimately, the machines will gather knowledge on their own by venturing out on the web, or even into the physical world, drawing from the full spectrum of media and information services, and sharing knowledge with each other (which machines can do far more easily than their human creators).</p>
<h2>Reverse Engineering the Human Brain</h2>
<p>The most compelling scenario for mastering the software of intelligence is to tap into the blueprint of the best example we can get our hands on of an intelligent process. There is no reason why we cannot reverse engineer the human brain, and essentially copy its design. Although it took its original designer several billion years to develop, it&#8217;s readily available to us, and not (yet) copyrighted. Although there&#8217;s a skull around the brain, it is not hidden from our view.</p>
<p>The most immediately accessible way to accomplish this is through destructive scanning: we take a frozen brain, preferably one frozen just slightly before rather than slightly after it was going to die anyway, and examine one brain layer&#8211;one very thin slice&#8211;at a time. We can readily see every neuron and every connection and every neurotransmitter concentration represented in each synapse-thin layer.</p>
<p>Human brain scanning has already started. A condemned killer allowed his brain and body to be scanned and you can access all 10 billion bytes of him on the Internet at <a href="http://www.nlm.nih.gov/research/visible/visible_human.html" target="_new" rel="noopener">http://www.nlm.nih.gov/research/visible/visible_human.html</a>.</p>
<p>He has a 25 billion byte female companion on the site as well in case he gets lonely. This scan is not high enough in resolution for our purposes, but then, we probably don&#8217;t want to base our templates of machine intelligence on the brain of a convicted killer, anyway.</p>
<p>Scanning a frozen brain is feasible today, albeit not yet at sufficient speed or bandwidth; but again, the law of accelerating returns will provide the requisite speed of scanning, just as it did for the human genome scan. Carnegie Mellon University&#8217;s Andreas Nowatzyk plans to scan the nervous system of the brain and body of a mouse with a resolution of less than 200 nanometers, which is getting very close to the resolution needed for reverse engineering.</p>
<p>We also have noninvasive scanning techniques today, including high-resolution magnetic resonance imaging (MRI) scans, optical imaging, near-infrared scanning, and other technologies which are capable in certain instances of resolving individual somas, or neuron cell bodies. Brain scanning technologies are also increasing their resolution with each new generation, just what we would expect from the law of accelerating returns. Future generations will enable us to resolve the connections between neurons and to peer inside the synapses and record the neurotransmitter concentrations.<br />
<img decoding="async" src="https://www.writingsbyraykurzweil.com/images/chart20.jpg" alt="" vspace="10" /></p>
<p><img decoding="async" src="https://www.writingsbyraykurzweil.com/images/chart21.jpg" alt="" vspace="10" /></p>
<p><img decoding="async" src="https://www.writingsbyraykurzweil.com/images/chart22.jpg" alt="" vspace="10" /></p>
<p>We can peer inside someone&#8217;s brain today with noninvasive scanners, which are increasing their resolution with each new generation of this technology. There are a number of technical challenges in accomplishing this, including achieving suitable resolution, bandwidth, lack of vibration, and safety. For a variety of reasons it is easier to scan the brain of someone recently deceased than of someone still living. It is easier to get someone deceased to sit still, for one thing. But noninvasively scanning a living brain will ultimately become feasible as MRI, optical, and other scanning technologies continue to improve in resolution and speed.</p>
<h2>Scanning from Inside</h2>
<p>Although noninvasive means of scanning the brain from outside the skull are rapidly improving, the most practical approach to capturing every salient neural detail will be to scan it from inside. By 2030, &#8220;nanobot&#8221; (i.e., nano robot) technology will be viable, and brain scanning will be a prominent application. Nanobots are robots that are the size of human blood cells, or even smaller. Billions of them could travel through every brain capillary and scan every relevant feature from up close. Using high speed wireless communication, the nanobots would communicate with each other, and with other computers that are compiling the brain scan data base (in other words, the nanobots will all be on a wireless local area network).</p>
<p>This scenario involves only capabilities that we can touch and feel today. We already have technology capable of producing very high resolution scans, provided that the scanner is physically proximate to the neural features. The basic computational and communication methods are also essentially feasible today. The primary features that are not yet practical are nanobot size and cost. As I discussed above, we can project the exponentially declining cost of computation, and the rapidly declining size of both electronic and mechanical technologies. We can conservatively expect, therefore, the requisite nanobot technology by around 2030. Because of its ability to place each scanner in very close physical proximity to every neural feature, nanobot-based scanning will be more practical than scanning the brain from outside.</p>
<h2>How to Use Your Brain Scan</h2>
<p>How will we apply the thousands of trillions of bytes of information derived from each brain scan? One approach is to use the results to design more intelligent parallel algorithms for our machines, particularly those based on one of the neural net paradigms. With this approach, we don&#8217;t have to copy every single connection. There is a great deal of repetition and redundancy within any particular brain region. Although the information contained in a human brain would require thousands of trillions of bytes of information (on the order of 100 billion neurons times an average of 1,000 connections per neuron, each with multiple neurotransmitter concentrations and connection data), the design of the brain is characterized by a human genome of only about a billion bytes.</p>
<p>Furthermore, most of the genome is redundant, so the initial design of the brain is characterized by approximately one hundred million bytes, about the size of Microsoft Word. Of course, the complexity of our brains greatly increases as we interact with the world (by a factor of more than ten million). Because of the highly repetitive patterns found in each specific brain region, it is not necessary to capture each detail in order to reverse engineer the significant digital-analog algorithms. With this information, we can design simulated nets that operate similarly. There are already multiple efforts under way to scan the human brain and apply the insights derived to the design of intelligent machines.</p>
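<p>The byte counts in the preceding two paragraphs can be tied together with some order-of-magnitude arithmetic; the bytes-per-connection figure below is my own illustrative assumption, not a number from the essay:</p>

```python
neurons = 1e11                   # ~100 billion neurons
connections_per_neuron = 1e3     # ~1,000 connections each, on average
bytes_per_connection = 10        # assumed: strength, target, transmitter levels

# Full state of a brain: on the order of thousands of trillions of bytes.
brain_state = neurons * connections_per_neuron * bytes_per_connection

genome = 1e9                     # ~a billion bytes
brain_design = genome / 10       # ~1e8 bytes after discounting genome redundancy

print(f"{brain_state:.0e}")                  # 1e+15
print(f"{brain_state / brain_design:.0e}")   # 1e+07 -- "more than ten million"
```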
<p>The pace of brain reverse engineering is only slightly behind the availability of the brain scanning and neuron structure information. A contemporary example is a comprehensive model of a significant portion of the human auditory processing system that Lloyd Watts (<a href="http://www.lloydwatts.com" target="_new" rel="noopener">www.lloydwatts.com</a>) has developed from both neurobiology studies of specific neuron types and brain interneuronal connection information. Watts&#8217; model includes five parallel paths and incorporates the actual intermediate representations of auditory information at each stage of neural processing. Watts has implemented his model as real-time software which can locate and identify sounds with many of the same properties as human hearing. Although a work in progress, the model illustrates the feasibility of converting neurobiological models and brain connection data into working simulations. Also, as Hans Moravec and others have speculated, these efficient simulations require about 1,000 times less computation than the theoretical potential of the biological neurons being simulated.</p>
<h2>Reverse Engineering the Human Brain: Five Parallel Auditory Pathways</h2>
<p><img decoding="async" src="https://www.writingsbyraykurzweil.com/images/chart23.jpg" alt="" vspace="10" /><br />
<span class="PhotoCredit">Chart by Lloyd Watts</span></p>
<p><strong>Cochlea</strong>: Sense organ of hearing. Its 30,000 fibers convert the motion of the stapes into a spectro-temporal representation of sound.</p>
<p><strong>MC</strong>: Multipolar Cells. Measure spectral energy.</p>
<p><strong>GBC</strong>: Globular Bushy Cells. Relay spikes from the auditory nerve to the Lateral Superior Olivary Complex (includes <strong>LSO</strong> and <strong>MSO</strong>). Encoding of timing and amplitude of signals for binaural comparison of level.</p>
<p><strong>SBC</strong>: Spherical Bushy Cells. Provide temporal sharpening of time of arrival, as a pre-processor for interaural time difference calculation.</p>
<p><strong>OC</strong>: Octopus Cells. Detection of transients.</p>
<p><strong>DCN</strong>: Dorsal Cochlear Nucleus. Detection of spectral edges and calibrating for noise levels.</p>
<p><strong>VNTB</strong>: Ventral Nucleus of the Trapezoid Body. Feedback signals to modulate outer hair cell function in the cochlea.</p>
<p><strong>VNLL</strong>, <strong>PON</strong>: Ventral Nucleus of the Lateral Lemniscus, Peri-Olivary Nuclei. Processing transients from the Octopus Cells.</p>
<p><strong>MSO</strong>: Medial Superior Olive. Computing inter-aural time difference (difference in time of arrival between the two ears, used to tell where a sound is coming from).</p>
<p><strong>LSO</strong>: Lateral Superior Olive. Also involved in computing inter-aural level difference.</p>
<p><strong>ICC</strong>: Central Nucleus of the Inferior Colliculus. The site of major integration of multiple representations of sound.</p>
<p><strong>ICx</strong>: Exterior Nucleus of the Inferior Colliculus. Further refinement of sound localization.</p>
<p><strong>SC</strong>: Superior Colliculus. Location of auditory/visual merging.</p>
<p><strong>MGB</strong>: Medial Geniculate Body. The auditory portion of the thalamus.</p>
<p><strong>LS</strong>: Limbic System. Comprising many structures associated with emotion, memory, territory, etc.</p>
<p><strong>AC</strong>: Auditory Cortex.</p>
<p>The brain is not one huge &#8220;tabula rasa&#8221; (i.e., undifferentiated blank slate), but rather an intricate and intertwined collection of hundreds of specialized regions. The process of &#8220;peeling the onion&#8221; to understand these interleaved regions is well underway. As the requisite neuron models and brain interconnection data become available, detailed and implementable models such as the auditory example above will be developed for all brain regions.</p>
<p>After the algorithms of a region are understood, they can be refined and extended before being implemented in synthetic neural equivalents. For one thing, they can be run on a computational substrate that is already more than ten million times faster than neural circuitry. And we can also throw in the methods for building intelligent machines that we already understand.</p>
<h2>Downloading the Human Brain</h2>
<p>A more controversial application than this scanning-the-brain-to-understand-it scenario is scanning-the-brain-to-<em>download</em>-it. Here we scan someone&#8217;s brain to map the locations, interconnections, and contents of all the somas, axons, dendrites, presynaptic vesicles, neurotransmitter concentrations, and other neural components and levels. Its entire organization can then be re-created on a neural computer of sufficient capacity, including the contents of its memory.</p>
<p>To do this, we need to understand local brain processes, although not necessarily all of the higher level processes. Scanning a brain with sufficient detail to download it may sound daunting, but so did the human genome scan. All of the basic technologies exist today, just not with the requisite speed, cost, and size, but these are the attributes that are improving at a double exponential pace.</p>
<p>The computationally pertinent aspects of individual neurons are complicated, but definitely not beyond our ability to accurately model. For example, Ted Berger and his colleagues at Hedco Neurosciences have built integrated circuits that precisely match the digital and analog information processing characteristics of neurons, including clusters with hundreds of neurons. Carver Mead and his colleagues at CalTech have built a variety of integrated circuits that emulate the digital-analog characteristics of mammalian neural circuits.</p>
<p>A recent experiment at San Diego&#8217;s Institute for Nonlinear Science demonstrates the potential for electronic neurons to precisely emulate biological ones. Neurons (biological or otherwise) are a prime example of what is often called &#8220;chaotic computing.&#8221; Each neuron acts in an essentially unpredictable fashion. When an entire network of neurons receives input (from the outside world or from other networks of neurons), the signaling amongst them appears at first to be frenzied and random. Over time, typically a fraction of a second or so, the chaotic interplay of the neurons dies down, and a stable pattern emerges. This pattern represents the &#8220;decision&#8221; of the neural network. If the neural network is performing a pattern recognition task (which, incidentally, comprises the bulk of the activity in the human brain), then the emergent pattern represents the appropriate recognition.</p>
<p>So the question addressed by the San Diego researchers was whether electronic neurons could engage in this chaotic dance alongside biological ones. They hooked up their artificial neurons with those from spiny lobsters in a single network, and their hybrid biological-nonbiological network performed in the same way (i.e., chaotic interplay followed by a stable emergent pattern) and with the same type of results as an all-biological net of neurons. Essentially, the biological neurons accepted their electronic peers. This indicates that their mathematical model of these neurons was reasonably accurate.</p>
<p>There are many projects around the world creating nonbiological devices that recreate in great detail the functionality of human neuron clusters. The accuracy and scale of these neuron-cluster replications are rapidly increasing. We started with functionally equivalent recreations of single neurons, then clusters of tens, then hundreds, and now thousands. Scaling up technical processes at an exponential pace is what technology is good at.</p>
<p>As the computational power to emulate the human brain becomes available&#8211;we&#8217;re not there yet, but we will be there within a couple of decades&#8211;projects already under way to scan the human brain will be accelerated, with a view both to understanding the human brain in general and to providing a detailed description of the contents and design of specific brains. By the third decade of the twenty-first century, we will be in a position to create highly detailed and complete maps of all relevant features of all neurons, neural connections, and synapses in the human brain&#8211;all of the neural details that play a role in the behavior and functionality of the brain&#8211;and to recreate these designs in suitably advanced neural computers.</p>
<h2>Is the Human Brain Different from a Computer?</h2>
<p>Is the human brain different from a computer?</p>
<p>The answer depends on what we mean by the word &#8220;computer.&#8221; Certainly the brain uses very different methods from conventional contemporary computers. Most computers today are all digital and perform one (or perhaps a few) computations at a time at extremely high speed. In contrast, the human brain combines digital and analog methods with most computations performed in the analog domain. The brain is massively parallel, performing on the order of a hundred trillion computations at the same time, but at extremely slow speeds.</p>
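<p>To put rough numbers on this contrast (the figures below are order-of-magnitude assumptions in the spirit of the estimates used elsewhere in this essay, not measurements), a quick sketch of the brain&#8217;s aggregate throughput:</p>

```python
# Back-of-the-envelope sketch: massive parallelism at very slow per-unit
# speeds. All three figures are rough order-of-magnitude assumptions.
neurons = 1e11                 # ~100 billion neurons (assumed)
connections_per_neuron = 1e3   # ~1,000 connections per neuron (assumed)
firings_per_second = 200       # ~200 calculations/second per connection (assumed)

parallel_units = neurons * connections_per_neuron  # simultaneous computations
throughput = parallel_units * firings_per_second   # aggregate calc/second

print(f"parallel computations: {parallel_units:.0e}")
print(f"aggregate throughput:  {throughput:.0e} calc/s")
```

<p>The contrast with a conventional serial processor is the point: each of the brain&#8217;s hundred trillion &#8220;processors&#8221; is millions of times slower than a transistor, but there are a hundred trillion of them working at once.</p>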
<p>With regard to digital versus analog computing, we know that digital computing can be functionally equivalent to analog computing (although the reverse is not true), so we can perform all of the capabilities of a hybrid digital&#8211;analog network with an all digital computer. On the other hand, there is an engineering advantage to analog circuits in that analog computing is potentially thousands of times more efficient. An analog computation can be performed by a few transistors, or, in the case of mammalian neurons, specific electrochemical processes. A digital computation, in contrast, requires thousands or tens of thousands of transistors. So there is a significant engineering advantage to emulating the brain&#8217;s analog methods.</p>
<p>The massive parallelism of the human brain is the key to its pattern recognition abilities, which reflect the strength of human thinking. As I discussed above, mammalian neurons engage in a chaotic dance, and if the neural network has learned its lessons well, then a stable pattern will emerge reflecting the network&#8217;s decision. There is no reason why our nonbiological functionally equivalent recreations of biological neural networks cannot be built using these same principles, and indeed there are dozens of projects around the world that have succeeded in doing this. My own technical field is pattern recognition, and the projects that I have been involved in for over thirty years use this form of chaotic computing. Particularly successful examples are Carver Mead&#8217;s neural chips, which are highly parallel, use digitally controlled analog computing, and are intended as functionally similar recreations of biological networks.</p>
<h2>Objective and Subjective</h2>
<p>The Singularity envisions the emergence of human-like intelligent entities of astonishing diversity and scope. Although these entities will be capable of passing the &#8220;Turing test&#8221; (i.e., able to fool humans that they are human), the question arises as to whether these &#8220;people&#8221; are conscious, or just appear that way. To gain some insight as to why this is an extremely subtle question (albeit an ultimately important one) it is useful to consider some of the paradoxes that emerge from the concept of downloading specific human brains.</p>
<p>Although I anticipate that the most common application of the knowledge gained from reverse engineering the human brain will be creating more intelligent machines that are not necessarily modeled on specific biological human individuals, the scenario of scanning and reinstantiating all of the neural details of a <em>specific</em> person raises the most immediate questions of identity. Let&#8217;s consider the question of what we will find when we do this.</p>
<p>We have to consider this question on both the objective and subjective levels. &#8220;Objective&#8221; means everyone except me, so let&#8217;s start with that. Objectively, when we scan someone&#8217;s brain and reinstantiate their personal mind file into a suitable computing medium, the newly emergent &#8220;person&#8221; will appear to other observers to have very much the same personality, history, and memory as the person originally scanned. That is, once the technology has been refined and perfected. Like any new technology, it won&#8217;t be perfect at first. But ultimately, the scans and recreations will be very accurate and realistic.</p>
<p>Interacting with the newly instantiated person will feel like interacting with the original person. The new person will claim to be that same old person and will have a memory of having been that person. The new person will have all of the patterns of knowledge, skill, and personality of the original. We are already creating functionally equivalent recreations of neurons and neuron clusters with sufficient accuracy that biological neurons accept their nonbiological equivalents and work with them as if they were biological. There are no natural limits that prevent us from doing the same with the hundred billion neuron cluster of clusters we call the human brain.</p>
<p>Subjectively, the issue is more subtle and profound, but first we need to reflect on one additional objective issue: our physical self.</p>
<h2>The Importance of Having a Body</h2>
<p>Consider how many of our thoughts and thinking are directed toward our body and its survival, security, nutrition, and image, not to mention affection, sexuality, and reproduction. Many, if not most, of the goals we attempt to advance using our brains have to do with our bodies: protecting them, providing them with fuel, making them attractive, making them feel good, providing for their myriad needs and desires. Some philosophers maintain that achieving human level intelligence is impossible without a body. If we&#8217;re going to port a human&#8217;s mind to a new computational medium, we&#8217;d better provide a body. A disembodied mind will quickly get depressed.</p>
<p>There are a variety of bodies that we will provide for our machines, and that they will provide for themselves: bodies built through nanotechnology (i.e., building highly complex physical systems atom by atom), virtual bodies (that exist only in virtual reality), bodies comprised of swarms of nanobots, and other technologies.</p>
<p>A common scenario will be to enhance a person&#8217;s biological brain with intimate connection to nonbiological intelligence. In this case, the body remains the good old human body that we&#8217;re familiar with, although this too will become greatly enhanced through biotechnology (gene enhancement and replacement) and, later on, through nanotechnology. A detailed examination of twenty-first century bodies is beyond the scope of this essay, but recreating and enhancing our bodies will be (and has been) an easier task than recreating our minds.</p>
<h2>So Just Who Are These People?</h2>
<p>To return to the issue of subjectivity, consider: is the reinstantiated mind the same consciousness as the person we just scanned? Are these &#8220;people&#8221; conscious at all? Is this a mind or just a brain?</p>
<p>Consciousness in our twenty-first century machines will be a critically important issue. But it is not easily resolved, or even readily understood. People tend to have strong views on the subject, and often just can&#8217;t understand how anyone else could possibly see the issue from a different perspective. Marvin Minsky observed that &#8220;there&#8217;s something queer about describing consciousness. Whatever people mean to say, they just can&#8217;t seem to make it clear.&#8221;</p>
<p>We don&#8217;t worry, at least not yet, about causing pain and suffering to our computer programs. But at what point do we consider an entity, a process, to be conscious, to feel pain and discomfort, to have its own intentionality, its own free will? How do we determine if an entity is conscious; if it has subjective experience? How do we distinguish a process that is conscious from one that just acts <em>as if</em> it is conscious?</p>
<p>We can&#8217;t simply ask it. If it says &#8220;Hey, I&#8217;m conscious,&#8221; does that settle the issue? No, we have computer games today that effectively do that, and they&#8217;re not terribly convincing.</p>
<p>How about if the entity <em>is</em> very convincing and compelling when it says &#8220;I&#8217;m lonely, please keep me company.&#8221; Does that settle the issue?</p>
<p>If we look inside its circuits, and see essentially the identical kinds of feedback loops and other mechanisms in its brain that we see in a human brain (albeit implemented using nonbiological equivalents), does that settle the issue?</p>
<p>And just who are these people in the machine, anyway? The answer will depend on who you ask. If you ask the people in the machine, they will strenuously claim to be the original persons. For example, if we scan&#8211;let&#8217;s say myself&#8211;and record the exact state, level, and position of every neurotransmitter, synapse, neural connection, and every other relevant detail, and then reinstantiate this massive data base of information (which I estimate at thousands of trillions of bytes) into a neural computer of sufficient capacity, the person who then emerges in the machine will think that &#8220;he&#8221; is (and had been) me, or at least he will act that way. He will say &#8220;I grew up in Queens, New York, went to college at MIT, stayed in the Boston area, started and sold a few artificial intelligence companies, walked into a scanner there, and woke up in the machine here. Hey, this technology really works.&#8221;</p>
<p>But wait.</p>
<p>Is this really me? For one thing, old biological Ray (that&#8217;s me) still exists. I&#8217;ll still be here in my carbon-cell-based brain. Alas, I will have to sit back and watch the new Ray succeed in endeavors that I could only dream of.</p>
<h2>A Thought Experiment</h2>
<p>Let&#8217;s consider the issue of just who I am, and who the new Ray is a little more carefully. First of all, am I the stuff in my brain and body?</p>
<p>Consider that the particles making up my body and brain are constantly changing. We are not at all permanent collections of particles. The cells in our bodies turn over at different rates, but the particles (e.g., atoms and molecules) that comprise our cells are exchanged at a very rapid rate. I am just not the same collection of particles that I was even a month ago. It is the patterns of matter and energy that are semipermanent (that is, changing only gradually), but our actual material content is changing constantly, and very quickly. We are rather like the patterns that water makes in a stream. The rushing water around a formation of rocks makes a particular, unique pattern. This pattern may remain relatively unchanged for hours, even years. Of course, the actual material constituting the pattern&#8211;the water&#8211;is replaced in milliseconds. The same is true for Ray Kurzweil. Like the water in a stream, my particles are constantly changing, but the pattern that people recognize as Ray has a reasonable level of continuity. This argues that we should not associate our fundamental identity with a specific set of particles, but rather the pattern of matter and energy that we represent. Many contemporary philosophers seem partial to this &#8220;identity from pattern&#8221; argument.</p>
<p>But (again) wait.</p>
<p>If you were to scan my brain and reinstantiate new Ray while I was sleeping, I would not necessarily even know about it (with the nanobots, this will be a feasible scenario). If you then come to me, and say, &#8220;good news, Ray, we&#8217;ve successfully reinstantiated your mind file, so we won&#8217;t be needing your old brain anymore,&#8221; I may suddenly realize the flaw in this &#8220;identity from pattern&#8221; argument. I may wish new Ray well, and realize that he shares my &#8220;pattern,&#8221; but I would nonetheless conclude that he&#8217;s not me, because I&#8217;m still here. How could he be me? After all, I would not necessarily know that he even existed.</p>
<p>Let&#8217;s consider another perplexing scenario. Suppose I replace a small number of biological neurons with functionally equivalent nonbiological ones (they may provide certain benefits such as greater reliability and longevity, but that&#8217;s not relevant to this thought experiment). After I have this procedure performed, am I still the same person? My friends certainly think so. I still have the same self-deprecating humor, the same silly grin&#8211;yes, I&#8217;m still the same guy.</p>
<p>It should be clear where I&#8217;m going with this. Bit by bit, region by region, I ultimately replace my entire brain with essentially identical (perhaps improved) nonbiological equivalents (preserving all of the neurotransmitter concentrations and other details that represent my learning, skills, and memories). At each point, I feel the procedures were successful. At each point, I feel that I am the same guy. After each procedure, I claim to be the same guy. My friends concur. There is no old Ray and new Ray, just one Ray, one that never appears to fundamentally change.</p>
<p>But consider this. This gradual replacement of my brain with a nonbiological equivalent is essentially identical to the following sequence:</p>
<ol>
<li>(i) scan Ray and reinstantiate Ray&#8217;s mind file into new (nonbiological) Ray, and, then</li>
<li>(ii) terminate old Ray.</li>
</ol>
<p>But we concluded above that in such a scenario new Ray is not the same as old Ray. And if old Ray is terminated, well then that&#8217;s the end of Ray. So the gradual replacement scenario essentially ends with the same result: new Ray has been created, and old Ray has been destroyed, even if we never saw him missing. So what appears to be the continuing existence of just one Ray is really the creation of new Ray and the termination of old Ray.</p>
<p><em>On yet another hand</em> (we&#8217;re running out of philosophical hands here), the gradual replacement scenario is not altogether different from what happens normally to our biological selves, in that our particles are always rapidly being replaced. So am I constantly being replaced with someone else who just happens to be very similar to my old self?</p>
<p>I am trying to illustrate why consciousness is not an easy issue. If we talk about consciousness as just a certain type of intelligent skill (the ability to reflect on one&#8217;s own self and situation, for example), then the issue is not difficult at all, because any skill or capability or form of intelligence that one cares to define will be replicated in nonbiological entities (i.e., machines) within a few decades. With this type of <em>objective</em> view of consciousness, the conundrums do go away. But a fully objective view does not penetrate to the core of the issue, because the essence of consciousness is <em>subjective</em> experience, not objective correlates of that experience.</p>
<p>Will these future machines be capable of having spiritual experiences?</p>
<p>They certainly will claim to. They will claim to be people, and to have the full range of emotional and spiritual experiences that people claim to have. And these will not be idle claims; they will evidence the sort of rich, complex, and subtle behavior one associates with these feelings. How do the claims and behaviors&#8211;compelling as they will be&#8211;relate to the subjective experience of these reinstantiated people? We keep coming back to the very real but ultimately unmeasurable issue of consciousness.</p>
<p>People often talk about consciousness as if it were a clear property of an entity that can readily be identified, detected, and gauged. If there is one crucial insight that we can make regarding why the issue of consciousness is so contentious, it is the following:</p>
<p>There exists no objective test that can conclusively determine its presence.</p>
<p>Science is about objective measurement and logical implications therefrom, but the very nature of objectivity is that you cannot measure subjective experience; you can only measure correlates of it, such as behavior (and by behavior, I include the actions of components of an entity, such as neurons). This limitation has to do with the very nature of the concepts &#8220;objective&#8221; and &#8220;subjective.&#8221; Fundamentally, we cannot penetrate the subjective experience of another entity with direct objective measurement. We can certainly make arguments about it: i.e., &#8220;look inside the brain of this nonhuman entity, see how its methods are just like a human brain.&#8221; Or, &#8220;see how its behavior is just like human behavior.&#8221; But in the end, these remain just arguments. No matter how convincing the behavior of a reinstantiated person, some observers will refuse to accept the consciousness of an entity unless it squirts neurotransmitters, or is based on DNA-guided protein synthesis, or has some other specific biologically human attribute.</p>
<p>We assume that other humans are conscious, but that is still an assumption, and there is no consensus amongst humans about the consciousness of nonhuman entities, such as higher non-human animals. The issue will be even more contentious with regard to future nonbiological entities with human-like behavior and intelligence.</p>
<p>So how will we resolve the claimed consciousness of nonbiological intelligence (claimed, that is, by the machines)? From a practical perspective, we&#8217;ll accept their claims. Keep in mind that nonbiological entities in the twenty-first century will be extremely intelligent, so they&#8217;ll be able to convince us that they are conscious. They&#8217;ll have all the delicate and emotional cues that convince us today that humans are conscious. They will be able to make us laugh and cry. And they&#8217;ll get mad if we don&#8217;t accept their claims. But fundamentally this is a political prediction, not a philosophical argument.</p>
<h2>On Tubules and Quantum Computing</h2>
<p>Over the past several years, Roger Penrose, a noted physicist and philosopher, has suggested that fine structures in the neurons called tubules perform an exotic form of computation called &#8220;quantum computing.&#8221; Quantum computing is computing using what are called &#8220;qubits,&#8221; which take on all possible combinations of solutions simultaneously. It can be considered to be an extreme form of parallel processing (because every combination of values of the qubits is tested simultaneously). Penrose suggests that the tubules and their quantum computing capabilities complicate the concept of recreating neurons and reinstantiating mind files.</p>
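<p>A brief illustration of why quantum computing is described as extreme parallelism (a generic sketch of qubit state counting, not a model of anything specific to tubules): a register of n qubits spans 2<sup>n</sup> classical bit patterns at once.</p>

```python
from itertools import product

# Sketch: an n-qubit register holds an amplitude for every one of the
# 2**n classical bit patterns simultaneously, which is why quantum
# computing is described as testing all combinations at once.
def basis_states(n):
    """Enumerate every classical bit pattern an n-qubit register spans."""
    return [''.join(bits) for bits in product('01', repeat=n)]

for n in (1, 2, 3, 10):
    print(f"{n} qubits -> {2 ** n} simultaneous combinations")

print(basis_states(2))  # the four patterns of a 2-qubit register
```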
<p>However, there is little to suggest that the tubules contribute to the thinking process. Even generous models of human knowledge and capability are more than accounted for by current estimates of brain size, based on contemporary models of neuron functioning that do not include tubules. In fact, even with these tubule-less models, it appears that the brain is conservatively designed with many more connections (by several orders of magnitude) than it needs for its capabilities and capacity. Recent experiments (e.g., the San Diego Institute for Nonlinear Science experiments) showing that hybrid biological-nonbiological networks perform similarly to all biological networks, while not definitive, are strongly suggestive that our tubule-less models of neuron functioning are adequate. Lloyd Watts&#8217; software simulation of his intricate model of human auditory processing uses orders of magnitude less computation than the networks of neurons he is simulating, and there is no suggestion that quantum computing is needed.</p>
<p>However, even if the tubules are important, it doesn&#8217;t change the projections I have discussed above to any significant degree. According to my model of computational growth, if the tubules multiplied neuron complexity by a factor of a thousand (and keep in mind that our current tubule-less neuron models are already complex, including on the order of a thousand connections per neuron, multiple nonlinearities and other details), this would delay our reaching brain capacity by only about 9 years. If we&#8217;re off by a factor of a million, that&#8217;s still only a delay of 17 years. A factor of a billion is around 24 years (keep in mind computation is growing by a double exponential).</p>
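<p>The arithmetic behind these delay figures can be sketched with a toy model (the parameters below are illustrative assumptions, not the essay&#8217;s fitted trend): when the doubling time of computation itself contracts each year, each additional factor of a thousand costs fewer years than the last.</p>

```python
import math

# Toy model of double-exponential growth: capacity doubles repeatedly,
# and the doubling time itself contracts each year. The starting doubling
# time (1 year) and contraction rate (3%/year) are assumed parameters.
def years_to_gain(factor, doubling_time=1.0, shrink_per_year=0.03):
    """Years for capacity to grow by `factor` under a contracting doubling time."""
    doublings_needed = math.log2(factor)
    years = doublings = 0.0
    dt = doubling_time
    step = 0.001  # simulation step, in years
    while doublings < doublings_needed:
        doublings += step / dt                 # doublings accrued this step
        dt *= (1.0 - shrink_per_year) ** step  # doubling time contracts
        years += step
    return years

for factor in (1e3, 1e6, 1e9):
    print(f"extra complexity factor of {factor:.0e}: "
          f"~{years_to_gain(factor):.1f} years of delay")
```

<p>The qualitative result is the one that matters: under double-exponential growth, even a millionfold underestimate of neuron complexity shifts the timeline by years, not by centuries.</p>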
<p>With regard to quantum computing, once again there is nothing to suggest that the brain does quantum computing. Just because quantum technology may be feasible does not suggest that the brain is capable of it. After all, we don&#8217;t have lasers or even radios in our brains. Although some scientists have claimed to detect quantum wave collapse in the brain, no one has suggested human capabilities that actually require a capacity for quantum computing.</p>
<p>However, even if the brain does do quantum computing, this does not significantly change the outlook for human-level computing (and beyond) nor does it suggest that brain downloading is infeasible. First of all, if the brain does do quantum computing this would only verify that quantum computing is feasible. There would be nothing in such a finding to suggest that quantum computing is restricted to biological mechanisms. Biological quantum computing mechanisms, if they exist, could be replicated. Indeed, recent experiments with small scale quantum computers appear to be successful. Even the conventional transistor relies on the quantum effect of electron tunneling.</p>
<p>Penrose suggests that it is impossible to perfectly replicate a set of quantum states, so therefore, perfect downloading is impossible. Well, how perfect does a download have to be? I am at this moment in a very different quantum state (and different in non-quantum ways as well) than I was a minute ago (certainly in a very different state than I was before I wrote this paragraph). If we develop downloading technology to the point where the &#8220;copies&#8221; are as close to the original as the original person changes anyway in the course of one minute, that would be good enough for any conceivable purpose, yet does not require copying quantum states. As the technology improves, the accuracy of the copy could become as close as the original changes within ever briefer periods of time (e.g., one second, one millisecond, one microsecond).</p>
<p>When it was pointed out to Penrose that neurons (and even neural connections) were too big for quantum computing, he came up with the tubule theory as a possible mechanism for neural quantum computing. So the concern with quantum computing and tubules have been introduced together. If one is searching for barriers to replicating brain function, it is an ingenious theory, but it fails to introduce any genuine barriers. There is no evidence for it, and even if true, it only delays matters by a decade or two. There is no reason to believe that biological mechanisms (including quantum computing) are inherently impossible to replicate using nonbiological materials and mechanisms. Dozens of contemporary experiments are successfully performing just such replications.</p>
<h2>The Noninvasive Surgery-Free Reversible Programmable Distributed Brain Implant, Full-Immersion Shared Virtual Reality Environments, Experience Beamers, and Brain Expansion</h2>
<p>How will we apply technology that is more intelligent than its creators? One might be tempted to respond &#8220;Carefully!&#8221; But let&#8217;s take a look at some examples.</p>
<p>Consider several examples of the nanobot technology, which, based on miniaturization and cost reduction trends, will be feasible within 30 years. In addition to scanning our brains, the nanobots will also be able to expand our experiences and our capabilities.</p>
<p>Nanobot technology will provide fully immersive, totally convincing virtual reality in the following way. The nanobots take up positions in close physical proximity to every interneuronal connection coming from all of our senses (e.g., eyes, ears, skin). We already have the technology for electronic devices to communicate with neurons in both directions, requiring no direct physical contact with the neurons. For example, scientists at the Max Planck Institute have developed &#8220;neuron transistors&#8221; that can detect the firing of a nearby neuron, or alternatively, can cause a nearby neuron to fire, or suppress it from firing. This amounts to two-way communication between neurons and the electronic-based neuron transistors. The Institute scientists demonstrated their invention by controlling the movement of a living leech from their computer. Again, the only barriers that keep nanobot-based virtual reality from being feasible today are size and cost.</p>
<p>When we want to experience real reality, the nanobots just stay in position (in the capillaries) and do nothing. If we want to enter virtual reality, they suppress all of the inputs coming from the real senses, and replace them with the signals that would be appropriate for the virtual environment. You (i.e., your brain) could decide to cause your muscles and limbs to move as you normally would, but the nanobots again intercept these interneuronal signals, suppress your real limbs from moving, and instead cause your virtual limbs to move and provide the appropriate movement and reorientation in the virtual environment.</p>
<p>The web will provide a panoply of virtual environments to explore. Some will be recreations of real places, others will be fanciful environments that have no &#8220;real&#8221; counterpart. Some indeed would be impossible in the physical world (perhaps, because they violate the laws of physics). We will be able to &#8220;go&#8221; to these virtual environments by ourselves, or we will meet other people there, both real people and simulated people. Of course, ultimately there won&#8217;t be a clear distinction between the two.</p>
<p>By 2030, going to a web site will mean entering a full immersion virtual reality environment. In addition to encompassing all of the senses, these shared environments can include emotional overlays as the nanobots will be capable of triggering the neurological correlates of emotions, sexual pleasure, and other derivatives of our sensory experience and mental reactions.</p>
<p>In the same way that people today beam their lives from web cams in their bedrooms, &#8220;experience beamers&#8221; circa 2030 will beam their entire flow of sensory experiences, and if so desired, their emotions and other secondary reactions. We&#8217;ll be able to plug in (by going to the appropriate web site) and experience other people&#8217;s lives as in the plot concept of &#8216;Being John Malkovich.&#8217; Particularly interesting experiences can be archived and relived at any time.</p>
<p>We won&#8217;t need to wait until 2030 to experience shared virtual reality environments, at least for the visual and auditory senses. Full immersion visual-auditory environments will be available by the end of this decade with images written directly onto our retinas by our eyeglasses and contact lenses. All of the electronics for the computation, image reconstruction, and very high bandwidth wireless connection to the Internet will be embedded in our glasses and woven into our clothing, so computers as distinct objects will disappear.</p>
<p>In my view, the most significant implication of the Singularity will be the merger of biological and nonbiological intelligence. First, it is important to point out that well before the end of the twenty-first century, thinking on nonbiological substrates will dominate. Biological thinking is stuck at 10<sup>26</sup> calculations per second (for all biological human brains), and that figure will not appreciably change, even with bioengineering changes to our genome. Nonbiological intelligence, on the other hand, is growing at a double exponential rate and will vastly exceed biological intelligence well before the middle of this century. However, in my view, this nonbiological intelligence should still be considered human as it is fully derivative of the human-machine civilization. The merger of these two worlds of intelligence is not merely a merger of biological and nonbiological thinking mediums, but more importantly one of method and organization of thinking.</p>
<p>One of the key ways in which the two worlds can interact will be through the nanobots. Nanobot technology will be able to expand our minds in virtually any imaginable way. Our brains today are relatively fixed in design. Although we do add patterns of interneuronal connections and neurotransmitter concentrations as a normal part of the learning process, the current overall capacity of the human brain is highly constrained, restricted to a mere hundred trillion connections. Brain implants based on massively distributed intelligent nanobots will ultimately expand our memories a trillion fold, and otherwise vastly improve all of our sensory, pattern recognition, and cognitive abilities. Since the nanobots are communicating with each other over a wireless local area network, they can create any set of new neural connections, can break existing connections (by suppressing neural firing), can create new hybrid biological-nonbiological networks, as well as add vast new nonbiological networks.</p>
<p>Using nanobots as brain extenders is a significant improvement over the idea of surgically installed neural implants, which are beginning to be used today (e.g., ventral posterior nucleus, subthalamic nucleus, and ventral lateral thalamus neural implants to counteract Parkinson&#8217;s Disease and tremors from other neurological disorders, cochlear implants, and others). Nanobots will be introduced without surgery, essentially just by injecting or even swallowing them. They can all be directed to leave, so the process is easily reversible. They are programmable, in that they can provide virtual reality one minute, and a variety of brain extensions the next. They can change their configuration, and clearly can alter their software. Perhaps most importantly, they are massively distributed and therefore can take up billions or trillions of positions throughout the brain, whereas a surgically introduced neural implant can only be placed in one or at most a few locations.</p>
<h2>The Double Exponential Growth of the Economy During the 1990s Was Not a Bubble</h2>
<p>Yet another manifestation of the law of accelerating returns as it rushes toward the Singularity can be found in the world of economics, a world vital to both the genesis of the law of accelerating returns, and to its implications. It is the economic imperative of a competitive marketplace that is driving technology forward and fueling the law of accelerating returns. In turn, the law of accelerating returns, particularly as it approaches the Singularity, is transforming economic relationships.</p>
<p>Virtually all of the economic models taught in economics classes, used by the Federal Reserve Board to set monetary policy, by Government agencies to set economic policy, and by economic forecasters of all kinds are fundamentally flawed because they are based on the intuitive linear view of history rather than the historically based exponential view. The reason that these linear models appear to work for a while is for the same reason that most people adopt the intuitive linear view in the first place: exponential trends appear to be linear when viewed (and experienced) for a brief period of time, particularly in the early stages of an exponential trend when not much is happening. But once the &#8220;knee of the curve&#8221; is achieved and the exponential growth explodes, the linear models break down. The exponential trends underlying productivity growth are just beginning this explosive phase.</p>
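<p>The &#8220;knee of the curve&#8221; effect is easy to demonstrate numerically (the 5% growth rate below is an arbitrary illustrative assumption): a straight line fitted to the early years of an exponential trend tracks it closely, then falls behind by orders of magnitude.</p>

```python
# Sketch: why linear models "work for a while." Fit a line to the first
# ten years of an exponential trend, then compare forecasts downstream.
GROWTH = 1.05  # assumed 5%/year exponential growth

def exponential(t):
    return GROWTH ** t

# Linear extrapolation anchored on years 0 and 10.
slope = (exponential(10) - exponential(0)) / 10.0
def linear(t):
    return 1.0 + slope * t

for t in (10, 20, 50, 100):
    e, lin = exponential(t), linear(t)
    print(f"year {t:3d}: exponential {e:9.1f}, linear {lin:6.1f}, "
          f"ratio {e / lin:5.1f}x")
```

<p>At year 10 the two models agree exactly; by year 100 the linear forecast is off by more than an order of magnitude, which is the breakdown described above.</p>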
<p>The economy (viewed either in total or per capita) has been growing exponentially throughout this century:<br />
<img decoding="async" src="https://www.writingsbyraykurzweil.com/images/chart24.jpg" alt="" vspace="10" /></p>
<p><img decoding="async" src="https://www.writingsbyraykurzweil.com/images/chart25.jpg" alt="" vspace="10" /></p>
<p>There is also a second level of exponential growth, but up until recently the second exponent has been in the early phase so that the growth in the growth rate has not been noticed. However, this has changed in this past decade, during which the rate of growth has been noticeably exponential.<br />
<img decoding="async" src="https://www.writingsbyraykurzweil.com/images/chart26.jpg" alt="" vspace="10" /></p>
<p>Productivity (economic output per worker) has also been growing exponentially. Even these statistics are greatly understated because they do not fully reflect significant improvements in the quality and features of products and services. It is not the case that &#8220;a car is a car;&#8221; there have been significant improvements in safety, reliability, and features. There are a myriad of such examples. Pharmaceutical drugs are increasingly effective. Groceries ordered in five minutes on the web and delivered to your door are worth more than groceries on a supermarket shelf that you have to fetch yourself. Clothes custom manufactured for your unique body scan are worth more than clothes you happen to find left on a store rack. These sorts of improvements are true for most product categories, and none of them are reflected in the productivity statistics.</p>
<p>The statistical methods underlying the productivity measurements tend to factor out gains by essentially concluding that we still only get one dollar of products and services for a dollar despite the fact that we get much more for a dollar (e.g., compare a $1,000 computer today to one ten years ago). University of Chicago Professor Pete Klenow and University of Rochester Professor Mark Bils estimate that the value of existing goods has been increasing at 1.5% per year for the past 20 years because of qualitative improvements. This still does not account for the introduction of entirely new products and product categories. The Bureau of Labor Statistics, which is responsible for the inflation statistics, uses a model that incorporates an estimate of quality growth at only 0.5% per year, reflecting a systematic underestimate of quality improvement and a resulting overestimate of inflation by at least 1 percent per year.</p>
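The arithmetic behind that overestimate is simple enough to state explicitly. This is a sketch restating the essay's own figures; the variable names are mine.

```python
# Restating the essay's quality-adjustment figures.
klenow_bils_quality_growth = 0.015  # 1.5%/yr quality gain (Klenow & Bils estimate)
bls_assumed_quality_growth = 0.005  # 0.5%/yr assumed in the BLS model

# Quality gains the statistics fail to credit show up instead as
# overstated inflation -- roughly the gap between the two figures.
inflation_overstatement = klenow_bils_quality_growth - bls_assumed_quality_growth
print(f"inflation overstated by about {inflation_overstatement:.1%} per year")
```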
<p>Despite these weaknesses in the productivity statistical methods, the gains in productivity are now reaching the steep part of the exponential curve. Labor productivity grew at 1.6% per year until 1994, then rose at 2.4% per year, and is now growing even more rapidly. In the quarter ending July 30, 2000, labor productivity grew at 5.3%. Manufacturing productivity grew at 4.4% annually from 1995 to 1999, durables manufacturing at 6.5% per year.<br />
<img decoding="async" src="https://www.writingsbyraykurzweil.com/images/chart27.jpg" alt="" vspace="10" /></p>
<p>The 1990s have seen the most powerful deflationary forces in history. This is why we are not seeing inflation. Yes, it&#8217;s true that low unemployment, high asset values, economic growth, and other such factors are inflationary, but these factors are offset by the double exponential trends in the price-performance of all information based technologies: computation, memory, communications, biotechnology, miniaturization, and even the overall rate of technical progress. These technologies deeply affect all industries.</p>
<p>We are also undergoing massive disintermediation in the channels of distribution through the web and other new communication technologies, as well as escalating efficiencies in operations and administration.</p>
<p>All of the technology trend charts in this essay represent massive deflation. There are many examples of the impact of these escalating efficiencies. BP Amoco&#8217;s cost for finding oil is now less than $1 per barrel, down from nearly $10 in 1991. Processing an internet transaction costs a bank one penny, compared to over $1 using a teller ten years ago. A Roland Berger / Deutsche Bank study estimates a cost savings of $1200 per North American car over the next five years. A more optimistic Morgan Stanley study estimates that Internet-based procurement will save Ford, GM, and DaimlerChrysler about $2700 per vehicle. Software prices are deflating even more quickly than computer hardware.</p>
<h2>Software Price-Performance Has Also Improved at an Exponential Rate</h2>
<p>Example: Automatic Speech Recognition Software</p>
<table>
<thead>
<tr>
<th></th>
<th>1985</th>
<th>1995</th>
<th>2000</th>
</tr>
</thead>
<tbody>
<tr>
<td>Price</td>
<td>$5,000</td>
<td>$500</td>
<td>$50</td>
</tr>
<tr>
<td>Vocabulary Size (# words)</td>
<td>1,000</td>
<td>10,000</td>
<td>100,000</td>
</tr>
<tr>
<td>Continuous Speech?</td>
<td>No</td>
<td>No</td>
<td>Yes</td>
</tr>
<tr>
<td>User Training Required (Minutes)</td>
<td>180</td>
<td>60</td>
<td>5</td>
</tr>
<tr>
<td>Accuracy</td>
<td>Poor</td>
<td>Fair</td>
<td>Good</td>
</tr>
</tbody>
</table>
<p><img decoding="async" src="https://www.writingsbyraykurzweil.com/images/chart28.jpg" alt="" vspace="10" /></p>
<p>Current economic policy is based on outdated models which include energy prices, commodity prices, and capital investment in plant and equipment as key driving factors, but do not adequately model bandwidth, MIPs, megabytes, intellectual property, knowledge, and other increasingly vital (and increasingly increasing) constituents that are driving the economy.<br />
<img decoding="async" src="https://www.writingsbyraykurzweil.com/images/chart29.jpg" alt="" vspace="10" /></p>
<p>The economy &#8220;wants&#8221; to grow more than the 3.5% per year, which constitutes the current &#8220;speed limit&#8221; that the Federal Reserve bank and other policy makers have established as &#8220;safe,&#8221; meaning noninflationary. But in keeping with the law of accelerating returns, the economy is capable of &#8220;safely&#8221; establishing this level of growth in less than a year, implying a growth rate in an entire year of greater than 3.5%. Recently, the growth rate has exceeded 5%.</p>
<p>None of this means that cycles of recession will disappear immediately. The economy still has some of the underlying dynamics that historically have caused cycles of recession, specifically excessive commitments such as capital intensive projects and the overstocking of inventories. However, the rapid dissemination of information, sophisticated forms of online procurement, and increasingly transparent markets in all industries have diminished the impact of this cycle. So &#8220;recessions&#8221; are likely to be shallow and short lived. The underlying long-term growth rate will continue at a double exponential rate.</p>
<p>The overall growth of the economy reflects completely new forms and layers of wealth and value that did not previously exist, or at least that did not previously constitute a significant portion of the economy (but do now): intellectual property, communication portals, web sites, bandwidth, software, data bases, and many other new technology based categories.</p>
<p>There is no need for high interest rates to counter an inflation that doesn&#8217;t exist. The inflationary pressures which exist are counterbalanced by all of the deflationary forces I&#8217;ve mentioned. The current high interest rates fostered by the Federal Reserve Bank are destructive, are causing trillions of dollars of lost wealth, are regressive, hurt business and the middle class, and are completely unnecessary.</p>
<p>The Fed&#8217;s monetary policy is only influential because people believe it to be. It has little real power. The economy today is largely backed by private capital in the form of a growing variety of equity instruments. The portion of available liquidity in the economy that the Fed actually controls is relatively insignificant. The reserves that banks and financial institutions maintain with the Federal Reserve System are less than $50 billion, which is only 0.6% of the GDP, and 0.25% of the liquidity available in stocks.</p>
<p>Restricting the growth rate of the economy to an arbitrary limit makes as much sense as restricting the rate at which a company can grow its revenues&#8211;or its market cap. Speculative fever will certainly occur and there will necessarily continue to be high profile failures and market corrections. However, the ability of technology companies to rapidly create new&#8211;real&#8211;wealth is just one of the factors that will continue to fuel ongoing double exponential growth in the economy. These policies have led to an &#8220;Alice in Wonderland&#8221; situation in which the market goes up on bad economic news (because it means that more unnecessary punishment will be avoided) and goes down on good economic news.</p>
<p>Speaking of market speculative fever and market corrections, the stock market values for so-called &#8220;B to B&#8221; (Business to Business) and &#8220;B to C&#8221; (Business to Consumer) web portals and enabling technologies are likely to come back strongly as it becomes clear that economic transactions are indeed escalating toward e-commerce, and that the (surviving) contenders are capable of demonstrating profitable business models.</p>
<p>The intuitive linear assumption underlying economic thinking reaches its most ludicrous conclusions in the political debate surrounding the long-term future of the social security system. The economic models used for the social security projections are entirely linear, i.e., they reflect fixed economic growth. This might be viewed as conservative planning if we were talking about projections of only a few years, but they become utterly unrealistic for the three to four decades being discussed. These projections actually assume a fixed rate of growth of 3.5% per year for the next fifty years! There are incredibly naïve assumptions that bear on both sides of the argument. On the one hand, there will be radical extensions to human longevity, while on the other hand, we will benefit from far greater economic expansion. These factors do not rule each other out, however, as the positive factors are stronger, and will ultimately dominate. Moreover, we are certain to rethink social security when we have centenarians who look and act like 30 year-olds (but who will think much faster than 30 year-olds circa the year 2000).</p>
<p>Another implication of the law of accelerating returns is exponential growth in education and learning. Over the past 120 years, we have increased our investment in K-12 education (per student and in constant dollars) by a factor of ten. We have a one hundred fold increase in the number of college students. Automation started by amplifying the power of our muscles, and in recent times has been amplifying the power of our minds. Thus, for the past two centuries, automation has been eliminating jobs at the bottom of the skill ladder while creating new (and better paying) jobs at the top of the skill ladder. So the ladder has been moving up, and thus we have been exponentially increasing investments in education at all levels.<br />
<img decoding="async" src="https://www.writingsbyraykurzweil.com/images/chart30.jpg" alt="" vspace="10" /></p>
<p><img decoding="async" src="https://www.writingsbyraykurzweil.com/images/chart31.jpg" alt="" vspace="10" /></p>
<p>Oh, and about that &#8220;offer&#8221; at the beginning of this essay, consider that present stock values are based on future expectations. Given that the (literally) short-sighted linear intuitive view represents the ubiquitous outlook, the common wisdom in economic expectations is dramatically understated. Although stock prices reflect the consensus of a buyer-seller market, they nonetheless reflect the underlying linear assumption regarding future economic growth. But the law of accelerating returns clearly implies that the growth rate will continue to grow exponentially because the rate of progress will continue to accelerate. Although (weakening) recessionary cycles will continue to cause immediate growth rates to fluctuate, the underlying rate of growth will continue to double approximately every decade.</p>
<p>But wait a second, you said that I would get $40 trillion if I read and understood this essay.</p>
<p>That&#8217;s right. According to my models, if we replace the linear outlook with the more appropriate exponential outlook, current stock prices should triple. Since there&#8217;s about $20 trillion in the equity markets, that&#8217;s $40 trillion in additional wealth.</p>
<p>But you said I would get that money.</p>
<p>No, I said &#8220;you&#8221; would get the money, and that&#8217;s why I suggested reading the sentence carefully. The English word &#8220;you&#8221; can be singular or plural. I meant it in the sense of &#8220;all of you.&#8221;</p>
<p>I see, all of us as in the whole world. But not everyone will read this essay.</p>
<p>Well, but everyone could. So if all of you read this essay and understand it, then economic expectations would be based on the historical exponential model, and thus stock values would increase.</p>
<p>You mean if everyone understands it, and agrees with it.</p>
<p>Okay, I suppose I was assuming that.</p>
<p>Is that what you expect to happen?</p>
<p>Well, actually, no. Putting on my futurist hat again, my prediction is that indeed these views will prevail, but only over time, as more and more evidence of the exponential nature of technology and its impact on the economy becomes apparent. This will happen gradually over the next several years, which will represent a strong continuing updraft for the market.</p>
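The arithmetic behind the $40 trillion figure in the exchange above is easy to restate. This sketch simply runs the essay's own numbers; nothing here is new data.

```python
# The essay's own numbers behind the $40 trillion "offer."
equity_markets = 20e12           # roughly $20 trillion in the equity markets
revalued = 3 * equity_markets    # stock prices triple under the exponential outlook
additional_wealth = revalued - equity_markets

print(f"additional wealth: ${additional_wealth / 1e12:.0f} trillion")
```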
<h2>A Clear and Future Danger</h2>
<p>Technology has always been a double edged sword, bringing us longer and healthier life spans, freedom from physical and mental drudgery, and many new creative possibilities on the one hand, while introducing new and salient dangers on the other. We still live today with sufficient nuclear weapons (not all of which appear to be well accounted for) to end all mammalian life on the planet. Bioengineering is in the early stages of enormous strides in reversing disease and aging processes. However, the means and knowledge will soon exist in a routine college bioengineering lab (and already exists in more sophisticated labs) to create unfriendly pathogens more dangerous than nuclear weapons. As technology accelerates toward the Singularity, we will see the same intertwined potentials: a feast of creativity resulting from human intelligence expanded a trillion-fold combined with many grave new dangers.</p>
<p>Consider unrestrained nanobot replication. Nanobot technology requires billions or trillions of such intelligent devices to be useful. The most cost effective way to scale up to such levels is through self-replication, essentially the same approach used in the biological world. And in the same way that biological self-replication gone awry (i.e., cancer) results in biological destruction, a defect in the mechanism curtailing nanobot self-replication would endanger all physical entities, biological or otherwise.</p>
<p>Other primary concerns include &#8220;who is controlling the nanobots?&#8221; and &#8220;who are the nanobots talking to?&#8221; Organizations (e.g., governments, extremist groups) or just a clever individual could put trillions of undetectable nanobots in the water or food supply of an individual or of an entire population. These &#8220;spy&#8221; nanobots could then monitor, influence, and even control our thoughts and actions. In addition to introducing physical spy nanobots, existing nanobots could be influenced through software viruses and other software &#8220;hacking&#8221; techniques. When there is software running in our brains, issues of privacy and security will take on a new urgency.</p>
<p>My own expectation is that the creative and constructive applications of this technology will dominate, as I believe they do today. But there will be a valuable (and increasingly vocal) role for a concerned and constructive Luddite movement (i.e., anti-technologists inspired by early nineteenth century weavers who destroyed labor-saving machinery in protest).</p>
<p>If we imagine describing the dangers that exist today to people who lived a couple of hundred years ago, they would think it mad to take such risks. On the other hand, how many people in the year 2000 would really want to go back to the short, brutish, disease-filled, poverty-stricken, disaster-prone lives that 99 percent of the human race struggled through a couple of centuries ago? We may romanticize the past, but up until fairly recently, most of humanity lived extremely fragile lives where one all too common misfortune could spell disaster. Substantial portions of our species still live in this precarious way, which is at least one reason to continue technological progress and the economic enhancement that accompanies it.</p>
<p>People often go through three stages in examining the impact of future technology: awe and wonderment at its potential to overcome age old problems, then a sense of dread at a new set of grave dangers that accompany these new technologies, followed, finally and hopefully, by the realization that the only viable and responsible path is to set a careful course that can realize the promise while managing the peril.</p>
<p>In his cover story for WIRED, <em>Why The Future Doesn&#8217;t Need Us</em>, Bill Joy eloquently described the plagues of centuries past, and how new self-replicating technologies, such as mutant bioengineered pathogens and &#8220;nanobots&#8221; run amok, may bring back long forgotten pestilence. Indeed these are real dangers. It is also the case, which Joy acknowledges, that it has been technological advances, such as antibiotics and improved sanitation, which have freed us from the prevalence of such plagues. Suffering in the world continues and demands our steadfast attention. Should we tell the millions of people afflicted with cancer and other devastating conditions that we are canceling the development of all bioengineered treatments because there is a risk that these same technologies may someday be used for malevolent purposes? Having asked the rhetorical question, I realize that there is a movement to do exactly that, but I think most people would agree that such broad based relinquishment is not the answer.</p>
<p>The continued opportunity to alleviate human distress is one important motivation for continuing technological advancement. Also compelling are the already apparent economic gains I discussed above which will continue to hasten in the decades ahead. The continued acceleration of many intertwined technologies are roads paved with gold (I use the plural here because technology is clearly not a single path). In a competitive environment, it is an economic imperative to go down these roads. Relinquishing technological advancement would be economic suicide for individuals, companies, and nations.</p>
<p>Which brings us to the issue of relinquishment, which is Bill Joy&#8217;s most controversial recommendation and personal commitment. I do feel that relinquishment at the right level is part of a responsible and constructive response to these genuine perils. The issue, however, is exactly this: at what level are we to relinquish technology?</p>
<p>Ted Kaczynski would have us renounce all of it. This, in my view, is neither desirable nor feasible, and the futility of such a position is only underscored by the senselessness of Kaczynski&#8217;s deplorable tactics.</p>
<p>Another level would be to forego certain fields; nanotechnology, for example, that might be regarded as too dangerous. But such sweeping strokes of relinquishment are equally untenable. Nanotechnology is simply the inevitable end result of the persistent trend toward miniaturization which pervades all of technology. It is far from a single centralized effort, but is being pursued by a myriad of projects with many diverse goals.</p>
<p>One observer wrote:</p>
<blockquote><p>&#8220;A further reason why industrial society cannot be reformed. . . is that modern technology is a unified system in which all parts are dependent on one another. You can&#8217;t get rid of the &#8220;bad&#8221; parts of technology and retain only the &#8220;good&#8221; parts. Take modern medicine, for example. Progress in medical science depends on progress in chemistry, physics, biology, computer science and other fields. Advanced medical treatments require expensive, high-tech equipment that can be made available only by a technologically progressive, economically rich society. Clearly you can&#8217;t have much progress in medicine without the whole technological system and everything that goes with it.&#8221;</p></blockquote>
<p>The observer I am quoting is, again, Ted Kaczynski. Although one might properly resist Kaczynski as an authority, I believe he is correct on the deeply entangled nature of the benefits and risks. However, Kaczynski and I clearly part company on our overall assessment on the relative balance between the two. Bill Joy and I have dialogued on this issue both publicly and privately, and we both believe that technology will and should progress, and that we need to be actively concerned with the dark side. If Bill and I disagree, it&#8217;s on the granularity of relinquishment that is both feasible and desirable.</p>
<p>Abandonment of broad areas of technology will only push them underground where development would continue unimpeded by ethics and regulation. In such a situation, it would be the less stable, less responsible practitioners (e.g., the terrorists) who would have all the expertise.</p>
<p>I do think that relinquishment at the right level needs to be part of our ethical response to the dangers of twenty first century technologies. One constructive example of this is the proposed ethical guideline by the Foresight Institute, founded by nanotechnology pioneer Eric Drexler, that nanotechnologists agree to relinquish the development of physical entities that can self-replicate in a natural environment. Another is a ban on self-replicating physical entities that contain their own codes for self-replication. In what nanotechnologist Ralph Merkle calls the &#8220;Broadcast Architecture,&#8221; such entities would have to obtain such codes from a centralized secure server, which would guard against undesirable replication. The Broadcast Architecture is impossible in the biological world, which represents at least one way in which nanotechnology can be made safer than biotechnology. In other ways, nanotech is potentially more dangerous because nanobots can be physically stronger than protein-based entities and more intelligent. It will eventually be possible to combine the two by having nanotechnology provide the codes within biological entities (replacing DNA), in which case biological entities can use the much safer Broadcast Architecture.</p>
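As a loose software analogy, the key property of the Broadcast Architecture described above can be illustrated in a few lines: an entity that carries no replication code of its own can replicate only while a central server cooperates. This is a sketch under my own assumptions; the class and server names are hypothetical, and real molecular nanotechnology would of course look nothing like Python objects.

```python
# Hypothetical sketch of the "Broadcast Architecture": replicators carry
# no replication instructions of their own and must request them from a
# central server, which can refuse or revoke access at any time.
# All names here are illustrative, not from any real system.

AUTHORIZED = {"nanobot-v1"}  # models the server currently permits

def broadcast_server(model):
    """Return replication instructions only for authorized models."""
    if model in AUTHORIZED:
        return "replication-code-for-" + model
    return None  # withholding the code halts replication everywhere

class Replicator:
    def __init__(self, model):
        self.model = model  # note: no embedded self-replication code

    def replicate(self):
        code = broadcast_server(self.model)
        if code is None:
            raise PermissionError("replication not authorized")
        return Replicator(self.model)

bot = Replicator("nanobot-v1")
child = bot.replicate()   # permitted while the server cooperates

AUTHORIZED.clear()        # revoke authorization: runaway replication stops
try:
    child.replicate()
except PermissionError:
    print("replication halted by the broadcast server")
```

The design choice being modeled is that the safeguard lives outside the replicators themselves, so a defect or subversion in any individual entity cannot by itself produce unrestrained replication.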
<p>Our ethics as responsible technologists should include such &#8220;fine grained&#8221; relinquishment, among other professional ethical guidelines. Other protections will need to include oversight by regulatory bodies, the development of technology-specific &#8220;immune&#8221; responses, as well as computer assisted surveillance by law enforcement organizations. Many people are not aware that our intelligence agencies already use advanced technologies such as automated word spotting to monitor a substantial flow of telephone conversations. As we go forward, balancing our cherished rights of privacy with our need to be protected from the malicious use of powerful twenty first century technologies will be one of many profound challenges. This is one reason that such issues as an encryption &#8220;trap door&#8221; (in which law enforcement authorities would have access to otherwise secure information) and the FBI &#8220;Carnivore&#8221; email-snooping system have been so contentious.</p>
<p>As a test case, we can take a small measure of comfort from how we have dealt with one recent technological challenge. There exists today a new form of fully nonbiological self replicating entity that didn&#8217;t exist just a few decades ago: the computer virus. When this form of destructive intruder first appeared, strong concerns were voiced that as they became more sophisticated, software pathogens had the potential to destroy the computer network medium they live in. Yet the &#8220;immune system&#8221; that has evolved in response to this challenge has been largely effective. Although destructive self-replicating software entities do cause damage from time to time, the injury is but a small fraction of the benefit we receive from the computers and communication links that harbor them. No one would suggest we do away with computers, local area networks, and the Internet because of software viruses.</p>
<p>One might counter that computer viruses do not have the lethal potential of biological viruses or of destructive nanotechnology. Although true, this strengthens my observation. The fact that computer viruses are not usually deadly to humans only means that more people are willing to create and release them. It also means that our response to the danger is that much less intense. Conversely, when it comes to self replicating entities that are potentially lethal on a large scale, our response on all levels will be vastly more serious.</p>
<p>Technology will remain a double edged sword, and the story of the Twenty First century has not yet been written. It represents vast power to be used for all humankind&#8217;s purposes. We have no choice but to work hard to apply these quickening technologies to advance our human values, despite what often appears to be a lack of consensus on what those values should be.</p>
<h2>Living Forever</h2>
<p>Once brain porting technology has been refined and fully developed, will this enable us to live forever? The answer depends on what we mean by living and dying. Consider what we do today with our personal computer files. When we change from one personal computer to a less obsolete model, we don&#8217;t throw all our files away; rather we copy them over to the new hardware. Although our software files do not necessarily continue their existence forever, the longevity of our personal computer software is completely separate and disconnected from the hardware that it runs on. When it comes to our personal mind file, however, when our human hardware crashes, the software of our lives dies with it. However, this will not continue to be the case when we have the means to store and restore the thousands of trillions of bytes of information represented in the pattern that we call our brains.</p>
<p>The longevity of one&#8217;s mind file will not be dependent, therefore, on the continued viability of any particular hardware medium. Ultimately software-based humans, albeit vastly extended beyond the severe limitations of humans as we know them today, will live out on the web, projecting bodies whenever they need or want them, including virtual bodies in diverse realms of virtual reality, holographically projected bodies, physical bodies comprised of nanobot swarms, and other forms of nanotechnology.</p>
<p>A software-based human will be free, therefore, from the constraints of any particular thinking medium. Today, we are each confined to a mere hundred trillion connections, but humans at the end of the twenty-first century can grow their thinking and thoughts without limit. We may regard this as a form of immortality, although it is worth pointing out that data and information do not necessarily last forever. Although not dependent on the viability of the hardware it runs on, the longevity of information depends on its relevance, utility, and accessibility. If you&#8217;ve ever tried to retrieve information from an obsolete form of data storage in an old obscure format (e.g., a reel of magnetic tape from a 1970 minicomputer), you will understand the challenges in keeping software viable. However, if we are diligent in maintaining our mind file, keeping current backups, and porting to current formats and mediums, then a form of immortality can be attained, at least for software-based humans. Our mind file&#8211;our personality, skills, memories&#8211;all of that is lost today when our biological hardware crashes. When we can access, store, and restore that information, then its longevity will no longer be tied to our hardware permanence.</p>
<p>Is this form of immortality the same concept as a physical human, as we know them today, living forever? In one sense it is, because as I pointed out earlier, our contemporary selves are not a constant collection of matter either. Only our pattern of matter and energy persists, and even that gradually changes. Similarly, it will be the pattern of a software human that persists and develops and changes gradually.</p>
<p>But is that person based on my mind file, who migrates across many computational substrates, and who outlives any particular thinking medium, really me? We come back to the same questions of consciousness and identity, issues that have been debated since the Platonic dialogues. As we go through the twenty-first century, these will not remain polite philosophical debates, but will be confronted as vital, practical, political, and legal issues.</p>
<p>A related question is &#8220;is death desirable?&#8221; A great deal of our effort goes into avoiding it. We make extraordinary efforts to delay it, and indeed often consider its intrusion a tragic event. Yet we might find it hard to live without it. We consider death as giving meaning to our lives. It gives importance and value to time. Time could become meaningless if there were too much of it.</p>
<h2>The Next Step in Evolution and the Purpose of Life</h2>
<p>But I regard the freeing of the human mind from its severe physical limitations of scope and duration as the necessary next step in evolution. Evolution, in my view, represents the purpose of life. That is, the purpose of life&#8211;and of our lives&#8211;is to evolve. The Singularity then is not a grave danger to be avoided. In my view, this next paradigm shift represents the goal of our civilization.</p>
<p>What does it mean to evolve? Evolution moves toward greater complexity, greater elegance, greater knowledge, greater intelligence, greater beauty, greater creativity, and more of other abstract and subtle attributes such as love. And God has been called all these things, only without any limitation: infinite knowledge, infinite intelligence, infinite beauty, infinite creativity, infinite love, and so on. Of course, even the accelerating growth of evolution never achieves an infinite level, but as it explodes exponentially, it certainly moves rapidly in that direction. So evolution moves inexorably toward our conception of God, albeit never quite reaching this ideal. Thus the freeing of our thinking from the severe limitations of its biological form may be regarded as an essential spiritual quest.</p>
<p>In making this statement, it is important to emphasize that terms like evolution, destiny, and spiritual quest are observations about the end result, not the basis for these predictions. I am not saying that technology will evolve to human levels and beyond simply because it is our destiny and because of the satisfaction of a spiritual quest. Rather my projections result from a methodology based on the dynamics underlying the (double) exponential growth of technological processes. The primary force driving technology is economic imperative. We are moving toward machines with human level intelligence (and beyond) as the result of millions of small advances, each with their own particular economic justification.</p>
<p>To use an example from my own experience at one of my companies (Kurzweil Applied Intelligence), whenever we came up with a slightly more intelligent version of speech recognition, the new version invariably had greater value than the earlier generation and, as a result, sales increased. It is interesting to note that in the example of speech recognition software, the three primary surviving competitors stayed very close to each other in the intelligence of their software. A few other companies that failed to do so (e.g., Speech Systems) went out of business. At any point in time, we would be able to sell the version prior to the latest version for perhaps a quarter of the price of the current version. As for versions of our technology that were two generations old, we couldn&#8217;t even give those away. This phenomenon is not only true for pattern recognition and other &#8220;AI&#8221; software, but applies to all products, from bread makers to cars. And if the product itself doesn&#8217;t exhibit some level of intelligence, then intelligence in the manufacturing and marketing methods has a major effect on the success and profitability of an enterprise.</p>
<p>There is a vital economic imperative to create more intelligent technology. Intelligent machines have enormous value. That is why they are being built. There are tens of thousands of projects that are advancing intelligent machines in diverse incremental ways. The support for &#8220;high tech&#8221; in the business community (mostly software) has grown enormously. When I started my optical character recognition (OCR) and speech synthesis company (Kurzweil Computer Products, Inc.) in 1974, there were only a half-dozen high technology IPOs that year. The number of such deals has increased one hundredfold and the number of dollars invested has increased by more than one thousandfold in the past 25 years. In the four years between 1995 and 1999 alone, high tech venture capital deals increased from just over $1 billion to approximately $15 billion.</p>
<p>We will continue to build more powerful computational mechanisms because doing so creates enormous value. We will reverse-engineer the human brain not simply because it is our destiny, but because there is valuable information to be found there that will provide insights in building more intelligent (and more valuable) machines. We would have to repeal capitalism and every vestige of economic competition to stop this progression.</p>
<p>By the second half of this next century, there will be no clear distinction between human and machine intelligence. On the one hand, we will have biological brains vastly expanded through distributed nanobot-based implants. On the other hand, we will have fully nonbiological brains that are copies of human brains, albeit also vastly extended. And we will have a myriad of other varieties of intimate connection between human thinking and the technology it has fostered.</p>
<p>Ultimately, nonbiological intelligence will dominate because it is growing at a double exponential rate, whereas for all practical purposes biological intelligence is at a standstill. Human thinking is stuck at 10<sup>26</sup> calculations per second (for all biological humans), and that figure will never appreciably change (except for a small increase resulting from genetic engineering). Nonbiological thinking is still millions of times less today, but the crossover will occur before 2030. By the end of the twenty-first century, nonbiological thinking will be trillions of trillions of times more powerful than that of its biological progenitors, although still of human origin. It will continue to be the human-machine civilization taking the next step in evolution.</p>
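<p>The crossover claim is a compounding calculation. Here is a back-of-envelope sketch (my illustration, not the essay's own model): the 10<sup>26</sup> cps biological total is the essay's figure, while the 10<sup>19</sup> cps starting level (roughly &#8220;millions of times less&#8221;) and the strict yearly doubling are assumed for the sake of the arithmetic.</p>

```python
# Back-of-envelope crossover estimate. The 1e26 cps biological total is
# the essay's own figure; the 1e19 cps starting level ("millions of
# times less") and one doubling per year are illustrative assumptions.
BIOLOGICAL_CPS = 1e26
year, nonbio_cps = 2001, 1e19

while nonbio_cps < BIOLOGICAL_CPS:
    nonbio_cps *= 2   # assumed yearly doubling of nonbiological computation
    year += 1

print(year)  # lands in the mid-2020s, i.e. before 2030
```

<p>Under these assumed numbers the loop closes a ten-million-fold gap in about 24 doublings; a slower doubling rate or smaller starting level shifts the date, but any gap of &#8220;millions&#8221; closes within a few decades of yearly doubling.</p>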
<p>Most forecasts of the future seem to ignore the revolutionary impact of the Singularity on our human destiny: the inevitable emergence of computers that match and ultimately vastly exceed the capabilities of the human brain, a development that will be no less important than the evolution of human intelligence itself some thousands of centuries ago. And the primary reason for this failure is that such forecasts are based on the intuitive but shortsighted linear view of history.</p>
<p>Before the next century is over, the Earth&#8217;s technology-creating species will merge with its computational technology. There will not be a clear distinction between human and machine. After all, what is the difference between a human brain enhanced a trillion fold by nanobot-based implants, and a computer whose design is based on high resolution scans of the human brain, and then extended a trillion-fold?</p>
<h2>Why SETI Will Fail (and why we are alone in the Universe)</h2>
<p>The law of accelerating returns implies that by 2099, the intelligence that will have emerged from human-machine civilization will be trillions of trillions of times more powerful than it is today, dominated of course by its nonbiological form.</p>
<p>So what does this have to do with SETI (the Search for Extra Terrestrial Intelligence)? The naïve view, going back to pre-Copernican days, was that the Earth was at the center of the Universe, and human intelligence its greatest gift (next to God). The more informed recent view is that even if the likelihood of a star having a planet with a technology creating species is very low (e.g., one in a million), there are so many stars (i.e., billions of trillions of them), that there are bound to be many with advanced technology.</p>
<p>This is the view behind SETI; it was my view until recently, and it remains the common informed view today. Although SETI has not yet looked everywhere, it has already covered a substantial portion of the Universe.<br />
<img decoding="async" src="https://www.writingsbyraykurzweil.com/images/chart32.gif" alt="" vspace="10" /><br />
<span class="PhotoCredit">Chart by Scientific American</span></p>
<p>In the above diagram (courtesy of Scientific American), we can see that SETI has already thoroughly searched all star systems within 10<sup>7</sup> light-years from Earth for alien civilizations capable of (and willing to) transmitting at a power of at least 10<sup>25</sup> watts, a so-called Type II civilization (and all star systems within 10<sup>6</sup> light-years for transmission of at least 10<sup>18</sup> watts, and so on). No sign of intelligence has yet been found.</p>
<p>In a recent email to my research assistant, Dr. Seth Shostak of the SETI Institute points out that a new comprehensive targeted search, called Project Phoenix, which has up to 100 times the sensitivity and covers a greater range of the radio dial as compared to previous searches, has only been applied thus far to 500 star systems, which is, of course only a minute fraction of the half trillion star systems in just our own galaxy.</p>
<p>However, according to my model, once a civilization achieves our own level (&#8220;Earth-level&#8221;) of radio transmission, it takes no more than one century, two at the most, to achieve what SETI calls a Type II civilization. If we assume that there are at least millions of radio-capable civilizations out there, and that these civilizations are spread out over millions (indeed billions) of years of development, then surely there ought to be millions that have achieved Type II status.</p>
<p>Incidentally, this is not an argument against the SETI project, which in my view should have the highest possible priority because the negative finding is no less significant than a positive result.</p>
<p>It is odd that we find the cosmos so silent. Where is everybody? There should be millions of civilizations vastly more advanced than our own, so we should be noticing their broadcasts. A sufficiently advanced civilization would not be likely to restrict its broadcasts to subtle signals on obscure frequencies. Why are they so silent, and so shy?</p>
<p>As I have studied the implications of the law of accelerating returns, I have come to a different view.</p>
<p>Because exponential growth is so explosive, it is the case that once a species develops computing technology, it is only a matter of a couple of centuries before the nonbiological form of their intelligence explodes. It permeates virtually all matter in their vicinity, and then inevitably expands outward close to the maximum speed that information can travel. Once the nonbiological intelligence emerging from that species&#8217; technology has saturated its vicinity (and the nature of this saturation is another complex issue, which I won&#8217;t deal with in this essay), it has no other way to continue to evolve but to expand outwardly. The expansion does not start out at the maximum speed, but quickly achieves a speed within a vanishingly small delta from the maximum speed.</p>
<p>What is the maximum speed? We currently understand this to be the speed of light, but there are already tantalizing hints that this may not be an absolute limit. Recent experiments have measured the flight time of photons at nearly twice the speed of light, a result of quantum uncertainty in their position. However, this result is not useful for this analysis, because it does not actually allow information to be communicated at faster than the speed of light, and we are fundamentally interested in communication speed.</p>
<p>Quantum entanglement has been measured at many times the speed of light, but this only communicates randomness (profound quantum randomness) at speeds far greater than the speed of light; again, this is not communication of information (but is of great interest for restoring encryption, after quantum computing destroys it). There is the potential for wormholes (or folds of the Universe in dimensions beyond the three visible ones), but this is not really traveling at faster than the speed of light; it just means that the topology of the Universe is not the simple three-dimensional space that naïve physics implies. But we already knew that. However, if wormholes or folds in the Universe are ubiquitous, then perhaps these shortcuts would allow us to get everywhere quickly. Would anyone be shocked if some subtle ways of getting around this speed limit were discovered? And however subtle they may be, sufficiently advanced technology will find ways to apply them. The point is that if there are ways around this limit (or any other currently understood limit), then the extraordinary levels of intelligence that our human-machine civilization will achieve will find those ways and exploit them.</p>
<p>So for now, we can say that ultra high levels of intelligence will expand outward at the speed of light, but recognize that this may not be the actual limit of the speed of expansion, or even if the limit is the speed of light that this limit may not restrict reaching other locations quickly.</p>
<p>Consider that the time spans for biological evolution are measured in millions and billions of years, so if there are other civilizations out there, they would be spread out by huge spans of time. If there are a lot of them, as contemporary thinking implies, then it is all but certain that at least some of them are ahead of us. That at least is the SETI assumption. And if they are ahead of us, they likely would be ahead of us by huge spans of time. The likelihood that any civilization that is ahead of us is ahead of us by only a few decades is extremely small.</p>
<p>If the SETI assumption that there are many (e.g., millions of) technological (at least radio-capable) civilizations is correct, then at least some of them (i.e., millions of them) would be way ahead of us. But it takes only a few centuries at most from the advent of computation for such a civilization to begin expanding outward at at least light speed. Given this, how can it be that we have not noticed them?</p>
<p>The conclusion I reach is that it is likely that there are no such other civilizations. In other words, we are in the lead. That&#8217;s right, our humble civilization with its Dodge pickup trucks, fried chicken fast food, and ethnic cleansings (and computation!) is in the lead.</p>
<p>Now how can that be? Isn&#8217;t this extremely unlikely given the billions of trillions of likely planets? Indeed it is very unlikely. But equally unlikely is the existence of our Universe with a set of laws of physics so exquisitely and precisely tuned to what is needed for the evolution of life to be possible. But by the Anthropic principle, if the Universe didn&#8217;t allow the evolution of life we wouldn&#8217;t be here to notice it. Yet here we are. So by the same Anthropic principle, we&#8217;re here in the lead in the Universe. Again, if we weren&#8217;t here, we would not be noticing it.</p>
<p>Let&#8217;s consider some arguments against this perspective.</p>
<p>Perhaps there are extremely advanced technological civilizations out there, but we are outside their light sphere of intelligence. That is, they haven&#8217;t gotten here yet. Okay, in this case, SETI will still fail because we won&#8217;t be able to see (or hear) them, at least not before we reach the Singularity.</p>
<p>Perhaps they are amongst us, but have decided to remain invisible to us. Incidentally, I have always considered the science fiction notion of large space ships with large squishy creatures similar to us to be very unlikely. Any civilization sophisticated enough to make the trip here would have long since passed the point of merging with their technology and would not need to send such physically bulky organisms and equipment. Such a civilization would not have any unmet material needs that require it to steal physical resources from us. They would be here for observation only, to gather knowledge, which is the only resource of value to such a civilization. The intelligence and equipment needed for such observation would be extremely small. In this case, SETI will still fail because if this civilization decided that it did not want us to notice it, then it would succeed in that desire. Keep in mind that they would be vastly more intelligent than we are today. Perhaps they will reveal themselves to us when we achieve the next level of our evolution, specifically merging our biological brains with our technology, which is to say, after the Singularity. Moreover, given that the SETI assumption implies that there are millions of such highly developed civilizations, it seems odd that all of them have made the same decision to stay out of our way.</p>
<h2>Why Intelligence is More Powerful than Physics</h2>
<p>As intelligence saturates the matter and energy available to it, it turns dumb matter into smart matter. Although smart matter still nominally follows the laws of physics, it is so exquisitely intelligent that it can harness the most subtle aspects of the laws to manipulate matter and energy to its will. So it would at least appear that intelligence is more powerful than physics.</p>
<p>Perhaps what I should say is that intelligence is more powerful than cosmology. That is, once matter evolves into smart matter (matter fully saturated with intelligence), it can manipulate matter and energy to do whatever it wants. This perspective has not been considered in discussions of future cosmology. It is assumed that intelligence is irrelevant to events and processes on a cosmological scale. Stars are born and die; galaxies go through their cycles of creation and destruction. The Universe itself was born in a big bang and will end with a crunch or a whimper, we&#8217;re not yet sure which. But intelligence has little to do with it. Intelligence is just a bit of froth, an ebullition of little creatures darting in and out of inexorable universal forces. The mindless mechanism of the Universe is winding up or down to a distant future, and there&#8217;s nothing intelligence can do about it.</p>
<p>That&#8217;s the common wisdom, but I don&#8217;t agree with it. Intelligence will be more powerful than these impersonal forces. Once a planet yields a technology creating species and that species creates computation (as has happened here on Earth), it is only a matter of a few centuries before its intelligence saturates the matter and energy in its vicinity, and it begins to expand outward at the speed of light or greater. It will then overcome gravity (through exquisite and vast technology) and other cosmological forces (or, to be fully accurate, will maneuver and control these forces) and create the Universe it wants. This is the goal of the Singularity.</p>
<p>What kind of Universe will that be? Well, just wait and see.</p>
<h2>Plan to Stick Around</h2>
<p>Most of you (again I&#8217;m using the plural form of the word) are likely to be around to see the Singularity. The expanding human life span is another one of those exponential trends. In the eighteenth century, we added a few days every year to human longevity; during the nineteenth century we added a couple of weeks each year; and now we&#8217;re adding almost half a year every year. With the revolutions in genomics, proteomics, rational drug design, therapeutic cloning of our own organs and tissues, and related developments in bio-information sciences, we will be adding more than a year every year within ten years. So take care of yourself the old-fashioned way for just a little while longer, and you may actually get to experience the next fundamental paradigm shift in our destiny.<br />
<img decoding="async" src="https://www.writingsbyraykurzweil.com/images/chart33.jpg" alt="" vspace="10" /></p>
<p>Copyright (C) Raymond Kurzweil 2001</p>
<p>Chart Graphics by Brett Rampata/Digital Organism</p>
]]></content:encoded>
			<wfw:commentRss>https://www.writingsbyraykurzweil.com/the-law-of-accelerating-returns/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>A singularity q + a.</title>
		<link>https://www.writingsbyraykurzweil.com/a-singularity-q-a</link>
		<comments>https://www.writingsbyraykurzweil.com/a-singularity-q-a#respond</comments>
		<pubDate>Tue, 20 Feb 2024 09:52:09 +0000</pubDate>
								<dc:creator></dc:creator>
		
		
				<category><![CDATA[essays]]></category>

		<guid isPermaLink="false">https://www.writingsbyraykurzweil.com/?p=370821</guid>
		<description><![CDATA[year: 2005 Questions and Answers So what is the Singularity? Within a quarter century, nonbiological intelligence will match the range and subtlety of human intelligence. It will then soar past it because of the continuing acceleration of information-based technologies, as well as the ability of machines to instantly share their knowledge. Intelligent nanorobots will be [&#8230;]]]></description>
			<content:encoded><![CDATA[<p>year: 2005</p>
<p><strong>Questions and Answers</strong></p>
<p><em>So what is the Singularity?</em></p>
<p>Within a quarter century, nonbiological intelligence will match the range and subtlety of human intelligence. It will then soar past it because of the continuing acceleration of information-based technologies, as well as the ability of machines to instantly share their knowledge. Intelligent nanorobots will be deeply integrated in our bodies, our brains, and our environment, overcoming pollution and poverty, providing vastly extended longevity, full-immersion virtual reality incorporating all of the senses (like <em>The Matrix</em>), “experience beaming” (like “Being John Malkovich”), and vastly enhanced human intelligence. The result will be an intimate merger between the technology-creating species and the technological evolutionary process it spawned.</p>
<p><em>And that’s the Singularity?</em></p>
<p>No, that’s just the precursor. Nonbiological intelligence will have access to its own design and will be able to improve itself in an increasingly rapid redesign cycle. We’ll get to a point where technical progress will be so fast that unenhanced human intelligence will be unable to follow it. That will mark the Singularity.</p>
<p><em>When will that occur?</em></p>
<p>I set the date for the Singularity—representing a profound and disruptive transformation in human capability—as 2045. The nonbiological intelligence created in that year will be one billion times more powerful than all human intelligence today.</p>
<p><em>Why is this called the Singularity?</em></p>
<p>The term “Singularity” in my book is comparable to the use of this term by the physics community. Just as we find it hard to see beyond the event horizon of a black hole, we also find it difficult to see beyond the event horizon of the historical Singularity. How can we, with our limited biological brains, imagine what our future civilization, with its intelligence multiplied trillions-fold, will be capable of thinking and doing? Nevertheless, just as we can draw conclusions about the nature of black holes through our conceptual thinking, despite never having actually been inside one, our thinking today is powerful enough to have meaningful insights into the implications of the Singularity. That’s what I’ve tried to do in this book.</p>
<p><em>Okay, let’s break this down. It seems a key part of your thesis is that we will be able to capture the intelligence of our brains in a machine.</em></p>
<p>Indeed.</p>
<p><em>So how are we going to achieve that?</em></p>
<p>We can break this down further into hardware and software requirements. In the book, I show how we need about 10 quadrillion (10<sup>16</sup>) calculations per second (cps) to provide a functional equivalent to all the regions of the brain. Some estimates are lower than this by a factor of 100. Supercomputers are already at 100 trillion (10<sup>14</sup>) cps, and will hit 10<sup>16</sup> cps around the end of this decade. Several supercomputers with 1 quadrillion cps are already on the drawing board, with two Japanese efforts targeting 10 quadrillion cps around the end of the decade. By 2020, 10 quadrillion cps will be available for around $1,000. Achieving the hardware requirement was controversial when my last book on this topic, The Age of Spiritual Machines, came out in 1999, but is now pretty much a mainstream view among informed observers. Now the controversy is focused on the algorithms.</p>
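<p>The “end of this decade” hardware claim is simple doubling arithmetic. A sketch (the 10<sup>14</sup> and 10<sup>16</sup> cps figures are from the text; the strict yearly doubling of capability is the modeling assumption):</p>

```python
import math

REQUIRED_CPS = 1e16   # functional brain equivalent, per the text
CURRENT_CPS = 1e14    # supercomputer level cited for ~2005

# Number of yearly doublings needed to close the 100x gap,
# assuming capability doubles once per year.
doublings = math.ceil(math.log2(REQUIRED_CPS / CURRENT_CPS))
crossover_year = 2005 + doublings
print(doublings, crossover_year)  # 7 doublings -> 2012
```

<p>Seven doublings of a 2005-era machine land around 2012, consistent with the “around the end of this decade” projection; a slower 18-month doubling stretches this by only a few years.</p>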
<p><em>And how will we recreate the algorithms of human intelligence?</em></p>
<p>To understand the principles of human intelligence we need to reverse-engineer the human brain. Here, progress is far greater than most people realize. The spatial and temporal resolution of brain scanning is progressing at an exponential rate, roughly doubling each year, like most everything else having to do with information. Scanning tools can now see individual interneuronal connections and watch them fire in real time. Already, we have mathematical models and simulations of a couple dozen regions of the brain, including the cerebellum, which comprises more than half the neurons in the brain. IBM is now creating a simulation of about 10,000 cortical neurons, including tens of millions of connections. The first version will simulate the electrical activity, and a future version will also simulate the relevant chemical activity. By the mid 2020s, it’s conservative to conclude that we will have effective models for all of the brain.</p>
<p><em>So at that point we’ll just copy a human brain into a supercomputer?</em></p>
<p>I would rather put it this way: At that point, we’ll have a full understanding of the methods of the human brain. One benefit will be a deep understanding of ourselves, but the key implication is that it will expand the toolkit of techniques we can apply to create artificial intelligence. We will then be able to create nonbiological systems that match human intelligence in the ways that humans are now superior, for example, our pattern-recognition abilities. These superintelligent computers will be able to do things we are not able to do, such as share knowledge and skills at electronic speeds.</p>
<p>By 2030, a thousand dollars of computation will be about a thousand times more powerful than a human brain. Keep in mind also that computers will not be organized as discrete objects as they are today. There will be a web of computing deeply integrated into the environment, our bodies and brains.</p>
<p><em>You mentioned the AI tool kit. Hasn’t AI failed to live up to its expectations?</em></p>
<p>There was a boom and bust cycle in AI during the 1980s, similar to what we saw recently in e-commerce and telecommunications. Such boom-bust cycles are often harbingers of true revolutions; recall the railroad boom and bust in the 19th century. But just as the Internet “bust” was not the end of the Internet, the so-called “AI Winter” was not the end of the story for AI either. There are hundreds of applications of “narrow AI” (machine intelligence that equals or exceeds human intelligence for specific tasks) now permeating our modern infrastructure. Every time you send an email or make a cell phone call, intelligent algorithms route the information. AI programs diagnose electrocardiograms with an accuracy rivaling doctors, evaluate medical images, fly and land airplanes, guide intelligent autonomous weapons, make automated investment decisions for over a trillion dollars of funds, and guide industrial processes. These were all research projects a couple of decades ago. If all the intelligent software in the world were to suddenly stop functioning, modern civilization would grind to a halt. Of course, our AI programs are not intelligent enough to organize such a conspiracy, at least not yet.</p>
<p><em>Why don’t more people see these profound changes ahead?</em></p>
<p>Hopefully after they read my new book, they will. But the primary failure is the inability of many observers to think in exponential terms. Most long-range forecasts of what is technically feasible in future time periods dramatically underestimate the power of future developments because they are based on what I call the “intuitive linear” view of history rather than the “historical exponential” view. My models show that we are doubling the paradigm-shift rate every decade. Thus the 20th century was gradually speeding up to the rate of progress at the end of the century; its achievements, therefore, were equivalent to about twenty years of progress at the rate in 2000. We’ll make another twenty years of progress in just fourteen years (by 2014), and then do the same again in only seven years. To express this another way, we won’t experience one hundred years of technological advance in the 21st century; we will witness on the order of 20,000 years of progress (again, when measured by the rate of progress in 2000), or about 1,000 times greater than what was achieved in the 20th century.</p>
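<p>One way to recover the “on the order of 20,000 years” figure (my reconstruction of the arithmetic, not the book’s exact model): credit each decade of the 21st century with progress at its end-of-decade rate, which doubles every decade relative to the year-2000 rate.</p>

```python
# Each decade k (k = 0..9) runs at 2**(k+1) times the year-2000 rate by
# its end, so its 10 calendar years contribute 10 * 2**(k+1)
# "year-2000-equivalent" years of progress. Valuing each decade at its
# end-of-decade rate is my assumption to reproduce the essay's figure;
# using start-of-decade rates instead gives about half this total.
total = sum(10 * 2 ** (decade + 1) for decade in range(10))
print(total)  # 20,460 -- "on the order of 20,000 years"
```
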
<p>The exponential growth of information technologies is even greater: we’re doubling the power of information technologies, as measured by price-performance, bandwidth, capacity and many other types of measures, about every year. That’s a factor of a thousand in ten years, a million in twenty years, and a billion in thirty years. This goes far beyond Moore’s law (the shrinking of transistors on an integrated circuit, allowing us to double the price-performance of electronics each year). Electronics is just one example of many. As another example, it took us 14 years to sequence HIV; we recently sequenced SARS in only 31 days.</p>
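<p>The thousand/million/billion progression above is just compounding powers of two; a quick check of the arithmetic:</p>

```python
# A yearly doubling compounds to a factor of 2**n after n years.
factors = {years: 2 ** years for years in (10, 20, 30)}
print(factors)
# 2**10 = 1,024 (~a thousand), 2**20 = 1,048,576 (~a million),
# 2**30 = 1,073,741,824 (~a billion)
```
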
<p><em>So this acceleration of information technologies applies to biology as well?</em></p>
<p>Absolutely. It’s not just computer devices like cell phones and digital cameras that are accelerating in capability. Ultimately, everything of importance will be comprised essentially of information technology. With the advent of nanotechnology-based manufacturing in the 2020s, we’ll be able to use inexpensive table-top devices to manufacture on-demand just about anything from very inexpensive raw materials using information processes that will rearrange matter and energy at the molecular level.</p>
<p>We’ll meet our energy needs using nanotechnology-based solar panels that will capture the energy in 0.03 percent of the sunlight that falls on the Earth, which is all we need to meet our projected energy needs in 2030. We’ll store the energy in highly distributed fuel cells.</p>
<p><em>I want to come back to both biology and nanotechnology, but how can you be so sure of these developments? Isn’t technical progress on specific projects essentially unpredictable?</em></p>
<p>Predicting specific projects is indeed not feasible. But the result of the overall complex, chaotic evolutionary process of technological progress is predictable.</p>
<p>People intuitively assume that the current rate of progress will continue for future periods. Even for those who have been around long enough to experience how the pace of change increases over time, unexamined intuition leaves one with the impression that change occurs at the same rate that we have experienced most recently. From the mathematician’s perspective, the reason for this is that an exponential curve looks like a straight line when examined for only a brief duration. As a result, even sophisticated commentators, when considering the future, typically use the current pace of change to determine their expectations in extrapolating progress over the next ten years or one hundred years. This is why I describe this way of looking at the future as the “intuitive linear” view. But a serious assessment of the history of technology reveals that technological change is exponential. Exponential growth is a feature of any evolutionary process, of which technology is a primary example.</p>
<p>As I show in the book, this has also been true of biological evolution. Indeed, technological evolution emerges from biological evolution. You can examine the data in different ways, on different timescales, and for a wide variety of technologies, ranging from electronic to biological, as well as for their implications, ranging from the amount of human knowledge to the size of the economy, and you get the same exponential—not linear—progression. I have over forty graphs in the book from a broad variety of fields that show the exponential nature of progress in information-based measures. For the price-performance of computing, this goes back over a century, well before Gordon Moore was even born.</p>
<p><em>Aren’t there a lot of predictions of the future from the past that look a little ridiculous now?</em></p>
<p>Yes, any number of bad predictions from other futurists in earlier eras can be cited to support the notion that we cannot make reliable predictions. In general, these prognosticators were not using a methodology based on a sound theory of technology evolution. I say this not just looking backwards now. I’ve been making accurate forward-looking predictions for over twenty years based on these models.</p>
<p><em>But how can it be the case that we can reliably predict the overall progression of these technologies if we cannot even predict the outcome of a single project?</em></p>
<p>Predicting which company or product will succeed is indeed very difficult, if not impossible. The same difficulty occurs in predicting which technical design or standard will prevail. For example, how will the wireless-communication protocols Wimax, CDMA, and 3G fare over the next several years? However, as I argue extensively in the book, we find remarkably precise and predictable exponential trends when assessing the overall effectiveness (as measured in a variety of ways) of information technologies. And as I mentioned above, information technology will ultimately underlie everything of value.</p>
<p><em>But how can that be?</em></p>
<p>We see examples in other areas of science of very smooth and reliable outcomes resulting from the interaction of a great many unpredictable events. Consider that predicting the path of a single molecule in a gas is essentially impossible, but predicting the properties of the entire gas—comprised of a great many chaotically interacting molecules—can be done very reliably through the laws of thermodynamics. Analogously, it is not possible to reliably predict the results of a specific project or company, but the overall capabilities of information technology, comprised of many chaotic activities, can nonetheless be dependably anticipated through what I call &#8220;the law of accelerating returns.&#8221;</p>
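<p>The gas analogy can be simulated directly (a toy illustration of mine, not from the text): each individual “molecule” velocity is random and unpredictable, yet a statistic aggregated over many of them is stable from run to run.</p>

```python
import random

def mean_abs_velocity(n_molecules=100_000, seed=None):
    """Average |velocity| over many molecules with random unit-Gaussian
    velocities -- individually unpredictable, collectively stable."""
    rng = random.Random(seed)
    return sum(abs(rng.gauss(0.0, 1.0)) for _ in range(n_molecules)) / n_molecules

# Two independent runs agree closely, near the theoretical mean
# sqrt(2/pi) ~ 0.798, even though no single draw is predictable.
print(mean_abs_velocity(seed=1), mean_abs_velocity(seed=2))
```

<p>This is the same statistical-mechanics point as thermodynamics: the aggregate of many chaotic events is dependable even when each event is not.</p>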
<p><em>What will the impact of these developments be?</em></p>
<p>Radical life extension, for one.</p>
<p><em>Sounds interesting, how does that work?</em></p>
<p>In the book, I talk about three great overlapping revolutions that go by the letters “GNR,” which stand for genetics, nanotechnology, and robotics. Each will provide a dramatic increase in human longevity, among other profound impacts. We’re in the early stages of the genetics—also called biotechnology—revolution right now. Biotechnology is providing the means to actually change your genes: not just designer babies but designer baby boomers. We’ll also be able to rejuvenate all of your body’s tissues and organs by transforming your skin cells into youthful versions of every other cell type. Already, new drug development is precisely targeting key steps in the process of atherosclerosis (the cause of heart disease), cancerous tumor formation, and the metabolic processes underlying each major disease and aging process. This revolution will reach its peak in the second decade of this century, at which point we’ll be able to overcome most major diseases and dramatically slow down the aging process.</p>
<p>That will bring us to the nanotechnology revolution, which will achieve maturity in the 2020s. With nanotechnology, we will be able to go beyond the limits of biology, and replace your current “human body version 1.0” with a dramatically upgraded version 2.0, providing radical life extension.</p>
<p><em>And how does that work?</em></p>
<p>The “killer app” of nanotechnology is “nanobots,” which are blood-cell-sized robots that can travel in the bloodstream destroying pathogens, removing debris, correcting DNA errors, and reversing aging processes.</p>
<p><em>Human body version 2.0?</em></p>
<p>We’re already in the early stages of augmenting and replacing each of our organs, even portions of our brains, with neural implants, the most recent versions of which allow patients to download new software to their neural implants from outside their bodies. In the book, I describe how each of our organs will ultimately be replaced. For example, nanobots could deliver to our bloodstream an optimal set of all the nutrients, hormones, and other substances we need, as well as remove toxins and waste products. The gastrointestinal tract could be reserved for culinary pleasures rather than the tedious biological function of providing nutrients. After all, we’ve already in some ways separated the communication and pleasurable aspects of sex from its biological function.</p>
<p><em>And the third revolution?</em></p>
<p>The robotics revolution, which really refers to “strong” AI, that is, artificial intelligence at the human level, which we talked about earlier. We’ll have both the hardware and software to recreate human intelligence by the end of the 2020s. We’ll be able to improve these methods and harness the speed, memory capabilities, and knowledge-sharing ability of machines.</p>
<p>We’ll ultimately be able to scan all the salient details of our brains from inside, using billions of nanobots in the capillaries. We can then back up the information. Using nanotechnology-based manufacturing, we could recreate your brain, or better yet reinstantiate it in a more capable computing substrate.</p>
<p><em>Which means?</em></p>
<p>Our biological brains use chemical signaling, which transmits information at only a few hundred feet per second. Electronics is already millions of times faster than this. In the book, I show how one cubic inch of nanotube circuitry would be about one hundred million times more powerful than the human brain. So we’ll have more powerful means of instantiating our intelligence than the extremely slow speeds of our interneuronal connections.</p>
<p><em>So we’ll just replace our biological brains with circuitry?</em></p>
<p>I see this starting with nanobots in our bodies and brains. The nanobots will keep us healthy, provide full-immersion virtual reality from within the nervous system, provide direct brain-to-brain communication over the Internet, and otherwise greatly expand human intelligence. But keep in mind that nonbiological intelligence is doubling in capability each year, whereas our biological intelligence is essentially fixed in capacity. As we get to the 2030s, the nonbiological portion of our intelligence will predominate.</p>
<p><em>The closest life extension technology, however, is biotechnology, isn’t that right?</em></p>
<p>There’s certainly overlap in the G, N, and R revolutions, but that’s essentially correct.</p>
<p><em>So tell me more about how genetics or biotechnology works.</em></p>
<p>As we are learning about the information processes underlying biology, we are devising ways of mastering them to overcome disease and aging and extend human potential. One powerful approach is to start with biology&#8217;s information backbone: the genome. With gene technologies, we&#8217;re now on the verge of being able to control how genes express themselves. We now have a powerful new tool called RNA interference (RNAi), which is capable of turning specific genes off. It blocks the messenger RNA of specific genes, preventing them from creating proteins. Since viral diseases, cancer, and many other diseases use gene expression at some crucial point in their life cycle, this promises to be a breakthrough technology. One gene we’d like to turn off is the fat insulin receptor gene, which tells the fat cells to hold on to every calorie. When that gene was blocked in mice, those mice ate a lot but remained thin and healthy, and actually lived 20 percent longer.</p>
<p>New means of adding new genes, called gene therapy, are also emerging that have overcome earlier problems with achieving precise placement of the new genetic information. One company I’m involved with, United Therapeutics, cured pulmonary hypertension in animals using a new form of gene therapy and it has now been approved for human trials.</p>
<p><em>So we’re going to essentially reprogram our DNA.</em></p>
<p>That’s a good way to put it, but that’s only one broad approach. Another important line of attack is to regrow our own cells, tissues, and even whole organs, and introduce them into our bodies without surgery. One major benefit of this “therapeutic cloning” technique is that we will be able to create these new tissues and organs from versions of our cells that have also been made younger—the emerging field of rejuvenation medicine. For example, we will be able to create new heart cells from your skin cells and introduce them into your system through the bloodstream. Over time, your heart cells get replaced with these new cells, and the result is a rejuvenated “young” heart with your own DNA.</p>
<p>Drug discovery was once a matter of finding substances that produced some beneficial effect without excessive side effects. This process was similar to early humans’ tool discovery, which was limited to simply finding rocks and natural implements that could be used for helpful purposes. Today, we are learning the precise biochemical pathways that underlie both disease and aging processes, and are able to design drugs to carry out precise missions at the molecular level. The scope and scale of these efforts is vast.</p>
<p>But perfecting our biology will only get us so far. The reality is that biology will never be able to match what we will be capable of engineering, now that we are gaining a deep understanding of biology&#8217;s principles of operation.</p>
<p><em>Isn’t nature optimal?</em></p>
<p>Not at all. Our interneuronal connections compute at about 200 transactions per second, at least a million times slower than electronics. As another example, a nanotechnology theorist, Rob Freitas, has a conceptual design for nanobots that replace our red blood cells. A conservative analysis shows that if you replaced 10 percent of your red blood cells with Freitas’ “respirocytes,” you could sit at the bottom of a pool for four hours without taking a breath.</p>
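The “at least a million times slower” figure can be sanity-checked with round numbers (both values below are my own order-of-magnitude assumptions for illustration):

```python
# Order-of-magnitude check: interneuronal connections vs. electronics.
neuron_ops_per_sec = 200                 # ~200 transactions/second, as stated above
transistor_ops_per_sec = 1_000_000_000   # a transistor switching ~1e9 times/second (assumption)

ratio = transistor_ops_per_sec / neuron_ops_per_sec
print(f"electronics ~{ratio:,.0f}x faster")
```

This gives a factor of five million, consistent with the claim that biological signaling is at least a million times slower than electronics.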
<p><em>If people stop dying, isn’t that going to lead to overpopulation?</em></p>
<p>A common mistake that people make when considering the future is to envision a major change to today’s world, such as radical life extension, as if nothing else were going to change. The GNR revolutions will result in other transformations that address this issue. For example, nanotechnology will enable us to create virtually any physical product from information and very inexpensive raw materials, leading to radical wealth creation. We’ll have the means to meet the material needs of any conceivable size population of biological humans. Nanotechnology will also provide the means of cleaning up environmental damage from earlier stages of industrialization.</p>
<p><em>So we’ll overcome disease, pollution, and poverty—sounds like a utopian vision.</em></p>
<p>It’s true that the dramatic scale of the technologies of the next couple of decades will enable human civilization to overcome problems that we have struggled with for eons. But these developments are not without their dangers. Technology is a double-edged sword—we don’t have to look past the 20th century to see the intertwined promise and peril of technology.</p>
<p><em>What sort of perils?</em></p>
<p>G, N, and R each have their downsides. The existential threat from genetic technologies is already here: the same technology that will soon make major strides against cancer, heart disease, and other diseases could also be employed by a bioterrorist to create a bioengineered virus that combines ease of transmission, deadliness, and stealthiness (that is, a long incubation period). The tools and knowledge to do this are far more widespread than the tools and knowledge to create an atomic bomb, and the impact could be far worse.</p>
<p><em>So maybe we shouldn’t go down this road.</em></p>
<p>It’s a little late for that. But the idea of relinquishing new technologies such as biotechnology and nanotechnology is already being advocated. I argue in the book that this would be the wrong strategy. Besides depriving human society of the profound benefits of these technologies, such a strategy would actually make the dangers worse by driving development underground, where responsible scientists would not have easy access to the tools needed to defend us.</p>
<p><em>So how do we protect ourselves?</em></p>
<p>I discuss strategies for protecting against dangers from abuse or accidental misuse of these very powerful technologies in chapter 8. The overall message is that we need to give a higher priority to preparing protective strategies and systems. We need to put a few more stones on the defense side of the scale. I’ve given testimony to Congress on a specific proposal for a “Manhattan Project”-style effort to create a rapid-response system that could protect society from a new virulent biological virus. One strategy would be to use RNAi, which has been shown to be effective against viral diseases. We would set up a system that could quickly sequence a new virus, prepare an RNA interference medication, and rapidly gear up production. We have the knowledge to create such a system, but we have not done so. We need to have something like this in place before it’s needed.</p>
<p>Ultimately, however, nanotechnology will provide a completely effective defense against biological viruses.</p>
<p><em>But doesn’t nanotechnology have its own self-replicating danger?</em></p>
<p>Yes, but that potential won’t exist for a couple more decades. The existential threat from engineered biological viruses exists right now.</p>
<p><em>Okay, but how will we defend against self-replicating nanotechnology?</em></p>
<p>There are already proposals for ethical standards for nanotechnology that are based on the Asilomar conference standards that have worked well thus far in biotechnology. These standards will be effective against unintentional dangers. For example, we do not need to provide self-replication to accomplish nanotechnology manufacturing.</p>
<p><em>But what about intentional abuse, as in terrorism?</em></p>
<p>We’ll need to create a nanotechnology immune system—good nanobots that can protect us from the bad ones.</p>
<p><em>Blue goo to protect us from the gray goo!</em></p>
<p>Yes, well put. And ultimately we’ll need the nanobots comprising the immune system to be self-replicating. I’ve debated this particular point with a number of other theorists, but I show in the book why the nanobot immune system we put in place will need the ability to self-replicate. That’s basically the same “lesson” that biological evolution learned.</p>
<p>Ultimately, however, strong AI will provide a completely effective defense against self-replicating nanotechnology.</p>
<p><em>Okay, what’s going to protect us against a pathological AI?</em></p>
<p>Yes, well, that would have to be a yet more intelligent AI.</p>
<p><em>This is starting to sound like that story about the universe being on the back of a turtle, and that turtle standing on the back of another turtle, and so on all the way down. So what if this more intelligent AI is unfriendly? Another even smarter AI?</em></p>
<p>History teaches us that the more intelligent civilization—the one with the most advanced technology—prevails. But I do have an overall strategy for dealing with unfriendly AI, which I discuss in chapter 8.</p>
<p><em>Okay, so I’ll have to read the book for that one. But aren’t there limits to exponential growth? You know the story about rabbits in Australia—they didn’t keep growing exponentially forever.</em></p>
<p>There are limits to the exponential growth inherent in each paradigm. Moore’s law was not the first paradigm to bring exponential growth to computing, but rather the fifth. In the 1950s they were shrinking vacuum tubes to keep the exponential growth going and then that paradigm hit a wall. But the exponential growth of computing didn’t stop. It kept going, with the new paradigm of transistors taking over. Each time we can see the end of the road for a paradigm, it creates research pressure to create the next one. That’s happening now with Moore’s law, even though we are still about fifteen years away from the end of our ability to shrink transistors on a flat integrated circuit. We’re making dramatic progress in creating the sixth paradigm, which is three-dimensional molecular computing.</p>
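The succession of paradigms can be sketched as overlapping S-curves (every onset, ceiling, and rate below is hypothetical, chosen only to illustrate the shape of the argument): each paradigm saturates, but the next one takes over at a tenfold-higher ceiling, so the overall envelope keeps growing exponentially.

```python
import math

def s_curve(t, onset, ceiling):
    """One paradigm: logistic growth that saturates at `ceiling`."""
    return ceiling / (1 + math.exp(-(t - onset)))

# Five hypothetical paradigms; each arrives as its predecessor matures,
# with a ceiling ten times higher than the last.
paradigms = [(10 * k, 10.0 ** (k + 1)) for k in range(5)]

def capability(t):
    """Overall capability: the sum of all paradigms active at time t."""
    return sum(s_curve(t, onset, ceiling) for onset, ceiling in paradigms)

# Sampled every 10 time units, the envelope climbs roughly tenfold
# per interval even though every individual curve flattens out.
for t in (5, 15, 25, 35):
    print(t, round(capability(t)))
```

Each individual S-curve hits a wall, just as vacuum tubes did, yet the sum behaves like a single smooth exponential, which is the essay's point about paradigm succession.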
<p><em>But isn’t there an overall limit to our ability to expand the power of computation?</em></p>
<p>Yes, I discuss these limits in the book. The ultimate two-pound computer could provide 10<sup>42</sup> cps, which will be about 10 quadrillion (10<sup>16</sup>) times more powerful than all human brains put together today. And that’s if we restrict the computer to staying at a cold temperature. If we allow it to get hot, we could improve that by a factor of another 100 million. And, of course, we’ll be devoting more than two pounds of matter to computing. Ultimately, we’ll use a significant portion of the matter and energy in our vicinity. So, yes, there are limits, but they’re not very limiting.</p>
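The arithmetic behind the “10 quadrillion” figure can be reproduced with commonly assumed round numbers (the per-brain estimate and population figure are my assumptions, not from the essay):

```python
ultimate_computer_cps = 1e42  # claimed capacity of the two-pound "ultimate" computer
brain_cps = 1e16              # rough functional estimate for one human brain (assumption)
population = 1e10             # order-of-magnitude world population (assumption)

all_brains_cps = brain_cps * population          # ~1e26 cps for all brains combined
advantage = ultimate_computer_cps / all_brains_cps

print(f"{advantage:.0e}")  # ~1e+16, i.e. the "10 quadrillion" factor
```

Under these assumptions the ratio comes out to about 10<sup>16</sup>, matching the figure quoted above.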
<p><em>And when we saturate the ability of the matter and energy in our solar system to support intelligent processes, what happens then?</em></p>
<p>Then we’ll expand to the rest of the Universe.</p>
<p><em>Which will take a long time, I presume.</em></p>
<p>Well, that depends on whether we can use wormholes to get to other places in the Universe quickly, or otherwise circumvent the speed of light. If wormholes are feasible, and analyses show they are consistent with general relativity, we could saturate the universe with our intelligence within a couple of centuries. I discuss the prospects for this in chapter 6. But regardless of speculation on wormholes, we’ll get to the limits of computing in our solar system within this century. At that point, we’ll have expanded the powers of our intelligence by trillions of trillions.</p>
<p><em>Getting back to life extension, isn’t it natural to age, to die?</em></p>
<p>Other natural things include malaria, Ebola, appendicitis, and tsunamis. Many natural things are worth changing. Aging may be “natural,” but I don’t see anything positive in losing my mental agility, sensory acuity, physical limberness, sexual desire, or any other human ability.</p>
<p>In my view, death is a tragedy. It&#8217;s a tremendous loss of personality, skills, knowledge, relationships. We&#8217;ve rationalized it as a good thing because that&#8217;s really been the only alternative we&#8217;ve had. But disease, aging, and death are problems we are now in a position to overcome.</p>
<p><em>Wait, you said that the golden era of biotechnology was still a decade away. We don’t have radical life extension today, do we?</em></p>
<p>In my last book, <em>Fantastic Voyage: Live Long Enough to Live Forever</em>, which I coauthored with Terry Grossman, M.D., we describe a detailed and personalized program you can implement now (which we call “bridge one”) that will enable most people to live long enough to get to the mature phase of the biotechnology revolution (“bridge two”). That in turn will get us to “bridge three,” nanotechnology and strong AI, which will enable us to live indefinitely.</p>
<p><em>Okay, but won’t it get boring to live many hundreds of years?</em></p>
<p>If humans lived many hundreds of years with no other change in the nature of human life, then, yes, that would lead to a deep ennui. But the same nanobots in the bloodstream that will keep us healthy—by destroying pathogens and reversing aging processes—will also vastly augment our intelligence and experiences. As is its nature, the nonbiological portion of our intelligence will expand its powers exponentially, so it will ultimately predominate. The result will be accelerating change—so we will not be bored.</p>
<p><em>Won’t the Singularity create the ultimate “digital divide” due to unequal access to radical life extension and superintelligent computers?</em></p>
<p>We need to consider an important feature of the law of accelerating returns, which is a 50 percent annual deflation factor for information technologies, a factor which itself will increase. Technologies start out affordable only by the wealthy, but at this stage, they actually don’t work very well. At the next stage, they’re merely expensive, and work a bit better. Then they work quite well and are inexpensive. Ultimately, they’re almost free. Cell phones are now at the inexpensive stage. There are countries in Asia where most people were pushing a plow fifteen years ago, yet now have thriving information economies and most people have a cell phone. This progression from early adoption of unaffordable technologies that don’t work well to late adoption of refined technologies that are very inexpensive is currently a decade-long process. But that too will accelerate. Ten years from now, this will be a five year progression, and twenty years from now it will be only a two- to three-year lag.</p>
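The deflation schedule described above can be sketched numerically (the 50 percent annual rate is the essay's claim; the $10,000 starting price is hypothetical):

```python
# Price of a fixed capability under 50%-per-year deflation.
def price(initial, years, annual_deflation=0.5):
    """Cost of the same capability after `years` of deflation."""
    return initial * (1 - annual_deflation) ** years

launch_price = 10_000.0  # hypothetical launch price, in dollars
for year in (0, 5, 10):
    print(year, round(price(launch_price, year), 2))
```

After ten years the same capability costs under ten dollars, roughly three orders of magnitude cheaper, which traces the progression from “affordable only by the wealthy” to “almost free.”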
<p>This model applies not just to electronic gadgets but to anything having to do with information, and ultimately that will mean everything of value, including all manufactured products. In biology, we went from a cost of ten dollars to sequence a base pair of DNA in 1990 to about a penny today. AIDS drugs started out costing tens of thousands of dollars per patient per year and didn’t work very well, whereas today, effective drugs are about a hundred dollars per patient per year in poor countries. That’s still more than we’d like, but the technology is moving in the right direction. So the digital divide between the haves and have-nots is diminishing, not widening. Ultimately, everyone will have great wealth at their disposal.</p>
<p><em>Won’t problems such as war, intolerance, environmental degradation prevent us from reaching the Singularity?</em></p>
<p>We had a lot of war in the 20th century. Fifty million people died in World War II, and there were many other wars. We also had a lot of intolerance, relatively little democracy until late in the century, and a lot of environmental pollution. All of these problems of the 20th century had no effect on the law of accelerating returns. The exponential growth of information technologies proceeded smoothly through war and peace, through depression and prosperity.</p>
<p>The emerging 21st century technologies tend to be decentralized and relatively friendly to the environment. With the maturation of nanotechnology, we will also have the opportunity to clean up the mess left from the crude early technologies of industrialization.</p>
<p><em>But won’t there still be objections from religious and political leaders, not to mention the common man and woman, to such a radical transformation of humanity?</em></p>
<p>There were objections to the plow also, but that didn’t stop people from using it. The same can be said for every new step in technology. Technologies do have to prove themselves. For every technology that is adopted, many are discarded. Each technology has to demonstrate that it meets basic human needs. The cell phone, for example, meets our need to communicate with one another. We are not going to reach the Singularity in some single great leap forward, but rather through a great many small steps, each seemingly benign and modest in scope.</p>
<p><em>But what about controversies such as the stem cell issue? Government opposition is clearly slowing down progress in that field.</em></p>
<p>I clearly support stem cell research, but it is not the case that the field of cell therapies has been significantly slowed down. If anything, the controversy has accelerated creative ways of achieving the holy grail of this field, which is transdifferentiation, that is, creating the new differentiated cells you need from your own cells—for example, converting skin cells into heart cells or pancreatic islet cells. Transdifferentiation has already been demonstrated in the lab. Objections such as those expressed against stem cell research end up being stones in the water: the stream of progress just flows around them.</p>
<p><em>Where does God fit into the Singularity?</em></p>
<p>Although the different religious traditions have somewhat different conceptions of God, the common thread is that God represents unlimited—infinite—levels of intelligence, knowledge, creativity, beauty, and love. As systems evolve &#8212; through biology and technology &#8212; we find that they become more complex, more intelligent and more knowledgeable. They become more intricate and more beautiful, more capable of higher emotions such as love. So they grow exponentially in intelligence, knowledge, creativity, beauty, and love, all of the qualities people ascribe to God without limit. Although evolution does not reach a literally infinite level of these attributes, it does accelerate towards ever greater levels, so we can view evolution as a spiritual process, moving ever closer to this ideal. The Singularity will represent an explosion of these higher values of complexity.</p>
<p><em>So are you trying to play God?</em></p>
<p>Actually, I’m trying to play a human. I’m trying to do what humans do well, which is solve problems.</p>
<p><em>But will we still be human after all these changes?</em></p>
<p>That depends on how you define human. Some observers define human based on our limitations. I prefer to define us as the species that seeks &#8212; and succeeds &#8212; in going beyond our limitations.</p>
<p>Many observers point out how science has thrown us off our pedestal, showing us that we’re not as central as we thought, that the stars don’t circle around the Earth, that we’re not descended from the Gods but rather from monkeys, and before that earthworms.</p>
<p>All of that is true, but it turns out that we are central after all. Our ability to create models &#8212; virtual realities &#8212; in our brains, combined with our modest-looking thumbs, is enabling us to expand our horizons without limit.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.writingsbyraykurzweil.com/a-singularity-q-a/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>The new era of health + medicine.</title>
		<link>https://www.writingsbyraykurzweil.com/the-new-era-of-health-medicine</link>
		<comments>https://www.writingsbyraykurzweil.com/the-new-era-of-health-medicine#respond</comments>
		<pubDate>Thu, 01 Feb 2024 07:07:41 +0000</pubDate>
								<dc:creator></dc:creator>
		
		
				<category><![CDATA[essays]]></category>

		<guid isPermaLink="false">https://www.writingsbyraykurzweil.com/?p=370811</guid>
		<description><![CDATA[year: 2011 Is it time to rethink the promise of genomics? There has been recent disappointment in public opinion about the progress in the field of genomics. In my view, this is from an overly narrow view of the science of genes and biological information processing, in general. It reminds me of the time when [&#8230;]]]></description>
			<content:encoded><![CDATA[<p>year: 2011</p>
<p><span style="color: #ffaa00;">Is it time to rethink the promise of genomics?</span></p>
<p>There has been recent disappointment in public opinion about the progress in the field of genomics. In my view, this stems from an overly narrow view of the science of genes and of biological information processing in general. It reminds me of the time when the field of artificial intelligence (AI) was equated with the methodology of “expert systems.” If someone referred to AI, they were actually referring to expert systems, and there were many articles on how limited this technique was and all of the things that it could not and would never be able to do.</p>
<p>At the time, I expressed my view that although expert systems were a useful approach for a certain limited class of problems, they did indeed have restrictions, and that the field of AI was far broader.</p>
<p>The human brain works primarily by recognizing patterns (we have about a billion pattern recognizers in the neocortex, for example), and there were at the time many emerging methods in the field of pattern recognition that were solving real-world problems and that should properly be considered part of the AI field. Today, no one talks much about expert systems; there is a thriving multi-hundred-billion-dollar AI industry and a consensus in the AI field that nonbiological intelligence will continue to grow in sophistication, flexibility, and diversity.</p>
<p>The same thing is happening here. The problem starts with the word “genomics.” The word sounds like it refers to “all things having to do with genes.” But as practiced, it deals almost exclusively with single genes and their ability to predict traits or conditions, which has always been a narrow concept. The idea of sequencing the genes of an individual is even narrower and typically involves individual single-nucleotide polymorphisms (SNPs), which are variations in a single nucleotide (A, T, C, or G) within a gene, basically a two-bit alteration.</p>
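The “two-bit alteration” framing can be made concrete (a toy encoding of my own for illustration; real bioinformatics tools use richer representations):

```python
# Each nucleotide carries two bits of information, so a SNP changes
# at most two bits of the packed sequence.
BITS = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def encode(seq):
    """Pack a DNA string into an integer, two bits per base."""
    value = 0
    for base in seq:
        value = (value << 2) | BITS[base]
    return value

wild_type = "ATGC"   # hypothetical reference sequence
variant   = "ATGT"   # SNP: C -> T in the last position

# XOR exposes exactly which bits the SNP flipped.
changed_bits = bin(encode(wild_type) ^ encode(variant)).count("1")
print(changed_bits)  # 1 bit differs here; a SNP can flip at most 2
```

Against the roughly six billion bits of a diploid human genome, a one- or two-bit change is a vanishingly small keyhole, which is the essay's point about how narrow SNP-centric genomics is.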
<p>I have never been overly impressed with this approach and saw it as a first step based on the limitations of early technology. There are some useful SNPs such as Apo E4 but even here it only gives you statistical information on your likelihood of such conditions as Alzheimer’s Disease and macular degeneration based on population analyses. It is certainly not deterministic and has never been thought of that way.</p>
<p>As Craig Venter, PhD, points out in his <em>Der Spiegel</em> <a href="https://www.spiegel.de/international/world/spiegel-interview-with-craig-venter-we-have-learned-nothing-from-the-genome-a-709174.html" target="_blank" rel="noopener">magazine interview</a>: there are hundreds of diseases that can be traced to defects in individual genes, but most of these affect developmental processes. So if you provide a medication that reverses the effect of the faulty gene, you still have the result of the developmental process (of, say, the nervous system) that has been going on for many years. You would need to detect and reverse the condition very early, which of course is possible and a line of current investigation.</p>
<p>To put this narrow concept of genomics into perspective, think of genes as analogous to lines of code in a software program. If you examine a software program, you generally cannot assign each line of code to a property of the program. The lines of code work together in a complex way to produce a result. Now it is possible that in some circumstances you may be able to find one line of code that is faulty and improve the program’s performance by fixing that one line or even by removing it. But such an approach would be incidental and accidental; it is not the way that one generally thinks of software.</p>
<p>To understand the program you would need to understand the language it is written in and how the various lines interact with each other. In this analogy, a SNP would be comparable to a single letter within a single line (actually a quarter of one letter to be precise, since a letter is usually represented by 8 bits and a nucleotide by 2 bits). You might be able to find a particularly critical letter in a software program, but again that is not a well-motivated approach.</p>
<p>The sequencing of the human genome was indeed an exponential process, with the amount of genetic data doubling each year and the cost of sequencing coming down by half each year. But its completion around 2003 was just the beginning of another even more daunting process, which is to understand it. The language is the three-dimensional properties and interactions of proteins. We started with individual genes as a reasonable place to begin, but that was always going to be inherently limited if you consider my analogy above to the role of single lines in a software program.</p>
<p>As we consider the genome, the first thing we notice is that only about 3 percent of the human genome codes for proteins. With about 23,000 genes, there are over 23,000 proteins (since some genes give rise to more than one protein), and, of course, these proteins interact with each other in complicated pathways.</p>
<p>A trait in a complex organism such as a human being is actually an emergent property of this complex and organized collection of proteins. The 97 percent of the genome that does not code for proteins was originally called “junk DNA.”</p>
<p>We now understand that this portion of the genome has an important role in controlling and influencing gene expression. There is, however, less information in these noncoding regions, and they are replete with redundancies that we do not see in the coding regions.</p>
<p>For example, one lengthy sequence called ALU is repeated hundreds of thousands of times. Gene expression is a vital aspect of understanding these genetic processes. The noncoding DNA plays an important role in this, but so do environmental factors. Even ignoring the concept that genes work in networks not as individual entities, genes have never been thought of as deterministic.</p>
<p>The “nature versus nurture” discussion goes back eons. What our genetic heritage describes (and by genetic heritage I include the epigenetic information that influences gene expression) is an entity (a human being) that is capable of evolving in and adapting to a complex environment. Our brain, for example, only becomes capable of intelligent decision making through its constant adaptation to and learning from its environment.</p>
<p>To reverse-engineer biology we need to examine phenomena at different levels, especially looking at the role that proteins (which are coded for in the genome) play in biological processes. In understanding the brain, for example, there is indeed exponential progress being made in simulating neurons, neural clusters, and entire regions.</p>
<p>This work includes understanding the “wiring” of the brain (which incidentally includes massive redundancy) and how the modules in the brain (which involve multiple neuron types) process information. Then we can link these processes to biochemical pathways, which ultimately links back to genetic information. But in the process of reverse-engineering the brain, genetic information is only one source and not the most important one at that.</p>
<p>So genes are one level of understanding biology as an information process, but there are other levels as well, and some of these other levels (such as actual biochemical pathways, or mechanisms in organs including the brain) are more accessible than genetic information. In any event, just examining individual genes, let alone SNPs, is like looking through a very tiny keyhole.</p>
<p>As another example of why the idea of examining individual genes is far from sufficient, I am currently involved with a cancer stem cell project with MIT scientists Dr. William Thilly and Dr. Elena Gostjeva. What we have found is that a mutation in a stem cell early in life can turn that stem cell into a cancer stem cell, which in turn will reproduce and ultimately seed a cancer tumor. It can take years, and often decades, for the tumor to become clinically evident.</p>
<p>But you won’t find these mutations in a blood test because they are mutations originally in a single cell (which then reproduces to create nearby cells), not in all of your cells. However, understanding the genetic mutations is helping us to understand the process of metastasis, which we hope will lead to treatments that can inhibit the formation of new tumors. This is properly part of gene science but is not considered part of the narrow concept of “genomics,” as that term is understood.</p>
<p>Indeed there is a burgeoning field of stem cell treatments using adult stem cells in the positive sense of regenerating needed tissues. This is certainly a positive and clinically relevant result of the overall science and technology of genes.</p>
<p>If we consider the science and technology of genes and information processing in biology in its proper broad context, there are many exciting developments that have current or near term clinical implications, and enormous promise going forward.</p>
<p>A few years ago, Joslin Diabetes Center researchers showed that inhibiting a particular gene (which they called the fat insulin receptor gene) in the fat cells (but not the muscle cells, as that would negatively affect muscle) enabled caloric restriction without the restriction. The test animals ate ravenously yet remained slim. They did not get diabetes or heart disease, and they lived 20 percent longer, getting most of the benefit of caloric restriction. This research is now continuing with a focus on achieving the same result in humans, and the researchers, with whom I spoke recently, are optimistic.</p>
<p>We have a new technology that can turn genes off, one that has emerged since the completion of the human genome project (and that has already been recognized with the Nobel Prize): RNA interference (RNAi). There are hundreds of drugs and other processes in the development and testing pipeline using this methodology.</p>
<p>As I said above, human characteristics, including disease, result from the interplay of multiple genes. Still, there are often individual genes that, if inhibited, can have a significant therapeutic effect (just as we might disable a rogue software program by overwriting one line of code or one machine instruction).</p>
<p>There are also new methods of adding genes. I am an advisor (and board member) to <a href="http://www.unither.com/" target="_blank" rel="noopener"><strong>United Therapeutics</strong></a>, which has developed a method to take lung cells out of the body, add a new gene in vitro (so that the immune system is not triggered — which was a downside of the old methods of gene therapy), inspect the new cell, and replicate it several million fold.</p>
<p>You now have millions of cells with your DNA but with a new gene that was not there before. These are injected back into the body and end up lodged in the lungs. This has cured a fatal disease (pulmonary hypertension) in animal trials and is now undergoing human testing. There are also hundreds of such projects using this and other new forms of gene therapy.</p>
<p>As we understand the networks of genes that are responsible for human conditions, especially reversible diseases, we will have the means of changing multiple genes: turning some off or inhibiting them, turning others on or amplifying them. Some of these approaches are entering human trials. More complex approaches involving multiple genes will require a greater understanding of gene networks, but that is coming.</p>
<p>There is a new wave of drugs entering trials, some in late-stage trials, that are based on gene results. For example, an experimental drug from Roche, PLX4032, is designed to attack tumor cells with a mutation in a particular gene called BRAF. Among patients with advanced melanoma who carry this genetic variant, 81 percent had their tumors shrink (rather than grow), an impressive result for a form of cancer that is generally resistant to conventional treatment.</p>
<p>There is the whole area of regenerative medicine from stem cells. Some of this is now being done from adult autologous stem cells. Particularly exciting is the recent breakthrough in induced pluripotent stem cells (IPSCs). This involves using in-vitro genetic engineering to add genes to normal adult cells (such as skin cells) to convert them into the equivalent of embryonic stem cells which can subsequently be converted into any type of cell (with your own DNA).</p>
<p>IPSCs have been shown to be pluripotent, to have efficacy, and not to trigger the immune system, because they are genetically identical to the patient&#8217;s own cells. IPSCs offer the potential to repair essentially any organ, from the heart to the liver and pancreas. These methods are part of genetic engineering, which in turn is part of gene science and technology.</p>
<p>And then of course there is the entire new field of synthetic biology, which is based on synthetic genomes. A major enabling breakthrough was recently announced by Craig Venter&#8217;s company, in which an organism with a synthetic genome (one that previously existed only as a computer file) was created. This field is based on entire genomes, not just individual genes, and it is certainly part of the broad field of gene science and technology. The goal is to create organisms that can do useful work, such as producing vaccines and other medicines, biofuels, and other valuable industrial substances.</p>
<p>You could write a book (or many books) about all of the advances that are being made in which knowledge of genetic processes and other biological information processes play a critical role. Health and medicine used to be entirely hit or miss without any concept of how biology worked on an information level.</p>
<p>Our knowledge is still very incomplete, but our knowledge of these processes is growing exponentially and that is feeding into medical research which is already bearing fruit. To focus just on the narrow concepts that were originally associated with “genomics” is as limited a view as the old idea of AI being just expert systems.</p>
<p>&#8212; <em>Ray Kurzweil</em><br />
[ post file ]</p>
<p>title:<br />
deck:</p>
<p>collection: writings by Ray Kurzweil<br />
tab: essays</p>
<p>[ end of post ]</p>
]]></content:encoded>
			<wfw:commentRss>https://www.writingsbyraykurzweil.com/the-new-era-of-health-medicine/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>2 :: the Future of Libraries &#8212; the end of books.</title>
		<link>https://www.writingsbyraykurzweil.com/2-the-future-of-libraries-the-end-of-books</link>
		<comments>https://www.writingsbyraykurzweil.com/2-the-future-of-libraries-the-end-of-books#respond</comments>
		<pubDate>Mon, 01 Jan 2024 06:01:16 +0000</pubDate>
								<dc:creator>Ray Kurzweil</dc:creator>
		
		
				<category><![CDATA[essays]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[IMAGE label :: publication: Library Journal column: FutureCast part: 2 set: the Future of Libraries story: The end of books. author: by Ray Kurzweil date: January 1992 < hr /> An introduction. It is said that in the development of technology we overestimate what can be accomplished in the short term and underestimate what can [&#8230;]]]></description>
			<content:encoded><![CDATA[<p>IMAGE</p>
<hr />
<p>label ::</p>
<p>publication: Library Journal<br />
column: FutureCast<br />
part: 2<br />
set: the Future of Libraries<br />
story: The end of books.<br />
author: by Ray Kurzweil<br />
date: January 1992</p>
<hr />
<p>An introduction.</p>
<p>It is said that in the development of technology we overestimate what can be accomplished in the short term and underestimate what can be accomplished in the long term. With the exception of a few prescient observers (such as Charles Babbage in the case of the computer), most predictions of the 20th century overlooked such breakthroughs as the computer, radio, television, and atomic energy, not to mention such recent innovations as the laser and bioengineering.<span id="more-80582"></span></p>
<p>Beyond the breakthrough, it is also difficult to anticipate serendipity, the coming together of diverse trends with profound yet unanticipated effects. In the case of the book, it is the interplay of such multifarious trends that will determine its destiny. The trends themselves are not hard to anticipate, although the stunning pace of development, particularly of computer hardware, is often not fully appreciated. In most fields, we take it for granted that things get more expensive each year. But in the computer field, we can at least double functionality for the same unit cost every 12 to 15 months, and even this pace is accelerating. </p>
<p>The implications of this geometric trend can be understood by recalling the legend of the inventor of chess and his patron, the emperor of China. The emperor had so fallen in love with his new game, he offered the inventor a reward of anything he wanted in the kingdom. </p>
<p>&#8220;Just one grain of rice on the first square, your majesty.&#8221;&nbsp;&#8220;Just one grain of rice?&#8221;&nbsp;&#8220;Yes, and two on the second, four on the third, and so on.&#8221; </p>
<p>The emperor immediately granted the inventor&#8217;s seemingly humble request. One version of the story has the emperor going bankrupt (the doubling on each square ultimately totaled 18 million trillion grains of rice). The more believable version has the inventor losing his head. </p>
<p>As an example of what this trend has already accomplished, computer scientist David Waltz points out that computer memory today, after adjustment for inflation, costs only one-hundred-millionth of what it did in 1950 (which is consistent with a doubling of price-performance every 18 months). If the automotive industry had made as much progress in the past four decades, a typical automobile today would cost about one-hundredth of a cent. </p>
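<p>The arithmetic behind both anecdotes is easy to verify. The sketch below (a back-of-the-envelope check added for illustration, not part of the original column) tallies the chessboard total and the number of doublings implied by Waltz&#8217;s memory-cost figure:</p>

```python
# Grains of rice on a 64-square chessboard: one grain on the first
# square, doubling on each subsequent square.
total_grains = sum(2**square for square in range(64))
print(f"{total_grains:.2e}")  # ~1.8e19, i.e., "18 million trillion"

# Waltz's figure: memory costing one-hundred-millionth of its 1950
# price by the early 1990s is what doubling price-performance every
# 18 months predicts: 42 years / 1.5 years per doubling = 28 doublings.
doublings = (1992 - 1950) / 1.5
print(f"{2**doublings:.1e}")  # ~2.7e8, on the order of one hundred million
```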
<p>With the price-performance of computer hardware doubling every year in every dimension, the impact will become increasingly hard to ignore. This becomes all the more significant as computers begin to affect virtually every other area of endeavor. </p>
<h1>The Powerbook looms </h1>
<p>So let us examine how just the predictable trends will affect the technology of the book. Last month, we discussed the emergence of the first wave of false pretenders to the functionality of the paper book. While the electronic book provides profound advantages in the quantity and accessibility of information, it falls short in some of the fundamental characteristics of paper and ink in the areas of flicker, contrast, resolution, and color. But as noted above, computer technology is anything but static, and already some of these limitations are being overcome. Alan Kay, senior fellow at Apple Computer, points out that the recently introduced Apple Powerbook 170 is flicker free and has a contrast ratio of 95:1, close to paper&#8217;s 120:1. Apple is actually positioning its new computer as an electronic book and plans to provide a library of books as software, hence the name Powerbook. </p>
<p>By next year, the first wave of color notebook computers will appear. Perhaps the most significant issue is resolution. Interestingly, the Jacquard loom, perfected by Joseph Marie Jacquard in 1805, which we might regard as the world&#8217;s first computer display, had a resolution of 1000 silk threads to the inch, equaling that of paper. Jacquard&#8217;s loom was controlled by punched cards and foreshadowed the emergence of the punched card-based data processing industry 85 years later. Today&#8217;s notebook computers have a resolution of only about 100 dots per inch (dpi), substantially less than paper. Within two to three years, however, we will see notebook computers with about 250 dpi, which for many applications will begin to rival paper and ink. </p>
<p>Let us jump ahead and describe the notebook computer that we are likely to see by the turn of the century based on readily discernible trends. Resolution will range from 500 to 1000 dpi, the same as high-quality printed documents. The displays will be flicker free and will have contrast ratios and color capabilities comparable to paper and ink. The devices will come in a variety of sizes ranging from pocket sized to double-hinged displays that will present two large pages. These computers will be thin (perhaps 1/2&#8243; deep) and lightweight. </p>
<p>By the end of this decade, the standard RAM chip will be one gigabit (one billion bits), so the typical personal notebook will provide at least a billion bytes (characters) of random access memory. Low-bandwidth communication (text, voice, still pictures) will be by wireless cellular transmission. High-bandwidth communication (moving high-resolution pictures) will be by optical fiber. In my November 15, 1991 column (&#8220;Learning in the Age of Knowledge,&#8221; p. 60-62), I mentioned Japan&#8217;s plan to install a fiber optic-based information superhighway into every home and office by early in the next century. President Bush recently signed a $3 billion bill to begin research in this area, but we still lack anything comparable to Japan&#8217;s multi-hundred-billion-dollar commitment. I do anticipate, however, that we will wake up sooner or later to this enormous competitive threat. </p>
<h1>Whatchamacallit </h1>
<p>Communication between user and machine will be through voice for entering text and a pen-like device for pointing and for graphical gestures such as crossing out words. The keyboard will be headed for obsolescence as we enter the first decade of the next century. </p>
<p>So what is this thing? A PC? A telephone? A television? A personal transcriptionist? A cybernetic research assistant? A book? </p>
<p>Obviously, it is all of the above. As a telephone, it will include realtime language translation (at least between certain popular languages) so that we can readily communicate with people around the globe (the translating telephone capability will mature during the first decade of the next century). With the addition of a small, hand-held digital camera, this &#8220;telephone&#8221; will also include moving high-definition pictures. </p>
<p>George Gilder describes high-definition television, the marriage of the two great communication technologies of the 20th century (the computer and the television), as creating a highly flexible telecomputer that is interactive and intelligent. </p>
<p>As a personal research assistant, the operating system of our future PCs will contain intelligent knowledge navigators that know where to find knowledge through instantaneous wireless communication with increasingly comprehensive databases. </p>
<p>However, let us concentrate for a moment on its application as a book. The personal computer of the early 2000s will not be a false pretender. These electronic books will have enormous advantages, with pictures that can move and interact with the user, increasingly intelligent search paradigms, simulated environments that the user can enter and explore, and vast quantities of accessible material. Yet vital to its ability to truly make the paper book obsolete is that the essential qualities of paper and ink will have been fully matched. The book will enter obsolescence, although because of its long history and enormous installed base, it will linger for a couple of decades before reaching antiquity. </p>
<h1>The virtue of virtual books </h1>
<p>The paper book will be replaced by a category of software that we can call virtual books. Is the virtual book really a new technology or just a continuation of the old (paper) technology by other means? It is certainly a new technology in the same sense that the &#8220;horseless carriage&#8221; automobile was a different technology from the horse and buggy. Changing such a central component of an old technology opens up so many new possibilities that we can truly say that a new technology has been born. </p>
<p>Yet haven&#8217;t we been hearing about the paperless society for at least a decade now? American business&#8217;s use of paper for printed documents increased from 850 billion pages in 1981 to nearly four trillion pages in 1990. It is certainly the case that while computers make it possible to handle documents without paper, they also greatly increase the productivity of producing paper documents. Until the computer display truly rivals the qualities of paper, computers will increase the use of paper rather than replace it. But once these qualities are matched, and the requisite communication technologies are in place, the printed book and other paper documents along with it will begin a rapid descent into obsolescence. </p>
<p>Many people were skeptical that the compact disc (CD) would replace the phonograph record. I remember being hesitant to buy a CD player because I was attached to my extensive collection of LPs, assembled over a lifetime, and did not want it to become obsolete. My curiosity finally drove me to acquire one, and then I was hooked. My CD collection has grown, but I still have several shelves in my living room filled with my old album collection. It has now been years since I have even touched one of these old phonograph albums, and even more years since I purchased one. People are also attached to their collections of paper books, but when the truly viable electronic book comes along, which will happen by the end of the decade, resistance to it will not last long. </p>
<h1>Click and pick </h1>
<p>So how do you buy a virtual book? By going to a bookstore, obviously. Not physically of course, you simply &#8220;click&#8221; on <i>bookstore</i>. Icons then appear for different choices. So let&#8217;s say we click on <i>Brentano&#8217;s.</i> We now see icons for different categories of books: <i>best sellers</i>, <i>fantasy &amp; </i><i>science fiction</i>, etc. Let&#8217;s click on <i>best sellers</i>. We now see images of book spines, which can be scrolled across the screen. Some books that the bookstore wishes to highlight are shown with the full front jacket. </p>
<p>Ah, here is an interesting one, <i>The Best of Futurecast</i> by Raymond Kurzweil. We click on that, and we now see the full front and back jacket. We click on the photo of the author, and he comes alive explaining the virtues of his book. We click on the front cover, and we see the front matter. We scroll through the table of contents. Here is an interesting old article from 1992 on &#8220;The End of Books.&#8221; We click on it and start reading. Hmmm, this is very interesting; Kurzweil&#8217;s predictions weren&#8217;t all bad! </p>
<p>Whoops, the computer now tells us that if we want to continue reading, we have to acquire the book. Options are presented. We can <i>purchase</i> it, we can <i>rent</i> it, there are several other choices. Well, this is a book we will certainly want to return to over and over, so we click on purchase. Now we see icons for <i>debit to checking account</i>, <i>charge to American Express</i>, etc. Once the transaction is complete, the book is transmitted via wireless cellular communication and becomes part of the permanent database of our PC. </p>
<p>What about the public library? Okay, click on <i>city library</i>. We see icons for categories. Click on <i>best sellers</i>. Now click on <i>The Best of Futurecast</i>. Looks interesting, so we click on <i>borrow book</i>. </p>
<p>Now wait a second. The library scenario sounds very similar to the bookstore scenario. Why would anyone buy a book if you can borrow it just as easily for free anytime you want to read it? </p>
<p>Other questions come to mind. What happens to that big library building? Will there still be paper books printed? Will libraries still carry these? How will the library work? What will librarians do? </p>
<p>There are reasonable answers to these questions, which we will examine next month. </p>
<p>Reprinted with permission from <b>Library Journal</b>, February 1992. Copyright &copy; 1992, Reed Elsevier, USA</p>
<p>
<a href="http://www.kurzweilai.net/meme/memelist.html?m=13" target="_self">Other Futurecast columns</a></p>
]]></content:encoded>
			<wfw:commentRss>https://www.writingsbyraykurzweil.com/2-the-future-of-libraries-the-end-of-books/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Promise + Peril</title>
		<link>https://www.writingsbyraykurzweil.com/promise-and-peril</link>
		<comments>https://www.writingsbyraykurzweil.com/promise-and-peril#respond</comments>
		<pubDate>Mon, 01 Jan 2024 06:00:51 +0000</pubDate>
								<dc:creator></dc:creator>
		
		
				<category><![CDATA[essays]]></category>

		<guid isPermaLink="false">http://www.kurzweilai.net/promise-and-peril</guid>
		<description><![CDATA[IMAGE &#8212; contents &#8212; ~ essay ~ reference essay: Promise + Peril author: by Ray Kurzweil date: 2000 An introduction. Bill Joy wrote a controversial article in Wired magazine advocating &#8220;relinquishment&#8221; of research on self-replicating technologies, such as nanobots. In this rebuttal, originally published in Interactive Week, Ray Kurzweil argues that these developments are inevitable [&#8230;]]]></description>
			<content:encoded><![CDATA[<p>IMAGE</p>
<hr class="dotted" />
<p><span style="color: #ffaa00;">&#8212; contents &#8212;</span></p>
<p>~ essay<br />
~ reference</p>
<hr class="dotted" />
<p><span style="color: #ffaa00;">essay:</span> Promise + Peril<br />
<span style="color: #ffaa00;">author:</span> by Ray Kurzweil<br />
<span style="color: #ffaa00;">date:</span> 2000</p>
<hr class="dotted" />
<p><span style="color: #ffaa00;">An introduction.</span></p>
<p>Bill Joy wrote a controversial article in <em>Wired</em> magazine advocating &#8220;relinquishment&#8221; of research on self-replicating technologies, such as nanobots. In this rebuttal, originally published in Interactive Week, Ray Kurzweil argues that these developments are inevitable and advocates ethical guidelines and responsible oversight.</p>
<p>Originally published October 23, 2000 at <a href="http://www.zdnet.com/intweek/" target="_new" rel="noopener">Interactive Week</a>.<br />
Read Max More&#8217;s response to Bill Joy <a href="/embrace-dont-relinquish-the-future" target="_self" rel="noopener">here</a>.</p>
<p>A response to Bill Joy&#8217;s Wired article <a href="http://www.wired.com/wired/archive/8.04/joy.html" target="_new" rel="noopener">Why The Future Doesn&#8217;t Need Us</a>.</p>
<p>Bill Joy &#8212; the co-founder of Sun Microsystems and a principal developer of the Java programming language &#8212; took up a personal mission to warn us of the impending dangers from the emergence of self-replicating technologies in the fields of genetics, nanotechnology and robotics, which he aggregates under the label &#8220;GNR.&#8221;</p>
<p>Although his warnings are not entirely new, they have attracted considerable attention because of Joy&#8217;s credibility as one of our leading technologists. It reminds me of the attention that George Soros, the currency arbitrager and arch capitalist, received when he made vaguely critical comments about the excesses of unrestrained capitalism.</p>
<p>According to Joy, the day is close at hand when it will be feasible to create genetically altered designer pathogens in college laboratories. Then, at a later date, we&#8217;ll have to contend with self-replicating entities created through nanotechnology, the field devoted to manipulating matter on the scale of individual atoms. Although nanoengineered &#8220;self-replicators&#8221; are at least one decade, and probably more than two decades, away, the specter that concerns Joy can be described as an unstoppable, non-biological cancer.</p>
<p>Finally, if we manage to survive these first two perils, we&#8217;ll encounter robots whose intelligence will rival and ultimately exceed our own. Such robots may make great assistants, but who&#8217;s to say that we can count on them to remain reliably friendly to mere humans?</p>
<p>Although I am often cast as the technology optimist who counters Joy&#8217;s pessimism, I do share his concerns regarding self-replicating technologies; indeed, I played a role in bringing these dangers to Bill&#8217;s attention. In many of the dialogues and forums in which I have participated on this subject, I end up defending Joy&#8217;s position with regard to the feasibility of these technologies and scenarios when they come under attack by commentators who I believe are being quite shortsighted in their skepticism. Even so, I do find fault with Joy&#8217;s prescription&#8211;halting the advance of technology and the pursuit of knowledge in broad fields such as nanotechnology.</p>
<p>Before addressing our differences, let me first discuss the salient issue of feasibility. Many long-range forecasts of technical feasibility dramatically underestimate the power of future technology for one simple reason: They are based on what I call the &#8220;intuitive linear&#8221; view of technological progress rather than the &#8220;historical exponential view.&#8221;</p>
<p>When people think of a future period, they intuitively assume that the current rate of progress will continue for the period being considered. In fact, the rate of technological progress is not constant, but since it is human nature to adapt to the changing pace, the intuitive view is that the pace will continue at the current rate. It is typical, therefore, that even sophisticated commentators, when considering the future, extrapolate the current pace of change over the next 10 years or 100 years to determine their expectations&#8211;the &#8220;intuitive linear&#8221; view.</p>
<p>But any serious examination of the history of technology reveals that technological change is at least exponential. There are a great many examples of this, including constantly accelerating developments in computation, communication, brain scanning, multiple aspects of biotechnology and miniaturization. One can examine these data in many different ways, on many different time scales and for a wide variety of phenomena. Whatever the approach, we find&#8211;at least&#8211;double exponential growth.</p>
<p>This phenomenon, which I call the &#8220;law of accelerating returns,&#8221; does not rely on a mere assumption of the continuation of Moore&#8217;s Law, which predicts, in effect, the quadrupling of computer power every 24 months. Rather, it is based on a rich model of diverse technological processes, a model I have been developing over the past couple of decades.</p>
<p>What it clearly shows is that technology, particularly the pace of technological change, has been advancing at least exponentially since the advent of technology. Thus, while people often overestimate what can be achieved in the short term because there is a tendency to leave out necessary details, we typically underestimate what can be achieved in the long term because exponential growth is ignored.</p>
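<p>To see how quickly the two views diverge, consider a toy projection (illustrative only; the one-year doubling time follows from the quadrupling-every-24-months figure cited above, and the starting capability is normalized to 1): the linear view extends today&#8217;s absolute rate of progress, while the exponential view compounds it.</p>

```python
# "Intuitive linear" vs. "historical exponential" projections of a
# capability that doubles every year (the quadrupling-every-24-months
# rate cited for Moore's Law). Starting capability normalized to 1.
def linear_view(years):
    # extend the first year's absolute gain (1 unit/year) indefinitely
    return 1 + years

def exponential_view(years):
    return 2**years

for years in (1, 10, 25):
    print(years, linear_view(years), exponential_view(years))
# After 25 years the linear view projects a 26x capability;
# the exponential view, roughly 33.5 million x.
```

Note that after one year the two projections agree exactly (2 vs. 2), which is precisely why the linear intuition feels right in the short term while eventually missing by many orders of magnitude.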
<p>This observation also applies to rates of paradigm shifts, which are currently doubling approximately every decade. At that rate, the technological progress in the 21st century will be equivalent to changes that in the linear view would require on the order of 20,000 years.</p>
<p>This exponential progress in computation and communication technologies is greatly empowering the individual. That&#8217;s good news in many ways, because those technologies are largely responsible for the pervasive trend toward democratization and the reshaping of power relations at all levels of society. But these technologies are also empowering and amplifying our destructive impulses. It&#8217;s not necessary to anticipate all the ultimate uses of a technology to see danger in, for example, every college biotechnology lab&#8217;s having the ability to create self-replicating biological pathogens.</p>
<p>Nevertheless, I do reject Joy&#8217;s call for relinquishing broad areas of technology&#8211;for example, nanotechnology. Technology has always been a double-edged sword. We don&#8217;t need to look any further than today&#8217;s technology to see this. Take biotechnology. We have already seen substantial benefits: more effective AIDS treatments, human insulin and many others. In the years ahead, we will see enormous gains in overcoming cancer and many other diseases, as well as in greatly extending human longevity, all presumably positive developments&#8211;although even these are controversial.</p>
<p>On the other hand, the means will soon exist in a routine biotechnology laboratory to create a pathogen that could be more destructive to humans or other living organisms than an atomic bomb.</p>
<p>If we imagine describing the dangers that exist today&#8211;enough nuclear explosive power to destroy all mammalian life, just for starters&#8211;to people who lived a couple of hundred years ago, they would think it mad to take such risks. On the other hand, how many people in the year 2000 would really want to go back to the short, disease-filled, poverty-stricken, disaster-prone lives that 99 percent of the human race struggled through a couple of centuries ago? We may romanticize the past, but until fairly recently, most of humanity lived extremely fragile lives, in which a single common misfortune could spell disaster. Substantial portions of our species still live this precarious existence, which is at least one reason to continue technological progress and the social and economic enhancements that accompany it.</p>
<p>People often go through three stages in examining the impact of future technology: awe and wonderment at its potential to overcome age-old problems, a sense of dread at a new set of grave dangers that accompany these new technologies, followed, finally and hopefully, by the realization that the only viable and responsible path is to set a careful course that can realize the promise while managing the peril.</p>
<p>Joy eloquently describes the plagues of centuries past and how new, self-replicating technologies, such as mutant bioengineered pathogens or &#8220;nanobots&#8221; (molecule-sized robots), run amok may bring back the fading notion of pestilence. As I stated earlier, these are real dangers. It is also the case, which Joy acknowledges, that it has been technological advances, such as antibiotics and improved sanitation, that have freed us from the prevalence of such plagues.</p>
<p>Human suffering continues and demands our steadfast attention. Should we tell the millions of people afflicted with cancer and other devastating conditions that we are canceling the development of all bioengineered treatments because there is a risk that these same technologies might one day be used for malevolent purposes? That should be a rhetorical question. Yet, there is a movement to do exactly that. Most people, I believe, would agree that such broad-based relinquishment of research and development is not the answer.</p>
<p>In addition to the continued opportunity to alleviate human distress, another important motivation for continuing technological advancement is economic gain. The many intertwined, continually accelerating technologies are roads paved with gold. (I use the plural here because technology is clearly not a single path.) In a competitive environment, it is an economic imperative to go down these roads. Relinquishing technological advancement would be economic suicide for individuals, companies and nations.</p>
<h1>The Relinquishment Issue</h1>
<p>Which brings us to the issue of relinquishment&#8211;the wholesale abandonment of certain fields of research&#8211;which is Joy&#8217;s most controversial recommendation and personal commitment. I do feel that relinquishment at the right level is part of a responsible and constructive response to genuine perils. The issue, however, is exactly this: At what level are we to relinquish technology?</p>
<p>Ted Kaczynski, the infamous Unabomber, would have us renounce all of it. This, in my view, is neither desirable nor feasible, and the futility of such a position is only underscored by the senselessness of Kaczynski&#8217;s deplorable tactics.</p>
<p>Another level would be to forgo certain fields, nanotechnology, for example, that might be regarded as too dangerous. Even these slightly less sweeping strokes of relinquishment are untenable. Nanotechnology is simply the inevitable result of a persistent trend toward miniaturization that pervades all of technology. It is far from a single, centralized effort, but rather is being pursued by myriad projects with diverse goals.</p>
<p>One observer wrote:</p>
<blockquote><p>A further reason why industrial society cannot be reformed . . . is that modern technology is a unified system in which all parts are dependent on one another. You can&#8217;t get rid of the &#8216;bad&#8217; parts of technology and retain only the &#8216;good&#8217; parts. Take modern medicine, for example. Progress in medical science depends on progress in chemistry, physics, biology, computer science and other fields. Advanced medical treatments require expensive, high-tech equipment that can be made available only by a technologically progressive, economically rich society. Clearly you can&#8217;t have much progress in medicine without the whole technological system and everything that goes with it.</p></blockquote>
<p>The observer I am quoting is Kaczynski. Although one might properly resist him as an authority, I believe he is correct on the deeply entangled nature of the benefits and risks of technology. Where Kaczynski and I clearly part company is in our overall assessment of the relative balance between the two. Joy and I have engaged in dialogues on this issue both publicly and privately, and we concur that technology will and should progress and that we need to be actively concerned with its dark side. If Bill and I disagree, it&#8217;s on the granularity of relinquishment that is both feasible and desirable.</p>
<p>Abandonment of broad areas of technology will only push these technologies underground where development would continue unimpeded by ethics or regulation. In such a situation, less stable, less responsible practitioners&#8211;for example, terrorists&#8211;would have a monopoly on deadly expertise.</p>
<p>I do think that relinquishment at the right level needs to be part of our ethical response to the dangers of 21st century technologies. One salient and constructive example of this is the proposed ethical guideline by the Foresight Institute, founded by nanotechnology pioneer Eric Drexler. This guideline would call on nanotechnologists to relinquish the development of physical entities that can self-replicate in a natural environment. Another example is a ban on self-replicating physical entities that contain their own codes for self-replication. In a design that nanotechnologist Ralph Merkle calls the &#8220;Broadcast Architecture,&#8221; such entities would have to obtain such codes from a centralized secure server, which would guard against undesirable replication.</p>
<p>The Broadcast Architecture is impossible in the biological world, which represents at least one way in which nanotechnology can be made safer than biotechnology. In other ways, nanotech is potentially more dangerous because nanobots can be physically stronger than protein-based entities and more intelligent. But it will eventually be possible to combine the two by having nanotechnology provide the codes within biological entities (replacing DNA), in which case we can use the much safer Broadcast Architecture.</p>
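<p>As a toy illustration of the idea (my own sketch, not Merkle&#8217;s actual design), the safety property of the Broadcast Architecture can be captured in a few lines: the replicator carries no replication instructions of its own, so a central server can halt all replication simply by withholding the code.</p>

```python
# Toy sketch of the "Broadcast Architecture" safety property
# (an illustration of the concept only, not Merkle's actual design):
# the replicator stores no self-replication code, so a central server
# can halt all replication by withholding that code.

class BroadcastServer:
    """Centralized secure server that broadcasts replication codes."""
    def __init__(self):
        self.enabled = True

    def replication_code(self):
        # A real design would authenticate requests; this is a stub.
        return "ASSEMBLE_COPY" if self.enabled else None


class Replicator:
    """An entity that cannot copy itself without the server's code."""
    def __init__(self, server):
        self.server = server  # holds no embedded replication instructions

    def replicate(self):
        code = self.server.replication_code()
        if code is None:
            return None  # code withheld: replication stops here
        return Replicator(self.server)


server = BroadcastServer()
parent = Replicator(server)
child = parent.replicate()   # succeeds while the server cooperates
server.enabled = False       # the centralized guard kicks in
print(child is not None)     # True
print(parent.replicate())    # None
```

<p>The point of the design is that the undesirable case (autonomous replication) is structurally impossible rather than merely forbidden, which is what distinguishes it from biological self-replication.</p>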
<p>As responsible technologists, we should include such &#8220;fine-grained&#8221; relinquishment in our ethics, among other professional guidelines. Other protections will need to include oversight by regulatory bodies, the development of technology-specific &#8220;immune&#8221; responses, as well as computer-assisted surveillance by law enforcement organizations. Many people are not aware that our intelligence agencies already use advanced technologies such as automated word spotting to monitor a substantial flow of telephone conversations. As we go forward, balancing our cherished rights of privacy with our need to be protected from the malicious use of powerful 21st century technologies will be one of many profound challenges. This is why recent proposals, such as an encryption &#8220;trap door,&#8221; in which law enforcement authorities would have access to otherwise secure information, and the FBI&#8217;s Carnivore e-mail snooping system, have been so contentious.</p>
<p>As a test case, we can take a small measure of comfort from how we have dealt with one recent technological challenge. There exists today a new form of fully non-biological, self-replicating entity that didn&#8217;t exist just a few decades ago: the computer virus. When this form of destructive intruder first appeared, strong concerns were voiced that as such viruses became more sophisticated, software pathogens had the potential to destroy the computer network medium in which they live. Yet the &#8220;immune system&#8221; that has evolved in response to this challenge has been largely effective.</p>
<p>Although destructive, self-replicating software entities do cause damage from time to time, the injury is but a small fraction&#8211;much less than one-tenth of 1 percent&#8211;of the benefit we receive from the computers and communication links that harbor them.</p>
<p>One might counter that computer viruses lack the lethal potential of biological viruses or of destructive nanotechnology. Although true, this strengthens my observation. The fact that computer viruses are not usually deadly to humans only encourages more people to create and release them. It also means that our response to the danger is relatively relaxed. Conversely, when it comes to self-replicating entities that are potentially lethal on a large scale, our response on all levels will be vastly more intense.</p>
<p>Technology will remain a double-edged sword, and the story of the 21st century has not yet been written. So, while we must acknowledge and deal with the dangers, we must also recognize that technology represents vast power to be used for all humankind&#8217;s purposes. We have no choice but to work hard to apply these quickening technologies to advance our human values, despite what often appears to be a lack of consensus on what those values should be.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.writingsbyraykurzweil.com/promise-and-peril/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>The future of accelerating intelligence.</title>
		<link>https://www.writingsbyraykurzweil.com/the-future-of-accelerating-intelligence</link>
		<comments>https://www.writingsbyraykurzweil.com/the-future-of-accelerating-intelligence#respond</comments>
		<pubDate>Tue, 01 Aug 2023 05:00:38 +0000</pubDate>
								<dc:creator></dc:creator>
		
		
				<category><![CDATA[essays]]></category>

		<guid isPermaLink="false">https://www.writingsbyraykurzweil.com/?p=370709</guid>
		<description><![CDATA[March 26, 2002 Excerpts from Ray Kurzweil&#8217;s remarks: &#8220;In considering the genesis of Moore&#8217;s Law, I put 49 famous computing devices over the past century on an exponential graph. From this exercise, it became apparent that the acceleration of computing power did not start with integrated circuits, but has continued through multiple paradigm shifts(electromechanical calculators, [&#8230;]]]></description>
			<content:encoded><![CDATA[<p>March 26, 2002</p>
<h1>Excerpts from Ray Kurzweil&#8217;s remarks:</h1>
<p>&#8220;In considering the genesis of Moore&#8217;s Law, I put 49 famous computing devices over the past century on an exponential graph. From this exercise, it became apparent that the acceleration of computing power did not start with integrated circuits, but has continued through multiple paradigm shifts (electromechanical calculators, relays, vacuum tubes, transistors, and finally integrated circuits).<span id="more-370709"></span></p>
<p>&#8220;Moore&#8217;s Law was not the first paradigm to provide exponential growth in computing, but the fifth. The next paradigm, which will involve computing in three dimensions rather than the two manifested in today&#8217;s flat chips, will lead to computing at the molecular, and ultimately the subatomic, level. We can be confident that the acceleration of computing will survive the well-anticipated demise of Moore&#8217;s Law.</p>
<p>&#8220;There are comparable exponential trends underlying a wide variety of other technologies: communications (both wired and wireless), brain scanning speeds and resolutions, genome scanning, and miniaturization (we are currently shrinking technology by a factor of 5.6 per linear dimension per decade). Even the rate of technological progress is speeding up, now doubling each decade. The mathematical models I&#8217;ve developed over the past couple of decades to describe these trends, which I call the law of accelerating returns, have proven predictive of the developments we&#8217;ve seen during the 1990s. From these models, I believe we can be confident of continued exponential growth in these and other technologies for the foreseeable future.</p>
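<p>A back-of-envelope sketch (my illustration with assumed round numbers, not the full model) shows what a rate of progress that doubles each decade implies when compounded over a century:</p>

```python
import math

# Illustrative sketch of the law of accelerating returns (round numbers
# assumed, not the full model): if the instantaneous rate of progress
# doubles every decade, total progress over t years is the integral of
# 2**(t/10), with the rate normalized to 1 at t = 0.

def progress(t_years, doubling_time_years=10.0):
    """Accumulated progress when the rate of progress doubles
    every `doubling_time_years` years."""
    k = math.log(2) / doubling_time_years  # continuous growth constant
    return (math.exp(k * t_years) - 1) / k

# At a constant year-2000 rate, a century would yield 100 "years" of
# progress; with the rate doubling each decade it yields two orders of
# magnitude more.
print(round(progress(100)))  # -> 14759 "years" of year-2000-rate progress
```

<p>The key property is that each decade contributes more than the previous one, so most of a century&#8217;s progress is concentrated near its end.</p>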
<p>&#8220;By 2009, computers will disappear. Displays will be written directly onto our retinas by devices in our eyeglasses and contact lenses. In addition to virtual high-resolution displays, these intimate displays will provide full-immersion visual virtual reality. We will have ubiquitous, very-high-bandwidth wireless connection to the Internet at all times. &#8220;Going to a web site&#8221; will mean entering a virtual reality environment &#8212; at least for the visual and auditory senses &#8212; where we will meet other real people. There will be simulated people as well, but the virtual personalities will not be up to human standards, at least not by 2009. The electronics for all of this will be so small that they will be invisibly embedded in our glasses and clothing.</p>
<p>&#8220;By 2029, as a result of continuing trends in miniaturization, computation, and communication, we will have billions of nanobots &#8212; intelligent robots the size of blood cells or smaller &#8212; traveling through the capillaries of our brain, communicating directly with our biological neurons. By taking up positions next to every nerve fiber coming from all of our senses, the nanobots will provide full-immersion virtual reality involving all five of the senses. So we will enter virtual reality environments (via the web, of course) of our choice and meet people, both real and virtual, only now the difference won&#8217;t be so clear.</p>
<p>&#8220;Just as people today beam their images from little web cams out onto the Internet for others to share, many people in 2029 will beam the full stream of signals coming directly from their senses onto the web. We will then be able to experience what other people are experiencing, à la John Malkovich. Of course, the everyday lives of many such experience beamers may not be all that compelling, so there will be plenty of prerecorded experiences we can plug into. Beyond just the five senses, these shared experiences will include emotional responses, sexual pleasure, and other mental reactions.</p>
<p>&#8220;Brain implants based on these distributed intelligent nanobots will extend our brains in every conceivable way, massively expanding our memory and otherwise vastly improving all of our sensory, pattern-recognition and cognitive abilities.</p>
<p>&#8220;Oh, and one more thing: we&#8217;ll live a long time too. The expanding human life span is another one of those exponential trends. In the eighteenth century, we added a few days every year to human longevity; during the nineteenth century we added a couple of weeks each year; and now we&#8217;re adding almost half a year every year. With the revolutions in rational drug design, genomics, therapeutic cloning of our own organs and tissues, and related developments in bio-information sciences, we will be adding more than a year every year within ten years. So take care of yourself the old-fashioned way for just a little while longer, and you may actually get to experience the remarkable century ahead.&#8221;</p>
]]></content:encoded>
			<wfw:commentRss>https://www.writingsbyraykurzweil.com/the-future-of-accelerating-intelligence/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>for essays</title>
		<link>https://www.writingsbyraykurzweil.com/for-essays</link>
		<comments>https://www.writingsbyraykurzweil.com/for-essays#respond</comments>
		<pubDate>Thu, 01 Jun 2023 05:00:19 +0000</pubDate>
								<dc:creator></dc:creator>
		
		
				<category><![CDATA[essays]]></category>

		<guid isPermaLink="false">https://www.writingsbyraykurzweil.com/?p=370439</guid>
		<description><![CDATA[This is the index for essays.]]></description>
			<content:encoded><![CDATA[<p>This is the index for essays.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.writingsbyraykurzweil.com/for-essays/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Who am I + What am I</title>
		<link>https://www.writingsbyraykurzweil.com/who-am-i-what-am-i</link>
		<comments>https://www.writingsbyraykurzweil.com/who-am-i-what-am-i#respond</comments>
		<pubDate>Thu, 01 Jun 2023 05:00:07 +0000</pubDate>
								<dc:creator></dc:creator>
		
		
				<category><![CDATA[essays]]></category>

		<guid isPermaLink="false">https://www.writingsbyraykurzweil.com/?p=370587</guid>
		<description><![CDATA[IMAGE &#8212; contents &#8212; ~ essay ~ essay title: Who am I + What am I author: by Ray Kurzweil date: undated essay &#124; An introduction. Maybe I&#8217;m this stuff here &#8212; the ordered and chaotic collection of molecules that comprise my body and brain. But there’s a problem. The specific set of particles that [&#8230;]]]></description>
			<content:encoded><![CDATA[<p>IMAGE</p>
<hr class="dotted" />
<p><span style="color: #ffaa00;">&#8212; contents &#8212;</span></p>
<p>~ essay<br />
~</p>
<hr class="dotted" />
<p><span style="color: #ffaa00;">essay title:</span> Who am I + What am I<br />
<span style="color: #ffaa00;">author:</span> by Ray Kurzweil<br />
<span style="color: #ffaa00;">date:</span> undated</p>
<hr class="dotted" />
<p>essay |</p>
<p><span style="color: #ffaa00;">An introduction.</span></p>
<p>Maybe I&#8217;m this stuff here &#8212; the ordered and chaotic collection of molecules that comprise my body and brain. But there’s a problem. The specific set of particles that comprises my body and brain is completely different from the set that comprised me a short time ago. We know that most of our cells are turned over in a few weeks. Even those that persist longer (such as neurons) change their component molecules in weeks.</p>
<p><span style="color: #ffaa00;">I&#8217;m a different set of stuff than I was.</span></p>
<p>I’m a completely different set of stuff than I was a month ago. All that persists is the pattern of organization of that stuff. The pattern changes also &#8212; but slowly, and in a continuum from my past self.</p>
<p>From this perspective I&#8217;m like the pattern that water makes in a stream, as it rushes past the rocks in its path. The actual molecules (of water) change every millisecond. But the pattern continues for hours, or even years.</p>
<p>Maybe we should say I’m a pattern of matter and energy that persists in time. But there’s a problem here also. We’ll ultimately be able to scan + copy this pattern in at least sufficient detail to replicate my body and brain to a degree of accuracy such that the copy is indistinguishable from the original &#8212; that is, the copy could pass a Ray Kurzweil Turing test. I explore this scenario in my essay <a href="http://www.kurzweilai.net/meme/frame.html?main=/articles/art0134.html" target="_blank" rel="noopener">the law of accelerating returns</a>.</p>
<p>So the copy will share my pattern. One might counter that we may not get every detail correct. But if that is true, then such an attempt would not constitute a proper copy. As time goes on, our ability to create a neural and body copy will increase in resolution and accuracy at the same exponential pace that pertains to all info-based tech.</p>
<p>We ultimately will be able to capture and recreate my pattern of salient neural + physical details to any desired degree of accuracy. Although the copy shares my pattern, it would be hard to say that the copy is me &#8212; because I would, or could, still be here. You could even scan and copy me while I was sleeping.</p>
<p>If you come to me in the morning and say, “Good news, Ray, we’ve successfully re-instantiated you into a more durable substrate, so we won’t be needing your old body and brain anymore,” I may beg to differ.</p>
<p>If you do the thought experiment, it’s clear that the copy may look and act just like me, but it’s nonetheless not me because I may not even know that he was created. Although he would have all my memories and recall having been me, from the point in time of his creation, Ray 2 would have his own unique experiences and his reality would begin to diverge from mine.</p>
<p><span style="color: #ffaa00;">Let’s follow this train of thought.</span></p>
<p>Now let’s follow this train of thought a bit further &#8212; and you will see where the dilemma comes in. If we copy me, and then destroy the original, then that’s the end of me because as we concluded above the copy is not me.</p>
<p>Since the copy will do a convincing job of impersonating me, no one may know the difference, but it’s nonetheless the end of me.</p>
<p>However, this scenario is entirely equivalent to one in which I am replaced gradually. In the case of gradual replacement, there is no simultaneous old me and new me.</p>
<p>But at the end of the gradual replacement process, you have the equivalent of the new me, and no old me. So gradual replacement also means the end of me.</p>
<p>However — as I pointed out at the beginning of this question — it’s the case that I’m actually being continually replaced. And by the way, it’s not so gradual, it’s a rapid process. As we concluded — all that persists is my pattern.</p>
<p>But the thought experiment above shows that gradual replacement means the end of me even if my pattern is preserved. So am I constantly being replaced by someone else who just seems a lot like me a few moments earlier?</p>
<p>So again — who am I? It’s the ultimate ontological question. We often refer to this question as the issue of consciousness. I have consciously — no pun intended — phrased the issue entirely in the first person because that’s the nature of the issue. It’s not a 3rd person question. So my question isn’t: who is John Doe? — although John Doe may ask this question himself.</p>
<p>When people speak of consciousness, they often slip into issues of behavioral + neurological correlates of consciousness &#8212; like whether or not an entity can be self-reflective.</p>
<p>But these are 3rd person, objective issues, and don’t represent what David Chalmers PhD — the philosopher + cognitive scientist — calls the “hard question” of consciousness. Chalmers specializes in the area of philosophy of mind and philosophy of language.</p>
<p>The question of whether or not an entity is conscious is only apparent to himself. The difference between neurological correlates of consciousness &#8212; that is, intelligent behavior &#8212; and the ontological reality of consciousness is the difference between objective (3rd person) and subjective (1st person) reality. For this reason, we’re unable to propose an objective consciousness detector that doesn’t have philosophical assumptions built into it.</p>
<p><span style="color: #ffaa00;">Well, you see the problem.</span></p>
<p>I believe that humans will come to accept that non-biological entities are conscious. Because ultimately they’ll have all the subtle cues that humans currently possess that we associate with emotional (and other subjective) experiences.</p>
<p>But that’s a political and psychological prediction, not an observation that we’ll be able to scientifically verify. We assume that other humans are conscious, but this is an assumption, and it’s not something we can objectively demonstrate.</p>
<p>I acknowledge that people seem conscious to me, but I shouldn’t be too quick to accept this impression. Maybe I’m really living in a simulation, and other people are part of the simulation.</p>
<p>Or maybe it’s only my memories that exist, and the actual experience never took place. Or maybe I’m only now experiencing the sensation of recalling apparent memories of having met a person — but neither the experience nor the memories really exist. Well, you see the problem.</p>
<hr class="dotted" />
<p>IMAGE</p>
<hr class="dotted" />
<p>— notes —</p>
]]></content:encoded>
			<wfw:commentRss>https://www.writingsbyraykurzweil.com/who-am-i-what-am-i/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Don&#8217;t delay AI development for large language models.</title>
		<link>https://www.writingsbyraykurzweil.com/dont-delay-ai-development-for-large-language-models</link>
		<comments>https://www.writingsbyraykurzweil.com/dont-delay-ai-development-for-large-language-models#respond</comments>
		<pubDate>Sun, 01 Jan 2023 06:00:48 +0000</pubDate>
								<dc:creator></dc:creator>
		
								<media:thumbnail url="https://www.writingsbyraykurzweil.com/images/photo-AI-chat-mobile-no-thumb-v3-140x129.png" width="140" height="129" />
		
				<category><![CDATA[essays]]></category>

		<guid isPermaLink="false">https://www.writingsbyraykurzweil.com/?p=370672</guid>
		<description><![CDATA[&#8212; contents &#8212; ~ essay ~ webpages label &#124; essay: My opinion on delaying AI development for large language models. author: by Ray Kurzweil date: April 2023 essay &#124; Regarding the Open Letter to “pause” research on AI &#8220;more powerful than GPT-4,&#8221; this criterion is too vague to be practical. And the proposal faces a [&#8230;]]]></description>
			<content:encoded><![CDATA[<p><img style=' float: left; padding: 4px; margin: 0 7px 2px 0;'  loading="lazy" decoding="async" class="noshadow alignleft size-large wp-image-344350" src="https://www.writingsbyraykurzweil.com/images/photo-AI-chat-mobile-no-thumb-v3-680x626.png" alt="" width="680" height="626" /></p>
<hr />
<p><span style="color: #ffaa00;">&#8212; contents &#8212;</span></p>
<p>~ essay<br />
~ webpages</p>
<hr />
<p>label |</p>
<p><span style="color: #ffaa00;">essay:</span> My opinion on delaying AI development for large language models.<br />
<span style="color: #ffaa00;">author:</span> by Ray Kurzweil<br />
<span style="color: #ffaa00;">date:</span> April 2023</p>
<hr />
<p><span style="color: #ffaa00;">essay |</span></p>
<p>Regarding the <a href="https://futureoflife.org/open-letter/pause-giant-ai-experiments" target="_blank" rel="noopener">Open Letter</a> to “pause” research on AI &#8220;more powerful than <a href="https://openai.com/product/gpt-4" target="_blank" rel="noopener">GPT-4</a>,&#8221; this criterion is too vague to be practical. And the proposal faces a serious coordination problem: those that agree to a pause may fall far behind corporations or nations that disagree.</p>
<p>There are tremendous benefits to advancing AI in critical fields such as medicine and health, education, pursuit of renewable energy sources to replace fossil fuels, and scores of other fields. I didn&#8217;t sign, because I believe we can address the signers&#8217; safety concerns in a more tailored way that doesn&#8217;t compromise these vital lines of research.</p>
<p>I participated in the <a href="https://futureoflife.org/open-letter/ai-principles" target="_blank" rel="noopener">Asilomar AI Principles Conference</a> in 2017 and was actively involved in the creation of guidelines to create Artificial Intelligence in an ethical manner. So I know that safety is a critical issue. But more nuance is needed if we wish to unlock AI&#8217;s profound advantages to health and productivity while avoiding the real perils.</p>
<p>&#8212; <em>Ray Kurzweil</em><br />
Inventor, best-selling author, and futurist.</p>
<p><img style=' float: left; padding: 4px; margin: 0 7px 2px 0;'  loading="lazy" decoding="async" class="alignleft wp-image-343225 noshadow" title="autograph - Ray Kurzweil - no. 400" src="https://www.thekurzweillibrary.com/images/autograph-Ray-Kurzweil-no.-400.png" alt="" width="320" height="192" /></p>
<hr />
<p><span style="color: #ffaa00;">reference</span></p>
<hr />
<p><span style="color: #ffaa00;">group:</span> the Future of Life Institute<br />
<span style="color: #ffaa00;">web:</span> <a href="https://futureoflife.org/" target="_blank" rel="noopener">home</a> ~ <a href="https://www.youtube.com/@futureoflifeinstitute2787" target="_blank" rel="noopener">channel</a></p>
<p style="text-align: left;"><span style="color: #ffaa00;">open letter:</span> Pause Giant AI Experiments. | <a href="https://futureoflife.org/open-letter/pause-giant-ai-experiments/" target="_blank" rel="noopener">view<br />
</a><span style="color: #ffaa00;">date:</span> March 2023</p>
<p><span style="color: #ffaa00;">&#8212; about &#8212;</span></p>
<p>This letter&#8217;s signatories call on all AI labs to immediately pause, for at least 6 months, the training of artificial intelligence systems more powerful than GPT-4, the AI system made by the company OpenAI.</p>
<hr />
<p><span style="color: #ffaa00;">&#8212; notes &#8212;</span></p>
<p>AI = artificial intelligence<br />
GPT = generative pre-trained transformer</p>
]]></content:encoded>
			<wfw:commentRss>https://www.writingsbyraykurzweil.com/dont-delay-ai-development-for-large-language-models/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>The new era of health + medicine as info-tech.</title>
		<link>https://www.writingsbyraykurzweil.com/the-new-era-of-health-medicine-as-info-tech</link>
		<comments>https://www.writingsbyraykurzweil.com/the-new-era-of-health-medicine-as-info-tech#respond</comments>
		<pubDate>Sat, 01 Jan 2022 06:00:46 +0000</pubDate>
								<dc:creator></dc:creator>
		
		
				<category><![CDATA[essays]]></category>

		<guid isPermaLink="false">http://www.kurzweilai.net/?p=106736</guid>
		<description><![CDATA[IMAGE &#8212; content &#8212; ~ essay &#124; by Ray Kurzweil ~ webpages essay title: The new era of health + medicine as info-tech is broader than individual genes. author: by Ray Kurzweil date: 2010 read &#124; essay ESSAY The new era of health + medicine as info-tech is broader than individual genes. by Ray Kurzweil [&#8230;]]]></description>
			<content:encoded><![CDATA[<p>IMAGE</p>
<hr class="dotted" />
<p>&#8212; content &#8212;</p>
<p>~ essay | by Ray Kurzweil<br />
~ webpages</p>
<hr class="dotted" />
<p><span style="color: #ffaa00;">essay title:</span> The new era of health + medicine as info-tech is broader than individual genes.<br />
<span style="color: #ffaa00;">author:</span> by Ray Kurzweil<br />
<span style="color: #ffaa00;">date:</span> 2010</p>
<p><span style="color: #ffaa00;">read</span> | <a href="https://transcend.me/blogs/aging-longevity/a-broader-view-of-genomics-with-ray-kurzweil">essay</a></p>
<hr class="dotted" />
<p>ESSAY<br />
The new era of health + medicine as info-tech is broader than individual genes.<br />
by Ray Kurzweil</p>
<p><span style="color: #ffaa00;">An introduction.</span></p>
<p>Is it time to re-think the promise of genomics? Because the impact on health care is bigger than genetics.</p>
<p>There has been recent disappointment expressed in the progress in the field of genomics. In my view, this results from an overly narrow view of the science of genes and of biological information processing in general. It reminds me of the time when the field of &#8220;artificial intelligence&#8221; (AI) was equated with the methodology of &#8220;expert systems.&#8221; If someone referred to AI, they were actually referring to expert systems, and there were many articles on how limited this technique was and on all of the things that it could not and would never be able to do.</p>
<p>At the time, I expressed my view that although expert systems was a useful approach for a certain limited class of problems, it did indeed have restrictions, and that the field of AI was far broader.</p>
<p>The human brain works primarily by recognizing patterns (we have about a billion pattern recognizers in the neocortex, for example) and there were at the time many emerging methods in the field of pattern recognition that were solving real world problems and that should properly be considered part of the AI field. Today, no one talks much about expert systems and there is a thriving multi-hundred billion dollar AI industry and a consensus in the AI field that nonbiological intelligence will continue to grow in sophistication, flexibility, and diversity.</p>
<p>The same thing is happening here. The problem starts with the word “genomics.” The word sounds like it refers to “all things having to do with genes.” But as practiced, it deals almost exclusively with single genes and their ability to predict traits or conditions, which has always been a narrow concept. The idea of sequencing genes of an individual is even narrower and typically involves individual single-nucleotide polymorphisms (SNPs) which are variations in a single nucleotide (A, T, C or G) within a gene, basically a two bit alteration.</p>
<p>I have never been overly impressed with this approach and saw it as a first step based on the limitations of early technology. There are some useful SNPs such as Apo E4 but even here it only gives you statistical information on your likelihood of such conditions as Alzheimer’s Disease and macular degeneration based on population analyses. It is certainly not deterministic and has never been thought of that way. As Dr. Venter points out in his <em>Der Spiegel</em> interview, there are hundreds of diseases that can be traced to defects in individual genes, but most of these affect developmental processes. So if you provide a medication that reverses the effect of the faulty gene you still have the result of the developmental process (of, say, the nervous system) that has been going on for many years. You would need to detect and reverse the condition very early, which of course is possible and a line of current investigation.</p>
<p>To put this narrow concept of genomics into perspective, think of genes as analogous to lines of code in a software program. If you examine a software program, you generally cannot assign each line of code to a property of the program. The lines of code work together in a complex way to produce a result. Now it is possible that in some circumstances you may be able to find one line of code that is faulty and improve the program’s performance by fixing that one line or even by removing it. But such an approach would be incidental and accidental; it is not the way that one generally thinks of software. To understand the program, you would need to understand the language it is written in and how the various lines interact with each other. In this analogy, a SNP would be comparable to a single letter within a single line (actually a quarter of one letter, to be precise, since a letter is usually represented by 8 bits and a nucleotide by 2 bits). You might be able to find a particularly critical letter in a software program, but again that is not a well-motivated approach.</p>
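<p>The bit arithmetic in the analogy above can be made concrete with a short sketch (my illustration): a nucleotide has four possible values, so it fits in 2 bits, a quarter of the 8 bits of an ASCII letter.</p>

```python
# A nucleotide (A, C, G, T) has 4 possible values, so it needs only
# 2 bits -- a quarter of the 8 bits used for an ASCII letter.

CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def pack(seq: str) -> int:
    """Pack a nucleotide string into an integer, 2 bits per base."""
    bits = 0
    for base in seq:
        bits = (bits << 2) | CODE[base]
    return bits

seq = "GATTACA"
packed = pack(seq)
print(packed.bit_length())  # 14 -- at most 2 bits per base
print(len(seq) * 8)         # 56 -- the same sequence as ASCII text
```

<p>The 4-to-1 compression is the easy part; as the analogy suggests, the hard part is that the meaning of any one &#8220;letter&#8221; depends on how the whole program runs.</p>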
<p>The collection of the human genome was indeed an exponential process, with the amount of genetic data doubling each year and the cost of sequencing coming down by half each year. But its completion around 2003 was just the beginning of another, even more daunting process, which is to understand it. The language is the three-dimensional properties and interactions of proteins. Individual genes were a reasonable place to start, but that approach was always going to be inherently limited if you consider my analogy above to the role of single lines in a software program.</p>
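To see what annual cost-halving compounds to, here is a minimal sketch (the dollar figure is an illustrative assumption, not a number from the essay):

```python
# If sequencing cost halves every year, the cost after n years is cost0 / 2**n.
def cost_after(cost0, years):
    return cost0 / 2 ** years

# Ten years of annual halving is a ~1,000x improvement (2**10 = 1024):
# an illustrative $100,000,000 genome falls to under $100,000.
print(cost_after(100_000_000, 10))  # 97656.25
```

This is the general shape of any exponential improvement curve: a constant doubling (or halving) period turns modest annual gains into orders of magnitude over a decade.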
<div id="attachment_106747" style="width: 346px;  border: 1px solid #dddddd; background-color: #f3f3f3; padding: 4px; margin: 10px; text-align:center; float: left;" class="wp-caption alignleft"><img loading="lazy" decoding="async" aria-describedby="caption-attachment-106747" class="size-full wp-image-106747" title="dna structure" src="http://www.kurzweilai.net/images/dnastructure.jpg" alt="" width="346" height="338" /><p style=' padding: 0 4px 5px; margin: 0;'  id="caption-attachment-106747" class="wp-caption-text">The structure of DNA. (Image: The U.S. National Library of Medicine)</p></div>
<p>As we consider the genome, the first thing we notice is that only about 3 percent of the human genome codes for proteins. With about 23,000 genes, there are over 23,000 proteins (as some genes code for more than one protein) and, of course, these proteins interact with each other in complicated pathways.</p>
<p>A trait in a complex organism such as a human being is actually an emergent property of this complex and organized collection of proteins. The 97 percent of the genome that does not code for proteins was originally called “junk DNA.”</p>
<p>We now understand that this portion of the genome has an important role in controlling and influencing gene expression. There is, however, less information in these non-coding regions, and they are replete with redundancies that we do not see in the coding regions.</p>
<p>For example, one lengthy sequence called Alu is repeated hundreds of thousands of times. Gene expression is a vital aspect of understanding these genetic processes. The noncoding DNA plays an important role in this, but so do environmental factors. Even setting aside the fact that genes work in networks, not as individual entities, genes have never been thought of as deterministic.</p>
<p>The “nature versus nurture” discussion goes back eons. What our genetic heritage describes (and by genetic heritage I include the epigenetic information that influences gene expression) is an entity (a human being) that is capable of evolving in and adapting to a complex environment. Our brain, for example, only becomes capable of intelligent decision making through its constant adaptation to and learning from its environment.</p>
<p>To reverse-engineer biology we need to examine phenomena at different levels, especially looking at the role that proteins (which are coded for in the genome) play in biological processes. In understanding the brain, for example, there is indeed exponential progress being made in simulating neurons, neural clusters, and entire regions. This work includes understanding the “wiring” of the brain (which incidentally includes massive redundancy) and how the modules in the brain (which involve multiple neuron types) process information. Then we can link these processes to biochemical pathways, which ultimately links back to genetic information. But in the process of reverse-engineering the brain, genetic information is only one source and not the most important one at that.</p>
<p>So genes are one level of understanding biology as an information process, but there are other levels as well, and some of these other levels (such as actual biochemical pathways, or mechanisms in organs including the brain) are more accessible than genetic information. In any event, just examining individual genes, let alone SNPs, is like looking through a very tiny keyhole.</p>
<p>As another example of why the idea of examining individual genes is far from sufficient, I am currently involved with a cancer stem cell project with MIT scientists Dr. William Thilly and Dr. Elena Gostjeva. What we have found is that mutations in certain stem cells early in life will turn that stem cell into a cancer stem cell which in turn will reproduce and ultimately seed a cancer tumor. It can take years and often decades for the tumor to become clinically evident. But you won’t find these mutations in a blood test because they are mutations originally in a single cell (which then reproduces to create nearby cells), not in all of your cells. However, understanding the genetic mutations is helping us to understand the process of metastasis, which we hope will lead to treatments that can inhibit the formation of new tumors. This is properly part of gene science but is not considered part of the narrow concept of “genomics,” as that term is understood.</p>
<p>Indeed there is a burgeoning field of stem cell treatments using adult stem cells in the positive sense of regenerating needed tissues. This is certainly a positive and clinically relevant result of the overall science and technology of genes.</p>
<p>If we consider the science and technology of genes and information processing in biology in its proper broad context, there are many exciting developments that have current or near term clinical implications, and enormous promise going forward.</p>
<p>A few years ago, Joslin Diabetes Center researchers showed that inhibiting a particular gene (which they called the fat insulin receptor gene) in the fat cells (but not the muscle cells, as that would negatively affect muscles) enabled caloric restriction without the restriction. The test animals ate ravenously and remained slim. They did not get diabetes or heart disease and lived 20 percent longer, getting most of the benefit of caloric restriction. This research is continuing, now focusing on doing the same thing in humans, and the researchers I spoke with recently are optimistic.</p>
<p>We now have a technology that can turn genes off, one that emerged after the completion of the human genome project (and has already been recognized with the Nobel Prize): RNA interference (RNAi). There are hundreds of drugs and other processes in the development and testing pipeline using this methodology. As I said above, human characteristics, including disease, result from the interplay of multiple genes. There are often individual genes which, if inhibited, can have a significant therapeutic effect (just as we might disable a rogue software program by overwriting one line of code or one machine instruction).</p>
<p>There are also new methods of adding genes. I am an advisor (and board member) to <a href="http://www.unither.com/" target="_blank" rel="noopener"><strong>United Therapeutics</strong></a>, which has developed a method to take lung cells out of the body, add a new gene in vitro (so that the immune system is not triggered — which was a downside of the old methods of gene therapy), inspect the new cell, and replicate it several million fold. You now have millions of cells with your DNA but with a new gene that was not there before. These are injected back into the body and end up lodged in the lungs. This has cured a fatal disease (pulmonary hypertension) in animal trials and is now undergoing human testing. There are also hundreds of such projects using this and other new forms of gene therapy.</p>
<p>As we understand the network of genes that are responsible for human conditions, especially reversible diseases, we will have the means of changing multiple genes, and turning some off or inhibiting them, turning others on or amplifying them. Some of these approaches are entering human trials. More complex approaches involving multiple genes will require greater understanding of gene networks but that is coming.</p>
<p>There is a new wave of drugs entering trials, some in late-stage trials, that are based on gene results. For example, an experimental drug from Roche, PLX4032, is designed to attack tumor cells with a mutation in a particular gene called BRAF. Among patients with this genetic variant, 81 percent of those with advanced melanoma had their tumors shrink (rather than grow), which is an impressive result for a form of cancer that is generally resistant to conventional treatment.</p>
<p>There is the whole area of regenerative medicine from stem cells. Some of this is now being done with adult autologous stem cells. Particularly exciting is the recent breakthrough in induced pluripotent stem cells (IPSCs). This involves using in-vitro genetic engineering to add genes to normal adult cells (such as skin cells) to convert them into the equivalent of embryonic stem cells, which can subsequently be converted into any type of cell (with your own DNA). IPSCs have been shown to be pluripotent, to have efficacy, and not to trigger the immune system, because they are genetically identical to the patient. IPSCs offer the potential to repair essentially any organ, from the heart to the liver and pancreas. These methods are part of genetic engineering, which in turn is part of gene science and technology.</p>
<p>And then of course there is the entire new field of synthetic biology, which is based on synthetic genomes. A major enabling breakthrough was recently announced by Craig Venter’s company, in which an organism with a synthetic genome (one that previously existed only as a computer file) was created. This field is based on entire genomes, not just individual genes, and it is certainly part of the broad field of gene science and technology. The goal is to create organisms that can do useful work, such as producing vaccines and other medicines, biofuels, and other valuable industrial substances.</p>
<p>You could write a book (or many books) about all of the advances that are being made in which knowledge of genetic processes and other biological information processes play a critical role. Health and medicine used to be entirely hit or miss without any concept of how biology worked on an information level. Our knowledge is still very incomplete, but our knowledge of these processes is growing exponentially and that is feeding into medical research which is already bearing fruit. To focus just on the narrow concepts that were originally associated with “genomics” is as limited a view as the old idea of AI being just expert systems.</p>
<p><em>End of the essay.</em></p>
]]></content:encoded>
			<wfw:commentRss>https://www.writingsbyraykurzweil.com/the-new-era-of-health-medicine-as-info-tech/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>A wager on the Turing test &#8212; the rules</title>
		<link>https://www.writingsbyraykurzweil.com/a-wager-on-the-turing-test-the-rules-2</link>
		<comments>https://www.writingsbyraykurzweil.com/a-wager-on-the-turing-test-the-rules-2#respond</comments>
		<pubDate>Wed, 01 Jul 2020 05:00:22 +0000</pubDate>
								<dc:creator></dc:creator>
		
		
				<category><![CDATA[essays]]></category>

		<guid isPermaLink="false">https://www.kurzweilai.net/?p=340781</guid>
		<description><![CDATA[An explanation of rules behind the Turing Test, used to determine the winner of a long bet between Ray Kurzweil and Mitch Kapor over whether artificial intelligence will be achieved by 2029. Published April 2002 . Click here to see why Ray Kurzweil thinks he will win. Click here to read why Mitch Kapor thinks [&#8230;]]]></description>
			<content:encoded><![CDATA[<p><em>An explanation of rules behind the Turing Test, used to determine the winner of a long bet between Ray Kurzweil and Mitch Kapor over whether artificial intelligence will be achieved by 2029.</em></p>
<p><i>Published April 2002. Click <a href="/a-wager-on-the-turing-test-why-i-think-i-will-win" target="_self" rel="noopener">here</a> to see why Ray Kurzweil thinks he will win. Click <a href="/why-i-think-i-will-win" target="_self" rel="noopener">here</a> to read why Mitch Kapor thinks he&#8217;ll win. Finally, see Ray&#8217;s <a href="/response-to-mitchell-kapor-s-why-i-think-i-will-win" target="_self" rel="noopener">response</a>.</i></p>
<p><b>Background on the &#8220;Long Now </b><b>Turing Test</b><b> Wager.&#8221;</b> Ray Kurzweil maintains that a computer (i.e., a machine intelligence) will pass the Turing test by 2029. Mitchell Kapor believes this will not happen.<span id="more-369803"></span></p>
<p>This wager is intended to be the inaugural long term bet to be administered by the Long Now Foundation. The proceeds of the wager are to be donated to a charitable organization designated by the winner.</p>
<p>This document provides a brief description of the Turing Test and a set of high level rules for administering the wager. These rules contemplate setting up a &#8220;Turing Test Committee&#8221; which will create the detailed rules and procedures to implement the resolution of the wager. A primary objective of the Turing Test Committee will be to set up rules and procedures that avoid and deter cheating.</p>
<p><b>Brief Description of the Turing test.</b> In a 1950 paper (&#8220;Computing Machinery and Intelligence,&#8221; <i>Mind</i> 59 (1950): 433&#8211;460, reprinted in E. Feigenbaum and J. Feldman, eds., <i>Computers and Thought</i>, New York: McGraw-Hill, 1963), Alan Turing describes his concept of the Turing Test, in which one or more human judges interview computers and human foils using terminals (so that the judges won&#8217;t be prejudiced against the computers for lacking a human appearance). The nature of the dialog between the human judges and the candidates (i.e., the computers and the human foils) is similar to an online chat using instant messaging. The computers as well as the human foils try to convince the human judges of their humanness. If the human judges are unable to reliably unmask the computers (as imposter humans), then the computer is considered to have demonstrated human-level intelligence<sup>1</sup>.</p>
<p>Turing was very specifically nonspecific about many aspects of how to administer the test, leaving open key details such as the duration of the interrogation and the sophistication of the human judges and foils. The purpose of the rules described below is to provide a set of procedures for administering the test some decades hence.</p>
<h1>The Procedure for the Turing Test Wager: The Turing Test General Rules</h1>
<p>These Turing Test General Rules may be modified by agreement of Ray Kurzweil and Mitchell Kapor, or, if either Ray Kurzweil and/or Mitchell Kapor is not available, then by the Turing Test Committee (described below). However, any such change to these Turing Test General Rules shall only be made if (i) these rules are determined to have an inconsistency, or (ii) these rules are determined to be inconsistent with Alan Turing&#8217;s intent of determining human-level intelligence in a machine, or (iii) these rules are determined to be unfair, or (iv) these rules are determined to be infeasible to implement.</p>
<h2>I. Definitions.</h2>
<p>A Human is a biological human person as that term is understood in the year 2001 whose intelligence has not been enhanced through the use of machine (i.e., nonbiological) intelligence, whether used externally (e.g., the use of an external computer) or internally (e.g., neural implants). A Human may not be genetically enhanced (through the use of genetic engineering) beyond the level of human beings in the year 2001.</p>
<p>A Computer is any form of nonbiological intelligence (hardware and software) and may include any form of technology, but may not include a biological Human (enhanced or otherwise) nor biological neurons (however, nonbiological emulations of biological neurons are allowed).</p>
<p>The Turing Test Committee will consist of three Humans, to be selected as described below.</p>
<p>The Turing Test Judges will be three Humans selected by the Turing Test Committee.</p>
<p>The Turing Test Human Foils will be three Humans selected by the Turing Test Committee.</p>
<p>The Turing Test Participants will be the three Turing Test Human Foils and one Computer.</p>
<h2>II. The Procedure</h2>
<p>The Turing Test Committee will be appointed as follows.</p>
<ul>
<li>One member will be Ray Kurzweil or his designee, or, if not available, a person appointed by the Long Now Foundation. In the event that the Long Now Foundation appoints this person, it shall use its best efforts to appoint a Human person that best represents the views of Ray Kurzweil (as expressed in the attached essay &#8220;Why I Think I Will Win The Long Now Turing Test Wager.&#8221;)</li>
<li>A second member will be Mitchell Kapor or his designee, or, if not available, a person appointed by the Long Now Foundation. In the event that the Long Now Foundation appoints this person, it shall use its best efforts to appoint a Human person that best represents the views of Mitchell Kapor (as expressed in the attached essay &#8220;Why I Think I Will Win The Long Now Turing Test Wager.&#8221;)</li>
<li>A third member will be appointed by the above two members, or if the above two members are unable to agree, then by the Long Now Foundation, who in its judgment, is qualified to represent a &#8220;middle ground&#8221; position.</li>
</ul>
<p>Ray Kurzweil, or his designee, or another member of the Turing Test Committee, or the Long Now Foundation may, from time to time call for a Turing Test Session to be conducted and will select or provide one Computer for this purpose. For those Turing Test Sessions called for by Ray Kurzweil or his designee or another member of the Turing Test committee (other than the final one in 2029), the person calling for the Turing Test Session to be conducted must provide (or raise) the funds necessary for the Turing Test Session to be conducted. In any event, the Long Now Foundation is not obligated to conduct more than two such Turing Test Sessions prior to the final one (in 2029) if it determines that conducting such additional Turing Test Sessions would be an excessive administrative burden.</p>
<p>The Turing Test Committee will provide the detailed rules and procedures to implement each such Turing Test Session using its best efforts to reflect the rules and procedures described in this document. <u>The primary goal of the </u><u>Turing Test</u><u> Committee will be to devise rules and procedures which avoid and deter cheating to the maximum extent possible.</u> These detailed rules and procedures will include (i) specifications of the equipment to be used, (ii) detailed procedures to be followed, (iii) specific instructions to be given to all participants including the Turing Test Judges, the Turing Test Human Foils and the Computer, (iv) verification procedures to assure the integrity of the proceedings, and (v) any other details needed to implement the Turing Test Session. Beyond the Turing Test General Rules described in this document, the Turing Test Committee will be guided to the best of its ability by the original description of the Turing Test by Alan Turing in his 1950 paper. The Turing Test Committee will also determine procedures to resolve any deadlocks that may occur in its own deliberations.</p>
<p>Each Turing Test Session will consist of at least three Turing Test Trials.</p>
<p>For each such Turing Test Trial, a set of Turing Test Interviews will take place, followed by voting by the Turing Test Judges as described below.</p>
<p>Using its best judgment, the Turing Test Committee will appoint three Humans to be the Turing Test Judges.</p>
<p>Using its best judgment, the Turing Test Committee will appoint three Humans to be the Turing Test Human Foils. The Turing Test Human Foils should not be known (either personally or by reputation) to the Turing Test Judges.</p>
<p>During the Turing Test Interviews (for each Turing Test Trial), each of the three Turing Test Judges will conduct online interviews of each of the four Turing Test Candidates (i.e., the Computer and the three Turing Test Human Foils) for two hours each, for a total of eight hours of interviews per judge (24 hours of interviews in all).</p>
<p>The Turing Test Interviews will consist of online text messages sent back and forth as in an online &#8220;instant messaging&#8221; chat, as that concept is understood in the year 2001.</p>
<p>The Human Foils are instructed to try to respond in as human a way as possible during the Turing Test Interviews.</p>
<p>The Computer is also intended to respond in as human a way as possible during the Turing Test Interviews.</p>
<p>Neither the Turing Test Human Foils nor the Computer are required to tell the truth about their histories or other matters. All of the candidates are allowed to respond with fictional histories.</p>
<p>At the end of the interviews, each of the three Turing Test Judges will indicate his or her verdict with regard to each of the four Turing Test Candidates, indicating whether said candidate is human or machine. The Computer will be deemed to have passed the &#8220;Turing Test Human Determination Test&#8221; if the Computer has fooled two or more of the three Human Judges into thinking that it is a human.</p>
<p>In addition, each of the three Turing Test Judges will rank the four Candidates with a rank from 1 (least human) to 4 (most human). The Computer will be deemed to have passed the &#8220;Turing Test Rank Order Test&#8221; if the median rank of the Computer is equal to or greater than the median rank of two or more of the three Turing Test Human Foils.</p>
<p>The Computer will be deemed to have passed the Turing Test if it passes both the Turing Test Human Determination Test and the Turing Test Rank Order Test.</p>
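Because the two tests are stated mechanically, they can be expressed directly in code. A minimal sketch (the function names and data layout are my own, not part of the official rules):

```python
from statistics import median

def human_determination_test(judge_verdicts):
    """judge_verdicts: one boolean per judge, True if that judge
    believed the Computer was human. Passes if two or more of the
    three judges were fooled."""
    return sum(judge_verdicts) >= 2

def rank_order_test(computer_ranks, foil_ranks):
    """computer_ranks: the Computer's rank from each judge
    (1 = least human, 4 = most human); foil_ranks: ranks per Human Foil.
    Passes if the Computer's median rank equals or exceeds the median
    rank of two or more of the three Human Foils."""
    m = median(computer_ranks)
    matched = sum(1 for ranks in foil_ranks.values() if m >= median(ranks))
    return matched >= 2

def passes_turing_test(judge_verdicts, computer_ranks, foil_ranks):
    # The Computer must pass BOTH tests to pass the Turing Test.
    return (human_determination_test(judge_verdicts)
            and rank_order_test(computer_ranks, foil_ranks))

# Example trial: two judges fooled, and the Computer's median rank (3)
# equals or exceeds the medians of foils B (2) and C (1).
verdicts = [True, True, False]
c_ranks = [3, 4, 2]
foils = {"A": [4, 3, 4], "B": [2, 1, 3], "C": [1, 2, 1]}
print(passes_turing_test(verdicts, c_ranks, foils))  # True
```

Note that each judge assigns the ranks 1 through 4 across the four Candidates, so in the example every judge's four ranks form a permutation of 1–4.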
<p>If a Computer passes the Turing Test, as described above, prior to the end of the year 2029, then Ray Kurzweil wins the wager. Otherwise Mitchell Kapor wins the wager.</p>
<p><sup>1</sup> Turing&#8217;s initial description of his test was as a parlor game in which judges try to determine the gender of male and female human contestants. He then suggests the applicability of this type of game to its present purpose of determining when the level of intelligence of a machine is indistinguishable from that of a human.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.writingsbyraykurzweil.com/a-wager-on-the-turing-test-the-rules-2/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>essays &#124; celebrating: 15 year anniversary of the book the Singularity Is Near</title>
		<link>https://www.writingsbyraykurzweil.com/essays-celebrating-15-year-anniversary-of-the-book-the-singularity-is-near</link>
		<comments>https://www.writingsbyraykurzweil.com/essays-celebrating-15-year-anniversary-of-the-book-the-singularity-is-near#respond</comments>
		<pubDate>Wed, 01 Jan 2020 06:00:11 +0000</pubDate>
								<dc:creator></dc:creator>
		
								<media:thumbnail url="https://www.writingsbyraykurzweil.com/images/art-15-year-anniversary-no.-1-140x88.png" width="140" height="88" />
		
				<category><![CDATA[essays]]></category>

		<guid isPermaLink="false">http://www.kurzweilai.net/?p=263066</guid>
		<description><![CDATA[&#8212; contents &#8212; ~ welcome ~ bibliography ~ summary &#8212; welcome &#8212; Dear readers, Year 2020 celebrates the 15th anniversary of the acclaimed book the Singularity Is Near written by Ray Kurzweil &#8212; best-selling author, inventor, and futurist. The book was published September 2005. In the time since its publication, society has witnessed a whirlwind of [&#8230;]]]></description>
			<content:encoded><![CDATA[<p><img style=' float: left; padding: 4px; margin: 0 7px 2px 0;'  loading="lazy" decoding="async" class="alignleft  wp-image-333176 noshadow" title="art - 15 year anniversary - no. 1" src="https://www.kurzweilai.net/images/art-15-year-anniversary-no.-1.png" alt="" width="700" height="442" srcset="https://www.writingsbyraykurzweil.com/images/art-15-year-anniversary-no.-1.png 5700w, https://www.writingsbyraykurzweil.com/images/art-15-year-anniversary-no.-1-259x164.png 259w, https://www.writingsbyraykurzweil.com/images/art-15-year-anniversary-no.-1-680x429.png 680w, https://www.writingsbyraykurzweil.com/images/art-15-year-anniversary-no.-1-140x88.png 140w" sizes="auto, (max-width: 700px) 100vw, 700px" /></p>
<hr class="dotted" />
<p><span style="color: #ffaa00;">&#8212; contents &#8212;</span></p>
<p>~ welcome<br />
~ bibliography<br />
~ summary</p>
<hr class="dotted" />
<p><span style="color: #ffaa00;">&#8212; welcome &#8212;</span></p>
<p>Dear readers,</p>
<p>Year 2020 celebrates the 15th anniversary of the acclaimed book <em>the Singularity Is Near</em> written by Ray Kurzweil &#8212; best-selling author, inventor, and futurist. The book was published September 2005.</p>
<p>In the time since its publication, society has witnessed a whirlwind of breakthroughs: genetic engineering, autonomous robots, extreme computation, renewable energy. Advanced sensor arrays and internet meshes are uniting all people + things in the connected habitats we live in, and with each other. Today&#8217;s massively scaled knowledge &#8212; and shared human experience &#8212; will take us to the future.</p>
<p>15 years later, this classic book is still relevant. Ray is currently completing his new book <em>the Singularity Is Nearer</em> &#8212; debuting in year 2022.</p>
<p>&#8212; <em>library editor</em></p>
<hr class="dotted" />
<p><span style="color: #ffaa00;">bibliography</span> | books by Ray Kurzweil</p>
<p>&#8212; non-fiction &#8212;</p>
<ul>
<li>no. 1 &#8212; <span style="color: #ff9900;">year:</span> 1990 • the Age of Intelligent Machines</li>
<li>no. 2 &#8212;<span style="color: #ff9900;"> year:</span> 1992 • 10% Solution for a Healthy Life &#8212; how to eliminate virtually all risk of heart disease + cancer</li>
<li>no. 3 &#8212; <span style="color: #ff9900;">year:</span> 1999 • the Age of Spiritual Machines &#8212; when computers exceed human intelligence</li>
<li>no. 4 &#8212; <span style="color: #ff9900;">year:</span> 2004 • Fantastic Voyage &#8212; live long enough to live forever, the science behind radical life extension</li>
<li>no. 5 &#8212; <span style="color: #ffaa00;">year:</span> 2005 • the Singularity Is Near &#8212; when humans transcend biology</li>
<li>no. 6 &#8212; <span style="color: #ffaa00;">year:</span> 2009 • Transcend &#8212; 9 steps to living well forever</li>
<li>no. 7 &#8212; <span style="color: #ffaa00;">year:</span> 2012 • How to Create a Mind &#8212; the secret of human thought revealed</li>
</ul>
<p>&#8212; novel &#8212;</p>
<ul>
<li>no. 8 &#8212; <span style="color: #ffaa00;">year:</span> 2019 • <a href="http://www.DanielleWorld.com" target="_blank" rel="noopener">Danielle</a> &#8212; Chronicles of a Superheroine<br />
<em>Includes 2 non-fiction companion books.</em></li>
</ul>
<hr class="dotted" />
<p><span style="color: #ffaa00;">&#8212; summary &#8212;</span></p>
<p><em>The Singularity Is Near</em> presents the next stage of Ray Kurzweil&#8217;s compelling view of the future &#8212; the merger of humans + machines. He refers to this period as a singularity, when the pace of technological change is so rapid, and its impact so deep, that human life transforms.</p>
<p>Kurzweil explains we&#8217;re already in the early stages of this transition. And within a few decades, life as we know it will be completely different. The book has sold 255,000 copies, and is printed in 17 languages &#8212; spotlighting a growing mainstream + international interest in humanity&#8217;s future.</p>
<p>Kurzweil writes:</p>
<p style="padding-left: 30px;">“The singularity will be a merger of our bodies + minds with our technology. The world will still be human, but transcend our biology&#8217;s roots. There will be no distinction between human and machine &#8212; nor between physical and virtual reality. If you wonder what will remain unequivocally human, it’s this quality &#8212; our species inherently seeks to extend its physical and mental reach beyond current limitations.”</p>
<p><span style="color: #ffaa00;">book title:</span> <em><a href="http://www.kurzweilai.net/the-singularity-is-near-when-humans-transcend-biology" target="_blank" rel="noopener">the Singularity Is Near</a></em><br />
<span style="color: #ffaa00;">deck:</span> When humans transcend biology<br />
<span style="color: #ffaa00;">author:</span> by Ray Kurzweil<br />
<span style="color: #ffaa00;">date:</span> 2005</p>
<hr class="dotted" />
<p><span style="color: #ffaa00;">&#8212; special essay selection &#8212;</span></p>
<p><span style="color: #ffaa00;">writings:</span> 3 essays on the technological future<br />
<span style="color: #ffaa00;">author:</span> by Ray Kurzweil</p>
<p><span style="color: #ffaa00;">occasion:</span> 15 year anniversary<br />
<span style="color: #ffaa00;">for the book:</span> <em>the Singularity Is Near</em></p>
<p>For the 15 year anniversary &#8212; author Ray Kurzweil selected 3 of his essays below, touching on these topics:</p>
<ul>
<li>How does the human brain + mind work?</li>
<li>What will it take to probe its mysteries?</li>
<li>Why do science + tech breakthroughs seem to come out of nowhere?</li>
<li>How are people caught off guard by change?</li>
</ul>
<p>The essay set is a journey through themes presented in <em>the Singularity Is Near &#8212; </em> for your interest + curiosity, and to spark conversation about humanity&#8217;s accelerating relationship with tech.</p>
<p><span style="color: #ffaa00;">&#8212; table of contents  &#8212;</span></p>
<p>1.  <span style="color: #ffaa00;">essay</span> | Who am I &#8212; What am I<br />
2.  <span style="color: #ffaa00;">essay</span> | We are entering the singularity<br />
3.  <span style="color: #ffaa00;">essay</span> | Exponential technological progress in the 21st century</p>
<p>4.  <span style="color: #ffaa00;">supplement</span> | math &#8212; from the law of accelerating returns</p>
<hr class="dotted" />
<p><span style="color: #ffaa00;">essay</span> | no. 1</p>
<p><span style="color: #ffaa00;">essay title:</span> Who am I &#8212; What am I<br />
<span style="color: #ffaa00;">author:</span> by Ray Kurzweil</p>
<p><span style="color: #ffaa00;">&#8212; introduction &#8212;</span></p>
<p>Perhaps I am this stuff here, that is, the ordered and chaotic collection of molecules that comprise my body + brain.</p>
<p>But there’s a problem. The specific set of particles that comprise my body and brain are completely different from the atoms and molecules that comprised me only a short while ago &#8212; on the order of weeks. We know that most of our cells are turned over in a matter of weeks. Even those that persist longer, like neurons, nonetheless change their component molecules in a matter of weeks.</p>
<p><span style="color: #ffaa00;">I&#8217;m a completely different set of stuff than I was a month ago.</span></p>
<p>So I am a completely different set of stuff than I was a month ago. All that persists is the pattern of organization of that stuff. The pattern changes also, but slowly and in a continuum from my past self. From this perspective I am rather like the pattern that water makes in a stream as it rushes past the rocks in its path. The actual molecules (of water) change every millisecond, but the pattern persists for hours or even years.</p>
<p>So, perhaps we should say I am a pattern of matter and energy that persists in time.</p>
<p>But there is a problem here as well. We will ultimately be able to scan and copy this pattern in at least sufficient detail to replicate my body and brain to a sufficiently high degree of accuracy that the copy is indistinguishable from the original &#8212; that is, the copy could pass a &#8220;Ray Kurzweil&#8221; Turing test. I won’t repeat all the arguments for this here, but I describe this scenario in a number of documents, including the essay &#8220;<a href="http://www.kurzweilai.net/meme/frame.html?main=/articles/art0134.html" target="_blank" rel="noopener">the Law of Accelerating Returns</a>.&#8221;</p>
<p>The copy, therefore, will share my pattern. One might counter that we may not get every detail correct. But if that were true, the attempt would not constitute a proper copy. As time goes on, our ability to create a neural and body copy will increase in resolution and accuracy at the same exponential pace that pertains to all information-based technologies.</p>
<p>We ultimately will be able to capture and recreate my pattern of salient neural and physical details to any desired degree of accuracy. Although the copy shares my pattern, it would be hard to say that the copy is me &#8212; because I would, or could, still be here. You could even scan and copy me while I was sleeping.</p>
<p>If you come to me in the morning and say, “Good news, Ray, we’ve successfully re-instantiated you into a more durable substrate, so we won’t be needing your old body and brain anymore,” I may beg to differ.</p>
<p>If you do the thought experiment, it’s clear that the copy may look and act just like me, but it’s nonetheless not me because I may not even know that he was created. Although he would have all my memories and recall having been me, from the point in time of his creation, Ray 2 would have his own unique experiences and his reality would begin to diverge from mine.</p>
<p><span style="color: #ffaa00;">Let’s pursue this train of thought a bit further.</span></p>
<p>Now let’s pursue this train of thought a bit further &#8212; and you will see where the dilemma comes in. If we copy me, and then destroy the original, then that’s the end of me because as we concluded above the copy is not me.</p>
<p>Since the copy will do a convincing job of impersonating me, no one may know the difference, but it’s nonetheless the end of me.</p>
<p>However, this scenario is entirely equivalent to one in which I am replaced gradually. In the case of gradual replacement, there is no simultaneous old me and new me.</p>
<p>But at the end of the gradual replacement process, you have the equivalent of the new me, and no old me. So gradual replacement also means the end of me.</p>
<p>However, as I pointed out at the beginning of this question, it is the case that I am in fact being continually replaced. And, by the way, it’s not so gradual, but a rather rapid process. As we concluded, all that persists is my pattern.</p>
<p>But the thought experiment above shows that gradual replacement means the end of me even if my pattern is preserved. So am I constantly being replaced by someone else who just seems a lot like me a few moments earlier?</p>
<p>So, again, who am I? It’s the ultimate ontological question. We often refer to this question as the issue of consciousness. I have consciously (no pun intended) phrased the issue entirely in the first person because that is the nature of the issue. It is not a third person question. So my question is not “Who is John Doe?” although John Doe may ask this question himself.</p>
<p>When people speak of consciousness, they often slip into issues of behavioral and neurological correlates of consciousness &#8212; like whether or not an entity can be self-reflective. But these are third person, objective issues, and do not represent what philosopher and cognitive scientist David Chalmers, PhD calls the “hard question” of consciousness. Chalmers specializes in the area of philosophy of mind and philosophy of language.</p>
<p>The question of whether or not an entity is conscious is apparent only to that entity itself. The difference between neurological correlates of consciousness &#8212; that is, intelligent behavior &#8212; and the ontological reality of consciousness is the difference between objective (third person) and subjective (first person) reality. For this reason, we are unable to propose an objective consciousness detector that does not have philosophical assumptions built into it.</p>
<p><span style="color: #ffaa00;">Well, you see the problem.</span></p>
<p>I do say that we humans will come to accept that non-biological entities are conscious because ultimately they will have all the subtle cues that humans currently possess that we associate with emotional and other subjective experiences. But that’s a political and psychological prediction, not an observation that we will be able to scientifically verify. We do assume that other humans are conscious, but this is an assumption, and not something we can objectively demonstrate.</p>
<p>I will acknowledge that people seem conscious to me, but I should not be too quick to accept this impression. Perhaps I am really living in a simulation, and other people are part of the simulation. Or, perhaps it’s only my memories that exist, and the actual experience never took place. Or maybe I am only now experiencing the sensation of recalling apparent memories of having met a person, but neither the experience nor the memories really exist. Well, you see the problem.</p>
<p><span style="color: #ffaa00;">&#8212; end &#8212;</span></p>
<p>IMAGE</p>
<hr />
<p><span style="color: #ffaa00;">essay</span> | no. 2.</p>
<p><span style="color: #ffaa00;">essay title:</span> We are entering the singularity<br />
<span style="color: #ffaa00;">author:</span> by Ray Kurzweil</p>
<p><span style="color: #ffaa00;">&#8212; introduction &#8212;</span></p>
<p>My interest in the future really stems from my interest in being an inventor. I had the idea of being an inventor when I was 5 years old, and I quickly realized that you had to have a good idea of the future if you&#8217;re going to succeed. It&#8217;s like surfing: you have to catch the wave at the right time.</p>
<p>I noticed that by the time you finally get something done, the world has become a different place than it was when you started. Most inventors fail not because they can&#8217;t get something to work, but because the market&#8217;s enabling forces are not all in place at the right time.</p>
<p><span style="color: #ffaa00;">My interest in the future.</span></p>
<p>So I became a student of technology trends, and have developed mathematical models about how technology evolves in different areas: computers, electronics in general, communication, storage devices, biological technologies like genetic scanning, reverse engineering of the human brain, miniaturization (the size of technology), and the pace of paradigm shifts. This helped guide me as an entrepreneur and as a technology creator so that I could catch the wave at the right time.</p>
<p>This interest in technology trends took on a life of its own, and I began to project some of them into future periods using what I call the law of accelerating returns, which I believe underlies technological evolution. I did that in a book I wrote in the 1980s, which had a road map of what the 1990s and early 2000s would be like, and that worked out quite well. I&#8217;ve now refined these mathematical models, and have begun to really examine what the 21st century will be like.</p>
<p>It allows me to be inventive with the technologies of the 21st century, because I have a conception of what technology, communications, the size of technology, and our knowledge of the human brain will be like in 2010, 2020, or 2030. If I can come up with scenarios using those technologies, I can be inventive with the technologies of the future. I can&#8217;t actually create these technologies yet, but I can write about them.</p>
<p>One thing I&#8217;d say is that if anything the future will be more remarkable than any of us can imagine, because although any of us can only apply so much imagination, there&#8217;ll be thousands or millions of people using their imaginations to create new capabilities with these future technology powers. I&#8217;ve come to a view of the future that really doesn&#8217;t stem from a preconceived notion, but really falls out of these models, which I believe are valid both for theoretical reasons and because they also match the empirical data of the 20th century.</p>
<p><span style="color: #ffaa00;">The pace of change itself has accelerated.</span></p>
<p>One thing that observers don&#8217;t fully recognize, and that a lot of otherwise thoughtful people fail to take into consideration adequately, is the fact that the pace of change itself has accelerated. Centuries ago people didn&#8217;t think that the world was changing at all. Their grandparents had the same lives that they did, and they expected their grandchildren would do the same, and that expectation was largely fulfilled.</p>
<p>Today it&#8217;s an axiom that life is changing and that technology is affecting the nature of society. But what&#8217;s not fully understood is that the pace of change is itself accelerating, and the last 20 years are not a good guide to the next 20 years. We&#8217;re doubling the paradigm shift rate, the rate of progress, every decade. The progress of the next two decades will actually match the amount of progress we made in the whole 20th century, because we&#8217;ve been accelerating up to this point.</p>
<p>The 20th century was like 25 years of change at today&#8217;s rate of change. In the next 25 years we&#8217;ll make four times the progress you saw in the 20th century. And we&#8217;ll make 20,000 years of progress in the 21st century, which is almost a thousand times more technical change than we saw in the 20th century.</p>
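<p>As a rough illustration (my own back-of-envelope sketch, not a calculation from the essay): if the rate of progress doubles every decade, the total progress accumulated over a span of calendar years, measured in &#8220;years of progress at today&#8217;s rate,&#8221; is the integral of an exponential. This simple doubling-per-decade model reproduces the order of magnitude of the figures above, though the essay&#8217;s 20,000-year figure comes from more detailed models:</p>

```python
import math

def progress_years(calendar_years, doubling_period=10.0):
    """Total progress, in 'current-rate years', accumulated over
    `calendar_years` when the rate of progress doubles every
    `doubling_period` years (integral of 2**(t / doubling_period))."""
    k = math.log(2) / doubling_period
    return (math.exp(k * calendar_years) - 1) / k

print(round(progress_years(25)))   # next 25 years: ~67 "years" of progress
print(round(progress_years(100)))  # the 21st century: ~14,759 "years"
```

Under this assumption the 21st century delivers on the order of ten thousand years of progress at the year-2000 rate, the same ballpark as the 20,000-year claim.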
<p>Specifically, computation is growing exponentially. The one exponential trend that people are aware of is called Moore&#8217;s law. But Moore&#8217;s law itself is just one method for bringing exponential growth to computers. People are aware that we can put twice as many transistors on an integrated circuit every two years. But in fact the transistors also run twice as fast, so both capacity and speed double, which means the power quadruples every two years &#8212; a doubling of the power of computation roughly every 12 months.</p>
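<p>The compounding here can be checked in a couple of lines (this is just the arithmetic of the stated two-year figures, not a claim about actual chip data):</p>

```python
capacity_factor = 2  # transistor count doubles every two years
speed_factor = 2     # and each transistor runs twice as fast
power_factor = capacity_factor * speed_factor  # power x4 per two years

annual_doubling = power_factor ** (1 / 2)  # 4 ** 0.5 = 2.0
print(annual_doubling)  # 2.0 -> computational power doubles every ~12 months
```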
<p>What&#8217;s not fully realized is that Moore&#8217;s law was not the first but the fifth paradigm to bring exponential growth to computers. We had electro-mechanical calculators, relay based computers, vacuum tubes, and transistors. Every time one paradigm ran out of steam another took over. For a while there were shrinking vacuum tubes, and finally they couldn&#8217;t make them any smaller and still keep the vacuum, so a whole different method came along. They weren&#8217;t just tiny vacuum tubes, but transistors, which constitute a whole different approach. There&#8217;s been a lot of discussion about Moore&#8217;s law running out of steam in about 12 years because by that time the transistors will only be a few atoms in width and we won&#8217;t be able to shrink them any more. And that&#8217;s true, so that particular paradigm will run out of steam.</p>
<p><span style="color: #ffaa00;">Computers will be based on biologically inspired models.</span></p>
<p>We&#8217;ll then go to the sixth paradigm, which is massively parallel computing in three dimensions. We live in a 3 dimensional world, and our brains organize in three dimensions, so we might as well compute in three dimensions. The brain processes information using an electro-chemical method that&#8217;s ten million times slower than electronics. But it makes up for this by being three dimensional. Every intra-neural connection computes simultaneously, so you have a hundred trillion things going on at the same time. And that&#8217;s the direction we&#8217;re going to go in. Right now, chips, even though they&#8217;re very dense, are flat. Fifteen or twenty years from now computers will be massively parallel and will be based on biologically inspired models, which we will devise largely by understanding how the brain works.</p>
<p>We&#8217;re already being significantly influenced by it. It&#8217;s generally recognized, or at least accepted by a lot of observers, that we&#8217;ll have the hardware to emulate human intelligence within a brief period of time &#8212; I&#8217;d say about twenty years. A thousand dollars of computation will equal the 20 million billion calculations per second of the human brain. What&#8217;s more controversial is whether or not we will have the software. People acknowledge that we&#8217;ll have very fast computers that could in theory emulate the human brain, but argue that we don&#8217;t really know how the brain works, and that we won&#8217;t have the software, the methods, or the knowledge to create a human level of intelligence. Without this you just have an extremely fast calculator.</p>
<p><span style="color: #ffaa00;">The brain is not of infinite complexity.</span></p>
<p>But our knowledge of how the brain works is also growing exponentially. The brain is not of infinite complexity. We&#8217;re not going to achieve a total understanding through one simple breakthrough, but we&#8217;re further along in understanding the principles of operation of the human brain than most people realize.</p>
<p>The technology for scanning the human brain is growing exponentially, our ability to actually see the internal connection patterns is growing, and we&#8217;re developing more and more detailed models of biological neurons. We have intricate math models of several dozen brain regions and how they work &#8212; recreating their methodologies using conventional computation. The results of those re-engineered or re-implemented synthetic models of those brain regions match the human brain closely.</p>
<p>We&#8217;re also literally replacing sections of the brain that are degraded or don&#8217;t work any more because of disability or disease. There are neural implants for Parkinson&#8217;s disease and well-known cochlear implants for deafness. There&#8217;s a new generation of cochlear implants coming out now that provides a thousand points of frequency resolution and will allow deaf people to hear music for the first time. The Parkinson&#8217;s implant actually replaces the neurons themselves that are destroyed by that disease. So we&#8217;ve shown that it&#8217;s feasible to understand regions of the human brain, and to re-implement those regions in conventional electronic computation that will actually interact with the brain and perform those functions.</p>
<p>If you follow this work and work out the mathematics of it, it&#8217;s a conservative scenario to say that within 30 years — possibly much sooner — we will have a complete map of the human brain, we will have complete mathematical models of how each region works, and we will be able to re-implement the methods of the human brain, which are quite different than many of the methods used in contemporary artificial intelligence.</p>
<p>But these are actually similar to the methods I use in my own field — pattern recognition — which is the fundamental capability of the human brain. We can&#8217;t think fast enough to logically analyze situations very quickly, so we rely on our powers of pattern recognition. Within 30 years we&#8217;ll be able to create non-biological intelligence that&#8217;s comparable to human intelligence. Just like a biological system, we&#8217;ll have to provide it with an education, but here we can bring to bear some of the advantages of machine intelligence: machines are much faster and much more accurate. A $1,000 computer can remember billions of things accurately &#8212; we&#8217;re hard pressed to remember a handful of phone numbers.</p>
<p><span style="color: #ffaa00;">Machines can share their knowledge with other machines.</span></p>
<p>Once they learn something, machines can also share their knowledge with other machines. We don&#8217;t have quick downloading ports at the level of our intra-neuronal connection patterns and our concentrations of neurotransmitters, so we can&#8217;t just download knowledge.</p>
<p>I can&#8217;t take my knowledge of French and download it to you, but machines can. So we can educate machines through a process that can be hundreds or thousands of times faster than the comparable process in humans.</p>
<p>This process can provide a 20-year education to a human-level machine in weeks or days, and these machines can then share their knowledge. The primary implication will be enhancing our own human intelligence.</p>
<p>We&#8217;re going to be putting these machines inside our own brains. We&#8217;re starting to do that now with people who have severe medical problems and disabilities, but ultimately we&#8217;ll all be doing this. Without surgery, we&#8217;ll be able to introduce calculating machines into the blood stream.</p>
<p>They will be able to pass through the capillaries of the brain. These intelligent, blood-cell-sized nanobots will actually be able to go to the brain and interact with biological neurons. The basic feasibility of this has already been demonstrated in animals.</p>
<p>One application of sending billions of nanobots into the brain is full-immersion virtual reality. If you want to be in real reality, the nanobots sit there and do nothing, but if you want to go into virtual reality, the nanobots shut down the signals coming from your real senses, replace them with the signals you would be receiving if you were in the virtual environment, and then your brain feels as if it&#8217;s in the virtual environment. And you can go there yourself &#8212; or, more interestingly, you can go there with other people &#8212; and you can have everything from sexual and sensual encounters to business negotiations, in full-immersion virtual reality environments that incorporate all of the senses.</p>
<p><span style="color: #ffaa00;">People will beam their flow of sensory experiences.</span></p>
<p>People will beam their own flow of sensory experiences and the neurological correlates of their emotions out into the web, the way people now beam images from web cams in their living rooms and bedrooms.</p>
<p>This will enable you to plug in and actually experience what it&#8217;s like to be someone else, including their emotional reactions, a la the plot concept of the film <em>Being John Malkovich</em>. In virtual reality you don&#8217;t have to be the same person. You can be someone else, and can project yourself as a different person.</p>
<p>Most importantly, we&#8217;ll be able to enhance our biological intelligence with non-biological intelligence through intimate connections. This won&#8217;t mean just having one thin pipe between the brain and a non-biological system, but actually having non-biological intelligence in billions of different places in the brain.</p>
<p>I don&#8217;t know about you, but there are lots of books I&#8217;d like to read and websites I&#8217;d like to go to, and I find my bandwidth limiting. So instead of having a mere hundred trillion connections, we&#8217;ll have a hundred trillion times a million. We&#8217;ll be able to enhance our cognitive pattern recognition capabilities greatly, think faster, and download knowledge.</p>
<p>If you follow these trends further, you get to a point where change is happening so rapidly that there appears to be a rupture in the fabric of human history. Some people have referred to this as the &#8220;singularity.&#8221; There are many different definitions of the singularity, a term borrowed from physics, where it means an actual point of infinite density and energy that&#8217;s kind of a rupture in the fabric of space-time.</p>
<p>Here, that concept is applied by analogy to human history, where we see a point where this rate of technological progress will be so rapid that it appears to be a rupture in the fabric of human history. It&#8217;s impossible in physics to see beyond a singularity, which creates an event boundary, and some people have hypothesized that it will be impossible to characterize human life after the singularity.</p>
<p>My question is what human life will be like after the singularity, which I predict will occur somewhere right before the middle of the 21st century.</p>
<p><span style="color: #ffaa00;">The book I wrote 10 years later, <em>The Age of Spiritual Machines</em>.</span></p>
<p>A lot of the concepts we have of the nature of human life &#8212; such as longevity &#8212; suggest a limited capability as biological, thinking entities. All of these concepts are going to undergo significant change as we basically merge with our technology. It&#8217;s taken me a while to get my own mental arms around these issues. In the book I wrote in the 1980s, <em>The Age of Intelligent Machines</em>, I ended with the spectre of machines matching human intelligence somewhere between 2020 and 2050, and I basically have not changed my view on that time frame, although I left behind my view that this is a final spectre.</p>
<p>In the book I wrote ten years later, <em>The Age of Spiritual Machines</em>, I began to consider what life would be like past the point where machines could compete with us. Now I&#8217;m trying to consider what that will mean for human society.</p>
<p>One thing that we should keep in mind is that innate biological intelligence is fixed. We have 10^(26) calculations per second in the whole human race and there are ten billion human minds. 50 years from now, the biological intelligence of humanity will still be at that same order of magnitude. On the other hand, machine intelligence is growing exponentially, and today it&#8217;s a million times less than that biological figure. So although it still seems that human intelligence is dominating, which it is, the crossover point is around 2030 and non-biological intelligence will continue its exponential rise.</p>
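<p>The crossover date follows directly from the figures above; here is a minimal sketch of that arithmetic (the one-year doubling time for machine capacity is my assumption &#8212; the text only says &#8220;exponentially&#8221;):</p>

```python
import math

bio_cps = 1e26        # total biological human computation, calculations/sec
machine_cps = 1e20    # machine intelligence "today": a million times less
doubling_years = 1.0  # assumed doubling time for machine capacity

# Years until machine capacity matches the fixed biological total.
years_to_crossover = math.log2(bio_cps / machine_cps) * doubling_years
print(round(years_to_crossover, 1))  # ~19.9 years, i.e. around 2030
```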
<p><span style="color: #ffaa00;">Is knowledge tautological?</span></p>
<p>This leads some people to ask: how can we know if another species or entity is more intelligent than we are? Isn&#8217;t knowledge tautological? How can we know more than we do know? Who would know it, except us?</p>
<p>One response is not to want to be enhanced, not to have nano-bots. A lot of people say that they just want to stay a biological person.</p>
<p>But what will the singularity look like to people who want to remain biological? The answer is that they really won&#8217;t notice it, except for the fact that machine intelligence will appear to biological humanity to be their transcendent servants. It will appear that these machines are very friendly and are taking care of all of our needs.</p>
<p>But providing that service of meeting all of the material and emotional needs of biological humanity will comprise a very tiny fraction of the mental output of the non-biological component of our civilization. So there&#8217;s a lot that, in fact, biological humanity won&#8217;t actually notice.</p>
<p>There are two levels of consideration here. On the economic level, mental output will be the primary criterion. We&#8217;re already getting close to the point that the only thing that has value is information. Information has value to the extent that it really reflects knowledge, not just raw data.</p>
<p>There are a few products on this table &#8212; a clock, a camera, a tape recorder &#8212; that are physical objects, but really their value is in the information that went into their design: the design of their chips and the software that&#8217;s used to invent and manufacture them. The actual raw materials &#8212; a bunch of sand and some metals and so on &#8212; are worth a few pennies, but these products have value because of all the knowledge that went into creating them.</p>
<p>And the knowledge component of products and services is asymptoting towards 100 percent. By the time we get to 2030 it will be basically 100 percent. With a combination of nanotechnology and artificial intelligence, we&#8217;ll be able to create virtually any physical product and meet all of our material needs. When everything is software and information, it&#8217;ll be a matter of just downloading the right software, and we&#8217;re already getting pretty close to that.</p>
<p><span style="color: #ffaa00;">We will have entities that seem to be conscious.</span></p>
<p>On a spiritual level, the issue of what is consciousness is another important aspect of this, because we will have entities by 2030 that seem to be conscious, and that will claim to have feelings.</p>
<p>We have entities today, like characters in your kids&#8217; video games, that can make that claim, but they are not very convincing.</p>
<p>If you run into a character in a video game and it talks about its feelings, you know it&#8217;s just a machine simulation. You&#8217;re not convinced that it&#8217;s a real person there. This is because that entity, which is a software entity, is still a million times simpler than the human brain. In 2030, that won&#8217;t be the case.</p>
<p>Say you encounter another person in virtual reality who looks just like a human, but there&#8217;s actually no biological human behind it &#8212; it&#8217;s completely an AI projecting a human-like figure in virtual reality, or even a human-like image in real reality using android robotic technology.</p>
<p>These entities will seem human. They won&#8217;t be a million times simpler than humans. They&#8217;ll be as complex as humans. They&#8217;ll have all the subtle cues of being humans. They&#8217;ll be able to sit here and be interviewed and be just as convincing as a human, just as complex, just as interesting. And when they claim to have been angry or happy it&#8217;ll be just as convincing as when another human makes those claims.</p>
<p>At this point, it becomes a really deeply philosophical issue. Is that just a very clever simulation that&#8217;s good enough to trick you, or is it really conscious in the way that we assume other people are? In my view there&#8217;s no real way to test that scientifically. There&#8217;s no machine you can slide the entity into where a green light goes on and says okay, this entity&#8217;s conscious, but no, this one&#8217;s not. You could make a machine, but it will have philosophical assumptions built into it. Some philosophers will say that unless it&#8217;s squirting impulses through biological neurotransmitters, it&#8217;s not conscious, or that unless it&#8217;s a biological human with a biological mother and father it&#8217;s not conscious. But it becomes a matter of philosophical debate. It&#8217;s not scientifically resolvable.</p>
<p><span style="color: #ffaa00;">There&#8217;s not going to be any clear boundary.</span></p>
<p>The next big revolution that&#8217;s going to affect us right away is biological technology, because we&#8217;ve merged biological knowledge with information processing. We are in the early stages of understanding life processes and disease processes by understanding the genome, and how the genome expresses itself in protein. And we&#8217;re going to find &#8212; and this has been apparent all along &#8212; that there&#8217;s a slippery slope and no clear definition of where life begins. Both sides of the abortion debate have been afraid to get off the edges of that debate: that life starts at conception on the one hand or it starts literally at birth on the other. They don&#8217;t want to get off those edges, because they realize it&#8217;s just a completely slippery slope from one end to the other.</p>
<p>But we&#8217;re going to make it even more slippery. We&#8217;ll be able to create stem cells without ever actually going through the fertilized egg. What&#8217;s the difference between a skin cell, which has all the genes, and a fertilized egg? The only differences are some proteins in the egg and some signalling factors that we don&#8217;t fully understand, but that are basically proteins. We will get to the point where we&#8217;ll be able to take some protein mix, which is just a bunch of chemicals and clearly not a human being, and add it to a skin cell to create a fertilized egg that we can then immediately differentiate into any cell of the body. When I brush off thousands of skin cells, I will be destroying thousands of potential people. There&#8217;s not going to be any clear boundary.</p>
<p><span style="color: #ffaa00;">Science and tech find a way around the controversy.</span></p>
<p>This is another way of saying also that science and technology are going to find a way around the controversy. In the future, we&#8217;ll be able to do therapeutic cloning, which is a very important technology that completely avoids the concept of the fetus. We&#8217;ll be able to take skin cells and create, pretty directly without ever going through a fetus, all the cells we need.</p>
<p>We&#8217;re not that far away from being able to create new cells. For example, I&#8217;m 53 but with my DNA, I&#8217;ll be able to create the heart cells of a 25 year old man, and I can replace my heart with those cells without surgery just by sending them through my blood stream. They&#8217;ll take up residence in the heart, so at first I&#8217;ll have a heart that&#8217;s one percent young cells and 99 percent older ones.</p>
<p>But if I keep doing this every day, a year later, my heart is 99 percent young cells. With that kind of therapy we can ultimately replenish all the cell tissues and the organs in the body. This is not something that will happen tomorrow, but these are the kinds of revolutionary processes we&#8217;re on the verge of.</p>
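<p>A quick sanity check of this replacement arithmetic (my own sketch; the 1 percent figure is from the passage above, and daily compounding on the remaining old cells is an assumption): replacing 1 percent of the remaining old cells each day leaves only a few percent of the original cells after a year, roughly consistent with the &#8220;99 percent young&#8221; figure:</p>

```python
old_fraction = 1.0
for _ in range(365):
    old_fraction *= 0.99  # each day, 1% of the remaining old cells are replaced

print(f"{1 - old_fraction:.1%} young cells after one year")  # ~97.4%
```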
<p>If you look at human longevity — which is another one of these exponential trends — you&#8217;ll notice that in the 18th century we added a few days every year to human life expectancy. In the 19th century we added a few weeks every year, and now we&#8217;re adding over a hundred days a year through all of these developments, which are going to continue to accelerate. Many knowledgeable observers, including myself, feel that within ten years we&#8217;ll be adding more than a year every year to life expectancy.</p>
<p>As we get older, human life expectancy will move out at a faster rate than we&#8217;re actually progressing in age, so if we can hang in there, our generation is right on the edge. We have to watch our health the old-fashioned way for a while longer, so that we&#8217;re not the last generation to die prematurely. But if you look at our kids, by the time they&#8217;re 20, 30, 40 years old, these technologies will be so advanced that human life expectancy will be pushed way out.</p>
<p>There is also the more fundamental issue of whether or not ethical debates are going to stop the developments that I&#8217;m talking about. It&#8217;s all very good to have these mathematical models and these trends, but the question is if they&#8217;re going to hit a wall because people, for one reason or another &#8212; through war or ethical debates such as the stem cell issue controversy &#8212; thwart this ongoing exponential development. I strongly believe that&#8217;s not the case.</p>
<p><span style="color: #ffaa00;">You can&#8217;t stop the river of advances.</span></p>
<p>These ethical debates are like stones in a stream. The water runs around them. You haven&#8217;t seen any of these biological technologies held up for one week by any of these debates.</p>
<p>To some extent, they may have to find some other ways around some of the limitations, but there are so many developments going on.</p>
<p>There are dozens of very exciting ideas about how to use genomic information and proteomic information. Although the controversies may attach themselves to one idea here or there, there&#8217;s such a river of advances.</p>
<p>The concept of technological advance is so deeply ingrained in our society that it&#8217;s an enormous imperative. Bill Joy, activist and co-founder of Sun Microsystems, has gotten around — correctly — talking about the dangers, and I agree that the dangers are there, but you can&#8217;t stop ongoing development.</p>
<p>The kinds of scenarios I&#8217;m talking about 20 or 30 years from now are not being developed because there&#8217;s one laboratory that&#8217;s sitting there creating a human level intelligence in a machine. They&#8217;re happening because it&#8217;s the inevitable end result of thousands of little steps.</p>
<p>Each little step is conservative, not radical, and makes perfect sense. Each one is just the next generation of some company&#8217;s products. If you take thousands of those little steps — which are getting faster and faster — you end up with some remarkable changes 10, 20, or 30 years from now. You don&#8217;t see Sun Microsystems saying the future implication of these technologies is so dangerous that they&#8217;re going to stop creating more intelligent networks and more powerful computers. Sun can&#8217;t do that. No company can do that because it would be out of business. There&#8217;s enormous economic imperative.</p>
<p>There is also a tremendous moral imperative. We still have not millions but billions of people who are suffering from disease and poverty, and we have the opportunity to overcome those problems through these technological advances. You can&#8217;t tell the millions of people who are suffering from cancer that we&#8217;re really on the verge of great breakthroughs that will save millions of lives from cancer, but we&#8217;re cancelling all that because the terrorists might use that same knowledge to create a bioengineered pathogen.</p>
<p>This is a true and valid concern, but we&#8217;re not going to do that. There&#8217;s a tremendous belief in society in the benefits of continued economic and technological advance. Still, it does raise the question of the dangers of these technologies, and we can talk about that as well, because that&#8217;s also a valid concern.</p>
<hr class="dotted" />
<p><span style="color: #ffaa00;">infographic</span> | the 6 epochs of evolution<br />
<em>by Ray Kurzweil</em></p>
<p><a href="http://www.kurzweilai.net/images/writings-6-epochs-of-evolution-A1.png"><img style=' float: left; padding: 4px; margin: 0 7px 2px 0;'  loading="lazy" decoding="async" class="alignleft  wp-image-263601" title="writings - 6 epochs of evolution - A1" src="http://www.kurzweilai.net/images/writings-6-epochs-of-evolution-A1.png" alt="" width="595" height="499" srcset="https://www.writingsbyraykurzweil.com/images/writings-6-epochs-of-evolution-A1.png 1240w, https://www.writingsbyraykurzweil.com/images/writings-6-epochs-of-evolution-A1-259x217.png 259w, https://www.writingsbyraykurzweil.com/images/writings-6-epochs-of-evolution-A1-680x570.png 680w, https://www.writingsbyraykurzweil.com/images/writings-6-epochs-of-evolution-A1-140x117.png 140w" sizes="auto, (max-width: 595px) 100vw, 595px" /></a></p>
<p>&nbsp;</p>
<hr class="dotted" />
<p><span style="color: #ffaa00;">The cutting edge of evolution on our planet.</span></p>
<p>Another aspect of all of these changes is that they force us to re-evaluate our concept of what it means to be human. There is a common viewpoint that reacts against the advance of technology and its implications for humanity. The objection goes like this: we&#8217;ll have very powerful computers but we haven&#8217;t solved the software problem. And because the software&#8217;s so incredibly complex, we can&#8217;t manage it.</p>
<p>I address this objection by saying that the software required to emulate human intelligence is actually not beyond our current capability. We have to use different techniques — different self-organizing methods — that are biologically inspired. The brain is complicated but it&#8217;s not that complicated. You have to keep in mind that it is characterized by a genome of only 23 million bytes.</p>
<p>The genome is six billion bits &#8212; that&#8217;s eight hundred million bytes &#8212; and there are massive redundancies. One pretty long sequence called ALU is repeated 300 thousand times. If you use conventional data compression on the genome, you get about 23 million bytes (a small fraction of the size of Microsoft Word) &#8212; a level of complexity that we can handle. But we don&#8217;t have that information yet.</p>
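<p>The compression argument can be illustrated with a short sketch. The motif and sizes below are arbitrary stand-ins, not real genomic data: a general-purpose compressor collapses a file dominated by one repeated sequence, much as the genome is dominated by repeats such as ALU.</p>

```python
import zlib

# A toy "genome" dominated by a single repeated motif (a stand-in for ALU),
# plus some low-complexity filler. All values here are illustrative only.
repeat = "GGCCGGGCGCGGTGGCTCACGCCTGTAATCCCAGCA"
genome = (repeat * 1000) + ("ACGT" * 250)

# Conventional, general-purpose compression exploits the redundancy.
compressed = zlib.compress(genome.encode(), level=9)
ratio = len(genome) / len(compressed)

print(f"raw: {len(genome):,} bytes")
print(f"compressed: {len(compressed):,} bytes (~{ratio:.0f}x smaller)")
```

<p>The point is the same one the essay makes at scale: massive repetition means the effective information content is far smaller than the raw size.</p>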
<p>You might wonder how something with 23 million bytes can create a human brain that&#8217;s a million times more complicated than itself. That&#8217;s not hard to understand. The genome creates a process of wiring a region of the human brain involving a lot of randomness. Then, when the fetus becomes a baby and interacts with a very complicated world, there&#8217;s an evolutionary process within the brain in which a lot of the connections die out, others get reinforced, and it self-organizes to represent knowledge about the world. It&#8217;s a very clever system, and we don&#8217;t understand it yet, but we will, because it&#8217;s not a level of complexity beyond what we&#8217;re capable of engineering.</p>
<p>In my view there is something special about human beings that&#8217;s different from what we see in any of the other animals. By happenstance of evolution we were the first species to be able to create technology. Actually there were others, but we are the only one that survived in this ecological niche. We combined a rational faculty &#8212; the ability to think logically, to create abstractions, to create models of the world in our own minds &#8212; with the ability to manipulate the world. We have opposable thumbs so that we can create technology, but technology is not just tools. Other animals have used primitive tools, but the difference is actually a body of knowledge that changes and evolves from generation to generation. The knowledge that the human species has is another one of those exponential trends.</p>
<p>We use one stage of technology to create the next stage, which is why technology accelerates, why it grows in power. Today, for example, a computer designer has these tremendously powerful computer system design tools to create computers, so in a couple of days they can create a very complex system and it can all be worked out very quickly. The first computer designers had to actually draw them all out in pen on paper. Each generation of tools creates the power to create the next generation.</p>
<p>So technology itself is an exponential, evolutionary process that is a continuation of the biological evolution that created humanity in the first place. Biological evolution itself evolved in an exponential manner. Each stage created more powerful tools for the next, so when biological evolution created DNA it now had a means of keeping records of its experiments so evolution could proceed more quickly. Because of this, the Cambrian explosion only lasted a few tens of millions of years, whereas the first stage of creating DNA and primitive cells took billions of years.</p>
<p>Finally, biological evolution created a species that could manipulate its environment and had some rational faculties, and now the cutting edge of evolution actually changed from biological evolution into something carried out by one of its own creations, Homo sapiens, and is represented by technology. In the next epoch, this species that ushered in its own evolutionary process &#8212; that is, its own cultural and technological evolution, as no other species has &#8212; will combine with its own creation and will merge with its technology. At some level that&#8217;s already happening: even if most of us don&#8217;t yet have the technology inside our bodies and brains, we&#8217;re very intimate with it &#8212; it&#8217;s in our pockets. We&#8217;ve certainly expanded the power of the mind of human civilization through the power of its technology.</p>
<p>We are entering a new era. I call it &#8220;the Singularity.&#8221; It&#8217;s a merger between human intelligence and machine intelligence that is going to create something bigger than itself. It&#8217;s the cutting edge of evolution on our planet. One can make a strong case that it&#8217;s actually the cutting edge of the evolution of intelligence in general, because there&#8217;s no indication that it&#8217;s occurred anywhere else.</p>
<p><span style="color: #ffaa00;">To me, this is what civilization is all about.</span></p>
<p>To me, that is what human civilization is all about. It is part of our destiny and part of the destiny of evolution to continue to progress ever faster, and to grow the power of intelligence exponentially. To contemplate stopping that — to think human beings are fine the way they are — is a misplaced fond remembrance of what human beings used to be. What human beings are is a species that has undergone a cultural and technological evolution, and it&#8217;s the nature of evolution that it accelerates, and that its powers grow exponentially, and that&#8217;s what we&#8217;re talking about. The next stage of this will be to amplify our own intellectual powers with the results of our technology.</p>
<p>What is unique about human beings is our ability to create abstract models and to use these mental models to understand the world and do something about it. These mental models have become more and more sophisticated, and by becoming embedded in technology, they have become very elaborate and very powerful. Now we can actually understand our own minds. This ability to scale up the power of our own civilization is what&#8217;s unique about human beings.</p>
<p>Patterns are the fundamental ontological reality, because they are what persists, not anything physical. Take myself, Ray Kurzweil. What is Ray Kurzweil? Is it this stuff here? Well, this stuff changes very quickly. Some of our cells turn over in a matter of days. Even our skeleton, which you think probably lasts forever because we find skeletons that are centuries old, changes over within a year. Many of our neurons change over. But more importantly, the particles making up the cells change over even more quickly, so even if a particular cell is still there the particles are different. So I&#8217;m not the same stuff, the same collection of atoms and molecules that I was a year ago.</p>
<hr class="dotted" />
<hr class="dotted" />
<p><span style="color: #ffaa00;">The pattern persists.</span></p>
<p>But what does persist is that pattern. The pattern evolves slowly, but the pattern persists. So we&#8217;re kind of like the pattern that water makes in a stream; you put a rock in there and you&#8217;ll see a little pattern. The water is changing every few milliseconds. If you come a second later, it&#8217;s completely different water molecules, but the pattern persists.</p>
<p>Patterns are what have resonance. Ideas are patterns, technology is patterns. Even our basic existence as people is nothing but a pattern. Pattern recognition is the heart of human intelligence. 99 percent of our intelligence is our ability to recognize patterns.</p>
<p>There&#8217;s been a sea change just in the last several years in the public understanding of the acceleration of change and the potential impact of all of these technologies — computer technology, communications, biological technology — on human society.</p>
<p>There&#8217;s really been tremendous change in popular public perception in the past three years because of the onslaught of stories and news developments that document and support this vision. There are now several stories every day that are significant developments and that show the escalating power of these technologies.</p>
<p><span style="color: #ffaa00;">&#8212; end &#8212;</span></p>
<p>IMAGE</p>
<hr />
<p><span style="color: #ffaa00;">essay</span> | no. 3<strong><br />
</strong></p>
<p><span style="color: #ffaa00;">essay title:</span> Exponential technological progress in the 21 century<br />
<span style="color: #ffaa00;">author:</span> by Ray Kurzweil</p>
<p><span style="color: #ffaa00;">&#8212; introduction &#8212;</span></p>
<p>Evolution applies positive feedback in that the more capable methods resulting from one stage of evolutionary progress are used to create the next stage. Each epoch of evolution has progressed more rapidly by building on the products of the previous stage.</p>
<p>Evolution works through indirection: evolution created humans, humans created technology, humans are now working with increasingly advanced technology to create new generations of technology. As a result, the rate of progress of an evolutionary process increases exponentially over time.</p>
<p>Over time, the &#8220;order&#8221; of the information embedded in the evolutionary process &#8212; the measure of how well the information fits a purpose, which in evolution is survival &#8212; increases.</p>
<p><strong>A comment on the nature of order</strong></p>
<p>The concept of the &#8220;order&#8221; of information is important here, as it is not the same as the opposite of disorder. If disorder represents a random sequence of events, then the opposite of disorder should imply &#8220;not random.&#8221; Information is a sequence of data that is meaningful in a process, such as the DNA code of an organism, or the bits in a computer program. Noise, on the other hand, is a random sequence. Neither noise nor information is predictable. Noise is inherently unpredictable, but carries no information. Information, however, is also unpredictable. If we can predict future data from past data, then that future data stops being information. We might consider an alternating pattern (0101010&#8230;) to be orderly, but it carries no information &#8212; beyond the first couple of bits.</p>
<p>Thus orderliness does not constitute order because order requires information. However, order goes beyond mere information. A recording of radiation levels from space represents information, but if we double the size of this data file, we have increased the amount of data, but we have not achieved a deeper level of order.</p>
<p>Order is information that fits a purpose. The measure of order is the measure of how well the information fits the purpose. In the evolution of life-forms, the purpose is to survive. In an evolutionary algorithm (a computer program that simulates evolution to solve a problem) applied to, say, investing in the stock market, the purpose is to make money. Simply having more information does not necessarily result in a better fit. A superior solution for a purpose may very well involve less data.</p>
<p>The concept of &#8220;complexity&#8221; is often used to describe the nature of the information created by an evolutionary process. Complexity is a close fit to the concept of order that I am describing, but is also not sufficient. Sometimes, a deeper order &#8212; a better fit to a purpose &#8212; is achieved through simplification rather than further increases in complexity.</p>
<p>For example, a new theory that ties together apparently disparate ideas into one broader, more coherent theory reduces complexity but nonetheless may increase the &#8220;order for a purpose&#8221; that I am describing. Indeed, achieving simpler theories is a driving force in science. Evolution has shown, however, that the general trend toward greater order does generally result in greater complexity.</p>
<p>Thus improving a solution to a problem &#8212; which may increase or decrease complexity &#8212; increases order. Now that just leaves the issue of defining the problem. Indeed, the key to an evolution algorithm (and to biological and technological evolution) is exactly this: defining the problem.</p>
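<p>A minimal sketch makes the point concrete. The toy purpose below (maximizing the number of 1 bits in a binary genome) is an assumption chosen for illustration; in this framing, mutation supplies the diversity, and defining the problem is exactly choosing the fitness function.</p>

```python
import random

def evolve(fitness, genome_len=20, pop=30, generations=60, seed=1):
    """Minimal evolutionary algorithm: mutate, select, repeat."""
    rng = random.Random(seed)
    # Start from random "genomes" -- the chaos that supplies diversity.
    population = [[rng.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop)]
    for _ in range(generations):
        # Each genome produces one child with a single random point mutation.
        offspring = []
        for g in population:
            child = g[:]
            child[rng.randrange(genome_len)] ^= 1
            offspring.append(child)
        # Selection prunes the options: keep the best, parents included,
        # so order (fit to the purpose) never decreases.
        population = sorted(population + offspring, key=fitness,
                            reverse=True)[:pop]
    return population[0]

# The "purpose" here is simply counting 1 bits, so sum() is the fitness.
best = evolve(fitness=sum)
print(sum(best))  # climbs toward genome_len as order accumulates
```

<p>Swapping in a different fitness function &#8212; say, the return of a simulated stock-trading rule &#8212; changes the purpose without changing the mechanism, which is the essay&#8217;s point about defining the problem.</p>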
<p><strong>Innovations created by evolution encourage and enable faster evolution</strong></p>
<p>We may note that this aspect of &#8220;the law of accelerating returns&#8221; appears to contradict the second law of thermodynamics, which implies that entropy &#8212; randomness in a closed system &#8212; cannot decrease, and, therefore, generally increases. However, the law of accelerating returns pertains to evolution, and evolution is not a closed system. It takes place amid great chaos, and indeed depends on the disorder in its midst, from which it draws its options for diversity.</p>
<p>And from these options, an evolutionary process continually prunes its choices to create ever greater order. Even a crisis, such as the periodic large asteroids that have crashed into the Earth, although increasing chaos temporarily, ends up increasing &#8212; deepening &#8212; the order created by an evolutionary process.</p>
<p><strong>The law of accelerating returns </strong></p>
<p style="padding-left: 30px;">• A primary reason that evolution &#8212; of life-forms or of technology &#8212; speeds up is that it builds on its own increasing order, with ever more sophisticated means of recording and manipulating information. Innovations created by evolution encourage and enable faster evolution. In the case of the evolution of life forms, the most notable early example is DNA, which provides a recorded and protected transcription of life’s design from which to launch further experiments.</p>
<p style="padding-left: 30px;">In the case of the evolution of technology, ever improving human methods of recording information have fostered further technology. The first computers were designed on paper and assembled by hand. Today, they are designed on computer workstations with the computers themselves working out many details of the next generation’s design, and are then produced in fully-automated factories with human guidance but limited direct intervention.</p>
<p style="padding-left: 30px;">• The evolutionary process of technology seeks to improve capabilities in an exponential fashion. Innovators seek to improve things by multiples. Innovation is multiplicative, not additive. Technology, like any evolutionary process, builds on itself. This aspect will continue to accelerate when the technology itself takes full control of its own progression.</p>
<p style="padding-left: 30px;">• We can thus conclude the following with regard to the evolution of life-forms, and of technology: the law of accelerating returns as applied to an evolutionary process: An evolutionary process is not a closed system; therefore, evolution draws upon the chaos in the larger system in which it takes place for its options for diversity; and evolution builds on its own increasing order. Therefore, in an evolutionary process, order increases exponentially.</p>
<p style="padding-left: 30px;">• A correlate of the above observation is that the &#8220;returns&#8221; of an evolutionary process &#8212; that is: the speed, cost-effectiveness, or overall &#8220;power&#8221; of a process &#8212;  increase exponentially over time. We see this in Moore’s law, in which each new generation of computer chip (now spaced about two years apart) provides twice as many components, each of which operates substantially faster  &#8212; because of the smaller distances required for the electrons to travel, and other innovations. This exponential growth in the power and price-performance of information-based technologies &#8212; roughly doubling every year &#8212; is not limited to computers, but is true for a wide range of technologies, measured many different ways.</p>
<p style="padding-left: 30px;">• In another positive feedback loop, as a particular evolutionary process (e.g., computation) becomes more effective (e.g., cost effective), greater resources are deployed towards the further progress of that process. This results in a second level of exponential growth (i.e., the rate of exponential growth itself grows exponentially). For example, it took three years to double the price-performance of computation at the beginning of the twentieth century, two years around 1950, and is now doubling about once a year. Not only is each chip doubling in power each year for the same unit cost, but the number of chips being manufactured is growing exponentially.</p>
<p style="padding-left: 30px;">• Biological evolution is one such evolutionary process. Indeed it is the quintessential evolutionary process. It took place in a completely open system (as opposed to the artificial constraints in an evolutionary algorithm). Thus many levels of the system evolved at the same time.</p>
<p style="padding-left: 30px;">• Technological evolution is another such evolutionary process. Indeed, the emergence of the first technology-creating species resulted in the new evolutionary process of technology. Therefore, technological evolution is an outgrowth of &#8212; and a continuation of &#8212; biological evolution. Early stages of humanoid created technology were barely faster than the biological evolution that created our species. Homo sapiens evolved in a few hundred thousand years. Early stages of technology &#8212; the wheel, fire, stone tools &#8212; took tens of thousands of years to evolve and be widely deployed. A thousand years ago, a paradigm shift such as the printing press, took on the order of a century to be widely deployed. Today, major paradigm shifts, such as cell phones and the world wide web were widely adopted in only a few years time.</p>
<p style="padding-left: 30px;">• A specific paradigm (a method or approach to solving a problem, e.g., shrinking transistors on an integrated circuit as an approach to making more powerful computers) provides exponential growth until the method exhausts its potential. When this happens, a paradigm shift (a fundamental change in the approach) occurs, which enables exponential growth to continue.</p>
<p style="padding-left: 30px;">• Each paradigm follows an &#8220;S curve,&#8221; which consists of slow growth (the early phase of exponential growth), followed by rapid growth (the late, explosive phase of exponential growth), followed by a leveling off as the particular paradigm matures.</p>
<p style="padding-left: 30px;">• During this third or maturing phase in the life cycle of a paradigm, pressure builds for the next paradigm shift, and research dollars are invested to create the next paradigm. We can see this in the enormous investments being made today in the next computing paradigm &#8212; three dimensional molecular computing &#8212; despite the fact that we still have at least a decade left for the paradigm of shrinking transistors on a flat integrated circuit using photo-lithography  &#8212; Moore’s law.</p>
<p style="padding-left: 30px;">Generally, by the time a paradigm approaches its asymptote (limit) in price|performance, the next technical paradigm is already working in niche applications. For example, engineers were shrinking vacuum tubes in the 1950s to provide greater price|performance for computers, and reached a point where it was no longer feasible to shrink tubes and maintain a vacuum. At this point, around 1960, transistors had already achieved a strong niche market in portable radios.</p>
<p style="padding-left: 30px;">• When a paradigm shift occurs for a particular type of technology, the process begins a new S curve.</p>
<p style="padding-left: 30px;">• Thus the acceleration of the overall evolutionary process proceeds as a sequence of S curves,&#8221; and the overall exponential growth consists of this cascade of S curves.</p>
<p style="padding-left: 30px;">• The resources underlying the exponential growth of an evolutionary process are relatively unbounded.</p>
<p style="padding-left: 30px;">• One resource is the (ever growing) order of the evolutionary process itself. Each stage of evolution provides more powerful tools for the next. In biological evolution, the advent of DNA allowed more powerful and faster evolutionary &#8220;experiments.&#8221; Later, setting the &#8220;designs&#8221; of animal body plans during the Cambrian explosion allowed rapid evolutionary development of other body organs, such as the brain. Or to take a more recent example, the advent of computer-assisted design tools allows rapid development of the next generation of computers.</p>
<p style="padding-left: 30px;">• The other required resource is the &#8220;chaos&#8221; of the environment in which the evolutionary process takes place and which provides the options for further diversity. In biological evolution, diversity enters the process in the form of mutations and ever- changing environmental conditions. In technological evolution, human ingenuity combined with ever-changing market conditions keep the process of innovation going.</p>
<p style="padding-left: 30px;">• If we apply these principles at the highest level of evolution on Earth, the first step, the creation of cells, introduced the paradigm of biology. The subsequent emergence of DNA provided a digital method to record the results of evolutionary experiments. Then, the evolution of a species that combined rational thought with an opposable appendage (the thumb) caused a fundamental paradigm shift from biology to technology. The upcoming primary paradigm shift will be from biological thinking to a hybrid combining biological and nonbiological thinking. This hybrid will include &#8220;biologically inspired&#8221; processes resulting from the reverse engineering of biological brains.</p>
<p style="padding-left: 30px;">• If we examine the timing of these steps, we see that the process has continuously accelerated. The evolution of life forms required billions of years for the first steps (primitive cells); later on progress accelerated. During the Cambrian explosion, major paradigm shifts took only tens of millions of years. Later on, Humanoids developed over a period of millions of years, and Homo sapiens over a period of only hundreds of thousands of years.</p>
<p style="padding-left: 30px;">• With the advent of a technology-creating species, the exponential pace became too fast for evolution through DNA guided protein synthesis and moved on to human created technology. Technology goes beyond mere tool making; it is a process of creating ever more powerful technology using the tools from the previous round of innovation, and is, thereby, an evolutionary process.</p>
<p style="padding-left: 30px;">As I noted, the first technological took tens of thousands of years. For people living in this era, there was little noticeable technological change in even a thousand years. By 1000 AD, progress was much faster and a paradigm shift required only a century or two. In the nineteenth century, we saw more technological change than in the nine centuries preceding it. Then in the first twenty years of the twentieth century, we saw more advancement than in all of the nineteenth century. Now, paradigm shifts occur in only a few years time.</p>
<p style="padding-left: 30px;">• The paradigm shift rate &#8212; the overall rate of technical progress &#8212; is currently doubling (approximately) every decade. That is, paradigm shift times are halving every decade &#8212; and the rate of acceleration is itself growing exponentially. So, the technological progress in the 21 century will be equivalent to what would require (in the linear view) on the order of 200 centuries. In contrast, the 20 century saw only about 20 years of progress (again, at today’s rate of progress) since we have been speeding up to current rates. So the 21 century will see about a thousand times greater technological change than its predecessor.</p>
<hr class="dotted" />
<p><span style="color: #ffaa00;">special supplement</span> | no. 4<strong><br />
</strong></p>
<p><strong>Math from the law of accelerating returns</strong><br />
<em>by Ray Kurzweil</em></p>
<p><a href="http://www.kurzweilai.net/images/math-Ray-Kurzweil-A1.png"><img style=' display: block; margin-right: auto; margin-left: auto;'  loading="lazy" decoding="async" class="aligncenter  wp-image-263108" title="math - Ray Kurzweil - A1" src="http://www.kurzweilai.net/images/math-Ray-Kurzweil-A1.png" alt="" width="591" height="772" srcset="https://www.writingsbyraykurzweil.com/images/math-Ray-Kurzweil-A1.png 765w, https://www.writingsbyraykurzweil.com/images/math-Ray-Kurzweil-A1-259x339.png 259w, https://www.writingsbyraykurzweil.com/images/math-Ray-Kurzweil-A1-680x889.png 680w, https://www.writingsbyraykurzweil.com/images/math-Ray-Kurzweil-A1-140x183.png 140w" sizes="auto, (max-width: 591px) 100vw, 591px" /></a></p>
<p>&#8212; end &#8212;</p>
]]></content:encoded>
			<wfw:commentRss>https://www.writingsbyraykurzweil.com/essays-celebrating-15-year-anniversary-of-the-book-the-singularity-is-near/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>My view on the chatbot claiming to pass the Turing test.</title>
		<link>https://www.writingsbyraykurzweil.com/my-view-on-the-chatbot-claiming-to-pass-the-turing-test</link>
		<comments>https://www.writingsbyraykurzweil.com/my-view-on-the-chatbot-claiming-to-pass-the-turing-test#respond</comments>
		<pubDate>Sun, 01 Jun 2014 05:00:21 +0000</pubDate>
								<dc:creator></dc:creator>
		
								<media:thumbnail url="https://www.writingsbyraykurzweil.com/images/Eugene-Goostman-chatbot-screenshot-140x53.png" width="140" height="53" />
		
				<category><![CDATA[essays]]></category>

		<guid isPermaLink="false">http://www.kurzweilai.net/?p=229565</guid>
		<description><![CDATA[&#160; essay title: My view on the Eugene Goostman chatbot claiming to pass the Turing test. author: by Ray Kurzweil date: June 2014 On June 8, 2014, The University of Reading announced that a computer program “has passed the Turing test for the first time.” University of Reading professor Kevin Warwick, PhD, described it this [&#8230;]]]></description>
			<content:encoded><![CDATA[<div id="attachment_229570" style="width: 621px;  border: 1px solid #dddddd; background-color: #f3f3f3; padding: 4px; margin: 10px; text-align:center; display: block; margin-right: auto; margin-left: auto;" class="wp-caption aligncenter"><a href="http://www.kurzweilai.net/images/Eugene-Goostman-chatbot-screenshot.png"><img loading="lazy" decoding="async" aria-describedby="caption-attachment-229570" class="wp-image-229570 " title="Eugene Goostman chatbot screenshot" src="http://www.kurzweilai.net/images/Eugene-Goostman-chatbot-screenshot.png" alt="" width="621" height="235" srcset="https://www.writingsbyraykurzweil.com/images/Eugene-Goostman-chatbot-screenshot.png 690w, https://www.writingsbyraykurzweil.com/images/Eugene-Goostman-chatbot-screenshot-259x98.png 259w, https://www.writingsbyraykurzweil.com/images/Eugene-Goostman-chatbot-screenshot-680x257.png 680w, https://www.writingsbyraykurzweil.com/images/Eugene-Goostman-chatbot-screenshot-140x53.png 140w" sizes="auto, (max-width: 621px) 100vw, 621px" /></a><p style=' padding: 0 4px 5px; margin: 0;'  id="caption-attachment-229570" class="wp-caption-text">Eugene Goostman chatbot. (credit: Vladimir Veselov and Eugene Demchenko)</p></div>
<p>&nbsp;</p>
<p>essay title: My view on the Eugene Goostman chatbot claiming to pass the Turing test.<br />
author: by Ray Kurzweil<br />
date: June 2014</p>
<p>On June 8, 2014, The University of Reading announced that a computer program “has passed the Turing test for the first time.”</p>
<p>University of Reading professor Kevin Warwick, PhD, described it this way:</p>
<blockquote><p>&#8220;Some will claim that the test has already been passed. The words &#8216;Turing test&#8217; have been applied to similar competitions around the world. However, this event involved more simultaneous comparison tests than ever before, was independently verified and, crucially, the conversations were unrestricted. A true Turing test does not set the questions or topics prior to the conversations. We are therefore proud to declare that Alan Turing&#8217;s test was passed for the first time on Saturday.” <em>&#8212; Kevin Warwick, PhD</em></p></blockquote>
<p>I have had a long-term wager with Mitch Kapor in which I predicted that a computer program would pass the Turing test by 2029 and he predicted that this would not happen, see links below.</p>
<p>This was the first long-term wager on the “Long Now” website. The bet called for $20,000 to be donated from us to the charity of the winner’s choice.</p>
<p>As a result, messages have been streaming in from around the world congratulating me for having won the bet.</p>
<p>However, I think this is premature. I am disappointed that Professor Warwick, with whom I agree on many things, would make this statement. There are several problems that I describe below, including a transcript of a conversation that I had with Eugene Goostman, the chatbot in question.</p>
<p>In my 2004 book <em>The Singularity Is Near</em>, I anticipated that there would be premature announcements of this kind:</p>
<blockquote><p><strong><em>The Singularity Is Near</em>, page 295</strong> | Turing was carefully imprecise in setting the rules for his test, and significant literature has been devoted to the subtleties of establishing the exact procedures for determining how to assess when the Turing test has been passed. In 2002 I negotiated the rules for a Turing test wager with Mitch Kapor on the Long Now website.</p>
<p>The question underlying our twenty-thousand-dollar bet, the proceeds of which go to charity of the winner&#8217;s choice, was, &#8220;Will the Turing test be passed by a machine by 2029?&#8221; I said yes, and Kapor said no. It took us months of dialogue to arrive at the intricate rules to implement our wager.</p>
<p>Simply defining &#8220;machine&#8221; and &#8220;human,&#8221; for example, was not a straightforward matter. Is the human judge allowed to have any nonbiological thinking processes in his or her brain? Conversely, can the machine have any biological aspects?</p>
<p>Because the definition of the Turing test will vary from person to person, Turing test capable machines will not arrive on a single day, and there will be a period during which we will hear claims that machines have passed the threshold. Invariably, these early claims will be debunked by knowledgeable observers, probably including myself. By the time there is a broad consensus that the Turing test has been passed, the actual threshold will have long since been achieved.</p></blockquote>
<p>Professor Warwick claims that the test was “unrestricted.” However, having the chatbot claim to be a 13-year-old child, and one for whom English is not a first language, is effectively a restriction. Moreover, the interactions were reportedly limited to five minutes each. There is a high likelihood of fooling naïve judges in such a brief period of time.</p>
<p>I chatted with the chatbot Eugene Goostman, and was not impressed. Eugene does not keep track of the conversation, repeats himself word for word, and often responds with typical chatbot non sequiturs. Here is the transcript.</p>
<hr />
<p><strong>Ray Kurzweil conversation with chatbot Eugene Goostman</strong></p>
<p><strong>Question:</strong> How old are you?<br />
<strong>Eugene:</strong> I’m a little boy. 13 years old.</p>
<p><strong>Question:</strong> How much is two plus two?<br />
<strong>Eugene:</strong> Now I&#8217;m sure that you are one of those crappy robots from the Great Robots Cabal! And I know your password: 2 + 2=4.0! Now divide by zero and die!!!!</p>
<p><strong>Question:</strong> If I have two marbles in a bowl and I add two more how many marbles are in the bowl now?<br />
<strong>Eugene:</strong> Not too many. But I can&#8217;t tell you the exact number, I forgot it. If I&#8217;m not mistaken &#8211; you still didn&#8217;t tell me where you live. OR it&#8217;s a secret?:-)</p>
<p><strong>Question:</strong> I live in the capital of the United States. Do you know where that is?<br />
<strong>Eugene:</strong> Every nerd knows that the capital of The United States is Washington.</p>
<p><strong>Question:</strong> I live in the capital of the country that put a man on the moon.<br />
<strong>Eugene:</strong> Well, if you want it, the capital of The United States is Washington.</p>
<p><strong>Question:</strong> I live in the capital of the country that built the great wall.<br />
<strong>Eugene:</strong> Tell me more about Capital. I like to learn about different places!</p>
<p><strong>Question:</strong> How old were you when you started to wear glasses?<br />
<strong>Eugene:</strong> No I was not! How can you think so?! If I&#8217;m not mistaken &#8211; you still didn&#8217;t tell me where you live. OR it&#8217;s a secret?:-)</p>
<p><strong>Question:</strong> I did tell you where I live. I told you several times. Do you remember?<br />
<strong>Eugene:</strong> Oh! Poor confused cyborg you are! Those malicious robots erased your memory! You don&#8217;t remember your dwelling place &#8211; do you remember at least your serial number?</p>
<p><strong>Question:</strong> I remember. You are the one who did not remember.<br />
<strong>Eugene:</strong> Am I really the one who did not remember? But well, no time for arguing now! Could you enlighten me with any other hidden knowledge about myself? If I&#8217;m not mistaken &#8212; you still didn&#8217;t tell me where you live. OR it&#8217;s a secret?</p>
<p><strong>Question:</strong> If I have three boxes with two marbles in each box how many marbles do I have?<br />
<strong>Eugene:</strong> Not too many. But I can&#8217;t tell you the exact number, I forgot it. Some another topic?</p>
<hr />
<p>This is fairly typical of the state of the art in chatbots, though I’ve seen better. We have a live chatbot named Ramona on the KurzweilAI website. Ramona can be amusing, but we don’t claim that she passes the Turing test.</p>
<blockquote><p><strong>Ramona live chatbot on KurzweilAI website</strong> | “<a href="http://www.kurzweilai.net/Ramona4.2/ramona.html" target="_blank" rel="noopener">Chat with Ramona 4.2</a>”</p></blockquote>
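<p>The failure modes on display in the transcript above (verbatim repetition, lost context, non sequiturs) follow directly from the stateless, pattern-matching design used by most chatbots of this era. Here is a minimal sketch of that design; the rules are hypothetical, invented for illustration, and are not Goostman’s or Ramona’s actual code:</p>

```python
import re

# A minimal, stateless pattern-matching chatbot in the style of
# ELIZA-era programs. The rules below are hypothetical, invented for
# illustration. Because each reply depends only on the current input,
# the bot repeats canned lines word for word, cannot track earlier
# turns, and falls back to a non sequitur when nothing matches.
RULES = [
    (re.compile(r"\bhow old\b", re.I), "I'm a little boy. 13 years old."),
    (re.compile(r"\bhow (many|much)\b", re.I),
     "Not too many. But I can't tell you the exact number, I forgot it."),
    (re.compile(r"\bcapital\b", re.I),
     "Every nerd knows that the capital of The United States is Washington."),
]
FALLBACK = "If I'm not mistaken - you still didn't tell me where you live."

def reply(utterance: str) -> str:
    """Answer from the current utterance alone; no conversation state."""
    for pattern, canned in RULES:
        if pattern.search(utterance):
            return canned
    return FALLBACK  # non sequitur when nothing matches

# Different counting questions get the same verbatim answer, and
# nothing said earlier in the conversation is remembered:
print(reply("How many marbles are in the bowl?"))
print(reply("How much is two plus two?"))  # same canned line again
```

<p>A program of this kind could only stop repeating itself by carrying dialogue state between turns (for example, a record of facts the user has already stated), which is precisely what the transcript shows Eugene does not do.</p>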
<p>A large part of the problem is that in his famous 1950 paper, Alan Turing did not specify the rules. Here are the rules that Mitch Kapor and I painstakingly devised together, along with our essays on why we think each of us will win the wager.</p>
<blockquote><p><strong>The rules I devised with Mitch Kapor</strong> | <a href="http://www.kurzweilai.net/a-wager-on-the-turing-test-the-rules" target="_blank" rel="noopener">&#8220;A Wager on the Turing test: the rules&#8221;</a></p>
<p><em>An explanation of rules behind the Turing test, used to determine the winner of a long bet between Ray Kurzweil and Mitch Kapor over whether artificial intelligence will be achieved by 2029.</em></p>
<p><strong>Essay by Ray Kurzweil</strong> | <a href="http://www.kurzweilai.net/a-wager-on-the-turing-test-why-i-think-i-will-win" target="_blank" rel="noopener">&#8220;A Wager on the Turing test: Why I think I will win&#8221;</a></p>
<p><em>Will Ray Kurzweil’s predictions come true? He’s putting his money on it. Here’s why he thinks he will win a bet on the future of artificial intelligence. The wager: an artifical intelligence that passes the Turing test by 2029.</em></p>
<p><strong>Essay by Mitch Kapor</strong> | <a href="http://www.kurzweilai.net/why-i-think-i-will-win" target="_blank" rel="noopener">&#8220;Why I think I will win&#8221;</a></p>
<p><em>Will a computer pass the Turing Test (convincingly impersonate a human) by 2029? Mitchell Kapor has bet Ray Kurzweil that a computer can’t because it lacks understanding of subtle human experiences and emotions.</em></p>
<p><strong>Essay by Ray Kurzweil</strong> | <a href="http://www.kurzweilai.net/response-to-mitchell-kapor-s-why-i-think-i-will-win" target="_blank" rel="noopener">&#8220;Response to Mitchell Kapor’s essay titled &#8216;Why I think I will win&#8217;”</a></p>
<p><em>Ray Kurzweil responds to Mitch Kapor’s arguments against the possibility that an AI will succeed, in this final counterpoint on the bet: an artificial intelligence will pass a Turing Test by 2029.</em></p></blockquote>
<p>Apparently, we have now entered the era of premature announcements of a computer having passed Turing’s eponymous test. I continue to believe that with the right rules, this test is the right assessment of human-level intelligence in a machine.</p>
<p>In my 1989 book <em>The Age of Intelligent Machines</em>, I predicted that the milestone of a computer passing the Turing test would occur in the first half of the 21st century. I specified the 2029 date in my 1999 book <em>The Age of Spiritual Machines</em>. After that book was published, we had a conference at Stanford University and the consensus of AI experts at that time was that it would happen in hundreds of years, if ever.</p>
<p>In 2006 we had a conference called “AI at 50” at Dartmouth College, celebrating the 50th anniversary of the 1956 Dartmouth conference that gave artificial intelligence its name. We had instant polling devices, and the consensus at that time, among AI experts, was 25 to 50 years. Today, my prediction appears to be the median view. So, I am gratified that a growing group of people now think that I am being too conservative.</p>
<hr />
<p><strong>related reading:</strong><br />
news | <a href="http://www.kurzweilai.net/chatbot-eugene-goostman-passes-turing-test-warwick-claims" target="_blank" rel="noopener">&#8220;Chatbot Eugene Goostman passes Turing test, Warwick claims&#8221;</a></p>
<p>University of Reading | <a href="http://www.reading.ac.uk/news-and-events/releases/PR583836.aspx" target="_blank" rel="noopener">&#8220;Turing test success marks milestone in computing history&#8221;</a><br />
Dartmouth College | <a href="http://www.dartmouth.edu/~ai50/homepage.html" target="_blank" rel="noopener">Dartmouth Artificial Intelligence Conference: The Next Fifty Years &#8212; AI at 50</a></p>
<p>Eugene Goostman chatbot | <a href="http://default-environment-sdqm3mrmp4.elasticbeanstalk.com/" target="_blank" rel="noopener">main</a><br />
Eugene Goostman chatbot | <a href="https://play.google.com/store/apps/details?id=com.theworld.eugenegootsman" target="_blank" rel="noopener">Google Play app</a></p>
<p>Wikipedia | <a href="http://en.wikipedia.org/wiki/Turing_test" target="_blank" rel="noopener">Turing test</a><br />
Wikipedia | <a href="http://en.wikipedia.org/wiki/Eugene_Goostman" target="_blank" rel="noopener">Eugene Goostman</a><br />
Wikipedia | <a href="http://en.wikipedia.org/wiki/Alan_Turing" target="_blank" rel="noopener">Alan Turing</a><br />
Wikipedia | <a href="http://en.wikipedia.org/wiki/Mitch_Kapor" target="_blank" rel="noopener">Mitch Kapor</a></p>
<p><em>The New Yorker</em> | <a href="http://www.newyorker.com/tech/elements/why-cant-my-computer-understand-me" target="_blank" rel="noopener">&#8220;Why can&#8217;t my computer understand me?&#8221;</a> by Gary Marcus, PhD<br />
<em>The New Yorker</em> | <a href="http://www.newyorker.com/tech/elements/what-comes-after-the-turing-test" target="_blank" rel="noopener">&#8220;What comes after the Turing Test&#8221;</a> by Gary Marcus, PhD</p>
<blockquote><p><em>excerpt</em> | Allow me to propose a Turing Test for the twenty first century: build a computer program that can watch any arbitrary TV program or YouTube video and answer questions about its content &#8212; &#8220;Why did Russia invade Crimea?” or “Why did Walter White consider taking a hit out on Jessie?” Chatterbots like Goostman can hold a short conversation about TV, but only by bluffing. When asked what <em>Cheers</em> was about, it responded, “How should I know, I haven’t watched the show.” But no existing program &#8212; not Watson, Goostman, nor Siri &#8212; can currently come close to doing what any teenager can.</p></blockquote>
<p>event | <a href="http://www.aaai.org/Conferences/AAAI/aaai15.php" target="_blank" rel="noopener">AAAI 2015</a><br />
event | AAAI 2015: <a href="http://www.math.unipd.it/~frossi/BeyondTuring2015/" target="_blank" rel="noopener">Beyond the Turing Test Workshop</a></p>
<p>AAAI 2015: Beyond the Turing Test Workshop | This workshop is modeled on a set of early meetings that shaped the annual RoboCup competitions. Our goal is to craft a replacement, an annual or bi-annual Turing Championship, that might consist of 3-5 different challenging tasks, with bragging rights given to the first programs to achieve human-level performance in each task.</p>
<p>We envision the definition of at least two events. The first, recently sponsored by Nuance, will be the Winograd Schema Challenge, proposed by Hector Levesque, which tests the ability of machines to resolve linguistic antecedents in contexts in which common sense knowledge is critical.</p>
<p>The second, suggested by workshop co-chair Gary Marcus, PhD, will focus on the comprehension of novel materials, such as videos, texts, photos, and podcasts. Marcus suggested programs be asked to watch any arbitrary TV program or YouTube video and answer questions about its content &#8212; &#8220;Why did Russia invade Crimea?&#8221; Several leading researchers, including Guruduth Banavar, Ned Block, Ernest Davis, Oren Etzioni, Ken Forbus, Hiroaki Kitano, Danica Kragic, Leora Morgenstern, Charles Ortiz, Stuart Shieber, Moshe Vardi, and Patrick Winston have agreed to serve on the advisory board of this initiative.</p>
<p><a href="http://www.kurzweilai.net/images/AAAI-2015-Beyond-the-Turing-Test-logo.png"><img style=' display: block; margin-right: auto; margin-left: auto;'  loading="lazy" decoding="async" class="aligncenter wp-image-236485" title="AAAI 2015 - Beyond the Turing Test - logo" src="http://www.kurzweilai.net/images/AAAI-2015-Beyond-the-Turing-Test-logo.png" alt="" width="446" height="98" /></a></p>
<p>Long Now Foundation | <a href="http://longbets.org/1/" target="_blank" rel="noopener">&#8220;Long Bet: By 2029 no computer or machine intelligence will have passed the Turing Test&#8221;</a><br />
Long Now Foundation | <a href="http://longnow.org/" target="_blank" rel="noopener">main</a></p>
<p>Mitchell Kapor | <a href="http://www.kapor.com/" target="_blank" rel="noopener">main</a></p>
<p><a href="http://www.kurzweilai.net/images/The-Long-Now-Foundation-logo.png"><img style=' float: left; padding: 4px; margin: 0 7px 2px 0;'  loading="lazy" decoding="async" class="alignleft wp-image-229665" title="The Long Now Foundation logo" src="http://www.kurzweilai.net/images/The-Long-Now-Foundation-logo.png" alt="" width="198" height="128" /></a><a href="http://www.kurzweilai.net/images/University-of-Reading-logo.png"><img style=' float: left; padding: 4px; margin: 0 7px 2px 0;'  loading="lazy" decoding="async" class="alignleft wp-image-229669" title="University of Reading logo" src="http://www.kurzweilai.net/images/University-of-Reading-logo.png" alt="" width="181" height="72" /></a></p>
<hr />
<p><strong>related viewing from Newsy:</strong></p>
<p><object width="640" height="360" classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=6,0,40,0"><param name="allowFullScreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="src" value="//www.youtube.com/v/7RxucKNEgA0?version=3&amp;hl=en_US&amp;rel=0" /><param name="allowfullscreen" value="true" /><embed width="640" height="360" type="application/x-shockwave-flash" src="//www.youtube.com/v/7RxucKNEgA0?version=3&amp;hl=en_US&amp;rel=0" allowscriptaccess="always" allowfullscreen="allowfullscreen" /></object></p>
]]></content:encoded>
			<wfw:commentRss>https://www.writingsbyraykurzweil.com/my-view-on-the-chatbot-claiming-to-pass-the-turing-test/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Space Wars: The First Six Hours of World War III</title>
		<link>https://www.writingsbyraykurzweil.com/space-wars-the-first-six-hours-of-world-war-iii</link>
		<comments>https://www.writingsbyraykurzweil.com/space-wars-the-first-six-hours-of-world-war-iii#respond</comments>
		<pubDate>Tue, 17 Apr 2007 02:59:32 +0000</pubDate>
								<dc:creator>William B. Scott, Amara D. Angelica</dc:creator>
		
		
				<category><![CDATA[essays]]></category>

		<guid isPermaLink="false">http://www.kurzweilai.net/space-wars-the-first-six-hours-of-world-war-iii</guid>
		<description><![CDATA[Space Wars by William B. Scott, Michael Coumatos, and William Birnes, Forge Books (April 17, 2007) describes how the first hours of World War III might play out in the year 2010. While fiction, it's based on real-world military scenarios and technologies, dramatically highlighting the West's vulnerability to destruction of its space-based commercial and military communications infrastructure.]]></description>
			<content:encoded><![CDATA[<p><i>In 2010, advanced space weapons fall into the hands of radical Islamic terrorists, who launch them against the West&#8217;s reconnaissance, weather, and communications satellites. Meanwhile, inside U.S. Strategic Command, top military commanders, space-company executives, and U.S. intelligence experts are conducting a &#8220;DEADSATS II&#8221; wargame, exploring how the loss of critical satellites could lead to nuclear war. The players don&#8217;t know that the war they are gaming has already begun in space and that the Pentagon is about to find out that data from the GPS satellite system is no longer reliable, making accurate military operations impossible worldwide&#8230;.</i></p>
<p><i>What makes </i>Space Wars<i> especially credible—and a fascinating and informative read—is the outstanding technical and military expertise of two of the authors. Michael Coumatos is a former U.S. Navy test pilot, ship&#8217;s captain and commodore, US Space Command director of war gaming, and government counterterrorism advisor. </i><span id="more-80965"></span></p>
<p><i>William Scott recently retired as Rocky Mountain bureau chief for Aviation Week and Space Technology magazine, a Flight Test Engineer graduate of the U.S. Air Force Test Pilot School, and an electronics engineering officer at the National Security Agency. I asked him for a reality check. &#8211; Amara D. Angelica</i></p>
<p><i><b>How close are the scenarios and wargaming descriptions in </b></i><b>Space Wars</b><i><b> to the real world? </b></i></p>
<p>In my opinion, the <i>Space Wars</i> scenarios are very realistic, based on my years of reporting on military space issues. The vulnerability of U.S. satellites—commercial, civil and military—has concerned milspace professionals and leaders for many years. As one Cincspace told me almost 10 years ago (paraphrased), &#8220;I have nightmares about getting that call from the president, saying: &#8216;What&#8217;s killing our satellites, who or what&#8217;s responsible and what are you doing about it?&#8217; I sure don&#8217;t want my answer to be: &#8216;I don&#8217;t know, I don&#8217;t know and I don&#8217;t know.'&#8221; In other words, that four-star Cincspace (we no longer have a &#8220;Commander-in-chief&#8221; of space, so that term&#8217;s out of date) and his U.S. &#8220;space warriors&#8221; are in dire need of national policies, doctrines, realistic strategies and tactics, and more tools to deal with myriad threats to our space infrastructure.</p>
<p>Still, progress IS being made. Sensors that will help engineers and space operators quickly determine whether an anomaly is caused by cosmic rays or somebody lasing or jamming a satellite ARE being built into new national security spacecraft. However, those sensors are still not being installed routinely on commercial satellites—even though the Defense Dept. relies heavily on commercial comsats and imaging sats.</p>
<p>The wargaming scenarios&#8212;as well as some of the &#8220;real-world&#8221; scenarios&#8212;in <i>Space Wars</i> are amalgamations of outcomes and insights gained from actual wargames, such as those listed on pg. 7 of the book&#8217;s foreword.</p>
<p>Finally, weapons and systems depicted in SW are real or based on real-world technologies, although some remain classified. For instance, as an <i>AvWeek</i> reporter, I confirmed years ago that classified tests done at China Lake NAS, Calif., proved that a maser could be accurately controlled and targeted by first firing a laser, then firing the maser a split second later. The latter&#8217;s microwave beam would follow the laser-formed &#8220;waveguide&#8221; through the air, enabling the beam to be aimed accurately and controlled.</p>
<p>Has such a weapon been developed and deployed? I don&#8217;t know. Would it also work in space, or would the maser beam start wandering like wet spaghetti, once it left the atmosphere? I don&#8217;t know that, either. Some scientists believe the beam would remain coherent and stable in space, but I was never able to confirm that tests had demonstrated that ability. Inside the atmosphere, though, actual testing DID confirm that the laser-maser combination enabled accurately targeting objects with high-energy microwave beams.</p>
<p>Ref. the Blackstar system: I now have several photos of the XOV spaceplane sitting on a Lockheed Martin flightline ramp, so the vehicle definitely exists. Based on 15+ years of sighting reports, inside sources, etc., I determined that Blackstar&#8217;s SR-3 carrier aircraft and several versions of the XOV were built and flown. An <a href="http://www.aviationweek.com/aw/generic/story_generic.jsp?channel=awst&amp;id=news/030606p1.xml" target="_blank" rel="noopener">AvWeek cover story</a> describing the system ran in the March 6, 2006, issue.</p>
<p><img decoding="async" src="http://www.aviationweek.com/media/images/news/030606p1.jpg" /><br />
<i>Blackstar spaceplane? (Aviation Week)</i></p>
<p>Despite considerable feedback that spanned the spectrum from attaboy support to flaming criticism, the stories DID prompt airtight confirmation to come back to me from impeccable sources. Bottom line: some may dispute it, but the Blackstar system exists and has flown. Whether it can achieve orbit and was/is used exactly as we&#8217;ve depicted via &#8220;Speed&#8217;s&#8221; flights in Space Wars is strictly an educated guess, based on my <i>AvWeek</i> reporting.</p>
<p><b><i>What are your thoughts on the recent Chinese destruction of their satellite, and the possibility that it was an ASAT test?</i></b></p>
<p>It was definitely considered to be an ASAT test, according to several general officers who spoke at last week&#8217;s Space Symposium here in Colo. Springs. I think such an ASAT threat has existed for some time, and our milspace professionals knew it was just a matter of time until some entity demonstrated it. The Russians already HAD demo&#8217;d the capability decades ago, and Doug Pearson really DID shoot down a satellite in 1985, firing a missile from his F-15. As the USAF commander of Space Command said last week, the Chinese ASAT test was a major wakeup call for all spacefaring nations, proving once and for all that &#8220;space is no longer a sanctuary.&#8221;</p>
<p><b><i>How does Russia&#8217;s planned Glonass system relate to the European nav sat system described in the book?</i></b></p>
<p>Both are considered alternatives to the U.S. GPS network. Ultimately, Russia, Europe and the U.S. envision some commercial receivers will be able to use any of these signals for precise navigation and timing. Glonass and Galileo are being developed to (ostensibly) ensure satellite-based nav and timing will always be available, because the U.S. system could be turned off at will. The U.S. military controls GPS, and the Pentagon could disable certain or all GPS signals during a national emergency — writ &#8220;war.&#8221;</p>
<p>Yet, GPS signals are becoming virtual global utilities, depended upon by millions of users. The Euros, Russians, Japan and others see billions of dollars to be made by selling receivers and GPS-embedded products, as well, and want to get in on that commercial action. Bottom line, though, is this: they&#8217;re alternatives to GPS, sold to financiers as &#8220;guaranteed service&#8221; options, should the U.S. turn off GPS.</p>
<p><b><i>Are there any other recent technical, military, political, and other developments that tie in with the book or that were predicted in the book?</i></b></p>
<p>The Iranian political situation today is playing out largely as we anticipated. Technologies for &#8220;operationally responsive space&#8221; &#8212; smallsats and quick-response launchers &#8212; are evolving quickly. Autonomous on-orbit servicing of satellites is being demonstrated now by the Orbital Express spacecraft, a feature that plays more dramatically in our sequel, <i>Space Wars II</i> (now being written by the same coauthors). The Chinese ASAT test has awakened Congress and American citizens to the potential threats facing our space infrastructure, but I don&#8217;t think our political leaders fully appreciate what impacts those threats could have on the U.S. national security posture and citizens&#8217; activities.</p>
<p><b><i>What kind of comments are you getting from savvy early readers so far?</i></b></p>
<p>Initial feedback we&#8217;re getting is that <i>Space Wars</i>&#8217; message is &#8220;bang-on,&#8221; prophetic, scary and very timely. Many readers either had no idea the U.S.&#8212;and modern civilization, in general&#8212;was so dependent on &#8220;space,&#8221; or that losing satellites might have such dramatic impacts in the geopolitical realm, as well as on people&#8217;s daily lives. Perhaps the most succinct feedback I&#8217;ve heard was: &#8220;This is a very possible, very scary future. I hope it doesn&#8217;t come true.&#8221; Although many of our readers, who have a military background, are aware of the threats we depict, they hadn&#8217;t put the IMPACTS of attacks on satellites and the ISS into context the way <i>Space Wars</i> does&#8212;or so they&#8217;re telling us.</p>
<p>Last Thursday, during the annual Space Symposium (attended by approx. 7,000 space professionals from across the globe), many senior military, commercial and civil leaders bought copies of Space Wars and had Mike and me sign their books. Interestingly, the first two copies were purchased by a two-star USAF general, who is the chancellor of the National Security Space Institute, and her aide. She also wants to talk to us about some &#8220;hot-button&#8221; issues we should consider for our second <i>Space Wars </i>book.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.writingsbyraykurzweil.com/space-wars-the-first-six-hours-of-world-war-iii/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Gelernter, Kurzweil debate machine consciousness</title>
		<link>https://www.writingsbyraykurzweil.com/gelernter-kurzweil-debate-machine-consciousness-2</link>
		<comments>https://www.writingsbyraykurzweil.com/gelernter-kurzweil-debate-machine-consciousness-2#respond</comments>
		<pubDate>Wed, 06 Dec 2006 11:34:36 +0000</pubDate>
								<dc:creator>Rodney Brooks, Ray Kurzweil, David Gelernter</dc:creator>
		
		
				<category><![CDATA[essays]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[Are we limited to building super-intelligent robotic "zombies" or will it be possible and desirable for us to build conscious, creative, volitional, perhaps even "spiritual" machines? David Gelernter and Ray Kurzweil debated this key question at MIT on Nov. 30.]]></description>
			<content:encoded><![CDATA[<p><i>Transcript by <a href="http://www.csail.mit.edu/index.php" target="_blank">MIT Computer Science and Artificial Intelligence Laboratory</a> (CSAIL), published with permission on KurzweilAI.net December 6, 2006. Participants: Yale professor of computer science David Gelernter, Ray Kurzweil, and CSAIL Director Rodney Brooks, acting as moderator, with questions from the audience. </i><span id="more-80953"></span></p>
<p>BROOKS: This is a double-headed event today. We&#8217;re going to start off with a debate. Then we&#8217;re going&#8212;maybe it&#8217;s a triple-headed event. We&#8217;re going to start off with a debate, then we&#8217;re going to have a break for pizza and soda&#8212;pizza lover here&#8212;outside, and then we&#8217;re going to come back for a lecture. </p>
<p> The event that this is around is the 70<sup>th</sup> anniversary of a paper by Alan Turing, &quot;On Computable Numbers,&quot; published in 1936, which one can legitimately, I think&#8212;I think one can legitimately think of that paper as the foundation of computer science. It included the invention of the Turing&#8212;what we now call the Turing Machine. And Turing went on to have lots of contributions to our field, we at the Computer Science and Artificial Intelligence Lab. In 1948, he had a paper titled, &quot;Intelligent Machinery,&quot; which I think is really the foundation of artificial intelligence. </p>
<p> So in honor of that 70<sup>th</sup> anniversary, we have a workshop going on in the next couple days and this event tonight. This event is sponsored by the Templeton Foundation. Charles Harper of the Templeton Foundation is here, and so is Mary Ann Meyers and some other people sponsoring this event. And Charles, I have to ask you one question&#8212;A or B? You have to say. You have to choose. This is going to choose who goes first, but I&#8217;m not telling you who A or B is. </p>
<p>HARPER: A.</p>
<p>BROOKS: OK. So we&#8217;re going to start this debate between Ray Kurzweil and David Gelernter. And it turns out that Ray is going to go first. Thanks, Charles. So I&#8217;m first going to introduce Ray and David. I will point out that after we finish and after the break, we&#8217;re going to come back at 6:15, and Jack Copeland, who&#8217;s down here, will then give a lecture on Turing&#8217;s life. And Jack has been&#8212;runs the Alanturing.net, the archives in New Zealand of Alan Turing, and he&#8217;s got a wealth of material and new material that&#8217;s being declassified over time that he&#8217;ll be talking about some of Alan Turing&#8217;s contributions.</p>
<p>But the debate that we&#8217;re about to have is really about the AI side of Alan Turing and the limits that we can expect or that we might be afraid of or might be celebrating of whether we can build superintelligent machines, or are we limited to building just superintelligent zombies. We&#8217;re pretty sure we can build programs with intelligence, but will they just be zombies that don&#8217;t have the real oomph of us humans? Will it be possible or desirable for us to build conscious, volitional, and perhaps even spiritual machines? </p>
<p>So we&#8217;re going to have a debate. Ray is going to speak for five minutes and then David is going to speak for five minutes&#8212;opening remarks. Then Ray will speak for ten minutes, David for ten minutes&#8212;that&#8217;s a total of 30 minutes, and I&#8217;m going to time them. And then we&#8217;re going to have a 15-minute interplay between the two of them. They get to use as much time as they can get from the other one during that. And then we&#8217;re going to open up to some questions from the audience. But I do ask that when we have the questions, the questions shouldn&#8217;t be for you to enter the debate. It would be better if you can come up with some question which you think they can argue about, because that&#8217;s what we&#8217;re here to see. </p>
<p>Ray Kurzweil has been a well-known name since his&#8212;in artificial intelligence since his appearance on Steve Allen&#8217;s show in 1965, where he played a piano piece that a computer he had built had composed. Ray has gone on to&#8212;</p>
<p>KURZWEIL: I was three years old.</p>
<p>BROOKS: He was three years old, yes. Ray has gone on to build the Kurzweil synthesizers that many musicians use, the Kurzweil reading machines, and many other inventions that have gone out there and are in everyday use. He&#8217;s got prizes and medals up the wazoo. He won the Lemelson Prize from MIT, he won the National Medal of Technology, presented by President Clinton in 1999. And Ray has written a number of books that have been&#8212;come out and been very strong sellers on all sorts of questions about our future and the future of robot kind.</p>
<p> David Gelernter is a professor at Yale University, professor of computer science, but he&#8217;s sort of a strange professor of computer science, in the sense that he writes essays for <i>Weekly Standard</i>, <i>Time</i>, <i>Wall Street Journal</i>, <i>Washington Post</i>, <i>Los Angeles Times</i>, and many other sorts of places. And I see a few of my colleagues here, and I&#8217;m glad they don&#8217;t write columns for all those places. His research interests include AI, philosophy of mind, parallel distributed systems, visualization, and information management. And you can read all about them with Google if you want to get more details. Both very distinguished people, and I hope we have some interesting things to hear from them. So we&#8217;ll start with Ray. And five minutes, Ray. </p>
<p>KURZWEIL: OK. Well, thanks, Rodney. You&#8217;re very good at getting a turnout. That went quickly. [laughter] So there&#8217;s a tie-in with my tie, which was given to me by Intel. It&#8217;s a photomicrograph of the Pentium, which I think symbolizes the progress we&#8217;ve made since Turing&#8217;s relay-based computer Ultra that broke the Nazi Enigma code and enabled Britain to win the Battle of Britain. But we&#8217;ve come a long way since then.</p>
<p> And in terms of this 70<sup>th</sup> anniversary, the course I enjoyed the most here at MIT, when I was here in the late &#8217;60s, was 6.253&#8212;I don&#8217;t remember all the numbers, and numbers are important here&#8212;but that was theoretical models of computation, and it was about that paper and about the Turing Machine and what it could compute and computable functions and the busy beaver function, which is non-computable, and what computers can do. It really established computation as a sub-field of mathematics and, arguably, mathematics as a sub-field of computation. </p>
<p> So in terms of the debate topic, I thought it was interesting that there&#8217;s an assumption in the title that we will build superintelligent machines&#8212;the question is whether those machines will be conscious or not conscious. And it brings up the issue of consciousness, and I want to focus on that for a moment, because I think we can define consciousness in two ways. We can define apparent consciousness, which is what an entity has if it appears to be conscious&#8212;and I believe, in fact, you have to be apparently conscious to pass the Turing test, which means you really need a command of human emotion. Because if you&#8217;re just very good at proving mathematical theorems and making stock market investments and so on, you&#8217;re not going to pass the Turing test. And in fact, we have machines that do a pretty good job with those things. Mastering human emotion and human language is really key to the Turing test, which has held up as our exemplary assessment of whether or not a non-biological intelligence has achieved human levels of intelligence.</p>
<p> And that will require a machine to master human emotion, which in my view is really the cutting edge of human intelligence. That&#8217;s the most intelligent thing we do. Being funny, expressing a loving sentiment&#8212;these are very complex behaviors. And we have characters in video games that try to do these things, but they&#8217;re not very convincing. They don&#8217;t have the complex, subtle cues that we associate with those emotions. They don&#8217;t really have emotional intelligence. But emotional intelligence is not some sideshow to human intelligence. It&#8217;s really the cutting edge. And as we build machines that can interact with us better and really master human intelligence, that&#8217;s going to be the frontier. And in my ten minutes, I&#8217;ll try to make the case that we will achieve that. I think that&#8217;s more of a 45-minute argument, but I&#8217;ll try to summarize my views. </p>
<p> I will say that the AI community and I have gotten closer in our assessments of when that will be feasible. There was a conference on my 1999 book, <i>The Age of Spiritual Machines</i>, at Stanford, attended by AI experts. My feeling then was that we would see it in 2029. The consensus in the AI community was: oh, it&#8217;s very complicated, it&#8217;s going to take hundreds of years, if we can ever do it. I gave a presentation&#8212;I think you were there as well, Rodney&#8212;at AI50, on the 50th anniversary of the Dartmouth Conference that gave AI its name in 1956. And we had these instant polling devices, and they asked ten different ways when a machine would pass the Turing test&#8212;when will we know enough about the brain, when will we have sophisticated enough software, when will a computer actually pass the Turing test. It was basically the same question asked different ways, and they got the same answer. And of course it was a bell curve, but the consensus was 50 years, which, at least if you think logarithmically, as I do, is not that different from 25 years. </p>
<p>So I haven&#8217;t changed my position, but the AI community is getting closer to my view. And I&#8217;ll try to explain why I think that&#8217;s the case. It&#8217;s because of the exponential growth in the power of information technology, which will affect hardware, but will also affect our understanding of the human brain, which is at least one source of the software of intelligence.</p>
<p>The other definition of consciousness is subjectivity. Consciousness is a synonym for subjectivity&#8212;really having subjective experience, not just appearing to have subjective experience. And fundamentally&#8212;I&#8217;ll try to make this point more fully in my ten-minute presentation&#8212;that&#8217;s not a scientific concept. There&#8217;s no consciousness detector we can imagine creating&#8212;you&#8217;d slide an entity in, green light goes on, OK, this one&#8217;s conscious; no, this one&#8217;s not conscious&#8212;that doesn&#8217;t have some philosophical assumptions built into it. So John Searle would make sure that it&#8217;s squirting human neurotransmitters&#8212;</p>
<p>BROOKS: Time&#8217;s up.</p>
<p>KURZWEIL: OK. And Dan Dennett would make sure it&#8217;s self-reflexive. But we&#8217;ll return to this.</p>
<p>[applause]</p>
<p>BROOKS: David?</p>
<p>GELERNTER: Let&#8217;s see. First, I&#8217;d like to say thanks for inviting me. My guess is that the position I&#8217;m representing&#8212;the anti-cognitivist position, broadly speaking&#8212;is not the overwhelming favorite at this particular site. But I appreciate your willingness to listen to unpopular opinions, and I&#8217;ll try to make the most of it by being as unpopular as I can. [laughter]</p>
<p> First, it seems to me we won&#8217;t even be able to build superintelligent zombies unless we attack the problem right, and I&#8217;m not sure we&#8217;re doing that. I&#8217;m pretty sure we&#8217;re not. We need to understand, it seems to me, and model thought as a whole&#8212;the cognitive continuum. The mind supports not merely one or a discrete handful of cognitive styles but a continuum or spectrum of thought styles, reaching from focused analytical thought at one extreme, associated with alertness or wide-awakeness, toward steadily less focused thought, in which our tendency to free-associate increases. Finally, at the other extreme, that tendency overwhelms everything else and we fall asleep. </p>
<p> So the spectrum reaches from focused analysis to unfocused continuous free association and the edge of sleep. As we move down-spectrum towards free association, naturally our tendency to think analogically increases. As we move down-spectrum, emotion becomes more important. I have to strongly agree with Ray on the importance of emotion. We speak of being coldly logical on the one hand, but dreaming on the other is an emotional experience. Is it possible to simulate the cognitive continuum in software? I don&#8217;t see why not. But only if we try.</p>
<p> Will we ever be able to build a conscious machine? Maybe, but building one out of software seems to me virtually impossible. First, of course, we have to say what conscious means. For my purposes, consciousness means subjectivity&#8212;the presence of mental states that are strictly private, with no visible functions or consequences. A conscious entity can call up some thought or memory merely to feel happy, to enjoy the memory, be inspired or soothed or angered by the thought, get a rush of adrenaline from the thought. And the outside world needn&#8217;t see any evidence at all that this act of thought or remembering is taking place. </p>
<p> Now, the reason I believe consciousness will never be built out of software is that wherever software is executing, by definition we can separate out, peel off, a portable layer that can run in a logically identical way on any computing platform&#8212;for example, on a human mind. I know what it&#8217;s like to be a computer executing software, because I can execute that separable, portable set of instructions just as an electronic digital computer can, and with the same logical effect. If you believe that you can build consciousness out of software, you believe that when you execute the right sort of program, a new node of consciousness gets created. But I can imagine executing any program without ever causing a new node of consciousness to leap into being. Here I am evaluating expressions, loops, and conditionals. I can see this kind of activity producing powerful unconscious intelligence, but I can&#8217;t see it creating a new node of consciousness. I don&#8217;t even see where that new node would be&#8212;floating in the air someplace, I guess. </p>
<p> And of course, there&#8217;s no logical difference between my executing the program and the computer&#8217;s doing it. Notice that this is not true of the brain. I do not know what it&#8217;s like to be a brain whose neurons are firing, because there is no separable, portable layer that I can slip into when we&#8217;re dealing with the brain. The mind cannot be ported to any other platform or even to another instance of the same platform. I know what it&#8217;s like to be an active computer in a certain abstract sense. I don&#8217;t know what it&#8217;s like to be an active brain, and I can&#8217;t make those same statements about the brain&#8217;s creating or not creating a new node of consciousness. </p>
<p> Sometimes people describe spirituality&#8212;to move finally to the last topic&#8212;as a feeling of oneness with the universe or a universal flow through the mind, a particular mode and style of thought. In principle, you could get a computer to do that. But people who strike me as spiritual describe spirituality as a physical need or want. My soul thirsteth for God, for the living God, as the Book of Psalms says. Can we build a robot with a physical need for a non-physical thing? Maybe, but don&#8217;t count on it. And forget software. </p>
<p> Is it desirable to build intelligent, conscious computers, finally? I think it&#8217;s desirable to learn as much as we can about every part of the human being, but assembling a complete conscious artificial human is a different project. We might easily reach a state someday where we prefer the company of a robot from Wal-Mart to our next door neighbors or roommates or whatever, but it&#8217;s sad that in a world where we tend to view such a large proportion of our fellow human beings as useless, we&#8217;re so hot to build new ones. [laughter]</p>
<p> In a Western world that no longer cares to have children at the replacement rate, we can&#8217;t wait to make artificial humans. Believe it or not, if we want more complete, fully functional people, we could have them right now, all natural ones. Consult me afterwards, and I&#8217;ll let you know how it&#8217;s done. [laughter]</p>
<p>BROOKS: OK, great. </p>
<p>GELERNTER: Thank you. </p>
<p>KURZWEIL: You heard glimpses in David&#8217;s presentation of both of these concepts of consciousness, and we can debate them both. I think principally he was talking about a form of performance that incorporates emotional intelligence. Because even though emotional intelligence seems private, and we assume that there is someone actually home there experiencing the emotions that are apparently present, we can&#8217;t really tell that when we look at someone else. In fact, all that we can discuss scientifically is objective observation&#8212;science is really a synonym for objectivity, and consciousness is a synonym for subjectivity, and there is an inherent gulf between them. </p>
<p>So some people feel that actual consciousness doesn&#8217;t exist, since it&#8217;s not a scientific concept&#8212;it&#8217;s just an illusion, and we shouldn&#8217;t waste time talking about it. That&#8217;s not fully satisfactory, in my view, because our whole moral and ethical and legal system is based on consciousness. Whether you cause suffering to some other conscious entity&#8212;that&#8217;s the basis of our legal code and ethical values. Some people ascribe some magical or mystical property to consciousness. There were some elements of that in David&#8217;s remarks, say, in talking about a new node of consciousness and how that would suddenly emerge from software. </p>
<p>My view is that it&#8217;s an emergent property of a complex system. It&#8217;s not dependent on substrate. But that is not a scientific view, because there&#8217;s really no way to talk about or to measure the subjective experience of another entity. We assume that other people are conscious; it&#8217;s a shared human assumption. But that assumption breaks down when we go outside of shared human experience. The whole debate about animal rights has to do with whether these entities are actually conscious. Some people feel that animals are just machines in the old-fashioned sense of that term&#8212;there&#8217;s nobody really home. Some people feel that animals are conscious. I feel that my cat&#8217;s conscious. Other people don&#8217;t agree. They probably haven&#8217;t met my cat, but&#8212; [laughter] </p>
<p>But then the other view is apparent consciousness&#8212;an entity that appears to be conscious&#8212;and that will require emotional intelligence. There are several reasons why I feel that we will achieve that in a machine, and they have to do with the acceleration of information technology&#8212;something I&#8217;ve studied for several decades. Information technology, not just computation but in all fields, is basically doubling every year in price-performance, capacity, and bandwidth. We certainly can see that in computation, but we can also see it in other areas: the resolution of brain scanning in 3D volume is doubling every year, and the amount of data we are gathering on the brain is doubling every year. And we&#8217;re showing that we can actually turn this data into working models and simulations of brain regions. There are about 20 regions of the brain that have already been modeled and simulated. </p>
<p>And I&#8217;ve actually had a debate with Tomaso Poggio as to whether this is useful, because he kept saying, well, OK, we&#8217;ll learn how the visual cortex works, but that&#8217;s really not going to be useful in creating artificial vision systems. And I said, well, when we got these early transformations of the auditory cortex, that actually did help us in speech recognition. It was not intuitive, we didn&#8217;t expect it, but when we plugged them into the front-end transformations of speech recognition, we got a big jump in performance. They hadn&#8217;t done that yet with modeling of the visual cortex. And I saw him recently&#8212;in fact, at AI50&#8212;and he said, you know, you were right about that, because now they&#8217;re actually getting these early models of how the visual cortex works, and that has been helpful in artificial vision systems.</p>
<p>I make the case in chapter four of my book that we will have models and simulations of all of the several hundred regions of the human brain within 20 years. And you have to keep in mind that the progress is exponential, so it&#8217;s very seductive&#8212;it looks like nothing is happening. People dismissed the genome project; now we think of it as a mainstream project, but halfway through the project, only 1% of the work had been done. Yet the amount of genetic data doubled smoothly every year, and the project was done on time. If you factor in this exponential pace of progress, I believe we will have models and simulations of these different brain regions&#8212;IBM is already modeling a significant slice of the cerebral cortex. And that will give us the templates of intelligence, it will expand the AI toolkit, and it&#8217;ll also give us new insights into ourselves. We&#8217;ll be able to create machines that have more facile emotional intelligence, that really do have the subtle cues of emotional intelligence, and that will be necessary to pass the Turing test. </p>
<p>But that still begs the key question of whether those entities just appear to be conscious and feeling emotion, or whether they really have subjective emotional experiences. David, I think, was giving a sophisticated version of John Searle&#8217;s Chinese room argument. I don&#8217;t have time to explain the whole argument, but for those of you familiar with it, you&#8217;ve got a guy who&#8217;s just following some rules on a piece of paper and answering questions in Chinese, and John says, well, isn&#8217;t it ridiculous to think that that system is actually conscious? Or he has a mechanical typewriter which types out answers in Chinese, but it&#8217;s following complex rules. The premise seems absurd&#8212;that that system could actually have true understanding and be conscious when it&#8217;s just following a simple set of rules on a piece of paper. </p>
<p>Of course, the sleight of hand in that argument is that this set of rules would be immensely complex, and the whole premise&#8212;that such a simple system could realistically answer unanticipated questions in Chinese or any language&#8212;is unrealistic. Because basically what the man in the Chinese room is doing, in John Searle&#8217;s argument, is passing a Turing test. And that entity would have to be very complex, and in that complexity is a key emergent property. So David says, well, it seems ridiculous to think that software could be conscious&#8212;and I&#8217;m not sure which flavor of consciousness he&#8217;s using there, true subjectivity or just apparent consciousness&#8212;but in either case it seems absurd that a little software program could display that kind of complexity and emergent self-awareness. </p>
<p>But that&#8217;s because you&#8217;re thinking of software as you know it today. If in fact you have a massively parallel system, as the brain is, with 100 trillion internal connections, all of which are computing simultaneously&#8212;and in fact we can model those internal connections and neurons quite realistically in some cases today&#8212;the picture changes. We&#8217;re still in the early part of that process. But even John Searle agrees that a neuron is basically a machine and can be modeled and simulated, so why can&#8217;t we do that with a massively parallel system with 100 trillion-fold parallelism? And if that seems ridiculous&#8212;it is ridiculous today, but it&#8217;s not ridiculous with the kind of technology we&#8217;ll have after 30 more doublings of the price-performance, capacity, and bandwidth of information technology, the kind of technology we&#8217;ll have around 2030.</p>
<p>These massively parallel systems will have the complexity of the human brain&#8212;which is a moderate level of complexity, because the design of the human brain is in the genome, and the genome has 800 million bytes. But that&#8217;s uncompressed; it has massive redundancies&#8212;one sequence, called Alu, is repeated 300,000 times. If you apply lossless compression to the genome, you can reduce it to 30-50 million bytes, which is not simple, but it&#8217;s a level of complexity we can manage.</p>
<p>BROOKS: Ray, the logarithm of your remaining time is one. [laughter]</p>
<p>KURZWEIL: So&#8212;we&#8217;ll be able to achieve that level of complexity. We are making exponential progress in reverse engineering the brain. We&#8217;ll have systems that have the suppleness of human intelligence. This will not be conventional software as we understand it today. There is a difference in the (inaudible) field of technology when it achieves that level of parallelism and that level of complexity, and I think we&#8217;ll achieve that if you consider these exponential progressions. And it still doesn&#8217;t penetrate the ultimate mystery of how consciousness&#8212;true subjectivity&#8212;can emerge. We assume that other people are conscious, but that assumption breaks down in the case of animals, and we&#8217;ll have a vigorous debate when we have these machines. But I&#8217;ll make one prediction: we will come to believe these machines, because they&#8217;ll be very clever, and they&#8217;ll get mad at us if we don&#8217;t believe them, and we won&#8217;t want that to happen. So thank you.</p>
<p>BROOKS: OK. David?</p>
<p>GELERNTER: Well, thank you for those very eloquent remarks. Many points were raised. First of all, the premise of John Searle&#8217;s Chinese room, and of the related thought experiment that I outlined, is certainly unrealistic. Granted, the premise is unrealistic. That&#8217;s why we have thought experiments. If the premise were not unrealistic, if it were easy to run in a lab, we wouldn&#8217;t need to have a thought experiment. </p>
<p>Now, the fact remains that any thought experiment needs to be evaluated carefully. The fact that we can imagine something doesn&#8217;t mean that what we imagine is the case. We need to know whether our thought experiment is based on experience. I would say the thought experiment of imagining that you&#8217;re executing the instructions that constitute a program, or that realize a virtual machine, is founded on experience, because we&#8217;ve all had the experience of executing algorithms by hand. And there&#8217;s no exotic ingredient in executing instructions. I may be wrong. I don&#8217;t know for sure what would happen if I executed a truly enormous program that went on for billions of pages. But I don&#8217;t have any reason for believing that consciousness would emerge. It seems to me a completely arbitrary claim. It might be true. Anything might be true. But I don&#8217;t see why you make the claim. I don&#8217;t see what makes it plausible. </p>
<p>You mentioned massive parallelism, but massive parallelism, after all, adds absolutely zero in terms of expressivity. You could have a billion processors going, or ten billion, or ten trillion, or 10<sup>81</sup>, and all those processors could be simulated on a single jalopy PC. I could run all those processes asynchronously on one processor, as you know, and what I get from parallelism is performance, obviously, and a certain amount of cleanliness and modularity when I write the program, but I certainly don&#8217;t get anything in terms of expressivity that I didn&#8217;t have anyway.</p>
<p>You mentioned consciousness, which is the key issue here. And you pointed out that consciousness is subjective. I&#8217;m only aware of mine, you&#8217;re only aware of yours&#8212;granted. You say that consciousness is an emergent property of a complex system. Granted, of course: the brain is obviously a complex system, and consciousness is clearly an emergent property. Nobody would claim that one neuron tweezed out of the brain was conscious. So yes, it is an emergent property. As for the business about people denying animal consciousness, I haven&#8217;t really heard that since the 18th century, but who knows, maybe there are still Cartesians out there&#8212;raise your hands. </p>
<p>But in the final analysis, although it&#8217;s true that consciousness is irreducibly subjective, you can&#8217;t possibly claim to understand the human mind if you don&#8217;t understand consciousness. It&#8217;s true that I can&#8217;t see yours and you can&#8217;t see mine. It doesn&#8217;t change the fact that I know I&#8217;m conscious and you know that you are. And I&#8217;m not going to believe that you understand the human mind unless you can explain to me what consciousness is, how it&#8217;s created, and how it got there. Now, that doesn&#8217;t mean that you can&#8217;t do a lot of useful things without creating consciousness. You certainly can. If your ultimate goal is utilitarian, forget about consciousness. But if your goals are philosophical and scientific and you want to understand how the mind really operates, then you must be able to tell me how consciousness works, or you don&#8217;t have a theory of the human mind. </p>
<p>One element that I think you left out of your discussion of the thought experiment: granted, we&#8217;re able to build more and more complex systems, and they are more and more powerful, and we&#8217;re able to build more and more accurate and effective simulations of parts of the brain, and indeed of other parts of the body&#8212;because keep in mind that when we allow the importance of emotion in thinking, it&#8217;s clear that you don&#8217;t just think with your brain, you think with your body. When you have an emotion, the body acts as a resonator or a sounding board or an amplifier, and you need to understand how the body works, as well as the brain, if you&#8217;re going to understand emotion. But granted, we&#8217;re able to build more and more complex and more and more effective simulators.</p>
<p>What isn&#8217;t clear is the role of the brain&#8217;s chemical structure. The role of the brain stuff itself, of course, is a point that Searle harps on, but it goes back to a paper by Paul Ziff in the late 1950s, and many people have remarked on this point. We don&#8217;t have the right to dismiss out of hand the role of the actual chemical makeup of the brain in creating the emergent property of consciousness. We don&#8217;t know whether it can be created using any other substance. Maybe it can&#8217;t and maybe it can. It&#8217;s an empirical question. </p>
<p>One is reminded of the famous search that went on for so many centuries for a substitute source of the pigment ultramarine&#8212;a tremendously important pigment for any painter. You get it from lapis lazuli, and there are not very many sources of lapis lazuli. It&#8217;s very expensive, and it&#8217;s a big production number to get it and grind it down and turn it into ultramarine. So ultramarine paint used to be as expensive as gold leaf. People wanted to know, where else can I get ultramarine? And they went to the scientific community, and the scientific community said, we don&#8217;t know. There&#8217;s no law that says there is some other way to get ultramarine than from lapis lazuli, but we&#8217;ll try. And at a certain point in the late 19th century, a team of French chemists did succeed in producing a fake ultramarine pigment which was indeed much cheaper than lapis lazuli. And the art world rejoiced.</p>
<p>The moral of the story? If you can do it, great, but you have no basis for insisting on an a priori assumption that you can do it. I don&#8217;t know whether there is a way to achieve consciousness in any way other than living organisms achieve it. If you think there is, you&#8217;ve got to show me. I have no reason for accepting that a priori. And I think I&#8217;m finished.</p>
<p>BROOKS: I can&#8217;t believe it&#8212;everyone stopped. Ray, stay up there, and now we&#8217;ll go back and forth. Ray, maybe you want to answer that. </p>
<p>KURZWEIL: So I&#8217;m struggling, as I listen to your remarks, David, to really tell what you mean by consciousness. I&#8217;ve tried to distinguish two different ways of looking at it. There&#8217;s the objective view, which is usually what people lapse into when they talk about consciousness. They talk about some neurological property, or they talk about self-reflection&#8212;an entity that can create models of its own intelligence and behavior, model itself, run what-if experiments in its mind, have imagination, think about itself and transform models of itself&#8212;and this kind of self-reflection, they say, is consciousness. Or maybe it has to do with mirror neurons, and the fact that we can empathize&#8212;that is to say, understand the consciousness or the emotions of somebody else.</p>
<p> But that&#8217;s all objective performance. And our emotional intelligence&#8212;our ability to be funny or be sad or express a loving sentiment&#8212;those are things that the brain does. And I&#8217;d make the case that we are making exponential progress in understanding the human brain and its different regions, modeling them in mathematical terms, and then simulating them and testing those simulations. And the precision of those simulations is going up. We can argue about the timeframe, but I think, within a quarter century or so, we will have detailed models and simulations that can then do the same things that the brain does, apparently. And we won&#8217;t be able to really tell them apart. </p>
<p>That is what the Turing test is all about&#8212;this machine will pass the Turing test. But that is an objective test. We could argue about the rules; Mitch Kapor and I argued for three months about the rules. Turing wasn&#8217;t very specific about them. But it is an objective test, and it&#8217;s an objective property. So I&#8217;m not sure if you&#8217;re talking about that, or talking about the actual sense one has of feeling&#8212;your apparent feelings, the subjective sense of consciousness. And so you talk about&#8212;</p>
<p>GELERNTER: (inaudible), could I answer that question?</p>
<p>BROOKS: Yeah, let (inaudible).</p>
<p>GELERNTER: You say there are two kinds of consciousness, and I think you&#8217;re right. I think most people, when they talk about consciousness, think of something that&#8217;s objectively visible. As I said, for my purposes, I want consciousness to mean mental states&#8212;specifically, mental states that have no external functionality.</p>
<p>KURZWEIL: But that&#8217;s still&#8212;</p>
<p>GELERNTER: You know that you are capable of feeling, of being happy. You know you&#8217;re capable of thinking of something good that makes you feel good, of thinking of something bad that makes you depressed, or of thinking of something outrageous that makes you angry. You know you&#8217;re capable of mental states that are your property alone. As you say, there&#8217;s objective&#8212;absolutely&#8212;</p>
<p>KURZWEIL: But these mental states do have&#8212;</p>
<p>GELERNTER: That&#8217;s what I mean by consciousness.</p>
<p>KURZWEIL: But these mental states still have objective neurological correlates. And in fact, we now have means by which we can begin to look inside the brain with increasing resolution&#8212;doubling in 3D volume every year&#8212;to actually see what&#8217;s going on in the brain. So if I&#8217;m sitting there quietly, thinking happy thoughts and making myself happy, there are actually things going on inside the brain, and we&#8217;re able to see them. And so now this supposedly subjective mental state is, in fact, becoming an objective behavior. Not&#8212;</p>
<p>GELERNTER: Can I comment on that? I think the idea that you&#8217;re arguing with Descartes is a straw-man approach. I don&#8217;t think anybody argues anymore that the mind is a result of mind stuff, some intangible substance that has no relation to the brain. By agreeing with you that consciousness has objective correlates, I&#8217;m certainly not denying that it&#8217;s created by physical mechanisms. I&#8217;m not claiming there&#8217;s some magical or transcendental metaphysical property. But that doesn&#8217;t change the fact that in terms of the way you understand it and perceive it, your experience of it is subjective. That was your term, and I&#8217;m agreeing with you. And that doesn&#8217;t change the fact that it is created by the brain.</p>
<p> Clearly, we&#8217;re reaching better and better understandings of the brain and of everything else. You&#8217;ve said that a few times, and I certainly don&#8217;t disagree. But the fact that we&#8217;re getting better and better doesn&#8217;t mean that we&#8217;re necessarily going to reach any arbitrary goal. It depends on our methods. It depends on whether we understand the problem the right way. It depends on whether we&#8217;re taking the right route. It seems to me that understanding consciousness is necessary. Unless we understand consciousness&#8212;this phenomenon that we&#8217;re all aware of&#8212;our brain simulators haven&#8217;t really told us anything fundamental about the human mind. They haven&#8217;t told us what I want to know.</p>
<p>KURZWEIL: I think our brain simulators are going to have to work not just at the level of the Turing test, but at the level of measuring the objective neurological correlates of these supposedly internal mental states. There&#8217;s some information processing going on when we daydream and think happy thoughts or sad thoughts or worry about something. The same kinds of things are going on as when we do more visibly intelligent tasks. We&#8217;re, in fact, more and more able to penetrate that by seeing what&#8217;s going on and modeling these different regions of the brain&#8212;including, say, the spindle cells and the mirror neurons, which are involved with things like empathy and emotion, and which are uniquely human, although a few other animals have some of them&#8212;and really beginning to model this. </p>
<p>We&#8217;re at an early stage, and it&#8217;s easy to ridicule the primitiveness of today&#8217;s technology, which will always appear primitive compared to what will be feasible, given the exponential progression. But these internal mental states are, in fact, objective behaviors; we will need to expand our definition of objective behavior to include the kinds of things that we can see when we look inside the brain.</p>
<p>GELERNTER: If I could comment on that? Suppose your tests are unable to distinguish, on the one hand, a sharply focused mental state in which I&#8217;m able to concentrate on a problem without my mind drifting and to solve it, from a mental state in which my mind is wandering, I am unable to focus or concentrate on what I&#8217;m doing, and I start dreaming. In fact, cognitive psychologists have found that we start dreaming and then we fall asleep. If your tests or your simulators are unable to distinguish between the mental state of dreaming or continuous free association on the one hand and focused, logical, analytic problem-solving on the other, then I think you&#8217;re just telling us that your tests have failed, because we know that these states are different and we want to know why they&#8217;re different. It doesn&#8217;t do any good to say, well, they&#8217;re caused in the same way. We need to explain the difference that we can observe.</p>
<p>BROOKS: Can I ask a question which I think gets at what this disagreement is about? I&#8217;ll ask the two of you different questions. The question for David is, what would it take to convince you to accept that you could build a conscious computer on a digital substrate? And Ray, what would it take to convince you that digital stuff isn&#8217;t good enough&#8212;that we need some other chemicals or something else that David talked about? </p>
<p>KURZWEIL: To answer it myself, I wouldn&#8217;t get too hung up on digital, because, in fact, the brain is not digital. The neurotransmitters are a kind of digitally controlled analog phenomenon. But when we figure out the salient&#8212;the important thing is to figure out what is salient, how information is modeled, and what these different regions are actually doing to transform information.</p>
<p> The actual neurons are very complex. There are lots of things going on, but we find that one region of the auditory cortex is basically carrying out a certain type of algorithm: the information is represented perhaps by the location of certain neurotransmitters in relation to one another, whereas in another case it has to do with the production of some unique neurotransmitter. There are different ways in which the information is represented. And these are chemical processes, but we can model anything like that digitally, at whatever level of specificity is needed. We know that. We can model it analog&#8212;</p>
<p>BROOKS: OK, so you didn&#8217;t answer the question. Can you then answer the question? (laughter)</p>
<p>GELERNTER: I will continue in exactly the same spirit, by not answering the question. I wish I could answer the question. It is a very good question and a deep question. Given the fact that mental states that are purely private are also purely subjective, how can we tell when they are present? And the fact is, just as you don&#8217;t know how to produce them, I don&#8217;t know how to tell whether they are there. It&#8217;s a research question, it&#8217;s a philosophical question. </p>
<p> It&#8217;s&#8212;we know how to understand particular technologies. That is, someone says, I&#8217;ve created consciousness and I&#8217;ve done it by running software on a digital computer. I can think about that and say, I don&#8217;t buy that, I don&#8217;t believe there&#8217;s consciousness there. If you wheel in some other technology, my only stratagem is to try to understand that new technology. I need to understand what you&#8217;re doing, what moves you&#8217;re making, because unfortunately I don&#8217;t know of any general test. The only test that one reads about or hears about philosophically is relevant similarity&#8212;that is, we assume that our fellow human beings are conscious, because we can see they&#8217;re people like us. We assume that if I have mental states, other similar creatures have mental states. And we make that same assumption about animals. And the more similar to us they seem, the more we assume their mental states are like ours.</p>
<p> How are we going to handle creatures&#8212;or things, entities, objects&#8212;that are radically unlike us and are not organic? It&#8217;s a hard question and an interesting question. I&#8217;d like to see more work done on it.</p>
<p>KURZWEIL: In some ways, they&#8217;ll be more like us than animals are, because animals are not perfect models of humans either medically or mentally. Whereas as we really reverse-engineer what&#8217;s going on&#8212;the salient processes&#8212;learn what&#8217;s important in the different regions of the brain, and recreate those properties and abilities to transform information in similar ways, we get an entity that in fact acts very human-like, a lot more human-like than an animal. It can, for example, pass a Turing test, which involves mastery of language, which animals for the most part don&#8217;t have. Such entities will be closer to humans than animals are. </p>
<p> If we really model&#8212;take an extreme case. I don&#8217;t think it is necessary to model neuron by neuron and neurotransmitter by neurotransmitter, but one could in theory do that. And we do, in fact, have simulations of neurons that are highly detailed already&#8212;of one neuron, or a cluster of three or four of them. So why not extend that to 100 billion neurons? It&#8217;s theoretically possible. It&#8217;s a different substrate, but it&#8217;s really doing the same things. And it&#8217;s closer to humans than animals are. </p>
<p>BROOKS: So while David responds, people who want to ask questions can come to the two microphones. Go ahead. </p>
<p>GELERNTER: When you say act very human-like, this is a key issue. You have to keep in mind that the Turing test is rejected by many people, and has been from the very beginning, as a superficial test of performance, a test that fails to tell us anything about mental states, fails to tell us the things that we really most want to know. So when you say something acts very human-like, that&#8217;s exactly what we don&#8217;t do when we attribute the presence of consciousness on the basis of relevant similarity. </p>
<p> When I see somebody, even if he isn&#8217;t acting human-like at all, if he&#8217;s fast asleep, even if he&#8217;s out cold, I don&#8217;t need to see him do anything, I don&#8217;t need to have him answer any fancy questions on the Turing test. I can see he&#8217;s a creature like I am, and I therefore attribute to him a mind and believe he&#8217;s capable of mental states. On the other hand, the Turing test, which is a test of performance rather than states of being, has certainly failed to convince people who are interested in what you would call the subjective kind of consciousness. </p>
<p>KURZWEIL: Well, I think now we&#8217;re&#8212;</p>
<p>GELERNTER: That doesn&#8217;t tell me anything about&#8212;</p>
<p>KURZWEIL: Well, now I think we&#8217;re getting somewhere, because I would agree. The Turing test is an objective test. We can argue about making it super-rigorous and so forth, but if an entity passes that test&#8212;the super-rigorous one&#8212;it is really convincingly human. It&#8217;s convincingly funny and sad, and it is really displaying those emotions in a way that we cannot distinguish from human beings. But you&#8217;re right&#8212;this gets back to a point I made initially. That doesn&#8217;t prove that the entity is conscious, and we don&#8217;t absolutely know that other people are conscious. I think we will come to accept these entities as conscious; that&#8217;s a prediction I can make. But fundamentally, this is the underlying ontological question. </p>
<p>There is actually a role for philosophy here, because it&#8217;s not fundamentally a scientific question. If you reject the Turing test or any variant of it, then we&#8217;re just left with this philosophical issue. My own philosophical take is that if an entity seems to be conscious, I would accept its consciousness. But that&#8217;s a philosophical and not a scientific position.</p>
<p>BROOKS: So I think we&#8217;ll take the first question. And remember, not a monologue, something to provoke discussion.</p>
<p>M: Yeah, no problem. Let&#8217;s see. What if everything is conscious and connected, and it&#8217;s just a matter of us learning how to communicate and connect with it? </p>
<p>KURZWEIL: That&#8217;s a good point. We can communicate with other humans, to some extent&#8212;although history is full of examples where we dehumanize a certain portion of the population and don&#8217;t really accept their conscious experience&#8212;and we have trouble communicating with animals, which really underlies the whole animal rights question. What&#8217;s it like to be a giant squid? Its behavior seems very intelligent, but it&#8217;s also very alien, and we don&#8217;t even have the terminology to express that, because those are not human experiences. And that is part of the deep mystery of consciousness, and it gets at the subjective aspects of it.</p>
<p> But as we do really begin to model our own brain and then extend that to other species&#8212;as we&#8217;re doing with the genome: we&#8217;re now reverse-engineering the genome in other species, and we&#8217;ll ultimately do the same thing with the brain&#8212;that will give us more insight. As we really understand how to model other brains, we can translate into our own human terms the kinds of mental states we see manifested there.</p>
<p>GELERNTER: If we think we are communicating with a software-powered robot, we&#8217;re kidding ourselves, because we&#8217;re using words in a fundamentally different way. To use an example that Turing himself discusses, we could ask the computer or the robot, do you like strawberries, and the computer could lie and say yes or it could, in a sense, tell the truth and say no. But the more fundamental issue is that not only does it not like strawberries, it doesn&#8217;t like anything. It&#8217;s never had the experience of liking, it&#8217;s never had the experience of eating. It doesn&#8217;t know what a strawberry is or any other kind of berry or any other kind of fruit or any other kind of food item. It doesn&#8217;t know what liking is, it doesn&#8217;t know what hating is. It&#8217;s using words in a purely syntactic way with no meanings behind them.</p>
<p>KURZWEIL: This is now the Searlean argument, and John Searle&#8217;s argument can really be rephrased to prove that the human being has no understanding and no consciousness, because each neuron is just a machine. Instead of shuffling symbols, it&#8217;s just shuffling chemicals. And obviously, just shuffling chemicals around is no different from shuffling symbols around. And if shuffling chemicals and symbols around doesn&#8217;t lead to real understanding or consciousness, then why isn&#8217;t that true for a collection of 100 neurons, which are all just little machines&#8212;or 100 billion?</p>
<p>GELERNTER: There&#8217;s a fundamental distinction, which is software. Software is the distinction. I can&#8217;t download your brain onto the computer up there&#8212;</p>
<p>KURZWEIL: Well, that&#8217;s just a limitation of my brain, because we don&#8217;t have quick downloading ports.</p>
<p>GELERNTER: You need somebody else&#8217;s brain in the audience?</p>
<p>KURZWEIL: No, that&#8217;s something that biology left out. We&#8217;re just not going to leave that out of our non-biological base.</p>
<p>GELERNTER: It turns out to be an important point. It&#8217;s the fundamental issue&#8212;</p>
<p>KURZWEIL: It&#8217;s a limitation, not&#8212;</p>
<p>GELERNTER: I think it makes a very big difference that I can take this computer and upload it to a million other computers, or to machines that are nothing like this digital computer&#8212;to a Turing machine, to an organic computer, to an optical computer. I can upload it to a class full of freshmen; I can upload it to all sorts of things. But your mind is yours and will never be downloaded (multiple conversations; inaudible)&#8212;</p>
<p>KURZWEIL: That&#8217;s just because we left&#8212;</p>
<p>GELERNTER: It&#8217;s stuck to your brain. </p>
<p>KURZWEIL: We left out the&#8212;</p>
<p>GELERNTER: And I think that&#8217;s a thought-provoking fact. I don&#8217;t think you can just dismiss it as an&#8212;</p>
<p>KURZWEIL: You&#8217;re posing that as a&#8212;</p>
<p>GELERNTER:&#8212;envir&#8212;a developmental accident. Maybe it is, but&#8212;</p>
<p>KURZWEIL: You&#8217;re posing that as a benefit and advantage of biological intelligence, that we don&#8217;t have these quick downloading ports to access information&#8212;</p>
<p>GELERNTER: Not an advantage. It&#8217;s just a fact.</p>
<p>KURZWEIL: But that&#8217;s not an advantage. If we added quick downloading ports&#8212;which we will add to our non-biological brain emulations&#8212;that&#8217;s just an added feature. We could leave it out. But if we put it in, that doesn&#8217;t deprive the system of any capability it would otherwise have. </p>
<p>GELERNTER: You think you could upload your mind to somebody with a different body, a different environment, a different set of experiences, a different set of books, who feels things in a different way, has a different set of likes, responds in a different kind of way&#8212;and get an exact copy of you? I think that&#8217;s a na&iuml;ve idea. I don&#8217;t think there&#8217;s any way to upload your mind anywhere else unless you can upload your entire being, including your body. </p>
<p>KURZWEIL: Well, it&#8217;s hard to upload to another person who already has a brain and a body&#8212;it&#8217;s like trying to upload to an incompatible machine. But ultimately we will be able to gather enough data on a specific brain and simulate it, including our body and our environmental influences.</p>
<p>BROOKS: Next question.</p>
<p>M: Thanks. If we eventually develop a machine which appears intelligent&#8212;and let&#8217;s say it&#8217;s given an appropriate body so that it can answer meaningful questions about how a strawberry tastes, or whether it likes strawberries&#8212;and we are wondering whether this machine is actually experiencing consciousness the same way that we do, why not just ask it? It&#8217;ll presumably have no reason to lie if you haven&#8217;t specifically gone out of your way to program that in. </p>
<p>KURZWEIL: Well, that doesn&#8217;t tell us anything, because we can ask it today. You can ask a character in a video game and it will say, well, I&#8217;m really angry, or I&#8217;m sad, or whatever. And we don&#8217;t believe it, because it&#8217;s not very convincing yet&#8212;it doesn&#8217;t have the subtle cues, and it&#8217;s not as complex, and it&#8217;s not a realistic emulation of&#8212;</p>
<p>M: Well, if we built 1000 of them, let&#8217;s say&#8212;</p>
<p>GELERNTER: I strongly agree with (inaudible)&#8212;</p>
<p>M:&#8212;presumably they wouldn&#8217;t all agree to lie ahead of time. Somebody&#8212;one of them might tell us the truth if the answer is no. </p>
<p>BROOKS: We&#8217;ll finish that question (multiple conversations; inaudible)&#8212; </p>
<p>GELERNTER: I strongly agree. Keep in mind that the whole basis of the Turing test is lying. The computer is instructed to lie and pass itself off as a human being. Turing assumes that everything it says will be a lie. He doesn&#8217;t talk about the real deep meaning of lying, and he doesn&#8217;t care about that&#8212;that&#8217;s fine, it&#8217;s not his topic. But it&#8217;s certainly not the case that the computer is in any sense telling the truth. It&#8217;s telling you something about its performance, not something about facts or reality or the way it&#8217;s made or what its mental life is like. </p>
<p>KURZWEIL: John Searle, by the way, thinks that a snail could be conscious if it had this magic property, which we don&#8217;t understand, that causes consciousness. And when we figure it out, we may discover that snails have it. That&#8217;s his view. So I do think that&#8212;</p>
<p>GELERNTER: Do you think it&#8217;s inherently implausible that we should need a certain chemical to produce a certain result? Do you think chemical structure is irrelevant?</p>
<p>KURZWEIL: No, but we can simulate chemical interactions. We just simulated the other day something that people said would never be able to be simulated, which is protein folding. We can now take an arbitrary amino acid sequence, actually simulate it, and watch it fold up, and it&#8217;s an accurate simulation (multiple conversations; inaudible)</p>
<p>GELERNTER: You understand it, but you don&#8217;t get any amino acids out. As Searle points out, if you want to talk Searlean, you can simulate photosynthesis and no photosynthesis takes place. You can simulate a rainstorm and nobody gets wet. There&#8217;s an important distinction. Certainly you&#8217;re going to understand the process, but you&#8217;re not going to produce the result&#8212;</p>
<p>KURZWEIL: Well, if you simulate creativity, you&#8217;ll get real ideas out. </p>
<p>BROOKS: Next&#8212;sure.</p>
<p>M: So up until this point, there seems to have been a lot of discussion about just software, or just a human, or whatnot. But I&#8217;m curious about your thoughts on more of a gray area, if it&#8217;s possible: if we in some way augment the brain with some sort of electronic component, or somebody has some sort of operation to add something to them. I don&#8217;t think it&#8217;s been done yet today, but is it possible to have what you would consider a fully conscious human take part of the brain out, replace it with something that does a similar function, and have the person obviously still survive? Is that person conscious? Is it (inaudible)?</p>
<p>KURZWEIL: Absolutely. And we&#8217;ve done things like that, which I&#8217;ll mention. In fact, that is the key application, or one key application, of this technology. We&#8217;re not just going to create these superintelligent machines to compete with us from over the horizon. We&#8217;re going to enhance our own intelligence, which we do now with the machines in our pockets&#8212;and when we put them in our bodies and brains, we&#8217;ll enhance our bodies and brains with them. </p>
<p>But we are applying this to medical problems. You can get a pea-sized computer placed in your brain at the site of biological neurons (inaudible) Parkinson&#8217;s disease. And in fact, the latest generation allows you to download new software to your neural implant from outside the patient, and it replaces the function of a corpus of biological neurons. So now you&#8217;ve got biological neurons in the vicinity getting signals from this computer where they used to get signals from biological neurons, and this hybrid works quite well. And there are about a dozen neural implants in various stages of development, some of which are getting more and more sophisticated.</p>
<p>So right now we&#8217;re trying to bring back &quot;normal&quot; function, although normal human function is in fact a wide range. But ultimately we will be sending blood cell-sized robots into the bloodstream non-invasively to interact with our biological neurons. And that sounds very fantastic, but I&#8217;d point out there are already four major conferences on blood cell-sized devices that can perform therapeutic functions in animals&#8212;we don&#8217;t have time to discuss all that, but we will&#8212;</p>
<p>BROOKS: Let&#8217;s hear David&#8217;s response.</p>
<p>GELERNTER: When you talk about technological interventions that could change the brain, it&#8217;s a remarkable, fascinating topic, and it can do a lot of good. One of the really famous instances of that is the frontal lobotomy, an operation invented in the 1950s, or maybe the late 1940s. It made people feel a lot better, but somehow it didn&#8217;t really catch on, because it bent their personality out of shape. So the bottom line is that not everything we do, not every technological intervention that affects your mental state, is necessarily going to be good. </p>
<p> Now, it is a great thing to be able to come up with something that cures a disease, that makes somebody feel better. We need to do as much of that as we can, and we are. But it&#8217;s impossible to be too careful when you fool around with consciousness. You may make a mistake that you will regret. And lobotomies cannot be undone.</p>
<p>BROOKS: I&#8217;m afraid this is going to be the last question.</p>
<p>M: How close do the brain simulation people think they are to the right architecture, and how do they know? You made the assertion that you don&#8217;t need to simulate the neurons in detail, and that the IBM people are simulating a slice of neocortex, and that&#8217;s good. And I think that is good, but do they have a theory that says this architecture is good, this architecture is not good enough? How do they measure it?</p>
<p>KURZWEIL: Well, say, in the case of the simulation of a dozen regions of the auditory cortex done on the West Coast, they&#8217;ve applied sophisticated psychoacoustic tests to the simulation, and they get very similar results to those from applying the same tests to human auditory perception. There&#8217;s a simulation of the cerebellum where they apply skill-formation tests. It doesn&#8217;t prove that these are perfect simulations, but it does show they&#8217;re on the right track. The overall performance of these regions appears to match, on the kinds of things that we can measure, what the biological versions do. And the scale, sophistication, and resolution of these simulations is scaling up.</p>
<p> The IBM one, on the cerebral cortex, is actually going to do it neuron by neuron and ultimately at the chemical level, which I don&#8217;t believe is actually necessary&#8212;ultimately, to create those functions, once we learn the salient algorithms, we can implement them using our computer science methods more efficiently. But it&#8217;s a very useful project for really understanding how the brain works.</p>
<p>GELERNTER: I&#8217;m all in favor of neural simulations. I think one should keep in mind that we don&#8217;t think just with our brains; we think with our brains and our bodies. Ultimately, we&#8217;ll have to simulate both. And we also have to keep in mind that unless our simulators can tell us not only what the input/output behavior of the human mind is, but how it understands and how it produces consciousness&#8212;unless they can tell us where consciousness comes from&#8212;it&#8217;s not enough to say it&#8217;s an emergent phenomenon. Granted, but how? How does it work? Unless those questions are answered, we don&#8217;t understand the human mind. We&#8217;re kidding ourselves if we think otherwise.</p>
<p>BROOKS: So with that, I think I&#8217;d like to thank both Ray and David. [applause]</p>
]]></content:encoded>
			<wfw:commentRss>https://www.writingsbyraykurzweil.com/gelernter-kurzweil-debate-machine-consciousness-2/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Interview: How much do we need to know?</title>
		<link>https://www.writingsbyraykurzweil.com/interview-how-much-do-we-need-to-know</link>
		<comments>https://www.writingsbyraykurzweil.com/interview-how-much-do-we-need-to-know#respond</comments>
		<pubDate>Mon, 10 Jul 2006 11:10:44 +0000</pubDate>
								<dc:creator>Bill Joy</dc:creator>
		
		
				<category><![CDATA[essays]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[To limit access to risky information and technologies by bioterrorists, we should price catastrophe into the cost of doing business, rather than regulate things, says Bill Joy. Things judged to be dangerous would be expensive, and the most expensive would be withdrawn.]]></description>
			<content:encoded><![CDATA[<p><i>Originally published in </i><a href="http://www.newscientist.com/channel/opinion/mg19025562.000-interview-how-much-do-we-need-to-know.html;jsessionid=ENMAAGMPKNIO" target="_blank">New Scientist</a><i> June 17, 2006. Reprinted with permission on KurzweilAI.net July 10, 2006.</i></p>
<p><i>Interview by Gregory T. Huang</i></p>
<p>Technology doesn&#8217;t make everyone happy. Just ask computer scientist Bill Joy, who has pioneered everything from operating systems to networking software. These days the Silicon Valley guru is best known for preaching about the perils of technology with a gloom that belies his name. Joy&#8217;s message is simple: limit access to information and technologies that could put unprecedented power into the hands of malign individuals (what is sometimes called asymmetric warfare). He is also translating that message into action: earlier this year, his venture-capital firm announced a $200 million initiative to fund projects in biodefence and preparation for pandemics. Gregory T. Huang caught up with Joy at the recent Technology Entertainment Design conference in Monterey, California.<span id="more-80947"></span></p>
<p><i>Do you think your fears about technological abuse have been proven right since your </i>Wired<i> essay?</i></p>
<p>When I wrote that essay in 2000, I was very concerned about the potential for abuse. Throughout history, we dealt with individuals through the Ten Commandments, cities through individual liberty, and nation states through mutual non-aggression plus an international bargain to keep the peace. Now we face an asymmetric situation where technology is so powerful that it extends beyond nations to individuals &#8212; some with revenge on their minds. On 11 September 2001 I was living in New York City. Our company had a floor in a building that went down. I had a friend on a plane that crashed. That was a huge warning about asymmetric warfare and terrorism.</p>
<p><i>Did we learn the right lesson?</i></p>
<p>We can&#8217;t give up the rule of law to fight an asymmetric threat, which is what we seem to be doing at the moment, because that is to give up what makes us a civilisation. A million-dollar act causes a billion dollars&#8217; damage and then a trillion-dollar response that makes the problem worse. September 11 was essentially a collision of early 20th-century technology: the aeroplane and the skyscraper. We don&#8217;t want to see a collision of 21st-century technology.</p>
<p><i>What would that sort of collision look like?</i></p>
<p>A recent article in Science said the 1918 flu virus is too dangerous to FedEx: if you want to work on it in a lab, just reconstruct it yourself. We can do this because new technologies tend to be digital. You can download the gene sequences of pathogens from the internet. So individuals and small groups super-empowered by access to self-replicating technologies are clearly a danger. They can cause a pandemic.</p>
<p><i>Why do pandemics pose such a huge danger?</i></p>
<p>AIDS is a sort of pandemic, but it moves slowly. We don&#8217;t have much experience with the fast-moving varieties. We are not very good as a society at adapting to things we don&#8217;t have gut-level experience with. People don&#8217;t understand the magnitude of the problem: in terms of the number of deaths, there&#8217;s a factor of 1000 between a pandemic and a normal flu season. Public policy has not been constructive, and scientists continue to publish pathogen sequences, which is really quite dangerous.</p>
<p><i>Why is it so dangerous?</i></p>
<p>If in turning AIDS into a chronic disease, or making cocktails of antivirals for flu, or using systems biology to construct broad-spectrum cures for many diseases, we make the tools universally available to people of bad intent, I don&#8217;t know how we will defend ourselves. We have only a certain amount of time to come to our senses and realise some information has to be handled in a different way. We can reduce the risk greatly without losing much of our ability to innovate. I understand why scientists are reluctant, but it&#8217;s the only ethically responsible thing to do.</p>
<p><i>So more technology is making the problem worse?</i></p>
<p>Unfortunately, yes. We need more policy.</p>
<p><i>What would that look like?</i></p>
<p>We could use the very strong force of markets. Rather than regulate things, we could price catastrophe into the cost of doing business. Right now, if you want approval for things, you go through a regulatory system. If we used insurance and actuaries to manage risk, we might have a more rational process. Things judged to be dangerous would be expensive, and the most expensive would be withdrawn. Drugs would make it to market on economic estimates of risk, not regulatory evaluations of safety. This process could also be used to make companies more liable for the environmental consequences of their products. It&#8217;s both less regulation and more accountability.</p>
<p><i>How are you combating the threat of pandemics?</i></p>
<p>We recently raised $200 million for biodefence and pandemic preparedness. We have started out focusing on bird flu. We need several antivirals, better surveillance, rapid diagnostics and new kinds of vaccines that can be manufactured quickly. If we fill these gaps, we can reduce the risk of a pandemic.</p>
<p><i>Do other technological advances excite you?</i></p>
<p>I have great confidence that we will extend the limits of Moore&#8217;s law to give us another factor of 100 in what computer chips can do. If a computer costs $1000 today, we can have that for $10 in 2020. The challenge is: will we develop educational tools to take advantage of such devices? That&#8217;s a great force for peace.</p>
<p>Another area that gives us hope is new materials. The world&#8217;s urban population is expected to more than double to 6 billion this century. We need clean water, energy and transportation. Carbon nanotubes have incredible properties, and can be applied to develop fuel cells, make clean water, or make ethanol for electric-powered transport. My company has dedicated $100 million to this.</p>
<p><i>How do you see the increasing connectedness of human societies affecting innovation?</i></p>
<p>It&#8217;s diffusing ideas at an incredible rate. You can use communications and search tools and find out incredible things. You see companies doing interesting things, and you can find out huge amounts very quickly. We can write a worldwide research briefing paper in an hour if we shut the door and unplug the telephone. That&#8217;s something you couldn&#8217;t do before.</p>
<p><i>What&#8217;s the downside?</i></p>
<p>It&#8217;s like putting a stick in a hornet&#8217;s nest. We have religious and secular societies coming into contact, pre-Enlightenment values conflicting with Enlightenment values. It will be a messy process of change. Technology has brought western pop culture to the rest of the world. I&#8217;m not a fan of it, but the values it has brought to the world actually offend people in cultures that have been around for longer than my particular set of world views.</p>
<p><i>Will the human race survive the next 100 years?</i></p>
<p>We have to make it through a pandemic to understand the nature of that sort of threat. Whether we do that before we unleash the technology, I&#8217;m not sure. Either way, I don&#8217;t believe we will become extinct this century, though we could make a pretty big mess. I hope we can do some sensible things. It is not enough to do great science and technology; we need sensible policy. We still think that if we find true things and publish them, good things happen. We should not be that naive.</p>
<p><i>If you could ask the god of technology one question, what would it be?</i></p>
<p>It seems that a perfect immune system is a disadvantage. If you are perfectly immune, you cannot evolve. A lot of evolution occurs because of selective pressure that your perfect immune system would prevent. This would leave the abusers of biotechnology with the advantage over the defenders, because society needs to be vulnerable so it can evolve. My question is: is that true? Because if it is, it would prove that we had better limit access to some information. It would mean not only that we cannot make a perfect immune system, but that it would be a bad idea to try.</p>
<p><i>&copy; 2006 New Scientist</i></p>
]]></content:encoded>
			<wfw:commentRss>https://www.writingsbyraykurzweil.com/interview-how-much-do-we-need-to-know/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Corporate Cornucopia: Examining the Special Implications of Commercial MNT Development</title>
		<link>https://www.writingsbyraykurzweil.com/corporate-cornucopia-examining-the-special-implications-of-commercial-mnt-development</link>
		<comments>https://www.writingsbyraykurzweil.com/corporate-cornucopia-examining-the-special-implications-of-commercial-mnt-development#respond</comments>
		<pubDate>Wed, 24 May 2006 13:47:38 +0000</pubDate>
								<dc:creator>Michael Vassar</dc:creator>
		
		
				<category><![CDATA[essays]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[Molecular nanotech is the largest commercial opportunity of all time. But it may also create severe roadblocks and risks, including terrorism, unstable arms races, competitive pricing, restrictive patents, import opposition, economic disruption, and out-of-control AI.]]></description>
			<content:encoded><![CDATA[<p><i>Originally published in </i>Nanotechnology Perceptions: A Review of Ultraprecision Engineering and Nanotechnology, Volume 2, No. 2<i>, May 8, 2006. Reprinted May 24, 2006 by KurzweilAI.net.</i></p>
<p>The development of molecular nanotechnology (MNT) promises to lead rapidly to cheap superior replacements for a large majority of durable goods, a substantial fraction of all non-durable goods, <i>all</i> existing utilities, and some services. For this reason and due to the relatively low expected cost of developing nanofactories,<sup>1</sup> MNT represents the largest commercial opportunity of all time. Unfortunately, the very size of the opportunity &#151;combined with its extreme suddenness, military significance, potential for disruption of existing institutions, and ease of duplication&#151;creates certain severe complications that lead to difficulties in capturing the value created.<span id="more-80942"></span></p>
<p>MNT also has the potential to affect the timeframes and severities of a number of major global risks, such as terrorism, emergent disease, global warming, omnicidal war, and human extinction through competition from either intelligent or unintelligent robotic competitors. There are therefore important non-commercial motivations for preventing its unrestricted utilization. As a result of these difficulties, and of the intrinsic uncertainty associated with any particular attempt to develop MNT, commercial development of MNT is likely to be much less rapid than would be predicted from a simple consideration of the value to be created, the relevant time horizon, and a risk-adjusted discount rate.</p>
<p>Despite this, it remains highly probable that MNT will first be realized by a commercial project, for the simple reason that probabilistic priors so strongly favor commercial development of new technologies. A slew of militarily relevant technologies were developed by the US, German, and Russian governments during the Second World War and in its aftermath, but that was at a time when the commercial and public sectors were far more fully integrated than they are today and when the external pressures forcing governmental efficacy were greater. By contrast, over the last few decades, virtually every significant technological development has been commercial in origin (or even recreational, e.g. the Open Source movement and SpaceShip One) rather than public. Governmental R&amp;D initiatives, such as those aimed at curing cancer and AIDS and at developing space travel and fusion power, have tended to fail totally or almost totally during the past 30+ years.</p>
<p>Given that an important subset of possible scenarios is driven by commercial development, it seems prudent to examine in some detail the major features of most commercial scenarios and to identify the ways in which developers may experience unique difficulties distinct from those associated with the development of other products, and the ways in which they may manage those difficulties. This paper will attempt to do that, examining the probable implications of both relatively open and relatively secretive development programs in the event of successful development of MNT. It will be assumed that the developers are highly rational and informed, and that they are attempting to maximize profit in the relatively short term while avoiding the most serious risks of MNT. Development will be assumed to occur within the next 20 years, against the backdrop of a world politically and technologically fairly similar to our own, and with a historically typical gap of a few years between the initial development of the technology and its successful imitation or implementation by competing projects. It also will be assumed that the more powerful MNT applications, such as those in intelligence amplification, neuroscience, extremely powerful distributed robotic systems, and artificial intelligence (AI), will take some time to emerge even given nanofactories and massive funding.</p>
<h2>Part 1: Competitive Strategy</h2>
<h3>Pricing</h3>
<p>The simplest and most traditional of the problems facing MNT developers is competitive pricing. Setting the prices of MNT goods close to the cost of production provides little profit with which to expand or compensate for risk undertaken, while setting prices too high threatens both to unnecessarily reduce consumption below the optimal level and to draw both legal and illegal competitors into the field. In addition, given the number of industries in which MNT products are likely to compete and the political clout of many of those industries, either high or low prices could motivate antitrust concerns. Theoretically, a higher price is indicative of a monopoly while a lower price indicates competition, but a lower price will also lead to more successful and rapid competition with existing companies and to greater market share, and this could be seen as evidence of monopoly status or of anticompetitive tactics.</p>
<p>Motivating competitors to develop MNT is probably the most serious risk associated with high pricing. In order to minimize this risk it will be necessary for prices to be relatively low, and also for expenses to appear as great as possible. It will be particularly desirable (from the commercial developer&#8217;s point of view) that the <i>apparent</i> cost of developing MNT be as great as possible, as this is the expense that can most easily be inflated. One way in which this can be done is to publicly spend as much money as possible on research ostensibly aimed at developing nanofactories over a fairly long period of time <i>after</i> nanofactories actually have been developed. Money can soundly be borrowed in order to fund this research, even at high interest rates, due to the certainty of eventual success. Meanwhile, profits can be generated via the sale of supposedly incremental results of the nanofactory research such as gem quality or better diamonds, doped silicon computers modestly more powerful than those otherwise available at a given price, and inexpensive carbon nanotubes.</p>
<p>Once the nanofactories are publicly acknowledged to exist, the apparent low-hanging fruit associated with the supposed development trajectory will be depleted, and a substantial fraction of the global pool of technical experts plausibly capable of relevant work will have already been recruited, discouraging imitation. In addition, the creditors will constitute a class of stakeholders in the new technology who are nonetheless integrated into the existing economic system. Loan repayment will contribute to the justification of profit to the public and to the government. In general, the public appears to accept the legitimacy of high profit margins most readily when the product in question is an extremely expensive luxury, an extremely inexpensive everyday item, or a new product with an explicit need to amortize development costs. It is important to point out that it is excessive profit <i>margins</i>, not excessive profits, that are usually considered objectionable. For this reason, actual profits will be greater if expenses can be increased, because the dollar value of a 200% markup is larger on a product costing $100 to produce than on one costing $10. Wasteful expenditures on supposed inputs also can create stakeholders.</p>
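The markup arithmetic above can be made concrete with a minimal sketch; the function name and figures here are illustrative only, not the author's:

```python
def markup_profit(unit_cost, markup_percent):
    """Absolute per-unit profit from applying a percentage markup to a unit cost."""
    return unit_cost * markup_percent / 100.0

# The same 200% markup yields ten times the dollar profit on the costlier product.
print(markup_profit(100, 200))  # 200.0 per unit
print(markup_profit(10, 200))   # 20.0 per unit
```

At an identical percentage margin, the product with the inflated cost base generates ten times the absolute profit per unit, which is the incentive the essay identifies for making expenses appear as great as possible.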
<p>Like software, restricted versions of MNT products can easily be designed and can be sold for lower prices than unrestricted versions. For instance, less expensive copies of a given product can be sold to less wealthy countries, or even less wealthy regions within a country. This might be accomplished without competing with the products sold to wealthier regions by installing GPS or inertial locators to monitor product location and disable them from functioning outside of their licensed area. In this manner, profitability can be maximized by selling to all potential customers for prices that constitute a reasonable fraction of their willingness to pay. With built-in biometric sensors, some MNT devices could even be assigned prices based on the personal characteristics of their purchaser. In addition to maximizing profit, this sort of strategy should greatly reduce any humanitarian concerns regarding the distribution of MNT products. The public generally accepts the existence of restricted software without resentment. Nanostructured physical objects can be made more difficult to hack than either software or contemporary hardware, so the restrictions on use built into MNT products can be more robust than those built into today&#8217;s printers or software.</p>
<h3>IP Protection</h3>
<p>The most likely outcome of patenting nanofactories in any given country would be widespread patent violation both by other countries and by many criminal organizations. This would probably be followed by the slew of problems<sup>2</sup> that long have been predicted to accompany uncontrolled MNT development, such as unstable arms races, malicious grey goo, and massively oppressive MNT empowered governments. In addition, pirate nanofactories would be used to build nanofactories of unpatented design, which then would be patented.</p>
<p>All this does not mean that IP law cannot contribute some value to an MNT &#8220;first mover.&#8221; A large number of patents of variable scope can be produced to restrict the products that a competing MNT developer can produce legally. Patents on key components can obstruct possible commercial efforts to develop competing nanofactories without revealing too much about the workings of existing nanofactories. In a field as large and as unexplored as nanotechnology, there surely will be room for a number of extremely broad patents that can be used to slow down competitors. In such a fast moving field, even a patent that delays competition by a few months before being overturned could be extremely valuable. Potential patents might include mechanochemistry, carbon mechanochemistry, self-replicating machines, self-replicating programmable productive systems, diamondoid nanoscale machines, and more, but should be chosen to avoid revealing too much about how a nanofactory can be built.</p>
<p>Governments may attempt to force developers to share MNT production capabilities or may simply steal such capabilities. When high-level officials finally begin to distinguish between reality and science fantasy and to recognize the technology&#8217;s potential, they rightly will see MNT as a national security issue. However, preventing simple theft is relatively easy. Nanofactories can be made large enough that they can&#8217;t be stolen covertly and/or lost. They can also be networked wirelessly or otherwise equipped for easy inventory. It would add little complexity to equip all nanofactories with oxidative self-destruction systems. The best way to resist forceful interrogation is probably to not have any individuals within the company who know everything or almost everything that is needed in order to build a new nanofactory, and to hold out the threat of not doing business with countries that violate the company&#8217;s rights. Directly threatening a country like the United States in this manner would be unwise. Rather than doing that, an indirect threat could be delivered by setting up production facilities in some high political risk countries with little respect for private property. If this is done, it is likely that one of these countries will attempt to steal MNT production capabilities prior to any developed country doing so. If the company responds by destroying all stolen assets, not sharing information, and refusing to trade with that country, this will deter other nations from repeating their mistake, at least in the short term. The desire not to imitate the behavior of disreputable states will be another incentive for developed countries to respect the rights of the developing company.</p>
<p>Throughout the early commercialization of MNT, the continual borrowing of as much money as possible will be a major imperative. This is true for several reasons. The first of these is that it is important to retain control of the company and associated technology in order to implement a relatively long-term plan rather than one that might maximize shareholder profits in the very short term, for which reason stock should not be sold to raise capital. The second is that over the first decade or so, the scale of operation associated with the developing company will be continually increasing at such a rate as to make even ludicrous debts from a few years back trivial. The third reason is to acquire the previously mentioned sets of justificatory expenses and of influential stake-holding creditors. A fourth reason will become relevant later in development, once the potential of MNT is well established and the broader public and public intellectuals become hostile. Hostility is a nearly certain early result of any massive technological disruption regardless of the quality of life improvements it makes available (aging reversal technologies may turn out to be an exception to this generalization, since their psychological impact will be unprecedented in scope and is not easily predicted, but thus far even aging reversal seems to fit this generalization). As hostility develops in response to massive technological impact, it may be both possible and desirable to slow governmental activity by reducing governmental access to funds. This might be accomplished by competing with the government to drive up the price of debt and by releasing products which make an attractive lifestyle achievable on the interest payments from a moderate amount of high yield debt, reducing the size of the work-force and thus increasing the cost of running a large bureaucracy. Such actions should be undertaken gradually so that they are not interpreted as an attack on borrowers and bureaucracies, as that would lead to escalation. By raising both the interest rate and the wages of skilled labor, potential competitors can be further prevented from developing MNT independently.</p>
<h3>Dealing with Opposition</h3>
<p>Due to the potential for economic and social disruption, some countries may refuse to allow the import of MNT-derived products. This is not a serious problem for an MNT producer. A general boycott by all major nations is extremely unlikely, especially considering the magnitude of the benefits that MNT will make available. Tariffs would take some time to put into effect, and whatever nation stood to improve its trade balance via MNT exports would petition the WTO for tariff elimination. In addition, MNT can be used to produce traditional capital for the production of non-MNT products.<sup>4</sup></p>
<p>One of the earliest products released by an MNT developer is likely to be inexpensive hydrocarbons for fuel and other applications. These can be made by harvesting solar energy over the oceans, using it to electrolyze water, and using the hydrogen to reduce atmospheric or other (limestone?) CO2. The machinery for all of this can be produced quickly in any quantity with MNT. Floating solar platforms can be made with either hydrocarbon production or MNT manufacturing capabilities. The manufacturing centers should be designed to utilize the hydrocarbons as feedstock and solar energy as a power source in order to rapidly produce more platforms of both types. Design and control for such platforms should be non-problematic, and their products could be sold on the global petrochemicals and natural gas market. In this case, there would be no practical difference between a country that chooses to purchase oil from traditional sources and one that purchases MNT-derived oil, as both would apply demand to the same pool of global production and impact the same global price, making boycotts ineffective unless they were extremely broad. Hydrocarbon storage facilities probably will have to conform to all normal laws regarding the storage and transport of hydrocarbons, complicating implementation somewhat. However, simply violating regulations and hiring legal teams to delay the imposition of fines until they are no longer relevant may be an acceptable strategy for faster implementation if the regulatory framework would otherwise slow development overly much.</p>
<p>While MNT will accelerate the development of new products, it will reduce the time required to build new capital even more. As a result, production capabilities sufficient to satisfy global petrochemical demand should take much less time to develop than designs capable of competing in a wide variety of industries. The revenue generated via the initial products will be an important part of what enables the rapid development of newer products.</p>
<p>The revenue from this early activity will be more than sufficient to hire as many researchers and administrators as can be productively utilized to develop new MNT designs. Integrating so many new employees without critical security risks will be a difficult problem, but it should be a manageable one as there are already many companies that face similar difficulties. At this point, the MNT developers also should have enough money to purchase both public opinion and political influence in so far as these goods can be rapidly purchased.</p>
<p>In order to minimize opposition it will be critically important for the developers not to be seen as a non-competitive monolith. This will be particularly difficult if MNT development is overt as opposed to remaining a secret, but it is probably possible under either secret or public development. The company may be best able to avoid conveying the impression of monopoly if it carefully and legally shares its technology with a few select partners who thoroughly appreciate the dangers associated with MNT (especially the critical dangers of uncontrolled AI and unstable arms races), the need to avoid them, and the consequent need to avoid further disseminating the basic technology. If these partners compete in the production and sale of relatively safe MNT products, it is possible that the market generally will be seen as saturated and further entrants will be discouraged. This decision would constitute a non-secretive alternative to the earlier prospect of inflating the apparent cost and difficulty of MNT development, although both strategies could be pursued sequentially. In the case of such a strategy, as in contemporary oligopoly arrangements, branding will become an extremely important part of profit maximization. A more trusted brand probably would be able to charge a substantial premium, especially for nanomedical products and services once those are developed.</p>
<h3>First Mover Advantages</h3>
<p>A large fraction of the profitability associated with nanomedicine, and to a lesser degree that associated with any new MNT product, is likely to occur during the period of initial release. This is true because MNT products often will solve problems cleanly and completely, leaving no significant vestigial market. For instance, one of the first novel nanomedical devices produced using MNT is likely to be a powder of biocompatible glucose oxygen fuel cells with internal temperature sensors to avoid excess waste heat and a binding site for later removal from the bloodstream. The purpose of this device would be simply to burn fuel, producing waste heat. From the public&#8217;s perspective it will be a rapid weight loss infusion capable of safely producing one to two pounds of weight loss per day (or several times that in extremely cold weather or while the body is immersed in cool water). Once this system is safely developed and successfully marketed, the market effectively will be gone. People may continue to become overweight, but the world&#8217;s accumulated pool of overweight people willing to use nanomedicine will be expended. Those overweight people who are reluctant to use new medical technologies will surely still prefer, when they eventually decide to use one, to use the established brand even if it costs somewhat more than its competition, as its safety will have been more thoroughly established. Furthermore, later nanomedical devices will incorporate the weight loss function as a mere side effect of their other capabilities, making this design obsolete. In other fields, the advantages from safety, branding, superior R&amp;D, and expansion into a technological frontier will not favor the first mover as completely, but it is a basic economic result that, all else being equal, oligopoly quantity competition leaves first movers with dominant market share even in the long run.<sup>3</sup></p>
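As a rough plausibility check on the one-to-two-pounds-per-day figure (my arithmetic, not the author's), assume the common estimate that body fat stores about 3,500 kcal per pound; oxidizing one pound per day then dissipates roughly

```latex
P \approx \frac{3500\ \text{kcal/day} \times 4184\ \text{J/kcal}}{86\,400\ \text{s/day}} \approx 170\ \text{W}
```

of continuous extra heat, on top of a resting metabolic output of roughly 100 W. Nearly tripling the body's heat load is indeed significant but survivable, which is why the internal temperature sensors and the cold-weather and cool-water caveats in the design above are not incidental details.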
<p>Given the above result, are competing MNT producers likely to engage in the alternation of <i>de facto</i> collusion and quantity or monopolistic competition typical of contemporary oligopolies? The simple answer is yes, at least in the short term, as this behavior maximizes short run profits for all competitors under the constraints imposed by antitrust law and prisoner&#8217;s dilemmas. However, MNT will be associated with novel productive powers that may call the default assumption into doubt. For instance, the traditional MNT vision of home manufacturing, the software metaphor of unlimited manufacturing capacity matching production precisely to demand, and even the growing paradigm of online agent-based purchasing all suggest price competition as a plausible alternative. Still, there seem to be few large examples of actual price competition in the world of retail, even where they would be most expected, such as in the sale of bottled water, public domain IP, internet retailing, and the like. Even freelance service work such as housekeeping, therapy, tutoring, and most other examples of work by the self employed are far from perfectly competitive, with agencies matching consumers to producers and keeping large commissions and with many producers spending more time searching for clients than working, and demanding far more for an hour of work than the value of an hour of their time.</p>
<p>By reducing the scale of manufacture, in addition to improving the ability to match supply to demand, MNT and nanoblock<sup>4</sup> assembly seem likely to produce a world where retail is relatively more important and wholesale less. Wal-Mart or its successor still may sell MNT-built products, but if they do, they probably will sell them primarily through large factory/grocery stores rather than from giant wholesale stores, as the combination of a nanofactory with virtual reality environments for trying out products will greatly reduce the necessary floor space and inventory space. It is also reasonable to suggest that members of a much wealthier society will be less inclined to travel substantial distances in order to shop, and less likely to accept uninteresting work for under ten dollars an hour. Smaller stores that offer a better atmosphere and knowledgeable service thus will have both more customers and less difficulty finding employees. As a result, brands will be easily differentiated and price competition will be even less prevalent than it is today.</p>
<p>The sale of energy will provide the first MNT mover with yet another advantage over later competitors. If claims can be established to solar energy streams sufficient to satisfy global energy demand, and environmental laws can be passed to restrict the utilization of solar energy streams other than those initially tapped, competitors may have to pay a larger amount for solar energy inputs than first movers.</p>
<p>At this point, it is still far from clear whether the developers of MNT will or should choose to publicize their achievement. Their decision probably will be driven in part by the nature of the company that makes the final enabling innovations, and in part by the intensity of the technological competition. If MNT is developed in a world where it is still widely considered a retro-futurist fantasy, competition will be much less intense than in one where it is developed as the result of intense international competition. I personally expect a scenario reminiscent of that accompanying the birth pangs of the airplane, i.e. many competitors all over the world but no very large and competent concerted efforts aiming at a technology that was still taken by consensus to be impossible despite a technological infrastructure that was making its achievement noticeably less difficult every year. In such a scenario, a private company that wishes to utilize MNT productive capabilities will be able to do so rather overtly without creating widespread awareness of what is happening. Inexpensive solar panels are surely within the range of what they can publicly produce, but rapidly deployed macroscale floating solar oil factories are not. In a world where MNT is seen as completely discredited, or in one where ubiquitous but mundane &#8220;nanotechnology&#8221; had made Drexlerian predictions seem as quaint as those once made about nuclear energy or space travel, even the solar oil factories might not lead to widespread correct conclusions without an accurate explanation; conversely, if MNT was the 21<sup>st</sup> century&#8217;s space race, there would be little point in secrecy and every reason to develop and market all important possible applications as quickly as possible.</p>
<p>Unfortunately, it is hard to imagine a world where the replacement of traditional industry by molecular manufacturing is taken for granted by everyone even moderately future-oriented in the same way that today all such people see as inevitable the digital replacement of analogue film-making, Chinese dominance of durable goods manufacture, or the transition to HDTV. The economic and political havoc that would be expected to result from a widespread belief in truly radical near-future change is difficult to calculate, and might even be sufficient to make such a prophecy self-preventing. For this reason among others, it is fair to say that even weeks after the development of MNT is announced, the majority of investors still will not know about it. Even those who do will probably understand it less well than today&#8217;s typical science fiction author, and will thus not base any informed investment decisions on their knowledge of MNT. It is also easy to imagine a near-future world filled with constant inaccurate claims of MNT breakthroughs, such that accurate information would not trigger immediate market adjustments upon its release.</p>
<h2>Part 2: MNT Risk Management</h2>
<h3>Economic Disruption</h3>
<p>Much has been made of the large number of jobs that might be eliminated with the advent of molecular manufacturing. If all or nearly all jobs were to rapidly become unnecessary, the resulting economic disruption would not necessarily cause the major hardship that some have feared. However, most work is not associated with the production of products that can easily be replaced by MNT. Instead, early MNT products will almost eliminate certain sectors, such as manufacturing; will greatly reduce the need for workers in some others, such as mining, utilities, construction, and transportation/warehousing of goods; will have little direct impact on the demand for work in some fields, such as educational services, management, and food services; and will greatly increase the demand for a few professions, especially information technology and possibly scientific and technical services. Theoretically, capital can be substituted for most varieties of labor, and MNT also will greatly expand the ease of creation of capital while devaluing existing capital, but it will take time for new capital to replace most workers. For instance, in the short term, trash-collecting robots are unlikely, but in the long term, home recycling and incineration units are likely.</p>
<p>I estimate that MNT will make 10% &#8211; 20% of all current US jobs obsolete within a year of development, 20% &#8211; 40% within two years, and in the absence of strong AI will make 60% &#8211; 80% of current work unnecessary within a decade of development, as more powerful tools multiply the capabilities of service workers in fields like waste management and accommodations/food services. Many workers probably will be retained by their employers for months or years after their services are no longer necessary due either to contractual stipulations or simply to slow managerial reaction times. In addition, laws may be passed further restricting the elimination of jobs, but ultimately obsolete industries will disappear even with government life support and will eliminate jobs by closing if they can&#8217;t do so with layoffs.</p>
<p>At the same time that many jobs disappear, so will many workers. Great uncertainty, high discount rates, high interest rates, and novel low cost lifestyle options will provide many workers with strong incentives to leave their jobs and either retire or try to found businesses more suited to the new economy. This will drive the expenses faced by many employers upwards, as noted earlier, but will do little to mitigate the problem of unemployment, as the workers who have the capital to invest and retire are by definition not those most threatened by the loss of their jobs and typically cannot be easily replaced by even larger numbers of inappropriately trained workers.</p>
<p>Most of the neediest workers will be covered by state unemployment insurance, which will have the added benefit of increasing non-discretionary governmental spending. Increases in the duration of unemployment payouts should be lobbied for, but even if these are successful, more will be needed. Further subsidies for the unemployed may be possible through investments in companies (such as MyRichUncle.com) that give loans in exchange for a fraction of the borrower&#8217;s future earnings. However, several million people will still need both money to live on and meaningful work that they are not able to find for themselves. Dealing with those people is not a core business function, but providing low-cost goods to any agencies that show competence in doing so (groups such as Habitat for Humanity, etc.) will probably be a very sound investment in goodwill.</p>
<p>By contrast, although it would be possible to support all of the displaced people or hire them for make-work, spending money directly to do so generally would be expected to aggravate the resentment that was supposed to be mitigated. One of the most important things to do when mitigating resentment is to work hard to fight the impression that people with MNT can do anything and that all remaining problems are therefore their fault. For PR purposes, it is probably best to downplay what the technology is capable of. This also will tend to reduce governmental fear, public paranoia, and pressure to share dangerous technologies with militaries that cannot be trusted with them.
</p>
<h3>Abuse of Novel Capabilities</h3>
<p>The second major class of risk that must be avoided is that associated with intentional abuse. This includes everything from the production of self-replicating robots to rapid military build-ups to universal intrusive surveillance (even, possibly, surveillance of brain activity, and hence of thoughts). The extreme number of potentially disastrous abuses that MNT lends itself to is a very strong argument for making every possible effort either to maintain secrecy regarding MNT techniques or at least to limit access to extremely trustworthy parties. Many other essays in this collection discuss the consequences of failing to maintain secrecy, but for the purposes of this paper it should suffice to assert that, so long as MNT remains tightly controlled, these risks should be manageable.
</p>
<h3>Dangerous Consequences of Excessive Computing Power</h3>
<p>The final and most critical danger associated with MNT is that it will lead to the release of massive computing power and the acquisition of neurological knowledge that will make it easier to develop AI (artificial intelligence) than to control it, leading to a total loss of control and human extinction. It is obviously best to respond to this by being extremely judicious with respect to the distribution of devices for studying the brain, and by limiting the computing power available per dollar to a level significantly greater than that offered by competing companies but far less than what could be made available. It is best if the gap between available MNT computers and traditional computers is great enough to dominate the market and end incremental development of computing power, but small enough not to contribute substantially to reducing the cost of parallel projects aimed at developing MNT or AI. Despite such precautions, MNT development will accelerate AI development in many ways. The most significant of these may be the increased ability to spend time on long-term personal projects resulting from increased personal freedom.</p>
<p>The largest risks are likely to be of internal origin, as some of the thousands of researchers in the company may attempt to evolve an AI on internal nanocomputers. An obvious way to ameliorate this problem is to limit design and production to low-power computers, to dedicated computers for running molecular simulations and designing products, or to computers reserved for other very specific purposes. In the long run, though, this is a stopgap measure. Some strategy must be developed for ensuring that mankind is not accidentally wiped out by an AI. The scope of this problem goes beyond that of this paper, but a good starting place is to assert the desirability of doing whatever is possible to direct global R&amp;D toward the development of technology for making people more intelligent and away from technology for making machines more intelligent.</p>
<p>Ultimately, it does appear that AI can be developed safely and that permanently preventing unsafe AI should be possible. It also appears, however, that the level of intelligence required to develop AI safely is approximately independent of the available level of computing power, while the level required to develop it unsafely decreases with computing power. For this reason, increasing intelligence and reducing available computing power both contribute to risk reduction. Anti-aging technology may also contribute, because it provides a <i>de facto</i> increase in the amount of thought that a person can ultimately apply to any given problem, although its development will be strongly driven by commercial and PR considerations in any event, and thus requires no further justification.</p>
<hr />
<p>1. &#8220;Molecular Manufacturing: What, Why and How&#8221; by Chris Phoenix (<a href="http://wise-nano.org/w/Doing_MM" target="_blank">http://wise-nano.org/w/Doing_MM</a>)</p>
<p>2. See &#8220;Dangers of Molecular Manufacturing&#8221; (<a href="http://www.crnano.org/dangers.htm" target="_blank">http://www.crnano.org/dangers.htm</a>)</p>
<p>3. In price competition, producers compete to sell for the lowest possible price. They choose what price they will sell at and then sell as many as the public demands at that price. In practice, this requires that the company be able to match supply precisely to demand. Economically this is equivalent to perfect competition and eliminates all profit. In quantity competition, producers sell undifferentiated products to wholesalers, setting the quantity sold to maximize profits. As the number of competitors increases this becomes more like perfect competition because each producer has increasingly little incentive to restrict quantity in order to maintain demand. By committing to a particular level of production in advance, earlier entrants can establish equilibria where they sell larger volumes than later entrants. With a linear demand curve, each entrant will sell half the volume of its predecessor. In monopolistic competition, companies sell similar but branded goods and use marketing and reputation to maintain a willingness to pay a premium over the market price for branded products. Branded goods are imperfect substitutes with high cross elasticities of demand, so as the price of one brand increases, consumers gradually switch over to its competition.</p>
<p>4. For an explanation of nanoblock manufacturing, see &#8220;Safe Utilization of Advanced Nanotechnology&#8221; by Chris Phoenix and Mike Treder (<a href="http://www.crnano.org/safe.htm" target="_blank">http://www.crnano.org/safe.htm</a>).</p>
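<p>The sequential quantity-competition result described in note 3 above (each entrant selling half the volume of its predecessor under linear demand) can be illustrated with a small numeric sketch. The demand and cost parameters here are hypothetical, chosen only for illustration; with linear inverse demand, committing to half of the remaining demand margin is each sequential entrant&#8217;s profit-maximizing choice regardless of how many entrants follow.</p>

```python
# Sequential quantity competition under linear inverse demand P = a - b*Q,
# with constant marginal cost c. Parameters are hypothetical.
a, b, c = 100.0, 1.0, 10.0

def sequential_entry(n):
    """Return the quantity committed by each of n sequential entrants."""
    quantities, committed = [], 0.0
    for _ in range(n):
        # Profit-maximizing commitment: half of the residual margin,
        # independent of the number of entrants still to come.
        q = (a - c - b * committed) / (2 * b)
        quantities.append(q)
        committed += q
    return quantities

qs = sequential_entry(4)
print(qs)               # [45.0, 22.5, 11.25, 5.625] -- each entrant sells half
print(a - b * sum(qs))  # 15.625 -- price falls toward marginal cost c = 10
```

As more entrants commit, total output approaches the competitive level and price falls toward marginal cost, matching the note&#8217;s claim that quantity competition comes to resemble perfect competition as the number of competitors grows.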
]]></content:encoded>
			<wfw:commentRss>https://www.writingsbyraykurzweil.com/corporate-cornucopia-examining-the-special-implications-of-commercial-mnt-development/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>From The Enlightenment to N-Lightenment</title>
		<link>https://www.writingsbyraykurzweil.com/from-the-enlightenment-to-n-lightenment</link>
		<comments>https://www.writingsbyraykurzweil.com/from-the-enlightenment-to-n-lightenment#respond</comments>
		<pubDate>Mon, 08 May 2006 19:02:01 +0000</pubDate>
								<dc:creator>Michael Buerger</dc:creator>
		
		
				<category><![CDATA[essays]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[The criminal potentials inherent in molecular manufacturing include powerful new illegal drugs, mass murder via compromised assembly codes, and a "killer virus" crossing out of cyberspace into the physical realm. A criminal-justice futurist examines the possibilities. 
]]></description>
			<content:encoded><![CDATA[<p>On top of my physical desk sits a copy of <i>Pandaemonium: The Coming of the Machine as Seen by Contemporary Observers, 1660-1886</i>, Humphrey Jennings&#146; &#147;imaginative history of the Industrial Revolution.&#148; On my computer desktop are essays by the authors of this volume (and the previous one<sup>1</sup>), the possible precursors of <i>Pan-nano-daemonium: The Coming of the Micro-Machine</i>. <span id="more-80936"></span></p>
<p>In one of those essays, &#147;The Need for Limits,&#148; Chris Phoenix speaks of the Enlightenment in terms of a synergy: enhanced human productivity with machines, partially supporting a philosophical examination of the human condition. Though certainly that, the Enlightenment also was a watershed period when the economic foundations of the European economy changed, and the authority of Revealed Truth was forced to contend with the authority of Rational Thought and its practical cousin, Scientific Inquiry. The shifts in the economy created a massive transformation of social life, from agrarian to urban. The current era has parallels to all of these forces, movements already in play but not yet complete&#8230;and in some cases not fully articulated.</p>
<p>As a peripheral member of a futurists group<sup>2</sup> in my professional field (policing, and more broadly, criminal justice), I have noticed that futurists tend to be concerned with the end results of trends, the state of things ten, twenty, or fifty years from now. By contrast, I am more concerned with the collateral damage we may sustain in the process of getting to those future states from where we are now.</p>
<p>This essay approaches that interstitial state in four sections. The first section looks at the control of the technology; the second, at the criminal potentials inherent in it. Using the template of the Enlightenment, the third section looks at the darker channels of social transformation, particularly the impact on work and social worth. The fourth section draws an admittedly leap-of-faith parallel between the Enlightenment&#8217;s impact on religious authority, and technology&#8217;s impact upon the authority of economic capital and law.</p>
<p>Nanotechnology holds remarkable potential to change the world, but like most recent technologies, it emerges within a larger system of laws, codes of conduct, and social expectations developed for previous capacities. Those mechanisms will shape its emerging uses, possibly retarding or constraining the applications of the technology in undesirable ways. At issue is whether micro-level processing will be merely one more tool (and thus alter our lives incrementally), or a Promethean breakthrough that will alter human existence in profound ways. My interest, as one who stands outside the Halls of Science looking in, tends to center on the possibilities that I can understand from a layman&#8217;s perspective.</p>
<p>Trying to grasp in layman&#8217;s terms the implications of a new and only marginally understood technology leads to a search for analogies, framing the new in terms of the familiar (for good or ill).</p>
<h4>Control</h4>
<p>As a non-scientist, the most salient question for me is, &#8220;When do I get to play with the new toy?&#8221; Given the general limits of corporate use of nanotechnology, the first new toy that will become available to me most likely will be the desktop assembler, or personal nanofactory (PN).</p>
<p>The most knowledgeable members of CRN&#8217;s Global Task Force<sup>3</sup> have engaged in a lengthy discussion about desktop manufacturing and its social consequences, and as of this writing, there seems to be a lack of consensus about the capacity, and thus the full impact, of PNs. If we accept the position of the optimists, and expect fully-capable devices to be available in the not-too-distant future, secondary questions arise: Will the devices be provided in fully-capable form (probably transformative), or will their functionality be curtailed in defense of the corporate profits to be derived from them? If the latter, how will control be maintained? Some answers are perhaps to be found in current trends, since the courts often look to historical analogs in dealing with new issues.</p>
<p>If we posit that desktop manufacturing becomes widely available, as seems inevitable, the dominant forces of the economy have two avenues of recourse to maintain control over the new technology for monetary benefit. The first will be the control of raw materials for molecular assembly, which appears to share the delivery profile of heating fuels in contemporary life. More important is the second area, already suggested by Phoenix: patents and copyrights.<sup>4</sup></p>
<p>The development of nanotechnology is taking place within a corporate nest of ideas and resources (much like licensed computer software development), with some independent researchers and consortia operating on a freeware basis. Molecular assembly at any sort of commercial or individual level will require patterns to guide assembly, and these are likely to be controlled by patents. The majority of patents are almost certain to be controlled by corporate interests. Renewable user site licenses, comparable to commercial software packages, are the most likely form of retaining economic benefit for a corporate entity. One of the possible ways of maintaining economic control over site licenses would be some form of cyber-degradable program that self-destructs after a finite period, and must be renewed. For example, a user could download (or purchase on a one-use or renewable-use media platform) the code that would allow the manufacture of only a certain number of rolls of toilet paper by a personal nanofactory.</p>
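<p>A minimal sketch of such a use-limited, expiring pattern license follows. Every name and the shared-key scheme here are assumptions for illustration, not a description of any real product: the issuer signs the pattern name, use cap, and expiry with an HMAC, and the nanofactory&#8217;s firmware refuses to build once the tag fails to verify, the license lapses, or the cap is reached.</p>

```python
import hashlib
import hmac
import json
import time

ISSUER_KEY = b"demo-shared-secret"  # assumption: key baked into device firmware

def issue_license(pattern_id, max_uses, ttl_seconds):
    """Issuer side: sign a pattern name, use cap, and expiry time."""
    body = {"pattern": pattern_id, "max_uses": max_uses,
            "expires": int(time.time()) + ttl_seconds}
    payload = json.dumps(body, sort_keys=True).encode()
    tag = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"body": body, "tag": tag}

class Nanofactory:
    """Device side: track uses locally and enforce the signed limits."""
    def __init__(self):
        self.uses = {}

    def build(self, lic):
        payload = json.dumps(lic["body"], sort_keys=True).encode()
        expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, lic["tag"]):
            return "rejected: tampered license"
        if time.time() > lic["body"]["expires"]:
            return "rejected: license expired"
        used = self.uses.get(lic["body"]["pattern"], 0)
        if used >= lic["body"]["max_uses"]:
            return "rejected: uses exhausted"
        self.uses[lic["body"]["pattern"]] = used + 1
        return "built"

factory = Nanofactory()
lic = issue_license("toilet_paper_roll", max_uses=2, ttl_seconds=3600)
print(factory.build(lic))  # built
print(factory.build(lic))  # built
print(factory.build(lic))  # rejected: uses exhausted
```

The sketch also exposes the weakness the essay anticipates: the use counter lives in the device, so a compromised factory or an extracted key defeats the cap entirely, which is exactly where black-market pressure would be applied.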
<p>Patents and the fundamental premises of intellectual property are already under challenge, but the challenges have been met with an equally strong legal response anchored in precedent. The courts have handed the reins of control over digital recordings of music to the star-making machinery behind the popular songs through conservative interpretation of intellectual property statutes. The huge profits to be made from licensing technological advancements for industry virtually assures that the field of nanotechnology will be similarly bound.</p>
<p>The most recent Promethean technology, file-sharing, theoretically stood to liberate music from the chains of capital. However, Napster, Kazaa, Grokster, and their lesser clones have lost the legal battles, and the technology has been co-opted by industry giants into new distribution-for-profit mechanisms. Corporations and universities alike write eminent domain over patents and patentable discoveries into their employment contracts, and genetic patterns and discoveries are subject to copyright. Unknown garage bands and the metaphorical garage workshops of independent researchers still can be found beyond the current reach of over-grasping capital, but only until they become good or useful enough to attract attention.</p>
<p>As new genetic &#8220;building block&#8221; discoveries and other chemical compounds are placed under patent, the copyright has become the new castle moat or the new dog in the manger (depending upon one&#8217;s perspective), intended to keep easily-duplicated &#8220;properties&#8221; under the control of their owners. Paradoxically, only those products deemed legitimate are defended by patents and lawsuits so vigorously; illegal products and contraband are not. Corporate interests have far deeper pockets and a true metric for measuring loss and injury. There is greater freedom in the illicit trades, where control of trafficked, harmful artifacts rests with hugely inefficient, underfunded, and understaffed public enforcement agencies.</p>
<p>The exponential explosion of child pornography (and its hate- and racial supremacy-based counterparts) over the Internet is a cautionary tale in its own right. Like the illicit drug trade in the physical world, neither child porn nor hate-mongering is impervious to law enforcement efforts, but the occasional victories of enforcement seem to have little long-term effect on the larger industry or movement. The underground distribution of molecular patterns for assembly might easily be accomplished by the same mechanisms, like the basic virus codes that any script-kiddie can download, tinker with, and release back into the wild.</p>
<p>While the first generation of personal nanofactories probably will come with a fixed number of pre-programmed patterns, market forces will demand versatility. Units will need a capacity to acquire new assembly patterns as they are developed, and there seem to be few options beyond what is now available for computer data. Patterns may be downloaded over hardwired or Wi-Fi networks, or be manually transferred by whatever media replace the current disk drives and flash memory sticks. Each format would spawn a black market of unknown proportions, and with the black markets come the accompanying risks of epidemic and pandemic consequences of criminal use.</p>
<h4>Criminal Potentials</h4>
<p>We should anticipate that a new drug industry will piggyback on the basic molecular assembly phenomenon, and the potential implications for the social fabric are enormous. One of the most desirable benefits of nanotechnology is the precise targeting of therapeutic drugs; however, the same technology will have associated benefits for the illegal pharmacopoeia. While the complexity of the patterns will most likely delay this until a second or third level of PN development, once the basic patterns for psychotropic drugs are understood and the assembly technology is sufficiently enabled, individual drug manufacture is almost certain to become a social tsunami. There are strong analogies to the current methamphetamine epidemic: less than two decades ago, the manufacture of crystal methamphetamine required a well-equipped clandestine lab, a chemist, and a criminal organization for protection and distribution. Today, meth is the new bathtub gin, easily made in any number of Rube Goldberg processes in basements, trailers, campers, garages, or pickup trucks.</p>
<p>Unlike methamphetamine, a micro-assembly drug manufacture process would need only the basic molecular components, not the more elaborate precursor chemicals (like pseudoephedrine) whose control is now part of our anti-drug strategy. That suggests a much greater availability, with corollary hazards of greater social experimentation and conceivably even poly-drug experimentation. The toxic byproducts of meth labs are threats to law enforcement agencies, the families of meth addicts, and neighborhoods. We do not yet know the degree to which micro-manufacture byproducts will be toxic, if at all. </p>
<p>Illicit micro-manufacture may be a mixed blessing. On the one hand, effectively eliminating organized crime from the market may lessen the toxic effects of the war on drugs: the corruption involved with the importation of drugs, and the violence of competing drug markets. At least potentially, even the criminogenic nature of drug dependency may be lessened: since the base materials would likely be the same as for legitimate micro-manufacture, it is less likely that a specialized, higher-priced supply chain would be necessary. The dynamics of that supply chain create additional crimes: violence among criminal enterprises competing for turf, and both personal and property crime committed by addicts desperate to meet the dealer&#8217;s price. Absent the supply market, personally manufactured drugs would be cheaper, and the risks of their creation considerably lower in terms of legal discovery and interdiction. However, potentially free access to addictive and mind-altering substances will almost certainly exacerbate the social problems associated with the addictions and dependencies that result. The same delivery method could surreptitiously create markets for new designer drugs, addictive and involuntarily piggybacked on legitimately disseminated nanoproduct codes. The number of &#8220;what ifs&#8221; that must be resolved before either scenario happens leaves the possibilities within the realm of fiction for now, but if the analogies to the Internet hold true, they must be anticipated as a contingency.</p>
<p>Should we ever develop a drug-based cure for the addictions, of course, it might be to our collective advantage to attempt to disseminate it via whatever outlaw networks and mechanisms develop, the angelic counterpart to the demonic assault-by-micro-drugs of the original scenario. Therapeutic nano-rehab, even at the time of a medical crisis, may not be sufficient to stem the drug crisis, however. Involuntary detoxification has a poor history of neutralizing the psychological dependencies that drive post-sobriety returns to addictive substances. The &#8220;evil twin&#8221; of involuntary detoxification is involuntary addiction.</p>
<p>Lurking beyond therapeutic use is the possibility of totalitarian control using the same methods. The Promethean paradox that attends all new technologies is even more pronounced for those that escape Newtonian-level detection. Medical research is racing ahead in its understanding of neural processes, including the sites in the brain responsible for certain behaviors. As nanomedicine develops capacities for intervening in psychological dependencies or other maladies, it also develops the capacity for inducing mind control or other forms of incapacitation.</p>
<p>Downstream, there is also the potential for mass murder via compromised assembly codes. In the physical world, tainting a medicine with poison can only be done efficiently at the factory source, and even then must bypass or defeat stringent quality control measures. Any other corruption can take place only on a relatively small scale. The introduction of a virulent and unsuspected corruption of a drug assembly code is not so limited. It has more in common with the computer virus than with the Tylenol poisoner. Since black market codes originate and enter the data stream outside the domain of legitimate quality control measures, and the drug-using community is unlikely to give designer drug codes great scrutiny (at least in the initial rounds), &#8220;massassination&#8221; (mass assassination or &#8220;pharmaceutical cleansing&#8221;) via bogus codes is a distinct possibility in a networked distribution system. It would challenge both medical institutions and law enforcement agents. It is admittedly an outside possibility, requiring a rare combination of technological savvy and social alienation, but the world since September 2001 has been dealing with more and more &#8220;one in a bazillion&#8221; scenarios. Nothing should be taken off the table in terms of exploring, and preparing for, unpleasant misappropriation of technology.</p>
<p>To a certain degree, the massassination scenario depends upon the nature of the dissemination of manufacture codes. The most logical assumption is that distribution of product blueprints for desktop manufacturing will be done via the Internet or its successor entity. The current attempts to defeat music and film pirate copies would have serious analogs in any new process that challenged traditional sources of corporate and investment income, especially unrestricted use of molecular assembly technology. The Spy vs. Spy battle between corporate interests and hacktivism will doubtless continue in the nano- and micro-arenas as in cyberspace. Even if controls evolve another way, such as physical distribution of codes on one-use portable media like the flash memory stick, markets for stolen and counterfeit products will emerge, just as the current computer viruses and malware are piggybacked on the legitimate use of the Internet. Beating security encryptions to transform a one-use code into a version capable of electronic dissemination will be an instant challenge for the criminal and black-hat hacking communities.</p>
<p>There are some differences, though. While the viruses and Trojan horses that hector cyberspace have consequences ranging from irritating (the Blue Screen of Death) to life-changing (severe financial crises resulting from identity theft), it is only at the most extreme range that they could be considered life-threatening. Identity theft that labels an innocent citizen as a dangerous criminal has some potential for creating life-threatening situations, but most of the jeopardy is financial or social. Viruses and worms may take down a network or three, or transform the World Wide Web into the World Wide Wait with deleterious consequences for commerce, but they do not directly assault the networks&#8217; users. A corrupted, mislabeled, or maliciously designed micro-manufacture code could &#8220;break the fourth wall,&#8221; crossing out of cyberspace into the physical realm.</p>
<p>The closest parallel in the physical world, the batch of bad heroin that kills users in clusters, does not really provide an accurate analog for a malicious assembly code incident. Relatively few seek heroin under any circumstances, and no one but the most desperate heroin addict would seek out bad heroin (as has happened in some isolated cases). The first &#8220;killer virus&#8221; loose in whatever network provides product codes for PNs will affect hundreds and perhaps thousands of innocents, whether it comes as a terrorist strike or an unintended consequence of a hacking adventure. No one will have to seek it: once in the wild, it will arrive unbidden in the In Box.</p>
<p>Defenses to such a scenario potentially exist, but security measures are one of the most attractive fruits of the Tree of Knowledge. Like contemporary Internet defenses, and the laws passed to outlaw new designer drugs, defensive maneuvers almost always stimulate new offensive attacks. Any combination of zeros and ones, in any transportation medium, can be hijacked and compromised: the track record of Internet security does not bode well for the free and easy commercial transfer of assembly codes for the molecules-up creation of products.</p>
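<p>One defensive maneuver of the kind gestured at above can be sketched in a few lines. This is a hypothetical scheme, not a description of any deployed system: a device that executes only assembly codes whose cryptographic digest appears on a vendor-published manifest will reject any corrupted or maliciously altered copy, since even a one-byte change produces a different digest. Black-market codes circulate precisely because they bypass such a check.</p>

```python
import hashlib

# Vendor side: a published manifest of digests of vetted assembly codes.
# The blueprint contents below are hypothetical placeholders.
manifest = set()

def publish(code_bytes):
    """Add a vetted assembly code's SHA-256 digest to the manifest."""
    manifest.add(hashlib.sha256(code_bytes).hexdigest())

def vet(code_bytes):
    """Device side: accept a code only if its digest is on the manifest."""
    return hashlib.sha256(code_bytes).hexdigest() in manifest

genuine = b"PATTERN aspirin v1: <assembly steps>"
publish(genuine)

print(vet(genuine))                        # True
print(vet(genuine + b"; malicious edit"))  # False: tampering changes the digest
```

The same property that makes this check effective against accidental corruption makes it useless for codes that never pass through the vendor at all, which is the essay&#8217;s point about the illicit channel.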
<h4>Social</h4>
<p>During the Industrial Revolution in England, improved agricultural efficiencies accelerated the process of enclosure, dislocating the rural population no longer needed for raising and harvesting crops. Simultaneous improvements in the production of iron and steel, in weaving, and other areas began to transform cottage industries into factory-based industries, and urbanization rapidly changed the face of the country. The nature of trade shifted from one-off mercantile ventures and royal charters to stable capital for long-term ventures. Factory industries supplanted cottage industries, local artisans, and craft guilds, but the concentration of work in brick-and-mortar containers still left some out of work: the notorious &#8220;surplus labor&#8221; that kept wages low. The expansion of the new manufacturing base managed to absorb surplus labor for some time, until the advent of widespread robotics in the second half of the twentieth century.</p>
<p>A robust generation of personal nanofactories may very well bifurcate commerce into those items that can be manufactured at home and those which still must be purchased through the familiar retail supply chains. While a certain number of jobs will be created around the transportation of raw materials for PNs, they will be paltry in comparison to the jobs the devices displace in manufacture, transport, and sales. Globalization has already imposed a certain amount of social dislocation in the manufacturing sectors; a maturing nanotechnology could very well trigger a long-term social dislocation not seen since the English migration from the newly-enclosed farmlands to the new factories of the Industrial Revolution.</p>
<p>The need for human labor seems to be diminishing at an accelerating rate, the inverse of Ray Kurzweil&#8217;s description<sup>5</sup> of the advance of technology. The shift from human muscle to animal muscle took millennia; from animal to human-guided mechanical, centuries; from human-guided to robotic, decades; and the emergence of computer-directed manufacture seems measured in years if not months. Human society, however, is still anchored in a near-medieval paradigm where social worth is measured by the type and extent of work one engages in. The pecking order of work starts at the menial and dirty level: maid service, animal rendering, and manual labor (the province of illegal immigrants and paroled convicts), comparable to carrying the hod. The next step up is the marginally cleaner and less taxing &#8220;service economy&#8221; of McJobs, which jousts with the decline of blue-collar union-affiliated manufacturing jobs for the next higher rung (salaries and benefits alone give the advantage to unionized jobs, regardless of the decades-long decline in union membership, though the recent perturbations in the airline and automobile industries in particular, and corporate pension plans generally, leave even that in doubt). Above that are the traditional white-collar jobs, but the new aristocracy, sharply defined by the accelerating concentration of wealth in at least American society, is comprised of those who &#8220;let their money work for them,&#8221; the investing class, the owners of the means of production.</p>
<p>Work is devalued in other ways: in the symbolic change of language in which employees are now called &#8220;associates,&#8221; with a presumed stake in the corporate success that is not mirrored anywhere in the reward system; in the stock market rewarding corporate actions that trim the workforce; and in the precipitous erosion of industry-sponsored pensions. Human labor has been, or is in the process of being, effectively decoupled from the part of the economy that is valued. The long-term consequences of this are by no means clear, but the advent of personal nanofactories will not necessarily create a widespread leisure class.</p>
<p>Another of the volumes on my physical desktop is William Julius Wilson&#8217;s <i>When Work Disappears: The World of the New Urban Poor</i>. It deals with the &#8220;left behind&#8221; problem of those under a double burden of low social status and of being dependent upon jobs in industries that have moved elsewhere (to Alabama, to Mexico, or to China). While the analogy to a nanotechnology shift need not be exact, Wilson&#8217;s depictions and analyses offer a powerful warning we may need to confront within a generation: what are the social consequences when there are no alternative employment outlets for surplus labor? American history of the 20<sup>th</sup> century holds small hope that our social attitudes will change rapidly: the unemployed, underemployed, and &#8220;idle&#8221; always have been despised for not somehow rising above the crushing weight of social and economic forces beyond their control. Revolution traditionally has been pointless or counterproductive, and Cité Soleil endures in its multiple forms around the globe despite the potential and promises of globalization, the Green Revolution, and countless other advances.</p>
<p>It is tempting to suggest that nano-communes, with an internal self-sufficiency that leavens the worst effects of industrial-era unemployment, will free the human spirit for more cerebral endeavors. Futures are almost never equally distributed when they arrive, and Utopian dreams of that kind have a history of being measured in months rather than decades or eras. It is difficult to envision the rise of a labor movement comparable to those of nineteenth-century Britain and the United States; it is almost easier to predict the widespread distribution of limited-capacity PNs as a form of social welfare (and social placation of the underclass).</p>
<p>Larger questions arise out of this potential for increased social marginality. The income gap between rich and poor has been widening for more than two decades. Globalization has transformed the American economy, and the household economy has suffered as a result. The degree to which nanotechnology, the Internet, and other technologies accelerate or buffer the social decoupling of work and status is still an undiscovered country. If the cumulative effect is acceleration, we need to anticipate the range of human adaptations that will follow. If one no longer is attached in any meaningful way to an economy and the political ideology that supports it, how long can that authority hold one&#8217;s allegiance? And what are the alternatives if the allegiance cannot otherwise be reinforced?</p>
<h4>Authority</h4>
<p>Although it is a commonplace to think of religious worship as timeless, it actually undergoes periodic major shifts, often triggered by secular events. In the first century of the Common Era, the nature of revelation itself was transformed from the direct presence of a transcendent deity to the interpretation of a written Scripture. For Jews, the destruction of the Holy of Holies in the Second Temple ended the traditional direct contact of the High Priests. For Christians, the sudden absence of their Messiah from the streets of Jerusalem transformed the Judaic concept of messianic return into an entirely new understanding of the relationship between human beings and their Creator.</p>
<p>The struggle for primacy between the Catholic Church and secular governments began soon after Christianity was adopted as the official religion of the Roman Empire. It continued through the Investiture Controversy of the Middle Ages, and was the decisive factor in the success of the Reformation. However, the waning of the dominance of religion was a process begun centuries earlier by resistance (&#8220;heresy&#8221;) within the Church itself, beginning with the Great Schism of the Eastern Orthodox traditions. The purification movements that created monastic orders within the Church presaged the later coming of the Reformation, which relocated purifying reform outside the Church and ended the sole authority of Rome to arbitrate Christian salvation. The secular challenges arising from the Enlightenment remain at play in the contemporary questions of Church and State, Science and Belief, and authority to define human relations. Increasing secularity jousts with the rise of fundamentalism and of sects, undermining traditional &#8220;mainstream&#8221; churches.</p>
<p>Whether the maturing of nanotechnology will impact the continuing struggle of religious authority is unclear. The potential is there, certainly, as the manipulation of matter at the molecular level comes perilously close to &#8220;playing God,&#8221; especially where it might affect what it means to be human. Artificial intelligence, genetic engineering, and cybernetic enhancements pose imminent challenges to the religious understandings of &#8220;human,&#8221; and nanotechnology bids to play a major role within each of those technologies. Public discourse in areas where the definitions of &#8220;life&#8221; are most contended is fueled as much by symbolism and metaphor as by science; misapprehensions and misunderstandings about nanotechnology may well be fuel for new battlefronts in what has been dubbed &#8220;the culture wars.&#8221;</p>
<p>During the Reformation, the monolithic authority of the Church of Rome was transformed into a limited number of Protestant denominations. The existence of each one allowed anyone to resist the Authority of the Catholic Church, and beyond that, the authority of any other church. (The earliest attempt to incorporate a denial of secular authority, under the banner of &#8220;No Bishops, No Barons,&#8221; was ruthlessly suppressed by secular forces, whose worldly enforcement had more immediate clout than the afterlife of religion.) The transformation of monolithic Authority into micro-authority created a market for allegiances. The old concept of rules defined and enforced by a monopolistic Church&#8212;enforced by excommunication, the denial of sacraments, and the resulting condemnation to an infernal afterlife&#8212;gave way to a free market of ideas and selection of, rather than submission to, authority that continues to this day. Catholic priests who wish to marry may find refuge in the Anglican communion. Protestant churches may fracture over rules of control and worship, and denominations may enter schism over ecclesiastical matters, as witness the current strain in the Anglican communion over the issue of gay bishops and clergy, and the social acceptance of homosexuality. Other issues less anchored in scriptural interpretation, like finances, may also trigger the sundering of ways for a congregation.</p>
<p>Using this as an analogy for secular considerations, it is an interesting exercise in speculation to consider whether nanotechnology generally, and desktop manufacturing in particular, will lead to nano-communes that eventually decouple individuals from the larger economy and the political system so closely tied to it. Such communities would be the natural descendants of the self-sufficient medieval monastic orders, the utopian communities of the mid-1800s, and the communes of the 1960s and beyond. Unlike their predecessors, they could be &#8220;off the grid&#8221; in important ways, but not necessarily withdrawn from the larger society.</p>
<p>In other realms, there is some additional promise in the potential for using nanotechnology as a recycling outlet. Molecular disassembly as a precursor to molecular assembly may involve a completely different set of technological difficulties, and raises a series of questions about disposal of nonessential elements. The Newtonian-world vision of a methane burnoff is impractical at the molecular level, and the state of byproduct disposal is unclear at this point. If unwanted matter can be converted to energy, and stored for use, nanotechnology could change the nature of both recycling and of power. If each household ran on a &#8220;green power&#8221; combination of solar energy and molecular conversions, entire industries might be transformed. It stretches the imagination a bit to think that factories could be powered with wind, solar, and nano power, so the traditional power industries might not disappear, but important sectors might achieve relative independence from them.</p>
<p>At the same time, the intellectual property forces would still work to bind nano-based anything to the existing corporate world. If nano goes &#8220;into the wild,&#8221; via bootleg or Robin Hood dissemination, it could weaken the corporate hold, inspire a widespread law enforcement crackdown on piracy, or dissolve society into above-ground and Morlock-like subcultures that coexist because they have little reason to compete. In any of these scenarios, nanotechnology by itself is not an actor: it is a tool of other interests, and its impacts are dampened or enhanced by the decisions of social engineering and politics. But if the end result is the alienation of large masses of citizens from the engines of the economy and the icons of government, the costs and secondary developments will be far ranging.</p>
<p>Nanotechnology has its own limits. A host of major decisions in the social realm will not be changed to any great degree by nanotechnology. It will not protect the Arctic National Wildlife Refuge (indeed, if natural gas is the first and basic fuel for desktop manufacturing, it may exacerbate the pressures on the ANWR), nor will it stop the denuding of the Amazon rain forest. It will not eliminate prejudice, nor resolve the multiple questions of authority and Authority that attend the modern estate of humankind. We can predict safely that when this particular future of mature nanotechnology arrives, it will not be equally distributed, and may easily be a weapon of social dominance rather than the delivery vehicle of social equity. Even the utopian visions of Gene Roddenberry included a period of troubled dystopia, which Alvin Toffler captured in <i>Future Shock</i>: &#8220;the premature arrival of the future&#8230; the imposition of a new culture on an old one&#8221; that results in &#8220;human beings&#8230;. increasingly disoriented, progressively incompetent to deal with their environments.&#8221;</p>
<p>Which leaves me almost where I began: What do I make of this nanotechnology thing? I suspect it will be very much like its predecessors, a potentially transformative technology that will be bound on the bed of Procrustes of the older social and economic systems that midwifed it. Because of that, it has considerable potential to be more Pandora&#8217;s Box than Holy Grail in the early going. Assuming that its byproducts do not poison the groundwater or become an airborne grey goo, it will almost have to achieve an outlaw status (or have its more egalitarian potential championed by those who will be deemed outlaws) before it reaches a socially transformative cusp. In the near term, whether I buy it in a store or make it with my nanofactory, I will still have to pay for toilet paper.</p>
<p><i>Michael Buerger, an Associate Professor of Criminal Justice at Bowling Green State University and a former police officer, is a member of the Futures Working Group, a collaboration between the FBI and the Society of Police Futurists International. His broad interests mainly concern the impact of large-scale social changes and reactions to them.</i></p>
<p><sup>1</sup> <i>Nanotechnology Perceptions: A Review of Ultraprecision Engineering and Nanotechnology (Collegium Basilea, Basel, Switzerland), Volume 2, Number 1a</i></p>
<p><sup>2</sup> The Futures Working Group, a collaboration between the FBI and the Society of Police Futurists International (http://www.policefuturists.org/futures/fwg.htm)</p>
<p><sup>3</sup> Global Task Force on Implications and Policy (http://www.crnano.org/CTF.htm), organized by the Center for Responsible Nanotechnology </p>
<p><sup>4</sup> &#147;The Need For Limits&#148; (http://www.kurzweilai.net/the-need-for-limits)</p>
<p><sup>5</sup> <i>The Singularity is Near</i> (http://singularity.com/)</p>
]]></content:encoded>
			<wfw:commentRss>https://www.writingsbyraykurzweil.com/from-the-enlightenment-to-n-lightenment/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Nanoethics and Technological Revolutions: A Precis.</title>
		<link>https://www.writingsbyraykurzweil.com/nanoethics-and-technological-revolutions-a-precis</link>
		<comments>https://www.writingsbyraykurzweil.com/nanoethics-and-technological-revolutions-a-precis#respond</comments>
		<pubDate>Fri, 05 May 2006 00:05:41 +0000</pubDate>
								<dc:creator>Nick Bostrom</dc:creator>
		
		
				<category><![CDATA[essays]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[If we believe that nanotechnology will eventually amount to a technological revolution, and if we are going to attempt nanoethics, we should consider some of the earlier technological revolutions that humanity has undergone and how our moral principles and technology impact assessment exercises would have fared.]]></description>
			<content:encoded><![CDATA[<p><i>Originally published in </i><a href="http://pages.unibas.ch/colbas/ntp/" target="_blank">Nanotechnology Perceptions: A Review of Ultraprecision Engineering and Nanotechnology</a><i>, Volume 2, No. 2, May 8, 2006. Reprinted with permission on KurzweilAI.net, May 8, 2006.</i></p>
<p>1. Some eleven thousand years ago, in the neighborhood of Mesopotamia, some of our ancestors took up agriculture, thereby beginning the end of the hunter-gatherer existence that our species had lived ever since it first evolved. Population exploded even as nutritional status and quality of life declined, at least initially. Eventually, greater population densities led to greatly accelerated cultural and technological development.<span id="more-80933"></span></p>
<p>In 1448, Johannes Gutenberg invented the movable type printing process in Europe, enabling copies of the Bible to be mass-produced. Gutenberg&#8217;s invention became a major factor fueling the Renaissance, the Reformation, and the scientific revolution, and helped give rise to mass literacy. A few hundred years later, <i>Mein Kampf</i> was mass-produced using an improved version of the same technology.</p>
<p>Work in atomic physics and quantum mechanics in the first three decades of the 20<sup>th</sup> century laid the foundation for the subsequent Manhattan Project during World War II, which raced to beat Hitler to the nuclear bomb.</p>
<p>In 1957, Soviet scientists launched Sputnik 1. In the following year, the US created the Defense Advanced Research Projects Agency to ensure that the US would keep ahead of its enemies in military technology. DARPA began developing a communication system that could survive nuclear bombardment by the USSR. The result, ARPANET, later became the Internet&#8212;the long-term consequences of which remain to be seen.</p>
<p>2. Suppose you are an individual involved in some way in what may become a technological revolution. You might be an inventor, a funder of research, a user of a new technology, a regulator, a policy-maker, an opinion leader, or a voting citizen. Suppose you are concerned with the ethical issues that arise from your potential involvement. You want to act responsibly and with moral integrity. What does morality require of you in such a situation? What does it permit but does not require? What questions do you need to find answers to in order to determine what you ought to do?</p>
<p>If you consult the literature on applied ethics, you will not find much advice that applies directly to this situation. Ethicists have written at length about war, the environment, our duties towards the developing world; about doctor-patient relationships, euthanasia, and abortion; about the fairness of social redistribution, race and gender relations, civil rights, and many other things. Arguably, nothing humans do has such profound and wide-ranging consequences as technological revolutions. Technological revolutions can change the human condition and affect the lives of billions. Their consequences can be felt for hundreds if not thousands of years. Yet, on this topic, moral philosophers have had precious little to say.</p>
<p>3. In recent years, there have been increasing efforts to evaluate the ethical, social, and legal implications (“ELSI”) of important new technologies ahead of time. Much attention has been focused on ethical issues related to the human genome project. Now there is a push to look at the ethics of advances in information technology (information and computer ethics), brain science (neuroethics), and nanotechnology (nanoethics).</p>
<p>Will “ELSI” research produce any important findings? Will it have any significant effects on public policy, regulation, research priorities, or social attitudes? If so, will these effects be for the better or for the worse? It is too early to tell.</p>
<p>But if we believe that nanotechnology will eventually amount to a technological revolution, and if we are going to attempt nanoethics, then we might do well to consider some of the earlier technological revolutions that humanity has undergone. Perhaps there are hidden features of our current situation with regard to nanotechnology that would become more easily visible if we considered how our moral principles and technology impact assessment exercises would have fared if they had been applied in equivalent circumstances in any of the preceding technological revolutions.</p>
<p>If such a comparison were made, we might (for example) become more modest about our ability to predict or anticipate the long-term consequences of what we were about to do. We might become sensitized to certain kinds of impacts that we might otherwise overlook&#8212;such as impacts on culture, geopolitical strategy and balance of power, people&#8217;s preferences, and on the size and composition of the human population. Perhaps most importantly, we might be led to pay closer attention to what impacts there might be in terms of further technological developments that the initial revolution would enable. We might also become more sophisticated, and perhaps more humble, in our thinking about how individuals or groups might exert predictable positive influence on the way things develop. Finally, we might be led to focus more on systems level aspects, such as institutions and technologies for aggregating and processing information, for making decisions regarding e.g. regulations and funding priorities, and for implementing these decisions.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.writingsbyraykurzweil.com/nanoethics-and-technological-revolutions-a-precis/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Response to &#8216;The Singularity Is Always Near&#8217;</title>
		<link>https://www.writingsbyraykurzweil.com/response-to-the-singularity-is-always-near</link>
		<comments>https://www.writingsbyraykurzweil.com/response-to-the-singularity-is-always-near#respond</comments>
		<pubDate>Wed, 03 May 2006 23:49:38 +0000</pubDate>
								<dc:creator>Ray Kurzweil</dc:creator>
		
								<media:thumbnail url="https://www.writingsbyraykurzweil.com/images/Technium-140x105.jpg" width="140" height="105" />
		
				<category><![CDATA[essays]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[In &#8220;The Singularity Is Always Near,&#8221; an essay in The Technium, an online &#8220;book in progress,&#8221; author Kevin Kelly critiques arguments on exponential growth made in Ray Kurzweil&#8217;s book, The Singularity Is Near. Kurzweil responds. Allow me to clarify the metaphor implied by the term &#8220;singularity.&#8221; The metaphor implicit in the term &#8220;singularity&#8221; as applied [&#8230;]]]></description>
			<content:encoded><![CDATA[<p><em><img style=' float: left; padding: 4px; margin: 0 7px 2px 0;'  loading="lazy" decoding="async" class="alignleft size-full wp-image-135478" title="Technium" src="http://www.kurzweilai.net/images/Technium.jpg" alt="" width="200" height="150" srcset="https://www.writingsbyraykurzweil.com/images/Technium.jpg 200w, https://www.writingsbyraykurzweil.com/images/Technium-140x105.jpg 140w" sizes="auto, (max-width: 200px) 100vw, 200px" />In &#8220;The Singularity Is Always Near,&#8221; an essay in </em>The Technium<em>, an online &#8220;book in progress,&#8221; author Kevin Kelly critiques arguments on exponential growth made in Ray Kurzweil&#8217;s book, </em>The Singularity Is Near<em>. Kurzweil responds.</em></p>
<p>Allow me to clarify the metaphor implied by the term &#8220;singularity.&#8221; As applied to future human history, the metaphor is not to a point of infinity, but rather to the event horizon surrounding a black hole. Densities are not infinite at the event horizon but merely large enough that it is difficult to see past the event horizon from outside.</p>
<p>I say difficult rather than impossible because the Hawking radiation emitted from the event horizon is likely to be quantum entangled with events inside the black hole, so there may be ways of retrieving the information. This was the concession made recently by Hawking. However, without getting into the details of this controversy, it is fair to say that seeing past the event horizon is difficult (impossible from a classical physics perspective) because the gravity of the black hole is strong enough to prevent classical information from inside the black hole getting out. <span id="more-80931"></span></p>
<p>We can, however, use our intelligence to infer what life is like inside the event horizon even though seeing past the event horizon is effectively blocked. Similarly, we can use our intelligence to make meaningful statements about the world after the historical singularity, but seeing past this event horizon is difficult because of the profound transformation that it represents.</p>
<p>So discussions of infinity are not relevant. You are correct that exponential growth is smooth and continuous. From a mathematical perspective, an exponential looks the same everywhere and this applies to the exponential growth of the power (as expressed in price-performance, capacity, bandwidth, etc.) of information technologies. However, despite being smooth and continuous, exponential growth is nonetheless explosive once the curve reaches transformative levels. Consider the Internet. When the Arpanet went from 10,000 nodes to 20,000 in one year, and then to 40,000 and then 80,000, it was of interest only to a few thousand scientists. When ten years later it went from 10 million nodes to 20 million, and then 40 million and 80 million, the appearance of this curve looks identical (especially when viewed on a log plot), but the consequences were profoundly more transformative. There is a point in the smooth exponential growth of these different aspects of information technology when they transform the world as we know it.</p>
<p>You cite the extension made by Kevin Drum of the log-log plot that I provide of key paradigm shifts in biological and technological evolution (which appears on page 17 of <em>The Singularity Is Near</em>). This extension is utterly invalid. You cannot extend in this way a log-log plot for just the reasons you cite. The only straight line that is valid to extend on a log plot is a straight line representing exponential growth when the time axis is on a <em>linear</em> scale and the value (such as price-performance) is on a log scale. Then you can extend the progression, but even here you have to make sure that the paradigms to support this ongoing exponential progression are available and will not saturate. That is why I discuss at length the paradigms that will support ongoing exponential growth of both hardware and software capabilities. But it is not valid to extend the straight line when the time axis is on a log scale. The only point of these graphs is that there has been acceleration in paradigm shift in biological and technological evolution.</p>
<p>If you want to extend this type of progression, then you need to put time on a linear x axis and the number of years (for the paradigm shift or for adoption) as a log value on the y axis. Then it may be valid to extend the chart. I have a chart like this on page 50 of the book.</p>
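<p>The difference between the two chart types can be checked numerically. The sketch below uses a hypothetical quantity that doubles every two years (illustrative numbers only, not data from the book): a linear fit of log2(value) against linear time recovers the growth rate exactly and can legitimately be extended forward, while the very same data is visibly curved on log-log axes, so a straight-line extension there has no predictive meaning.</p>

```python
import numpy as np

# Hypothetical series: a quantity that doubles every 2 years
# (assumed numbers, for illustration only).
t = np.arange(20.0)               # time on a linear axis, in years
y = 1000.0 * 2.0 ** (t / 2.0)     # smooth exponential growth

# Semilog view (linear t, log y): the data is exactly a straight line,
# so fitting log2(y) against t and extending the line is legitimate.
slope, intercept = np.polyfit(t, np.log2(y), 1)
print(round(slope, 3))                         # 0.5 doublings per year
print(round(2.0 ** (slope * 30 + intercept)))  # extrapolated value at year 30

# Log-log view (log t, log y): the same data is NOT straight, so a ruler
# laid on a log-log chart cannot be extended meaningfully.
fit = np.polyfit(np.log2(t[1:]), np.log2(y[1:]), 1)   # skip t=0 (log undefined)
resid = np.log2(y[1:]) - np.polyval(fit, np.log2(t[1:]))
print(bool(np.abs(resid).max() > 0.5))         # True: visibly curved on log-log
```

<p>The semilog fit recovers the assumed rate (0.5 doublings per year) and extrapolates cleanly, while the log-log fit leaves large residuals, which is the formal sense in which only the linear-time, log-value line may be extended.</p>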
<p>This acceleration is a key point. These charts show that technological evolution emerges smoothly from the biological evolution that created the technology-creating species. You mention that an evolutionary process can create greater complexity—and greater intelligence—than existed prior to the process. And it is precisely that intelligence creating process that will go into hyperdrive once we can master, understand, model, simulate, and extend the methods of human intelligence through reverse-engineering it and applying these methods to computational substrates of exponentially expanding capability.</p>
<p>That chimps are just below the threshold needed to understand their own intelligence is a result of the fact that they do not have the prerequisites to create technology. There were only a few small genetic changes, comprising a few tens of thousands of bytes of information, that distinguish us from our primate ancestors: a bigger skull (allowing a larger brain), a larger cerebral cortex, and a workable opposable appendage. There were a few other changes that other primates share to some extent, such as mirror neurons and spindle cells.</p>
<p>As I pointed out in my Long Now talk, a chimp&#8217;s hand looks similar but the pivot point of the thumb does not allow facile manipulation of the environment. In contrast, our human ability to look inside the human brain and to model and simulate and recreate the processes we encounter there has already been demonstrated. The scale and resolution of these simulations will continue to expand exponentially. I make the case that we will reverse-engineer the principles of operation of the several hundred information processing regions of the human brain within about twenty years and then apply these principles (along with the extensive tool kit we are creating through other means in the AI field) to computers that will be many times (by the 2040s, billions of times) more powerful than needed to simulate the human brain.</p>
<p>You write that &#8220;Kurzweil found that if you make a very crude comparison between the processing power of neurons in human brains and the processing powers of transistors in computers, you could map out the point at which computer intelligence will exceed human intelligence.&#8221; That is an oversimplification of my analysis. I provide in the book four different approaches to estimating the amount of computation required to simulate all regions of the human brain based on actual functional recreations of brain regions. These all come up with answers in the same range, from 10<sup>14</sup> to 10<sup>16</sup> cps for creating a functional recreation of all regions of the human brain, so I&#8217;ve used 10<sup>16</sup> cps as a conservative estimate.</p>
<p>This refers only to the hardware requirement. As noted above, I have an extensive analysis of the software requirements. While reverse-engineering the human brain is not the only source of intelligent algorithms (and, in fact, has not been a major source until just recently, because we did not have scanners that could see into the human brain with sufficient resolution), my analysis of reverse-engineering the human brain is along the lines of an existence proof that we will have the software methods underlying human intelligence within a couple of decades.</p>
<p>Another important point in this analysis is that the complexity of the design of the human brain is about a billion times simpler than the actual complexity we find in the brain. This is due to the brain (like all biology) being a probabilistic recursively expanded fractal. This discussion goes beyond what I can write here (although it is in the book). We can ascertain the complexity of the design of the human brain because the design is contained in the genome and I show that the genome (including non-coding regions) only has about 30 to 100 million bytes of compressed information in it due to the massive redundancies in the genome.</p>
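<p>The arithmetic behind that bound can be sketched under stated assumptions (roughly three billion base pairs at two bits per base; the compression ratios shown are chosen only to bracket the 30 to 100 million byte range, not measured values):</p>

```python
# Back-of-envelope check on the genome's raw information content.
# Assumed figures, for illustration only.
base_pairs = 3_000_000_000        # approximate human genome length
bits_per_base = 2                 # A, C, G, T -> 2 bits each
raw_bytes = base_pairs * bits_per_base // 8
print(raw_bytes)                  # 750000000: roughly 750 million bytes raw

# Redundancy-driven compression ratios consistent with a 30-100 million
# byte estimate for the compressed design information.
for ratio in (7.5, 25):
    print(round(raw_bytes / ratio / 1e6))   # prints 100, then 30 (millions)
```

<p>The point of the sketch is only that the uncompressed genome is under a billion bytes, so even modest compression ratios driven by its massive redundancy land in the tens of millions of bytes.</p>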
<p>So in summary, I agree that the singularity is not a discrete event. A single point of infinite growth or capability is not the metaphor being applied. Yes, the exponential growth of all facets of information technology is smooth, but it is nonetheless explosive and transformative.</p>
<p>©<em> 2006 Ray Kurzweil</em></p>
]]></content:encoded>
			<wfw:commentRss>https://www.writingsbyraykurzweil.com/response-to-the-singularity-is-always-near/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Strategic Sustainable Brain</title>
		<link>https://www.writingsbyraykurzweil.com/strategic-sustainable-brain</link>
		<comments>https://www.writingsbyraykurzweil.com/strategic-sustainable-brain#respond</comments>
		<pubDate>Fri, 31 Mar 2006 00:57:41 +0000</pubDate>
								<dc:creator>Natasha Vita-More</dc:creator>
		
		
				<category><![CDATA[essays]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[The human brain faces a challenging future. To cope with accelerating nanotech- and biotech-based developments in an increasingly complex world, compete with emerging superintelligence, and maintain its performance and sustainability as people live longer, the fragile human brain will need major enhancements: a backup system, eliminating degenerative processes, direct mind-linkup to ubiquitous computing networks, error-correction for memory, and a global Net connection with remote neural access.]]></description>
			<content:encoded><![CDATA[<p><i>Originally published in </i><a href="http://pages.unibas.ch/colbas/ntp/" target="_blank">Nanotechnology Perceptions: A Review of Ultraprecision Engineering and Nanotechnology</a><i>, Volume 2, No. 1, March 27 2006. Reprinted with permission on KurzweilAI.net March 31, 2006.</i></p>
<p><b>Concern: Convincing society that the brain needs to keep up with the changes ahead.</b>
</p>
<p>Each one of us has been entrusted with the care and nourishment of what might be the most extraordinary and complex creation in the universe. Home to mind and personality, the <i>human brain</i> archives cherished memories and hopes for the future. It arranges and coordinates the elements of consciousness that give us purpose, passion, motion, and emotion.<span id="more-80929"></span></p>
<p>But the brain is too fragile. It is far too vulnerable to be allowed to continue in its current state. In order to properly sustain the brain, we need to know what it likes, the challenges it craves, the rest it requires, and the protection it deserves. In short, the brain must have a strategy for its future.</p>
<p>But is it really necessary to take action now? I submit that if events have altered the day-to-day operations of the brain, affecting how it performs its operations and whether it can sustain itself for the long haul, then now is the right time to take action.</p>
<p>Recently, there has been a series of technological events causing irrevocable changes in the external environment of the brain. People are living longer; there is a notable increase in the number of activists supporting life extension technologies; economic reporting predicts an increase in research and development of molecular manufacturing and nanotechnology; programming engineers are reveling in the increase in research and development of superintelligence; and conservative organizations are publishing warnings indicating an increased awareness of the potential threats of superintelligence. These events will directly or indirectly affect the brain, resulting in a set of expectations for the brain to function over a longer period of time and operate at a higher level of quality than it has ever achieved in the past.</p>
<p>To keep pace and sustain itself for the long haul, the brain needs a strategy that takes into account the present circumstances and what the future may hold. Currently, the brain is challenged by a demand to produce better cognitive capabilities more quickly and efficiently for a longer period of time. Simultaneously there is an increased rate of neurological degeneration of brain cells resulting from increased longevity. And even though it is not a current threat, soon there will be a need to keep up with the acceleration of competitive superintelligence.</p>
<p>Developing a strategy for the brain requires a balance of several elements: a compelling vision for its future, strategic goals, an action plan, and a means for measuring the success of the plan. But before we can develop a strategic plan for the brain, we have to know more about the brain&#8217;s ability to meet the needs of the contemporary <i>mind</i>. This may seem like an abstract project because it would require us to separate the brain as a functioning organization of cells, or agents, from the mind. Nevertheless, an effective way to do this is to fictionalize the brain&#8212;make it a character or a business entity.</p>
<p>If the brain had an executive statement, for example, it might read something like this:</p>
<blockquote>
<hr />
<p><b>Executive Statement of the Brain</b></p>
<p>The mission of the brain is to serve its cells by adopting the advantages of emerging technologies to ensure a smart, safe and sustainable environment.</p>
<p>The brain develops best practices for cognitive and creative processes. The brain&#8217;s central operating system is located in the neocortex, and has connections through the internal and external communications network.</p>
<p>The brain&#8217;s quality services are unique and exclusive, and its target supply chain is nerve cells and synapses with upper-end job-related responsibilities. The brain&#8217;s competitive &quot;intelligence&quot; edge is that its services are 100% man-made, unlike competitors such as superintelligence and friendly artificial intelligences. By this fact, the brain&#8217;s mind hopes to attract inventors and investors who value the artistry of producing neurological connections and their emergent properties such as critical thinking, imagination, day-dreaming, problem-solving, humor and intellection. Since the brain&#8217;s responsibilities are mostly to serve the day-to-day functions of the mind, as well as to provide networking and communications assistance for the mind and body, it is considered to be in the communications market, although some mental personas use the end-result products, such as ideas, for themselves.</p>
<p>In the year 2006, the brain plans to develop strategic initiatives to protect its future and gain a competitive edge in the &quot;intelligence&quot; marketplace. Over the past few decades, the brain&#8217;s longevity has increased along with its competitors&#8217;, necessitating a reevaluation of its position and its future.</p>
<p>The brain&#8217;s future is uncertain due to advancing cognitive systems such as AI and superintelligence. Adding to the external environment of the brain is the fact that new intelligence enterprises entering the marketplace are drawing business away from the brain. Encephalitis and other invasive viral infections, as well as dementia and neurological breakdowns, are eating away at the resources of the brain&#8217;s affiliates. This pending shortage has created an immense demand for increased memory.</p>
<p>Regardless of some of the internal flaws of the brain, there is great potential for its continued success. The brain will improve faltering memory by adding a backup system; will expand to direct mind-linkup ubiquitous computing networks; and will add error-correction memory replay and a global Net connection with remote neural access, guarded by security protocols. The brain plans to support its entire system by eliminating degenerative processes that impede the ability for a healthy, vital life in its goal to keep up with the many changes ahead.</p>
<hr />
</blockquote>
<p>While the executive statement is a fictionalized story, it does contain tangible elements. The reality is that our brains need to be protected and improved upon. The brain&#8217;s future depends upon how we want our brains to perform in the coming years and how much augmentation is actually needed, both invasively and noninvasively, to satisfy this end. Since our brains contain our memories, and our memories build our identities, this is a serious matter. But because we cannot see it as clearly as we see our expanding or shrinking bodies, the brain is dismissed while our mind presses for more immediate attention, forgetting the hard fact that unless the brain is in good physical shape, the entire system will falter.</p>
<p>Today the brain is vulnerable. It is vulnerable because the axial skeleton&#8217;s skull that encloses and protects the brain is not built from impenetrable material; its command-and-control center, including the white matter in between, is in constant danger of breakdown, infection, and disease; and its cognitive processes are subject to loss of information.</p>
<p>Trends five to ten years in the future suggest an increase in technologies, including biotech and nanotech, for building better brains to operate with better bodies in meeting the needs of people living longer. Trends further out suggest people opting for a synthetic brain over a biological one. Markets point to expected growth in neurosurgery, neuroinformatics, neuromarketing, biotechnologies, and human performance enhancement, with an explicit focus on nanotechnology. But the most consequential trend is that of machine intelligence challenging human intelligence. Looming in the future is the question of whether the brain will be able to keep pace with new technologies that would otherwise outperform it.</p>
<p>Based on potential threats and opportunities, and on the brain&#8217;s mission to serve its cells by adopting the advantages of emerging technologies to ensure a smart, safe and sustainable environment, the brain&#8217;s strategy narrows down to: (1) enhancing its performance and sustainability in order to satisfy the needs of people living longer; (2) competing with emerging superintelligence; and (3) enhancing its cognitive capabilities in order to deal with the problems of an increasingly complex world.</p>
<p>With these issues on the table, the brain needs a practical approach hedged by a strong vision that helps society understand the opportunities and the threats that await all of us. This is not just an abstract discussion; it includes everyone, not a select few. It is not simply a matter of being smarter or more capable; it is a matter of healthy and vital living. It is a matter of being prepared for the challenges of the future, and a measurable goal of convincing others to be prepared as well.</p>
<p>Convincing people is not an easy task, especially when minds have already been made up. But I think that we must work toward convincing society that the brain needs to accelerate with the rate of technological change, as our vision and audition have through innovative corrective technologies, and our arms and legs have with robotic prosthetics, and as other parts of our bodies have transformed and renewed in working together to keep us alive.</p>
<p><i>&copy; 2006 Natasha Vita-More</i></p>
]]></content:encoded>
			<wfw:commentRss>https://www.writingsbyraykurzweil.com/strategic-sustainable-brain/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Nanoethics and Human Enhancement</title>
		<link>https://www.writingsbyraykurzweil.com/nanoethics-and-human-enhancement</link>
		<comments>https://www.writingsbyraykurzweil.com/nanoethics-and-human-enhancement#respond</comments>
		<pubDate>Fri, 31 Mar 2006 00:36:00 +0000</pubDate>
								<dc:creator>Patrick Lin, Fritz Allhoff</dc:creator>
		
		
				<category><![CDATA[essays]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[Radical nanotech-based human enhancements such as bionic implants and "respirocyte" artificial red blood cells will become technologically viable in the near future, raising profound ethical issues and forcing us to rethink what it means to be human. Recent pro-enhancement arguments will need to be critically examined and strengthened if they are to be convincing.]]></description>
			<content:encoded><![CDATA[<p><i>Originally published in </i><a href="http://pages.unibas.ch/colbas/ntp/" target="_blank">Nanotechnology Perceptions: A Review of Ultraprecision Engineering and Nanotechnology</a><i>, Volume 2, No. 1, March 27 2006. Reprinted with permission on KurzweilAI.net March 31, 2006.</i></p>
<p>Human enhancement&#151;our ability to use technology to enhance our bodies and minds, as opposed to its application for therapeutic purposes&#151;is a critical issue facing nanotechnology. It will be involved in some of the near-term applications of nanotechnology, with such research labs as MIT&#8217;s Institute for Soldier Nanotechnologies working on exoskeletons and other innovations that increase human strength and capabilities. It is also a core issue related to far-term predictions in nanotechnology, such as longevity, nanomedicine, artificial intelligence and other issues.<span id="more-80928"></span> </p>
<p>The implications of nanotechnology as related to human enhancement are perhaps some of the most personal and therefore passionate issues in the emerging field of nanoethics, forcing us to rethink what it means to be human or, essentially, our own identity. For some, nanotechnology holds the promise of making us superhuman; for others, it offers a darker path toward becoming Frankenstein&#8217;s monster.</p>
<p>Without advocating any particular side of the debate, this essay will look at a growing chorus of calls for human enhancement, especially in the context of emerging technologies, to be embraced and unrestricted. We will critically examine recent &#8220;pro-enhancement&#8221; arguments&#151;articulated in <i>More Than Human</i> (2005) by Ramez Naam<sup><a name="r1"></a><a href="#f1">1</a></sup>, as one of the most visible works on the subject today&#151;and conclude that they ultimately need to be repaired, if they are to be convincing.</p>
<h2 align="center">I</h2>
<p>Before we proceed, we should lay out a few actual and possible scenarios in order to be clear on what we mean by &#8220;human enhancement.&#8221; In addition to steroid use to become stronger and plastic surgery to become more attractive, people today also use drugs to boost creativity, attentiveness, perception, and more. In the future, nanotechnology might give us implants that enable us to see in the dark, or in currently non-visible spectrums such as infrared. As artificial intelligence advances, nano-computers might be embedded into our bodies in order to help process more information faster, even to the point where man and machine become indistinguishable. </p>
<p>These scenarios admittedly sound like science fiction, but with nanotechnology, we move much closer to turning them into reality. Atomically-precise manufacturing techniques continue to become more refined and will be able to build cellular-level sensors and other tools that can be integrated into our bodies. Indeed, designs have already been worked out for such innovations as a &#8220;respirocyte&#8221;&#151;an artificial red blood cell that holds a reservoir of oxygen.<sup><a name="r2"></a><a href="#f2">2</a></sup> A respirocyte would come in handy for, say, a heart attack victim to continue breathing for an extra hour until medical treatment is available, despite a lack of blood circulation to the lungs or anywhere else. But in an otherwise-healthy athlete, a respirocyte could boost performance by delivering extra oxygen to the muscles, as if the person were breathing from a pure oxygen tank.</p>
<p>What we do not mean by &#8220;human enhancement&#8221; is the mere use of tools, such as a hammer or Microsoft Word, to aid human activities, or &#8220;natural&#8221; improvements of diet and exercise&#151;though, as we shall discuss later, agreeing on a definition may not be a simple matter. Further, we must distinguish the concept from therapeutic applications, such as using steroids to treat any number of medical conditions, which we take to be unobjectionable for the purposes of this essay.</p>
<p>Also, our discussion here can benefit from quickly noting some of the intuitions on both sides of the debate. The anti-enhancement camp may point to steroids in sports as an argument for regulating technology: that it corrupts the notion of fair competition. Also, some say, by condoning enhancement we are setting the wrong example for our children, encouraging risky behavior in bodies that are still developing. &#8220;Human dignity&#8221; is also a recurring theme for this side, believing that such enhancements pervert the notion of what it means to be human (with all our flaws).</p>
<p>On the pro-enhancement side, it seems obvious that the desire for self-improvement is morally laudable. Attempts to improve ourselves through, for example, education, hard work, and so on are uncontroversially good; why should technology-based enhancements be viewed any differently? In addition to virtue-based defenses of technological enhancement, we might also appeal to individual autonomy to defend the practice: so long as rational, autonomous individuals freely choose to participate in these projects, intervention against them is morally problematic.</p>
<p>In <i>More Than Human</i>, it is interesting to see that the debate is framed as a conservative (anti-enhancement) versus liberal (pro-enhancement) issue<sup><a name="r3"></a><a href="#f3">3</a></sup>. This proposed dichotomy is undoubtedly influenced by the creation and work of the U.S. President&#8217;s Council on Bioethics. Led by Leon Kass, M.D., PhD, the council released a report, <i>Beyond Therapy</i>, in 2004 that endorsed an anti-enhancement position; this report has become the prime target for both liberals and pro-enhancement groups. However, it would be a mistake to think that the issue necessarily follows political lines, since there may be good reason for a liberal to be anti-enhancement, as well as for a conservative to support it.</p>
<h2 align="center">II</h2>
<p>In his introductory chapter, Naam outlines the overarching theme that is supported by his research and analysis in subsequent chapters. He offers four distinct arguments in defending the pro-enhancement position: first, there are pragmatic reasons for embracing enhancement; second, regulation will not work anyway; third, respect for our autonomy licenses the practices; and, fourth, that the desire to enhance is inherently human and therefore must be respected.</p>
<p><b>1.</b> In his first argument, Naam points out that &#8220;scientists cannot draw a clear line between healing and enhancing.&#8221;<sup><a name="r4"></a><a href="#f4">4</a></sup> The implied conclusion here is that, if no principled distinction can be made between two concepts, it is irrational to afford them different moral status. So, since there are no restrictions on therapy, in that we have a right to medical aid, there also should be no restrictions on human enhancement, <i>i.e.</i> using the same medical devices or procedures to improve our already-healthy bodies. In other words, there is no significant or moral difference between therapy and enhancement.</p>
<p>There are numerous problems with such a claim; we will herein elucidate two. The first problem can be illustrated by the famous philosophical puzzle called &#8220;The Paradox of the Heap&#8221;: given a heap of sand with N number of grains of sand, if we remove one grain of sand, we are still left with a heap of sand (that now only has N-1 grains of sand). If we remove one more grain, we are again left with a heap of sand (that now has N-2 grains). If we extend this line of reasoning and continue to remove grains of sand, we see that there is no clear point where we can definitely say that on side A there is a heap of sand, but on side B there is less than a heap. In other words, there is no clear distinction between a heap of sand and less-than-a-heap, or even no sand at all. However, the wrong conclusion to draw here is that there is no difference between them; likewise, it would be fallacious to conclude that there is no <i>difference</i> between therapy and enhancement. It may still be the case that there is no <i>moral</i> difference between the two, but we cannot arrive at it through the argument that there is no clear defining line.</p>
<p>But, second, there likely <i>are</i> principled distinctions that can be made between enhancement and therapy.<sup><a name="r5"></a><a href="#f5">5</a></sup> For example, Norm Daniels has argued for the use of &#8220;quasi-statistical concepts of &#8216;normality&#8217; to argue that any intervention designed to restore or preserve a species-typical level of functioning for an individual should count as [therapy]&#8221;<a name="r6"></a><sup><a href="#f6">6</a></sup> and the rest as enhancement. Alternatively, Eric Juengst has proposed that therapies aim at pathologies which compromise health, whereas enhancements aim at improvements that are not health-related.<sup><a name="r7"></a><a href="#f7">7</a></sup></p>
<p>Another pragmatic reason Naam gives is that &#8220;we cannot stop research into enhancing ourselves without also halting research focused on healing the sick and injured.&#8221;<a name="r8"></a><sup><a href="#f8">8</a></sup> However, this claim seems to miss the point: anti-enhancement advocates can simply counter that it is not the research they want stopped or regulated, but rather the use of that research or its products for enhancement. For instance, we may want to ban steroids from sports, but no one is calling for an outright ban on all steroids research, much of which serves healing purposes. </p>
<p>Naam also puts the burden of proof&#151;that regulation of enhancement is needed&#151;on the anti-enhancement side, instead of offering an argument that enhancement need not be regulated.<sup><a name="r9"></a><a href="#f9">9</a></sup> But it is unclear here why we should abandon the principle of erring on the side of caution, particularly where human health may be at stake as well as other societal impacts. Further, both sides have already identified a list of benefits or harms that might arise from unregulated human enhancement. The problem now is to evaluate these benefits and harms against each other (<i>e.g.</i>, increased longevity versus overpopulation), also factoring in any relevant human rights. If neither side is able to convincingly show that benefits outweigh harms, or vice versa, then burden of proof seems to be a non-issue.</p>
<p><b>2.</b> In his second argument, Naam compares a ban on enhancement to the U.S. &#8220;War on Drugs,&#8221; citing its ineffectiveness as well as externalities such as artificially high prices and increased safety risks (<i>e.g.</i>, users having to share needles because they cannot obtain new or clean ones) for those who will use drugs anyway.<sup><a name="r10"></a><a href="#f10">10</a></sup> If people are as avidly driven to enhancement as they are to drugs, then yes, this may be the case. But is that a good enough reason to not even try to contain a problem, whether it is drugs, prostitution, gambling, or whatever? While such laws may be paternalistic, they reflect the majority consensus that a significant number of people cannot act responsibly in these activities and need to be protected from themselves and from inevitably harming others. Even many liberals are not categorically opposed to these regulations and may see the rationale of &#8220;greater good&#8221; behind similar regulation of enhancement. </p>
<p>Further, that we are unable to totally stop an activity does not seem to be reason at all against prohibiting that activity. If it were, then we would not have any laws against murder, speeding, &#8220;illegal&#8221; immigration&#151;in fact, it is unclear what laws we would have left. Laws exist precisely because some people inescapably have tendencies to the opposite of what is desired by society or government. Again, this is not to say that human enhancement should be prohibited, only that a stronger and more compelling argument is needed.</p>
<p><b>3.</b> In his third argument, Naam ties human enhancement to the debate over human freedom: &#8220;Should individuals and families have the right to alter their own minds and bodies, or should that power be held by the state? In a democratic society, it&#8217;s every man and woman who should determine such things, not the state&#8230;Governments are instituted to secure individual rights, not to restrict them.&#8221;<sup><a name="r11"></a><a href="#f11">11</a></sup> </p>
<p>Besides politicizing a debate that need not be political, Naam&#8217;s arguments are increasingly not anti-conservative but pro-libertarian. You would need to have already adopted the libertarian philosophy to accept this line of reasoning (as well as the preceding argument), since again, even liberals can see that the state has a broader role in creating a functioning, orderly society. This necessarily entails reasonable limits to whatever natural rights we have and also implies new responsibilities&#151;for example, we shouldn&#8217;t exercise our right to free speech by slandering or by yelling &#8220;Fire!&#8221; in a crowded theater. </p>
<p>A democratic society is not compelled to endorse <i>laissez-faire</i> political philosophy and the minimal state, as some political philosophers have suggested.<sup><a name="r12"></a><a href="#f12">12</a></sup> Nor would reasonable people necessarily want unrestricted freedom, <i>e.g.</i> no restrictions or background checks for gun ownership. Even in a democracy as liberal as ours in the United States, we understand the value of regulations as a way to enhance our freedom. For instance, our economic system is not truly a &#8220;free market&#8221;&#151;though we advocate freedom in general, regulations exist not only to protect our rights, but also to create an orderly process that greases the economic wheel, accelerating both innovations and transactions. As a simpler example, by disciplining a dog to obey commands and not run around unchecked, we actually increase that pet&#8217;s freedom by now being able to take him or her on more walks and perhaps without a leash (not to compare people with dogs or laws with behavioral conditioning).</p>
<p><b>4.</b> Finally, Naam argues that people have been enhancing themselves from the start: &#8220;Far from being unnatural, the drive to alter and improve on ourselves is a fundamental part of who we humans are. As a species we&#8217;ve always looked for ways to be faster, stronger, and smarter and to live longer.&#8221;<a name="r13"></a><sup><a href="#f13">13</a></sup> This seems to be an accurate observation, but it is an argumentative leap from this fact about the world, which is descriptive, to a moral conclusion about the world, which is normative. Or, as the philosophical saying goes, we cannot derive &#8220;ought&#8221; from &#8220;is,&#8221; meaning just because something is a certain way doesn&#8217;t mean it should be that way or must continue to be that way. For instance, would the fact that we have engaged in wars&#151;or slavery, or intolerance&#151;across the entire history of civilization imply that we should continue with those activities? </p>
<p>More seriously, this argument seems to turn on an overly-broad definition of &#8220;human enhancement,&#8221; such that it includes the use of tools, diet, exercise, and so on&#151;or what we would intuitively call &#8220;natural&#8221; improvement. An objection to Naam&#8217;s first argument also applies here: just because we cannot clearly delineate between enhancement and therapy or tool-use does not mean there is no line between them. We understand that steroid use by baseball players is a case of human enhancement; we also understand that using a rock to crack open a clam is not. Still, the fact that we have not arrived at a clear definition of &#8220;human enhancement&#8221; should not prevent us from using intuitive distinctions to meaningfully discuss the issue.</p>
<h2 align="center"><b>III</b></h2>
<p>The point here is not that human enhancement <i>should</i> be restricted. It is simply that current arguments need to be more compelling and philosophically rigorous, if the pro-enhancement side is to be successful. There is admittedly a strong intuition driving the pro-enhancement movement, but it needs to be articulated more fully, resulting in an argument something like the following: </p>
<p>Who we are now seems to be a product of nature and nurture, most of which is beyond our control. So, if this genetic-environmental lottery is truly random, then why should we be constrained to its results? After all, we&#8217;ve never agreed to such a process in the first place. Why not enhance ourselves to be on par with the capabilities of others? And if that is morally permissible, then why not go a little&#151;or a lot&#151;beyond the capabilities of others? </p>
<p>As suggested in the above analysis, one of the first steps in discussing human enhancement is to arrive at a better definition of what it is, perhaps by adopting that used by Daniels or Juengst, though these are still tough issues. For instance, does it matter whether enhancements are worn outside our bodies as opposed to being implanted? Why should carrying around a Pocket PC or binoculars be acceptable, but having a computer or a &#8220;bionic eye&#8221; implanted in our bodies be subject to possible regulation&#151;what is the moral difference between the two?</p>
<p>Further, there are societal and ethical implications that also need to be considered, apart from those already mentioned. Before we too quickly dismiss the idea of &#8220;human dignity&#8221; as romanticized and outdated, we need to give it full consideration and ask whether that concept would suffer if human enhancement were unrestricted. Is there an obligation to enhance our children, or will parents feel pressure to do so? Might there be an &#8220;Enhancement Divide,&#8221; similar to the Digital Divide, that significantly disadvantages those without? If some people can interact with the world in ways that are unimaginable to others (such as echolocation or seeing in infrared), will that create a further &#8220;Communication Divide&#8221; such that people no longer share the same basic experiences in order to communicate with each other?</p>
<p>In this essay, we have tried to detail some of the challenges that nanotechnology and nanoethics will confront as applications to human enhancement become technologically viable. This will not be in the distant future, but rather sooner than many of us might have expected. It seems to the authors that a balanced and reasonable perspective is more appropriate than either polarizing extreme, if we are to responsibly and productively advance nanotechnology and its applications, particularly in light of the challenges to the pro-enhancement position that we have described.</p>
<hr />
<p><a name="f1"></a><a href="#r1">1.</a> Ramez Naam, <i>More Than Human</i> (Broadway Books, New York: 2005). See also <a href="http://www.morethanhuman.org" target="_blank">www.morethanhuman.org</a>.</p>
<p><a name="f2"></a><a href="#r2">2.</a> Robert A. Freitas Jr., &#8220;Exploratory Design in Medical Nanotechnology: A Mechanical Artificial Red Cell,&#8221; <i>Artificial Cells, Blood Substitutes, and Immobil. Biotech.</i> 26 (1998): 411-430.</p>
<p><a name="f3"></a><a href="#r3">3.</a> Naam (2005), pp.3-5.</p>
<p><a name="f4"></a><a href="#r4">4.</a> Naam (2005), p.5.</p>
<p><a name="f5"></a><a href="#r5">5.</a> For more discussion of these ideas, see Fritz Allhoff, &#8220;Germ-Line Genetic Enhancement and Rawlsian Primary Goods,&#8221; <i>Kennedy Institute of Ethics Journals</i> 15.1 (2005): 43-60.</p>
<p><a name="f6"></a><a href="#r6">6.</a> Norm Daniels, &#8220;Growth Hormone Therapy for Short Stature: Can We Support the Treatment/Enhancement Distinction?,&#8221; <i>Growth: Genetics &amp; Hormones</i> 8.S1 (1992): 46-8.</p>
<p><a name="f7"></a><a href="#r7">7.</a> Eric Juengst, &#8220;Can Enhancement Be Distinguished from Prevention in Genetic Medicine?,&#8221; <i>Journal of Medicine and Philosophy</i> 22 (1997): 125-42.</p>
<p><a name="f8"></a><a href="#r8">8.</a> Naam (2005), p.5.</p>
<p><a name="f9"></a><a href="#r9">9.</a> Naam (2005), p.5.</p>
<p><a name="f10"></a><a href="#r10">10.</a> Naam (2005), p.6.</p>
<p><a name="f11"></a><a href="#r11">11.</a> Naam (2005), pp.6-9.</p>
<p><a name="f12"></a><a href="#r12">12.</a> See, for example, Robert Nozick, <i>Anarchy, State, and Utopia</i> (New York: Basic Books, 1974).</p>
<p><a name="f13"></a><a href="#r13">13.</a> Naam (2005), p.9.</p>
<p><i>&copy; 2006 Patrick Lin and Fritz Allhoff. </i></p>
]]></content:encoded>
			<wfw:commentRss>https://www.writingsbyraykurzweil.com/nanoethics-and-human-enhancement/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Cultural Dominants and Differential MNT Uptake</title>
		<link>https://www.writingsbyraykurzweil.com/cultural-dominants-and-differential-mnt-uptake</link>
		<comments>https://www.writingsbyraykurzweil.com/cultural-dominants-and-differential-mnt-uptake#respond</comments>
		<pubDate>Thu, 30 Mar 2006 03:32:56 +0000</pubDate>
								<dc:creator>Damien Broderick</dc:creator>
		
		
				<category><![CDATA[essays]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[The impacts of radical and disruptive technologies such as molecular nanotechnology on societies deserve serious study by economists, sociologists and anthropologists. Would civil societies degenerate almost instantly into Hobbesian micro states, where the principal currency is direct power over other humans, expressed at the worst in sadistic or careless infliction of pain and consequent brutalization of spirit in slaves and masters alike?]]></description>
			<content:encoded><![CDATA[<p><i>Originally published in </i><a href="http://pages.unibas.ch/colbas/ntp/" target="_blank">Nanotechnology Perceptions: A Review of Ultraprecision Engineering and Nanotechnology</a><i>, Volume 2, No. 1, March 27 2006. Reprinted with permission on KurzweilAI.net March 30, 2006.</i></p>
<p>Can civil societies absorb the impact of MNT without degenerating almost instantly into Hobbesian micro states, where the principal currency is direct power over other humans, expressed at best as involuntary personal service and, at the worst, sadistic or careless infliction of pain and consequent brutalization of spirit in slaves and masters alike? It is a disturbing prospect, more worrying than crazed individuals or sectarian terrorists. Are we, indeed, doomed to this outcome through frailties in our evolved nature, unsuited to such challenges, or perhaps to the rapacity of the current global economy?<span id="more-80927"></span></p>
<p>A deeper question might be this: even if we assume that rich consumerist and individualist First World cultures like the USA might be prone to such collapse, is that true of all extant societies? Might more rigid or authoritarian societies have an advantage, if their citizens or subjects are too cowed by existing power structures to dash headlong into lawlessness? Might technologically simpler and poorer societies, possessing fewer goods to begin with and perhaps having fewer rising expectations, rebuff the temptations of MNT? Or might they seize upon such machines eagerly, but distribute them and their cornucopia, if only locally, on models of community or tribe unfamiliar to us in the West?</p>
<p>These seem to me extremely important issues that will require concentrated and imaginative study by economists, sociologists and anthropologists. Nearly half a century ago, the brilliant science fiction writer Damon Knight (1922-2002) published a parable salient to one possible sheaf of outcomes arising from successful and cheaply available molecular nanotechnological compilation of goods from cheap feedstocks. In his brief novel <i>A For Anything</i><a name="ref1"></a><sup><a href="#foot1">1</a></sup>, a radical device&#151;the Gismo&#151;duplicates any object within its field, including human beings. It needs no feedstock supply, and draws power from batteries, thereby apparently breaching conservation laws. This premise, although invalid given our current understanding of physics, does not dispel the force of Knight&#8217;s allegory: when matter compilers eventually turn information and cheap feedstocks into virtually any desirable good, the more disastrous consequences portrayed by Knight may, unfortunately, become feasible.</p>
<p>Given the exponential proliferation of Gismos that apparently provide everything people need without their working for it, including copies of the Gismo and its batteries, ordered western society collapses almost instantly. Water can be produced out of nothing (the &quot;quantum vacuum&quot;, perhaps), greening barren lands; plans to create spacecraft that generate their own fuel in flight seem set at first to remake the entire solar system. Melodramatically yet plausibly enough, alas, Knight projects an almost instant imposition of martial law and its failure, then, worse yet, general breakdown into lawlessness and acquisition by the brutal and canny of slaves, or &quot;slobs&quot;, who can be copied at will when they &quot;wear out&quot;. Within half a century, America sinks into a kind of feudalism where nothing, in effect, ever again changes, where innovation seems pointless if not intolerably disruptive.</p>
<p>Presciently, Knight realized that this kind of stable stagnation requires more than a simple duplicator, and added the proviso that Gismos can produce &quot;protes&quot; or &quot;arrested prototypes&quot;, &quot;a gnarled lump of quasi-matter that could be stored in a pigeonhole, and would keep forever&quot; (27). When an &quot;inhibitor&quot; is activated, the prote provides the information necessary to generate a complete copy of the original. In effect, the Gismo is equivalent to a nanofactory, using storable algorithms, although protes have the disadvantage of not being digitized, and hence not transmissible as information.</p>
<p>The question <i>A For Anything</i> raises is perhaps one for specialists in cultural change and diversity. My own specialties are discourse theory and science fiction, so all I can do here is suggest diffidently certain possibilities for analytical approaches that are currently unfashionable in the academy and in the business world, but might be of use in probing the unknown. In doing so, I draw upon schemata advanced equally diffidently in my book <i>Theory and its Discontents</i> (1997)<sup><a name="ref2"></a><a href="#foot2">2</a></sup>, and a range of overviews of the individual and culture conveniently summarized in several books by Ken Wilber, Don Beck, PhD, and others of their school<sup><a name="ref3"></a><a href="#foot3">3</a></sup>. Leaving aside the more metaphysical/ &quot;mystical&quot; aspects of his thought, Wilber has usefully condensed the work of some hundred specialists in a number of disciplines to yield a model of cultural phases.</p>
<p>To simplify brutally, Wilber and Beck propose that each society tends to segment, both through time and within a given period, according to a sequence of developmental stages. For shorthand, these are color-coded. The earliest&#151;though not &quot;simplest&quot;, each being as complex as the rest&#151;is Instinctive, directed to brute survival (beige), followed by tribal Animism (purple), impulsive Egocentrism (red), disciplined Authority (blue), managerial/ scientific Strategic (orange), communitarian Consensus (green), multicultural Ecology (yellow), and a sort of new age global Holism (turquoise), with perhaps several transcendent states beyond this highest level. These overlap to some degree at least with my own suggested cyclical cultural dominants, and several key stages match up with &quot;Three Systems of Action&quot; by Mike Treder and Chris Phoenix<a name="ref4"></a><sup><a href="#foot4">4</a></sup>.</p>
<p>Treder and Phoenix note three significantly different systems of response for social organization: <i>Guardian</i>, oriented principally around provision of security; <i>Commercial</i>, promoting science and trade; and <i>Informational</i>, devoted to abundance. It is easy to see that these Dominants (to borrow a term from the communications theory of Roman Jakobson) can be mapped against the most significant dynamics of certain periods, cultures, and elements of cultures. In Wilber&#8217;s terms, Guardian would be blue, and in the USA reflect Republican conservative family values; Commercial orange, representing scientific Enlightenment values; while Informational might perhaps be green, representing postmodern inclusive global or &quot;holistic&quot; values, enthusiasm for open source versus proprietary development of novelty, etc. The interactions between individuals and groups dominated by one mode or another can be troublesome and, indeed, mutually incomprehensible. Green, Wilber warns, tends to &quot;dissolve blue&quot;, which can wreak catastrophic damage on prickly red (tribal/gang) cultures or subcultures struggling to shift &quot;upward&quot; toward Enlightenment/ Commercial orange, by invalidating support for the intermediate &quot;conservative&quot; or blue Guardian stage in the interests of a premature holism.</p>
<p>My own analysis poses six sequential phases each half a century long and comprising two generations, punctuated by wars. The 300 years can be graphed as a sine curve&#151;an upward semicircle followed by a downward semicircle, each half comprising 150 years. (The full iterated sequence of roughly 50 year phases runs Algorithmic-We-I-It-Theory/Text-Code-Algorithmic&#8230;.) I propose no numerology here, attempting rather to draw together a number of separate analyses that seem to find certain recurrences at certain intervals, not all of them compressible into a single algorithm; one influence might enhance another, a third might tend to mute it. What&#8217;s more, recent human intervention on a planetary scale might be expected to have modified, extended or suppressed such cycles anyway&#151;although some of the theorists I quote below do carry their schemata forward into the second half of the twentieth century.</p>
<p>A similar model has been suggested in <i>Generations: the History of America&#8217;s Future, 1584 to 2069</i> by William Strauss and Neil Howe (New York: Morrow, 1991), whose parsed narrative discerns, like Modelski&#8217;s (below), a basic cycle four generations long, marked by disruptive &quot;secular&quot; and &quot;spiritual events&quot;. Cohorts&#151;individuals born within a given time-frame&#151;are said to resemble each other in temperament and trajectory more than they do those from earlier or later generations. The four phases, in order, are the Idealists (inner-driven, arrogant, creative), indulged in childhood after a secular event; the Reactives (disruptive in youth, pragmatic in maturity, uncultivated); the Civics (establishment figures); and the Adaptives (guilty conformists, aging into sensitive carers).<sup><a name="ref5"></a><a href="#foot5">5</a></sup></p>
<p>The three phases or tonalities characterized by Treder and Phoenix match fairly well with the 150-year half-cycle I discern between, say, 1850 and 2000, in which the doubled generations are characterized sequentially by the dominants I have dubbed IT (imperialism, Hot Peace, public art), THEORY (global war, religiosity, modernism) and CODE (Cold Peace, democracy, postmodernism). In tone, that half-cycle begins with what Australian historian and entrepreneur J. Penman, Ph.D., calls High Vigor and moderate Stress, through Mid Vigor and High Stress, to Low Vigor today but only Medium Stress.<a name="ref6"></a><sup><a href="#foot6">6</a></sup> These parameters are related to, and perhaps driven by, variations in child-rearing practices, and those in turn, historically, by availability of adequate or abundant nutrients, levels of perceived threat and security, etc.&#151;see note 6.</p>
<p>Very roughly, we might expect Guardian/IT cultural phases to attempt to impose strong centralized and hierarchical command over the ownership of nanofactories and any distribution of their socially disruptive cheap goods. Commercial/THEORY phases might use state power as well as conglomerate capital power to restrict or co-opt MNT. Informational/CODE phases are likely to embrace MNT and attempt to spread its benefits widely, perhaps to the whole world, and to resist conservative &quot;moral values&quot; restraints, corporate ownership, and copyrights. It is obvious, despite the natural affiliation of computer-savvy members of the Code or green generations, that very powerful forces will be strongly motivated to restrict MNT for reasons of private gain and public security, even in those societies falling increasingly under this dominant in the last 50 years.</p>
<p>The problem foreshadowed by Knight&#8217;s novel is that resistance to the free development and distribution of MNT might elicit regression to earlier dominants. In Wilber&#8217;s terms these are beige (instinctual/subservience to parents), purple (magical thinking) and red (egocentric), which map moderately well with the earlier (and subsequent) 150 year semi-cycle I have proposed, summarized briefly as ALGORITHMIC (global conflict, classicism, aristocracy), WE (feudal disorder, formal religion at nadir, superstition at zenith), and finally I (romanticism, beginning with successful revolutions and perhaps global war and culminating in thwarted revolutions). Historically, in the West, these three dominants held sway between 1700 and 1850, continuing on into the three phases previously described. On this model, which is consistent with classic long cycle analyses by G. Modelski<sup><a name="ref7"></a><a href="#foot7">7</a></sup> and others, we are arguably heading right now into a new algorithmic or phatic phase, with its attendant risks of banality, degeneration towards superstition, significant conflict (and perhaps the unexpected &quot;War on Terror&quot;&#151;and <i>by</i> culturally motivated terrorists and hegemonists&#151;is an index of this). Of course, such 300-year cycles&#151;which I trace back through at least three iterations, and probably much farther&#151;would presumably be interrupted forever by a Singularity, especially one in which drastic life extension becomes possible, thereby upsetting the already muddled traditional replacement of generations raised under consecutively different conditions. Nanotechnology is clearly one of the driving forces thrusting advanced technological cultures toward just such a Singularity. 
One question, therefore, is whether Wilber&#8217;s orange and green phases or waves can be sustained in their dominant roles at a time when external and internal factors are arguably impelling Western cultures, as well as their foes, toward what one might regard as more primitive dominants.</p>
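<p>As a purely illustrative aid (not part of the original analysis), the six-phase periodization sketched above can be reduced to a toy calculation. The anchoring of an ALGORITHMIC phase at 1700 and the rigid 50-year phase length are simplifying assumptions taken from the 1700-1850 / 1850-2000 mapping given in the text:</p>

```python
# Toy model of the six 50-year cultural Dominants described above.
# Assumptions (illustrative only): the cycle is exactly 300 years,
# and an ALGORITHMIC phase begins at 1700.
PHASES = ["ALGORITHMIC", "WE", "I", "IT", "THEORY/TEXT", "CODE"]

def dominant(year: int) -> str:
    """Return the cultural Dominant nominally in force for a given year."""
    return PHASES[((year - 1700) % 300) // 50]

# On this schematic reading, dominant(1860) is "IT", dominant(1975)
# is "CODE", and 2000 onward falls into a new ALGORITHMIC phase.
```

<p>The point of the sketch is only that the model is strictly periodic; as the essay notes, real historical influences overlap and mute one another rather than compressing into a single algorithm.</p>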
<p>Indeed, this kind of analysis might lend itself usefully to the study of contemporary cultures other than the Western. Should they all be regarded, however different they remain, as <i>in some sense</i> synchronized with the productive and informational drivers of the global economy? One suggestion I hesitantly made in my preliminary study is that societies throughout the world have been <i>traditionally</i> tied, far more than we might imagine, to a kind of global clock driven by variable insolation, and the impact of available solar energy upon climate and hence food supply. Again, even if this has been the case, it might no longer be so in an epoch where human-induced global warming is skewing traditional large-scale solar-modulated weather patterns, and in which global scientific production and transport of food and raw materials to a large extent obviates reliance upon local climatic conditions.<sup><a name="ref8"></a><a href="#foot8">8</a></sup></p>
<p>In any event, it seems arguable that an analysis of cultural dominants of this kind, and their differential impact, might provide some general guidance in our expectations of the near-future impact of any truly radical and disruptive technology such as MNT.</p>
<hr />
<p><a name="foot1"></a><a href="#ref1">1.</a> Damon Knight, <i>A For Anything</i>, 1965, New York: Walker Publishing Co. 1970; as <i>The People Maker</i> 1959; short story &quot;A for Anything&quot;, <i>The Magazine of Fantasy and Science Fiction</i>, Nov. 1957.</p>
<p><a name="foot2"></a><a href="#ref2">2.</a> Damien Broderick, <i>Theory and its Discontents</i>, Melbourne: Deakin University Press, 1997.</p>
<p><a name="foot3"></a><a href="#ref3">3.</a> Ken Wilber, <i>A Theory of Everything: An Integral Vision for Business, Politics, Science and Spirituality</i>, Boston: Shambala, 2000; Boomeritis, Boston: Shambala, 2002; I am grateful to futurist Professor Richard Slaughter for drawing my attention to Wilber&#8217;s work. See also the &quot;Spiral Dynamics&quot; of Don Beck, for example at <a href="http://www.integralworld.net/beck2.html" target="_blank">http://www.integralworld.net/beck2.html</a></p>
<p><a name="foot4"></a><a href="#ref4">4.</a> <a href="http://crnano.org/systems.htm" target="_blank">http://crnano.org/systems.htm</a></p>
<p><a name="foot5"></a><a href="#ref5">5.</a> What drives this recurrence, in Strauss and Howe&#8217;s view, is a cycle of nurturant practice. Underprotection in childhood creates a tendency in the adults so formed to pay more attention to their own children, so the next generation shows increasing nurturance. The third step is a generation smothered by overprotection, and the reaction to such stifling is a fourth phase of decreasing nurturance, which in turn leads back to the start of the cycle.</p>
<p>It is interesting that the linear progression suggested by Strauss and Howe resembles a compressed version of my own model and Wilber&#8217;s, with their four-step periodicity folded into every pair of consecutive Dominant regimes in mine. Inner-driven Idealists correspond in character with my &quot;I&quot; generations, Reactives with &quot;IT&quot; empiricism, Civics with &quot;THEORY/TEXT&quot; governance, and Adaptives with &quot;PHATIC/ALGORITHMIC&quot; conformity. Two stages are elided: &quot;CODE&quot;, following &quot;THEORY&quot;, and &quot;WE&quot;, following &quot;PHATIC&quot;, but the two models operate at different scales. Neither is there a gross discord between the order of the two sequences. No doubt this is connected with the individual life-stage structure that also underlies each model: Youth (which conflates &quot;WE&quot; and &quot;I&quot; stages), Rising Adults (&quot;IT&quot;), Midlife Adults (&quot;THEORY/TEXT&quot; plus the shift to &quot;CODE&quot;), Elders (the transition from &quot;CODE&quot; to &quot;PHATIC&quot; or &quot;RULE&quot;).</p>
<p><a name="foot6"></a><a href="#ref6">6.</a> Jim Penman, <i>The Hungry Ape</i>, Melbourne, 1992, cited Broderick, 1997.</p>
<p>To sketch briefly the broad basis of Penman&#8217;s mechanism, operating on cultures via typical patterns for discipline of their infants: Societies using early control tend to develop a politics based on group loyalty; in a time-frame of low Restraint they produce feudalism, and during high Restraint, they produce stable city states and nation states. Their populations are open to change, and have elaborate economic skills. By contrast, societies lacking early control favor a politics based on personal, face-to-face authority; low Restraint stretches of the cycle are marked by unstable control over regions with shifting borders, while during high Restraint regimes they build large imperial dominions. Their populations are tradition-bound, and less skilled (Penman, p. 184).</p>
<p><a name="foot7"></a><a href="#ref7">7.</a> George Modelski, <i>Long Cycles in World Politics</i>, Seattle: University of Washington Press, 1987.</p>
<p>If Modelski is correct, since 1494 the world system, parameterized in versions of the four Parsonian variables (economy, polity, societal community, and pattern maintenance or media/information apparatuses), has passed through five &quot;long cycles&quot;, each with four generational phases. The cycles run to a little more than a century each, and climax in devastating contests for world leadership. These global conflicts last between 23 and 31 years, with the same average as his cycle generation, 27 years. The turn of the millennium marked the exhausted stage of an American century, and, if no better and more humane means is devised for adjudicating leadership, the world would probably be doomed to a new global war in perhaps 2030 (but not until then).</p>
<p><a name="foot8"></a><a href="#ref8">8.</a> A somewhat different but arguably overlapping analysis was developed by Raymond H. Wheeler, a former professor of psychology at the University of Kansas and president of the Kansas Academy of Sciences, who constructed his own grand theory of cultural recurrence. Around the middle of the 20<sup>th</sup> century, Wheeler orchestrated a massive research project, drawing on up to two hundred co-workers, to reduce all of recorded history to coherent summary form. As the data from 2500 years of records were tabulated, he discerned a number of recurrent patterns world-wide. The most notable was a roughly 100-year climatic cycle, varying between 70 and 120 years, which seemed to fall into four predictable phases. From this periodicity, and drawing on then-prevalent doctrines of cultural and ethnic character, he theorized a regular swing of mass psychological emphasis between &quot;classical&quot; or &quot;centralist&quot; and &quot;romantic&quot; or &quot;individualist&quot; styles of community and culture, summarized in Ellsworth Huntington, <i>Mainsprings of Civilization</i>, [1945] 1959, New York: Mentor, 515-7. (Huntington was an explorer and Yale professor of geography and climatology whose books ranged from <i>Civilization and Climate</i> (1915) to his magnum opus, <i>Mainsprings of Civilization</i>, published two years before his death. His thesis of strong climatic determinism strikes us today as crankily ethnocentric at best, for he sought to discover why &quot;vigorous&quot; peoples like wealthy Euro-Americans were so much more successful than the &quot;indolent&quot;, &quot;feminized&quot; races nearer the equator or otherwise trapped and stultified by debilitating circumstances. In the era of the Asian Tigers on the Pacific Rim, not to mention the historic defeat of American military efforts by tropical Vietnamese and the current imbroglio in Iraq, this claim seems not just racist but ludicrous. 
We should not be entirely distracted, however, by our legitimate distaste for colonial premises and rhetoric. Huntington&#8217;s comparative ethnography remains a rich trove of data, usefully categorized, on historical and environmental flows in the fortunes of nations.)
</p>
<p>Obviously these climate-driven distinctions cannot be found literally everywhere simultaneously, because a global shift like the El Ni&ntilde;o vacillation will bring unusually abundant rain to one region while filching it from another. Still, events like the Maunder Minimum suggest that at least some secular climatic variations on the order of a century can be due to changes in the sun&#8217;s internal clock. It is feasible that more subtle variations depend on more regular solar pacemakers, such as the deep processes that also cause the sunspot cycle and perhaps (even in the absence of human intervention) modulate global warming and cooling.
</p>
<p>Wheeler and his team found their data was usefully schematized by a four-fold sequence: Warm-Wet, Warm-Dry, Cold-Wet, and Cold-Dry. Each contributed to a certain characteristic mode of collective behavior, so that &quot;similar events have occurred throughout history during the same phases of the 100-year climate cycle&quot; (Dewey and Mandino, <i>Cycles</i>, 1971, New York: Manor Books, 138). Adapting this model in brutally schematic form, and projecting 20 years (without taking account of drastic global climate change), we might map the 20<sup>th</sup> century thus (138-9):</p>
<table width="100%" border="1" cellspacing="0" cellpadding="0">
<tr>
<td>WARM/WET: 1900-24</td>
<td>WARM/DRY<br />1925-49</td>
<td>COLD/WET<br />1950-74</td>
<td>COLD/DRY<br />1975-1999</td>
<td>WARM/WET: 2000-24</td>
</tr>
<tr>
<td>early stability;<br />nationalism;<br />imperialist and<br />expansionary wars;<br />good crops;<br />genius flourishes;<br />prosperity</td>
<td>police states;<br />introversion;<br />surrealism;<br />economic collapse;<br />cruel mass war;<br />crops recover;<br />revival begins</td>
<td>individualism;<br />decentralized politics;<br />emancipation;<br />mechanical scholarship;<br />shift to anarchistic tone</td>
<td>weakened government;<br />migrations;<br />race riots;<br />class struggle;<br />revolution;<br />new leadership emerges</td>
<td>early stability;<br />nationalism;<br />imperialist and<br />expansionary wars;<br />good crops;<br />genius flourishes;<br />prosperity</td>
</tr>
</table>
<p>Since Wheeler announced his model just prior to the mid-century, this makes a prescient cultural display, although he missed greenhouse heating.</p>
<p><i>&copy; 2006 Damien Broderick. Reprinted with permission.</i></p>
]]></content:encoded>
			<wfw:commentRss>https://www.writingsbyraykurzweil.com/cultural-dominants-and-differential-mnt-uptake/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Globalization and Open Source Nano Economy</title>
		<link>https://www.writingsbyraykurzweil.com/globalization-and-open-source-nano-economy</link>
		<comments>https://www.writingsbyraykurzweil.com/globalization-and-open-source-nano-economy#respond</comments>
		<pubDate>Thu, 30 Mar 2006 01:19:53 +0000</pubDate>
								<dc:creator>Giulio Prisco</dc:creator>
		
		
				<category><![CDATA[essays]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[Some of the problems of today's globalized world could be eliminated or reduced by developing operational worldwide molecular design and manufacturing capabilities. Instead of shipping physical objects, their detailed design specification in a "Molecular Description Language" (MDL) will be transmitted over a global data grid evolved from today's Internet and then physically "printed" by "nano printers" at remote sites. This would allow communities wishing to remain independent to retain their autonomy. 

]]></description>
			<content:encoded><![CDATA[<p><i>Originally published in </i><a href="http://pages.unibas.ch/colbas/ntp/" target="_blank">Nanotechnology Perceptions: A Review of Ultraprecision Engineering and Nanotechnology</a><i>, Volume 2, No. 1, March 27 2006. Reprinted with permission on KurzweilAI.net March 30, 2006.</i></p>
<p>In this essay, I wish to raise my concern over some of the problems of today&#8217;s world, and try to suggest how they can be eliminated, or at least their negative impact be reduced, by developing operational worldwide molecular design and manufacturing capabilities.<span id="more-80926"></span></p>
<p>The Unabomber Manifesto (&quot;Industrial Society And Its Future&quot;) by Theodore Kaczynski is one of the most interesting documents of our times, in terms of both its history and its content. Thanks to the work of Information Technology pioneers such as some of the people he targeted, you can read the full text of the Unabomber Manifesto online.</p>
<p>Quoting from the <i>Wikipedia</i> article:</p>
<blockquote>
<p>The main argument of Industrial Society and Its Future is that technological progress is undesirable, can be stopped, and in fact should be stopped in order to free people from the unnatural demands of technology, so that they can return to a happier, simpler life close to nature. Kaczynski argued that it was necessary to cause a &quot;social crash&quot; before society became any worse. He believes a collapse of civilization is likely to occur at some point in the future; thus, it is better to end things now, rather than later, because the further society develops, the more painful things will be when the collapse occurs. If it does not occur, he says, humans will have the freedom and significance of house pets, although they may be happy, in a society dominated by machines or an elite social class.</p>
</blockquote>
<p>I am (and you are, I hope) definitely against Kaczynski&#8217;s final determinations. However, I have to agree with most critics who say that the Manifesto is very well written and that its conclusions, flawed as they are and despite the horrible acts of murder they spawned, are based on a well articulated analysis of some of the problems of today&#8217;s world.</p>
<p>One of Kaczynski&#8217;s central points is that the &quot;natural&quot; social and cultural environment for a human being is a relatively small community, not too dependent on the outside world for any necessary resource, where everyone has a chance to know everyone else and to actively contribute to the life of the community. He claims that an interconnected world in which the quality of each person&#8217;s life depends on things that take place far away is dehumanizing and cannot work without decreasing the freedom, the rights, and ultimately the happiness and well-being of people. He argues that the very technologies needed to sustain a globalized world contribute to creating more dehumanization. This produces a runaway feedback loop that can only result in an unnatural environment, putting far too much strain on our mental resources&#151;and at some point, something has to break.</p>
<p>So, Kaczynski wishes to go back to a world of loosely connected, relatively independent small communities. But this is difficult because in today&#8217;s world no small community could ever produce all that is needed to meet its own energy, food, communications, and health care requirements. Hence, Kaczynski proposes to break the technological foundations of our global civilization by any means, including murder.</p>
<p>The deep interconnectedness of today&#8217;s world also creates huge geopolitical tensions. The situation in the Middle East is a sad example of what can happen when the economy of one region is too strongly dependent on resources located in another region, and where too many players seek control over the complex planet-wide production and distribution networks crucial to the functioning of our global infrastructure.</p>
<p>(A big advantage of solar energy, and one of the main reasons why its deployment should be pursued much more aggressively, is that it can be produced locally by those who require it. A nation following this route would sharply reduce its vulnerability to hostile actions, and to the blackmail of others based on threatening to disrupt its energy supply. In addition, this would reduce that nation&#8217;s propensity to wage war against others for the control of energy supplies.)</p>
<p>I definitely do not want to go back to a pre-industrial age as Kaczynski proposes. Indeed, I like many aspects of globalization. I like that in some sense we can all regard ourselves as citizens of One World. I like that with the Internet I can know what happens and what people think on the other side of the planet, and that I can participate in virtual communities held together by common interests and values instead of geographic location. I like to see thinkers and doers from all over the world working together at near-thought speed to develop new ideas and goods.</p>
<p>So, I am definitely not a sympathizer of the anti-globalization movement. But I can see worth in some of the points they make, partly based on Kaczynski&#8217;s writings. Perhaps we can take their best arguments into account by recognizing that although the option of living in a global interconnected world is good for many, nobody should be forced to do so, and a local community of like-minded people who wish to live their lives in relative isolation from the rest of the world&#151;provided of course they do not oppress their citizens or threaten other communities&#151;should have the opportunity and the means to do so. A good, albeit perhaps extreme, example is in Damien Broderick&#8217;s <i>Transcension</i>.</p>
<p>Another problem of the modern world is that it is very difficult to build effective supranational governance bodies, because existing nation-states, especially those with a long history, refuse to give up sovereignty and power. This difficulty is often seen in the United Nations and in other supranational bodies such as the European Union. Few, if any, of today&#8217;s nation-states would seriously consider allowing such organizations to have real and effective decision-making power, let alone the means to enforce the decisions made. It appears that a gradual breakup of existing nation states into smaller entities, relatively autonomous but co-operating when co-operation is necessary for all parties involved, will be a necessary prerequisite for the creation of supranational governance structures including regional and world &quot;governments&quot;.</p>
<p>I have given two different but connected arguments for &#8220;small is beautiful.&#8221; And, speaking of small things, I believe that emerging NBIC<a name="ref"></a><a href="#foot">*</a> technologies, and in particular molecular nanotechnology, will offer the opportunity to retain the benefits of globalization while at the same time significantly reducing the dependence of local communities on the external world as far as the availability of material goods (food, medicines, energy, vehicles, toys, designer items, etc.) is concerned.</p>
<p>Richard Feynman was the first to articulate the possibility of molecular nanotechnology (although not by that name). In his 1959 essay, &quot;There&#8217;s Plenty of Room at the Bottom,&quot; he argued that there is nothing in the laws of physics to prevent us from building molecular size machines able to precisely place individual atoms and molecules according to design specifications and build complex structures and chemical compounds one atom at a time. Feynman wrote:</p>
<blockquote>
<p>It would be, in principle, possible (I think) for a physicist to synthesize any chemical substance that the chemist writes down. Give the orders and the physicist synthesizes it. How? Put the atoms down where the chemist says, and so you make the substance. The problems of chemistry and biology can be greatly helped if our ability to see what we are doing, and to do things on an atomic level, is ultimately developed&#151;a development which I think cannot be avoided.</p>
</blockquote>
<p>Eric Drexler, who coined the term &quot;nanotechnology&quot; and popularized it in <i>Engines of Creation&#151;The Coming Era of Nanotechnology</i>, was among the first to realize that nanotechnology will achieve its disruptive potential when molecular machines are able to build other molecular machines by assembling them from atoms and molecules available in their environment. Given such self-replicating nanotechnology, it is easy to see how, with suitable programming and assuming that all needed molecular &quot;bricks&quot; can be extracted from the environment (a safe assumption in most cases), it is possible to assemble any substance or structure for which detailed design specifications are available. So, our future economy will not be based on material goods, but on design specifications for material goods. We already have examples of this today:</p>
<p>A document can be transmitted over the Internet and reproduced, on screen or on paper, by whoever has to read it. This technology is available to nearly all consumers, at least in the Western world, at the (relatively) low cost of a PC, a printer, and an Internet connection.</p>
<p>A VHDL (VHSIC hardware description language) design specification for an application specific integrated circuit is as good as the device itself in the sense that it can be taken to a suitable hardware foundry and used to reproduce the device with an automated process. The fundamental difference from the previous example is that today one needs very complex and expensive machinery and extensive know-how to generate a physical instantiation of the device. But I think we can safely predict that the costs will drop and circuit printing will become more and more like document printing.</p>
<p>Instead of shipping physical objects, their detailed design specification in a &quot;Matter Description Language&quot; or &quot;Molecular Description Language&quot; (MDL) will be transmitted over a global data grid evolved from today&#8217;s Internet and then physically instantiated (&quot;printed&quot;) by &quot;nano printers&quot; at remote sites. The usage of nano printers, also called nanofactories, is described in Neal Stephenson&#8217;s <i>The Diamond Age</i>. The term &#8220;Matter Compiler&#8221; (MC) used by Stephenson in the novel is especially good as, by analogy with the software development process, it suggests the idea of organizing (compiling) matter from design specifications. Reading Stephenson&#8217;s descriptions of young Nell trying to use her mother&#8217;s cheap kitchen MC to compile clothes, toys, and mattresses makes it easier to understand the basic concepts of molecular manufacturing.</p>
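<p>As a minimal sketch of this idea, the fragment below models an MDL design as a plain JSON document that could be transmitted over the data grid and decoded by a remote printer. The field names and the &quot;drinking cup&quot; design are invented for illustration, since no real MDL exists:</p>

```python
import json

# Hypothetical "Molecular Description Language" document: instead of
# shipping the physical object, we ship its design specification.
design = {
    "name": "drinking_cup",
    "version": 1,
    "bricks": [  # molecular building blocks and how many of each
        {"molecule": "cellulose", "count": 500000},
        {"molecule": "pigment_blue", "count": 1200},
    ],
}

wire_format = json.dumps(design)    # transmitted over the global data grid
received = json.loads(wire_format)  # decoded by the remote "nano printer"
assert received == design           # the design survives the round trip
```

<p>Whatever the eventual format, the essential property is the one shown here: the specification is pure information, so it can be copied and transmitted at negligible cost, and only the final compilation step consumes local feedstock.</p>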
<p>Assuming it still exists at that time, the Coca-Cola Company will not sell physical cans, but will license the MDL description of its popular beverage for on-site compilation by customers. I assume Coca-Cola and all other commercial companies will need some means to enforce their intellectual property rights to make sure that customers pay what they are supposed to pay. This probably will be done by a limit on the number of times a given MDL design can be assembled by a given user, with protection technologies conceptually similar to those used today for Digital Rights Management (DRM). Of course, there will be plenty of 15-year-old hackers willing and able to crack whatever DRM protection scheme manufacturers can think of, and then make cracked, DRM-free design specs available on the global data grid.</p>
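<p>The per-user assembly limit described above can be sketched in a few lines of code. Everything here is hypothetical: no MDL format or licensing API exists, and the class name, design ID, and license fields are invented purely to illustrate the counting scheme:</p>

```python
# Hypothetical sketch of a per-user, per-design assembly limit.
# "MDL", the design ID, and every field below are invented for
# illustration; this shows the counting scheme, not a real DRM API.

class MdlLicense:
    """Grants one user a fixed number of compilations of one MDL design."""

    def __init__(self, design_id: str, user: str, max_uses: int):
        self.design_id = design_id
        self.user = user
        self.remaining = max_uses

    def authorize(self) -> bool:
        """Consume one use if any remain; refuse once the quota is spent."""
        if self.remaining <= 0:
            return False
        self.remaining -= 1
        return True

# A customer licensed for three on-site compilations of a beverage design:
lic = MdlLicense("beverage-classic", "alice", max_uses=3)
print([lic.authorize() for _ in range(4)])  # the fourth request is refused
```

<p>A real scheme would also have to make the counter tamper-resistant, which is exactly the part the hypothetical 15-year-old hackers would attack.</p>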
<p>I do not see any reason why molecular nanotechnology should change the basic laws of economics, so I assume that the MDL description of an Armani suit will cost as much as the Armani suit costs today. And I believe tomorrow&#8217;s designers of luxury items will be perfectly entitled to charge a lot of money for their creations. But what happens if the MDL descriptions of basic goods that a local community needs are priced beyond its reach? And what happens if these licenses are withdrawn for political reasons, perhaps to force a community to submit to an aggressor community or to an overreaching central authority?</p>
<p>Basic goods should be free, or priced within the means of everyone. In other words, Coca-Cola can be expensive, but water must be free. Armani suits can be expensive, but basic clothing must be free. Who will develop royalty-free MDL descriptions of basic goods that everyone on the planet can use? The answer, I think (or at least I hope), is that they will be developed with an Open Source development model by armies of MDL programmers.</p>
<p>In the online version of this essay, I make frequent use of <i>Wikipedia</i> articles as references for two reasons: first, I am fond of <i>Wikipedia</i> as one of the best examples of Open Source development; and second, <i>Wikipedia</i> articles are as good as, and often better than, equivalent articles in expensive encyclopedias. I can rest assured that all <i>Wikipedia</i> references that I use in this article will be maintained under the spontaneous quality assurance and control processes that are emerging within the <i>Wikipedia</i> community, and will be further improved by countless users and experts. So, linking to <i>Wikipedia</i> is much safer than linking to a commercial website that may disappear if the owner goes out of business. (If you are reading a hardcopy version of this essay and wish to have further information on the terms and concepts mentioned, please go to the URL <a href="http://en.wikipedia.org/" target="_blank">http://en.wikipedia.org/</a> and enter your search keywords.)</p>
<p>It seems likely that many of the arguments used today in favor of the Open Source movement will be applicable to tomorrow&#8217;s nanotech economy. The availability of Open Source MDL specifications for all basic goods will result, I believe, in a better world&#8212;a world where citizens and communities will be free to do their own thing (provided they do not reduce the right and ability of others to do the same) without having to give in to pressure and blackmail from hostile parties or meddlesome central authorities who threaten to disrupt their supply of basic material goods.</p>
<hr />
<p><a name="foot"></a><a href="#ref">*</a> Converging Technologies for Improving Human Performance: Nanotechnology, Biotechnology, Information Technology and Cognitive Science, Edited by Mihail C. Roco and William Sims Bainbridge, National Science Foundation, June 2002, <a href="http://www.wtec.org/ConvergingTechnologies/Report/NBIC_report.pdf" target="_blank">http://www.wtec.org/ConvergingTechnologies/Report/NBIC_report.pdf</a>
</p>
<p><i>&copy; 2006 Giulio Prisco</i></p>
]]></content:encoded>
			<wfw:commentRss>https://www.writingsbyraykurzweil.com/globalization-and-open-source-nano-economy/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Nano-Guns, Nano-Germs, and Nano-Steel</title>
		<link>https://www.writingsbyraykurzweil.com/nano-guns-nano-germs-and-nano-steel</link>
		<comments>https://www.writingsbyraykurzweil.com/nano-guns-nano-germs-and-nano-steel#respond</comments>
		<pubDate>Wed, 29 Mar 2006 01:11:18 +0000</pubDate>
								<dc:creator>Mike Treder</dc:creator>
		
		
				<category><![CDATA[essays]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[Within our lifetimes, we are likely to witness battles on a scale never before seen. Powered by molecular manufacturing, near-future wars may threaten our freedom, our way of life, and even our survival. Superior military technology allowed the Spanish to conquer the Incan empire in 1532. Could today's most powerful civilization, the United States, be just as easily conquered by a nano-enabled attacker?]]></description>
			<content:encoded><![CDATA[<p><i>Originally published in </i><a href="http://pages.unibas.ch/colbas/ntp/" target="_blank">Nanotechnology Perceptions: A Review of Ultraprecision Engineering and Nanotechnology</a><i>, Volume 2, No. 1, March 27 2006. Reprinted with permission on KurzweilAI.net March 29, 2006.</i></p>
<p>Conflicts, clashes, battles, and wars: this is the stuff of which history is made. The world as we know it today is largely a product of wars fought and peoples conquered.<span id="more-80925"></span></p>
<p>We like to look back admiringly on other things our species has produced: great works of art, brilliant inventions, sage philosophers, brave explorers, and selfless peacemakers. But the real star of the human story is war. In fact, very often those things we admire&#8212;philosophy, technology, leadership, superb writing and speechmaking&#8212;are put to maximum use in the service of war.</p>
<p>The story is not yet over. Within our lifetimes, we are likely to witness battles on a scale never before seen. Powered by molecular manufacturing, an advanced form of nanotechnology, these near-future wars<sup><a name="ref1"></a><a href="#foot1">1</a></sup> may threaten our freedom, our way of life, and even our survival. </p>
<p>Some wars are between opponents of roughly equal fighting ability. As a result, these conflicts tend to drag on, often for years, killing millions, until finally one side emerges victorious. Recent examples include the American Civil War, World War I, and World War II.</p>
<p>Occasionally one adversary will possess huge advantages over the other, in which case the war typically is quite short. A famous instance is the spectacular one-sided victory of Spanish conquistador Francisco Pizarro over the Incan empire in 1532. What makes this story so remarkable is that an army of 80,000 soldiers was overwhelmed and decimated in one day by a force of only 169 men.</p>
<p>Normally we would expect that an aggressor facing such great numbers would be a decided underdog, virtually assured of defeat. Jared Diamond, in his book <i>Guns, Germs, and Steel</i>,<sup><a name="ref2"></a><a href="#foot2">2</a></sup> analyzes this historic event&#8212;clearly a major turning point in the course of human civilization&#8212;and describes the elements that gave the Spaniards a stunningly easy victory.</p>
<p>Diamond lists superior military technology based on guns, steel weapons, and horses; infectious diseases; maritime technology; centralized political organization; and writing.</p>
<p>These advantages can be categorized as follows (with items from 1532 in parentheses):</p>
<ul>
<li>Battle technology (guns, steel weapons, and horses)</li>
<li>Physical fitness (infectious diseases)</li>
<li>Transportation technology (maritime)</li>
<li>Effective command and control (centralized organization)</li>
<li>Communications technology (writing)</li>
</ul>
<p>Looking forward, we can imagine a similar situation: an apparently strong nation, a superpower or empire within its realm, suddenly and overwhelmingly defeated by an adversary with superior technology and other advantages.</p>
<p>Molecular manufacturing&#8212;the ability to construct powerful, atomically precise products at an exponentially increasing pace&#8212;could provide the tools for a spectacular one-sided victory by an apparent underdog equipped with superior:</p>
<ul>
<li>Battle technology (nano-weapons)</li>
<li>Physical fitness (nano-enabled biotechnology)</li>
<li>Transportation technology (aerospace)</li>
<li>Effective command and control (boosted by nano-computing)</li>
<li>Communications technology (secure worldwide network)</li>
</ul>
<p>Despite vastly greater numbers, the Incas&#8212;the most developed civilization in the Americas&#8212;were not able to mount a serious resistance against the advanced technology of Spain.</p>
<p>Could today&#8217;s most powerful civilization, the United States, be conquered just as easily by a nano-enabled attacker? This appears possible, if molecular manufacturing does provide for huge gains in all five areas, as many analysts (including this author) believe it will.</p>
<p>No nation lacking the nanotech advantage will be able to resist a foe&#8212;no matter how small or weak in conventional terms&#8212;that wields the power of molecular manufacturing.<a name="ref3"></a><sup><a href="#foot3">3</a></sup></p>
<p>It is not certain, of course, that large-scale war will occur within the next few decades. But if it does, and if both (or all) sides are nano-enabled, that event could last a relatively long time, and casualties could be in the billions. If, on the other hand, only one combatant possesses the awesome capabilities of nano-built weapons, computers, and infrastructure, that war might be over very quickly, and could leave the victor in total command of the world.</p>
<hr />
<p><a name="foot1"></a><a href="#ref1">1.</a> Treder, Mike (2005) &#8220;War, Interdependence, &amp; Nanotechnology&#8221; (<i>Future Brief</i>) <a href="http://www.futurebrief.com/miketrederwar002.asp" target="_blank">http://www.futurebrief.com/miketrederwar002.asp</a></p>
<p><a name="foot2"></a><a href="#ref2">2.</a> Diamond, Jared (1997) <i>Guns, Germs, and Steel</i> (W. W. Norton, New York)</p>
<p><a name="foot3"></a><a href="#ref3">3.</a> Phoenix, Chris (2003) &#8220;Molecular Manufacturing: Start Planning&#8221; (<i>Public Interest Report</i>, 56:2) <a href="http://www.fas.org/faspir/2003/v56n2/nanotech.htm" target="_blank">http://www.fas.org/faspir/2003/v56n2/nanotech.htm</a></p>
<p><i>&copy; 2006 Mike Treder</i></p>
]]></content:encoded>
			<wfw:commentRss>https://www.writingsbyraykurzweil.com/nano-guns-nano-germs-and-nano-steel/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Molecular Manufacturing and 21st Century Policing</title>
		<link>https://www.writingsbyraykurzweil.com/molecular-manufacturing-and-21st-century-policing</link>
		<comments>https://www.writingsbyraykurzweil.com/molecular-manufacturing-and-21st-century-policing#respond</comments>
		<pubDate>Wed, 29 Mar 2006 00:57:33 +0000</pubDate>
								<dc:creator>Thomas J. Cowper</dc:creator>
		
		
				<category><![CDATA[essays]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[Will nanofactories foster global anarchy? Will nations devolve into a technologically-driven arms race, the winner dominating or destroying the planet with powerful molecular-manufacturing-enabled weapons? Or will the world's Big Brothers grow larger and more tyrannical, using advanced nanotechnology to "protect" their law abiding masses through increasing surveillance, control and internal subjugation? A law-enforcement executive asks the tough questions.]]></description>
			<content:encoded><![CDATA[<p><i>Originally published in </i><a href="http://pages.unibas.ch/colbas/ntp/" target="_blank">Nanotechnology Perceptions: A Review of Ultraprecision Engineering and Nanotechnology</a><i>, Volume 2, No. 1, March 27, 2006. Reprinted with permission on KurzweilAI.net March 29, 2006.</i></p>
<p>One of the fundamental questions driving any attempt at forecasting the future is: what kind of society do we want to live in? Or, for the farther future: what kind of society do we want our children to live in? How would widely available nanofactories change our lives and our world? Will multi-national corporations gain exclusive control of molecular manufacturing (MM), using it to dominate social institutions and dictate public policy from a purely capitalist and/or monopolist perspective? Will personal nanofactories foster global anarchy and create a form of modern tribalism based upon religion, ideology, or culture, and pit independent city-states or autonomous regions against one another? Will the world&#8217;s nations devolve further into a technologically-driven arms race with the winner dominating or destroying the planet with powerful MM-enabled weapons? <span id="more-80924"></span>Will the world&#8217;s Big Brothers grow larger and more tyrannical, using advanced nanotechnology to &quot;protect&quot; their law abiding masses through increasing surveillance, control and internal subjugation? Or, will personal freedom grow and evolve along with our technology, giving people and communities the ability to maintain their rights as individuals and protect the social welfare of their communities and nations while fostering global peace, security, and prosperity?</p>
<p>These questions and a host of others have no easy answers. One significant factor on the path to our future is our world as it exists today, a world largely dominated by governments and the forces they employ to maintain civil order and internal security. In today&#8217;s stable societies of the developed nations, government police and para-military forces provide the preponderance of domestic order maintenance services, enforcing criminal laws and ordinances, arbitrating physical disputes, investigating crimes, and responding to disasters&#8212;professional functions usually deemed appropriate in modern democracies to ensure the continued safety and security of a community or nation. These activities and the manner in which they are carried out will have a direct and profound impact on the kind of world we and our children will live in, particularly with regard to the maintenance of civil liberties and individual freedom.</p>
<p>It is important therefore to give careful consideration to the ways in which governments use technology today to provide for public safety and security, and how that might change as a result of new technological advances. We need to give close scrutiny to the capabilities afforded the civil police by modern technology&#8212;particularly the potential power bestowed by molecular nanotechnology and personal nanofactories&#8212;before these capabilities are realized. What capabilities do we want the police to have and which do we want to restrict? How much capability do they need in order to provide for public order and safety in an age of advanced nanotechnology? Are they capable of wielding the power afforded them through augmented reality, unmanned aerial vehicles, robots, surveillance, data-mining, and biometrics, technologies that will be greatly enhanced and widely distributed by personal nanofactories? Can we afford to place such power in the hands of government? And if not, what is the alternative for ensuring peace and social stability for the world&#8217;s billions?</p>
<p>As we consider the appropriate limits on police surveillance and enforcement capabilities we also need to consider the ways in which criminals and terrorists might exploit advanced technologies like personal nanofactories in carrying out their goals, and the impact their actions will have on liberty and democracy if they succeed. While government action can have dramatic and negative impacts on our ability to be and remain free, so too the actions of a lone criminal or terrorist group armed with advanced technology can have severe repercussions on the social psyche, and thereby the economy and stability of a nation or the world. Successful terrorist attacks and chronic criminal activities in a globalized world have a fundamentally destabilizing effect on communities and nations, often fostering highly reactionary programs and policies aimed at providing short-term safety for the many at the expense of liberty for a perceived few.</p>
<p>In other words, simply limiting police use of technology is no guarantee that civil liberties will be maintained. On the contrary, the public&#8217;s perception of danger will inevitably drive policing and security operations within communities and nations whether the civil police are equipped and empowered to act or not. Recent activities along the border between the United States and Mexico demonstrate that in today&#8217;s world, with the ready availability of advancing technology, someone will end up conducting police operations when communities believe they face criminal and terrorist threats that remain unchallenged by the civil police. Groups such as the Minutemen and the American Border Patrol are non-government organizations formed by average citizens, frustrated at their government&#8217;s lack of response to illegal crossings of the national border. Armed with widely available technology not currently utilized by the civil police (unmanned aerial vehicles with video cameras and wireless links for surveillance), and probably more than a few weapons, these groups conduct border interdiction operations outside of government sanction.</p>
<p>In the wake of the September 11<sup>th</sup> terrorist attacks, the US also has experienced a growing involvement in domestic security by military and private security forces. In the United States after 9/11, the Pentagon formed the US Northern Command, a first-ever strategic military command whose primary mission is to conduct domestic military operations&#151;essentially law enforcement and civil security missions&#151;in response to terrorist events and natural disasters. Similarly, private security agencies such as DynCorp and SAIC have taken on a much broader role within communities to combat terrorism and cyber-crimes such as identity theft and credit card fraud, filling a law enforcement and civil security niche that state and local police departments are either ill-equipped or unable to deal with.</p>
<p>Life in the 21<sup>st</sup> Century is only getting more complex. Information technologies and mass media confront the populace on a daily basis with graphic real-time images of death and destruction along with gripping narrative accounts of all the world&#8217;s problems, raising public fear and driving citizen demands for higher and higher levels of security. The specter of technology out of control is a frequent topic of popular books, movies and television, causing many people to question the wisdom of continued technological advancement. Molecular manufacturing and personal nanofactories will raise even further the level of public fear and create new conflicts and opportunities for criminal and terrorist groups to exploit to their advantage. </p>
<p>Advancing technology in general and molecular manufacturing in particular make predictions about the future difficult at best. Still, conceptualizing all the potential scenarios and contemplating new and appropriate strategies, programs and policies necessary to avoid a dystopian future is important, however imprecise. Regardless of the particulars, it seems clear that in a world of growing conflict and fear, policing and law enforcement will play a rather large role, for good or for ill. When communities and nations are threatened with or confronted by persistent criminal exploitation and catastrophic terrorist attacks, the public will demand action to prevent further personal danger, economic loss or social unrest. </p>
<p>The type of policing we end up with and its effectiveness at preventing significant harm while lowering public fear will be a factor governing the nature and extent of our civil liberties as MM and personal nanofactories become part of our world. What would our civil liberties look like after a major terrorist attack if the military, utilizing MM-enabled surveillance devices and weapons, is in the best position with the best capabilities to conduct domestic policing operations? What kind of society would ensue if all significant policing in our communities and nations is conducted by corporations and hired security guards? Whose civil liberties would be protected when concerned citizen groups and vigilantes take community security into their own hands and use personal nanofactories to arm themselves like the military? </p>
<p>Of all the organizations and entities capable and willing to conduct domestic policing and security missions, only the civil police are sworn to uphold the civil liberties of all people. The military is trained and equipped to defeat opposing armies on foreign battlefields, to seize objectives and kill anyone who stands in the way. Corporate security forces and privately paid police forces are focused on the bottom line and are loyal to those who pay them. Individual citizens, concerned citizen groups and vigilantes are concerned only with their own safety and the civil liberties of those within their own interest group. Nevertheless, each of the above groups will play a role in policing neighborhoods, enforcing laws, and providing domestic security. Each will be a necessary component for effectively securing our communities and nations from criminal and terrorist predators of the future. The challenge will be to create a model in which the actions of these groups complement one another, enhancing the collective effects of the whole, not working at cross purposes or creating additional conflicts that add to local, regional, national and global insecurity.</p>
<p>In a world of advanced technologies, molecular manufacturing capabilities, and personal nanofactories, an effective law enforcement process will be essential to peace and social stability. No single group can provide the right balance of domestic policing capabilities and each has dangerous tendencies that when employed in isolation can be detrimental to someone&#8217;s rights and freedoms. As with most of what troubles us in the information age, 20<sup>th</sup> Century solutions will not solve 21<sup>st</sup> Century problems. Centralization, parochialism and hierarchy are being replaced with distributed systems based upon collaboration across local, wide-area and global networks. The successful policing model of the future will need to move in this direction as well. To deal effectively with the challenges and dangers posed by tomorrow&#8217;s technologies, we must form a collaborative policing network, consisting of all citizens, agencies and forces with useful capabilities and appropriate law enforcement interest. A collective and collaborative effort will do a better job of upholding liberty for all people while providing the safety and security necessary for continued social and technological advancement.</p>
<p><i>&copy; 2006 Thomas J. Cowper</i></p>
]]></content:encoded>
			<wfw:commentRss>https://www.writingsbyraykurzweil.com/molecular-manufacturing-and-21st-century-policing/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Is AI Near a Takeoff Point?</title>
		<link>https://www.writingsbyraykurzweil.com/is-ai-near-a-takeoff-point</link>
		<comments>https://www.writingsbyraykurzweil.com/is-ai-near-a-takeoff-point#respond</comments>
		<pubDate>Tue, 28 Mar 2006 00:19:37 +0000</pubDate>
								<dc:creator>J. Storrs Hall</dc:creator>
		
		
				<category><![CDATA[essays]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[Computers built by nanofactories may be millions of times more powerful than anything we have today, capable of creating world-changing AI in the coming decades. But to avoid a dystopia, the nature (and particularly intelligence) of government (a giant computer program -- with guns) will have to change.
]]></description>
			<content:encoded><![CDATA[<p><i>Originally published in </i><a href="http://pages.unibas.ch/colbas/ntp/" target="_blank">Nanotechnology Perceptions: A Review of Ultraprecision Engineering and Nanotechnology</a><i>, Volume 2, No. 1, March 27 2006. Reprinted with permission on KurzweilAI.net March 28, 2006.</i></p>
<p>Ray Kurzweil consistently has predicted 2029 as the year to expect truly Turing-test capable machines. Kurzweil&#8217;s estimates<a name="ref1"></a><a href="#ftn1"><sup>1</sup></a> are based on a broad assessment of the progress in computer hardware, software, and neurobiological science. <span id="more-80922"></span></p>
<p>Kurzweil estimates that we need 10,000 teraops for a human-equivalent machine. Other estimates (e.g. Moravec<a name="ref2"></a><sup><a href="#ftn2">2</a></sup>) range from a hundred to a thousand times less. The estimates actually are consistent, as Moravec&#8217;s involve modeling cognitive functions at a higher level with ad hoc algorithms, whereas Kurzweil is assuming we&#8217;ll have to simulate brain function at a more detailed level.</p>
<p>So, the best-estimate range for human-equivalent computing power is 10 to 10,000 teraops.</p>
<p>The Moore&#8217;s Law curve for processing power available for $1000 (in teraops) is:</p>
<p align="center"><b>2000:</b> 0.001 <b>2010:</b> 1 <b>2020:</b> 1,000 <b>2030:</b> 1,000,000</p>
<p>Thus, sophisticated algorithmic AI becomes viable in the 2010s, and the brute-force version in the 2020s, as Kurzweil predicts. (Progress into atomically precise nanotechnology is expected to keep Moore&#8217;s Law on track throughout this period. Note that by the NNI definition, existing computer hardware with imprecise sub-100-nanometer feature sizes is <i>already</i> nanotechnology.)</p>
<p>However, a true AI would be considerably more valuable than $1000. To a corporation, a good decision-maker would be worth at least a million dollars. At a million dollars, the Moore&#8217;s law curve looks like this:</p>
<p align="center"><b>2000:</b> 1 <b>2010:</b> 1,000 <b>2020:</b> 1,000,000</p>
<p>In other words, based on processing power, sophisticated algorithmic AI is viable <i>now</i>. We only need to know how to program it.</p>
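<p>The two tables above follow one fixed growth rate: processing power per dollar multiplies by about 1000 every decade (doubling roughly once a year), and a 1000-fold larger budget simply shifts the curve a decade earlier. The sketch below makes that arithmetic explicit; the table values come from the essay, while the interpolating formula is an assumption fitted to them:</p>

```python
# Assumed fit to the essay's tables: 0.001 teraops per $1000 in the year
# 2000, growing 1000x per decade; budget scales the result linearly.

def teraops(year: int, budget_dollars: float = 1000.0) -> float:
    """Teraops purchasable for a given budget in a given year."""
    per_thousand_dollars = 0.001 * 1000 ** ((year - 2000) / 10)
    return per_thousand_dollars * (budget_dollars / 1000.0)

# Reproduces the $1000 curve: 0.001, 1, 1000, 1,000,000 teraops.
print([teraops(y) for y in (2000, 2010, 2020, 2030)])

# At $1,000,000 the same curve arrives a decade earlier: 1, 1000, 1,000,000.
print([teraops(y, 1e6) for y in (2000, 2010, 2020)])

# First year a $1000 budget reaches each end of the 10-10,000 teraops
# human-equivalence range (landing in the 2010s and 2020s respectively):
for target in (10, 10_000):
    print(target, next(y for y in range(2000, 2041) if teraops(y) >= target))
```

<p>On the same assumed fit, a $1,000,000 budget crosses both thresholds a decade sooner, which is the essay&#8217;s argument that algorithmic AI is viable now for buyers who value it at that price.</p>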
<p>Current brain scanning tools recently have become able to see the firing of a single neuron in real time. Brain scanning is on a track similar to Moore&#8217;s law, in a number of critical figures of merit such as resolution and cost. Nanotechnology is a clear driver here, as more sophisticated analysis tools become available to observe brains in action at ever-higher resolution in real time.</p>
<p>Cognitive scientists have worked out diagrams of several of the brain&#8217;s functional blocks, such as auditory and visual pathways, and built working computational models of them. There are a few hundred such blocks in the brain, but that&#8217;s all.</p>
<p>In the meantime, purely synthetic computer-based artificial intelligence has been proceeding apace: in the past decade it has beaten Kasparov at chess, proved a thorny new mathematical theorem that had eluded human mathematicians, and successfully driven off-road vehicles 100 miles.</p>
<p>Existing AI software techniques can build programs that are experts at any well-defined field. The breakthroughs necessary for such a program to learn for itself could easily happen in the next decade. It&#8217;s always difficult to predict breakthroughs, but it&#8217;s quite as much a mistake not to predict them. One hundred years ago, from roughly 1903 to 1907, the consensus of the scientific community was that powered heavier-than-air flight was impossible, <i>after the Wright brothers had flown</i>.</p>
<p>The key watershed in AI will be the development of a program that learns and extends itself. It&#8217;s difficult to say just how near such a system is, based on current machine learning technology, or to judge whether neuro- and cognitive science will produce the sudden insight necessary inside the next decade. However, it would be foolish to rule out such a possibility: all the other pieces are essentially in place now. Thus, I see runaway-AI as quite possibly the first of the &quot;big&quot; problems to hit, since it doesn&#8217;t require full molecular manufacturing to come online first.</p>
<p>A few points: The most likely place for strong AI to appear first is in corporate management; most other applications that make an economic difference can use weak AI (many already do); corporations have the necessary resources and clearly could benefit from the most intelligent management (the next most probable point of development is the military).</p>
<p>Initial corporate development could be a problem, however, because such AIs are very likely to be programmed to be competitive first, and worry about minor details like ethics, the economy, and the environment later, if at all. (Indeed, it could be argued that the fiduciary responsibility laws would require them to be programmed that way!)</p>
<p>A more subtle problem is that a learning system will necessarily be self-modifying. In other words, if we do begin by giving rules, boundaries, and so forth to a strong AI, there&#8217;s a good chance it will find its way around them (note that people and corporations already have demonstrated capabilities of that kind with respect to legal and moral constraints).</p>
<p>In the long run, what self-modifying systems will come to resemble can be described by the logic of evolution. There is serious danger, but also room for optimism if care and foresight are taken.</p>
<p>The best example of a self-creating, self-modifying intelligent system is children. Evolutionary psychology has some disheartening things to tell us about children&#8217;s moral development. The problem is that the genes, developed by evolution, can&#8217;t know the moral climate an individual will have to live in, so the psyche has to be adaptive on the individual level to environments ranging from inner-city anarchy to Victorian small town rectitude.</p>
<p>How it works, in simple terms, is that kids start out lying, cheating, and stealing as much as they can get away with. We call this behavior &quot;childish&quot; and view it as normal in the very young. They are forced into &quot;higher&quot; moral operating modes by demonstrations that they can&#8217;t get away with &#8220;immature&#8221; behavior, and by imitating (&quot;imprinting on&quot;) the moral behavior of parents and high-status peers.</p>
<p>In March 2000, computer scientist Bill Joy published an essay<a name="ref3"></a><sup><a href="#ftn3">3</a></sup> in <i>Wired</i> magazine about the dangers of likely 21<sup>st</sup>-century technologies. His essay claims that these dangers are so great that they might spell the end of humanity: bio-engineered plagues might kill us all; super-intelligent robots might make us their pets; gray goo might destroy the ecosystem.</p>
<p>Joy&#8217;s article begins with a passage from the &quot;Unabomber Manifesto,&quot; the essay by Ted Kaczynski that was published under the threat of murder. Joy is surprised to find himself in agreement, at least in part. Kaczynski wrote:</p>
<blockquote>
<p>First let us postulate that the computer scientists succeed in developing intelligent machines that can do all things better than human beings can do them. In that case, presumably all work will be done by vast, highly organized systems of machines and no human effort will be necessary. Either of two cases might occur. The machines might be permitted to make all of their own decisions without human oversight, or else human control over the machines might be retained.</p>
</blockquote>
<p>But that either/or distinction is a false one (Kaczynski is a mathematician, and in this case he commits a serious fallacy by applying pseudo-mathematical logic to the real world).</p>
<p>To understand just how complicated the issue really is, let&#8217;s consider a huge, immensely powerful machine we&#8217;ve already built, and see if the terms being applied here work in its context. The machine is the U.S. government and legal system. It is a lot more like a giant computer system than people realize. Highly complex computer programs are not sequences of instructions; they are sets of rules. This is explicit in the case of &quot;expert systems&quot; and implicit in the case of distributed, object-oriented, interrupt-driven, networked software systems. More to the point, sets of rules are programs.
</p>
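The distinction drawn here between instruction sequences and rule sets can be made concrete with a toy sketch; the rules, names, and the Python itself are invented illustrations, not anything from the essay:

```python
# A sequential program: a fixed order of steps, executed once.
def sequential(balance):
    balance += 100   # deposit
    balance -= 30    # withdrawal
    return balance

# A rule-based program: an unordered set of (condition, action) pairs,
# fired whenever their conditions match the current state -- the way
# expert systems (and, implicitly, rulebooks of law) operate.
RULES = [
    (lambda s: s["balance"] < 0, lambda s: s.update(flagged=True)),
    (lambda s: s.get("flagged"), lambda s: s.update(frozen=True)),
]

def run_rules(state, max_passes=10):
    """Fire every matching rule until the state stops changing."""
    for _ in range(max_passes):
        before = dict(state)
        for cond, action in RULES:
            if cond(state):
                action(state)
        if state == before:  # fixed point reached: no rule changed anything
            break
    return state

state = run_rules({"balance": -50})
print(state)  # the overdraft rule cascades into a freeze
```

Note that in the rule-based version no rule "calls" another; the freeze follows from the flag only because the conditions happen to chain. That emergent, order-independent behavior is what makes large rule systems, and the bureaucracies the essay compares them to, hard to predict.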
<p>Therefore, the government is a giant computer program&#8212;with guns. The history of the twentieth century is a story of such giant programs going bad and turning on their creators (the Soviet Union) or their neighbors (Nazi Germany) in very much the same way that Kaczynski imagines computers doing.</p>
<p>Of course, you will say that the government isn&#8217;t <i>just</i> a program; it&#8217;s under human control, after all, and it&#8217;s composed of people. However, it is both the pride and the shame of the human race that we will do things as part of a group that we never would do on our own&#8212;think of Auschwitz. Yes, the government is composed of people, but the whole point of the rules is to make them do different things&#8212;or do things differently&#8212;than they would otherwise. Bureaucracies famously exhibit the same lack of common sense as do computer programs, and are just as famous for a lack of human empathy.</p>
<p>But, virtual cyborg though the government may be, isn&#8217;t it still under human control? In the case of the two horror stories cited above, the answer is: yes, under the control of Stalin and Hitler respectively. The U.S. government is much more decentralized in power; it was designed that way. Individual politicians are very strongly tied to the wishes of the voters; listen to one talk and you&#8217;ll see just how carefully they have to tread when they speak. The government is very strongly under the control of the voters, but no individual voter has any significant power. Is this &quot;under human control&quot;? </p>
<p>The fact is that life in the liberal western democracies is as good as it has ever been for anyone anywhere (for corresponding members of society, that is). What is more, I would argue vigorously that a major reason is that these governments <i>are not</i> in the control of individuals or small groups. In the 20<sup>th</sup> century, worldwide, governments killed upwards of 200 million humans. The vast majority of those deaths came at the hands of governments under the control of individuals or small groups. It did not seem to matter that the mechanisms doing the killing were organizations of humans; it was the nature of the overall system, and the fact that it was a centralized autocracy, that made the difference.</p>
<p>Are Americans as a people so much more moral than Germans or Russians? Absolutely not. Those who will seek and attain power in a society, any society, are quite often ruthless and sometimes downright evil. The U.S. seems to have constructed a system that somehow can be more moral than the people who make it up. (Note that a well-constructed system being better than its components is also a feature of the standard model of the capitalist economy.)</p>
<p>This emergent morality is a crucial property to understand if we are soon to be ruled, as Joy and Kaczynski fear, by our own machines. If we think of the government as an AI system, we see that it is not under direct control of any human, yet it has millions of nerves of pain and pleasure that feed into it from humans. Thus in some sense it is under human control, in a very distributed and generalized way. However, it is not the way that Kaczynski meant in his manifesto, and his analysis seems to miss this possibility completely.</p>
<p>Let me repeat the point: It is possible to create (design may be too strong a word) a system that is controlled in a distributed way by billions of signals from people in its purview. Such a machine can be of a type capable of wholesale slaughter, torture, and genocide&#8212;but, if the system is properly controlled, people can live comfortable, interesting, prosperous, sheltered, and moderately free lives within it.</p>
<p>What about the individual, self-modifying, soon-to-be-superintelligent AIs? It shouldn&#8217;t be necessary to tie each one into the &#8220;will of the people&#8221;; just keep them under the supervision of systems that are tied in. This is a key point: the nature (and particularly intelligence) of government will have to change in the coming era.</p>
<p>Having morals is what biologist Richard Dawkins calls an &quot;evolutionarily stable strategy.&quot; In particular, if you are in an environment where you&#8217;re being watched all the time, such as in a foraging tribal setting or a Victorian small town, you are better off <i>being</i> moral than just pretending, since the pretending is extra effort and involves a risk of getting caught. It seems crucial to set up such an environment for our future AIs.</p>
<p>Back to Bill Joy&#8217;s <i>Wired</i> article: he next quotes from Hans Moravec&#8217;s book <i>Robot: Mere Machine to Transcendent Mind</i>,<sup><a name="ref4"></a><a href="#ftn4">4</a></sup> &quot;Biological species almost never survive encounters with superior competitors.&quot; Moravec suggests that the marketplace is like an ecology where humans and robots will compete for the same niche, and he draws the inevitable conclusion. </p>
<p>What Moravec is describing here is not true biological competition; he&#8217;s just using that as a metaphor. He&#8217;s talking about economic displacement. We humans are cast in the role of the makers of buggy whips. The robots will be better than we are at everything, and there won&#8217;t be any jobs left for us poor incompetent humans. Of course, this sort of thing has happened before, and it continues to happen even as we speak. Moravec merely claims that this process will go all the way, displacing not just physical and rote workers, but everybody.</p>
<p>There are two separable questions here: Should humanity as a whole build machines that do all its work for it? And, if we do, how should the fruits of that productivity be distributed, if not by existing market mechanisms?</p>
<p>If we say yes to the first question, would the future be so bad? The robots, properly designed and administered, would be working to provide all that wealth for mankind, and we would get the benefit without having to work. Joy calls this &quot;a textbook dystopia&quot;, but Moravec writes, &quot;Contrary to the fears of some engaged in civilization&#8217;s work ethic, our tribal past has prepared us well for lives as idle rich. In a good climate and location the hunter-gatherer&#8217;s lot can be pleasant indeed. An afternoon&#8217;s outing picking berries or catching fish&#8212;what we civilized types would recognize as a recreational weekend&#8212;provides life&#8217;s needs for several days. The rest of the time can be spent with children, socializing, or simply resting.&quot;</p>
<p>In other words, Moravec believes that, in the medium run, handing our economy over to robots will reclaim the birthright of leisure we gave up in the Faustian bargain of agriculture. </p>
<p>As for the second question, about distribution, perhaps we should ask the ultra-intelligent AIs what to do.</p>
<hr />
<p><a name="ftn1"></a><a href="#ref1">1.</a> Kurzweil, Ray (2005) <i>The Singularity Is Near: When Humans Transcend Biology</i> (Viking Adult)</p>
<p><a name="ftn2"></a><a href="#ref2">2.</a> Moravec, Hans (1997) &#8220;When will computer hardware match the human brain?&#8221; (<i>Journal of Evolution and Technology</i>) <a href="http://www.transhumanist.com/volume1/moravec.htm" target="_blank">http://www.transhumanist.com/volume1/moravec.htm</a></p>
<p><a name="ftn3"></a><a href="#ref3">3.</a> Joy, Bill (2000) &#8220;Why the future doesn&#8217;t need us.&#8221; (<i>Wired Magazine</i>, Issue 8.04) <a href="http://www.wired.com/wired/archive/8.04/joy.html" target="_blank">http://www.wired.com/wired/archive/8.04/joy.html</a></p>
<p><a name="ftn4"></a><a href="#ref4">4.</a> Moravec, Hans (2000) <i>Robot: Mere Machine to Transcendent Mind</i> (Oxford University Press, USA)</p>
<p><i>&copy; 2006 J. Storrs Hall</i></p>
]]></content:encoded>
			<wfw:commentRss>https://www.writingsbyraykurzweil.com/is-ai-near-a-takeoff-point/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>The Need For Limits</title>
		<link>https://www.writingsbyraykurzweil.com/the-need-for-limits</link>
		<comments>https://www.writingsbyraykurzweil.com/the-need-for-limits#respond</comments>
		<pubDate>Fri, 24 Mar 2006 00:36:29 +0000</pubDate>
								<dc:creator>Chris Phoenix</dc:creator>
		
		
				<category><![CDATA[essays]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[Molecular manufacturing will give its wielders extreme power and has the potential to remove or bypass many of today's limits, including laws. That could lead to a planet-wide dictatorship, or to any of several forms of irreversible destruction. Perhaps the biggest problem of all will be how to develop a system of near-absolute power that will not become corrupt.]]></description>
			<content:encoded><![CDATA[<p><i>Preprint from </i>Nanotechnology Perceptions <i>March 27, 2006. Published on KurzweilAI.net March 24, 2006.</i></p>
<p><i>The Center for Responsible Nanotechnology (CRN) has created a series of new research papers in which industry experts predict profound impacts of nanotechnology on society. The first set of 11 of these original essays by members of CRN&#8217;s Global Task Force will appear in the March 27 issue of the journal </i>Nanotechnology Perceptions<i>. KurzweilAI.net will syndicate these essays over that week. In this preview, Chris Phoenix, CRN&#8217;s director of research, presents the challenge of how to deal with possible unintended consequences of molecular manufacturing. </i><span id="more-80918"></span></p>
<p>Humans are good at pushing limits. We can survive in scorching deserts and in the frozen Arctic. We have flown faster than sound and sent robots to other planets. We have managed, with help from fossil fuels, to feed six billion people. Even before we had motors and technological navigation equipment, some of us were able to find and colonize islands in the middle of the vast Pacific Ocean. </p>
<p>Pushing limits has its darker side as well. Humans are not good at respecting each other&#8217;s rights; the ferocity of the Mongol hordes remains legendary, and the 20<sup>th</sup> century provides multiple examples of state-sponsored mass murder. Natural limits frequently are pushed too far, and whole civilizations have been wiped out by environmental backlash. We are too good at justifying our disrespect of limits, and then we often become increasingly destructive as the problem becomes more acute. More than a century ago, Lord Acton warned that &quot;absolute power corrupts absolutely.&quot; This can be restated as, &quot;Complete lack of limits leads to unlimited destruction.&quot;</p>
<p>Molecular manufacturing has the potential to remove or bypass many of today&#8217;s limits. It is not far wrong to say that the most significant remaining limits will be human, and that we will be trying our hardest to bypass even those. To people with faith in humanity&#8217;s good nature and high potential, this will come as welcome news. For many who have studied history, it will be rather frightening. A near-total lack of limits could lead straight to a planet-wide dictatorship, or to any of several forms of irreversible destruction.</p>
<p>Many of the plans that have been proposed to deal with molecular manufacturing, by CRN and others, assume (usually implicitly) that the plan will be implemented within some bigger system, such as the rule of law. This will be problematic if molecular manufacturing is powerful enough that its users can make their own law. We cannot assume that existing world systems will continue to provide a framework in which molecular manufacturing will play out. Those systems that adopt the new technology will be transformed; those that do not will be comparatively impotent. We will have to find ways for multiple actors empowered by molecular manufacturing to coexist constructively, without reliance on the stabilizing forces provided by today&#8217;s global institutions.</p>
<p>Any active system without limits will run off the rails. The simplest example is a reproducing population, which will indulge in exponential growth until it exhausts its resources and crashes. Another example can be found in the &quot;excesses&quot; of behavior that are seen in political revolutions. Human systems need limits as much as any other system, for all that we try to overcome them.</p>
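The overshoot-and-crash dynamic of an unchecked reproducing population can be sketched numerically. Every number in this Python toy model (growth rate, resource stock, per-capita need) is an illustrative assumption, not a claim from the essay:

```python
# Toy overshoot model: a population grows 40% per step while drawing
# down a fixed, non-renewing resource stock; once the stock is gone,
# the population collapses. All parameters are illustrative.

def simulate(pop=10.0, resources=10_000.0, growth=0.4, need=1.0, steps=30):
    history = []
    for _ in range(steps):
        demand = pop * need
        if demand <= resources:
            resources -= demand
            pop *= 1 + growth       # unlimited growth while supplies last
        else:
            pop = resources / need  # crash to what the remainder supports
            resources = 0.0
        history.append(round(pop, 1))
    return history

history = simulate()
print("peak:", max(history), "final:", history[-1])
```

With these parameters the population grows for a while, then collapses to zero; changing the growth rate only delays the crash rather than avoiding it, which is the point of the paragraph above.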
<p>Through all of history, the presence of limits has been a reasonable assumption. Nations were limited by other nations; populations were limited by geography, climate, or disease; and societies would sometimes be stable long enough to develop and agree on a morality that provided additional useful limits. A society that overstepped its bounds could expect to collapse or be out-competed by other societies.</p>
<p>It&#8217;s tempting to think that humanity has developed a new worldview&#8212;the Enlightenment&#8212;that will provide internal moral limits. However, the Enlightenment may be fading. It was supported by, and synergistic with, the brief period when people could be several times more productive with machines than by manual labor. During that period, individual people were quite valuable. However, now that we&#8217;re developing automation, people can be many times as productive (not just several times), and we don&#8217;t need all that productivity. And indeed, as abundance develops into glut, Enlightenment values and practices may be fading.</p>
<p>It&#8217;s tempting to think that, left to themselves, people will be generally good. History, in both microcosm and macrocosm, shows that this doesn&#8217;t work any better than Communism did. Without sufficient external limits, some people will start cheating, or choosing to violate the moral code of their society. Not only will this reduce benefits for everyone, but the ingrained human aversion to being taken advantage of will cause others to join the cheaters if they can&#8217;t prevent them. This leads to a vicious cycle, and the occasional saint won&#8217;t be enough to stop the degeneration.</p>
<p>It&#8217;s tempting to think that, now that we have digital computers, everything has changed and the old rules of scarcity and competition needn&#8217;t apply. As explored in CRN&#8217;s paper &quot;Three Systems of Action,&quot;<sup><a name="ref1"></a><a href="#foot1">[i]</a></sup> digital data transfer can be unlimited-sum, with benefit unrelated to and far larger than the cost. But digital information does not replace existing systems or issues wholesale. And increasing Internet problems such as spam, phishing, and viruses demonstrate that domains of digital abundance and freedom cannot moderate their own behavior very well.</p>
<p>It&#8217;s tempting to think that an ongoing power struggle between human leaders would provide limits. But in an age of molecular manufacturing, this seems unlikely for two reasons. First, such a competition almost certainly would be unstable, winner-take-all, and end up in massive oppression: no better than simply starting out with a dictatorship. Second, the contest probably would shift quickly to computer-assisted design and attack, and that would be even worse than all-out war between mere humans, even humans assisted by molecularly manufactured weapons. Civilians would probably be a major liability in such conflicts: easy to kill and requiring major resources (not to mention oppressive lifestyle changes) to defend.</p>
<p>Molecular manufacturing will give its wielders extreme power&#8212;certainly enough power to overcome all significant non-human limits (at least within the context of the planet; in space, there will be other limits such as scarcity of materials and speed of light). Even if the problem of cheaters could be overcome, we do not have many internal limits these days; the current trend in capitalism is to deny the desirability of all limits except those that arise from competition. What&#8217;s left?</p>
<p>Somehow, we have to establish a most-powerful system that limits itself and provides limits for the rest of our activities. Long ago, Eric Drexler proposed an Active Shield.<a name="ref2"></a><sup><a href="#foot2">[ii]</a></sup> Others have proposed building an AI to govern us&#8212;though they have not explained how to build internal limits into the AI. I have proposed creating a government of people who have accepted modifications to their biochemistry to limit some of their human impulses. All of these suggestions have problems.</p>
<p>Open communication and accountability may supply part of the answer. David Brin has proposed &quot;reciprocal accountability.&quot;<sup><a name="ref3"></a><a href="#foot3">[iii]</a></sup> It&#8217;s been noted that democracies, which embody transparency and accountability, rarely have famines or go to war with each other. Communication and accountability may be able to overcome the race to the bottom that happens when humans are left to their own devices. But communication and accountability depend on creation and maintenance of the infrastructure; on continued widespread attention; and on forensic ability (being able to connect effect back to cause in order to identify perpetrators). Recent trends in US media and democracy are not encouraging; it seems people would rather see into bedrooms than boardrooms. And it&#8217;s not clear whether people&#8217;s voices will still matter to those in power once production becomes sufficiently automated that nation-scale productivity can be maintained with near-zero labor.</p>
<p>If we can somehow find meta-limits, then within those limits a variety of administration methods may work to optimize day-to-day life. In other words, the problem with administrative suggestions is not inherent in the suggestions themselves; it is that the suggestions rely on something else to provide limits. Without limits, nothing can be stable; with limits, wise administration will still be needed, and best practices should be researched. But perhaps the biggest problem of all will be how to develop a system of near-absolute power that will not become corrupt.</p>
<hr />
<p><a name="foot1"></a><a href="#ref1">[i]</a> <a href="http://crnano.org/systems.htm" target="_blank">http://crnano.org/systems.htm</a></p>
<p><a name="foot2"></a><a href="#ref2">[ii]</a> <a href="http://www.foresight.org/EOC/EOC_Chapter_11.html#section04of05" target="_blank">http://www.foresight.org/EOC/EOC_Chapter_11.html#section04of05</a></p>
<p><a name="foot3"></a><a href="#ref3">[iii]</a> <a href="http://davidbrin.blogspot.com/2005/09/another-pause-this-time-for-soa.html" target="_blank">http://davidbrin.blogspot.com/2005/09/another-pause-this-time-for-soa.html</a></p>
<p><i>&copy; 2006 Chris Phoenix. Reprinted with permission.</i></p>
]]></content:encoded>
			<wfw:commentRss>https://www.writingsbyraykurzweil.com/the-need-for-limits/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Revolution in a Box: An Interview with the Center for Responsible Nanotechnology</title>
		<link>https://www.writingsbyraykurzweil.com/revolution-in-a-box-an-interview-with-the-center-for-responsible-nanotechnology</link>
		<comments>https://www.writingsbyraykurzweil.com/revolution-in-a-box-an-interview-with-the-center-for-responsible-nanotechnology#respond</comments>
		<pubDate>Wed, 22 Mar 2006 14:05:22 +0000</pubDate>
								<dc:creator>Mike Treder, Chris Phoenix, Jamais Cascio</dc:creator>
		
		
				<category><![CDATA[essays]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[The Center for Responsible Nanotechnology has a modest goal: to ensure that the planet navigates the emerging nanotech era safely.  CRN's founders discuss the promises and perils of nanotechnology, as well as the need for a middle ground between resignation and relinquishment.]]></description>
			<content:encoded><![CDATA[<p><i>Originally published on </i><a href="http://www.worldchanging.com/archives/004078.html" target="_blank">WorldChanging</a> <i>February 6, 2006. Reprinted on KurzweilAI.net March 23, 2006.</i></p>
<p><i>Founded in December 2002, the <a href="http://www.crnano.org" target="_blank">Center for Responsible Nanotechnology</a> has a modest goal: to ensure that the planet navigates the emerging nanotech era safely. That&#8217;s a lot for a couple of volunteers to shoulder, but Mike Treder and Chris Phoenix have carried their burden well, and done much to raise awareness of the potential risks and benefits of molecular manufacturing, including <a href="http://crnano.org/EPAhandout.htm" target="_blank">a major presentation</a> at the US Environmental Protection Agency on the impacts of nanotechnology&#8230;.</i><span id="more-80917"></span></p>
<p><i>[WorldChanging] conducted this interview as a series of email exchanges over the last few months. This post captures (and organizes) the highlights of that conversation. Mike, Chris&#8212;thank you. Your work is one of the reasons we have optimism for the future. &#8212; Jamais Cascio</i></p>
<p><b>WorldChanging:</b> So, to start&#8212;what is the Center for Responsible Nanotechnology hoping to make happen?</p>
<p><b>Center for Responsible Nanotechnology:</b> We want to help create a world in which advanced nanotechnology&#8212;molecular manufacturing&#8212;is widely used for beneficial purposes, and in which the risks are responsibly managed. The ability to manufacture highly advanced nanotech products at an exponentially accelerating pace will have profound and perilous implications for all of society, and our goal is to lay a foundation for handling them wisely.</p>
<p><b>WC:</b> So you set up a non-profit. How is that going?</p>
<p><b>CRN:</b> CRN is a volunteer organization. We have no paid positions. Our co-founders have dedicated time to this cause in lieu of professional paying careers. But the thing is, technical progress toward nanotechnology is really accelerating, and it&#8217;s become more urgent than ever for us to examine the global implications of this technology and begin designing wise and effective solutions.</p>
<p>It won&#8217;t be easy. CRN needs to grow, quickly, to meet the expanding challenge. We&#8217;re asking people who share the belief that our research must keep moving ahead to support us with small or large donations.</p>
<p><b>WC:</b> One of the unusual aspects of CRN is that you&#8217;re neither a nanotech advocacy group nor unmoving nanotech critics. Your focus is on the responsible development and deployment of next-generation nanotechnologies. Tell me a bit about what &quot;responsible nanotechnology&quot; looks like.</p>
<p><b>CRN:</b> You&#8217;re right that we have tried hard to stay in a &quot;middle&quot; place. We sometimes refer to it as between resignation (forsaking attempts to manage the technology) and relinquishment (forsaking the technology altogether). Our view is that advanced nanotechnology&#8212;molecular manufacturing&#8212;should be developed as fast as it can be done safely and responsibly. We&#8217;re promoting responsible rapid development of the technology&#8212;not because we believe it is safe, but because we believe it is risky&#8212;and because the only realistic alternative to responsible development is irresponsible development.</p>
<p><b>CRN:</b> So, what does &#8216;responsible&#8217; mean? First, that we take effective precautions to forestall a new arms race. Second, that we do what is necessary to prevent a monopoly on the technology by one nation, one bloc of nations, or one multinational corporation. Third, that we seek appropriate ways to share the tremendous benefits of the technology as widely as possible; we should not allow a &#8216;nano-divide.&#8217; Fourth, that we recognize the possibilities for both positive and negative impacts on the environment from molecular manufacturing, and that we adopt sensible global regulations on its use. And fifth, that we understand and take precautions to avert the risk of severe economic disruption, social chaos, and consequent human suffering.</p>
<p><b>WC:</b> How does the &quot;responsible&quot; approach differ from something like the &quot;<a href="http://www.worldchanging.com/archives/000375.html" target="_blank">Precautionary Principle</a>?&quot; What&#8217;s your take on the concept of &quot;precaution&quot; applied to emerging technologies?</p>
<p><b>CRN:</b> One of our earliest published papers was on that very topic. It&#8217;s called &quot;<a href="http://crnano.org/precautionary.htm" target="_blank">Applying the Precautionary Principle to Nanotechnology</a>.&quot; CRN&#8217;s analysis shows that there are actually two different forms of the Precautionary Principle, something that many people don&#8217;t realize. We call them the &#8216;strict form&#8217; and the &#8216;active form.&#8217;</p>
<p>The strict form of the Precautionary Principle requires inaction when action might pose a risk. In contrast, the active form calls for choosing less risky alternatives when they are available, and for taking responsibility for potential risks. Because the strict form of the Precautionary Principle does not allow consideration of the risks of inaction, CRN believes that it is not appropriate as a test of molecular manufacturing policy.</p>
<p>The active form of the Precautionary Principle, however, seems quite appropriate as a guide for developing molecular manufacturing policy. Given the extreme risks presented by misuse of nanotechnology, it appears imperative to find and implement the least risky plan that is realistically feasible. Although we cannot agree with the strict form of the Precautionary Principle, we do support the active form.</p>
<p><b>WC:</b> What is the <a href="http://www.crnano.org/CTF.htm" target="_blank">CRN Task Force</a>, and what do you hope to have it accomplish? [Disclaimer: I am a member of the CRN Task Force.]</p>
<p><b>CRN:</b> Without mutual understanding and cooperation on a global level, the hazardous potentials of advanced nanotechnology could spiral out of control and deny any hope of <a href="http://www.worldchanging.com/archives/002365.html" target="_blank">realizing the benefits to society</a>. We&#8217;re not willing to leave the outcome to chance.</p>
<p>So, last August we announced the formation of a new Task Force, convened to study the societal implications of this rapidly emerging technology. We&#8217;ve brought together a diverse group of more than 60 world-class experts from multiple disciplines to assist us in developing comprehensive recommendations for the safe and responsible use of nanotechnology.</p>
<p>Our first project is just nearing completion. Members of the task force have written a series of essays describing their greatest concerns about the potential impacts of molecular manufacturing. We have completed editing approximately 20 excellent articles, ranging from economic and security issues to the implications of human enhancement and artificial intelligence. They will be published in the March 2006 issue of <a href="http://pages.unibas.ch/colbas/ntp/" target="_blank">Nanotechnology Perceptions</a>, an academic journal maintained by a couple of European universities. We will simultaneously publish the essays at the <a href="http://wise-nano.org" target="_blank">Wise-Nano.org</a> website, where anyone can read and comment on them.
</p>
<p align="center">&#149; &#149; &#149;</p>
<p align="center"><img loading="lazy" decoding="async" src="/images/Cascio_nanofactory.jpg" width="296" height="197" border="1"/></p>
<p><b>WC:</b> We&#8217;ve discussed the <a href="http://www.worldchanging.com/archives/003445.html" target="_blank">different kinds of nanotechnology</a> on WorldChanging, and you folks posted a <a href="http://crnano.typepad.com/crnblog/2005/09/nanoparticles_n.html" target="_blank">very useful follow-up</a> to one of our pieces on that subject. To be clear, when we talk about &quot;nanotechnology&quot; in this context, we&#8217;re talking about &quot;nanofactories.&quot; So let&#8217;s drill down a bit on that particular subject. What kinds of things could an early version of a nanofactory make? Are we just talking desktop printing of simple physical objects (like a cup), items embedding diverse materials &amp; electronics (like a laptop), or organic and biochemical materials (like medicines or food)?</p>
<p><b>CRN:</b> The first, tiny nanofactory will be built by intricate laboratory techniques; then that nanofactory will have to build a bigger one, and so on, many times over. This means that even the earliest usable nanofactory will necessarily work extremely fast and be capable of making highly functional products with moving parts. So, in addition to laptops and phones, an early nanofactory should be able to make cars, home appliances, and a wide array of other products.</p>
<p>Medicines and food will not be early products. A large number of reactions will be required to make the vast variety of organic molecules. Some molecules will be synthesized more easily than others. It may work better first to build (using a nanofactory) an advanced fluidic system that can do traditional chemistry.</p>
<p>Food will be especially difficult because it contains water. Water is a small molecule that would float around and gum up the factory. Also, food contains a number of large and intricate molecules for taste and smell; furthermore, nourishing food requires mineral elements that would require extra research to handle with nanofactory-type processes.</p>
<p><b>WC:</b> It seems to me that manufacturing via nanofactories will require some different concepts of the manufacturing process than the automated assembly-line model most of us probably have in mind when we think of &quot;factories.&quot; Parallel to early design work on the hardware end, has there been much work done on the software/design end of how nanofactories would work?</p>
<p><b>CRN:</b> We have thought about how nanofactories would be controlled, and it seems probable that it&#8217;s just not a very difficult problem, at least for the kind of nanofactory that can include lots of integrated computers. (This should include almost any diamond-building nanofactory, and a lot of nanofactories based on other technologies as well.)</p>
<p>Until automated design capabilities are developed, products will be limited largely by our product design skills. A simple product-description language, roughly analogous to PostScript, would be able to build an enormous range of products, but would not even require fancy networking in the nanofactory. (Drexler discusses product-description languages in section 14.6 of <a href="http://www.amazon.com/exec/obidos/redirect?link_code=ur2&amp;tag=worldchangi0b-20&amp;camp=1789&amp;creative=9325&amp;path=http%3A%2F%2Fwww.amazon.com%2Fgp%2Fproduct%2F0471575186%2Fsr%3D1-1%2Fqid%3D1139256738%2Fref%3Dpd_bbs_1%3F%255Fencoding%3DUTF8" target="_blank"><i>Nanosystems</i></a>.)</p>
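A product-description language of the sort Drexler compares to PostScript might, very loosely, look like a declarative parts list that the nanofactory's controller turns into build steps. The format, vocabulary, and Python below are entirely invented for illustration:

```python
# Invented sketch of a declarative product description: like PostScript,
# it says what the product is, not how the machine should move.
import json

description = json.loads("""
{
  "product": "cup",
  "parts": [
    {"shape": "cylinder", "material": "diamondoid", "radius_mm": 40, "height_mm": 90},
    {"shape": "ring",     "material": "diamondoid", "radius_mm": 25, "height_mm": 5}
  ]
}
""")

def plan(desc):
    """Compile the declarative description into an ordered build plan."""
    return [f"deposit {p['material']} {p['shape']}" for p in desc["parts"]]

steps = plan(description)
print(steps)
```

The design point is the same one that makes PostScript portable: the description stays fixed, while each factory is free to choose its own low-level operations to realize it.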
<p><b>WC:</b> What makes nanofactories so different from traditional production methods?</p>
<p><b>CRN:</b> It&#8217;s important to understand that molecular manufacturing implies exponential manufacturing&#8212;the ability to rapidly build as many desktop nanofactories (sometimes called personal fabricators) as you have the resources for. Starting with one nanofactory, someone could build thousands of additional nanofactories in a day or less, at very low cost. This means that projects of almost any size can be accomplished quickly.</p>
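The arithmetic behind "thousands of additional nanofactories in a day" is plain doubling. Assuming, purely for illustration, that each nanofactory can duplicate itself once per hour:

```python
# Doubling arithmetic for exponential manufacturing. The one-hour
# replication time is an illustrative assumption, not a figure from
# the interview; CRN claims only "thousands in a day or less".
replication_hours = 1
doublings = 24 // replication_hours  # doublings in one day
factories = 2 ** doublings           # every factory copies itself each period
print(factories)
```

Even a far slower three-hour replication cycle still yields 2^8 = 256 factories on the first day and millions within a few days, so under these assumptions the "thousands" figure is conservative.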
<p>Those who have access to the technology could use it to build a surveillance system to track six billion people, weapons systems far more powerful than the world&#8217;s combined conventional forces, construction on a planetary scale, or spaceflight as easy as airplane flight is today.</p>
<p>Massive construction isn&#8217;t always bad. Rapid construction could allow us to build environmental remediation technologies on a huge scale. Researchers at Los Alamos National Laboratory are suggesting that equipment could be built to remove significant quantities of carbon dioxide directly from the atmosphere. With molecular manufacturing, this could be done far more quickly, easily, and inexpensively.</p>
<p>In addition to being powerful, the technology will also be deft and exquisite. Medical research and treatment will advance rapidly, given access to nearly unlimited numbers of medical robots and sensors that are smaller than a cell.</p>
<p>This only scratches the surface of the implications. Molecular manufacturing has as many implications as electricity, computers, and gasoline engines.</p>
<p><b>WC:</b> In other words, nanotechnology is both an engineering process and (for lack of a less jargony phrase) an &quot;enabling paradigm&quot;&#8212;it doesn&#8217;t just make it possible to do what we now do, but better/faster/cheaper; it also makes it possible (in time) to do some things that we can&#8217;t now do.</p>
<p><b>CRN:</b> Yes, exactly. Another good way to look at it is as a general-purpose technology: enhancing and enabling a wide range of applications. It will be similar in effect to, say, electricity or computers.</p>
<p><b>WC:</b> Back up a sec. The complexities of surveillance systems, planetary engineering, and cheap &amp; easy space flight come from much more than not being able to make enough or sufficiently-precise gear. There are also questions of design, of power, of scale, and so forth. These seem likely to take substantial effort and time.</p>
<p><b>CRN:</b> The speed of development will differ for each project. But by today&#8217;s standards, almost any project could be done quite quickly. A lot of hardware development time today is spent in compensating for the high cost and large delay associated with building each prototype. If you could build a prototype in a few hours at low cost, a lot of engineering could be bypassed. Of course, this is less true for safety-critical systems. But imagine how quickly space flight could be developed if Elon Musk (SpaceX), John Carmack (Armadillo), and Burt Rutan could each build and fly a new (unmanned) spacecraft every day instead of waiting three months or more.</p>
<p>Power will of course have to be supplied to any project. But one of the first projects may be a massive solar-gathering array that could supply power for planet-scale engineering. A nanofactory-built solar array should be able to repay the energy cost of its construction in just a few days, so scaling up the solar array itself would not take too long.</p>
<p>A comparable advantage can be seen today in computer chip design. FPGAs and ASICs are two similar kinds of configurable computer chips. They differ in that an ASIC&#8217;s design is fixed when the chip is fabricated, while an FPGA can have a new design downloaded to it in seconds, even after it is integrated into a circuit. An FPGA can be designed by one person in a week or two. An ASIC requires a team of people working for several months&#8212;largely to make absolutely sure that they have not made even a single mistake, which could cost the company millions of dollars and months of delay. The difference between today&#8217;s development cycle and nanofactory-enabled product R&amp;D is the difference between ASICs and FPGAs.</p>
<p align="center"> &#149; &#149; &#149;</p>
<p align="center"><img loading="lazy" decoding="async" src="/images/Cascio_chart.jpg" width="335" height="225" border="1"/></p>
<p><b>WC:</b> The degree to which research is largely corporate, academic or governmental will obviously vary from country to country. Who are some of the organizations doing innovative work in nanotech?</p>
<p><b>CRN:</b> There are only a few companies that are explicitly working on molecular manufacturing. Many more are doing work that is relevant, but not aiming at that goal&#8212;or at least not admitting to it.</p>
<p>Zyvex LLC is working on enabling technologies, with the stated goal of providing &quot;tools, products, and services that enable adaptable, affordable, and molecularly precise manufacturing.&quot;</p>
<p>In Japan, individual silicon atoms have been moved and bonded into place since 1994, first by the Aono group and then by Oyabu. Because this used a much larger scanning probe microscope to move the atoms, it is not a large-scale manufacturing technique.</p>
<p>Researchers at Rice University have developed a &quot;nano-car&quot; with single-molecule wheels that roll on molecular bearings, and reportedly are aiming toward &quot;nano-trucks&quot; that could transport molecules in miniature factories.</p>
<p><b>WC:</b> To what degree is nanotechnology research a province of the big industrial countries, and to what degree is it accessible to forward-looking developing countries (what we term on WorldChanging the &quot;leapfrog nations&quot;)?</p>
<p><b>CRN:</b> In the broad sense of nanoscale technologies, some kinds of nanotech research are quite accessible to leapfrog nations. Molecular manufacturing research may be accessible as well. Atom-level simulations can now be run on desktop PCs. Some of the development pathways, such as biopolymer approaches, require only a small lab&#8217;s worth of equipment.</p>
<p>We don&#8217;t yet know exactly how difficult it will be to develop a nanofactory. Several approaches are on the table, but there could be a much easier approach waiting to be discovered. It&#8217;s probably safe to say that any nation that can support a space program could also engage in substantial research toward molecular manufacturing. Note that several individuals are now supporting space programs, including Elon Musk of SpaceX and Paul Allen, who funded SpaceShipOne.</p>
<p><b>WC:</b> Do you expect home &quot;hobbyist&quot; designers&#8212;perhaps using home-made nanotools&#8212;to have any role in the nanotechnology revolution, as &quot;garage hackers&quot; did in the early days of personal computing?</p>
<p><b>CRN:</b> We have been following some of the hobbyist scanning probe microscope efforts. If advanced molecular manufacturing requires a vacuum scanning-probe system cooled by liquid helium, it&#8217;s doubtful you could do that in your garage. On the other hand, if all it requires is an inert-gas environment at liquid nitrogen temperatures, then some work might be doable by a very competent hobbyist.</p>
<p><i>Design</i> of nanomachines (as opposed to construction) is already accessible to hobbyists. Without the ability to test their designs in the lab, many of the designs will have bugs, of course. However, at least in the early stages, the development of new design approaches and the demonstration that we&#8217;ve learned even approximately how to implement mechanisms will be important contributions.</p>
<p align="center"> &#149; &#149; &#149;</p>
<p align="center"><img loading="lazy" decoding="async" src="/images/Cascio_computers.jpg" width="156" height="200" border="1"/></p>
<p><b>WC:</b> A big concern in a world of easy fabrication is what to do with broken or obsolete stuff. In what ways could a nanofactory-type system use &quot;waste&quot; materials, with an eye towards the &quot;cradle-to-cradle&quot; concept?</p>
<p><b>CRN:</b> If the stuff is made of light atoms, such as carbon and nitrogen, it should be straightforward to burn it in an enclosed system. The resulting gases could be cooled and then sorted at a molecular level, and the molecules could be stored for re-use.</p>
<p>It seems likely that products will be designed and built using modules that would be somewhat smaller than a human cell. If these modules are standardized and re-usable, then it might be possible to pull apart a product and rearrange the modules into a different product. However, there are practical problems: the modules themselves may be obsolete, and they would need to be carefully cleaned before they could be reassembled. It would probably be easier to reduce them to atoms and start over, since every atom could be contained and re-used.</p>
<p><b>WC:</b> That seems likely to take a serious amount of energy to accomplish thoroughly, am I right? That is, if I toss my cell phone into an incinerator, different parts will cook at different temperatures, and there are some components that would require some fairly high temps to break down. In addition, the nano-incinerator will need to be able to sort out the various atoms that are emitted by the burning object. Sounds complex.</p>
<p>This becomes an important issue, because a world where it&#8217;s really easy to make stuff but much harder to get rid of it starts to accelerate some already-serious problems around garbage, especially hazardous wastes.</p>
<p><b>CRN:</b> Breaking down a carbon-based product just requires heating it a bit, then exposing it to oxygen or hydrogen&#8212;something that can combine with the carbon to produce small gas molecules. This process will likely be exothermic&#8212;in other words, being high in carbon, nano-built products would burn very nicely when you wanted them to. (Adding small integrated water tanks that were drained before recycling would prevent premature combustion.)</p>
<p>Constructing a nano-built product requires not only rearranging a lot of molecular bonds, but computing how to do that, and moving around a lot of nanoscale machinery. A nanofactory might require several times the bond energy to accomplish all that. The energy required to break down a nano-built product should be less than the energy it took to make it in the first place. And in terms of product strength per energy invested, nano-built diamond would probably be many times better than aluminum&#8212;a cheap, energy-intensive commodity.</p>
<p align="center"> &#149; &#149; &#149;</p>
<p><b>WC:</b> We&#8217;ve occasionally written on WC about the <a href="http://www.worldchanging.com/archives/003805.html" target="_blank">increasing &quot;digitization&quot; of physical objects</a>, whether through embedded computer chips and sensors or even the introduction of DRM-style use controls. On the flip side, futurists have for a few years talked about the possibility of &quot;<a href="http://www.ennex.com/publish/200102-Napster/index.asp" target="_blank">napster fabbing</a>&quot;&#8212;swapping design files, legally or otherwise, and/or the development of an open source culture around next-generation fabrication tools like nanofactories.</p>
<p>What do you see as the key intellectual property issues emerging from the rise of nanomanufacturing?</p>
<p><b>CRN:</b> Because molecular manufacturing will be a general-purpose technology, we can expect that it will raise many of the issues that exist today in many different domains. Many issues will be the same as for software and entertainment, but the stakes will be far higher. The issues we see in medicine, with controversies over whether affordable pharmaceuticals should be provided to developing nations, will also apply to humanitarian applications of nanofactory products.</p>
<p><b>WC:</b> To tease that point out for a minute, you&#8217;re suggesting that the issue won&#8217;t be the difficulty or expense of making the materials, but the expense of the time necessary to come up with the design in the first place. Big pharma argues that the majority of their work is actually in dead ends, and that the high fees they charge for the drugs that do work are to make up for the time they take with the stuff that doesn&#8217;t work. Would the nanofactory world&#8212;at least the early days of it&#8212;parallel this?</p>
<p><b>CRN:</b> It&#8217;s not an exact parallel. Some percentage of pharmaceutical development costs goes to preliminary testing, another percentage to clinical trials&#8212;which are hugely expensive due to regulation and liability&#8212;and a third percentage to advertising and incentives for doctors to prescribe the new medicine. Of these three, probably only the first will apply to early nanofactory products.</p>
<p>We do expect design time to be a large component of the cost of a product. But the Open Source software movement shows that significant design time can be contributed without adding to product price.</p>
<p><b>WC:</b> So you see Open Source as an aspect of the nanofactory future?</p>
<p><b>CRN:</b> Whether or not open source approaches will be allowed to develop nanofactory products is the single biggest intellectual property question. Open source software has been astonishingly creative and innovative, and open source products could be a rich source of innovation as well as humanitarian designs. Even businesses could benefit: since open source usually doesn&#8217;t put a final polish on its products, commercial interests can repackage them and sell at a good profit.</p>
<p>However, the business interests that will want a monopoly, and the security institutions that will be uncomfortable with unrestricted fabbing, will probably oppose open source products. It would be easy to criminalize unrestricted fabbing, though far more difficult to prevent it. Prevention of private innovation, through simply not allowing private ownership of nanofactories, would have to be rigorously enforced worldwide&#8212;likely impossible, and certainly oppressive. Criminalization without prevention would almost certainly be bad policy, but it will probably be tried.</p>
<p><b>WC:</b> We see early parallels to this in the issue of open source and &quot;digital rights management&quot; routines. The idea of outlawing Open Source (because it can&#8217;t be locked down) even gets kicked around from time to time. It seems likely that open source development that could result in new weapons would be even more likely to trigger this kind of response.</p>
<p><b>CRN:</b> Historically, Open Source has been a huge source of innovation. Open source applied to molecular manufacturing could result in new weapons, but also in new defenses. Shutting down Open Source might not reduce the weapons much, but it probably would reduce the development of defenses. We should think very carefully before we reduce our capacity to design new defenses. That said, you may well be right that a combination of government and corporate interests would work together to successfully eliminate Open Source-type development.</p>
<p align="center"> &#149; &#149; &#149;</p>
<p align="center"><img loading="lazy" decoding="async" src="/images/Cascio_tank.jpg" width="273" height="182" border="1"/></p>
<p><b>WC:</b> What would you say are your top concerns about how nanofactory technology might develop?</p>
<p><b>CRN:</b> Our biggest concern is that molecular manufacturing will be a source of immense military power. A medium-sized or larger nation that was the sole possessor of the technology would be a superpower, with a strong likelihood of becoming the superpower if they were sufficiently ruthless. This implies geopolitical instability in the form of accelerating arms races and preemptive strikes. For several reasons, a nanofactory-based arms race looks less stable than the nuclear arms race was.</p>
<p>Related to the military concern is a tangle of security concerns. If molecular manufacturing proliferates, it will become relatively easy to build a wide range of high-tech automated weaponry. Accountability may decrease even as destructive power increases. The Internet, with its viruses, spam, spyware, and phishing, provides a partial preview of what we might expect. It could be very difficult to police such a society without substantial weakening of civil rights and even human rights.</p>
<p>Economic disruption is a likely consequence of widespread use of molecular manufacturing. On the one hand, we would have an abundance of production capacity able to build high-performance products at minimal expense. On the other hand, this could threaten a lot of today&#8217;s jobs, from manufacturing to transportation to mineral extraction.</p>
<p>Environmental damage could result from widespread use of inexpensive products. Although products filling today&#8217;s purposes could be made more efficient with molecular manufacturing, future applications such as supersonic and ballistic transport may demand far more energy than we use today.</p>
<p>Another major risk associated with molecular manufacturing comes from not using it for positive purposes. Artificial scarcities&#8212;legal restrictions&#8212;have been applied to lifesaving medicines. Similar restrictions on molecular manufacturing, whether in the form of military classification, unnecessary safety regulations, or explicit intellectual property regulation, could allow millions of people to die unnecessarily.</p>
<p><b>WC:</b> We know from the digital restrictions/&quot;piracy&quot; debate that technical limitations on copying, etc., do an adequate job of preventing regular folks from duplicating movies, software and such, whether for illicit reasons (passing a copy to a friend) or otherwise (making a backup or other &quot;fair use&quot;), while doing little to prevent real IP pirates from duping off thousands of copies to sell on the street in Shanghai or the like.</p>
<p>In short, there&#8217;s every reason to believe that top-down efforts to stymie the illegal/illicit/irresponsible use of nanofactories will be only marginally effective, at best, while driving the worst stuff deep underground and preventing regular citizens from using their nanofactories in ways that would be beneficial and not significantly harmful.</p>
<p><b>CRN:</b> It would be premature to dismiss all top-down regulation as ineffective. At the same time, the reduction in humanitarian and other benefits from excessive regulation is one of CRN&#8217;s primary concerns. It is certainly true that regulation will impose a significant cost in lost opportunities. However, because there are so many different types of harm that could be done with a nanofactory, we are not ready to say that all regulation would be undesirable.</p>
<p>It will be difficult to apply &quot;fine-grained relinquishment&quot; (Kurzweil&#8217;s term) to a general-purpose technology like nanofactories. However, we will probably have to achieve this, because both blanket permissiveness and blanket restrictions will impose extremely high costs and risks.</p>
<p>As we have said before, there will be no simple solutions. We will need a combination of both top-down and emergent approaches.</p>
<p><b>WC:</b> I&#8217;ve been a pretty vocal advocate of <a href="http://www.worldchanging.com/archives/000404.html" target="_blank">openness as a tool for countering dangerous uses</a>. It&#8217;s a bit counter-intuitive, I admit, but there&#8217;s real precedent for its value. Most experts see free/open source software, for example, as being more secure than closed, proprietary code. And the treatment for SARS (to cite a non-computer example) <a href="http://www.worldchanging.com/archives/003648.html" target="_blank">emerged directly from open global access to the virus genome</a>.</p>
<p>In both cases, the key is the widespread availability of the underlying &quot;code&quot; to both professional and interested amateurs. The potential increase in possible harmful use of that knowledge is, at least so far, demonstrably outweighed by the preventative use.</p>
<p>What do you think of an open approach to nanotechnology as a means of heading off disasters?</p>
<p><b>CRN:</b> In a false dichotomy between totally closed and totally open, the open approach would seem to increase the dangers posed by hobbyists and criminals. A totally closed approach, assuming no one in power was insanely stupid, probably would not lead to certain kinds of danger such as hobbyist-built free-range self-replicators, the so-called grey goo.</p>
<p>I don&#8217;t think we can count on no one in power being insanely stupid, however. Realistically, even a totally closed, locked-down, planet-wide dictator approach would not be safe.</p>
<p>A partially closed approach, where Open Source was criminalized but bootleg or independent nanofactories were available, would be prone to danger from criminals and rebellious hobbyists&#8212;and by the way, the world still needs a lot more research to determine just how extreme that danger is. An open approach probably would not increase the danger much versus a partially closed approach, and would certainly increase our ability to deal with the danger.</p>
<p>Remember Ben Franklin&#8217;s adage: three may keep a secret, if two of them are dead. There would be a substantial danger of disastrous abuse even with a mere one thousand people or groups having access to the technology (and the rest of the six billion at their mercy). It&#8217;s not certain that the danger would be very much worse with a million or even a billion people empowered.</p>
<p align="center"> &#149; &#149; &#149;</p>
<p><b>WC:</b> Closing on a more positive note, what would you say are your biggest hopes about how this kind of technology might be applied? In other words, what does a world of responsible nanotechnology look like?</p>
<p><b>CRN:</b> We would like to see a world in which security and geopolitical concerns are addressed proactively and skillfully, in order to maximize liberty without allowing any devastating uses.</p>
<p>We would like to see a world in which the ubiquity of tradeoffs is recognized, and where consequences are neither dismissed nor exaggerated. Regulation should be appropriate to the extent of the various risks. The drawbacks of inaction should be considered along with the risks and problems of action.</p>
<p>We would like to see a world in which everyone has access to at least a minimal molecular manufacturing capacity. The computer revolution has shown that inventiveness is maximized by a combination of commercial and open source development, and open source is a good generator of free basic products when the cost of production is tiny.</p>
<p><i>&copy; 2006 Jamais Cascio. Reprinted with permission.</i></p>
]]></content:encoded>
			<wfw:commentRss>https://www.writingsbyraykurzweil.com/revolution-in-a-box-an-interview-with-the-center-for-responsible-nanotechnology/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Reprogramming your Biochemistry for Immortality: An Interview with Ray Kurzweil by David Jay Brown</title>
		<link>https://www.writingsbyraykurzweil.com/reprogramming-your-biochemistry-for-immortality-an-interview-with-ray-kurzweil-by-david-jay-brown</link>
		<comments>https://www.writingsbyraykurzweil.com/reprogramming-your-biochemistry-for-immortality-an-interview-with-ray-kurzweil-by-david-jay-brown#respond</comments>
		<pubDate>Wed, 08 Mar 2006 09:42:39 +0000</pubDate>
								<dc:creator>Ray Kurzweil</dc:creator>
		
		
				<category><![CDATA[essays]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[Scientists are now talking about  people staying young and not aging. Ray Kurzweil is taking it a step further: "In addition to radical life extension, we'll also have radical life expansion. The nanobots will be able to go inside the brain and extend our mental functioning by interacting with our biological neurons."]]></description>
			<content:encoded><![CDATA[<p><i>Interview conducted by <a href="http://www.mavericksofthemind.com/" target="_blank">David Jay Brown</a> on February 8, 2006. This interview will be published in Brown&#8217;s upcoming book </i>Mavericks of Medicine(2006)<i>. Published on KurzweilAI.net March 8, 2006.</i></p>
<p><i>Ray Kurzweil is a computer scientist, software developer, inventor, entrepreneur, philosopher, and a leading proponent of radical life extension. He is the coauthor (with Terry Grossman, M.D.) of </i><a href="http://www.fantastic-voyage.net" target="_blank">Fantastic Voyage: Live Long Enough to Live Forever</a><i>, which is one of the most intriguing and exciting books on life extension around. Kurzweil and Grossman&#8217;s approach to health and longevity combines the most current and practical medical knowledge with a soundly-based, yet awe-inspiring visionary perspective of what&#8217;s to come.</i><span id="more-80915"></span></p>
<p><i>Kurzweil&#8217;s philosophy is built upon the premise that we now have the knowledge to identify and correct the problems caused by most unhealthy genetic predispositions. By taking advantage of the opportunities afforded us by genomic testing, nutritional supplements, and lifestyle adjustments, we can live long enough to reap the benefits of advanced biotechnology and nanotechnology, which will ultimately allow us to conquer aging and live forever. At the heart of Kurzweil&#8217;s optimistic philosophy is the notion that human knowledge is growing exponentially, not linearly, and this fact is rarely taken into account when people try to predict the rate of technological advance in the future. Kurzweil predicts that at the current rate of knowledge expansion we&#8217;ll have the technology to completely conquer aging within the next couple of decades.</i></p>
<p><i>I spoke with Ray on February 8, 2006. Ray speaks very precisely, and he chooses his words carefully. He presents his ideas with a lot of confidence, and I found his optimism to be contagious. We spoke about the importance of genomic testing, some of the common misleading ideas that people have about health, and how biotechnology and nanotechnology will radically affect our longevity in the future.</i></p>
<p><b>David:</b> What inspired your interest in life extension?</p>
<p><b>Ray:</b> Probably the first incident that got me on this path was my father&#8217;s illness. This began when I was fifteen, and he died seven years later of heart disease when I was twenty-two. He was fifty-eight. I&#8217;ll actually be fifty-eight this Sunday. I sensed a dark cloud over my future, feeling like there was a good chance that I had inherited his disposition to heart disease. When I was thirty-five, I was diagnosed with Type 2 diabetes, and the conventional medical approach made it worse.</p>
<p>So I really approached the situation as an inventor, as a problem to be solved. I immersed myself in the scientific literature, and came up with an approach that allowed me to overcome my diabetes. My levels became totally normal, and in the course of this process I discovered that I did indeed have a disposition, for example, to high cholesterol. My cholesterol was 280 and I also got that down to around 130. That was twenty-two years ago.</p>
<p>I wrote a bestselling health book, which came out in 1993, about that experience and the program that I&#8217;d come up with. That&#8217;s what really got me on this path of realizing that&#8212;if you&#8217;re aggressive enough about reprogramming your biochemistry&#8212;you can find the ideas that can help you to overcome your genetic dispositions, because they&#8217;re out there. They exist.</p>
<p>About seven years ago, after my book <i>The Age of Spiritual Machines</i> came out in 1999, I was at a Foresight Institute conference. I met Terry Grossman there, and we struck up a conversation about this subject&#8212;nutrition and health. I went to see him at his longevity clinic in Denver for an evaluation, and we built a friendship. We started exchanging emails about health issues&#8212;and that was 10,000 emails ago. We wrote this book <i>Fantastic Voyage</i> together, which really continues my quest. And he also has his own story about how he developed similar ideas, and how we collaborated.</p>
<p>There&#8217;s really a lot of knowledge available right now, although, previously, it has not been packaged in the same way that we did it. We have the knowledge to reprogram our biochemistry to overcome disease and aging processes. We can dramatically slow down aging; we can really overcome conditions such as atherosclerosis (which leads to almost all heart attacks and strokes) and diabetes; and we can substantially reduce the risk of cancer with today&#8217;s knowledge. And, as you saw from the book, all of that is just what we call &#8216;Bridge One.&#8217; We&#8217;re not saying that taking lots of supplements and changing your diet is going to enable you to live five hundred years. But it will enable Baby Boomers&#8212;like Dr. Grossman and myself, and our contemporaries&#8212;to be in good shape ten or fifteen years from now, when we really will have the full flowering of the biotechnology revolution, which is &#8216;Bridge Two.&#8217;</p>
<p>Now, this gets into my whole theory of information technology. Biology has become an information technology. It didn&#8217;t used to be. Biology used to be hit or miss. We&#8217;d just find something that happened to work. We didn&#8217;t really understand why it worked, and, invariably, these tools, these drugs, had side-effects. They were very crude tools. Drug development was called drug discovery, because we really weren&#8217;t able to reprogram biology. That is now changing. Our understanding of biology, and the ability to manipulate it, is becoming an information technology. We&#8217;re understanding the information processes that underlie disease processes, like atherosclerosis, and we&#8217;re gaining the tools to reprogram those processes.</p>
<p>Drug development is now entering an era of rational drug design, rather than drug discovery. The important point to realize is that the progress is exponential, not linear. Invariably people&#8212;including sophisticated people&#8212;do not take that into consideration, and it makes all the difference in the world. The mainstream skeptics declared the fifteen-year genome project a failure after seven and a half years because only one percent of the project was done. The skeptics said, &#8220;I told you this wasn&#8217;t going to work&#8212;here you are halfway through the project and you&#8217;ve hardly done anything.&#8221; But the progress was exponential, doubling every year, and the last seven doublings go from one percent to a hundred percent. So the project was done on time. It took fifteen years to sequence HIV. We sequenced the SARS virus in thirty-one days.</p>
<p>There are many other examples of that. We&#8217;ve gone from ten dollars to sequence one base pair in 1990 to a penny today. So ten or fifteen years from now it&#8217;s going to be a very different landscape. We really will have very powerful interventions, in the form of rationally-designed drugs that can precisely reprogram our biochemistry. We can do it to a large extent today with supplements and nutrition, but it takes a more extensive effort. We&#8217;ll have much more powerful tools fifteen years from now, so I want to be in good shape at that time.</p>
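<p>The doubling arithmetic Kurzweil invokes is easy to check. Below is a hypothetical illustration (the function name and yearly doubling time are assumptions for the sketch, taken from his stated premise, not from any published model):</p>

```python
# Why exponential progress fools linear intuition: a project at 1% complete
# that doubles its cumulative progress every year is only seven doublings
# (seven years) from finished: 1% -> 2% -> 4% -> 8% -> 16% -> 32% -> 64% -> 128%.

def years_to_complete(current_fraction, doubling_time_years=1.0):
    """Years until progress doubling at the given rate reaches 100%."""
    years = 0.0
    while current_fraction < 1.0:
        current_fraction *= 2.0
        years += doubling_time_years
    return years

print(years_to_complete(0.01))  # 7.0
```

<p>The same compounding shows up in the sequencing-cost figures above: a roughly thousandfold price drop between 1990 and the mid-2000s works out to about a 35 percent cost decline per year, sustained for a decade and a half.</p>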
<p>Most of my Baby Boomer contemporaries are completely oblivious to this perspective. They just assume that aging is part of the cycle of human life, and at 65 or 70 you start slowing down. Then at eighty you&#8217;re dead. So they&#8217;re getting ready to retire, and are really unaware of this perspective that things are going to be very different ten or fifteen years from now. This insight really should motivate them to be aggressive about using today&#8217;s knowledge. Of course all of this will lead to &#8216;Bridge Three&#8217; about twenty years from now&#8212;the nanotechnology revolution&#8212;where we can go beyond the limitations of biology. We&#8217;ll have programmable nanobots that can keep us healthy from inside, and truly provide radical life extension.</p>
<p>So that&#8217;s the genesis. My interest in life extension stems primarily from my having been diagnosed with Type 2 diabetes. I really consider the diabetes to be a blessing because it prodded me to overcome it, and, in so doing, I realized that I didn&#8217;t just have an approach for diabetes, but a general attitude and approach to overcome any health problem&#8212;that we really can find the ideas and apply them to overcome the genetic dispositions that we have. There&#8217;s a common wisdom that your genes are eighty percent of your health and longevity and lifestyle is only twenty percent. Well, that&#8217;s true if you follow the general, watered-down guidelines that our health institutions put out. But if you follow the optimal guidelines that we talk about, you can really overcome almost any genetic disposition. We do have the knowledge to do that.</p>
<p><b>David:</b> What do you think are some of the common misleading ideas that people have about health and longevity?</p>
<p><b>Ray:</b> One thing that I just alluded to is the compromised recommendations from our health authorities. I just had a lengthy debate with the Joslin Diabetes Center, which is considered the world&#146;s leading diabetes treatment and research center. I&#146;m on the board, and they&#146;ve just come out with new nutritional guidelines, which are highly compromised. They&#146;re far from ideal, and they acknowledge that. They say, well, we have enough trouble getting people to follow these guidelines, let alone the stricter guidelines that you recommend. And my reply is, you have trouble getting people to follow your guidelines because they don&#146;t work. If people followed your guidelines very precisely they&#146;d still have Type 2 diabetes. They&#146;d still have to take harsh drugs or insulin.</p>
<p>If they follow my guidelines the situation is quite different. I&#146;ve counseled many people about Type 2 diabetes, and Dr. Grossman has treated many people with it, and they come back with completely normal levels. Their symptoms are gone, and they don&#146;t have to take insulin or harsh drugs. They feel liberated, and that&#146;s extremely motivating. In many ways it&#146;s easier to make a stricter change. Dramatically reducing your high-glycemic-index carbs is actually easier than moderately reducing them, because if you moderately reduce them you don&#146;t get rid of the cravings for carbs. Carbs are addictive, and it&#146;s just like trying to cut down a little bit on cigarettes. It&#146;s actually easier to cut cigarettes out completely, and it&#146;s also easier to largely cut out high-glycemic-index starches and sugars, because the cravings go away and the program is much easier to follow. But, most importantly, it works, along with a few supplements and exercise, to overcome most cases of Type 2 diabetes.</p>
<p>However, this doesn&#146;t seem to be the attitude of our health authorities. The nutritional recommendations are consistently compromised. There&#146;s almost no understanding of the role of nutritional supplements, which can be very powerful. I take two hundred and fifty supplements a day, and I monitor my body regularly. I&#146;m not just flying without instrumentation. Being an engineer, I like data, and I monitor fifty or sixty different blood levels every few months, constantly fine-tuning my program. All of my blood levels are ideal. My Homocysteine level many years ago was eleven, but now it&#146;s five. My C-reactive protein is 0.1. My cholesterol is 130. My LDL is about 60, and my HDL&#151;which was 28&#151;is now close to sixty. And so on and so forth.</p>
<p>I&#146;ve also taken biological aging tests, which measure things like tactile sensitivity, reaction time, memory, and decision-making speed. There are forty different tests, and you compare your score to medians for different populations at different ages. When I was forty I came out at about thirty-eight. Now I&#146;m fifty-seven&#151;at least for a few more days&#151;and I come out at forty. So, according to these tests, I&#146;ve only aged two years in the last seventeen years. Now you can dispute the absolute validity of these biological aging tests. It&#146;s just a number, but it&#146;s evidence that this program is working.</p>
<p><b>David:</b> Why do you think that genomic testing is important?</p>
<p><b>Ray:</b> Our program is very much not one-size-fits-all. It&#146;s not a one-trick pony. We&#146;re not saying that if you lower your carbs, lower your fat, or eat a grapefruit a day then everything will be fine. In fact, our publisher initially had a problem with this, but they ended up getting behind it enthusiastically, because it fundamentally differs, as you know, from most health books, which really do have just one idea. We earnestly try to provide a comprehensive understanding of your biology and your body, which does have some complexity to it. Then we let people apply these principles to their own lives.</p>
<p>It is important to emphasize the issues that are of concern for you. We use an analogy of stepping backwards towards a cliff. It&#146;s much easier to change direction before you fall off the cliff. But, generally, medicine doesn&#146;t get involved until the eruption of clinical disease. Someone has a heart attack, or they develop clinical cancer, and that&#146;s very often akin to falling off a cliff. One third of first heart attacks are fatal, and another third cause permanent damage to the heart muscle.</p>
<p>It&#146;s much easier to catch these conditions beforehand. You don&#146;t just catch heart disease or cancer walking down the street one day. These are many years or decades in the making, and you can see where you are in the progression of these diseases. So it&#146;s very important to know thyself, to assess your own situation. Genetic testing is important because you can see what dispositions you have. If you have certain genes that dispose you to heart disease, or conversely cancer, or diabetes, then you would give a higher priority to managing those issues, and do more tests to see where you are in the progression of those conditions. Let&#146;s say you do a test and it says you have a genetic disposition to Type 2 diabetes. Then you should do a glucose-tolerance test. In fact, we describe a more sophisticated form of that in the book, where you measure insulin as well, and can see if you have early stages of insulin resistance.</p>
<p>Perhaps you have metabolic syndrome, which a very substantial fraction of the population has. If you have these early harbingers of insulin resistance, that could lead to Type 2 diabetes, so obviously the priority of that issue will be greatly heightened. If you don&#146;t have that vulnerability then you don&#146;t have to be as concerned about insulin resistance, and so on. But if you do have insulin resistance, or you have a high level of atherosclerosis, then it really behooves you to take important steps to get these dangerous conditions under control&#151;which you can do. So genomic testing is not something you do by itself. It&#146;s part of a comprehensive assessment program to know your own body&#151;not only what you&#146;re predisposed to, but what your body has already developed in terms of early versions of these degenerative conditions.</p>
<p><b>David:</b> What are some of the most important nutritional supplements that you would recommend to help prevent cancer and cardiovascular disease?</p>
<p><b>Ray:</b> We spell all that out in the book. Coenzyme Q<sub>10</sub> is important. It never ceases to amaze me that physicians do not tell their patients to take coenzyme Q<sub>10</sub> when they prescribe Statin drugs. It&#146;s well known that Statin drugs deplete the body of coenzyme Q<sub>10</sub>, and a lot of the side effects that people suffer from Statin drugs, such as muscle weakness, are because of this depletion. In any event, that&#146;s an important supplement. It is involved in energy generation within the mitochondria of each cell. Disruption of the mitochondria is an important aging process, and this supplement will help slow that down. Coenzyme Q<sub>10</sub> has a number of protective effects, including lowering blood pressure, helping to control free-radical damage, and protecting the heart. </p>
<p>A lot of recent research shows that Curcumin, which is derived from the spice turmeric, has important anti-inflammatory properties and can protect against cancer, heart disease, and even Alzheimer&#146;s disease. </p>
<p>Alpha-Lipoic acid is an important antioxidant which is both water- and fat-soluble. It can neutralize harmful free radicals, improve insulin sensitivity, and slow down the formation of advanced Glycation end-products (AGEs), another key aging process. </p>
<p>Each of the vitamins is important and plays a key role. Vitamin C is generally protective as a premier antioxidant. It appears to have particular effectiveness in preventing the early stages of atherosclerosis, namely the oxidation of LDL cholesterol. </p>
<p>In terms of vitamin E, there&#146;s been a lot of negative publicity about that, but if you look carefully at that research you&#146;ll see that all of those studies were done with alpha-Tocopherol, and vitamin E is really a blend of eight different substances&#151;four tocopherols and four Tocotrienols. Alpha-Tocopherol actually depletes levels of gamma-Tocopherol, and gamma-Tocopherol is the form of vitamin E that&#146;s found naturally in food, and is a particularly important one. So we recommend that people take a blend of the fractions of vitamin E, and that they get enough gamma-Tocopherol.</p>
<p>There are a number of others that are important to take in general. If you have high cholesterol, Policosanol is one supplement that is quite effective, and has an independent action from the Statin drugs. Statin drugs actually are quite good. They appear to be anti-inflammatory, so they not only lower cholesterol but attack the inflammatory processes, which underlie many diseases, including atherosclerosis. But as I mentioned it&#146;s important to take coenzyme Q<sub>10</sub> if you&#146;re taking Statin drugs.</p>
<p>There are others. Grape seed proanthocyanidin extract has been found to be another effective antioxidant. Resveratrol is another. We have an extensive discussion of the most important supplements in the book.</p>
<p><b>David:</b> What sort of suggestions would you make to someone who is looking to improve their memory or cognitive performance?</p>
<p><b>Ray:</b> Vinpocetine, derived from the periwinkle plant, seems to have the best research. It improves cerebral blood flow, increases brain-cell ATP (energy) production, and enables better utilization of glucose and oxygen in the brain.</p>
<p>Other supplements that appear to be important for brain health include Phosphatidylserine, Acetyl-L-Carnitine, Pregnenolone, and EPA/DHA. The research appears a bit mixed on Ginkgo Biloba, but we&#146;re not ready to give up on it. </p>
<p>We provide a discussion in the book of a number of smart nutrients that appear to improve brain health. There are also a number of smart drugs being developed, some of which are already in the testing pipeline, that appear to be quite promising.</p>
<p><b>David:</b> What do you think are the primary causes of aging?</p>
<p><b>Ray:</b> Aging is not one thing. There are a number of different processes involved, and you can adopt programs that slow down each of them. For example, one process involves the depletion of phosphatidylcholine in the cell membrane. In young people the cell membrane is about sixty or seventy percent phosphatidylcholine, and the cell membrane functions very well then&#151;letting nutrients in and letting toxins out.</p>
<p>The body makes phosphatidylcholine, but very slowly, so over the decades the phosphatidylcholine in the cell membrane depletes, and the cell membrane gets filled in with inert substances, like hard fats and cholesterol, that basically don&#146;t work. This is one reason that cells become brittle with age. The skin of an elderly person is no longer supple. The organs stop functioning efficiently. So it&#146;s actually a very important aging process, and you can reverse it by supplementing with phosphatidylcholine. If you really want to do it effectively you can take phosphatidylcholine intravenously, as I do. Every week I have an I.V. with phosphatidylcholine. I also take it every day orally. So that&#146;s one aging process we can stop today. </p>
<p>Another important aging process involves oxidation through positively-charged oxygen free radicals, which will steal electrons from cells, disrupting normal enzymatic processes. There are a number of different types of antioxidants that you can take to slow down that process, including vitamin C. You could take vitamin C intravenously to boost that process.</p>
<p>Advanced Glycation end-products, or AGEs, are involved in another aging process. This is where proteins develop cross-links with each other, thereby disrupting their function. There are supplements that you can take, such as Alpha Lipoic Acid, that slow that down. There is an experimental drug called ALT-711 (phenacyldimethylthiazolium chloride) that can dissolve the AGE cross-links without damaging the original tissues. </p>
<p>Atherosclerosis is an aging process, and it&#146;s not just taking place in the coronary arteries, of course. It can take place in the cerebral arteries, which ultimately causes cerebral strokes, but it also takes place in the arteries all throughout the body. It can lead to impotence, claudication of the legs and limbs, and like most of these processes, it&#146;s not linear but exponential, in that it grows by a certain percentage each year.</p>
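<p>That flat-then-explosive character of compound growth is easy to see numerically. A small sketch (the 5 percent annual rate below is purely an illustrative assumption, not a clinical figure):</p>

```python
# A hypothetical disease burden compounding at 5% per year. The rate and
# the starting burden of 1.0 are illustrative assumptions; units are
# arbitrary. Exponential growth means each year multiplies the total.
rate = 0.05
burden = [(1 + rate) ** year for year in range(61)]

first_decade_gain = burden[10] - burden[0]   # absolute growth, years 0-10
last_decade_gain = burden[60] - burden[50]   # absolute growth, years 50-60
# The same percentage rate adds over ten times more absolute burden in
# the last decade than in the first.
```

<p>Under that assumed rate, the final decade adds more than ten times the burden of the first, even though the percentage growth never changed.</p>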
<p>So that&#146;s why the process of atherosclerosis hardly seems to progress for a long time, but when it gets to a certain point it can really explode and develop very quickly. We have an extensive program on reducing atherosclerosis, which is both an aging process and a disease process. We cite a number of important supplements that reduce cholesterol and inflammation&#151;such as the omega-3 fats EPA and DHA&#151;as well as the Statin drugs. Supplements like Curcumin [Turmeric] are helpful. Supplements that reduce inflammation will reduce both cancer and the inflammatory processes that lead to atherosclerosis. There are a number of supplements that reduce Homocysteine, which appears to encourage atherosclerosis. These include Folic Acid, vitamins B<sub>2</sub>, B<sub>6</sub>, and B<sub>12</sub>, magnesium, and trimethylglycine (TMG). </p>
<p>So you can attack atherosclerosis five or six different ways, and we recommend that you do them all, so long as there aren&#146;t contraindications for combining treatments. But generally these treatments are independent of each other. If you go to war, you don&#146;t just send in the helicopters. You send in the helicopters, the tanks, the planes, and the infantry. You use your intelligence resources, and attack the enemy every way that you can, with all of your resources. And that&#146;s really what you need to do with these conditions, because they represent very threatening processes. If you are sufficiently proactive, you can generally get them under control.</p>
<p><b>David:</b> What are some of the new anti-aging treatments that you foresee coming along in the near future, like from stem cell research and therapeutic cloning?</p>
<p><b>Ray:</b> It depends on what you mean by &#147;near future,&#148; because in ten or fifteen years we foresee a fundamentally transformed landscape.</p>
<p><b>David:</b> Let&#146;s just say prior to nanotechnology, and then that will be the next question.</p>
<p><b>Ray:</b> The next frontier is biotechnology. We&#146;re really now entering an era where we can reprogram biology. We&#146;ve sequenced the genome, and we are now reverse-engineering it. We&#146;re understanding the roles that the genes play, how they express themselves in proteins, and how those proteins then play roles in sequences of biochemical steps that lead to both orderly processes and dysfunction&#151;disease processes, such as atherosclerosis and cancer&#151;and we are gaining the means to reprogram those processes.</p>
<p>For example, we can now turn genes off with RNA interference. This is a new technique that emerged just a few years ago&#151;a medication with little pieces of RNA that latch on to the messenger RNA expressing a targeted gene and destroy it, thereby preventing the gene from expressing itself. This effectively turns the gene off. So right away that methodology has lots of applications.</p>
<p>Take the fat insulin receptor gene. That gene basically says &#145;hold on to every calorie because the next hunting season may not work out so well.&#146; That was a good strategy, not only for humans, but for most species, thousands of years ago. It&#146;s still probably a good strategy for animals living in the wild. But we&#146;re not animals living in the wild. It was good for humans a thousand years ago when calories were few and far between. Today it underlies an epidemic of obesity. How about turning that gene off in the fat cells? What would happen?</p>
<p>That was actually tried in mice, and these mice ate ravenously, and they remained slim. They got the health benefits of being slim. They didn&#146;t get diabetes. They didn&#146;t get heart disease. They lived twenty percent longer. They got the benefits of caloric restriction while doing the opposite. So turning off the fat insulin receptor gene in fat cells is the idea. You don&#146;t want to turn it off in muscle cells, for example. This is one methodology that could enable us to prevent obesity, and actually maintain an optimal weight no matter what we ate. So that&#146;s one application of RNA interference.</p>
<p>There&#146;s a number of genes that have been identified that promote atherosclerosis, cancer, diabetes and many other diseases. We&#146;d like to selectively turn those genes off, and slow down or stop these disease processes. There are certain genes that appear to have an influence on the rate of aging. We can amplify the expression of genes similarly, and we can actually add new genetic information&#151;that&#146;s gene therapy. Gene therapy has had problems in the past, because we&#146;ve had difficulty putting the genetic information in the right place at the right chromosome. There are new techniques now that enable us to do that correctly.</p>
<p>For example, you can take a cell out of the body, insert the genetic information in vitro&#151;which is much easier to do in a Petri dish&#151;and examine whether or not the insertion went as intended. If it ended up in the wrong place you discard it. You keep doing this until you get it right. You can examine the cell and make sure that it doesn&#146;t have any DNA errors. So then you take this now modified cell&#151;that has also been certified as being free of DNA errors&#151;and it&#146;s replicated in the Petri dish, so that hundreds of millions of copies of it are created. Then you inject these cells back into the patient, and they will work their way into the right tissues. A lung cell is not going to end up in the liver.</p>
<p>In fact, this was tried by a company I&#146;m involved with, United Therapeutics. I advise them and I&#146;m on their board. They tried this with a fatal disease called pulmonary hypertension, which is a lung disease, and these modified cells ended up in the right place&#151;in the lungs&#151;and actually cured pulmonary hypertension in animal tests. It has now been approved for human trials. That&#146;s just one example of many of being able to actually add new genes. So we&#146;ll be able to subtract genes, over-express certain genes, under-express genes, and add new genes.</p>
<p>Another methodology is cell transdifferentiation, a broader concept than just stem cells. One of the problems with stem cell research or stem cell approaches is this. If I want to grow a new heart, or maybe add new heart cells, because my heart has been damaged, or if I need new pancreatic Islet cells because my pancreatic Islet cells are destroyed, or need some other type of cells, I&#146;d like it to have my DNA. The ultimate stem cell promise, the holy grail of these cell therapies, is to take my own skin cells and reprogram them to be a different kind of cell. How do you do that? Actually, all cells have the same DNA. What&#146;s the difference between a heart cell and a pancreatic Islet cell?</p>
<p>Well, there are certain proteins, short RNA fragments, and peptides that control gene expression. They tell a heart cell that only those genes which should be expressed in a heart cell are expressed. And we&#146;re learning how to manipulate which genes are expressed. By adding certain proteins to the cell we can reprogram a skin cell to be a heart cell or a pancreatic Islet cell. This has been demonstrated in just the last couple of years. So then we can create in a Petri dish as many heart cells or pancreatic Islet cells as I need, with my own DNA, because they&#146;re derived from my cells. Then we inject them, and they&#146;ll work their way into the right tissues. In the process we can discard cells that have DNA errors, so we can basically replenish our cells with DNA-corrected cells.</p>
<p>While we are at it, we can also extend the telomeres. That&#146;s another aging process. As cells replicate, these little repeating codes of DNA called telomeres grow shorter. They&#146;re like little beads at the end of the DNA strands. One falls off every time the cell replicates, and there are only about fifty of them. So after a certain number of replications the cell can&#146;t replicate anymore. There is actually one enzyme that controls this&#151;telomerase, which is capable of extending the telomeres. Cancer cells actually work by creating telomerase, which enables them to replicate without end. Cancer cells become immortal because they can create telomerase.</p>
<p>As we&#146;re rejuvenating our cells, turning a skin cell into the kind of cell that I need, making sure that its DNA is corrected, we can also extend its telomeres by using telomerase in the Petri dish. Then you&#146;ve got this new cell that&#146;s just like my heart cells were when I was twenty. Now you can replicate that, then inject it, and really rejuvenate all of the body&#146;s tissues with young versions of my cells. So that&#146;s cell rejuvenation. That&#146;s one idea, or one technique, and there are many different variations of it.</p>
<p>Then there&#146;s turning enzymes on and off. Enzymes are the workhorses of biology. Genes express themselves as enzymes, and the enzymes actually go and do the work. And we can add enzymes. We can turn enzymes off. One example of that is Torcetrapib, which destroys one enzyme, and that enzyme destroys HDL, the good cholesterol in the blood. So when people take Torcetrapib their HDL (good cholesterol) levels soar, and atherosclerosis dramatically slows down or stops. The phase 2 trials were very encouraging, and Pfizer is spending a record one billion dollars on the phase 3 trials. That&#146;s just one example of this paradigm: manipulating enzymes. So there are many different ideas for getting in and very precisely reprogramming the information processes that underlie biology, to undercut disease and aging processes, and move them towards healthy, rejuvenated processes.</p>
<p><b>David:</b> How do you see robotics, artificial intelligence, and nanotechnology affecting human health and life span in the future?</p>
<p><b>Ray:</b> I mentioned that we talk about three bridges to radical life extension in <i>Fantastic Voyage</i>. Bridge One is aggressively applying today&#146;s knowledge, and that&#146;s, of course, a moving frontier, as we learn and gain more and more knowledge. In Chapter 10 of <i>Fantastic Voyage</i> I talk about my program, and at the end I mention that one part of my program is what I call a positive health slope, which means that my program is not fixed.</p>
<p>I spend a certain amount of time every week studying a number of things&#151;new research, new drug developments that are coming out, new information about myself that may come from testing. Just reading the literature I might discover something that&#146;s in fact old knowledge, but there&#146;s so much information out there, I haven&#146;t read everything. So I&#146;m constantly learning more about health and medicine and my own body and modifying my own program. I probably make some small change every week. That doesn&#146;t mean my program is unstable. My program is quite stable, but I&#146;m fine-tuning at the edges quite frequently.</p>
<p>Bridge Two we&#146;ve just been talking about, which is the biotechnology revolution. A very important insight that really changes one&#146;s perspective is to understand that progress is exponential and not linear. So many sophisticated scientists fail to take this into consideration. They just assume that the progress is going to continue at the current pace, and they make this mistake over and over again. If you consider the exponential pace of this process, ten or fifteen years from now we will have really dramatic tools in the forms of medications and cell therapies that can reprogram our health, within the domain of biology.</p>
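<p>The gap between the two assumptions compounds quickly. A sketch comparing a linear forecaster with the yearly doubling described here (the starting capability of 1.0 and both growth rates are illustrative placeholders, not measured figures):</p>

```python
# Linear vs. exponential forecasting over fifteen years, starting from a
# capability of 1.0. The linear forecaster assumes the current pace of
# one unit per year simply continues; the exponential forecaster assumes
# the capability doubles every year.
years = 15
linear = 1.0 + 1.0 * years      # current pace continues: 16x
exponential = 1.0 * 2 ** years  # doubling every year: 32,768x
ratio = exponential / linear    # how far off the linear forecast is
```

<p>After fifteen years the doubling forecast exceeds the linear one by a factor of roughly two thousand, which is the magnitude of the error being described.</p>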
<p>Bridge Three is nanotechnology. The golden era will be about twenty years from now. There&#146;ll be some applications earlier, but the real Holy Grail of nanotechnology is nanobots: blood cell-size devices that can go inside the body and keep us healthy from inside. If that sounds very futuristic, I&#146;d actually point out that we&#146;re already doing sophisticated tasks with blood cell-size devices in animal experiments.</p>
<p>One scientist cured Type 1 diabetes in rats with a nano-engineered capsule that has seven-nanometer pores. It lets insulin out in a controlled fashion and blocks antibodies. And that&#146;s what is feasible today. MIT has a project involving a nano-engineered device that&#146;s actually smaller than a cell and is capable of detecting the antigens that exist only on certain types of cancer cells. When it detects these antigens it latches onto the cell and burrows inside. Once it detects that it&#146;s inside, it releases a toxin which destroys the cancer cell. This has actually worked in the Petri dish, which is quite significant because there&#146;s actually not that much that would be different in vivo versus in vitro.</p>
<p>This is a rather sophisticated device because it&#146;s going through several different stages, and it can do all of these different steps. It&#146;s a nano-engineered device in that it is created at the molecular level. So that&#146;s what is feasible already. If you consider what I call the Law of Accelerating Returns, which is a doubling of the power of these information technologies every year, within twenty-five years these computation-communication technologies, and our understanding of biology, will be a billion times more advanced than they are today. We&#146;re shrinking technology, according to our models, by a factor of over a hundred in 3-D volume per decade.</p>
<p>So these technologies will be a hundred thousand times smaller than they are today in twenty-five years, and a billion times more powerful. And look at what we can already do today experimentally. Twenty-five years from now these nanobots will be quite sophisticated. They&#146;ll have computers in them. They&#146;ll have communication devices. They&#146;ll have small mechanical systems. They&#146;ll really be little robots, and they&#146;ll be able to go inside the body and keep us healthy from inside. They will be able to augment the immune system by destroying pathogens. They will repair DNA errors, remove debris, and reverse atherosclerosis. Whatever we don&#146;t get around to finishing with biotechnology, we&#146;ll be able to finish with these nano-engineered, blood cell-sized robots, or nanobots.</p>
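<p>The two scaling figures are consistent with each other, as a quick check shows (taking the stated factor of one hundred per 3-D volume per decade at face value):</p>

```python
# Compounding the stated shrink rate (a factor of ~100 in 3-D volume per
# decade) over 25 years, and converting it to a per-dimension rate.
volume_shrink = 100 ** (25 / 10)           # 100,000x smaller by volume
linear_shrink_per_decade = 100 ** (1 / 3)  # ~4.6x per linear dimension
```

<p>Two and a half decades of hundredfold volume shrinking compound to the &#147;hundred thousand times smaller&#148; figure quoted above.</p>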
<p>This really will provide radical life extension. The basic metaphor or analogy to keep in mind is to ask the question, How long does a house last? Aubrey de Grey uses this metaphor. The answer is, a house lasts as long as you want it to. If you don&#146;t take care of it the house won&#146;t last that long. It will fall apart. The roof will spring a leak and the house will quickly decay. On the other hand, if you&#146;re diligent, and something goes wrong in the house, you fix it. Periodically you upgrade the technology. You put in a new HVAC system and so forth. With this approach, the house will go on indefinitely, and we do have houses, in fact, that are thousands of years old. So why doesn&#146;t this apply to the human body?</p>
<p>The answer is that we understand how a house works. We understand how to fix a house. We understand all the problems a house can have, because we&#146;ve designed them. We don&#146;t yet have that knowledge and those tools to do a comparable job with our body. We don&#146;t understand all the things that could go wrong, and we don&#146;t have all the fixes for everything. But we will have this knowledge and these tools. We will have complete models of biology. We&#146;ll have reverse-engineered biology within twenty years, and we&#146;ll have the means to go in and repair all of the problems we have identified.</p>
<p>We&#146;ll be able to indefinitely fix the things that go wrong. We&#146;ll have nanobots that can go in and proactively keep us healthy at a cellular level, without waiting until major diseases flare up, as well as stop and reverse aging processes. We&#146;ll get to a point where people will not age. So when we talk about radical life extension we&#146;re not talking about people growing old and becoming what we think of today as a 95 year old and then staying at a biological age 95 for hundreds of years.</p>
<p>We&#146;re talking about people staying young and not aging. Actually, I&#146;m talking about even more than that, because in addition to radical life extension, we&#146;ll also have radical life expansion. The nanobots will be able to go inside the brain and extend our mental functioning by interacting with our biological neurons. Today we already have computers that are placed inside people&#146;s brains to replace diseased parts of the brain, like the neural implant for Parkinson&#146;s disease. The latest generation of that implant allows you to download new software to the implant from outside the patient&#151;and that&#146;s not an experiment, that&#146;s an FDA-approved therapy.</p>
<p>Today these neural implants require surgery, but ultimately we&#146;ll be able to send these brain extenders into the nervous system noninvasively through the capillaries of the brain, without surgery. And we&#146;ll be using them, not just to replace diseased tissue, but to go beyond our current abilities&#151;to extend our memories, extend our pattern recognition and cognitive capabilities, and merge intimately with our technology. So we&#146;ll have radical life expansion along with radical life extension. That&#146;s my vision of what will happen in the next several decades.</p>
<p><b>David:</b> What are you currently working on?</p>
<p><b>Ray:</b> I spend maybe forty or fifty percent of my time communicating&#151;in the form of books, articles, interviews, speeches. I give several speeches a month. Then there&#146;s my Web site: KurzweilAI.net. We have a free daily or weekly newsletter; people can sign up by putting in their email address (which is kept in confidence) on the home page.</p>
<p>Then I have several businesses that I&#146;m running, which are in the area of pattern recognition. I&#146;ve been in the reading machine business for thirty-two years. I developed the first print-to-speech technology for the blind in 1976, and we&#146;re introducing a new version that fits in your pocket. A blind person can take it out of their pocket, snap a picture of a handout at a meeting, a sign on a wall, the back of a cereal box, or an electronic display, and the device will read it out loud to them through an earphone or speaker.</p>
<p>We&#146;re developing a new medical technology, which is basically a smart undershirt that monitors your health. There will be a smart bra version for women. It takes a complete-morphology EKG and monitors your breathing. So, for example, if you&#146;re a heart patient it could tell you whether your atrial fibrillation is getting better or worse. When you&#146;re exercising it can tell you if you&#146;re getting into a problem situation. So it gives you diagnostic information. It can also alert you if you should contact your doctor. Basically your undershirt is sending this information by Bluetooth to your cell phone, and your cell phone is running this cardiac evaluation software. So that&#146;s another project.</p>
<p>Then we have Ray and Terry&#146;s longevity products at <a href="http://RayandTerry.com" target="_blank">RayandTerry.com</a>, which go along with <i>Fantastic Voyage</i>. We have about 20 products available now, and we&#146;ll have about fifty within a few months. Basically all the things we recommend in the book will be available. We also have combinations. So, for example, if you want to lower cholesterol we have a cholesterol-lowering product, and you don&#146;t have to buy the eight or nine different supplements separately. We put all of our recommendations together in one combination to make it easy for people to follow. There&#146;s a total daily care product that has basic nutritional supplements, like vitamins and minerals, and coenzyme Q-10, and so on. We have a meal-replacement shake that is low carbohydrate, has no sugar, but actually tastes good, which is actually rare, because if you&#146;ve ever tasted a low-carb meal-replacement shake you know that in general the taste is not desirable. This might sound promotional, but that was the objective, and it&#146;s actually made up of the nutritional supplements that we recommend. So that&#146;s another company, and those are the companies that we&#146;re running.</p>
<p><i>&copy;2006 <a href="http://www.mavericksofthemind.com/" target="_blank">David Jay Brown</a>. Reprinted with permission.</i></p>
]]></content:encoded>
			<wfw:commentRss>https://www.writingsbyraykurzweil.com/reprogramming-your-biochemistry-for-immortality-an-interview-with-ray-kurzweil-by-david-jay-brown/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>From ENIAC to Everyone: Talking with J. Presper Eckert</title>
		<link>https://www.writingsbyraykurzweil.com/from-eniac-to-everyone-talking-with-j-presper-eckert</link>
		<comments>https://www.writingsbyraykurzweil.com/from-eniac-to-everyone-talking-with-j-presper-eckert#respond</comments>
		<pubDate>Thu, 23 Feb 2006 08:43:34 +0000</pubDate>
								<dc:creator>Alexander Randall 5th</dc:creator>
		
		
				<category><![CDATA[essays]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[J. Presper Eckert reveals the inside story of the invention of ENIAC, the first practical, all-electronic computer, and debunks some myths in this forgotten interview. "It is shocking to have your life work reduced to a tenth of a square inch of silicon," he said.]]></description>
			<content:encoded><![CDATA[<p><i>Published on KurzweilAI.net February 23, 2006.</i></p>
<p><i>There are two epochs in computer history: Before ENIAC and After ENIAC. The first practical, all-electronic computer was unveiled on February 13, 1946 at the Univ. of Pennsylvania&#8217;s Moore School of Electronics. While there are controversies over who invented what, there is universal agreement that the ENIAC was the watershed project that showed electronic computing was possible. It was a masterpiece of electrical engineering, with unprecedented reliability and speed. And the two men most responsible for its success were J. Presper Eckert and John W. Mauchly.</i><span id="more-80912"></span></p>
<p><i>I recorded two days of interviews with J. Presper Eckert in 1989. He was 70 years old. My father was Pres&#8217; best friend from childhood and I&#146;d spent my childhood playing with his children. I visited him regularly as an adult. On the first occasion, we spoke in his living room in Gladwyne, Pennsylvania&#151;most of it spent sitting on the floor. We stopped talking about computers only to fiddle with his Nova Chord electronic organ, which predated ENIAC, and with his stereo speakers. On a second occasion, I recorded a conversation at his daughter&#8217;s home in western Massachusetts.</i></p>
<h3>Calculating Machines Before ENIAC</h3>
<p>Randall: How did the calculating machines <i>before</i> ENIAC work?</p>
<p>Eckert: Well, a person with a paper and pencil can add two 10-digit numbers in about 10 seconds. With a hand calculator, the time is down to 4 seconds. The Harvard Mark 1 was the last of the electromechanical computers&#151;it could add two 10-digit numbers in 0.3 seconds, about 30 times faster than paper and pencil. When I was a graduate student, the Moore School of Electronics had two analyzers that were essentially copies of Vannevar Bush&#8217;s machine from MIT. </p>
<p>Randall: What could that machine do?</p>
<p>Eckert: It could solve linear differential equations but only linear equations. It had a long framework divided into sections with a couple dozen shafts buried through it. You could put different gears on the shafts using screwdrivers and hammers, and it had &quot;integrators&quot; that gave the product of two shafts coming in on a third shaft coming out. By picking the right gear ratio, you could get the right constants in the equation. We used published tables to pick the gear ratios to get whatever number you wanted. The limit on accuracy of this machine was the slippage of the mechanical wheels on the integrator. That made me say, &quot;Let&#8217;s build electronic integrators and stick them into this machine instead of those wheel things.&quot; We added several dozen motors and amplifiers and circuits using over 400 vacuum tubes, which, as electronic things go, is not trivial. A radio has only five or six tubes, and television sets have up to 30. The Nova Chord organ was built prior to this and it has about 170 tubes. The Bush Analyzer was still essentially a mechanical device.</p>
<p align="center"><img loading="lazy" decoding="async" src="/images/ENIAC_Photo1.jpg" width="400" height="300" border="1"/></p>
<p><i><font size="2">ENIAC, which debuted 60 years ago, had 18,000 vacuum tubes.</font></i></p>
<p>That led me to examine whether I could find some way to multiply pulse numbers together so I wouldn&#8217;t need gears&#151;then I could do the whole thing electrically. There&#8217;s a theorem in calculus where you can use two integrators to do a multiplication. I talked with John Mauchly about it. Just who put in which part is hard to tell, but the idea of doing the integrations by counters was mine. </p>
<hr />
<p><b>&quot;The first real use was Edward Teller using ENIAC to do calculations for the Hydrogen bomb.&quot; </b></p>
<hr />
<p>The ENIAC (<b>E</b>lectronic <b>N</b>umerical <b>I</b>ntegrator <b>A</b>nd <b>C</b>omputer) was the first electronic digital computer and could add those two 10-digit numbers in 0.0002 seconds&#151;that&#8217;s 50,000 times faster than a human, 20,000 times faster than a calculator, and 1,500 times faster than the Mark 1. For specialized scientific calculations it was even faster.</p>
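The speedup figures quoted here follow directly from the addition times given earlier in the interview; as a quick arithmetic check (an illustrative sketch, using only numbers from the text):

```python
# Time to add two 10-digit numbers, per the interview (seconds)
human = 10.0        # paper and pencil
calculator = 4.0    # hand calculator
mark1 = 0.3         # Harvard Mark 1 (electromechanical)
eniac = 0.0002      # ENIAC (electronic)

print(round(human / eniac))       # 50000: ~50,000x faster than a human
print(round(calculator / eniac))  # 20000: ~20,000x faster than a calculator
print(round(mark1 / eniac))       # 1500:  ~1,500x faster than the Mark 1
```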
<p>Randall: So it&#8217;s a myth that ENIAC could only add, subtract, multiply and divide&#8230;.</p>
<p>Eckert: No, that&#8217;s a calculator. ENIAC could do three-dimensional second-order differential equations. We were calculating trajectory tables for the war effort. In those days the trajectory tables were calculated by hundreds of people operating desk calculators&#151;people who were called &quot;computers.&quot; So the machine that did that work was called a &quot;Computer.&quot;</p>
<h3>Visions of ENIAC</h3>
<p>Randall: So what did they give you? Did they say, &quot;Here&#8217;s a room. Here are some tools. Here are some guys&#151;go make it&quot;?</p>
<p>Eckert: Uh huh. Pretty much.</p>
<p>Randall: What did ENIAC&#8217;s room look like?</p>
<p>Eckert: We built ENIAC in a room that was 30 feet by 50 feet, on the first floor of the Moore School in West Philadelphia.</p>
<p>Randall: There is a story that ENIAC dimmed the lights in Philadelphia when it was in use.</p>
<p>Eckert: That story is total fiction, dreamed up by some journalist. We took power off of the grid. We had voltage regulators to provide 150 kilowatts of regulated supply.</p>
<p>Randall: Did the military guys working on ENIAC salute the machine?</p>
<p>Eckert: Another ENIAC myth. </p>
<p>Randall: You said the largest tube gadget in 1943 was the Nova Chord with 170 tubes&#8230; what did ENIAC use?</p>
<p>Eckert: ENIAC had 18,000 vacuum tubes. The tubes were off the shelf; we got whatever the distributor could supply in lots of a thousand. We used 10 tube types, but could have done it with 4 tube types; we just couldn&#8217;t get enough of them. We decided that our tube filaments would last a lot longer if we kept them below their rated voltage&#151;not too high, not too low. A lot of the circuits were off the shelf, but I invented a lot of the circuits as well. Registers were a new idea. So were integrator circuits.</p>
<p>The function of the machine was split into eight basic circuit components: the accumulator, initiator, master programmer, multiplier, divider/square-root, gate, buffer, and the function tables. The accumulator was the basic arithmetic unit of the ENIAC. It consisted of twenty registers, each ten digits wide, which performed addition, subtraction, and temporary storage. The accumulator can be compared to the registers in today&#8217;s central processing units. </p>
<h3>State of the Invention</h3>
<p>Randall: Are there any of your circuits still in use in today&#8217;s personal computers&#8230;</p>
<p>Eckert: No, but that&#8217;s true of any first invention. Edison&#8217;s original light bulb bears no resemblance to a modern bulb. They do the same thing but with totally different components. Same with the computer. <i>What did survive were the concepts, not the hardware. The idea of a subroutine was original with ENIAC.</i> Mauchly had this idea based on his knowledge of the inner workings of desk calculators and introduced me to his idea for a subroutine in the machine. On Mark 1, if they wanted to do a calculation over and over, they had to feed the same tape in over and over. We invented ways to run the same subroutine without any mechanical input. The idea of using internal memory was also original with ENIAC.</p>
<p>Randall: There&#8217;s a story that some guy was running around with a box of tubes and had to change one every few minutes.</p>
<p align="center"><img loading="lazy" decoding="async" src="/images/eniac-6.jpg" width="400" height="493" border="1"/></p>
<p>Eckert: Another myth. We had a tube fail about every two days, and we could locate the problem within 15 minutes. We invented a scheme to build the computer on removable chassis&#151;plug-in components&#151;so when tubes failed we could swap them out in seconds. We carried out a very radical idea in a very conservative fashion.</p>
<p>Randall: You are talking about many simultaneous innovations. How many inventions went into ENIAC?</p>
<p>Eckert: Hard to say, maybe 100. Some are just good engineering or wrinkles on ideas. We made a memory device where bits were stored as sound waves that propagated down a meter-long tube of mercury. You could input about 1000 pulses at one end before they started to come out the other end, where we re-amplified them and sent them back in again. Sound is so much slower than electricity that we could store 1000 pulses as acoustic waves in short-term memory.</p>
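The numbers Eckert gives imply the pulse rate of the delay line. A rough back-of-the-envelope sketch, assuming a speed of sound in mercury of about 1,450 m/s (an assumed figure, not given in the interview):

```python
# Back-of-the-envelope arithmetic for the mercury delay-line memory.
# Assumption (not from the interview): sound travels ~1450 m/s in mercury.
speed_of_sound = 1450.0   # m/s (assumed)
tube_length = 1.0         # meters, per "a meter-long tube of mercury"
pulses_stored = 1000      # pulses in flight at once, per the interview

delay = tube_length / speed_of_sound  # end-to-end travel time
pulse_rate = pulses_stored / delay    # pulses per second to keep 1000 in flight

print(f"end-to-end delay: {delay * 1e3:.2f} ms")          # 0.69 ms
print(f"implied pulse rate: {pulse_rate / 1e6:.2f} MHz")  # 1.45 MHz
```

A megahertz-class pulse rate is consistent with the mercury delay-line memories of that era, though the exact figure here depends on the assumed speed of sound.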
<p>Randall: How many people were working on ENIAC?</p>
<p>Eckert: Total count was about 50 people, 12 of us engineers or technical people. Mauchly was teaching part-time, others had part-time jobs. I was on it full-time as Chief Engineer.</p>
<p>Randall: How old were you?</p>
<p>Eckert: We signed the contract on my 24th birthday, May 9, 1943.
</p>
<p>Randall: Was ENIAC programmable?</p>
<p>Eckert: Yes and no. We programmed the machine by plugging wires in from place to place. That&#8217;s not hard-wired, it is not software, it is not memory. It&#8217;s pluggable programming. And we had switches to set the functions.</p>
<p>Randall: What was the first thing you did with ENIAC?</p>
<p>Eckert: It was designed to calculate trajectory tables, but it came too late to really help with the war effort. The first real use was Edward Teller using ENIAC to do calculations for the Hydrogen bomb.</p>
<p>Randall: What&#8217;s the zaniest thing you did while developing ENIAC?</p>
<p>Eckert: The mouse cage was pretty funny. We knew mice would eat the insulation off the wires, so we got samples of all the wires that were available and put them in a cage with a bunch of mice to see which insulation they did <i>not</i> like. We only used wire that passed the mouse test. </p>
<h3>What Background Led to ENIAC?</h3>
<p>Randall: What prepared you for building an electronic computer?</p>
<p>Eckert: Remember, in that era, Philadelphia was Vacuum Tube Valley. Radios and televisions were predominantly made in Philadelphia. I worked on primitive television at Farnsworth as a teenager, and at Penn I had been working on various radar problems, trying to measure the time for a pulse to go out and come back. We needed an accuracy of one part in a hundred thousand, which is more accurate than anything we could do at that time. I figured that out with counters. All this was a good lead-in for building an electronic computer. </p>
<p>Randall: Was it you or was it the times?</p>
<p>Eckert: Well, I may have been uniquely prepared. I was very good in math and I was fascinated with all electronics. I was designing electronic gadgets as a kid and I not only did academic math, I studied business math. Maybe I had the right fusion of interests. But every inventor stands on the pedestals built by other people. If I hadn&#8217;t done it, someone else would have. All that any inventor does is accelerate the process. <i>The main thing was we made a machine that didn&#8217;t fail the first time.</i> If it had failed, we might have discouraged this line of work for a long time. People usually build prototypes, see their errors and try again. We couldn&#146;t do that. We had to make it work the first time out.</p>
<p>Randall: You have dozens of patents for your inventions. What motivates you?</p>
<p>Eckert: I am happiest when I am working on the edge of something. Where there are not many people who have done it. When nobody has done it, it is pretty tough. That gets me excited.</p>
<p>Randall: If you were a young engineer today, what would you be working on?</p>
<p>Eckert: <i>I want to develop three-dimensional processors like a sugar cube instead of wafers.</i> I want to make a computer that is specialized to simulate a wind tunnel. I have an idea for a keyboard that is shaped like a basketball on a joystick&#151;your hands would be comfortable&#8230; I&#8217;ve been thinking a lot about a machine to gang up a few thousand really cheap processors with a commander like &quot;Simon Says&#8230;.&quot; They&#8217;d all do the same procedure in synchrony. I have a lot of acoustic projects that are still not solved. Maybe I&#146;d work on a robot that could clear dishes off the table or mow the lawn. The next wave is all about recognition. Making systems that recognize patterns&#8230;.</p>
<p>Randall: When you were working on ENIAC did you have any inkling these things would be laptop sized and everyone would own one?</p>
<hr />
<p><b>&quot;It is shocking to have your life work reduced to a tenth of a square inch of silicon.&quot; </b></p>
<hr />
<p>Eckert: Mauchly thought the world would need maybe 6 computers. No one had any idea the transistor and chip technologies would come along so quickly. It is shocking to have your life work reduced to a tenth of a square inch of silicon. Jules Verne predicted we&#146;d go to the moon, but he never had any idea we&#146;d all sit home and watch it on TV. In every technology, there are inventions that go off at a right angle that change the path; there are new ideas that you can&#146;t see coming.</p>
<p align="center"><img loading="lazy" decoding="async" src="/images/ENIAC-1.gif" width="337" height="318" border="1"/></p>
<p>Randall: A lot of people have claimed they invented the first computer. What about John Atanasoff?</p>
<p>Eckert: In the course of a patent fight, the other side brought up Atanasoff and tried to show that he built an electronic computer ahead of us. It&#8217;s true he had a lab-bench, tabletop kind of thing, and John went out to look at it and wrote a memo, but we never used any of it. His thing didn&#146;t really work. He didn&#146;t have a whole system. That&#146;s a big thing with an invention; you have to have a whole system that works.</p>
<p>John and I not only built ENIAC. It worked. And it worked for a decade doing what it was designed to do. We went on to build BINAC and UNIVAC and hundreds of other computers. And the company we started is still in operation, after many name changes, as Unisys, and I am still working for that company. Atanasoff may have won a point in court, but he went back to teaching and we went on to build the first real electronic programmable computers, the first commercial computers. We made a lot of computers, and we still do.</p>
<p>Randall: And John von Neumann?</p>
<p>Eckert: He came and looked at our stuff and went back to Princeton and wrote a long document about the principles. He gets a lot of credit but the inventions were ours. Someday I&#8217;ll write a book on who really invented the computer. It wasn&#8217;t Atanasof or von Neumann&#8230; we did it.</p>
<p><i>&copy; 2005 Alexander Randall 5th.</i></p>
]]></content:encoded>
			<wfw:commentRss>https://www.writingsbyraykurzweil.com/from-eniac-to-everyone-talking-with-j-presper-eckert/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Sander Olson Interviews Ray Kurzweil</title>
		<link>https://www.writingsbyraykurzweil.com/sander-olson-interviews-ray-kurzweil</link>
		<comments>https://www.writingsbyraykurzweil.com/sander-olson-interviews-ray-kurzweil#respond</comments>
		<pubDate>Fri, 10 Feb 2006 13:44:37 +0000</pubDate>
								<dc:creator>Sander Olson, Ray Kurzweil</dc:creator>
		
		
				<category><![CDATA[essays]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[Nonbiological intelligence is multiplying by over 1,000 per decade. Once we can achieve the software of intelligence, which we will achieve through reverse-engineering the human brain, non-biological intelligence will soar past biological intelligence.  By the 2040s, nonbiological intelligence will be a billion times more powerful than the 10^26 computations per second that all biological humanity represents.]]></description>
			<content:encoded><![CDATA[<p><i>Originally printed on <a href="http://crnano.org/interview.kurzweil.htm" target="_blank">CRNano.org</a> February 9, 2006. Reprinted on KurzweilAI.net February 10, 2006.</i></p>
<p> <i><b>Question 1:&nbsp; Tell us about your background. When did you first work with a computer?&nbsp; When did you first begin studying computer/technological trends?</b></i> </p>
<p>I had the idea that I wanted to be an inventor since I was five. I first got involved with computers when I was twelve, programming some early computers, such as the 1401 and the 1620. I also built computers out of telephone relays.<span id="more-80910"></span></p>
<p>I began seriously modeling technology trends around 1980. I quickly realized that timing is the critical factor in the success of inventions. Most technology projects fail not because the technology doesn&#8217;t work, but because the timing is wrong&#151;not all of the enabling factors are in place when they are needed. So I began to study these trends in order to anticipate what the world would be like in 3&#8211;5 or 10 years and make realistic assessments. That continued to be the primary application of this study. I used these methodologies to guide the development plans of my projects, in particular when to launch a particular project, so that the software would be ready when the underlying hardware was available, the market was ready, and so on.</p>
<p>These methodologies had the side benefit of allowing us to project development 20 or 30 years into the future. There is a strong common wisdom that you can&#8217;t predict the future, but that wisdom is incorrect. Some key measures of information technology&#151;price-performance, capacity, bandwidth&#151;follow very smooth exponential trends. I have been making predictions going back to the 1980s, when I wrote <i>The Age of Intelligent Machines</i>. That book had hundreds of predictions about the 1990s and 21<sup>st</sup> century based on these models, which have turned out to be quite accurate. If we know how much it will cost per million instructions per second (MIPS) of computing at future points in time, or how much it will cost to sequence a base pair of DNA or to model a protein, or any other measure of information technology at different points in time, we can build scenarios of what will be feasible. The capability of these technologies grows exponentially, essentially doubling every year (depending on what you measure). There is even a slow second level of exponential growth: the rate of exponential growth is itself increasing.</p>
<p>We will increase the price-performance of computing, which is already formidable and deeply influential, by a factor of a billion in 25 years, and we will also shrink the technology, at a predictable pace, by a factor of over one hundred in 3D volume per decade. So these technologies will be very small and widely distributed, inexpensive, and extremely powerful. Look at what we can do already, and multiply that by a billion.</p>
<p><b><i>Question 2: When did you first become aware of the term “singularity?”&nbsp;Did you use that term in your first book, </i>The Age of Intelligent Machines<i>?</i></b></p>
<p>No. I first became familiar with it probably around the late 1990s. In my latest book, <i>The Singularity is Near</i>, I have really focused on the point in time where these technologies become quite explosive and profoundly transformative. In my earlier book, <i>The Age of Spiritual Machines</i>, I touched on that, and wrote about computers achieving human levels of intelligence and what that would mean. My main focus in this new book is on the merger of biological humanity with the technology that we are creating. Once nonbiological intelligence gets a foothold in our bodies and brains, which we have arguably already done in some people, but will do significantly in the 2020s, it will grow exponentially. We have about 10<sup>26</sup> calculations per second (cps) (at most 10<sup>29</sup>) in biological humanity, and that figure won&#8217;t change much in the next fifty years. Our brains use a form of electro-chemical signaling that travels a few hundred feet per second, which is a million times slower than electronics. The inter-neuronal connections in our brains compute at about 200 calculations per second, which is also about a million times slower than electronics. We communicate our knowledge and skills using language, which is similarly a million times slower than computers can transmit information. </p>
<p>So biological intelligence, while it could be better educated and better organized, is not going to significantly change. Nonbiological intelligence, however, is multiplying by a factor of over 1,000 per decade. So once we can achieve the software of intelligence, which we will achieve through reverse-engineering the human brain, non-biological intelligence will soar past biological intelligence. But this isn&#8217;t an alien invasion; it is something that will literally be deeply integrated in our bodies and brains. By the 2040s, the nonbiological intelligence that we create that year will be a billion times more powerful than the 10<sup>26</sup> cps that all biological humanity represents. The word “singularity” is a metaphor, and the metaphor that we are using isn&#8217;t really infinity, because these exponentials are finite. The real meaning of “singularity” is similar to the concept of the “event horizon” in physics.&nbsp; A black hole as physicists envision it has an event horizon around it, and you can&#8217;t easily see past it. Similarly, it is difficult to see beyond this technological event horizon, because it is so profoundly transformative. </p>
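The "billion times" figure is just the stated growth rate compounded over three decades; a quick check of the arithmetic (the 10<sup>26</sup> cps estimate is taken from the interview):

```python
# Nonbiological intelligence multiplying by ~1,000x per decade, per the text.
growth_per_decade = 1000
decades = 3                      # roughly three decades out to the 2040s
total_growth = growth_per_decade ** decades
print(total_growth)              # 1000000000, i.e. a factor of one billion

biological_cps = 10 ** 26        # estimated cps of all biological humanity
print(f"{total_growth * biological_cps:.0e}")  # 1e+35 cps
```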
<p><b><i>Question 3:&nbsp; Has there been one writer or researcher, such as Marvin Minsky or Vernor Vinge, who has had a predominant influence on your thinking?</i></b> </p>
<p>Both those individuals have been influential. Vernor Vinge has had some really key insights into the singularity very early on. There were others, such as John von Neumann, who talked about a singular event occurring, because he had the idea of technological acceleration and singularity half a century ago. But it was simply a casual comment, and Vinge worked out some of the key ideas. </p>
<p>Marvin Minsky was actually my mentor, and I corresponded with him and visited with him when I was in high school. We remain close friends and colleagues, and many of his writings on artificial intelligence, such as <i>Society of Mind</i> and some of his more technical work, have deeply influenced me.</p>
<p><b><i>Question 4:&nbsp; Many semiconductor analysts are predicting that the field of robotics will become the next major growth industry. When do you predict that the robotics industry will become a major, thriving industry?</i></b> </p>
<p>In the GNR revolutions I write about, R nominally stands for robotics, but the real reference is to strong AI. By strong AI, I mean artificial intelligence at human levels, some of which will be manifested in robots, and some of which will be manifested in virtual bodies and virtual reality. We will go into virtual reality environments, and have nanobots in our brain that will shut down the signals coming from our nerves and sense organs, and replace them with the signals that we would be receiving if we were in the virtual environment. We can be actors in this virtual environment, and have a virtual body. But this virtual body doesn&#8217;t need to be the same as our real body. We will encounter other people in similar situations in this VR. There will also be forms of AI which perform specific tasks, like narrow AI programs do today in our economic infrastructure. Our economic infrastructure would collapse if all these current narrow AI programs stopped functioning, but this wasn&#8217;t true 25 years ago. So these task-specific AI programs will become very intelligent in the coming decades. </p>
<p>So strong AI won&#8217;t just be robots; that is only one manifestation. The R revolution really is the strong AI revolution. Billions of dollars of financial transactions are done every day, in the form of intelligent algorithms, automatic detection of credit card fraud, and so forth. Every time you send an email or make a telephone call, intelligent algorithms route the information. Algorithms automatically diagnose electrocardiograms and blood cell images, fly airplanes, guide “smart” weapons, and so forth. I give dozens of examples in the book. These applications will become increasingly intelligent in the decades ahead. Machines are already performing tasks that previously could only be done by humans, and the tasks that this covers will increase in the coming years. </p>
<p>In order to achieve strong AI, we need to understand how the human brain works, and there are two fundamental requirements. One is the hardware requirement, which you mentioned. It is relatively uncontroversial today that we will achieve computer hardware equivalent to the human brain&#8217;s computing capacity&#151;just look at the semiconductor industry&#8217;s own roadmap. This is a roadmap into which the semiconductor industry has put enormous effort. By 2020, a single chip will provide 10<sup>16</sup> instructions per second, sufficient to emulate a single human brain.&nbsp;We will go to the third dimension, effectively superseding the limits of Moore&#8217;s law, which deals only in 2D integrated circuits. These ideas were controversial notions when my last book (<i>The Age of Spiritual Machines</i>) was published in 1999, but they are relatively uncontroversial today. </p>
<p>The more controversial issue is whether we will have the software, because it is not sufficient to simply have powerful computers; we need to actually understand how human intelligence works. That doesn&#8217;t necessarily mean copying every single pattern of every dendrite and ion channel. It really means understanding the basic principles of how the human brain performs certain tasks, such as remembering, reasoning, recognizing patterns, and so on. That is a grand project, which I refer to as reverse-engineering the human brain, and it is far further along than many people realize. We see exponential growth in every aspect of it. For instance, the spatial resolution of brain scanning is doubling every year in 3D volume. For the first time we can actually see individual interneuronal connections in living brains, and see them signaling in real time. This capability was not feasible a few years ago. The amount of data that we are obtaining on the brain is doubling every year, and we are showing that we can turn this data into working models, and in the book I highlight a couple of dozen simulations of different regions of the brain. For example, there is now a simulation of the cerebellum, which is an important region of the brain devoted to skill formation. This region comprises over half of the neurons of the brain.</p>
<p>I make the case that we will have the principles of operation understood well within twenty years. At the end of the 2020s, we will have both the hardware and software to create human levels of intelligence. This includes emotional intelligence, which is really the cutting edge of intelligence, in a machine. Given that machines are already superior to humans in certain aspects, the human-intelligent computer combination will be quite formidable, and this combination will continue to grow exponentially. Nonbiological intelligence will be able to examine its own source code and improve it in an iterative design cycle.&nbsp; We are doing something like that now with biotechnology, by reading our genes. So in the GNR revolutions I write about, R really stands for intelligence, which is the most powerful force in the universe. It is therefore the most influential of the revolutions.</p>
<p><b><i>Question 5: Nanotechnology plays a key role in your forecasts. What advice would you give to someone wanting to invest today in nanotechnology corporations?</i></b></p>
<p>Nanotechnology developments are currently in their formative stages. There are early applications of nanotechnology, but these do not represent the full vision of nanotechnology, the vision that Eric Drexler articulated in 1986. No one was willing to supervise Drexler&#8217;s radical and interdisciplinary thesis except for my mentor Marvin Minsky. We have shown the feasibility of manipulating matter at the molecular level, which is what biology does. One of the ways to create nanotechnology is to start with biological mechanisms and modify them to extend the biological paradigm&#151;to go beyond proteins. That vision of molecular nanotechnology assembly&#151;of using massively parallel, fully programmable processes to grow objects with remarkable properties&#151;is about twenty years away. There will be a smooth progression, and early adopter applications, many of which I discuss in the book. </p>
<p>There are early applications in terms of nanoparticles. These nanoparticles have unique features due to their nanoscale components, but this is a slightly different concept. We are using the special properties of nanoscale objects, but we are not actually building objects molecule by molecule. So the real revolutionary aspect of nanotechnology is a couple of decades away, and it is too early to say which companies will be the leaders of that. Intel sees that the future of electronics is nanotechnology, and by some definitions today&#8217;s electronics are already nanotechnology. Undoubtedly, there will be small corporations that will dominate. When search engines were formative, it would have been difficult to foresee that two Stanford graduate students would dominate that field. Nanotechnology is already a multi-billion dollar industry, which will continue growing as we get closer to molecular manufacturing. When we actually have molecular manufacturing, it will be transformative&#151;we will be able to inexpensively manufacture almost anything we need from feedstock materials and these information processes. </p>
<p><b><i>Question 6:&nbsp; You write in </i>The Singularity is Near<i> of feeling somewhat alone in your beliefs. How has the mainstream scientific community responded to your prognostications?</i></b></p>
<p>Actually quite well. The book has been very well received; it has gotten very positive reviews in mainstream publications such as the <i>New York Times</i> and the <i>Wall Street Journal</i>. It has done very well: it has been #1 on the science list at Amazon, and ended up the fourth best-selling science book of 2005 despite coming out at the end of the year. The <i>New York Times</i> cited it as the 13<sup>th</sup> most blogged-about book of 2005. In terms of group intellectual debate, I believe that it has gotten a lot of respect, and has been well received. There are individuals who don&#8217;t read the arguments and just read the conclusions. For some of these individuals, the conclusions are so distant from the conventional wisdom on these topics that they reject them out of hand. But for those who carefully read the arguments, the response is generally positive. This is not to say that everyone agrees with everything, but it has gotten a lot of serious response and respect. I do believe that these ideas are getting more widely distributed and accepted; I am obviously not the only person articulating these concepts. Nevertheless, the common wisdom is quite strong. Even among friends and associates, conventional assumptions about the human life cycle&#151;and the notion that life in the future won&#8217;t be much different than it is today&#151;still permeate people&#8217;s thinking. Thoughts and statements regarding life&#8217;s brevity and senescence are still quite influential. The deathist meme (that death gives meaning to life) is alive and well. </p>
<p>The biggest issue, which I put out in the beginning of <i>Singularity</i>, is linear vs. exponential thinking. It is remarkable how often thoughtful people, including leading scientists, think linearly. This is just wrong, and I make this case with dozens of examples. The fact that someone is an expert regarding one aspect of technology or science doesn&#8217;t mean that they have studied technology forecasting. Relatively few futurists and prognosticators have well-grounded methodologies. The common wisdom is to think linearly, to assume that the current pace of change will continue indefinitely. But this attitude is gradually changing, as more and more people understand the exponential perspective and how explosive an exponential can be. That is the true nature of these technology trends. </p>
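<p>The gap Kurzweil describes can be made concrete with a short sketch. This is a hypothetical illustration, not a model from the interview: the growth rates, starting value, and function names below are assumptions chosen only to show how quickly a doubling process outruns a linear one.</p>

```python
# Linear vs. exponential projection of some capability measure.
# All starting values and rates are illustrative placeholders.

def linear_projection(start, yearly_gain, years):
    """Add a constant amount of progress each year (the 'intuitive' view)."""
    return start + yearly_gain * years

def exponential_projection(start, doubling_time_years, years):
    """Double the measure every `doubling_time_years` (the accelerating view)."""
    return start * 2 ** (years / doubling_time_years)

# Both views agree after one year; they diverge explosively afterward.
for years in (1, 10, 25):
    lin = linear_projection(1.0, yearly_gain=1.0, years=years)
    exp = exponential_projection(1.0, doubling_time_years=1.0, years=years)
    print(f"{years:>2} years: linear {lin:g}, exponential {exp:g}")
```

<p>After one year the two projections agree; after twenty-five years the exponential projection is larger by a factor of more than a million, which is the sense in which linear intuition fails.</p>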
<p><b><i>Question 7:&nbsp; What about other technologies and industries, such as the textile, aerospace, or automotive industries?&nbsp; Are all technology fields experiencing exponential growth?</i></b></p>
<p>The key issue is that information technology and information processes progress at an exponential pace. Biological evolution itself was an information process&#151;the backbone is the genetic code, which is a digital code. I show in my book how that has accelerated very smoothly, in terms of the growth of complexity. The same thing is true of technological evolution, when it has to do with information. If we can measure the information content, which we can readily do with things like computation and communication, then we can discern that it progresses in this exponential fashion and is subject to the law of accelerating returns. </p>
<p>An information technology needs to reach a point where it is capable of transforming an industry, and biology is a good example. Biology was not an information technology until recently&#151;it was basically hit or miss. Drug development was called drug discovery, which meant that we didn&#8217;t know why a drug worked and had no theory of its operation. These drugs and tools were relatively crude and had many negative side effects. 99.9% of the drugs on the market were designed in this haphazard, pre-information-era fashion. </p>
<p>The new paradigm in biology is to understand these processes as information processes, and to develop the tools to reprogram them and actually change our genes. We still have genetic programs that are obsolete. The fat insulin receptor gene tells the body to hold on to every calorie, since it is programmed to anticipate that the next hunting season may be a failure. That was a good program 10,000 years ago, but it is not a good program today. We have shown in experimental studies with mice that we can change those programs. There are many genes that we would like to turn off, and there is also new genetic information that we would like to insert. New gene therapy techniques are now beginning to work. We can turn enzymes&#151;the workhorses of biology&#151;on and off, and there are many examples of that. Most current drug development is through this rational drug design. So biology is becoming an information technology, and we can see the clear exponential growth. The amount of genetic data we sequence is doubling every year, the speed with which we can sequence DNA is doubling every year, and the cost has come down by half every year. It took 15 years to sequence the HIV virus, but we sequenced the SARS virus in 31 days. AIDS drugs cost $30,000 per patient per year fifteen years ago, but didn&#8217;t work very well. Now they&#8217;re down to $100 per patient per year in poor countries and work much better. </p>
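<p>The sequencing figures above compound geometrically, which a few lines of arithmetic make vivid. The starting cost and throughput below are hypothetical placeholders; only the doubling-and-halving rates come from the interview.</p>

```python
# Compounding a constant yearly multiplier, as in the DNA-sequencing trends:
# cost halves every year while throughput doubles every year.
# Starting values are invented for illustration.

def compound(value, yearly_factor, years):
    """Apply a constant yearly multiplier for the given number of years."""
    return value * yearly_factor ** years

cost_per_base = 10.0        # hypothetical starting cost, dollars per base
bases_per_year = 1_000_000  # hypothetical starting throughput

for year in (0, 5, 10, 15):
    cost = compound(cost_per_base, 0.5, year)   # halves yearly
    rate = compound(bases_per_year, 2.0, year)  # doubles yearly
    print(f"year {year:>2}: ${cost:.6f}/base, {rate:,.0f} bases/year")
```

<p>After fifteen years of yearly halving, cost falls by a factor of 2<sup>15</sup>, about 33,000&#151;the kind of compounding that turns a fifteen-year sequencing project into a thirty-one-day one.</p>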
<p>Fields such as energy are still not information technologies, but that is going to change as well. For instance, in <i>Singularity</i> I describe how we could meet 100% of our energy needs through renewable energy with nanoengineered solar panels and fuel cells within twenty years, by capturing only 3% of 1% of the sunlight that hits the Earth. This will itself be an information technology, meeting our energy needs in a highly distributed, renewable, clean fashion with nanoengineered devices. We will ultimately transform transportation in a similar way, with nanoengineered devices that can provide personal flying vehicles at very low cost. The transportation and energy industries are currently pre-information fields. Ultimately, however, information technologies will comprise almost everything of value, because we will be able to build anything at extremely low cost using nanoengineered materials and processes. We will have new methods of doing things like flying and creating energy. </p>
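<p>The &#8220;3% of 1%&#8221; claim can be sanity-checked with rough public figures. The two constants below are commonly cited round numbers, not values from the interview: roughly 1.7&#215;10<sup>17</sup> W of sunlight intercepted by the Earth, and roughly 1.5&#215;10<sup>13</sup> W of total human primary power demand circa 2005.</p>

```python
# Back-of-the-envelope check of capturing 3% of 1% of Earth's sunlight.
# Both constants are commonly cited approximations, not from the essay.

solar_power_intercepted_w = 1.7e17  # ~sunlight hitting Earth, watts
human_power_demand_w = 1.5e13       # ~world primary power demand, watts

fraction = 0.01 * 0.03              # "3% of 1%" = 0.03% of sunlight
captured_w = solar_power_intercepted_w * fraction

print(f"fraction captured: {fraction:.4%}")
print(f"power captured:    {captured_w:.2e} W")
print(f"times demand:      {captured_w / human_power_demand_w:.1f}")
```

<p>Even with these rough inputs, 0.03% of incident sunlight yields several times humanity&#8217;s total power demand, so the arithmetic behind the claim holds up.</p>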
<p><b><i>Question 8: You have emphasized the superior mechanical and electronic property of carbon nanotubes. When do you anticipate nanotubes being embedded in materials?&nbsp; When will we see the first computers with nanotube components? </i></b></p>
<p>There is actually a nanotube-based memory that may hit the market next year. This is a dense, two-dimensional device that has attractive properties. But three-dimensional devices are still about one and a half decades away. There are alternatives to nanotubes, such as DNA itself. DNA has potential uses outside of biology because of its affinity for linking to itself; it could also be used structurally. But the full potential of three-dimensional structures based on either carbon nanotubes or DNA is a circa-2020 technology.</p>
<p><b><i>Question 9:&nbsp; Most predictions of future technological developments have been inaccurate. What techniques do you use to improve the accuracy of your prognostications?</i></b>&nbsp; </p>
<p>I have a team of people that gathers data on many different industries and phenomena, and we build mathematical models.&nbsp;More and more areas of science and technology are now measurable in information terms. I use a data-driven approach, and I endeavor to build theoretical models of why these technologies progress. I have this theory of the law of accelerating returns, which is a theory of evolution. I then try to build mathematical models of how that applies to different phenomena and industries. Most futurists don&#8217;t use this type of methodology, and some just make guesses. Many futurists are simply unaware of these trends&#151;they make linear models. It is often said that we overestimate what can be done in the short term, because developing technologies turns out to be more difficult than we expect, but dramatically underestimate what can be achieved in the long term, because people think linearly. </p>
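<p>The trend-fitting methodology described here can be sketched in a few lines: fit a straight line to the base-2 logarithm of a measurement series against time, and the slope gives doublings per year. The data points below are hypothetical placeholders, not figures from Kurzweil&#8217;s models.</p>

```python
# Estimate a doubling time by least-squares fitting log2(value) vs. year.
# A clean exponential trend appears as a straight line on a log plot.
import math

def doubling_time_years(years, values):
    """Slope of log2(value) against year; its reciprocal is years per doubling."""
    logs = [math.log2(v) for v in values]
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(logs) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, logs))
             / sum((x - mean_x) ** 2 for x in years))
    return 1.0 / slope

# Hypothetical price-performance data that doubles every year:
years = [2000, 2001, 2002, 2003, 2004]
values = [1.0, 2.0, 4.0, 8.0, 16.0]
print(f"doubling time: {doubling_time_years(years, values):.2f} years")
```

<p>Real data series are noisier, but the same log-linear regression is the standard way to test whether a trend is exponential and to measure its rate.</p>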
<p><b><i>Question 10:&nbsp; The Government has traditionally played a pivotal role in developing new technologies. Is the U.S. Government doing enough to support the nascent nanotechnology or the AI industries? Do these industries require Government support at this point?</i></b></p>
<p>These industries will both be propelled forward by the enormous economic incentive. Nanotechnology will be able to create almost any physical product we need at very low cost. These devices will be quite powerful because they will have electronics and communications embedded throughout. So there is tremendous economic incentive to develop nanotechnology, and the same is true of artificial intelligence. Basic research has an important role to play&#151;the Internet, for instance, came out of the ARPANET. The new worldwide mesh concept&#151;of having every device not simply connected to the net but actually becoming a node on the net, sending and receiving both its own and other people&#8217;s messages&#151;arose out of a Department of Defense concept. It is now being adopted by civilian, commercial corporations. DARPA is actually playing a forward-looking role in such technologies as speech recognition and other AI fields.</p>
<p>In terms of national competitiveness, the key issue is that we are not graduating enough scientists and engineers. The numbers of individuals receiving advanced technical degrees are growing dramatically in China, Japan, Korea, and India; these figures actually resemble exponential curves. China in particular is greatly outpacing the U.S. in producing scientists and engineers, at both the undergraduate and doctoral level, in every scientific field. Although this is a real concern, there is now one integrated world economy, so we shouldn&#8217;t see this problem as simply the U.S. vs. China. I am glad to see China and India economically engaged, and this isn&#8217;t a zero-sum game&#151;Chinese engineers are creating value. But to the extent that we care about issues such as national competitiveness, this is a concern. In the end, however, this is about what fields teenagers choose to enter. </p>
<p>The U.S. does lead in the application of these technologies. I speak at many conferences each year, including music conferences, graphic arts conferences, library conferences, and so on. Yet every conference I attend reads like a computer conference, because these fields are so heavily engaged in computer technology. The level of computer technology used in such a great diversity of fields is quite impressive.</p>
<p><b><i>Question 11:&nbsp; How do you envision the world in 2015?&nbsp; What economic and technological predictions would you make for that year?</i></b></p>
<p>By 2015, computers will be largely invisible, and will be very small. We will be dealing with a mesh of computing and communications that will be embedded in the environment and in our clothing. People in 2005 face a dilemma: on the one hand, they want large, high-resolution displays, which they can obtain by buying expensive 72” flat-panel plasma monitors; on the other, they want portable devices, which have limited display capabilities. By 2015, we will have images written directly onto our retinas. This will allow for a very high-resolution display that encompasses the entire visual field of view yet is physically tiny. These devices exist in 2005, and are used in high-performance applications, such as putting a soldier or a surgeon into a virtual reality environment. So in 2015, if we want a large, high-resolution computer image, it will just appear virtually in the air. We will have augmented reality, including pop-up displays explaining what is happening in the real world. We will be able to go into full-immersion, visual-auditory virtual reality environments.</p>
<p>We will have useable language technologies. These are beginning to emerge, and by 2015 they will be quite effective. In this visual field of view, we will have virtual personalities with which you can interact. Computers will have virtual assistants with sufficient command of speech recognition that you can discuss subjects with them. Search engines won&#8217;t wait to be asked; they will track your conversation and attempt to anticipate your needs and help you with routine transactions. These virtual assistants won&#8217;t be at the human level; that won&#8217;t happen until we have strong AI. But they will be useful, and many transactions will be mediated by these assistants. Computing will be very powerful, and it will be a mesh of computing. Individuals who need the power of a million computers for 25 milliseconds will be able to obtain that as needed.</p>
<p>By 2015, we will have real traction with nanotechnology. I believe that we will be well on the way to overcoming major diseases, such as cancer, heart disease, and diabetes, through the biotechnology revolution that we talked about above. We will also make progress in learning how to stop and even reverse the aging process.</p>
<p><i>This interview was conducted by Sander Olson. The opinions expressed do not necessarily represent those of CRN. </i><i>Reprinted with permission.</i></p>
]]></content:encoded>
			<wfw:commentRss>https://www.writingsbyraykurzweil.com/sander-olson-interviews-ray-kurzweil/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Biocosm: Lecture at Hayden Planetarium</title>
		<link>https://www.writingsbyraykurzweil.com/biocosm-the-new-scientific-theory-of-evolution-intelligent-life-is-the-architect-of-the-universe-2</link>
		<comments>https://www.writingsbyraykurzweil.com/biocosm-the-new-scientific-theory-of-evolution-intelligent-life-is-the-architect-of-the-universe-2#respond</comments>
		<pubDate>Thu, 09 Feb 2006 11:24:45 +0000</pubDate>
								<dc:creator>James N. Gardner</dc:creator>
		
		
				<category><![CDATA[essays]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[Why is the universe life-friendly? Columbia physicist Brian Greene says it's the deepest question in all of science. Cosmologist Paul Davies agrees, calling it the biggest of the Big Questions.]]></description>
			<content:encoded><![CDATA[<p><i>This is a transcript of a lecture originally delivered at Hayden Planetarium, as part of the &quot;Distinguished Authors in Astronomy&quot; lecture series. Reprinted on KurzweilAI.net February 10, 2006.</i></p>
<p>It is, in the view of Columbia physicist Brian Greene, the deepest question in all of science. Renowned cosmologist Paul Davies agrees, calling it the biggest of the Big Questions.</p>
<p> And just what is this momentous question?</p>
<p> Not the mystery of life&#146;s origin, though the profundity of that particular puzzle prompted Charles Darwin to remark that it was probably forever beyond the pale of human comprehension. A dog, Darwin commented famously, might as easily contemplate the mind of Newton.<span id="more-80909"></span></p>
<p> Not the inscrutable manner in which consciousness emerges from the interaction and interconnection of neurons in the human skull, though a cascade of Nobel prizes will undoubtedly reward the teams of neuroscientists who achieve progress in understanding this phenomenon.</p>
<p> And not even the future course of biological and cultural evolution on planet Earth, though the great Darwinian river is surely carving a course that today&#146;s most visionary evolutionary theorist will have difficulty even imagining.</p>
<p> No, the question is more profound, more fundamental, less tractable than any of these. It is this&#151;why is the universe life-friendly?</p>
<p> Life-friendly, you might ask incredulously? The universe is <i>life-friendly</i>? The heck it is!</p>
<p> We have been taught since childhood that the universe is a horrifyingly hostile place. Violent black holes, planets and moons searing with unbearable heat or deep-frozen at temperatures that make Antarctica look tropical, and the vastness of interstellar space dooming us to perpetual physical isolation from our nearest starry neighbors&#151;this is the depressing picture of the cosmos beyond Earth that dominates the popular imagination.</p>
<p> This vision is profoundly wrong at a fundamental level. As scientists are now beginning to realize to their astonishment, the truly amazing thing about our universe is how strangely and improbably life-friendly or anthropic it is. As Cambridge evolutionary biologist Simon Conway Morris puts it in his new book <i>Life&#146;s Solution</i>, &#147;On a cosmic scale, it is now widely appreciated that even trivial differences in the starting conditions [of the cosmos] would lead to an unrecognizable and uninhabitable universe.&#148;</p>
<p> Simply put, if the Big Bang had detonated with slightly greater force, the cosmos would be essentially empty by now. If the primordial explosion had propelled the initial payload of cosmic raw materials outward with slightly lesser force, the universe would long ago have recollapsed in a Big Crunch. In neither case would human beings or other life forms have had time to evolve.</p>
<p> As Stephen Hawking asks, &#147;Why is the universe so close to the dividing line between collapsing again and expanding indefinitely? In order to be as close as we are now, the rate of expansion early on had to be chosen fantastically accurately.&#148; </p>
<p> It is not only the rate of cosmic expansion that appears to have been selected, with phenomenal precision, in order to render our universe fit for carbon-based life and the emergence of intelligence. A multitude of other factors are fine-tuned with fantastic exactitude to a degree that renders the cosmos almost spookily bio-friendly. Some of the universe&#146;s life-friendly attributes include the odd proclivity of stellar nucleosynthesis&#151;the process by which simple elements like hydrogen and helium are transmuted into heavier elements in the hearts of giant stars&#151;to yield copious quantities of carbon, the chemical epicenter of life as we know it.</p>
<p> As British astronomer Fred Hoyle pointed out, in order for carbon to exist in the abundant quantities that we observe throughout the cosmos, the mechanism of stellar nucleosynthesis must be exquisitely fine-tuned in a very special way.</p>
<p> Yet another bio-friendly feature of the cosmos is the physical dimensionality of our universe: why are there just three extended dimensions of space rather than one or two, or even the ten spatial dimensions contemplated by M-theory? As has been known for more than a century, in any other dimensional setup, stable planetary orbits would be impossible and life would not have time to get started before planets skittered off into deep space or plunged into their suns.</p>
<p> For centuries, it seemed that the dimensionality of the universe&#151;three dimensions of space plus one dimension of time&#151;was a matter of axiomatic truth. Rather like the propositions of geometry. In fact, precisely like the propositions of geometry. That was before the birth of superstring theory, and its successor, M-theory. I am going to get into M-theory more deeply in a moment but for now I want to highlight its insistence on the fact that there are, in fact, <i>ten</i> dimensions of space and one dimension of time. The mystery is why only three of the spatial dimensions got inflated into cosmic proportions by the Big Bang while the remaining seven stayed inconceivably minuscule. If anything else had happened&#151;if only two spatial dimensions had been inflated or if four had been inflated&#151;then the universe would not have been set up to allow the emergence of life and mind as we know them.</p>
<p> Collectively, this stunning set of coincidences render the universe eerily fit for life and intelligence. And the coincidences are built into the fundamental fabric of our reality. As British Astronomer Royal Sir Martin Rees says, &#147;There are deep connections between stars and atoms, between the cosmos and the microworld . . . . Our emergence and survival depend on very special &#145;tuning&#146; of the cosmos.&#148; Or, as the eminent Princeton physicist John Wheeler put it, &#147;It is not only that man is adapted to the universe. The universe is adapted to man. Imagine a universe in which one or another of the fundamental dimensionless constants of physics is altered by a few percent one way or the other? Man could never come into being in such a universe.&#148;</p>
<p> Scientists have been aware of this set of puzzles for decades and have given it a name&#151;the anthropic cosmological principle&#151;but there is a new urgency to the quest for a plausible explanation because of two very recent discoveries&#151;the first at nature&#146;s largest scale and the second at its tiniest.</p>
<p> The first was the discovery of dark energy, which resulted from the observations of supernovae at extreme distances. Contrary to all expectations, the evidence showed that the expansion of the universe was speeding up, not slowing down. No one knows what is causing this phenomenon, although speculative explanations like leakage of gravity into extra unseen dimensions are beginning to show up in the scientific literature.</p>
<p> But for our purposes, what is particularly puzzling is why the strength of dark energy&#151;which the new Wilkinson Microwave Anisotropy Probe has revealed to be the predominant constituent of our cosmos&#151;is so vanishingly small, yet not quite zero. If it were even a tad stronger, you see, the universe would have been emptied long ago, scrubbed clean of stars and galaxies well before life and intelligence could evolve.</p>
<p> The second discovery occurred in the realm of M-theory, whose previous incarnation was known as superstring theory. Those of you who have read Brian Greene&#146;s terrific book <i>The Elegant Universe</i> or watched the Nova series based on it will know that M-theory posits that subatomic particles like quarks, electrons and neutrinos are really just different modes of vibration of tiny one-dimensional strings of energy. But what is truly strange about M-theory is that it allows a vast landscape of possible vibration modes of superstrings, only a tiny fraction of which correspond to anything like the sub-atomic particle world we observe and that is described by what is known as the Standard Model of particle physics.</p>
<p> Just how big is this landscape of possible alternative models of particle physics allowed by M-theory? According to Stanford physicist and superstring pioneer Leonard Susskind, the mathematical landscape is horrifyingly gigantic, permitting 10<sup>500</sup> different and distinct environments, none of which appears to be mathematically favored, let alone foreordained by the theory. And in virtually none of those other mathematically permissible environments would matter and energy have possessed the qualities that are necessary for stars, galaxies and carbon-based living creatures to have emerged from the primordial chaos.</p>
<p> This is, as Susskind says, an intellectual cataclysm of the first magnitude because it seems to deprive our most promising new theory of fundamental physics&#151;M-theory&#151;of the power to uniquely predict the emergence of anything remotely resembling our universe. As Susskind puts it, the picture of the universe that is emerging from the deep mathematical recesses of M-theory is not an &#147;elegant universe&#148; at all! It&#146;s a Rube Goldberg device, cobbled together by some unknown process in a supremely improbable manner that just happens to render the whole ensemble miraculously fit for life. In the words of University of California theoretical physicist Steve Giddings, &#147;No longer can we follow the dream of discovering the unique equations that predict everything we see, and writing them on a single page.&#148; Or a tee-shirt! &#147;Predicting the constants of nature becomes a messy environmental problem. It has the complications of biology.&#148; Note the key word Giddings uses&#151;&#147;biology&#148;&#151;because we will be coming back to it shortly.</p>
<p> This really is, as Brian Greene says, the deepest problem in all of science. It really is, as Paul Davies says, the biggest of the Big Questions: why is the universe life-friendly?</p>
<p> If we put to one side theological approaches to this ultimate issue, what rational pathways forward are on offer from the scientific community? I suggest that three basic approaches are available. Two are familiar while the third is radically novel.</p>
<p> The first approach is to continue searching patiently for a unique final theory&#151;something that you really could write on your tee-shirt like E = mc<sup>2</sup>&#151;which might yet, against the odds, emerge from M-theory or one of its competitors (like loop quantum gravity) aspiring to the status of a so-called &#147;theory of everything.&#148; This is the fond hope of virtually every professional theoretical physicist, including those who have been driven to desperation by the horrendously messy and complex landscape of theoretically possible M-theory-allowed universes that distresses Susskind and other superstring theorists. Perhaps the laws and constants of nature&#151;an ensemble the late New York Academy of Sciences president and physicist Heinz Pagels dubbed the cosmic code&#151;will, in the end, turn out to be uniquely specified by mathematics and thus subject to no conceivable variation. Perhaps the ultimate equations will someday slide out of the mind of a new colossus of physics as slickly and beautifully as E = mc<sup>2</sup> emerged from Einstein&#146;s brain. Perhaps, but that appears to be an increasingly unlikely prospect.</p>
<p> A second approach, born of desperation on the part of Susskind and others, is to overlay a refinement of Big Bang inflation theory called eternal chaotic inflation with an explanatory approach, traditionally reviled by most scientists, known as the weak anthropic principle. The weak anthropic principle merely states, in tautological fashion, that since human observers inhabit this particular universe, it must perforce be life-friendly or it would not contain any observers resembling ourselves. Eternal chaotic inflation, invented by Russian-born physicist Andrei Linde, asserts that instead of just one Big Bang there are, always have been, and always will be zillions of Big Bangs going off in inaccessible regions all the time. These Big Bangs constantly create new universes, and the whole ensemble constitutes a multiverse.</p>
<p> Now here&#146;s what happens when these two ideas&#151;eternal chaotic inflation and the weak anthropic principle&#151;are joined together. In each Big Bang, the laws, constants and the physical dimensionality of nature come out differently. In some, dark energy is stronger. In some, dark energy is weaker. In some, gravity is stronger. In some, gravity is weaker. This happens, according to M-theory-based cosmology, because the 10-dimensional physical shapes in which superstrings vibrate&#151;known as Calabi-Yau shapes&#151;evolve randomly and chaotically at the moment of each new Big Bang. The laws and constants of nature are constantly reshuffled by this process, like a cosmic deck of cards.</p>
<p> And here&#146;s the crucial part. Once in a blue moon, this random process of eternal chaotic inflation will yield a winning hand, as judged from the perspective of whether a particular new universe is life-friendly. That outcome will be pure chance&#151;one lucky roll of the dice in an unimaginably vast cosmic crap shoot with 10<sup>500</sup> unfavorable outcomes for every winning turn.</p>
<p> Our universe was a big winner, of course, in the cosmic lottery. Our cosmos was dealt a royal flush. Here is how the eminent Nobel laureate Steven Weinberg explained this scenario in a <i>New York Review of Books</i> essay a couple of years ago: &#147;The expanding cloud of billions of galaxies that we call the big bang may be just one fragment of a much larger universe in which big bangs go off all the time, each one with different values for the fundamental constants.&#148; It is no more a mystery that our particular branch of the multiverse exhibits life-friendly characteristics, according to Weinberg, than that life evolved on the hospitable Earth &#147;rather than some horrid place, like Mercury or Pluto.&#148;</p>
<p> If you find this scenario unsatisfactory&#151;the weak anthropic principle superimposed on Andrei Linde&#146;s theory of eternal chaotic inflation&#151;I can assure you that you are not alone. To most scientists, offering the tautological explanation that since human observers inhabit this particular universe, it must necessarily be life-friendly or else it would not contain any observers resembling ourselves is anathema. It just sounds like giving up. </p>
<p> In my view, there are two primary problems with the Weinberg/Susskind approach. First, universes spawned by Big Bangs other than our own are inaccessible from our own universe, at least with the experimental techniques currently available to scientists. So the approach appears to be untestable, perhaps untestable in principle. And testability is the hallmark of genuine science, distinguishing it from fields of inquiry like metaphysics and theology.</p>
<p> Second, the Weinberg/Susskind approach extravagantly violates the mediocrity principle. The mediocrity principle, a mainstay of scientific theorizing since Copernicus, is a statistically based rule of thumb holding that, absent contrary evidence, a particular sample (Earth, for instance, or our particular universe) should be assumed to be a typical example of the ensemble of which it is a part. Instead of honoring that principle, the Weinberg/Susskind approach simply takes refuge in a brute, unfathomable mystery&#151;the conjectured lucky roll of the dice in a crap game of eternal chaotic inflation&#151;and declines to probe seriously into the possibility of a naturalistic cosmic evolutionary process that has the capacity to yield a life-friendly set of physical laws and constants on a nonrandom basis. It is as if Charles Darwin, contemplating the famous tangled bank (the arresting visual image with which he concludes <i>The Origin of Species</i>), had confessed not a magnificent obsession with understanding the mysterious natural processes that had yielded &#147;endless forms most beautiful and most wonderful,&#148; but rather a smug satisfaction that <i>of course</i> the earthly biosphere must have somehow evolved in a just-so manner mysteriously friendly to humans and other currently living species, or else Darwin and other humans would not be around to contemplate it!</p>
<p> Indeed, the situation that confronts cosmologists today is eerily reminiscent of that which faced biologists before Charles Darwin propounded his revolutionary theory of evolution. Darwin confronted the seemingly miraculous phenomenon of a fine-tuned natural order in which every creature and plant appeared to occupy a unique and well-designed niche. Refusing to surrender to the brute mystery posed by the appearance of nature&#146;s design, Darwin masterfully deployed the art of metaphor to elucidate a radical hypothesis&#151;the origin of species through natural selection&#151;that explained the apparent miracle as a natural phenomenon.</p>
<p> The metaphor furnished by the familiar process of artificial selection was Darwin&#146;s crucial stepping stone. Indeed, the practice of <i>artificial</i> selection through plant and animal breeding was the primary intellectual model that guided Darwin in his quest to solve the mystery of the origin of species and to demonstrate in principle the plausibility of his theory that variation and natural selection were the prime movers responsible for the phenomenon of speciation. So, too, today a few venturesome cosmologists have begun to use the same poetic tool utilized by Darwin&#151;the art of metaphorical thinking&#151;to develop novel intellectual models that might offer a logical explanation for what appears to be an unfathomable mystery: the apparent fine-tuning of the cosmos.</p>
<p> The cosmological metaphor chosen by these iconoclastic theorists is life itself. What if life, they ask in the spirit of the great Belgian biologist and Nobel laureate Christian de Duve, were not a cosmic accident but the essential reality at the very heart of the elegant machinery of the universe? What if Darwin&#146;s principle of natural selection were merely a tiny fractal embodiment of a universal life-giving principle that drives the evolution of stars, galaxies, and the cosmos itself?</p>
<p> This, as you may have guessed, is the headline summarizing the third (and radically novel) approach to answering the biggest of the Big Questions: why is the universe life-friendly? It is the approach outlined at length in my new book <i>BIOCOSM</i>.</p>
<p> Before I get into this third approach in more detail, I want to say something upfront about scientific speculation. The approach I am about to outline for you is intentionally and forthrightly speculative. Following the example of Darwin, I have attempted to crudely frame a radically new explanatory paradigm well before all of the required building materials and construction tools are at hand. Darwin had not the slightest clue, for instance, that DNA is the molecular device used by all life-forms on Earth to accomplish the feat of what he called &#147;inheritance.&#148; Indeed, as cell biologist Kenneth R. Miller noted in <i>Finding Darwin&#146;s God</i>, &#147;Charles Darwin worked in almost total ignorance of the fields we now call genetics, cell biology, molecular biology, and biochemistry.&#148; Nonetheless, Darwin managed to put forward a plausible theoretical framework that succeeded magnificently despite the fact that it was utterly dependent on hypothesized but completely unknown mechanisms of genetic transmission.</p>
<p> As Darwin&#146;s example shows, plausible and deliberate speculation plays an essential role in the advancement of science. Speculation is the means by which new scientific paradigms are initially constructed, to be either abandoned later as wrong-headed detours or vindicated as the seeds of scientific revolutions.</p>
<p> Another important lesson drawn from Darwin&#146;s experience is important to note at the outset. Answering the question of why the most eminent geologists and naturalists had, until shortly before publication of <i>The Origin of Species</i>, disbelieved in the mutability of species, Darwin responded that this false conclusion was &#147;almost inevitable as long as the history of the world was thought to be of short duration.&#148; It was geologist Charles Lyell&#146;s speculations on the immense age of Earth that provided the essential conceptual framework for Darwin&#146;s new theory. Lyell&#146;s vastly expanded stretch of geological time provided an ample temporal arena in which the forces of natural selection could sculpt and reshape the species of Earth and achieve nearly limitless variation.</p>
<p> The central point is that collateral advances in sciences seemingly far removed from cosmology can help dissipate the intellectual limitations imposed by common sense and na&iuml;ve human intuition. And, in an uncanny reprise of the Lyell/Darwin intellectual synergy, it is a realization of the vastness of time and history that gives rise to the crucial insight. Only in this instance, the vastness of which I speak is the vastness of future time and future history. In particular, sharp attention must be paid to the key conclusion of Princeton physicist John Wheeler: most of the time available for life and intelligence to achieve their ultimate capabilities lies in the distant cosmic future, not in the cosmic past. As cosmologist Frank Tipler bluntly stated, &#147;Almost all of space and time lies in the future. By focusing attention only on the past and present, science has ignored almost all of reality. Since the domain of scientific study is the whole of reality, it is about time science decided to study the future evolution of the universe.&#148;</p>
<p> That is exactly what I have attempted to do in <i>BIOCOSM</i> in order to explore, in a tentative way, a possible third pathway to an answer to the biggest of the Big Questions. I call that third pathway the Selfish Biocosm hypothesis.</p>
<p> Originally presented in peer-reviewed scientific papers published in <i>Complexity</i>, <i>Acta Astronautica</i>, and the <i>Journal of the British Interplanetary Society</i>, my Selfish Biocosm hypothesis suggests that in attempting to explain the linkage between life, intelligence and the anthropic qualities of the cosmos, most mainstream scientists have, in essence, been peering through the wrong end of the telescope. The hypothesis asserts that life and intelligence are, in fact, the primary cosmological phenomena and that everything else&#151;the constants of nature, the dimensionality of the universe, the origin of carbon and other elements in the hearts of giant supernovas, the pathway traced by biological evolution&#151;is secondary and derivative. In the words of Martin Rees, my approach is based on the proposition that &#147;what we call the fundamental constants&#151;the numbers that matter to physicists&#151;may be <i>secondary consequences</i> of the final theory, rather than direct manifestations of its deepest and most fundamental level.&#148;</p>
<p> I began developing the Selfish Biocosm hypothesis as an attempt to supply two essential elements missing from a novel model of cosmological evolution put forward by astrophysicist Lee Smolin. Smolin had come up with the intriguing suggestion that black holes are gateways to new &#147;baby universes&#148; and that a kind of Darwinian population dynamic rewards those universes most adept at producing black holes with the greatest number of progeny. Proliferating populations of baby universes emerging from the loins (metaphorically speaking) of black hole-rich &#147;mother universes&#148; thus come to dominate the total population of the &#147;multiverse&#148;&#151;a theoretical ensemble of all mother and baby universes. Black hole-prone universes also happen to coincidentally exhibit anthropic qualities, according to Smolin, thus accounting for the bio-friendly nature of the &#147;average&#148; cosmos in the ensemble, more or less as an incidental side-effect.</p>
<p> This was a thrilling conjecture because for the first time it posited a cosmic evolutionary process endowed with what economists call a utility function (i.e., a value that was maximized by the hypothesized evolutionary process, which in the case of Smolin&#146;s conjecture was black hole maximization).</p>
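<p>The population dynamic Smolin conjectured can be illustrated with a toy simulation. This is only a sketch of the selection logic, not Smolin&#146;s model: the notion of a single &#147;fecundity&#148; number and every numerical choice below are invented purely for illustration.</p>

```python
import random

# Toy illustration of Smolin-style cosmological natural selection.
# Each "universe" is reduced to one number, its fecundity: a stand-in
# for how many black holes (hence baby universes) it produces.
# Offspring inherit fecundity with a small random mutation.
# All numerical choices here are invented purely for illustration.

random.seed(0)
population = [random.uniform(0.5, 2.0) for _ in range(100)]

for generation in range(20):
    babies = []
    for fecundity in population:
        for _ in range(int(fecundity)):  # more black holes -> more offspring
            babies.append(max(0.1, fecundity + random.gauss(0.0, 0.05)))
    if not babies:
        break  # the ensemble went extinct (does not occur with this seed)
    # keep only a bounded random sample of the multiverse
    population = random.sample(babies, min(len(babies), 100))

mean_fecundity = sum(population) / len(population)
# universes too feeble to spawn black holes (fecundity < 1) leave no
# descendants, so the surviving ensemble skews toward the fecund
```

<p>Because low-fecundity universes leave no progeny, the mean fecundity of the sample rises generation by generation, which is the maximization of the &#147;utility function&#148; described above.</p>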
<p> However, Smolin&#146;s approach was seriously flawed. As the computer genius John von Neumann demonstrated in a famous 1948 Caltech lecture entitled &#147;On the General and Logical Theory of Automata,&#148; any self-reproducing object (mouse, bacterium, human or baby universe) must, as a matter of inexorable logic, possess four essential elements:</p>
<p>1. A <i>blueprint</i>, providing the plan for construction of offspring;</p>
<p>2. A <i>factory</i>, to carry out the construction;</p>
<p>3. A <i>controller</i>, to ensure that the factory follows the plan; and</p>
<p>4. A <i>duplicating machine</i>, to transmit a copy of the blueprint to the offspring.</p>
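<p>Von Neumann&#146;s four elements can be sketched in a few lines of illustrative code. The class and field names below are invented for clarity and are not part of von Neumann&#146;s formalism; the &#147;blueprint&#148; here is just a dictionary standing in for a set of physical laws and constants.</p>

```python
from dataclasses import dataclass

# Minimal sketch of von Neumann's four elements of self-reproduction.
@dataclass
class Replicator:
    blueprint: dict  # 1. the blueprint: the plan for constructing offspring

    def factory(self, plan):
        # 2. the factory: carries out the construction
        return Replicator(blueprint=plan)

    def controller(self, plan):
        # 3. the controller: verifies the factory is following the plan
        return plan == self.blueprint

    def duplicating_machine(self):
        # 4. the duplicating machine: hands a copy of the blueprint to the offspring
        return dict(self.blueprint)

    def reproduce(self):
        copy = self.duplicating_machine()
        if not self.controller(copy):
            raise ValueError("construction deviated from the blueprint")
        return self.factory(copy)

parent = Replicator(blueprint={"physical constants": [0.007, 1 / 137]})
child = parent.reproduce()
assert child.blueprint == parent.blueprint      # a faithful copy...
assert child.blueprint is not parent.blueprint  # ...yet a distinct object
```

<p>The point of the sketch is logical, not mechanical: remove any one of the four parts and the reproduction cycle fails, which is exactly the gap identified in Smolin&#146;s hypothesis below.</p>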
<p> In the case of Smolin&#146;s hypothesis, one could logically equate the collection of physical laws and constants that prevail in our universe with a von Neumann blueprint and the universe at large with a kind of enormous von Neumann factory. But what could possibly serve as a von Neumann controller or a von Neumann duplicating machine? My goal was to rescue Smolin&#146;s basic innovation&#151;a cosmic evolutionary model that incorporated a discernible utility function&#151;by proposing scientifically plausible candidates for the two missing von Neumann elements.</p>
<p> The hypothesis I developed was based on a set of conjectures put forward by Martin Rees, John Wheeler, Freeman Dyson, John Barrow, Frank Tipler, and Ray Kurzweil. Their futuristic visions suggested collectively that the ongoing process of biological and technological evolution was sufficiently robust, powerful, and open-ended that, in the very distant future, a cosmologically extended biosphere could conceivably exert a global influence on the physical state of the entire cosmos. Think of this idea as the Gaia principle extended universe-wide.</p>
<p> A synthesis of these insights led me directly to the central claim of the Selfish Biocosm hypothesis: that the ongoing process of biological and technological emergence, governed by still largely unknown laws of complexity, could function as a von Neumann controller and that a cosmologically extended biosphere could serve as a von Neumann duplicating machine in a conjectured process of cosmological replication.</p>
<p> I went on to speculate that the means by which the hypothesized cosmological replication process could occur was through the fabrication of baby universes by highly evolved intelligent life forms. These hypothesized baby universes would themselves be endowed with a cosmic code&#151;an ensemble of physical laws and constants&#151;that would be life-friendly so as to enable life and ever more competent intelligence to emerge and eventually to repeat the cosmic reproduction cycle. Under this scenario, the physical laws and constants serve a cosmic function precisely analogous to that of DNA in earthly creatures: they furnish a recipe for the birth and evolution of intelligent life and a blueprint, which provides the plan for construction of offspring.</p>
<p> I should add that if the fabrication of baby universes, which is the key step in the hypothesized cosmic reproductive cycle that I just outlined, sounds to you like outrageous science fiction&#151;an &#147;X-file too far,&#148; in the words of one of my critics&#151;you should be aware that the topic has begun to be rigorously explored by such eminent physicists as Andrei Linde of Stanford, Alan Guth of MIT (who is the father of inflation theory), Martin Rees of Cambridge, eminent astronomer Edward Harrison, and physicists Lawrence Krauss and Glenn Starkman.</p>
<p> This central claim of the Selfish Biocosm hypothesis offered a radically new and quite parsimonious explanation for the apparent mystery of an anthropic or bio-friendly universe. If highly evolved intelligent life is the von Neumann duplicating machine that the cosmos employs to reproduce itself&#151;if intelligent life is, in effect, the reproductive organ of the universe&#151;then it is entirely logical and predictable that the laws and constants of nature should be rigged in favor of the emergence of life and the evolution of ever more capable intelligence. Indeed, the existence of such propensity is a falsifiable prediction of the hypothesis.</p>
<p> Now, at this point you are probably saying to yourself, &#147;Wow, with a theory that crazy and radical, this Gardner fellow must have been shunned by the scientific establishment.&#148; And indeed some mainstream scientists have commented that the ideas advanced in my book <i>BIOCOSM</i> are impermissibly speculative or impossible to verify. A few have hurled what scientists view as the ultimate epithet&#151;that my theory constitutes metaphysics instead of genuine science.</p>
<p> On the other hand, some of the brightest and most far-sighted scientists have been extremely encouraging. John Barrow and Freeman Dyson have offered favorable comments and reviews. In particular, <i>BIOCOSM</i> has received outspoken endorsements from Sir Martin Rees (the UK Astronomer Royal and winner of the top scientific prize in the world for cosmology) and Paul Davies (the prominent astrophysicist and author and winner of the Templeton Prize).</p>
<p> As I continue to explore this hypothesis in the future, what will be of utmost interest to me and my sympathizers is whether it can generate what scientists call falsifiable implications. Falsifiability or testability of claims, remember, is the hallmark of genuine science, distinguishing it from metaphysics and faith-based belief systems.</p>
<p> I believe that the Selfish Biocosm hypothesis does qualify as a genuine scientific conjecture on this ground. A key implication of the hypothesis is that the process of progression of the cosmos through critical thresholds in its life cycle, while perhaps not strictly inevitable, is relatively robust. One such critical threshold is the emergence of human-level and higher intelligence, which is essential to the scaling up of biological and technological processes to the stage at which those processes could conceivably exert an influence on the global state of the cosmos.</p>
<p> The conventional wisdom among evolutionary theorists, typified by the thinking of the late Stephen Jay Gould, is that the abstract probability of the emergence of anything like human intelligence through the natural process of biological evolution was vanishingly small. According to this viewpoint, the emergence of human-level intelligence was a staggeringly improbable contingent event. A few distinguished contrarians like Simon Conway Morris, Robert Wright, E. O. Wilson, and Christian de Duve take an opposing position, arguing on the basis of the pervasive phenomenon of convergent evolution and other evidence that the appearance of human-level intelligence was highly probable, if not virtually inevitable. The latter position is consistent with the Selfish Biocosm hypothesis while the Gould position is not.</p>
<p> In my book <i>BIOCOSM</i> and in a preceding scientific paper delivered at the International Astronautical Congress, I suggest that the issue of the robustness of the emergence of human-level and higher intelligence is potentially subject to experimental resolution by means of at least three realistic tests: SETI research, artificial life evolution, and the emergence of transhuman computer intelligence predicted by computer science theorist Ray Kurzweil and others. The discovery of extraterrestrial intelligence, the discovery of an ability on the part of artificial life forms that exist and evolve in software environments to acquire autonomy and intelligence, and the emergence of a capacity on the part of advanced self-programming computers to attain and then exceed human levels of intelligence are all falsifiable implications of the Selfish Biocosm hypothesis because they are consistent with the notion that the emergence of ever more competent intelligence is a robust natural phenomenon. These tests don&#146;t, of course, conclusively answer the question of whether the hypothesis correctly describes ultimate reality. But such a level of certainty is not demanded of any scientific hypothesis in order to qualify it as genuine science.</p>
<p> Let me conclude by asking whether the Selfish Biocosm hypothesis promotes or demotes the cosmic role of humanity. Have I introduced a new anthropocentrism into the science of cosmology? If so, then you should be suspicious of my new approach on this basis alone because, as Sigmund Freud pointed out long ago, new scientific paradigms must meet two distinct criteria to be taken seriously: they must reformulate our vision of physical reality in a novel and plausible way and, equally important, they must advance the Copernican project of demoting human beings from the centerpiece of the universe to the products of natural processes.</p>
<p> At first blush, the Selfish Biocosm hypothesis may appear to be hopelessly anthropocentric. Freeman Dyson once famously proclaimed that the seemingly miraculous coincidences exhibited by the physical laws and constants of inanimate nature&#151;factors that render the universe so strangely life-friendly&#151;indicated to him that &#147;the more I examine the universe and study the details of its architecture, the more evidence I find that the universe in some sense knew we were coming.&#148; This strong anthropic perspective may seem uplifting and inspiring, but a careful assessment of the new vision of a bio-friendly universe revealed by the Selfish Biocosm hypothesis yields a far more sobering conclusion.</p>
<p> To regard the pageant of life&#146;s origin and evolution on Earth as a minor subroutine in an inconceivably vast ontogenetic process through which the universe prepares itself for replication is scarcely to place humankind at the epicenter of creation. Far from offering an anthropocentric view of the cosmos, the new perspective relegates humanity and its probable progeny species (biological or mechanical) to the functional equivalents of mitochondria&#151;formerly free-living bacteria whose special talents were harnessed in the distant past when they were ingested and then pressed into service as organelles inside eukaryotic cells.</p>
<p> The essence of the Selfish Biocosm hypothesis is that the universe we inhabit is in the process of becoming pervaded with increasingly intelligent life&#151;but not necessarily human or even human-successor life. Under the theory, the emergence of life and increasingly competent intelligence are not meaningless accidents in a hostile, largely lifeless cosmos but at the very heart of the vast machinery of creation, cosmological evolution, and cosmic replication. However, the theory does not require or even suggest that the life and intelligence that emerge be human or human-successor in nature.</p>
<p> The hypothesis simply asserts that the peculiarly life-friendly laws and constants that prevail in our universe serve a function precisely equivalent to that of DNA in living creatures on Earth, providing a recipe for development and a blueprint for the construction of offspring.</p>
<p> Finally, the hypothesis implies that the capacity for the universe to generate life and to evolve ever more capable intelligence is encoded as a hidden subtext to the basic laws and constants of nature, stitched like the finest embroidery into the very fabric of our universe. A corollary&#151;and a key falsifiable implication of the Selfish Biocosm theory&#151;is that we are likely not alone in the universe but are probably part of a vast, yet undiscovered transterrestrial community of lives and intelligences spread across billions of galaxies and countless parsecs. Under the theory, we share a possible common fate with that hypothesized community&#151;to help shape the future of the universe and transform it from a collection of lifeless atoms into a vast, transcendent mind.</p>
<p> The inescapable implication of the Selfish Biocosm hypothesis is that the immense saga of biological evolution on Earth is one tiny chapter in an ageless tale of the struggle of the creative force of life against the disintegrative acid of entropy, of emergent order against encroaching chaos, and ultimately of the heroic power of mind against the brute intransigence of lifeless matter.</p>
<p> In taking full measure of the seeming miracle of a bio-friendly universe we should obviously be skeptical of wishful thinking and &#147;just-so&#148; stories. But we should not be so dismissive of new approaches that we fail to relish the sense of wonder at the almost miraculous ability of science to fathom mysteries that once seemed impenetrable&#151;a sense perfectly captured by the great British innovator Michael Faraday when he summarily dismissed skepticism about his almost magical ability to summon up the genie of electricity simply by moving a magnet in a coil of wire.</p>
<p> As Faraday said, &#147;Nothing is too wonderful to be true if it be consistent with the laws of nature.&#148;</p>
]]></content:encoded>
			<wfw:commentRss>https://www.writingsbyraykurzweil.com/biocosm-the-new-scientific-theory-of-evolution-intelligent-life-is-the-architect-of-the-universe-2/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Ray Kurzweil Responds to Richard Eckersley</title>
		<link>https://www.writingsbyraykurzweil.com/ray-kurzweil-responds-to-richard-eckersley</link>
		<comments>https://www.writingsbyraykurzweil.com/ray-kurzweil-responds-to-richard-eckersley#respond</comments>
		<pubDate>Fri, 03 Feb 2006 13:45:46 +0000</pubDate>
								<dc:creator>Ray Kurzweil</dc:creator>
		
		
				<category><![CDATA[essays]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA["Eckersley bases his romanticized idea of ancient life on communication and the relationships fostered by communication. But much of modern technology is directed at just this basic human need."]]></description>
			<content:encoded><![CDATA[<p><i>Originally published in </i><a href="http://www.wfs.org/futcontma06.htm" target="_blank">The Futurist</a><i> March-April 2006. Reprinted on KurzweilAI.net February 2, 2006.</i></p>
<p><i>This article is a response to <a href="/" target="_top">Richard Eckersley&#8217;s comments</a> on Kurzweil&#8217;s article, <a href="/" target="_top">Reinventing Humanity</a>. You can also read other responses to Kurzweil&#8217;s article by <a href="/" target="_top">Terry Grossman</a>, <a href="/" target="_top">John Smart</a>, <a href="/" target="_top">J. Storrs Hall</a>, and <a href="/" target="_top">Damien Broderick</a>.</i></p>
<p><i>Click <a href="http://www.singularity.com/KurzweilFuturist.pdf" target="_blank">here</a> to read a PDF of the full feature.</i><span id="more-80908"></span></p>
<p>Richard Eckersley&#146;s idyllic notion of human life hundreds of years ago belies our scientific knowledge of history. Two hundred years ago, there was no understanding of sanitation, so bacterial infections were rampant. There were no antibiotics and no social safety nets, so an infectious disease was a disaster plunging a family into desperation. Thomas Hobbes&#146; characterization in 1651 of human life as &#147;solitary, poor, nasty, brutish, and short&#148; was on the mark. Even ignoring infant mortality, life expectancy was in the 30s only a couple of hundred years ago. Schubert&#146;s and Mozart&#146;s deaths at 31 and 35, respectively, were typical.</p>
<p>Eckersley bases his romanticized idea of ancient life on communication and the relationships fostered by communication. But much of modern technology is directed at just this basic human need. The telephone allowed people to be together even if far apart geographically. The Internet is the quintessential communication technology. Social networks and the panoply of new ways to make connections are creating communities based on genuine common interests rather than the accident of geography. This decentralized electronic communication is also highly democratizing. In a book I wrote in the mid-1980s I predicted the demise of the Soviet Union from the impact of the then emerging communication networks, and that is indeed what happened in the early 1990s. The democracy movement we saw in the 1990s and since is similarly fueled by our unprecedented abilities to stay in touch.</p>
<p>If Eckersley really sticks to his own philosophy, he won&#146;t be around for very long to influence the debate. I suspect, however, that he will take advantage of the life extension&#151;and enhancement&#151;technologies that will emerge in the decades ahead. And I hope that he does so that we can continue this dialogue through this century and beyond.</p>
<p><i>&copy; 2006 Ray Kurzweil. Reprinted with permission.</i></p>
]]></content:encoded>
			<wfw:commentRss>https://www.writingsbyraykurzweil.com/ray-kurzweil-responds-to-richard-eckersley/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Techno-Utopia and Human Values</title>
		<link>https://www.writingsbyraykurzweil.com/techno-utopia-and-human-values</link>
		<comments>https://www.writingsbyraykurzweil.com/techno-utopia-and-human-values#respond</comments>
		<pubDate>Fri, 03 Feb 2006 13:44:34 +0000</pubDate>
								<dc:creator>Richard Eckersley</dc:creator>
		
		
				<category><![CDATA[essays]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[It is our preordained fate, Ray Kurzweil suggests, to advance technologically "until the entire universe is at our fingertips." The question then becomes, preordained by whom or what? Biological evolution has not set this course for us. Is technology itself the planner?
]]></description>
			<content:encoded><![CDATA[<p><i>Originally published in </i><a href="http://www.wfs.org/futcontma06.htm" target="_blank">The Futurist</a><i> March-April 2006. Reprinted on KurzweilAI.net February 3, 2006.</i></p>
<p><i>This article is a response to Ray Kurzweil&#8217;s feature in </i>The Futurist<i>, <a href="/" target="_top">Reinventing Humanity</a>. You can also read other responses to Kurzweil&#8217;s article by <a href="/" target="_top">Terry Grossman</a>, <a href="/" target="_top">John Smart</a>, <a href="/" target="_top">J. Storrs Hall</a>, and <a href="/" target="_top">Damien Broderick</a>. Ray Kurzweil&#8217;s response to this article can be found <a href="/" target="_top">here</a>.</i></p>
<p><i>Click <a href="http://www.singularity.com/KurzweilFuturist.pdf" target="_blank">here</a> to read a PDF of the full feature.</i></p>
<p>I have sometimes asked audiences if they are inspired or excited by the sort of techno-utopian vision represented by the Singularity; almost no one is. In my surveys over the past decade, I found that dwindling minorities of young people (one-fifth to one-quarter) believed in the sort of technical fixes to human problems that Ray Kurzweil champions, while an increasing majority (about three-quarters) believed science and technology are alienating people from each other and from nature.<span id="more-80907"></span></p>
<p>The question I ask is, why? Why pursue this future? I don&#146;t pose this question dismissively, or derogatorily, but out of genuine curiosity and a desire for an open, honest conversation. I&#146;m skeptical of arguments that say pre-technological humans led short, nasty and brutish lives. Yes, life expectancy was lower&#151;mainly because of high rates of infant mortality&#151;but those who survived often lived socially and spiritually rich lives. It doesn&#146;t make evolutionary sense to believe humans lived in misery until we discovered technological progress. Animals in the wild don&#146;t live that way; humans have been, for most of their history, animals in the wild.</p>
<p>The future world that Ray Kurzweil describes bears almost no relationship to human well-being that I am aware of. In essence, human health and happiness come from being connected and engaged, from being suspended in a web of relationships and interests&#151;personal, social and spiritual&#151;that give meaning to our lives. The intimacy and support provided by close personal relationships seem to matter most; isolation exacts the highest price. The need to belong is more important than the need to be rich. Meaning matters more than money and what it buys.</p>
<p>We are left with the matter of destiny: it is our preordained fate, Kurzweil suggests, to advance technologically &#147;until the entire universe is at our fingertips.&#148; The question then becomes, preordained by whom or what? Biological evolution has not set this course for us. Is technology itself the planner? Perhaps it will eventually be, but not yet. Is God the entity doing the ordaining? A lot of religious people would have something to say about that, and are likely to strenuously, and even violently, oppose what the Singularity promises, as I have argued before (<i><a href="http://www.wfs.org/futcontma06.htm" target="_blank">The Futurist</a></i>, November-December 2001).</p>
<p>We are left to conclude that we will do this because it is we who have decided it is our destiny. But we have made no such decision, really, as the observations with which I began this commentary show.</p>
<hr width="50%" align="center"/>
<p><i>On February 2, 2006, Richard Eckersley wrote KurzweilAI.net with this followup:</i></p>
<p>A key issue is this (taken from a 1997 paper of mine in <i>Futures</i>):</p>
<p>&#8230; Young people are not so much against science and technology: they acknowledge their importance in achieving a preferred future, and almost 70% said science and technology offered the best hope for meeting the challenges ahead. But they are astute enough to realise science and technology are tools, and their impacts depend on who controls them and whose interests they serve.</p>
<p>They expect to see new technologies used further to entrench and concentrate wealth, power, and privilege: for example, they were almost twice as likely to believe that governments would use new technologies to watch and regulate people more as to believe that these technologies would empower people and strengthen democracy. They want to see new technologies used to help create closer-knit communities of people living a sustainable lifestyle: for example, they recognised the potential for advances in information and communication technologies to facilitate the creation of overlapping communities&#8212;virtual and real, global and local&#8212;and the possibility of a sustainable way of life through greater use of alternative energy technologies and renewable resources&#8230;.</p>
<p><i>&copy; 2006 Richard Eckersley. Reprinted with permission.</i> </p>
]]></content:encoded>
			<wfw:commentRss>https://www.writingsbyraykurzweil.com/techno-utopia-and-human-values/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Nanofactories, Gang Wars, and &#8220;Feelies&#8221;</title>
		<link>https://www.writingsbyraykurzweil.com/nanofactories-gang-wars-and-feelies</link>
		<comments>https://www.writingsbyraykurzweil.com/nanofactories-gang-wars-and-feelies#respond</comments>
		<pubDate>Fri, 03 Feb 2006 13:43:46 +0000</pubDate>
								<dc:creator>Damien Broderick</dc:creator>
		
		
				<category><![CDATA[essays]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[In 30 years, a new intelligent species might share the planet with us and 
dirt-cheap molecular manufacturing may end poverty and strife. But there exists a risk that a world of lotus-eaters will degenerate into gang wars among those for whom life
retains no discipline or meaning. 
]]></description>
			<content:encoded><![CDATA[<p><i>Originally published in </i><a href="http://www.wfs.org/futcontma06.htm" target="_blank">The Futurist</a><i> March-April 2006. Reprinted on KurzweilAI.net February 3, 2006.</i></p>
<p><i>This article is a response to Ray Kurzweil&#8217;s feature in </i>The Futurist<i>, <a href="/" target="_top">Reinventing Humanity</a>. You can also read other responses to Kurzweil&#8217;s article by <a href="/" target="_top">Terry Grossman</a>, <a href="/" target="_top">John Smart</a>, <a href="/" target="_top">J. Storrs Hall</a>, and <a href="/" target="_top">Richard Eckersley</a>. Ray Kurzweil&#8217;s response to Eckersley&#8217;s comments can be found <a href="/" target="_top">here</a>.</i></p>
<p><i>Click <a href="http://www.singularity.com/KurzweilFuturist.pdf" target="_blank">here</a> to read a PDF of the full feature.</i></p>
<p>A quarter century ago, we&#8217;d have laughed at the prospect of &quot;Dick Tracy&quot; cell-phones with cameras; now they&#8217;re everywhere, and nobody noticed after the first few days. So the jump to the idea of a Singularity is not really extraordinary. But, should we really expect ever more substantial changes to follow the same accelerating, headlong pace?<span id="more-80906"></span></p>
<p>It&#8217;s reasonable to expect affordable computers to be smaller and more powerful: a thousand times improved in a decade, a million times in 20 years, a billion in 30. By then, some machines might have capabilities to rival the human mind. A new intelligent species might share the planet with us.</p>
<p>In addition, developing technologies such as molecular manufacture&#8212;nanotechnology&#8212;will allow the very engines of productivity to be copied cheaply and distributed widely. If that happens, the gap between rich and &quot;poor&quot; might diminish&#8212;but only if we find ways to prevent portable nanofactories from making lethal weapons available to any child or psychopath. We&#8217;ll be able to solve most of the problems that currently vex us&#8212;global warming (to the extent that it&#8217;s caused by humans), water and food shortages, provision of clean, cheap power, and so on.</p>
<p>There is a scary downside that I discussed nearly a decade ago in my book <i>The Spike</i>: Dirt-cheap molecular manufacture may end poverty and strife, but there exists a risk that a world of lotus-eaters will degenerate into gang wars among those for whom life retains no discipline or meaning outside of arbitrary local status and violence. People (young men especially) with full bellies gained effortlessly, but lacking meaning in their lives, often find purpose in ganging up on each other in fits of murderous primate chest-pounding. Making Huxleian soma, or &quot;feelies,&quot; the opiate of the people might help, but that, too, is a sickening prospect.</p>
<p>On the other hand, those strictly unforeseeable and mysterious changes captured in the word &quot;Singularity&quot; are likely to overwhelm and surpass such predictable downsides of any technological utopia or dystopia. The eeriest aspect of accelerating change is that we ourselves, and our children, will be the ones soaking in it. The sooner we start thinking seriously about the prospect, the better prepared we&#8217;ll be.</p>
<p><i>&copy; 2006 Damien Broderick. Reprinted with permission.</i></p>
]]></content:encoded>
			<wfw:commentRss>https://www.writingsbyraykurzweil.com/nanofactories-gang-wars-and-feelies/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Runaway Artificial Intelligence?</title>
		<link>https://www.writingsbyraykurzweil.com/runaway-artificial-intelligence</link>
		<comments>https://www.writingsbyraykurzweil.com/runaway-artificial-intelligence#respond</comments>
		<pubDate>Fri, 03 Feb 2006 13:42:40 +0000</pubDate>
								<dc:creator>J. Storrs Hall</dc:creator>
		
		
				<category><![CDATA[essays]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[Synthetic computer-based artificial intelligence will become available well before nanotechnology makes neuron-level brain scans possible in the 2020s -- it's already a short step to computer systems that make better decisions than corporate managers do, says J. Storrs Hall.]]></description>
			<content:encoded><![CDATA[<p><i>Originally published in </i><a href="http://www.wfs.org/futcontma06.htm" target="_blank">The Futurist</a><i> March-April 2006. Reprinted on KurzweilAI.net February 3, 2006.</i></p>
<p><i>This article is a response to Ray Kurzweil&#8217;s feature in </i>The Futurist<i>, <a href="/" target="_top">Reinventing Humanity</a>. You can also read other responses to Kurzweil&#8217;s article by <a href="/" target="_top">Terry Grossman</a>, <a href="/" target="_top">John Smart</a>, <a href="/" target="_top">Damien Broderick</a>, and <a href="/" target="_top">Richard Eckersley</a>. Ray Kurzweil&#8217;s response to Eckersley&#8217;s comments can be found <a href="/" target="_top">here</a>.</i></p>
<p><i>Click <a href="http://www.singularity.com/KurzweilFuturist.pdf" target="_blank">here</a> to read a PDF of the full feature.</i></p>
<p>Some years ago, I reviewed Kurzweil&#8217;s earlier book, <i>The Age of Spiritual Machines</i>, for the Foresight Nanotech Institute&#8217;s newsletter. Shortly thereafter I met him in person at a Foresight event, and he had read the review. He told me, &quot;Of all the people who reviewed my book, you were the only one who said I was too conservative!&quot;<span id="more-80905"></span></p>
<p><i>The Singularity is Near</i> is very well researched, and I think that in general, Kurzweil&#8217;s predictions are about as good as it&#8217;s possible to get for things that far in advance. I still think he&#8217;s too conservative in one specific area: Synthetic computer-based artificial intelligence will become available well before nanotechnology makes neuron-level brain scans possible in the 2020s.</p>
<p>What&#8217;s happening is that existing technologies like functional MRI are beginning to give us a high-level functional block diagram of the brain&#8217;s processes. At the same time, the hardware capable of running a strong, artificially intelligent computer, by most estimates, is here now, though it&#8217;s still pricey.</p>
<p>Existing AI software techniques can build programs that are experts in any well-defined field. The breakthroughs necessary for such programs to learn for themselves could happen easily in the next decade&#151;one or two decades before Kurzweil predicts.</p>
<p>Kurzweil finesses the issue of runaway AI by proposing a pathway where machine intelligence is patterned after human brains, so that they would have our morals and values built in. Indeed, this would clearly be the wise and prudent course. Unfortunately, it seems all too likely that a shortcut exists without that kind of safeguard. Corporations already use huge computer systems for data mining and decision support that employ sophisticated algorithms no human manager understands. It&#8217;s a very short step to having such a system make better decisions than the managers do, as far as the corporation&#8217;s bottom line is concerned.</p>
<p>The Singularity may mean different things to different people. To me, it is that point where intelligences significantly greater than our own control so many of the essential processes that figure in our lives that mere humans can&#8217;t predict what happens next. This future may be even nearer than Ray Kurzweil has predicted.</p>
<p><i>&copy; 2006 J. Storrs Hall. Reprinted with permission.</i></p>
]]></content:encoded>
			<wfw:commentRss>https://www.writingsbyraykurzweil.com/runaway-artificial-intelligence/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Technology and Human Enhancement</title>
		<link>https://www.writingsbyraykurzweil.com/technology-and-human-enhancement</link>
		<comments>https://www.writingsbyraykurzweil.com/technology-and-human-enhancement#respond</comments>
		<pubDate>Fri, 03 Feb 2006 13:41:23 +0000</pubDate>
								<dc:creator>John Smart</dc:creator>
		
		
				<category><![CDATA[essays]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[Machines are increasingly exceeding us in the performance of more and more tasks, from guiding objects like
missiles or satellites to assembling other machines. They are merging with us ever more intimately and are learning how to reconfigure our biology in new and significantly
faster technological domains.]]></description>
			<content:encoded><![CDATA[<p><i>Originally published in </i><a href="http://www.wfs.org/futcontma06.htm" target="_blank">The Futurist</a><i> March-April 2006. Reprinted on KurzweilAI.net February 3, 2006.</i></p>
<p><i>This article is a response to Ray Kurzweil&#8217;s feature in </i>The Futurist<i>, <a href="/" target="_top">Reinventing Humanity</a>. You can also read other responses to Kurzweil&#8217;s article by <a href="/" target="_top">Terry Grossman</a>, <a href="/" target="_top">J. Storrs Hall</a>, <a href="/" target="_top">Damien Broderick</a>, and <a href="/" target="_top">Richard Eckersley</a>. Ray Kurzweil&#8217;s response to Eckersley&#8217;s comments can be found <a href="/" target="_top">here</a>.</i></p>
<p><i>Click <a href="http://www.singularity.com/KurzweilFuturist.pdf" target="_blank">here</a> to read a PDF of the full feature.</i></p>
<p>I have a few differences of opinion with Kurzweil about the coming Singularity.<span id="more-80904"></span></p>
<p>I think he is being overly optimistic about biotechnology&#8217;s ability to create substantially better biological human beings. While we&#8217;ll certainly learn to push human capacities to their natural limits in coming decades, I see nothing on the horizon that would allow us to exceed those limits. Biology seems far too frail, slow, complex, and well defended (both at the molecular level and with regard to social custom) for that to be plausible within any reasonable time frame. Furthermore, by the time we are able to substantially improve our biology, we probably won&#8217;t want to, as there will be far more interesting and powerful technological environments available to us instead. This points to the importance of understanding the relative accelerations of various technologies (in this case, biological vs. technological).</p>
<p>Kurzweil makes a major contribution to the literature on acceleration studies by clearly explaining technological acceleration curves. These acceleration curves show that the longer we use a technology, the more we get out of it: We use less energy, space, and time, and we get more capacity for less cost. Technological acceleration curves are a little-understood area, but thanks to pioneers like Kurzweil, interest and research in the field are advancing. </p>
<p>The notion that the &#8220;future can&#8217;t be predicted&#8221; is demonstrably false with regard to a wide number of accelerating physical-computational trends, even though we do not yet know specifically how those technologies will be implemented. We can no longer ignore the profound technological changes occurring all around us.</p>
<p>It&#8217;s also time we acknowledged the slowness of human biology compared to our technological progeny. Our machines are increasingly exceeding us in the performance of more and more tasks, from guiding objects like missiles or satellites to assembling other machines. They are merging with us ever more intimately, and are learning how to reconfigure our biology in new and significantly faster technological domains.</p>
<p>Something very interesting is happening, and human beings are selective catalysts, not absolute controllers, of this process. Let us face this openly, and investigate it actively, so that we may guide these developments as wisely as possible.</p>
<p><i>&copy; 2006 John Smart. Reprinted with permission.</i></p>
]]></content:encoded>
			<wfw:commentRss>https://www.writingsbyraykurzweil.com/technology-and-human-enhancement/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Ray Kurzweil&#8217;s Plan for Cheating Death</title>
		<link>https://www.writingsbyraykurzweil.com/ray-kurzweil-s-plan-for-cheating-death</link>
		<comments>https://www.writingsbyraykurzweil.com/ray-kurzweil-s-plan-for-cheating-death#respond</comments>
		<pubDate>Fri, 03 Feb 2006 13:40:22 +0000</pubDate>
								<dc:creator>Terry Grossman</dc:creator>
		
		
				<category><![CDATA[essays]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[A cure for aging may be found in the next fifty years. The trick now is to live long enough to be there when it happens.  In his two new books, Ray Kurzweil has painted a clear picture of the future and provided a blueprint for how to get there.]]></description>
			<content:encoded><![CDATA[<p><i>Originally published in </i><a href="http://www.wfs.org/futcontma06.htm" target="_blank">The Futurist</a><i> March-April 2006. Reprinted on KurzweilAI.net February 3, 2006.</i></p>
<p><i>This article is a response to Ray Kurzweil&#8217;s feature in </i>The Futurist<i>, <a href="/" target="_top">Reinventing Humanity</a>. You can also read other responses to Kurzweil&#8217;s article by <a href="/" target="_top">John Smart</a>, <a href="/" target="_top">J. Storrs Hall</a>, <a href="/" target="_top">Damien Broderick</a>, and <a href="/" target="_top">Richard Eckersley</a>. Ray Kurzweil&#8217;s response to Eckersley&#8217;s comments can be found <a href="/" target="_top">here</a>.</i></p>
<p><i>Click <a href="http://www.singularity.com/KurzweilFuturist.pdf" target="_blank">here</a> to read a PDF of the full feature.</i></p>
<p>I first met Ray Kurzweil in 1999 at a Foresight Institute meeting in Palo Alto. I was there to get some background information on nanotechnology for a new book I was writing. As I stood in the lunch line, a healthy-appearing man in front of me was engaged in animated conversation with a not nearly so healthy-looking second man. Their topic of conversation was vitamins and nutritional supplementation, a topic of great interest to me, a nutritionally oriented M.D.<span id="more-80903"></span></p>
<p>I joined the conversation, and the healthy-looking man introduced himself as Ray Kurzweil. Ray and I continued our dialog via email after the conference ended, and a few months later, he flew from his home in Boston to Frontier Medical Institute, my longevity clinic in Denver, for a comprehensive longevity medical evaluation. In Denver we performed a comprehensive battery of tests designed to uncover any health risks he might still have so that together we could better optimize Ray&#8217;s already very sophisticated program for health and longevity.</p>
<p>From the beginning, it was obvious that Ray would be a unique patient. I have many engineer patients in my practice (and Ray is an engineer by training), so I am not surprised when a patient comes to see me with a notebook of spreadsheets detailing various data extracted from their daily lives: blood pressure, weight, cholesterol, blood sugar levels, amount of exercise, etc., carefully tabulated for several years. But all previous data collections I had seen, even those organized into Excel and meticulously graphed, paled in comparison to Ray&#8217;s. His data collection was so thorough and meticulous that he could tell me what he ate for lunch on June 23, 1989 (as well as on every other day for several years before that date or since). And not only what he ate, but the number of grams of each serving and calories consumed, as well as the number of calories he burned that day through exercise&#8212;every day for decades!</p>
<p>As a result, it came as less of a surprise for me to learn that Ray was taking over 200 supplement pills a day. Ray&#8217;s approach had been to accurately assess his personal health risks and then quite simply to &#8220;reprogram his biochemistry.&#8221; Ongoing testing indicates that he is doing a remarkable job, as measurement of his biological age in my clinic indicates that he is now almost two decades younger than his chronological age, and all of his health risks appear under optimal control.</p>
<p>Ray was already working on his new book, <i>The Singularity Is Near</i>, at that time, and I had just completed my first book, <i>The Baby Boomers&#8217; Guide to Living Forever</i>. It was natural that our email dialog moved into discussion of the prospects for truly radical life extension for people of all ages, including older boomers like ourselves. As our emails multiplied into the many thousands, we decided to organize the information and see if we had the makings of a new book that we would coauthor. I created a preliminary table of contents, Ray organized the information from our emails, and another 10,000 emails or so later, our joint book, <i>Fantastic Voyage: Live Long Enough to Live Forever</i>, was written in the midst of Ray&#8217;s writing of <i>The Singularity Is Near</i>.</p>
<p>Ray felt that he was writing these books together as a unit and that there was synergy between them. <i>The Singularity Is Near</i> details Ray&#8217;s vision of the astounding possibilities of the world of the near future as the Singularity unfolds sometime within the next few decades. In <i>Fantastic Voyage</i> we provide readers with the information they need to live long enough and remain healthy enough to fully experience the wonders of life in the post-singularity world. In writing these two books, Ray has painted a clear picture of the future and provided a blueprint for how to get there.</p>
<p><i>&copy; 2006 Terry Grossman. Reprinted with permission.</i></p>
]]></content:encoded>
			<wfw:commentRss>https://www.writingsbyraykurzweil.com/ray-kurzweil-s-plan-for-cheating-death/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Interview with Robert A. Freitas Jr. Part 2</title>
		<link>https://www.writingsbyraykurzweil.com/interview-with-robert-a-freitas-jr-part-2</link>
		<comments>https://www.writingsbyraykurzweil.com/interview-with-robert-a-freitas-jr-part-2#respond</comments>
		<pubDate>Thu, 02 Feb 2006 10:25:44 +0000</pubDate>
								<dc:creator>Robert A. Freitas Jr., Sander Olson</dc:creator>
		
		
				<category><![CDATA[essays]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[There are very few diseases or conditions--including infectious diseases--aside from physical brain damage, that cannot be cured using nanomedicine, says nanomedicine pioneer Robert A. Freitas Jr. He believes nanomedicine's greatest power will emerge in a decade or two as we learn to design and construct complete artificial nanorobots using diamondoid nanometer-scale parts and subsystems.]]></description>
			<content:encoded><![CDATA[<p><i>Originally published on </i><a href="http://www.nanotech.biz/" target="_blank">Nanotech.biz</a> <i>November 5, 2005. Reprinted on KurzweilAI.net February 2, 2006.</i></p>
<p><i><a href="/" target="_top">Continued from Interview with Robert Freitas: Part 1</a>.</i></p>
<p>Robert A. Freitas Jr., J.D., published the first detailed technical design study of a mechanical nanorobot to appear in a peer-reviewed mainstream biomedical journal, and is the author of <i>Nanomedicine</i>, the first book-length technical discussion of the medical applications of nanotechnology and medical nanorobotics.<span id="more-80901"></span></p>
<h3>Question 1: How far can simple genetic engineering go towards curing diseases? Does pre-nanotechnology based technology have the potential to cure cancer and regrow organs?</h3>
<p>Yes, of course. Genetic engineering is a very powerful technology. Pre-nanotechnology treatments for some forms of cancer already exist. The emerging discipline of tissue engineering is already heading in the direction of building tissues and organs using special scaffolds that are impregnated with appropriate cells which grow into the matrix to form cohesive new tissues. Single-organ cloning is also on the horizon. But all of these treatments and organ substitutions could be accomplished with greater reliability, executed with greater speed, and completed in a side-effect-free manner using the tools of nanorobotic medicine. There are also many kinds of treatments, particularly those related to physical trauma, that can only be dealt with efficiently using advanced nanorobotic medicine.</p>
<p>The way I like to think about all this is to recognize that “nanomedicine” is most simply and generally defined as the preservation and improvement of human health, using molecular tools and molecular knowledge of the human body. Nanomedicine involves the use of three conceptual classes of molecularly precise structures: nonbiological nanomaterials and nanoparticles, biotechnology-based materials and devices, and nonbiological devices including nanorobotics.</p>
<p>In the near term, say, the next 5 years, the molecular tools of nanomedicine will include biologically active materials with well-defined nanoscale structures, including those produced by the methods of genetic engineering. For example, one of the first uses of “nanotechnology” in treating cancer employs engineered nanoparticles of various kinds to attempt a general cure while staying within the usual drug-treatment paradigm. Kopelman&#8217;s group at the University of Michigan has developed dye-tagged nanoparticles that can be inserted into living cells as biosensors. This quickly led to nanomaterials incorporating a variety of plug-in modules, creating molecular nanodevices for the early detection and therapy of brain cancer. One type of particle is attached to a cancer cell antibody that adheres to cancer cells, and is also affixed with a contrast agent to make the particle highly visible during MRI, while also enhancing the selective cancer-killing effect during subsequent laser irradiation of the treated brain tissue.</p>
<p>Another example from the University of Michigan is the dendrimers, tree-shaped synthetic molecules with a regular branching structure emanating outward from a core. The outermost layer can be functionalized with other useful molecules such as genetic therapy agents, decoys for viruses, or anti-HIV agents. The next step is to create dendrimer cluster agents, multi-component nanodevices called tecto-dendrimers built up from a number of single-dendrimer modules. These modules perform specialized functions such as diseased cell recognition, diagnosis of disease state, therapeutic drug delivery, location reporting, and therapy outcome reporting. The framework can be customized to fight a particular cancer simply by substituting any one of many possible distinct cancer recognition or “targeting” dendrimers. The larger trend in medical nanomaterials is to migrate from single-function molecules to multi-module entities that can do many things, but only at certain times or under certain conditions &#8211; exemplifying a continuing, and, in my view, inevitable, technological evolution toward a device-oriented nanomedicine.</p>
<p> In the mid-term, the next 5 or 10 years or so, knowledge gained from genomics and proteomics will make possible new treatments tailored to specific individuals, new drugs targeting pathogens whose genomes have now been decoded, and stem cell treatments to repair damaged tissue, replace missing function, or slow aging. We will see genetic therapies and tissue engineering, and many other offshoots of biotechnology, becoming more common in medical practice. We should also see artificial organic devices that incorporate biological motors or self-assembled DNA-based structures for a variety of useful medical purposes. And we&#8217;ll also see biological robots, derived from bacteria or other motile cells, that have had their genomes re-engineered and re-programmed. </p>
<p>So yes, there is a lot that pre-nanotechnology, or, more properly, pre-nanorobotic medicine can do to improve human health. But the advent of medical nanorobotics will represent a huge leap forward.</p>
<h3>Question 2: Are there any diseases that can&#8217;t be cured by nanotechnology? Are there any aspects of aging that can&#8217;t be stopped by nanotechnology?</h3>
<p> If we combine the benefits of a human physiology maintained at the level of effectiveness possessed by our bodies when we were children (e.g., <a HREF="http://www.rfreitas.com/Nano/DeathIsAnOutrage.htm#Slide69" target="_blank">dechronification</a>), along with the ability to deal with almost any form of severe trauma (via nanosurgery), then there are very few diseases or conditions that cannot be cured using nanomedicine. The only major class of incurable illness which nanorobots can&#8217;t handle is the case of brain damage in which portions of your brain have been physically destroyed. This condition might not be reversible if unique information has been irrevocably lost (say, because you neglected to make a backup copy of this information). There are several other minor “incurable” conditions, but all of these similarly relate to the loss of unique information. </p>
<h3>Question 3: The Foresight community has deemphasized molecular assemblers in favor of a desktop manufacturing paradigm. How will medical nanorobots be constructed?</h3>
<p> As noted in the previous interview, my view is that this change of emphasis is unlikely to affect the conduct of research in the field, or the activities of those few of us who are actually doing the research involved, because the distinction between “<a HREF="http://www.molecularassembler.com/KSRM/4.11.3.htm" target="_blank">molecular assemblers</a>” and “<a HREF="http://www.molecularassembler.com/KSRM/5.9.4.htm" target="_blank">nanofactories</a>” is largely cosmetic and because both approaches require almost exactly the same set of enabling technologies. At present we&#8217;re concentrating our efforts mostly on developing these component enabling technologies, not on integration of these technologies into larger systems. <a href="http://www.molecularassembler.com/KSRM/6.4.6.htm" target="_blank">Systems analysis</a> will come next.
</p>
<p> Medical nanorobots small enough to go into the human bloodstream will be very complex machines.&nbsp; We don&#8217;t know exactly how to build them yet, but the overall pathway from here to there is slowly starting to come into focus.&nbsp; Building and deploying nanorobotic systems will first require the ability to build diamondoid structures to molecular precision, using atomic force microscopy or similar means along with the techniques of diamond mechanosynthesis. My early work on diamond mechanosynthesis is described in a lecture I gave at the 2004 Foresight Conference in Washington, D.C., the text of which (plus many images) is available <a href="http://www.molecularassembler.com/Papers/PathDiamMolMfg.htm" target="_blank">online</a>. I&#8217;m currently involved in six collaborations with university groups in the U.S., U.K., and Russia (including both theoretical and experimental efforts) to push forward the technology in this area, and I have several new papers on this work nearing completion for journal submission. </p>
<p> This must be followed by developing the ability to design and manufacture rigid machine parts and then to assemble them into larger machine systems, up to and including nanorobots. My forthcoming book with Josh Hall (<i><a HREF="http://www.rfreitas.com/index.htm#Machines" target="_blank">Fundamentals of Nanomechanical Engineering</a></i>) and the development of the <a href="http://www.nanoengineer-1.com/" target="_blank">NanoEngineer</a> software by <a href="http://www.nanorex.com/" target="_blank">Nanorex</a> should advance our ability to design nanomechanical components, and further simulations and experiments will be required to learn how to build these systems and then assemble them into larger structures. </p>
<p> Once <a href="http://www.molecularassembler.com/Papers/PathDiamMolMfg.htm" target="_blank">diamond mechanosynthesis</a> and the fabrication of nanoparts becomes feasible, we will also need a massively parallel manufacturing capability to assemble nanorobots cheaply, precisely, and in vast quantities. My recently published technical book, co-authored with Merkle and titled <i><a HREF="http://www.molecularassembler.com/KSRM.htm" target="_blank">Kinematic Self-Replicating Machines</a></i> (<a HREF="http://www.landesbioscience.com/" target="_blank">Landes Bioscience</a>, 2004), surveys all known current work in the field of self-replication and replicative manufacturing, including concepts of molecular assemblers and nanofactories. (This book is <a HREF="http://www.molecularassembler.com/KSRM.htm" target="_blank">freely available online</a> at the <a HREF="http://www.molecularassembler.com/" target="_blank">Molecular Assembler website</a>.) </p>
<p> Finally, the reliable mass-production of medical nanorobots must be followed by a period of testing and approval for <a HREF="http://www.nanomedicine.com/NMIIA.htm" target="_blank">biocompatibility</a> and safety by the FDA or its equivalent in other countries. I would not be surprised if the first deployment of such systems occurred during the 2020s.&nbsp; But until we can build these devices experimentally, we are limited to theoretical analyses and computational chemistry simulations (some of which are now so good that their accuracy rivals the results of actual experiments). </p>
<p>So we can take two approaches, both of which I&#8217;m pursuing.&nbsp; First, we can use our knowledge of the laws of physics and the principles of good engineering to create exemplar designs of nanorobots, to analyze the potential capabilities and uses of these devices, and to determine which applications are likely to be feasible and which are not.&nbsp; This helps to establish a clear long-term goal.&nbsp; Second, we can examine the implementation pathways that could lead from where we are today to the future time when we may be able to build nanorobotic devices.&nbsp; As noted above, this may require <a href="http://www.molecularassembler.com/Papers/PathDiamMolMfg.htm" target="_blank">diamond mechanosynthesis</a> and <a href="http://www.molecularassembler.com/KSRM/5.7.htm" target="_blank">massively parallel nanofabrication</a> capabilities.&nbsp; Earlier this year I submitted the first-ever <a href="http://www.molecularassembler.com/Papers/DMSToolbuildProvPat.htm" target="_blank">U.S. patent on diamond mechanosynthesis</a> that describes one possible specific experimental process for achieving molecularly precise diamond structures in a practical way. </p>
<h3>Question 4: How will nanorobots avoid being destroyed by our immune systems? Won&#8217;t our immune systems identify them as foreign organisms and immediately attack them?</h3>
<p> Nanorobots constructed of diamondoid materials cannot be destroyed by our immune system. They can be made to be essentially <a href="http://www.nanomedicine.com/NMI/9.3.5.3.6.htm" target="_blank">impervious to chemical attack</a>. However, the body may react to their presence in a way that may interfere with their function. This raises the issue of nanorobot biocompatibility. </p>
<p> The biocompatibility of medical nanorobots is a complex and important issue. That&#8217;s why I expanded my original discussion in the <a HREF="http://www.nanomedicine.com/" target="_blank">Nanomedicine</a> book series from a single chapter (Chapter 15, <i>Nanomedicine Vol. II</i>) to an entire book-length treatment (<i><a href="http://www.nanomedicine.com/NMIIA.htm" target="_blank">Nanomedicine, Vol. IIA</a></i>) (<i>NMIIA</i>). My exploration of the particular problem you raise, <a HREF="http://www.nanomedicine.com/NMIIA/15.2.3.htm" target="_blank">nanorobot immunoreactivity</a>, spans 16 pages in <i>NMIIA</i>. There is not enough space here to go into details, so interested readers should refer to that extended discussion. The short answer to your question is that the immune system invokes several different responses to foreign objects placed within the body, including <a HREF="http://www.nanomedicine.com/NMIIA/15.2.3.2.htm" target="_blank">complement activation</a> and <a HREF="http://www.nanomedicine.com/NMIIA/15.2.3.3.htm" target="_blank">antibody response</a>. <a href="http://www.nanomedicine.com/NMIIA/15.4.3.htm" target="_blank">Phagocytosis</a> and <a href="http://www.nanomedicine.com/NMIIA/15.4.3.5.htm" target="_blank">foreign-body granulomatous reaction</a> are additional major immune system issues for medical nanorobots intended to remain in the body for extended durations. 
The <i><a href="http://www.nanomedicine.com/NMIIA.htm" target="_blank">NMIIA</a></i> book discusses all of these issues and suggests numerous methods by which antigenic reactions to nanorobots can be prevented or avoided, including (but not limited to) <a href="http://www.nanomedicine.com/NMIIA/15.2.3.3.htm#p23" target="_blank">camouflage</a>, <a href="http://www.nanomedicine.com/NMIIA/15.2.3.3.htm#24" target="_blank">chemical inhibition</a>, <a HREF="http://www.nanomedicine.com/NMIIA/15.2.3.3.htm#25" target="_blank">decoys</a>, <a href="http://www.nanomedicine.com/NMIIA/15.2.3.3.htm#26" target="_blank">active neutralization</a>, <a HREF="http://www.nanomedicine.com/NMIIA/15.2.3.3.htm#27" target="_blank">tolerization</a>, and <a HREF="http://www.nanomedicine.com/NMIIA/15.2.3.3.htm#28" target="_blank">clonal deletion</a>. <i><a href="http://www.nanomedicine.com/NMIIA.htm" target="_blank">NMIIA</a></i> also has an extensive discussion of <a HREF="http://www.nanomedicine.com/NMIIA/15.4.htm" target="_blank">nanorobotic phagocytosis</a>, including details of all steps in the phagocytic process and possible techniques for <a HREF="http://www.nanomedicine.com/NMIIA/15.4.3.6.htm" target="_blank">phagocyte avoidance and escape</a> by medical nanorobots. To summarize: the problems appear arduous but surmountable with good design. </p>
<h3>Question 5: Ray Kurzweil has proposed having billions of nanorobots positioned in our brains, in order to create full-immersion virtual reality. Do you think that such a scenario will ever be feasible?</h3>
<p> Yes, of course. I first described the foundational concepts necessary for this in <a href="http://www.nanomedicine.com/NMI.htm" target="_blank">Nanomedicine, Vol. I</a> (1999), including <a HREF="http://www.nanomedicine.com/NMI/4.8.6.htm" target="_blank">noninvasive neuroelectric monitoring</a> (i.e., nanorobots monitoring neuroelectric signal traffic without being resident inside the neuron cell body, using &gt;5 different methods), <a HREF="http://www.nanomedicine.com/NMI/4.9.5.htm" target="_blank">neural macrosensing</a> (i.e., nanorobots eavesdropping on the body&#8217;s sensory traffic, including auditory and optic nerve taps), <a href="http://www.nanomedicine.com/NMI/7.4.5.4.htm" target="_blank">modification of natural cellular message traffic</a> by nanorobots stationed nearby (including signal <a href="http://www.nanomedicine.com/NMI/7.4.5.4.htm#p2" target="_blank">amplification</a>, <a href="http://www.nanomedicine.com/NMI/7.4.5.4.htm#p3" target="_blank">suppression</a>, <a href="http://www.nanomedicine.com/NMI/7.4.5.4.htm#p4" target="_blank">replacement</a>, and <a href="http://www.nanomedicine.com/NMI/7.4.5.4.htm#p5" target="_blank">linkage of previously disparate neural signal sources</a>), <a href="http://www.nanomedicine.com/NMI/7.4.2.6.htm" target="_blank">inmessaging from neurons</a> (nanorobots receiving signals from the neural traffic), <a href="http://www.nanomedicine.com/NMI/7.4.5.6.htm" target="_blank">outmessaging to neurons</a> (nanorobots inserting signals into the neural traffic), direct stimulation of <a href="http://www.nanomedicine.com/NMI/7.4.6.1.htm" target="_blank">somesthetic</a>, <a href="http://www.nanomedicine.com/NMI/7.4.6.2.htm" target="_blank">kinesthetic</a>, <a href="http://www.nanomedicine.com/NMI/7.4.6.3.htm#p4" target="_blank">auditory</a>, <a href="http://www.nanomedicine.com/NMI/7.4.6.4.htm#p3" target="_blank">gustatory</a>, <a href="http://www.nanomedicine.com/NMI/7.4.6.4.htm#p4" target="_blank">olfactory</a>, and <a 
href="http://www.nanomedicine.com/NMI/7.4.6.5.htm" target="_blank">ocular</a> sensory nerves (including ganglionic <a HREF="http://www.nanomedicine.com/NMI/7.4.6.5.htm#C" target="_blank">stimulation</a> and <a HREF="http://www.nanomedicine.com/NMI/7.4.6.5.htm#D" target="_blank">direct photoreceptor stimulation</a>) by nanorobots, and the many <a href="http://www.nanomedicine.com/NMIIA/15.3.6.5.htm" target="_blank">neuron biocompatibility</a> issues related to nanorobots in the brain, with special attention to the <a href="http://www.nanomedicine.com/NMIIA/15.3.6.5.htm#p8" target="_blank">blood-brain barrier</a>. </p>
<p> The key issue for enabling full-immersion virtual reality is obtaining the necessary bandwidth inside the body, which should be available using the <i><a HREF="http://www.nanomedicine.com/NMI/7.3.1.htm" target="_blank">in vivo fiber network</a></i> I first proposed in <a href="http://www.nanomedicine.com/NMI.htm" target="_blank">Nanomedicine, Vol. I</a> (1999). Such a network can handle 10<sup>18</sup> bits/sec of data traffic, capacious enough for real-time brain-state monitoring. The fiber network has a 30 cm<sup>3</sup> volume and generates 4-6 watts of waste heat, both small enough for safe installation in a 1400 cm<sup>3</sup> 25-watt human brain. Signals travel at most a few meters at nearly the speed of light, so transit time from signal origination at neuron sites inside the brain to the external computer system mediating the upload is ~0.00001 millisec, which is considerably less than the minimum ~5 millisec neuron discharge cycle time. Neuron-monitoring chemical sensors located on average ~2 microns apart can capture relevant chemical events occurring within a ~5 millisec time window, since this is the <a href="http://www.nanomedicine.com/NMI/Tables/3.4.jpg" target="_blank">approximate diffusion time</a> for, say, a small neuropeptide across a 2-micron distance. Thus human brain-state monitoring can probably be “instantaneous,” at least on the timescale of human neural response, in the sense of “nothing of significance was missed.” </p>
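<p><i>The latency figures above can be sanity-checked with a few lines of arithmetic. This is an illustrative sketch: the 3-meter path length and the one-dimensional diffusion relation t &#8776; x&#178;/(2D) are my assumptions, not values taken from the cited books.</i></p>

```python
# Sanity check of the timing claims quoted above (illustrative sketch;
# the 3 m path length and the relation t ~ x^2 / (2*D) are assumptions).
C = 3.0e8                 # speed of light, m/s (signals travel at nearly c)
path_m = 3.0              # assumed "few meters" from neuron site to computer
transit_ms = path_m / C * 1e3          # one-way transit time, milliseconds
assert transit_ms < 5.0 * 1e-3         # far below the ~5 ms discharge cycle

x = 2e-6                  # chemical-sensor spacing, ~2 microns
t_window = 5e-3           # ~5 ms capture window quoted in the text, seconds
implied_D = x**2 / (2 * t_window)      # implied diffusion coefficient, m^2/s
print(f"transit time: {transit_ms:.5f} ms")   # 0.00001 ms, matching the text
print(f"implied diffusion coefficient: {implied_D:.0e} m^2/s")
```

<p><i>The implied coefficient, ~4&#215;10<sup>&#8722;10</sup> m&#178;/s (4&#215;10<sup>&#8722;6</sup> cm&#178;/s), is within roughly an order of magnitude of measured diffusion coefficients for small solutes in water, so the ~5 millisec window is at least dimensionally plausible.</i></p>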
<p> I believe Ray was relying upon these earlier analyses, among others, when making his proposals. </p>
<h3>Question 6: What is your best guess regarding the development of advanced medical nanotechnology? Will it appear within a decade of the first desktop assembler?</h3>
<p> The availability of practical molecular manufacturing is an obvious and necessary precursor to the widespread use of medical nanorobotics. I would not be surprised if the 2020s are eventually dubbed the “Decade of Medical Nanorobots.” </p>
<h3>Question 7: Will nanorobots be able to eradicate all infectious disease? After all, bacteria and viruses are extremely adaptable, and have developed a plethora of effective techniques to thwart the immune system.</h3>
<p> It will probably not be possible to eradicate all infectious disease. The current bacterial population of Earth may be ~10<sup>31</sup> organisms, so the chances are good that most of them are going to survive in some host reservoir, somewhere on the planet, for as long as life exists here, despite our best efforts to eradicate them. However, it should be possible to eliminate all harmful effects, and all harmful natural disease organisms, from the human body, allowing us to lead lives that are free of pathogen-mediated illness (at least most of the time). A simple antimicrobial nanorobot like the <a href="http://www.jetpress.org/volume14/freitas.html" target="_blank">microbivore</a> should be able to eliminate even the most severe bloodborne infections in treatment times on the order of an hour; more sophisticated devices could be used to tackle more difficult infection scenarios. </p>
<p> Regarding microbial adaptability, it makes no difference if a bacterium has acquired multiple drug resistance to antibiotics or to any other traditional treatment &#8211; the microbivore will eat it anyway, achieving complete clearance of even the most severe septicemic infections in minutes to hours, as compared to weeks or even months for antibiotic-assisted natural phagocytic defenses, without increasing the risk of sepsis or septic shock. Hence microbivores, each 2-3 microns in size, appear to be up to ~1000 times faster-acting than either unaided natural or antibiotic-assisted biological phagocytic defenses, and can extend the doctor&#8217;s reach to the entire range of potential bacterial threats, including locally dense infections. </p>
<h3>Question 8: Have you made any detailed, molecularly precise simulations of medical nanorobots?</h3>
<p> The greatest power of nanomedicine will emerge in a decade or two as we learn to design and construct complete artificial nanorobots using diamondoid nanometer-scale parts and subsystems including sensors, motors, manipulators, power plants, and molecular computers. The development pathway will be lengthy and difficult. First, theoretical scaling studies must be used to assess basic concept feasibility. These initial studies would then be followed by more detailed computational simulations of specific nanorobot components and assemblies, and ultimately full systems simulations, all thoroughly integrated with additional simulations of massively parallel manufacturing processes from start to finish consistent with a design-for-assembly engineering philosophy. Once molecular manufacturing capabilities become available, experimental efforts may progress from component fabrication and testing, to component assembly, and finally to prototypes and mass manufacture, ultimately leading to clinical trials. </p>
<p> As of 2005, progress in medical nanorobotics remains largely at the concept feasibility stage &#8211; since 1998, I have published four theoretical nanorobot scaling studies, including the <a HREF="http://www.foresight.org/Nanomedicine/Respirocytes.html" target="_blank">respirocytes</a> (artificial red cells), <a HREF="http://www.jetpress.org/volume14/freitas.html" target="_blank">microbivores</a> (artificial white cells), <a HREF="http://www.imm.org/Reports/Rep018.html" target="_blank">clottocytes</a> (artificial platelets), and the <a href="http://www.jetpress.org/volume11/vasculoid.html" target="_blank">vasculoid</a> (an artificial vascular system). These studies have not been intended to yield an actual engineering design for a future nanomedical product. Rather, the purpose was merely to examine a set of appropriate design constraints, scaling issues, and reference designs to assess whether or not the core idea might be feasible, and to determine key limitations of such designs. </p>
<p> The basic diamondoid structure of the respirocyte, the simplest nanorobot designed to date, includes 18 billion atoms. Molecular mechanics simulations of systems including 10-40 billion atoms have recently been reported using cluster supercomputers. So it is now possible, at least in principle, to attempt a basic simulation of an entire working medical nanorobot. The problems with actually doing this are many, and include the lack of a detailed atomic-level description of the respirocyte, a lack of reliable nanopart designs for components comprising the respirocyte, the difficulties of preparing input files and running massive simulations, and limited access to the personnel and computer time necessary to run the simulation. Such a simulation might well be attempted sometime in the next 5-10 years. Meanwhile we must content ourselves with molecular mechanics simulations of molecularly precise nanocomponents, starting with structures of up to 100,000 atoms using, for instance, the new <a HREF="http://www.nanoengineer-1.com/" target="_blank">NanoEngineer</a> software produced by <a HREF="http://www.nanorex.com/" target="_blank">Nanorex</a>. </p>
<h3>Question 9: How has the mainstream medical community reacted to your research?</h3>
<p> I think the biggest impact so far has been in solidifying the long-term vision of where the technology can go. Typically, articles describing future medicine, especially nanotechnology-based medicine, will lead off with a mention of “nanorobots in the bloodstream” as an idea that lies out there somewhere in the distant future, before moving on to a more substantive discussion of the latest news in medical nanoparticle research. This is entirely understandable and logical. Doctors are faced with the immediacy of sick or dying patients, and can only employ the instruments at their command today. Realistically, there will only be some small fraction of the traditional medical community that “gets it” right off the bat. The intended audience of my <i><a href="http://www.nanomedicine.com/" target="_blank">Nanomedicine</a></i> book series is technical and professional people who are seriously interested in the future of medical technology. Many practicing physicians do not &#8211; and quite correctly should not &#8211; fit this description. But I know I&#8217;m having an impact. I&#8217;ve received dozens of emails from students and young researchers thanking me for inspiring them to consider new career directions. (I&#8217;ve also been told, only partly tongue-in-cheek, that my <i><a HREF="http://www.nanomedicine.com/" target="_blank">Nanomedicine</a></i> books are often used by postdocs to help prepare their grant proposals because of all the relevant literature references collected in each volume.) </p>
<p> As medical nanorobotics proceeds along the development pathway I&#8217;ve outlined above &#8211; moving from drawing board, to computer simulation, to laboratory demonstration of mechanosynthesis, to component design and fabrication, to parts assembly and integration, and finally to device performance and safety testing &#8211; members of the mainstream medical community will naturally pay increasing attention to it, because it will become more directly relevant to them. By mid-century, medical nanorobotics will completely dominate medical practice. By writing the <i><a href="http://www.nanomedicine.com/" target="_blank">Nanomedicine</a></i> book series, <i><a href="http://www.molecularassembler.com/KSRM.htm" target="_blank">KSRM</a></i>, and the rest, I hope to accelerate the process of technological development and adoption of nanorobotics in modern medicine. To this end, the <i><a HREF="http://www.nanomedicine.com/" target="_blank">Nanomedicine</a></i> book series and my other books are being made freely available online, with the generous consent of my publisher, <a HREF="http://www.landesbioscience.com/" target="_blank">Landes Bioscience</a>. Such generosity is still almost unheard of among conventional book publishers. The main reason we&#8217;re doing this is to promote a broader discussion of the technical issues and a rapid assessment of the possibilities by the worldwide biomedical and engineering community. </p>
<h3>Question 10: How far along are you in writing your Nanomedicine book series? What else have you been up to lately, in the nanomedicine area?</h3>
<p> I&#8217;ve been writing the <i><a href="http://www.nanomedicine.com/" target="_blank">Nanomedicine</a></i> book series since 1994. It was originally conceived as a single book, then became a trilogy until I realized I needed an entire volume devoted solely to biocompatibility, whereupon it became a tetralogy. <a HREF="http://www.nanomedicine.com/NMI.htm" target="_blank">Volume I</a> was published by <a HREF="http://www.landesbioscience.com/" target="_blank">Landes Bioscience</a> in 1999 and <a HREF="http://www.nanomedicine.com/NMIIA.htm" target="_blank">Volume IIA</a> came out in 2003, also published by <a href="http://www.landesbioscience.com/" target="_blank">Landes Bioscience</a>.&nbsp; I&#8217;m still writing the last 2 volumes (<i><a HREF="http://www.nanomedicine.com/NMIIB.htm" target="_blank">NMIIB</a></i>, <i><a HREF="http://www.nanomedicine.com/NMIII.htm" target="_blank">NMIII</a></i>) of this book series, an ongoing effort that&nbsp;will continue during 2005-2010. Earlier this year I published two reviews on the current status of nanomedicine, available online at <a href="http://www.nanomedicine.com/Papers/WhatIsNMMar05.pdf" target="_blank">http://www.nanomedicine.com/Papers/WhatIsNMMar05.pdf</a> and <a HREF="http://www.nanomedicine.com/Papers/NMRevMar05.pdf" target="_blank">http://www.nanomedicine.com/Papers/NMRevMar05.pdf</a>. The first of these papers was the leadoff article for the premier issue of the new journal <a href="http://www.us.elsevierhealth.com/product.jsp?isbn=15499634" target="_blank">Nanomedicine</a> (the first journal exclusively devoted to this field, published by Elsevier), on whose <a href="http://www.elsevier.com/wps/find/journaleditorialboard.cws_home/703416/editorialboard" target="_blank">Editorial Board</a> I also serve. </p>
<p> In a recent major collaborative effort, artist <a href="http://www.nanogirl.com/index2.html" target="_blank">Gina Miller</a> has finished work on a 3-minute-long <a HREF="http://www.nanogirl.com/museumfuture/dermaldisplay.htm" target="_blank">animation</a> that nicely illustrates the workings of my proposed <a href="http://www.nanomedicine.com/NMI/7.4.6.7.htm#p3" target="_blank">programmable dermal display</a> (essentially, a video-touchscreen nano-tattoo that reports real-time medical information to the user, as reported back by numerous nanorobots stationed in various locations inside the body). I think this is a very cool animation. And of course you can always visit my <a href="http://www.foresight.org/Nanomedicine/Gallery/index.html" target="_blank">Nanomedicine Art Gallery</a> (hosted for me by Foresight Institute) with all the nice <a href="http://www.foresight.org/Nanomedicine/Gallery/Captions/index.html" target="_blank">nanorobot images</a>, where I continue on as curator. </p>
<p><i>&copy;2006 <a href="http://www.nanotech.biz/" target="_blank">Sander Olson</a>. Reprinted with permission.</i></p>
]]></content:encoded>
			<wfw:commentRss>https://www.writingsbyraykurzweil.com/interview-with-robert-a-freitas-jr-part-2/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>We Are the Web</title>
		<link>https://www.writingsbyraykurzweil.com/we-are-the-web-2</link>
		<comments>https://www.writingsbyraykurzweil.com/we-are-the-web-2#respond</comments>
		<pubDate>Thu, 19 Jan 2006 15:22:24 +0000</pubDate>
								<dc:creator>Kevin Kelly</dc:creator>
		
		
				<category><![CDATA[essays]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[The planet-sized "Web" computer is already more complex than a human brain and has surpassed the 20-petahertz threshold for potential intelligence as calculated by Ray Kurzweil. In 10 years, it will be ubiquitous. So will superintelligence emerge on the Web, not a supercomputer?]]></description>
			<content:encoded><![CDATA[<p><i>Originally published in </i><a href="http://www.wired.com/wired/archive/13.08/tech.html" target="_blank">Wired Magazine</a><i> August 2005. Published on KurzweilAI.net January 19, 2006.</i></p>
<p>Ten years ago, Netscape&#8217;s explosive IPO ignited huge piles of money. The brilliant flash revealed what had been invisible only a moment before: the World Wide Web. As Eric Schmidt (then at Sun, now at Google) noted, the day before the IPO the Web meant nothing; the day after, it meant everything.</p>
<p>Computing pioneer Vannevar Bush outlined the Web&#8217;s core idea&#8212;hyperlinked pages&#8212;in 1945, but the first person to try to build out the concept was a freethinker named Ted Nelson, who envisioned his own scheme in 1965. However, he had little success connecting digital bits on a useful scale, and his efforts were known only to an isolated group of disciples. Few of the hackers writing code for the emerging Web in the 1990s knew about Nelson or his hyperlinked dream machine.</p>
<p>At the suggestion of a computer-savvy friend, I got in touch with Nelson in 1984, a decade before Netscape. We met in a dark dockside bar in Sausalito, California. He was renting a houseboat nearby and had the air of someone with time on his hands. Folded notes erupted from his pockets, and long strips of paper slipped from overstuffed notebooks. Wearing a ballpoint pen on a string around his neck, he told me&#8212;way too earnestly for a bar at 4 o&#8217;clock in the afternoon&#8212;about his scheme for organizing all the knowledge of humanity. Salvation lay in cutting up 3 x 5 cards, of which he had plenty.</p>
<p>Although Nelson was polite, charming, and smooth, I was too slow for his fast talk. But I got an <i>aha!</i> from his marvelous notion of hypertext. He was certain that every document in the world should be a footnote to some other document, and computers could make the links between them visible and permanent. But that was just the beginning! Scribbling on index cards, he sketched out complicated notions of transferring authorship back to creators and tracking payments as readers hopped along networks of documents, what he called the docuverse. He spoke of &quot;transclusion&quot; and &quot;intertwingularity&quot; as he described the grand utopian benefits of his embedded structure. It was going to save the world from stupidity.</p>
<p>I believed him. Despite his quirks, it was clear to me that a hyperlinked world was inevitable&#8212;someday. But looking back now, after 10 years of living online, what surprises me about the genesis of the Web is how much was missing from Vannevar Bush&#8217;s vision, Nelson&#8217;s docuverse, and my own expectations. We all missed the big story. The revolution launched by Netscape&#8217;s IPO was only marginally about hypertext and human knowledge. At its heart was a new kind of participation that has since developed into an emerging culture based on sharing. And the ways of participating unleashed by hyperlinks are creating a new type of thinking&#8212;part human and part machine&#8212;found nowhere else on the planet or in history.</p>
<p>Not only did we fail to imagine what the Web would become, we still don&#8217;t see it today! We are blind to the miracle it has blossomed into. And as a result of ignoring what the Web really is, we are likely to miss what it will grow into over the next 10 years. Any hope of discerning the state of the Web in 2015 requires that we own up to how wrong we were 10 years ago.</p>
<p><b>1995</b><br />Before the Netscape browser illuminated the Web, the Internet did not exist for most people. If it was acknowledged at all, it was mischaracterized as either corporate email (as exciting as a necktie) or a clubhouse for adolescent males (read: pimply nerds). It was hard to use. On the Internet, even dogs had to type. Who wanted to waste time on something so boring?</p>
<p>The memories of an early enthusiast like myself can be unreliable, so I recently spent a few weeks reading stacks of old magazines and newspapers. Any promising new invention will have its naysayers, and the bigger the promises, the louder the nays. It&#8217;s not hard to find smart people saying stupid things about the Internet on the morning of its birth. In late 1994, <i>Time</i> magazine explained why the Internet would never go mainstream: &quot;It was not designed for doing commerce, and it does not gracefully accommodate new arrivals.&quot; <i>Newsweek</i> put the doubts more bluntly in a February 1995 headline: &quot;THE INTERNET? BAH!&quot; The article was written by astrophysicist and Net maven Cliff Stoll, who captured the prevailing skepticism of virtual communities and online shopping with one word: &quot;baloney.&quot;</p>
<p>This dismissive attitude pervaded a meeting I had with the top leaders of ABC in 1989. I was there to make a presentation to the corner office crowd about this &quot;Internet stuff.&quot; To their credit, they realized something was happening. Still, nothing I could tell them would convince them that the Internet was not marginal, not just typing, and, most emphatically, not just teenage boys. Stephen Weiswasser, a senior VP, delivered the ultimate putdown: &quot;The Internet will be the CB radio of the &#8217;90s,&quot; he told me, a charge he later repeated to the press. Weiswasser summed up ABC&#8217;s argument for ignoring the new medium: &quot;You aren&#8217;t going to turn passive consumers into active trollers on the Internet.&quot;</p>
<p>I was shown the door. But I offered one tip before I left. &quot;Look,&quot; I said. &quot;I happen to know that the address <i>abc.com</i> has not been registered. Go down to your basement, find your most technical computer guy, and have him register <i>abc.com</i> immediately. Don&#8217;t even think about it. It will be a good thing to do.&quot; They thanked me vacantly. I checked a week later. The domain was still unregistered.</p>
<p>While it is easy to smile at the dodos in TV land, they were not the only ones who had trouble imagining an alternative to couch potatoes. <i>Wired</i> did, too. When I examine issues of <i>Wired</i> from before the Netscape IPO (issues that I proudly edited), I am surprised to see them touting a future of high production-value content&#8212;5,000 always-on channels and virtual reality, with a side order of email sprinkled with bits of the Library of Congress. In fact, <i>Wired</i> offered a vision nearly identical to that of Internet wannabes in the broadcast, publishing, software, and movie industries: basically, TV that worked. The question was who would program the box. <i>Wired</i> looked forward to a constellation of new media upstarts like Nintendo and Yahoo!, not old-media dinosaurs like ABC. </p>
<p>Problem was, content was expensive to produce, and 5,000 channels of it would be 5,000 times as costly. No company was rich enough, no industry large enough, to carry off such an enterprise. The great telecom companies, which were supposed to wire up the digital revolution, were paralyzed by the uncertainties of funding the Net. In June 1994, David Quinn of British Telecom admitted to a conference of software publishers, &quot;I&#8217;m not sure how you&#8217;d make money out of it.&quot;</p>
<p>The immense sums of money supposedly required to fill the Net with content sent many technocritics into a tizzy. They were deeply concerned that cyberspace would become cyburbia&#8212;privately owned and operated. Writing in <i>Electronic Engineering Times</i> in 1995, Jeff Johnson worried: &quot;Ideally, individuals and small businesses would use the information highway to communicate, but it is more likely that the information highway will be controlled by Fortune 500 companies in 10 years.&quot; The impact would be more than commercial. &quot;Speech in cyberspace will not be free if we allow big business to control every square inch of the Net,&quot; wrote Andrew Shapiro in <i>The Nation</i> in July 1995.</p>
<p>The fear of commercialization was strongest among hardcore programmers: the coders, Unix weenies, TCP/IP fans, and selfless volunteer IT folk who kept the ad hoc network running. The major administrators thought of their work as noble, a gift to humanity. They saw the Internet as an open commons, not to be undone by greed or commercialization. It&#8217;s hard to believe now, but until 1991, commercial enterprise on the Internet was strictly prohibited. Even then, the rules favored public institutions and forbade &quot;extensive use for private or personal business.&quot;</p>
<p>In the mid-1980s, when I was involved in the WELL, an early nonprofit online system, we struggled to connect it to the emerging Internet but were thwarted, in part, by the &quot;acceptable use&quot; policy of the National Science Foundation (which ran the Internet backbone). In the eyes of the NSF, the Internet was funded for research, not commerce. At first this restriction wasn&#8217;t a problem for online services, because most providers, the WELL included, were isolated from one another. Paying customers could send email within the system&#8212;but not outside it. In 1987, the WELL fudged a way to forward outside email through the Net without confronting the acceptable use policy, which our organization&#8217;s own techies were reluctant to break. The NSF rule reflected a lingering sentiment that the Internet would be devalued, if not trashed, by opening it up to commercial interests. Spam was already a problem (one every week!).</p>
<p>This attitude prevailed even in the offices of <i>Wired</i>. In 1994, during the first design meetings for <i>Wired</i>&#8216;s embryonic Web site, HotWired, programmers were upset that the innovation we were cooking up&#8212;what are now called clickthrough ad banners&#8212;subverted the great social potential of this new territory. The Web was hardly out of diapers, and already they were being asked to blight it with billboards and commercials. Only in May 1995, after the NSF finally opened the floodgates to ecommerce, did the geek elite begin to relax.</p>
<p>Three months later, Netscape&#8217;s public offering took off, and in a blink a world of DIY possibilities was born. Suddenly it became clear that ordinary people could create material anyone with a connection could view. The burgeoning online audience no longer needed ABC for content. Netscape&#8217;s stock peaked at $75 on its first day of trading, and the world gasped in awe. Was this insanity, or the start of something new?</p>
<p><b>2005</b><br />The scope of the Web today is hard to fathom. The total number of Web pages, including those that are dynamically created upon request and document files available through links, exceeds 600 billion. That&#8217;s 100 pages per person alive.</p>
<p>How could we create so much, so fast, so well? In fewer than 4,000 days, we have encoded half a trillion versions of our collective story and put them in front of 1 billion people, or one-sixth of the world&#8217;s population. That remarkable achievement was not in anyone&#8217;s 10-year plan.</p>
<p>The accretion of tiny marvels can numb us to the arrival of the stupendous. Today, at any Net terminal, you can get: an amazing variety of music and video, an evolving encyclopedia, weather forecasts, help wanted ads, satellite images of anyplace on Earth, up-to-the-minute news from around the globe, tax forms, TV guides, road maps with driving directions, real-time stock quotes, telephone numbers, real estate listings with virtual walk-throughs, pictures of just about anything, sports scores, places to buy almost anything, records of political contributions, library catalogs, appliance manuals, live traffic reports, archives to major newspapers&#8212;all wrapped up in an interactive index that really works.</p>
<p>This view is spookily godlike. You can switch your view of a spot in the world from map to satellite to 3-D just by clicking. Recall the past? It&#8217;s there. Or listen to the daily complaints and travails of almost anyone who blogs (and doesn&#8217;t everyone?). I doubt angels have a better view of humanity.</p>
<p>Why aren&#8217;t we more amazed by this fullness? Kings of old would have gone to war to win such abilities. Only small children would have dreamed such a magic window could be real. I have reviewed the expectations of waking adults and wise experts, and I can affirm that this comprehensive wealth of material, available on demand and free of charge, was not in anyone&#8217;s scenario. Ten years ago, anyone silly enough to trumpet the above list as a vision of the near future would have been confronted by the evidence: There wasn&#8217;t enough money in all the investment firms in the entire world to fund such a cornucopia. The success of the Web at this scale was impossible.</p>
<p>But if we have learned anything in the past decade, it is the plausibility of the impossible.</p>
<p>Take eBay. In some 4,000 days, eBay has gone from marginal Bay Area experiment in community markets to the most profitable spinoff of hypertext. At any one moment, 50 million auctions race through the site. An estimated half a million folks make their living selling through Internet auctions. Ten years ago I heard skeptics swear nobody would ever buy a car on the Web. Last year eBay Motors sold $11 billion worth of vehicles. EBay&#8217;s 2001 auction of a $4.9 million private jet would have shocked anyone in 1995&#8212;and still smells implausible today.</p>
<p>Nowhere in Ted Nelson&#8217;s convoluted sketches of hypertext transclusion did the fantasy of a global flea market appear. Especially as the ultimate business model! He hoped to franchise his Xanadu hypertext systems in the physical world at the scale of a copy shop or caf&eacute;&#8212;you would go to a store to do your hypertexting. Xanadu would take a cut of the action.</p>
<p>Instead, we have an open global flea market that handles 1.4 billion auctions every year and operates from your bedroom. Users do most of the work; they photograph, catalog, post, and manage their own auctions. And they police themselves; while eBay and other auction sites do call in the authorities to arrest serial abusers, the chief method of ensuring fairness is a system of user-generated ratings. Three billion feedback comments can work wonders.</p>
<p>What we all failed to see was how much of this new world would be manufactured by users, not corporate interests. Amazon.com customers rushed with surprising speed and intelligence to write the reviews that made the site&#8217;s long-tail selection usable. Owners of Adobe, Apple, and most major software products offer help and advice on the developer&#8217;s forum Web pages, serving as high-quality customer support for new buyers. And in the greatest leverage of the common user, Google turns traffic and link patterns generated by 2 billion searches a month into the organizing intelligence for a new economy. This bottom-up takeover was not in anyone&#8217;s 10-year vision.</p>
<p>No Web phenomenon is more confounding than blogging. Everything media experts knew about audiences&#8212;and they knew a lot&#8212;confirmed the focus group belief that audiences would never get off their butts and start making their own entertainment. Everyone knew writing and reading were dead; music was too much trouble to make when you could sit back and listen; video production was simply out of reach of amateurs. Blogs and other participant media would never happen, or if they happened they would not draw an audience, or if they drew an audience they would not matter. What a shock, then, to witness the near-instantaneous rise of 50 million blogs, with a new one appearing every two seconds. There&#8212;another new blog! One more person doing what AOL and ABC&#8212;and almost everyone else&#8212;expected only AOL and ABC to be doing. These user-created channels make no sense economically. Where are the time, energy, and resources coming from?</p>
<p>The audience.</p>
<p>I run a blog about cool tools. I write it for my own delight and for the benefit of friends. The Web extends my passion to a far wider group for no extra cost or effort. In this way, my site is part of a vast and growing gift economy, a visible underground of valuable creations&#8212;text, music, film, software, tools, and services&#8212;all given away for free. This gift economy fuels an abundance of choices. It spurs the grateful to reciprocate. It permits easy modification and reuse, and thus promotes consumers into producers.</p>
<p>The open source software movement is another example. Key ingredients of collaborative programming&#8212;swapping code, updating instantly, recruiting globally&#8212;didn&#8217;t work on a large scale until the Web was woven. Then software became something you could join, either as a beta tester or as a coder on an open source project. The clever &quot;view source&quot; browser option let the average Web surfer in on the act. And anyone could rustle up a link&#8212;which, it turns out, is the most powerful invention of the decade.</p>
<p>Linking unleashes involvement and interactivity at levels once thought unfashionable or impossible. It transforms reading into navigating and enlarges small actions into powerful forces. For instance, hyperlinks made it much easier to create a seamless, scrolling street map of every town. They made it easier for people to refer to those maps. And hyperlinks made it possible for almost anyone to annotate, amend, and improve any map embedded in the Web. Cartography has gone from spectator art to participatory democracy.</p>
<p>The electricity of participation nudges ordinary folks to invest huge hunks of energy and time into making free encyclopedias, creating public tutorials for changing a flat tire, or cataloging the votes in the Senate. More and more of the Web runs in this mode. One study found that only 40 percent of the Web is commercial. The rest runs on duty or passion.</p>
<p>Coming out of the industrial age, when mass-produced goods outclassed anything you could make yourself, this sudden tilt toward consumer involvement is a complete Lazarus move: &quot;We thought that died long ago.&quot; The deep enthusiasm for making things, for interacting more deeply than just choosing options, is the great force not reckoned 10 years ago. This impulse for participation has upended the economy and is steadily turning the sphere of social networking&#8212;smart mobs, hive minds, and collaborative action&#8212;into the main event.</p>
<p>When a company opens its databases to users, as Amazon, Google, and eBay have done with their Web services, it is encouraging participation at new levels. The corporation&#8217;s data becomes part of the commons and an invitation to participate. People who take advantage of these capabilities are no longer customers; they&#8217;re the company&#8217;s developers, vendors, skunk works, and fan base.</p>
<p>A little over a decade ago, a phone survey by <i>Macworld</i> asked a few hundred people what they thought would be worth $10 per month on the information superhighway. The participants started with uplifting services: educational courses, reference books, electronic voting, and library information. The bottom of the list ended with sports statistics, role-playing games, gambling, and dating. Ten years later what folks actually use the Internet for is inverted. According to a 2004 Stanford study, people use the Internet for (in order): playing games, &quot;just surfing,&quot; and shopping; the list ends with responsible activities like politics and banking. (Some even admitted to porn.) Remember, shopping wasn&#8217;t supposed to happen. Where&#8217;s Cliff Stoll, the guy who said the Internet was baloney and online catalogs humbug? He has a little online store where he sells handcrafted Klein bottles.</p>
<p>The public&#8217;s fantasy, revealed in that 1994 survey, began reasonably with the conventional notions of a downloadable world. These assumptions were wired into the infrastructure. The bandwidth on cable and phone lines was asymmetrical: Download rates far exceeded upload rates. The dogma of the age held that ordinary people had no need to upload; they were consumers, not producers. Fast-forward to today, and the poster child of the new Internet regime is BitTorrent. The brilliance of BitTorrent is in its exploitation of near-symmetrical communication rates. Users upload stuff while they are downloading. It assumes participation, not mere consumption. Our communication infrastructure has taken only the first steps in this great shift from audience to participants, but that is where it will go in the next decade.</p>
<p>With the steady advance of new ways to share, the Web has embedded itself into every class, occupation, and region. Indeed, people&#8217;s anxiety about the Internet being out of the mainstream seems quaint now. In part because of the ease of creation and dissemination, online culture is <i>the culture</i>. Likewise, the worry about the Internet being 100 percent male was entirely misplaced. Everyone missed the party celebrating the 2002 flip-point when women online first outnumbered men. Today, 52 percent of netizens are female. And, of course, the Internet is not and has never been a teenage realm. In 2005, the average user is a bone-creaking 41 years old.</p>
<p>What could be a better mark of irreversible acceptance than adoption by the Amish? I was visiting some Amish farmers recently. They fit the archetype perfectly: straw hats, scraggly beards, wives with bonnets, no electricity, no phones or TVs, horse and buggy outside. They have an undeserved reputation for resisting all technology, when actually they are just very late adopters. Still, I was amazed to hear them mention their Web sites.</p>
<p>&quot;Amish Web sites?&quot; I asked.</p>
<p>&quot;For advertising our family business. We weld barbecue grills in our shop.&quot;</p>
<p>&quot;Yes, but&#8212;&quot;</p>
<p>&quot;Oh, we use the Internet terminal at the public library. And Yahoo!&quot;</p>
<p>I knew then the battle was over.</p>
<p><b>2015</b><br />The Web continues to evolve from a world ruled by mass media and mass audiences to one ruled by messy media and messy participation. How far can this frenzy of creativity go? Encouraged by Web-enabled sales, 175,000 books were published and more than 30,000 music albums were released in the US last year. At the same time, 14 million blogs launched worldwide. All these numbers are escalating. A simple extrapolation suggests that in the near future, everyone alive will (on average) write a song, author a book, make a video, craft a weblog, and code a program. This idea is less outrageous than the notion 150 years ago that someday everyone would write a letter or take a photograph.</p>
<p>What happens when the data flow is asymmetrical&#8212;but in favor of creators? What happens when everyone is uploading far more than they download? If everyone is busy making, altering, mixing, and mashing, who will have time to sit back and veg out? Who will be a consumer?</p>
<p>No one. And that&#8217;s just fine. A world where production outpaces consumption should not be sustainable; that&#8217;s a lesson from Economics 101. But online, where many ideas that don&#8217;t work in theory succeed in practice, the audience increasingly doesn&#8217;t matter. What matters is the network of social creation, the community of collaborative interaction that futurist Alvin Toffler called prosumption. As with blogging and BitTorrent, prosumers produce and consume at once. The producers are the audience, the act of making is the act of watching, and every link is both a point of departure and a destination.</p>
<p>But if a roiling mess of participation is <i>all</i> we think the Web will become, we are likely to miss the big news, again. The experts are certainly missing it. The Pew Internet &amp; American Life Project surveyed more than 1,200 professionals in 2004, asking them to predict the Net&#8217;s next decade. One scenario earned agreement from two-thirds of the respondents: &quot;As computing devices become embedded in everything from clothes to appliances to cars to phones, these networked devices will allow greater surveillance by governments and businesses.&quot; Another was affirmed by one-third: &quot;By 2014, use of the Internet will increase the size of people&#8217;s social networks far beyond what has traditionally been the case.&quot;</p>
<p>These are safe bets, but they fail to capture the Web&#8217;s disruptive trajectory. The real transformation under way is more akin to what Sun&#8217;s John Gage had in mind in 1988 when he famously said, &quot;The network <i>is</i> the computer.&quot; He was talking about the company&#8217;s vision of the thin-client desktop, but his phrase neatly sums up the destiny of the Web: As the OS for a megacomputer that encompasses the Internet, all its services, all peripheral chips and affiliated devices from scanners to satellites, and the billions of human minds entangled in this global network. This gargantuan Machine already exists in a primitive form. In the coming decade, it will evolve into an integral extension not only of our senses and bodies but our minds.</p>
<p>Today, the Machine acts like a very large computer with top-level functions that operate at approximately the clock speed of an early PC. It processes 1 million emails each second, which essentially means network email runs at 1 megahertz. Same with Web searches. Instant messaging runs at 100 kilohertz, SMS at 1 kilohertz. The Machine&#8217;s total external RAM is about 200 terabytes. In any one second, 10 terabits can be coursing through its backbone, and each year it generates nearly 20 exabytes of data. Its distributed &quot;chip&quot; spans 1 billion active PCs, which is approximately the number of transistors in one PC.</p>
<p>This planet-sized computer is comparable in complexity to a human brain. Both the brain and the Web have hundreds of billions of neurons (or Web pages). Each biological neuron sprouts synaptic links to thousands of other neurons, while each Web page branches into dozens of hyperlinks. That adds up to a trillion &quot;synapses&quot; between the static pages on the Web. The human brain has about 100 times that number&#8212;but brains are not doubling in size every few years. The Machine is.</p>
<p>Since each of its &quot;transistors&quot; is itself a personal computer with a billion transistors running lower functions, the Machine is fractal. In total, it harnesses a quintillion transistors, expanding its complexity beyond that of a biological brain. It has already surpassed the 20-petahertz threshold for potential intelligence as calculated by Ray Kurzweil. For this reason some researchers pursuing artificial intelligence have switched their bets to the Net as the computer most likely to think first. Danny Hillis, a computer scientist who once claimed he wanted to make an AI &quot;that would be proud of me,&quot; has invented massively parallel supercomputers in part to advance us in that direction. He now believes the first real AI will emerge not in a stand-alone supercomputer like IBM&#8217;s proposed 23-teraflop Blue Brain, but in the vast digital tangle of the global Machine.</p>
<p>In 10 years, the system will contain hundreds of millions of miles of fiber-optic neurons linking the billions of ant-smart chips embedded into manufactured products, buried in environmental sensors, staring out from satellite cameras, guiding cars, and saturating our world with enough complexity to begin to learn. We will live inside this thing.</p>
<p>Today the nascent Machine routes packets around disturbances in its lines; by 2015 it will anticipate disturbances and avoid them. It will have a robust immune system, weeding spam from its trunk lines, eliminating viruses and denial-of-service attacks the moment they are launched, and dissuading malefactors from injuring it again. The patterns of the Machine&#8217;s internal workings will be so complex they won&#8217;t be repeatable; you won&#8217;t always get the same answer to a given question. It will take intuition to maximize what the global network has to offer. The most obvious development birthed by this platform will be the absorption of routine. The Machine will take on anything we do more than twice. It will be the Anticipation Machine.</p>
<p>One great advantage the Machine holds in this regard: It&#8217;s always on. It is very hard to learn if you keep getting turned off, which is the fate of most computers. AI researchers rejoice when an adaptive learning program runs for days without crashing. The fetal Machine has been running continuously for at least 10 years (30 if you want to be picky). I am aware of no other machine&#8212;of any type&#8212;that has run that long with zero downtime. While portions may spin down due to power outages or cascading infections, the entire thing is unlikely to go quiet in the coming decade. It will be the most reliable gadget we have.</p>
<p>And the most universal. By 2015, desktop operating systems will be largely irrelevant. The Web will be the only OS worth coding for. It won&#8217;t matter what device you use, as long as it runs on the Web OS. You will reach the same distributed computer whether you log on via phone, PDA, laptop, or HDTV.</p>
<p>In the 1990s, the big players called that convergence. They peddled the image of multiple kinds of signals entering our lives through one box&#8212;a box they hoped to control. By 2015 this image will be turned inside out. In reality, each device is a differently shaped window that peers into the global computer. Nothing converges. The Machine is an unbounded thing that will take a billion windows to glimpse even part of. It is what you&#8217;ll see on the other side of any screen.</p>
<p>And who will write the software that makes this contraption useful and productive? We will. In fact, we&#8217;re already doing it, each of us, every day. When we post and then tag pictures on the community photo album Flickr, we are teaching the Machine to give names to images. The thickening links between caption and picture form a neural net that can learn. Think of the 100 billion times <i>per day</i> humans click on a Web page as a way of teaching the Machine what we think is important. Each time we forge a link between words, we teach it an idea. Wikipedia encourages its citizen authors to link each fact in an article to a reference citation. Over time, a Wikipedia article becomes totally underlined in blue as ideas are cross-referenced. That massive cross-referencing is how brains think and remember. It is how neural nets answer questions. It is how our global skin of neurons will adapt autonomously and acquire a higher level of knowledge.</p>
<p>The human brain has no department full of programming cells that configure the mind. Rather, brain cells program themselves simply by being used. Likewise, our questions program the Machine to answer questions. We think we are merely wasting time when we surf mindlessly or blog an item, but each time we click a link we strengthen a node somewhere in the Web OS, thereby programming the Machine by using it.</p>
<p>What will most surprise us is how dependent we will be on what the Machine knows&#8212;about us and about what we want to know. We already find it easier to Google something a second or third time rather than remember it ourselves. The more we teach this megacomputer, the more it will assume responsibility for our knowing. It will become our memory. Then it will become our identity. In 2015 many people, when divorced from the Machine, won&#8217;t feel like themselves&#8212;as if they&#8217;d had a lobotomy.</p>
<p>Legend has it that Ted Nelson invented Xanadu as a remedy for his poor memory and attention deficit disorder. In this light, the Web as memory bank should be no surprise. Still, the birth of a machine that subsumes all other machines so that in effect there is only one Machine, which penetrates our lives to such a degree that it becomes essential to our identity&#8212;this will be full of surprises. Especially since it is only the beginning.</p>
<p>There is only one time in the history of each planet when its inhabitants first wire up its innumerable parts to make one large Machine. Later that Machine may run faster, but there is only one time when it is born.</p>
<p>You and I are alive at this moment.</p>
<p>We should marvel, but people alive at such times usually don&#8217;t. Every few centuries, the steady march of change meets a discontinuity, and history hinges on that moment. We look back on those pivotal eras and wonder what it would have been like to be alive then. Confucius, Zoroaster, Buddha, and the latter Jewish patriarchs lived in the same historical era, an inflection point known as the axial age of religion. Few world religions were born after this time. Similarly, the great personalities converging upon the American Revolution and the geniuses who commingled during the invention of modern science in the 17th century mark additional axial phases in the short history of our civilization.</p>
<p>Three thousand years from now, when keen minds review the past, I believe that our ancient time, here at the cusp of the third millennium, will be seen as another such era. In the years roughly coincident with the Netscape IPO, humans began animating inert objects with tiny slivers of intelligence, connecting them into a global field, and linking their own minds into a single thing. This will be recognized as the largest, most complex, and most surprising event on the planet. Weaving nerves out of glass and radio waves, our species began wiring up all regions, all processes, all facts and notions into a grand network. From this embryonic neural net was born a collaborative interface for our civilization, a sensing, cognitive device with power that exceeded any previous invention. The Machine provided a new way of thinking (perfect search, total recall) and a new mind for an old species. It was the Beginning.</p>
<p>In retrospect, the Netscape IPO was a puny rocket to herald such a moment. The product and the company quickly withered into irrelevance, and the excessive exuberance of its IPO was downright tame compared with the dotcoms that followed. First moments are often like that. After the hysteria has died down, after the millions of dollars have been gained and lost, after the strands of mind, once achingly isolated, have started to come together&#8212;the only thing we can say is: Our Machine is born. It&#8217;s on.</p>
<p><i>&copy; 2005 <a href="http://www.kk.org" target="_blank">Kevin Kelly</a>. Reprinted with permission.</i></p>
]]></content:encoded>
			<wfw:commentRss>https://www.writingsbyraykurzweil.com/we-are-the-web-2/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Ray Kurzweil&#8217;s Dangerous Idea: The near-term inevitability of radical life extension and expansion</title>
		<link>https://www.writingsbyraykurzweil.com/ray-kurzweil-s-dangerous-idea-the-near-term-inevitability-of-radical-life-extension-and-expansion</link>
		<comments>https://www.writingsbyraykurzweil.com/ray-kurzweil-s-dangerous-idea-the-near-term-inevitability-of-radical-life-extension-and-expansion#respond</comments>
		<pubDate>Tue, 17 Jan 2006 14:29:15 +0000</pubDate>
								<dc:creator>Ray Kurzweil</dc:creator>
		
		
				<category><![CDATA[essays]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA["What is your dangerous idea?" Over one hundred big thinkers answered this question, as part of <a href="http://www.edge.org/q2006/q06_index.html" target="_blank">The Edge's Annual Question</a> for 2006.  Ray Kurzweil's dangerous idea?  We can achieve immortality in our lifetime.]]></description>
			<content:encoded><![CDATA[<p><i>This is an excerpt of an article originally published on </i><a href="http://www.edge.org/q2006/q06_2.html#kurzweil" target="_blank">Edge</a> <i>January 1, 2006</i>.<i> Published with permission on KurzweilAI.net January 17, 2006.</i></p>
<p>My dangerous idea is the near-term inevitability of radical life extension and expansion. The idea is dangerous, however, only when contemplated from current linear perspectives. <span id="more-80896"></span></p>
<p>First the inevitability: the power of information technologies is doubling each year, and this doubling extends to areas beyond computation, most notably our knowledge of biology and of our own intelligence. It took 15 years to sequence HIV, and from that perspective the genome project seemed impossible in 1990. But the amount of genetic data we were able to sequence doubled every year while the cost came down by half each year.</p>
<p>We finished the genome project on schedule and were able to sequence SARS in only 31 days. We are also gaining the means to reprogram the ancient information processes underlying biology. RNA interference can turn genes off by blocking the messenger RNA that expresses them. New forms of gene therapy are now able to place new genetic information in the right place on the right chromosome. We can create or block enzymes, the workhorses of biology. We are reverse-engineering&#8212;and gaining the means to reprogram&#8212;the information processes underlying disease and aging, and this process is accelerating, doubling every year. If we think linearly, then the idea of turning off all disease and aging processes appears far off in the future, just as the genome project did in 1990. On the other hand, if we factor in the doubling of the power of these technologies each year, the prospect of radical life extension is only a couple of decades away.</p>
<p>In addition to reprogramming biology, we will be able to go substantially beyond biology with nanotechnology in the form of computerized nanobots in the bloodstream. If the idea of programmable devices the size of blood cells performing therapeutic functions in the bloodstream sounds like far-off science fiction, I would point out that we are doing this already in animals. One scientist cured type I diabetes in rats with blood-cell-sized devices containing 7-nanometer pores that let insulin out in a controlled fashion and that block antibodies. If we factor in the exponential advance of computation and communication (price-performance multiplying by a factor of a billion in 25 years while at the same time shrinking in size by a factor of thousands), these scenarios are highly realistic.</p>
<p>The apparent dangers are not real while unapparent dangers are real. The apparent dangers are that a dramatic reduction in the death rate will create overpopulation and thereby strain energy and other resources while exacerbating environmental degradation. However, we need to capture only 1 percent of 1 percent of the sunlight to meet all of our energy needs (3 percent of 1 percent by 2025), and nanoengineered solar panels and fuel cells will be able to do this, thereby meeting all of our energy needs in the late 2020s with clean and renewable methods. Molecular nanoassembly devices will be able to manufacture a wide range of products, just about everything we need, with inexpensive tabletop devices. The power and price-performance of these systems will double each year, much faster than the doubling rate of the biological population. As a result, poverty and pollution will decline and ultimately vanish despite growth of the biological population.</p>
<p>There are real downsides, however, and this is not a utopian vision. We have a new existential threat today in the potential of a bioterrorist to engineer a new biological virus. We actually do have the knowledge to combat this problem (for example, new vaccine technologies and RNA interference which has been shown capable of destroying arbitrary biological viruses), but it will be a race. We will have similar issues with the feasibility of self-replicating nanotechnology in the late 2020s. Containing these perils while we harvest the promise is arguably the most important issue we face.</p>
<p>Some people see these prospects as dangerous because they threaten their view of what it means to be human. There is a fundamental philosophical divide here. In my view, it is not our limitations that define our humanity. Rather, we are the species that seeks and succeeds in going beyond our limitations.</p>
<p>[Continued on <a href="http://www.edge.org/q2006/q06_2.html#kurzweil" target="_blank">Edge</a>]</p>
]]></content:encoded>
			<wfw:commentRss>https://www.writingsbyraykurzweil.com/ray-kurzweil-s-dangerous-idea-the-near-term-inevitability-of-radical-life-extension-and-expansion/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Thought Experiments: When the Singularity Is More than a Literary Device: An Interview with Futurist-Inventor Ray Kurzweil</title>
		<link>https://www.writingsbyraykurzweil.com/thought-experiments-when-the-singularity-is-more-than-a-literary-device-an-interview-with-futurist-inventor-ray-kurzweil</link>
		<comments>https://www.writingsbyraykurzweil.com/thought-experiments-when-the-singularity-is-more-than-a-literary-device-an-interview-with-futurist-inventor-ray-kurzweil#respond</comments>
		<pubDate>Mon, 16 Jan 2006 10:02:21 +0000</pubDate>
								<dc:creator>Cory Doctorow</dc:creator>
		
		
				<category><![CDATA[essays]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[Is the Singularity a spiritual or a technological belief system? Perhaps it is the melding of both, says science fiction author Cory Doctorow in this dialogue with Ray Kurzweil. "After all, this is a system of belief that dictates a means by which we can care for our bodies virtuously and live long enough to transcend them. It's no wonder that the Singularity has come to occupy so much of the science fiction narrative in these years. Science or spirituality, you could hardly ask for a subject better tailored to technological speculation and drama."]]></description>
			<content:encoded><![CDATA[<p><i>Originally published in </i><a href="http://www.asimovs.com/_issue_0506/thoughtexperiments.shtml" target="_blank">Asimov&#8217;s Science Fiction</a><i>, April 18, 2005. Published with permission on KurzweilAI.net January 16, 2006.</i></p>
<p>It&#8217;s not clear to me whether the Singularity is a technical belief system or a spiritual one. <span id="more-80894"></span></p>
<p>The Singularity&#151;a notion that&#146;s crept into a lot of skiffy, and whose most articulate in-genre spokesmodel is Vernor Vinge&#151;describes the black hole in history that will be created at the moment when human intelligence can be digitized. When the speed and scope of our cognition is hitched to the price-performance curve of microprocessors, our &quot;progress&quot; will double every eighteen months, and then every twelve months, and then every ten, and eventually, every five seconds.</p>
<p>Singularities are, literally, holes in space from whence no information can emerge, and so SF writers occasionally mutter about how hard it is to tell a story set after the information Singularity. Everything will be different. What it means to be human will be so different that what it means to be in danger, or happy, or sad, or any of the other elements that make up the squeeze-and-release tension in a good yarn will be unrecognizable to us pre-Singletons.</p>
<p>It&#146;s a neat conceit to write around. I&#146;ve committed Singularity a couple of times, usually in collaboration with gonzo Singleton Charlie Stross, the mad antipope of the Singularity. But those stories have the same relation to futurism as romance novels do to love: a shared jumping-off point, but radically different morphologies.</p>
<p>Of course, the Singularity isn&#146;t just a conceit for noodling with in the pages of the pulps: it&#146;s the subject of serious-minded punditry, futurism, and even science.</p>
<p>Ray Kurzweil is one such pundit-futurist-scientist. He&#146;s a serial entrepreneur who founded successful businesses that advanced the fields of optical character recognition (machine-reading) software, text-to-speech synthesis, synthetic musical instrument simulation, computer-based speech recognition, and stock-market analysis. He cured his own Type-II diabetes through a careful review of the literature and the judicious application of first principles and reason. To a casual observer, Kurzweil appears to be the star of some kind of Heinlein novel, stealing fire from the gods and embarking on a quest to bring his maverick ideas to the public despite the dismissals of the establishment, getting rich in the process.</p>
<p>Kurzweil believes in the Singularity. In his 1990 manifesto, &quot;The Age of Intelligent Machines,&quot; Kurzweil persuasively argued that we were on the brink of meaningful machine intelligence. A decade later, he continued the argument in a book called <i>The Age of Spiritual Machines</i>, whose most audacious claim is that the world&#146;s computational capacity has been slowly doubling since the crust first cooled (and before!), and that the doubling interval has been growing shorter and shorter with each passing year, so that now we see it reflected in the computer industry&#146;s Moore&#146;s Law, which predicts that microprocessors will get twice as powerful for half the cost about every eighteen months. The breathtaking sweep of this trend has an obvious conclusion: computers more powerful than people; more powerful than we can comprehend.</p>
<p>Now Kurzweil has published two more books, <i>The Singularity Is Near: When Humans Transcend Biology</i> (Viking, Spring 2005) and <i>Fantastic Voyage: Live Long Enough to Live Forever</i> (with Terry Grossman, Rodale, November 2004). The former is a technological roadmap for creating the conditions necessary for ascent into Singularity; the latter is a book about life-prolonging technologies that will assist baby-boomers in living long enough to see the day when technological immortality is achieved.</p>
<p>See what I meant about his being a Heinlein hero?</p>
<p>I still don&#146;t know if the Singularity is a spiritual or a technological belief system. It has all the trappings of spirituality, to be sure. If you are pure and kosher, if you live right and if your society is just, then you will live to see a moment of Rapture when your flesh will slough away leaving nothing behind but your ka, your soul, your consciousness, to ascend to an immortal and pure state.</p>
<p>I wrote a novel called <i>Down and Out in the Magic Kingdom</i> where characters could make backups of themselves and recover from them if something bad happened, like catching a cold or being assassinated. It raises a lot of existential questions: most prominently: are you still you when you&#146;ve been restored from backup?</p>
<p>The traditional AI answer is the Turing Test, invented by Alan Turing, the gay pioneer of cryptography and artificial intelligence who was forced by the British government to take hormone treatments to &quot;cure&quot; him of his homosexuality, culminating in his suicide in 1954. Turing cut through the existentialism about measuring whether a machine is intelligent by proposing a parlor game: a computer sits behind a locked door with a chat program, and a person sits behind another locked door with his own chat program, and they both try to convince a judge that they are real people. If the computer fools a human judge into thinking that it&#146;s a person, then to all intents and purposes, it&#146;s a person.</p>
<p>So how do you know if the backed-up you that you&#146;ve restored into a new body&#151;or a jar with a speaker attached to it&#151;is really you? Well, you can ask it some questions, and if it answers the same way that you do, you&#146;re talking to a faithful copy of yourself.</p>
<p>Sounds good. But the me who sent his first story into <i>Asimov&#146;s</i> seventeen years ago couldn&#146;t answer the question, &quot;Write a story for <i>Asimov&#146;s</i>&quot; the same way the me of today could. Does that mean I&#146;m not me anymore?</p>
<p>Kurzweil has the answer.</p>
<p>&quot;If you follow that logic, then if you were to take me ten years ago, I could not pass for myself in a Ray Kurzweil Turing Test. But once the requisite uploading technology becomes available a few decades hence, you <i>could</i> make a perfect-enough copy of me, and it <i>would</i> pass the Ray Kurzweil Turing Test. The copy doesn&#146;t have to match the quantum state of my every neuron, either: if you meet me the next day, I&#146;d pass the Ray Kurzweil Turing Test. Nevertheless, none of the quantum states in my brain would be the same. There are quite a few changes that each of us undergoes from day to day; we don&#146;t closely examine the assumption that we are the same person.</p>
<p>&quot;We gradually change our pattern of atoms and neurons but we very rapidly change the particles the pattern is made up of. We used to think that in the brain&#151;the physical part of us most closely associated with our identity&#151;cells change very slowly, but it turns out that the components of the neurons, the tubules and so forth, turn over in only <i>days</i>. I&#146;m a completely different set of particles from what I was a week ago.</p>
<p>&quot;Consciousness is a difficult subject, and I&#146;m always surprised by how many people talk about consciousness routinely as if it could be easily and readily tested scientifically. But we can&#146;t postulate a consciousness detector that does not have some assumptions about consciousness built into it.</p>
<p>&quot;Science is about objective third party observations and logical deductions from them. Consciousness is about first-person, subjective experience, and there&#146;s a fundamental gap there. We live in a world of assumptions about consciousness. We share the assumption that other human beings are conscious, for example. But that breaks down when we go outside of humans, when we consider, for example, animals. Some say only humans are conscious and animals are instinctive and machinelike. Others see humanlike behavior in an animal and consider the animal conscious, but even these observers don&#146;t generally attribute consciousness to animals that aren&#146;t humanlike.</p>
<p>&quot;When machines are complex enough to have responses recognizable as emotions, those machines will be more humanlike than animals.&quot;</p>
<p>The Kurzweil Singularity goes like this: computers get better and smaller. Our ability to measure the world gains precision and grows ever cheaper. Eventually, we can measure the world inside the brain and make a copy of it in a computer that&#146;s as fast and complex as a brain, and voila, intelligence.</p>
<p>Here in the twenty-first century we like to view ourselves as ambulatory brains, plugged into meat-puppets that lug our precious grey matter from place to place. We tend to think of that grey matter as transcendently complex, and we think of it as being the bit that makes us <i>us</i>.</p>
<p>But brains aren&#146;t that complex, Kurzweil says. Already, we&#146;re starting to unravel their mysteries.</p>
<p>&quot;We seem to have found one area of the brain closely associated with higher-level emotions, the spindle cells, deeply embedded in the brain. There are tens of thousands of them, spanning the whole brain (maybe eighty thousand in total), which is an incredibly small number. Babies don&#146;t have any, most animals don&#146;t have any, and they likely only evolved over the last million years or so. Some of the high-level emotions that are deeply human come from these.</p>
<p>&quot;Turing had the right insight: base the test for intelligence on written language. Turing Tests really work. A novel is based on language: with language you can conjure up any reality, much more so than with images. Turing almost lived to see computers doing a good job of performing in fields like math, medical diagnosis and so on, but those tasks were easier for a machine than demonstrating even a child&#146;s mastery of language. Language is the true embodiment of human intelligence.&quot;</p>
<p>If we&#146;re not so complex, then it&#146;s only a matter of time until computers are more complex than us. When that comes, our brains will be model-able in a computer and that&#146;s when the fun begins. That&#146;s the thesis of <i>Spiritual Machines</i>, which even includes a (Heinlein-style) timeline leading up to this day.</p>
<p>Now, it may be that a human brain contains <i>n</i> logic-gates and runs at <i>x</i> cycles per second and stores <i>z</i> petabytes, and that <i>n</i> and <i>x</i> and <i>z</i> are all within reach. It may be that we can take a brain apart and record the position and relationships of all the neurons and sub-neuronal elements that constitute a brain.</p>
<p>But there are also a nearly infinite number of ways of modeling a brain in a computer, and only a finite (or possibly nonexistent) fraction of that space will yield a conscious copy of the original meat-brain. Science fiction writers usually hand-wave this step: in Heinlein&#146;s <i>The Moon Is a Harsh Mistress</i>, the gimmick is that once the computer becomes complex enough, with enough &quot;random numbers,&quot; it just wakes up.</p>
<p>Computer programmers are a little more skeptical. Computers have never been known for their skill at programming themselves&#151;they tend to be no smarter than the people who write their software.</p>
<p>But there are techniques for getting computers to program themselves, based on evolution and natural selection. A programmer creates a system that spits out lots&#151;thousands or even millions&#151;of randomly generated programs. Each one is given the opportunity to perform a computational task (say, sorting a list of numbers from greatest to least) and the ones that solve the problem best are kept aside while the others are erased. Now the survivors are used as the basis for a new generation of randomly mutated descendants, each based on elements of the code that preceded them. By running many instances of a randomly varied program at once, and by culling the least successful and regenerating the population from the winners very quickly, it is possible to <i>evolve</i> effective software that performs as well or better than the code written by human authors.</p>
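<p>The generate-evaluate-cull-mutate loop described above can be sketched in a few lines of Python. This is a toy illustration only: it evolves a fixed bit string toward a target rather than evolving real programs, and the target, population size, and mutation rate are invented stand-ins, not anything from the essay.</p>

```python
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]  # stand-in "specification" for the task
POP_SIZE = 50
GENERATIONS = 200
MUTATION_RATE = 0.05

def fitness(candidate):
    # Score a candidate by how many positions match the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate):
    # Flip each bit with a small probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in candidate]

def evolve():
    # Start from a population of random candidates.
    population = [[random.randint(0, 1) for _ in TARGET]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Cull: keep the fittest half as survivors.
        population.sort(key=fitness, reverse=True)
        survivors = population[: POP_SIZE // 2]
        if fitness(survivors[0]) == len(TARGET):
            break
        # Regenerate: each survivor spawns a mutated descendant.
        population = survivors + [mutate(s) for s in survivors]
    return max(population, key=fitness)

best = evolve()
```

<p>The essential point of the paragraph survives even in this toy: without the fitness function supplying selection pressure, the loop would produce nothing but random drift.</p>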
<p>Indeed, evolutionary computing is a promising and exciting field that&#146;s realizing real returns through cool offshoots like &quot;ant colony optimization&quot; and similar approaches that are showing good results in fields as diverse as piloting military UAVs and efficiently provisioning car-painting robots at automotive plants.</p>
<p>So if you buy Kurzweil&#146;s premise that computation is getting cheaper and more plentiful than ever, then why not just use evolutionary algorithms to<i> evolve </i>the best way to model a scanned-in human brain such that it &quot;wakes up&quot; like Heinlein&#146;s Mike computer?</p>
<p>Indeed, this is the crux of Kurzweil&#146;s argument in <i>Spiritual Machines</i>: if we have computation to spare and a detailed model of a human brain, we need only combine them and out will pop the mechanism whereby we may upload our consciousness to digital storage media and transcend our weak and bothersome meat forever.</p>
<p>But it&#146;s a cheat. Evolutionary algorithms depend on the same mechanisms as real-world evolution: heritable variation of candidates and a system that culls the least-suitable candidates. The latter&#151;the fitness factor that determines which individuals in a cohort breed and which vanish&#151;is the key to a successful evolutionary system. Without it, there&#146;s no pressure for the system to achieve the desired goal: merely mutation and more mutation.</p>
<p>But how can a machine evaluate which of a trillion models of a human brain is &quot;most like&quot; a conscious mind? Or better still: which one is most like the individual whose brain is being modeled?</p>
<p>&quot;It is a sleight of hand in <i>Spiritual Machines</i>,&quot; Kurzweil admits. &quot;But in <i>The Singularity Is Near</i>, I have an in-depth discussion about what we know about the brain and how to model it. Our tools for understanding the brain are subject to the Law of Accelerating Returns, and we&#146;ve made more progress in reverse-engineering the human brain than most people realize.&quot; This is a tasty Kurzweilism that observes that improvements in technology yield tools for improving technology, round and round, so that the thing that progress begets more than anything is more and yet faster progress.</p>
<p>&quot;Scanning resolution of human tissue&#151;both spatial and temporal&#151;is doubling every year, and so is our knowledge of the workings of the brain. The brain is not one big neural net, the brain is several hundred different regions, and we can understand each region, we can model the regions with mathematics, most of which have some nexus with chaos and self-organizing systems. This has already been done for a couple dozen regions out of the several hundred.</p>
<p>&quot;We have a good model of a dozen or so regions of the auditory and visual cortex, how we strip images down to very low-resolution movies based on pattern recognition. Interestingly, we don&#146;t actually see things, we essentially hallucinate them in detail from what we see from these low resolution cues. Past the early phases of the visual cortex, detail doesn&#146;t reach the brain.</p>
<p>&quot;We are getting <i>exponentially</i> more knowledge. We can get detailed scans of neurons working in vivo, and are beginning to understand the chaotic algorithms underlying human intelligence. In some cases, we are getting comparable performance of brain regions in simulation. These tools will continue to grow in detail and sophistication.</p>
<p>&quot;We can have confidence of reverse-engineering the brain in twenty years or so. The reason that brain reverse engineering has not contributed much to artificial intelligence is that up until recently we didn&#146;t have the right tools. If I gave you a computer and a few magnetic sensors and asked you to reverse-engineer it, you might figure out that there&#146;s a magnetic device spinning when a file is saved, but you&#146;d never get at the instruction set. Once you reverse-engineer the computer fully, however, you can express its principles of operation in just a few dozen pages.</p>
<p>&quot;Now there are new tools that let us see the interneuronal connections and their signaling, <i>in vivo</i>, and in real-time. We&#146;re just now getting these tools and there&#146;s very rapid application of the tools to obtain the data.</p>
<p>&quot;Twenty years from now we will have realistic simulations and models of all the regions of the brain and [we will] understand how they work. We won&#146;t blindly or mindlessly copy those methods, we will understand them and use them to improve our AI toolkit. So we&#146;ll learn how the brain works and then apply the sophisticated tools that we will obtain, as we discover how the brain works.</p>
<p>&quot;Once we understand a subtle science principle, we can isolate, amplify, and expand it. Air goes faster over a curved surface: from that insight we isolated, amplified, and expanded the idea and invented air travel. We&#146;ll do the same with intelligence.</p>
<p>&quot;Progress is exponential&#151;not just a measure of power of computation, number of Internet nodes, and magnetic spots on a hard disk&#151;the rate of paradigm shift is itself accelerating, doubling every decade. Scientists look at a problem and they intuitively conclude that since we&#146;ve solved 1 percent over the last year, it&#146;ll therefore be one hundred years until the problem is exhausted: but the rate of progress doubles every decade, and the power of the information tools (in price-performance, resolution, bandwidth, and so on) doubles every year. People, even scientists, don&#146;t grasp exponential growth. During the first decade of the human genome project, we only solved 2 percent of the problem, but we solved the remaining 98 percent in five years.&quot;</p>
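<p>Kurzweil&#146;s genome-project arithmetic is easy to check. Granting his assumption that cumulative progress doubles every year, going from 2 percent done to 100 percent requires only log<sub>2</sub>(50), roughly 5.6, further doublings, which is about five to six more years rather than the fifty decades linear intuition would predict:</p>

```python
import math

# Under the assumption that cumulative progress doubles yearly:
done = 0.02                        # 2% of the genome solved after a decade
doublings = math.log2(1.0 / done)  # yearly doublings needed to reach 100%
print(f"{doublings:.1f} more years")  # roughly 5.6
```

<p>This is only a back-of-the-envelope restatement of the claim, not a model of the actual sequencing effort.</p>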
<p>But Kurzweil doesn&#146;t think that the future will arrive in a rush. As William Gibson observed, &quot;The future is here, it&#146;s just not evenly distributed.&quot;</p>
<p>&quot;Sure, it&#146;d be interesting to take a human brain, scan it, reinstantiate the brain, and run it on another substrate. That will ultimately happen.&quot;</p>
<p>&quot;But the most salient scenario is that we&#146;ll <i>gradually</i> merge with our technology. We&#146;ll use nanobots to kill pathogens, then to kill cancer cells, and then they&#146;ll go into our brain and do benign things there like augment our memory, and very gradually they&#146;ll get more and more sophisticated. There&#146;s no single great leap, but there is ultimately a great leap comprised of many small steps.</p>
<p>&quot;In <i>The Singularity Is Near</i>, I describe the radically different world of 2040, and how we&#146;ll get there one benign change at a time. The Singularity will be gradual, smooth.</p>
<p>&quot;Really, this is about augmenting our biological thinking with nonbiological thinking. We have a capacity of 10<sup>26</sup> to 10<sup>29</sup> calculations per second (cps) in the approximately 10<sup>10</sup> biological human brains on Earth and that number won&#146;t change much in fifty years, but nonbiological thinking will just crash through that. By 2049, nonbiological thinking capacity will be on the order of a billion times that. We&#146;ll get to the point where bio thinking is relatively insignificant.</p>
<p>&quot;People didn&#146;t throw their typewriters away when word-processing started. There&#146;s always an overlap&#151;it&#146;ll take time before we realize how much more powerful nonbiological thinking will ultimately be.&quot;</p>
<p>It&#146;s well and good to talk about all the stuff we can do with technology, but it&#146;s a lot more important to talk about the stuff we&#146;ll be <i>allowed </i>to do with technology. Think of the global freak-out caused by the relatively trivial advent of peer-to-peer file-sharing tools: Universities are wiretapping their campuses and disciplining computer science students for writing legitimate, general purpose software; grandmothers and twelve-year-olds are losing their life savings; privacy and due process have sailed out the window without so much as a by-your-leave.</p>
<p>Even P2P&#146;s worst enemies admit that this is a general-purpose technology with good and bad uses, but when new tech comes along it often engenders a response that countenances punishing an infinite number of innocent people to get at the guilty.</p>
<p>What&#146;s going to happen when the new technology paradigm isn&#146;t song-swapping, but transcendent super-intelligence? Will the reactionary forces be justified in razing the whole ecosystem to eliminate a few parasites who are doing negative things with the new tools?</p>
<p>&quot;Complex ecosystems will always have parasites. Malware [malicious software] is the most important battlefield today.</p>
<p>&quot;<i>Everything</i> will become software&#151;objects will be malleable, we&#146;ll spend lots of time in VR, and computhought will be orders of magnitude more important than biothought.</p>
<p>&quot;Software is already complex enough that we have an ecological terrain that has emerged just as it did in the bioworld.</p>
<p>&quot;That&#146;s partly because technology is unregulated and people have access to the tools to create malware and the medicine to treat it. Today&#146;s software viruses are clever and stealthy and not simpleminded. <i>Very</i> clever.</p>
<p>&quot;But here&#146;s the thing: you don&#146;t see people advocating shutting down the Internet because malware is so destructive. I mean, malware is potentially more than a nuisance&#151;emergency systems, air traffic control, and nuclear reactors all run on vulnerable software. It&#146;s an important issue, but the potential damage is still a tiny fraction of the benefit we get from the Internet.</p>
<p>&quot;I hope it&#146;ll remain that way&#151;that the Internet won&#146;t become a regulated space like medicine. Malware&#146;s not the most important issue facing human society today. Designer bioviruses are. People are concerned about WMDs, but the most daunting WMD would be a designed biological virus. The means exist in college labs to create destructive viruses that erupt and spread silently with long incubation periods.</p>
<p>&quot;Importantly, a would-be bio-terrorist doesn&#146;t have to put malware through the FDA&#146;s regulatory approval process, but scientists working to fix bio-malware <i>do</i>.</p>
<p>&quot;In Huxley&#146;s<i> Brave New World</i>, the rationale for the totalitarian system was that technology was too dangerous and needed to be controlled. But that just pushes technology underground where it becomes <i>less</i> stable. Regulation gives the edge of power to the irresponsible who won&#146;t listen to the regulators anyway.</p>
<p>&quot;The way to put more stones on the defense side of the scale is to put more resources into defensive technologies, not create a totalitarian regime of Draconian control.</p>
<p>&quot;I advocate a one hundred billion dollar program to accelerate the development of anti-biological virus technology. The way to combat this is to develop broad tools to destroy viruses. We have tools like RNA interference, discovered just in the past two years, that block gene expression. We could develop means to sequence the genes of a new virus (SARS only took thirty-one days) and respond to it in a matter of days.</p>
<p>&quot;Think about it. There&#146;s no FDA for software, no certification for programmers. The government is thinking about it, though! The reason the FCC is contemplating Trusted Computing mandates,&quot;&#151;a system to restrict what a computer can do by means of hardware locks embedded on the motherboard&#151;&quot;is that computing technology is broadening to cover everything. So now you have communications bureaucrats, biology bureaucrats, all wanting to regulate computers.</p>
<p>&quot;Biology would be a lot more stable if we moved away from regulation&#151;which is extremely irrational and onerous and doesn&#146;t appropriately balance risks. Many medications are not available today even though they should be. The FDA always wants to know what happens if we approve this and will it turn into a thalidomide situation that embarrasses us on CNN?</p>
<p>&quot;Nobody asks about the harm that will certainly accrue from delaying a treatment for one or more years. There&#146;s no political weight at all; people have been dying from diseases like heart disease and cancer for as long as we&#146;ve been alive. Attributable risks get 100-1000 times more weight than unattributable risks.&quot;</p>
<p>Is this spirituality or science? Perhaps it is the melding of both&#151;more shades of Heinlein, this time the weird religions founded by people who took <i>Stranger in a Strange Land</i> way too seriously.</p>
<p>After all, this is a system of belief that dictates a means by which we can care for our bodies virtuously and live long enough to transcend them. It is a system of belief that concerns itself with the meddling of non-believers, who work to undermine its goals through irrational systems predicated on their disbelief. It is a system of belief that asks and answers the question of what it means to be human.</p>
<p>It&#146;s no wonder that the Singularity has come to occupy so much of the science fiction narrative in these years. Science or spirituality, you could hardly ask for a subject better tailored to technological speculation and drama.</p>
<p><i>&copy; 2005 <a href="/cory-doctorow" target="_top">Cory Doctorow</a>. Reprinted with permission.</i></p>
]]></content:encoded>
			<wfw:commentRss>https://www.writingsbyraykurzweil.com/thought-experiments-when-the-singularity-is-more-than-a-literary-device-an-interview-with-futurist-inventor-ray-kurzweil/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Lunch with Mikhail Gorbachev</title>
		<link>https://www.writingsbyraykurzweil.com/lunch-with-mikhail-gorbachev-2</link>
		<comments>https://www.writingsbyraykurzweil.com/lunch-with-mikhail-gorbachev-2#respond</comments>
		<pubDate>Tue, 19 Apr 2005 02:40:13 +0000</pubDate>
								<dc:creator>Ray Kurzweil</dc:creator>
		
		
				<category><![CDATA[essays]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[With only 53,000 engineering graduates a year compared to Russia's 200,000, the U.S. needs to "communicate the importance of science in today's world," Mikhail Gorbachev told Ray Kurzweil in a luncheon discussion that ranged from blogs to nuclear disarmament and longevity.]]></description>
			<content:encoded><![CDATA[<p>On April 12, 2005, Mikhail Gorbachev delivered the keynote address at the annual Spring meeting of the <a href="http://www.masoftware.org/" target="_blank">Massachusetts Software Council</a> (MSC), of which I am a board member. He started with reminiscences of his days as the last leader of the Soviet Union. He recalled that his first impression of Ronald Reagan was that he was a &quot;dinosaur,&quot; and later heard that Reagan&#8217;s first impression of him was that Gorbachev was a &quot;diehard Bolshevik.&quot; <span id="more-80891"></span></p>
<p>A few days after these negative first impressions, however, their relationship warmed, and this first Summit produced an arms control agreement expressing for the first time the idea that nuclear war can never be won and must never be fought. He referred to these early efforts as bridges that were built to overcome decades of distrust. The bridges now need renovation, and the remarkable information technology revolution is a key new bridge being built. </p>
<p>He reflected on the perestroika movement that he initiated. He quoted a Chinese prime minister who was asked what the impact of the French revolution had been on China, and responded &quot;too soon to tell.&quot; Similarly, we cannot yet judge perestroika, but if he had to answer in a word whether it had won or lost, he would say it had won. &quot;The clock will not be turned back in Russia. . . . We thought we could solve many problems in a short period of time. . .We freed Eastern Europe. . . We settled the animosity with China with a new friendship. . .We built a new relationship of cooperation with the United States.&quot;</p>
<p>He recalled how in 1986, he told the Communist Party Congress meeting (his first as leader) that &quot;we are living in an interdependent and interrelated world.&quot; He said this was a new idea twenty years ago, but is now an article of faith. It was ten years later, in 1996, that he first heard the word &quot;outsourcing&quot; at an information technology industry conference. He said that Russia and Ukraine are benefiting from the Internet&#8217;s ability to create a single world information economy, with a billion dollars of annual information exports and a growth rate of 50 percent. </p>
<p>For a more complete recollection of Gorbachev&#8217;s keynote address, see Dan Bricklin&#8217;s excellent <a href="http://danbricklin.com/log/gorbachev.htm" target="_blank">blog</a>. </p>
<p>Gorbachev received a very enthusiastic standing ovation. There was a private reception, and then a small private lunch. I was fortunate to have the opportunity to sit next to Gorbachev and his longtime translator (Pavel Palazchenko). To my right was Dan Bricklin (creator of VisiCalc, the first spreadsheet software), and to his right was Gorbachev&#8217;s daughter Irina who runs the Gorbachev Foundation. Also sitting with us at the head table was John Cullinane (software industry pioneer, cofounder of the MSC), Bob Metcalfe (inventor of Ethernet, recent recipient of the National Medal of Technology), George Bell (founder of uPromise, current Chairman of the MSC), Joyce Plotkin (MSC President, who did an outstanding job organizing this historic event), and Monster executives Jeff Taylor and Andy McKelvey. </p>
<p>Here is my recollection of some of our conversation during the 90-minute luncheon:</p>
<p><i>Ray Kurzweil</i>: You spoke about the role of the technology revolution in building new bridges of understanding. I&#8217;m concerned about the difficulty we&#8217;re having in this country in attracting our young people into technical careers. I&#8217;ve been gathering statistics on this. Ten years ago, we had about 60,000 engineering graduates in the U.S. That figure is now around 53,000. In China, the comparable figure ten years ago was about 10,000, a small fraction of the U.S., but has now soared to over 300,000. There is a similar trend in all scientific areas, and also at the doctoral level. India is much the same. How is this going in Russia?</p>
<p><i>Mikhail Gorbachev</i>: The figure in Russia is over 200,000 engineering graduates per year.</p>
<p><i>Ray</i>: So, how are you accomplishing this?</p>
<p><i>Mikhail</i>: Science has always been an attraction for our talented students. They understand its importance in today&#8217;s world, and they value the hard work it represents. We need both engineering and the liberal arts, and the waves of interest in these respectively go up and down. </p>
<p><i>Ray</i>: How can we do better here?</p>
<p><i>Mikhail</i>: You have to communicate the importance of science in today&#8217;s world.</p>
<p><i>Ray</i>: You would think that would be obvious. I think the leadership has to come from the top.</p>
<p><i>Mikhail</i>: Indeed.</p>
<p><i>Ray</i>: There was a spurt of interest in science careers in the United States about four decades ago that your country was responsible for. </p>
<p><i>Mikhail</i>: (smiling) Yes, today is the anniversary of Yuri Gagarin&#8217;s flight into space, the first person in outer space. We were competitors then, now we work together. </p>
<p><i>Dan Bricklin</i>: What information technologies do you use?</p>
<p><i>Mikhail</i>: I use a cell phone, a computer, a laptop, and the Internet, but not email. I do use the web a lot to get information, but generally not search engines. There are newspapers in Russia that are published only on the web, and that have excellent political and social commentary. I read about a dozen of these.</p>
<p><i>Dan</i>: Kind of like blogs?</p>
<p><i>Mikhail</i>: They&#8217;re not considered blogs, but they are similar. The Internet has a lot of freedom in Russia, and much important journalism and discussion takes place there. </p>
<p><i>Dan</i>: One of our commentators, Chris Daly, wrote recently that blogs and online journalism play a similar role today as the pamphlets of the United States revolutionary times.</p>
<p><i>Mikhail</i>: (smiling) Yes, very much so.</p>
<p><i>Ray</i>: It has been my view that the Internet is a major factor in the democratization we&#8217;ve seen in the world over the past fifteen years.</p>
<p><i>Mikhail</i>: That&#8217;s a fair statement.</p>
<p><i>Ray</i>: My first book, which I wrote in the 1980s, while you were President, talked about how the emerging decentralized electronic communication &#8211; fax machines, teletype-based email &#8211; would ultimately foster democracy in the Soviet Union. </p>
<p><i>Mikhail</i>: It was a big factor, to be sure. But today, the Internet also spreads hate.</p>
<p><i>Ray</i>: Yes, well, technology is always a double-edged sword. </p>
<p><i>Mikhail</i>: We have to bring the benefits of technology to everyone. The rich-poor divide is getting worse, and poverty in the world underlies much of the instability we see. </p>
<p><i>Ray</i>: A major source of instability is the rise of fundamentalism. Fundamentalist leaders don&#8217;t seem to come from the poor.</p>
<p><i>Mikhail</i>: The leaders, that&#8217;s true. But the support and attraction for fundamentalist and destructive movements is fueled by the despair of poverty. </p>
<p><i>Ray</i>: There is a lot of attention &#8211; appropriate attention to be sure &#8211; to nuclear disarmament, but surprisingly little attention to the fact that the United States and Russia still have these enormous stockpiles of thermonuclear weapons that represent an existential threat to human survival. Despite the apparent friendship of Russia and America, these stockpiles have not been reduced at all.</p>
<p><i>Mikhail</i>: Yes, it is a tragedy. The U.S. still has a doctrine that allows for the possibility of use of these weapons. A new doctrine is needed that would allow these weapons to be reduced.</p>
<p><i>Ray</i>: That would certainly move the world in a safer direction, and also inspire the overall nuclear disarmament effort.</p>
<p><i>Mikhail</i>: Very much so. It would also be good if America would agree to a comprehensive test ban treaty.</p>
<p><i>Ray</i>: You should discuss this with Putin and Bush.</p>
<p><i>Mikhail</i>: (smiling) Okay.</p>
<p><i>Ray</i>: How often do you talk with President Putin?</p>
<p><i>Mikhail</i>: We talk on occasion, and he listens to my ideas. But he is, of course, an independent thinker and leader. He is a strong supporter of free speech, which I personally benefit from.</p>
<p><i>Jeff Taylor</i>: What is your view of the Yukos affair?</p>
<p><i>Mikhail</i>: Well, people should pay their taxes.</p>
<p><i>Jeff</i>: Do you think the actions against Khodorkovsky were politically motivated?</p>
<p><i>Mikhail</i>: Really, I don&#8217;t. Too many powerful people in Russia have not been paying their taxes, and this situation must be confronted. Khodorkovsky did not make his money from the ground up the way, say, Bill Gates did. He was given the opportunity to take over a public asset, which of course was legal and appropriate. But he is obligated to pay the taxes he owes. The situation has some similarities to your Enron affair. </p>
<p><i>Ray</i> (after giving Gorbachev an inscribed copy of my book, Fantastic Voyage: Live Long Enough to Live Forever, coauthored with Terry Grossman, M.D.): Here&#8217;s to your health, so that you can continue to contribute to the world for many decades to come.</p>
<p><i>Mikhail</i>: (laughing) Oh I think it is too late for me.</p>
<p><i>Ray</i>: You are very active, no?</p>
<p><i>Mikhail</i>: Yes, I walk a great deal.</p>
<p><i>Ray</i>: That&#8217;s my main exercise also. That&#8217;s the best exercise.</p>
<p><i>Mikhail</i>: I enjoy an occasional vodka.</p>
<p><i>Ray</i>: Actually, we recommend moderate use of alcohol, it appears to benefit longevity.</p>
<p><i>Mikhail</i>: (Smiling) Well, then, maybe I will live a long time.</p>
<p><i>Ray</i>: I would bet on it. </p>
<p><i>Dan</i>: As someone who has changed the world, what advice would you give to kids today who want to change the world?</p>
<p><i>Mikhail</i>: My advice would be to have a dialogue with people, to help overcome prejudice, to build bridges of understanding. </p>
<p><i>Mikhail&#8217;s first toast</i>: I&#8217;d like to toast the Massachusetts Software Council and everyone who made this event possible, with thanks for your warmth and hospitality. </p>
<p><i>Mikhail&#8217;s second toast</i>: To the future success of the Russian and American information industries.</p>
<p><i>Mikhail&#8217;s third toast</i>: It is a tradition in our country that the third toast is always for the women. So, to the health of all the lovely women here at the luncheon.</p>
<p><i>Mikhail&#8217;s fourth toast</i>: Only monks and nuns do their thinking alone. We need to communicate our thoughts and work together. </p>
<p>At the end of the lunch, Gorbachev gave warm bear hugs to Dan Bricklin, Joyce Plotkin, and myself, which was an unexpected treat.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.writingsbyraykurzweil.com/lunch-with-mikhail-gorbachev-2/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>The Future of Intelligent Technology and Its Impact on Disabilities</title>
		<link>https://www.writingsbyraykurzweil.com/the-future-of-intelligent-technology-and-its-impact-on-disabilities-2</link>
		<comments>https://www.writingsbyraykurzweil.com/the-future-of-intelligent-technology-and-its-impact-on-disabilities-2#respond</comments>
		<pubDate>Tue, 16 Mar 2004 03:58:57 +0000</pubDate>
								<dc:creator>Ray Kurzweil</dc:creator>
		
		
				<category><![CDATA[essays]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[Future technologies for sensory impairments will include automatic subtitles on the fly for the hearing-impaired,  pocket-sized reading machines, automatic language translators, and intelligent devices sent through the bloodstream. These devices will also augment the senses for the general population.]]></description>
			<content:encoded><![CDATA[<p><i>Originally published in the <a href="http://www.afb.org/JVIB/jvib971012.asp" target="_blank">Journal of Visual Impairment &amp; Blindness</a> October 2003. Excerpted on KurzweilAI.net on March 15, 2004. </i></p>
<h2>Technology for sensory impairments </h2>
<p>By 2010, computers will disappear. They will be so tiny that they will be embedded in our environment, in clothing, and so on. We will have high-bandwidth connections to the Internet at all times. We will have eyeglasses for the sighted that display images directly onto the retina, and contact lenses for full-immersion virtual reality. We will also have relatively powerful (but not human-level) artificial intelligence (AI) on web sites&#8212;artificial personalities such as the avatar-like Ramona, who greets visitors and answers questions at the KurzweilAI.net web site.<span id="more-80887"></span></p>
<p>For people who are hearing impaired, we will have systems that provide subtitles around the world. We&#8217;re getting close to the point where speaker-independent speech recognition will become common. Machines will create subtitles automatically and on the fly, and these subtitles will be a pretty accurate representation of what people are saying. It won&#8217;t be error-free&#8212;but then, our own auditory understanding is not error-free, either. We will also have listening systems that will allow deaf persons to understand what people are saying.</p>
<p>For people who are blind, we will have reading machines within a few years that are not just sitting on a desk, but are tiny devices you put in your pocket. You&#8217;ll take pictures of signs on the wall, handouts at meetings, and so on. We encounter text everywhere&#8212;on the back of packages, on menus, on electronic displays&#8212;and these pocket-sized reading machines will enable a blind person to read this material. By 2010, these devices will be very tiny. You will be able to wear one on your lapel and scan in all directions. These devices probably will be used by sighted people as well, because they will allow us to get visual information from all around us.</p>
<p>Such devices will also translate the information from one language to another for everyone. The current reading machine technology used in the Kurzweil 1000 and Kurzweil 3000 reading systems uses a new generation of synthetic speech. Although it sounds relatively normal, it is not recorded human speech.</p>
<p>We are not yet on the verge of creating cybernetic geniuses. But we have many systems in our societies that already can perform intelligently in narrow areas. We have hundreds of examples of these machines. Some of them are flying and landing our airplanes, or guiding intelligent weapons. We have electrocardiogram systems that provide an analysis as accurate as your doctor&#8217;s. We have some systems that can diagnose blood-cell images, others that automatically make financial decisions involving stock-market investments. In fact, $1 trillion in stock-market investments use these systems. Other intelligent systems look for credit card fraud and find optimal routes for e-mail messages and cell phone calls. Likewise, a disabled person has a narrow need. A person who is blind needs access to ordinary printed material. A person who is deaf needs to be able to understand ordinary speech from people he or she encounters at random. Devices to perform these tasks can work in close concert with the much broader, more flexible intelligence of the disabled persons themselves.</p>
<h2>Enhancing our own intelligence </h2>
<p>In some ways, machines can perform better than humans. Computers are much faster than people when they master tasks and can share knowledge. Something a computer has learned can be shared with thousands of other computers instantly, whereas, if I learn French, I can&#8217;t just download that to you.</p>
<p>The implication of this will not be just an alien invasion of intelligent machines to compete with us. We are going to enhance our own intelligence by getting closer and closer to machine intelligence&#8212;and that&#8217;s already happening.</p>
<p>There are many people walking around now who are essentially cyborgs and have computers in their brains interfacing with their biological neurons. The Food and Drug Administration just approved a neural implant for Parkinson&#8217;s disease that replaces the portion of the brain destroyed by that disease. And there are more than a dozen different types of implants like that in use or being developed. Now, they require surgical implantation; but by 2029, we will be able to send these intelligent devices through the bloodstream.</p>
<h2>The importance of hanging around </h2>
<p>The most profound implication of these developments will be an expansion of human intelligence. Right now, we are restricted to a mere hundred trillion interneuronal connections. That may sound like a large number, but I personally find it rather limiting. Many people send me books to read, web sites to visit, conferences to attend, and I would love to be able to do all these things, but our human bandwidth is quite limited.</p>
<p>Ultimately, we won&#8217;t be restricted to 100 trillion connections. We will be able to create new virtual connections with nanobots, so we can expand the number of interneuronal connections we have in our brain manyfold. We are today profoundly expanding human intelligence as a species through the Internet and all of our technology. Through much more intimate connections with this technology, we will continue to profoundly expand human intelligence.</p>
<p>Human life expectancy is another one of those exponential trends. Every year during the 18th and 19th centuries, we added a few days to human life expectancy. Now, we are at the intersection of biology and information science. Today, we are adding about 120 days every year to human life expectancy. With the full flowering of the biotechnology revolution, within 10 years, we will be adding more than a year to the human life expectancy every year.</p>
<p>So if we can hang in there for another 10 years, we may actually get to experience the full measure of the profound century ahead.</p>
<p><i>© 2004 <a href="http://www.afb.org/jvib/main.asp" target="_blank">JVIB, American Foundation for the Blind</a>. Reprinted with permission.</i> </p>
]]></content:encoded>
			<wfw:commentRss>https://www.writingsbyraykurzweil.com/the-future-of-intelligent-technology-and-its-impact-on-disabilities-2/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Kurzweil’s Law (aka “the law of accelerating returns”)</title>
		<link>https://www.writingsbyraykurzweil.com/kurzweils-law-aka-the-law-of-accelerating-returns-2</link>
		<comments>https://www.writingsbyraykurzweil.com/kurzweils-law-aka-the-law-of-accelerating-returns-2#respond</comments>
		<pubDate>Mon, 12 Jan 2004 08:44:55 +0000</pubDate>
								<dc:creator></dc:creator>
		
		
				<category><![CDATA[essays]]></category>

		<guid isPermaLink="false">https://www.writingsbyraykurzweil.com/?p=370969</guid>
		<description><![CDATA[Published on Edge.org and KurzweilAI.net Jan. 12, 2003 Evolution applies positive feedback in that the more capable methods resulting from one stage of evolutionary progress are used to create the next stage. Each epoch of evolution has progressed more rapidly by building on the products of the previous stage. Evolution works through indirection: evolution created [&#8230;]]]></description>
			<content:encoded><![CDATA[<p><i>Published on <a href="http://www.edge.org/q2004/q04_print.html" target="_blank">Edge.org</a> and KurzweilAI.net Jan. 12, 2003</i></p>
<p>Evolution applies positive feedback in that the more capable methods resulting from one stage of evolutionary progress are used to create the next stage.  Each epoch of evolution has progressed more rapidly by building on the products of the previous stage.
</p>
<p>Evolution works through indirection: evolution created humans, humans created technology, humans are now working with increasingly advanced technology to create new generations of technology.  As a result, the rate of progress of an evolutionary process increases exponentially over time.  <span id="more-370969"></span></p>
<p>Over time, the &#8220;order&#8221; of the information embedded in the evolutionary process (i.e., the measure of how well the information fits a purpose, which in evolution is survival) increases.  </p>
<p><b>A comment on the nature of order.  </b>The concept of the &#8220;order&#8221; of information is important here, as it is not the same as the opposite of disorder.  If disorder represents a random sequence of events, then the opposite of disorder should imply &#8220;not random.&#8221;  Information is a sequence of data that is meaningful in a process, such as the DNA code of an organism, or the bits in a computer program.  Noise, on the other hand, is a random sequence.  Neither noise nor information is predictable.  Noise is inherently unpredictable, but carries no information.  Information, however, is also unpredictable.  If we can predict future data from past data, then that future data stops being information.  We might consider an alternating pattern (&#8220;0101010. . . .&#8221;) to be orderly, but it carries no information (beyond the first couple of bits).  </p>
<p>Thus orderliness does not constitute order because order requires information.  However, order goes beyond mere information.  A recording of radiation levels from space represents information, but if we double the size of this data file, we have increased the amount of data, but we have not achieved a deeper level of order.  </p>
<p><i>Order is information that fits a purpose</i>.  The measure of order is the measure of how well the information fits the purpose.  In the evolution of life-forms, the purpose is to survive.  In an evolutionary algorithm (a computer program that simulates evolution to solve a problem) applied to, say, investing in the stock market, the purpose is to make money.  Simply having more information does not necessarily result in a better fit.  A superior solution for a purpose may very well involve less data.</p>
<p>The concept of &#8220;complexity&#8221; is often used to describe the nature of the information created by an evolutionary process.  Complexity is a close fit to the concept of order that I am describing, but is also not sufficient.  Sometimes, a deeper order &#8211; a better fit to a purpose &#8211; is achieved through simplification rather than further increases in complexity.  For example, a new theory that ties together apparently disparate ideas into one broader more coherent theory reduces complexity but nonetheless may increase the &#8220;order for a purpose&#8221; that I am describing.  Indeed, achieving simpler theories is a driving force in science.  Evolution has shown, however, that the general trend towards greater order does generally result in greater complexity.</p>
<p>Thus improving a solution to a problem &#8211; which may increase or decrease complexity &#8211; increases order.  Now that just leaves the issue of defining the problem.  Indeed, the key to an evolution algorithm (and to biological and technological evolution) is exactly this: defining the problem.  </p>
<p>We may note that this aspect of &#8220;Kurzweil&#8217;s Law&#8221; (the law of accelerating returns) appears to contradict the Second Law of Thermodynamics, which implies that entropy (randomness in a closed system) cannot decrease, and, therefore, generally increases.  However, the law of accelerating returns pertains to evolution, and evolution is not a closed system.  It takes place amidst great chaos, and indeed depends on the disorder in its midst, from which it draws its options for diversity.  And from these options, an evolutionary process continually prunes its choices to create ever greater order.  Even a crisis, such as the periodic large asteroids that have crashed into the Earth, although increasing chaos temporarily, ends up increasing &#8211; deepening &#8211; the order created by an evolutionary process.  </p>
<p> A primary reason that evolution &#8211; of life-forms or of technology &#8211; speeds up is that it builds on its own increasing order, with ever more sophisticated means of recording and manipulating information.  Innovations created by evolution encourage and enable faster evolution.  In the case of the evolution of life forms, the most notable early example is DNA, which provides a recorded and protected transcription of life&#8217;s design from which to launch further experiments.  In the case of the evolution of technology, ever improving human methods of recording information have fostered further technology.  The first computers were designed on paper and assembled by hand.  Today, they are designed on computer workstations with the computers themselves working out many details of the next generation&#8217;s design, and are then produced in fully-automated factories with human guidance but limited direct intervention.  </p>
<p> The evolutionary process of technology seeks to improve capabilities in an exponential fashion.  Innovators seek to improve things by multiples.  Innovation is multiplicative, not additive.  Technology, like any evolutionary process, builds on itself.  This aspect will continue to accelerate when the technology itself takes full control of its own progression. </p>
<p> We can thus conclude the following with regard to the evolution of life-forms and of technology &#8211; the law of accelerating returns as applied to an evolutionary process: An evolutionary process is not a closed system; therefore, evolution draws upon the chaos in the larger system in which it takes place for its options for diversity; and evolution builds on its own increasing order.  Therefore, in an evolutionary process, order increases exponentially.</p>
<p> A correlate of the above observation is that the &#8220;returns&#8221; of an evolutionary process (e.g., the speed, cost-effectiveness, or overall &#8220;power&#8221; of a process) increase exponentially over time.  We see this in Moore&#8217;s law, in which each new generation of computer chip (now spaced about two years apart) provides twice as many components, each of which operates substantially faster (because of the smaller distances required for the electrons to travel, and other innovations).  This exponential growth in the power and price-performance of information-based technologies &#8211; now roughly doubling every year &#8211; is not limited to computers, but is true for a wide range of technologies, measured many different ways.  </p>
<p> In another positive feedback loop, as a particular evolutionary process (e.g., computation) becomes more effective (e.g., cost effective), greater resources are deployed towards the further progress of that process.  This results in a second level of exponential growth (i.e., the rate of exponential growth itself grows exponentially).  For example, it took three years to double the price-performance of computation at the beginning of the twentieth century and two years around 1950; today it is doubling about once a year.  Not only is each chip doubling in power each year for the same unit cost, but the number of chips being manufactured is growing exponentially. </p>
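This double exponential can be made concrete with a toy model. The ~3-year initial doubling time echoes the figure in the text; the function name and the 50-year rate-halving constant are illustrative assumptions, not figures from the essay:

```python
import math

def price_performance(years, initial_doubling_time=3.0, rate_halving_time=50.0):
    """Toy double-exponential model: price-performance keeps doubling,
    while the doubling time itself shrinks exponentially.

    initial_doubling_time: ~3 years, the figure quoted for circa 1900.
    rate_halving_time: years for the doubling time to halve -- an
    illustrative assumption, not a measured constant.
    """
    # Doublings accumulated = integral of dt / doubling_time(t),
    # with doubling_time(t) = initial_doubling_time * exp(-k * t).
    k = math.log(2) / rate_halving_time
    doublings = (math.exp(k * years) - 1) / (k * initial_doubling_time)
    return 2.0 ** doublings
```

Because the exponent itself grows, `price_performance(100)` exceeds `price_performance(50)` squared; a fixed-rate exponential starting at 1 would give exactly the square.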
<p> Biological evolution is one such evolutionary process.  Indeed it is the quintessential evolutionary process.  It took place in a completely open system (as opposed to the artificial constraints in an evolutionary algorithm).  Thus many levels of the system evolved at the same time. </p>
<p> Technological evolution is another such evolutionary process.  Indeed, the emergence of the first technology-creating species resulted in the new evolutionary process of technology.  Therefore, technological evolution is an outgrowth of &#8211; and a continuation of &#8211; biological evolution.  Early stages of humanoid-created technology were barely faster than the biological evolution that created our species.  Homo sapiens evolved in a few hundred thousand years.  Early stages of technology &#8211; the wheel, fire, stone tools &#8211; took tens of thousands of years to evolve and be widely deployed.  A thousand years ago, a paradigm shift such as the printing press took on the order of a century to be widely deployed.  Today, major paradigm shifts, such as cell phones and the World Wide Web, were widely adopted in only a few years&#8217; time. </p>
<p>A specific paradigm (a method or approach to solving a problem, e.g., shrinking transistors on an integrated circuit as an approach to making more powerful computers) provides exponential growth until the method exhausts its potential.  When this happens, a paradigm shift (a fundamental change in the approach) occurs, which enables exponential growth to continue. </p>
<p>Each paradigm follows an &#8220;S-curve,&#8221; which consists of slow growth (the early phase of exponential growth), followed by rapid growth (the late, explosive phase of exponential growth), followed by a leveling off as the particular paradigm matures.
</p>
<p>During this third or maturing phase in the life cycle of a paradigm, pressure builds for the next paradigm shift, and research dollars are invested to create the next paradigm.  We can see this in the enormous investments being made today in the next computing paradigm &#8211; three-dimensional molecular computing &#8211; despite the fact that we still have at least a decade left for the paradigm of shrinking transistors on a flat integrated circuit using photolithography (Moore&#8217;s Law).  Generally, by the time a paradigm approaches its asymptote (limit) in price-performance, the next technical paradigm is already working in niche applications.  For example, engineers were shrinking vacuum tubes in the 1950s to provide greater price-performance for computers, and reached a point where it was no longer feasible to shrink tubes and maintain a vacuum.  At this point, around 1960, transistors had already achieved a strong niche market in portable radios.  </p>
<p>When a paradigm shift occurs for a particular type of technology, the process begins a new S-curve.  </p>
<p>Thus the acceleration of the overall evolutionary process proceeds as a sequence of S-curves, and the overall exponential growth consists of this cascade of S-curves.  </p>
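The cascade described above can be sketched numerically. The logistic centers, ceilings, and the 10x step between successive paradigms below are invented for illustration only:

```python
import math

def s_curve(t, center, ceiling, steepness=1.0):
    """Logistic S-curve: slow early growth, explosive middle, leveling off."""
    return ceiling / (1.0 + math.exp(-steepness * (t - center)))

def overall_capability(t, n_paradigms=5):
    """Sum of successive paradigms (hypothetical numbers): each S-curve is
    centered later and tops out 10x higher than its predecessor, so the
    envelope keeps climbing even as each individual paradigm saturates."""
    return sum(s_curve(t, center=10.0 * i, ceiling=10.0 ** i)
               for i in range(1, n_paradigms + 1))
```

Plotting `overall_capability` on a log scale would show the staircase of saturating paradigms tracing out a roughly straight (i.e., exponential) envelope.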
<p>The resources underlying the exponential growth of an evolutionary process are relatively unbounded.</p>
<p>One resource is the (ever-growing) order of the evolutionary process itself.  Each stage of evolution provides more powerful tools for the next.  In biological evolution, the advent of DNA allowed more powerful and faster evolutionary &#8220;experiments.&#8221;  Later, setting the &#8220;designs&#8221; of animal body plans during the Cambrian explosion allowed rapid evolutionary development of other body organs, such as the brain.  Or to take a more recent example, the advent of computer-assisted design tools allows rapid development of the next generation of computers.</p>
<p>The other required resource is the &#8220;chaos&#8221; of the environment in which the evolutionary process takes place and which provides the options for further diversity.  In biological evolution, diversity enters the process in the form of mutations and ever- changing environmental conditions.  In technological evolution, human ingenuity combined with ever-changing market conditions keep the process of innovation going.  </p>
<p>If we apply these principles at the highest level of evolution on Earth, the first step, the creation of cells, introduced the paradigm of biology.  The subsequent emergence of DNA provided a digital method to record the results of evolutionary experiments.  Then, the evolution of a species that combined rational thought with an opposable appendage (the thumb) caused a fundamental paradigm shift from biology to technology.  The upcoming primary paradigm shift will be from biological thinking to a hybrid combining biological and nonbiological thinking.  This hybrid will include &#8220;biologically inspired&#8221; processes resulting from the reverse engineering of biological brains. </p>
<p>If we examine the timing of these steps, we see that the process has continuously accelerated.  The evolution of life forms required billions of years for the first steps (e.g., primitive cells); later on, progress accelerated.  During the Cambrian explosion, major paradigm shifts took only tens of millions of years.  Later on, humanoids developed over a period of millions of years, and Homo sapiens over a period of only hundreds of thousands of years.  </p>
<p>With the advent of a technology-creating species, the exponential pace became too fast for evolution through DNA-guided protein synthesis and moved on to human-created technology.  Technology goes beyond mere tool making; it is a process of creating ever more powerful technology using the tools from the previous round of innovation, and is, thereby, an evolutionary process.  As I noted, the first technological steps took tens of thousands of years.  For people living in this era, there was little noticeable technological change in even a thousand years.  By 1000 AD, progress was much faster and a paradigm shift required only a century or two.  In the nineteenth century, we saw more technological change than in the nine centuries preceding it.  Then in the first twenty years of the twentieth century, we saw more advancement than in all of the nineteenth century.  Now, paradigm shifts occur in only a few years&#8217; time.  </p>
<p>The paradigm shift rate (i.e., the overall rate of technical progress) is currently doubling (approximately) every decade; that is, paradigm shift times are halving every decade (and the rate of acceleration is itself growing exponentially).  So, the technological progress in the twenty-first century will be equivalent to what would require (in the linear view) on the order of 200 centuries.  In contrast, the twentieth century saw only about 20 years of progress (again at today&#8217;s rate of progress) since we have been speeding up to current rates.  So the twenty-first century will see about a thousand times greater technological change than its predecessor. </p>
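These figures can be checked with back-of-the-envelope arithmetic, taking the year-2000 rate of progress as the unit and applying the stated doubling per decade (decade-level granularity is a simplifying assumption):

```python
# Progress measured in "years of progress at the year-2000 rate",
# assuming the rate of progress doubles every decade.

# 20th century, counting back from 2000: each earlier decade
# contributed half as much as the decade that followed it.
progress_20th = sum(10 * 0.5 ** k for k in range(10))

# 21st century: each decade contributes double the previous one,
# starting at roughly 2x the year-2000 rate.
progress_21st = sum(10 * 2 ** k for k in range(1, 11))

print(progress_20th, progress_21st)  # roughly 20, and about 200 centuries
```

The two sums come out to about 20 years and about 20,000 years respectively, matching the essay's "about 20 years" and "on the order of 200 centuries", and their ratio gives the roughly thousandfold difference claimed.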
]]></content:encoded>
			<wfw:commentRss>https://www.writingsbyraykurzweil.com/kurzweils-law-aka-the-law-of-accelerating-returns-2/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Kurzweil’s Law (aka “the law of accelerating returns”)</title>
		<link>https://www.writingsbyraykurzweil.com/kurzweils-law-aka-the-law-of-accelerating-returns</link>
		<comments>https://www.writingsbyraykurzweil.com/kurzweils-law-aka-the-law-of-accelerating-returns#respond</comments>
		<pubDate>Mon, 12 Jan 2004 03:44:46 +0000</pubDate>
								<dc:creator>Ray Kurzweil</dc:creator>
		
		
				<category><![CDATA[essays]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[In an evolutionary process, positive feedback increases order exponentially. A correlate is that the "returns" of an evolutionary process (such as the speed, cost-effectiveness, or overall "power" of a process) increase exponentially over time -- both for biology and technology. Ray Kurzweil submitted on essay based on that premise to Edge.org in response to John Brockman's question: "What's your law?"]]></description>
			<content:encoded><![CDATA[<p><i>Published on <a href="http://www.edge.org/q2004/q04_print.html" target="_blank">Edge.org</a> and KurzweilAI.net Jan. 12, 2003</i></p>
<p>Evolution applies positive feedback in that the more capable methods resulting from one stage of evolutionary progress are used to create the next stage.  Each epoch of evolution has progressed more rapidly by building on the products of the previous stage.
</p>
<p>Evolution works through indirection: evolution created humans, humans created technology, humans are now working with increasingly advanced technology to create new generations of technology.  As a result, the rate of progress of an evolutionary process increases exponentially over time.  <span id="more-80881"></span></p>
<p>Over time, the &#8220;order&#8221; of the information embedded in the evolutionary process (i.e., the measure of how well the information fits a purpose, which in evolution is survival) increases.  </p>
<p><b>A comment on the nature of order.  </b>The concept of the &#8220;order&#8221; of information is important here, as it is not the same as the opposite of disorder.  If disorder represents a random sequence of events, then the opposite of disorder should imply &#8220;not random.&#8221;  Information is a sequence of data that is meaningful in a process, such as the DNA code of an organism, or the bits in a computer program.  Noise, on the other hand, is a random sequence.  Neither noise nor information is predictable.  Noise is inherently unpredictable, but carries no information.  Information, however, is also unpredictable.  If we can predict future data from past data, then that future data stops being information.  We might consider an alternating pattern (&#8220;0101010. . . .&#8221;) to be orderly, but it carries no information (beyond the first couple of bits).  </p>
<p>Thus orderliness does not constitute order because order requires information.  However, order goes beyond mere information.  A recording of radiation levels from space represents information, but if we double the size of this data file, we have increased the amount of data, but we have not achieved a deeper level of order.  </p>
<p><i>Order is information that fits a purpose</i>.  The measure of order is the measure of how well the information fits the purpose.  In the evolution of life-forms, the purpose is to survive.  In an evolutionary algorithm (a computer program that simulates evolution to solve a problem) applied to, say, investing in the stock market, the purpose is to make money.  Simply having more information does not necessarily result in a better fit.  A superior solution for a purpose may very well involve less data.</p>
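<p>The idea that &#8220;order is information that fits a purpose&#8221; is exactly what the fitness function of an evolutionary algorithm formalizes.  The following is a minimal, illustrative sketch (all names and parameter values are my own choices, not anything specified in the essay); here the &#8220;purpose&#8221; is matching a fixed bit pattern, where a stock-market application would instead score simulated trading profit:</p>

```python
import random

def evolve(fitness, genome_len=16, pop_size=40, generations=60, seed=1):
    """Minimal evolutionary algorithm: selection, crossover, mutation.

    `fitness` scores how well a genome fits the purpose; higher is better.
    Returns the best genome found.
    """
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]             # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genome_len)     # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(genome_len)] ^= 1  # one point mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# The "purpose" here: match a fixed target pattern.
TARGET = [1, 0] * 8
best = evolve(lambda g: sum(x == t for x, t in zip(g, TARGET)))
```

<p>Note that the fitness function, not the quantity of data in the genome, defines the order achieved, in line with the point that a superior solution may involve less data.</p>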
<p>The concept of &#8220;complexity&#8221; is often used to describe the nature of the information created by an evolutionary process.  Complexity is a close fit to the concept of order that I am describing, but is also not sufficient.  Sometimes, a deeper order &#150; a better fit to a purpose &#150; is achieved through simplification rather than further increases in complexity.  For example, a new theory that ties together apparently disparate ideas into one broader more coherent theory reduces complexity but nonetheless may increase the &#8220;order for a purpose&#8221; that I am describing.  Indeed, achieving simpler theories is a driving force in science.  Evolution has shown, however, that the general trend towards greater order does generally result in greater complexity.</p>
<p>Thus improving a solution to a problem &#8211; which may increase or decrease complexity &#8211; increases order.  That leaves only the issue of defining the problem.  Indeed, the key to an evolutionary algorithm (and to biological and technological evolution) is exactly this: defining the problem.  </p>
<p>We may note that this aspect of &#8220;Kurzweil&#8217;s Law&#8221; (the law of accelerating returns) appears to contradict the Second Law of Thermodynamics, which implies that entropy (randomness in a closed system) cannot decrease, and, therefore, generally increases.  However, the law of accelerating returns pertains to evolution, and evolution is not a closed system.  It takes place amidst great chaos, and indeed depends on the disorder in its midst, from which it draws its options for diversity.  And from these options, an evolutionary process continually prunes its choices to create ever greater order.  Even a crisis, such as one of the periodic large asteroids that have crashed into the Earth, although increasing chaos temporarily, ends up increasing &#8211; deepening &#8211; the order created by an evolutionary process.  </p>
<p>A primary reason that evolution &#8211; of life-forms or of technology &#8211; speeds up is that it builds on its own increasing order, with ever more sophisticated means of recording and manipulating information.  Innovations created by evolution encourage and enable faster evolution.  In the case of the evolution of life forms, the most notable early example is DNA, which provides a recorded and protected transcription of life&#8217;s design from which to launch further experiments.  In the case of the evolution of technology, ever-improving human methods of recording information have fostered further technology.  The first computers were designed on paper and assembled by hand.  Today, they are designed on computer workstations with the computers themselves working out many details of the next generation&#8217;s design, and are then produced in fully automated factories with human guidance but limited direct intervention.  </p>
<p> The evolutionary process of technology seeks to improve capabilities in an exponential fashion.  Innovators seek to improve things by multiples.  Innovation is multiplicative, not additive.  Technology, like any evolutionary process, builds on itself.  This aspect will continue to accelerate when the technology itself takes full control of its own progression. </p>
<p>We can thus conclude the following with regard to the evolution of life-forms and of technology, namely the law of accelerating returns as applied to an evolutionary process: an evolutionary process is not a closed system; therefore, evolution draws upon the chaos in the larger system in which it takes place for its options for diversity; and evolution builds on its own increasing order.  Therefore, in an evolutionary process, order increases exponentially.</p>
<p>A correlate of the above observation is that the &#8220;returns&#8221; of an evolutionary process (e.g., the speed, cost-effectiveness, or overall &#8220;power&#8221; of a process) increase exponentially over time.  We see this in Moore&#8217;s law, in which each new generation of computer chip (now spaced about two years apart) provides twice as many components, each of which operates substantially faster (because of the smaller distances required for the electrons to travel, and other innovations).  This exponential growth in the power and price-performance of information-based technologies &#8211; now roughly doubling every year &#8211; is not limited to computers, but is true for a wide range of technologies, measured many different ways.  </p>
<p>In another positive feedback loop, as a particular evolutionary process (e.g., computation) becomes more effective (e.g., cost effective), greater resources are deployed towards the further progress of that process.  This results in a second level of exponential growth (i.e., the rate of exponential growth itself grows exponentially).  For example, doubling the price-performance of computation took three years at the beginning of the twentieth century and two years around 1950; today it takes about one year.  Not only is each chip doubling in power each year for the same unit cost, but the number of chips being manufactured is growing exponentially. </p>
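<p>These two levels of growth can be illustrated with a short calculation.  The numbers below are rough assumptions fitted to the figures in the text (a doubling time of three years around 1900 that shrinks by about half every fifty years, reaching roughly one year by 2000):</p>

```python
import math

def doublings(years, t0=3.0, halflife=50.0, step=0.01):
    """Accumulated doublings of price-performance over `years`, when the
    doubling time itself starts at t0 and halves every `halflife` years."""
    total, t = 0.0, 0.0
    while t < years:
        current_doubling_time = t0 * 2 ** (-t / halflife)
        total += step / current_doubling_time  # doublings gained in one step
        t += step
    return total

fixed_pace = 100 / 3.0         # doublings in a century at a constant 3-year pace
accelerating = doublings(100)  # doublings when the pace itself accelerates
```

<p>A constant three-year pace yields about 33 doublings per century; letting the doubling time itself halve every fifty years yields about 72, which is the second level of exponential growth described above.</p>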
<p> Biological evolution is one such evolutionary process.  Indeed it is the quintessential evolutionary process.  It took place in a completely open system (as opposed to the artificial constraints in an evolutionary algorithm).  Thus many levels of the system evolved at the same time. </p>
<p>Technological evolution is another such evolutionary process.  Indeed, the emergence of the first technology-creating species resulted in the new evolutionary process of technology.  Therefore, technological evolution is an outgrowth of &#8211; and a continuation of &#8211; biological evolution.  Early stages of humanoid-created technology were barely faster than the biological evolution that created our species.  Homo sapiens evolved in a few hundred thousand years.  Early stages of technology &#8211; the wheel, fire, stone tools &#8211; took tens of thousands of years to evolve and be widely deployed.  A thousand years ago, a paradigm shift such as the printing press took on the order of a century to be widely deployed.  Today, major paradigm shifts, such as cell phones and the World Wide Web, were widely adopted in only a few years&#8217; time. </p>
<p>A specific paradigm (a method or approach to solving a problem, e.g., shrinking transistors on an integrated circuit as an approach to making more powerful computers) provides exponential growth until the method exhausts its potential.  When this happens, a paradigm shift (a fundamental change in the approach) occurs, which enables exponential growth to continue. </p>
<p>Each paradigm follows an &#8220;S-curve,&#8221; which consists of slow growth (the early phase of exponential growth), followed by rapid growth (the late, explosive phase of exponential growth), followed by a leveling off as the particular paradigm matures.
</p>
<p>During this third or maturing phase in the life cycle of a paradigm, pressure builds for the next paradigm shift, and research dollars are invested to create the next paradigm.  We can see this in the enormous investments being made today in the next computing paradigm &#8211; three-dimensional molecular computing &#8211; despite the fact that we still have at least a decade left for the paradigm of shrinking transistors on a flat integrated circuit using photolithography (Moore&#8217;s Law).  Generally, by the time a paradigm approaches its asymptote (limit) in price-performance, the next technical paradigm is already working in niche applications.  For example, engineers were shrinking vacuum tubes in the 1950s to provide greater price-performance for computers, and reached a point where it was no longer feasible to shrink tubes and maintain a vacuum.  At this point, around 1960, transistors had already achieved a strong niche market in portable radios.  </p>
<p>When a paradigm shift occurs for a particular type of technology, the process begins a new S-curve.  </p>
<p>Thus the acceleration of the overall evolutionary process proceeds as a sequence of S-curves, and the overall exponential growth consists of this cascade of S-curves.  </p>
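<p>The cascade of S-curves can be sketched numerically: summing logistic curves whose ceilings each grow by a constant factor produces an envelope that climbs exponentially.  The parameters here (ten-fold ceilings, a new paradigm every five time units, the logistic steepness) are arbitrary illustrative choices:</p>

```python
import math

def cascade(t, n_paradigms=6, spacing=5.0, steepness=2.0):
    """Total capability at time t: a sum of S-curves (logistic functions),
    where paradigm k launches at t = k * spacing with ceiling 10**k."""
    return sum(10 ** k / (1 + math.exp(-steepness * (t - k * spacing)))
               for k in range(n_paradigms))

# Sample the envelope at each paradigm's midpoint: successive values climb
# by roughly the same factor, i.e., overall growth is exponential.
samples = [cascade(k * 5.0 + 2.5) for k in range(6)]
```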
<p>The resources underlying the exponential growth of an evolutionary process are relatively unbounded.</p>
<p>One resource is the (ever-growing) order of the evolutionary process itself.  Each stage of evolution provides more powerful tools for the next.  In biological evolution, the advent of DNA allowed more powerful and faster evolutionary &#8220;experiments.&#8221;  Later, setting the &#8220;designs&#8221; of animal body plans during the Cambrian explosion allowed rapid evolutionary development of other body organs, such as the brain.  Or to take a more recent example, the advent of computer-assisted design tools allows rapid development of the next generation of computers.</p>
<p>The other required resource is the &#8220;chaos&#8221; of the environment in which the evolutionary process takes place and which provides the options for further diversity.  In biological evolution, diversity enters the process in the form of mutations and ever-changing environmental conditions.  In technological evolution, human ingenuity combined with ever-changing market conditions keeps the process of innovation going.  </p>
<p>If we apply these principles at the highest level of evolution on Earth, the first step, the creation of cells, introduced the paradigm of biology.  The subsequent emergence of DNA provided a digital method to record the results of evolutionary experiments.  Then, the evolution of a species that combined rational thought with an opposable appendage (the thumb) caused a fundamental paradigm shift from biology to technology.  The upcoming primary paradigm shift will be from biological thinking to a hybrid combining biological and nonbiological thinking.  This hybrid will include &#8220;biologically inspired&#8221; processes resulting from the reverse engineering of biological brains. </p>
<p>If we examine the timing of these steps, we see that the process has continuously accelerated.  The evolution of life forms required billions of years for the first steps (e.g., primitive cells); later on, progress accelerated.  During the Cambrian explosion, major paradigm shifts took only tens of millions of years.  Later on, humanoids developed over a period of millions of years, and Homo sapiens over a period of only hundreds of thousands of years.  </p>
<p>With the advent of a technology-creating species, the exponential pace became too fast for evolution through DNA-guided protein synthesis and moved on to human-created technology.  Technology goes beyond mere tool making; it is a process of creating ever more powerful technology using the tools from the previous round of innovation, and is, thereby, an evolutionary process.  As I noted, the first technological steps took tens of thousands of years.  For people living in this era, there was little noticeable technological change in even a thousand years.  By 1000 AD, progress was much faster and a paradigm shift required only a century or two.  In the nineteenth century, we saw more technological change than in the nine centuries preceding it.  Then in the first twenty years of the twentieth century, we saw more advancement than in all of the nineteenth century.  Now, paradigm shifts occur in only a few years&#8217; time.  </p>
<p>The paradigm shift rate (i.e., the overall rate of technical progress) is currently doubling (approximately) every decade; that is, paradigm shift times are halving every decade (and the rate of acceleration is itself growing exponentially).  So, the technological progress in the twenty-first century will be equivalent to what would require (in the linear view) on the order of 200 centuries.  In contrast, the twentieth century saw only about 20 years of progress (again at today&#8217;s rate of progress) since we have been speeding up to current rates.  So the twenty-first century will see about a thousand times greater technological change than its predecessor. </p>
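<p>The arithmetic behind these estimates can be checked directly.  Assuming, as stated, a rate of progress that doubles every decade, normalized so that the year-2000 rate equals one &#8220;year of progress&#8221; per calendar year:</p>

```python
import math

def progress_years(t_start, t_end, doubling_decade=10.0):
    """Integrate rate(t) = 2**(t / 10) from t_start to t_end, with t in
    years relative to 2000; the result is progress measured in
    year-2000 'years of progress'."""
    k = math.log(2) / doubling_decade
    return (math.exp(k * t_end) - math.exp(k * t_start)) / k

twentieth = progress_years(-100, 0)    # the century ending in 2000
twenty_first = progress_years(0, 100)  # the century beginning in 2000
```

<p>This gives roughly 14 years of progress for the twentieth century and nearly 15,000 for the twenty-first (about a thousand times more), matching the &#8220;about 20 years&#8221; and &#8220;on the order of 200 centuries&#8221; figures to within the precision of these round numbers.</p>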
]]></content:encoded>
			<wfw:commentRss>https://www.writingsbyraykurzweil.com/kurzweils-law-aka-the-law-of-accelerating-returns/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>A Dialogue on Reincarnation</title>
		<link>https://www.writingsbyraykurzweil.com/a-dialogue-on-reincarnation</link>
		<comments>https://www.writingsbyraykurzweil.com/a-dialogue-on-reincarnation#respond</comments>
		<pubDate>Tue, 06 Jan 2004 03:40:10 +0000</pubDate>
								<dc:creator>Ray Kurzweil</dc:creator>
		
		
				<category><![CDATA[essays]]></category>

		<guid isPermaLink="false"></guid>
		<description><![CDATA[If you were offered physical immortality as a "Wallerstein brain" (a human brain maintained in a jar interfacing to a virtual reality through its sensory and motor neurons), would you accept it? The question came up in an email dialogue about reincarnation between Ray Kurzweil and Steve Rabinowitz, a practicing attorney in New York City (which he says may explain his need to believe in reincarnation).]]></description>
			<content:encoded><![CDATA[<p><i>Published on KurzweilAI.net January 6, 2004.</i></p>
<p><b>Ray:  </b>You mentioned that you believe in reincarnation.&nbsp; I know that this is the belief of many traditions.&nbsp; But as you know, following a &quot;tradition&quot; is not always the most reliable way of achieving the truth of the matter.&nbsp; There are a lot of traditions that have arbitrary and nonsensical beliefs.</p>
<p>So I was wondering: do you really believe in reincarnation, or are you just accepting without critical reflection&nbsp;this belief from a tradition that has provided you with a lot of other benefits?&nbsp; Or to put it another way, what evidence do you have for reincarnation?<span id="more-80880"></span></p>
<p>One concern I have with this belief is that it can be viewed as yet another rationalization for death.&nbsp; As I mentioned, our religious traditions have gone to extensive lengths to rationalize death.&nbsp; It is obvious to me that death is a tragedy, but up until very recently, it has appeared that there was nothing we could do about it, other than to rationalize that it must, after all, be a good thing.&nbsp; This view would apply to reincarnation.&nbsp; </p>
<p>One might argue that what&#8217;s the harm in rationalizing death?&nbsp; The harm is that in rationalizing something that is tragic, we fail to take the urgent action needed to avoid the tragedy, something which is now becoming feasible.&nbsp; As Dylan Thomas wrote: &quot;Do not go gentle into that good night,. . .Rage, rage against the dying of the light.&quot;</p>
<p><b>Steve: </b>My reincarnation conjecture was in response to Amy&#8217;s&nbsp;[Kurzweil, Ray&#8217;s daughter] statement [below], which blew me away.  Ethan [Kurzweil, Ray&#8217;s son] had already expressed skepticism to me about the desirability of immortality at a previous luncheon, but Amy&#8217;s reason for rejecting it took me totally by surprise:  &quot;So boring.&quot;</p>
<p>I suppose I&#8217;m rather Cartesian in my outlook towards life.  As far as evidence of reincarnation, I&#8217;ve read books that purported to offer some, but really I didn&#8217;t much care about such &quot;evidence&quot; one way or another. There are certain basic assumptions which I seem to be forced into&#8212;and from there, logic dictates the rest. </p>
<p>When I was little, my parents liked to tease me by saying that if they hadn&#8217;t married, I would never have existed.  I never could buy that.  The idea that my inner Self began at a particular time and will end at a particular time is unimaginable to me.  Now I could just say that&#8217;s just a subjective delusion or defense, but in the end, I know I wouldn&#8217;t be true to myself if I went down that path, because the belief in my own timelessness is just too strong.  I could make believe that I don&#8217;t really believe it, I could decide that it is a foolish belief, but I know in my heart that no matter what, I do believe it, and so to me it makes more sense to just accept it as an assumption and see where I go from there. </p>
<p>I don&#8217;t know if Amy&#8217;s comment about boredom is just a statement about her own current state of mind or an insight into the human condition.  If one does believe in reincarnation, it is a small step to believe in higher beings for whom life is much more interesting than that of humans.  In that vision of reality, evolved beings such as Amy would seek a birth on this planet to confront particular goals&#8212;and facing death would of course be one of the main ones.  But her stay here would be a relatively short one&#8212;and then back to having fun.</p>
<p>In many traditions, various beings attain immortality.  For those who do it by purifying their nervous systems, life is very, very good, and these fortunate individuals attain great powers, visit celestial beings and do all sorts of things as they wish.  These people are admired by all. However, occasionally, not-so-evolved beings get the immortality trick done, and their feelings are much more mixed.  They feel jealousy as their friends ascend to heaven, and need comforting.</p>
<p>So depending on your world view, and your own condition, physical immortality may not necessarily be a blessing.</p>
<p>However, all in all, if you offered it to me, I would take it.  Fear of death is built in too, I guess, and maybe I&#8217;m proud enough to think I could use the time to make it all worthwhile. </p>
<p>But it is a question worthy of thought.   It is obvious to me that we all wish for things, which if achieved, would not be to our benefit.</p>
<p><b>Ray:  </b>Thanks for your thoughtful reply.</p>
<p>I do think Amy&#8217;s statement is insightful.  It is important to understand my perspective&#8212;my &quot;vision of the future&quot;&#8212;in its totality.  Most futurists make two mistakes.  They think linearly whereas the trends are exponential. And they consider one trend in today&#8217;s world as if nothing else were going to change.  Amy is essentially correct, that if we simply extended human longevity to hundreds of years, our psychology could not handle it.  We would indeed be gripped with a deep ennui.  But extending human lifespan is not the only radical change in store.  We are also going to merge with our technology and expand our cognitive and emotional capabilities, as well as the depth and richness of our intellectual, relational, artistic, sexual, and emotional experiences many fold, ultimately by factors of trillions as we go through this century.  So boredom will not be an issue.</p>
<p>With regard to reincarnation, I&#8217;d say several things.  Your starting intuition, that &quot;my inner self&quot; is essentially timeless, is reasonable.  We do need to go beyond science when we consider the nature of consciousness, which is to say the nature of one&#8217;s self.  Science is about objective observation and deductions thereof, whereas consciousness&#8212;the self&#8212;is about subjective experience.  There is a gap there.  An intuition of a &quot;timeless self&quot; is in my view reasonable.</p>
<p>But then you claim that from there, logic brings you to reincarnation.  But there is no logical bridge from &quot;timeless self&quot; to &quot;reincarnation.&quot;  You jump from an essential &quot;timeless&quot; mystery about the self to an ornate system of reincarnation, with greater beings, celestial powers, babies coming back to planets, etc.  It&#8217;s no more logical than stories of heaven, or other attempts to explain in language essential ineffable truths.</p>
<p>A problem I have with these views is that they give a concrete reality to levels of reality that have no basis, but nonetheless affect people&#8217;s activities in this life (often negatively, but that is not my main point).</p>
<p>Let&#8217;s start with what we do know.  There is a reality that we experience every day.  We can call it physical reality.  Now some philosophers say that this physical reality is really a dream, and so on.  But regardless of its true nature, we do directly experience it, and so we can say that it exists.</p>
<p>Another reasonable intuition is that &quot;reality matters.&quot;  People suffer. Suffering can be alleviated.  Our actions have consequences.  It makes a difference how we act in this world.</p>
<p>Another insight that is quite consistent with how we act and feel is that death is a tragedy.  We don&#8217;t celebrate it.  We are saddened by it.  We feel it as a great loss.  There is a loss of experience and knowledge, not only in the departed, but in those of us left behind.  We don&#8217;t reward murderers.  We despise and punish them.</p>
<p>These are insights we can have some confidence in, in contrast to claimed logical deductions about ornate systems of reincarnation, heaven, etc. that we cannot experience.</p>
<p>While I respect your views and the tradition they stem from, I don&#8217;t really believe that you really firmly believe that reincarnation or any other such &quot;system&quot; is the only possible explanation.  You may find the explanation comforting, but if you really consider your true beliefs, you would have to admit that you don&#8217;t really know this to be true.  As a mental experiment, consider the situation in which somehow, a different truth were revealed to you.  Put aside how it would be possible for any such truth to be &quot;revealed,&quot; but just imagine that somehow this happened.  Would you be totally shocked?  Or would you shrug your shoulders and consider that now you have a deeper insight?</p>
<p>So I come back to what we really know and can have confidence in.  There is a reality to joy and to suffering, and to the suffering and loss of knowledge and experience that illness and death bring.  And there is joy and gratification in knowledge, discovery, friendship, and experiences that enable us to grow.  And we can move in this direction in the world that we know exists, rather than in metaphorical realms.</p>
<p>I would not describe physical immortality as inherently a blessing, nor a curse.  Rather, we have the opportunity and responsibility to embrace the growth of knowledge and experience, and to alleviate suffering and destruction.  The problem I have with many of the common traditions regarding death is not only that they are &quot;deathist rationalizations,&quot; but they encourage passivity.  To the idea that &quot;death is natural,&quot; I would point out that it is natural for our species to push beyond its boundaries. We did not stay on the ground.  We did not stay on the planet.  We did not stay with our biological life expectancy (which was 37 years in 1800).  And we are not staying with the limitations of our bodies and brains.</p>
<p><b>Steve: </b>I don&#8217;t think we are in disagreement.  But once you open the door to timelessness of consciousness, what happens after death becomes a legitimate consideration in deciding whether you want physical immortality in your present body, as it may be modified.  If you offered me physical immortality as a &#8220;Wallerstein brain&#8221; in a jar (a human brain maintained in a jar interfacing to a virtual reality through its sensory and motor neurons), I, and I think most people, would reject it no matter how good the virtual stimulation might be.  This rejection is based on an inner calculation (which I believe the brain constantly makes in all kinds of decisions) weighing the risk that such stimulation, not being &quot;real,&quot; may prove unsatisfactory in the long run against the odds of some sort of preferable reality coming to pass through natural means. It is true that death is painful and hence we seek to avoid it, but after all, birth is painful too, and I don&#8217;t think we would advise anyone against that.</p>
<p>Finally, the future you paint below is only one future:  you have pointed out many times the risks of technology leading to unfortunate outcomes if certain science is misused.</p>
<p>I&#8217;d like physical immortality for myself, I think; I&#8217;m just suggesting some caution may be advised.</p>
<p><b>Ray:  </b>Steve, a relevant quote:</p>
<blockquote><p><i>A mind that stays at the same capacity cannot live forever; after a few thousand years it would look more like a repeating tape loop than a person.  To live indefinitely long, the mind itself must grow. . . . and when it becomes great enough, and looks back. . . .what fellow feeling can it have with the soul that it was originally?  The later being would be everything the original was, but vastly more.        </i></p></blockquote>
<p>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<i>&#8211; Vernor Vinge</i></p>
<p><b>Steve:  </b>What fellow feeling indeed?  I think that is the great mystery, the thing that binds the infinite distinct points on the time line into the sense of &quot;I.&quot;</p>
<p><b>Ray:  </b>When I think of myself back in junior high school or high school, I feel a bit of kinship to that person, but at the same time it also seems like someone else.</p>
<p><b>Steve:  </b>Strange, isn&#8217;t it?</p>
<p><i>© 2003 KurzweilAI.net</i></p>
]]></content:encoded>
			<wfw:commentRss>https://www.writingsbyraykurzweil.com/a-dialogue-on-reincarnation/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>The Drexler-Smalley debate on molecular assembly</title>
		<link>https://www.writingsbyraykurzweil.com/the-drexler-smalley-debate-on-molecular-assembly</link>
		<comments>https://www.writingsbyraykurzweil.com/the-drexler-smalley-debate-on-molecular-assembly#respond</comments>
		<pubDate>Mon, 01 Dec 2003 01:01:56 +0000</pubDate>
								<dc:creator></dc:creator>
		
		
				<category><![CDATA[essays]]></category>

		<guid isPermaLink="false">http://www.kurzweilai.net/the-drexler-smalley-debate-on-molecular-assembly</guid>
		<description><![CDATA[Nanotechnology pioneer Eric Drexler and Rice University Professor and Nobelist Richard Smalley have engaged in a crucial debate on the feasibility of molecular assembly. Smalley&#8217;s position, which denies both the promise and the peril of molecular assembly, will ultimately backfire and will fail to guide nanotechnology research in the needed constructive direction, says Ray Kurzweil. [&#8230;]]]></description>
			<content:encoded><![CDATA[<p><em>Nanotechnology pioneer Eric Drexler and Rice University Professor and Nobelist Richard Smalley have engaged in a crucial debate on the feasibility of molecular assembly. Smalley&#8217;s position, which denies both the promise and the peril of molecular assembly, will ultimately backfire and will fail to guide nanotechnology research in the needed constructive direction, says Ray Kurzweil. By the 2020s, molecular assembly will provide tools to effectively combat poverty, clean up our environment, overcome disease, extend human longevity, and many other worthwhile pursuits, he predicts.</em></p>
<p>Nanotechnology pioneer Eric Drexler and Rice University Professor and Nobelist Richard Smalley have engaged in a crucial debate on the feasibility of molecular assembly, which is the key to the most revolutionary capabilities of nanotechnology. Although Smalley was originally inspired by Drexler&#8217;s ground-breaking works and has himself become a champion of contemporary research initiatives in nanotechnology, he has also taken on the role of key critic of Drexler&#8217;s primary idea of precisely guided molecular manufacturing. This debate has picked up intensity with  <a href="http://pubs.acs.org/cen/coverstory/8148/8148counterpoint.html" target="_blank">publication</a> of several rounds of this dialogue between these two pioneers. First some background:</p>
<h3>Background: The Roots of Nanotechnology</h3>
<p>Nanotechnology promises the tools to rebuild the physical world, our bodies and brains included, molecular fragment by molecular fragment, potentially atom by atom. We are shrinking the key feature size of technology, in accordance with what I call the &#8220;law of accelerating returns,&#8221; at the exponential rate of approximately a factor of 4 per linear dimension per decade. At this rate, the key feature sizes for most electronic and many mechanical technologies will be in the nanotechnology range, generally considered to be under 100 nanometers, by the 2020s (electronics has already dipped below this threshold, albeit not yet in three-dimensional structures and not self-assembling). Meanwhile, there has been rapid progress, particularly in the last several years, in preparing the conceptual framework and design ideas for the coming age of nanotechnology.</p>
<p>Most nanotechnology historians date the conceptual birth of nanotechnology to physicist Richard Feynman&#8217;s seminal speech in 1959, &#8220;There&#8217;s Plenty of Room at the Bottom,&#8221; in which he described the profound implications and the inevitability of engineering machines at the level of atoms:</p>
<p>&#8220;The principles of physics, as far as I can see, do not speak against the possibility of maneuvering things atom by atom. It would be, in principle, possible. . . .for a physicist to synthesize any chemical substance that the chemist writes down. . .How? Put the atoms down where the chemist says, and so you make the substance. The problems of chemistry and biology can be greatly helped if our ability to see what we are doing, and to do things on an atomic level, is ultimately developed – a development which I think cannot be avoided.&#8221;</p>
<p>An even earlier conceptual root for nanotechnology was formulated by the information theorist John Von Neumann in the early 1950s with his model of a self-replicating system based on a universal constructor combined with a universal computer. In this proposal, the computer runs a program that directs the constructor, which in turn constructs a copy of both the computer (including its self-replication program) and the constructor. At this level of description, Von Neumann&#8217;s proposal is quite abstract &#8212; the computer and constructor could be made in a great variety of ways, as well as from diverse materials, and could even be a theoretical mathematical construction. He took the concept one step further and proposed a &#8220;kinematic constructor,&#8221; a robot with at least one manipulator (arm) that would build a replica of itself from a &#8220;sea of parts&#8221; in its midst.</p>
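<p>Von Neumann&#8217;s architecture can be caricatured in a few lines of code: a machine pairs a program with a constructor, and running the program directs the constructor to build a copy of both from a finite pool of parts.  This is an illustrative toy (the data structures and part counts are invented for the sketch), not Von Neumann&#8217;s formal construction:</p>

```python
def replicate(machine, pool):
    """Run the machine's program: direct its constructor to build a copy of
    the whole machine (program included) from the shared pool of parts."""
    program, constructor = machine
    cost = constructor["parts_per_copy"]
    if pool["parts"] < cost:
        return None  # the "sea of parts" is exhausted
    pool["parts"] -= cost
    # The copy carries the same program (with its replication instructions)
    # and an identical constructor, so it can replicate in turn.
    return (program, dict(constructor))

seed = ("replicate", {"parts_per_copy": 4})
pool = {"parts": 20}
machines = [seed]
while (child := replicate(machines[-1], pool)) is not None:
    machines.append(child)
```

<p>Starting from one seed machine and twenty parts, replication proceeds until the pool is exhausted; each copy is a complete machine able to replicate again, which is the essence of the universal-constructor proposal.</p>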
<p>It was left to Eric Drexler to found the modern field of nanotechnology, with a draft of his seminal Ph.D. thesis in the mid-1980s, by essentially combining these two intriguing suggestions. Drexler described a Von Neumann Kinematic Constructor, which for its &#8220;sea of parts&#8221; used atoms and molecular fragments, as suggested in Feynman&#8217;s speech. Drexler&#8217;s vision cut across many disciplinary boundaries and was so far-reaching that no one was daring enough to be his thesis advisor, except for my own mentor, Marvin Minsky. Drexler&#8217;s doctoral thesis (premiered in his book, <em><a href="/meme/frame.html?main=/meme/memelist.html?m%3D8" target="_top">Engines of Creation</a></em> in 1986 and articulated technically in his 1992 book <em><a href="http://www.foresight.org/Nanosystems/toc.html#c1%3E" target="_blank">Nanosystems</a></em>) laid out the foundation of nanotechnology and provided the road map still being pursued today.</p>
<p>Von Neumann&#8217;s Universal Constructor, as applied to atoms and molecular fragments, was now called a &#8220;universal assembler.&#8221; Drexler&#8217;s assembler was universal because it could essentially make almost anything in the world. A caveat is in order here. The products of a universal assembler necessarily have to follow the laws of physics and chemistry, so only atomically stable structures would be viable. Furthermore, any specific assembler would be restricted to building products from its sea of parts, although the feasibility of using individual atoms has been repeatedly demonstrated.</p>
<p>Although Drexler did not provide a detailed design of an assembler, and such a design has still not been fully specified, his thesis did provide extensive existence proofs for each of the principal components of a universal assembler, which include the following subsystems:</p>
<ul>
<li>The <span style="text-decoration: underline;">computer</span>: to provide the intelligence to control the assembly process. As with all of the subsystems, the computer needs to be small and simple. Drexler described an intriguing mechanical computer with molecular &#8220;locks&#8221; instead of transistor gates. Each lock required only 5 cubic nanometers of space and could switch 20 billion times a second. This proposal remains more competitive than any known electronic technology, although electronic computers built from three-dimensional arrays of carbon nanotubes may be a suitable alternative.</li>
<li>The <span style="text-decoration: underline;">instruction </span>architecture: Drexler and his colleague Ralph Merkle have proposed a SIMD (&#8220;single instruction multiple data&#8221;) architecture in which a single data store would record the instructions and transmit them to trillions of molecular-sized assemblers (each with its own simple computer) simultaneously. Thus each assembler would not have to store the entire program for creating the desired product. This &#8220;broadcast&#8221; architecture also addresses a key safety concern: if the self-replication process got out of control, it could be shut down by terminating the centralized source of the replication instructions. However, as Drexler points out<a name="_ednref1" href="#_edn1"></a>[1], a nanoscale assembler does not necessarily have to be self-replicating. Given the inherent dangers in self-replication, the ethical standards proposed by the Foresight Institute contain prohibitions against unrestricted self-replication, especially in a natural environment.</li>
<li><span style="text-decoration: underline;">Instruction transmission</span>: transmission of the instructions from the centralized data store to each of the many assemblers would be accomplished electronically if the computer were electronic, or through mechanical vibrations if Drexler&#8217;s concept of a mechanical computer were used.</li>
<li>The <span style="text-decoration: underline;">construction robot</span>: the constructor would be a simple molecular robot with a single arm, similar to Von Neumann&#8217;s kinematic constructor, but on a tiny scale. The feasibility of building molecular-based robot arms, gears, rotors, and motors has been demonstrated in the years since Drexler&#8217;s thesis, as I discuss below.</li>
<li>The <span style="text-decoration: underline;">robot arm tip</span>: Drexler&#8217;s follow-up book in 1992, <em>Nanosystems: molecular machinery, manufacturing, and computation</em>, provided a number of feasible chemistries for the tip of the robot arm that would be capable of grasping (using appropriate atomic force fields) a molecular fragment, or even a single atom, and then depositing it in a desired location. We know from the chemical vapor deposition process used to construct artificial diamonds that it is feasible to remove individual carbon atoms, as well as molecular fragments that include carbon, and then place them in another location through precisely controlled chemical reactions at the tip. Building artificial diamond is normally a chaotic process involving trillions of atoms, but the underlying reactions have been harnessed to design a robot arm tip that can remove hydrogen atoms from a source material and deposit them at desired locations in a molecular machine being constructed. In this proposal, the tiny machines are built out of a diamond-like (called &#8220;diamondoid&#8221;) material. In addition to having great strength, the material can be doped with impurities in a precise fashion to create electronic components such as transistors. Simulations have shown that gears, levers, motors, and other mechanical systems can also be constructed from these carbon arrays. Additional proposals have been made in the years since, including several innovative designs by Ralph Merkle.<a name="_ednref2" href="#_edn2"></a>[2] In recent years, there has been a great deal of attention on carbon nanotubes, comprised of hexagonal arrays of carbon atoms assembled in three dimensions, which are also capable of providing both mechanical and electronic functions at the molecular level.</li>
<li>The assembler&#8217;s <span style="text-decoration: underline;">internal environment</span> needs to prevent environmental impurities from interfering with the delicate assembly process. Drexler&#8217;s proposal is to maintain a near vacuum and build the assembler walls out of the same diamondoid material that the assembler itself is capable of making.</li>
<li>The <span style="text-decoration: underline;">energy</span> required for the assembly process can be provided either through electricity or through chemical energy. Drexler proposed a chemical process with the fuel interlaced with the raw building material. More recent proposals utilize nanoengineered fuel cells incorporating hydrogen and oxygen or glucose and oxygen.</li>
</ul>
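<p>To convey the scale of the mechanical computer described in the first subsystem above, here is a back-of-envelope calculation. Only the two input figures (5 cubic nanometers per lock, 20 billion switchings per second) come from Drexler&#8217;s proposal; the rest is my own unit arithmetic, purely illustrative:</p>

```python
# Back-of-envelope arithmetic on Drexler's mechanical-computer figures:
# 5 cubic nanometers per molecular "lock," each switching 20 billion
# times per second. Only these two figures come from the proposal;
# everything else is unit conversion.

LOCK_VOLUME_NM3 = 5.0          # volume of one molecular lock, in nm^3
SWITCHES_PER_SEC = 20e9        # switching rate of one lock

NM3_PER_CM3 = 1e21             # 1 cm = 1e7 nm, so 1 cm^3 = (1e7)^3 nm^3

locks_per_cm3 = NM3_PER_CM3 / LOCK_VOLUME_NM3          # ~2e20 locks
switch_ops_per_sec = locks_per_cm3 * SWITCHES_PER_SEC  # ~4e30 switchings/s

print(f"{locks_per_cm3:.0e} locks per cubic centimeter")
print(f"{switch_ops_per_sec:.0e} lock switchings per second per cm^3")
```

<p>A single cubic centimeter of such locks would perform on the order of 10<sup>30</sup> switching operations per second, which conveys the sense in which this mechanical proposal remains competitive with electronic technology.</p>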
<p>Although many configurations have been proposed, the typical assembler has been described as a tabletop unit that can manufacture any physically possible product for which we have a software description. Products can range from computers, clothes, and works of art to cooked meals. Larger products, such as furniture, cars, or even houses, can be built in a modular fashion, or using larger assemblers. Of particular importance, an assembler can create copies of itself. The incremental cost of creating any physical product, including the assemblers themselves, would be pennies per pound, basically the cost of the raw materials. The real cost, of course, would be the value of the information describing each type of product, that is, the software that controls the assembly process. Thus everything of value in the world, including physical objects, would be comprised essentially of information. We are not that far from this situation today, since the &#8220;information content&#8221; of products is rapidly asymptoting to 100 percent of their value.</p>
<p>In operation, the centralized data store sends out commands simultaneously to all of the assembly robots. There would be trillions of robots in an assembler, each executing the same instruction at the same time. The assembler creates these molecular robots by starting with a small number and then using these robots to create additional ones in an iterative fashion, until the requisite number of robots has been created.</p>
<p>Each local robot has a local data storage that specifies the type of mechanism it is building. This local data storage is used to mask the global instructions being sent from the centralized data store so that certain instructions are blocked and local parameters are filled in. In this way, even though all of the assemblers are receiving the same sequence of instructions, there is a level of customization to the part being built by each molecular robot. Each robot extracts the raw materials it needs, which includes individual carbon atoms and molecular fragments, from the source material. This source material also includes the requisite chemical fuel. All of the requisite design requirements, including routing the instructions and the source material, were described in detail in Drexler&#8217;s two classic works.</p>
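<p>To make this masking scheme concrete, here is a toy software sketch of the broadcast architecture. It is purely illustrative: the class names and operations are my own invention for exposition and are not part of Drexler&#8217;s or Merkle&#8217;s actual designs.</p>

```python
# Illustrative sketch of the "broadcast" SIMD architecture: one central
# instruction store, many robots that mask the shared stream with local
# data, and a central kill switch that halts every robot at once.

class AssemblerRobot:
    def __init__(self, accepted_ops, local_params):
        self.accepted_ops = accepted_ops  # local mask: which ops this robot executes
        self.local_params = local_params  # local parameters filled into instructions
        self.log = []                     # record of executed instructions

    def receive(self, op, arg):
        if op in self.accepted_ops:       # instructions not in the mask are blocked
            self.log.append((op, self.local_params.get(arg, arg)))

class CentralStore:
    def __init__(self, robots):
        self.robots = robots
        self.running = True               # terminating the store stops all assembly

    def broadcast(self, program):
        for op, arg in program:
            if not self.running:
                break
            for r in self.robots:         # every robot sees the same instruction
                r.receive(op, arg)

# Two robots build different parts from one shared instruction stream:
a = AssemblerRobot({"place_atom"}, {"SITE": "gear_tooth"})
b = AssemblerRobot({"place_atom", "bond"}, {"SITE": "rotor_hub"})
store = CentralStore([a, b])
store.broadcast([("place_atom", "SITE"), ("bond", "SITE")])
```

<p>Even though both robots receive identical broadcasts, local masking and parameter substitution customize what each one builds; setting <code>store.running = False</code> models the centralized safety shutdown described above.</p>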
<h3>The Biological Assembler</h3>
<p><em>Nature shows that molecules can serve as machines because living things work by means of such machinery. Enzymes are molecular machines that make, break, and rearrange the bonds holding other molecules together. Muscles are driven by molecular machines that haul fibers past one another. DNA serves as a data-storage system, transmitting digital instructions to molecular machines, the ribosomes, that manufacture protein molecules. And these protein molecules, in turn, make up most of the molecular machinery.</em></p>
<p>&#8212; Eric Drexler</p>
<p>The ultimate existence proof of the feasibility of a molecular assembler is life itself. Indeed, as we deepen our understanding of the information basis of life processes, we are discovering specific ideas to address the design requirements of a generalized molecular assembler. For example, proposals have been made to use a molecular energy source of glucose and ATP similar to that used by biological cells.</p>
<p>Consider how biology solves each of the design challenges of a Drexler assembler. The ribosome represents both the computer and the construction robot. Life does not use centralized data storage, but provides the entire code to every cell. The ability to restrict the local data storage of a nanoengineered robot to only a small part of the assembly code (using the &#8220;broadcast&#8221; architecture), particularly when doing self-replication, is one critical way nanotechnology can be engineered to be safer than biology.</p>
<p>With the advent of full-scale nanotechnology in the 2020s, we will have the potential to replace biology&#8217;s genetic information repository in the cell nucleus with a nanoengineered system that would maintain the genetic code and simulate the actions of RNA, the ribosome, and other elements of the computer in biology&#8217;s assembler. There would be significant benefits in doing this. We could eliminate the accumulation of DNA transcription errors, one major source of the aging process. We could introduce DNA changes to essentially reprogram our genes (something we&#8217;ll be able to do long before this scenario, using gene-therapy techniques).</p>
<p>With such a nanoengineered system, the recommended broadcast architecture could enable us to turn off unwanted replication, thereby defeating cancer, autoimmune reactions, and other disease processes. Although most of these disease processes will have already been defeated by genetic engineering, reengineering the computer of life using nanotechnology could eliminate any remaining obstacles and create a level of durability and flexibility that goes vastly beyond the inherent capabilities of biology.</p>
<p>Life&#8217;s local data storage is, of course, the DNA strands, broken into specific genes on the chromosomes. The task of instruction-masking (blocking genes that do not contribute to a particular cell type) is controlled by the short RNA molecules and peptides that govern gene expression. The internal environment in which the ribosome is able to function is the particular chemical environment maintained inside the cell, which includes a particular acid-alkaline equilibrium (pH between 6.8 and 7.1 in human cells) and other chemical balances needed for the delicate operations of the ribosome. The cell membrane is responsible for protecting this internal cellular environment from disturbance by the outside world.</p>
<p>The robot arm tip would use the ribosome&#8217;s ability to implement enzymatic reactions to break off each amino acid, each bound to a specific transfer RNA, and to connect it to its adjoining amino acid using a peptide bond.</p>
<p>However, the goal of molecular manufacturing is not merely to replicate the molecular assembly capabilities of biology. Biological systems are limited to building structures from protein, which has profound limitations in strength and speed. Nanobots built from diamondoid gears and rotors can be thousands of times faster and stronger than biological cells. The comparison is even more dramatic with regard to computation: the switching speed of nanotube-based computation would be millions of times faster than the extremely slow transaction speed of the electrochemical switching used in mammalian interneuronal connections (typically around 200 transactions per second, although the nonlinear transactions that take place in the dendrites and synapses are more complex than single computations).</p>
<p>The concept of a diamondoid assembler described above uses a consistent input material (for construction and fuel). This is one of several protections against molecule-scale replication of robots in an uncontrolled fashion in the outside world. Biology&#8217;s replication robot, the ribosome, also requires carefully controlled source and fuel materials, which are provided by our digestive system. As nano-based replicators become more sophisticated, more capable of extracting carbon atoms and carbon-based molecular fragments from less well-controlled source materials, and able to operate outside of controlled replicator enclosures such as in the biological world, they will have the potential to present a grave threat to that world, particularly in view of the vastly greater strength and speed of nano-based replicators over any biological system. This is, of course, the source of great controversy, which is alluded to in the Drexler-Smalley debate article and letters.</p>
<p>In the decade since publication of Drexler&#8217;s <em>Nanosystems</em>, each aspect of Drexler&#8217;s conceptual designs has been strengthened through additional design proposals, supercomputer simulations, and, most importantly, actual construction of molecular machines. Boston College chemistry professor T. Ross Kelly reported in the journal <em>Nature</em> the construction of a chemically powered nanomotor built from 78 atoms.<a name="_ednref3" href="#_edn3"></a>[3] A biomolecular research group headed by C. D. Montemagno created an ATP-fueled nanomotor.<a name="_ednref4" href="#_edn4"></a>[4] Another molecule-sized motor, fueled by solar energy, was created out of 58 atoms by Ben Feringa at the University of Groningen in the Netherlands.<a name="_ednref5" href="#_edn5"></a>[5] Similar progress has been made on other molecular-scale mechanical components such as gears, rotors, and levers. Systems demonstrating the use of chemical energy and acoustic energy (as originally described by Drexler) have been designed, simulated, and, in many cases, actually constructed. Substantial progress has been made in developing various types of electronic components from molecule-scale devices, particularly in the area of carbon nanotubes, an area that Smalley has pioneered.</p>
<h3>Fat and Sticky Fingers</h3>
<p>In the wake of rapidly expanding development of each facet of future nanotechnology systems, no serious flaw in Drexler&#8217;s universal assembler concept has been discovered or described. Smalley&#8217;s highly publicized objection in <em>Scientific American</em><a name="_ednref6" href="#_edn6"></a>[6] was based on a distorted description of the Drexler proposal; it ignored the extensive body of work of the past decade. As a pioneer of carbon nanotubes, Smalley has gone back and forth between enthusiasm and skepticism, having written that &#8220;nanotechnology holds the answer, to the extent there are answers, to most of our pressing material needs in energy, health, communication, transportation, food, water ….&#8221;</p>
<p>Smalley describes Drexler&#8217;s assembler as consisting of five to ten &#8220;fingers&#8221; (manipulator arms) to hold, move, and place each atom in the machine being constructed. He then goes on to point out that there isn&#8217;t room for so many fingers in the cramped space in which a nanobot assembly robot has to work (which he calls the &#8220;fat fingers&#8221; problem) and that these fingers would have difficulty letting go of their atomic cargo because of molecular attraction forces (the &#8220;sticky fingers&#8221; problem). Smalley describes the &#8220;intricate three-dimensional waltz that is carried out&#8221; by five to fifteen atoms in a typical chemical reaction. But Drexler&#8217;s proposal doesn&#8217;t look anything like the straw-man description that Smalley criticizes: Drexler&#8217;s proposal, and most of those that have followed it, use a single probe, or &#8220;finger.&#8221;</p>
<p>Moreover, there have been extensive descriptions and analyses of viable tip chemistries that do not involve grasping and placing atoms as if they were mechanical pieces to be deposited in place. For example, the feasibility of moving hydrogen atoms using Drexler&#8217;s &#8220;propynyl hydrogen abstraction&#8221; tip<a name="_ednref7" href="#_edn7"></a>[7] has been extensively confirmed in the intervening years.<a name="_ednref8" href="#_edn8"></a>[8] The ability of the scanning probe microscope (SPM), developed at IBM in 1981, and of the more sophisticated atomic force microscope to place individual atoms through specific reactions of a tip with a molecular-scale structure provides additional existence proofs. Indeed, if Smalley&#8217;s critique were valid, none of us would be here to discuss it, because life itself would be impossible.</p>
<p>Smalley also objects that despite &#8220;working furiously . . . generating even a tiny amount of a product would take [a nanobot] … millions of years.&#8221; Smalley is correct, of course, that an assembler with only one nanobot wouldn&#8217;t produce any appreciable quantities of a product. However, the basic concept of nanotechnology is that we will need trillions of nanobots to accomplish meaningful results. This is also the source of the safety concerns that have received ample attention. Creating trillions of nanobots at reasonable cost will require the nanobots to make themselves. This self-replication solves the economic issue while introducing grave dangers. Biology used the same solution to create organisms with trillions of cells, and indeed we find that virtually all diseases derive from biology&#8217;s self-replication process gone awry.</p>
<p>Earlier challenges to the concepts underlying nanotechnology have also been effectively addressed. Critics pointed out that nanobots would be subject to bombardment by the thermal vibration of nuclei, atoms, and molecules. This is one reason conceptual designers of nanotechnology have emphasized building structural components from diamondoid or carbon nanotubes. Increasing the strength or stiffness of a system reduces its susceptibility to thermal effects. Analyses of these designs have shown them to be thousands of times more stable in the presence of thermal effects than biological systems, so they can operate in a far wider temperature range.<a name="_ednref9" href="#_edn9"></a>[9]</p>
<p>Similar challenges were made regarding positional uncertainty from quantum effects, based on the extremely small feature size of nanoengineered devices. Quantum effects are significant for an electron, but a single carbon-atom nucleus is more than 20,000 times more massive than an electron. A nanobot will be constructed from hundreds of thousands to millions of carbon and other atoms, so a nanobot will be billions of times more massive than an electron. Plugging this ratio into the fundamental equation for quantum positional uncertainty shows it to be an insignificant factor.</p>
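<p>For readers who want to see the arithmetic, here is a rough thermal estimate of that positional uncertainty. The simplification is my own (taking the momentum spread as roughly the thermal value, delta-p ~ sqrt(mkT), and applying the Heisenberg relation); the billionfold mass ratio is the one cited above:</p>

```python
import math

# Rough check of the claim above: positional uncertainty scales as
# 1/sqrt(mass), so a nanobot ~a billion times more massive than an
# electron has a far smaller positional spread.
#   delta_p ~ sqrt(m * kB * T)   (thermal momentum spread, a simplification)
#   delta_x >= hbar / (2 * delta_p)   (Heisenberg uncertainty relation)

HBAR = 1.0545718e-34    # reduced Planck constant, J*s
KB = 1.380649e-23       # Boltzmann constant, J/K
M_ELECTRON = 9.109e-31  # electron mass, kg
T = 300                 # room temperature, K

def position_uncertainty(mass_kg, temp_k=T):
    delta_p = math.sqrt(mass_kg * KB * temp_k)
    return HBAR / (2 * delta_p)  # meters

dx_electron = position_uncertainty(M_ELECTRON)
dx_nanobot = position_uncertainty(1e9 * M_ELECTRON)  # the essay's mass ratio

print(f"electron: {dx_electron:.1e} m (around atomic scale)")
print(f"nanobot:  {dx_nanobot:.1e} m (far below atomic scale)")
```

<p>The electron&#8217;s positional spread comes out near a nanometer, comparable to atomic dimensions, while the nanobot&#8217;s is tens of thousands of times smaller, far below the scale of a single atom, which is why quantum positional uncertainty is an insignificant factor for nanoscale machinery.</p>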
<p>Power has represented another challenge. Drexler&#8217;s original proposals involved glucose-oxygen fuel cells, which have held up well in feasibility studies. An advantage of the glucose-oxygen approach is that nanomedicine applications can harness the glucose, oxygen, and ATP resources already provided by the human digestive system. A nanoscale motor was recently created using propellers made of nickel and powered by an ATP-based enzyme.<a name="_ednref10" href="#_edn10"></a>[10]</p>
<p>However, recent progress in implementing MEMS-scale and even nanoscale hydrogen-oxygen fuel cells has provided an alternative approach. Hydrogen-oxygen fuel cells, with hydrogen provided by safe methanol fuel, have made substantial progress in recent years. A small company in Massachusetts, Integrated Fuel Cell Technologies, Inc.,<a name="_ednref11" href="#_edn11"></a>[11] has demonstrated a MEMS-based fuel cell. Each postage-stamp-sized device contains thousands of microscopic fuel cells and includes the fuel lines and electronic controls. NEC plans to introduce fuel cells based on nanotubes in 2004 for notebook computers and other portable electronics. They claim their small power sources will run devices for up to 40 hours before the user needs to change the methanol canister.</p>
<h3>The Debate Heats Up</h3>
<p>On April 16, 2003, Drexler responded to Smalley&#8217;s <em>Scientific American</em> article with an <a href="/an-open-letter-to-richard-smalley" target="_top">open letter</a>. He cited 20 years of research by himself and others and responded specifically to the fat and sticky fingers objection. As I discussed above, molecular assemblers were never described as having fingers at all, but rather as using precise positioning of reactive molecules. Drexler cited biological enzymes and ribosomes as examples of precise molecular assembly in the natural world. Drexler closes by quoting Smalley&#8217;s own observation that &#8220;when a scientist says something is possible, they&#8217;re probably underestimating how long it will take. But if they say it&#8217;s impossible, they&#8217;re probably wrong.&#8221;</p>
<p><a href="http://pubs.acs.org/cen/coverstory/8148/8148counterpoint.html" target="_blank">Three more rounds</a> of this debate were published today. Smalley responds to Drexler&#8217;s open letter by backing off of his fat and sticky fingers objection and acknowledging that enzymes and ribosomes do indeed engage in the precise molecular assembly that Smalley had earlier indicated was impossible. Smalley says biological enzymes only work in water and that such water-based chemistry is limited to biological structures such as &#8220;wood, flesh and bone.&#8221; As Drexler has stated,<a name="_ednref12" href="#_edn12"></a>[12] this is erroneous. Many enzymes, even those that ordinarily work in water, can also function in anhydrous organic solvents, and some enzymes can operate on substrates in the vapor phase, with no liquid at all.<a name="_ednref13" href="#_edn13"></a>[13]</p>
<p>Smalley goes on to state (without any derivation or citations) that enzymatic-like reactions can only take place with biological enzymes. This is also erroneous. It is easy to see why biological evolution adopted water-based chemistry. Water is the most abundant substance found on our planet. It also comprises 70 to 90 percent of our bodies, our food, and indeed of all organic matter. Most people think of water as fairly simple, but it is a far more complex phenomenon than conventional wisdom suggests.</p>
<p>As every grade-school child knows, water consists of molecules, each containing two atoms of hydrogen and one atom of oxygen: the most commonly known chemical formula, H<sub>2</sub>O. However, consider some of water&#8217;s complications and their implications. In the liquid state, the two hydrogen atoms make a 104.5° angle with the oxygen atom, which increases to 109.5° when water freezes. This is why water molecules are more spread out in the form of ice, giving it a lower density than liquid water, and why ice floats.</p>
<p>Although the overall water molecule is electrically neutral, the placement of the electrons creates polarization effects. The side with the hydrogen atoms is relatively positive in electrical charge, whereas the oxygen side is slightly negative. So water molecules do not exist in isolation; rather, they combine with one another in small groups to assume, typically, pentagonal or hexagonal shapes.<a name="_ednref14" href="#_edn14"></a>[14] These multi-molecule structures can change back and forth between hexagonal and pentagonal configurations 100 billion times a second. At room temperature, only about 3 percent of the clusters are hexagonal, but this increases to 100 percent as the water gets colder. This is why snowflakes are hexagonal.</p>
<p>These three-dimensional electrical properties of water are quite powerful and can break apart the strong chemical bonds of other compounds. Consider what happens when you put salt into water. Salt is quite stable when dry, but is quickly torn apart into its ionic components when placed in water. The negatively charged oxygen side of the water molecules attracts positively charged sodium ions (Na<sup>+</sup>), while the positively charged hydrogen side of the water molecules attracts the negatively charged chlorine ions (Cl<sup>&#8211;</sup>). In the dry form of salt, the sodium and chlorine atoms are tightly bound together, but these bonds are easily broken by the electrical charge of the water molecules. Water is considered &#8220;the universal solvent&#8221; and is involved in most of the biochemical pathways in our bodies. So we can regard the chemistry of life on our planet primarily as water chemistry.</p>
<p>However, the primary thrust of our technology has been to develop systems that are not limited to the restrictions of biological evolution, which exclusively adopted water-based chemistry and proteins as its foundation. Biological systems can fly, but if you want to fly at 30,000 feet and at hundreds or thousands of miles per hour, you would use our modern technology, not proteins. Biological systems such as human brains can remember things and do calculations, but if you want to do data mining on billions of items of information, you would want to use our electronic technology, not unassisted human brains.</p>
<p>Smalley is ignoring the past decade of research on alternative means of positioning molecular fragments using precisely guided molecular reactions. Precisely controlled synthesis of diamondoid (diamond-like material formed into precise patterns) has been extensively studied, including the ability to remove a single hydrogen atom from a hydrogenated diamond surface.<a name="_ednref15" href="#_edn15"></a>[15] Related research supporting the feasibility of hydrogen abstraction and precisely guided diamondoid synthesis has been conducted at the Materials and Process Simulation Center at Caltech; the Department of Materials Science and Engineering at North Carolina State University; the Institute for Molecular Manufacturing; the University of Kentucky; the United States Naval Academy; and the Xerox Palo Alto Research Center.<a name="_ednref16" href="#_edn16"></a>[16]</p>
<p>Smalley is also ignoring the well-established scanning probe microscope mentioned above, which uses precisely controlled molecular reactions. Building on these concepts, Ralph Merkle has described tip reactions that can involve up to four reactants.<a name="_ednref17" href="#_edn17"></a>[17] There is extensive literature on site-specific reactions that can be precisely guided and that would be feasible for the tip chemistry in a molecular assembler.<a name="_ednref18" href="#_edn18"></a>[18] Smalley ignores this body of literature when he maintains that only biological enzymes in water can perform this type of reaction. Recently, many tools that go beyond SPMs are emerging that can reliably manipulate atoms and molecular fragments.</p>
<p>On September 3, 2003, Drexler <a href="/toward-closure-open-letter-to-prof-smalley" target="_top">responded</a> to <a href="http://pubs.acs.org/cen/coverstory/8148/8148counterpoint.html" target="_blank">Smalley&#8217;s response</a> by alluding once again to the extensive body of literature that Smalley ignores. He cites the analogy to a modern factory, only at a nano-scale. He cites analyses of transition state theory indicating that positional control would be feasible at megahertz frequencies for appropriately selected reactants.</p>
<p>The latest installment of this debate is a <span style="text-decoration: underline;"><a href="http://pubs.acs.org/cen/coverstory/8148/8148counterpoint.html" target="_blank">follow-up letter</a></span> by Smalley. This letter is short on specifics and science and long on imprecise metaphors that avoid the key issues. He writes, for example, that &#8220;much like you can&#8217;t make a boy and a girl fall in love with each other simply by pushing them together, you cannot make precise chemistry occur as desired between two molecular objects with simple mechanical motion…cannot be done simply by mushing two molecular objects together.&#8221; He again acknowledges that enzymes do in fact accomplish this, but refuses to acknowledge that such reactions could take place outside of a biological-like system: &#8220;this is why I led you…..to talk about real chemistry with real enzymes….any such system will need a liquid medium. For the enzymes we know about, that liquid will have to be water, and the types of things that can be synthesized with water around cannot be much broader than meat and bone of biology.&#8221;</p>
<p>I can understand Drexler&#8217;s frustration in this debate because I have had many critics who do not bother to read or understand the data and arguments that I have presented for my own conceptions of future technologies. Smalley&#8217;s argument is of the form &#8220;we don&#8217;t have &#8216;X&#8217; today, therefore &#8216;X&#8217; is impossible.&#8221; I encounter this class of argument repeatedly in the area of artificial intelligence. Critics will cite the limitations of today&#8217;s systems as proof that such limitations are inherent and can never be overcome. These critics ignore the extensive list of contemporary examples of AI (for example, airplanes and weapons that fly and guide themselves, automated diagnosis of electrocardiograms and blood-cell images, automated detection of credit card fraud, automated investment programs that routinely outperform human analysts, telephone-based natural-language response systems, and hundreds of others) that represent working systems, commercially available today, which were only research programs a decade ago.</p>
<p>Those of us who attempt to project into the future based on well-grounded methodologies are at a disadvantage. Certain future realities may be inevitable, but they are not yet manifest, so they are easy to deny. There was a small body of thought at the beginning of the 20<sup>th</sup> century that heavier-than-air flight was feasible, but mainstream skeptics could simply point out that if it were so feasible, why had it never been demonstrated? In 1990, Kasparov scoffed at the idea that machine chess players could ever possibly defeat him. When it happened in 1997, observers were quick to belittle the achievement by dismissing the importance of chess.</p>
<p>Smalley reveals at least part of his motives at the end of his most recent letter when he writes:</p>
<p>&#8220;A few weeks ago I gave a talk on nanotechnology and energy titled &#8216;Be a Scientist, Save the World&#8217; to about 700 middle and high school students in the Spring Branch ISD, a large public school system here in the Houston area. Leading up to my visit the students were asked to write an essay on &#8216;Why I Am a Nanogeek.&#8217; Hundreds responded, and I had the privilege of reading the top 30 essays, picking my favorite top 5. Of the essays I read, nearly half assumed that self-replicating nanobots were possible, and most were deeply worried about what would happen in their future as these nanobots spread around the world. I did what I could to allay their fears, but there is no question that many of these youngsters have been told a bedtime story that is deeply troubling. You and people around you have scared our children.&#8221;</p>
<p>I would point out to Smalley that earlier critics also expressed skepticism that either worldwide communication networks or software viruses that would spread across them were feasible. Today, we have both the benefits of the former and the damage from the latter. However, along with the danger of software viruses, a technological immune system has also emerged. While it does not completely protect us, few people would advocate eliminating the Internet in order to eliminate software viruses. We are obtaining far more benefit than damage from this latest example of intertwined promise and peril.</p>
<p>Smalley&#8217;s approach to reassuring the public about the potential abuse of this future technology is not the right strategy. Denying the feasibility of both the promise and the peril of molecular assembly will ultimately backfire and fail to guide research in the needed constructive direction. By the 2020s, molecular assembly will provide tools to effectively combat poverty, clean up our environment, overcome disease, extend human longevity, and pursue many other worthwhile ends.</p>
<p>Like every other technology that humankind has created, it can also be used to amplify and enable our destructive side. It is important that we approach this technology in a knowledgeable manner to gain the profound benefits it promises, while avoiding its dangers. Drexler and his colleagues at the Foresight Institute have been in the forefront of developing the ethical guidelines and design considerations needed to guide the technology in a safe and constructive direction.</p>
<p>Denying the feasibility of an impending technological transformation is a short-sighted strategy.</p>
<h2>Notes</h2>
<p><a name="_edn1" href="#_ednref1"></a>[1] <a href="http://pubs.acs.org/cen/">Chemical &amp; Engineering News</a>, December 1, 2003</p>
<p><a name="_edn2" href="#_ednref2"></a>[2] Ralph C. Merkle, &#8220;A proposed &#8216;metabolism&#8217; for a hydrocarbon assembler,&#8221; <em>Nanotechnology</em> <strong>8</strong> (1997): 149-162; http://www.zyvex.com/nanotech/hydroCarbonMetabolism.html.</p>
<p><a name="_edn3" href="#_ednref3"></a>[3] T.R. Kelly, H. De Silva, R.A. Silva, &#8220;Unidirectional rotary motion in a molecular system,&#8221; <em>Nature</em> <strong>401</strong> (September 9, 1999): 150-152.</p>
<p><a name="_edn4" href="#_ednref4"></a>[4] C.D. Montemagno, G.D. Bachand, &#8220;Constructing nanomechanical devices powered by biomolecular motors,&#8221; <em>Nanotechnology</em> <strong>10</strong> (1999): 225-231; G.D. Bachand, C.D. Montemagno, &#8220;Constructing organic/inorganic NEMS devices powered by biomolecular motors,&#8221; <em>Biomedical Microdevices</em> <strong>2</strong> (2000): 179-184.</p>
<p><a name="_edn5" href="#_ednref5"></a>[5] N. Koumura, R.W. Zijlstra, R.A. van Delden, N. Harada, B.L. Feringa, &#8220;Light-driven monodirectional molecular rotor,&#8221; <em>Nature </em><strong>401</strong> (September 9, 1999): 152-155.</p>
<p><a name="_edn6" href="#_ednref6"></a>[6] Richard E. Smalley, &#8220;Of chemistry, love, and nanobots,&#8221; <em>Scientific American </em><strong>285</strong> (September, 2001): 76-77. http://smalley.rice.edu/rick&#8217;s%20publications/SA285-76.pdf.</p>
<p><a name="_edn7" href="#_ednref7"></a>[7] K. Eric Drexler, <em>Nanosystems: Molecular Machinery, Manufacturing, and Computation</em> (New York: John Wiley &amp; Sons, 1992).</p>
<p><a name="_edn8" href="#_ednref8"></a>[8] See, for example, Charles B. Musgrave, Jason K. Perry, Ralph C. Merkle, William A. Goddard III, &#8220;Theoretical studies of a hydrogen abstraction tool for nanotechnology,&#8221; <em>Nanotechnology</em> <strong>2</strong> (1991): 187-195.</p>
<p><a name="_edn9" href="#_ednref9"></a>[9] Ralph C. Merkle, &#8220;&#8216;That&#8217;s Impossible!&#8217; How good scientists reach bad conclusions,&#8221; http://www.zyvex.com/nanotech/impossible.html; see the equation and explanation on page 3.</p>
<p><a name="_edn10" href="#_ednref10"></a>[10] C.D. Montemagno, G.D. Bachand, &#8220;Constructing nanomechanical devices powered by biomolecular motors,&#8221; <em>Nanotechnology</em> <strong>10</strong> (1999): 225-231.</p>
<p><a name="_edn11" href="#_ednref11"></a>[11] By way of disclosure, the author is an advisor and investor in this company.</p>
<p><a name="_edn12" href="#_ednref12"></a>[12] <a href="http://pubs.acs.org/cen/">Chemical &amp; Engineering News</a>, December 1, 2003</p>
<p><a name="_edn13" href="#_ednref13"></a>[13] A. Zaks, A.M. Klibanov, <em>Science</em> <strong>224</strong> (1984): 1249-1251.</p>
<p><a name="_edn14" href="#_ednref14"></a>[14] &#8220;The apparent simplicity of the water molecule belies the enormous complexity of its interactions with other molecules, including other water molecules&#8221; (A. Soper. 2002. &#8220;Water and ice.&#8221; <em>Science</em> 297: 1288-1289). There is much that is still up for debate, as shown by the numerous articles still being published about this most basic of molecules, H<sub>2</sub>O. For example, D. Klug. 2001. &#8220;Glassy water.&#8221; <em>Science</em> 294:2305-2306; P. Geissler et al. 2001. &#8220;Autoionization in liquid water.&#8221; <em>Science</em> 291(5511):2121-2124; J.K. Gregory et al. 1997. &#8220;The water dipole moment in water clusters.&#8221; <em>Science</em> 275:814-817; and K. Liu et al. 1996. &#8220;Water clusters.&#8221; <em>Science</em> 271:929-933.</p>
<p>A water molecule has a slightly negative end and a slightly positive end, which means water molecules interact with one another to form networks: the partially positive hydrogen atom on one molecule is attracted to the partially negative oxygen on a neighboring molecule (hydrogen bonding). Three-dimensional hexamers involving six molecules are thought to be particularly stable, though none of these clusters lasts longer than a few picoseconds.</p>
<p>The polarity of water results in a number of anomalous properties. One of the best known is that the solid phase (ice) is less dense than the liquid phase: water expands by about 9% on freezing. Due to hydrogen bonding, water also has a higher-than-expected boiling point.</p>
<p><a name="_edn15" href="#_ednref15"></a>[15] <a href="http://www.foresight.org/SciAmDebate/SciAmResponse.html">http://www.foresight.org/SciAmDebate/SciAmResponse.html</a>, <a href="http://www.imm.org/SciAmDebate2/smalley.html">http://www.imm.org/SciAmDebate2/smalley.html</a>, <a href="http://www.rfreitas.com/Nano/DimerTool.htm">http://www.rfreitas.com/Nano/DimerTool.htm</a>.</p>
<p><a name="_edn16" href="#_ednref16"></a>[16] The analysis of the hydrogen abstraction tool has involved many people, including: Donald W. Brenner, Richard J. Colton, K. Eric Drexler, William A. Goddard, III, J. A. Harrison, Jason K. Perry, Ralph C. Merkle, Charles B. Musgrave, O. A. Shenderova, Susan B. Sinnott, and Carter T. White.</p>
<p><a name="_edn17" href="#_ednref17"></a>[17] Ralph C. Merkle, &#8220;A proposed &#8216;metabolism&#8217; for a hydrocarbon assembler,&#8221; <em>Nanotechnology</em> <strong>8</strong>(1997):149-162; http://www.zyvex.com/nanotech/hydroCarbonMetabolism.html</p>
<p><a name="_edn18" href="#_ednref18"></a>[18] Wilson Ho, Hyojune Lee, &#8220;Single bond formation and characterization with a scanning tunneling microscope,&#8221; <em>Science</em> <strong>286</strong>(26 November 1999):1719-1722; <a href="http://www.physics.uci.edu/~wilsonho/stm-iets.html">http://www.physics.uci.edu/~wilsonho/stm-iets.html</a>.</p>
<p>K. Eric Drexler, <em>Nanosystems: Molecular Machinery, Manufacturing, and Computation</em>, John Wiley &amp; Sons, New York, 1992, Chapter 8.</p>
<p>Ralph C. Merkle, &#8220;A proposed &#8216;metabolism&#8217; for a hydrocarbon assembler,&#8221; <em>Nanotechnology</em> <strong>8</strong>(1997):149-162; http://www.zyvex.com/nanotech/hydroCarbonMetabolism.html.</p>
<p>Charles B. Musgrave, Jason K. Perry, Ralph C. Merkle, William A. Goddard III, &#8220;Theoretical studies of a hydrogen abstraction tool for nanotechnology,&#8221; <em>Nanotechnology</em> <strong>2</strong>(1991):187-195; http://www.zyvex.com/nanotech/Habs/Habs.html.</p>
<p>Michael Page, Donald W. Brenner, &#8220;Hydrogen abstraction from a diamond surface: <em>Ab initio</em> quantum chemical study using constrained isobutane as a model,&#8221; <em>J. Am. Chem. Soc.</em> <strong>113</strong>(1991):3270-3274.</p>
<p>Susan B. Sinnott, Richard J. Colton, Carter T. White, Donald W. Brenner, &#8220;Surface patterning by atomically-controlled chemical forces: molecular dynamics simulations,&#8221; <em>Surf. Sci.</em> <strong>316</strong>(1994):L1055-L1060.</p>
<p>D.W. Brenner, S.B. Sinnott, J.A. Harrison, O.A. Shenderova, &#8220;Simulated engineering of nanostructures,&#8221; <em>Nanotechnology</em> <strong>7</strong>(1996):161-167; http://www.zyvex.com/nanotech/nano4/brennerPaper.pdf</p>
<p>S.P. Walch, W.A. Goddard III, R.C. Merkle, &#8220;Theoretical studies of reactions on diamond surfaces,&#8221; Fifth Foresight Conference on Molecular Nanotechnology, 1997; http://www.foresight.org/Conferences/MNT05/Abstracts/Walcabst.html.</p>
<p>Stephen P. Walch, Ralph C. Merkle, &#8220;Theoretical studies of diamond mechanosynthesis reactions,&#8221; <em>Nanotechnology</em> <strong>9</strong>(1998):285-296.</p>
<p>Fedor N. Dzegilenko, Deepak Srivastava, Subhash Saini, &#8220;Simulations of carbon nanotube tip assisted mechano-chemical reactions on a diamond surface,&#8221; <em>Nanotechnology</em> <strong>9</strong>(December 1998):325-330.</p>
<p>J.W. Lyding, K. Hess, G.C. Abeln, D.S. Thompson, J.S. Moore, M.C. Hersam, E.T. Foley, J. Lee, Z. Chen, S.T. Hwang, H. Choi, P.H. Avouris, I.C. Kizilyalli, &#8220;UHV-STM nanofabrication and hydrogen/deuterium desorption from silicon surfaces: implications for CMOS technology,&#8221; <em>Appl. Surf. Sci.</em> <strong>130</strong>(1998):221-230.</p>
<p>E.T. Foley, A.F. Kam, J.W. Lyding, P.H. Avouris, &#8220;Cryogenic UHV-STM study of hydrogen and deuterium desorption from Si(100),&#8221; <em>Phys. Rev. Lett.</em> <strong>80</strong>(1998):1336-1339.</p>
<p>M.C. Hersam, G.C. Abeln, J.W. Lyding, &#8220;An approach for efficiently locating and electrically contacting nanostructures fabricated via UHV-STM lithography on Si(100),&#8221; <em>Microelectronic Engineering</em> <strong>47</strong>(1999):235-.</p>
<p>L.J. Lauhon, W. Ho, &#8220;Inducing and observing the abstraction of a single hydrogen atom in bimolecular reaction with a scanning tunneling microscope,&#8221; <em>J. Phys. Chem.</em> <strong>105</strong>(2000):3987-3992.</p>
<p>Ralph C. Merkle, Robert A. Freitas Jr., &#8220;Theoretical analysis of a carbon-carbon dimer placement tool for diamond mechanosynthesis,&#8221; <em>J. Nanosci. Nanotechnol.</em> <strong>3</strong>(August 2003):319-324; http://www.rfreitas.com/Nano/JNNDimerTool.pdf.</p>
<p>Jingping Peng, Robert A. Freitas Jr., Ralph C. Merkle, &#8220;Theoretical analysis of diamond mechanosynthesis. Part I. Stability of C2 mediated growth of nanocrystalline diamond C(110) surface,&#8221; <em>J. Comp. Theor. Nanosci.</em> <strong>1</strong>(March 2004). In press.</p>
<p>David J. Mann, Jingping Peng, Robert A. Freitas Jr., Ralph C. Merkle, &#8220;Theoretical analysis of diamond mechanosynthesis. Part II. C2 mediated growth of diamond C(110) surface via Si/Ge-triadamantane dimer placement tools,&#8221; <em>J. Comp. Theor. Nanosci.</em> <strong>1</strong>(March 2004). In press.</p>
<p><em>© 2003 KurzweilAI.net</em></p>
]]></content:encoded>
			<wfw:commentRss>https://www.writingsbyraykurzweil.com/the-drexler-smalley-debate-on-molecular-assembly/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
