The Singularity is Near: When Humans Transcend Biology by Ray Kurzweil

Memorable quotes
I am often reminded of Arthur C. Clarke’s third law, that “any sufficiently advanced technology is indistinguishable from magic.”
Everyone takes the limits of his own vision for the limits of the world – Arthur Schopenhauer
We’ll make another twenty years of progress in just fourteen years (by 2014), and then do the same again in only seven years. To express this another way, we won’t experience one hundred years of technological advance in the twenty-first century; we will witness on the order of twenty thousand years of progress (again, when measured by today’s rate of progress), or about one thousand times greater than what was achieved in the twentieth century.
The evolutionary process of technology improves capacities in an exponential fashion. Innovators seek to improve capabilities by multiples. Innovation is multiplicative, not additive. Technology, like any evolutionary process, builds on itself. This aspect will continue to accelerate when the technology itself takes full control of its own progression in Epoch Five.
The overall rate of adopting new paradigms, which parallels the rate of technological progress, is currently doubling every decade. That is, the time to adopt new paradigms is going down by half each decade. At this rate, technological progress in the twenty-first century will be equivalent (in the linear view) to two hundred centuries of progress (at the rate of progress in 2000).
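The arithmetic behind “two hundred centuries of progress” can be checked with a back-of-envelope sketch. This is only an illustration of the quoted figures, not the book’s own calculation: it credits each decade of the century at the rate of progress reached by that decade’s end, with the rate doubling every decade.

```python
# Back-of-envelope check of the paradigm-shift arithmetic quoted above.
# Assumption (mine, for illustration): each decade counts at its closing,
# doubled rate of progress, measured in year-2000-equivalent years.

def century_progress(decades: int = 10) -> int:
    """Years of year-2000-equivalent progress over `decades` decades,
    counting each decade at its closing (doubled) rate."""
    return sum(10 * 2 ** i for i in range(1, decades + 1))

# 10 * (2 + 4 + ... + 1024) = 20,460 -- on the order of the
# "two hundred centuries" (twenty thousand years) quoted above.
print(century_progress())   # 20460
```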
Civilization advances by extending the number of important operations which we can perform without thinking about them. – Alfred North Whitehead
All of the technology trend charts in this chapter represent massive deflation. There are many examples of the impact of these escalating efficiencies. BP Amoco’s cost for finding oil in 2000 was less than one dollar per barrel, down from nearly ten dollars in 1991. Processing an Internet transaction costs a bank one penny, compared to more than one dollar using a teller. It is important to point out that a key implication of nanotechnology is that it will bring the economics of software to hardware, that is, to physical products. Software prices are deflating even more quickly than those of hardware.
The current disadvantages of Web-based commerce (for example, limitations in the ability to directly interact with products and the frequent frustrations of interacting with inflexible menus and forms instead of human personnel) will gradually dissolve as the trends move robustly in favor of the electronic world. By the end of this decade, computers will disappear as distinct physical objects, with displays built into our eyeglasses and electronics woven into our clothing, providing full-immersion visual virtual reality. Thus, “going to a Web site” will mean entering a virtual-reality environment, at least for the visual and auditory senses, where we can directly interact with products and people, both real and simulated.
By 2050, one thousand dollars of computing will exceed the processing power of all human brains on Earth.
And, of course, human civilization will not be limited to computing with just a few pounds of matter. In chapter 6, we’ll examine the computational potential of an Earth-size planet and computers on the scale of solar systems, of galaxies, and of the entire known universe. As we will see, the amount of time required for our human civilization to achieve scales of computation and intelligence that go beyond our planet and into the universe may be a lot shorter than you might think. I set the date for the Singularity, representing a profound and disruptive transformation in human capability, as 2045. The nonbiological intelligence created in that year will be one billion times more powerful than all human intelligence today.
Most computers today are all digital and perform one (or perhaps a few) computations at a time at extremely high speed. In contrast, the human brain combines digital and analog methods but performs most computations in the analog (continuous) domain, using neurotransmitters and related mechanisms. Although these neurons execute calculations at extremely slow speeds (typically two hundred transactions per second), the brain as a whole is massively parallel: most of its neurons work at the same time, resulting in up to one hundred trillion computations being carried out simultaneously. The massive parallelism of the human brain is the key to its pattern-recognition ability, which is one of the pillars of our species’ thinking.
But it’s massively parallel. The brain has on the order of one hundred trillion interneuronal connections, each potentially processing information simultaneously. These two factors (slow cycle time and massive parallelism) result in a certain level of computational capacity for the brain, as we discussed earlier. Today our largest supercomputers are approaching this range. The leading supercomputers (including those used by the most popular search engines) measure over 10^14 cps, which matches the lower range of the estimates I discussed in chapter 3 for functional simulation. It is not necessary, however, to use the same granularity of parallel processing as the brain itself so long as we match the overall computational speed and memory capacity needed and otherwise simulate the brain’s massively parallel architecture.
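The figures quoted above can be multiplied out directly. This is only a consistency check on the numbers given in the passage, not an independent estimate of brain capacity.

```python
# Consistency check on the figures quoted above (illustration only):
# ~10^14 interneuronal connections, each computing at ~200 calculations
# per second, versus a leading supercomputer at ~10^14 cps.

INTERNEURONAL_CONNECTIONS = 1e14   # "on the order of one hundred trillion"
CALCS_PER_SECOND_EACH = 200        # "typically two hundred transactions per second"

brain_cps = INTERNEURONAL_CONNECTIONS * CALCS_PER_SECOND_EACH
supercomputer_cps = 1e14           # "over 10^14 cps"

print(f"brain estimate: {brain_cps:.0e} cps")            # 2e+16 cps
print(f"gap to supercomputer: {brain_cps / supercomputer_cps:.0f}x")
```

On these numbers the brain comes out around 2 × 10^16 cps, a factor of a few hundred above the 10^14 cps supercomputers mentioned, consistent with 10^14 cps being at the lower range of the chapter 3 estimates.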
It was a student of Adrian, Horace Barlow, who contributed another lasting insight, “trigger features” in neurons, with the discovery that the retinas of frogs and rabbits have single neurons that trigger on “seeing” specific shapes, directions, or velocities. In other words, perception involves a series of stages, with each layer of neurons recognizing more sophisticated features of the image.
Until very recently neuroscience was characterized by overly simplistic models limited by the crudeness of our sensing and scanning tools. This led many observers to doubt whether our thinking processes were inherently capable of understanding themselves. Peter D. Kramer writes, “If the mind were simple enough for us to understand, we would be too simple to understand it.”
Our idea was that you actually don’t need to make many new synapses and get rid of old ones when you learn, memorize. You just need to modify the strength of the preexisting synapses for short-term learning and memory. However, it’s likely that a few synapses are made or eliminated to achieve long-term memory.” The reason memories can remain intact even if three quarters of the connections have disappeared is that the coding method used appears to have properties similar to those of a hologram. In a hologram, information is stored in a diffuse pattern throughout an extensive region. If you destroy three quarters of the hologram, the entire image remains intact, although with only one quarter of the resolution. Research by Pentti Kanerva, a neuroscientist at Redwood Neuroscience Institute, supports the idea that memories are dynamically distributed throughout a region of neurons. This explains why older memories persist but nonetheless appear to “fade,” because their resolution has diminished.
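The holographic property described above can be illustrated with a toy model. This is only a sketch under my own assumptions (redundant noisy storage plus majority-vote recall, standing in for whatever coding the brain actually uses): information spread diffusely across many units survives the loss of three quarters of them, just with a noisier, lower-resolution readout.

```python
# Toy illustration (my assumptions, not the brain's actual coding) of the
# holographic property: distributed, redundant storage degrades gracefully.
import random

random.seed(0)

def store(bits, copies=100, noise=0.1):
    """Store each bit redundantly across `copies` units, each of which
    records it with probability `noise` of being flipped."""
    return [[b ^ (random.random() < noise) for _ in range(copies)]
            for b in bits]

def recall(units):
    """Reconstruct each bit by majority vote over its surviving units."""
    return [int(sum(u) * 2 > len(u)) for u in units]

pattern = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
memory = store(pattern)

# Destroy three quarters of the units holding each bit...
damaged = [u[:25] for u in memory]

# ...and the pattern still comes back, just from a noisier
# (lower-"resolution") vote over the surviving quarter.
print(recall(damaged) == pattern)
```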
Moreover, the detailed arrangement of connections and synapses in a given region is a direct product of how extensively that region is used. As brain scanning has attained sufficiently high resolution to detect dendritic-spine growth and the formation of new synapses, we can see our brain grow and adapt to literally follow our thoughts. This gives new shades of meaning to Descartes’ dictum “I think therefore I am.”
An experiment by genetics researchers Fred Gage, G. Kempermann, and Henriette van Praag at the Salk Institute for Biological Studies showed that neurogenesis is actually stimulated by our experience. Moving mice from a sterile, uninteresting cage to a stimulating one approximately doubled the number of dividing cells in their hippocampus regions.
To actually infer the path of the ball in three-dimensional space would require solving difficult simultaneous differential equations. Additional equations would need to be solved to predict the future course of the ball, and more equations to translate these results into what was required of the player’s own movements. How does a young outfielder accomplish all of this in a few seconds with no computer and no training in differential equations? Clearly, he is not solving equations consciously, but how does his brain solve the problem?
But the big feature of human-level intelligence is not what it does when it works but what it does when it’s stuck.
Interestingly, we are able to predict or anticipate our own decisions. Work by physiology professor Benjamin Libet at the University of California at San Francisco shows that neural activity to initiate an action actually occurs about a third of a second before the brain has made the decision to take the action. The implication, according to Libet, is that the decision is really an illusion, that “consciousness is out of the loop.” The cognitive scientist and philosopher Daniel Dennett describes the phenomenon as follows: “The action is originally precipitated in some part of the brain, and off fly the signals to muscles, pausing en route to tell you, the conscious agent, what is going on (but like all good officials letting you, the bumbling president, maintain the illusion that you started it all).”
Inputs from the body (estimated at hundreds of megabits per second), including nerves from the skin, muscles, organs, and other areas, stream into the upper spinal cord. These carry messages about touch, temperature, acid levels (for example, lactic acid in muscles), the movement of food through the gastrointestinal tract, and many other types of information. This data is processed through the brain stem and midbrain. Key cells called Lamina 1 neurons create a map of the body representing its current state, not unlike the displays used by flight controllers to track airplanes. The information then flows through a nut-size region called the posterior ventromedial nucleus (VMpo), which apparently computes complex reactions to bodily states such as “this tastes terrible,” “what a stench,” or “that light touch is stimulating.”
The most important thing is this: To be able at any moment to sacrifice what we are for what we could become.
By the 2030s the nonbiological portion of our intelligence will predominate, and by the 2040s, as I pointed out in chapter 3, the nonbiological portion will be billions of times more capable. Although we are likely to retain the biological portion for a period of time, it will become of increasingly little consequence. So we will have effectively uploaded ourselves, albeit gradually, never quite noticing the transfer. There will be no “old Ray” and “new Ray,” just an increasingly capable Ray. Although I believe that uploading, as in the sudden scan-and-transfer scenario discussed in this section, will be a feature of our future world, it is this gradual but inexorable progression to vastly superior nonbiological thinking that will profoundly transform human civilization.
The first half of the twenty-first century will be characterized by three overlapping revolutions: Genetics, Nanotechnology, and Robotics. These will usher in what I referred to earlier as Epoch Five, the beginning of the Singularity. We are in the early stages of the “G” revolution today. By understanding the information processes underlying life, we are starting to learn to reprogram our biology to achieve the virtual elimination of disease, dramatic expansion of human potential, and radical life extension. Hans Moravec points out, however, that no matter how successfully we fine-tune our DNA-based biology, humans will remain “second-class robots,” meaning that biology will never be able to match what we will be able to engineer once we fully understand biology’s principles of operation.
New land-based robotic telescopes are able to make their own decisions on where to look and how to optimize the likelihood of finding desired phenomena. Called “autonomous, semi-intelligent observatories,” the systems can adjust to the weather, notice items of interest, and decide on their own to track them. They are able to detect very subtle phenomena, such as a star blinking for a nanosecond, which may indicate a small asteroid in the outer regions of our solar system passing in front of the light from that star. One such system, called Moving Object and Transient Event Search System (MOTESS), has identified on its own 180 new asteroids and several comets during its first two years of operation. “We have an intelligent observing system,” explained University of Exeter astronomer Alasdair Allan. “It thinks and reacts for itself, deciding whether something it has discovered is interesting enough to need more observations. If more observations are needed, it just goes ahead and gets them.”
Ascent Technology, founded by Patrick Winston, who directed MIT’s AI Lab from 1972 through 1997, has designed a GA-based system called Smart-Airport Operations Center (SAOC) that can optimize the complex logistics of an airport, such as balancing work assignments of hundreds of employees, making gate and equipment assignments, and managing a myriad of other details. Winston points out that “figuring out ways to optimize a complicated situation is what genetic algorithms do.” SAOC has raised productivity by approximately 30 percent in the airports where it has been implemented.
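The internals of SAOC are not public, but the selection/crossover/mutation loop Winston is referring to can be sketched on a toy objective. Everything here is a hypothetical stand-in: a real airport-logistics objective is vastly more complex than the bit-counting fitness used below.

```python
# Minimal genetic algorithm on a toy objective (maximize the number of
# 1-bits). A sketch of the technique named above, not SAOC's actual code.
import random

random.seed(1)

N_BITS, POP, GENS, MUT = 20, 50, 200, 0.05

def fitness(genome):
    """Toy objective: count of 1-bits (a real system would score a schedule)."""
    return sum(genome)

def crossover(a, b):
    """Single-point crossover: splice two parent genomes at a random cut."""
    cut = random.randrange(1, N_BITS)
    return a[:cut] + b[cut:]

def mutate(genome):
    """Flip each bit independently with probability MUT."""
    return [bit ^ (random.random() < MUT) for bit in genome]

population = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]                 # truncation selection
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(POP - 1)]
    population = [population[0]] + children         # elitism: keep the best

best = max(population, key=fitness)
print(fitness(best))   # approaches the optimum (20) on this toy problem
```

The same loop applies unchanged to harder problems: only `fitness` (and the genome encoding) needs to change, which is why GAs suit messy combinatorial situations like gate and crew assignment.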
A recent trend in software is for AI systems to monitor a complex software system’s performance, recognize malfunctions, and determine the best way to recover automatically without necessarily informing the human user. The idea stems from the realization that as software systems become more complex they will, like humans, never be perfect, and that eliminating all bugs is impossible. As humans, we use the same strategy: we don’t expect to be perfect, but we usually try to recover from inevitable mistakes. “We want to stand this notion of systems management on its head,” says Armando Fox, the head of Stanford University’s Software Infrastructures Group, who is working on what is now called “autonomic computing.” Fox adds, “The system has to be able to set itself up, it has to optimize itself. It has to repair itself, and if something goes wrong, it has to know how to respond to external threats.” IBM, Microsoft, and other software vendors are all developing systems that incorporate autonomic capabilities.
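The self-healing pattern described here can be sketched as a supervisor that detects a component failure and retries with backoff, recovering on its own rather than surfacing every error to the user. All names below are hypothetical; real autonomic systems such as IBM’s are far more elaborate.

```python
# Toy sketch of autonomic recovery (hypothetical names, my assumptions):
# a supervisor monitors a call, detects malfunctions, and recovers
# automatically without informing the caller.
import time

class FlakyService:
    """A stand-in component that malfunctions on its first two calls."""
    def __init__(self):
        self.calls = 0

    def handle(self, request):
        self.calls += 1
        if self.calls <= 2:
            raise RuntimeError("transient malfunction")
        return f"ok: {request}"

def supervise(service, request, max_attempts=5, backoff=0.01):
    """Run the call; on failure, wait (exponential backoff) and retry
    automatically instead of reporting the error upward."""
    for attempt in range(max_attempts):
        try:
            return service.handle(request)
        except RuntimeError:
            time.sleep(backoff * (2 ** attempt))   # self-repair: back off, retry
    raise RuntimeError("could not recover")

print(supervise(FlakyService(), "query"))   # prints "ok: query"
```

The design point matches Fox’s framing: the error handling lives in the management layer, not the user-facing path, so the user sees only the eventual success.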
RAY: Then perhaps our basic disagreement is over the nature of being human. To me, the essence of being human is not our limitations, although we do have many; it’s our ability to reach beyond our limitations. We didn’t stay on the ground. We didn’t even stay on the planet. And we are already not settling for the limitations of our biology.
MOLLY 2004: Like the girl you mentioned who found everything hilarious when the surgeons stimulated a particular spot in her brain?
RAY: Exactly. There are neurological correlates of all of our experiences, sensations, and emotions. Some are localized, whereas others reflect a pattern of activity. In either case we’ll be able to shape and enhance our emotional reactions as part of our virtual-reality experiences.
It is one of the most remarkable things that in all of the biological sciences there is no clue as to the necessity of death. If you say we want to make perpetual motion, we have discovered enough laws as we studied physics to see that it is either absolutely impossible or else the laws are wrong. But there is nothing in biology yet found that indicates the inevitability of death. This suggests to me that it is not at all inevitable and that it is only a matter of time before the biologists discover what it is that is causing us the trouble and that this terrible universal disease or temporariness of the human’s body will be cured. – Richard Feynman
But the parameters above are arguably very high. If we make more conservative assumptions on the difficulty of evolving life and intelligent life in particular, we get a very different outcome. If we assume that 50 percent of the stars have planets (fp = 0.5), that only one tenth of these stars have planets able to sustain life (ne = 0.1, based on the observation that life-supporting conditions are not that prevalent), that on 1 percent of these planets life has actually evolved (fl = 0.01, based on the difficulty of life starting on a planet), that 5 percent of these life-evolving planets have evolved intelligent life (fi = 0.05, based on the very long period of time this took on Earth), that half of these are radio-capable (fc = 0.5), and that the average radio-capable civilization has been broadcasting for ten thousand years (fL = 10^-6), the Drake equation tells us that there is about one (1.25 to be exact) radio-capable civilization in the Milky Way. And we already know of one. In the end, it is difficult to make a strong argument for or against ETI based on this equation. If the Drake formula tells us anything, it is the extreme uncertainty of our estimates. What we do know for now, however, is that the cosmos appears silent, that is, we’ve detected no convincing evidence of ETI transmissions.
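The arithmetic can be checked directly. The passage does not state the number of stars in the Milky Way; roughly 10^11 is a standard figure, and it is the value that reproduces the quoted result of about one civilization.

```python
# Checking the Drake-equation arithmetic in the passage above.
# N_STARS is my assumption (a standard ~10^11 figure); the remaining
# parameters are the conservative values quoted in the text.

N_STARS = 1e11   # stars in the Milky Way (assumed, not stated above)
fp = 0.5         # fraction of stars with planets
ne = 0.1         # fraction of those with a planet able to sustain life
fl = 0.01        # fraction of those on which life actually evolved
fi = 0.05        # fraction of those evolving intelligent life
fc = 0.5         # fraction of those that are radio-capable
fL = 1e-6        # broadcasting 10,000 years of a ~10^10-year stellar lifetime

N = N_STARS * fp * ne * fl * fi * fc * fL
print(N)   # ~1.25 -- "about one (1.25 to be exact)" civilization
```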
The conclusion I reach is that it is likely (although not certain) that there are no such other civilizations. In other words, we are in the lead. That’s right, our humble civilization with its pickup trucks, fast food, and persistent conflicts (and computation!) is in the lead in terms of the creation of complexity and order in the universe. Now how can that be? Isn’t this extremely unlikely, given the sheer number of likely inhabited planets? Indeed it is very unlikely. But equally unlikely is the existence of our universe, with its set of laws of physics and related physical constants, so exquisitely, precisely what is needed for the evolution of life to be possible.
In my view the purpose of life, and of our lives, is to create and appreciate ever-greater knowledge, to move toward greater “order.” As I discussed in chapter 2, increasing order usually means increasing complexity, but sometimes a profound insight will increase order while reducing complexity.
The ongoing acceleration of many intertwined technologies produces roads paved with gold. (I use the plural here because technology is clearly not a single path.) In a competitive environment it is an economic imperative to go down these roads. Relinquishing technological advancement would be economic suicide for individuals, companies, and nations.
But the battle concerning software viruses and the panoply of software pathogens will never end. We are becoming increasingly reliant on mission-critical software systems, and the sophistication and potential destructiveness of self-replicating software weapons will continue to escalate.
As I discussed earlier, nanoengineered solar panels will be able to meet our energy needs in a distributed, renewable, and clean fashion. Ultimately technology along these lines could power everything from our cell phones to our cars and homes. These types of decentralized energy technologies would not be subject to disaster or disruption. As these technologies develop, our need for aggregating people in large buildings and cities will diminish, and people will spread out, living where they want and gathering together in virtual reality.
To argue that one piece of structured water or one quantum coherence is a necessary detail in the functional description of the brain would clearly be ludicrous. But if, in every cell, molecules derive systematic functionality from these submolecular processes, if these processes are used all the time, all over the brain, to reflect, record and propagate spatiotemporal correlations of molecular fluctuations, to enhance or diminish the probabilities and specificities of reactions, then we have a situation qualitatively different from the logic gate. At one level he is disputing the simplistic models of neurons and interneuronal connections used in many neural-net projects. Brain-region simulations don’t use these simplified models, however, but rather apply realistic mathematical models based on the results from brain reverse engineering. The real point that Bell is making is that the brain is immensely complicated, with the consequent implication that it will therefore be very difficult to understand, model, and simulate its functionality.
Now when you say that a snail may be conscious, I think what you are saying is the following: that we may discover a certain neurophysiological basis for consciousness (call it “x”) in humans such that when this basis was present humans were conscious, and when it was not present humans were not conscious. So we would presumably have an objectively measurable basis for consciousness. And then if we found that in a snail, we could conclude that it was conscious. But this inferential conclusion is just a strong suggestion, it is not a proof of subjective experience on the snail’s part. It may be that humans are conscious because they have “x” as well as some other quality that essentially all humans share, call this “y.” The “y” may have to do with a human’s level of complexity or something having to do with the way we are organized, or with the quantum properties of our microtubules (although this may be part of “x”), or something else entirely. The snail has “x” but doesn’t have “y” and so it may not be conscious.
The majority of Americans will not simply sit still while some elite strips off their personalities and uploads themselves into their cyberspace paradise. They will have something to say about that. There will be vehement debate about that in this country. – Leon Fuerth, former national security adviser to Vice President Al Gore, at the 2002 Foresight Conference
With the reverse engineering of the human brain we will be able to apply the parallel, self-organizing, chaotic algorithms of human intelligence to enormously powerful computational substrates. This intelligence will then be in a position to improve its own design, both hardware and software, in a rapidly accelerating iterative process.
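The “rapidly accelerating iterative process” can be made concrete with a toy model. This is purely an illustration under my own assumptions, not a claim about real systems: suppose each redesign cycle doubles capability and, because the designer is now more capable, the next cycle takes half as long. The cycle times then form a geometric series, so arbitrarily large improvement completes in bounded total time.

```python
# Toy model (my assumptions, for illustration only) of recursive
# self-improvement: each cycle doubles capability and halves the
# duration of the next cycle.

def time_to_reach(target_capability, c0=1.0, t0=1.0):
    """Elapsed time until capability >= target under the toy model."""
    capability, cycle_time, elapsed = c0, t0, 0.0
    while capability < target_capability:
        elapsed += cycle_time
        capability *= 2       # each cycle doubles capability...
        cycle_time /= 2       # ...and halves the next cycle's duration
    return elapsed

# Even a billion-fold improvement finishes in under 2*t0, since
# t0 + t0/2 + t0/4 + ... < 2*t0 (a convergent geometric series).
print(time_to_reach(1e9))
```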
We are the last. The last generation to be unaugmented. The last generation to be intellectually alone. The last generation to be limited by our bodies. We are the first. The first generation to be augmented. The first generation to be intellectually together. The first generation to be limited only by our imaginations. We stand both before and after, balancing on the razor edge of the Event Horizon of the Singularity. That this sublime juxtapositional tautology has gone unnoticed until now is itself remarkable. We’re so exquisitely privileged to be living in this time, to be born right on the precipice of the greatest paradigm shift in human history, the only thing that approaches the importance of that reality is finding like minds that realize the same, and being able to make some connection with them.