How To Create A Mind (book review)

How To Create a Mind
Ray Kurzweil, 2012

In this, his fourth book on the subject of computers, mind, and the future, Ray Kurzweil is clear on what he is trying to do: “find an algorithm that would turn a computer into an entity that is equivalent to a human brain.”[1]

As Kurzweil tells it, he became a technology forecaster in order to become a more successful inventor. Inventors often miss the mark, he says, by bringing an item to market before technology can support it, or after others have marketed the invention. The trick is to look ahead and see where technology is heading, then plan a development schedule so your invention is ready at just the right time.

Immersing himself in technology curves in the 1980’s, he came up with the theory that he eventually called “accelerating returns,” the idea that technological progress speeds up with each new capacity. We’re familiar with this in the specific case of Moore’s Law, an exponential change (chip density doubling over a fixed period of time). During the last decade the idea of exponential change has been accepted by many and extended to other fields, as well as backward in time to physical and biological evolution.

We should be careful to distinguish exponential change or acceleration from the general belief that the times are out of control, or that we are bombarded by too much information, or that we can’t understand the technology around us, or that the nature of thinking is changing with the introduction of television, the web, and communication devices. Each of these views is partly valid, but each is different from the acceleration of technology as a whole.

In his first three books (The Age of Intelligent Machines, The Age of Spiritual Machines, and The Singularity Is Near) Kurzweil presented wide-ranging, extensively documented, graphically-illustrated and often radical predictions on the near and far future, based on extrapolations of technology. Most are based on trends in information science and computers, which are well-documented and the most reliable. One can pick apart his specific predictions, but in a detailed review of them that he made in 2010, they hold up well.

Kurzweil has always been interested in the brain, mind, consciousness, artificial intelligence, and specific related fields like text, speech, and language recognition. In The Age of Spiritual Machines he devotes thirty pages to the topic of “building new brains,” arguing that it is possible, that computer technologies are close to having an equivalent capacity to the human brain, and refuting common arguments that building an artificial brain is impossible. Still, his previous books offered a general vision for creating an artificial brain, but not a roadmap or theory.

In his latest book, How To Create A Mind, Kurzweil finally lays out the roadmap. Here he is more focused than in the previous three titles. In many ways this book is less a vision and more a project plan – and perhaps not coincidentally, five weeks after the book was released he was hired as a director of engineering at Google, a company with ten billion dollars in profit per year and 53,000 employees. If nothing else, this indicates that the considerable brainpower at Google takes the roadmap in How To Create A Mind seriously, and they are willing to bet their company on that direction.

How To Create A Mind argues for what Kurzweil calls the “pattern recognition theory of mind” (PRTM). In a nutshell, PRTM hypothesizes that mind and thought are primarily a matter of recognizing patterns, and that this is done through hierarchically-linked modules (“recognizers”) in the brain. Modules arrange themselves through learning, in a process mathematically similar to hierarchical hidden Markov models (HHMMs), wherein one pattern recognizer sums the input from those below it, then sends signals up to those above. Signals are also sent down the hierarchy, in a predictive mode, so that if a pattern looks likely, it lowers the threshold for activation of the input modules. In his estimation there are about 300 million of these pattern recognizers.

The hierarchy of pattern recognizers is conceptual, not spatial. Each pattern recognizer sends its signal to a level that is conceptually higher than itself – for example, one that recognizes a letter sends it to one that recognizes words, then sentences, then meaning, and so on up the scale. There is no final level, which is the reason that human thought can climb the ladder of abstraction as far as we like.
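To make the mechanics concrete, here is a toy sketch of my own (not code from the book): each recognizer sums its inputs against a threshold, and a top-down prediction lowers that threshold so weaker evidence can still trigger recognition. All names and numbers below are invented for illustration.

```python
# A toy PRTM-style recognizer (my illustration, not Kurzweil's code).
# It averages its inputs and fires when they cross a threshold; a top-down
# "prediction" from a higher level lowers that threshold.

class Recognizer:
    def __init__(self, name, threshold=0.9):
        self.name = name
        self.base_threshold = threshold
        self.expected = False  # set True by a higher-level prediction

    def fire(self, inputs):
        # Top-down expectation lowers the bar for recognition.
        threshold = self.base_threshold * (0.5 if self.expected else 1.0)
        return sum(inputs) / len(inputs) >= threshold

# A letter-level recognizer feeding a word-level one.
letter_r = Recognizer("letter-A")

# With weak evidence (0.6) the letter is not recognized...
assert not letter_r.fire([0.6])

# ...but if the word level predicts an 'A' comes next, the threshold drops
# and the same weak evidence now triggers recognition.
letter_r.expected = True
assert letter_r.fire([0.6])
```

The only point of the sketch is the predictive mode: expectation from above makes a lower module easier to fire.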

Even though input to a single module is multi-dimensional (incoming signals may represent many different aspects or dimensions related to the concept of the module), multiple dimensions can be mathematically represented by a one-dimensional list. This process, similar to the technique of dimensionality reduction used in AI, creates efficiency in computation,[2] and throughout the book he characterizes pattern recognizers as lists. In this context he mentions the LISP programming language,[3] which has often been noted in discussions of artificial intelligence and language, although he notes limitations of that particular language for modeling the brain.
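As a small illustration of the list idea (my own example, with hypothetical feature names): fix an ordering of the dimensions once, and every multi-dimensional pattern becomes a flat, one-dimensional list.

```python
# Representing a multi-dimensional input as a 1-D list (my illustration).
# The feature names are hypothetical, chosen only to suggest letter recognition.

DIMENSIONS = ["height", "width", "curvature", "stroke_count"]

def to_list(pattern):
    """Flatten a dict of named dimensions into an ordered one-dimensional list."""
    return [pattern[d] for d in DIMENSIONS]

letter_a = {"height": 1.0, "width": 0.8, "curvature": 0.2, "stroke_count": 3}
assert to_list(letter_a) == [1.0, 0.8, 0.2, 3]
```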

The use of lists within lists brings up the concept of recursion. Kurzweil writes, “One list can have a certain category of pattern; the cortex will apply this same list of possible changes to another pattern. That is how we understand such language phenomena as metaphors and similes.” Douglas Hofstadter engages in a 700-page meditation on recursion in his book Gödel, Escher, Bach, building up a philosophy of brains, thought, meaning, and computation from the simple idea of a thing containing itself. A list containing itself, and that list containing itself again, is a prospect of infinite recursion worthy of an evening’s fireside pondering. The possibility that the brain/mind may be fundamentally based upon this quality is intriguing, to say the least.
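A minimal sketch of this idea (mine, not the book’s): one rule list applied recursively to arbitrarily nested patterns, so the same “list of possible changes” carries over from one pattern to another – a crude stand-in for metaphor.

```python
# Applying one rule list recursively to nested patterns (my illustration).
# The rules and patterns are invented; the point is that the same mapping
# works at every depth of the nested lists.

def apply_rules(pattern, rules):
    if isinstance(pattern, list):
        return [apply_rules(p, rules) for p in pattern]  # recurse into sublists
    return rules.get(pattern, pattern)  # substitute if a rule matches

# "Metaphor": a mapping learned for one domain carries over to another pattern.
rules = {"king": "queen", "man": "woman"}
assert apply_rules(["king", ["man", "child"]], rules) == ["queen", ["woman", "child"]]
```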

In PRTM, learning happens simultaneously with recognition in the mind/brain, and on every level (sensory, conceptual, motor) as well. Each of these is a hierarchical pattern, but at a different conceptual level. In terms of the process of learning, Kurzweil asserts “it is difficult to learn too many conceptual levels simultaneously. We can essentially learn one or at most two conceptual levels at a time. Once that learning is relatively stable, we can go on to learn the next level.” Why only one or two levels at a time? That’s not quite spelled out.

This multi-level description of thought and mind provides a coherent explanation of consciousness. In addition, the PRTM does not confine thought to language, a major limitation of some theories. “Our thoughts are not conceived primarily in the elements of language, although since language also exists as hierarchies of patterns in our neocortex, we can have language-based thoughts.”[4] If the stuff of mind, the activity that minds do, is to essentially build patterns upon patterns, and these patterns start with the senses and traverse many levels, then linguistic thought is neither the central nor the final level.

As a side-note, this multi-patterned nature of consciousness/thought meshes with what Ben Goertzel described in terms of the challenge of and possible mechanism for technical thought-transference. If understanding a sentence is spread throughout many pattern recognizers (e.g., how I perceive letters, how I understand rhythm, the associations I have with certain words, my experience of the thing being discussed, my emotions about the topic), then there is no way to “send” a thought without referencing those unique patterns in my brain – in effect, without being able to model my entire brain.

Hence to transmit a thought to another person using technology, there will have to be middleware, some kind of translator. This middleware cannot have a single structure or algorithm (since it will necessarily differ from person to person), but rather it must be capable of translating an incoming thought in terms of the complex structure of patterns in my particular brain/mind. This is similar to the direction software has taken with the Java programming language, whose run-time environment translates the same program into instructions for each particular machine.

The roots of the PRTM[5] arose from Kurzweil’s work with speech recognition and hidden Markov models. “The [HHMM] technique… included a hierarchy of patterns with each higher level being conceptually more abstract than the one below it.” He compares it to the work done by Jeff Hawkins (On Intelligence), but critiques Hawkins’ idea that lists in the brain are all time-based (temporal). While procedural tasks are certainly time-based (Kurzweil illustrates this by noting the difficulty of counting backwards), not all learning or patterns in the mind/brain are necessarily temporal. There must be spatial as well as other dimensions involving not just the senses, but indeed every level of the mind.

This is where Kurzweil could have gone farther, given his mathematical knowledge. The basis of the mind may be reducible via linear programming and sparse coding to one-dimensional lists, and for practical purposes that is certainly feasible – but on a mental and spiritual level it may be more useful to consider every relationship as a dimension, in a kind of Hilbert space of the mind, a Ruckerian infinite mindscape of thought. Certainly the first levels start at identifying simple objects in space and time, then progress to physical cause and effect, then on to language and its nuances, abstract concepts, and (in my opinion) to the most complex thing in the universe: the intricacies of human beings. But why stop there?

What if the mind is, in fact, capable of infinite growth and recursion, so that we can simultaneously rise to more and more complex levels, and at the same time relate those levels back down the entire hierarchy, all the way to basic senses, the body, space and time, all the skills that we learn before the age of five? What then?

In addition to the concept of patterns and hierarchy, Kurzweil argues that hierarchical hidden Markov models “are mathematically very similar to the methods that biology evolved in the form of the neocortex.”[6] More than that, he describes the progress of AI over the last thirty years as naturally settling on these methods because they work – not because of a pre-decided concept or theory.

In considering the cortex itself, Kurzweil cites 2011 findings by Henry Markram[7] which demonstrate that there are “functional neuronal assemblies,” or “synaptic clustering,” which Kurzweil claims are essentially the same as pattern recognizers. This is a significant break with past neurology, in that Hebbian learning (“cells that fire together wire together”) has generally been associated with single neurons. This new view describes the brain as having “assemblies” of perhaps 10,000 neurons, which are the basis for learning. These are functional, however, NOT anatomical – the assemblies (so far) have not been shown to be physically separable.

Along with that, the other new evidence came in a March 2012 study in Science by Van J. Wedeen[8] which demonstrated that the wiring in the brain is regular and gridlike, not the messy tangle that has been assumed for many decades. Progress in scanning resolution made this evidence possible – another example of the second type of scientific revolution described by Freeman Dyson (technology leading to ideas), as opposed to the first type (ideas leading to technology) described by Thomas Kuhn.

The evidence for neuron assemblies and gridlike wiring bolsters the idea that it is feasible to recreate the design architecture of the brain. If modeling the brain required mapping trillions of specific connections (and if that basic architecture differed for every individual), then mapping the brain or creating an equivalent one would be substantially more difficult – perhaps not even remotely feasible for decades, if at all.

Kurzweil reviews the question of where “information” in the brain comes from, noting that although there may be 10¹⁵ connections, there are only 25 megabytes of information in the genome that codes for the brain. Obviously, the level of information in the adult brain did not come from the genome; the vast majority comes through learning. (Though some of it does come from evolution, in the shape of pre-established strengths of certain areas, like the visual or auditory cortex.)

There are sections of Kurzweil’s book on the functions of the amygdala, thalamus, and cerebellum, though these don’t change the essential picture of the mind/brain.

However, the following idea about our capacity for the scope of thought is a radical departure from current beliefs. “Every one of the approximately 300 million pattern recognizers in our neocortex is recognizing and defining a pattern and giving it a name… That symbol [the pattern] in turn then becomes part of another pattern. The recognizers can fire up to 100 times a second, so we have the potential of recognizing up to 30 billion metaphors a second…. It is fair to say that we are indeed recognizing millions of metaphors a second.” [9]

This is a very different view of the mind than the belief that we can only think of one thing at a time, or only hold seven things in the mind at one time. Again, it is valuable to widen our conceptions of what thought itself involves, rather than limit it to the words and sentences that flow through our mind.

In moving to the chapter on building a digital brain (The Biologically-Inspired Digital Neocortex), Kurzweil considers current efforts, such as Markram’s Blue Brain project, Dalrymple’s Nemaload, the Synapse project, etc. Of note is the “patch-clamp robot” developed by Markram’s team, which appears to have made a leap forward in both speed and resolution. “They are measuring the specific ion channels, neurotransmitters, and enzymes that are responsible for the electrochemical activity within each neuron.”[10] There is a lot of distance to go in terms of simulation and modeling. Even Kurzweil does not expect a Blue Brain simulation to be able to run until the early 2020’s, and non-destructive scanning until the 2040’s.

The next section goes over the history of neural nets and the approach known as connectionism, noting that the thorny problem of “invariance” which halted most work in the 80’s (how to have a system recognize something as the same across different instances – for example, a letter written in different scripts) has mostly been solved through vector quantization.[11] Then he reviews hierarchical hidden Markov models, which provide the mathematical support for linking “pattern recognizers” in a dynamic unsupervised learning process (which was not done in early neural nets; they were hard-coded). The use of HHMMs has led to the great advances in speech, text, and language recognition and synthesis by enabling both unsupervised and semi-supervised learning. These two are, in fact, how human beings learn; we get cues from other people, and at the same time synthesize knowledge from a mass of perceptions and thoughts. (This is also how Siri and Google Translate work – starting with a few basic rules and then letting the systems “learn” from use.)[12] “Today, the HHMM together with its mathematical cousins makes up a major portion of the world of AI.”[13] Finally, genetic algorithms are used to set the initial “God parameters” of a system, analogous to what evolution did with the brain over millions of years.
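For readers who want the flavor of the underlying math, here is a minimal forward-algorithm sketch for a plain (non-hierarchical) hidden Markov model – the building block of the HHMM. The states, probabilities, and observations are invented purely for illustration.

```python
# Forward algorithm for a discrete HMM (my toy example, not from the book).
# It computes the total probability of an observation sequence by summing
# over all hidden-state paths, one time step at a time.

def forward(obs, states, start_p, trans_p, emit_p):
    """Probability of an observation sequence under the HMM."""
    # Initialize with the first observation.
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for o in obs[1:]:
        # Each new alpha sums over the previous states, then emits.
        alpha = {
            s: sum(alpha[prev] * trans_p[prev][s] for prev in states) * emit_p[s][o]
            for s in states
        }
    return sum(alpha.values())

states = ["vowel", "consonant"]
start_p = {"vowel": 0.5, "consonant": 0.5}
trans_p = {"vowel": {"vowel": 0.3, "consonant": 0.7},
           "consonant": {"vowel": 0.6, "consonant": 0.4}}
emit_p = {"vowel": {"a": 0.8, "t": 0.2},
          "consonant": {"a": 0.1, "t": 0.9}}

p = forward(["t", "a"], states, start_p, trans_p, emit_p)
assert abs(p - 0.265) < 1e-9  # hand-checked for this tiny model
```

The hierarchical version stacks such models, with each level’s outputs serving as the observations of the level above – which is what lets the learning be unsupervised rather than hard-coded.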

Describing IBM’s Watson, Kurzweil notes that its core, Watson’s Unstructured Information Management Architecture (UIMA), “goes substantially beyond earlier systems… in that its individual systems can contribute to a result without necessarily coming up with a final answer. It is sufficient if a subsystem helps narrow down the solution. UIMA is also able to compute how much confidence it has…”[14] And in a direct answer to Searle’s philosophical Chinese Room problem, he says that “Watson’s ‘understanding’ of language cannot be found in UIMA alone but rather is distributed across all of its many components…”[15]

This is both an accurate description of human understanding of language (and agrees with the school of thought that describes language as body-based or metaphor-based) and a refutation of Searle. Language is a function of the whole system, not a philosophically imaginary homunculus acting at the lowest physical level (bits or neurons, take your pick).

In the final section of this chapter, Kurzweil outlines his strategy for how to build a mind,[16] what might be seen as a concise design specification. In addition to the elements described above, he has two intriguing suggestions. One is that such a mind should have “a critical thinking module, which would perform a continual background scan of all of the existing patterns, reviewing their compatibility with the other patterns (ideas) in the software neocortex.” If, in so doing, it found contradictions, it “would find an idea at a higher conceptual level that resolves the apparent contradiction by providing a perspective that explains each idea.” In other words, thesis/antithesis/synthesis, the process of great minds from Hegel to Goethe to Aurobindo.

When building a mind, Kurzweil would “provide a module that identifies open questions in every discipline. As another background task, it would search for solutions to them in other disparate areas of knowledge.”[17] A good regimen for the broad-based thinker, for the person who wants to discover the truth. And a good argument for generalist skill, experience, and knowledge.

In the chapter “The Brain As Computer” Kurzweil examines two questions: what are the fundamental ideas inherent in building a digital brain? And, what technology is necessary to achieve it? For the core ideas, he identifies four “key concepts that underlie the universality and feasibility of computation and its applicability to our thinking”[18] from information science. First is Shannon’s theory of communication, or more specifically, the method of redundancy by which information can be communicated reliably through a noisy channel. Since mental activity (or computation) requires communication, this is the foundation for all effective computer technology. Computer science and technology would not have progressed at all without reliable communication, since it is pervasive at every level, from bit registers within memory up to system-to-system transactions.
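The simplest possible instance of Shannon’s idea, sketched in my own toy code: send each bit three times and decode by majority vote, so any single flipped copy of a bit is corrected at the receiver.

```python
# A triple-repetition code (my illustration of Shannon-style redundancy).
# Each bit is transmitted three times; majority vote recovers the original
# even if one of the three copies is flipped by channel noise.

def encode(bits):
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(channel_bits):
    triples = [channel_bits[i:i + 3] for i in range(0, len(channel_bits), 3)]
    return [1 if sum(t) >= 2 else 0 for t in triples]

message = [1, 0, 1, 1]
sent = encode(message)

# Deterministic "noise" for the example: flip one copy of the second bit.
sent[4] ^= 1

assert decode(sent) == message  # the redundancy corrects the error
```

Real systems use far more efficient codes than brute repetition, but the principle – spend extra bits to survive noise – is the same one that makes every layer of computing reliable.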

Second is Alan Turing’s proof of the universality of computation (the Church-Turing Thesis), which showed that if a problem can be solved by any algorithm, it can be solved by a Turing machine. In the strong interpretation, the brain is subject to natural law; computation is done by machines according to natural laws; therefore “the information-processing ability [of the brain] cannot exceed that of a machine (and therefore that of a Turing machine).”[19] This effectively rules out magic, mysteriously unknowable quantum processes, and other attempts to put mind outside of science.

The third is John von Neumann’s actual design for a digital computer – the effective design for all computers since – from his 1945 report on the EDVAC. The most important elements in this were the concept of a stored program and random access memory. John von Neumann and Turing, arguably the architects of our current information age, were both working on the idea of “thinking machines” at the end of their lives. Kurzweil examines von Neumann’s posthumous book The Computer and the Brain in detail, noting that his “fundamental insight was that there is an essential equivalence between a computer and the brain.”[20]

Von Neumann did similar calculations to estimate whether and when it would be possible to build a brain. “The key issue for providing the requisite hardware to successfully model a human brain, though, is the overall memory and computational throughput required.”[21] Kurzweil’s estimate: “emulating one cycle in a single pattern recognizer in the biological brain’s neocortex would require about 3,000 calculations… With the brain running at about 10² (100) cycles per second, that comes to 3×10⁵ (300,000) calculations per second per pattern recognizer. Using my estimate of 3×10⁸ (300 million) pattern recognizers, we get about 10¹⁴ (100 trillion) calculations per second… Hans Moravec’s estimate, based on extrapolating the computational requirements of the early (initial) visual processing across the entire brain, is 10¹⁴ cps…”[22]

What does that mean in today’s world? “Routine desktop machines can reach 10¹⁰ cps, though this level of performance can be significantly amplified by using cloud resources. The fastest supercomputer, Japan’s K Computer, has already reached 10¹⁶ cps.”[23]

In terms of memory, Kurzweil uses the following estimate: “We need about 30 bits (about four bytes) for one connection to address one of 300 million other pattern recognizers. If we estimate an average of eight inputs to each pattern recognizer, that comes to 32 bytes per recognizer.” He adds more for connections and branching, coming to 72 bytes. Then, “with 3×10⁸ (300 million) pattern recognizers at 72 bytes each, we get an overall memory requirement of about 2×10¹⁰ (20 billion) bytes.”[24] That’s 20 gigabytes. Not much in today’s terms.
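The arithmetic behind these two estimates is easy to check; the snippet below simply reproduces Kurzweil’s multiplications (the inputs are his estimates, not measurements).

```python
# Reproducing the back-of-the-envelope arithmetic quoted above.

recognizers = 3 * 10**8        # ~300 million pattern recognizers
calcs_per_cycle = 3_000        # calculations to emulate one recognizer cycle
cycles_per_sec = 100           # brain runs at ~10^2 cycles per second

cps = recognizers * calcs_per_cycle * cycles_per_sec
assert cps == 9 * 10**13       # 9x10^13, which Kurzweil rounds to ~10^14 cps

bytes_per_recognizer = 72      # 32 bytes for eight 4-byte inputs, plus overhead
memory_bytes = recognizers * bytes_per_recognizer
assert memory_bytes == 21_600_000_000   # ~2x10^10 bytes, i.e. about 20 GB
```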

The fourth concept is “to find the key algorithms employed by the brain and then use these to turn a computer into a brain.”[25] In other words, the goal of Kurzweil’s book.

A significant portion of the book is devoted to the chapter “Thought Experiments on the Mind,” which goes through the philosophical objections, implications, and possibilities in relation to science and consciousness. The topics are too extensive to review in short order. In summary fashion Kurzweil covers Chalmers’ “hard problem of consciousness,” panpsychism, qualia, the quantum microtubule theory, emergence, substrates, the Turing test, non-human and non-biological intelligence, consciousness as the fundamental reality, the interpretation of quantum theory in terms of consciousness, logical positivism, spirituality and spiritual machines, free will, split-brain observations, Minsky’s “society of mind,” Libet’s experiments, determinism, incompatibilism, class IV cellular automata and non-predictable deterministic processes, and finally, identity.

He ends on identity because “The issue of identity is perhaps even harder to define than consciousness or free will, but is arguably more important.”[26] Many of the thorniest questions about creating an artificial being or consciousness, augmenting our existing mind, replacing the mind with an artificial version, or uploading/downloading/backing up the mind come down to the question of identity: where does the individual reside? Who am I?

Kurzweil starts with the observation that in the body, molecules, organelles, and cells are regularly replaced. “Neurons persist, but their organelles and their constituent molecules turn over within a month. The half-life of a neuron microtubule is about ten minutes; the actin filaments in the dendrites last about forty seconds; the proteins that provide energy to the synapses are replaced every hour…”[27]

He is saying that this process, in which “you are completely replaced in a matter of months,” is compatible with his unique thought-experiment on identity. Imagine that the brain is gradually replaced by mechanical devices – on the analogy of a cochlear or Parkinsonian implant. When a function (say, hearing or memory or kinesthetic control of movement) is replaced, your identity remains stable. You are still you. So it should be possible to replace all functions gradually, a process in which you (your identity) would remain the same, but at the end you would have a 2.0 non-biological brain. His ultimate conclusion: “My leap of faith on identity is that identity is preserved through continuity of the pattern of information that makes us us. Continuity does allow for continual change, so whereas I am somewhat different than I was yesterday, I nonetheless have the same identity. However, the continuity of the pattern that constitutes my identity is not substrate-dependent.”[28]

In the chapter, “The Law of Accelerating Returns Applied to the Brain,” Kurzweil revisits the core concept from his previous three books, that information technologies are on an exponential curve, and that once anything becomes an information technology (e.g., genetics), it begins climbing that curve. One less well-known factor related to our growing knowledge of the brain is the resolution of imaging technology, which has been on an exponentially shrinking curve for the last couple of decades. The latest advance is the patch-clamp robot developed by Markram as part of the Blue Brain Project. Progress is happening here much faster than most people realize.

In his final chapter, “Objections,” Kurzweil runs quickly through critical arguments, starting with the “criticism from incredulity” that “exponential projections seem incredible given our linear predilection.”[29] He rebuts in detail Paul Allen’s 2011 essay, “The Singularity Isn’t Near.” Kurzweil’s arguments here are valid, and I think they effectively rebut Allen’s objections, as well as others, such as Searle’s Chinese room, or the argument that consciousness is necessarily based on mysterious quantum processes.

With the summary above, here are a few of my thoughts on how Kurzweil describes mind, consciousness, and particularly spirituality. I think he wants to be sympathetic to those who value spiritual experience and tradition, and he makes an effort to harmonize a general spiritual outlook with his view of the mind. However, the integration doesn’t quite work.

The biggest problem with his theory is that Kurzweil skirts the question of the role of mind/consciousness in relation to spiritual reality. In the section “East is East and West is West,” he introduces the discussion by saying “In the Eastern view, consciousness is the fundamental reality; the physical world only comes into existence through the thoughts of conscious beings.”[30] Then he relates this to the “Buddhist school of quantum mechanics,” in which “particles essentially don’t exist until they are observed by a conscious person.”

But Kurzweil is shifting the terms of the argument. The phrase “conscious beings” has become “conscious person”(s). The Eastern view is never limited to human beings; the mind that is envisioned as “creating” the world is larger, more essential, more universal than a (limited human) person. But by recasting the phrase as a person rather than as a being, the discussion begins to move back into the Western, materialistic realm. And by turning the conversation toward quantum particles, his assumption is that the question hinges upon what creates particles – physical, materialistic particles. The discussion here is solidly in the realm of physics, rather than the spirit.

Then after considering the two phases of Wittgenstein’s career and thought, where in the first, the world is all that is important and worth speaking of, and in the second, consciousness is all that is important, Kurzweil says, “In my view, both perspectives [East and West] have to be true. … it is foolish to deny the physical world. … On the other hand, the Eastern perspective – that consciousness is fundamental and represents the only reality that is truly important – is difficult to deny.” He then goes on, “Even if we regard consciousness as an emergent property of a complex system, we cannot take the position that it is just another attribute… It represents what is truly important.”[31]

The problem is that he starts with one conception about the Eastern view (that consciousness is a fundamental reality) and ends with a second (consciousness is fundamentally important). This glosses over a much more important point: that in the Eastern view, consciousness predates or co-evolves with the physical world; it is an inherent kind of stuff, and not emergent. The two are not the same, and the problem of integrating science and spirituality remains. The pattern-recognition theory of mind, compelling though it may be, is not essentially consistent with the traditional Eastern understanding of consciousness as an inherent property of the world.

Further, in the chapter titled “Transcendent Abilities,” in which he considers aptitude, creativity and love, the discussion revolves around neurotransmitters, spindle cells, and external manifestations of experiences such as love or mourning. From this basis, he goes on to say “Studies of ecstatic religious experiences also show the same physical phenomena [as ecstatic human love].”[32] These are characterized by neurotransmitter changes (dopamine, serotonin, etc.), leading to “elation, high energy levels, focused attention, loss of appetite, and a general craving for the object of one’s desire.”[33]

He overlooks the many kinds of religious or spiritual experience that are not ecstatic. Many of these can fairly be characterized as involving love and transcendence. While it may well be true that the initial flush of love, whether for a human person or for God, involves the same manifestations in the body, there are many types of spiritual experience that have no relation to these: the silent mind, trance states, out of body experiences, occult experiences (telepathy, precognition), interaction with other types of beings. Kurzweil has not really integrated the vast body of spiritual experience, literature, or experiment with his theory. That remains to be done.

Still, within the purpose and scope of the book – how to create an artificial general intelligence, the basis for it in the brain, the algorithmic approach, estimates on the complexity, and rebuttals to scientific skeptics – the book holds together well. As I noted at the beginning, Kurzweil is trying to “find an algorithm that would turn a computer into an entity that is equivalent to a human brain.”[34]

The overall understanding of the brain/mind as having hierarchical levels of pattern, which rise to greater and greater levels of abstraction, is intuitively sensible, given the nature of thought. Such levels start with the most basic patterns and procedures – the kinds of learning that take place in the first years of life. Along the way there are some intriguing ideas, for example that the evolutionary purpose of the neocortex is to predict the future, and the parallel idea that a signal down the hierarchy, from a higher pattern/abstraction to one of its components, is essentially a prediction of what that lower level should expect.

The essence of the hierarchical hidden Markov model seems clear enough. “They all involve hierarchies of linear sequences where each element has a weight, connections that are self-adapting, and an overall system that self-organizes based on learning data.”[35] This may be why Kurzweil was hired by Google: the strategy is direct and capable of implementation. Unlike many other theories and programs for creating AI, HHMMs have proven success in the field, and there is a fairly clear path toward implementing them along the lines envisioned in the book.

The first chapter, “Thought Experiments on the World,” recounts the stories of Darwin and Einstein to illustrate how a single idea can change an entire field – and, by reflection, the value of the kind of long-form argument Nicholas Carr describes. “We can get remarkably far in figuring out how human intelligence works through some simple mind experiments.”[36] Kurzweil is nothing if not sure of himself: he’s explicitly saying that the importance of his book is comparable to The Origin of Species, or to Einstein’s 1905 paper on special relativity.

Kurzweil doesn’t try to solve every philosophical question; rather he is laying out a research and development program, a blueprint. In the chapter “Thought Experiments On The Mind” he reiterates his long-held notion that outward signs and behavior, whether of a biological or an artificial being, do not decide the ultimate question of whether that being is conscious. The question of what is conscious is essentially a “leap of faith.” Kurzweil makes his own leaps explicit, and one in particular is commonsense: that when machines act like humans, we will treat them as such. Specifically, he says “My objective prediction is that machines in the future will appear to be conscious and that they will be convincing to biological people when they speak of their qualia.”[37] In other words, when they sound like they’re hurting, or happy, or offended.

The book presents an encompassing vision. Advances in the last few decades certainly indicate that “intelligence” is rapidly being incorporated into things, developing new capabilities, and driving the very industries that enable it to keep progressing. The vision of mind/brain/consciousness as computational and emergent is hard to argue with on scientific grounds, but it is profoundly alien and unsettling to people with a spiritual or religious awareness. The future is going to be complex and baffling, splendid and exciting, revelatory and profoundly surprising. Who knows – even as we walk around today with “supercomputers” in our pockets, fifty years from now we may well be walking around with supercomputers in our body or bloodstream – or our brains.

Ultimately Kurzweil’s view of consciousness can be described as computational, emergent, complexity-based. He seems to agree with Susan Blackmore that “consciousness certainly exists as an idea… But it is not clear that it refers to something real.”[38] In other words, he has not truly integrated East and West. He is firmly rooted in Western, technological, and computational ground. From there, as is his tendency, he calmly argues for (and makes predictions about) the furthest-reaching possibilities, what most would consider science fiction.

The Epilogue of the book is a summary of his vision of what is to come: “We will merge with the intelligent technology we are creating. Intelligent nanobots in our bloodstream will keep our biological bodies healthy at the cellular and molecular levels. They will go into our brains non-invasively through the capillaries and interact with our biological neurons, directly extending our intelligence.”[39] Ultimately our destiny is “waking up the universe” and then “intelligently deciding its fate by infusing it with our human intelligence in its non-biological form.”[40]

These possibilities are beyond the horizon for the average person – even the average research scientist. Mostly we think about the day ahead of us, or if we’re project managers, we plan a year or two in advance. We simply cannot imagine the kind of transformation where human beings merge with intelligent machines and our consciousness is expanded a million-fold, but this may well be our future. Let’s wait and see what comes in the next decade or two, and then revisit these ideas – perhaps using our new cortical implants and collective mindspace.

 


[1] Kurzweil, Ray. How To Create a Mind, Viking Penguin (2012). p. 181

[2] p. 141

[3] p. 153

[4] p. 68

[5] p. 72

[6] p. 7

[7] p. 80

[8] p. 82

[9] p. 113

[10] p. 125

[11] pp. 135-141

[12] p. 164

[13] p. 155

[14] p. 167

[15] p. 168

[16] pp. 172-177

[17] p. 176

[18] p. 182

[19] p. 186

[20] p. 194

[21] p. 195

[22] pp. 195-196

[23] p. 196

[24] p. 197

[25] p. 191

[26] p. 242

[27] p. 245

[28] p. 245

[29] p. 266

[30] p. 218

[31] p. 222

[32] p. 118

[33] p. 118

[34] p. 181

[35] p. 162

[36] p. 24

[37] p. 209

[38] p. 211

[39] p. 280

[40] p. 282

About Dave Hutchinson

David Hutchinson is the main author of Undiscovered Country.
This entry was posted in Artificial intelligence, Computer science, Mind and thought, Singularity, Technology.
