Dr. Kaku may be a fine physicist, but I see little evidence that he has engaged in any deep study of the problem of consciousness and its related spheres of perception, memory, cognition and cognitive development. The book is indeed interesting - if only on a semi-surface level - for its tour of developments in neuroscience: new mapping technologies (e.g., fMRI), genetics and brain regions, and massive planned research programs proposing to map the entire neural structure and connectivity of the brain and ultimately, of course, to recreate it in silicon. I say "on a semi-surface level" because Kaku does, at points, pull the unbridled optimism and unrealistic time-projections for future AI achievements a little closer to reality than I have seen elsewhere. By the end, however, he is not far from the same optimism - one that belies any deep understanding - a mindset surely supported by the unhesitating reductionism displayed throughout the book. Toward the end this manifests in his treating NDEs and out-of-body experiences as simply "generated" by the brain, a treatment that conveniently ignores all the problematic reported phenomena that might indicate an objective status for these experiences.
The book holds an interesting discussion of various brain-imaging methods, their strengths and limitations, and thus the fact that they are far from a panacea for research. As Kaku examines the use of raw computing power to simulate brains, including Kurzweil's invocation of Moore's law with its projected steady doubling of computing power, he notes a brick wall about to block this advance: quantum-physical limitations on further miniaturization - an interesting point. In his discussion of plans to "download" memories or transfer them to other brains or devices, he does reveal that there is in reality no understanding today of how the brain actually stores experienced events. He notes the standard view that fragments or features of the event are stored in various spots in the brain, but that it is not known how these are reassembled in a remembering operation. This is an unusual admission. Exploring the prospect of an artificial human able to act intelligently in the concrete world, he seriously acknowledges the problem of "common sense knowledge" and AI's failures on it to date, and he projects that solving it will take much longer than writers like Kurzweil suppose. These are welcome notes of caution, rare in this literature.
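The quantum "brick wall" admits a rough back-of-envelope illustration. Under Moore's-law-style scaling, each doubling of transistor density shrinks linear feature size by a factor of about 1/sqrt(2), so feature sizes approach atomic dimensions after only a few dozen doublings. The starting values below are illustrative assumptions on my part, not figures from the book:

```python
import math

# Assumed starting feature size (nm) - illustrative, roughly a modern process node.
feature_nm = 22.0
# Silicon interatomic spacing is roughly 0.2 nm; features cannot shrink below this.
silicon_atom_nm = 0.2

# Doubling transistor density on a 2-D chip shrinks linear feature size by ~1/sqrt(2).
shrink_per_doubling = 1 / math.sqrt(2)

doublings = 0
size = feature_nm
while size > silicon_atom_nm:
    size *= shrink_per_doubling
    doublings += 1

# Commonly cited Moore's-law cadence (an assumption; Kurzweil argues for faster).
years_per_doubling = 2
print(f"~{doublings} doublings, i.e. roughly {doublings * years_per_doubling} years")
```

On these assumed numbers the wall arrives within a human lifetime, which is the force of Kaku's point: the exponential cannot simply be extrapolated indefinitely.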
The problem is that these latter two issues run far deeper than Kaku realizes - so deep that they call into question the entire information-processing paradigm in which his book is framed. That little problem of how experience is actually stored in the brain stems directly from the fact that there is no theory of perception, i.e., of how we see a coffee cup "out there," on the table surface, its coffee being stirred by a spoon. Yes, this scene/event has "qualia" that must be accounted for: given that the information from the external world has been transduced into neural-chemical flows (or, for computers, changing bit patterns) that look nothing like the external world, we must explain how, from such a homogeneous architecture, we account for the "whiteness" of the cup, the "clinking" of the spoon, the smell of the coffee. The "qualia" formulation is Chalmers', and Kaku, following Dennett (whose position is far from accepted in philosophical circles), simply dismisses the problem of explaining how the brain's architecture, or any AI architecture, accounts for qualia.
The difficulty is this: Chalmers' formulation has been misleading; the deeper problem is explaining the origin of the image of the external world - not only the cup with its "whiteness," but the kitchen table with its wood-grained surface, the spoon stirring, coffee swirling, steam twisting and rising, the floor stretching in every direction with its tiles... The "forms" in the image, and more obviously the forms dynamically changing over time - rotating cups, twisting leaves, gently waving kitchen curtains - are themselves qualia, and equally non-computable. The origin of the image as a whole is the problem, and this image (of our kitchen with its cup) is equally our "experience." This is why the problem is more critical than any AI-type theorist wants to realize. If you have no theory of the origin of our image of the external world, then you have no theory of experience, and therefore you can have no theory of the "storage" of this experience; your theory of memory is totally ungrounded. And this despite the current confidence, echoed by Kaku, that only a "subset" or selected set of elements/features of this "experience" is stored - a widely held theory, by the way, with absolutely no in-principle method for selecting which "parts" or "elements" or "features" of the coffee-stirring event will be stored, let alone any account of how this disassembly/reassembly would work, whether in the real time required while the coffee-stirring event is ongoing, or at a later time when the experience is retrieved. Ungrounded too, then, is any theory of cognition, and therefore of that problematic "common sense knowledge," reliant as this knowledge is on the retrieval and use of our experience.
Abstract "computations" in themselves (and this is entirely the framework within which Kaku works) are simply insufficient to explain consciousness (our qualia-laden experience). There is a possibility concerning the nature of the brain that should give Kaku - particularly in his physicist persona - some pause: what if the brain, along with its computations or statistical/network analyses (same thing), is at the same time, and in fact more importantly, sustaining a real, concrete dynamics - as real, for example, as an AC motor generating an oscillating electric field of force? Yes, knowing its equations, one can "simulate" the AC motor on a computer, but the computer is not generating the oscillating field of electric force; it is not even running a tiny light bulb. For that, one needs a device whose construction and function generate a real, concrete dynamics; one would need to engage in real engineering. This in fact was the thesis of Bergson (Matter and Memory, 1896). Bergson had presciently seen the essence of holography in 1896 (making his theory incomprehensible to his contemporaries). He viewed the universal field in which we are all embedded as holographic - a vast interference pattern, a field intrinsically non-image-able. In effect, he saw the brain (with all its underlying quantum dynamics) as a modulated reconstructive wave passing through this holographic field, selecting out information in the field related to the action systems of the body, and in this becoming "specific to" a subset of the field - now, by this process, an image of aspects of the field, e.g., the kitchen with its tables, its chairs and its cup. In other words, on this view we are explaining how perception is limited, not how it arises.
This image of the external world, due to the brain's dynamics (with its underlying chemical velocities), is specified at a particular scale of time: a fly "buzzing" by the coffee cup, its wings oscillating at 200 cycles per second, is seen as a blur at our normal scale of time. Drop a catalyst into this dynamics and the brain/modulated wave is now specific to a heron-like fly slowly flapping its wings, and equally now specific to a new possible action of the body, e.g., plucking the fly out of the air by a wing - for since the selection of a subset of the field is made in relation to the action systems of the body, then, as Bergson stated succinctly, perception is virtual action. If the brain is actually such a device - a modulated reconstructive wave - all the future brain-mapping projects Kaku discusses will be proceeding under the wrong assumptions, and the goal of rebuilding all this in silicon, as a device purely sustaining computations, is utterly misguided.
For all this, Bergson's model requires a quite different model of time, in which the flow of time is indivisible or non-differentiable, and it demands a re-conception of the relation of subject and object, for the difference between them, and the relation of each to the other, is a matter not of space but of time. One will find in Kaku only a trivial discussion of the problem of time in relation to mind, namely the role of consciousness in planning for future events. This is in reality the great problem of explicit memory, or the localization of events in time - something requiring the development of the symbolic function, an extremely complex developmental trajectory occupying the human child for several years, long ago discussed in great depth by Piaget, of all of which Kaku (and AI as well, for that matter) is apparently unaware, yet a trajectory that would need to be replicated by his AIs. One finds nothing in Kaku on the origin of our scale of perceptual time, or on the form of memory that supports the ongoing perception of rotating cubes or stirring spoons, or on the support of invariance laws defined only over time. One will find nothing on the problem of subject and object.
In Bergson's conception, since the brain is specific to sources within the external field (as an image), perception/experience is not occurring solely within the brain (nor is it simply "generated" by the brain); therefore experience cannot be stored solely there, yet our experience is retrievable by the same reconstructive-wave process. The fact that a conk on the head produces retrograde amnesia does not mean that experiences stored in the brain have been destroyed, as opposed to there now being damage to the mechanisms responsible for modulating the retrieving reconstructive wave. (Similarly, a successful artificial retinal implant supporting vision - one of many advances noted by Kaku that appear to support the computational metaphor - implies no more than partial support of the overall, very concrete dynamics supporting vision.) Obviously this is a quite different theory of memory retrieval, and it is inherently supportive of analogical retrieval - a phenomenon basic to thought, including analogy in general. Hofstadter, in his vast consideration of the subject (Surfaces and Essences: Analogy as the Fuel and Fire of Thinking), clearly has no idea (and appears to harbor doubts) how to implement this operation in a computer (or neural net). Yet analogy - the foundational operation of thought, as Hofstadter shows - is at the heart of common sense knowledge: I am given a 12" cubical box, rubber bands, pencils, toothpicks, string, a razor blade, staples, cheese, etc., and asked to create a mousetrap. I make a "crossbow" using the box, a pencil and rubber bands, or a "beheader" using the box, a pencil, the razor blade and rubber bands. I am doing analogy via my stored experience. This is why that problematic issue of common sense knowledge is so deceptively difficult: it is bound to the entire problem of conscious perception or experience (with the "qualia" problem as a subset) and the memory "storage" of this experience.
(For those interested in a deeper discussion of Bergson and these issues - I know of no other - one can search on Amazon for "Collapsing the Singularity.")
The failure of current science/AI to solve (or even admit) the hard problem, properly understood as the more general problem of the image of the external world, is an index of the possibility that the entire framework within which Kaku, AI and neuroscience are working is badly wrong. And this is just a glimpse of the tremendous scope and depth of the issues surrounding projected feats such as "downloading memories," "transferring consciousness," or "AIs as intelligent as humans or more so" - issues Kaku has presumed irrelevant.
For an interesting tour of projects and developments, the book is good. But as I have grown tired of such shallow analyses of the issues involved, I can give only so many stars.