Book review: Mindmelding: Consciousness, Neuroscience, and the Mind’s Privacy
Mindmelding: Consciousness, Neuroscience, and the Mind’s Privacy. William Hirstein. Oxford University Press.
I found this book by accident, when somebody on Facebook happened to share a link to its Amazon page. I was intrigued to read the title, and even more intrigued to read the Amazon blurb:
William Hirstein argues that it is indeed possible for one person to directly experience the conscious states of another, by way of what he calls mindmelding. This would involve making just the right connections in two peoples’ brains, which he describes in detail. He then follows up the many other consequences of the possibility that what appeared to be a wall of privacy can actually be breached. Drawing on a range of research from neuroscience and psychology, and looking at executive functioning, mirror neuron work, as well as perceptual phenomena such as blind-sight and filling-in, this book presents a highly original new account of consciousness.
This description sounded very similar to my and Harri Valpola’s paper Coalescing Minds: Brain Uploading-Related Group Mind Scenarios, which was published last year. In that paper, we argued that it would be possible to join two minds together by creating artificial connections between their brains, and that this could allow anything ranging from mere improved communication to a full-blown merger between two minds. Since it seemed like Hirstein was talking about the same thing, I got curious – had this book, published a few months before our paper, already said everything that we argued for, and more?
Fortunately, it turns out that the book and the paper are rather nicely complementary. To briefly summarize the main differences: we intentionally skimmed over many neuroscientific details in order to establish mindmelding as a possible future trend, while Hirstein covers the neuroscience extensively but is mostly interested in mindmelding as a thought experiment. We seek to predict a possible future development and focus on its societal implications, while Hirstein seeks to argue a philosophical position and focuses on the philosophical implications. Finally, Hirstein talks extensively about the possibility of one person perceiving another’s mental states while both remain distinct individuals, whereas we mainly discuss the possibility of two distinct individuals coalescing into one.
The main purpose of Hirstein’s book is to argue against a position in philosophy of mind which holds that conscious states are necessarily private, that is, only available to a single person. If conscious states were private, that could also be used to argue against materialism, the position that everything is physical, by the following privacy argument:
Premise 1: No physical states are private.
Premise 2: All conscious states are private.
Conclusion: No conscious states are physical states.
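Rendered in standard predicate-logic notation (my gloss, not Hirstein’s own formulation), the argument has a classically valid form:

∀x (Physical(x) → ¬Private(x))    [Premise 1]
∀x (Conscious(x) → Private(x))    [Premise 2]
∴ ∀x (Conscious(x) → ¬Physical(x))

Since the form is valid, a materialist who accepts premise 1 has no choice but to deny premise 2.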
Hirstein seeks to use the possibility of mindmelding to refute this argument. He proposes that it should be possible to link the brains of two people together so that when A experienced something, that experience would be relayed to the brain of B, who would then also experience essentially the same thing. Thus, premise 2 of the privacy argument would be shown to be false.
To support his proposal, Hirstein arrays an impressive amount of neuroscience. I would briefly summarize his argument as follows: the brain employs what are called executive processes, which are responsible for dealing with novel or unanticipated situations:
“There is an ongoing debate about what exactly is in the set of executive functions, but the following tend to appear in most lists: attention, remembering, decision-making, planning, task-switching, intending, and inhibiting. Executive processes play a part in our non-routine actions. When we attempt something new, executive processes are required. They are needed when there are no effective learned input-output links. As we get better at a new task, processing moves to other brain areas that specialize in effectively performing routine actions without conscious interruption. Gilbert and Burgess say that, ‘executive functions are the high-level cognitive processes that facilitate new ways of behaving, and optimise one’s approach to unfamiliar circumstances’ (2008, p.110). As Miller and Wallis pithily state it, ‘You do not need executive control to grab a beer, but you will need it to finish college’ (2009, p.99). According to Gilbert and Burgess, ‘we particularly engage such processes when, for instance, we make a plan for the future, or voluntarily switch from one activity to another, or resist temptation: in other words, whenever we do many of the things that allow us to lead independent, purposeful lives’ (2008, p.110).” (p. 87)
In order for the executive processes to do their job correctly, they need just the right kind of information. For this purpose, the brain carries out an extensive amount of processing on all the sensory information it receives, creating a kind of “executive summary” of the most relevant content of that information. The executive processes then use this highly preprocessed data to make their decisions. Essentially, conscious states are this “executive summary”, and all the decisions that we consciously choose to make are made by the executive processes, which are the ones perceiving the conscious states.
Colors are one example of the kind of preprocessing that’s done on the sensory data before it’s presented to the executive processes. Light hits our eyes on a variety of different wavelengths, giving our visual system information about the way that light is reflected off various objects. The data about these various reflectance profiles then undergoes a complicated transformation in which the data is simplified, and the different objects are labeled with colors that summarize their reflectance profiles. This data, in turn, is useful for making sense of the things that we see: it allows us to tell different objects apart with considerable ease.
Emotions are another possible example of the kind of preprocessing that our brains carry out on sensory data before it’s presented to the executive processes. Hirstein doesn’t discuss emotions very much, but my “Avoid misinterpreting your emotions” article from some time back discussed this theory of emotion:
The Information Principle says that emotional feelings provide conscious information from unconscious appraisals of situations. Your brain is constantly appraising the situation you happen to be in. It notes things like a passerby having slightly threatening body language, or conversation with some person being easy and free of misunderstandings. There are countless such evaluations going on all the time, and you aren’t consciously aware of them because you don’t need to be. Your subconscious mind can handle them just fine on its own. The end result of all those evaluations is packaged into a brief summary, which is the only thing that your conscious mind sees directly. That “executive summary” is what you experience as a particular emotional state. The passerby makes you feel slightly nervous and you avoid her, or your conversational partner feels pleasant to talk with and you begin to like him, even though you don’t know why.
Surveying neuroscientific data, Hirstein proposes that the temporal lobes seem to hold the “final stage” of conscious states – data that has undergone all the preprocessing steps, and which is ready to be presented to the executive processes. The executive processes, in turn, are located in the prefrontal cortex, and access the data via thick fiber tracts connecting the two parts of the brain. Hirstein’s mindmelding proposal, then, is that if we could connect the temporal lobes of person A with the prefrontal cortex of person B, A and B could then simultaneously perceive A’s conscious states.
One can compare this to our paper, in which we discussed the possibility of a “reverse split brain operation”: it is known that splitting the axons which connect the two hemispheres of a human brain will produce two different conscious minds, one for each hemisphere. Presumably, if such severed connections could be recreated, the two consciousnesses would merge back together. More speculatively, if artificial connections could be created between the hemispheres of two (or more) distinct humans, then the consciousnesses of those two people would eventually also merge together.
Of course, two people merging together to have only a single consciousness would probably be less useful than having two people who had merged together and had access to each other’s information and knowledge, but also had two separate streams of consciousness. So we postulated that one might construct an exocortex, a prosthetic which mimicked the functions of the brain and which would gradually integrate to become a seamless part of its user’s brain. Once this had happened, the exocortex could be connected to the exocortices of other people, with the exocortex having been built to manage the connection in a way that allowed for information-sharing but prevented the consciousnesses from becoming completely merged. We based our argument for the feasibility of the exocortex on the following three assumptions:
1. There seems to be a relatively unified cortical algorithm which is capable of processing different types of information. The brain seems to start out with a general-purpose algorithm which will gradually specialize for the kind of data it receives. Implement that general-purpose algorithm in an exocortex, and with enough time, it could learn to understand the thoughts of the brains that it was linked to. It could act as a kind of translator between the “mental language” of its user, and the “mental language” employed in other exocortexes.
2. We already have a fairly good understanding of how the cerebral cortex processes information and gives rise to the attentional processes underlying consciousness. We thus have good reason to believe that an exocortex would be compatible with the existing cortex and would integrate with the mind.
3. The cortical algorithm has an inbuilt ability to transfer information between cortical areas. Information is known to move around in the brain. Long-term memories are first formed in the hippocampus but then gradually consolidated in the cerebral cortex; gradual damage to the cortex can cause it to shrink while the patient retains the ability to act normally, as damaged functions are relocated. Once a person was equipped with an exocortex, many of their existing memories and knowledge might gradually move over to it.
Hirstein’s work and ours, then, are nicely complementary: Hirstein does not really cover a full mindmeld at all, while we only briefly touch upon the mere sharing of access to another’s conscious states without a full mindmeld.
The societal implications of mind coalescence were one of the main focuses of our paper: we argued that it might lead to evolutionary scenarios in which individual minds would end up outcompeted, with all of the power accumulating to different group minds. We also suggested that exocortices might allow for mind uploading, transferring a human mind to run on a digital computer. As one’s biological brain gradually degraded and died, its functions could increasingly be transferred to the exocortex, until the individual’s mind was located solely in the exocortex.
In contrast, Hirstein seems content to treat mindmelding as a pure thought experiment, saying nothing about the consequences of the technology actually being developed. Perhaps this is because Hirstein wishes to present mindmelding as a serious philosophical argument, and avoid the stigma of being associated with science fictional speculation. Nonetheless, the style of mindmelding that he presents would have plenty of interesting consequences on its own.
Most obviously, if another person’s conscious states could be recorded and replayed, this would open the door to using them as entertainment. If it turned out that you couldn’t simply record and replay anyone’s conscious experience, but that learning to correctly interpret the data from another brain required time and practice, then individual method actors capable of immersing themselves in a wide variety of emotional states might become the new movie stars. Once your brain had learned to interpret their conscious states, you could follow them in a wide variety of movie-equivalents, with new actors hampered by the fact that learning to interpret the conscious states of someone who had only appeared in one or two productions wouldn’t be worth the effort. If mind uploading were available, this might give considerable power to a copy clan consisting of copies of the same actor, each participating in different productions, but each having a similar enough brain that learning to interpret one copy’s conscious states would give access to the conscious states of all the others.
The ability to perceive various drug- or meditation-induced states of altered consciousness while still having one’s executive processes unhindered and functional would probably be fascinating for consciousness researchers and the general public alike. At the same time, the ability for anyone to experience happiness or pleasure by just replaying another person’s experience of it might finally bring wireheading within easy reach, with all the dangers associated with that.
A Hirstein-style mindmeld might also be usable as an uploading technique. Some uploading proposals suggest compiling a rich database of information about a specific person, and then later using that information to construct a virtual mind whose behavior is consistent with the information about that person. Creating such a mind from behavioral data alone makes it questionable to what extent the new person would really be a copy of the original, but the skeptical argument loses some of its force if the data also includes a recording of all the original’s conscious states at various points in their life. If we can use the data to construct a mind that reacts to the same sensory inputs with the same conscious states as the original did, whose executive processes manipulate those states in the same ways as the original’s, and who takes the same actions as the original did, would that mind not essentially be the same mind as the original?
Hirstein’s argumentation is also relevant to our speculations concerning the evolution of mind coalescences. We spoke abstractly about the “preferences” of a mind, suggesting that it might be possible for one mind to extract the knowledge from another mind without inheriting its preferences, and noting that conflicting preferences would be one reason for two minds to avoid coalescing. However, we did not say much about where in the brain preferences are produced, or what would actually be required for, e.g., one mind to extract another’s knowledge without also acquiring its preferences. As the above discussion hopefully shows, some of our preferences are implicit in our automatic habits (the things that we show we value with our daily routines), some in the preprocessing of sensory data that our brains carry out (the things and ideas that are “painted with” positive associations or feelings), and some in the configuration of our executive processes (the actions we actually end up taking in response to novel or conflicting situations). This kind of breakdown seems like very promising material for some neuroscience-aware philosopher to tackle in an attempt to figure out just what exactly preferences are; maybe someone has already done so.
Getting back to the topic of the book itself, a considerable part of Hirstein’s argumentation is focused on things that are probably not of much interest to people who haven’t delved deep into the concerns of philosophers of mind. For example, it is important for Hirstein’s argument that A and B actually have access to the same conscious state, as opposed to B only having a copy of A’s conscious state, so he spends time establishing this, which I personally found somewhat uninteresting. Considerable attention is also given to other similarly technical points of philosophy throughout the book. Some of these I did find rather interesting: for instance, I had previously been rather persuaded by the “there is no such thing as a self” school of thought, but Hirstein makes a convincing argument for identifying the self with the executive functions, and also mounts a good defense against the homunculus accusations this might invite. Others will probably find this whole line of argument meaningless.
The bulk of the book, however, is focused on establishing the philosophical and neuroscientific plausibility of mindmelding. So I would in any case recommend this book for anyone interested in seeing a detailed argument for how one variety of mindmelding could be accomplished. And if you already have a strong interest in philosophy of mind, all the better.