Neuroinformatics 4 seminar, session III – GWT/meditation, neural correlates of consciousness

Yes, I know that I’m way behind on my reports: session III was over a month ago. Better late than never.

I’ve been thinking about global workspace theory on and off in the context of meditation. Haven’t come up with anything particularly insightful, basically just a repetition of the argument in the Dietrich paper: in meditation, attentional resources are used to actively amplify a particular event such as a mantra until it becomes the exclusive content in the working memory buffer. This intentional, concentrated effort selectively disengages all other cognitive capacities of the prefrontal cortex.

Put into GWT terminology: normally, sensory systems and “thought systems” within our brain generate a number of (bottom-up) inputs that compete for control of the global neuronal workspace (GNW), and some process of top-down attention picks the inputs that get strengthened until they dominate the workspace. In meditation, the practitioner seems to train their attentional network to only choose a specific set of stimuli (e.g. their breath, a mantra, the sensations of their body) and to ignore all the others. As they concentrate on these stimuli, those get broadcast to all the brain regions that receive input from the GNW. Since this is an abnormal input that most of the systems can’t do anything with, they gradually get turned off – especially since it doesn’t matter what output they produce in response, as the successful meditation practitioner pays no attention to it. Of course, it takes a lot of practice to get this far, since the brain is practically built to “get sidetracked” from meditation and concentrate on something more important.
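
To make that competition picture concrete for myself, here’s a toy sketch (nothing from any of the papers: the channel names, the gain value and the random saliences are all invented). Bottom-up inputs compete for the workspace on each cycle, and a top-down attentional gain on one chosen channel is enough to make it win almost every time.

```python
# Toy sketch of workspace competition with top-down attentional gain.
# Channel names, gain and saliences are invented for illustration.
import random

def workspace_winner(saliences, attended, gain=3.0):
    """Bottom-up salience, multiplied by an attentional gain for the attended channel."""
    scores = {name: s * (gain if name == attended else 1.0)
              for name, s in saliences.items()}
    return max(scores, key=scores.get)

random.seed(0)
channels = ["breath", "outside_noise", "stray_thought", "itch"]
wins = {name: 0 for name in channels}

for _ in range(1000):
    saliences = {name: random.random() for name in channels}  # fluctuating bottom-up input
    wins[workspace_winner(saliences, attended="breath")] += 1

print(wins)  # with enough gain, "breath" dominates the workspace on most cycles
```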

It’s interesting to ask why this would lead to perceptual changes, such as an increased tolerance for pain. A straightforward guess would be that if the GW/GNW gets taken over by a very simple stimulus, and that stimulus gets broadcast into all the different systems in the brain, then there are systems related to learning that can’t help but analyze the stimulus. If a meditation practitioner consciously begins to break a sensation into smaller and smaller components, or begins to note and name individual sensations, then the implicit learning systems will pick up on this and learn how to do it better. Also, as the meditator forces their brain to analyze very simple inputs, the brain allocates disproportionate computational resources to analyzing them and begins to find increasingly subtle hidden details in them – which the meditator then dismisses, forcing the brain to go to even more extreme lengths to find something. Over time and with enough practice, the meditator learns to feel and notice these subtle sensations even when not meditating.

Of course, it’s a bit of a misnomer to talk about the brain “finding” subtler sensations, since those sensations are themselves also generated by the brain. Rather, what’s happening is that there is a hierarchical process in which simpler inputs get increasingly complex layers of interpretation applied to them, and meditation strips away those layers of interpretation. Thus information that’s usually thrown away during earlier processing stages becomes accessible to the conscious mind. That’d be my guess, anyway. It’s also interesting to note that savant abilities are likewise hypothesized to arise from access to lower-level brain processing, but so far I haven’t heard of anyone becoming a genius savant through meditation, even if it should be theoretically possible.
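
Here’s a trivial cartoon of what I mean by layers of interpretation throwing information away (the sensation names and numbers are invented): each stage keeps a summary and discards the detail, and “stripping away the layers” would amount to reading out an earlier stage.

```python
# Cartoon of interpretation layers discarding information; all names and numbers invented.
raw = {"pressure": 0.6, "warmth": 0.2, "tingling": 0.1, "location": "left knee"}

def early_stage(sensation):
    # Collapse the detailed mix of qualities into a single intensity estimate.
    intensity = sensation["pressure"] + sensation["warmth"] + sensation["tingling"]
    return {"intensity": round(intensity, 2), "location": sensation["location"]}

def late_stage(summary):
    # Collapse the intensity estimate into a coarse verbal label.
    label = "discomfort" if summary["intensity"] > 0.5 else "nothing much"
    return f"{label} in the {summary['location']}"

print(late_stage(early_stage(raw)))  # what normally reaches report
print(raw)                           # the detail that earlier stages threw away
```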

As I noted last time, there’s still the puzzle of how the attentional networks find out about an input that might be worth promoting into the GNW, if the GNW is already dominated by another input. A hypothesis that might make sense is that we’re actually rapidly cycling a lot of content into and out of consciousness, and the attentional networks decide which content gets the most “clock cycles” (here’s an obvious analogy to operating systems and multiprogramming). E.g. this text gets processed within the GNW, then I hear a sound coming from outside and that input pushes its way into the GNW for a brief moment, and then an attentional system decides that it isn’t important and returns focus to the task of writing this text. While the outside noise has pushed the text out of the GNW, the text is still locally active in the brain regions that were most heavily involved in processing it, and the attentional network can home in on the activation in those regions and strengthen it again.
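
Spelling the operating-system analogy out as a toy scheduler (all the names, priorities and decay constants here are invented): whichever content scores highest on top-down priority plus lingering local activation plus any bottom-up salience spike gets the current “clock cycle”, and preempted content decays slowly instead of vanishing, so attention can pick it back up.

```python
# Loose sketch of the scheduling analogy; names, priorities and constants are made up.

class Content:
    def __init__(self, name, top_down_priority):
        self.name = name
        self.top_down_priority = top_down_priority  # how much attention currently favors it
        self.local_activation = 0.0                 # lingering activity in its home regions

def run_workspace(contents, salience_spikes, cycles):
    """salience_spikes: {cycle: {content_name: extra bottom-up salience}}"""
    log = []
    for t in range(cycles):
        spikes = salience_spikes.get(t, {})
        # Scheduler step: the content with the highest combined score gets this "clock cycle".
        current = max(contents, key=lambda c: c.top_down_priority
                                              + c.local_activation
                                              + spikes.get(c.name, 0.0))
        current.local_activation += 0.5              # strengthened while in the workspace
        for other in contents:
            if other is not current:
                other.local_activation *= 0.8        # preempted content decays but doesn't vanish
        log.append(current.name)
    return log

contents = [Content("writing this text", 2.0), Content("sound from outside", 0.5)]
# A loud noise at cycle 4 briefly wins the workspace, then attention reclaims it.
print(run_workspace(contents, salience_spikes={4: {"sound from outside": 5.0}}, cycles=8))
```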

Alternatively, this whole hypothesis of swapping stuff in and out might be unnecessarily complicated, and there could just be cross-region communication that isn’t conscious. There are a number of results saying that cross-modality integration of sense data can happen without consciousness. E.g. in ventriloquism, we see a talking puppet mouth while the sound actually comes from the puppeteer’s closed mouth. Somehow this conflict gets resolved into us hearing the sound as if it were coming from the puppet’s mouth, without us being consciously aware of the process. The results of the paper below, which suggest that attention and consciousness can each occur without the other, would also support that hypothesis.

None of that actually has anything to do with the third session, though – it’s just stuff that occurred to me while thinking about some of the seminar papers in general. So let’s get to the actual topic…

---

The third Neuroinformatics presentation covered Giulio Tononi & Christof Koch (2008), “The Neural Correlates of Consciousness: An Update”, Annals of the New York Academy of Sciences. The paper was pretty packed with information, and there was a lot of interesting stuff mentioned. I won’t try to cover all of it, but will rather concentrate on some of the most interesting bits.

In particular, the previous Neuroinformatics papers seemed to come close to equating consciousness and attention. If input from our senses (or from internal sources such as memory) becomes conscious when attentional processes promote it into consciousness, does that mean that we are conscious of the things that we pay attention to? Subjectively, I’m often conscious of experiences that I try to direct my attention away from, though that might just mean that a top-down attentional mechanism is competing with a bottom-up one. Introspection is notoriously unreliable, anyway.

Tononi & Koch argue that the two are not the same: there can be both attention without consciousness and consciousness without attention. Let’s first look at attention without consciousness. Among the studies that they cite, Naccache et al. (2002) is probably the easiest to explain.

The experimental subjects were shown (“target”) numbers ranging from 1 to 9, and had to say whether the number they saw was smaller or larger than 5. (They were not shown any fives.) Unknown to them, each number was preceded by another (“priming”) number, hidden by a geometric masking shape. In some versions of the experiment, the subjects knew when they were going to see the target number, and could pay attention around that time. In other versions they did not, and could not focus their attention on the right window in time. When the subjects were paying attention at the right time (and therefore also paying attention to the priming number), there was what’s called a priming effect: their reaction times were faster when the prime number was congruent with the target number, i.e. either both were smaller than 5 or both were larger, and slower when the numbers were incongruent. When the subjects couldn’t focus their attention on the right time period, the priming effect didn’t occur. Tononi & Koch interpret these results to mean that there can be attention without consciousness: the priming numbers were always shown too briefly to enter conscious awareness, but they still influenced behavior when the subjects paid attention to the right moment in time.
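
To keep the design straight in my own head, here’s a toy simulation of the predicted pattern (the baseline reaction times and the 25 ms effect size are invented; only the shape of the result matters): incongruent primes slow responses, but only in the attended condition.

```python
# Toy illustration of the Naccache et al. (2002) pattern with invented numbers:
# a congruence effect on reaction times only when the prime's time window is attended.
import random

random.seed(1)

def mean_rt(attended, congruent, n=200):
    rts = []
    for _ in range(n):
        rt = random.gauss(500, 40)   # baseline reaction time in ms (made up)
        if attended and not congruent:
            rt += 25                 # incongruent primes slow responses, but only under attention
        rts.append(rt)
    return sum(rts) / n

for attended in (True, False):
    effect = mean_rt(attended, congruent=False) - mean_rt(attended, congruent=True)
    print(f"attention={attended}: average priming effect ~ {effect:.0f} ms")
```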

The opposite case is consciousness without attention. There are experiments in which the subjects are made to focus their attention on the middle of their visual field, and something else is then briefly flashed in their peripheral field of vision. Subjects are often capable of reporting on the contents of the peripheral image and performing some quite complex discrimination tasks: they can tell male faces from female ones, or distinguish between famous and non-famous people, even though the image was (probably) flashed too briefly for top-down attention to kick in. At the same time, they cannot perform some much easier tasks, such as discriminating a rotated letter “L” from a rotated letter “T”. So at least some kinds of consciousness-requiring tasks seem to be possible in the absence of directed attention, while others aren’t.

Tononi & Koch conclude this section by summarizing their view of the differences between attention and consciousness, and by citing Baars and saying something akin to his Global Workspace Theory:

Attention is a set of mechanisms whereby the brain selects a subset of the incoming sensory information for higher level processing, while the nonattended portions of the input are analyzed at a lower band width. For example, in primates, about one million fibers leave each eye and carry on the order of one megabyte per second of raw information. One way to deal with this deluge of data is to select a small fraction and process this reduced input in real time, while the nonattended data suffer from benign neglect. Attention can be directed by bottom-up, exogenous cues or by top-down endogenous features and can be applied to a spatially restricted part of the image (focal, spotlight of attention), an attribute (e.g., all red objects), or to an entire object. By contrast, consciousness appears to be involved in providing a kind of “executive summary” of the current situation that is useful for decision making, planning, and learning (Baars).
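
A quick back-of-envelope check on the numbers in that quote, taking only the figures as given (the 1% attended fraction at the end is my own arbitrary choice for illustration, not a figure from the paper):

```python
# Back-of-envelope arithmetic on the quoted figures: roughly 10^6 fibers per eye
# carrying on the order of 1 MB/s of raw information.
fibers_per_eye = 1_000_000
raw_bytes_per_sec = 1_000_000            # "on the order of one megabyte per second"

bits_per_fiber_per_sec = raw_bytes_per_sec * 8 / fibers_per_eye
print(f"~{bits_per_fiber_per_sec:.0f} bits/s per fiber on average")

# If attention processed, say, 1% of that input "in real time" (arbitrary fraction):
attended_fraction = 0.01
print(f"attended stream: ~{raw_bytes_per_sec * attended_fraction / 1000:.0f} kB/s "
      f"out of ~{raw_bytes_per_sec / 1000:.0f} kB/s per eye")
```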

As has often been the case lately, I wonder how much weight I should actually put on these results. A study that has not been replicated is little better than an anecdote, and while Tononi & Koch do cite several studies with similar results, there have been previous cases where the initial replications all seemed to support a theory but then stopped doing so. So for all that I know, everything in the paper (and the previous papers, of course) might turn out to be wrong within a few years. Still, it’s the best that we have so far.

Like some of the GWT/GNW papers, this one also suggested that dreamless sleep involves reduced connectivity between cortical regions, so that the regions communicate in a more local manner. That’s also interesting.
