Interesting paper on the neuroscience of meditation

http://www.cogsci.ucsd.edu/~pineda/COGS175/readings/Dietrich.pdf

It proposes that what we experience as consciousness is built up in a hierarchical process, with various parts of the brain doing further processing on the flow of information and contributing their own part to the “feel” of consciousness. It’s possible to subtract various parts of the process, thereby leading to an altered state of consciousness, without consciousness itself disappearing.

The prefrontal cortex is usually associated with “higher-level” tasks, including emotional regulation, but the authors suggest that this is due to the prefrontal cortex refining the outputs of the earlier processing stages, rather than inhibiting them:

“In such a view, the prefrontal cortex does not represent a supervisory or control system. Rather, it actively implements higher cognitive functions. It is further suggested that the prefrontal cortex does not act as an inhibitory agent of older, more primitive brain structures. The prefrontal cortex restrains output from older structures not by suppressing their computational product directly but by elaborating on it to produce more sophisticated output. If the prefrontal cortex is lost, the person simply functions on the next highest layer that remains. The structures implementing these next highest layers are not disinhibited by the loss of the prefrontal cortex. Rather, their processing is unaffected except that no more sophistication is added to their processing before a motor output occurs.”

Their theory is that several altered states of consciousness involve a reduction in the activity of the prefrontal cortex:

“It is proposed in this article that altered states of consciousness are due to transient prefrontal deregulation. Six conscious states that are considered putative altered states (dreaming, the runner’s high, meditation, hypnosis, daydreaming, and various drug-induced states) are briefly examined. These altered states share characteristics whose proper function are regulated by the prefrontal cortex such as time distortions, disinhibition from social constraints, or a change in focused attention. It is further proposed that the phenomenological uniqueness of each state is the result of the differential viability of various [dorsolateral] circuits. To give one example, the sense of self is reported to be lost to a higher degree in meditation than in hypnosis; whereas, the opposite is often reported for cognitive flexibility and willed action, which are absent to a higher degree in hypnosis. The neutralization of specific prefrontal contributions to consciousness has been aptly called “phenomenological subtraction” by Allan Hobson (2001). The individual in such an altered state operates on what top layers remain. In altered states that cause severe prefrontal hypofunction, such as non-lucid dreaming or various drug states, the resulting phenomenological awareness is extraordinarily bizarre. In less dramatic altered states, such as long-distance running, the change is more subtle.”

And about meditation in particular, they hypothesize that it involves generally lowered prefrontal activity, with the exception of increased activation in the prefrontal attentional network:

“It is evident that more research is needed to resolve the conflicting EEG and neuroimaging data. Reinterpreting and integrating the limited data from existing studies, it is proposed that meditation results in transient hypofrontality with the notable exception of the attentional network in the prefrontal cortex. The resulting conscious state is one of full alertness and a heightened sense of awareness, but without content. Since attention appears to be a rather global prefrontal function (e.g., Cabeza & Nyberg, 2000), PET, SPECT, and fMRI scans showed an overall increase in DL activity during the practice of meditation. However, the attentional network is likely to overlap spatially with modules subserving other prefrontal functions and an increase as measured by fMRI does not inevitably signify the activation of all of the region’s modules. Humans appear to have a great deal of control over what they attend to (Atkinson & Shiffrin, 1968), and in meditation, attentional resources are used to actively amplify a particular event such as a mantra until it becomes the exclusive content in the working memory buffer. This intentional, concentrated effort selectively disengages all other cognitive capacities of the prefrontal cortex, accounting for the α-activity. Phenomenologically, meditators report a state that is consistent with decreased frontal function such as a sense of timelessness, denial of self, little if any self-reflection and analysis, little emotional content, little abstract thinking, no planning, and a sensation of unity. The highly focused attention is the most distinguishing feature of the meditative state, while other altered states of consciousness tend to be more characterized by aimless drifting.”

They do not discuss permanent changes caused by meditation in the paper, but if the prefrontal cortex is involved with last-stage processing of incoming sensory data, then reduced prefrontal processing would fit with meditators’ reports of being able to experience sensory information in a more “raw”, unprocessed form. Likewise, if the prefrontal cortex unifies and integrates information from earlier processing stages, then meditation revealing the unity of self to be an illusion would be consistent with reduced prefrontal activity.

Vipassana jhanas, or other forms of meditation aimed at reaching enlightenment, would then somehow involve permanently reducing, or at least changing the nature of, prefrontal processing. Meditation practitioners speak of “the Dark Night”, an intermediate stage during the search for enlightenment, which is experienced as strongly unpleasant and where “our dark stuff tends to come bubbling up to the surface with a volume and intensity that we may never have known before”. This stage is reached after making sufficient progress in meditation, and will continue until the practitioner makes enough further progress for it to go away.

Under the model suggested by the paper, the Dark Night would then be an intermediate stage where prefrontal activity has been reduced or changed to such an extent that the prefrontal cortex is no longer capable of moderating the output of the various earlier emotional systems. Resolving the Dark Night would involve somehow finding a new balance, in which the outputs of any systems involved with negative emotions could again be better handled, but I have no idea of how that happens.

AI thought process visualization

I started thinking about all the original computer science CGI stuff you could do in a sci-fi movie or TV series.

Like, you have this robot thinking about what it’ll do, and it’s running some breadth-first search or whatever, and we’re shown an elaborate crystalline 3D search tree slowly expanding from some starting point. We can see various possible actions, like “ASK THE HUMAN NICELY FOR INFORMATION”, sitting at various points in the search space. One by one, the tree expands to them and then discards them after a moment’s evaluation.

And then, a distance away from the starting point, there’s this “SHOOT THE HUMAN” decision that we’re shown and that we can see the decision tree slowly but inevitably expanding towards. Then maybe it’s a race against time for the main characters to give the robot some new information that will change its evaluation criteria to reject that course of action when it reaches it. Or something.
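For concreteness, here’s a minimal toy sketch (in Python) of the kind of breadth-first expansion that scene imagines. The action tree, the scores, and the acceptance threshold are all invented for illustration; a real planner would be generating candidate actions on the fly rather than reading them from a dictionary.

```python
from collections import deque

# A hand-made tree of actions the robot could consider (purely illustrative).
ACTION_TREE = {
    "START": ["ASK THE HUMAN NICELY FOR INFORMATION", "SEARCH THE DATABASE"],
    "ASK THE HUMAN NICELY FOR INFORMATION": ["WAIT FOR AN ANSWER"],
    "SEARCH THE DATABASE": ["BRIBE THE HUMAN", "SHOOT THE HUMAN"],
    "WAIT FOR AN ANSWER": [],
    "BRIBE THE HUMAN": [],
    "SHOOT THE HUMAN": [],
}

# The robot's (made-up) evaluation criteria: how well each action serves its goal.
GOAL_VALUE = {
    "ASK THE HUMAN NICELY FOR INFORMATION": 0.3,
    "SEARCH THE DATABASE": 0.2,
    "WAIT FOR AN ANSWER": 0.4,
    "BRIBE THE HUMAN": 0.6,
    "SHOOT THE HUMAN": 0.9,  # the evaluation the heroes need to change in time
}

def breadth_first_plan(tree, values, threshold=0.8):
    """Expand the tree level by level, discarding actions that score too low."""
    frontier = deque(["START"])
    while frontier:
        action = frontier.popleft()
        score = values.get(action, 0.0)
        print(f"evaluating {action!r} (score {score:.1f})")
        if score >= threshold:
            return action                      # good enough: commit to this plan
        frontier.extend(tree.get(action, []))  # otherwise expand its children
    return None

if __name__ == "__main__":
    print("chosen plan:", breadth_first_plan(ACTION_TREE, GOAL_VALUE))
```

On screen, each dequeued node would be a new branch of the crystal tree lighting up; the heroes’ intervention amounts to editing the evaluation criteria before the expansion reaches the bad branch.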

Anyway, I bet that all kinds of seemingly-boring, basic compsci concepts like search trees could be made to seem really cool and exciting with a little work.

(This thought was inspired by seeing http://www.idsia.ch/~juergen/oopstree720.jpg and then imagining that search tree slowly and organically growing. That one looks more like depth-first search, but anyway.)

————–

AI thought process visualization, part II: The AI is shown as a space ship composed of many modules, floating in (concept)space. Around it are various fields of knowledge and subjects it could be analyzing, visualized as asteroids or other physical objects.

The AI’s attention is visualized as searchlights shooting out from its various modules to the surrounding objects. Most of the objects get no attention at all, or they are only occasionally thought about at a superficial level of analysis. This is shown as searchlights that gradually wander through the various objects, sometimes stopping for a moment at one but mostly just moving on.

Some objects, however, will catch the AI’s interest. At first, one of the searchlights merely pauses at such an interesting object. But instead of moving on, it stays there, and several other wandering searchlights are also redirected to study it.

The AI then begins to devote more processing power to analyzing this object (really a domain of knowledge, such as geography), switching from coarse-grained analysis using a few basic algorithms to a detailed investigation using a number of specialized routines. One of the modules that was shining light at the object will split apart, a large homogeneous shape separating into a buzzing swarm of shapes, each with their own searchlight that is smaller but much more intense. They move to surround the object being studied, and soon the tiny searchlights find seams in its structure, cutting away its outer layer. As they do so, the object expands, unveiling a dark mass within. As the swarm turns its searchlights on the mass, they reveal its features, a planet’s surface appearing from under the surface of an asteroid. And each feature grows a new unexplored region around it as it is revealed, the object having a fractal structure that grows more and more complex the more that it is studied.

The domain-general algorithms and tricks that were initially used to study the subject rapidly grow less useful. The searchlights of the initial swarm of shapes begin to dim, revealing fewer things in the darkness with each pass. The object stops growing as well: features are still slowly discovered, but they no longer yield new insights. At first, the searchlights found nothing but new kinds of things: forests, mountains, lakes. Now they find nothing new, no cities or volcanoes. Each revealed feature is just a slightly different variation of the old ones, and it looks like the object’s new surface might soon be entirely mapped.

But then, after enough seemingly-identical features have appeared, one may start to notice a pattern that was not previously discovered. Searchlights are aimed at this pattern, and the AI begins experimenting with ways to exploit the new structure that has been found in the domain. From one of its modules, a new stream of small shapes – experimental algorithms customized to make use of the new information – makes its way to the object. One by one they train their searchlights on the new pattern, most of them still revealing nothing new. But then cracks begin to appear in it, cracks which widen as ever more new shapes shine light on it. And then parts of this layer, too, break away, again revealing a new kind of world below. It does not happen all at once, and many different patterns need to be studied and exploited before the whole layer gets peeled away: but gradually, deeper and deeper layers are found within the object.

The AI keeps making progress at understanding the object better: and as it builds new kinds of algorithms for studying this object, they get collected into an entirely new module, one optimized for this domain. Some members of the swarm join existing modules, too: the AI will experiment with trying them on different objects, to see if they might have cross-domain applicability.

Gradually, the stream of new insights that can be collected from the object begins to slow again; and meanwhile, the searchlights of the AI’s other modules are wandering around the surrounding space, looking for something that might catch its interest…
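If someone wanted to animate that dynamic, even a toy model would do. Here is one invented way to sketch it, not anything drawn from a real AI system: each domain has a finite stock of undiscovered features, attention units (“searchlights”) are handed out in proportion to current interest, and interest rises with discoveries and decays during dry spells, so attention naturally drifts away once a domain has been mined out.

```python
import random

# Toy "curiosity-driven" attention loop; domains, novelty stocks, and update
# rules are all invented for illustration.
domains = {
    # name: {"novelty": discoveries left to find, "interest": current score}
    "geography":        {"novelty": 40, "interest": 0.5},
    "stamp collecting": {"novelty": 3,  "interest": 0.5},
    "number theory":    {"novelty": 15, "interest": 0.5},
}

UNITS = 12  # attention units ("searchlights") to hand out each cycle

def allocate(domains, units):
    """Distribute attention units in proportion to each domain's interest."""
    total = sum(d["interest"] for d in domains.values()) or 1.0
    return {name: max(1, round(units * d["interest"] / total))
            for name, d in domains.items()}

def run(domains, cycles=8):
    for t in range(cycles):
        for name, units in allocate(domains, UNITS).items():
            d = domains[name]
            # Each unit spent has some chance of a discovery, while novelty remains.
            found = min(d["novelty"], sum(random.random() < 0.3 for _ in range(units)))
            d["novelty"] -= found
            # Discoveries raise interest; dry passes let it decay.
            d["interest"] = min(1.0, d["interest"] + 0.1 * found) if found else d["interest"] * 0.7
        print(f"cycle {t}:", {name: round(d["interest"], 2) for name, d in domains.items()})

if __name__ == "__main__":
    run(domains)
```

Typically the small domain dries up within a cycle or two and its searchlights drift elsewhere, while the richer domains keep attracting attention until their own novelty runs out.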

Personal achievement report, Nov – Dec 2011

theferrett has this awesome habit of making regular updates on how his story-writing and his published stories are doing. I find them inspiring. After seeing his latest update, it occurred to me that I should write one of my own, to help keep track of how I’m doing, and to remind my brain to keep thinking about the stuff I want it to be thinking about. And maybe to also boast a tiny little bit. Anyway. Here are my projects and achievements from November 1st onwards. Overall, not too bad.

COMPLETED WRITINGS

* Less Wrong post, “Modularity, signaling, and belief in belief”. Part of my series summarizing Robert Kurzban’s book “Why Everyone (Else) Is a Hypocrite: Evolution and the Modular Mind”. Only got 16 upvotes and 733 page views (not counting front-page views), probably because it was mostly covering material the LW community already knew.
* Less Wrong post, “The Curse of Identity”. I’m really happy with this one: at 98 upvotes and 4858 page views, it’s my most popular LW post to date. Also the one that I’m possibly personally the happiest with.
* Less Wrong post, “5-second level case study: Value of Information”. This one fared much less impressively: 18 upvotes and 770 page views. But it was an experimental post, and I did expect that it might not be very popular.
* FB/G+ posts on how to visualize AI thought processes in a movie or TV series. [1] [2]. Random fun and not too insightful, but I’m quite happy with the second one in particular.
* FB/G+/LJ posts on how science doesn’t work and it seems really hard for us to know anything. It was a rant that I’ve had in mind for a long time, and seemed to strike a chord with some of my readers, with 14 G+ shares.
* A couple of briefer posts that I don’t count as achievements.

ACADEMIC FAME AND WORKS-IN-PROGRESS

* I received two reviewers’ comments for my and Harri Valpola’s paper Coalescing minds: brain uploading-related group mind scenarios, written for the Mind Uploading special issue of the International Journal of Machine Consciousness. The comments were excellent, and we will be doing substantial revising soon.
* I also received one reviewer’s comments for my other paper in the same issue, Relative advantages of uploads, artificial general intelligences, and other digital minds. They were next to useless, and I still haven’t received comments from the second reviewer.
* Provided comments for two papers that other people wrote for that special issue.
* Playing around with Google Scholar, I found out that my 2010 ECAP paper, From mostly harmless to civilization-threatening: pathways to dangerous artificial general intelligences, had been cited in a paper for the 2011 AGI conference. I wasn’t very impressed with the paper, but at least I now have an h-index of 1!
(* Jokapiraatinoikeus, the book I wrote on copyright together with Ahto Apajalahti, had also been previously cited in two Bachelor’s-level theses and one Master’s thesis, but Google Scholar apparently doesn’t understand Finnish theses since they don’t appear as citations even though it finds them.)

BOOKS-IN-PROGRESS

* Novel: a secret co-written one that I’m not at liberty to talk much about. But I can probably mention that I wrote about 8000 words of prose for it before we decided that it wasn’t working as well as it could and we had to rethink our approach.
* Novel: The City of Light and Fire. Some of you will remember me starting on this in summer. I didn’t really have a clear enough idea of where it was going, and the protagonist was too passive for my tastes, so I put it on the back burner while trying to figure out what I wanted to do with it. Some discussions with alicorn24 have given me a bit of an idea, and I might re-work what I have and return to it on Christmas leave.
* Non-fiction book: How human minds differ, or, I need a catchier title (working title). Put up a couple of posts in various places asking people for their experiences, began collecting ideas. Haven’t gotten much farther than that, though the LW thread in particular provided a lot of interesting material.
* Non-fiction book: Human thought, or, I need a catchier title even worse (working title). A book on human rationality which is still trying to figure out what its central claim will be. I’ve been jotting down notes on that for nearly a year now, and each time I write down a new central claim, I note that it’s completely different from everything else I’ve written. The most intriguing approach would be to write about the effect of social norms and the curse of identity on our thought, but I’d need to read up on my social psychology more for that.

OTHER-WRITING-IN-PROGRESS

* A popular article on A) overfitting and AI goals, as well as B) that old “but surely a superintelligent AI would understand that this wasn’t what we really wanted” claim. I intended to only write about A, but then I ended up writing five pages’ worth of B first and still haven’t gotten around to A. I’m trying to decide whether I should split it into two articles or rearrange the structure somehow.
* I need to finish my LW series on Robert Kurzban’s previously-mentioned book.

OTHER PROJECTS

* Secret crazy website project that I’m working on together with a friend. Did a bunch of writing and planning for it, he’s been doing programming and planning. We intended to unveil it at the end of last week, but didn’t meet that goal.
* I haven’t made much progress with regard to overcoming suffering and achieving equanimity lately: in fact, I’ve lost most of what I did achieve. It seemed like being happy and free from suffering made me less productive, since I was just happy doing nothing, so I’ve put that on hold until I figure out how to fix that problem.

COMPLETED SCHOOLWORK

* Made the final decision to change my major to computer science for my Master’s degree: applied for the program and was admitted.
* Aced an Operating Systems exam, have two other exams coming up which I expect to pass. I’m currently set to net a total of 12 credits from the fall term, which isn’t very impressive given that the official target is 30 credits a term. I should do more school stuff and less of everything else.

My knowledge as anti-knowledge

During my more pessimistic moments, I grow increasingly skeptical about our ability to know anything.

Take science. Academia is supposed to be our most reliable source of knowledge, right? And yet, a number of fields seem to be failing us. No result should really be believed before it has been replicated several times. Yet, of the 45 most highly regarded studies within medicine suggesting effective interventions, 11 haven’t been retested, and 14 have been shown to be convincingly wrong or exaggerated. John Ioannidis suggests that up to 90 percent of the published medical information that doctors rely on is flawed – and the medical community has for the most part accepted his findings. His most cited paper, “Why Most Published Research Findings Are False”, has been cited almost a thousand times.

Psychology doesn’t seem to be doing that much better. Last May, the Journal of Personality & Social Psychology refused to publish a failed replication of the parapsychology paper they published earlier.

The reason Smith gives is that JPSP is not in the business of publishing mere replications – it prioritises novel results, and he suggests the authors take their work to other (presumably lesser) journals. This is nothing new – flagship journals like JPSP all have policies in place like this. […] …major journals simply won’t publish replications. This is a real problem: in this age of Research Excellence Frameworks and other assessments, the pressure is on people to publish in high impact journals. Careful replication of controversial results is therefore good science but bad research strategy under these pressures, so these replications are unlikely to ever get run. Even when they do get run, they don’t get published, further reducing the incentive to run these studies next time. The field is left with a series of “exciting” results dangling in mid-air, connected only to other studies run in the same lab.

This problem is not unique to psychology – all fields suffer from it. But while we are on the subject of psychology, the majority of its results are from studies conducted on Western college students, who have been presumed to be representative of humanity.

“A recent survey by Arnett (2008) of the top journals in six sub-disciplines of psychology revealed that 68% of subjects were from the US and fully 96% from ‘Western’ industrialized nations (European, North American, Australian or Israeli). That works out to a 96% concentration on 12% of the world’s population (Henrich et al. 2010: 63). Or, to put it another way, you’re 4000 times more likely to be studied by a psychologist if you’re a university undergraduate at a Western university than a randomly selected individual strolling around outside the ivory tower.” Yet cross-cultural studies indicate a number of differences between industrialized and “small-scale” societies, in areas such as “visual perception, fairness, cooperation, folkbiology, and spatial cognition”. There are also a number of contrasts between “Western” and “non-Western” populations “on measures such as social behaviour, self-concepts, self-esteem, agency (a sense of having free choice), conformity, patterns of reasoning (holistic v. analytic), and morality” ( http://neuroanthropology.net/2010/07/10/we-agree-its-weird-but-is-it-weird-enough/ ; http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=7825833 ).

Many supposedly “universal” psychological results may actually only be “universal” to US college students.

In any field, quantitative studies require intricate knowledge of statistics and a lot of care to get right. Academics are pressed to publish at a fast pace, and the reviewers of scientific journals often have relatively low standards. The net result is that researchers have neither the time nor the incentive to conduct their research with the necessary care.

Qualitative research doesn’t suffer from this particular problem, but it has the obvious weakness of often relying on a limited sample group and producing difficult-to-generalize findings. Many social sciences that are heavily based on qualitative methods outright state that carrying out an objective analysis, where the researcher’s personal attributes and opinions don’t influence the results, is not just difficult but impossible in principle. At least with quantitative sciences, it may be possible to convincingly prove results wrong. With qualitative sciences, there is much more wiggle room.

And there’s plenty of room for the wiggling to do a lot of damage even in the quantitative sciences. From the previously mentioned article on John Ioannidis:

Simply put, if you’re attracted to ideas that have a good chance of being wrong, and if you’re motivated to prove them right, and if you have a little wiggle room in how you assemble the evidence, you’ll probably succeed in proving wrong theories right. His model predicted, in different fields of medical research, rates of wrongness roughly corresponding to the observed rates at which findings were later convincingly refuted: 80 percent of non-randomized studies (by far the most common type) turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials. The article spelled out his belief that researchers were frequently manipulating data analyses, chasing career-advancing findings rather than good science, and even using the peer-review process, in which journals ask researchers to help decide which studies to publish, to suppress opposing views. “You can question some of the details of John’s calculations, but it’s hard to argue that the essential ideas aren’t absolutely correct,” says Doug Altman, an Oxford University researcher who directs the Centre for Statistics in Medicine.

Of course, none of this is to say that science isn’t good for anything. I’m typing this on a computer that obviously works, in an apartment built by human hands, surrounded by countless technological widgets. The more closely related a science is to a branch of engineering, the more likely it is to be basically right: its ideas are constantly and rigorously being tested in a way that actually incentivizes being right, not just publishing impressive-looking studies. The farther a science is from engineering and from having practical applications that can be tested immediately, the more likely it is to be full of nonsense.

Take governmental institutions. Academia, at least, still has some incentive to seek the truth. Politicians, meanwhile, have an incentive to look good to voters, who by and large do not care about the truth. The issues that citizens care most strongly about tend to be the issues they know the least about, and often they do not even know the political agendas of the parties or politicians they vote for. For the average voter, who has very little influence on actual decisions but who can take a lot of pleasure from believing pleasant things, remaining ignorant is a rational course of action. Statements that sound superficially good or that appeal to the prejudices of a certain segment of the population matter much more to politicians than actually caring about the truth. Often, even considering a politically unpopular opinion to be possibly true is thought to be immoral and suggestive of a suspicious character.

And various governmental institutions, from government-funded academics to supposedly neutral public bodies, are all subject to pressure from above to sound good and produce pleasing results. The official recommendations of any number of government agencies can be the result of political compromise as much as anything else, and researchers are routinely hired to act as the politicians’ warriors. Even seemingly apolitical institutions like schools and the police may fall victim to the pressure to produce good results and start reporting statistics and results that do not reflect reality. (For a particularly good illustration of this, watch all five seasons of The Wire, possibly the best television series ever made.)

Take the media. Is there any reason to expect the media to do much better? I don’t see why there would be. Compared to academia, journalists are under even more time pressure to produce articles, have even less in the way of rigorous controls on truthfulness, and have even more of an incentive to focus on big eye-catching headlines. Even for the journalists who actually follow strict codes of ethics, the incentives for sloppy work are strong. Anybody who has expertise in pretty much any field that gets reported on knows that what’s written often bears very little resemblance to reality.

Some time ago, there were big claims about how Twitter was powering revolutions and protests in a number of authoritarian countries. Many of us have probably accepted those claims as fact. But how true are they, really?

“In the Iranian case, meanwhile, the people tweeting about the demonstrations were almost all in the West. ‘It is time to get Twitter’s role in the events in Iran right,’ Golnaz Esfandiari wrote, this past summer, in Foreign Policy. ‘Simply put: There was no Twitter Revolution inside Iran.’ The cadre of prominent bloggers, like Andrew Sullivan, who championed the role of social media in Iran, Esfandiari continued, misunderstood the situation. ‘Western journalists who couldn’t reach – or didn’t bother reaching? – people on the ground in Iran simply scrolled through the English-language tweets posted with tag #iranelection,’ she wrote. ‘Through it all, no one seemed to wonder why people trying to coordinate protests in Iran would be writing in any language other than Farsi.’” ( http://www.newyorker.com/reporting/2010/10/04/101004fa_fact_gladwell )

Take the Internet. Online, we are increasingly living in filter bubbles, where the services we use attempt to personalize the information we read to what they think we want to see. Maybe you’ve specifically gone to the effort of including both liberals and conservatives as your Facebook friends, as you want to be exposed to the opinions of both. But if you predominantly click on the liberal links, then eventually the conservative updates will be invisibly edited out by Facebook’s algorithms, and you will only see liberal updates in your feed. Various sites are increasingly using personalization techniques, trying to only offer us content they think we want to see – which is often the content most likely to appeal to our existing opinions.
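To see how little machinery this feedback loop needs, here is a minimal sketch. It is not Facebook’s actual algorithm, just an invented click-count ranker with made-up posts and sources, but it shows how a feed can quietly converge on one side:

```python
from collections import Counter

clicks = Counter()  # how often the user has clicked links from each source

def record_click(source):
    clicks[source] += 1

def rank_feed(posts, top_n=5):
    """Show the posts whose sources the user has clicked most in the past."""
    return sorted(posts, key=lambda p: clicks[p["source"]], reverse=True)[:top_n]

posts = ([{"source": "liberal friend", "title": f"liberal post {i}"} for i in range(5)]
         + [{"source": "conservative friend", "title": f"conservative post {i}"} for i in range(5)])

# The user starts out clicking mostly liberal links...
for _ in range(8):
    record_click("liberal friend")
record_click("conservative friend")

# ...and the "personalized" feed soon shows little else.
for post in rank_feed(posts):
    print(post["title"])
```

Nothing here is malicious; the conservative updates simply never make the cut once the click history tilts one way.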

Take yourself. Depressed by all of the above? Think you should only trust yourself? Unfortunately, that might very well produce even worse results than trusting science. We are systematically biased to favorably misremember events, to only seek evidence confirming our beliefs, and to interpret everything in our own favor. Our conscious minds may not have evolved to look for the truth at all, but to pick, out of the various defensible positions, the one that most favors ourselves. ( http://lesswrong.com/lw/8gv/the_curse_of_identity/ ; http://lesswrong.com/tag/whyeveryonehypocrite )

Our minds run on corrupted hardware: even as we think we are trying to impartially look for the truth, other parts of our brains are working hard to give us that impression while hiding the actual, biased thought processes we engage in. We have conscious access to only a small part of our thought processes, and have to rely on vast amounts of information prepared by cognitive mechanisms whose accuracy we have no way of verifying directly. Science, at least, has some safeguards in place that attempt to counter such mechanisms – in most cases, we will still do best by relying on expert opinion.

“But if you plan to mostly ignore the experts and base your beliefs on your own analysis, you need to not only assume that ideological bias has so polluted the experts as to make them nearly worthless, but you also need to assume that you are mostly immune from such problems!” ( Robin Hanson: Against DIY Academics )

—-

Most of the things I know are probably wrong: with each thing I think I learn, I might be learning falsehoods instead. Because the criteria for an idea catching on and for an idea being true are different, the ideas that a person is more likely to hear about are the ones that are more likely to be wrong. Thus most of the things I run across in my life (and accept as facts) will be wrong.

And of course, I’m quite aware of the irony in that I have here appealed to a number of sources, all of which might very well be wrong. I hope I’m wrong about being wrong, but I can’t count on it.

(Essay also cross-posted to Google Plus.)