Meditation instructions for self-compassion

I really liked, and have gotten a lot out of, the self-compassion advice in the book The Wisdom of No Escape and the Path of Loving-Kindness.

First, on the general attitude and approach:

When people start to meditate or to work with any kind of spiritual discipline, they often think that somehow they’re going to improve, which is a sort of subtle aggression against who they really are. It’s a bit like saying, ‘If I jog, I’ll be a much better person.’ ‘If I could only get a nicer house, I’d be a better person.’ ‘If I could meditate and calm down, I’d be a better person.’ Or the scenario may be that they find fault with others; they might say, ‘If it weren’t for my husband, I’d have a perfect marriage.’ ‘If it weren’t for the fact that my boss and I can’t get on, my job would be just great.’ And ‘If it weren’t for my mind, my meditation would be excellent.’

But loving-kindness – maitri – toward ourselves doesn’t mean getting rid of anything. Maitri means that we can still be crazy after all these years. We can still be angry after all these years. We can still be timid or jealous or full of feelings of unworthiness. The point is not to try to change ourselves. Meditation practice isn’t about trying to throw ourselves away and become something better. It’s about befriending who we are already. The ground of practice is you or me or whoever we are right now, just as we are. That’s the ground, that’s what we study, that’s what we come to know with tremendous curiosity and interest. […]

Sometimes among Buddhists the word ego is used in a derogatory sense, with a different connotation than the Freudian term. As Buddhists, we might say, ‘My ego causes me so many problems.’ Then we might think, ‘Well, then, we’re supposed to get rid of it, right? Then there’d be no problem.’ On the contrary, the idea isn’t to get rid of ego but actually to begin to take an interest in ourselves, to investigate and be inquisitive about ourselves. […]

This is not an improvement plan; it is not a situation in which you try to be better than you are now. If you have a bad temper and you feel that you harm yourself and others, you might think that sitting for a week or a month will make your bad temper go away – you will be that sweet person that you always wanted to be. Never again will a harsh word leave your lily-white lips. The problem is that the desire to change is fundamentally a form of aggression toward yourself. The other problem is that our hangups, unfortunately or fortunately, contain our wealth. Our neurosis and our wisdom are made out of the same material. If you throw out your neurosis, you also throw out your wisdom. Someone who is very angry also has a lot of energy; that energy is what’s so juicy about him or her. That’s the reason people love that person. The idea isn’t to try to get rid of your anger, but to make friends with it, to see it clearly with precision and honesty, and also to see it with gentleness. That means not judging yourself as a bad person, but also not bolstering yourself up by saying, ‘It’s good that I’m this way, it’s right that I’m this way. Other people are terrible, and I’m right to be so angry at them all the time.’ The gentleness involves not repressing the anger but also not acting it out. It is something much softer and more open-hearted than any of that. It involves learning how, once you have fully acknowledged the feeling of anger and the knowledge of who you are and what you do, to let it go. You can let go of the usual pitiful little story line that accompanies anger and begin to see clearly how you keep the whole thing going. So whether it’s anger or craving or jealousy or fear or depression – whatever it might be – the notion is not to try to get rid of it, but to make friends with it. That means getting to know it completely, with some kind of softness, and learning how, once you’ve experienced it fully, to let go.

And then on the specific instructions for self-compassionate meditation:

The technique is, first, to take good posture and, second, to become mindful of your out-breath. This is just your ordinary out-breath, not manipulated or controlled in any way. Be with the breath as it goes out, feel the breath go out, touch the breath as it goes out. Now, this seems simple, but to actually be with that breath and to be there for every breath requires a lot of precision. When you sit down and begin to meditate, the fact that you always come back to that breath brings out the precision, the clarity, and the accuracy of your mind. Just the fact that you always come back to this breath and that you try, in a gentle way, to be as fully with the breath as you can sharpens your mind.

The third part of the technique is that, when you realize that you’ve been thinking, you say to yourself, ‘Thinking.’ Now, that also requires a lot of precision. Even if you wake up as if from a dream and realize that you’ve been thinking, and you immediately go back to the breath and accidentally forget about the labeling, even then you should just pause a little bit and say to yourself, ‘Thinking.’ Use the label, because the label is so precise. Just acknowledge that you’ve been thinking, just that, no more, no less, just ‘thinking.’ Being with the out-breath cultivates the precision of your mind, and when you label, that too brings out the precision of your mind. Your mind becomes more clear and stabilized. As you sit, you might want to be aware of this.

If we emphasized only precision, our meditation might become quite harsh and militant. It might get too goal-oriented. So we also emphasize gentleness. One thing that is very helpful is to cultivate an overall sense of relaxation while you are doing the meditation. I think you’ll notice that as you become more mindful and more aware and awake, you begin to notice that your stomach tends to get very tense and your shoulders tend to get very tight. It helps a lot if you notice this and then purposely relax your stomach, relax your shoulders and your neck. If you find it difficult to relax, just gradually, patiently, gently work with it. […]

The moment when you label your thoughts ‘thinking’ is probably the key place in the technique where you cultivate gentleness, sympathy, and loving-kindness. Rinpoche used to say, ‘Notice your tone of voice when you say “thinking.”’ It might be really harsh, but actually it’s just a euphemism for ‘Drat! You were thinking again, gosh darn it, you dummy.’ You might really be saying, ‘You fool, you absolutely miserable meditator, you’re hopeless.’ But it’s not that at all. All that’s happened is that you’ve noticed. Good for you, you actually noticed! You’ve noticed that mind thinks continuously, and it’s wonderful that you’ve seen that. Having seen it, let the thoughts go. Say, ‘Thinking.’ If you notice that you’re being harsh, say it a second time just to cultivate the feeling that you could say it to yourself with gentleness and kindness, in other words, that you are cultivating a nonjudgmental attitude. You are not criticizing yourself, you are just seeing what is with precision and gentleness, seeing thinking as thinking. That is how this technique cultivates not only precision but also softness, gentleness, a sense of warmth toward oneself. The honesty of precision and the goodheartedness of gentleness are qualities of making friends with yourself. So during this period, along with being as precise as you can, really emphasize the softness. If you find your body tensing, relax it. If you find your mind tensing, relax it. Feel the expansiveness of the breath going out into the space. When thoughts come up, touch them very lightly, like a feather touching a bubble. Let the whole thing be soft and gentle, but at the same time precise. […]

You may have wondered why we are mindful of our out-breath and only our out-breath. Why don’t we pay attention to the out-breath and the in-breath? There are other excellent techniques that instruct the meditator to be mindful of the breath going out and mindful of the breath coming in. That definitely sharpens the mind and brings a sense of one-pointed, continuous mindfulness, with no break in it. But in this meditation technique, we are with the out-breath; there’s no particular instruction about what to do until the next out-breath. Inherent in this technique is the ability to let go at the end of the out-breath, to open at the end of the out-breath, because for a moment there’s actually no instruction about what to do. There’s a possibility of what Rinpoche used to call ‘gap’ at the end of the out-breath: you’re mindful of your breath as it goes out, and then there’s a pause as the breath comes in. It’s as if you … pause. It doesn’t help at all to say, ‘Don’t be mindful of the in-breath’ – that’s like saying, ‘Don’t think of a pink elephant.’ When you’re told not to be mindful of something, it becomes an obsession. Nevertheless, the mindfulness is on the out-breath, and there’s some sense of just waiting for the next out-breath, a sense of no project. One could just let go at the end of the out-breath. Breath goes out and dissolves, and there could be some sense of letting go completely. Nothing to hold on to until the next out-breath.

Even though it’s difficult to do, as you begin to work with mindfulness of the out-breath, then the pause, just waiting, and then mindfulness of the next out-breath, the sense of being able to let go gradually begins to dawn on you. So don’t have any high expectations – just do the technique. As the months and years go by, the way you regard the world will begin to change.

On my burnout

I’ve said a lot about depression, self-compassion, and breakup blues.

I haven’t said much about burnout. I have that too. Have had for years, in fact.

This is just the first time that I’ve had a chance to stop and heal.

I did a day of work last week, the first one I’ve done since the end of November. It went well. It felt good. So I thought I would try to get a full week’s worth of work done.

Then I basically crashed again.

Sometimes, your skin feels sensitive and raw. Everything is, if not outright painful, then at least unpleasant to touch.

That’s how I feel today, and on a lot of days. Except that the skin is my mind, and the things that I touch are thoughts about things to be done.

Goals. Obligations. Future calendar entries. But even things like a computer game I was thinking of playing, or a Facebook comment I’m thinking of replying to. Anything that I need to keep track of, touches against that rawness in my mind.

That’s another big part of why I’ve been so focused on self-compassion recently. On being okay with not getting anything done. On taking pleasure from just being present. On enjoying little, ordinary things. Because that’s all I have, in moments like this.

I’m getting better. There are fewer days like this. There are many days when I’m actually happy, enjoying it when I do things.

But I’m still not quite recovered. And I need to be careful not to forget that, lest I push myself so much that I crash again.

Self-compassion

Often when we are in pain, what we really want is some validation for the pain.

Not advice. Not someone trying to make that pain go away (because it discomforts them). But someone to tell us that it’s okay to be in pain. That the things that bother us are valid and normal reasons to feel bad.

Much of self-compassion seems to be the same. Not trying to stop being in pain. Not trying to change yourself. But giving yourself the validation that you usually look for from the outside. Accepting the pain as a part of yourself, as something that is alright to feel. Something that you can sympathize with yourself for feeling.

And if you find that you *cannot* accept the pain…

Then you unjudgingly accept that too. That today, this pain is too much for me to bear. You just are with it, without trying to change it.

And if you find that you cannot do that either, and feel bad and guilty for being so bad at this self-compassion thing…

Then you accept that, without trying to change it.

And if you find yourself being kinda okay with being in pain, but still wanting to change it, still wanting to explicitly apply some technique for deeper self-compassion rather than just accepting everything…

Then you accept that, and let yourself do it.

Dealt with in this way, self-compassion oddly starts looking like not really doing anything in particular. After all, you just go about living your life as you always have, not trying to change anything about yourself. Or trying, if that’s what you’re like. Not trying to exert any particular control over your behavior, except when you do.

Yet somehow you end up feeling quite different from normal.

(Except when you don’t, which is also fine.)

Disjunctive AI scenarios: Individual or collective takeoff?

In this post, I examine Magnus Vinding’s argument against traditional “single AI fooms off” scenarios, as outlined in his book “Reflections on Intelligence”. While the argument itself is not novel – similar ones have been made before by Robin Hanson and J Storrs Hall, among others – I found Vinding’s case to be the most eloquently and compellingly put so far.

Vinding’s argument goes basically as follows: when we talk about intelligence, what we actually care about is the ability to achieve goals. For instance, Legg & Hutter collected 70 different definitions for intelligence, and concluded that a summary which captured the essential spirit of most of them was “Intelligence measures an agent’s ability to achieve goals in a wide range of environments”.
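
As an aside – and this is my gloss, not something Vinding’s argument depends on – Legg & Hutter have also turned that summary into a formal “universal intelligence” measure, which in rough sketch looks like

$$\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^{\pi},$$

where $E$ is a set of computable environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$ (so that simpler environments get more weight), and $V_\mu^{\pi}$ is the expected cumulative reward that agent $\pi$ achieves in environment $\mu$. It says the same thing as the prose summary: an agent counts as more intelligent the better it does, on average, across a wide range of environments.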

But once we substitute “intelligence” with “the ability to achieve goals”, we notice that we are actually talking about having tools, in several senses of the word:

  • Cognitive tools: our brains develop to have specialized processes for performing various kinds of tasks, such as recognizing faces, recognizing emotions, processing language, etc. Humans have some cognitive tools that are unique to us (such as sophisticated language) while lacking some that other animals have (such as the sophisticated smell processing of a dog).
  • Anatomical tools: not only do our brains carry out specific tasks, we also have an anatomy that supports it. For instance, our vocal cords allow us to produce a considerable variety of sounds to be used together with our language-processing capabilities. On the other hand, we also lack some other anatomical tools, such as the impressive noses of dogs. It is the combination of cognitive and anatomical tools that allows us to achieve a variety of different goals.
  • Physical tools: tools in the most conventional sense of the word. We would not be capable of achieving much unless we had various physical devices that can be used for manipulating the world.
  • Cultural tools: nobody would get very far if they had to derive all of their ideas from scratch. Rather, we acquire most of our language, ideas, and ways of thought that we use from the people around us.
  • Societal tools: an individual’s ability to achieve things has grown enormously as our economy has grown increasingly specialized. No single person could build a laptop, or even a pencil, all by themselves. Yet we have at our disposal tools – computers, web browsers, Internet service providers, online stores, manufacturers, delivery companies – which allow us to almost effortlessly acquire laptops and pencils and then put them into use.

This paragraph from Vinding’s book summarizes much of his argument:

“Human intelligence” is often compared to “chimpanzee intelligence” in a manner that presents the former as being so much more awesome than, and different from, the latter. Yet this is not the case. If we look at individuals in isolation, a human is hardly that much more capable than a chimpanzee. They are both equally unable to read and write on their own, not to mention building computers or flying to the moon. And this is also true if we compare a tribe of, say, thirty humans with a tribe of thirty chimpanzees. Such two tribes rule the Earth about equally little. What really separates humans from chimpanzees, however, is that humans have a much greater capacity for accumulating information, especially through language. And it is this – more precisely, millions of individuals cooperating with this, in itself humble and almost useless, ability – that enables humans to accomplish the things we erroneously identify with individual abilities: communicating with language, doing mathematics, uncovering physical laws, building things, etc. It is essentially this you can do with a human that you cannot do with a chimpanzee: train them to contribute modestly to society. To become a well-connected neuron in the collective human brain. Without the knowledge and tools of previous generations, humans are largely indistinguishable from chimpanzees.

So what are the implications for AI risk?

One of Vinding’s arguments is that “intelligence” has gotten increasingly distributed. Whereas a hunter-gatherer might only have drawn upon the resources of their own tribe, a modern human will enhance their capabilities by tapping into a network of resources that literally spans the entire globe. Thus, it may be misguided to focus on the point when AIs achieve human-level intelligence, for a single individual’s intelligence alone isn’t sufficient for achieving much. Instead, if AIs were to wipe out humanity, they would need to first achieve the level of capability that human society has… but the easiest way of achieving that would be to collaborate with human society and use its resources peacefully, rather than cause damage to it.

A similar argument was previously put forward by J Storrs Hall in his paper Engineering Utopia, which uses a more economic argument. Hall notes that even when a single AI is doing self-improvement (such as by developing better cognitive science models to improve its software), the rest of the economy is also developing better such models. Thus it’s better for the AI to focus on improving at whatever thing it is best at, and keep trading with the rest of the economy to buy the things that the rest of the economy is better at improving.

However, Hall notes that there could still be a hard takeoff, once enough AIs were networked together: AIs that think faster than humans are likely to be able to communicate with each other, and share insights, much faster than they can communicate with humans. The size of the AI economy could grow quite quickly, with Hall suggesting a scenario that goes “from […] 30,000 human equivalents at the start, to approximately 5 billion human equivalents a decade later”.
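
To get a sense of how fast that is, here is a rough back-of-the-envelope calculation of the doubling time implied by Hall’s illustrative figures (the 30,000 and 5 billion human equivalents are his numbers; the rest is just arithmetic):

```python
import math

# Hall's illustrative scenario: the AI economy grows from ~30,000 to
# ~5 billion human equivalents over roughly a decade.
start_human_equivalents = 30_000
end_human_equivalents = 5_000_000_000
years = 10

doublings = math.log2(end_human_equivalents / start_human_equivalents)
doubling_time_months = years * 12 / doublings

print(f"~{doublings:.1f} doublings in {years} years")                   # ~17.3 doublings
print(f"one doubling roughly every {doubling_time_months:.1f} months")  # ~6.9 months
```

In other words, the scenario corresponds to the AI economy doubling roughly every seven months for a decade.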

Any individual AI, then, will be most effective as a cooperating element of a community (as is any individual human […]). AI communities, on the other hand, will have the potential to grow into powers rivalling or exceeding the capability of the human race in relatively short order. The actions of communities are effects of the set of ideas they hold, the result of an extremely rapid memetic evolution […]

Real-time human oversight of such AI communities is infeasible. Once a networked AI community was established, a “cultural revolution” could overtake it in minutes on a worldwide scale, even at today’s communication rates. The essence of our quest for a desirable future world, then, both for ourselves and for the AIs, lies in understanding the dynamics of memetic evolution and working out ways to curb its excesses.

Hall suggests that an AI community could rapidly grow to the point where its members were exclusively communicating and trading with each other, humans being too slow to bother with. Suppose that you were a digital mind that thought a thousand times as fast as biological humans. If you wanted a task done, would you rather hire another digital mind to do it, taking what felt to you like an hour – or would you hire a biological human, and have to wait what felt like a month and a half? You’d probably go with your digital friend.
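
Spelling out the arithmetic behind that comparison (the thousandfold speedup is the assumption from the paragraph above, and the one-hour task is just for illustration):

```python
# Back-of-the-envelope check of the "an hour versus a month and a half" comparison.
speedup = 1000        # assumed subjective speedup of a digital mind relative to a human
human_task_hours = 1  # wall-clock time the human contractor needs for the task

# Hiring a human: one wall-clock hour of waiting feels a thousand times longer
# to the digital hirer.
subjective_wait_days = human_task_hours * speedup / 24
print(f"human contractor: ~{subjective_wait_days:.1f} subjective days")  # ~41.7 days

# Hiring another digital mind: it works a thousand times faster in wall-clock
# terms, so the wait feels like roughly one subjective hour.
print(f"digital contractor: ~{human_task_hours} subjective hour(s)")
```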

One obvious limitation is that this speed advantage would only apply to purely mental tasks. If you needed something manufactured, you might as well order it from the humans.

Vinding’s book could also be read as a general argument suggesting that the amount of distributed intelligence in human society was so large that AIs would still benefit from trade, and would need a large amount of time to learn to do everything themselves. Vinding writes:

… the majority of what humans do in the economy is not written down anywhere and thus not easily copyable. Customs and know-how run the world to an extent that is hard to appreciate – tacit knowledge and routines concerning everything from how to turn the right knobs and handles on an oil rig to how to read the faces of other humans, none of which is written down anywhere. For even on subjects where a lot is written down – such as how to read faces – there are many more things that are not. In much of what we do, we only know how we do, not exactly “what”, and this knowledge is found in the nooks and crannies of our brains and muscles, and in our collective organization as a whole. Most of this unique knowledge cannot possibly be deduced from a few simple principles – it can only be learned through repeated trial and error – which means that any system that wants to expand the economy must work with this enormous set of undocumented, not readily replaceable know-how and customs.

This is a compelling argument, but with recent progress in AI, it feels less compelling than it might have felt a few years back. Vinding mentions reading faces as an example of a domain involving much tacit knowledge, but computers are already outperforming humans at facial recognition and are starting to match humans at recognizing and interpreting emotional expressions, as well as in recognizing rare syndromes from facial patterns. As a more industrial example, DeepMind’s AI technology was recently deployed to optimize power usage at Google’s data centers, for a 15 percent improvement in power usage efficiency. This is notable both because these were already highly optimized centers, and because relatively small reductions in power use translate to large savings – this change alone is estimated to save Google hundreds of millions of dollars.

Tacit knowledge is essentially knowledge that is based on pattern recognition, and pattern recognition is rapidly becoming one of AI’s strengths. Currently this still requires massive datasets – Goodfellow et al. (2016, chap 1) note that as a rule of thumb, a deep learning algorithm requires a dataset of at least 10 million labeled examples in order to achieve human-level or better performance. On the other hand, they also note that a large part of the success of deep learning has been because the digitization of society has made such large datasets increasingly available.

It seems likely that the development of better and better AI pattern recognition will drive further investment into collecting larger datasets, which will in turn make it even more profitable to continue investing in better pattern recognition. After DeepMind’s success with improving power efficiency at Google’s data centers, DeepMind’s Demis Hassabis told Bloomberg that “[DeepMind] knows where its AI system lacks information, so it may ask Google to put additional sensors into its data centers to let its software eke out even more efficiency”.

If AI allows efficiency to be increased, then businesses will be rebuilt in such a way as to give AI all the necessary information it needs to run them maximally efficiently – making tacit human knowledge of how things were previously done both unnecessary and obsolete. The items in Amazon’s warehouses are algorithmically organized according to a logic that makes little intuitive sense to humans, with an AI system telling the workers where to go; Foxconn is in the process of fully automating its factories; Uber is seeking to replace human drivers with self-driving cars. We are bound to see this kind of automation penetrate into ever larger parts of the economy over time, which will drive the further deployment of sensors and collection of better datasets in order to enable it. By the time AGI manifests, after several decades of this development, there’s no obvious reason to assume that very much of the tacit knowledge needed for running an economy would necessarily remain locked up in human heads anymore.

To sum things up, this suggests that beyond the classical “one AI fooms to a superintelligence and takes over the world” scenario, there may plausibly exist a scenario where the AIs are initially best off trading with humans. As time goes on and the size of the AI community grows, this community may collectively foom off as its members come to trade only with each other and have little use for humans. Depending on how long it takes for the community to grow, this may or may not look any different from traditional foom.

This blog post was written as part of research funded by the Foundational Research Institute.