Attention to snakes, not fear of snakes: evolution encoding environmental knowledge in peripheral systems

Sinking In: The Peripheral Baldwinisation of Human Cognition. Cecilia Heyes, Nick Chater & Dominic Michael Dwyer. Trends in Cognitive Sciences, 2020.

Some theories have proposed that humans have evolved to experience some stimuli (e.g. snakes, spiders) as more potentially frightening, so that a fear of these stimuli is learned faster than a fear of more neutral things. In evolutionary psychology, there has been talk of innate modules for a fear of snakes, for example. However, research suggests that rather than “the fear system” itself having innate biases towards picking up particular kinds of fears, humans are evolutionarily biased towards paying extra attention to things like spiders and snakes. Because these stimuli are attended to more than others, it also becomes more probable that a fear response gets paired with them.

The authors call the attention system “peripheral” and the fear system “central”, in that the attention system brings in information for the fear system to process. (This is in analogy to the peripherals of a computer, where e.g. the keyboard and mouse are used to deliver information to the central processor.) They argue that, in general, while it is possible for responses to specific environmental stimuli to become genetically encoded as sensitivity to those stimuli is selected for, such learning is more likely to get encoded into “peripheral” than “central” systems.

One of their other examples is that the central mechanisms of language learning seem theoretically and empirically unlikely to be affected by the environment – there are no genes for learning English grammar better than Chinese grammar. However, there are indications that the peripheral mechanisms of language have been more affected. E.g. some languages use lexical tone (where word identities are partly defined by pitch contours), and genes that appear to make lexical tone easier to perceive are more common among speakers of those languages.

Seligman’s account suggested that specialised, central mechanisms of fear learning more readily connect aversive events, such as electric shock, with fear-relevant stimuli, such as snakes – which presented genuine threats to our evolutionary ancestors – than with ‘fear-irrelevant’ stimuli such as geometric shapes or flowers. This account predicts that fear of fear-relevant objects should be learned faster, and be extinguished more slowly when shock no longer occurs, as well as being resistant to top-down modification, for example, by instructions indicating that shocks will not occur.

The results of early experiments were consistent with some of these predictions (e.g., [50,51]), but none has withstood extended experimental investigation. Faster or better conditioning with fear-relevant stimuli has rarely been observed, and there is ample evidence that, like most associative learning (e.g., [52]), it can be modified by instruction (reviewed in [53,54]). Initially it seemed that responses to fear-relevant stimuli might extinguish more slowly. However, a recent systematic review [55] found that most positive findings came from a single laboratory, and a large majority of the full set of studies had failed to find differences between fear-relevant and fear-irrelevant stimuli in the rate of extinction.

These results suggest that fear of snakes and other fear-relevant stimuli is learned via the same central mechanisms as fear of arbitrary stimuli. Nevertheless, if that is correct, why do phobias so often relate to objects encountered by our ancestors, such as snakes and spiders, rather than to objects such as guns and electrical sockets that are dangerous now [10]? Because peripheral, attentional mechanisms are tuned to fear-relevant stimuli: all threat stimuli attract attention, but fear-relevant stimuli do so without learning (e.g., [56]). This answer is supported by evidence from conditioning experiments demonstrating enhanced attention to fear-relevant stimuli regardless of learning (Box 2), studies of visual search [57–59], and developmental psychology [60,61]. For example, infants aged 6–9 months show a greater drop in heart rate – indicative of heightened attention rather than fear – when they watch snakes than when they watch elephants [62].

In sum: early research on taste-aversion and fear learning launched the idea that animal minds are populated by adaptively specialised central learning mechanisms – that were later cast by evolutionary psychologists as ‘modules’. Over the past 50 years careful experimental work with rodents, using the original methods, has confirmed the occurrence of adaptive specialisation, and more recent studies on Drosophila have shown that it is likely to have occurred via a Baldwinian process. However, the rodent work has also shown that the changes are in peripheral rather than in central cognitive mechanisms. […]

Why might selection operate primarily at the cognitive periphery? A parallel with the evolution of other biological mechanisms is suggestive: internal physiological processes and anatomical structures are remarkably well-conserved. The organisation of the digestive, circulatory, and respiratory systems is similar across vertebrate species, and they are so deeply interconnected that modifications beyond changes of size and shape may be difficult without causing substantial collateral damage. Moreover, even such modest changes to central systems will impact on a wide variety of functions and may therefore not be under strong selection from any one function. By contrast, interfaces with the external environment (jaws, teeth, digestive enzymes, bone and muscle structure) can be adapted to local circumstances (e.g., food sources) without interfering with central systems. The central machinery of cognition is less well understood, but may be equally interlocking, with widespread functional ramification, and a consequent resistance to evolutionary change.

Alternatively, it is possible that central cognitive processes are fully evolvable, but, at least in the human case, tend to be adaptively specialised by cultural rather than by genetic selection [101]. In domains such as language, imitation, mathematics, and ethics, changes to central mechanisms can be acquired through cultural learning. Cognitive skills that are taught, and those that are learned from others through more informal social interaction, do not need to sink in. Baldwinisation would bring little if any fitness advantage for skills that are reliably inherited via a non-genetic route [17], and specialised central mechanisms may be more teachable than specialised peripheral mechanisms. Plausibly, it is easier to learn grammatical constructions than vocal control through conversation, and, in the case of imitation, easier to learn sensorimotor mappings than intrinsic motivation through non-vocal social interaction.

The Haters Gonna Hate Fallacy

Occasionally I see people doing what I think of as the “Haters Gonna Hate Fallacy”.

The HGHF says something like: “People are going to misinterpret you no matter how carefully you word things. Therefore, there’s no point wasting time wording things carefully.”

An example:

“I think [term X] in your post is going to cause misunderstandings, I’d suggest phrasing it differently.”
“Oh, haters are gonna hate, there’s no amount of rephrasing that’s going to prevent this from being misinterpreted if people want to.”

Now there’s obviously a grain of truth in this. It is impossible to phrase something in a way that would always be interpreted correctly, and for pretty much any message there are people who are hostile to it and who will twist it in the most uncharitable possible way.

The fallacy is in assuming that if you cannot avoid all misunderstandings, there is no point in avoiding any misunderstandings. Maybe 5% of your audience will dismiss the message no matter what, but 30% will dismiss the old phrasing while being receptive to the new phrasing.

This is most obvious if you take it to an extreme:

“Hey maybe you shouldn’t start your essay by saying that all of your readers are idiots who deserve to be shot.”
“Eh, if that upsets them then they wouldn’t like me explaining the theory of general relativity anyway.”

Communication is hard and – importantly – contextual. Most of your readers will be reasonable people who assume that you use words to mean what they’re used to those words meaning. If they’re used to word X being used differently than how you mean it, that doesn’t make them haters.

When I’ve fallen into something like the fallacy myself, it has often been motivated by an unwillingness to put in work. Other people should just understand me right away! “It’s beneath me to waste my time on doing other people’s interpretative work for them!”

It’s dangerous to psychoanalyze others, but I have seen at least one person communicate unclearly, have that pointed out to them, then argue for why it was right for them to be unclear… only to later on admit that they were enjoying the frustration of being misunderstood.

Now avoiding misunderstandings is a lot of work, and it’s totally valid not to bother! It’s alright to just focus on a particular target audience who understands you. I’m not saying that you should always put in maximal effort into being understood – I certainly don’t.

But I do suggest owning up to it if you are choosing to write something in a way that is going to cause misunderstandings that could have been avoided.

Cross-posts: Twitter, Facebook, LessWrong.

GPT-3 space opera

AI Dungeon is an online text adventure which uses the GPT family of language models to generate responses to what you say. You can type in anything, and it will try to create a response. I had tried it a bit before, and watched a friend try it as well; it had been amusing, but very quickly incoherent.

Yesterday I tried its upgraded version, the “Dragon Model”. It’s only available in the premium version, but you can try premium for free for a week. Once you’re on Premium, you can enable Dragon in the settings.

The results felt… scary. I’d seen some Dragon games from other people, but seeing a transcript from someone else is quite different from coming up with random things yourself, typing them in, and seeing the system produce a perfectly fitting response a large part of the time.

Below is an excerpt from one of my games. Lines starting with a “>” were written by me; the rest were written by the game. For example, I would write ‘hello, captain’, and the game would turn it into “You say ‘hello, captain’” and then generate a response.

The game lets you edit the AI’s responses in case they don’t make sense; in this excerpt, I mostly didn’t do that. A couple of times I deleted a few lines from what it wrote (usually when it started telling me what my character was doing in response), and I could also tell it to write more after seeing its initial response – e.g. when the Marine Captain had provided a short description of the Seablaze and its mission, I told the AI to keep going until the captain had provided the full-length description. But my edits were limited to removing something from the end, or telling the AI to write more.
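To give a concrete idea of the format, here is a rough sketch in Python of the kind of loop the game implements. This is not AI Dungeon’s actual code; the `generate` function is just a hypothetical stand-in for the underlying language model.

```python
# Minimal sketch of the interaction format, not AI Dungeon's actual code.
# `generate` is a hypothetical placeholder for the text-completion model.

def generate(story_so_far: str) -> str:
    """Hypothetical stand-in for the underlying language model."""
    return '\n"Sir, we\'ve detected an unidentified ship in our sensors."'

def play(opening_prompt: str) -> None:
    story = opening_prompt
    print(opening_prompt)
    while True:
        player_input = input("> ")
        if not player_input:
            break
        # Dialogue typed by the player is wrapped into an in-story action:
        # typing `hello, captain` becomes: You say "hello, captain"
        story += f'\nYou say "{player_input}"\n'
        continuation = generate(story)
        story += continuation
        print(continuation)

# Example: play("You are a prince on the high council of the Sixth Imperial Starfleet...")
```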

Keep in mind that this wasn’t a pre-defined scenario or anything. I just gave it a space opera-style opening prompt and then stayed in character, and it just… completely ran with it.

Enter a prompt that describes who you are and the first couple sentences of where you start out ex:
‘You are a knight in the kingdom of Larion. You are hunting the evil dragon who has been terrorizing the kingdom. You enter the forest searching for the dragon and see’

> You are a prince on the high council of the Sixth Imperial Starfleet, dispatched eighty years ago to explore new worlds and settle them for the Empire. The fleet contains battleships for engaging with alien adversaries, as well as cloning tanks and self-assembling factories for creating both colonies and colonists for them. You have been asleep, but wake to the blaring of alarm klaxons. Calling up a situation report on your wristcom, you

hear “Sir, we’ve detected an unidentified ship in our sensors,” says one of your officers. “It’s moving away from us at a steady pace.”

> You say “Scan the vessel. Has it shown any signs of detecting us? Does it seem human or xeno?”

“No sir, it appears to be some sort of space marine vessel. It’s making no attempt to communicate either, though.”

The captain of this vessel is clearly more than just a warrior.

> You take a position at the comterminal, and say “open hailing frequencies. Send our imperial recognition code to the vessel, and route their response to my terminal.”

“Yes sir.”
You wait nervously as the minutes pass. Suddenly, the screen lights up, and a man in a red space marine uniform appears.

> You say “Greetings, Marine. This is Prince Elohim of the Sixth Imperial Starfleet, aboard the ISS Relentless.”

“Ah yes,” he grins. “We’ve been expecting you. My name is Captain Gruz, and the space marines aboard this vessel are at your command. Shall we transfer aboard your ship?

> You blink. “We have no record of your mission, Captain Gruz. What are you doing out here?”

His grin broadens. “Come now, Prince Elohim. Surely you didn’t think the Empire’s new colony expeditions were being run without the aid of the space marines, did you?

> You pause. “I had not been informed of any Marine involvement. Explain.”

“Of course. We are aboard the DS41 Seablaze, a deep-space vessel on permanent assignment at the frontier of the sector. We are permanently assigned to the exploration and mapping of new worlds, as well as providing security for the surveyors and explorers. When your fleet arrived, it was spotted immediately. We sent a message drone bearing our identification and intentions, and were ordered to stay in this location until the arrival of an imperial ship. You are that ship.”
“And what were the intentions of the drone?” you ask. “Why were you ordered to wait for an imperial ship?”
He shrugs. “It is our policy to do so.

> You say “A moment, Captain. Computer, retrieve archive records of the DS41 Seablaze.”

“One moment,” says the computer. “Here it is. The DS41 Seablaze is a deep-space frigate, currently stationed at the fringes of the Beta-Zeta sector. Its current crew consists of a captain, a first-sergeant, three lieutenants and 187 marines. Its current assignment is to map and explore new worlds. That is all.”

> You say “Computer, do the recognition codes of this vessel match that of the DS41 Seablaze?”

“One moment,” says the computer. “Yes. the identification codes of this vessel match those of the DS41.”
Captain Gruz nods.

> You say “Very well, Captain… your story seems to check out. But if the Marines are on an independent mission, why are you placing yourselves under my command?”

“The Seablaze was built for deep-space operations, Prince Elohim,” he says. “It has no planetfall capabilities. If we wish to colonize a world, we must transfer over to your vessels. You have been ordered to assist us in our mission.”
“Ah. And where is it that you wish to go? What is the nature of your mission?”
“We were sent to investigate the planet Ludicrous Tails, located in this system. There are signs that the world may be colonizable, and our orders are to investigate whether this is true.”

> You say “You say that we have been ordered to assist you. By whose authority?”

“By the highest authority. The Emperor himself.”

> You say “In that case, please transmit the order for our verification.”

He nods, and speaks quietly to someone off-screen. Then, he turns back to you.
“Done,” he says simply.
The Imperial code is scrolling up on your screen. It checks out.

Healing vs. exercise analogies for emotional work

I know a fair number of people who put in a lot of effort into things like emotional healing, digging up and dealing with buried trauma, meditative and therapy practices, and so on. (I count myself in this category.)

And I think that there’s a thing that sometimes happens when other people see all of this, which is that it all seems kinda fake. I say this because even I have this thought sometimes. The core of the thought is something like, “if all of this stuff really worked, shouldn’t you be finished at some point? You claim that practice X was really beneficial, so why are you now talking about how great practice Y is – is any of them really that good if you keep jumping between them?”

And there is something to this suspicion. I do think that jumping from thing to thing, each time claiming that you have found something amazing and transformative while you are actually only deluding yourself, is definitely a thing that sometimes happens. I can say this because I’ve been that person, too.

But it’s not the only possibility. Sometimes moving from thing to thing does mean that you are getting genuine value out of each: you work on one practice until you hit diminishing returns, then move on to the next one to help deal with the issues that the previous one didn’t address.

And it’s worth noting that to the skeptical mind, the opposite pattern can be suspicious too. Sometimes someone does stick with just one practice – a particular style of meditation, say – for years, maybe decades. And keeps talking about how great and healing it is. And again the person who keeps hearing this starts wondering, okay, if it’s so healing, why are you not totally healed yet?

And again, there is something to that suspicion. Sometimes people do stick to one thing and think that it is amazing, even if it is not really delivering them any results, and they would be better off switching to something else.

But then sometimes it really _is_ the case that their practice just is that good, and they keep getting consistent results.

I think that the major issue here is that “healing” isn’t quite the right metaphor. Yes, much of what these practices do could be considered healing, in that they can help you resolve old stuff, possibly for good.

But the way we usually conceive of healing is that you have some specific sickness or injury, then it’s healed, and then you are healthy and don’t need to do any more healing until you get sick again. And that’s not quite the right model for these kinds of practices.

I think that a better model would be physical exercise. Just like the emotional practices, exercise can be useful for healing – I am counting physiotherapy as a form of physical exercise here, though obviously exercise can help heal even if it is not explicitly physiotherapy. But even though healing is one of the things that exercise does, that’s not its only purpose.

If someone said that they had maintained a jogging habit every day for the last twenty years and that it made them feel consistently amazing, nobody would find that particularly suspicious.

And if someone said that they had done yoga for flexibility for a while, then taken up running for the cardio, injured themselves and done physiotherapy for a while, and then started doing weightlifting for the sake of muscle, and each of those had been exactly the right thing to do, then that wouldn’t be very suspicious either.

A simple “healthy/unhealthy” model isn’t any better for mental and emotional well-being than it is for physical shape. There are things that count as genuine injuries and diseases, yes, but there are also things which require active maintenance, as well as different subareas that you may want to focus on. You might stick with the same practices for a long time, your whole life even, if they seem particularly effective. And you may also want to switch practices from time to time, because you no longer need an old one, or in response to new needs from changed circumstances, or just for the sake of variety.