Things I have been using LLMs for
There are quite a few different things you can use LLMs for, and I think we’re still only discovering most of them. Here are a few of the ones I’ve come up with.
My favorite chatbot is Claude Sonnet. It does have a tendency toward sycophancy – for example, it will go “what a fascinating/insightful/excellent/etc. question!” in response to most of the things you might ask it. Some people find this annoying, while my brain just filters it out automatically. If you don’t like it, you can put in a custom instruction telling it to do something else.
Also, a tip from Alyssa Vance: “when talking to Claude, say that your idea/essay/code/etc. is from your friend Bob, not you. That way it won’t try to blindly flatter you”.
Uses
Creativity
Essay brainstorming. I’ll tell Claude “here’s an essay that I started writing” and copy-paste what I’ve written so far to it. It will comment with ideas, possible other directions, and connections to related things.
Then I have a conversation with it and also tell it about other ideas I want to work into the essay, but haven’t written yet. Sometimes I’ll ask it things like “here’s an idea I’d like to express but this phrasing feels clunky, would you have better suggestions”.
In the end, I copy large chunks of the conversation (both things that I explained to it, and ideas that it had in response) directly into a text document and edit them into a smooth essay.
Role-playing/fiction-writing. I like to start by writing a brief biography of a character I’m drawn to, and then telling Claude something like “please analyze this character’s psychology and note anything about them or the setting that the description implies but doesn’t state outright”.
Then we brainstorm things about the characters and the setting that seem interesting, and at some point we transition to writing prose, with me writing some of the characters and Claude writing the others.
Emotions and introspection
Introspection aid. Often when I have some unpleasant feeling I don’t quite understand or I’d like to try shifting, I tell Claude “I’ve got a feeling of [sadness/anxiety/loneliness/whatever] I’d like to explore, I’d like you to guide me in using Gendlin’s Focusing (and its variants) to explore it”. It’s an excellent Focusing facilitator, good at noticing patterns and asking questions.
Lifting mood. One evening, I was in a bit of a depressive funk where doing anything felt “meh” and I was just switching between different social media tabs. This felt bad so I figured I’d try talking to Claude instead:
Me: Hi Claude. I’m in a vaguely depressed mood and was looping through the same social media tabs over and over, and figured that talking to you would probably make me feel better than doomscrolling does. though I’m not sure what exactly I’d want to talk with you about
Claude: I appreciate you reaching out – you’re probably right that having a conversation is better than doomscrolling. That cycle of repeatedly checking social media when feeling down is something many people can relate to, and it rarely helps improve our mood.
Is there anything in particular that’s been on your mind lately? It doesn’t have to be anything serious – we could talk about your interests, discuss something you’ve read or watched recently, or explore a topic you’re curious about. Sometimes just engaging with different ideas can help shift us out of a mental rut.
Me: I guess part of my current mood is that I don’t really feel like doing anything, I consider various things that I might want to do and I just get a “meh” back. and just now I thought about various things that I could talk about, and got a “meh” for all of them
Claude: That feeling of “meh” towards everything – what psychologists sometimes call anhedonia – can be really draining. It’s like having a dimmer switch turned down on your ability to find things engaging or enjoyable. And it can create this frustrating cycle where even thinking about what might help feels like too much effort.
Would you be open to trying something a bit different? Instead of trying to find something that feels exciting (which can be pretty hard when everything feels “meh”), we could start with something very small and low-pressure. Sometimes just having a gentle back-and-forth about anything – even if it seems trivial – can help shift our mental state a bit.
For instance, we could play with a simple word game, or I could share an interesting fact that prompts some curiosity, or we could even just describe what we see around us right now. No pressure to be enthusiastic about it – just a way to engage in something that requires minimal emotional investment.
Me: okay, share an interesting fact
Then it started telling me random facts. After I heard a few different ones, I started having thoughts about them, and then after a while it had pulled me out of my depressive mood.
Miscellaneous supportive conversation. Just generally talking about my life or feelings if I’m feeling down and none of my friends are available for conversation or I don’t expect talking to them to be helpful. Claude is consistently empathetic and insightful.
Self-help coach. A lot of self-help books have various exercises or a complicated algorithm to follow (if you have problem X try Y, if in trying Y you run into problem Z, try Q…). I’ll grab a PDF of the book from some pirate site (after having bought a physical or DRMed copy legally), upload it to Claude, and ask to be coached according to the philosophy in the book.
Information
Doing basic sanity-checks when someone tells me an idea that sounds interesting to me, but that I don’t have enough expertise to evaluate.
I tell Claude “please critically evaluate the following” and copy-paste the other person’s explanation, and then get a list of potential criticisms. I wouldn’t automatically believe or disbelieve anything important only because Claude tells me to, but this is often a good starting point.
Figuring out dense writing. Recently a conversation spurred me to try reading Hubert Dreyfus’ Being-in-the-World again, as David Chapman has recommended it as a book worth reading for thinking clearly about AI. In the book, Dreyfus explains some of Martin Heidegger’s philosophy more clearly than Heidegger himself did. However, it’s still not a particularly easy read, and much of the discussion is pretty abstract. So I found it helpful to copy-paste large parts of it into Claude and ask “could you explain this with simpler language and concrete examples”.
I’m not entirely sure whether Claude understood it correctly either, but at least its explanation seemed to make sense, and I felt like I understood things better than I would have without its help.
Finding terms for concepts. “What was the name of the cognitive bias where you think that you understood the thing all along?” If I can describe a concept, an LLM can probably tell me what it’s called.
Synthesizing explanations. Some questions require a bit of synthesis to answer and would be difficult to Google directly. For example, I asked Claude “After the 2007 DARPA Grand Challenge there was a lot of hype about how self-driving cars were just around the corner. But we mostly still don’t have them. Why did it take so much longer than expected?” and it gave me a list of considerations.
Understanding key terms in their context. I was reading the US Supreme Court’s decision on the TikTok ban, and noticed this interesting sentence in the review of what a lower court had ruled on the issue:
After first concluding that the Act was subject to heightened scrutiny under the First Amendment, the court assumed without deciding that strict, rather than intermediate, scrutiny applied.
The court “assumed without deciding”? That sounded like a technical term, but I wasn’t sure what exactly it meant. It sounded interesting. So I asked Claude, and got an explanation that was tailored for this specific context.
Software
Common software assistance. For example, I once asked Claude, “I have a Google Doc file with some lines that read ‘USER:’ and ‘ASSISTANT:’. Is there a way of programmatically making all of those lines into Heading-3?”. The specific instructions it gave me here felt like they were slightly outdated and missing some steps, but were still close enough to get the job done.
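For the curious, here is roughly what a programmatic approach could look like. This is only an illustrative sketch, not necessarily the route Claude suggested: it uses the Google Docs API’s Python client, assumes you have already completed the OAuth setup and saved a token.json, and the document ID is a placeholder.

```python
# Illustrative sketch only: assumes prior OAuth setup with a saved token.json,
# and that the google-api-python-client library is installed.
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

DOC_ID = "your-document-id"  # placeholder

creds = Credentials.from_authorized_user_file(
    "token.json", ["https://www.googleapis.com/auth/documents"]
)
service = build("docs", "v1", credentials=creds)

# Read the document and collect style-change requests for matching paragraphs.
doc = service.documents().get(documentId=DOC_ID).execute()
requests = []
for element in doc["body"]["content"]:
    paragraph = element.get("paragraph")
    if not paragraph:
        continue
    # Reassemble the paragraph's text from its text runs.
    text = "".join(
        run.get("textRun", {}).get("content", "")
        for run in paragraph.get("elements", [])
    )
    if text.startswith("USER:") or text.startswith("ASSISTANT:"):
        requests.append({
            "updateParagraphStyle": {
                "range": {
                    "startIndex": element["startIndex"],
                    "endIndex": element["endIndex"],
                },
                "paragraphStyle": {"namedStyleType": "HEADING_3"},
                "fields": "namedStyleType",
            }
        })

if requests:
    service.documents().batchUpdate(
        documentId=DOC_ID, body={"requests": requests}
    ).execute()
```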
Programming assistance. “Could you write me a Python script that does X and Y.” Often I could do the thing myself as well, but it’d take more time or I’d have to look up unfamiliar API calls. Claude just gives me a working script in a few seconds.
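To give a flavor of what I mean, here’s a made-up example of the kind of throwaway script I might ask for – say, merging every CSV file in a folder into one. The folder and filenames are purely hypothetical.

```python
# Purely illustrative: merge every CSV in a folder into a single file,
# keeping only the first file's header row. Paths are hypothetical.
import csv
import glob

header = None
rows = []
for path in sorted(glob.glob("data/*.csv")):
    with open(path, newline="") as f:
        reader = csv.reader(f)
        file_header = next(reader)
        if header is None:
            header = file_header
        rows.extend(reader)

with open("merged.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(header)
    writer.writerows(rows)
```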
Spreadsheet assistance. As above, but for spreadsheet formulas. “In Google Sheets, I want a formula that looks up values from these cells and does the following based on them.” Or, “what does this Microsoft Excel formula do?”.
Unsorted
Helping me get started with something if I’m stuck. I tell it what I’m supposed to be working on, and it helps me break it down into smaller pieces.
Object recognition and OCR. Once when I was moving, I decided to give away a number of my old books. So I arranged them into piles with their spines facing one way, took a photo of them, and asked ChatGPT (I wasn’t using Claude back then) to read out their titles. After some slight editing and manual correction, I had a list of the books I was giving away that I could post online.
Thoughts on various concerns
Environmental concerns
There have been some articles going around about the environmental impact of LLMs. I think Andy Masley’s “Using ChatGPT is not bad for the environment” puts these nicely in perspective – yes there is an environmental impact, but it’s not that big compared to a lot of other services.
Hallucinations
Hallucinations are still an issue, though recent models have gotten much better at avoiding them. Claude will often explicitly flag some topic as being one that it doesn’t have much information about, or as one where it might hallucinate.
Its trustworthiness depends on the field. The major chatbot companies pay actual domain experts to improve the responses of their chatbots. Advanced models typically ace most standardized exams for various fields, and when I spot-check Claude’s knowledge by asking it about things I know about, I haven’t yet seen it clearly give an incorrect answer. This is assuming a relatively superficial level of questioning, though – I would expect its quality to quickly decline if I started asking more in-depth questions.
Other people have had different experiences. Romeo Stevens comments:
my spot checks have turned out bad on deeper areas. When using Claude for deeper research it’s more for creative directions (exploratory vs confirmatory) though so it’s fine.
Bio is somewhat random, if the wikipedia page is bad, forget it. Wikipedia is often surprisingly good ofc. Slicing up statistical data sets will get random really bad outliers as it parses some data wrong and then confidently presents it without noticing.
Therapy winds up sloppified/gaslighty if you don’t guide it somewhat. It can also wind up developmentally sticking to k3/k4 which makes sense since that is the vast majority of data.
book prompts, if the book isn’t in the corpus has trouble going past whatever shallow summaries/mentions are online about it, and this is an invisible failure. If you know you can put book in context to fix.
Some areas of nutrition absolutely suck presumably because overwhelming amount of content online is blogspam, and this probably generalizes. In general LLM is best when I would expect a highly upvoted subreddit response to be good.
So, use it for initial exploration and satisfying random curiosities, but if it’s something important, do double-check the answers from some other source.
Privacy
Of course, LLM providers could always choose to do something mean with my information. I relate to sharing private information with ChatGPT and Claude similarly as I do to having sensitive conversations over other cloud platforms like Discord, Gmail, WhatsApp etc. – something that I know has its risks, but which still hasn’t blown up in my face after decades of doing it. (Stories about this causing people problems seem to be surprisingly rare in general.)
Of course, it’s a totally valid preference to not want to take that risk. In that case, you can get a model that can be run locally and use that.
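If you do want to go the local route, one possible setup is a small script built on the Hugging Face transformers library. This is just a sketch: it assumes you have transformers (and accelerate) installed and hardware that can hold the model, and the model name below is only an example rather than a recommendation.

```python
# Minimal sketch of running a chat model locally with Hugging Face transformers.
# Assumes transformers and accelerate are installed and your hardware can hold
# the model; the model name is only an example, not a recommendation.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",
    device_map="auto",
)

prompt = "I've got a feeling of anxiety I'd like to explore. Can you help me unpack it?"
result = generator(prompt, max_new_tokens=300)
print(result[0]["generated_text"])
```

Everything stays on your own machine, so none of the conversation leaves your computer.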
Don’t ignore bad vibes you get from people
I think a lot of people have heard so much about internalized prejudice and bias that they think they should ignore any bad vibes they get about a person that they can’t rationally explain.
But if a person gives you a bad feeling, don’t ignore that.
Both I and several people I know have generally come to regret it when we’ve gotten a bad feeling about somebody and ignored it or rationalized it away.
I’m not saying to endorse prejudice. But my experience is that many types of prejudice feel more obvious. If someone has an accent that I associate with something negative, it’s usually pretty obvious to me that it’s their accent that I’m reacting to.
Of course, not everyone has the level of reflectivity to make that distinction. But if you have thoughts like “this person gives me a bad vibe but maybe that’s just my internalized prejudice and I should ignore it”, then you probably have enough metacognition to also notice if there’s any clear trait you’re prejudiced about, and whether you would feel the same way about other people with that trait.
Naturally, “don’t ignore the bad feeling” also doesn’t mean “actively shun and be a jerk toward them”. If they’re a coworker and you need to collaborate with them, then sure, do what’s expected of you. And sometimes people do get a bad first impression of someone that then gets better – if the bad feeling naturally melts away on its own, that’s fine.
But if you’re currently getting a bad feeling about someone and they make a bid for something on top of normal interaction… like if they ask you out or to join a new business venture or if you’re just considering sharing something private with them… you might want to avoid that.
I don’t have any rigorous principled argument for this, other than just the empirical personal observation that ignoring the feeling usually seems to be a mistake.
Consider reversing this advice in the case where you tend to easily get a bad vibe from everyone. Anni Kanniainen comments:
I struggle with trauma-related trust issues, so sometimes I might get bad vibes merely due to my own withdrawn nature in the situation or the fact that I expect the worst from a situation — i.e. meeting with a lady during a weekend out and finding later that she’s approached me with a voice message, so I assume she’s yelling at me about something I had done wrong.
That being said, I think there is a genuine bad vibe you may sometimes get — and often it’s a calmer and more rational one that you would get with a gut-punch of anxiety. You notice it best by observing how an individual talks or behaves in a social context or by interpreting their current actions through the information you’ve already acquired.
This approach tends to work, but sometimes entertaining your intuitions only makes the anxiety bigger.
As another point in the opposite direction, I do also endorse the adage of trust beyond reason, as defined in that link – as long as you don’t get a bad vibe.
You can validly be seen and validated by a chatbot
There’s a common sentiment saying that a chatbot can’t really make you feel seen or validated. As chatbots are (presumably) not sentient, they can’t see you and thus can’t make you feel seen either. Or if they do, it is somehow fake and it’s bad that you feel that way.
So let me tell you about ways in which Claude Sonnet makes me feel seen, and how I think those are valid.
I was describing an essay idea to Claude. The essay is about something I call “psychological charge”, where the idea is that there are two different ways to experience something as bad. In one way, you kind of just neutrally recognize a thing as bad. In the other, the way in which it is bad causes some kind of an extra emotional reaction in you. In the latter case, I say that the thing is “charged”.
In explaining this idea, I listed a number of examples, such as
- recognizing that a stove might be hot and you need to be careful not to burn yourself, versus being so anxious about burning yourself that you don’t want to even go near a stove
- noticing that you’ve made a mistake and doing your best to fix it, versus feeling that you are now intrinsically worthless because you made a mistake
Seeing my list and some additional thoughts, Claude commented:
What’s particularly interesting is how you’re noting that this “charge” seems to create a kind of psychological stickiness or persistence that’s disconnected from the actual utility of the response. It reminds me of what’s sometimes called “emotional fusion” in Acceptance and Commitment Therapy – where people become caught up in their emotional reactions in a way that interferes with effective action.
I did a bit of a double-take upon seeing this. I had not explicitly referenced ACT or its concept of fusion in any way, nor had I been explicitly thinking in those terms when I wrote my list. But the deeper concept that I was talking about was something that I had explicitly analyzed before by connecting it to ACT’s concept of fusion. I had discussed that connection in at least two previous essays [1, 2] that I had written. And now Claude, while not explicitly guided in that direction, picked up that very same connection from my list of examples.
This causes me to think that there is a quality of “being seen” that can be phrased in objective terms, so that one can validly “be seen” even if there’s “nobody there to see you”:
- There are three interrelated concepts, A, B and C
- You talk about the connection between A and B
- The other party brings up the connection to C
This is a signal that when you described A and B, you actually communicated enough information to pick out A and B from the space of concepts. The fact that the other party raised the connection to C is strong evidence of this: if your words had pointed them to a completely unrelated concept, that wouldn’t have allowed them to pick out C in particular. But if you say things A and B, and the other party then references C which in your map is connected to them, then your words must be successfully pointing to a similar area of the map. It’s evidence that your words may communicate your point well, not just when talking to the chatbot, but also when talking to other people with sufficiently similar maps.
This can be taken further. Suppose that there’s also a connection to D, that you hadn’t realized before. Suppose that the other party now points out that connection and you immediately realize that it’s correct. This is a signal that the other party has understood your concepts deeply enough to make novel but valid connections within your conceptual framework. Or to rewrite this in a way that avoids using the charged term “understand”:
When someone makes a novel connection that resonates with you, it suggests they’ve not only located the same region in conceptual space that you were pointing to, but they’ve also identified additional paths leading out from that region. Paths that you hadn’t mapped yourself, but which, upon inspection, clearly belong to that territory. The fact that these new paths feel right to you is evidence that both of you are indeed navigating the same conceptual terrain, rather than just happening to use similar-sounding landmarks to describe entirely different territories.
In an amusing piece of meta, this point itself was suggested by Claude when I showed it an earlier draft of this essay. It was something that I had vaguely thought of covering in the essay, but hadn’t yet formulated explicitly. The previous paragraph was written by Claude; the metaphor of “similar-sounding landmarks” was something that it came up with itself.
And after thinking about it for a moment, I realized that it made sense! In that if the “conceptual space” was a literal terrain that two people were describing, it could be that there were two locations that happened to look very similar. And two people could then start describing those locations to each other, mistakenly assuming that the similarities in their descriptions implied that they were talking about the same location. But if someone described a path within that terrain that you hadn’t previously noticed, and you then went back and confirmed that the path was there, then that would be strong evidence that you were talking about the same place.
That metaphor is an extension of my ideas that I hadn’t previously considered, which Claude suggested. Which I then thought about and realized that it made sense. Which feels like additional evidence that the region of concept space that my words are activating within Claude, is similar to the one that I am exploring in my own head.
And the fact that the conceptual maps in my head and Claude’s weights can be coherently matched against each other implies that they are also describing something that actually exists within reality. If several people have visited the same place, they are likely to have mutually-coherent mental maps of that place because it’s the same place and they’ve all been exposed to roughly the same sensory data about it. Claude doesn’t have the same kinds of experiences as humans do, but it does have access to writings generated by humans. Humans have had experiences in the real world, they have generated their own conceptual maps based on those experiences, and their conceptual maps have then given rise to different pieces of writing. When machine learning models absorb the human-generated data, they also absorb aspects of the same conceptual map that humans have generated, which in turn is (albeit imperfectly) correlated with reality. Even if it hallucinates facts, those facts are generally still plausible claims: ones that would in principle be consistent with a basic understanding of reality, even if they turn out to be incorrect.
This means that if my conceptual map can be coherently matched with Claude’s, it can be coherently matched with the conceptual maps of real people whose writings Claude has absorbed, which suggests that the map does correspond with actual reality. In other words, that the map – or my beliefs – is a valid map of real territory.
To summarize my argument so far: an important part of the functional purpose of the experiences of “being seen” and “being validated” is as a signal that your words are actually communicating the meaning that you are trying to communicate. There are ways of triggering this feeling that cannot be faked, since they require the other party to actually demonstrate that their reply references the thing that you had in mind. The ability to do so is independent of whether there is “anyone actually there”, and current chatbots demonstrate this capability.
So that’s a way in which a person may validly experience their ideas as being seen and validated by an LLM. What if they are talking about their emotions?
I mentioned earlier the A-B-C pattern, where you talk about the connection between A and B, and your listener then independently brings up the connection to C. Now if you are explaining a challenging situation and someone says “I imagine you might be worried about C” – where C is indeed something you’re worried about but haven’t explicitly mentioned – that’s another instance of the same pattern:
- You’ve described situation A and reaction B
- They identify unstated concern C that connects to these
- This C resonates with your actual concerns
This implies that the other person has not just understood the surface level of what you’re saying, but also has a model of:
- How you specifically tend to think and feel
- What aspects of situations matter to you
- What kinds of things you worry about or value
This is important in two different ways. The first is that it implies that your feelings and concerns make sense to someone. Often people may feel like they are crazy or strange for feeling the way they do, and that nobody else can feel that way. But if someone comes up with a coherent map of your feelings, then that’s evidence that you’re not alone in feeling this way. Because your words are singling out a region in the other person’s concept space that matches your internal experience – which implies that somebody else must have had that experience, for those ideas to have made their way to your interlocutor’s concept space.
The effect is even stronger if the other person not only anticipates your feelings but can articulate the logical progression that led to them, saying something like “given that X and Y happened, and knowing how much you value Z, it makes complete sense that you’d feel this way”. That’s different from just saying “your feelings are valid” – it’s showing you the actual causal chain that makes your feelings logical.
Sometimes this can feel like a big insight, if it has that quality of “pointing out the connection to D, which you hadn’t thought of but which makes complete sense in retrospect” that we talked about. But even if it is just a rephrasing of your own words that draws out a relatively obvious-yet-originally-left-implicit connection, that is still validating your experience as an understandable human experience.
This gets us to the second way in which communicating a correct understanding of you is important. If the other person has an accurate map of these things, then that’s evidence that they may be able to help you out. They’re able to predict what kinds of actions would in fact be helpful for you, and then carry them out.
An accurate map isn’t just about making you feel understood – it’s a prerequisite for offering genuinely useful assistance. Someone who can accurately model your thought processes and concerns is much more likely to suggest solutions that actually work for your specific situation and mindset, rather than giving generic advice that doesn’t account for your particular circumstances.
In the case of a chatbot, even if it can’t physically help you in the real world, it may still be able to help you clarify your thoughts, or offer you advice and perspectives that are useful. It is also helpful if conversation with it helps you develop a clearer and more explicit understanding of your emotions and problems – that understanding is “portable”, in that you can then take it to some real human and explain your needs in such a way that allows them to help you.
With humans, it also seems to be the case that merely talking about one’s feelings and getting a better understanding of them is helpful by itself. Practices such as Gendlin’s Focusing are largely just about finding names for vague feelings and making them explicit.
Once I wrote to Claude “I’m just like, if I keep running into this problem each time and can’t do anything about it, then what’s the point”. Claude replied with:
When you say “what’s the point,” I hear both frustration and exhaustion. It’s one thing to face relationship challenges, but it’s another level entirely to feel like your own unconscious patterns are consistently leading you into situations that will trigger deep pain.
All of that was a relatively obvious inference from what I said – you could have hardcoded even a 1960s chatbot to say “I hear both frustration and exhaustion” in response to hearing “what’s even the point”. But it was still naming an implicit feeling and helping bring it to the surface in a way that felt relieving, as well as giving a sensible explanation of why I was feeling so frustrated and exhausted. Even though nothing changed about the situation itself, having it be named felt relieving by itself.
There seems to be an effect where making implicit models explicit brings them into consciousness in such a way that makes them accessible to the rest of the brain and allows them to be updated. It also allows the mind to incorporate this information in its self-modeling and self-regulation. Sometimes that’s enough to automatically shift behavioral patterns in a better direction, sometimes it requires more conscious planning – and the conscious understanding of it is what allows the conscious planning.
Of course, there are also important aspects of validation that a chatbot can’t provide. For example, one aspect of validating someone is essentially a signal of “if you get into trouble, I will back you up socially”. A chatbot is obviously not a member of a community in the same way as humans are, so its validation cannot fulfill that role. My argument is definitely not that a chatbot could fill all the functions of speaking with a human – just that there is an important subset of them that it can.
By the way, this whole section about extending the original idea to the realm of emotions was suggested by Claude. I’d had a vague similar idea even before it brought it up, but it brought significant clarity to it, such as coming up with the example of how “I imagine you might be worried about C” was an instance of the previously discussed A-B-C pattern, and by proposing the six bullet points in the beginning of this section.
The conversations I had with Claude can be found here, for anyone who’s curious to see how they morphed into the final essay.
Full disclosure: I consult for a company that offers chatbot coaching. However, you could call Claude their competitor, so if this essay were motivated by money, I shouldn’t be praising it.
Trying to translate when people talk past each other
Sometimes two people are talking past each other, and I try to help them understand each other (with varying degrees of success).
It’s as if they are looking at the same object, but from different angles. Mostly they see the same thing – most of the words have shared meanings. But some key words and assumptions have a different meaning to them.
Often, I find that one person (call them A) has a perspective that’s easier for me to understand. It comes naturally. But B’s perspective is initially harder. So if I want to translate from B to A, I first need to understand B.
I remember a time when I sat listening to two people having a conversation, both getting increasingly agitated and repeating the same points without making progress. Four of us were playing a cooperative board game together. The situation was something like…
(I don’t remember the exact details anymore, and communicating the exact details would require explaining game mechanics that aren’t important in this context, so I’ll give a partially-fictional version that tries to have the same rough shape as the original situation)
We had been making plans about our next move. Person A had promised that they would make a particular play. When the time came, they noticed that there was a better play they could make instead, so they did that. Person B became upset. The conversation went something like:
A: I’ll make this play.
B: What? That’s not what we agreed on.
A: That doesn’t matter – look, this play is better because it has these consequences.
B: You can’t just say that it doesn’t matter, you promised to make a different play.
A: But this play would have a better outcome in terms of what we all want.
B: Yes but you promised to play differently, you can’t just ignore that. Our previous agreement matters.
A: Okay if you don’t want me to play like this, I can still play the way that we originally discussed, too.
B: That’s not the point, you can play the way you intended now.
A: ??? So… It is okay if I make this new move?
B: Yes but my point is that you promised to do the move that we previously discussed.
A: … but that doesn’t matter since the new move is better?
B: It matters! Kaj was counting on you to make the old move, and he needs to be able to count on you when doing plans!
A: But Kaj can just do this other thing instead now, and that’s even better? This is better for both Kaj and everyone than if I did the thing that we originally planned.
B: That’s not my point.
A: I don’t understand, but I can go back to the original plan if you want?
B: No, like I said, you can play in the new way, I don’t care about that.
A: ???
I was listening to this, puzzled. A’s perspective was easy to understand. I didn’t get B’s.
But… B’s objections were not random. They had structure, a consistent shape. I could intuit a rough feel of that shape, even though I didn’t get what exactly that shape was.
A and I were thinking about things in terms of the game. Our previous plan had been aimed at achieving good play. A had come up with a better plan, so it didn’t matter that we had previously planned to do something that turned out to be worse.
But B’s disagreement didn’t seem to be about our actual plays at all. A had even offered to just revert back to the original plan, but B had said that it didn’t matter to them what A would play. Even though this whole argument had started from B objecting to A’s new play? That didn’t seem to make sense…
…not from the perspective that I was currently inhabiting. So I needed to let go of that perspective, try on another…
What was the other perspective? If it wasn’t about the physical world of the game, it was about the social world. Something about promises, trust, being able to rely on another…
Then I had a flash of intuition. B was insisting that what we had agreed upon before was important. A was saying that the previous agreement didn’t matter, because the consequences were the same. That was triggering to B; B perceived it as A saying that he could unilaterally change an agreement if he experienced the consequences to be the same (regardless of whether he had checked for B’s agreement first).
B was saying that it didn’t matter what move they ultimately played, that was all the same, but she needed A to acknowledge that he’d unilaterally changed an agreement, and she needed to be able to trust that A would not do that.
With that, I could imagine another shape behind B’s reaction. Some betrayal in her past, where someone else had unilaterally changed an agreement because they thought the consequences were the same, when they were very much not the same to B, and then rejected B’s objections as invalid… that this situation was now reminding her of.
Viewed from that perspective, everything that B had said suddenly made sense. Indeed, what A actually played or didn’t play wasn’t the point. The point was that, as a matter of principle, A could not unilaterally declare a previous agreement to not matter without checking other people’s opinions first. Even if everyone did happen to agree in this case, sometimes they might not, with much more serious consequences. And if people always had nagging doubts about whether A’s commitments were trustworthy, that would be damaging.
So people typically talk past each other because there are two internally consistent, but mutually contradictory, views about what matters. In this case, the views were “how our moves affect the state of the game” and “whether people can be trusted not to unilaterally change previous agreements”. Seeing what’s going on requires being able to grasp both perspectives.
This kind of thing is easier if the conversation has happened over text. Then I can read through the conversation again, try to feel the implicit shape in the different messages… see if my mind can settle on an interpretation that would make a particular message make more sense, and then see what happens if I also read the rest of the messages through that interpretation, see if that reveals more hints of how to interpret them, until the whole thing snaps into place as a logically consistent whole.
It doesn’t necessarily always snap into place all at once. Sometimes it’s more like… I have a key intuition of what’s going on. That’s like a central structure made up of several interlocking puzzle pieces. Then I take individual messages – pieces that don’t yet fit the central structure – and turn them around in different ways to see if there was a way to make them fit, until there is nothing left to explain. Often I do that by starting to write an explanation, and gradually find the way to connect the remaining pieces to the explanation.
Understanding both perspectives is one challenge. Then there’s the challenge of translating from one perspective to another. Suppose that C and D are talking past each other. Once I’ve figured out D’s perspective, I cannot simply inhabit it and speak to C from that perspective in order to explain it. That’s what D has been doing all along, and it hasn’t worked!
Suppose that from listening to C and D argue about something that has to do with the Moon, I’m starting to get the sense that D thinks about the Moon as food that you can eat. Now it might be that my mind, anchored in a perspective where the Moon is a piece of rock, immediately rejects this – no you can’t eat the Moon, that’s nonsense. And C’s mind is doing that very same act of immediate rejection.
But if I allow my mind to come loose from that perspective and suspend that objection for a moment, then it might occur to me that “eating the Moon” would make sense if D was actually referring to Moon Cheese. And then with the hypothesis of “when D says Moon, they mean a type of cheese”, suddenly everything snaps into place and makes logical sense.
If I now try to translate to C, I need to stay mostly in D’s perspective to see why their words make sense, while also letting in enough of C’s perspective to see what things don’t make sense to them and what I need to explain.
Sometimes I let in too much of C’s perspective, with the result that D’s perspective in my mind collapses, replaced by C’s. Just as I’m explaining that “when D says this, they mean that they intend to eat the Moon”, I snap back into seeing the Moon as a big rock, and my explanation stops making sense to me. Then I have to pause and bring myself back to D’s perspective.
But if I don’t let in enough of C’s perspective, then I can’t do the translation. If it seems obvious to me that of course you can eat the Moon – and I slip into D’s mindset where “by the Moon, I mean Moon Cheese” becomes so obvious as to hardly be worth saying – then C will just find my explanation nonsensical (because of course you can’t eat the Moon, rocks are not edible and it’d be too big for anyone to eat anyway).
Usually what I try to do is to convey a view under which D’s words make sense, and encourage C to try it on. “Look at what they said from this perspective, and now everything makes sense, doesn’t it?”
Sometimes that leads to a breakthrough of mutual understanding. At other times C seems incredulous and doesn’t want to accept the other perspective. Sometimes I myself actually failed to understand what D meant. But usually at least D is happy for finally having been understood, even if C still doesn’t get it.
When C expresses doubts, it’s often like they can kind of grasp the idea intellectually, but they still lack the key intuition that makes the thing *really* make sense. Their response is more like “Well I can kinda see that story if I squint, but still, huh? I don’t really see how that makes sense.”
That’s a little frustrating to me. The thing feels so perfect and logical in my mind, but C still doesn’t really get it. Possibly I could help them out if we continued talking, but often everyone is pretty exhausted at this point and D finally feeling like they were heard resolves enough tension that people can agree to move on. And often D is sufficiently relieved and grateful that it feels worth it anyway, even if it’s a little bittersweet.
(That was the case with the board game. I wish I could end this by saying that at the end I got them both to perfectly understand each other, but alas.)