Natural Hazard

There Is No Self-Deception Fairy

(this is the second post in my series, Towards a Unified Theory of Trauma and Self-Deception)

Self-deception is best understood as a response to what Val calls "The Hostile Telepath" problem, which can be described as follows:

  1. Cognition is leaky.
  2. People with different interests from yours can be trying to punish or reward you based on what they think you’re feeling and thinking.
  3. There are ways to train not leaking your internal state, but they are high skill and not widely talked about.
  4. The quickest path to meet the demands of a “hostile telepath” is to do everything you can to shove the offending thoughts and feelings out of mind, practice avoiding them, and spin up processes that work to keep them from entering your awareness, a.k.a “self-deception”.

In this post we're going to explore how this "leakage" works and what it implies about the costs that self-deception necessarily imposes. This is ultimately in service of arguing against the idea that self-deception can be robustly "meta-rational" or "second-order rational". For self-deception to be rational, as opposed to self-defeating, there would have to be a "self-deception fairy": some part of you that isn't deceived, that sees clearly, that is constantly monitoring the situation, and that is capable enough to make sure that "you" are only "deceived" when it makes the most sense to be "deceived". I don't think it's possible for competent self-deception fairies to exist, and that has important consequences for how one understands a lot of the dysfunction they see in the world.

I claim that the bulk of the world's current dysfunction is a result of self-destructive behavior.1 This is in contrast with ideas like "mistake theory" and "conflict theory", which both assume that people are rational self-interested actors but disagree on whether fundamental value differences actually exist (conflict theory) or whether everything that looks like a fundamental conflict is really just misunderstanding, coordination failure, or scarcity that could be fixed with better abundance tech (mistake theory).2 Freud, for contrast, also acknowledges self-destructive behavior (he calls it the "death drive" or "thanatos"), but he considers it a fundamental part of the human psyche, while I think there's huge variance in how much self-destructive tendency people have. I also think this mostly isn't a biological lottery: people can go through certain experiences ("trauma") that result in them engaging in increased amounts of self-destructive behavior.

The tricky thing about creating common knowledge about the prevalence and structure of self-destructive patterns is that, empirically, the people who seem to understand them most intuitively are the most self-destructive and the least inclined to create clear models of how it all works,3 while the people who are best at making clear, detailed models of how things work are the least self-destructive. The whole thing is so deeply alien to the latter group's fundamental ontology that their attempts to understand it typically end up making it secretly not self-destructive: it's all actually incomplete information, inadequate equilibria, conflicts of interest, etc.4

I think self-deception is one of the more salient entry points for starting to understand self-destructive behavior. If a house divided against itself cannot stand, it seems clear that sustained internal division would result in self-destructive behavior. Robin Hanson and Kevin Simler's book The Elephant in the Brain is probably the most in-depth attempt to explain self-deception from the maximally econ-brained/evo-psych/mistake-theory perspective, and it thoroughly frames self-deception as a tool of self-interest, leaving only the tiniest little release valve for self-destructive behavior to exist.5

I understand self-deception (and trauma, but this post just focuses on self-deception) to be the main medium and mechanism by which rational, legitimately self-interested and inclined-towards-good humans end up clumsily self-modifying into things that are less driven by self-preservation and self-interest and can end up in attractors where they're actively working towards bad things that they meaningfully understand to be bad.

This is something that one could mostly argue empirically, but I have a hunch that to be compelling to the econ/mistake-theory perspective it would be useful to provide a mostly analytic argument for why humans necessarily can't have competent self-deception fairies, which implies that self-deception (which I think everybody agrees is very common) is necessarily self-destructive.

Now, let's get on with it.

Mind reading and side-channel attacks

In any intentional act of communication, your "point", the thing you want to convey that caused you to try and communicate in the first place, is only a subset of all of the information you actually put out into the world. Your vocabulary conveys information about your education, how much and what you read, and what region of what country you grew up in. The prosody of your speech conveys a lot about your emotional state. Your timing and speech rhythms can convey information about how hard you're thinking, how fluent you are with some topic, how excited you are or how much tension you feel. Your body language can convey a very broad spectrum of things, fine-grained stuff like specific emotions you're feeling and higher level feelings like tension, comfort, how much you feel safe with or threatened by the people you're interacting with.6 Your clothing can convey things about your nationality, subculture, class, personal habits, etc. Even over much lower resolution channels like pure text, things like your punctuation and use of emojis are heavily indicative of your age. Even not saying anything can convey a lot of information: typing dots that keep disappearing and reappearing convey that the person is taking a while to compose a response and stopping to think about it.

This "everything else" that's communicated alongside our intentional comms can reveal both general information about ourselves and also timely information about our internal state: what we are thinking and feeling right now. We successfully read each other's minds constantly, and for whatever reason people seem to mostly reserve the term "mind reading" specifically for "reading minds in ways I don't think are possible". The real question is: in what situations, with what levels of invasiveness, and with what granularity can you reliably infer what kinds of things about people's internal state?

We've got invasive brain implants that are decently reliable at recreating text from both people's "inner speech" and from imagined handwriting. With less invasive brain scans we can in some situations roughly recreate visualizations of visual scenes people are focusing on in their mind's eye. In the less invasive and more coarse-grained realms, eye tracking reveals a huge wealth of information about what you're paying attention to, how you're paying attention, your state of physiological arousal, how surprised you are by your current situation, and even things like detecting when people "space out" while reading and bounce off a text. In the realm of body language, emotions are pretty reliably expressed across cultures, though you don't get for free the context about why this person is feeling what they're feeling or what that implies. Higher order body language, stuff about how tense or relaxed you feel, how nervous you are, is even more consistent than specific emotional content. And of course, the all time classic of mind reading is simply knowing someone so well that you can anticipate highly specific thoughts and feelings they're having in response to a shared situation. High level improv comedy groups intentionally cultivate this and call it "group mind", creating situations where they can pull high level plot maneuvers and complete each other's sentences without tedious explicit exposition because they already anticipated where each other was going with something.

I think most people would be quite surprised at how detailed "mind reading" can get in cooperative situations, but the thing that people are naturally more curious about is what are the limits of adversarial mind reading, or the dual question, how tightly can you control the "everything else" that you're constantly sending into the world alongside your intentional communication?

The sort of mind-reading that I described improvisers cultivating is pretty obviously confined to cooperative situations. The most extreme end of the spectrum, "I'm thinking of several random words and numbers, what are they?" basically doesn't seem possible. Body language is an interesting middle ground. Most people seem to learn some degree of masking and expressive control, though typically people aren't aware they're leaking micro-expressions. They can be suppressed but it takes training and intentionality. Actors develop high levels of expressive control, but not under adversarial conditions. Typically some of the hardest signals to fake are the basic "how safe or threatened do I feel in this situation" type signals. And in general, it's much easier to learn a "poker face", to try and shut down natural expressions, than it is to effectively simulate some other feeling. Depending on the type of adversarial situation, obviously masking might be just as disastrous as leaking a specific authentic thing you're feeling. In general, timing and fluency based leakage seem like they're some of the hardest things to suppress/simulate. It's hard to fake your dialect, your word choice, the timing and rhythm of how you speak.

The fact that we inevitably "leak" information about our internal state is not some quirk specific to humans, it's something you'd expect to be true of any information processing system embedded in the actual world. The world of cybersecurity has a concept of "side-channel attacks", where instead of relying on some bug in the code to get access to private information, you take advantage of the fact that even though computers are designed to only allow information to flow in specific strict patterns with specific strict access protocols, they exist in the physical world with entangled truths, which means you can never perfectly isolate the signals your system vibrates out into the world. BitWhisper lets air-gapped computers talk with no physical cable via temperature modulation and sensing, there are acoustic attacks where having a microphone in the room is enough to recover sensitive information, and cache timing attacks let you learn what information other processes on a computer have been accessing even if you're in an isolated sandbox.

It's well understood in security that no defense works universally against all side-channel attacks. Likewise, there's no universally applicable side-channel attack. The terrain is such that it's a perpetual cat and mouse game. The only guarantees you get are things like "executing a side-channel attack of type X would take Y resources", and then you decide whether an attacker with Y resources is inside or outside of the threat models you care about. Though the form that cyber-security side-channel attacks take is specific to the form of computer systems, the underlying reality that makes them possible applies to any system embedded in a causally entangled world, and so this should be applicable to all minds.
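To make the shape of these attacks concrete, here's a toy sketch of the classic timing side channel: a string comparison that returns at the first mismatch leaks how far it got, letting an attacker recover a secret one character at a time. Everything here is made up for illustration (the secret, and counting comparison steps as a stand-in for wall-clock time); it's the principle, not a real attack.

```python
def leaky_compare(secret, guess):
    """Compare strings, bailing at the first mismatch.
    Returns (match, steps); `steps` is the side channel."""
    steps = 0
    for s, g in zip(secret, guess):
        steps += 1
        if s != g:
            return False, steps
    steps += 1  # the final length check also takes a step
    return len(secret) == len(guess), steps

def recover(secret, alphabet="abcdefghijklmnopqrstuvwxyz"):
    """Recover the secret character by character: the guess that
    makes the comparison run longest extends the known prefix."""
    known = ""
    while True:
        scores = {c: leaky_compare(secret, known + c) for c in alphabet}
        best = max(alphabet, key=lambda c: scores[c][1])
        known += best
        if scores[best][0]:  # full match observed: done
            return known

print(recover("locket"))  # prints "locket" without ever reading the secret directly
```

Real attacks measure nanoseconds across many samples rather than counting steps, and constant-time comparison functions exist precisely to close this channel, which is the cat and mouse game: dampen one leak, and the next attack measures something else.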

Occlumancy and Embedded/Extended Cognition

In general, if you're trying to prevent "internal" information from becoming "external" your two moves are 1) make it so your leaky moments and leaky places aren't directly observable (privacy) and 2) when you can't control the observability of a given region, work to leak less into that region a.k.a reduce the mutual information between your internal state and that region. If you don't want someone to read your face you can hide your face or you can control your expressions. Sometimes it's easier to do one or the other.
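The "reduce the mutual information" framing can be made concrete. Below is a minimal sketch, with entirely made-up numbers, of the statistical coupling between an internal state and an observable channel, showing how effortful masking dampens that coupling without zeroing it out:

```python
from math import log2

def mutual_information(joint):
    """I(X;Y) in bits, from a dict {(x, y): probability}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0) + p
        py[y] = py.get(y, 0) + p
    return sum(p * log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# Internal state vs. observable expression, unmasked: strong coupling.
unmasked = {("calm", "neutral"): 0.45, ("calm", "tense"): 0.05,
            ("anxious", "neutral"): 0.05, ("anxious", "tense"): 0.45}

# With masking the coupling is dampened, not eliminated.
masked = {("calm", "neutral"): 0.30, ("calm", "tense"): 0.20,
          ("anxious", "neutral"): 0.25, ("anxious", "tense"): 0.25}

print(mutual_information(unmasked))  # ≈ 0.53 bits
print(mutual_information(masked))    # ≈ 0.007 bits
```

The masked channel still carries a small nonzero signal, which is the point: short of severing the channel entirely (privacy), you can only shrink the leak, not close it.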

Being able to construct zones of relative privacy is constrained by skill, resources, and autonomy. Prisoners lack the autonomy to create most normal sorts of privacy barriers we're used to having. A child whose parents don't allow them to lock their bedroom door is denied a typical kind of privacy. You might have autonomy but not enough resources for a certain type of privacy, like only being able to afford a row-house with thin walls in a bad neighborhood. You might have resources and autonomy but not the knowledge or skill to block high skill surveillance, like someone using lasers to eavesdrop on you through vibrations in your windows.

Decoupling your "externals" from your "internals" also has a skill and resource component, but the costs have a fairly different shape. Some of the entanglement between our inners and outers is "collateral" side-channel leakage: we'd get rid of it if we could, but we fundamentally can't, and the best we can do is dampen the leakage. But lots of the entanglement is something we intentionally cultivate, because we're using our bodies and our environment as part of our cognitive processes. Writing is an intentional process that allows us to extend our long term memory. People's digital and physical workspaces serve as extended working memory, offloading the structure of their medium term intents to their environment. Gesticulating helps thinking, and sometimes you need to roll around on the floor to solve novel mathematical problems.

This means that when you try to withdraw part of your cognition from an area because there's some hostile observer, you're not only paying the cost of the energy and skill required to succeed at masking, you're losing the benefits you got from your cognition being extended in the first place. A daily planner is equally useful for a guy running a small construction company and a drug lord running his criminal empire. If the drug lord forgoes the calendar so there's less of a paper trail, he either accepts the loss in organizational capacity or has to put in extra energy recreating the gains internally with a better memory.

The Costs of Discretion

Ultimately I'm trying to talk about the unavoidable costs of self-deception in individuals, but let's start the analysis with self-deception at the organizational level so we don't yet have to flesh out how exactly the "conscious" and "unconscious" work. Let's say we've got some org and it understands itself to be in some level of conflict with some outside entity: could be a rival company, a rival military, the country that the organization is inside, or anything really. As part of this conflict the org is trying to conceal some of its internal state, and this is non-trivial because the org has both private and public portions. The "public" portion might just be people who have to interface with outsiders and could leak info, or maybe part of the organization is subject to some kind of auditing or mandatory transparency. Whatever the nature of the differential privacy, our setting has the more private parts of the org concluding the best way to proceed is to keep some details of its operations and aims secret from the more public facing portions. We can ignore for now whether we think this org is the good guy or the bad guy in the conflict with outside and just focus on what the repercussions are of engaging in self-deception.

So our hypothetical org compartmentalizes, coming up with some kind of "need to know" basis by which it keeps secrets and restricts information flow. The immediate problem it faces is that decisions about who "needs" to know what are decisions about how autonomous and adaptable you want people to be. It's a trade-off between interacting with people imperatively vs declaratively. When you interact with people declaratively, you're trying to control them as little as possible; instead you describe clearly what your underlying aim is and what the constraints are, and let them figure out the best way forward given those criteria. This mode of organizing makes the most use of others' intelligence, expertise, local knowledge, etc, and sets you up to get wins you couldn't have figured out yourself. When you interact with people imperatively, you are doing the work of carving up your problem space, coming up with a specific solution and implementation details, and then giving someone a detailed list of commands that you want followed with as little deviation from the plan as possible. The imperative micro-managing approach is much better at keeping secrets because fewer people know the full context and there's less potential for leaks. But the cost is that to the degree you're micro-managing people, you are losing out on the benefits of their intelligence, and it's easy to end up in situations where you make a plan with big flaws that are only visible from the "front lines". People on the front lines could easily notice the flaws if they knew the purpose behind the commands you gave them, but since you kept them in the dark they just execute the orders and disaster ensues.
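Since the vocabulary is borrowed from programming anyway, the trade-off can be sketched directly. Below is a toy contrast, with made-up terrain and workers, between handing someone a fixed list of commands and handing them a goal plus the freedom to use local knowledge:

```python
# Hypothetical setup: the front-line worker can see that route B is
# flooded; the planner who issued the orders cannot.
terrain = {"A": "open", "B": "flooded", "C": "open"}

def imperative_worker(orders):
    """Execute a fixed plan step by step. Good for secrecy (the worker
    never learns the purpose), bad when the plan meets reality."""
    for route in orders:
        if terrain[route] == "flooded":
            return "disaster"  # follows the plan off a cliff
    return "delivered"

def declarative_worker(is_acceptable, candidates):
    """Given the aim and constraints, use local knowledge of the
    terrain to pick any route that works."""
    for route in candidates:
        if terrain[route] == "open" and is_acceptable(route):
            return "delivered"
    return "no route found"

print(imperative_worker(["A", "B"]))                         # disaster
print(declarative_worker(lambda r: True, ["A", "B", "C"]))   # delivered
```

The imperative worker leaks nothing about the mission but can't route around the planner's ignorance; the declarative worker adapts, at the cost of knowing the goal and therefore being able to leak it.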

This isn't to say that the maximally declarative approach is always better. Sometimes you're dealing with a domain that's predictable enough that you can do fine using people imperatively, and sometimes there are real pressing security concerns. But there is necessarily a trade-off between secrecy/information-control and the ability to make intelligent use of others. This tension was a major conflict at the beginning of the Manhattan Project. The military leadership initially wanted very strict compartmentalization among the physicists working on the bomb to prevent leaks, but was eventually convinced that not allowing the physicists to talk to each other would delay the project longer than was safe. This did genuinely make leaks more of a problem, and there was in fact a major leak. But it also seems very likely that the physicists wouldn't have figured it out in time to be relevant for the war if they'd been aggressively compartmentalized.7

As Above, So Below, except when it ain't

I expect this trade off between self-discretion and the ability to make use of the full capacities of the parts of a whole to be present for information processing systems of any scale or composition, whether groups of people, individual humans, or AI minds. However there are two important differences between orgs and human individuals that make it possible for a group to have a relatively effective "self-deception fairy" in a way that isn't possible for individuals.

Before going into these differences we need to poke into what exactly we mean by the "conscious mind" in people, something I was hoping to leave for the next post but we'll dip our toes in just enough to complete this argument about there being no competent self-deception fairy.


Two very different things are frequently conflated when people talk about "conscious thought" / "conscious awareness". We might call these "first order awareness/experience" and "higher order awareness/experience". There's having experiences, and having experiences about how you're having some experience. There's thinking, and thinking about thinking. There's being aware of something, and being aware of being aware of something.8 You're basically having experiences the whole time you are conscious-as-in-awake, though the amount that you're having higher order awareness about your experiences and the amount that you remember any given moment of experience varies over time. We have all sorts of phrases to get at the state of just having first order experiences without much or any higher order experience; zoomers have "no thoughts empty head", martial artists have "mind like water", Hungarian-American psychologists with hard to pronounce last names have "flow".

Because people hate clarity, they colloquially call the "mind like water" state "not thinking", and call experiencing higher order mental activity "thinking".9 This feels most forgivable when people are talking about sports: "thoughts" are things that happen in your "mind" and "doing" happens in your "muscles" (ignore the motor cortex and the cerebellum!) But this conflation of what I'm calling "higher order mental activity" with "the mind" and "thought" becomes less tenable when we think about a skilled mathematician being in a state of flow. Math is clearly "something you do in your head" and so it's "thinking", but the skilled mathematician effortlessly gliding through a deep conceptual landscape has more in common with an elite athlete who's "in the zone" (and not "thinking") than they do with a novice math student struggling to apply some concepts they just learned (who's also "thinking").

Books like The Inner Game of Tennis describe in detail the sorts of higher order mental activity that seem to be uniformly detrimental to performing smoothly in sports. The book points out, among other things, that if you're thinking to yourself "Focus! Focus!" you aren't in fact focusing because you're too busy thinking "Focus! Focus!" to yourself. And of course for many people a lot of their higher order mental activity consists of basically just doing anxiety. But higher order mental activity isn't inherently bad, the only necessary quality it has is that it's a map of a map, it's experiencing the re-encoded representation of some other experience of something. This means the act of recalling episodic memories is straightforwardly higher order mental activity. What differentiates a memory of an experience and the "idea" or "thought" of an experience seems to be about how much temporal and contextual information is encoded in the representation. Higher order mental activity is most useful when it's representing patterns in your experience in ways that allow you to reason about how things are and aren't working out and improve them.

Another thing we should be tracking to better understand self-deception is how memories get structured, stored and indexed. "Do you have a memory of X?" is not a binary. When memories are formed your attentional habits and cognitive schemas shape how richly or meagerly a new memory gets networked into your existing web of associations. A memory can have more or less raw sensory cues, more or less abstract symbolic metadata. How your memories get encoded and indexed shapes what kind of cues/triggers/queries can summon the memory. Questions like "have you ever been punched in the face?" are generally easier for people to answer than "have you ever held a purple pen in your left hand?" Sometimes when I'm trying to remember what happened yesterday it's the case that yesterday's happenings are already indexed and encoded into a narrative structure, other times when I ask "what happened yesterday?" I draw total blanks until I start at "what happened when I first woke up?" and walk step by step through sequential associations.

What you remember and how you remember it are governed by your habitual patterns of attention and sense of salience. You have some high effort volitional control over these patterns. At any moment, most degrees of freedom in this structure are fixed, leaving you intentional control along only one or two dimensions, but over time, you can reshape your overall attentional patterns through intentional choices about how you pay attention.

The reason all this matters for us is because we're talking about self-deception, which has some notion that there's Secret Truths/Thoughts/Feelings/Experiences that are kept "unconscious", which means they aren't or can't become "conscious" or enter "conscious awareness". Now that we've expanded the notion of consciousness to first and higher order and different levels of indexed memory, we can talk about all the very different ways something can be "blocked from conscious awareness":

  1. It literally never enters into first order conscious awareness, is never a part of your experience, and is never a candidate for having episodic memories formed, nor something you can intentionally direct any energy toward.
  2. It sometimes is a part of your first order experience, but only ever in a "highway hypnosis" kind of way, you never have higher order awareness of it, and as soon as it's out of your few second short term memory buffer it's not stored in any memory system.
  3. It's part of your experience, sometimes for a few clock ticks of consciousness you "re-experience it", but various attentional triggers prevent you from re-experiencing it that many times (though you never encode and index the experience of putting it out of your mind), and while experiencing the thing does get stored in long-term memory, it's not indexed with rich associations and so you'd need really precise cues to have it resurface.
  4. It's part of your experience, you have momentary introspective awareness about it being part of your experience, it's somewhat indexed but not indexed into your core associative memory structure or your "self-concept", and maybe you slightly encode the practice of putting it out of your mind but the memory of that experience is not indexed well, so you can sometimes recall the fact that you're avoiding something but it's not easy to cue.

You get the idea. "Not being consciously aware of something" is a multi-dimensional space with lots of different possible configurations that can be both qualitatively and quantitatively different from each other. This is very consequential for the question of whether people can have competent self-deception fairies.

With all this in mind, let's turn back to those differences I alluded to between individuals and groups of people.

The "conscious" mind does real work, and the "unconscious" is not "you but smaller"

The core analogy that The Elephant in the Brain uses to explain the relationship between your "conscious" and "unconscious" is that of the Press Secretary:

Press secretaries provide a buffer between the president and reporters probing for sensitive, potentially damaging information. Remember how knowledge can sometimes be dangerous? Press secretaries can use strategic ignorance to their advantage in ways that a president, who must typically remain informed, can’t. In particular, what press secretaries don’t know, they can’t accidentally betray to the press. “I do my best work,” says William Bailey, the fictional press secretary on TV’s The West Wing, “when I’m the least-informed person in the room.”

Press secretaries and public relations teams exist in the world because they’re incredibly useful to the organizations that employ them. They’re a natural response to the mixed-motive incentives that organizations face within their broader ecosystems. And the argument that Kurzban, Dennett, and others have made is that our brains respond to the same incentives by developing a module analogous to a president’s press secretary.

It's central to the book's argument that this isn't just a loose analogy, but a basically exact one. The authors endorse claims like "you don't have particularly privileged access to information about your decision making, motives, or desires, and are basically just making the same sorts of educated guesses that any attentive outsider of the same culture might make." I think the press secretary model is deeply misleading because it ignores differences between groups and individuals that change the entire calculus of self-deception.

In order for the strategy of keeping your public facing "mask" in the dark about what's actually going on to pan out well, two things have to be true: the mask can't have any other important responsibilities besides being the mask (otherwise its ignorance would make it worse at them and it would often unwittingly work at cross purposes with the "back office"), and the "back office" needs to be independently competent enough to pursue its aims without assistance or input from the "mask". To the degree that those conditions don't hold, you'll get all the expected problems and dysfunctions which are the costs of self-discretion.

Rephrasing all that in terms of an individual, if your "conscious mind" doesn't actually do shit except eat hot chip and lie, the holistic "you" doesn't really lose much from your "unconscious" mind cutting consciousness out of the equation. Tightly related, if your "unconscious mind" is basically "you but smaller", a.k.a has the full set of cognitive capacities you think of a human as having, just on a smaller scale, then even if the "conscious" mind was useful, cutting it out of the picture only loses out on bonus point team synergy and the unconscious can still basically get the job done.

These conditions are fairly easy to meet for a group of people, but don't remotely hold for an individual's mind! It's totally doable to create an organization that has a person or department that exists solely to be a public mask, where all the actual work flows have been designed to not route anything important through the mask. Likewise, in small groups it's possible for one person or a small group to be working behind the scenes to "keep the ship afloat".

As for an individual human mind, these conditions don't apply at all if we take the "conscious mind" to mean "anything that's part of your first order experience" (which includes all working, episodic, and semantic memory) and the "unconscious" mind to mean everything that literally can't/won't show up in your first order experience. If that's the split we're talking about, totally disconnecting your "conscious" and "unconscious" mind would leave you profoundly intellectually disabled such that you'd probably need a full time caretaker to not die. This means that if we're trying to think about the conscious and unconscious selectively compartmentalizing, you won't overall be profoundly disabled, but your self-deception fairy will be. The only kind of memory your self-deception fairy has at its disposal is muscle memory. If that doesn't sound like a big deal, perhaps because it's easy to think of crazy complex things humans can do with "just" muscle memory, you might be forgetting that learning complex tasks in the first place requires paying focused attention to what you're trying to learn. While The Inner Game of Tennis and others are correct in clarifying that this paying attention doesn't require higher order mental activity, doesn't require verbally repeating the steps to yourself again and again, and doesn't require going "Focus! You're trying to Do The Thing!", it absolutely involves paying direct attention in a way that loads your working memory and makes what you're paying attention to eligible to be part of the episodic and semantic memory formation pipeline. If your self-deception fairy has cut out that kind of consciousness from the Forbidden Thing, you basically can't intentionally develop further muscle memory for handling the Forbidden Thing, you can't plan or intentionally problem solve about the Forbidden Thing, leaving you stuck with whatever habits you had at the point that your self-deception fairy walled off consciousness.

This mode of self-deception is the only one where I think it's sensible to talk about your "conscious self" as truly having zero privileged information about what's happening in your mind with regards to the Forbidden Things. It's also the mode with the most profoundly disabled self-deception fairy, leaving very little capacity for a coherent secret agenda to be robustly pursued. And while it's a mode of self-deception that I think people can and do exhibit, I don't think that it's the prototypical mode and perhaps not even a common mode.

How competent might our self-deception fairy be if we looked at a different level of "lacking conscious awareness"? Let's say we admit first order experience and working memory, maybe even allow some low-key higher order awareness, but we're still walled off from forming semantic or episodic memories about the Forbidden Thing. What then? It certainly amounts to a more capable fairy, but still quite a disabled one. Now your experience is like the brain-damaged guy from Memento (or the famous patient H.M. the character was based on): your old long term memories work fine, your short term memories work fine, you can develop new muscle memory, but you can't form any new long term memories, leaving you stuck in an eternal present, at least with regard to the Forbidden Things. In this mode of self-deception you have more capacity for in-the-moment flexible problem solving about the Forbidden Things. You still can't sustain much in the way of coherent long term strategies, but in various moments you can attend to them, think about them, strategize about them, and make decisions that aren't directly about the Forbidden Things that you can remember, while forgetting the snippet of awareness about the Forbidden Thing that went into forming the plans.

If being the guy from Memento is the self-deception profile of "no new memories about the Forbidden Things", it should now be clear that we can have all sorts of memory profiles in between that one, "poorly indexed memories formed", and "richly indexed memories formed". These other profiles are what make it possible to have conversations with people where they tell you about what they're lying to themselves about, a type of conversation I get into with people every once in a while. If self-deception looked like "literally no form of conscious awareness about the Forbidden Things ever", that wouldn't be possible, but when the environment is right, lots of people are willing to be surprisingly open about what they're lying to themselves about.10 Some of the follow-up convos I've had with people about this seem to further indicate that their self-deception profile is "poorly indexing the Forbidden Things"; I'll allude broadly to a previous conversation and they won't remember it having happened, and only when I get very specific (what event, when, what the surrounding parts of the conversation were about, any weird memorable things that would have also been going on at the time) can they somewhat remember that we've talked about this before.

All these configurations seem to imply a trade-off: the more computational and memory resources that are actually available to interact with the Forbidden Things, the more of a "paper trail" is left in your mind and in your behavior. The paper trail can be reduced, but at the cost of bringing fewer cognitive capacities to bear on dealing with the Forbidden Things. I see no reason to think that people would have a "fixed" self-deception profile, and so the state of most people's minds is likely one with varying sets of Forbidden Things at varying levels of default Forbiddenness, and idiosyncratic ways that different moment-to-moment or day-to-day pressures can amp up or relax how blocked each one is from reflection and memory formation.

One thing we haven't talked about much is the process that governs "looking away". We've noted that depending on the self-deception profile, this process can't be that integrated with long term memory or symbolic reasoning. When describing the Memento-guy profile I presented it as not being particularly problematic or difficult to pay attention to the Forbidden Things, just that new memories won't be formed, but that's not quite right. More realistically, attending to the Forbidden Things probably feels uneasy, and is accompanied by a sense that you want to do something else. There's some process that concludes in a given moment "nope, we're not doing this" and slides your attention elsewhere. Loosely, this seems to be the realm of your attentional structure's muscle memory. The "decision" to self-deceive about something seems to roughly correspond to creating muscle memory in your attentional structure to avoid something. Even when not integrated with long-term memory, you can still learn various associations, like what kinds of situations are precursors to thoughts or conversations about the Forbidden Things, and you can learn to have your attention slide to other things before the Forbidden even enters your awareness. I think a lot of people experience this as a mounting psychic pressure felt in proportion to their proximity to the Forbidden. Strong levels of pressure yeet your attention elsewhere; lower levels of pressure "just" make you not inclined to linger.

Outro: so what?

Now, I must confess something. I promised this post would be a mostly analytic argument for why humans don't have a competent self-deception fairy. The last sections scoped the argument down to "here are the plausible tiers of capabilities different self-deception fairies could actually have". That part I'm quite confident about, though I admit that arguing rigorously that these capabilities don't count as "competent" would involve pinning down what standards of competency we're talking about. I'm going to leave the argument where it is and hope that it's clear enough to interested readers what kinds of problems can arise from this reduced capability set.

A recap and summary of why any of this matters:

When you blind yourself to some part of reality, you are installing a subprocess of muscle memory in your attentional structures that avoids attending to said part of reality in ways that would leave much of a memory trace. Since planning about how to do this task better would leave a thicker memory trace, this subprocess will necessarily be less capable than the holistic you at both the task of actually avoiding the Forbidden Things and the task of responding to whatever pressures led you to self-deceive in the first place. Self-deception always delegates to a less competent agent. Importantly, self-deception can't really be a principled "one time thing", and the less competent agent that you've delegated the task of monitoring the situation to will sometimes decide to increase the intensity of the self-blinding, putting an even less competent subprocess in charge of steering. In this way self-deception forms a natural slippery slope towards further self-deception, each layer making it harder to unravel the last and making it more likely you'll add another layer. Reversing this ratchet requires the kind of careful attention and intention that gets harder and harder to maintain the more layers of self-deception you have.

This ratcheting mechanism is why I see self-deception, especially widespread self-deception, as a much more destabilizing force than many other forces. In the world where self-deception is robustly second order rational and we have competent self-deception fairies that allow us to lie convincingly in public while all pursuing our secret agendas, one might be pissed at the duplicity of it all, but there's still a comfort and stability born from the fact that everyone has interests they are earnestly trying to pursue. That earnestness is kept hidden, but it's real, and it injects coherence and rationality into people's pursuits. Everyone is ultimately reasonable. If you can figure out people's real interests and have credible information on how they're unwittingly working towards their own destruction, they'll listen to you. In this world, if you see everyone around you lying to themselves about lots of important things, there's no particular reason to expect the ship is sinking; it's just a different ship than you originally thought you were on. Buck up and learn to play the game, kiddo.

But if you live in our world, one where self-deception involves throwing away problem solving capacity and creates a ratcheting gradient toward further self-deception and further throwing away of problem solving ability, there is no such stability. Self-deception is self-destructive in the way that putting a blindfold on while driving is self-destructive. Even if you have good spatial sense and a fairly straight road, the situation is going to change at some point and you're setting yourself up for disaster. It's self-destructive in a negative sense: it doesn't directly make you choose or prefer actions that work against your own interests, it just reduces your overall ability to correct your mistakes and pursue your goals in a changing environment. This negative form of self-destruction is by no means the only type of self-destructive pattern people can develop. There are many more positive forms, where people actively pursue outcomes they understand to be bad by their own lights, but that's a topic for another post.11

The world where everyone is a rational self-interested agent, just with hidden agendas, is incredibly different from the world where people can slide into self-deceptive knots that make them more self-destructive. When groups of people collectively lie to themselves, you can get all sorts of fucked up situations that very few people legitimately want, and yet the group will aggressively fight anyone who points out what's going on, because they've committed themselves to ignoring the very parts of reality they'd need to track in order to fix the problem. And as I hope to describe more in later posts, these layered self-deceptive states act as the soil in which more perverse kinds of self-destructive habits are cultivated.

In conclusion, ⟡ Stay Strapped ⟡

  1. My recent post You Are Here: Historical Context for Unprecedented Times is a rough sketch of how the United States went from having very functional institutions and people to having very self-destructive institutions and people.
  2. Oddly, while these terms gesture at two real clusters in people's thinking, the discourse around them describes those clusters badly. I've written a few different threads about how mistake theorists describe what they're doing amongst themselves accurately, but they implicitly understand themselves to be surrounded by an anti-epistemic/scapegoating driven culture and can't use their discursive truth seeking tools to talk about this situation directly. Meanwhile, the conflict theorists are the ones doing the anti-epistemic scapegoating thing, which is a very different beast from being a rational agent that just happens to have fundamental value differences.
  3. A lot of post-modern writing is an attempt to describe various types of self-destructive complexes that develop in people under conditions of domination, and some of it seems to get a lot of details right, but it all seems to be written from a perspective that treats the self-destructive traumatized complex as "natural" and all there is.
  4. I'm fairly sympathetic to this viewpoint because incoherence and self-destruction are in fact really weird things that analytic tools don't even attempt to understand. The principle of explosion in logic encodes the idea that "if you reach a contradiction there's no point in saying anything else about the matter." Similarly, if you're used to analyzing things in terms of long run dynamics and equilibrium you might think "who cares about self-destructive behavior? Eventually it will self-destruct, so it's irrelevant in the grand scheme of things". Unfortunately incoherence and self-destruction can take you and a lot of things you care about down with them on their way to the long-run grave.
  5. The book acknowledges that martyrdom and priestly celibacy seem like obvious self-destructive behavior and characterizes both as "blindly hill-climbing status gradients". More recently Hanson's thinking on "cultural drift" indicates he thinks that culture is becoming self-destructive because 1) no one is capable of making good decisions about what's a better and worse culture, 2) the only optimization pressure on culture is selection pressures, 3) global elite mono-culture has removed selection pressures, and 4) people go along with dysfunctional culture because people always blindly go along with whatever is high status.
  6. A lot of people get weird about this point and end up saying dumb things like "80% of communication is non-verbal" or "what you say doesn't really matter, all that matters is the vibe you project while saying it". I think what's actually going on here is that stuff like body language more credibly and more directly conveys high level parameters of your state (tension, comfort, friendliness, antagonism, earnestness, irony), which clearly aren't the bulk of the structure in one's speech, but if those params are flipped it often can totally invert the implication of someone's words. I think a lot of people get weird about this because lots of people do in fact live in a world where they don't really care about what people have to say except so far as it informs "do they like me?" or "are they gonna be chill?", which is related to the fact that a lot of people aren't actually trying to say much with their words besides "I like you" and "I'm chill". Body language is totally adequate for such communication, though not remotely up to the task of building a bridge or putting in a specific food order at a fast food joint.
  7. This is a great article about how this conflict played out in the Manhattan project. Interestingly, the whole industrial side of manufacturing the bomb once the physics was worked out was incredibly compartmentalized. This was achievable because the manufacturing was a more thoroughly solved problem than the atomic physics, making it relatively straightforward to top-down decompose the manufacturing problem into siloable subtasks.
  8. I've seen some people assert that higher order awareness and first order awareness exist simultaneously in the same moment, but I think they're necessarily sequential: in one "clock tick" of consciousness you have experience X, and only in a later clock tick can you have the experience "X". This sequential model seems to better fit the phenomenological reports of various meditators. Engineering-wise, it seems much more straightforward to create a system that reuses a single "experiencing" slot to allow the seemingly unlimited recursive depth of awareness (you can keep being aware of yourself being aware of yourself being aware, etc etc). And the way that higher order thoughts can seem to mess up and derail your object level cognition, as is noted in sports, seems to indicate that higher order awareness is fighting first order awareness for resources, which is more readily explained by the sequential model.
  9. For fun, here's a thread of a dozen or so distinct mental processes that people all commonly call "thinking".
  10. I used to think the presence or absence of a "judgemental vibe" was what decided whether someone would open up to me about this kind of thing, but one of Ben Hoffman's recent posts now has me thinking the important factor is whether your vibe conveys that you might ever try to hold them accountable for having expressed this. "Judgemental" situations are just a strict subset of the situations where someone is indicating they might bring up this conversation in the future and not pretend like it didn't happen.
  11. Some pointers: Motive Ambiguity, Preference Inversion, Civil Law and Political Drama, and On Commitments to Anti-Normativity.