Towards a Unified Theory of Self-Deception and Trauma
To err is human; to error correct, divine.
Lucky for us, evolution has spent millions of years forging our minds in the fire of competition so that we may take part in the divine. In a world with predators, competition, and environments that change, any creature that survives is going to need to be able to learn from its failures, to notice when something isn't going as expected, to find new approaches when old ones fail. This ability to improve and learn from mistakes is the cognitive heritage of all animals (to more and less impressive degrees), with humans being the apex predators of Figuring Shit Out and the only ones that have figured out how to bootstrap themselves into being recursively self-improving general intelligences.
It ain't always easy though. While, in principle, every human is a fully general intelligence, this doesn't guarantee that all you do is win win win. Any given person can simply be inadequate relative to the problems and constraints they happen to be facing. The universe can throw you totally unfair curve balls that are beyond your ken to figure out in the time you have.
Another hurdle is that even already having in your mind all the individual pieces to some puzzle doesn't guarantee success. You don't already know all the logical implications of everything you already know you know. Cognition, connecting the dots, integrating the contents of your mind, it all takes thermodynamic work, and even though that work doesn't have to be conscious and doesn't have to feel like "work", it still is a process that has to take place in time and you only have so much time.
And of course, there's cognitive biases in the technical sense that Kahneman and Tversky pioneered. Though the colloquial sense of bias basically means "you've committed to some side in a political fight", K & T define a bias to be something like "a systematic error in human judgment caused by reliance on mental shortcuts rather than normative reasoning", an idea that's more akin to how a rifle or a thermometer might be biased. Luckily, to the degree that any given "quirk" in your thinking is aptly described as a bias in the technical sense, you can simply put in the practice of correcting for it. Even if a bias is "evolutionarily hardwired", if your rifle predictably shoots an inch to the left at 100 yards, just aim an inch to the right at 100 yards, 2 inches at 200 yards, etc. Someone has to do the work of finding the biases, and you have to do the work of practicing accounting for them, but if your problems are shaped like biases, the prescription is simply:
- Study the blade
- Profit
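To make the "aim an inch to the right" move concrete, here's a minimal sketch (in Python, with made-up numbers) of what correcting for a known, systematic bias looks like: once the error has been measured, the correction itself is mechanical.

```python
# Illustrative only: the rifle example above, assuming a measured drift of
# 1 inch to the left per 100 yards. A systematic error, once characterized,
# can be cancelled by a fixed rule rather than fresh judgment each time.

def corrected_aim_offset_inches(distance_yards: float,
                                drift_inches_per_100yd: float = 1.0) -> float:
    """Inches to aim right of the target to cancel a leftward drift that scales with distance."""
    return drift_inches_per_100yd * (distance_yards / 100.0)

for distance in (100, 200, 350):
    print(f"At {distance} yards, aim {corrected_aim_offset_inches(distance):.1f} inches right.")
```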
As soon as you notice there even is a Blade that can be Studied, a question naturally presents itself: how far can I push this? Can I personally go FOOM? This question has preoccupied a lot of scenes across the 20th and 21st centuries. The rationalists, General Semantics, the rationalists before the rationalists, the critical rationalists, these and other groups1 have all aimed, to varying degrees, to make an engineering project out of augmenting and accelerating the pre-existing recursively self-improving capacity of humans.
Interestingly, it seems like everyone who goes down this path eventually notices that there's an "engineering layer" of rationality and a "here be dragons" layer. Kahneman notes that people frequently fail to account for their biases even when they've been provided information about their structure and have shown interest in correcting for them. Rationalists talk about flinching away from the truth and Ugh Fields, situations where the truth isn't simply missed, but avoided. Akrasia is the rationalist term for generalized procrastination, the experience of seeming to feel unified in knowing what you want to do and what makes sense, but empirically not being unified given that you're acting against your "better judgement". Eliezer Yudkowsky talks about dark-side epistemology, observing there's memeplexes that try to justify not just being wrong about specific things, but justify the idea of being delusional in general. It's not uncommon for people to report consistently having low key amnesia and forgetting what they were talking about whenever a conversation turns towards a thorny emotional bug they have. A unifying theme here is that some problems seem to have an agency of their own that lets them morph in response to the specific strategies you use to resolve them.
It's this quality of agency that leads people to talk about these problems as inner conflicts, between your cortex and your limbic system, or your higher self and your lower self, or your conscious mind and your unconscious mind, or your ego, super-ego and id, or conflicts between a cacophony of subagents like in Internal Family Systems.
This "here be dragons" layer, the land of recursive problems, agentic problems, problems that fight back, inner division; these are in some sense the central problem of rationality. This isn't to say that the "engineering layer" is easy, but I am saying there's a simplicity and a clarity to it. It seems fairly clear to me that most persistent dysfunction within people and between people comes from the realm of recursive problems.
"Trauma" and "Self-deception" are two sides of the same coin
Rationalist traditions have typically approached these recursive problems with concepts like akrasia, self-deception, and motivated cognition. Therapeutic traditions have typically approached them with concepts like trauma, defense mechanisms, and avoidance.
Concepts in the trauma cluster seem to describe what it's like for some part of your learning machinery to be "broken". You just get scared or freaked out by certain things. You have triggers that, when pushed, on the low-key end make you more cagey, anxious, and worse at cognitive tasks, and on the extreme end look like PTSD flashbacks where you lose track of where you are in space and time and feel like you're back in an old terrifying situation. Often it's possible to notice and learn about your triggers, and it's not always the case that there's a meta-level blocker preventing you from being aware of the issues. What typically makes these problems difficult or impossible to "simply" account for the way you might account for a bias is that when triggered you get physiologically activated and actually freaked out in ways that distort your overall cognition, making it much harder than normal to consistently apply any corrective.
Concepts in the self-deception cluster seem more to describe the experience of having something evade your attention. Nothing feels immediately broken, because you can't look right at the error that's maintaining itself. Things feel roughly normal, you just find yourself “mysteriously” failing again and again, and “mysteriously” ending up in bad situations and not doing what you want. There are still tell-tale signs, though; the mystery is never total, but attending to those signs often involves working through a similar kind of attentional resistance.
Moving beyond the differences in how different internal knots are experienced, I believe that the underlying mechanisms of both clusters have more in common than most people acknowledge. It's common for people doing inner work on self-deception to find shit that intensely triggers them in a way that's overt and obvious even to them. Plenty of people find that when they do certain inner work on their anxieties, aversions, and triggers, instead of landing on "well, guess my brain is broken" they find themselves experiencing the tell-tale amnesia and attention-sliding that's characteristic of the self-deception cluster. These sorts of problems are typically entangled with each other, and I expect that similar underlying mechanisms enable both sets of phenomena, and that understanding either deeply will greatly inform your understanding of the other.
So what's the state of human knowledge about these matters?
The most viral memes about each seem super destructive. The most popular conception of trauma seems to be “everything you don’t like about yourself is trauma you got from your parents sneezing at you funny, and That’s Okay and you should never try and hold yourself to any standards ever”2 and the most popular conception of self-deception seems to be “want to Win Big with One Weird Trick? Lie to Yourself! Be Delusional! Truth is for sad people and losers”. Neither of these should be taken remotely seriously as an explanation for how things work, but it’s worth being aware of the garbage that’s out there. What about our best theories?
Our best theories on the trauma side seem to be somewhat decent, but people generally don't put in the effort to make clear what level of abstraction they're working at, which can make it difficult to port knowledge between traditions. The bigger problem is that various social pressures seem to result in people distorting and conflating different sets of causes and symptoms, as described here. So broadly I think the trauma literature has most of the individual pieces of a complete theory, but it’s not clearly and cleanly integrated anywhere.
Our best theories of self-deception are in a worse state. I'd split theories into two buckets: "ecological" theories, which describe the why, what situations self-deception is a response to, what drives it to exist in the first place, and "mechanistic" theories, which explain what exactly is going on when "you" try to "hide something" from "yourself". We’ve got some okay partial ecological theories, the Trivers->Hanson lineage being one, and therapeutic cultures somewhat generalize the dynamics, though both seem to have strange blindspots. At the ecological level, synthesis and clarity are mostly what’s needed, but when it comes to mechanistic theories there just don’t seem to be any notable works going at the question.
I've been working on better theories of self-deception for several years now. Over time this work has become entangled with how I understand the trauma cluster. I'm hoping to write out my thoughts on this in the coming months. The rest of this post is me sketching the sorts of things that I think existing theories get wrong, and the things that any accurate theory will have to account for. I'll mostly be focusing on the self-deception cluster for the rest of this post because I find it easier to work with. Next in this series I'm going to dig into some of the cognitive "primitives" that seem necessary to support self-deception, then pivot over to how that helps explain things in the trauma cluster.
1. Our minds are lean, mean, figuring shit out machines
A lot of colloquial theories of human psychology don't see self-deception / delusion as a thing needing to be explained. They just assert that since it seems to be everywhere, it must be the foundation of the human mind, and that it's a miracle whenever humans do anything remotely like tracking reality. Most post-modern theory asserts a more elaborate version of this stance. These theories are all incapable of explaining how anyone could possibly be good at Starcraft, build a skyscraper, succeed at persistence hunting, or navigate the Polynesian triangle without a compass. The slightly less insane versions of these theories adopt a stance of "sure, people can reason when the stakes are low, but with even the slightest stakes, internal and external honesty go out the door and it all breaks down into fractal deception and domination".
This does seem to describe the lived experience of plenty of people, but it also just isn't really attempting to explain how things work. Luckily, plenty of other theories do engage with the fact that our minds have deeply epistemic capacities. The two ways I mainly see people try to reconcile our epistemic nature with our self-deceptive nature are 1) internal conflict between subsystems, and 2) actually it's all secretly meta-rational.
The first path looks like positing different structures in your mind that either have different epistemic capabilities or different drives and values. This can look like the idea that you have a "lizard brain" and a "human cortex" and the lizard brain is both a) fucking stupid and b) just wants to smash, and the cortex can't perfectly keep it under control. Freud's id, ego, and super-ego doesn't really talk about different epistemic capabilities, but talks about different drives and priorities and motivations and how the mind is a war between these forces. Many religions have a notion that your “impure” thoughts and impulses can be coming from The Devil or other miscellaneous evil spirits.
I do think that internal conflict is a good way to understand self-deception, but most of these theories seem to posit that the conflict is at a much lower level, more foundational and irreconcilable, than I think is plausible. It really doesn't look like your limbic system is "at war" with your cortex. It's possible to just not really have a super-ego in the Freudian sense, or, to the degree that you define it to be something intrinsic to minds, it just doesn't have to be in conflict with your "ego" and "id". I think that when people experience inner conflict, the subsystems in conflict are ones that have developed and been cultivated across all your experiences and choices. In this way, I think Internal Family Systems gets closer to how it actually works, though the therapeutic tradition stays silent on what sorts of cognitive capacities these "subagents" can have or how they might be implemented.3
The other path to explain self-deception in tandem with our undeniable epistemic abilities is "it's all secretly meta-rational". Gigerenzer is the simple version of this, claiming that things like "cognitive biases" aren't askew from normative reasoning, they're heuristics that are actually quite adaptive given resource constraints. Robin Hanson and Kevin Simler are a stronger version of this in their book "The Elephant in the Brain", claiming that engaging in self-deception provides a strategic benefit, specifically that it makes it easier to skirt pro-social norms and gets you the gains of being a free-rider without being recognized and punished as one.4 I think Hanson and Simler both consider this somewhat lamentable, but they see individual acts of self-deception as locally rational in a game theoretic sense. It's not, in a strong sense, a division within oneself but instead a further expression of conflicts between people playing out.
This brings me to my second big consideration.
2. Everyone is not playing 5d chess, and there can be no “self-deception fairy” that ensures that blinding yourself is reliably meta-rational
The "self-deception fairy" is a derogatory term I've made up for an imagined part of your mind that is vigilantly tracking the underlying reality of your situation even while you blind yourself. It’s tracking things in great enough detail to intelligently discern when it's optimal for you to actually be allowed to perceive some aspect of your situation vs when it's optimal for you to avert your awareness and self-deceive. It's your guardian angel homunculus screening out the evils of the world, screening out the things that would be disadvantageous for you to know, and most importantly, it's keeping track of when the world shifts in ways that means it's suddenly advantageous to for you to know something it had previously been hiding from you.
As EY points out, the main problem with the idea that self-deception is rational is that in order to be justifiably confident in concluding that it's rational, you'd have to actually understand the truths you're trying to avoid and extrapolate their implications. Observationally, people's self-deception strategies seem to involve them never getting the information they'd need to figure out whether self-deception was actually rational. But if, perhaps, there were a self-deception fairy, something just as smart if not smarter than "you" (and for some reason it's not dangerous for it to know the forbidden truths, it's only dangerous/painful for "you"), it could be in charge of paying attention to the world, perceiving the To Be Avoided Truths, reasoning and extrapolating about them, and carefully controlling when "you" are and aren't allowed to notice certain things, in a way schemed to maximally benefit you. If such a fairy existed, then you could certainly talk about all self-deception being meta-rational.
The Elephant in the Brain argues a medium strength version of this: "it’s possible for our brains to maintain a relatively accurate set of beliefs in systems tasked with evaluating potential actions, while keeping those accurate beliefs hidden from the systems (like consciousness) involved in managing social impressions." Therapeutic traditions argue a version of this as well. They talk about how your fear/trigger/aversions have information that "you" don't that informs how they work, and that they make a lot of sense given that hidden information. In Hanson's telling, your self-deception fairy is a selfish free-loader and in the therapeutic telling your self-deception fairy is trying to protect you from a hostile environment, but they both posit that it's working strategically for your benefit,5 though with plenty of ambiguity about how competent it is at its job.6
What both of these explanations ignore is that the degree to which you are divided is the degree to which you are conquered. All forms of self-deception necessarily make you dumber. The interactions between your "conscious" mind and your "unconscious" mind, the interactions between your "subagents", these interactions do real information processing work. None of your parts are recursively self-improving general intelligences on their own, that's a property of your whole mind working together. This means that every time you create more internal siloing, another knot of self-deception, you're putting something that is strictly less intelligent than you in charge of deciding "when it's safe to come out", and when you do this again and again and again, you get the severely blind leading the quite blind leading the kinda blind and lose any semblance of rational confidence that things are going to work out for you. While acts of self-deception can be meaningfully self-protective, the “protected” state you end up in is like a spore, able to last in a more hostile environment but with reduced ability to exert any agency over the process of getting to a less hostile environment. The more spore-like you become, the more it’s just up to the wind if you ever end up anywhere safe.
To be fair to the therapeutic traditions, I can somewhat see why they insist on the deep-wisdom/meta-rationality of your self-deception. It seems to be a strategic framing of the concept, there to deal with certain pathologies in the people they're trying to help. Self-loathing is a very common high-level pattern in many people's minds that helps maintain various knots, and when someone who runs on self-loathing learns that it's possible that they can be "biased" or "self-deceiving" or have "motivated cognition", this information gets integrated as "awesome! Here are more reasons to hate myself, and more things in myself that, when I notice them, I can go wow I'm such a piece of shit, I can't believe I was doing that." So the traditions emphasize how your parts are trying to protect you from something and that you should learn about what that is before you try to smash through all your knots.
It's unclear to me if it's actually worth the clarity tradeoff, but it’s at least an understandable choice. Still, if you're trying to broadly understand self-deception, you need to integrate the fact that there really just can't be a self-deception fairy, and that even though people tend to compartmentalize and self-deceive in hostile social environments, and it's very sympathetic and understandable that this happens, it’s still the case that self-blinding creates a gradient towards further blindness.
3. The best theories all point to self-deception being other-centered, but it’s not always obvious where these “others” are
Psychoanalytic theories and most folk theories of self-deception posit some flavor of "You Can't Handle the Truth!" as the underlying motive force and logic. This does seem to track with lots of people's experience; I’ve heard many people describe how, when they occasionally catch a glimpse of something they're hiding from themselves, it seems so horrible and awful that they're compelled to look away lest it destroy them. While I believe that people's experiences are shaped this way, this framing can't really explain how that shape would ever come about. People very clearly exhibit "egos" and "self-concepts" that seem to basically be easily updatable beliefs responsive to experience, and they also clearly exhibit “egos” and “self-concepts” that seem to serve no function except to be an avenue through which they can feel threatened. Other animals have some semblance of self-concept (though much less rich than ours), but nothing like an "ego". This isn’t to say I think it’s impossible for humans to have “egos”, just that the type of “threatenable self-concept” many people posit doesn’t seem like it could be a psychological necessity / a fundamental building block of human minds in general.
Our best theories on the "why" of self-deception are down stream of Robert Trivers who claims "we deceive ourselves to better deceive others". The Elephant in the Brain is part of this lineage, and adds a specific conception of what sorts of conflicts people are in such that they'd be trying to deceive each other (cheating on pro-social norms). The Hostile Telepath Problem is a great post that takes the same underlying premise but gives a wider conception to the sorts of interpersonal conflicts you could be in that might result in self-deception. In general, you can hold off on proposing any specifics about potential conflicts and just observe:
- Cognition is leaky.
- People with different interests from yours can be trying to punish you based on what they think you're feeling and thinking.
- There are ways to train yourself not to leak your internal state, but they are high-skill and not widely talked about.
- The quickest path to meeting the demands of a punishing "hostile telepath" is to do everything you can to shove the offending thoughts and feelings out of mind, practice avoiding them, and spin up processes that work to keep them from entering your awareness, a.k.a. "self-deception".
I typically see people talking about this in terms of avoiding punishment, but it's equally common to be driven to self-deceive based on rewards from a hostile telepath, where instead of punishing wrong-think, recognition is withheld until you simulate the targeted set of thoughts/feelings.
Another great addition to the interpersonal-conflict picture of self-deception is the concept of double-binds from Gregory Bateson. Though he doesn't describe it exactly as I'm about to, double-binds point to situations where the way to get a hostile telepath off your back doesn’t require you to genuinely deceive them, it just requires you to “not burst their bubble”. He walks through examples where the hostile telepath is making contradictory demands ("I want you to be close to me!" paired with pushing away whenever the other person actually tries to be close) and is only satisfied when you "fake the proper amount of closeness". Here, the hostile telepaths aren't asking "is this person truly a believer?", they're merely asking "was the performance adequate?" Double-binds, in a sense, are the inductive case of self-deception. If someone is maintaining a contradiction, whether consciously lying or self-deceiving, they can pressure others to "vibe with the contradiction".
Of course, in all these hostile telepath scenarios there are gradations of pressure and gradations of sensitivity with which people's minds are being policed. Bateson's work explored how being enmeshed from birth in strong enough double-binds in one's family environment could create splits in the self so deep that one develops schizophrenic symptoms. There are all sorts of lower-intensity ways social conflicts enforce varying amounts of ignoring a contradiction/lie. In many work environments it might look more like "wow, there's some ridiculous problem we have that could be fixed easily but the boss's boss won't allow it because he thinks it will make him look bad, so every day I have to deal with the bullshit that flows out of this protected error, and when I dwell on this it makes me angry, and if I'm angry enough people start to notice in meetings and I fail to Get With The Program and I Harsh The Vibe, and if I get angry enough I'll probably be fired, and right now that feels worse than dealing with this bullshit, so I'm going to practice Just Not Fucking Thinking About It."
So that's all well and good, but what about all the times that people seem to really truly be quietly sabotaging and lying to themselves in the quiet corners of their own minds when no one is around and it seems unlikely anyone could be around?
I'm basically with the author of the Hostile Telepaths post in thinking that ultimately all self-deception phenomena cash out in some interpersonal conflict in some way. In practice though, this can be very hard to trace, for many reasons. The big one is that if you're currently in some interpersonal situation that's applying pressure on you to distort your perception, it's almost certainly applying pressure on you to distort your perception of the conflict itself, which makes it to some degree anti-memetic. And if the interpersonal situation is more than just you and another person, but a friend group, a family unit, a workplace, a subculture, a society, then this distortion is going to be present in the discourse this group has about its problems.7 Now of course there are no perfect anti-memes, you can always follow the hints and trace their presence via the holes they leave,8 but you have to investigate these distortions from inside them, you don't get to jump outside your own mind in order to investigate your mind, so it’s tricky.9
Another related complication: it generally seems like an environment needs to be more toxic to create a distortion in you than it needs to be to maintain a distortion in you. This means that neither purely past->present (“what made me so fucked up?”) nor purely future->present (“how is this secretly a way to get something I want from my current environment?”) approaches will fully explain your problems. Let’s take self-hatred as an example. It’s generally recognized that having parents who frequently said you were a piece of shit and that everything bad is your fault tends to lead to you developing a persistent sense of self-hatred. Less acknowledged but much more common is the way that schools cultivate self-loathing in children10. Schools and the home both fit “contexts in the past where you were vastly more dependent and vulnerable than you currently are”. Fast-forward to adult life now. Most ways of being online involve being in algorithmically dictated social media environments that, among other things, are optimized around cultivating subtle and overt forms of self-loathing as a way to generate engagement, which generates ad money. Importantly, though social media does plenty to mess people up, it really doesn’t seem like the kind of thing that could install self-hatred ex nihilo in an otherwise fully functional, happy, healthy person. Likewise, I expect that if someone who developed self-hating tendencies at a young age ends up in a really robustly safe social environment later in life, unlearning self-hatred will be a relatively straightforward process. I think a lot of weirdly persistent problems are a result of past hostile circumstances creating sensitivities that current, considerably less hostile, circumstances exploit. In practice what this means is that if you're just looking to your past for answers, you'll be ignoring how your present environment is working to maintain your old wounds and won't think to try and get a better environment, and if you're just looking at your present environment, things won't make sense because you'll probably realize on some level that the situation just isn't intense enough to be the source of all your inner turmoil.
Ultimately, the considerations I've outlined in this section are less something that needs to be handled at the "theory" level (that's mainly handled via the claim "it all cashes out in interpersonal conflict") and more something that should figure prominently in the praxis of any discursive environments trying to work at unraveling knots in themselves and others.
4. Many theories are too high-level to be fundamental explanations
With the kind of "egos" the You Can't Handle the Truth theories invoke, and the kind of "identities" that Paul Graham evokes in Keep Your Identity Small, the obvious next questions are things like:
- What is the mental move “identifying with” made out of?
- How is "identifying as X" different from "in conversations with strangers saying I'm an X seems like a decent compression of relevant facts about me and if they want to know more than the compression I'll tell them more"?
- Why would anyone ever “identify” with anything?
- What leads people to “identify” with one thing vs another?
- Can your “ego” or “identity” do anything besides be threatened?
- Why is it possible for people whose life experiences overlap strongly with some set of labels to all have fairly disjoint sets of insecurities?
If you aren’t at least trying to answer these kinds of questions, then terms like "ego" and "identity" are at best a short-hand for referring to the observed realities of partisan irrationality, one that doesn’t provide explanatory power of its own, and at worst they are semantic stop-signs blocking further understanding of what is going on. I believe that these concepts can be useful high-level descriptions, but to make progress you'd need to dig into the underlying patterns of learning, decision making, commitments, etc., that go into creating “egos” and “identities”.
High level theories are great and useful. Things like the Enneagram and the Buddhist realms both seem to be describing high level patterns in the sorts of self-sustaining delusions people get absorbed in, and make no claim to explaining the "mechanics" of self-deception. Lots of the time it will probably be most useful to use high level lenses like these and others. But any theory that's trying to explain how and why self-deception happens at a fundamental level will have to go deeper.
Hopefully anyone else trying to understand this domain will find these considerations useful. I’ll be writing more about my theory of how things work in bits and pieces over the coming months.
Stay Lucky
1. I’ve gotten a lot from the therapy scene that developed around Gregory Bateson and the Palo Alto Mental Research Institute in the ’60s. Some bangers from that group include Steps to an Ecology of Mind, Toward a Theory of Schizophrenia, Pragmatics of Human Communication, and The Power Tactics of Jesus Christ and Other Essays.
2. Freddie Deboer has some posts describing the most popular trauma memes and how they do damage. I don’t agree with a decent chunk of his frame but he describes the landscape well.
3. Kaj Sotala actually wrote an amazing series exploring exactly this question, and I think anyone using subagents / parts theories would benefit from integrating some of his models here.
4. The Elephant in the Brain is actually a bit split-brained about this. It spends the first third of the book hammering in the idea that when people have hidden-to-themselves motives, it's that they’re trying to selfishly cheat on some pro-social norm. The rest of the book explores different domains where the authors claim hidden motives are widely and consequentially at play. But two of its most interesting case studies, education (“education isn’t about learning, it’s about signaling obedience”) and medicine (“medicine isn’t about healing, it’s about signaling you care”), do not even remotely look like skirting a pro-social norm for selfish gain. Something much weirder is going on in these domains, but Hanson and Simler brush past this.
5. This isn’t strictly true. The deep IFS lore has a notion that some parts are actually demons and they totally suck and are just trying to fuck with you.
6. The only people I see defending the idea that there can in fact be totally competent self-deception fairies are individuals who are explaining to me the things they lie to themselves about and how they expect it all to work out totally fine. Yes, it’s weird for someone to tell you about “what they’re lying to themselves about”, but I keep having convos with people who are oddly candid about this, at least for the duration of the convo.
7. This is what I think is going on with the conflations in trauma theories that Ben Hoffman points out in the post I linked previously.
8. The Hostile Telepath post has a good list of the sorts of behavioral traces frequently left by self-deception.
9. Though you definitely can’t get “outside” your own mind, it does seem like there are somewhat global parameters for how sensitive your distortions are, and I’ve often found that when I’m at a confluence of great food, sleep, and exercise, whole swathes of distortions seem trivial to deal with. Psychedelics seem to have a related global effect, though it doesn’t seem like they directly weaken the energy/aversion/triggeredness behind distortions; it feels more like they mess with the guard rails that distortions use to sustain themselves.
10. John Holt’s “How Children Fail” is a great book which, among other things, describes in detail what it looks like for children to slowly learn to not trust themselves and to disdain themselves in the confines of schools.