Natural Hazard

It wouldn't be the road to hell if it wasn't paved with good intentions

Predictive Categories Make Bad Causal Variables

This post is going to explore the consequences of different choices you can make when thinking about things causally. Shout out to johnswentworth for first seeding this sort of investigation in my head.

One mistake people are known to make is to vastly underestimate the number of causal factors behind a variable. Scott writes about this tendency in genetics:

What happens if your baby doesn’t have the gene for intelligence? Can they still succeed? [...] By the early 2000s, the American Psychological Association was a little more cautious, was saying intelligence might be linked to “dozens – if not hundreds” of genes. [...] The most recent estimate for how many genes are involved in complex traits like height or intelligence is approximately “all of them” – by the latest count, about twenty thousand.

Probably not too surprising. Everyone wants "The One Thing" that explains it all, but normally it's the case that "These 35,000 Things" explain it all. The Folk Theory of Essences might be the most egregious example of people inferring a mono-causal relationship when reality is vastly poly-causal. George Lakoff (the metaphors and embodied cognition guy) explains:

The Folk Theory of Essences is commonplace, in this culture and other cultures around the world. According to that folk theory, everything has an essence that makes it the kind of thing it is. An essence is a collection of natural properties that inheres in whatever it is the essence of. Since natural properties are natural phenomena, natural properties (essences) can be seen as causes of the natural behavior of things. For example, it is a natural property of trees that they are made of wood. Trees have natural behaviors: They bend in the wind and they can burn. That natural property of trees, being made of wood (which is part of a tree's "essence"), is therefore conceptualized metaphorically as a cause of the bending and burning behavior of trees. Aristotle called this the material cause.

As a result, the Folk Theory of Essences has a part that is causal. We will state it as follows: Every thing has an essence that inheres in it and that makes it the kind of thing it is. The essence of each thing is the cause of that thing's natural behavior.

Thinking in terms of essences is very common. It seems to be how a lot of people think about things like personality or disposition. "Of course he lied to you, he's a crook." "I know it was risky and spontaneous, but I'm an ENTJ, so yeah."

My first reflex is to point out that your behavior is caused by more than your personality. Environmental contexts have huge effects on the actions people take. Old news. I want to look at the problems that pop up when you even consider personality as a causal variable in the first place.

Implicit/Emergent Variables

Let's think about modeling the weather in a given region, and how the idea of climate factors into it. A simple way to model this might be with the graph below:

[Figure: causal graph with geographic factors → climate → weather]

Certain geographic factors determine the climate, and the climate determines the weather. Boom, done. A high-level abstraction that lets us model stuff.

Let's see what happens when we switch perspectives. If we zoom in to a more concrete, less abstract model, where the weather is a result of things like air pressure, temperature, and air density, all affecting each other in complex ways, there is no "climate variable" present. A given region exhibits regularities in its weather over time. We see similarities between the regularities in different regions. We develop labels for different clusters of regularities. We still have a sense of what geographic features lead to what sorts of regularities in weather, but in our best concrete models of weather there is no explicit climate variable.
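To make the two perspectives concrete, here's a toy Python sketch (every function, variable, and threshold here is invented for illustration, not taken from any real climate model): in the first version "climate" is an explicit node sitting between geography and weather, while in the second the weather falls out of low-level variables and there's nothing to point at and call climate.

```python
# Abstract model: geography -> climate -> weather, with "climate" as an explicit node.
def climate(latitude, elevation):
    # invented labeling rule, for illustration only
    return "desert" if abs(latitude) < 30 and elevation < 500 else "temperate"

def weather_abstract(latitude, elevation):
    c = climate(latitude, elevation)
    return {"yearly_rain_mm": 50 if c == "desert" else 800}

# Concrete model: weather emerges from interacting low-level variables.
# Search this function all you like; there is no climate variable in it.
def weather_concrete(temperature, air_pressure, humidity):
    yearly_rain_mm = max(0.0, humidity * 1000 - (air_pressure - 1000) * 2 - temperature * 5)
    return {"yearly_rain_mm": round(yearly_rain_mm)}
```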

What are the repercussions of using one model vs the other? It seems like they could both be used to make fine predictions. The weirdness happens when we remember we're thinking causally. Remember, the whole point of causal reasoning is to know what will happen if you intervene. You imagine "manually setting" causal variables to different values and see what happens. But what does this "manual setting" of variables look like?
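As a sketch of what "manually setting" amounts to (this is a generic structural-model trick with invented equations, not the post's formalism): each variable is computed from its parents unless an intervention clamps it to a value you chose by hand.

```python
# A crude do() operator over a tiny structural model. The equations are made up;
# the point is only that an intervention overrides a node's usual mechanism.
model = {
    "geography": lambda v: 1.0,
    "climate":   lambda v: 2.0 * v["geography"],
    "weather":   lambda v: v["climate"] + 0.5,
}

def run(model, do=None):
    do = do or {}
    values = {}
    for name, mechanism in model.items():   # assumes parents are listed before children
        values[name] = do[name] if name in do else mechanism(values)
    return values

print(run(model))                        # observe the system as-is
print(run(model, do={"climate": 9.0}))   # "manually set" the climate node
```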

In our graph from last post:

[Figure: causal graph from the last post, with nodes like listening to Mozart, family income, and SAT scores]

all the variables are ones that I have some idea of how to manually set. I can play Mozart for a kid. I can give someone's family more money. I can get College Board to give you fake SAT scores. But what would it mean to intervene on the climate node?

We know that no single factor controls the climate. "Desert" and "rain-forest" are just labels for types of regularities in a weather system. Since climate is an emergent feature, "intervening on climate" means intervening on a bunch of geographic variables. The previous graph leads me to erroneously conclude that I could somehow tweak the climate without having to change the underlying geography, and that's not possible. The only way to salvage this graph is to put a bunch of additional arrows in, representing how "changing climate" necessitates a change in geography.

[Figure: the same graph with additional arrows from climate back to the geographic variables]
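Here's a rough sketch of what that cashes out to, again with made-up variables and thresholds: since "desert" is just a label for a cluster of weather regularities, "setting the climate to desert" can only mean finding some configuration of the underlying geographic variables whose regularities earn that label.

```python
# There is no climate knob. "Intervene on climate" has to be translated into
# interventions on the geography that produces the desired cluster of regularities.
def yearly_rain_mm(elevation_m, distance_to_ocean_km):
    return max(0, 900 - 0.1 * elevation_m - 0.3 * distance_to_ocean_km)

def climate_label(rain):
    return "desert" if rain < 250 else "not desert"

def set_climate_to(target):
    # brute-force search over geographic settings until the label matches
    for elevation in range(0, 5001, 500):
        for distance in range(0, 3001, 500):
            if climate_label(yearly_rain_mm(elevation, distance)) == target:
                return {"elevation_m": elevation, "distance_to_ocean_km": distance}
    return None

print(set_climate_to("desert"))
```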

Contrast this with another example. We're looking at the software of a rocket, and for some reason the developer chose to hardcode the value 9.8 into every location where they needed the gravitational constant g. What happens if we model the software as having a causal variable for g? Like climate, this g is not explicit; it's implicit. There's no global variable that can be toggled to control g. But unlike climate, this g isn't really an emergent feature. The fact that the software acts as if the gravitational constant is 9.8 is not a complex emergent property of various systems interacting. It's because you hardcoded 9.8 everywhere.

If we wanted to model this software, we could include a causal variable for every instance of 9.8, but we could just as easily lump them all into one variable. Our model would give basically the same answer to any intervention question. Yeah, it's more of a pain to find and replace every hardcoded value, but it's still the same sort of causal intervention that leaves the rest of the system intact. Even though g is an implicit variable, it's much more amenable to being modeled as an explicit variable at a higher level of abstraction.
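A small sketch of the two situations, assuming the hardcoded value is the usual 9.8 and with invented rocket functions: in the first version g is implicit, smeared across every call site, and "intervening on g" is a find-and-replace; in the second it's been made an explicit variable you can set in one place. Either way the rest of the program is left intact.

```python
# Version 1: g is implicit. "Intervening on g" means find-and-replace on every 9.8.
def weight(mass_kg):          return mass_kg * 9.8
def burn_time(delta_v):       return delta_v / 9.8
def apex_height(velocity):    return velocity ** 2 / (2 * 9.8)

# Version 2: the same implicit variable made explicit; one assignment sets it everywhere.
G = 9.8
def weight2(mass_kg):         return mass_kg * G
def burn_time2(delta_v):      return delta_v / G
def apex_height2(velocity):   return velocity ** 2 / (2 * G)
```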

Causal Variables and Predictive Categories

A few times I've told a story that goes like this: observe that a system has regularities in its behavior, see other systems with similar clusters of regularity, develop a label to signify "System that has been seen to exhibit Type X regularities."

Previously I was calling these "emergent features", but now I want to frame them as predictive categories, mostly to emphasize the pitfalls of thinking of them as causal variables. For ease, I'll be talking about it as a dichotomy, but you can really think of it as a spectrum, where a property slides from being relatively easy to isolate and intervene on while leaving the rest of the system intact (g in the code), all the way up to complete interdependent chaos (more like climate).

Here's a problem we already spotted: thinking of a predictive category (like climate) as a causal variable can lead you to think that you can intervene on climate in isolation from the rest of the system.

But there's an even deeper problem. Think back to personality types. It's probably not the case that there's an easily isolated "personality" variable in humans. But it is possible for behavior to have regularities that fall into similar clusters, allowing for "personality types" to have predictive power. Focus on what's happening here. When you judge a person's personality, you observe their behavior and make predictions of future behavior. When you take a personality quiz, you tell the quiz how you behave and it tells you how you will continue to behave. The decision flow in your head looks something like this (but with more behavior variables):

[Figure: decision flow from observed behaviors to a personality label to predicted behaviors]

All that's happening is you predict behavior you've already seen, and other behavior that has been known to be in the same "cluster" as the behavior you've already seen. This model is a valid predictive model (results will vary based on how good your pattern recognition is), but it gives weird causal answers. What causes your behavior? Your personality. What causes your personality? Your behavior.
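Here's roughly that decision flow as code (the clusters and behaviors are invented; a real personality scheme would have far more of both): the "personality" label is whichever cluster best matches the behavior you've already observed, and the "predictions" are just the remaining behaviors in that cluster.

```python
# Predictive category as code: observed behavior -> cluster label -> predicted behavior.
clusters = {
    "type_A": {"argues recreationally", "plans ahead", "takes charge of groups"},
    "type_B": {"avoids conflict", "keeps strict routines", "double-checks everything"},
}

def label(observed):
    # the "personality" is whichever cluster best overlaps what we've already seen
    return max(clusters, key=lambda c: len(clusters[c] & observed))

def predict(observed):
    # the predictions are just the rest of that cluster
    return clusters[label(observed)] - observed

print(label({"plans ahead"}))     # -> 'type_A'
print(predict({"plans ahead"}))   # the other type_A behaviors
```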

Now, it's not against the rules of causality for things to cause each other; that's what control theory is all about (play with negative feedback loops here!). But it doesn't work with predictive categories.[1] Knowing what personality is, we can expand "Your personality causes the regularities in your behavior" to "The regularities in your behavior cause the regularities in your behavior." There is no causal/explanatory content. At best, it's a tautology that doesn't tell you anything.

This is the difference between personality and climate. Both are predictive categories, but with climate we had a decent understanding of what variables we might need to alter to produce a "desert" pattern or a "rain-forest" pattern. How the hell would you change someone from an ENTP pattern to an ISFJ pattern? Even ignoring the difficulties of invasive brain surgery, I don't think anyone has any idea on how you would reshape the guts of a human mind to change it to another personality cluster.

Thinking of personality as a causal node will lead you to believe you have an understanding that you don't have. Since you're already mistaking a predictive model for a causal one, you might even build a theory of intervention where you can fiddle with downstream behavior to change the predictive category (we'll explore this sort of thinking more in later posts).

To recap: if you treat a predictive category as a causal variable, you have a good chance of misleading yourself about your ability to intervene on the category in isolation from the rest of the system, and about whether your "explanations" of the system's behavior actually explain anything.

Humans and Essences

Finally we circle back to essences. You can probably already put together the pieces. Thinking with essences is basically trying to use predictive categories as causal nodes that are the source of all of an entity's behavior. This can work fine for predictive purposes, but leads to mishaps when thinking causally.

Why is it so easy to think in terms of essences? Here's my theory. As already noted, our brains are doing causal learning all the time. The more of a system's "guts" you are exposed to, the easier it is to learn true causal relationships. In cases where the guts are hidden and you can only interact with a system as a black box (you can't peer into people's minds), you have to rely on other faculties. Your mind is still great at pattern recognition, so predictive categories get used a lot more.

Now all that needs to happen is for you to mistake the cognition you use to predict for cognition that represents a causal model. Eliezer describes it in "Say Not 'Complexity'":

In an eyeblink it happens: putting a non-controlling causal node behind something mysterious, a causal node that feels like an explanation but isn’t. The mistake takes place below the level of words. It requires no special character flaw; it is how human beings think by default, how they have thought since the ancient times.

An important additional point is to address why this easy-to-make mistake doesn't get corrected (I make mistakes in arithmetic all the time, but I fix them). The key to it not getting corrected is the inaccessibility of the guts of the system. When you think of the essences of people's personalities, you don't get to see inside their heads. When Aristotle posited the "essence of trees," he didn't have the tools to look into a tree's cells. People can do good causal reasoning, but when the guts are hidden and you've got no way to intervene on them, you can posit crazy incorrect causal relationships all day and never get corrected by your experience.

Quick Summary

Predictive categories are labels for clusters of regularities in a system's behavior. They can be great for prediction, but treating them as causal variables misleads you about your ability to intervene on them in isolation from the rest of the system, and hands you "explanations" that are really just tautologies. Essence-thinking is this mistake applied wholesale, and it goes uncorrected whenever the guts of the system stay hidden from view.

[1] The category is a feature of your mind. For it to exert a causal influence on the original system, it would have to be through the fact that your use of the category caused you to act on the system in a certain way. When might you see that happen?