Getting what we want is dangerous
The mythological King Midas had a wish: that everything he touched would turn to gold. But when his wish was granted by the god Dionysus, Midas realized there was a serious problem. He couldn’t touch his food or his family. In the end, he starved to death. Midas’s story is sometimes interpreted to mean that you should wish thoughtfully. But even choosing carefully doesn’t necessarily solve the problem. Nick Bostrom, in his book Superintelligence, invites us to imagine the consequences of asking a superintelligent AI system to give us what we want. Suppose we ask for something unambiguously good, like, “find a cure for cancer as quickly as possible.” If the AI interprets our request literally, it may decide to create tumors in half the world’s population, in order to speed up its cancer research.
The problem arises in everyday life too. If we eat all the food we want, we often become overweight and unhealthy. At a societal level, we want transportation and communication, but this uses natural resources and leads to environmental destruction. The economist Charles Goodhart observed that when a measure becomes a target, it ceases to be a good measure: optimizing hard for any proxy objective – say, students’ test scores – produces unintended consequences. Teachers are pressured to increase their students’ scores even if it means the students come away with a shallower understanding.
One thing to notice is that in all the examples above, the problem arises from an excess. Another way of saying this: if we are mediocre at getting what we want, then the paradoxical problems don’t appear. If we’re just barely good enough at hunting to feed ourselves, we don’t get fat. The problem comes when we are really effective.
So what is the general principle here? Why does working toward something good paradoxically lead to something bad? To answer that, we have to think about the world from a systems point of view – a world of interacting entities.
Optimizing from your own point of view
There are a lot of separate creatures in the world, each with their own self-interest. A fox wants to eat a duck, but the duck doesn’t want that. Google wants to take market share from Facebook, and so on. From each entity’s point of view, there is an inside (itself) and an outside (the rest of the world). Each entity perpetually tries to metabolize more of the outside world’s energy into its own self-pattern – that’s why it exists, because it has been able to enforce its perspective on a little corner of the world.
How does this help us understand the paradox? The world maintains its rich diversity because the competing interests of all the entities approximately balance. If foxes were completely successful at eating ducks, there would be no more ducks. Conversely, if all the prey were successful at escaping, there would be no more foxes. But in reality, the competing optimization processes reach a standoff. Collectively across the world, there’s a big network of balances between partially conflicting objectives.
However, if one creature is too good at getting what it wants, it breaks the balance. This ends up being bad for that creature, because every entity depends on the balance of the system it’s embedded in. If the foxes catch all the ducks, then pretty soon the foxes die too. If any entity achieves its own goals too thoroughly, it suffers.
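The fox-and-duck standoff can be made concrete with a toy predator-prey simulation. This is only a sketch: the growth rates, carrying capacity, starting populations, and hunting parameters below are invented for illustration, not taken from any ecological data.

```python
# Toy predator-prey model: ducks grow logistically, foxes starve without food.
# All parameter values are illustrative assumptions.

def simulate(hunt_rate, years=200):
    """Run the model for `years` steps; return final (ducks, foxes)."""
    ducks, foxes = 200.0, 10.0
    for _ in range(years):
        growth = 0.3 * ducks * (1 - ducks / 1000)      # logistic growth, capacity 1000
        catch = min(ducks, hunt_rate * foxes * ducks)  # foxes can't eat more than exist
        ducks = ducks + growth - catch
        foxes = 0.7 * foxes + 0.05 * catch             # foxes decline unless fed
        if ducks < 1: ducks = 0.0                      # fewer than one animal = extinct
        if foxes < 1: foxes = 0.0
    return ducks, foxes

print(simulate(0.01))   # balanced hunting: both populations persist
print(simulate(0.001))  # foxes too ineffective: foxes die out, ducks fill the pond
print(simulate(0.5))    # foxes too effective: ducks crash, then the foxes starve
```

Only the middle hunting rate sustains both species; at the highest rate, the foxes' very success at getting what they want wipes out their food supply and then themselves.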
This is the general principle behind the paradox that getting what you want is dangerous. Every entity has its own point of view, which you could think of as that entity’s imagination of a perfect reality – a vision of the entire world defined by its concept of self. The entity continually tries to optimize the world toward this objective. The paradox is that the individual self only makes sense in the context of a real world which is not self. If an entity optimizes too strongly for its own point of view, diversity collapses.
But although optimization in excess destroys diversity, optimization in balance is essential to create diversity. Diversity is the existence of lots of things that are distinct from each other. These things exist because they have some ability to fight for their own point of view. My self starts as a weak, barely specialized entity, but if it successfully pulls some of the surrounding world toward it, then it becomes more strongly differentiated. A single mating pair of foxes can become a thriving fox population if they are good at hunting. Competing optimization, aka power struggle, creates form.
Figure 1 is a crude illustration of the balance between the desires of different entities. You have a bunch of separate selves each pulling the surrounding world toward their pattern. If there’s balance, then entities differentiate by metabolizing some but not all of their surroundings. This creates diversity and richness in the world. But there are two ways it can go wrong. 1) Optimization can be too weak, so entities don’t individuate, leaving a homogeneous and boring universe. 2) Optimization can be too strong, with one entity causing a collapse of diversity.
Figure 1. a, With too little optimization, the world is undifferentiated without interesting structure. b, If separate entities each try to impose their will on the world and succeed locally, they collectively create a rich environment. c, If a single entity is too good at optimizing from its own point of view, diversity collapses.
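The three regimes of Figure 1 can also be sketched numerically. In this toy model (every detail of which is an invented assumption, not part of the figure itself), the world is 300 points on a line, each “self” pulls every point it claims toward its own position, and diversity is measured as the number of distinct places the points end up.

```python
# Toy numerical version of Figure 1: "selves" pulling the world toward
# their own pattern. All parameters are illustrative assumptions.
import random

def run_world(strengths, steps=200, seed=1):
    """strengths: pull strength of each self, positioned at 0, 5, and 10.
    Returns the number of distinct clusters the world collapses into."""
    positions = [0.0, 5.0, 10.0]
    rng = random.Random(seed)
    world = [rng.uniform(0.0, 10.0) for _ in range(300)]  # undifferentiated raw material
    for _ in range(steps):
        new_world = []
        for x in world:
            # each point is claimed by whichever self exerts the biggest pull,
            # where pull falls off with distance
            pulls = [w / (1.0 + abs(p - x)) for w, p in zip(strengths, positions)]
            i = pulls.index(max(pulls))
            frac = min(1.0, 0.1 * strengths[i])           # how far the point moves
            new_world.append(x + frac * (positions[i] - x))
        world = new_world
    return len({round(x, 1) for x in world})

print(run_world([0.0, 0.0, 0.0]))   # Fig 1a: no pull, the world never differentiates
print(run_world([1.0, 1.0, 1.0]))   # Fig 1b: balanced pull, three distinct patterns form
print(run_world([1.0, 1.0, 20.0]))  # Fig 1c: one dominant self absorbs everything
```

With no pull the points just sit where they started (many scattered values, but no structure); with balanced pull they organize into three distinct patterns; with one overwhelming self, everything collapses into a single cluster.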
Optimal distance
The strength of optimization can also be thought of as the “nearness” between entities. If entities are “close” to each other, in a functional sense, it means they are strongly interacting – their patterns have a strong effect on each other. Figure 1a is a “high distance” world. Entities are functionally too far apart, not interacting with each other to form interesting metabolic structures. Figure 1c is a “low distance” world. Everything has gotten sucked into the pattern of the red entity, so all the elements of the system are too informationally close to one another.
Life depends on a balance between connection and separation. If things are too separated, they can’t interact. If they’re too connected, they can’t specialize and diversify. At the balance point, you have beautiful delicate structures like cells and human societies.
Self and not-self
If I try hard to sleep, I can’t. Why? Sleep requires relaxing. But the harder I try to relax, the less I succeed. Relaxing means releasing some of my self-pattern – the attractor loops within my brain and between my brain and body.
The goal-directed effort stymies itself by being too inflexible. It’s a lot like the fox who hunts too well and then has no ducks left to eat. Finding a balance means partially letting go of any particular goal-concept, or equivalently, self-concept.
Allowing self-change feels like dying. In a certain sense, a self-pattern can never choose this. Instead, the release comes from being embedded in the larger context. Individual parts are forced to dissolve – which is easier when optimization is already in balance. Christian mystics call this “grace”, but it’s nothing mysterious from a systems point of view (although it’s always mysterious from a subjective point of view).
This is also what creates so much human suffering. Our self-pattern is afraid of what is not itself. It’s committed to protecting itself. Its imagination of the idealized perfect world (all as self) is equivalent (with a sign flip) to the imagination of the dreaded not-self world. The not-self world feels like a Bad Thing that could happen, lurking just outside our consciousness. We can’t picture exactly what it is. But transformative growth happens when this dissolves, when we let the Bad Thing happen. It turns out that the energy that was locked in the Darkness is actually full of life and beauty.
This is particularly rough for humans, because our imagination of the idealized world includes our beliefs about ourselves. Other animals try to optimize toward goals. But we humans throw into the mix a detailed self-concept – of how we should be – and we apply that optimization machinery toward it. For example, we want to not feel bad. If we do feel bad, then there’s a tension between reality and our idealized image. That makes us feel even worse! We’re locked in a loop until we let the Bad Thing happen.
What we value about the world is its diversity, potential, open-ended growth. In other words, its life. At an individual level, our subjective wellbeing is the feeling that there is something to be metabolized and that we have the potential to do it. The mystery of what we don’t know yet. That’s why we can’t define what is good. As soon as we try to pin it down and optimize for it in a fixed way, we’re losing track of the real meaning.
But as always, these two factors have to be in balance. We do have to temporarily adopt beliefs about what is good, and temporarily optimize for them. Otherwise, there’s no structure – we would be in the universe of Figure 1a. If things are working well, there’s a balance between releasing into acceptance and holding onto beliefs about how things can be better.
The Fermi Paradox
That brings us to the Fermi Paradox. The universe is insanely huge, with plenty of room for other intelligent life to have evolved – so why haven’t we run into any aliens?
In the conceptual framework of this essay, the answer is that intelligence means the ability to give yourself what you want. If a species gets too intelligent, it collapses because of loss of diversity.
On Earth, life started off as bits of RNA or single-celled creatures jittering around in bodies of water. A creature in one puddle was effectively very “far” from a creature in another puddle, because they had no means to interact, unless the land gradually eroded or an earthquake splashed one puddle into the other. When life became able to flagellate and swim and crawl, creatures became “nearer” to each other. Insects with flight introduced even more possibilities for interaction, although it still wouldn’t be easy for a bug to get from Laurasia to Gondwana. Now we have airplanes that connect many places on Earth in a matter of hours. Perhaps even more importantly, with the internet we communicate almost instantly around the globe.
This means that diversity is decreasing. Everything on Earth is falling into the same orbit, the same pattern of optimization – like Figure 1c. This might be ok if we could zoom out and see the red circle of Figure 1c as living in a larger universe of planets full of life, each only loosely coupled to each other.
But the great conundrum is that it’s far more difficult to travel between planets than between puddles. It’s an unfortunate consequence of the strength of gravity at planetary scale. Up until now, there has been a steady progression of life gaining the ability to interact at greater distances, but then finding that there are greater-still distances remaining to throttle interactions with other parts of the world. Now, there is a sharp discontinuity because of the sheer energy required to get off the planet.
Our intelligence has given us the ability to interact almost effortlessly. This even goes beyond the physical structure of the internet. Technologies built on top of the internet take it to another level. YouTube lets us watch people anywhere doing anything we can imagine. We go to Wikipedia to find an answer before stopping to think about the problem. And the ultimate technology is artificial intelligence, which promises to give us whatever we want instantly.
Which takes us full circle. Being too good at optimizing for whatever we want could paradoxically be our downfall. If the same thing happened to alien races, it could explain why we’re alone in the universe.