What do Chinese whispers, copycat crimes, and unexpectedly slow internet have in common?
We’ve had several names for it: the snowball effect, the butterfly effect, and the domino effect. But they are all examples of cascades.
When you know them separately, you know three different ideas. When you understand how they occur, you create a mental model. You understand the pattern.
For example, kids want the same toy other kids want, and parents want their kids to become either doctors or engineers. Both are the same idea: Our desires are shaped by others. That’s Mimetic theory.
We are about to build a similar pattern for cascades, and connect the snowball, domino, and butterfly effect together.
Once we understand the pattern, we’ll figure out how to influence cascades. Surprisingly, cascades are widespread in our personal life. So, the techniques we learn from professionals working with complex adaptive systems can transfer to our lives, too.
Let’s build intuition with examples. We can think of cascades as passing on something from one step to the next. Scroll to the next section as soon as you feel you’ve grasped the idea.
This is a game of passing on what you hear to your neighbour. In the end, you compare what you heard with what the first person said. It’s amusing because the end is usually very different from the beginning.
There’s an alternative with actions instead. It’s easier to see the transformation here.
It’s a cascade of actions flowing through people.
This is the Kessler syndrome.
A satellite destroyed by a meteor will send debris throughout the orbits around Earth. This space junk will destroy a few satellites and subsequently spread more debris into all possible orbits, destroying everything in orbit around the Earth.
If this occurs, space flight beyond Earth will become very difficult if not impossible.
Let’s say you want to prove that the sum of the first N natural numbers is S = N*(N+1)/2.
You can start with N = 1, where S = 1 = 1*(1+1)/2. The formula holds.
Now, assume it’s true for N = k:
S = 1 + 2 + 3 + 4 + .... + k = k*(k+1)/2.
If you can then show it holds for N = k+1, the truth travels from each number to the next.
This is like dominos falling. It’s elegant because without induction, you would’ve had to prove it for each natural number. That’s knocking down dominos manually, one by one.
Since math is a bit detached from the real world, you can create an abstraction: every domino is like every other domino. So, k can be any natural number. You prove it for the first one, and then prove that any one in the middle knocks down the next.
That’s a cascade travelling down infinite levels. Yep, math is beautiful.
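Written out in full, the standard derivation is:

```latex
\textbf{Base case} ($N = 1$): $S_1 = 1 = \frac{1 \cdot 2}{2}$, so the formula holds.

\textbf{Inductive step:} assume $S_k = \frac{k(k+1)}{2}$. Then
\[
  S_{k+1} = S_k + (k+1)
          = \frac{k(k+1)}{2} + (k+1)
          = \frac{(k+1)(k+2)}{2},
\]
which is the same formula with $N = k+1$. Each domino knocks down the next.
```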
Wolves can change the course of rivers.
Think of a food chain. There’s grass and trees at the bottom, deer which eat the grass, and wolves that eat the deer.
The food chain is a part of a food web connecting other food sources. It also links to other phenomena, like how trees prevent soil erosion.
Predators control the population of the next level in the food chain. When the wolves are gone, the deer multiply, overgraze, and almost wipe out the shrubs and trees. This also causes soil erosion and promotes meandering of the rivers.
Re-introduce the wolves, and it goes in reverse — like in the video.
Sea otters do the same in the sea. They feed on sea urchins, which in turn feed on kelp. When there are healthy populations of sea otters, kelp forests are more prominent because the otters keep the urchin population in check.
Back-to-back meetings
One meeting running late pushes back or cuts a few more meetings.
The cycle ends at night because there’s an 8–10 hour buffer.
It’s a cascade of time through the day.
Cascading style sheets
Commonly known as CSS, this is the technology used to style things on the web (HTML).
To write CSS, you specify the sizes, colours, and positions of elements on a page, and create files (stylesheets) for them.
What happens now if one element is inside another? Which design do you choose?
There’s a fixed priority order for the stylesheets. We take everything from the highest priority sheet, then things that haven’t been included from the lower priority one, and so on.
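That merge can be sketched in a few lines (a toy model; real CSS specificity has more rules, and the sheet contents here are made up):

```python
# Toy model of the cascade: apply stylesheets from lowest to highest
# priority, so higher-priority properties overwrite lower ones and
# anything not overridden falls through.
def resolve(sheets):
    """sheets: style dicts ordered from lowest to highest priority."""
    final = {}
    for sheet in sheets:
        final.update(sheet)
    return final

browser_default = {"color": "black", "font-size": "16px", "margin": "8px"}
site_stylesheet = {"color": "navy", "font-size": "18px"}
inline_style    = {"color": "red"}

print(resolve([browser_default, site_stylesheet, inline_style]))
# color from the inline style, font-size from the site sheet,
# margin falls through to the browser default
```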
This creates the final webpage. The cascading sometimes makes web development hard, and has spawned plenty of memes.
One neutron causes a uranium nucleus to split, releasing 3 more neutrons. Each of these neutrons can then split another nucleus, leading to an exponential burst of energy until there are no more nuclei to split.
In our nuclear reactors, we control this reaction by absorbing some of the neutrons with control rods. That balances the cascade.
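The two regimes can be sketched in a toy simulation (the absorption fraction is illustrative, not reactor physics):

```python
# Toy chain reaction: each fission releases 3 neutrons; a reactor
# absorbs a fraction of them before they trigger new fissions.
def neutrons_after(generations, absorb_fraction, start=1):
    n = start
    for _ in range(generations):
        released = n * 3                      # 3 neutrons per fission
        n = released * (1 - absorb_fraction)  # survivors trigger new fissions
    return n

print(neutrons_after(10, absorb_fraction=0.0))  # uncontrolled: grows to 3**10 = 59049
print(neutrons_after(10, absorb_fraction=0.7))  # controlled: falls below 1, cascade dies out
```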
In interconnected systems, one part going down can bring down the entire system if safeguards are not in place. These types of cascades are called cascading failures.
Routers all over the world form the backbone of the internet. They route information packets (electrical signals, light, or radio waves) from one place to another. It’s how we’re able to get a livestream of the Super Bowl in the USA while sitting in London.
Depending on the final destination, every router decides where to send the information next. So, there’s a router in Miami sending packets about the Super Bowl to, say, New York, since that’s the only place it knows with a connection to London. Then New York forwards the packets to Ireland, because New York knows that Ireland knows how to get the packet to London.
It’s a collaborative dynamic network.
The information sender doesn’t know the full route, it only knows the next router to send the information to, given the final location.
Once the packet reaches my laptop in London, I then send back a response saying “okay, thanks, keep it coming”.
The router gets feedback when the response comes. “Hmm, it’s taking too long on this route. Too many people watching the Super Bowl in New York. This route isn’t the fastest anymore. Time to find another.”
Then, the router in Miami switches from New York to Morocco for the rest of the packets. Morocco then forwards the packet to France, which then forwards it to London.
However, let’s say Morocco has fewer resources than New York.
The cascading failure occurs when New York goes down.
Miami gets the response that none of its packets are making it through New York. So, it routes everything through Morocco. Morocco can’t handle this load and succumbs to the pressure. Everything becomes slow. Not only London, but every other part of the world depending on New York, Morocco, Ireland, or London slows down.
Just like in a traffic jam, there are too many packets trying to make it through a bottleneck, which slows everyone down.
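The story above can be sketched as a toy model (router names, capacities, and loads are made up; real routing protocols are far more sophisticated):

```python
# Toy cascading failure: routers have capacities; when one fails,
# its load moves to the least-loaded survivor, which may overload too.
def simulate_failure(capacities, loads, failed):
    """capacities/loads: dicts keyed by router name; failed: first router down."""
    down = []
    queue = [failed]
    while queue:
        router = queue.pop(0)
        if router in down:
            continue
        down.append(router)
        survivors = [r for r in capacities if r not in down]
        if not survivors:
            break
        target = min(survivors, key=lambda r: loads[r])  # reroute the load
        loads[target] += loads[router]
        if loads[target] > capacities[target]:
            queue.append(target)       # overloaded: it fails as well
    return down

capacities = {"New York": 100, "Morocco": 40, "Ireland": 80}
loads      = {"New York": 90,  "Morocco": 30, "Ireland": 50}
print(simulate_failure(capacities, loads, "New York"))
# New York's load crushes Morocco, whose load then crushes Ireland
```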
We’ll explore this further below, using the electricity grid and power line failures. It’s the same principle.
Speaking of traffic jams, there’s an interesting cascading problem here as well.
One person braking on a highway causes the person behind them to brake harder, which causes the person behind them to brake harder still, until eventually someone stops dead on the highway. Everyone behind this person has to stop, unless there’s a smart driver who maintains their distance. The buffer absorbs the braking shock.
Another way out is to get everyone to co-ordinate and accelerate together. This solution is exciting because self-driving cars can do this, while humans are hard to co-ordinate.
Unsurprisingly, this isn’t the only co-ordination problem plaguing us.
Malthusian Traps and Co-ordination problems
Imagine a population that has plenty of food. As a result, to compete with other tribes, they start reproducing more. Soon, there are a lot more mouths to feed. They run out of food, which leads to starvation and death. The cycle then continues.
This excerpt from Meditations on Moloch captures the idea very well.
Suppose you are one of the first rats introduced onto a pristine island. It is full of yummy plants and you live an idyllic life lounging about, eating, and composing great works of art.
You live a long life, mate, and have a dozen children. All of them have a dozen children, and so on. In a couple generations, the island has ten thousand rats and has reached its carrying capacity. Now there’s not enough food and space to go around, and a certain percent of each new generation dies in order to keep the population steady at ten thousand.
A certain sect of rats abandons art in order to devote more of their time to scrounging for survival. Each generation, a bit less of this sect dies than members of the mainstream, until after a while, no rat composes any art at all, and any sect of rats who try to bring it back will go extinct within a few generations.
If one sect of rats altruistically decides to limit its offspring to two per couple in order to decrease overpopulation, that sect will die out, swarmed out of existence by its more numerous enemies. If one sect of rats starts practicing cannibalism, and finds it gives them an advantage over their fellows, it will eventually take over and reach fixation.
If some rat scientists predict that depletion of the island’s nut stores is accelerating at a dangerous rate and they will soon be exhausted completely, a few sects of rats might try to limit their nut consumption to a sustainable level. Those rats will be outcompeted by their more selfish cousins. Eventually the nuts will be exhausted, most of the rats will die off, and the cycle will begin again. Any sect of rats advocating some action to stop the cycle will be outcompeted by their cousins for whom advocating anything is a waste of time that could be used to compete and consume.
From a god’s-eye-view, it’s easy to say the rats should maintain a comfortably low population. From within the system, each individual rat will follow its genetic imperative and the island will end up in an endless boom-bust cycle.
The god’s eye view shows the co-ordination problem.
The cascade starts off with one family reproducing faster, or one seller charging lower, or one athlete taking steroids to improve performance.
We’ve found a solution to the last two cases — we enforce rules which punish this. Some countries have found a solution to the first case, too.
I started writing about the slow internet, which led me to the traffic jam analogy, which turned into writing about problems with traffic jams. The problem with traffic jams was co-ordination, which led me to talk about other co-ordination problems, which reminded me of Moloch.
That’s an idea cascade.
Percolation Theory is the study of graphs and how connected they need to be to “percolate” an idea or liquid from one end to the other. It attacks the problem from an unconventional angle: how does the system need to change to make an idea percolate?
Nicky Case explores how networks allow ideas to go viral in The Wisdom and/or Madness of Crowds.
Consider a group of 9 students. 3 of them are binge drinkers (33%). Everyone’s trying to figure out if they should be drinking or not. They’ll drink if the majority of their friends do.
It’s possible to fool everyone into thinking that the majority of their friends are binge drinkers.
In this case, if any student decides not to drink at all, they become an immune agent, which is like the opposite of the beer bingers. Get enough immune agents, and no one will start drinking. This combination of agents is sometimes called culture.
Just like the agents, each idea has its own characteristics. Some agents are attracted to certain ideas more than others. Some ideas are inherently more viral than others.
An idea can stop dead in its tracks if it tries to infect a community with a disposition against it, like trying to promote sex at a celibates’ meetup.
This can be harmful as well. Most people look down on new ideas. That’s what leads to groupthink.
This doesn’t just apply to ideas, but any contagion: real viruses, volunteering, or challenging beliefs.
For example, behavioural contagions explore how people copy the behaviour of others. Sometimes, this contagion leads to perverse outcomes, like copycat crimes and copycat suicides. Here, the agents (humans) are of prime importance. There’s a very specific kind of agent that will commit a crime when they hear of one on the news.
As you’ll see in the game, if a network is designed a certain way, a virus — just like an idea — can infect the entire population.
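A minimal sketch of that threshold contagion (the student names, friendships, and 50% threshold are all made up):

```python
# Toy "complex contagion": a student starts drinking when at least
# `threshold` of their friends already drink.
def spread(friends, drinkers, threshold=0.5):
    drinkers = set(drinkers)
    changed = True
    while changed:
        changed = False
        for student, their_friends in friends.items():
            if student in drinkers:
                continue
            drinking = sum(f in drinkers for f in their_friends)
            if drinking / len(their_friends) >= threshold:
                drinkers.add(student)    # the contagion claims another node
                changed = True
    return drinkers

# A chain network: each student is friends with their neighbours.
friends = {
    "A": ["B"], "B": ["A", "C"], "C": ["B", "D"],
    "D": ["C", "E"], "E": ["D"],
}
print(sorted(spread(friends, {"A"})))   # one drinker infects the whole chain
```

Raise the threshold to 0.6 and nobody past A starts drinking: one friend in two is no longer enough. That's the immune-agent effect in miniature.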
The domino effect gets its name from a bunch of dominos falling one after another.
There’s no rule that says every domino needs to be the same size, though. A falling domino can knock over another about one and a half times its size.
In this fashion, how many dominos do you think it will take to knock down the Empire State Building?
The Empire State Building is 443,000 mm tall (about half a kilometre).
And you start with a domino 5mm tall (the width of your little finger).
The answer is 28. That’s it. Cascades can get out of hand surprisingly quickly.
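A quick sanity check, assuming the commonly cited rule of thumb that a domino can topple one about 1.5 times its size:

```python
# Rule of thumb (an assumption, commonly cited for domino demos):
# each domino can knock over one about 1.5x its size.
height_mm = 5           # first domino: 5 mm
for _ in range(28):     # 28 scaling steps
    height_mm *= 1.5

print(round(height_mm / 1000))  # height in metres: about 426
```

After 28 steps of 1.5x growth, the last domino stands around 426 m, roughly the height of the building. Exponential growth does all the work.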
Except, in this case, you’re building the dominos yourself, so you know what you’re doing and the effort it takes. But what about systems you don’t understand? We’ll come back to this question.
In Chinese whispers above, one mistake coupled with another mistake made the end result very different. It was an error cascading through the people. Same with the traffic jam. Braking too hard was an error cascading through the drivers.
It’s prevalent in academia, too. An article makes a claim without evidence, is then cited by a few papers, which are then cited by a few more, and so on. The citations create the impression of evidence, but all the articles trace back to the same doubtful source. The field has its own name for this: the Woozle effect.
Things start to get unwieldy when we assign new names to the same phenomenon in different fields. Charlie Munger advises against this. In the seminal book Poor Charlie’s Almanack, he recommends:
If something is explainable using a more fundamental discipline, you must use that (with proper recognition) instead of “discovering” new principles in your field to explain it.
The error cascade applies to personal beliefs, too. One wrong thought can snowball into a delusional worldview, since beliefs are built on beliefs.
A credibility cascade at a job: how one person who isn’t competent can rise to the top, by virtue of “previous experience”.
In database systems. ON DELETE CASCADE is an instruction to delete rows from one table when the rows they reference are deleted from another. This creates a connection between tables. With enough connections set up, a single delete could empty all your tables.
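Here’s the idea in miniature, using Python’s built-in SQLite (the table names are made up):

```python
import sqlite3

# ON DELETE CASCADE in SQLite: deleting a user wipes their orders too.
db = sqlite3.connect(":memory:")
db.execute("PRAGMA foreign_keys = ON")   # SQLite needs this enabled
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY)")
db.execute("""CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    user_id INTEGER REFERENCES users(id) ON DELETE CASCADE
)""")
db.execute("INSERT INTO users VALUES (1)")
db.execute("INSERT INTO orders VALUES (10, 1), (11, 1)")

db.execute("DELETE FROM users WHERE id = 1")
print(db.execute("SELECT COUNT(*) FROM orders").fetchone()[0])  # → 0
```

If orders were referenced by shipments, and shipments by invoices, the same delete would ripple through all of them.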
Weather and chaos. Here’s a 10-minute video. Chaos theory deals with deterministic systems that are impossible to predict far into the future: we’d need infinite precision to predict their state indefinitely. Outside influences (things we didn’t consider) interact with the system and change its behaviour. A newborn butterfly can change conditions by flapping its wings, and the change cascades through the system.
A bad morning making the entire day go to shit.
A lack of blood supply releases toxins from affected cells. These toxins kill off more cells, resulting in more toxins being released. This is the ischemic cascade that follows a stroke. Scientists are looking for a way to block it in stroke patients to minimise the damage. It’s a biochemical cascade.
Cascades can happen in every piece of code with connected parts. A service that uses another service that uses another service? The core going down means all others go down, too.
These examples help pattern-match. We can now expand on the nuances in our definition.
A cascade is passing on something from one step to another.
Something can be data, physical resources, or simply intention and social proof.
A step can be a point in time or, more abstractly, an element of a list (one CSS sheet in a list of sheets, one number in the list of natural numbers).
In every example, we are dealing with a system with connected parts. We need some kind of connection between parts to transfer that something.
Since the something isn’t always physical, the connections don’t need to be, either. Which means we can sometimes have a hard time identifying systems.
The relationship is bi-directional. Not only do cascades happen in systems with connected parts, but in every system with connected parts, cascades can happen. Since life on Earth is a system with connected parts, we have lots of opportunities for cascades. And we don’t let them go to waste.
Given that cascades occur in systems with connected parts, we should be able to categorise cascades based on the system they occur in. A good categorisation, like the periodic table of elements, tells us not only how the system will behave, but also information about systems we don’t know yet.
That’s a promising reason to attempt categorisation. Let’s try working backwards through a few examples. This is where our intuition built using all those examples will help.
What’s the difference between the slow internet cascade and the cascading style sheets cascade?
The CSS is static, the slow internet is dynamic.
The CSS doesn’t respond to changes you make in related stylesheets.
No stylesheet says “oh, I’m lower priority, let me try and become higher priority by reducing the Z-index of that obnoxious element.” Hence, static.
However, in the slow internet cascade, every router responds to changes you make in other routers. When New York went down, Miami tried to route things to Morocco instead. Hence, dynamic.
We created the CSS system. It’s legible. We control the rules for cascades.
We created the internet too. And we created algorithms to tell every node where to route the traffic, depending on how busy the nodes are. We control the rules here, too.
We understand the rules in both systems, so they’re both legible systems.
But what about the weather? Heating up one part of the world would affect every other part, so it’s dynamic. What are the rules governing weather? It’s illegible. It’s unpredictable. It’s chaotic.
This is where the small distinction between the different effects we know uncovers itself.
Dominos trigger thoughts of equal sized cuboids pushing against each other to do something cool. It’s a static legible system. You know what each domino will do. You can stop the cascade by removing one domino. Removing each domino takes equal effort.
With a snowball, you’re not sure how much snow can stick to the ball. But you know what the snowball is doing and where it’s going. It’s a dynamic legible system. You can still stop the snowball, but it’s harder than the dominos. The more you let the snowball run, the more force you need to stop it.
With the butterfly effect, it’s a random butterfly flapping wings in one corner of the world and a tornado somewhere else. Yeah, no idea how that happened. It’s an illegible dynamic system. You don’t know if stopping something will make it better or worse, like this time China tried to stop the rain.
They’re all cascades, acting on different kinds of systems.
With this segregation in place, we can figure out how to influence cascades.
We’ll skip static systems, since they aren’t too interesting. We don’t need to influence them, we can control them.
Legible Dynamic systems
A power station can face a cascading failure just like the slow internet case. A substation going down means demand being routed to neighbouring substations. And if they can’t handle the load, they shut down as well. The neighbours of the neighbours then face a much higher load, themselves shutting down, putting the entire power grid out of business.
You’re in control of the power grid. What can you do to ensure a substation failing doesn’t cascade and bring down the entire grid? Remember, you have to make a trade-off between high availability (trying your best not to shut down power) and minimal collateral damage (minimise blackout area when you have to shut down).
Here’s what people actually do: (emphasis mine)
Monitoring the operation of a system, in real-time, and judicious disconnection of parts can help stop a cascade. Another common technique is to calculate a safety margin for the system by computer simulation of possible failures, to establish safe operating levels below which none of the calculated scenarios is predicted to cause cascading failure, and to identify the parts of the network which are most likely to cause cascading failures.
The most important property here about legible systems is that we can run accurate simulations. So, we can figure out where things can go wrong.
“In real time” is an important component of managing any system. You need fast feedback loops to tell you what’s going on. The quicker information gets to you, the faster you can make decisions based on that information.
In ancient times, a limiting factor in warfare was communication. If you didn’t know where your army was, that army was as good as gone. That’s why you needed scouts. Scouts were sensors to tell you where the enemy and your army were. It was a feedback loop, but hardly real time. Things are better now with warfare — we have satellites and video feeds for direct feedback.
The cavalry would then attempt to run into the enemy and sever communications between generals and soldiers. Infantry would then proceed to attack the disoriented soldiers, weakened from the previous attacks. — Ancient warfare
But, several other systems lack these feedback loops, which makes managing them harder.
As Peter Drucker said, “What gets measured, gets managed”.
Finally, managing a cascade involves judicious disconnection. With power grids, these are called blackouts.
In the system view, it’s cutting the connection between two parts. We’re cutting the graph.
How do you decide which part to disconnect? The simplest solution is to disconnect the failing substation from everything else. Only the area it delivers gets affected, everyone else goes on as normal.
There’s been some interesting research which found an optimal cutting algorithm. It recommends cutting the nodes with the smallest difference between capacity and generation: that is, the nodes which generate a lot more load than they can handle. The leechers. This can happen in a locality with a small power plant but millions of people, such that on a regular day this power plant needs help from neighbouring plants to satisfy demand. It’s a load generator for the other plants.
An implication of this result is that sometimes it’s better to shut down power in an area different from where the failure occurred.
Before jumping into illegible dynamic systems, let’s take a moment to appreciate the difference between legible and illegible dynamic systems.
Why are internet shutdowns less frequent than tornadoes?
If we had more influence on the weather, we could stop tornadoes from happening. But we aren’t there yet. I can’t imagine the amount of resources geoengineering would need.
Influencing the internet is much easier. We built it, we decide the rules of the game, and we can change these rules when second order effects look harmful. Figuring out how to change these rules is still a complex problem.
Even in legible systems, internet outages and blackouts still happen. So you can imagine how much more difficult things get in an illegible one.
Illegible dynamic systems
You’re a big oil company at an auction. You’re targeting lot #78, one oil drilling site.
No one yet knows if there’s oil. 5 companies are competing in the auction.
Before the auction, all 5 get to test the drilling site. So, they run their tests. Here’s the catch: the tests aren’t always accurate. They have a 40% failure rate. With enough tests, you could say with reasonable confidence that the site is worth drilling in. But tests are expensive, and this isn’t the only oil drilling site you want to focus on. There are several others, too. For now at least, since we haven’t run out of oil.
So, how do you figure out if it’s worth drilling? You rely on signals: public knowledge and bets of other companies.
Company A had a positive test, so it bids high. Company B had a negative test, but it knows that the tests can be flaky, so it matches company A.
Company C sees both A and B bid high, so it disregards its negative test and bids high too.
Company D had a negative test as well, but it thinks all A, B, and C had positive tests, so it bids high too.
… and so on.
This is an information cascade.
There’s public information, visible to everyone, and private information, which each company protects. No company knows about the test results of the other company.
Private information is usually augmented by public information. But like in this case, what if there isn’t enough public info, and actions coming from your private info augment public info?
This happens because beliefs aren’t in line with actions. Everyone else might look fine, while harbouring their doubts internally.
Something very important happens once somebody decides to follow blindly his predecessors independently of his own information signal, and that is his action becomes uninformative to all later decision makers. Now the public pool of information is no longer growing. That welfare benefit of having public information has ceased. — Algorithms to Live By
This system is illegible because we can’t tell who is augmenting public information, and who is using public information to augment their private information. There’s no simulation which predicts the right configuration every time. Human brains are messy.
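The companies’ story can be sketched deterministically (following the herd after a single prior bid is extreme, but it mirrors company B’s reasoning about flaky tests):

```python
# Toy information cascade: each company sees all earlier public bids.
# Once the public pool leans one way, a company ignores its own flaky
# private test and follows the herd.
def decide(private_tests):
    bids = []
    for test in private_tests:
        highs, lows = bids.count("high"), bids.count("low")
        if highs != lows:
            bids.append("high" if highs > lows else "low")  # follow the herd
        else:
            bids.append("high" if test == "positive" else "low")  # use private info
    return bids

# Only company A's test was positive, yet every company bids high:
print(decide(["positive", "negative", "negative", "negative", "negative"]))
# Flip A's test and the same herd bids low, despite four positive tests:
print(decide(["negative", "positive", "positive", "positive", "positive"]))
```

Notice that after the first bid, no bid carries any information: the public pool stops growing, exactly as the quote above describes.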
We can attack this problem from two sides: increase private information, or change the rules of the game to make it hard for cascades to occur.
Increase private information
If the companies are confident in their own tests, they’d be more reluctant to augment private information using public information.
We can increase the confidence in their tests by making the tests cheaper or more accurate.
If the tests were 100% accurate, or if they had 100% confidence in their private information, none of the companies would rely on public signals to make their decisions.
Usually, 100% confidence is correlated with a delusional or visionary CEO.
Change the rules of the game
The standard English auction makes bids public. There’s a good reason for this — people want to pay the minimum they can for things they want. When they know how much others are willing to pay, they can adjust their bid, or their desire, accordingly.
In practice, there’s lots of other psychological effects acting on you that make you pay more than you intended to.
As a general rule, whenever the dust settles and we find losers looking and speaking like winners, and winners wondering what a mess they’ve made, we should be especially wary of the conditions that kicked up the dust: usually, open competition for a scarce resource. This shows up frequently in auctions, with winners spending way more than they intended to. — Influence, Timeliness of Scarcity
The Vickrey auction is designed to fix this problem. Here, the bids are private, and the winner pays the second-highest bid. The system is designed such that the optimal strategy is to bid the maximum value you’re willing to pay. It’s a forcing function that reconciles beliefs with actions.
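The mechanism fits in a few lines (bidder names and amounts are made up):

```python
# Second-price (Vickrey) auction: the highest private bid wins,
# but the winner pays the second-highest bid.
def vickrey(bids):
    """bids: dict of bidder -> private bid. Returns (winner, price_paid)."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner, runner_up = ranked[0], ranked[1]
    return winner, bids[runner_up]

print(vickrey({"A": 120, "B": 100, "C": 90}))  # A wins, but pays B's bid of 100
```

Bidding above your true value only changes the outcome when it wins you something priced above what it’s worth to you; bidding below only loses you deals you’d have wanted. So bidding your true maximum dominates, and actions line up with beliefs.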
We can generalise from these two examples.
First, notice the type of system you’re dealing with. That determines the kind of tools you can use.
Since a cascade is a property of a system, rules of the system apply.
Donella Meadows ranked the possible leverage points in increasing order of effectiveness.
- Constants, parameters, numbers (such as subsidies, taxes, standards).
- The sizes of buffers and other stabilizing stocks, relative to their flows.
- The structure of material stocks and flows (such as transport networks, population age structures).
- The lengths of delays, relative to the rate of system change.
- The strength of negative feedback loops, relative to the impacts they are trying to correct against.
- The gain around driving positive feedback loops.
- The structure of information flows (who does and does not have access to information).
- The rules of the system (such as incentives, punishments, constraints).
- The power to add, change, evolve, or self-organize system structure.
- The goals of the system.
- The mindset or paradigm out of which the system — its goals, structure, rules, delays, parameters — arises.
- The power to transcend paradigms.
Of particular interest to us are changing the rules of the game, managing information flow, and managing feedback loops.
So, if you have enough resources, change the rules of the game, like we did for the information cascades in auctions.
The next best thing is managing information flow. This is the real time feedback in the power station, and better tests for oil drilling sites to augment private information.
The next best thing is managing feedback loops. The control rods in nuclear reactors absorb neutrons and balance the reinforcing loop.
These are the most common leverage points for cascades. But, nothing stops us from exploring the other leverage points.
For example, with phantom traffic jams, the answer was buffers. Keeping your distance lets you avert a jam.
Not all cascades need fixing, though. Sometimes, you want to create cascades. Sometimes, you want to turn a blind eye. Sometimes, cascades are the missing link to success. The same principles apply.
For example, the idea of chaining habits together is a cascade. I’ve set things up such that as soon as Eye of the Tiger starts playing, I start singing along, change my clothes, and end up at the gym. Once I’m at the gym, I row. Once I’ve rowed, I feel like I’m sweating already, which means I’ll have to take a bath, which means I might as well do the full workout. It took a bit of time for the habits to materialise, but now things are mostly automatic.
Another example is paying off debt. Start small, and keep paying all the debts one by one. The effect here may be psychological, but that doesn’t make it any less real. Unsurprisingly, this method of debt repayment is called the debt snowball.
Cascades in daily life
We’re a part of several systems, each of which has its own cascades.
A bad morning making the entire day go to shit? That’s an emotional cascade.
A tough day at work leaking into a bad night at home? That’s an emotional cascade too.
“Just one more episode” turning into a Netflix binge? Another cascade.
A daughter turning vegan, which in turn converts the entire community to veganism? That’s possible; it has happened, and N. Taleb wrote about it. It’s the minority rule, a case of a social cascade.
If they’re illegible to you right now, step 1 is noticing and making them legible. TAPs for noticing are useful here. How do you feel right before the Netflix binge? Can you figure out when the cascade is triggered? The rules of the game are in your control. You’ll do better if you can make things legible.
If you’re eating too much crap, you can change your system (environment). Get rid of the junk. That’s changing the rules of the game. It’s worthwhile noticing how this is a cascade.
For me, it usually looks like this.
Open fridge -> notice crap -> primitive brain lights up -> take 1 -> take many -> oh shit, guilt.
Tracking weight every day while trying to lose weight is managing the information flow. The closer you get to real time sensors, the more you understand. Continuous glucose monitors take that a step further. You can see how certain food items spike your glucose and make you crash. That’s one way to prevent afternoon crashes.
You have to get a little creative with the solutions here, but this framework guides the solution using the principles of the system. This is important, since it means that given an accurate model of the system, the fixes derived from system principles will work. Working towards an accurate model is the tricky bit.
With multiple people, coordination problems emerge. It’s like how sociology emerges from psychology. Let’s look at these next.
Solving coordination problems
Two members of the mafia, Jim and Jane, are arrested. There’s not enough evidence for the main offence, drug dealing, but the police have enough on both of them for a smaller offence, resisting law enforcement. They’re both separately offered a deal: sell out your friend and you’ll go free. If both sell each other out, both are imprisoned for drug dealing. If only one of them sells out, that person goes free while the silent one takes the fall. If both stay silent, they’re both imprisoned for resisting law enforcement.
This is a famous game from Game Theory called the Prisoner's Dilemma. It's sometimes considered a paradox.
In game-theoretic terms, the Nash equilibrium, the most sensible strategy for each of them individually, is to sell the other out.
Here's where the paradox arises. If both of them could coordinate and stay silent, both of them would be better off.
The critical piece is coordination.
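The dilemma can be sketched as a brute-force equilibrium check. The sentence lengths below are illustrative, not from the story; the point is the structure of the payoffs.

```python
from itertools import product

# Payoffs as years in prison (lower is better), indexed by
# (Jim's move, Jane's move). Sentence lengths are illustrative.
SILENT, BETRAY = "silent", "betray"
payoffs = {
    (SILENT, SILENT): (1, 1),   # both do a short stint for the minor offence
    (SILENT, BETRAY): (10, 0),  # Jim stays silent, Jane walks free
    (BETRAY, SILENT): (0, 10),  # Jane stays silent, Jim walks free
    (BETRAY, BETRAY): (8, 8),   # both convicted of drug dealing
}

def is_nash_equilibrium(jim, jane):
    """Neither player can reduce their own sentence by switching alone."""
    jim_years, jane_years = payoffs[(jim, jane)]
    for alt in (SILENT, BETRAY):
        if payoffs[(alt, jane)][0] < jim_years:
            return False  # Jim would rather switch
        if payoffs[(jim, alt)][1] < jane_years:
            return False  # Jane would rather switch
    return True

equilibria = [moves for moves in product((SILENT, BETRAY), repeat=2)
              if is_nash_equilibrium(*moves)]
print(equilibria)  # [('betray', 'betray')]
```

Mutual betrayal is the only outcome where neither player gains by unilaterally changing their move, even though mutual silence would leave both better off.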
Now, consider the entire system. They are just two members of the mafia. If they start selling each other out, it can create a cascade that collapses the entire mafia family. The police would rope in bigger and bigger bosses, offering each a deal to sell out the boss above them.
They'll catch the lowest rung and convince them to sell out their bosses for a lighter sentence. These now-sold-out mid-level bosses then become the ticket to the top-level boss. The dominos fall.
For a mafia to survive, they need systems in place to stop these cascades from happening.
Becoming an official member of a Mafia family traditionally involved an initiation ceremony in which a person performed such rituals as pricking his finger to draw blood and holding a burning picture of a patron saint while taking an oath of loyalty. Italian heritage was a prerequisite for every inductee and men often, though not always, had to commit a murder before they could be made.
This aligned incentives. You couldn’t leave the family anymore. They’d tell the police about your murder, and you’d end up behind bars. Worse still, you might be used in the initiation ceremony of your replacement.
Becoming a member of the Mafia was meant to be a lifetime commitment and each mafioso swore to obey omerta, the all-important code of loyalty and silence. Mafiosi were also expected to follow other rules, including never assaulting one another and never cheating with another member's girlfriend or wife.
The head of the mafia held everyone accountable. Anyone not following the rules could expect death. This changed the payoff matrix for every member, and it promoted coordination: there was a central coordinator to align everyone's actions.
Coming back to Jim and Jane, if they are part of the mafia with the above rules, their payoff matrix changes. Staying silent gives them brownie points and increased trust from the mafia. Breaking omerta and betraying a fellow member means death. If both betray, they'll both be locked up for a long time, and if they do eventually get out, the mafia will be waiting for them.
This changes the Nash equilibrium. Staying silent now becomes the optimal strategy.
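As a sketch with illustrative costs (prison years plus a huge penalty standing in for the mafia's revenge), a brute-force check over each player's options shows how the equilibrium flips:

```python
from itertools import product

# Same game, but the mafia punishes betrayal. Costs are illustrative:
# prison years, plus a huge penalty standing in for the mafia's revenge.
SILENT, BETRAY = "silent", "betray"
MAFIA_PENALTY = 1000  # breaking omerta means death, modeled as a huge cost
payoffs = {
    (SILENT, SILENT): (1, 1),
    (SILENT, BETRAY): (10, 0 + MAFIA_PENALTY),
    (BETRAY, SILENT): (0 + MAFIA_PENALTY, 10),
    (BETRAY, BETRAY): (8 + MAFIA_PENALTY, 8 + MAFIA_PENALTY),
}

def is_nash_equilibrium(jim, jane):
    """Neither player can reduce their own cost by switching alone."""
    jim_cost, jane_cost = payoffs[(jim, jane)]
    for alt in (SILENT, BETRAY):
        if payoffs[(alt, jane)][0] < jim_cost:
            return False
        if payoffs[(jim, alt)][1] < jane_cost:
            return False
    return True

equilibria = [moves for moves in product((SILENT, BETRAY), repeat=2)
              if is_nash_equilibrium(*moves)]
print(equilibria)  # [('silent', 'silent')]
```

Nothing about the game's logic changed, only the payoffs. That's what it means to change the rules of the game: make the cooperative outcome the individually rational one.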
It's hard to go this route at the scale of countries, though, which is why some coordination problems are still problems.
To end, here are the different ways of influencing cascades we’ve explored.
A clean cut intervention
In a static legible system, remove the domino. Don’t use the stylesheet.
In a dynamic legible system, safeguard connections. Cut nodes. Black out.
Stop the mistake from spreading. Quarantine. In dynamic systems, the longer a cascade keeps running, the harder it becomes to recover from.
Manage information flow
When the problem is not enough connected nodes, we can connect more nodes (info cascades) to align actions and beliefs.
In coordination problems, we can look at the incentives for each party and make those visible to everyone.
Design the system to enforce one of the above
Interventions and information flows are band-aids. Sometimes, we need a longer term solution.
Band-aids will disintegrate over time, unless the system puts in incentives to not let that happen. Like a nurse who redresses the wounds every week.
Another way out is to design the system so it doesn't need the band-aid anymore, like in the Vickrey auction.
Look at other leverage points in the system
Sometimes, the solution is as simple as having a buffer, like in the phantom traffic jam.
Next time you're thinking about a cascade, or hear the words snowball, domino, or butterfly, let them trigger the idea of cascades. They're the same thing, disguised as different by the system they act on. It just hadn't been made explicit yet.
Thanks to Dev Kakkar, Nishit Asnani, and Ariane Broquet for reading drafts of this.
Originally published at https://neilkakkar.com on March 5, 2020.