Ben C. Davis

Software Engineer

Notes On The Great Mental Models

by Rhiannon Beaubien and Shane Parrish

Introduction

Three reasons we disconnect from reality:

  1. Perspective - we don’t have all the information on a situation, so we make decisions based on incomplete data. It’s hard to see the system when we’re inside of it.
  2. Ego-induced denial - we have too much invested in our own opinions, so we cannot learn from the world. Two reasons: fear of criticism stops us from putting ideas out there, and defensiveness when we do.
  3. Distance from consequence - when our decisions don’t have an immediate impact on us. The opposite of putting our hand on a hot stove. The further we are from the consequences, the easier it is to convince ourselves our decisions weren’t wrong.

We often tend to ignore simple solutions while valuing complex ones, regardless of correctness.

Understanding of reality is only full when we adjust our behaviour. Not just theory but action.

Ego can be useful. It helps us do difficult things like start a business.

A variety of models is key. Too often we’re restricted to just the models that come from our profession.

The map is not the territory

The best map is still a reduction of the information it represents. Complex to simple. To represent all the information would be impossible.

It’s an abstraction, and it’s the only way we can operate in the complexity of reality.

News is an abstraction. A reduction of events.

We must remember that it’s an abstraction, and remember its boundaries. Don’t create rules which assume the abstraction contains the whole.

Abstractions must be flexible, as the data they abstract changes.

Physicists are good at acknowledging the limits of a model or abstraction, for example Newtonian physics.

Models can become dangerous when the fixed assumptions of a model are assumed to be fixed in reality.

All models are wrong, but their wrongness is balanced by their usefulness.

3 considerations:

  1. Reality is the ultimate update
  2. Consider the cartographer
  3. Maps can influence territory

Stereotypes are like maps. But people are far too complex for such simple reduction.

Maps are captured at a moment in time, so they may be out of date.

We have to think about the context that influenced the creation of the map, model, or abstraction.

Models are not the reality.

The tragedy of the commons: when individuals have access to a shared resource, free of cost to the individual, each will act in self-interest, causing the depletion of the resource.

  • apparently there are many examples where this doesn’t happen. The original thesis of the tragedy of the commons has been widely criticised. It seems “the unmanaged commons” is a qualification required to make the theory work.

Circle of competence

People have circles of competence. Be aware of yours and of others’, and of advice-givers’ hidden incentives.

Here’s a summary from Goodreads:

If you want to improve your odds of success in life and business then define the perimeter of your circle of competence, and operate inside. Over time, work to expand that circle but never fool yourself about where it stands today, and never be afraid to say “I don’t know.”

First principles thinking

Goes back to the ancient philosophers who sought fundamental axioms.

First principles are the boundaries of a problem area. They don’t have to be universally axiomatic, just axiomatic enough given the situation.

Two ways to figure out first principles:

  1. Socratic questioning. Always ask: why do I think this? How do I know this is true? What are the sources? What might others think? What are the consequences? Question the original questions.
  2. Five whys. Just repeatedly ask why; eventually you land on a what or a how. Separate replicable knowledge from assumption. Avoid “it just is” answers.

Thought experiments

Generally follows these steps:

  1. Ask a question
  2. Conduct background research
  3. Construct hypotheses
  4. Test with thought experiments
  5. Analyse outcomes and draw conclusions
  6. Compare to hypotheses and adjust accordingly

Allows you to determine which variables might contribute to different outcomes, and to remove variables that would otherwise be physically impossible to remove. E.g. if you had all the money in the world.

Alternative histories. Historical counterfactuals. But history is a chaotic system, so we have to be careful when imagining a different outcome based on small changes. Like weather: a small change has a huge, unpredictable impact on a chaotic system. Historical counterfactuals are an easy way to trick yourself.

But if we use a thought experiment, it could be possible to continue exploring the counterfactuals by unveiling the hidden events that may have happened if an initial different event had occurred. We can still think probabilistically about each of these outcomes. E.g. was the Archduke’s assassination the key that led to war? Would war have happened otherwise? We’d need to understand the historical context to determine whether the assassination was the cause or merely a trigger, the absence of which could have caused another trigger to take its place.

Essentially: run thought experiments for different outcomes and measure the probability of each alternative for a given outcome. We can use this to determine how responsible we were for a given outcome that’s already happened. E.g. were we responsible for a financial gain on the market? Or just luck?

Second order thinking

Thinking holistically about our actions and their consequences and the consequences of those consequences, and so on. The effects of effects.

The law of unintended consequences. E.g. better traction on tires causes worse mileage.

“Stupidity is the same as evil if you judge by results” - Margaret Atwood.

Another example: overuse of antibiotics leading to bacterial resistance.

First law of ecology: you can never only do one thing.

It’s not a method for predicting the future - there are limits to how far ahead we can understand. But we should try to understand the web of effects we operate within.

Might mean delayed gratification.

Arguments are more effective when we’ve evaluated the 2nd order effects and can use them to bolster our argument. E.g. women’s rights were argued by suggesting that better education would allow them to support their husbands better.

Slippery slope argument: the idea that one decision now will inevitably lead to some negative outcome. In real life, however, people always have control and agency. Analysis paralysis can happen when we think about the slippery slope too much; in actuality there’s a limit to how many effects we can predict, so don’t worry so much.

Always ask the question: and then what? Consequences have consequences. Always look at them with the information we have. But avoid slippery slope.

Probabilistic thinking

Essentially: using math to estimate the likelihood of a given outcome.

Comes from our lack of perfect information.

The best we can do is generate useful probability estimates.

3 important aspects:

  1. Bayesian thinking
  2. Fat tail curves
  3. Asymmetries

Bayesian thinking/updating: how we should adjust probabilities when we encounter new data. Essentially: take into account the existing information we have when considering a new probability. E.g. look at a new data trend in its historical context. Homicide rates may have increased, but within that context they’re still dramatically lower than they used to be and have only marginally increased. The priors might not always be relevant - they’re still probabilities themselves.

The Bayes factor is a measure of the strength of one theory over a competing theory: the ratio of how likely the evidence is under one hypothesis versus the other. An ongoing cycle of challenging and updating what we know.
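A minimal sketch of the update rule, using the crime-trend example above with illustrative numbers of my own (none of these figures are from the book):

```python
# Hypothesis H: "crime is rising". Evidence: one alarming headline.
# All probabilities here are assumptions for illustration.

def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return P(H | evidence) via Bayes' rule."""
    numerator = p_evidence_given_h * prior
    total = numerator + p_evidence_given_not_h * (1 - prior)
    return numerator / total

prior = 0.10  # historical context: rates have fallen for decades
posterior = bayes_update(prior, 0.9, 0.6)  # headlines appear either way
print(round(posterior, 2))  # 0.14 - a modest update, not a reversal

# The Bayes factor is the likelihood ratio of the evidence under the two
# competing hypotheses: 0.9 / 0.6 = 1.5, i.e. only weak evidence for H.
```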

Fat tail curves: in a bell curve there’s a limit to deviation from the mean. Not so in fat tail curves - there’s no limit on the extremes. E.g. wealth has no real limit on its extremes, whereas weight does. So we should consider which kind of curve we operate in and judge our expectations of extremes accordingly.
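A quick simulation of my own to contrast the two regimes:

```python
import random

random.seed(42)
n = 100_000

# Thin tail: a bell curve. Extremes stay within a few standard deviations.
normal_max = max(random.gauss(0, 1) for _ in range(n))

# Fat tail: a Pareto distribution (wealth-like). No practical cap on extremes.
pareto_max = max(random.paretovariate(1.5) for _ in range(n))

print(normal_max)  # roughly 4-5
print(pareto_max)  # can be in the hundreds or thousands
```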

Asymmetries: meta-probability - the likelihood of our own probabilistic estimates being correct. E.g. we’re far more likely to be overconfident in our estimates of stock investing returns.

Generally: roughly identify what matters, come up with the odds, check our assumptions, then make a decision. We can never know the future with exact certainty, but we can evaluate probability.

Independent vs dependent outcomes. It’s crucial to determine which is which in a situation. Dependent is when an outcome’s likelihood is shaped by the outcome of a preceding one.
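A tiny illustration of my own of the difference:

```python
# Independent: coin flips. The first flip tells you nothing about the second.
p_two_heads = 0.5 * 0.5           # 0.25

# Dependent: drawing aces without replacement. The second draw's odds shift
# based on what the first draw removed from the deck.
p_two_aces = (4 / 52) * (3 / 51)  # ~0.0045

print(p_two_heads, p_two_aces)
```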

Antifragile: three types of objects:

  1. Objects that are hurt by volatility
  2. Ones that are neutral
  3. Ones that benefit

The third is antifragile, and that’s how we want to be, as the world is fundamentally unpredictable, chaotic, and full of fat tails.

How we can prepare for volatility:

  1. Upside optionality: seek situations that give good odds of creating opportunities. E.g. a cocktail party where we might meet interesting people. Staying at home gives a 0% chance.
  2. Fail properly. Never take a risk that will take you out of the game completely. Generate the resilience to learn from failure. It gives the antifragile gift: learning. We can learn from a volatile world. Trial and error creates learning.

Generally: create situations where we can benefit from volatility.

Causation vs correlation. We mistakenly conclude that two events are related by cause and effect when they’re just happenstance. The correlation coefficient runs from -1 to 1 and measures the weight of shared variables’ importance: 0 = no overlapping contributing variables, ±1 = perfect correlation. E.g. the correlation between temperature in Celsius and Fahrenheit is 1 - an increase in one has an exact corresponding increase in the other. Height and weight have a moderate coefficient: there are some shared factors, but also other factors that contribute towards weight. Basically: causation is rare.
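A small check of my own using the Celsius/Fahrenheit example (statistics.correlation needs Python 3.10+):

```python
from statistics import correlation

celsius = [0, 10, 20, 30, 40]
fahrenheit = [c * 9 / 5 + 32 for c in celsius]

# A perfect linear relationship gives a coefficient of 1.0.
print(correlation(celsius, fahrenheit))  # 1.0
```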

Whenever correlation is imperfect, extremes will soften over time. The worst will get better and the best will get worse. This is regression to the mean. Example: there is a statistically significant decrease in depression for children who drink energy drinks. Or hug cats. But this is only due to regression to the mean: depressed people are an extreme group, so just wait and some of that group will regress to the mean and stop being as depressed. This is why control groups are crucial. The regression to the mean will happen in both groups, so we’re now looking for statistical significance with the regression controlled.
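A simulation sketch of my own showing the effect: select an extreme group on a noisy measurement, re-measure with no treatment at all, and the group “improves”:

```python
import random

random.seed(0)

def measure(trait):
    """One noisy measurement: a stable trait plus day-to-day noise."""
    return trait + random.gauss(0, 10)

traits = [random.gauss(50, 10) for _ in range(10_000)]
first = [(t, measure(t)) for t in traits]

# Select the extreme group on the first measurement, the way a study
# selects its most depressed participants.
extreme = [(t, s) for t, s in first if s > 75]

avg_before = sum(s for _, s in extreme) / len(extreme)
avg_after = sum(measure(t) for t, _ in extreme) / len(extreme)

print(avg_before)  # ~80: selected partly for unlucky noise
print(avg_after)   # ~65: "improved" with no intervention at all
```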

In the real world we can’t always create a control, so we need to look at historical data or whatever else we can.

Inversion

Take the opposite approach to a problem. Think in reverse. Start at the end. Assume the worst, then determine what would have to be true for that outcome to happen. Avoiding stupidity is easier than seeking brilliance. Instead of aiming for a goal, figure out what to avoid instead.

Assume an axiom is true and then figure out the consequence of that truth and see what would happen.

Essentially: ask what the outcome would imply, then work backwards to find those implications.

Accruing wealth is easiest by avoiding loss. Become rich by avoiding being poor. That inversion led to index funds.

Consider obstacles to change in addition to the steps you can take to enable change.

Essentially: invert when you’re stuck. Start with the logical outcome of your assumptions, then validate those outcomes, instead of first seeking evidence for the assumptions themselves.

Occam’s razor

Simpler explanations are more likely to be true than complex ones. A likely, simple explanation is significantly more probable than an improbable, complex one.

Identify and commit to the simple explanation with the fewest moving parts. It’s easier to understand and disprove. If two explanations have equal explanatory power, the simpler is more likely.

Miracles are rare. Simple commonplace explanations are not.

The math behind it comes from the dependent variables inherent in the explanations. If the simple explanation only requires two variables to be a certain way, but the complex one requires 30, the simple one is a more robust explanation in the face of uncertainty and volatility.
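A rough worked example with assumed numbers:

```python
# If each independent moving part holds with probability 0.9, fewer parts
# means a far more robust explanation.
p_simple = 0.9 ** 2    # two assumptions    -> 0.81
p_complex = 0.9 ** 30  # thirty assumptions -> ~0.04
```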

Extraordinary claims require extraordinary evidence.

Sometimes, though, things really are just complex.

“When you hear hoof sounds, think horses not zebras.”

Hanlon’s razor

We should not attribute to malice that which is more easily explained by stupidity.

In a complex world, it helps us avoid paranoia and ideology.

The explanation most likely is the one that requires the least amount of intent.

The fallacy of conjunction: we’re susceptible to available evidence, even if entirely unrelated, if it happens to occur in proximity to what we already believe. It causes us to believe logically improbable things because they’re proximate to the explanation, rather than recognising they’re entirely unrelated. The example given is a description of a woman in her 30s who studied philosophy; people are then asked whether she’s more likely a bank teller, or a bank teller and a feminist. Because of the description, people say the latter, even though it’s statistically less likely, as it depends on multiple things being true.
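The conjunction rule in numbers (illustrative figures of my own):

```python
# A conjunction can never be more probable than either of its parts.
p_teller = 0.05
p_feminist_given_teller = 0.5
p_teller_and_feminist = p_teller * p_feminist_given_teller  # 0.025 <= 0.05
```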

Always assuming malice puts you at the centre of everyone else’s world.

Don’t assume people are out to get us. It puts us in a defensive mode and restricts our openness to what’s really going on.

There are fewer true villains than we think.