Black Box Thinking
By Matthew Syed
The central idea of Black Box Thinking is the claim that, across varied disciplines, “the explanation for success hinges, in powerful and often counter-intuitive ways, on how we react to failure”.
Aviation vs Healthcare — Closed vs Open Loops
This is illustrated by comparing the aviation industry with healthcare, two industries with contrasting cultures when it comes to failure. The aviation industry equips every aircraft with two black boxes which record the activity of the plane. When there is an accident the “boxes are opened, the data is analyzed, and the reason for the accident evaluated”, a method of analysis that has helped produce an accident rate of one per 2.4 million flights.
In contrast, it is reported that between 44,000 and 98,000 Americans die each year as a result of preventable medical errors. Testifying to the Senate, a professor at Johns Hopkins claimed that medical error causes a number of deaths equivalent to two jumbo jets falling out of the sky every twenty-four hours. When an error occurs in a hospital, the prevailing culture within medicine is not to investigate but to “practice and reward the concealment of errors”.
When a mistake occurs in aviation, independent investigators are given free rein to conduct an investigation. “Mistakes are not stigmatized, but regarded as learning opportunities”. Once the report is finished it is made public, and every pilot in the world has free access to the data.
“When pilots experience a near-miss with another aircraft or have been flying at the wrong altitude, they file a report. Provided that it is submitted within ten days, pilots enjoy immunity”.
Medicine is a closed-loop system, in which failure does not lead to progress. Aviation is an open-loop system, in which feedback is rationally acted upon and failure leads to progress.
Black Box Thinking
For organisations beyond aviation, it is not about creating a literal black box; rather, it is about the willingness and tenacity to investigate the lessons that often exist when we fail, but which we rarely exploit. It is about creating systems and culture that enable organisations to learn from errors, rather than being threatened by them.
Matthew argues that it is only by looking at contradictory data that we can truly grow. Take, for example, the hypothesis that water boils at 100C. This seems true, but as we know the hypothesis breaks down when water is boiled at altitude. “By finding the places where the theory fails, we set the stage for the creation of a new, more powerful theory: a theory that explains both why water boils at 100C at ground level and a different temperature at altitude. This is the stuff of scientific progress”.
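As a rough illustration of how the hypothesis breaks down, here is a back-of-envelope sketch using the common rule of thumb that water’s boiling point drops by roughly 1C per 300m of elevation (the numbers and the linear approximation are mine, not the book’s):

```python
def approx_boiling_point_c(altitude_m: float) -> float:
    """Rule-of-thumb boiling point of water in Celsius at a given altitude.

    Uses the common approximation of a ~1C drop per 300 m of elevation;
    a rough guide only, not a physical law.
    """
    return 100.0 - altitude_m / 300.0

print(approx_boiling_point_c(0))     # sea level: 100.0
print(approx_boiling_point_c(3600))  # roughly the altitude of La Paz: 88.0
```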
“By looking only at the theories that have survived, we don’t notice the failures that made them possible. This blind spot is not limited to science; it is a basic property of our world and it accounts, to a large extent, for our skewed attitude to failure. Success is always the tip of an iceberg…Beneath the surface of success — outside our view, often outside our awareness — is a mountain of necessary failure.”
Taleb has pointed out that you could observe a million white swans, but this would not prove the proposition: all swans are white. The observation of a single black swan, on the other hand, would conclusively demonstrate its falsehood (problem of induction).
Toyota Production System (TPS)
Matthew describes how Dr Gary Kaplan, recently appointed as CEO of the Virginia Mason Health System in Seattle, visited Toyota to see if he could learn anything.
He saw the TPS (precursor to lean manufacturing) and how “if anyone on the production line is having a problem, or observes an error, they pull a cord which halts production across the plant. Senior executives rush over to see what has gone wrong and, if an employee is having difficulty performing their job, help them. The error is then assessed, lessons learned, and the system adapted”.
Jidoka is a Toyota-created word that has the same Japanese pronunciation as automation, but with added connotations of humanistic value creation. Thus, jidoka means intelligent automation.
The jidoka concept is that work stops automatically or product is rejected when a fault occurs, reducing waste. The concept originated when Sakichi Toyoda, founder of the Toyota Group, invented a textile loom that stopped automatically when a thread broke. Previously, looms continued churning out mounds of defective fabric until an operator noticed the problem.
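The cord-pulling mechanism can be sketched as a toy program (an illustrative model of the idea, not Toyota’s actual system): each unit is inspected as it is processed, and a single defect halts the entire line rather than letting faulty product pile up.

```python
class LineStopped(Exception):
    """Raised when a station 'pulls the cord' and halts the line."""

def run_line(units, inspect):
    """Process units in order; stop the entire line on the first defect."""
    completed = []
    for unit in units:
        if not inspect(unit):
            # "Pull the cord": halt everything so the fault can be examined.
            raise LineStopped(f"defect in unit {unit!r}; line halted")
        completed.append(unit)
    return completed

# Units are thread lengths; a broken thread (length 0) is a defect.
try:
    run_line([5, 5, 0, 5], inspect=lambda length: length > 0)
except LineStopped as stop:
    print(stop)  # the loom stops instead of churning out defective fabric
```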
Cognitive Dissonance leads to closed loops!
“In the field of psychology, cognitive dissonance is the mental discomfort experienced by a person who holds two or more contradictory beliefs, ideas, or values. This discomfort is triggered by a situation in which a person’s belief clashes with new evidence perceived by the person”
Matthew describes the problem of cognitive dissonance: people feel inner tension when their beliefs are challenged by new evidence, and rather than confronting the possibility that they were wrong, they simply “reframe” the contradictory evidence.
Matthew describes a study by Elliot Aronson and his colleague Judson Mills (see Aronson’s book Mistakes Were Made (But Not by Me)), in which students were invited to join a group that would be discussing the psychology of sex. Before joining the group the students were asked to undergo an initiation procedure which for some was highly embarrassing (reciting explicit sexual passages from racy novels) and for others only mildly embarrassing (reading sexual words from a dictionary). The students were then played a tape of a discussion supposedly taking place between members of the group they had just joined. The tape was staged to play a conversation that was incredibly boring, with members discussing the sexual characteristics of birds; they didn’t even know their material, kept hesitating, and failed to reach the end of their sentences.
At the end of the tape the students were asked to rate the discussion. Those who underwent the mild initiation said that they found it boring, seeing the conversation for what it truly was. However, those who underwent the embarrassing initiation rated the discussion as interesting and exciting, and forgave the irresponsible idiot in the group!
Another experiment, led by psychologist Charles Lord, showed the same thing. Lord gathered two groups of people: one highly opposed to capital punishment and the other heavily in favor. He gave both groups two dossiers, one in favor of and one against capital punishment, both well researched and written. You might expect that after reading both dossiers each party would shift slightly towards the middle, now that they understood the opposing case. In fact, the two groups became more polarized, praising one dossier and denouncing the other as unresearched rubbish! They had simply reframed the evidence to suit their original argument.
Those who are most successful have the most to lose from their mistakes and from perceptions of their errors. As a result they are the most haunted by cognitive dissonance and reframing. As Tetlock showed in his now famous prediction study (see Thinking, Fast and Slow), “ironically, the more famous the expert, the less accurate his or her predictions tended to be”.
In his book Why Smart Executives Fail: And What You Can Learn from Their Mistakes, Sydney Finkelstein, a management professor at Dartmouth College, investigated major failures at over fifty corporations. He found that error-denial increases as you go up the pecking order.
“Ironically enough, the higher people are in the management hierarchy, the more they tend to supplement their perfectionism with blanket excuses, with CEOs usually being the worst of all.”
Blinded by dissonance, they are also the least likely to learn the lessons from their failures.
Confirmation bias is another psychological quirk associated with cognitive dissonance. According to Matthew, the best way to see its effects is to consider the following sequence of numbers: 2, 4, 6. Suppose that you have to discover the underlying pattern in this sequence, and you are given the opportunity to propose alternative sets of three numbers to explore the possibilities.
Interestingly, most people try to confirm their hypothesis. Assuming that the rule is even numbers increasing sequentially, they propose sets like 10, 12, 14 or 100, 102, 104, and after three such tests feel pretty certain they have figured it out. However, the pattern could simply be any ascending numbers. Had they used a different strategy and attempted to falsify their hypothesis rather than confirm it, they would have discovered this: for example, 4, 6, 11 also fits the rule.
As Paul Schoemaker from Wharton states: “The pattern is rarely uncovered unless subjects are willing to make mistakes — that is, to test numbers that violate their belief. Instead most people get stuck in a narrow and wrong hypothesis, as often happens in real life, such that their only way out is to make a mistake that turns out not to be a mistake after all. Sometimes, committing errors is not just the fastest way to the correct answer, it’s the only way.”
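The 2, 4, 6 task can be simulated in a few lines (a hypothetical sketch; the hidden rule and the test triples mirror the ones described above). A confirming strategy only ever tests triples that fit the guessed rule, so it can never reveal that the true rule is broader:

```python
def hidden_rule(triple):
    """The experimenter's actual rule: any strictly ascending numbers."""
    a, b, c = triple
    return a < b < c

# Confirming strategy: only test triples fitting the guess
# "even numbers increasing by two".
confirming_tests = [(10, 12, 14), (100, 102, 104), (20, 22, 24)]

# Falsifying strategy: deliberately test triples that violate the guess.
falsifying_tests = [(4, 6, 11), (1, 2, 3), (5, 4, 3)]

for triple in confirming_tests:
    # Every confirming test passes, so the (wrong) guess looks right.
    assert hidden_rule(triple)

results = {t: hidden_rule(t) for t in falsifying_tests}
# (4, 6, 11) and (1, 2, 3) pass despite breaking the guessed rule,
# while (5, 4, 3) fails -- together they expose the true rule.
print(results)
```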
It is often assumed that technological progress is top-down: academics conduct high-level research, which creates scientific theories, which are then used by practical people to create machines and other new technologies.
This is known as the Linear Model: Science > Technology > Practical Applications. A common example is the Industrial Revolution, often said to have been largely inspired by the scientific revolution and the ideas of Boyle, Hooke and Locke. However, this is not how the revolution panned out; bottom-up testing and trial and error were the true drivers.
In his book The Economic Laws of Scientific Research Terence Kealey writes
“In 1733, John Kay invented the flying shuttle, which mechanized weaving, and in 1770 James Hargreaves invented the spinning jenny, which mechanized spinning…yet they owed nothing to science; they were empirical developments based on the trial, error and experimentation of skilled craftsmen who were trying to improve the productivity, and so the profits, of their factories.”
Maybe the Linear Model is backwards? Thomas Newcomen built the first steam engine for pumping water with no scientific education. However, nobody knew exactly how it worked, which led Sadi Carnot to lay the foundations of thermodynamics. The trial and error inspired the theory.
The point is made further by Taleb in Antifragile.
An interesting case of the reversed Linear Model is architecture. When looking at the great classical structures we can easily assume that they were inspired by the formal geometry of Euclid; however, geometry played no such role, and it was the practical wisdom of architects that inspired Euclid to formalize what builders already knew.
The Lean Startup
Matthew argues that the desire for perfection prevents trial-and-error evolutionary progress, with this desire resting on two fallacies:
- That we can create an optimal solution in an ivory tower, with no contact with the real world.
- That failure is to be feared, leaving us so worried about making a mistake that we never start the game.
In their book Art and Fear, David Bayles and Ted Orland tell the story of a ceramics teacher who divided his students into two groups. One group was told that they would be graded on quantity, the other on quality. At the end of the semester the works of the highest quality had been produced by the quantity group, whose constant real-world practice best facilitated learning.
This approach is shared by the lean start-up methodology, which prioritizes the value of testing and adapting.
Matthew goes on to describe the success of the British track cycling team under coach Dave Brailsford. In 2000 Great Britain won a single Olympic gold medal in cycling. At each of the 2008 and 2012 Olympics they went on to win eight gold medals, while a British rider won the Tour de France for the first time ever under Team Sky.
How? Through Marginal Gains.
Brailsford states: “It is about marginal gains…The approach comes from the idea that if you break down a big goal into small parts, and then improve on each of them, you will deliver a huge increase when you put them all together”.
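The arithmetic behind marginal gains is simple compounding. As an illustration (the figures here are mine, not Brailsford’s): a 1% improvement in each of 50 independent components multiplies together to roughly a 64% overall gain.

```python
def compound_gain(pct_per_component: float, n_components: int) -> float:
    """Overall multiplier when each of n components improves by the given fraction."""
    return (1 + pct_per_component) ** n_components

# Fifty 1% improvements compound to about 1.64x, i.e. ~64% overall.
overall = compound_gain(0.01, 50)
print(f"{(overall - 1) * 100:.1f}% total improvement")
```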
The financial services company Capital One has long used rigorous experiments to test even the most seemingly trivial changes. Through randomized field trials, for instance, the company might test the color of the envelopes used for product offers by sending out two batches (one in the test color and the other in white) to determine any differences in response.
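The kind of envelope-colour trial described can be analysed with a standard two-proportion z-test. The response counts below are invented for illustration (the book gives no figures), and the test itself is textbook statistics rather than Capital One’s actual method:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z-statistic for the difference between two response rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

def p_value_two_sided(z):
    """Two-sided p-value from the standard normal distribution."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical batches: test-colour envelopes drew 620 responses out of
# 20,000 mailed; plain white drew 540 out of 20,000.
z = two_proportion_z(620, 20_000, 540, 20_000)
print(f"z = {z:.2f}, p = {p_value_two_sided(z):.3f}")
```

With these made-up counts the difference is statistically significant at the conventional 5% level, which is exactly the kind of evidence a randomized field trial is designed to surface.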
Steve Jobs once said that “Creativity is just connecting things”.
The ranking algorithm behind Google was taken from an existing method of ranking academic articles. Sellotape was developed by merging glue and cellophane. The collapsible buggy was created by fusing the folding undercarriage of the Spitfire with an existing technology for transporting children.
As the neuroscientist David Eagleman says in his book Incognito: The Secret Lives of the Brain: “When an idea is served up from behind the scenes, the neural circuitry has been working on the problem for hours or days or years, consolidating information and trying out new combinations. But you merely take credit without further wonderment at the vast, hidden political machinery behind the scenes.”
Epiphanies often happen when we are either:
- Switching off: having a shower, going for a walk, daydreaming.
- Positive disagreement: when ideas are challenged by others and questions are asked.
Why Innovation is not so important
However, innovation is not always the most important factor for success.
In their book Will and Vision, Gerald J. Tellis and Peter N. Golder looked at the relationship between long-term market leadership and pioneering innovation in sixty-six different commercial sectors. They found that only 9% of the pioneers ended up as the final winners.
Jim Collins writes: “Gillette didn’t pioneer the safety razor, Star did. Polaroid didn’t pioneer the instant camera, Dubroni did. Microsoft didn’t pioneer the personal computer spreadsheet, VisiCorp did”. The companies who survived were the most disciplined, allowing them to put in place the supply chains, distribution and teams needed to actually get the product to market.
Intel was months behind its competitors in the race for the 1,000-bit memory chip and kept hitting major problems. However, unlike its competitors, Intel worked around the clock and obsessed over “manufacturing, delivery and scale”. In the end Intel won the war despite not being first; its slogan was not “Intel Creates” but “Intel Delivers”. From this example Collins argues that there is a threshold level of innovation you need to reach in order to be a contender in the game. Once you are above this threshold, however, being more innovative doesn’t seem to matter much; discipline decides the winner.
Moser and our Mindset
In 2010 Jason Moser, a psychologist at Michigan State University, and colleagues took a group of volunteers and gave them a test. As part of the set-up, an EEG cap was placed on their heads, which measures voltage fluctuations in the brain.
Moser wanted to see at a neural level what happened when people make mistakes.
When we make mistakes there are two brain signals:
1. Error-Related Negativity (ERN) — an involuntary reaction when we make a mistake, signalling that reality is not what we thought.
2. Error Positivity (Pe) — this occurs after the ERN and signals the subjective experience of actively paying attention to the error and evaluating it, perhaps engaging emotions of regret, guilt, disappointment or anger, and imagining what could have been better in the mind’s eye.
Moser divided participants into two groups based on a survey. People in one group had a “fixed mindset”, believing that intelligence and talent are largely fixed traits. People in the “growth mindset” group tended to believe that their abilities can be increased with hard work.
Moser then gave the groups a test which forced them to make mistakes. As expected, both groups showed strong ERN signals. However, those in the growth-mindset group had an Error Positivity signal three times larger than the fixed-mindset group. It was as if those in the fixed-mindset group ignored mistakes, while those in the growth-mindset group paid attention and learned from them. Moser also found that the larger the Error Positivity signal, the greater the improvement.
Why do some people learn from mistakes while others don’t? It comes down to how we conceptualize failure.
“I’ve missed more than 9000 shots in my career. I’ve lost almost 300 games. 26 times, I’ve been trusted to take the game winning shot and missed…I’ve failed over and over and over again in my life. And that is why I succeed.” — Michael Jordan