Thinking, Fast & Slow
By Daniel Kahneman
Thinking, Fast & Slow examines how humans think and make decisions.
Introduction
Kahneman states that we are prone to overestimating how much we understand about the world and to underestimating the role of chance in events.
Overconfidence is fed by the illusory certainty of hindsight.
Part 1
This section presents the basic elements of a two-systems approach to judgement and choice.
Kahneman describes the two-system model of thinking:
- System 1 — Operates automatically and quickly with little or no effort and no sense of voluntary control.
- System 2 — Allocates attention to the effortful mental activities that demand it.
System 1 is in fact the driving force as it “effortlessly originates impressions and feelings that are the main sources of beliefs and deliberate choices of System 2”.
Kahneman writes that if the book were made into a film, System 2 would be a supporting character who believes herself to be the hero. Its defining features are that it is effortful and reluctant to invest more effort than strictly necessary. As a result, the thoughts and actions that System 2 believes it has chosen are often guided by System 1, the true central figure.
Depletion Of System 2
Tasks such as self-control and cognitive effort are forms of mental work performed by System 2. Studies have shown that people who are simultaneously challenged by a demanding cognitive task and by a temptation are more likely to yield to the temptation. System 1 has more influence when System 2 is busy.
A series of experiments by Roy Baumeister has shown conclusively that all variants of voluntary effort draw at least partly on a shared pool of mental energy. If you have to force yourself to do something, you are less willing or less able to exert self-control when the next challenge comes along. The effect is known as ego-depletion.
Baumeister showed that this has a biological basis: demanding mental activity consumes glucose, so sustained effort depletes the store of glucose available for subsequent activities.
This suggests that ego-depletion can be offset by ingesting glucose. Kahneman cites a study of eight parole judges in Israel. The exact time of each decision was recorded (the judges average about one decision every six minutes), along with the times of the three daily food breaks. The researchers found that the proportion of approved requests spiked after each meal and then, over the two hours or so until the judges’ next break, dropped steadily, to about zero just before the meal.
The Laziness of System 2
One of the roles of System 2 is to monitor and control thoughts and actions “suggested” by System 1. Kahneman poses the puzzle:
A bat and a ball cost $1.10. The bat costs one dollar more than the ball. How much does the ball cost?
The intuitive answer is that the ball costs $0.10; however, this is wrong. More than 50% of students at Harvard, MIT and Princeton gave the intuitive, incorrect answer.
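The algebra behind the puzzle can be checked in a few lines of Python (working in cents to avoid floating-point rounding; this sketch is mine, not from the book):

```python
# Bat-and-ball puzzle, in cents: ball + bat = 110 and bat = ball + 100.
total = 110   # $1.10
diff = 100    # the bat costs $1.00 more than the ball

ball = (total - diff) // 2   # substitute bat = ball + diff into the total
bat = ball + diff

assert ball + bat == total and bat - ball == diff
print(ball, bat)  # 5 105 -- the ball costs $0.05, not $0.10

# The intuitive answer fails the second constraint:
# ball = 10, bat = 100 gives a total of 110 but a difference of only 90.
```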
Many people are overconfident, prone to place too much faith in their intuitions. They apparently find cognitive effort at least mildly unpleasant and avoid it as much as possible.
Priming Effect
Priming describes how exposure to one idea prompts related ideas later on, without the individual’s conscious awareness. Kahneman describes how in the 1980s psychologists discovered that exposure to a word causes immediate and measurable changes in the ease with which related words can be evoked. If you have recently seen the word EAT, you are temporarily more likely to complete the word fragment SO_P as SOUP than as SOAP.
The Florida Effect
It was discovered that priming extends to actions as well as words. Students at NYU were asked to assemble four-word sentences from sets of five words. For one group, half the scrambled sentences contained words associated with the elderly, such as Florida, gray or wrinkle. When they had completed the task, the participants were asked to walk down the hall. The group primed with elderly-related words walked down the hall significantly more slowly than the control group.
Reciprocal priming effects tend to produce a coherent reaction: if you are primed to think of old age, you tend to act old, and acting old reinforces the thought of old age.
Even our vote is affected by priming. A study in Arizona found that people who voted at a polling station located in a school were more likely to support school-funding initiatives than those who voted elsewhere.
Writing A Persuasive Message
Kahneman writes that for a message to be believed, it should reduce the reader’s cognitive strain as much as possible:
- Maximise legibility — for example, use bold text that contrasts strongly with the background. If you use colour, use bright blue or red rather than pale shades.
- Use simple language — couching ideas in pretentious language is taken as a sign of poor intelligence and low credibility.
- Be memorable — make your ideas as memorable as possible; put them in verse if you can (Little strokes fell great oaks).
Confirmation Bias
Confirmation bias is our tendency to cherry-pick information that confirms our existing beliefs or ideas. Confirmation bias explains why two people with opposing views on a topic can see the same evidence and come away feeling validated by it.
When asked “Is Sam friendly?”, different instances of Sam’s behaviour will come to mind than if you had been asked “Is Sam unfriendly?”. A deliberate search for confirming evidence, known as positive test strategy, is also how System 2 tests a hypothesis. This is contrary to the rules of science, which advise testing a hypothesis by trying to refute it.
The confirmatory bias of System 1 favours uncritical acceptance of suggestions and exaggeration of the likelihood of extreme and improbable events.
The Halo Effect
The tendency for an impression created in one area to influence opinion in another area.
This describes the tendency to like or dislike everything about a person. It is one of the ways the representation of the world that System 1 generates is simpler and more coherent than the real thing.
A study asked subjects to read two character descriptions and then say how favourably they viewed each person.
Alan: intelligent, industrious, impulsive, critical, stubborn
Ben: stubborn, critical, impulsive, industrious, intelligent
Due to the halo effect, participants judged Alan more favourably than Ben, simply because of the order. Ambiguous words like critical were interpreted positively for Alan, with subjects viewing him as a highly intelligent person entitled to be critical, while Ben was seen as a critical, bitter person who happens to be intelligent. First impressions matter.
Part 2
Anchoring Bias
During decision making, anchoring occurs when individuals use an initial piece of information to make subsequent judgments. Once an anchor is set, other judgements are made by adjusting away from that anchor, and there is a bias toward interpreting other information around the anchor.
Kahneman and Amos Tversky rigged a wheel of fortune so that, instead of stopping anywhere from 0 to 100, it would stop only at 10 or 65. They asked students to write down the number the wheel stopped on and then asked them:
- Is the percentage of African nations among UN members larger or smaller than the number you just wrote?
- What is your best guess of the percentage of African nations in the UN?
The average estimates of those who saw 10 and 65 were 25% and 45% respectively, despite the wheel having provided no useful information about anything. The estimates stayed close to the number people had just considered; hence the image of the anchor.
Any number that you are asked to consider as a possible solution to the estimation problem will induce an anchoring effect.
Regression To The Mean
In statistics, regression toward the mean is the phenomenon that if a variable is extreme on its first measurement, it will tend to be closer to the mean or average on its second measurement.
Intuitive predictions need to be corrected because they are not regressive and therefore are biased. Suppose that I predict for each golfer in a tournament that his score on day 2 will be the same as his score on day 1. This prediction does not allow for regression to the mean. The golfers who fared well on day 1 will on average do less well on day 2.
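A quick simulation makes the golfer example concrete (a sketch of mine, not from the book; scores are modelled as persistent skill plus independent daily luck):

```python
import random

random.seed(0)
n = 1000
# Each golfer's score = persistent skill + luck that is redrawn every day.
skill = [random.gauss(72, 2) for _ in range(n)]
day1 = [s + random.gauss(0, 3) for s in skill]
day2 = [s + random.gauss(0, 3) for s in skill]

# The golfers who fared best (lowest scores) on day 1 ...
best = sorted(range(n), key=lambda i: day1[i])[:n // 10]
avg1 = sum(day1[i] for i in best) / len(best)
avg2 = sum(day2[i] for i in best) / len(best)

# ... score worse on day 2, drifting back toward the field average of 72,
# because part of their day-1 advantage was luck that does not repeat.
print(round(avg1, 1), round(avg2, 1))
```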
Part 3
The Narrative Fallacy
In The Black Swan, Taleb introduces the idea of the narrative fallacy to describe how flawed stories of the past shape our views of the world and our expectations for the future. The fallacy arises out of our continuous attempt to make sense of the world. Taleb suggests that we constantly fool ourselves by constructing flimsy accounts of the past and believing they are true.
Built To Last
In his book The Halo Effect, Rosenzweig shows how the demand for illusory certainty is met by two popular genres of business writing: histories of the rise and fall of individuals and companies, and analyses of the differences between successful and less successful firms. Both consistently exaggerate the impact of leadership style and management practices on a firm’s outcomes.
Because of the halo effect, we get the causal relationship backwards: we are prone to believe that the firm fails because its CEO is rigid, when the truth is that the CEO appears to be rigid because the firm is failing.
The message of Built To Last is that good managerial practices can be identified and that good practices will be rewarded by good results. Kahneman argues that “Both messages are overstated. The comparison of firms is to a significant extent a comparison between firms that have been more or less lucky”. The gap in corporate profitability and stock returns between the outstanding firms and the less successful firms in the study shrank to almost nothing in the period following it. There was a regression to the mean.
The Illusion Of Pundits
Everything makes sense in hindsight, a fact that financial pundits exploit every evening.
Philip Tetlock explored these so-called expert predictions in a landmark study, Expert Political Judgment: How Good Is It? How Can We Know?
Tetlock interviewed 284 people who made their living commenting or offering advice on political and economic trends. He asked them to assess the probabilities that certain events would occur in the not-too-distant future, e.g. would the US go to war in the Persian Gulf? For every case the experts rated the probabilities of three alternative outcomes: persistence of the status quo, more of something (such as economic growth), or less of that thing.
The results were devastating. The experts performed worse than they would have if they had simply assigned equal probabilities to each of the three outcomes.
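Tetlock scored forecasts with probability-scoring rules of this kind. The Brier-style sketch below (the specific numbers are made up, not from his data) shows how a confident but wrong forecast scores worse than the uniform equal-probabilities baseline:

```python
def brier(forecast, outcome):
    """Brier score over the 3 outcomes: lower is better, 0 is perfect."""
    actual = [1 if i == outcome else 0 for i in range(len(forecast))]
    return sum((p - a) ** 2 for p, a in zip(forecast, actual))

uniform = [1/3, 1/3, 1/3]          # assign equal probability to each outcome
confident_wrong = [0.8, 0.1, 0.1]  # a confident expert whose pick did not occur

print(brier(uniform, outcome=1))          # ~0.67
print(brier(confident_wrong, outcome=1))  # ~1.46, worse than the baseline
```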
The study found that even on topics in which an expert specialised, they were not significantly better than nonspecialists. Those with the most knowledge were often less reliable, because they develop an enhanced illusion of their skill and become overconfident.
“We reach the point of diminishing marginal predictive returns for knowledge disconcertingly quickly,” Tetlock writes. “There is no reason for supposing that contributors to top journals are any better than attentive readers of the New York Times.”
Intuitions Vs. Formulas
Kahneman discusses Paul Meehl’s book Clinical vs. Statistical Prediction: A Theoretical Analysis and a Review of the Evidence, which analysed the claim that mechanical (i.e., formal, algorithmic, “actuarial”) methods of combining data outperform clinical (i.e., subjective, informal) methods at predicting behaviour. Meehl argued that mechanical methods of prediction, applied correctly, yield more efficient and reliable decisions about patient prognosis and treatment.
In a typical study, trained counsellors predicted the grades of freshmen at the end of the school year. The counsellors interviewed students for 45 minutes and had access to high-school grades, several aptitude tests and a four-page personal statement. The statistical algorithm used only a fraction of this information.
The formula was more accurate than 11 of the 14 counsellors.
Similar tests have since been conducted on a huge variety of predictions, ranging from success in pilot training to length of hospital stays to credit risk, with simple algorithms beating humans the majority of the time.
It is argued that this is due to:
- “Experts try to be clever, think outside the box and consider complex combinations of features in making their predictions”.
- Humans display widespread inconsistency in their decision making. For example, radiologists who evaluate X-rays as normal or abnormal contradict themselves 20% of the time when they see the same picture on separate occasions.
Planning Fallacy
The planning fallacy is a phenomenon in which predictions about how much time will be needed to complete a future task display an optimism bias and underestimate the time needed.
The bias only affects predictions about one’s own tasks; when outside observers predict task completion times, they show a pessimistic bias, overestimating the time needed.
The cure for the planning fallacy is to look at similar past ventures and thereby take an “outside view”. The technical name for this is reference class forecasting.
The Endowment Effect
Kahneman describes Richard Thaler’s reluctance to sell a bottle of wine from his collection even at the high price of $100, despite the fact that he would never pay more than $35 for a bottle. This goes against the standard economic idea that you have a single value for a bottle: if a bottle is worth $50 to him, he should be willing to sell it for any amount in excess of $50. This is the endowment effect.
Prospect theory suggests that the willingness to buy or sell the bottle depends on whether you own it or not. If Thaler owns it, he considers the pain of giving up the bottle; if he does not, he considers the pleasure of getting it. The values are unequal because of loss aversion: giving up a bottle of nice wine is more painful than getting an equally good bottle is pleasurable.
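Loss aversion is often modelled with an asymmetric value function. The sketch below uses Tversky and Kahneman’s later (1992) parameter estimates; it is an illustration of the idea, not a calculation from the book:

```python
def value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function: gains are valued concavely,
    losses are amplified by the loss-aversion coefficient lam > 1."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

gain = value(50)    # pleasure of gaining a $50 bottle
loss = value(-50)   # pain of giving the same bottle up

# The loss looms 2.25 times larger than the equivalent gain, which is
# why the minimum selling price exceeds the maximum buying price.
print(round(gain, 1), round(loss, 1))
```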
Framing Questions
Kahneman describes how the framing of a question is highly important even though the logic can remain the same.
For example, “Italy won” and “France lost” describe the same football match between the two sides; they are logically equivalent but evoke different associations.
One of these examples asks the reader two questions:
Would you accept a gamble that offers a 10% chance to win $95 and a 90% chance to lose $5?
Would you pay $5 to participate in a lottery that offers a 10% chance to win $100 and a 90% chance to win nothing?
Logically, both statements pose the same problem, yet the second attracts many more positive answers. Why?
A bad outcome is much more acceptable if it is framed as the cost of a lottery ticket that did not win than if it simply described as losing a gamble…Losses evoke stronger negative feelings than costs.
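The two framings can be shown to be the same gamble by listing the net outcomes and computing the expected value (a sketch of mine, not from the book):

```python
def expected_value(outcomes):
    # outcomes: list of (probability, net dollar change) pairs
    return sum(p * v for p, v in outcomes)

gamble = [(0.10, +95), (0.90, -5)]           # win $95 or lose $5 outright
lottery = [(0.10, 100 - 5), (0.90, 0 - 5)]   # pay $5, then win $100 or nothing

# Net outcomes are identical (+$95 or -$5), so the expected values match;
# only the description differs: a loss reframed as a cost.
print(expected_value(gamble), expected_value(lottery))
```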