The Conjunction Fallacy and the Conviction of John

The conjunction rule states that the probability of two events A and B both occurring cannot exceed the probability of either one occurring alone. The probability that I will roll a 6 on a die and flip heads on a coin, for example, cannot be greater than the probability that I will roll a 6, nor greater than the probability that I will flip heads. The conjunction fallacy occurs when this rule is violated.
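
To make the arithmetic concrete, here is a minimal sketch in Python (purely illustrative, not from Tversky and Kahneman) that enumerates the twelve equally likely die-and-coin outcomes and checks the rule:

```python
from fractions import Fraction

# All equally likely (die, coin) outcomes.
outcomes = [(die, coin) for die in range(1, 7) for coin in ("H", "T")]

def prob(event):
    """Probability of an event under the uniform distribution."""
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

p_six = prob(lambda o: o[0] == 6)                    # 1/6
p_heads = prob(lambda o: o[1] == "H")                # 1/2
p_both = prob(lambda o: o[0] == 6 and o[1] == "H")   # 1/12

# The conjunction can never be more probable than either conjunct alone.
assert p_both <= p_six and p_both <= p_heads
```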

Psychologists Amos Tversky and Daniel Kahneman (the latter a Nobel laureate) demonstrated this with the case of Linda. Linda was described as “31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations”. Participants, having read this description, were asked to rank the probability of various statements about Linda being true. These included:

(1) Linda is a bank teller
(2) Linda is a bank teller and active in the feminist movement

A large majority of respondents thought (2) was more likely than (1). This violates the conjunction rule, as the probability of Linda being both a bank teller and active in the feminist movement cannot be greater than the probability of her being a bank teller alone. The problem is that statement (2) seems more representative of Linda as described in the passage, and it is mistakenly deemed to be more likely.
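
Another way to see this is through sets: every possible world in which Linda is a feminist bank teller is also a world in which she is a bank teller, so the conjunction can only be the rarer event. A sketch with invented proportions (only the subset relationship matters):

```python
# 1,000 hypothetical equally likely "worlds"; the proportions are invented.
worlds = set(range(1000))
bank_teller = set(range(0, 50))     # Linda is a bank teller in 5% of worlds
feminist = set(range(25, 400))      # Linda is a feminist in 37.5% of worlds

teller_and_feminist = bank_teller & feminist

# The conjunction is a subset of each conjunct, so it is never more probable.
assert teller_and_feminist <= bank_teller
print(len(bank_teller) / len(worlds))          # 0.05
print(len(teller_and_feminist) / len(worlds))  # 0.025
```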

The conjunction fallacy has important consequences for the legal system, as it often appears in the construction of plausible causal scenarios. Tversky and Kahneman also studied responses to John P., described as a defendant with prior convictions for smuggling precious stones and metals. Respondents were asked to consider the likelihood that:

(1) John is a drug addict
(2) John killed one of his employees

Only 23% of respondents thought it was more likely that John was a murderer than an addict. However, when option (2) was changed to “killed one of his employees to prevent them from talking to the police”, around half of respondents thought he was more likely to be a murderer than an addict.

The rules of probability tell us that the more general a statement is, the more probable it is, and that every detail added to a series of events makes that series less likely. Just as it is more likely that I will see a car outside my window tomorrow than a red car, and more likely that I will see a red car than a red car with a dog in the back seat, it is more likely that John is a murderer than that he murdered specifically to prevent an employee from talking to the police.
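
Put numerically, each added detail multiplies the probability by a factor of at most one. A short sketch with made-up figures:

```python
# Invented probabilities, for illustration only.
p_car = 0.9                   # a car outside my window tomorrow
p_red_given_car = 0.1         # ...and it happens to be red
p_dog_given_red_car = 0.05    # ...and it has a dog in the back seat

p_red_car = p_car * p_red_given_car                   # 0.09
p_red_car_with_dog = p_red_car * p_dog_given_red_car  # 0.0045

# Each extra detail can only shrink (or at best preserve) the probability.
assert p_car >= p_red_car >= p_red_car_with_dog
```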

The problem is that extra detail gives rise to a more fathomable scenario; condemning John without any evident motive may seem premature, and the additional information makes the proposition seem more salient, more comprehensible, and (mistakenly) more probable.

This mistake can be costly to defendants who are faced with eloquent, detailed, but unproven hypotheses about why and how they have broken the law. As lawyers, then, we must be careful to distinguish between causal scenarios that are supported by evidence, and speculative storytelling. In the latter, every speculative detail doesn’t just cloud judgement and complicate decision-making; it also reduces the likelihood of the explanation being true at all.

References

Tversky, A., & Kahneman, D. (1983). Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. Psychological Review, 90(4), 293–315.

Tversky, A., & Kahneman, D. (1981). Judgments of and by representativeness (Technical Report No. TR-3). Stanford University, Department of Psychology.

Status Quo Bias in the Law

In 1832, the Great Reform Act confirmed the exclusion of women from the electorate. In 1973, the Matrimonial Causes Act confirmed the exclusion of same-sex couples from marriage. Thankfully, these exclusions were repealed in 1928 and 2013 respectively. The law evolves, and necessarily so.

Yet all of these changes have taken place slowly, and in the face of the pervasive resistance to change known as status quo bias: a cognitive error in which one option is incorrectly judged to be better than another simply because it represents the status quo.

Several studies have confirmed the ubiquity of this effect. In the famous ‘mug experiment’, students were asked to fill out a questionnaire, and were then rewarded with either a mug or a large chocolate bar. After receiving their gifts, they were offered the chance to exchange them for the respective other option. Approximately 90% declined. As soon as the ‘status quo’ was established as either a mug or a chocolate bar, students were happy to retain their original gift.

Though the choice of gift seems trivial, the consequences of giving an undue bonus to the status quo can be significant: people fail to move their existing investments to more lucrative options, for example, or are resistant to changing their long-term medication for a more effective alternative.

The bias also exerts a powerful influence over the formulation and interpretation of the law. Consider the criminalisation of marijuana use in the Misuse of Drugs Act 1971 and the paucity of legal restraints on the adult consumption of alcohol and tobacco. Given the relative societal costs of these substances, this state of affairs makes little apparent sense, and we have reason to suspect that status quo bias may be part of the problem.

One way to check is via the Reversal Test, a heuristic developed by philosophers Nick Bostrom and Toby Ord. The test posits that when a proposal to change a certain parameter is thought to have bad overall consequences, one should consider a change to the same parameter in the opposite direction. If this is also thought to have bad overall consequences, then the onus is on those who reach these conclusions to explain why our position cannot be improved through changes to this parameter. If they are unable to do so, then we have reason to suspect that they suffer from status quo bias.

The parameter at hand in drug law is the criminalisation of substances that pose a threat to public health and safety. Those who believe that we should not decriminalise marijuana should therefore consider whether they would endorse a shift in the other direction; namely, criminalising substances of similar or greater toxicity (such as alcohol), and imposing stronger penalties on transgressors. This position seems unlikely to be either popular or justifiable on public health grounds.
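
The logic of the test can be set out schematically. The function below is my own framing of Bostrom and Ord’s heuristic, not code from their paper:

```python
def reversal_test(increase_judged_bad: bool,
                  decrease_judged_bad: bool,
                  optimum_argument_given: bool) -> str:
    """Schematic rendering of the Reversal Test."""
    if not (increase_judged_bad and decrease_judged_bad):
        # At least one direction of change is acceptable: no presumption of bias.
        return "no presumption of status quo bias"
    if optimum_argument_given:
        # The objector has explained why the current value is (locally) optimal.
        return "burden of proof met"
    return "status quo bias suspected"

# Drug law, as above: both relaxing and tightening criminalisation are
# rejected, and no argument is offered that current law sits at an optimum.
print(reversal_test(True, True, False))  # status quo bias suspected
```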

The Reversal Test is also illuminating when applied to other areas of law: If intensive factory farming, 40-hour working weeks, and labelling non-nationals “illegal” were not established norms, would we find that they should be?

Through its invisibility and ubiquity, the status quo bias is a silent threat to legislative progress. Departing from the status quo where necessary, and recognising and resisting the bias where it arises, will surely result in wiser, better motivated, and more responsive legislation.

References

Kahneman, D., & Tversky, A. (1984). Choices, values, and frames. American Psychologist, 39(4), 341–350.

Gilovich, T., Griffin, D., & Kahneman, D. (Eds.). (2002). Heuristics and biases: The psychology of intuitive judgment. Cambridge University Press.

Bostrom, N., & Ord, T. (2006). The reversal test: Eliminating status quo bias in applied ethics. Ethics, 116(4), 656–679.

Samuelson, W., & Zeckhauser, R. (1988). Status quo bias in decision making. Journal of Risk and Uncertainty, 1(1), 7–59.

Confirmation Bias and the Law

“The human understanding when it has once adopted an opinion […] draws all things else to support and agree with it” — Francis Bacon, 1620

Once we hold a particular view or hypothesis, we are more likely to search for, acknowledge, give credence to, and remember information that confirms it, regardless of whether that information is true. This phenomenon, known as confirmation bias, may be one of the most dangerous cognitive biases affecting decision-making in a judicial context.

Consider psychologist Peter Wason’s so-called 2-4-6 task. Participants were told that the experimenter had a rule in mind that classified sets of three numbers, and that “2-4-6” conformed to that rule. The subjects then proposed their own sets of three numbers, were told whether each conformed to the rule or not, and were allowed to continue until they felt sure they knew the rule.

The rule was actually “any set of three increasing numbers”, but participants typically had a difficult time discovering this. They often believed the rule to be something such as “even numbers increasing” or “numbers increasing in equal intervals”, and the positive feedback received for sets following these incorrect rules strengthened their convictions. The fact that participants failed even to consider generating sequences seriously at odds with their focal hypothesis (such as 100-40-17) demonstrates the strength of confirmation bias.
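
A small simulation makes the trap visible. The “participant’s guess” below is one hypothetical hypothesis among many, chosen for illustration:

```python
def secret_rule(a, b, c):
    """The experimenter's actual rule: any strictly increasing triple."""
    return a < b < c

def participant_guess(a, b, c):
    """A typical narrower hypothesis: even numbers increasing by two."""
    return a % 2 == 0 and b == a + 2 and c == b + 2

# Confirmatory tests fit both rules, so every "yes" reinforces the guess...
for triple in [(2, 4, 6), (8, 10, 12), (20, 22, 24)]:
    assert secret_rule(*triple) and participant_guess(*triple)

# ...while only tests designed to falsify the guess are informative.
print(secret_rule(1, 2, 3))      # True  (the guess wrongly predicts "no")
print(secret_rule(100, 40, 17))  # False (a decreasing triple)
```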

As a form of motivated cognition, confirmation bias affects all levels of belief formation and behaviour. If we dislike somebody, we give disproportionate weight to any evidence that they may be unpleasant, ignore evidence to the contrary, and interpret ambiguous evidence in favour of our view. At a wider level, we choose the friends, read the websites, and support the political parties we expect to confirm our existing views of the world.

The implications of this bias for the judicial system are multiple. Trials, for example, are frequently long and complex, and studies have shown that jurors often form their decisions early and interpret subsequent evidence in a way that supports their premature conclusions. This process leads to a polarisation of attitudes, as each member of the jury becomes more and more entrenched in their position as the trial develops.

In the 2013 murder trial of David Camm, the defence argued that Camm had been charged with the murder of his family solely due to the effect of confirmation bias in the investigation. Every piece of evidence against him turned out to be inaccurate or unreliable, yet the charges against him were not dropped (though Camm was eventually acquitted).

The Central Park jogger case is another example. In 1989, five teenagers confessed to raping and assaulting a woman as she jogged through Central Park. They quickly retracted their statements, alleging that police had coerced their confessions. No physical or eyewitness evidence linked the suspects to the attack; in fact, semen recovered from the victim appeared to come from a single donor and did not match any of the five suspects. Nevertheless, a jury convicted all five of them.

The detectives’ and prosecutors’ statements made at the time demonstrated the intensity of their commitment to their theory of the case. They dismissed as incredible any evidence that severely undermined their theory, or modified their version of events to accommodate the new evidence within their original hypothesis. (In 2002, another man confessed to the crime. His DNA matched that recovered from the victim, and a judge overturned the original defendants’ convictions.)

Of course, all lawyers arguably ‘exploit’ confirmation bias to some extent, as to build a case is to argue that the evidence at hand supports the conclusion dictated by the client. However, there is a clear difference between consciously building a case by evaluating all the evidence impartially, and using selected evidence to justify a conclusion already drawn. Being able to assess a situation objectively is critically important to the practice of law, and engaging in case-building via confirmation bias without being aware of doing so may lead to overconfidence in the strength of the resulting case.

So, what can we do? Unfortunately, simply being aware of confirmation bias is not enough to mitigate its effects, and neither is it a matter of lacking intelligence: as studies have shown, while some cognitive biases do correlate with IQ, confirmation bias does not.

The most effective strategy is to develop a mind-habit of always ‘thinking the opposite’. Always consider the possibility that you might be (drastically) wrong, actively search for evidence that this might be the case, consider how new evidence could challenge as well as strengthen your beliefs, and be discerning about the information you choose to process.

References

Devine, P. G., Hirt, E. R., & Gehrke, E. M. (1990). Diagnostic and confirmation strategies in trait hypothesis testing. Journal of Personality and Social Psychology.

Myers, D. G., & Lamm, H. (1976). The group polarization phenomenon. Psychological Bulletin, 83(4), 602–627.

Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2(2), 175–220.

Roach, K. (2010). Wrongful convictions: Adversarial and inquisitorial themes. North Carolina Journal of International Law and Commercial Regulation.

Russo, J. E., & Meloy, M. G. (2002). Hypothesis generation and testing in Wason’s 2–4–6 task. Unpublished manuscript.

Schanberg, S. H. (2002, November 26). A journey through the tangled case of the Central Park jogger. Village Voice, p. 36.

Stanovich, K. E., West, R. F., & Toplak, M. E. (2013). Myside bias, rational thinking, and intelligence. Current Directions in Psychological Science, 22(4), 259–264.
