
3 Disturbing Questions to Ask Yourself Before You Go to Court

Society in general, and the courts in particular, have long been aware of explicit, discriminatory bias. Efforts to combat racism, sexism, classism, and homophobia in the justice system are ever-intensifying. And rightly so. Unfortunately, however, not all thinking errors are as well-known and recognisable as racism or sexism.

Our judgements are just as easily affected by biases that unconsciously seep into decision-making through seemingly irrelevant circumstances. These lesser-known biases rarely make it out of psychology journals and into court procedure, but if you ever find yourself facing the justice system, the three questions below might become worryingly relevant.

1. Are you pretty?

What is beautiful is good. And judges, like most people, prefer the good-looking. In one study, police in Texas were asked to rate the physical attractiveness of over 1,500 defendants in misdemeanour cases. The researchers then followed the defendants through to trial, and found that the more attractive a defendant had been rated, the less likely he or she was to receive a jail sentence or hefty fine. It pays to be pretty. In this case, literally.

2. Have the jurors washed their hands?

If you’ve done something that might invite headshakes, tuts, and murmurs of ‘disgusting’ (think assault), then this question might matter more than you think. We associate moral purity with physical cleanliness – the so-called ‘Macbeth effect’ – and how clean people feel can affect the harshness of their moral judgements.

In one study, people were asked to clean their hands with antiseptic wipes before rating how bad they found various moral transgressions. The squeaky-clean hand wipe group judged the transgressions more severely than their (comparatively) dirty counterparts.

Similar results have been found where people merely see a hand-sanitiser dispenser, reminding them of cleanliness, or are simply told to visualise cleanliness.

3. (When) has the judge had breakfast?

If you want bail before your trial and it’s ten minutes to lunch – bad luck. One recent study followed Israeli judges making parole decisions (deciding whether a prisoner should be released, or must remain in prison). The judges would make between 4 and 35 of these decisions each day, with two food breaks dividing the day into three sections.

Immediately after meal breaks, the judges granted approximately 65% of the parole applications. Immediately before meal breaks, when the judges were at their most tired and hungry, they granted approximately… 0% of the applications.


The practical significance of these studies is still unclear. Some of the findings remain controversial, and no-one is advocating that trial judges snack throughout proceedings or that jurors be kept away from washrooms (or should they?). What the studies do make clear, however, is that even tiny, unconscious errors in judgement can undermine the objectivity of the entire justice system.

If we are outraged at the thought of an innocent person being incarcerated on the basis of their gender or skin colour, how much more outraged should we be about someone being locked up because the jury found them unattractive, or because the judge was hungry?

The absurdity of the injustice is no less great; we just haven’t admitted that the biases that lead to racial discrimination also lead to less socially acknowledged, but no less devastating, mistakes.

Accordingly, the political will to stamp out explicit bias in the justice system exists, but the will to extinguish lesser-known biases does not. Yet behind each flawed decision is a real individual deprived of their liberty or property.

While we live in a world where unconscious injustice is possible, the first step is to get it on the list of enemies to be defeated. Only once we commit to real objectivity can we plan to achieve it.

Video: Cognitive bias, public policy, and the law

A recent interview with Faculti Media on how cognitive biases can affect public decision making. I talk about scope insensitivity and aid spending, the availability heuristic and fundamental freedoms, and status quo bias and our global priorities. More videos to follow!

The Availability Heuristic and Public Policy Priorities

Or, Why Terrorist Benefits Cheats Aren’t Taking Your Jobs.

“If a random word is taken from an English text, is it more likely that the word starts with a K, or that K is the third letter?”

In 1973, this question was posed to 150 people, and over two-thirds responded that the word is more likely to begin with K. In fact, the English language has about three times as many words with K in the third position as in the first. Similar results were found for the letters L, N, R and V – which all appear far more frequently in the third position, yet were deemed far more likely to be starting letters.
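For the curious, the claim is easy to sanity-check against a machine-readable word list. Below is a minimal Python sketch; the path is an assumption (a newline-separated word list found on many Unix systems), and counting dictionary entries is only a rough proxy for Tversky and Kahneman’s claim, which concerns words in running text.

```python
# Count, for each letter, how many words start with it versus have it in
# third position. The word-list path below is an assumption: any
# newline-separated word list will do.

def first_vs_third(letter, path="/usr/share/dict/words"):
    first = third = 0
    with open(path) as f:
        for line in f:
            word = line.strip().lower()
            if word.startswith(letter):
                first += 1
            if len(word) >= 3 and word[2] == letter:
                third += 1
    return first, third

for letter in "klnrv":
    first, third = first_vs_third(letter)
    print(f"{letter}: {first} first-position vs {third} third-position")
```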

In explaining these results, psychologists Amos Tversky and Daniel Kahneman (1973) coined the term ‘availability heuristic’. They discovered that when faced with the first-or-third-letter problem, people estimated the frequency of each group by seeing how easily examples of each could be recalled. It is easy to think of words beginning with K, L, N, and R; less so to think of words with these letters in third position. Accordingly, K, L, N and R were deemed to be more frequent as first letters, and less so as third.

This phenomenon was found to extend far beyond the letter game: We all fall back on the availability heuristic when we assess how frequent or likely something is by the ease with which examples come to mind.

In assessing risks, for example, the heuristic prompts us to substitute the question “how easily can I recall examples of this being dangerous?” for “how dangerous is this?”. This distinction may not have been critical for our ancestors – common dangers (predators) would naturally have been easier to recall than uncommon ones (meteors), and would also pose the most serious threat to survival. The onset of mass media, however, has destroyed all equivalence between ‘recallability’ and danger.

So what is it that now makes some threats more recallable than others? Several factors seem to be at work, most notably the level of emotion the risk elicits, its familiarity, and its salience.

Where strong emotions are involved, people tend to focus on the badness of the outcome, rather than on the probability that the outcome will occur. The resulting “probability neglect” helps to explain excessive reactions to low-probability risks of catastrophe (Sunstein, 2003). A risk that is familiar, like that associated with terrorism, will be seen as more serious than a risk that is less familiar, like that associated with sun-bathing. Salience is also important: “The impact of seeing a house burning on the subjective probability of such accidents is probably greater than the impact of reading about a fire in the local paper” (Tversky and Kahneman, 1982).

Let’s now examine these availability factors in relation to the UK media and public policy priorities, in particular the highly covered topics of terrorism, immigration, and benefit fraud.

Terrorism is a statistically tiny risk to public safety, yet looms alarmingly large in the public eye. MI5 instructs UK citizens to “always remain alert to the danger of terrorism”, yet since 2001 fewer British citizens have been killed by terrorism than by bee stings. It is clear, of course, why the risk has been socially amplified beyond proportion; the threat of random, violent attacks could scarcely be more emotionally charged, familiar, and salient.

The threat posed by excessive immigration is clearly less violent, but no less emotional in its (much exploited) suggestions of injustice and insecurity – and it is certainly made familiar by overrepresentation in the news.


Accordingly, the British public overestimate the scale of immigration, believing on average that immigrants make up 24.4% of the population (the real figure is c. 13%).

For similar reasons, we also overestimate the amount of public money that goes towards fraudulent state benefit claims, believing that £24 out of every £100 is claimed fraudulently when the true figure is 70p – an estimate roughly 34 times too high.

Importantly, the result of being mistaken about these numbers is not simply the loss of true beliefs about the world, or the trouble of worrying about things that are not half as bad as you thought. Placing undue emphasis on the availability of a risk also allows for hugely disproportionate, and often questionably legal, law-making. Let’s continue with the examples of terrorism, immigration, and state benefits.

Over the last 15 years, the UK has seen six terrorism-related Acts, which have variously allowed the state to indefinitely detain foreign nationals without charge, to extend pre-charge detention in terrorism cases, and to stop and search citizens without suspicion.

Since 2012, the Immigration Rules have required applicants or their partners to have an annual income of at least £18,600 (the National Minimum Wage, incidentally, amounts to approximately £13,500 a year), in possible violation of Article 8 of the European Convention on Human Rights.

Since 2013, people living in social housing with one or two spare bedrooms have had their benefit payments reduced by 14% or 25% respectively, even where a disability means they cannot relocate – a situation in which an estimated 64% of affected claimants found themselves. A challenge to the policy is to be heard before the Supreme Court.

These examples are illustrative only, and there are undeniably situations that merit a restriction on human rights, a limit on immigration, or restraints on welfare payments. But it is valuable to acknowledge that in a climate of acute public concern, legislators are far more able to push through questionable legislation by exploiting a fear that is rooted not in numbers or probabilities, but in emotion, familiarity, and salience.

The availability heuristic can also prevent the rational and cost-effective pursuit of goals we care about. Perhaps people support counter-terrorism efforts, for example, out of a concern for the right to life, or for the promotion of peace, or for fear of losing fundamental rights and freedoms in the West. If so, a better approach might be to prevent heart disease or cancer, to support organisations that encourage inter-state cooperation, or to back social change organisations that seek to entrench human rights.

The availability heuristic is arguably so pervasive in public policy that it seems not to warrant mention: as the argument goes, the public should get what it wants, and something must be seen to be done in response to perceived threats. But if we care about making decisions that will improve the state of the world, then the heuristic is devastating in its distortion of priorities. One response might be to read less news and think more thoughts; as Rolf Dobelli writes, “News is to the mind what sugar is to the body”.

References

Briñol, P., Petty, R. E., & Tormala, Z. L. (2006). The malleable meaning of subjective ease. Psychological Science, 17(3), 200-206.

Sunstein, C. R. (2003). Terrorism and probability neglect. Journal of Risk and Uncertainty, 26(2-3), 121-136.

Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5(2), 207-232.

Punishment, Consequentialism, and the Appeal of Retribution

Why do we punish? Philosophical justifications for punishment have traditionally fallen into two broad categories: Retribution and consequentialism.

Retributivism looks backwards towards historical wrongdoings, and justifies punishment as what the perpetrator ‘deserves’ given the nature and degree of the transgression committed. Retributive punishment intrinsically values ‘just deserts’, and is indifferent as to whether punishment will have any positive effects in the future.

Consequentialism, on the other hand, is future-directed: It views punishment as justified to the extent that it achieves a desirable outcome for society. The particular desired outcome varies, but goals have included:

– Deterrence of offenders through the experience of punishment;
– Rehabilitation of offenders through treatment during punitive measures;
– Social protection through incapacitation of dangerous offenders;
– The upholding of the legal system;
– Moral education of society at large.

In assessing the support for these theories of punishment, an interesting tension arises between people’s (and policy-makers’) stated preferences and their measured intuitions.

Advocating for retribution-based justice is now taboo amongst policy-makers and politicians: The UK’s Criminal Justice System website states that “[t]he purpose of the Criminal Justice System… is to deliver justice… by punishing the guilty and helping them to stop offending, while protecting the innocent”; President Obama earlier this year urged Palestinians and Israelis to “act with reasonableness and restraint, not vengeance and retribution” in order to achieve a “peaceful solution”. This explicit rejection of retribution is mirrored in psychological studies: when asked to provide justifications for punishment, people frequently report a motivation to deter future crimes (Ellsworth & Ross, 1983; Vidmar & Miller, 1980).

When studies assess behaviour rather than stated preferences, however, it seems that humans may be more innately retributivist than we might like to think.

In a study conducted by Jonathan Baron and Ilana Ritov (1993), participants were asked how best to punish a company for producing a vaccine that caused a child’s death. Some were told that a fine would incentivise the company to manufacture a safer product, while others were told that a fine would discourage the company from making the vaccine at all and, as there were no alternatives on the market, would ultimately lead to more deaths. Most participants were indifferent to this distinction, and wanted the company fined heavily regardless of the consequences.

In his 2006 study, Kevin Carlsmith presented participants with different information relating to a crime, and found that 97% were drawn to retribution-related information over deterrence-related information. John Darley et al. (2000) similarly found that punishment decisions were highly sensitive to the retribution-related criteria and that participants largely ignored the likelihood of reoffending.

These studies, however, were not able to isolate how much people value retribution alone, because usually punishment both inflicts damage (satisfying the retributive motive) and communicates a norm violation (satisfying the deterrence motive).

A new study by Molly Crockett et al. (2014) solved this problem and isolated retributive motives by examining how much people will pay to punish another person, even when that other person will never know they have been punished. (In essence, one player could ‘punish’ a player who had defected by paying to diminish the defector’s financial reward. The punished party, however, is not told their financial position until the end of the game, and so cannot know whether anything has been deducted as ‘punishment’.)
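To make the game’s structure concrete, here is a minimal sketch of the hidden-punishment mechanic as described above. The payoff numbers and names are invented for illustration and do not reflect the study’s actual parameters.

```python
# A toy version of 'hidden' punishment: the punisher pays a cost to reduce
# the defector's payoff, and the defector is not told, so the act cannot
# serve as a deterrent signal.

from dataclasses import dataclass

@dataclass
class Player:
    payoff: float
    informed: bool = False  # payoffs are revealed only at the end of the game

def hidden_punish(punisher, defector, cost, deduction):
    """Punisher pays `cost` to deduct `deduction` from the defector's payoff."""
    punisher.payoff -= cost
    defector.payoff -= deduction  # the defector receives no signal of this

victim = Player(payoff=10.0)
defector = Player(payoff=20.0)
hidden_punish(victim, defector, cost=2.0, deduction=6.0)
print(victim.payoff, defector.payoff)  # 8.0 14.0 - costly, invisible, non-deterrent
```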

“Hidden” punishment, by definition, cannot deter future norm violations, but was nevertheless used by both victims and observers of victims. These findings provide unambiguous behavioural evidence that people are willing to invest personal resources in pure retribution without the possibility of deterrence.

In many cases, of course, the feelings that motivate a desire for retribution may be admirable – such as moral outrage and sympathy and compassion for the victims – but, as Paul Bloom writes, “on many issues, [feelings such as] empathy can pull us in the wrong direction. The outrage that comes from adopting the perspective of a victim can drive an appetite for retribution”.

If we care at all whether a punishment results in lives saved or lives lost, we cannot subscribe to retribution as a guiding principle of justice, however much our intuitions want us to.

To say we have an innate taste for retribution is not, then, to say we should indulge it. It is rather to say that in punishment, as in all areas of social policy, careful reflection, empirical data, and impartial scholarship are always likely to be better decision-making tools than amateur analysis and intuition.

 

References:

Baron, J., & Ritov, I. (1993). Intuitions about penalties and compensation in the context of tort law. In Making Decisions About Liability and Insurance (pp. 17-33). Springer Netherlands.

Carlsmith, K. M. (2006). The roles of retribution and utility in determining punishment. Journal of Experimental Social Psychology, 42(4), 437-451.

Crockett, M. J., Özdemir, Y., & Fehr, E. (2014). The value of vengeance and the demand for deterrence. Journal of Experimental Psychology: General, 143(6), 2279.

Darley, J. M., Carlsmith, K. M., & Robinson, P. H. (2000). Incapacitation and just deserts as motives for punishment. Law and Human Behavior, 24(6), 659.

Ellsworth, P. C., & Ross, L. (1983). Public opinion and capital punishment: A close examination of the views of abolitionists and retentionists. Crime & Delinquency, 29(1), 116-169.

Vidmar, N., & Miller, D. T. (1980). Social psychological processes underlying attitudes toward legal punishment. Law and Society Review, 565-602.

Loss Aversion, Framing Effects, and Out of Court Settlement

Imagine you are walking along the street and find £10. Great! You put the money in your pocket. Later, you go to reach for it and it isn’t there any more – you have lost your £10. This feels bad, and, importantly, it likely feels worse than finding the money felt good.

This emotional asymmetry is the basis of loss aversion. In prospect theory, loss aversion refers to the tendency for people to strongly prefer avoiding losses to acquiring gains (even when the outcomes of the decision are de facto identical). As demonstrated by Amos Tversky and Daniel Kahneman, losses are on average at least twice as psychologically powerful as gains.

This has a dramatic effect on the way we make choices. If the outcome of a choice is presented as a gain, people are likely to choose a smaller, but guaranteed, gain over one that is larger but entails a degree of risk. Conversely, if the outcome is presented as a loss, people are reluctant to accept a definite loss, and will instead risk losing more for the chance to lose nothing at all. We are, then, strongly influenced by how a choice is framed.

This is particularly relevant for out-of-court settlement in civil litigation. Settling a legal dispute out of court is typically beneficial for both parties, yet far too few disputes settle when they (mathematically) should, and framing manipulation is one of the many reasons for settlement failure.

In 1996, law professor Jeffrey Rachlinski conducted a study in which half of the subjects, the ‘claimants’, could either accept a settlement of $200,000, or proceed to court where they would stand a 50% chance of being awarded $400,000. The other half, the ‘defendants’, could pay $200,000 to the claimant immediately, or continue to court and risk a 50% chance of being ordered to pay $400,000.

77% of the claimants were happy to take the settlement, but only 31% of defendants were happy to pay it. As predicted, claimants faced with choosing between a definite gain and the chance of a greater gain were risk averse, while defendants were risk seeking when choosing between a guaranteed loss and the chance to pay nothing.
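Prospect theory makes this asymmetry easy to see numerically. The sketch below applies the standard S-shaped value function to the study’s figures, using the commonly cited parameter estimates from Tversky and Kahneman’s later work (α ≈ 0.88, λ ≈ 2.25). This is an illustration of the theory, not a model used in the study itself, and probability weighting is ignored for simplicity.

```python
# Prospect-theory value function: concave for gains, convex and steeper for
# losses, evaluated relative to the status quo.

ALPHA = 0.88    # diminishing sensitivity (curvature of the value function)
LAMBDA = 2.25   # loss aversion: losses loom roughly twice as large as gains

def value(x):
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * (-x) ** ALPHA

# Claimant (gain frame): sure $200,000 vs a 50% chance of $400,000.
print(value(200_000) > 0.5 * value(400_000))    # True: take the settlement

# Defendant (loss frame): sure -$200,000 vs a 50% chance of -$400,000.
print(0.5 * value(-400_000) > value(-200_000))  # True: gamble on trial
```

Note that the reversal here is driven by the curvature of the value function (diminishing sensitivity); the loss-aversion coefficient matters most when a single choice mixes gains and losses.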

In 2014, Ian Belton and colleagues conducted a study where subjects were presented with a similar scenario, and also examined whether lawyers are as susceptible to framing effects as non-lawyers.


As predicted, a significant effect of framing was found for both groups: both non-lawyers and lawyers were much more likely to settle their claim in the gain scenario than in the loss scenario, though the effect for lawyers was less pronounced.

All parties involved in litigation, then, could benefit from a greater awareness of the biasing effect of framing, and as lawyers we should work to ensure that our clients’ decisions are not influenced by factors that ought to be irrelevant. Litigating in court involves substantial costs, uncertainty, and inconvenience, if not stress or distress. If disputes can be settled satisfactorily out of court, then all barriers to mutually beneficial settlement – including framing effects – should be addressed.

References:

Belton, I. K., Thomson, M., & Dhami, M. K. (2014). Lawyer and Nonlawyer Susceptibility to Framing Effects in Out‐of‐Court Civil Litigation Settlement. Journal of Empirical Legal Studies, 11(3), 578-600.

Kahneman, D., & Tversky, A. (1984). Choices, values, and frames. American Psychologist, 39(4), 341.

Plous, S. (1993). The psychology of judgment and decision making. Mcgraw-Hill Book Company.

Rachlinski, J. J. (1996). Gains, losses, and the psychology of litigation. Southern California Law Review, 70, 113.

 

The Conjunction Fallacy and the Conviction of John

The conjunction rule states that the probability of both A and B happening cannot exceed the probability of either one happening alone. The probability that I will roll a 6 on a die and flip heads on a coin, for example, cannot be greater than the probability that I will roll a 6, or the probability that I will flip heads. The conjunction fallacy occurs when this rule is violated.
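The rule is easy to verify by brute force. A minimal sketch, enumerating the twelve equally likely die-and-coin outcomes:

```python
# Enumerate the 12 equally likely (die, coin) outcomes and compare
# P(six), P(heads), and P(six and heads).

from itertools import product

outcomes = list(product(range(1, 7), ["heads", "tails"]))

p_six = sum(1 for d, c in outcomes if d == 6) / len(outcomes)
p_heads = sum(1 for d, c in outcomes if c == "heads") / len(outcomes)
p_both = sum(1 for d, c in outcomes if d == 6 and c == "heads") / len(outcomes)

print(p_six, p_heads, p_both)         # approx. 0.167, 0.5, 0.083
assert p_both <= min(p_six, p_heads)  # the conjunction rule holds
```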

Psychologists Amos Tversky and Daniel Kahneman demonstrated this with the case of Linda. Linda was described as “31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations”. Participants, having read this description, were asked to rank the probability of various statements about Linda being true. These included:

(1) Linda is a bank teller
(2) Linda is a bank teller and active in the feminist movement

A large majority of respondents thought (2) was more likely than (1). This violates the conjunction rule, as the probability of Linda being both a bank teller and active in the feminist movement cannot be greater than the probability of her being only one of these (a bank teller). The problem is that statement (2) seems more representative of Linda as described in the passage, and is mistakenly deemed to be more likely.

The conjunction fallacy has important consequences for the legal system, as it often appears in the construction of plausible causal scenarios. Tversky and Kahneman also studied responses to John P., described as a defendant with prior convictions for smuggling precious stones and metals. Respondents were asked to consider the likelihood that:

(1) John is a drug addict
(2) John killed one of his employees

Only 23% of respondents thought it was more likely that John was a murderer than an addict. However, when option (2) was changed to “killed one of his employees to prevent them from talking to the police”, around half of respondents thought he was more likely to be a murderer than an addict.

The rules of probability tell us that the more general a statement is, the more probable it is, and that every detail added to a series of events makes that series less likely. Just as it is more likely that I will see a car outside my window tomorrow than a red car, and more likely that I will see a red car than a red car with a dog in the back seat, it is more likely that John is a murderer than that he murdered specifically to prevent an employee talking to the police.
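The arithmetic is unforgiving. In the sketch below the probabilities are invented purely for illustration, but the structure holds for any values: each added detail multiplies in a factor of at most one.

```python
# Hypothetical probabilities, for illustration only. Each extra detail
# multiplies in a conditional probability <= 1, so the story can only
# become less likely as it becomes more specific.

p_car = 0.9             # hypothetical: some car appears outside the window
p_red_given_car = 0.1   # hypothetical: the car is red
p_dog_given_red = 0.05  # hypothetical: a dog is in the back seat

p_red_car = p_car * p_red_given_car          # 0.09
p_red_car_dog = p_red_car * p_dog_given_red  # 0.0045

print(p_car >= p_red_car >= p_red_car_dog)   # always True
```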

The problem is that extra detail gives rise to a more fathomable scenario; condemning John without any evident motive may seem premature, and the additional information makes the proposition seem more salient, more comprehensible, and (mistakenly) more probable.

This mistake can be costly to defendants who are faced with eloquent, detailed, but unproven hypotheses about why and how they have broken the law. As lawyers, then, we must be careful to distinguish between causal scenarios that are supported by evidence and speculative storytelling. In the latter, every speculative detail doesn’t just cloud judgement and complicate decision-making; it dramatically reduces the likelihood of the explanation being true at all.

References

Tversky, A., & Kahneman, D. (1983). Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. Psychological Review, 90(4), 293.

Tversky, A., & Kahneman, D. (1981). Judgments of and by representativeness (No. TR-3). Stanford University Department of Psychology.

Status Quo Bias in the Law

In 1832, the Great Reform Act confirmed the exclusion of women from the electorate. In 1973, the Matrimonial Causes Act confirmed the exclusion of same-sex couples from marriage. Thankfully, these exclusions were repealed in 1928 and 2013 respectively. The law evolves, and necessarily so.

Yet all of these changes have taken place slowly, and in the face of the pervasive resistance to change known as status quo bias: A cognitive error where one option is incorrectly judged to be better than another simply because it represents the status quo.

Several studies have confirmed the ubiquity of this effect. In the famous ‘mug experiment’, students were asked to fill out a questionnaire, and were then rewarded with either a mug or a large chocolate bar. After receiving the gifts, they were offered the chance to exchange their gift for the respective other option. Approximately 90% declined. As soon as the ‘status quo’ was established as either mug or chocolate, students were happy to retain their original gift.

Though the choice of gift seems trivial, the consequences of giving an undue bonus to the status quo can be significant: People fail to move their existing investments to more lucrative options, for example, or are resistant to changing their long-term medication for a more effective alternative.

The bias also exerts a powerful influence over the formulation and interpretation of the law. Consider the criminalisation of marijuana use in the Misuse of Drugs Act 1971 and the paucity of legal restraints on the adult consumption of alcohol and tobacco. Given the relative societal costs of these substances, this state of affairs seemingly makes little sense, and we have reason to suspect that status quo bias may be part of the problem.

One way to check is via the Reversal Test, a heuristic developed by philosophers Nick Bostrom and Toby Ord. The test posits that when a proposal to change a certain parameter is thought to have bad overall consequences, one should consider a change to the same parameter in the opposite direction. If this is also thought to have bad overall consequences, then the onus is on those who reach these conclusions to explain why our position cannot be improved through changes to this parameter. If they are unable to do so, then we have reason to suspect that they suffer from status quo bias.
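Stated as a procedure, the test is almost mechanical. Here is a minimal sketch, with placeholder judgement inputs standing in for whatever a critic actually believes:

```python
# The Reversal Test as a decision procedure, following Bostrom and Ord's
# formulation. The boolean inputs are placeholders for a critic's judgements.

def reversal_test(increase_seems_bad, decrease_seems_bad):
    if increase_seems_bad and decrease_seems_bad:
        # Both directions of change are judged bad, so the burden is on the
        # critic to explain why the current value is (locally) optimal.
        return ("Suspect status quo bias unless the current parameter "
                "value can be shown to be optimal.")
    return "No status quo bias implied by this test."

# Hypothetical application to the drug-law example discussed below:
# criminalising alcohol seems bad, and decriminalising marijuana seems bad.
print(reversal_test(increase_seems_bad=True, decrease_seems_bad=True))
```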

The parameter at hand with drug laws is the criminalising of substances that pose a threat to public health and safety. Those who believe that we should not decriminalise marijuana should therefore consider if they would endorse a shift in the other direction; namely criminalising substances of similar or greater toxicity (such as alcohol), and imposing stronger penalties on transgressors. This position seems unlikely to be either popular or justifiable on public health grounds.

The Reversal Test is also illuminating when applied to other areas of law: If intensive factory farming, 40-hour working weeks, and labelling non-nationals “illegal” were not established norms, would we find that they should be?

Through its invisibility and ubiquity, the status quo bias is a silent threat to legislative progress. Departing from the status quo where necessary, and recognising and resisting the bias where it arises, will surely result in wiser, better motivated, and more responsive legislation.

References

Kahneman, D., & Tversky, A. (1984). Choices, values, and frames. American Psychologist, 39(4), 341.

Gilovich, T., Griffin, D., & Kahneman, D. (Eds.). (2002). Heuristics and biases: The psychology of intuitive judgment. Cambridge University Press.

Bostrom, N., & Ord, T. (2006). The Reversal Test: Eliminating status quo bias in applied ethics. Ethics, 116(4), 656-679.

Samuelson, W., & Zeckhauser, R. (1988). Status quo bias in decision making. Journal of risk and uncertainty, 1(1), 7-59.

Confirmation Bias and the Law

“The human understanding when it has once adopted an opinion […] draws all things else to support and agree with it” — Francis Bacon, 1620

Once we hold a particular view or hypothesis, we are more likely to search for, acknowledge, give credence to, and remember information that confirms it, regardless of whether that information is true. This phenomenon, known as confirmation bias, may be one of the most dangerous cognitive biases affecting decision-making in a judicial context.

Consider psychologist Peter Wason’s so-called 2-4-6 task. Participants were told that the experimenter had a rule in mind that classified sets of three numbers, and that “2-4-6” conformed to that rule. The subjects then proposed their own sets of numbers, were told whether each conformed to the rule or not, and were allowed to continue until they felt sure they knew the rule.

The rule was actually “any set of three increasing numbers”, but participants typically had a difficult time discovering this. They often believed the rule to be something such as “even numbers increasing” or “numbers increasing in equal intervals”, and the positive feedback received for sets following these incorrect rules strengthened their convictions. The fact that participants failed even to consider generating sequences seriously at odds with their focal hypothesis (such as 100-40-17) demonstrates the strength of confirmation bias.
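The dynamic is easy to reproduce in miniature. A sketch of the task, showing why positive testing alone can never refute an over-narrow hypothesis:

```python
# Wason's 2-4-6 task in miniature. The experimenter's rule is just "any
# increasing triple"; narrower hypotheses keep receiving 'yes' answers.

def experimenter_rule(a, b, c):
    return a < b < c  # the actual rule: any three increasing numbers

# Triples generated from the over-narrow hypothesis "even numbers increasing
# in equal intervals" all conform, reinforcing the wrong rule:
for triple in [(2, 4, 6), (6, 8, 10), (20, 40, 60)]:
    print(triple, experimenter_rule(*triple))  # True every time

# Tests designed to FAIL under the hypothesis are far more informative:
print((1, 2, 99), experimenter_rule(1, 2, 99))      # True - hypothesis refuted
print((100, 40, 17), experimenter_rule(100, 40, 17))  # False - also informative
```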

As a form of motivated cognition, confirmation bias affects all levels of belief formation and behaviour. If we dislike somebody, we give disproportionate weight to any evidence that they may be unpleasant, ignore evidence to the contrary, and interpret ambiguous evidence in our favour. At a wider level, we choose the friends, read the websites, and support the political parties we expect to confirm our existing views of the world.

The implications of this bias for the judicial system are multiple. Trials, for example, are frequently long and complex, and studies have shown that members of the jury often form their decisions early and interpret subsequent evidence in a way that supports their premature conclusions. This process leads to a polarisation of attitudes among jurors, as each member of the jury becomes more and more entrenched in their position as the trial develops.

In the 2013 murder trial of David Camm, the defence argued that Camm had been charged with the murder of his family solely due to the effect of confirmation bias in the investigation. Every piece of evidence against him transpired to be inaccurate or unreliable, yet the charges against him were not dropped (though Camm was eventually acquitted).

The Central Park jogger case is another example. In 1989, five teenagers confessed to raping and assaulting a woman as she jogged through Central Park. They quickly retracted their statements, alleging that police had coerced their confessions. No physical or eyewitness evidence linked the suspects to the attack; in fact, semen recovered from the victim appeared to come from a single donor and did not match any of the five suspects. Nevertheless, a jury convicted all five of them.

The detectives’ and prosecutors’ statements made at the time demonstrated the intensity of their commitment to their theory of the case. They found all evidence that severely undermined their theory incredible, or modified their version of events to accommodate the new evidence within their original hypothesis. (In 2002, another man confessed to the crime. His DNA matched that recovered from the victim, and a judge overturned the original defendants’ convictions.)

Of course, all lawyers arguably ‘exploit’ confirmation bias to some extent, as to build a case is to argue that the evidence at hand supports the conclusion dictated by the client. However, there is a clear difference between evaluating all evidence impartially in order to build a case consciously, and using selected evidence to justify a conclusion already drawn. Being able to objectively assess a situation is critically important to the practice of law. Engaging in case-building via confirmation bias without being aware of doing so may lead to overconfidence in the strength of the resulting case.

So, what can we do? Unfortunately, simply being aware of confirmation bias is not enough to mitigate its effects, and neither is it a matter of lacking intelligence. As studies have shown, while some cognitive biases do correlate with IQ, confirmation bias does not.

The most effective strategy is to develop a mind-habit of always ‘thinking the opposite’. Always consider the possibility that you might be (drastically) wrong, actively search for evidence that this might be the case, consider how new evidence could challenge as well as strengthen your beliefs, and be discerning about the information you choose to process.

References

Devine, P. G., Hirt, E. R., & Gehrke, E. M. (1990). Diagnostic and confirmation strategies in trait hypothesis testing. Journal of Personality and Social Psychology.

Myers, D. G., & Lamm, H. (1976). The group polarization phenomenon. Psychological Bulletin, 83(4), 602-627.

Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2(2), 175-220.

Roach, K. (2010). Wrongful convictions: Adversarial and inquisitorial themes. North Carolina Journal of International Law and Commercial Regulation.

Russo, J. E., & Meloy, M. G. (2002). Hypothesis generation and testing in Wason’s 2–4–6 task. Unpublished manuscript.

Schanberg, S. H. (2002, November 26). A journey through the tangled case of the Central Park jogger. Village Voice, p. 36.

Stanovich, K. E., West, R. F., & Toplak, M. E. (2013). Myside bias, rational thinking, and intelligence. Current Directions in Psychological Science, 22(4), 259-264.
