# Heuristics

Human decision making exhibits what has been called bounded rationality. Human thinking is rational in form but falls short of being truly rational, for several reasons, including limited processing capacity: there are simply too many variables to calculate. (In addition, we must always make decisions under conditions of incomplete, and often incorrect, information.)

So how do we get by? The human mind uses heuristics to think about things that would otherwise be quite complicated. These heuristics are short-cuts -- time-saving recipes that often work, but not always. They can be the source of irrational behavior. We will consider three heuristics: representativeness, availability, and anchoring.

### The Representativeness Heuristic

This is where people evaluate the probability of something based on how similar it is to a general class. For example, in evaluating the chances that the person in front of them is a librarian, they ask themselves how similar the person is to their image of a librarian. The more similar, the higher the perceived probability.

#### Insensitivity to prior probability of outcomes

Subjects are shown a brief description of an individual, allegedly drawn at random from a group of 100 professionals, 70% of whom are engineers and 30% lawyers. Another group of subjects is told the opposite: 30% engineers and 70% lawyers. Subjects ignored these base rates and based their judgments entirely on the descriptions.

When given no description at all, subjects used the base rate to estimate the probability that a given person was an engineer. But when given this description:

• Dick is a 30-year-old man. He is married with no children. A man of high ability and high motivation, he promises to be quite successful in his field. He is well liked by his colleagues.

Subjects said the probability was 50%, even though the description is entirely uninformative and the base rates were 70/30.
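The normative answer follows from Bayes' rule: a worthless description has a likelihood ratio of 1 and should leave the base rate untouched. A minimal sketch (the function name and numbers are illustrative, not from the original study's materials):

```python
def posterior_engineer(prior, likelihood_ratio):
    """P(engineer | description) by Bayes' rule, where likelihood_ratio
    is P(description | engineer) / P(description | lawyer)."""
    return (prior * likelihood_ratio) / (prior * likelihood_ratio + (1 - prior))

# An uninformative description (likelihood ratio = 1) should leave the
# base rate unchanged -- not pull the estimate toward 50%.
p_high_base = posterior_engineer(0.70, 1.0)  # stays at the 70% base rate
p_low_base = posterior_engineer(0.30, 1.0)   # stays at the 30% base rate
```

Only a description that is genuinely diagnostic (likelihood ratio far from 1) should move the estimate away from the prior.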

#### Insensitivity to sample size

When given this problem:

• A certain town is served by two hospitals. In the larger hospital, about 45 babies are born each day. In the smaller hospital, about 15 babies are born each day. As you know, about 50 percent of all babies are boys. However, the exact percentage varies from day to day. Sometimes it may be higher than 50 percent, sometimes lower.
• For a period of 1 year, each hospital recorded the days on which more than 60 percent of the babies born were boys. Which hospital do you think recorded more such days?

Most people answer that the two hospitals recorded about the same number of such days. Yet the smaller hospital is much more likely to deviate far from 50%. Compare tossing a coin twice -- you could get 0% heads (25% chance), 50% heads (50% chance), or 100% heads (25% chance) -- with tossing it 100 times, where straying far from 50% is very unlikely.
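The claim can be checked directly with the binomial distribution (a sketch; the helper name is my own):

```python
from math import comb

def p_more_than_60_pct_boys(n, p=0.5):
    """Chance that strictly more than 60% of n babies born on a day are boys."""
    cutoff = int(0.6 * n)  # "more than 60%" means at least cutoff + 1 boys
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(cutoff + 1, n + 1))

p_small = p_more_than_60_pct_boys(15)  # roughly 0.15 per day
p_large = p_more_than_60_pct_boys(45)  # roughly 0.07 per day
```

The smaller hospital is about twice as likely to record such a day, so over a year it records many more of them.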

People solve this by looking at the similarity of the two situations: if they look similar, they expect similar outcomes.

#### Misconceptions of chance

A coin is to be tossed 6 times. Which sequence is more likely?

1. H T H T T H
2. H H H T T T

They are both equally likely, but most people think the first is more likely, because it "looks more random". In other words, they compare each sequence with their idealized image of what random tosses should look like, and if a sequence matches that image, they judge it more likely.
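The equality is easy to verify by brute force: enumerate all 64 equally likely six-toss sequences and count how often each target appears.

```python
from itertools import product

# All 2^6 = 64 equally likely sequences of six fair-coin tosses.
outcomes = ["".join(t) for t in product("HT", repeat=6)]

p_irregular = outcomes.count("HTHTTH") / len(outcomes)  # the "random-looking" one
p_regular = outcomes.count("HHHTTT") / len(outcomes)    # the "patterned" one
# Each specific sequence occurs exactly once, so both probabilities are 1/64.
```

Any *specific* sequence of six tosses has probability (1/2)^6, regardless of how patterned it looks.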

#### Ignorance of regression toward the mean

Some researchers found that flight instructors believed praise is bad for pilots and harsh criticism is good for them. The basis for this belief was the instructors' observation that praise after a particularly great landing was usually followed by a worse landing, while criticism after a really bad landing was usually followed by a better one. What is really happening is regression toward the mean: the quality of a landing is a function of both skill and chance. Sometimes the chance factors add up to produce an unusually good or bad landing, but the next landing is unlikely to be equally unusual: it will be closer to the pilot's average. So really bad landings tend to be followed by better ones, and really good landings by worse ones.
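A tiny simulation makes the effect visible. Model each landing as skill plus fresh luck (the skill value, noise scale, and cutoffs below are arbitrary assumptions for illustration):

```python
import random

random.seed(0)
# Landing quality = fixed skill + luck. Skill is constant and luck is
# drawn fresh each time, so consecutive landings are independent draws.
skill = 0.0
landings = [skill + random.gauss(0, 1) for _ in range(100_000)]

# Condition on the previous landing being extreme, look at the next one.
after_bad = [landings[i + 1] for i in range(len(landings) - 1) if landings[i] < -2]
after_good = [landings[i + 1] for i in range(len(landings) - 1) if landings[i] > 2]

mean_after_bad = sum(after_bad) / len(after_bad)    # near the average, not near -2
mean_after_good = sum(after_good) / len(after_good)  # near the average, not near +2
```

The landings after an extreme one average out near the pilot's true skill level, with no praise or criticism involved at all.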

The same is true of the heights of fathers and sons. Exceptionally tall fathers tend to produce tall sons, but not as exceptionally tall as the fathers. Exceptionally short fathers tend to produce short sons, but not as exceptionally short.

### The Availability Heuristic

This is a strategy for evaluating the frequency of something based on the ease with which instances of it can be brought to mind. For example, you might assess the risk of heart attack by recalling how many people you know who have had one.

#### Retrievability of instances

Subjects heard a list of well-known celebrities of both sexes and were then asked whether the list contained more men than women. In some lists the men were more famous than the women; in others, the women were more famous. In each case, subjects' judgments of the proportion of women were determined by the relative fame of the women on the list.

#### Effectiveness of search set

Suppose you sample words (of three letters or more) at random from an English text. Is it more likely that a word starts with "r" or that "r" is its third letter? People approach this problem by recalling words that begin with "r" (such as "road") and words that have "r" in the third position (such as "car"), and assess the relative frequency by how many of each they can think of. Since it is much easier to search memory by first letter, most people conclude that words starting with "r" are more common. In fact, "r" appears more often in the third position, and the same is true of "k".

Similarly, suppose you are asked to rate the frequency with which abstract words (like "thought", "love") and concrete words ("desk", "water") appear in written English. One way people approach this is to count the contexts in which such words are likely to appear. It is much easier to think of contexts for abstract words (e.g., love stories for "love") than for concrete words, so people judge abstract words to be more common.

#### Biases of imaginability

Sometimes we carry no stored instances to recall, only a rule for generating them. Typically, one uses the rule to generate a few instances and judges frequency by how easy that generation was.

For example, consider a group of 10 people who form committees of k members, 2 ≤ k ≤ 8. How many different committees of k members can be formed (answer for each k)? Which k yields the most committees?

What people actually do is observe that it is easy to construct several disjoint committees of 2 members, but hard to construct disjoint committees of 8 members (there is room for only one). So they conclude that small committees are more numerous than large ones.

The true answer is that k = 5 yields the maximum number of committees (252), and the count is the same for k and for 10 − k: every committee of 3 members leaves behind a potential committee of 7. The counts for k = 0 through 10 are {1, 10, 45, 120, 210, 252, 210, 120, 45, 10, 1}.
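These counts are just binomial coefficients, which a few lines of Python confirm:

```python
from math import comb

# Number of distinct committees of k members drawn from 10 people.
counts = {k: comb(10, k) for k in range(2, 9)}

best_k = max(counts, key=counts.get)
# Picking the k members also fixes the 10 - k people left out,
# which is why the counts are symmetric around k = 5.
symmetric = all(counts[k] == counts[10 - k] for k in range(2, 9))
```

The imaginability heuristic points the wrong way here: large committees are exactly as numerous as their small complements.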

#### Association bias

People judge the probability that two events co-occur by how semantically related they are. So if you give them the clinical diagnoses of 100 mental patients together with drawings made by those patients, and then ask about the association between diagnoses and features of the drawings, they will overestimate the co-occurrence of "peculiar eyes" with "paranoid schizophrenic", because the two seem related.

### The Anchoring Heuristic

One way people estimate quantities is to start from an initial value that is known or easy to construct, and then adjust it to fit the situation at hand. These adjustments are typically insufficient, so the final estimate stays closer to the initial value (the anchor) than it should.

A roulette wheel with 100 numbers (0 to 99) is spun in front of a subject. Whatever number comes up, the subject is asked whether the percentage of United Nations countries that are African is higher or lower than that number, and then for a best guess of the percentage. Median estimates from subjects whose wheel landed on 65 were higher than median estimates from subjects whose wheel landed on 10. This happens even though subjects know the starting number is unrelated to the true percentage.

#### Conjunctive and disjunctive events

In a study by Bar-Hillel (1973), subjects were asked to bet on a pair of events. Three types of events were used altogether:

1. (single) draw a red marble from a bag containing 50% red marbles
2. (conjunctive) draw a red marble 7 times in a row from a bag containing 90% red marbles
3. (disjunctive) draw at least one red marble in 7 tries (with replacement) from a bag containing 10% red marbles

Given the choice between the single event and the conjunctive, people chose the conjunctive. But it is actually the worse bet (its probability is only .48). Given the choice between the single event and the disjunctive, people chose the single event, even though the disjunctive is the better bet (p = .52).

People seem to approach the problem this way: To evaluate the conjunctive, they realize that a single draw has a 90% chance of success. They then factor that down because they have to get all 7 right. But they never factor it down enough. To evaluate the disjunctive, they realize that a single draw has only a 10% chance of success. They then factor that up because they have multiple chances to get one red one. But they never factor it up quite enough.
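The correct probabilities take one line each to compute:

```python
p_single = 0.5                       # one draw from a bag with 50% red marbles
p_conjunctive = 0.9 ** 7             # seven reds in a row from a 90% red bag
p_disjunctive = 1 - (1 - 0.1) ** 7   # at least one red in seven tries, 10% red bag
# p_conjunctive comes out just under .48; p_disjunctive just over .52.
```

Anchoring explains the pattern: people start from the single-draw probability (90% or 10%) and adjust toward the compound probability, but not far enough.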

Biases in the evaluation of compound events are particularly significant in the context of planning. Any complex undertaking has the character of a conjunctive event: many things must click into place for the whole thing to work. Even when each individual event is very likely, the overall probability can be very low. People greatly overestimate the probability of the conjunctive event, leading to massive time and cost overruns in real projects.

Conversely, disjunctive structures are typically encountered in the evaluation of risks. A complex system, such as a nuclear reactor or a human body, will malfunction if just one key component fails. Even if the probability of failure of any one component is very low, the probability that something, somewhere, goes wrong can be very high. People consistently underestimate the probability that a complex system, like the Challenger, will fail.
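Both effects follow from multiplying probabilities. The step counts and per-item probabilities below are made-up round numbers chosen only to show the scale of the effect:

```python
# Planning (conjunctive): a project succeeds only if every step succeeds.
p_step = 0.95
p_project = p_step ** 50                 # a 50-step project succeeds well under 10% of the time

# Risk (disjunctive): a system fails if any single component fails.
p_part_fails = 0.001
p_system_fails = 1 - (1 - p_part_fails) ** 1000  # over 60% with 1000 parts
```

Fifty steps that are each 95% likely, or a thousand parts that each fail one time in a thousand, produce overall probabilities far outside most people's intuition.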