Over the decades, there has been much controversy over the effectiveness of clinical predictions, which are mostly based on experts’ intuition. Research from the past decades has shown that statistical methods are more accurate than clinical predictions, and other studies have examined the heuristic principles people use when predicting and judging outcomes under uncertainty or with insufficient information. Although relying upon these heuristics simplifies judgment to a certain degree, it may lead to severe errors. Kahneman and Tversky (1974) proposed three such heuristic principles.

The first is called the availability heuristic, wherein predictions are made based on how readily relevant instances come to mind. The second is anchoring, wherein predictions are adjusted from an initial numerical estimate or “anchor”. The third is called the representativeness heuristic, wherein predictions are made based on the degree to which a case resembles apparently similar cases. This paper studies one of these heuristic principles, namely the representativeness heuristic, to show how it can bias clinical predictions and hence that such heuristics are indeed less accurate than predictions based upon statistical methods.

First, the author feels compelled to give a little background on a few studies of the ongoing clinical-statistical controversy. In 1996, Grove and Meehl showed that the statistical method “is almost invariably equal to or superior to clinical method” (p. 293) in terms of predictive accuracy. They analyzed secondary data from 136 studies published in English since the 1920s that dealt with the prediction of health-related phenomena or human behaviour.

Each of these studies had to contain at least one prediction of each kind: at least one clinical prediction, that is, one based on human judgment, and at least one mechanical or statistical prediction. As mentioned earlier, all of the studies they included showed that the statistical method is indeed almost always equal to or superior to the clinical method, because statistical predictions obtained from organized data are almost always free from bias. These data are observed from actual experience and recorded with precise instruments instead of relying on unaided memory.

Moreover, statistical inferences are more objective than the human mind, which can be biased at times, or which can neglect important attributes that must be considered before reaching a conclusion, sometimes resulting in severe errors of prediction. Hence, predictions obtained from these statistical methods produce unbiased results, in contrast with predictions made from human judgment. There are many reasons and examples that show the superiority of the statistical method over the clinical method.

In this paper, one type of heuristic is presented based on the observations of Kahneman and Tversky in their paper On the Psychology of Prediction (1973). Their paper was chosen because it presents how people, specifically clinicians, judge certain events based on similar events that happened in the past. In the end, this paper shows how such a heuristic (representativeness) can lead to possibly severe errors in judgment compared with the use of statistical methods.

Data analysis, discussion, and conclusions are all based upon the findings of Kahneman and Tversky (1973) and Grove and Meehl (1996). In 1973, Kahneman and Tversky discussed two classes of prediction: categorical prediction, in which predictions are expressed nominally, and numerical prediction, in which predictions are expressed numerically. They first examined categorical prediction by dividing 248 participants into three groups: 69 participants in the base-rate group, 65 in the similarity group, and 114 in the prediction group.

The base-rate group was asked to estimate the percentage of first-year graduate students in the US enrolled, at the time of the study, in each of nine fields of specialization, namely Business Administration, Computer Science, Engineering, Humanities and Education, Law, Library Science, Medicine, Physical and Life Sciences, and Social Science and Social Work. The similarity group was given a personality sketch (see Kahneman and Tversky, p. 238) and asked to rank the nine fields by “how similar is Tom W. to the typical graduate student in each of the following nine fields of graduate specialization?”. The prediction group, which consisted of graduate students in psychology at three major universities in the United States, was given the same personality sketch as the similarity group along with some additional information (see Kahneman and Tversky, p. 239) and was asked to predict Tom W.’s choice of specialization.

Kahneman and Tversky compared the results of these three groups by presenting a table (see Kahneman and Tversky, p. 238) and computing the product-moment correlations between its columns. In so doing, they confirmed their hypothesis that most people predict certain events by representativeness. Kahneman and Tversky explained that this happens because the participants ignored certain important features before drawing their conclusions, thereby violating the normative rules of prediction.
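The product-moment (Pearson) correlation they computed can be illustrated with a small sketch. The numbers below are hypothetical, not the study’s actual data; they merely mimic its qualitative pattern, in which predictions track judged similarity rather than judged base rates across the nine fields.

```python
# Illustrative sketch only: hypothetical column values standing in for the
# mean ranks of the nine fields in Kahneman and Tversky's table.

def pearson(x, y):
    """Product-moment correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical values for the nine fields of specialization.
similarity = [2, 9, 8, 4, 6, 7, 5, 3, 1]      # mean similarity ranks
prediction = [3, 9, 7, 4, 6, 8, 5, 2, 1]      # mean predicted-likelihood ranks
base_rate  = [15, 7, 9, 20, 9, 3, 8, 12, 17]  # judged base rates (%)

print(pearson(similarity, prediction))  # high: prediction tracks similarity
print(pearson(base_rate, prediction))   # negative: base rates are ignored
```

A high similarity-prediction correlation together with a negative base-rate-prediction correlation is exactly the signature of prediction by representativeness.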

The participants, in essence, ignored the three types of information relevant to any statistical prediction, namely prior or background information (presented to the participants as base rates of the fields of graduate specialization), specific evidence concerning the individual case (presented to the participants as the personality sketch of Tom W.), and the expected accuracy of the prediction. The statistically correct method of predicting Tom W.’s choice of specialization would be to weigh the specific evidence against the prior information in proportion to the expected accuracy of that evidence.
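For a categorical prediction, the statistically correct way to combine prior information with specific evidence is Bayes’ rule. A minimal hypothetical sketch, with invented base rates and likelihoods for just two of the nine fields (none of these numbers come from the study):

```python
# Hypothetical Bayes' rule sketch: combining base rates (prior information)
# with the diagnosticity of the specific evidence (the personality sketch).
# All numbers are invented for illustration only.

base_rates = {"Computer Science": 0.07, "Humanities and Education": 0.20}
# Invented P(sketch resembles Tom W. | field):
likelihood = {"Computer Science": 0.50, "Humanities and Education": 0.10}

# Posterior is proportional to prior times likelihood.
joint = {f: base_rates[f] * likelihood[f] for f in base_rates}
total = sum(joint.values())
posterior = {f: joint[f] / total for f in joint}

for field, p in posterior.items():
    print(field, round(p, 3))
```

Even evidence five times more typical of Computer Science than of Humanities and Education does not make Computer Science overwhelmingly likely once its lower base rate is weighed in; predicting by representativeness amounts to using the likelihoods alone and discarding the priors.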

As Kahneman and Tversky explain, “when expected accuracy decreases, predictions should become more regressive, that is, closer to the expectations based on prior information” (p. 239). However, the participants in their study made predictions from the specific evidence in Tom W.’s personality sketch without considering the prior probabilities at all. Kahneman and Tversky (1973) also examined how numerical predictions can lead to biased judgments and severe errors. In a study designed analogously to their study of categorical prediction, they showed that people again tend to predict by representativeness.
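The regressiveness they describe can be sketched as shrinkage toward the prior expectation. In this hypothetical example (the quantity, the validity scale, and the numbers are invented for illustration), a prediction is pulled from the evidence-based estimate back toward the base-rate mean in proportion to the evidence’s expected accuracy:

```python
# Hypothetical sketch of regressive prediction: shrink an evidence-based
# estimate toward the prior mean in proportion to the expected accuracy
# (predictive validity) of the evidence.

def regressive_prediction(prior_mean, evidence_estimate, validity):
    """validity in [0, 1]: 1 = perfectly diagnostic evidence,
    0 = worthless evidence (predict the base rate alone)."""
    return prior_mean + validity * (evidence_estimate - prior_mean)

prior_gpa = 3.0     # base-rate expectation (invented)
evidence_gpa = 3.9  # estimate suggested by a vivid description (invented)

for validity in (1.0, 0.5, 0.0):
    print(validity, regressive_prediction(prior_gpa, evidence_gpa, validity))
# With validity 0.5 the prediction is 3.45, midway toward the prior.
```

Predicting 3.9 regardless of how diagnostic the description is, as the participants effectively did, is prediction by representativeness: the prediction is as extreme as the evidence even when the evidence deserves little weight.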

That is, most predict an outcome using the score that is most representative of the description they were given. Kahneman and Tversky thus showed that whether people are given nominal or numerical data, they tend to predict outcomes by representativeness. Many may think that predicting by representativeness is more efficient than statistical methods, since one need only consider similar or representative events, while statistical methods require seemingly rigorous tasks such as observing and gathering data and computing many measures such as the mean, the standard deviation, and the like.

However, such predictions can be less accurate, since they fail to consider some important parts of the analysis before drawing conclusions, whereas statistical methods consider all of the important parts required to analyze the data completely. Statistical and mechanical methods reduce bias because they rely on precise measuring instruments, whereas heuristic methods rely almost entirely on memory or past knowledge, which is often insufficient or cannot wholly represent a given event.

Moreover, results derived from heuristic methods such as representativeness can vary depending upon the perceptions of different people. Results from statistical methods, on the other hand, vary only because of between-group or within-group variation; even if the same data were given to five hundred different people, the statistical method would still yield the same result.