Some jobs are noise-free. Clerks at a bank or a post office perform complex tasks, but they must follow strict rules that limit subjective judgment and guarantee, by design, that identical cases will be treated identically. In contrast, medical professionals, loan officers, project managers, judges, and executives all make judgment calls, which are guided by informal experience and general principles rather than by rigid rules. And if they don’t reach precisely the same answer that every other person in their role would, that’s acceptable; this is what we mean when we say that a decision is “a matter of judgment.” A firm whose employees exercise judgment does not expect decisions to be entirely free of noise. But often noise is far above the level that executives would consider tolerable—and they are completely unaware of it.
Noise is often insidious: It causes even successful companies to lose substantial amounts of money without realizing it. How substantial? To get an estimate, we asked executives in one of the organizations we studied the following: “Suppose the optimal assessment of a case is $100,000. What would be the cost to the organization if the professional in charge of the case assessed a value of $115,000? What would be the cost of assessing it at $85,000?” The cost estimates were high. Aggregated over the assessments made every year, the cost of noise was measured in billions—an unacceptable number even for a large global firm. The value of reducing noise even by a few percentage points would be in the tens of millions. Remarkably, the organization had completely ignored the question of consistency until then.
Noise vs. Bias
When people consider errors in judgment and decision making, they most likely think of social biases like the stereotyping of minorities or of cognitive biases such as overconfidence and unfounded optimism. The useless variability that we call noise is a different type of error. To appreciate the distinction, think of your bathroom scale. We would say that the scale is biased if its readings are generally either too high or too low. If your weight appears to depend on where you happen to place your feet, the scale is noisy. A scale that consistently underestimates true weight by exactly four pounds is seriously biased but free of noise.
It is obviously useful to an organization to know about bias and noise in the decisions of its employees, but collecting that information isn’t straightforward. Different issues arise in measuring these errors. A major problem is that the outcomes of decisions often aren’t known until far in the future, if at all. Loan officers, for example, frequently must wait several years to see how loans they approved worked out, and they almost never know what happens to an applicant they reject.
Unlike bias, noise can be measured without knowing what an accurate response would be. To illustrate, picture an exhibit in which four teams of shooters (A, B, C, and D) each fired a series of shots at a target, and then imagine that the targets were erased. You would know nothing about the teams’ overall accuracy, but you could be certain that something was wrong with the scattered shots of teams B and D: Wherever the bull’s-eye was, their shots did not all land close to it. All that’s required to measure noise in judgments is a simple experiment in which a few realistic cases are evaluated independently by several professionals. Here again, the scattering of judgments can be observed without knowing the correct answer.
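Scoring such an experiment is straightforward. A minimal sketch, using entirely hypothetical assessments: the noise in each case is simply the dispersion of the independent judgments, computed without any reference to a correct answer.

```python
import statistics

# Hypothetical noise audit: several professionals independently assess
# the same cases (values in thousands of dollars). All figures are
# illustrative, not data from the study described above.
assessments = {
    "case_1": [100, 115, 85, 140, 95],
    "case_2": [250, 260, 245, 255, 250],
}

for case, values in assessments.items():
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    # Noise can be summarized as the standard deviation of the judgments,
    # or as that deviation relative to the mean judgment; neither number
    # requires knowing the "true" value of the case.
    print(f"{case}: mean={mean:.0f}, noise (SD)={sd:.1f}, "
          f"relative noise={sd / mean:.0%}")
```

Here case_1 shows far more relative noise than case_2 even though the two panels might be equally accurate on average, which is exactly the pattern an audit is designed to surface.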
Performing a Noise Audit
The point of a noise audit is not to produce a report. The ultimate goal is to improve the quality of decisions, and an audit can be successful only if the leaders of the unit are prepared to accept unpleasant results and act on them. Such buy-in is easier to achieve if the executives view the study as their own creation.
The problem of noise is effectively invisible in the business world; we have observed that audiences are quite surprised when the reliability of professional judgment is mentioned as an issue. What prevents companies from recognizing that the judgments of their employees are noisy? The answer lies in two familiar phenomena: Experienced professionals tend to have high confidence in the accuracy of their own judgments, and they also have high regard for their colleagues’ intelligence. This combination inevitably leads to an overestimation of agreement. When asked about what their colleagues would say, professionals expect others’ judgments to be much closer to their own than they actually are. Most of the time, of course, experienced professionals are completely unconcerned with what others might think and simply assume that theirs is the best answer. One reason the problem of noise is invisible is that people do not go through life imagining plausible alternatives to every judgment they make.
Dialing Down the Noise
The most radical solution to the noise problem is to replace human judgment with formal rules—known as algorithms—that use the data about a case to produce a prediction or a decision. People have competed against algorithms in several hundred contests of accuracy over the past 60 years, in tasks ranging from predicting the life expectancy of cancer patients to predicting the success of graduate students. Algorithms were more accurate than human professionals in about half the studies, and approximately tied with the humans in the others. The ties should also count as victories for the algorithms, which are more cost-effective.
In many situations, of course, algorithms will not be practical. The application of a rule may not be feasible when inputs are idiosyncratic or hard to code in a consistent format. Algorithms are also less likely to be useful for judgments or decisions that involve multiple dimensions or depend on negotiation with another party. Even when an algorithmic solution is available in principle, organizational considerations sometimes prevent implementation. The replacement of existing employees by software is a painful process that will encounter resistance unless it frees those employees up for more-enjoyable tasks.
The bottom line here is that if you plan to use an algorithm to reduce noise, you need not wait for outcome data. You can reap most of the benefits by using common sense to select variables and the simplest possible rule to combine them.
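One way to read “the simplest possible rule” is an equal-weight model: standardize each common-sense variable and average the results. A minimal sketch with hypothetical loan-application variables; no outcome data is used to fit weights, only a judgment about which direction each variable points.

```python
import statistics

def equal_weight_score(cases, variables):
    """Score each case by averaging standardized values of the chosen
    variables. The sign says whether higher raw values are better (+1)
    or worse (-1). No weights are estimated from outcomes."""
    columns = []
    for var, sign in variables:
        vals = [c[var] for c in cases]
        mean, sd = statistics.mean(vals), statistics.stdev(vals)
        columns.append([sign * (v - mean) / sd for v in vals])
    # Equal weights: the plain average across standardized variables.
    return [statistics.mean(row) for row in zip(*columns)]

# Hypothetical applicants; the variable names are illustrative only.
applicants = [
    {"income": 60, "debt_ratio": 0.40, "years_employed": 2},
    {"income": 90, "debt_ratio": 0.20, "years_employed": 8},
    {"income": 45, "debt_ratio": 0.55, "years_employed": 1},
]
variables = [("income", +1), ("debt_ratio", -1), ("years_employed", +1)]
print(equal_weight_score(applicants, variables))
```

Because every case is scored by the same rule, two identical applications necessarily receive identical scores: the noise in this step is zero by construction.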
No matter what type of algorithm is employed, people must retain ultimate control. Algorithms must be monitored and adjusted for occasional changes in the population of cases. Managers must also keep an eye on individual decisions and have the authority to override the algorithm in clear-cut cases. For example, a decision to approve a loan should be provisionally reversed if the firm discovers that the applicant has been arrested. Most important, executives should determine how to translate the algorithm’s output into action. The algorithm can tell you which prospective loans are in the top 5% or in the bottom 10% of all applications, but someone must decide what to do with that information.
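That last step, turning score percentiles into actions, can be sketched as a thin policy layer on top of the algorithm’s output. The cutoffs and action labels below are illustrative assumptions, not prescriptions from the article.

```python
def triage(scores, approve_pct=0.05, reject_pct=0.10):
    """Map algorithm scores to actions: fast-track the top slice,
    decline the bottom slice, and route everything else to a human.
    Both cutoffs and the action names are policy choices, made by
    executives rather than by the algorithm itself."""
    ranked = sorted(scores.values(), reverse=True)
    n = len(ranked)
    k_approve = max(1, int(n * approve_pct))
    k_reject = max(1, int(n * reject_pct))
    approve_cut = ranked[k_approve - 1]   # score at the top-5% boundary
    reject_cut = ranked[n - k_reject]     # score at the bottom-10% boundary
    actions = {}
    for name, s in scores.items():
        if s >= approve_cut:
            actions[name] = "fast-track"
        elif s <= reject_cut:
            actions[name] = "decline"
        else:
            actions[name] = "human review"
    return actions

# Twenty hypothetical applicants with scores 1 through 20.
decisions = triage({f"app_{i}": i for i in range(1, 21)})
```

With these cutoffs, one applicant is fast-tracked, two are declined, and the remaining seventeen go to human reviewers, who keep the authority to override in clear-cut cases.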
Algorithms are sometimes used as an intermediate source of information for professionals, who make the final decisions. One example is the Public Safety Assessment, a formula that was developed to help U.S. judges decide whether a defendant can be safely released pending trial. In its first six months of use in Kentucky, crime among defendants on pretrial release fell by about 15%, while the percentage of people released pretrial increased. It’s obvious in this case that human judges must retain the final authority for the decisions: The public would be shocked to see justice meted out by a formula.
Bringing Discipline to Judgment
Replacing human decisions with an algorithm should be considered whenever professional judgments are noisy, but in most cases this solution will be too radical or simply impractical. An alternative is to adopt procedures that promote consistency by ensuring that employees in the same role use similar methods to seek information, integrate it into a view of the case, and translate that view into a decision. A thorough examination of everything required to do that is beyond the scope of this article, but we can offer some basic advice, with the important caveat that instilling discipline in judgment is not at all easy.
Training is crucial, of course, but even professionals who were trained together tend to drift into their own way of doing things. Firms sometimes combat drift by organizing roundtables at which decision makers gather to review cases. Unfortunately, most roundtables are run in a way that makes it much too easy to achieve agreement, because participants quickly converge on the opinions stated first or most confidently.
As an alternative or addition to roundtables, professionals should be offered user-friendly tools, such as checklists and carefully formulated questions, to guide them as they collect information about a case, make intermediate judgments, and formulate a final decision. Unwanted variability occurs at each of those stages, and firms can—and should—test how much such tools reduce it. Ideally, the people who use these tools will view them as aids that help them do their jobs effectively and economically.
Source: Harvard Business Review