Automation bias

Automation bias is the propensity for humans to favor suggestions from automated decision-making systems and to ignore contradictory information arrived at without automation, even when that information is correct. The problem has come under increasing scrutiny as decision making in such critical contexts as intensive care units, nuclear power plants, and aircraft cockpits has come to rely on computerized system monitors and decision aids. Errors of automation bias tend to occur when decision-making involves a degree of dependence on computers or other automated aids and the human element is largely confined to monitoring the tasks underway. Examples of such situations range from urgent matters such as flying an aircraft on automatic pilot to mundane ones such as the use of spell-checking programs.

The tendency toward overreliance on automated aids is known as "automation misuse".

Errors of commission and omission

Automation bias can take the form of omission errors, which occur when automated devices fail to detect or indicate problems, or commission errors, which occur when users follow an automated directive without taking into account other sources of information.

Errors of omission have been shown to result from cognitive vigilance decrements, while errors of commission result from a combination of a failure to take information into account and an excessive faith in the reliability of automated aids. Errors of commission occur for three reasons: (1) overt redirection of attention away from the automated aid; (2) diminished attention to the aid; (3) active discounting of information that counters the aid's recommendations. Omission errors take place when the human decision-maker fails to notice an automation failure, for example when a spell-check program misses a misspelled word or offers a false correction.

Training that focuses on the reduction of automation bias and related problems has been shown to lower the rate of commission errors, but not of omission errors.

Factors

The presence of automatic aids, as one source puts it, "diminishes the likelihood that decision makers will either make the cognitive effort to seek other diagnostic information or process all available information in cognitively complex ways." It also renders users more likely to conclude their assessment of a situation too hastily after being prompted by an automatic aid to take a specific course of action.

According to one source, three main factors lead to automation bias. First, humans tend to choose the approach to decision-making that demands the least cognitive effort, a tendency known as the cognitive miser hypothesis. Second, humans tend to view automated aids as having an analytical ability superior to their own. Third, humans tend to reduce their own effort when sharing tasks, whether with another person or with an automated aid.

Other factors leading to over-reliance on automation, and thus to automation bias, include inexperience in a task (though inexperienced users tend to benefit most from automated decision support systems), lack of confidence in one's own abilities, a lack of readily available alternative information, and a desire to save time and effort on complex tasks or under high workloads. People who have greater confidence in their own decision-making abilities have been shown to be less reliant on external automated support, while those with more trust in decision support systems (DSS) are more dependent on them.

Screen design

One study, published in the Journal of the American Medical Informatics Association, found that the position and prominence of advice on a screen can affect the likelihood of automation bias, with prominently displayed advice, correct or not, being more likely to be followed; another study, however, seemed to discount the importance of this factor. According to another study, a greater amount of on-screen detail can make users less "conservative" and thus increase the likelihood of automation bias. One study showed that making individuals accountable for their performance or the accuracy of their decisions reduced automation bias.

Availability

"The availability of automated decision aids," states one study by Linda Skitka, "can sometimes feed into the general human tendency to travel the road of least cognitive effort."

Awareness of process

One study also found that when users are made aware of the reasoning process employed by a decision support system, they are likely to adjust their reliance accordingly, thus reducing automation bias.

Team vs. individual

The performance of jobs by crews instead of individuals acting alone does not necessarily eliminate automation bias. One study has shown that when automated devices failed to detect system irregularities, teams were no more successful than solo performers at responding to those irregularities.

Training

Training that focuses on automation bias in aviation has succeeded in reducing omission errors by student pilots.

Automation failure and "learned carelessness"

It has been shown that automation failure is followed by a drop in operator trust, which in turn is succeeded by a slow recovery of trust. The decline in trust after an initial automation failure has been described as the first-failure effect. Conversely, if automated aids prove to be highly reliable over time, the result is likely to be a heightened level of automation bias. This is called "learned carelessness."

Provision of system confidence information

In cases where system confidence information is provided to users, that information itself can become a factor in automation bias.

Definitional problems

Although automation bias has been the subject of many studies, there continue to be complaints that it remains ill-defined and that reporting of incidents involving automation bias is unsystematic.

Automation-induced complacency

The concept of automation bias is viewed as overlapping with automation-induced complacency, also known more simply as automation complacency. Like automation bias, it is a consequence of the misuse of automation and involves problems of attention. While automation bias involves a tendency to trust decision-support systems, automation complacency involves insufficient attention to and monitoring of automation output, usually because that output is viewed as reliable. "Although the concepts of complacency and automation bias have been discussed separately as if they were independent," writes one expert, "they share several commonalities, suggesting they reflect different aspects of the same kind of automation misuse." It has been proposed, indeed, that the concepts of complacency and automation bias be combined into a single "integrative concept" because these two concepts "might represent different manifestations of overlapping automation-induced phenomena" and because "automation-induced complacency and automation bias represent closely linked theoretical concepts that show considerable overlap with respect to the underlying processes."

Automation complacency has been defined as "poorer detection of system malfunctions under automation compared with under manual control." NASA's Aviation Safety Reporting System (ASRS) defines complacency as "self-satisfaction that may result in non-vigilance based on an unjustified assumption of satisfactory system state." Several studies have indicated that it occurs most often when operators are engaged in both manual and automated tasks at the same time. This complacency can be sharply reduced when automation reliability varies over time instead of remaining constant, but is not reduced by experience and practice. Both expert and inexpert participants can exhibit automation bias as well as automation complacency. Neither of these problems can be easily overcome by training.

The term "automation complacency" was first used in connection with aviation accidents or incidents in which pilots, air-traffic controllers, or other workers failed to check systems sufficiently, assuming that everything was fine when, in reality, an accident was about to occur. Operator complacency, whether or not automation-related, has long been recognized as a leading factor in air accidents.

To some degree, user complacency offsets the benefits of automation, and when an automated system's reliability falls below a certain level, automation is no longer a net asset. One 2007 study suggested that this crossover occurs when the reliability level drops to approximately 70%. Other studies have found that automation with a reliability level below 70% can still be of use to persons with access to the raw information sources, which can be combined with the automation output to improve performance.
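
The shape of this trade-off can be illustrated with a toy calculation. The sketch below is not a model from the cited studies; it simply assumes a complacent operator who cross-checks the automation only occasionally and compares the resulting detection rate against unaided manual monitoring. All parameter values (the 70% unaided detection rate, the 25% cross-check rate) are made-up assumptions chosen only to show how a crossover point can arise.

    # Toy illustration (assumed numbers, not from the cited studies): a malfunction
    # is caught if the automation flags it, or if the complacent operator happens
    # to cross-check manually after the automation misses it.

    def aided_detection(automation_reliability: float,
                        manual_detection: float = 0.70,    # assumed unaided detection rate
                        cross_check_rate: float = 0.25) -> float:  # assumed rate of cross-checking
        missed = 1.0 - automation_reliability
        return automation_reliability + missed * cross_check_rate * manual_detection

    for r in (0.5, 0.6, 0.7, 0.8, 0.9):
        print(f"automation reliability {r:.0%}: aided {aided_detection(r):.2f} vs manual alone 0.70")

Under these made-up numbers, the aided operator outperforms manual monitoring only once automation reliability rises past roughly two-thirds, which is the general shape of the effect the studies describe.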

Sectors

Automation bias has been examined across many research fields. It can be a particularly major concern in aviation, medicine, process control, and military command-and-control operations.

Aviation

At first, discussion of automation bias focused largely on aviation. Automated aids have played an increasing role in cockpits, taking a growing role in the control of such flight tasks as determining the most fuel-efficient routes, navigating, and detecting and diagnosing system malfunctions. The use of these aids, however, can lead to less attentive and less vigilant information seeking and processing on the part of human beings. In some cases, human beings may place more confidence in the misinformation provided by flight computers than in their own skills.

An important factor in aviation-related automation bias is the degree to which pilots perceive themselves as responsible for the tasks being carried out by automated aids. One study of pilots showed that the presence of a second crewmember in the cockpit did not affect automation bias. A 1994 study compared the impact of low and high levels of automation (LOA) on pilot performance, and concluded that pilots working with a high LOA spent less time reflecting independently on flight decisions.

In another study, all of the pilots given false automated alerts that instructed them to shut off an engine did so, even though those same pilots insisted in an interview that they would not respond to such an alert by shutting down an engine, and would instead have reduced the power to idle. One 1998 study found that pilots with approximately 440 hours of flight experience detected more automation failures than did nonpilots, although both groups showed complacency effects. A 2001 study of pilots using a cockpit automation system, the Engine Indicating and Crew Alerting System (EICAS), showed evidence of complacency: the pilots detected fewer engine malfunctions when using the system than when performing the task manually.

In a 2005 study, experienced air-traffic controllers used a high-fidelity simulation of an ATC (Free Flight) scenario that involved detecting conflicts among "self-separating" aircraft. They had access to an automated device that identified potential conflicts several minutes ahead of time. When the device failed near the end of the simulation, considerably fewer controllers detected the conflict than when the situation was handled manually. Other studies have produced similar findings.

Two studies of automation bias in aviation found a higher rate of commission errors than omission errors, while another aviation study found 55% omission rates and 0% commission rates. Automation-related omission errors are especially common during the cruise phase of flight. When a China Airlines flight lost power in one engine, the autopilot attempted to correct for the problem by lowering the left wing, an action that hid the problem from the crew. When the autopilot was disengaged, the airplane rolled to the right and descended steeply, causing extensive damage. The 1983 shooting down of a Korean Airlines 747 over Soviet airspace occurred because the Korean crew "relied on automation that had been inappropriately set up, and they never checked their progress manually."

Health care

Clinical decision support systems (CDSS) are designed to aid clinical decision-making. They have the potential to bring about a great improvement in this regard and to result in better patient outcomes. Yet while CDSS, when used properly, improve overall performance, they also cause errors that may go unrecognized because of automation bias. One danger is that incorrect advice from these systems may lead users to change a correct decision that they had made on their own. Given the highly serious nature of some of the potential consequences of automation bias in the health-care field, it is especially important to be aware of this problem when it occurs in clinical settings.

Sometimes automation bias in clinical settings is a major problem that renders CDSS, on balance, counterproductive; sometimes it is a minor problem, with the benefits outweighing the damage done. One study found more automation bias among older users, but it was noted that this could be a result not of age but of experience. Studies suggest, indeed, that familiarity with CDSS often leads to desensitization and habituation effects. Although automation bias occurs more often among persons who are inexperienced in a given task, inexperienced users also exhibit the greatest performance improvement when they use CDSS. In one study, the use of CDSS improved clinicians' answers by 21%, from 29% to 50%, with 7% of correct non-CDSS answers being changed incorrectly.
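
One way to read the figures just quoted is as a simple bookkeeping exercise. The short sketch below is only an illustrative interpretation of those numbers; it assumes that "7% of correct non-CDSS answers" refers to 7% of the initially correct answers rather than 7% of all answers, and none of the variable names come from the study itself.

    # Illustrative reading of the CDSS figures quoted above (assumed interpretation:
    # 7% of the initially correct answers, not 7% of all answers, were switched to wrong).

    correct_without = 0.29   # share of answers correct before seeing CDSS advice
    correct_with = 0.50      # share of answers correct after seeing CDSS advice
    changed_to_wrong = 0.07  # fraction of initially correct answers changed to wrong

    net_gain = correct_with - correct_without             # the 21-point overall improvement
    commission_loss = correct_without * changed_to_wrong  # ~2% of all answers lost to automation bias

    print(f"net gain: {net_gain:.2f} of all answers")
    print(f"lost to commission errors: {commission_loss:.3f} of all answers")

On this reading, the aggregate benefit is large, but roughly two answers in every hundred are still correct decisions overturned because of automation bias, which is the kind of hidden cost described above.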

A 2005 study found that when primary-care physicians used electronic sources such as PubMed, Medline, and Google, there was a "small to medium" increase in correct answers, while in an equally small percentage of instances the physicians were misled by their use of those sources, and changed correct to incorrect answers.

Studies conducted in 2004 and 2008 on the effect of automated aids on the diagnosis of breast cancer found clear evidence of automation bias involving omission errors: cancers that were detected in 46% of cases when no automated aid was used were detected in only 21% of cases in which the automated aid failed to identify the cancer.

Military

Automation bias can be a crucial factor in the use of intelligent decision support systems for military command-and-control operations. One 2004 study found that automation bias effects have contributed to a number of fatal military decisions, including friendly-fire killings during the Iraq War. Researchers have sought to determine the proper LOA for decision support systems in this field.

Correcting bias

Automation bias can be mitigated through the design of automated systems, for example by reducing the prominence of the display, decreasing the detail or complexity of the information displayed, or couching automated assistance as supportive information rather than as directives or commands. Training on an automated system that includes the introduction of deliberate errors has been shown to be significantly more effective at reducing automation bias than simply informing users that errors can occur. Excessive checking and questioning of automated assistance, however, can increase time pressure and task complexity, thus reducing the benefits of automated assistance, so the design of an automated decision support system should balance positive and negative effects rather than attempt to eliminate negative effects entirely.
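
As a concrete illustration of the point about couching assistance as supportive information rather than as a directive, here is a minimal hypothetical sketch. The Advice structure, the wording, and the confidence value are assumptions made for illustration and are not drawn from any particular system discussed above.

    # Hypothetical sketch: the same automated finding rendered as a bare directive
    # (the style associated with commission errors) and as supportive information
    # with an explicit confidence value and a prompt to cross-check.

    from dataclasses import dataclass

    @dataclass
    class Advice:
        finding: str       # what the automation observed
        suggestion: str    # a possible course of action
        confidence: float  # calibrated system confidence, 0.0-1.0

    def as_directive(a: Advice) -> str:
        return f"ACTION REQUIRED: {a.suggestion}"

    def as_support(a: Advice) -> str:
        return (f"Observation: {a.finding} (system confidence {a.confidence:.0%}). "
                f"Possible action: {a.suggestion}. "
                f"Cross-check other available information before acting.")

    print(as_support(Advice("oscillating EGT on engine 2", "reduce power and monitor", 0.82)))

Framing the output this way leaves the decision, and the responsibility for it, with the operator, which is consistent with the accountability finding mentioned under "Screen design" above.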
