Wednesday, July 10, 2019
The first step in applying behavioral insights in government is to understand the basics.

“Applying behavioral insights in the right context can lead to substantial improvements in program outcomes,” writes behavioral scientist Amira Choueiki Boland in a 2016 Public Administration Review article.

But just what are these insights -- derived from the academic field of behavioral science -- that can be applied in government?  The field is hard to explain, in part because of its complexity, and in part because it wraps itself in a technical language that takes some decoding.

Following are some of the underlying concepts and terms, at least as they are beginning to appear in the public administration literature. Because the field is still evolving, different language is sometimes used to describe the same concepts, and the way concepts are organized varies between authors. As a result, my descriptions should be read as a beginner’s overview.

Underlying Concepts: “System 1 and System 2” Thinking and Cognitive Bias

In a 2018 Public Administration Review article, Nicola Bellé and his colleagues briefly describe the historical evolution of some of the concepts underpinning behavioral science.  They note that before the 1940s, the dominant model used to describe decision making “features a rational decision maker who has clear and comprehensive knowledge of the environment, a well-organized system of preferences, and excellent computational skills to allow for the selection of optimal solutions.”

However, in the late 1940s and 1950s, scholars began to question this approach, noting that “decision makers are endowed with bounded rationality.”  As a result, “people make decisions for themselves and for others by relying on a limited number of heuristic principles [mental shortcuts] that reduce the complex tasks of assessing probabilities and predicting values to simpler judgmental operations.”

Based on this new theory, “decision makers are prone to cognitive biases [errors in thinking] that systematically affect their estimates, judgments, and choices in any domain.”

What Is “System 1 and System 2” Thinking?  Pioneering psychologist Daniel Kahneman, building on his long collaboration with Amos Tversky, describes the difference between heuristic and reason-based decision making as System 1 and System 2 thinking, where:

  • System 1 thinking is perceptual, fast, intuitive, automatic, and effortless.  An example is judging the potential actions of other drivers while driving home from work along the same route each day. The advantage of these mental shortcuts is that they reduce complexity and allow fast, effortless, automatic, and associative decision making.
  • System 2 thinking is reason-based, slow, takes mental effort, and is rule governed. Judgments are based on intentional and explicit processes. An example is choosing a health plan. It sometimes involves the use of external decision support models, software, or group decision making.

Under System 1, the use of heuristics (rules of thumb/mental shortcuts) can be effective in that it reduces complexity. However, heuristics also tend to lead to systematic errors, which are called “cognitive biases.”

What is Cognitive Bias? Dr. Travis Bradberry writes: “Cognitive bias is the tendency to make irrational judgments in consistent patterns. . . . Researchers have found that cognitive bias wreaks havoc by forcing people to make poor, irrational judgments. . . . Since attention is a limited resource, people have to be selective about what they pay attention to in the world around them. Because of this, subtle biases can creep in and influence the way you see and think about the world.”

How do cognitive biases work?  According to Kendra Cherry, “A cognitive bias is a type of error in thinking that occurs when people are processing and interpreting information in the world around them. . . . They are rules of thumb that help you make sense of the world and reach decisions with relative speed.”

Cherry elaborates, noting that: “When you are making judgments and decisions about the world around you, you like to think that you are objective, logical, and capable of taking in and evaluating all the information that is available to you. Unfortunately, these biases sometimes trip us up, leading to poor decisions and bad judgments.”

A Wikipedia article catalogs 170 different kinds of cognitive biases. John Manoogian III developed a codex that organizes this inventory of cognitive biases into four categories:

  • What should we remember (e.g., discarding specifics in order to create generalities)
  • Too much information (e.g., focus on details that reinforce pre-existing beliefs)
  • Need to act fast (e.g., bias towards status quo)
  • Not enough meaning (e.g., we fill in characteristics with stereotypes)

Some Techniques for Applying These Underlying Concepts

Following are some examples of behavioral intervention techniques that leverage the basic concepts of System 1/System 2 thinking and cognitive biases. Many are based on a 2019 literature synthesis by Paul Battaglio, Jr., et al. in Public Administration Review:

The Use of “Nudge” or Choice Architecture. Nudging and choice architecture are useful tools for influencing the choices or behaviors of citizens and government workers. According to Battaglio et al.: “nudge theory systematizes the use of behavioral science to influence high-stakes choices through low-powered incentives. . . . A nudge is ‘any aspect of the choice architecture that alters people’s behavior in a predictable way without forbidding any option. . . . Nudges are not mandates. Putting fruit at eye level counts as a nudge. Banning junk food does not.’”

Example: The Social Security Administration’s Supplemental Security Income (SSI) program is a monthly cash benefit for the disabled, blind, or elderly poor. Fewer than 60 percent of those eligible apply, in part because of perceived administrative barriers. So SSA sent letters to people it judged might be eligible, letting them know that the application process was simple and what the maximum benefit level was. It tested several variations of the letter and found that 6 percent of those receiving a letter applied, versus 1 percent of those who did not receive one. Ultimately, the number of letter recipients who qualified for the program was 340 percent greater than the number of non-recipients who qualified.
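
As a minimal sketch of the arithmetic behind such comparisons, the snippet below applies the reported 6 percent and 1 percent application rates to hypothetical, equal-sized groups. The group sizes and resulting counts are illustrative assumptions; note that the 340 percent figure above refers to qualified participants, whose qualification rates are not reported here.

    # Back-of-the-envelope uplift arithmetic for the letter experiment.
    # Group sizes are hypothetical; only the 6% vs. 1% application rates
    # come from the SSA test described above.
    recipients = 10_000        # hypothetical number of letter recipients
    non_recipients = 10_000    # hypothetical control group of equal size

    applied_treatment = recipients * 0.06      # 6% of recipients applied
    applied_control = non_recipients * 0.01    # 1% of non-recipients applied

    print(f"Applications: {applied_treatment:.0f} vs. {applied_control:.0f}")
    print(f"Relative increase in applications: "
          f"{applied_treatment / applied_control - 1:.0%}")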

Opting for the Status Quo.  In this form of cognitive bias, decision makers “tend to prefer the status quo option” as the number of viable alternatives increases.  That is, when more options become available, the decision maker is more likely to stick with the status quo, such as the same contractor, same doctor, or same appliance. According to Richard Thaler: “the most powerful nudge we have in our arsenal is simply to change the default. . . . The default is what happens when you do nothing.”

Example: Public health professionals are being trained to understand the role of status quo bias in patients’ decisions. They are using this understanding to increase participation rates in organ donation programs, vaccination campaigns, and HIV screening by asking participants to opt out of participation rather than asking them to opt in.
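
A minimal simulation sketch of why the default matters, assuming a hypothetical population in which only a small fraction of people ever override whatever the default happens to be (the 15 percent override rate is an assumption, not a figure from the studies above):

    import random

    random.seed(0)

    def participation_rate(default_enrolled: bool, n: int = 100_000,
                           override_prob: float = 0.15) -> float:
        """Share of people enrolled when only some override the default."""
        enrolled = 0
        for _ in range(n):
            overrides = random.random() < override_prob  # actively switches
            enrolled += default_enrolled != overrides    # keeps or flips default
        return enrolled / n

    print(f"Opt-in  (default: not enrolled): {participation_rate(False):.0%}")
    print(f"Opt-out (default: enrolled):     {participation_rate(True):.0%}")

Under these assumptions, the same population yields roughly 15 percent participation under opt-in and roughly 85 percent under opt-out, with no mandate involved.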

The Use of Outcome Framing.  How a choice among alternatives is framed typically influences the selection made by the decision maker; e.g., describing policies with the same outcome in positive terms (lives saved) vs. negative terms (lives lost).  As a result, “individuals prefer the policy with the sure outcome when the outcomes are framed positively and prefer the policy with the probabilistic outcome when outcomes are framed negatively.” Put another way, “decision makers are risk averse in the domain of gains and risk takers in the domain of losses,” and “individuals tend to react in a systematically different manner to the same piece of information, depending on how it is presented to them.”

Example: A team of Italian scholars led by Paolo Belardinelli examined the use of outcome framing as an intervention technique.  They found that describing the same performance in terms of success rates vs. failure rates affects decisions: a positive framing leads to more favorable judgments than a negative one. In an experiment using user-satisfaction data about a sports facility, survey respondents who saw a negative framing gave lower ratings to the facility and its director than respondents who were presented the same data with a positive framing.
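
A minimal sketch of how one underlying statistic can be rendered in either frame; the 85 percent satisfaction figure is hypothetical:

    def frame_outcome(success_rate: float) -> tuple[str, str]:
        """Return positively and negatively framed versions of one statistic."""
        positive = f"{success_rate:.0%} of users were satisfied with the facility"
        negative = f"{1 - success_rate:.0%} of users were dissatisfied with the facility"
        return positive, negative

    pos, neg = frame_outcome(0.85)  # hypothetical satisfaction rate
    print(pos)  # positive frame: tends to elicit higher ratings
    print(neg)  # negative frame: tends to elicit lower ratings, same data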

The Use of Anchors.  Anchoring is the tendency to rely too heavily on an initial value, which biases the final judgment. Bellé et al. observe that “different starting points yield different estimates, which are biased toward the initial values.” In other words, when decision makers are first exposed to a number -- even an arbitrary one -- their subsequent estimates drift toward that “anchor.” This can happen in areas as diverse as pricing, performance, or promotions.

Example:  Doctors found they could increase patients’ willingness to receive monthly injections to manage their psoriasis if the patients were first asked whether they would be willing to receive daily injections.  A test found that those who were first asked about daily injections agreed to monthly injections more readily than those who were simply told they should receive monthly injections.
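
A minimal simulation sketch of the pattern Bellé et al. describe, assuming each estimate is pulled part of the way toward the anchor (the 0.3 anchor weight and the noise level are illustrative assumptions):

    import random

    random.seed(1)

    def anchored_estimate(true_value: float, anchor: float,
                          anchor_weight: float = 0.3) -> float:
        """Estimate biased toward the anchor, plus individual noise."""
        noise = random.gauss(0, true_value * 0.05)
        return (1 - anchor_weight) * true_value + anchor_weight * anchor + noise

    true_value = 100.0
    for anchor in (10.0, 100.0, 500.0):
        estimates = [anchored_estimate(true_value, anchor) for _ in range(10_000)]
        mean = sum(estimates) / len(estimates)
        print(f"anchor={anchor:5.0f} -> mean estimate={mean:6.1f}")

Low anchors pull the average estimate below the true value and high anchors pull it above, even though the underlying quantity never changes.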

Effects of Proportion Dominance.   Bellé et al. say “proportion dominance” is a particular cognitive bias in which “individuals tend to put more weight on the percentage rather than the absolute number of people affected by their decisions and actions.” Researchers have found that decision makers “tend to be more willing to help higher rather than lower percentages of victims or beneficiaries of a service, even when the absolute number is held constant.”

Example: Proportion dominance is often leveraged in charitable giving campaigns, where people are asked to donate a percentage of their income rather than a dollar amount. The effect could also be used to shape how commenters and advocacy groups respond to regulatory policy proposals.
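
A minimal sketch contrasting two hypothetical appeals; proportion dominance predicts that many people find the first more compelling, even though the second helps far more people in absolute terms (all figures are illustrative):

    # Two hypothetical aid appeals. Proportion dominance predicts the
    # higher-percentage appeal feels more compelling, even though the
    # lower-percentage appeal helps more people in absolute terms.
    appeals = [
        {"name": "Appeal A", "affected": 150, "helped_share": 0.80},
        {"name": "Appeal B", "affected": 10_000, "helped_share": 0.10},
    ]

    for appeal in appeals:
        helped = appeal["affected"] * appeal["helped_share"]
        print(f'{appeal["name"]}: helps {appeal["helped_share"]:.0%} '
              f'of {appeal["affected"]:,} people = {helped:,.0f} people')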

The Influence of Loss Aversion (Prospect Theory).  Decisions framed as losses loom larger than the same decisions framed as gains, even when the end result is identical. This is because “people prefer avoiding losses to acquiring equivalent gains,” according to Battaglio et al. Similarly, “individuals tend to demand far more to give up an object they already possess than they would pay to acquire it.”

Example: The Australian Competition and Consumer Commission sent warning letters to people whose fund transfers appeared risky, flagging them as potential victims of fraud. Seventy-five percent of the potentially risky fund transfers ceased as a result of the warning letters.
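
This asymmetry is often formalized with the prospect theory value function. A minimal sketch, using the parameter estimates commonly attributed to Tversky and Kahneman (alpha = beta = 0.88, lambda = 2.25):

    def prospect_value(x: float, alpha: float = 0.88,
                       beta: float = 0.88, lam: float = 2.25) -> float:
        """Tversky-Kahneman value function: losses loom larger than gains."""
        if x >= 0:
            return x ** alpha
        return -lam * (-x) ** beta

    print(f"value of a $50 gain: {prospect_value(50):+.1f}")   # about +31.3
    print(f"value of a $50 loss: {prospect_value(-50):+.1f}")  # about -70.5

With these parameters, a $50 loss feels more than twice as intense as a $50 gain, which is why loss-framed warnings can be so effective.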

Effects of Temporal Discounting. People dislike deferred gratification.  They often value something less if they have to wait for it. Federal HR specialist Chris Dobyns says: “our perception of the value of monetary payments decreases the longer the receipt of the actual payment is delayed.”

Example: HR specialists are beginning to appreciate that shortening the time between an accomplishment and the delivery of recognition is critical. As a result, an immediate $50 spot award can be perceived as more valuable than the promise of a $100 award in six months. This has important implications for the design of agency award and recognition programs.
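
A minimal sketch of that comparison under simple hyperbolic discounting, V = A / (1 + kD); the monthly discount rate k is an illustrative assumption, not a measured figure:

    def hyperbolic_value(amount: float, delay_months: float, k: float) -> float:
        """Perceived present value under hyperbolic discounting: V = A / (1 + k*D)."""
        return amount / (1 + k * delay_months)

    k = 0.25  # illustrative monthly discount rate for a strong discounter
    now_50 = hyperbolic_value(50, 0, k)       # $50 today keeps its full value
    later_100 = hyperbolic_value(100, 6, k)   # $100 in six months is discounted

    print(f"$50 now feels like:            ${now_50:.2f}")
    print(f"$100 in six months feels like: ${later_100:.2f}")

At this assumed discount rate, the delayed $100 is perceived as worth only $40, so the immediate $50 wins.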

* * * * * *

There are many more types of insights that can be derived from behavioral science, which is why public administrators are beginning to invest in specialists who help their agencies and programs apply behavioral science principles in day-to-day work.  The next blog post will highlight how behavioral insights are influencing many policy areas in the field of public administration.

* * * * * * *

Note: Here are links to related posts on this topic:

Part I:  How Can Behavioral Science Improve Program Outcomes?

Part II:  What Are Some Basic Behavioral Science Concepts?

Part III: How Is Behavioral Science Influencing Public Administration?

Part IV: Using Behavioral Science to Improve Federal Outcomes

Part V:  Using Behavioral Insights to Reduce Miner Injuries

Part VI: Nudge in the City:  Behavioral Science in Government

Part VII: Creating a Critical Mass of Talent and Resources in the Use of Behavioral Science in Government

Part VIII:  Behavioral Science: A Revolutionary Potential for Government?


Graphic Credit:  Courtesy of Chatchai_stocker via FreeDigitalPhotos.net