If we want to act on data to get fit or reduce heating bills, we need to understand not just the analytics of the data, but how we make decisions. Whatever else it produces, an organization is a factory that produces judgments and decisions. Beyond this shared connection with decisions, however, the two domains of data science and behavioral science might seem to have little in common. After all, data science is typically discussed in terms of computer technology and machine learning algorithms; behavioral nudges, on the other hand, concern human psychology. What, then, do they have in common?
The “behavioral insights” movement is based on a complementary idea: Rather than try to equip people to be more “rational,” we can look for opportunities to design their choice environments in ways that comport with, rather than confound, the actual psychology of decision making. For example, since people tend to dislike making changes, set the default option to be the one that people would choose if they had more time, information, and mental energy. (For example, save paper by setting the office printer to the default of double-sided printing. Similarly, retirement savings and organ donation programs are more effective when participation is the default, so that people must actively opt out rather than opt in.) Since people are influenced by what others are doing, make use of peer comparisons and “social proof” (for example, asking, “Did you know that you use more energy than 90 percent of your neighbors?”). Because people tend to ignore letters written in bureaucratese and fail to complete clunky computer forms, simplify the language and user interface. And since people tend to engage in “mental accounting,” allow people to maintain separate bank accounts for food money, holiday money, and so on.
Richard Thaler and Cass Sunstein call this type of design thinking “choice architecture.” The idea is to design forms, programs, and policies that go with, rather than against, the grain of human psychology. Doing so does not restrict choices; rather, options are arranged and presented in ways that help people make day-to-day choices that are consistent with their long-term goals. In contrast with the hard incentives of classical economics, behavioral nudges are “soft” techniques for prompting desired behavior change.
Proponents of this approach, such as the Behavioural Insights Team and ideas42, argue that behavioral nudges should be part of policymakers’ toolkits. This article goes further and argues that the science of behavioral nudges should be part of the toolkit of mainstream predictive analytics as well. The story of a recent political campaign illustrates the idea.
Tapping into the Innate
The 2012 US presidential campaign has been called “the first big data election.” Both the Romney and Obama campaigns employed sophisticated teams of data scientists charged with (among other things) building predictive models to optimize the efforts of volunteer campaign workers. The Obama campaign’s strategy, related in Sasha Issenberg’s book The Victory Lab, is instructive: The team’s data scientists built, and continually updated, models prioritizing voters in terms of their estimated likelihood of being persuaded to vote for Obama. The strategy was judicious: One might naively design a model to simply identify likely Obama voters. But doing so would waste resources and potentially annoy many supporters already intending to vote for Obama. At the opposite extreme, directing volunteers to the doors of hard-core Romney supporters would be counterproductive. The smart strategy was to identify those voters most likely to change their behavior if visited by a campaign worker.
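The targeting logic described above is what data scientists now call “uplift” or persuasion modeling: score voters not by their probability of supporting the candidate, but by how much a contact changes that probability. The sketch below is a minimal, hypothetical illustration of the two-arm comparison at the heart of the idea; the segment names and toy data are invented, not drawn from the actual campaign.

```python
# Hypothetical uplift-modeling sketch: compare outcomes between contacted
# and non-contacted voters within each segment, and rank segments by the
# estimated lift from contact. All data below is made up for illustration.

def segment_rates(records):
    """records: list of (segment, contacted, voted) tuples.
    Returns {segment: (p_vote_if_contacted, p_vote_if_not_contacted)}."""
    counts = {}
    for seg, contacted, voted in records:
        n, k = counts.get((seg, contacted), (0, 0))
        counts[(seg, contacted)] = (n + 1, k + voted)
    rates = {}
    for seg in {s for s, _, _ in records}:
        n1, k1 = counts.get((seg, True), (0, 0))
        n0, k0 = counts.get((seg, False), (0, 0))
        rates[seg] = (k1 / n1 if n1 else 0.0, k0 / n0 if n0 else 0.0)
    return rates

def rank_by_uplift(rates):
    """Sort segments by uplift = P(vote | contact) - P(vote | no contact)."""
    return sorted(rates, key=lambda s: rates[s][0] - rates[s][1], reverse=True)

# Toy data: "base" supporters vote regardless of contact, "swing" voters
# respond to contact, and "opposed" voters are unmoved either way.
data = [("base", True, 1), ("base", False, 1),
        ("swing", True, 1), ("swing", True, 1),
        ("swing", False, 0), ("swing", False, 1),
        ("opposed", True, 0), ("opposed", False, 0)]
rates = segment_rates(data)
print(rank_by_uplift(rates)[0])  # the persuadable "swing" segment ranks first
```

Note that both the sure supporters and the hard-core opponents show zero uplift, which is exactly why a naive “likely supporter” model would misallocate volunteers.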
But the story does not end here. The Obama campaign was distinctive in combining the use of predictive analytics with outreach tactics motivated by behavioral science. Consider three examples: First, campaign workers would ask voters to fill out and sign “commitment cards” adorned with a photograph of Barack Obama. This tactic was motivated by psychological research indicating that people are more likely to follow through on actions that they have committed to. Second, volunteers would also ask people to articulate a specific plan to vote, even down to the specific time of day they would go to the polls. This tactic reflected psychological research suggesting that forming even a simple plan increases the likelihood that people will follow through. Third, campaign workers invoked social norms, informing would-be voters of their neighbors’ intentions to vote.
The Obama campaign’s combined use of predictive models and behavioral nudge tactics suggests a general way to enhance the power of business analytics applications in a variety of domains. It is an inescapable fact that no model will provide benefits unless appropriately acted upon. Regardless of application, the implementation must be successful in two distinct senses: First, the model must be converted into a functioning piece of software that gathers and combines data elements and produces a useful prediction or indication with suitably short turnaround time. Second, end users must be trained to understand, accept, and appropriately act upon the indication.
In many cases, determining the appropriate action is, at least in principle, relatively straightforward. For example, if an analysis singles out a highly talented yet underpaid baseball player, scout him. If an actuarial model indicates that a policyholder is a risky driver, set his or her rates accordingly. If an emergency room triage model indicates a high risk of heart attack, send the patient to intensive care. But in many other situations, exemplified by the challenge of getting out the vote, a predictive model can at best point the end user in the right direction. It cannot suggest how to prompt the desired behavior change.
This challenge is called “the last-mile problem.” The suggestion is that just as data analytics brings scientific rigor to the process of estimating an unknown quantity or making a prediction, employing behavioral nudge tactics can bring scientific rigor to the (largely judgment-driven) process of deciding how to prompt behavior change in the individual identified by a model. When the ultimate goal is behavior change, predictive analytics and the science of behavioral nudges can serve as two parts of a greater, more effective whole.
Keeping it Honest
Similar ideas can motivate next-generation statistical fraud detection efforts. Fraud detection is among the most difficult data analytics applications because (among other reasons) it is often the case that not all instances of fraud have been flagged as such in historical databases. Furthermore, fraud itself can be an inherently ambiguous concept. For example, much automobile insurance fraud takes the form of opportunistic embellishment or exaggeration rather than premeditated schemes. Such fraud is often referred to as “soft fraud.” Fraud “suspicion score” models inevitably produce a large proportion of ambiguous indications and false-positives. Acting upon a fraud suspicion score can therefore be a subtler task than acting on, for example, child welfare or safety inspection predictive model indications.
Behavioral nudge tactics offer a “soft touch” approach that is well suited to the ambiguous nature of much fraud detection work. Judiciously worded letters could be crafted to achieve a variety of fraud mitigation effects. First, letters that include specific details about the claim and also remind the claimant of the company’s fraud detection policies could achieve a “sentinel effect” that helps ward off further exaggeration or embellishment. If appropriate, letters could inform individuals of a “lottery” approach to random fraud investigation. Such an approach is consistent with two well-established psychological facts: People are averse to loss, and they tend to overweight small probabilities, particularly when the size of the associated gain or loss is large and comes easily to mind.
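One way to operationalize this pairing of suspicion scores with soft-touch responses is a graded triage rule: rather than a single accuse-or-ignore decision, route claims to different actions depending on how strong the signal is. The sketch below is purely illustrative; the threshold values and action labels are invented assumptions, not prescriptions from the article.

```python
# Hypothetical triage sketch: because fraud suspicion scores produce many
# ambiguous indications and false positives, map the score to a graded
# response rather than a binary accusation. Thresholds here are invented.

def triage(score, investigate_at=0.9, nudge_at=0.5):
    """Map a suspicion score in [0, 1] to a claims-handling action."""
    if score >= investigate_at:
        return "refer to special investigation unit"
    if score >= nudge_at:
        # The soft-touch tier: a sentinel-effect letter, not an accusation.
        return "send sentinel-effect letter"
    return "pay claim normally"

for s in (0.95, 0.6, 0.1):
    print(s, "->", triage(s))
```

The middle tier is where behavioral nudges do their work: claims too ambiguous to investigate, but suspicious enough that a well-worded reminder of the company’s fraud policies may deter further embellishment.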
What Will This Adapt To?
Behavioral design thinking suggests one path to “doing well by doing good” in the era of data. The idea is for data-driven decision making to be more of a two-way street. Traditionally, large companies and governments have gathered data about individuals in order to more effectively market to, actuarially segment, treat, or investigate them, as their business models demand. In a world of high-velocity data, cloud computing, and digital devices, it is increasingly practical to “give back” by offering data products that enable individuals to better understand their own preferences, risk profiles, health needs, financial status, and so on. The enlightened use of choice architecture principles in the design of such products will result in devices that help our present selves make the choices and take the actions that our future selves will be happy with.