In 2008, economist Richard H. Thaler and legal scholar Cass Sunstein released an internationally best-selling book called Nudge, which formalized and promoted the active engineering of a “choice architecture” by the organizations and individuals that promote or sell goods and services.
The idea behind a nudge is that consumers can be presented with a predetermined choice considered to be in their best interest: e.g., a burger shop offering apple slices as the default side order instead of french fries, or stickers of flies placed in urinals for people to aim at to reduce spillage. The hypothesis goes: the consumer will then default to making better choices and, in the long run, presumably benefit from them. And, in my view, without being aware of having made a passive choice that they might not otherwise have actively made.
“A-ha!” says the Marketing VP. “Why don’t we use the concept of nudging in reaching out to our customer base? We could surely nudge them in the right directions. And if those nudges coincidentally happen to give us a lift in sales and revenue, all the better. What could go wrong?”
“A-ha!” says the Big Data Consultant. “Why don’t we keep a log of these nudges? We can optimize our systems to collect and track all of this data. Storage is cheap, we can just automate the collection of all of it and figure it out later. What could go wrong?”
“A-ha!” says the Data Scientist. “What if we build systems that run automated statistical tests and behavioral experiments to optimize for the nudges that increase revenue? We don’t really need to understand why they work; we just need a significant p-value. What could go wrong?”
“A-ha!” says the Growth Hacker/Product Manager. “What if we build these behavioral experiments into every facet of our product? We can track and run dozens, even hundreds, of these nudges every day and optimize our overall conversion rates by 0.2%. What could go wrong?”
“A-ha!” say the Big Tech Companies. “What if our entire product was a collection of thousands of small behavioral experiments, running on a sample size of billions of people every single day, optimized exclusively around whether these experiments nudge people to want to return to our site, click more links, and buy more things they don't need? What could go wrong?”
“Wait a minute,” says the Astute Individual. “Why do I spend so much time being upset at ridiculous things? Why do my emotions depend so much on who likes my vacation photos? Why do I marginalize people as left-and-right swipes on dating sites? Why do I feel such a disconnection from other people?
“Why do I feel as if the world is becoming unglued at an accelerating rate?
“Why can’t I stop?”
Thaler and Sunstein define a “nudge” as follows [1], emphasis mine:
A nudge, as we will use the term, is any aspect of the choice architecture that alters people's behavior in a predictable way without forbidding any options or significantly changing their economic incentives. To count as a mere nudge, the intervention must be easy and cheap to avoid. Nudges are not mandates. Putting fruit at eye level counts as a nudge. Banning junk food does not.
While one might say that these predetermined choice architectures can be used to promote positive interventions, from the perspective of agency, even these seemingly obvious, “positive” actions become blurred. If an agent’s choices are masked by a separate actor, then the appearance of agency is no more than that: an appearance. We must talk about these things in a vernacular more specific than good or bad, positive or negative, with nuance about degrees of choice.
In virtue epistemology, there are concepts of virtuous and vicious masking [2]. A virtuous mask, roughly speaking, is when irrelevant information is masked, either by choice or circumstance, often to the benefit of an agent; a vicious mask is one where relevant information is masked, often for reasons of consistency or unavailability. A nudge, then, is a story about masking — an agent controlling the mask for another. The central debate around the nudge is about whether this independent agent, a nudger, will be a vicious masker or a virtuous masker.
But the debate misses the mark. The real question should be one of culpability. If a mask, whether intended to be virtuous or vicious, were to cause harm to an agent, who should be culpable for the application of the mask?
If the agent were to mask himself, then the culpability is his own.
An asymmetry, then, is created when the masking agent, a third party, assumes no responsibility or culpability, while all of the culpability is transferred solely to the agent wearing the mask.
[1] Wikipedia: Nudge theory.
[2] Carlos Montemayor and Abrol Fairweather. Knowledge, Dexterity, and Attention. Cambridge University Press, 2017. (Shout out to Carlos!)
We will return to the application of nudging aided by small rewards in the next issue with a guest post from Zero HP Lovecraft.
2. Nudging, a Parable