Did We Really Volunteer For This?
When are our choices freely made and our actions voluntary? What level of financial incentives or behavior manipulation crosses the line into coercion or what ethicists call “undue influence”?
For the past several years I have been studying these questions in the context of workplace wellness programs. These programs collect, integrate and analyze large amounts of data to identify employees with health risk factors. Then the programs influence employees’ behavior, healthcare decisions and physical attributes through financial incentives and the use of behavior manipulation techniques built into websites, wearable devices and mobile applications.
By law, employee participation in wellness programs is supposed to be voluntary. The Americans with Disabilities Act (ADA) and the Genetic Information Nondiscrimination Act (GINA) do not permit employers to collect health information that might be used to discriminate against employees. The only exceptions are voluntary wellness programs. Unfortunately, neither law defines what “voluntary” means.
In my past writing, I explored the ethics of incentives for participation in medical research as a way to think about incentives for participation in wellness programs. Medical research has something in common with wellness programs because both involve studying and manipulating health-related behavior, and because participation in medical research and wellness programs is supposed to be voluntary. Ethicists have given much thought to what constitutes coercion or “undue influence” to get people to participate in research. In 2015, I wrote that a rulemaking process to reconcile financial incentives for wellness programs in the Affordable Care Act with ADA and GINA “voluntariness” requirements would allow us to have a meaningful debate about ethical issues raised by these programs.
In 2015 and 2016, the Equal Employment Opportunity Commission (EEOC) went through such a rulemaking process. In May 2016, the agency issued two wellness program regulations, one related to ADA and the other to GINA. The regulations define program participation as voluntary even when employers charge someone who refuses to participate up to 30 percent more for health insurance.
AARP found the EEOC’s definition of voluntariness unreasonable, and in October 2016 filed a lawsuit against the EEOC. In August 2017, Judge John D. Bates of the US District Court for the District of Columbia agreed with the AARP’s position and called the EEOC’s definition of voluntariness “arbitrary and capricious.”
As I wrote in my research paper, wellness programs generally require employees to complete annual health risk assessments that include detailed health information and can touch on lifestyle, family medical history, and work and personal relationships. Some programs also require biometric screenings, including a blood draw, and some mandate a drug test for tobacco use. Many programs then use websites, smartphone apps and wearable devices to track employee wellness-related activities, such as exercise, food consumption, sleep and more.
Judge Bates’s opinion explains what EEOC’s definition of voluntariness means for people who must decide whether to give this very personal data to wellness programs.
Having chosen to define “voluntary” in financial terms—30% of the cost of self-only [health insurance] coverage—the agency does not appear to have considered any factors relevant to the financial and economic impact the rule is likely to have on individuals who will be affected by the rule. … At around $1800 a year, this [health insurance surcharge] is the equivalent of several months’ worth of food for the average family, two months of child care in most states, and roughly two months’ rent.
In other words, if someone “chooses” to provide their health data or allow their blood to be drawn rather than sacrifice a couple of months of rent or child care, they are not submitting voluntarily. I am gratified that the opinion explicitly called out my comments among those that should have been considered during the rulemaking process.
The judge asked the EEOC to reconsider its definition and reissue the regulations. When the agency presented its long schedule for doing so, the judge invalidated the EEOC wellness program rules as of January 1, 2019.
Because the EEOC regulations defined voluntariness in purely financial terms, the judge’s opinion did not deal with non-financial means that wellness programs use to manipulate participants’ behavior, such as nudging, gamification and social influencing. Nevertheless, the opinion is now part of the larger debate about individual autonomy in the age of pervasive data collection and behavior manipulation.
Particularly when it comes to protecting their privacy, US consumers are often given meaningless “choices.” Then they are told that they “chose” to give up their data, even if no reasonable alternative was available.
Examples of “notice and no real choice” are common.
Most websites have privacy policies. Most consumers don’t read them and the few who read them probably don’t understand them. Nevertheless, marketers assert that consumers make a tradeoff by using websites and giving up their data in exchange for “free” services. They keep asserting this even as more consumers use ad and tracker blockers to avoid data collection. Content providers simply override consumer preferences through new tracking methods that are harder for consumers to avoid.
Doctors and hospitals hand out HIPAA Privacy Notices to patients, stating the many ways in which their health information will be used and disclosed. The only “choice” for a patient who doesn’t like the content of the notice is to try to find somewhere else to get healthcare, lie, or avoid the healthcare system. HIPAA offers patients some real privacy rights, but it also includes provisions that allow use and disclosure of health data in many ways that patients don’t understand and about which they have no choice.
Emerging technologies follow the same old “notice and no choice” paradigm. NTIA’s “best practices” for commercial use of facial recognition say that stores should post a sign stating that the store uses the technology. Some stores post signs saying that they are tracking consumers via their cell phones. Those who use and promote this approach assert that a consumer who enters a store displaying such a sign “chooses” to submit to surveillance. And what if there is no other store that offers similar products without surveillance? That’s not a concern for the store owner, for those who build and deploy the technology, or for those who support its use.
In his opinion on the EEOC’s wellness regulations, Judge Bates did something unusual. He showed what the “voluntary” choice, as defined by the EEOC, actually means for people who must decide whether to participate in these programs. He pointed out that the “choice” between giving up highly personal information and paying rent or putting food on the table isn’t a meaningful choice. Calling participation voluntary doesn’t make it so.
We need to have a more honest conversation about the real cost to individuals of current data collection and use practices. Financial incentives are only one aspect of this discussion. Some technologists say that technology companies deliberately create addictive products by using massive data stores in combination with psychological manipulation. Studies of the “sharing” economy show how the underlying platforms use data and manipulative design to shape their markets and participant behavior. Governments have set up “nudge units” to shape the behavior and choices of their citizens.
The relationship between financial incentives and voluntariness in wellness programs might be resolved through economic analysis, public consultation and a new rulemaking process. Defining voluntariness, particularly in the employment context, is not easy or simple. However, as long as companies can collect vast amounts of data and use this data to manipulate individual behavior, we will never be sure whether our choices are truly our own.