Transparency, Choice and Privacy
If a company fully discloses its data practices and users click “I agree,” is all data processing that follows legitimate?
After the European Court of Justice invalidated the EU-U.S. Safe Harbor Agreement, the Data Protection Authority (DPA) of Schleswig-Holstein questioned the remaining legal bases for data transfer from the EU to the US, including consumer consent. According to the summarized translation by the law firm Hunton & Williams, the DPA’s position paper included the following requirements for consent.
Although the DPA concludes that informed consent is not possible under the circumstances, suppose some companies published privacy policies that included all the points above. Would users really be making an informed and rational decision if they accepted those policies?
To answer this question, it is instructive to look at another area where informed consent plays an important role--medicine. I thought about this while reading a recent essay titled “The Paternalism Preference — Choosing Unshared Decision Making” in the New England Journal of Medicine. In this essay, Dr. Lisa Rosenbaum examines whether patients can or even want to make decisions about their treatment based on the risks and benefits presented to them during the medical informed consent process.
Informed consent in medicine is different from the consent obtained on the consumer-facing Web. On the Web, companies obtain consent when users click “I agree” on a page with links to a privacy policy and Terms of Service. Usually companies do not actually require users to read these documents. Some companies state that simply using their site constitutes “consent” to their policies. Even if users choose to read the documents, it may not make much difference, because the documents are often incomprehensible to the average non-lawyer.
By contrast, informed consent in medicine requires “the informed exercise of a choice, and that entails an opportunity to evaluate knowledgeably the options available and the risks attendant upon each.” (Canterbury v. Spence, 464 F.2d 772, 1972). As Dr. Rosenbaum writes,
Dr. Rosenbaum does not advocate withholding information from patients, but she wonders whether even highly educated and fully informed patients are in a position to properly evaluate the risks and benefits of treatment alternatives. She recounts her own experience of needing surgery to realign her broken clavicle:
As Dr. Rosenbaum wrote in her essay, more information does not always lead patients to the best medical choices. Nor does more information necessarily lead to better choices about privacy, even if people take the time to read all the privacy notices presented to them and understand what those notices mean. A recent study from the University of Pennsylvania challenges the notion that consumers make informed and reasoned tradeoffs between giving up their data and receiving discounts and other benefits. It found that the consumers who best understand how data are collected and used are more likely than less informed consumers to consider privacy protection efforts futile. The study found people resigned to the loss of privacy as the price of participating in the 21st-century economy.
A recent study from Carnegie Mellon came to similar conclusions.
As psychologists have demonstrated, we are not very good at evaluating risk. We all have cognitive biases. Whether making decisions about medical treatment or about giving out information online, we are more likely to be influenced by news stories and our own experiences than by statistics. And sometimes we feel that we don’t have a real choice, so we simply “agree.”
Providing information about risks and benefits of treatment and asking patients to decide how to proceed may make doctors feel better and reduce their legal liability. Providing privacy notices and then assuming that users choose to provide their data in a rational tradeoff may make company executives feel better and reduce corporate legal liability. Neither is about what is best for the individual.
I suspect that few people want to return to the old paternalistic ways of medical practice, when doctors simply told patients what to do and expected unquestioning compliance. Nevertheless, many patients want more than information. They want to rely on advice from doctors they trust. In spite of publicized cases of physicians who recommend drugs, tests and procedures for their own financial benefit, most doctors follow a code of ethics that places the patient’s interests first. Shared decision-making acknowledges that both doctors and patients have a stake in the outcome of medical treatment.
Companies collecting consumer data have no comparable ethical framework that requires them to respect privacy. However, there is a long-standing framework for operationalizing the shared interests that individuals and companies have in personal information. This framework is Fair Information Practices (FIPs). The FIPs, as a set, do not give unilateral decision-making power to either the consumer or the company. Instead, FIPs enable companies to collect and use data but at the same time require them to limit data collection and retention, restrict uses of data to those disclosed at the time of collection, and maintain good security practices. When companies implement all the FIPs--not just notice and consent--they share privacy-related decision-making and privacy-related responsibility with those whose data they collect and use. Regulators should mandate such shared responsibility by requiring the implementation of all the FIPs. Notice and consent is not an adequate substitute, no matter how complete the notice may be.