Believing Machines

There was an interesting and thoughtful article recently about the implications of the Canadian case in which evidence from a Fitbit was used in a disability lawsuit. The author also discusses the increasing use of tracking data in legal proceedings of all kinds. Of course, the legal system is not the only place where electronic data is proliferating. Electronic medical records are becoming more prevalent throughout the healthcare delivery system. Employer- and insurer-sponsored wellness programs increasingly use data from tests and devices to determine individual health status. Logging and analysis of activities on electronic networks is ubiquitous. CCTV cameras and license plate readers are proliferating.

This got me thinking that we now seem to trust people less than we trust computers and algorithms, even when we don’t understand how these devices work or whether they produce information that makes sense. Widely publicized studies of human nature and the human psyche show the many ways in which people are irrational, forgetful, and unaware of these failings or unwilling to admit to them. However, we often forget that computers are programmed by people, that data is often entered by people, that all sensors and statistical algorithms have associated error rates, and that for all these reasons we should not trust computers without question.

Some time ago I was reminded of this, fortunately in a circumstance that held no serious consequences. I had an old clothes dryer, which had a damaged lint screen that I had to take out and clean between loads. I went to a local appliance parts store to buy a replacement and took the old screen with me, just in case.

The man at the parts store confirmed the make and model number of my dryer on his computer, then went back to the storage area and returned with a new lint screen in a clear plastic bag. However, this lint screen was different from the one I brought with me. I could see that it would not fit into the slot where the original had been. I said that the part seemed to be wrong. The man who had been helping me turned his computer monitor to face me, pointed at the screen and said, “Is this your make and model number?” I agreed that it was. “Well, here is the part number for the lint screen, and here is the same part number on the label on the bag. It’s the right part.” I showed him the original part, but he was not convinced.

I decided to change tactics. “Do you happen to have a lint screen that looks like the one I brought with me?” I asked. He said he did. I asked, “Can I buy it from you?” He sighed and said, “It’s the wrong part for your machine.” I said, “That’s fine. I’d like to buy it anyway.” He shook his head, went to the storage area, and returned with a lint screen that looked identical to the one I had in my hand. I paid a few dollars, came home, and slipped it into my dryer. It was a perfect fit.

People laugh when I tell them the story because it is so obviously absurd to trust the data in the computer over the evidence in front of them. Unfortunately, in many cases it is not obvious that the data in the computer is wrong, and if the data is believed the result is not at all funny. One recent example is that of a man whose car was stopped by a SWAT team because the police relied on a database generated by license plate readers that could not distinguish license plates from different states. There are other examples, including stories in which patients were in danger of receiving improper treatment because an identity thief’s data became incorporated in their electronic medical record and followed them from one facility to another as the record was shared.

Believing computers and their data without question has another consequence, too. It leads us to distrust people unless they can produce data to confirm that they are telling the truth. Employers and health insurers used to ask employees whether they smoked; now many perform tests and treat anyone who refuses such a test as a smoker.

All this raises questions in my mind. I wonder whether knowing that they are presumed to be lying changes people’s behavior. After all, if there is no expectation that one will behave honorably, why take on the burden of doing so? For example, if a wellness program accepts only fitness tracker data and not the employee’s word about exercise, why not let someone else “exercise” the fitness tracker? Of course, knowing that people might do this simply leads us to be tied ever more closely to the machines that measure us, for example by requiring that they be activated by a pre-registered biometric. It also leads to the collection of still more data, which is combined and analyzed for consistency.

I also wonder what we can do to prove that we are right when we believe the computer is wrong. Humans do not remember at the same level of detail as computers and do not record every action as computers do. Even if we look at logs of our activities, such as fitness tracker records, we can’t be sure that the logs are accurate, that they are logs of our activities and not someone else’s, or that they have not been tampered with. How do we prove our case if we believe the data is wrong?

We know that people are fallible. Yet, as we continue to instrument our bodies and our environment, we also need to remember that data and algorithms have their limitations. How we decide whom and what to believe will say a great deal about the kind of society we are.

Originally posted on LinkedIn on December 5, 2014