
After the Equifax Breach—Preventing Financial Fraud

As I wrote in a prior post, Understanding the Equifax Data Breach, many of the proposed solutions will not fix consumers’ problems resulting from the breach. If, as I surmise, the Equifax breach involved the theft of “credit header” data—identifying information such as names, addresses, dates of birth, phone numbers, and Social Security numbers—breach victims face the possibility of financial fraud, identity fraud, or both. This article takes a closer look at the solutions proposed to prevent financial fraud.

Consumers have limited ability to protect themselves from fraud. No matter what they do, they remain dependent on the security practices of organizations that house their data and on the decision-making practices of financial institutions that use the data. Solutions proposed in the wake of the breach might also introduce new concerns.

  • Free credit monitoring. Credit monitoring does not prevent identity theft or financial fraud. It simply lets consumers know what is going on and, in some cases, take faster action than they otherwise could. Identity and credit monitoring services claim that they provide additional value by scanning the Dark Web for Social Security numbers or other information sold by criminals. However, it is not clear what consumers can do with notice of an illegal sale. It might be simpler and cheaper to assume that if you received a data breach notification, your data was stolen and is available for sale, and to act accordingly.

  • Automatic fraud alerts on credit files. A fraud alert requires a lender to contact a consumer before completing a credit-related transaction. This helps prevent fraudsters from opening new accounts in someone’s name, as long as 1) consumers keep their contact information current and verifiable by trusted third parties; 2) lenders fulfill their consumer contact obligations; and 3) consumers respond in a timely way. The weakness of this approach in its current form is that fraud alerts apply only to data held at consumer reporting agencies. Unfortunately, “credit header” data is often sold to other services and used for identity verification in non-credit transactions. To be truly effective, a fraud alert should travel with “credit header” data, and organizations that use “credit header” data should have the same obligations as lenders.

  • Free freezes and thaws on credit files, or frozen credit files as the default. In this proposal, a credit file is frozen and unavailable to lenders until a consumer wants to make a purchase that requires a credit check. At that point, the consumer can lift the freeze temporarily or permanently. A freeze helps prevent some types of financial fraud, such as a fraudster opening a new account in the consumer’s name, but the need to remove the freeze would slow down some transactions. Even savvy consumers may not realize which transactions involve a credit check. For example, as more consumers have high-deductible health plans, more hospitals run credit checks to determine whether someone can pay several thousand dollars before their insurance kicks in. Without careful analysis, it is difficult to say how many consumers could be harmed, and in what way, if every transaction that involves a credit check requires a credit report thaw. Freezes also may not prevent some kinds of financial fraud, such as takeover of existing accounts through impersonation. (A rough sketch of the freeze/thaw mechanics appears after this list.)

  • “Big Data” and new credit scoring algorithms. Several companies in the credit scoring market use new scoring algorithms and nontraditional credit data, such as where consumers shop, what they buy, what they look at online, and who they interact with on social media. As a founder of one such company said, “All data is credit data.” To date, these companies mostly focus on people with limited credit information or low credit scores, but they could potentially expand their reach. The issues with this approach are the same ones that come up in the use of “big data” and algorithmic decision-making generally: lack of transparency about how various data elements affect scoring, difficulty in proving that the scores comply with anti-discrimination laws, and difficulty in identifying and correcting errors in the data. This approach also involves very high levels of data collection and surveillance. Massive amounts of behavioral and other data collected for this type of credit scoring would need to be retained to enable the scoring algorithms to evaluate patterns and to allow for review in case of legal challenges. (A toy scoring example after this list makes the transparency concern concrete.)

  • A completely voluntary credit reporting system. A voluntary system would not work because lenders would not trust it. Only people who understand the value of the credit system and who think they have a clean record would participate. Those who have no credit file because they didn’t think they would need one would find it more difficult and more expensive to get credit even if they are creditworthy. Depending on how the voluntary system is set up, incorrect data may still find its way into consumer files. And what would happen when a consumer who participated decided to drop out? If a record of prior participation were found, the likely inference would be that the person dropped out because he or she had something to hide.

  • Consumer-controlled credit file. The effectiveness of this proposal depends on its design and implementation. If the consumer can alter the file at will, lenders will not trust it and are unlikely to use it. If consumer control is limited to approving or denying access to the file, the proposal is similar to the free credit file freezes and thaws discussed above.
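
As a rough illustration of the freeze/thaw mechanics described in the list above, the Python sketch below models a frozen-by-default credit file that a consumer can thaw, either permanently or for a limited window, before a planned credit check. The class and function names are invented for this post and do not reflect any credit bureau’s actual systems or API.

```python
from datetime import datetime, timedelta
from typing import Optional

class CreditFile:
    """Toy model of a consumer credit file with a security freeze.

    Everything here is illustrative; real bureau systems work differently.
    """

    def __init__(self, consumer_id: str, frozen: bool = True):
        self.consumer_id = consumer_id
        self.frozen = frozen                    # "frozen by default" proposal
        self.thaw_until: Optional[datetime] = None

    def thaw(self, days: Optional[int] = None) -> None:
        """Lift the freeze permanently (days=None) or for a limited window."""
        if days is None:
            self.frozen = False
            self.thaw_until = None
        else:
            self.thaw_until = datetime.utcnow() + timedelta(days=days)

    def is_accessible(self) -> bool:
        """A lender may pull the file only if it is unfrozen or inside a thaw window."""
        if not self.frozen:
            return True
        return self.thaw_until is not None and datetime.utcnow() < self.thaw_until

def request_credit_check(credit_file: CreditFile) -> str:
    """What a lender (or a hospital billing office) sees when it tries a credit pull."""
    if credit_file.is_accessible():
        return "credit report released"
    return "request denied: file is frozen; the consumer must thaw it first"

# A new-account fraud attempt is blocked; a planned purchase goes through after a thaw.
victim_file = CreditFile("consumer-123", frozen=True)
print(request_credit_check(victim_file))   # denied: fraudster cannot open a new account
victim_file.thaw(days=3)                   # consumer thaws before applying for a car loan
print(request_credit_check(victim_file))   # released
```

Note that the same check also blocks the legitimate transactions the bullet warns about: any credit pull the consumer did not anticipate, such as a hospital’s pre-treatment credit check, fails until the consumer notices and thaws the file.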

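To make the transparency concern with nontraditional scoring more concrete, here is a deliberately toy scoring function over invented behavioral features and weights. It is not any vendor’s model; it only shows how opaque a score becomes once shopping, browsing, and social data feed it, and how much of that data would have to be retained to explain or correct the result.

```python
# Hypothetical "alternative data" score. Feature names and weights are invented
# for illustration only; no real scoring vendor's model is shown here.
NONTRADITIONAL_WEIGHTS = {
    "late_night_shopping_ratio": -40.0,   # fraction of purchases made after midnight
    "discount_store_share": -25.0,        # share of spending at discount retailers
    "social_graph_avg_score": 60.0,       # normalized average score of social contacts
    "browsing_sessions_per_day": -5.0,
}

def alternative_score(features: dict, base: float = 600.0) -> float:
    """Weighted sum over behavioral features, clipped to a familiar 300-850 range."""
    raw = base + sum(
        NONTRADITIONAL_WEIGHTS[name] * value
        for name, value in features.items()
        if name in NONTRADITIONAL_WEIGHTS
    )
    return max(300.0, min(850.0, raw))

applicant = {
    "late_night_shopping_ratio": 0.35,
    "discount_store_share": 0.60,
    "social_graph_avg_score": 1.2,
    "browsing_sessions_per_day": 9.0,
}
print(alternative_score(applicant))  # 600 - 14 - 15 + 72 - 45 = 598.0
```

Even in this four-feature toy, an applicant cannot easily tell which behavior moved the score, whether a weight acts as a proxy for a protected class, or which underlying record is wrong, and every one of those records has to be kept around for the score to be explained or challenged.
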
Although consumers cannot fully protect themselves from the fallout of the Equifax data breach, a combination of fraud alerts and free freezes/thaws is probably the most expedient way to address the current situation. In the longer term we need to address larger issues. The Equifax breach is likely to accelerate the use of risk mitigation techniques that rely less on static data and more on people’s behavioral and transactional patterns, and on biometrics. We need to have a societal conversation about the trade-off between commercial surveillance that might make us safer from fraud and the loss of privacy as more data about us is used in ways we cannot control and may not understand. We also need to discuss how to restructure liability for fraud to make investments in security a priority for organizations that handle and rely on consumer data.
