While lawmakers in Congress and policymakers around the world debate how to establish guardrails for the rapidly expanding artificial intelligence industry, the Federal Trade Commission already has a powerful enforcement tool in place: algorithm disgorgement.
Also referred to as model deletion, the enforcement strategy requires companies to delete products built on data they shouldn’t have used in the first place. If the commission finds, for instance, that a company trained a large language model on improperly obtained data, the company must delete not only that data but also any products developed from it.
“It really gets to the core of what is a common practice in the tech industry to use wide swaths of data, not necessarily for the purpose under which the data was originally obtained,” said Sarah Myers West, managing director of AI Now and a former FTC adviser. “It’s not OK to justify data collection just for the sake of product improvement when it’s a violation of privacy.”
So far, the FTC has used the tool in five cases against tech companies dating back to 2019, including cases against a diet app for children and the controversial data analytics firm Cambridge Analytica. Most recently, the commission proposed a pair of settlements with Amazon that required it to delete ill-gotten data. The agency’s order over privacy violations by Amazon’s Ring required the security camera company to delete data products, including algorithms, derived from videos it unlawfully reviewed. In another settlement with Amazon, the FTC ordered the tech giant to delete children’s data, geolocation data and voice recordings it obtained in an alleged violation of federal children’s privacy law.
FTC officials called the enforcement action against Amazon a warning to other companies that may be mishandling user data in a race to build their models. “Machine learning is no excuse to break the law,” FTC Commissioner Alvaro Bedoya wrote in a statement about the order. “Today’s settlement on Amazon Alexa should set off alarms for parents across the country — and is a warning for every AI company sprinting to acquire more and more data.”
In April, the commission, along with the Consumer Financial Protection Bureau, the Department of Justice’s Civil Rights Division and the U.S. Equal Employment Opportunity Commission, issued a joint statement saying they plan to “vigorously enforce their collective authorities and to monitor the development and use of automated systems.”
FTC Chair Lina Khan made the point again in a May op-ed in The New York Times. “Although these (AI) tools are novel, they are not exempt from existing rules, and the FTC will vigorously enforce the laws we are charged with administering, even in this new market,” she wrote. Khan’s piece followed guidance the agency issued to businesses in just the first half of 2023 on AI topics ranging from false product promotion to consumer trust and deception.
Part of the agency’s strength as a go-to regulator of AI is the flexibility of its congressional mandate. “The FTC Act has broad applicability and was actually designed by Congress for exactly this — to confront new technologies and new emerging markets,” said Ben Wiseman, acting associate director of the Division of Privacy and Identity Protection at the FTC. “Its breadth and scope provide the ability to ensure that consumers are protected when these new technologies hit the marketplace.”
He said that model deletion is a “significant part” of the agency’s enforcement strategy when it comes to AI. “I think the main consideration that we are thinking through is these AI systems, these models are trained on lots and lots of consumer data. What are companies doing to protect that data? Not just from unauthorized access, but also through other disclosures that might be inadvertent or might be intentional?” asked Wiseman.
Looking ahead, the FTC is watching how emerging uses of AI may sweep up sensitive data in the medical field and other industries that deal in sensitive information, Wiseman said. Such enforcement would come on top of the agency’s recent string of actions involving sensitive health data.
Model deletion isn’t a new tool for the agency, but its use has picked up over the past year. In May, the FTC sued an education technology company for violating the privacy of students as young as kindergartners. The proposed settlement ordered the company to delete any models or algorithms it developed using the data. Experts say disgorgement is an effective enforcement tool because it imposes real costs on a company’s business model, rather than fines that can amount to a slap on the wrist for major players.
“It actually goes for the money,” said Meredith Whittaker, president of the Signal Foundation and a former senior adviser on artificial intelligence at the FTC. “They spend $5 million on T-shirts that don’t fit staff right. That is nothing when you’re talking about these companies. But the data and the models are where the money is.”
The desire to stay out of the FTC’s crosshairs could motivate companies to take a more careful approach to tracking how and what data their models are ingesting. “It’s just complicated in practice because AI systems are not designed to be rolled back to certain points in time,” said Cobun Zweifel-Keegan, managing director of the International Association of Privacy Professionals. “It’s hard to disentangle the way things have been learned over time, or it might require retraining or thinking over.”
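To make that rollback problem concrete, here is a minimal Python sketch; the function names and the `train_fn` callback are hypothetical illustrations, not references to any system in the cases above. Because a trained model cannot simply subtract what it learned from specific records, complying with a deletion order typically means filtering the corpus and retraining from scratch.

```python
# Minimal illustrative sketch; "is_tainted" and "train_fn" are hypothetical
# placeholders, not part of any real system named in this article.
def retrain_without_tainted_data(dataset, is_tainted, train_fn):
    """Drop records flagged as improperly obtained, then retrain fully."""
    clean = [record for record in dataset if not is_tainted(record)]
    # No incremental rollback exists: the old model's weights are
    # discarded and a new model is trained on the filtered corpus.
    return train_fn(clean)
```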
Ben Winters, senior counsel at the Electronic Privacy Information Center and lead of its AI and Human Rights Project, says he hopes the Amazon decisions “motivate companies to get their [act] together.”
“One of the biggest opportunities of this use of disgorgement is you’re forcing a data provenance and data governance practice, because if you don’t track where everything is and what tweaks you’ve made with what sorts of data, then the enforcement agency is likely not going to be particularly charitable,” said Winters. “It’s more like they’re going to have an overinclusive set of data and of algorithms that they have to delete.”
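As one illustration of the provenance practice Winters describes, here is a minimal Python sketch; the record schema, field names and log file are assumptions for illustration, not an FTC requirement. Logging which datasets fed each training run, and the purpose each dataset was collected for, would let a company scope a deletion order to the affected models rather than an overinclusive set.

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class TrainingRunRecord:
    """One audit entry tying a model to the datasets that trained it."""
    model_id: str
    dataset_ids: list          # which datasets fed this run
    collection_purpose: dict   # dataset_id -> purpose the data was collected for
    started_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_run(record: TrainingRunRecord, path: str = "provenance.jsonl") -> None:
    """Append the run record to an append-only audit log."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Usage: if a dataset is later found to be tainted, search the log for its
# id to identify exactly which models must be deleted or retrained.
log_run(TrainingRunRecord(
    model_id="recommender-v3",
    dataset_ids=["clickstream-2023Q1"],
    collection_purpose={"clickstream-2023Q1": "site analytics"},
))
```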
The FTC’s Wiseman said whether the agency pursues a remedy such as model deletion depends on the facts of a case, but companies that use illegally obtained data should know the FTC will be scrutinizing their data products and that deletion is on the table.
Experts note that the FTC currently faces some limitations in bringing cases that could call for model deletion. In the Alexa case, and in two of the five cases that involved full model deletion, the agency relied on federal children’s privacy law.
Having a federal privacy law protecting users of all ages would make it easier for the agency to bring privacy-related cases. “We would certainly welcome a comprehensive privacy law,” said Wiseman. “I think that in and of itself would be significant and would make our job easier.”