“We must regulate AI,” FTC Chair Khan says

May 3, 2023
FTC Chair Lina M. Khan testifies during a Senate Commerce, Science, and Transportation Committee nomination hearing on April 21, 2021, in Washington, DC.
Graeme Jennings/Getty Images

On Wednesday, Federal Trade Commission (FTC) Chair Lina Khan pledged to use existing laws to regulate AI in a New York Times op-ed, “We Must Regulate A.I. Here’s How.” In the piece, she warns of AI risks such as market dominance by large tech firms, collusion, and the potential for increased fraud and privacy violations.

In the op-ed, Khan cites the rise of the “Web 2.0” era in the mid-2000s as a cautionary tale for AI’s expansion, saying that the growth of tech companies led to invasive surveillance and loss of privacy. Khan argues that public officials must now ensure history doesn’t repeat itself with AI, but without unduly restricting innovation.

“As these technologies evolve,” she wrote, “we are committed to doing our part to uphold America’s longstanding tradition of maintaining the open, fair and competitive markets that have underpinned both breakthrough innovations and our nation’s economic success—without tolerating business models or practices involving the mass exploitation of their users.”

Khan’s op-ed comes as rising hype and anxiety about generative AI tools like ChatGPT have begun to dominate the tech world. Increasing use of the nebulous term “AI” in commerce has led the FTC to post clarifying statements on its website about how it plans to deal with these new technologies (and potentially misleading claims about them).

In line with those previous statements, Khan made a point of noting that AI represents nothing special in the eyes of the law. “Although these tools are novel, they are not exempt from existing rules,” she wrote, “and the FTC will vigorously enforce the laws we are charged with administering, even in this new market.”

Furthermore, Khan’s plans for AI go beyond popular generative AI chatbots, extending to other forms of automation and algorithmic decision-making. She mentions at least four key areas of concern:

  • Ensuring fair competition: Preventing large tech firms from exploiting their market dominance and using collusion to stifle innovation and smaller competitors in the AI landscape.
  • Strengthening consumer protection: Safeguarding users from deceptive and fraudulent practices enabled by AI, such as phishing scams, deepfake videos, and voice cloning.
  • Promoting data privacy: Monitoring AI systems to ensure they adhere to data protection laws and prevent exploitative data collection or usage, protecting users’ personal information.
  • Combating discriminatory practices: Ensuring AI systems don’t perpetuate or amplify biases and discrimination, which can lead to unfair treatment in areas like employment, housing, or access to essential services.

Some of these elements have previously been laid out in the Biden administration’s “AI Bill of Rights” guidelines published in October. Those guidelines do not explicitly have the force of law behind them, but the FTC has the latitude to interpret existing laws to apply to AI. “The FTC is well equipped with legal jurisdiction to handle the issues brought to the fore by the rapidly developing A.I. sector,” Khan said.

Looking ahead, Khan asks: Can the United States continue to foster world-leading technology without accepting “race-to-the-bottom business models” and “monopolistic control” that locks out higher-quality products? Her answer? “Yes—if we make the right policy choices.”
