Some in Congress say this is a problem. On Tuesday, Rep. Yvette D. Clarke (D-N.Y.) introduced legislation that would require disclosure of AI-generated content in political ads, part of an effort, she said, to "address a number of challenges Congress faces" as AI spreads.
“Our current laws don’t even begin to scratch the surface when it comes to protecting the American people from the rapid deployment of AI in ways that could disrupt society,” Clarke said in an interview.
The immediate impetus for her bill, she said, was an ad released by the RNC last week that used AI-generated images to paint a dystopian picture of a possible second term for Biden. Released in response to Biden’s announcement that he was running for re-election, the 30-second spot featured simulated scenes of China invading Taiwan and immigrants at the southern border, among other scenarios. The ad included a disclaimer in the upper left corner that read, “Made entirely with AI imagery.”
Clarke said the disclosure made the RNC ad, in a certain sense, a model for the transparent use of AI. But not all actors will follow that example, she warned, and the consequences could be dire during the 2024 presidential campaign.
“There will be people who don’t want to disclose that it’s AI-generated, and we want to guard against that, especially as we look at the political season,” Clarke said.
Clarke’s bill would amend federal campaign finance law to require that political ads include a statement disclosing any use of AI-generated imagery. The Federal Election Commission recently tightened rules regarding sponsorship disclaimers for digital ads, clarifying that the requirement to disclose who paid for ads promoted on websites also applies to advertising on other platforms, such as social media and streaming sites.
Further safeguards are required by “revolutionary innovations” in AI technology, as well as the potential “use of generative AI that harms our democracy,” according to Clarke’s bill.
Lawmakers have not taken swift action on similar proposed measures aimed at curbing the use of AI applications, including facial recognition technology. That legislation has stalled amid a broad congressional impasse that has also thwarted privacy and advertising transparency proposals, though Senate Majority Leader Charles E. Schumer (D-N.Y.) recently presented a framework for AI regulation that could spur action.
Last year, Clarke was one of several members of Congress behind a proposal to limit law enforcement’s use of that technology, but it never got past the two committees to which it was referred. Variations of the bill have been introduced over the years. One version, which would bar the government from using facial recognition and other biometric technologies, was recently reintroduced.
Some lawmakers have turned to creative strategies to build momentum for congressional action. Earlier this year, Rep. Ted Lieu (D-Calif.) introduced a measure calling on Congress to regulate the development and deployment of AI. To underscore the technology’s power, Lieu created the resolution using the AI language model ChatGPT.
A human wrote Clarke’s bill. And while AI-generated images may not yet have been deployed to deceive or defraud voters, she said, the need to create guardrails is clear.
“I think AI has really important uses, but there have to be some rules for the road so the American people aren’t tricked or put in harm’s way,” she said.