Republicans worried that any new rules would lead to increased censorship of conservatives, while Democrats worried they would open the floodgates to online hate speech and disinformation. Now those arguments are beginning to return in a completely new debate.
From the frequent invocation of Section 230 during OpenAI CEO Sam Altman’s Senate testimony on Tuesday to a spat over disinformation and censorship at a separate Senate hearing on the government’s use of automated systems, familiar battle lines over social media risk being redrawn as Congress turns its gaze to AI.
“The same muscle memory comes back,” said Nu Wexler, a partner at public relations firm Seven Letter and a former Democratic congressional staffer who has worked at Google, Facebook and other tech companies.
A return to the politics of those earlier technological conflicts will make it more difficult for the two parties to agree on AI policy. And even if they can stay united, lawmakers will likely need to look beyond censorship, disinformation, political bias or other issues raised by social media if they want to produce meaningful AI regulations.
One reason many lawmakers are viewing AI through a social-media lens, some on the Hill say, is the basic knowledge gap around an extremely fast-moving new technology.
“Without mentioning anyone’s names, some members of the House and Senate have no idea what they’re talking about,” said Rep. Zoe Lofgren (D-Calif.), the ranking member of the House Science Committee, in an interview with POLITICO on Thursday.
During a Tuesday hearing of the Senate Homeland Security and Governmental Affairs Committee, ranking member Rand Paul (R-Ky.) accused the government of collaborating with social media companies to deploy AI systems that “will monitor and censor your protected speech.”
Paul later told POLITICO that he would not work on AI legislation with committee Chair Gary Peters (D-Mich.) until the Democrat acknowledges that online censorship is a real problem.
“Everything else is window dressing,” Paul said. “We can work well with [Peters] on that, but we will have to see progress on defending speech.”
Speaking to reporters after Tuesday’s hearing, Peters said he shared Paul’s concerns about AI and civil liberties. But he also emphasized that AI is “much broader than just related to potential misinformation and disinformation.”
“It’s a topic we need to consider — but it’s also a very complicated topic,” Peters said.
The tone was less partisan during Altman’s testimony before the Senate Judiciary Subcommittee on Privacy, Technology and the Law. But tech topics that typically spark intense fights were still front and center.
Senators from both parties, including Josh Hawley (R-Mo.) and Amy Klobuchar (D-Minn.), raised concerns about the potential for AI systems to spread election misinformation online. Others, including Judiciary Chair Dick Durbin (D-Ill.) and ranking member Lindsey Graham (R-S.C.), asked Altman about Section 230 of the Communications Decency Act.

That provision shields online platforms from legal liability for content posted by their users. Efforts to update the 27-year-old internet law for the social media era have repeatedly snarled in partisan fights over censorship, disinformation and hate speech.

And Section 230 may not even apply to AI systems, a point Altman repeatedly tried to convey to senators on Tuesday.
“It’s tempting to use the social media frame, but this is not social media,” Altman said. “It’s different, so the answer we need is different.”
Lofgren, whose congressional district includes a chunk of Silicon Valley, shares Altman’s sentiment that Section 230 “doesn’t really apply” to AI. “Apples and oranges, really,” she said.
And if lawmakers hope to tackle politically charged topics like disinformation, Lofgren said a federal data privacy bill would be more effective than new rules on AI. “If you want to get into manipulation, you have to get into how you manipulate, which is really the use and abuse of personal data,” she said.
Wexler said it’s too early to tell whether congressional efforts to limit AI will end up trapped by the same partisan gridlock that has derailed meaningful rules on social media. While he acknowledged that the warning signs are there, he also pointed to clear areas of agreement — particularly on the need for greater study and more transparency in AI systems.
And while Lofgren thinks Congress should stop confusing social media with AI, she sees few signs of a similar partisan divide — at least for now. “Could that come true? Maybe,” she said. “But I think everyone realizes that this is a technology that could turn the world upside down, and we better figure it out.”
However, other observers believe it is only a matter of time before the political feuds that have stymied congressional efforts to unite on other tech issues erupt over AI.
“The left will say that AI is hopelessly biased and discriminatory; the right will argue that AI is just another ‘woke’ anti-conservative conspiracy,” said Adam Thierer, senior fellow for technology and innovation at the R Street Institute, a libertarian think tank.
“The social media culture wars are about to turn into the AI culture wars,” Thierer said.