The tone of congressional hearings with tech industry executives in recent years can best be described as antagonistic. Mark Zuckerberg, Jeff Bezos and other tech luminaries have all been dressed down on Capitol Hill by lawmakers upset with their companies.
But on Tuesday, Sam Altman, the chief executive of the San Francisco start-up OpenAI, testified before members of a Senate subcommittee and largely agreed with them on the need to regulate the increasingly powerful AI technology being built inside his company and others such as Google and Microsoft.
In his first testimony before Congress, Mr. Altman implored lawmakers to regulate artificial intelligence, as committee members displayed a nascent understanding of the technology. The hearing underscored the deep unease that technologists and government officials feel about AI's potential harms. But that unease largely did not extend to Mr. Altman, who found a friendly audience in the members of the subcommittee.
The appearance of Mr. Altman, a 38-year-old Stanford University dropout and tech entrepreneur, was his baptism as the leading figure in AI during the three-hour hearing.
Mr. Altman also spoke about his company’s technology at a dinner with dozens of House members on Monday night, and met privately with a number of senators before the hearing, according to people who attended the dinner and the meetings. He offered a loose framework for managing what happens next with the rapidly evolving systems that some believe could fundamentally change the economy.
“I think if this technology goes wrong, it can go very wrong. And we want to be vocal about that,” he said. “We want to work together with the government to prevent that.”
Mr. Altman made his public debut on Capitol Hill as interest in AI has exploded. Tech giants have invested billions of dollars in what they say is a transformative technology, even amid growing concerns about AI’s role in spreading misinformation, killing jobs and one day matching human intelligence.
That has thrust the technology into the spotlight in Washington. President Biden said this month at a meeting with a group of AI company CEOs that “what you’re doing has enormous potential and enormous danger.” Top leaders in Congress have also promised AI regulation.
It was clear that members of the Senate subcommittee on privacy, technology and the law had not planned a rough grilling for Mr. Altman: they thanked him for his private meetings with them and for agreeing to appear at the hearing. Cory Booker, Democrat of New Jersey, repeatedly referred to Mr. Altman by his first name.
Mr. Altman was joined at the hearing by Christina Montgomery, IBM’s chief privacy and trust officer, and Gary Marcus, a well-known professor and frequent critic of AI technology.
Mr. Altman said his company’s technology could destroy some jobs but also create new ones, and that it will be important for “government to figure out how we want to reduce that.” He proposed the creation of an agency that would issue licenses for the development of large-scale AI models, along with safety regulations and tests that such models must pass before being released to the public.
“We believe that the benefits of the tools we have deployed so far far outweigh the risks, but ensuring their safety is important to our work,” Mr. Altman said.
But it was unclear how lawmakers would respond to the call to regulate AI. Dozens of privacy, speech and security bills have failed over the past decade due to partisan bickering and fierce opposition from tech giants.
The United States has lagged behind other nations on privacy, speech and child protection regulations. It is also behind on AI regulation. Lawmakers in the European Union are set to introduce rules for the technology later this year. And China has created AI laws that comply with its censorship laws.
Senator Richard Blumenthal, Democrat of Connecticut and chairman of the Senate panel, said the hearing was the first in a series to learn more about the potential benefits and harms of AI to eventually “write the rules.”
He also acknowledged that Congress has failed to keep up with the introduction of new technologies in the past. “Our goal is to demystify those new technologies and hold them accountable to avoid some of the mistakes of the past,” Mr. Blumenthal said. “Congress couldn’t meet the moment on social media.”
Members of the subcommittee proposed an independent agency to oversee AI; rules forcing companies to disclose how their models work and the data sets they use; and antitrust rules to prevent companies like Microsoft and Google from monopolizing the emerging market.
“The devil will be in the details,” said Sarah Myers West, director of the AI Now Institute, a policy research center. She said Mr. Altman’s suggestions for regulation do not go far enough and should include limits on how AI is used in policing and the use of biometric data. She noted that Mr. Altman showed no indication of slowing down development of OpenAI’s ChatGPT tool.
“It’s such an irony to see an attitude of concern for human harm from those who are quickly releasing the system responsible for that very harm into commercial use,” Ms. West said.
Some lawmakers in the hearing pointed to the continuing gap in technological know-how between Washington and Silicon Valley. Lindsey Graham, Republican of South Carolina, repeatedly asked witnesses whether a speech liability shield for online platforms like Facebook and Google also applies to AI.
Mr. Altman, calm and unflappable, repeatedly tried to draw a distinction between AI and social media. “We need to work together to find a completely new approach,” he said.
Some subcommittee members also showed a reluctance to crack down too hard on an industry with great economic promise for the United States, one that competes directly with rivals such as China.
The Chinese are creating AI that “reinforces the core values of the Chinese Communist Party and the Chinese system,” said Chris Coons, Democrat of Delaware. “And I worry about how we promote AI that strengthens open markets, open societies and democracy.”
Some of the toughest questions and comments directed at Mr. Altman came from Dr. Marcus, who noted that OpenAI has not been transparent about the data it uses to develop its systems. He expressed doubt about Mr. Altman’s prediction that new jobs will replace those killed by AI.
“We have unprecedented opportunities here, but we also face a perfect storm of corporate irresponsibility, widespread deployment, lack of adequate regulation and inherent unreliability,” said Dr. Marcus.
Tech companies have argued that Congress should be careful with any broad rules that lump together different types of AI. In Tuesday’s hearing, IBM’s Ms. Montgomery called for an AI law similar to Europe’s proposed regulation, which outlines different levels of risk. She called for rules that focus on specific uses rather than regulating the technology itself.
“At its core, AI is just a tool, and tools can serve different purposes,” she said, adding that Congress should take a “precision regulatory approach to AI.”