The White House said Tuesday that eight more companies involved in artificial intelligence had pledged to voluntarily follow standards for safety, security and trust with the fast-evolving technology.
The companies include Adobe, IBM, Palantir, Nvidia and Salesforce. They joined Amazon, Anthropic, Google, Inflection AI, Microsoft and OpenAI, which initiated an industry-led effort on safeguards in an announcement with the White House in July. The companies have committed to testing and other security measures, which are not regulations and are not enforced by the government.
Grappling with AI has become paramount since OpenAI released the powerful ChatGPT chatbot last year. The technology has since been under scrutiny for affecting people’s jobs, spreading misinformation and potentially developing its own intelligence. As a result, lawmakers and regulators in Washington have increasingly debated how to handle AI.
On Tuesday, Microsoft’s president, Brad Smith, and Nvidia’s chief scientist, William Dally, will testify in a hearing on AI regulations held by the Senate Judiciary subcommittee on privacy, technology and the law. On Wednesday, Elon Musk, Mark Zuckerberg of Meta, Sam Altman of OpenAI and Sundar Pichai of Google will be among a dozen tech executives meeting with lawmakers in a closed-door AI summit hosted by Sen. Chuck Schumer, D-N.Y., the majority leader.
“The president has been clear: Harness the benefits of AI, manage the risks and move fast — very fast,” the White House chief of staff, Jeff Zients, said in a statement about the eight companies pledging to AI safety standards. “And we are doing just that by partnering with the private sector and pulling every lever we have to get this done.”
The companies agreed to test future products for security risks and to use watermarks so consumers can identify AI-generated material. They also agreed to share information about security risks across the industry and to report any potential biases in their systems.
Some civil society groups have complained about the influential role of tech companies in discussions about AI regulations.
“They have outsized resources and influence policymakers in multiple ways,” said Merve Hickok, the president of the Center for AI and Digital Policy, a nonprofit research group. “Their voices can’t be privileged over civil society.”