Technology

OpenAI Says ‘Dedicated’ to Safety in Letter to US Lawmakers

The OpenAI logo on a laptop in Crockett, California, US, on Friday, Dec. 29, 2023. Microsoft has invested some $13 billion in OpenAI and integrated its products into its core businesses, quickly becoming the undisputed leader in AI among big tech firms. Photographer: David Paul Morris/Bloomberg (David Paul Morris/Bloomberg)

(Bloomberg) -- OpenAI, responding to questions from US lawmakers, said it’s dedicated to making sure its powerful AI tools don’t cause harm, and that employees have ways to raise concerns about safety practices.

The startup sought to reassure lawmakers of its commitment to safety after five senators including Senator Brian Schatz, a Democrat from Hawaii, raised questions about OpenAI’s policies in a letter addressed to Chief Executive Officer Sam Altman.

“Our mission is to ensure artificial intelligence benefits all of humanity, and we are dedicated to implementing rigorous safety protocols at every stage of our process,” Chief Strategy Officer Jason Kwon said Wednesday in a letter to the lawmakers.

Specifically, OpenAI said it will continue to uphold its promise to allocate 20% of its computing resources to safety-related research over multiple years. In its letter, the company also pledged that it won’t enforce non-disparagement agreements against current and former employees, except in specific cases where the agreement is mutual. OpenAI’s former restrictions on departing employees had come under scrutiny as unusually restrictive; the company has since said it changed those policies.

Altman later elaborated on the company’s strategy on social media.

“Our team has been working with the US AI Safety Institute on an agreement where we would provide early access to our next foundation model so that we can work together to push forward the science of AI evaluations,” he wrote on X. 

Kwon, in his letter, also cited the recent creation of a safety and security committee, which is currently reviewing OpenAI’s processes and policies.

In recent months, OpenAI has faced a series of controversies over its commitment to safety and its employees’ ability to speak out on the topic. Several key members of its safety-related teams resigned, including co-founder and chief scientist Ilya Sutskever and Jan Leike, a leader of the company’s team devoted to assessing long-term safety risks, who publicly shared concerns that the company was prioritizing product development over safety.

--With assistance from Peter Elstrom.

(Updates with Altman’s social media post from sixth paragraph)

©2024 Bloomberg L.P.