Responding to questioning from US senators, OpenAI stated that the company is committed to ensuring that its powerful AI capabilities do not cause harm and that staff have means to voice concerns about safety standards.
The business attempted to reassure legislators about its dedication to safety after five senators, including Hawaii Democrat Brian Schatz, voiced concerns about OpenAI's practices in a letter to CEO Sam Altman.
"Our mission is to ensure that artificial intelligence benefits all of humanity, and we are committed to implementing rigorous safety protocols at every stage of our process," Chief Strategy Officer Jason Kwon told legislators in a letter on Wednesday.
Specifically, OpenAI stated that it will keep its pledge to devote 20% of its computing resources to safety-related research over several years. In its letter, the firm also stated that it will not enforce non-disparagement agreements for current and former employees, except in cases involving a mutual non-disparagement agreement. OpenAI's previous restrictions on departing employees had been criticized as unreasonably stringent; the company has since said that it revised the policy.
Altman also addressed the matter on social media.
"Our team has been working with the US AI Safety Institute on an agreement where we would provide early access to our next foundation model so that we can work together to push forward the science of AI evaluations," he wrote on X.
"a few quick updates about safety at openai: as we said last july, we're committed to allocating at least 20% of the computing resources to safety efforts across the entire company. our team has been working with the US AI Safety Institute on an agreement where we would provide…" — Sam Altman (@sama), August 1, 2024
In his letter, Kwon also pointed to the recently formed safety and security committee, which is currently reviewing OpenAI's operations and policies.
In recent months, OpenAI has faced a number of questions about its commitment to safety and its employees' ability to speak out on the subject. Several key members of its safety teams resigned, including co-founder and former chief scientist Ilya Sutskever, as well as Jan Leike, who led the team devoted to assessing long-term safety risks and publicly expressed concerns that the company was prioritizing product development over safety.