OpenAI forms new Safety and Security Committee

OpenAI has formed a new Safety and Security Committee as it begins training its next frontier model. 

Over the next 90 days, the committee will evaluate the company’s current processes and safeguards. It will then share its findings with OpenAI’s Board, which will review them before OpenAI shares the adopted recommendations publicly.

The committee’s overall goal is to provide guidance on how OpenAI can continue innovating on AI safely.

“OpenAI has recently begun training its next frontier model and we anticipate the resulting systems to bring us to the next level of capabilities on our path to AGI. While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment,” OpenAI wrote in a post.

The committee will be led by Bret Taylor, Adam D’Angelo, Nicole Seligman, and Sam Altman. OpenAI’s Aleksander Madry, Lilian Weng, John Schulman, Matt Knight, and Jakub Pachocki will also be on the committee. Additionally, the committee will consult with John Carlin, a former Justice Department official, and Rob Joyce, a former director of the U.S. National Security Agency. 
