OpenAI Announces Changes to Safety Committee Leadership

September 19, 2024

OpenAI has announced that its internal Safety and Security Committee (SSC) will transition to an independent board oversight body, following a 90-day review that recommended independent governance for safety and security.

The committee was established in May to evaluate critical safety and security decisions. After its 90-day review, it recommended the creation of independent governance for the company’s safety measures. Sam Altman, who was named to the committee in May, was not mentioned in the latest announcement, prompting online speculation that he has departed the SSC.

The newly independent committee will be chaired by Zico Kolter, Director of the Machine Learning Department at Carnegie Mellon University, who joined OpenAI’s board in August. Other members include Quora co-founder and CEO Adam D’Angelo, retired U.S. Army General Paul Nakasone, and former Sony Entertainment president Nicole Seligman. Meanwhile, OpenAI’s board chair, Bret Taylor, who chaired the original committee, will step down from it, along with several of the technical and policy experts who served alongside him.

The committee’s recommendations focused on enhancing transparency and strengthening OpenAI’s safety frameworks. The committee suggested collaborating with external organizations to assess the company’s recent AI models for potential dangers. One of the key outcomes was a call for unifying OpenAI’s safety protocols to ensure cohesive and robust oversight across the company’s operations.

OpenAI’s for-profit division, formed in 2019, is controlled by a non-profit board. This structure was intended to ensure that OpenAI stays true to its mission of developing safe and broadly beneficial artificial general intelligence (AGI). A majority of the board’s members are independent, a design meant to provide checks and balances over the company’s operations. However, some critics, including former board members, have argued that this arrangement does not adequately address concerns about the company’s profit-driven incentives.

OpenAI Faces Scrutiny Over Employee Agreements and AI Regulation

Earlier this year, OpenAI faced internal turmoil when Ilya Sutskever and Jan Leike, leaders of the company’s “superalignment” team, resigned. The team was focused on ensuring that AI systems remain under human control even after they surpass human intelligence. Following the pair’s departure, the team was disbanded. In a post on X, Leike criticized the company, alleging that it prioritized product development over safety. Around the same time, OpenAI drew backlash for requiring departing employees to sign non-disparagement agreements; the company later confirmed that these agreements would not be enforced.

Altman, who has publicly supported AI regulation, faced further scrutiny as OpenAI lobbied against a California AI bill, SB 1047, that would require developers of large AI models to follow specific safety protocols. More than 30 current and former OpenAI employees publicly backed the bill, highlighting growing internal and external concerns around AI governance.


Lawrence does not hold any crypto assets. This article is provided for informational purposes only and should not be construed as financial advice. The Shib Magazine and The Shib Daily are the official media and publications of the Shiba Inu cryptocurrency project. Readers are encouraged to conduct their own research and consult with a qualified financial adviser before making any investment decisions.
