The new company will develop AI safety and capabilities in tandem.

Ex-OpenAI chief scientist Ilya Sutskever launches SSI to focus on AI safety

Staff Member
Thursday, June 20, 2024, 11:30 AM · 3 min read

Ilya Sutskever, co-founder and former chief scientist of OpenAI, and former OpenAI engineer Daniel Levy have joined forces with Daniel Gross, an investor and former partner at startup accelerator Y Combinator, to create Safe Superintelligence, Inc. (SSI). The new company's goal and product are evident from its name.

SSI is a United States company with offices in Palo Alto and Tel Aviv. It will advance artificial intelligence (AI) by developing safety and capabilities in tandem, the trio of founders said in an online announcement on June 19. They added:

“Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.”

Sutskever and Gross were already worried about AI safety

Sutskever left OpenAI on May 14. He was involved in the firing of CEO Sam Altman and, after stepping down from the board when Altman returned, played an ambiguous role at the company. Daniel Levy was among the researchers who left OpenAI a few days after Sutskever.

Sutskever and Jan Leike were the leaders of OpenAI’s Superalignment team created in July 2023 to consider how to “steer and control AI systems much smarter than us.” Such systems are referred to as artificial general intelligence (AGI). OpenAI allotted 20% of its computing power to the Superalignment team at the time of its creation.

Leike also left OpenAI in May and is now the head of a team at Amazon-backed AI startup Anthropic. OpenAI defended its safety-related precautions in a long X post by company president Greg Brockman but dissolved the Superalignment team after the May departure of its researchers.

Other top tech figures worry too

The former OpenAI researchers are among many scientists concerned about the future direction of AI. Ethereum co-founder Vitalik Buterin called AGI "risky" in the midst of the staff turnover at OpenAI. He added, however, that "such models are also much lower in terms of doom risk than both corporate megalomania and militaries."

Tesla CEO Elon Musk, once an OpenAI supporter, and Apple co-founder Steve Wozniak were among more than 2,600 tech leaders and researchers who urged that the training of AI systems be paused for six months while humanity pondered the “profound risk” they represented.

The SSI announcement noted that the company is hiring engineers and researchers.
