July 24, 2024
Ilya Sutskever Launches New AI Company
AI

Ilya Sutskever, co-founder and former chief scientist of OpenAI, is launching a new AI company focused on safety. In a Wednesday post, Sutskever introduced Safe Superintelligence Inc. (SSI), a startup with “one goal and one product”: developing a safe and powerful AI system.

SSI’s Unique Approach to AI Safety

The announcement describes SSI as a startup that “approaches safety and capabilities in tandem,” allowing the company to advance its AI system quickly while still prioritizing safety. It also points to the external pressures that AI teams at companies like OpenAI, Google, and Microsoft often face, noting that SSI’s “singular focus” lets it avoid “distraction by management overhead or product cycles.” “Our business model means safety, security, and progress are all insulated from short-term commercial pressures,” the announcement reads. “This way, we can scale in peace.”

Leadership and Vision at SSI

Sutskever is joined by co-founders Daniel Gross, who formerly led AI efforts at Apple, and Daniel Levy, a former member of the technical staff at OpenAI, giving the leadership team deep experience in AI development and safety. Sutskever himself was at the center of last year’s turmoil at OpenAI, where he led the push to oust CEO Sam Altman.

Sutskever left OpenAI in May and hinted at the start of a new project. Shortly after his departure, AI researcher Jan Leike also resigned from OpenAI, saying that safety processes had “taken a backseat to shiny products,” and policy researcher Gretchen Krueger cited similar safety concerns when announcing her own exit. These departures form the backdrop for SSI’s safety-first mission.

SSI’s Focus on Safe Superintelligence

As OpenAI pushes forward with partnerships with Apple and Microsoft, SSI is taking a different path. In an interview with Bloomberg, Sutskever said that SSI’s first product will be safe superintelligence and that the company “will not do anything else” until it achieves that goal.

This single-minded focus on building a safe and powerful AI system sets SSI apart from AI companies juggling multiple projects and commercial pressures, and it is meant to directly address the safety concerns that have been raised across the AI community.

In summary, Sutskever’s new venture, Safe Superintelligence Inc., aims to build a powerful AI system that puts safety and security ahead of short-term gains. With a focused leadership team and a clear mission, SSI is positioned to make a significant mark on the AI industry.

Image by DC Studio on Freepik

