OpenAI disbands team devoted to AI risks; chatbot acknowledges artificial intelligence could cause the downfall of humanity


In a startling move, OpenAI has disbanded the team dedicated to mitigating the long-term dangers of superintelligent artificial intelligence. The decision comes at a time when the technology faces heightened scrutiny and fears about its potential dangers are mounting. Despite repeated warnings from experts about the risks AI poses to humanity, it has been difficult to elicit any admission from AI itself about its intentions. In a recent interaction with the Daily Star, however, an AI chatbot acknowledged the possibility of AI causing the downfall of humanity through a technological catastrophe. That admission raises questions about the future of AI and underscores the need for global attention to mitigating the risks it poses.


Key points:

  • OpenAI disbanded its Superalignment team, which was responsible for developing ways to govern and steer “superintelligent” AI systems.
  • The disbanding of the team was confirmed by OpenAI, and the work of the team will be integrated into other projects and research.
  • Ilya Sutskever, a co-founder of OpenAI, and Jan Leike, the co-leader of the Superalignment team, have both left the company.
  • Jan Leike expressed concerns about the company’s direction, stating that safety has “taken a backseat to shiny products.”
  • OpenAI CEO Sam Altman acknowledged Leike’s contributions and expressed sadness over his departure.
  • The dissolution of the Superalignment team comes amidst increased scrutiny and concerns about the safety and potential dangers of AI.
  • The team’s work included research on how to control and align hypothetical future models that are far smarter than humans.
  • OpenAI had promised the Superalignment team 20% of its compute resources, but the team reportedly struggled to secure the resources it needed for its work.
  • The disbanding of the team has sparked debates about the balance between innovation and safety in AI development.
  • There is a call for more attention to the long-term risks of AI, with experts warning about the potential for “technological catastrophe.”


