05/24/2024 / By Ramon Tomey
Artificial intelligence (AI) firm OpenAI has disbanded a team devoted to mitigating the long-term dangers of so-called artificial general intelligence (AGI).
The San Francisco-based OpenAI confirmed the end of its superalignment group on May 17. The group's dissolution began weeks earlier, and its members have been folded into other projects and research efforts.
“The dismantling of an OpenAI team focused on keeping sophisticated artificial intelligence under control comes as such technology faces increased scrutiny from regulators and fears mount regarding its dangers,” Yahoo News reported.
Following the disbandment of the superalignment team, OpenAI co-founder Ilya Sutskever and team co-leader Jan Leike announced their departures from the technology firm. In a post on X, Sutskever said he was leaving the company after almost a decade. He praised OpenAI in the same post, describing it as a firm whose “trajectory has been nothing short of miraculous.”
“I’m confident that OpenAI will build AGI that is both safe and beneficial,” added Sutskever, referring to technology designed to match or surpass human cognition. Notably, Sutskever was a member of the board that voted to remove founder and CEO Sam Altman last November; Altman was reinstated a few days later after staff and investors rebelled.
Leike’s post on X about his departure also touched on AGI. He urged all OpenAI employees to “act with the gravitas” warranted by what they are building. Leike reiterated: “OpenAI must become a safety-first AGI company.” (Related: What are the risks posed by artificial general intelligence?)
Altman responded to Leike’s post by thanking him for his work at the company and expressing sadness over his departure. “He’s right – we have a lot more to do. We are committed to doing it,” the OpenAI CEO continued.
The disbandment of the superalignment team, alongside the departures of Sutskever and Leike, came as OpenAI released an advanced version of its signature ChatGPT chatbot. The new version, which offers higher performance and even more human-like interactions, was made free to all users.
“It feels like AI from the movies,” Altman said in a blog post. The OpenAI CEO has previously pointed to the 2013 film “Her” as an inspiration for where he would like AI interactions to go. “Her” centers on the character Theodore Twombly (played by Joaquin Phoenix) developing a relationship with an AI named Samantha (voiced by Scarlett Johansson).
Sutskever, meanwhile, said during a talk at a TED AI summit in San Francisco late last year that “AGI will have a dramatic impact on every area of life.” He added that the day will come when “digital brains will become as good and even better” than those of humans.
According to WIRED magazine, research on the risks associated with more powerful AI models will now be led by John Schulman after the superalignment team’s dissolution. Schulman co-leads the team responsible for fine-tuning AI models after training.
OpenAI declined to comment on the departures of Sutskever, Leike or other members of the now-disbanded superalignment team. It also refused to comment on the future of its work on long-term AI risks.
“There is no indication that the recent departures have anything to do with OpenAI’s efforts to develop more human-like AI or to ship products,” the magazine noted. “But the latest advances do raise ethical questions around privacy, emotional manipulation and cybersecurity risks.”
Head over to Robots.news for more stories about AI and its dangers.
Watch Glenn Beck explain Scarlett Johansson’s accusation that OpenAI stole her voice for the latest version of ChatGPT.
This video is from the High Hopes channel on Brighteon.com.
DeepLearning.AI founder warns against the dangers of AI during annual meeting of globalist WEF.
OpenAI researchers warn board that rapidly advancing AI technology threatens humanity.