03/19/2024 / By Belle Carter
Artificial intelligence is set to surpass human intelligence decades earlier than previously predicted, according to the mathematician and futurist who popularized the term "artificial general intelligence" (AGI) and who believes AI is verging on an exponential "intelligence explosion." Ben Goertzel made the announcement while closing out the 2024 Beneficial AI Summit and Unconference, held last week in Panama and partially sponsored by his own firm, SingularityNET.
"It seems quite plausible we could get to human-level AGI within, let's say, the next three to eight years. Once you get to human-level AGI, within a few years you could get a radically superhuman AGI," he said. The man sometimes called the "father of AI" admitted that he could be wrong, but he went on to predict that the only impediment to a runaway, ultra-advanced AI, one far more advanced than its human makers, would be the bot's "own conservatism" advising caution.
"There are known unknowns and probably unknown unknowns," Goertzel said. "No one has created human-level artificial general intelligence [AGI] yet; nobody has a solid knowledge of when we're going to get there." But unless the processing power required turned out to demand, in Goertzel's words, "a quantum computer with a million qubits or something," an exponential escalation of AI struck him as inevitable. (Related: Technocrats Gates and Altman admit current AI is the stupidest version of AGI but believe it can eventually "overcome polarization" – or in reality – censor views.)
In recent years, Goertzel, well-known for his work on Sophia the Robot, the first robot ever to be granted legal citizenship, has been investigating a concept he calls "artificial superintelligence" (ASI), which he defines as an AI so advanced that it matches all of the brain power and computing power of human civilization. According to him, three converging lines of evidence support his thesis. First, he cited the updated work of Google's long-time resident futurist and computer scientist Ray Kurzweil, who has developed a predictive model suggesting AGI will be achievable in 2029. Next, Goertzel pointed to the well-known improvements made to large language models (LLMs) within the past few years, which he said have "woken up so much of the world to the potential of AI." Finally, he turned to his own infrastructure research, which he calls "OpenCog Hyperon," designed to combine various types of AI.
The new infrastructure would marry existing AI, such as LLMs, with new forms of AI focused on areas of cognitive reasoning beyond language, such as math, physics or philosophy, to help create a more well-rounded, true AGI. Goertzel's OpenCog Hyperon has drawn interest from others in the AI space, including Berkeley Artificial Intelligence Research (BAIR), which last month hosted an article he co-wrote with Databricks CTO Matei Zaharia and others.
The self-described panpsychist has suggested that researchers pursue the creation of a "benign superintelligence." Goertzel has also proposed an AI-based cryptocurrency rating agency capable of identifying scam tokens and coins.
In a conversation with the science and technology website Futurism last year, Goertzel shared his views on consciousness in humans, AIs and otherwise. At one point, the outlet asked: "Do you think an AI would ever be sophisticated enough to do drugs, and if so, would you do drugs with one?" The scientist readily admitted that he has done drugs with an AI, "if by that we mean I have done drugs and then interacted with an AI."
He said that in the 1990s, he was doing algorithmic music composition. "It's quite interesting to play music and have an AI play music back to you. But if you're in an altered state of consciousness, it can be even more interesting," he said. "I think in terms of AI themselves taking drugs, the challenge is more to get the AI to not be in an altered state of consciousness."
According to him, when his team was working with their open-source AGI system, it was very easy to make the system obsessive-compulsive, thinking about the same thing over and over, or to leave it stuck in a stoned state of mind, drifting semi-randomly from one thing to another. "You have to work to have the system auto-tune its own parameters so it's not OCD or overly stoned and distracted," he explained. "With humans, our brains evolved to keep the parameters in a range where we can do useful stuff, and AIs sort of have to recapitulate that process."
He added that an AI doesn't need chemical drugs in the same sense that a human does, but an AI system's parameters can be set so that it goes way off the rails in both its internal dynamics and its external behaviors. "And much like on some human drug trips, this will cause it to generate a whole lot of creative things, most of which are garbage and some of which will cause it to be totally unable to estimate the nature or quality of it," he said.
Watch Goertzel’s closing speech at the 2024 Beneficial AI Summit below.
Head over to FutureTech.news for more news like this.
COPYRIGHT © 2018 CYBORG.NEWS