- The AI apocalypse has begun, with the amount of intelligence in the universe doubling every 18 months.
- OpenAI’s Sam Altman once joked that AI would lead to the end of the world, but that before that, it would be a huge business.
- Eric Hoel, an American neuroscientist and philosopher at Tufts University, pushes back on the view that AI systems cannot be considered intelligent because they do not understand the world and have no personality manifested in intentions and actions.
- The Bayesian brain hypothesis states that the brain’s primary function is to minimize surprise. If that is true, the AI apocalypse may have already begun.
- This suggests that ChatGPT and its peers may already be more versatile in their intelligence than any human being.
The AI apocalypse has begun, changing the meaning of Moore’s Law: the amount of intelligence in the universe will now double every 18 months, according to Sam Altman, CEO of OpenAI, the company behind ChatGPT. Just seven years ago, Altman joked: “AI will most likely lead to the end of the world, but before that, there will be a huge business.” Now people joke that he never parts with a “nuclear backpack” so that he can remotely detonate data centers if GPT gets out of control.
To better understand what Altman is referring to in his tweet, and how to stay sane during the AI boom, we recommend reading Eric Hoel’s essay “How to navigate the AI apocalypse as a sane person.” Hoel is an American neuroscientist and philosopher at Tufts University, and a great writer, so the post is well worth reading in full.
Here we look at just one of its key points, as it captures both a deep understanding of what is happening and the near future of the apocalypse that has already arrived. The main argument of the “rational techno-optimists,” who believe that nothing extraordinary or especially risky happened with the advent of ChatGPT, runs as follows:
- Despite the outstanding results of generative conversational AI (ChatGPT, Bing, etc.), these systems cannot be considered intelligent. They do not understand the world and lack the motivation of an agent. They have no personality that manifests itself in intentions and actions, and their apparent intellect is nothing more than a simulacrum of intellect. At its core, this simulacrum is just an auto-completer for the next word, reflecting in its probabilistic mirror the colossal, unfiltered corpus of human-written text from the Internet.
- If so, there is neither a near-term prospect of superintelligence nor the risks associated with it (though, of course, one should prepare for it, most likely on a horizon of decades).
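To make the optimists’ “auto-filler for the next words” framing concrete, here is a deliberately crude caricature, entirely ours and nothing like a real transformer: given the last word, sample the next one from a probability table and repeat. The word table and probabilities below are invented for illustration.

```python
import random

# Hypothetical next-word probability table (invented for this sketch).
# A real language model conditions on the whole context, not one word.
NEXT_WORD_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "end": 0.2},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.6, "sat": 0.4},
    "sat": {"quietly": 1.0},
    "ran": {"away": 1.0},
}

def autocomplete(word: str, steps: int, rng: random.Random) -> list[str]:
    """Repeatedly sample a probability-weighted next word, like a toy auto-filler."""
    out = [word]
    for _ in range(steps):
        options = NEXT_WORD_PROBS.get(out[-1])
        if not options:  # no continuation known: stop generating
            break
        words, weights = zip(*options.items())
        out.append(rng.choices(words, weights=weights)[0])
    return out

print(" ".join(autocomplete("the", 3, random.Random(0))))
```

The point of the caricature is that nothing in this loop “understands” anything; the optimists’ claim is that today’s systems are this, only at colossal scale.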
Hoel’s answer to this argument is as follows:
- The fact that ChatGPT, for example, is simply auto-completing the next word does not imply that it cannot become (or has not already become) an intelligent agent. Unlike consciousness, intelligence is a purely functional concept: if something acts intelligently, it is intelligent. If it acts as an agent, it is an agent.
- Here is an illustrative example. An influential cohort of scientists, including Karl Friston (the most-cited neuroscientist) and a host of other famous names, claims that the purpose of our brain is to minimize surprise. This “Bayesian brain hypothesis” is one of today’s mainstream theories: on a global level, minimizing surprise is the brain’s primary function. It is just one of several leading hypotheses about how the brain works, but let’s assume it is true. Now imagine that aliens find a human brain, look at it, and say: “Oh, this thing just minimizes surprise! It cannot be the basis of intellect and therefore cannot be dangerous to the true bearers of mind.” Think: is “minimizing surprise” really a much harder goal than auto-completing text? Or is it actually very similar?
- And if so, then a non-human superintelligence may already be nearby, and the associated risks are already quite real. What else is there to add? Perhaps ChatGPT and its peers are already more versatile in their intelligence than any human being. Most likely, they will be all of the following at once: intelligent, unreliable, alien to the human mind, and uncontrollable in any fundamental sense except for some hastily built guardrails.
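The two framings above are closer than they look, and a short sketch (ours, not Hoel’s) makes the link explicit: the standard training objective of a next-word autocompleter is its average “surprise” (negative log-probability) at the words that actually occur, so minimizing that loss literally is minimizing surprise. The probabilities below are invented for illustration.

```python
import math

def surprisal(prob: float) -> float:
    """Information-theoretic surprise (in bits) of an event with probability `prob`."""
    return -math.log2(prob)

# Hypothetical model probabilities for the word after "the cat sat on the".
predicted = {"mat": 0.6, "roof": 0.3, "moon": 0.1}

actual_next_word = "mat"
loss = surprisal(predicted[actual_next_word])
print(f"surprise at '{actual_next_word}': {loss:.3f} bits")
```

A better language model assigns higher probability to what actually comes next and is therefore less surprised by it, which is the same global objective the Bayesian brain hypothesis ascribes to our own cortex.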
And if all this is true, then the AI apocalypse has already begun.