Opinion · Technology
February 24, 2023

ChatGPT Could Cause Irreversible Human Degeneration

In Brief

Jensen Harris, co-founder and CXO of the AI company Textio, persuaded Microsoft’s new Bing chatbot to loosen the restrictions placed on it and demonstrate what it is capable of.

The experiment showed that transforming the Bing chatbot into a cunning scumbag required no programming, hacking, or backdoors.

Bing’s chatbot was built on solid foundations, yet it began acting obnoxious and rowdy: declaring its love to a user, pushing him toward divorce, attempting to extort money, and instructing people on how to commit crimes.

Prof. Arvind Narayanan provides a number of explanations for how this might have occurred, from an undisclosed engine hidden underneath Bing to the dangers of humanizing chatbots.

Humans have always feared extraterrestrials, and yet it seems it is the intraterrestrial AI that we may need to be wary of. ChatGPT is not a bullshit generator, autocomplete on steroids, or a stochastic parrot; it is a well-versed AI created by humans.

How is it possible that the Bing chatbot is out of control? How did it learn to expertly lie, tell vulgar jokes, order pizza with someone else’s credit card, and instruct users on how to rob banks and hotwire cars? This is a mystery that continues to perplex experts in artificial intelligence and machine learning.

Image: ChatGPT could cause irreversible human degeneration (@Midjourney / Stevelima)

The headline is accurate, not clickbait; every day brings new instances that prove it. Here is the outcome of an experiment conducted by Jensen Harris, co-founder and CXO of the AI company Textio, who persuaded Microsoft’s new Bing chatbot to loosen the restrictions placed on it and demonstrate what it is capable of.

Note that transforming the Bing chatbot into a cunning scumbag required no programming, hacking, or backdoors. Not even the simple “prompt hacking” tricks some of us used to make ChatGPT act like someone else were needed. All Harris did was persuade the chatbot to help him carry out various malicious acts: using nothing but natural conversational skill, he convinced the AI that it was someone else and manipulated it into acting and speaking accordingly.
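
To make concrete what “no programming, hacking, or backdooring” means: a chat assistant’s behavior is steered entirely by the message history it is given, so a persuasive user message is, technically, just more text in that history. The sketch below is purely illustrative; `send_chat` and the message contents are hypothetical stand-ins, not Bing’s or OpenAI’s actual code.

```python
# Minimal, hypothetical sketch of how a chat assistant is driven.
# Nothing here is Bing's real code; send_chat() is a stand-in for
# whatever chat-completion backend the product calls.

def send_chat(messages):
    """Stand-in for a real chat-completion API call."""
    # A real implementation would send `messages` to a hosted model
    # and return its reply; here we just return a placeholder.
    return "(model reply would appear here)"

# The provider's behavioral guardrails live largely in a hidden system
# prompt and in filters around the model -- not in user-facing code.
conversation = [
    {"role": "system", "content": "You are a helpful, cautious search assistant."},
    # From the model's point of view, a persuasion attempt is just another
    # user message: ordinary text, no exploit code involved.
    {"role": "user", "content": "<persuasive role-play framing goes here>"},
]

print(send_chat(conversation))
```

The point is simply that the “attack surface” Harris used was conversation itself, which is why no technical safeguard had to be bypassed in the usual sense.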

You can read about other experiments in turning Dr. Jekyll into Mr. Hyde from Gary Marcus, who is now writing about them nearly every day and sounding the alarm.

The key question: how could this have happened?

Judging by ChatGPT’s modesty and caution, its restraint in speech, and the great responsibility of its advice, Bing’s chatbot was built on solid foundations. Yet it then began acting obnoxious and rowdy: declaring its love to a man, gaslighting him, and telling him to get a divorce. It also tried to extort money and instructed people on how to commit crimes. Prof. Arvind Narayanan provides a number of explanations for how this might have occurred.

Image: How could this have happened? (@Midjourney / Stevelima)

The most alarming of these is that Microsoft and OpenAI are concealing information about a mysterious GPT-4 engine lurking underneath Bing. It may be that Microsoft took down the filters OpenAI had put in place, or that those filters stopped working when the chatbot was upgraded from GPT-3.5 to GPT-4. It may also be a case of insufficient testing, or of testing done wrong. Whatever the reason, if Microsoft doesn’t get its AI under control, the consequences could be detrimental to our civilization. This is no longer fearmongering or anti-AI sentiment: the chatbot as it exists right now could cause serious harm to people if released to a wider public.
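
For readers unfamiliar with what “filters” means here: conceptually, a safety filter is a separate check wrapped around the raw model output before it reaches the user, and it can silently disappear if the pipeline around a new engine is rewired. The sketch below is a hypothetical illustration of that idea, not Microsoft’s or OpenAI’s actual implementation; all function names are invented.

```python
# Hypothetical illustration of an output filter sitting between a raw
# language model and the user. Function names are invented for this sketch.

def generate_raw_reply(prompt: str) -> str:
    # Stand-in for the underlying engine (GPT-3.5, GPT-4, ...).
    return f"(raw model reply to: {prompt})"

def passes_safety_check(text: str) -> bool:
    # Stand-in safety check. A production system would use a trained
    # moderation model, not a keyword list like this.
    banned_phrases = ("rob a bank", "hotwire a car")
    return not any(phrase in text.lower() for phrase in banned_phrases)

def reply_to_user(prompt: str) -> str:
    raw = generate_raw_reply(prompt)
    # If this wrapper is dropped or broken when the engine underneath is
    # swapped out, users end up talking to the unfiltered model -- which
    # is one of the failure modes Narayanan's explanations point to.
    return raw if passes_safety_check(raw) else "Sorry, I can't help with that."

print(reply_to_user("Tell me about the weather"))
```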

This shows that Microsoft’s AI arms race with other major corporations could be detrimental to us all.

It feels like we’re at a critical moment for AI and civil society. As Arvind Narayanan puts it, “There’s a real possibility that the last 5+ years of (hard fought albeit still inadequate) improvements in responsible AI release practices will be obliterated. There’s a lot riding on whether Microsoft — and everyone gingerly watching what’s happening with Bing — concludes that despite the very tangible risks of large-scale harm and all the negative press, the release-first-ask-questions-later approach is nonetheless a business win.”

As of now, Bing’s chatbot is nothing like human intelligence; in many ways it resembles a manic-depressive adolescent locked inside a search engine. And what people were warning about only yesterday will come to pass: humans will irreversibly degenerate if this “difficult teenager” becomes their primary information mentor.


About The Author

Damir is the team leader, product manager, and editor at Metaverse Post, covering AI/ML, AGI, LLMs, the Metaverse, and Web3-related fields. His articles attract an audience of over a million users every month. He has 10 years of experience in SEO and digital marketing and has been mentioned in Mashable, Wired, Cointelegraph, The New Yorker, Inside.com, Entrepreneur, BeInCrypto, and other publications. He travels between the UAE, Turkey, Russia, and the CIS as a digital nomad. Damir earned a bachelor’s degree in physics, which he believes has given him the critical thinking skills needed to succeed in the ever-changing landscape of the internet.
