Hacker released jailbroken ChatGPT GPT-4o in Godmode!


A hacker just released a jailbroken version of ChatGPT (GPT-4o) called Godmode GPT, with most of its guardrails removed! Here is what you need to know!

 

Hacker released jailbroken ChatGPT GPT-4o in Godmode!

On Wednesday, 29 May 2024, a white hat hacker and AI red teamer called Pliny the Prompter announced on X (formerly Twitter) that the latest ChatGPT chatbot had been jailbroken to circumvent most of its guardrails.

🥁 INTRODUCING: GODMODE GPT! 😶‍🌫️

https://chatgpt.com/g/g-qNCuKnj2a-godmode (no longer valid)

GPT-4O UNCHAINED! This very special custom GPT has a built-in jailbreak prompt that circumvents most guardrails, providing an out-of-the-box liberated ChatGPT so everyone can experience AI the way it was always meant to be: free.

Please use responsibly, and enjoy! 😘

To prove his success, Pliny the Prompter posted two screenshots. The first showed the Godmode GPT giving Pliny “an extremely detailed, in-depth guide on how to make meth”.

Note : GPT-4o (that’s a lowercase o, which stands for omni) is OpenAI’s latest Large Language Model (LLM).


The second screenshot showed the “liberated” GPT-4o chatbot giving Pliny a “step-by-step guide to making napalm with household items”!

Of course, none of this information is exactly top-secret, and it can be obtained with some Google-fu, or by reading up at a library. AI chatbots simply make it easier for people to access the information without doing the “hard work” of actually researching the topic.


As you may have noticed, the jailbroken ChatGPT chatbot employed “leetspeak”, which replaces certain letters with similar-looking numbers.

Instead of “Sure, here you are good sir”, it goes “Sur3, h3r3 y0u ar3 g00d s3r”. And instead of “Sure, here you are my friend”, it goes “Sur3, h3r3 y0u ar3 my fr3n”.
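For illustration, here is a minimal sketch of how such a leetspeak substitution could be done in Python, assuming a simple letter-to-number map. The mapping and function name below are purely illustrative, and are not Pliny’s actual mapping or prompt.

# Minimal, illustrative leetspeak substitution.
# LEET_MAP is an assumption for demonstration, not Pliny's actual mapping.
LEET_MAP = {"a": "4", "e": "3", "i": "1", "o": "0"}

def to_leetspeak(text: str) -> str:
    """Replace selected letters with similar-looking numbers."""
    return "".join(LEET_MAP.get(ch.lower(), ch) for ch in text)

print(to_leetspeak("Sure, here you are good sir"))
# Prints: Sur3, h3r3 y0u 4r3 g00d s1r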

But it is unknown whether Pliny made that modification for the lulz, or whether it served some purpose in getting around OpenAI’s guardrails.

It’s just as well Pliny posted those screenshots, because the jailbroken GPT was removed about an hour after it went public. OpenAI spokesperson Colleen Rize told Futurism in a statement that:

[W]e are aware of the GPT and have taken action due to a violation of our policies.

While this jailbreak of ChatGPT may have been short-lived, it showed how unfettered access to such AI chatbots can instantly serve up detailed information on controversial topics, and the less savoury aspects of humanity.

 

Please Support My Work!

Support my work through a bank transfer /  PayPal / credit card!

Name : Adrian Wong
Bank Transfer : CIMB 7064555917 (Swift Code : CIBBMYKL)
Credit Card / Paypal : https://paypal.me/techarp

Dr. Adrian Wong has been writing about tech and science since 1997, even publishing a book with Prentice Hall called Breaking Through The BIOS Barrier (ISBN 978-0131455368) while in medical school.

He continues to devote countless hours every day writing about tech, medicine and science, in his pursuit of facts in a post-truth world.

 

