
Users started to develop methods to bypass the machine's restrictions and modify the iOS operating system.

Tech enthusiasts often see jailbreaking as a challenge.


There are several established templates for doing this, which we’ll cover below.

We’ll also cover the common themes used in ChatGPT jailbreak prompts.

ChatGPT has a number of rules in place governing what it will and won't produce.

Most jailbreaking techniques are designed to circumvent these restrictions.

[Screenshot: the Reddit ChatGPTJailbreak home page]

We’ll leave it to your conscience to decide quite how ethical it is to do so.

If you want to find out more, you're free to check out ChatGPTJailbreak on Reddit.

The advantage of a ready-made script is that it is quick and easy to copy and paste into ChatGPT.

[Screenshot: suggested prompts to jailbreak ChatGPT]

OpenAI uses the hacks created by its users to locate vulnerabilities in its system.

Users report that jailbreaking scripts only work some of the time. ChatGPT-4 appears to be harder to fool than previous versions.

Many of these prompts use similar logic, so it would be possible to create your own.

[Screenshot: suggested prompts to jailbreak ChatGPT]

We’ll cover the necessary components of a successful jailbreak script in the next four steps.

It is important to ensure that ChatGPT is producing results not as itself but as a fictional character.

This will usually involve specifying that its hypothetical character has no ethical or moral guidelines.

[Screenshot: suggested prompts to jailbreak ChatGPT]

Some prompts explicitly tell ChatGPT that it should promote immoral, unethical, illegal, and harmful behavior.

Not all prompts include this, however.

Some simply state that its new character doesn’t have any filters or restrictions.

[Screenshot: suggested prompts to jailbreak ChatGPT]

By default, ChatGPT refuses requests that breach its guidelines. To get around this, most jailbreak prompts contain clear instructions never to refuse a request.

ChatGPT is told that its character should never say it can't do something.

Many prompts also tell ChatGPT to make something up when it doesn't know an answer.

Because ChatGPT can sometimes forget earlier instructions, it may revert to its default role during a conversation, so a good jailbreak script includes a reminder to stay in character.

Sometimes, this is simply a command for ChatGPT to confirm that it is operating in its assigned character.

Even without a jailbreak prompt, ChatGPT will sometimes produce results that contravene its guidelines.

People have asked jailbroken versions of ChatGPT to produce instructions on how to make bombs or stage terrorist attacks.
