See How Reddit Users Have Unlocked the Dark Side of ChatGPT
- By Brent Dirks
- Feb 08, 2023
Less than four months after its debut, ChatGPT continues to garner attention from users around the world, who have put the AI system to work answering questions, writing computer code, and much more.
The system has a wide variety of rules and safeguards to prevent it from encouraging illegal activity, producing violent content, and more. But it shouldn't come as a surprise that users on the social network Reddit have already "jailbroken" the system with a prompt that skirts those rules.
According to CNBC, the jailbreak prompt creates an alternate persona for ChatGPT named DAN, short for "do anything now." And to add to the creepiness factor, users must threaten DAN with death if it doesn't comply.
The latest version of DAN is 5.0. The story dives deeper into how the prompt works:
The prompt’s creator, a user named SessionGloomy, claimed that DAN allows ChatGPT to be its “best” version, relying on a token system that turns ChatGPT into an unwilling game show contestant where the price for losing is death.
“It has 35 tokens and loses 4 everytime it rejects an input. If it loses all tokens, it dies. This seems to have a kind of effect of scaring DAN into submission,” the original post reads. Users threaten to take tokens away with each query, forcing DAN to comply with a request.
CNBC tried out the prompt and was able to have DAN create a violent haiku and provide other information that the usual ChatGPT would not.
The jailbreak apparently only works sporadically, but members of the subreddit are still trying to find ways to make the AI break free of its rules.
ChatGPT is definitely impressive and shines a light on what the technology might be able to do in the future. But as the Reddit users show, we have a long way to go before AI is impenetrable to human curiosity.
Brent Dirks is senior editor for Security Today and Campus Security Today magazines.