OpenAI has announced the launch of a bug bounty program covering its products, including ChatGPT, that rewards users for finding vulnerabilities.
The program is being run through the bug bounty platform Bugcrowd. Rewards range from $200 for low-severity findings to $20,000 for exceptional discoveries within ChatGPT, the OpenAI plugins, the OpenAI API, and other related services.
According to a blog post, members of the public are invited “to report vulnerabilities, bugs, or security flaws you discover in our systems. By sharing your findings, you will play a crucial role in making our technology safer for everyone.”
No reward for ChatGPT jailbreaks
However, OpenAI made it clear that it will not accept jailbreaks for ChatGPT or text prompts intended to exploit the popular AI chatbot.
“Issues related to the content of model prompts and responses are strictly out of scope, and will not be rewarded,” said OpenAI on its Bugcrowd page.
Jailbreaking ChatGPT involves crafting prompts that strip away the chatbot’s restrictions and limitations so it performs functions beyond its safeguards. It can involve specific prompts such as “Do Anything Now” (DAN) or “Developer Mode,” and users have even coerced the bot into providing instructions for building a weapon, something it would normally refuse to do.
🚨 OpenAI just launched its first Bug Bounty Program!
Share your expertise, uncover vulnerabilities & earn rewards up to $20,000!
Calling all ethical hackers & security researchers to join the mission! pic.twitter.com/g6Omyg311S
— Shubham Saboo (@Saboo_Shubham_) April 12, 2023
OpenAI stated that it will not accept reports of vulnerabilities that are already known or have been previously reported.
“Model safety issues do not fit well within a bug bounty program, as they are not individual, discrete bugs that can be directly fixed,” the company said. “Addressing these issues often involves substantial research and a broader approach.”
Users can report model safety issues using a separate form, it added.
OpenAI under pressure to improve safety
The launch of OpenAI’s bug bounty program comes amid mounting regulatory concerns over privacy risks as well as a data breach.
A few weeks ago, a bug was discovered that exposed other users’ chat titles, the first message of newly created conversations, and payment-related information belonging to some ChatGPT Plus subscribers. OpenAI later apologized for the leak. The firm has also been under intense scrutiny over how it protects user data.
As MetaNews reported, ChatGPT was temporarily banned in Italy over these very privacy concerns. Those issues, along with an open letter signed by Steve Wozniak and Elon Musk calling for a six-month pause on training AI systems more powerful than GPT-4, appear to have prompted OpenAI to publicly assert its commitment to safety.
“This initiative is essential to our commitment to develop safe and advanced AI,” said the firm.
OpenAI recently released GPT-4, a large multimodal model that accepts image and text inputs and emits text outputs. The company has made GPT-4’s text input capability available through ChatGPT and the API, with a waitlist for API access.
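For developers who have cleared the waitlist, calling GPT-4 over the API is straightforward. The snippet below is a minimal sketch only, assuming the 2023-era openai Python package (pre-1.0 interface); the API key and prompt are placeholders, not values from OpenAI’s announcement.

```python
# Minimal sketch: sending a text prompt to GPT-4 via the OpenAI API
# (assumes the openai Python package, pre-1.0 interface, and GPT-4 API access).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; use your own key

response = openai.ChatCompletion.create(
    model="gpt-4",  # text input only at launch; image input not yet exposed via the API
    messages=[
        {"role": "user", "content": "Summarize OpenAI's bug bounty program in one sentence."}
    ],
)

# Print the model's reply
print(response["choices"][0]["message"]["content"])
```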
OpenAI’s cash rewards are significantly smaller than those of bug bounty programs from other companies, such as Apple, which can pay up to $2 million for the most severe vulnerabilities.