OpenAI, the company behind the massively popular ChatGPT AI chatbot, has launched a bug bounty program in an attempt to ensure its systems are “safe and secure.”
To that end, it has partnered with the crowdsourced security platform Bugcrowd, allowing independent researchers to report vulnerabilities discovered in its products in exchange for rewards ranging from “$200 for low-severity findings to up to $20,000 for exceptional discoveries.”
It’s worth noting that the program does not cover model safety or hallucination issues, such as cases where the chatbot is prompted to generate malicious code or other faulty outputs. The company noted that “addressing these issues often involves substantial research and a broader approach.”
The development comes shortly after OpenAI patched account takeover and data exposure flaws in ChatGPT, issues that prompted Italian data protection regulators to take a closer look at the service.
OpenAI has also been ordered to implement an age verification system by September 30, 2023, to filter out users below the age of 13 and to have provisions in place to seek parental consent for users aged 13 to 18. The company has until May 31, 2023, to submit a plan for the age-gating system.
As part of efforts to let individuals exercise their data rights, both users and non-users of the service can request the “rectification of their personal data” in cases where it’s incorrectly generated by the service, or alternatively, have the data erased if corrections are technically infeasible.
Non-users, per the Garante, Italy’s data protection authority, should further be provided with easily accessible tools to object to their personal data being processed by OpenAI’s algorithms. The company is also expected to run an advertising campaign by May 15, 2023, to “inform individuals on use of their personal data for training algorithms.”
Source: https://thehackernews.com/