In response to ongoing concerns surrounding generative AI, Google has unveiled an expansion of its Vulnerability Rewards Program (VRP) aimed specifically at AI-related threats and potential avenues for malicious use. As part of the expansion, the company has published updated guidelines outlining which findings qualify for rewards and which do not. For instance, a vulnerability that allows the extraction of private, sensitive training data is in scope, while one that only exposes public, non-sensitive data is not. Last year, Google paid security researchers $12 million for finding software vulnerabilities.
Google has emphasized that generative AI raises security concerns distinct from those of traditional software, such as model manipulation and the risk of unfair bias, and that the new guidelines were written with those differences in mind. In a statement, the company said it expects the expanded VRP to incentivize research into AI safety and security, bring potential issues to light, and ultimately make AI safer for everyone. Google is also expanding its open source security work to make information about the security of the AI supply chain universally discoverable and verifiable.
Earlier this year, several AI companies, including Google, met at the White House and committed to supporting greater discovery and reporting of vulnerabilities in AI systems. The VRP expansion also arrives just ahead of a sweeping executive order from President Biden, expected on Monday, October 30, which is reported to establish strict assessments and requirements that AI models must meet before government agencies can use them.