Microsoft’s New Initiative: Inviting Researchers to Probe Bing’s AI for Vulnerabilities

Oct 16, 2023

In a bold move, Microsoft has launched a bounty program targeting its artificial intelligence (AI) systems. The tech giant is inviting researchers worldwide to find vulnerabilities in its AI-powered Bing service, offering cash rewards of up to $15,000.

As part of this initiative, Microsoft is encouraging security researchers around the world to delve into the AI-powered Bing experience and uncover hidden flaws. According to the AI bounty program’s website, the company is accepting submissions that reveal vulnerabilities in Bing’s AI system, with eligible entries earning between $2,000 and $15,000.

However, the program’s scope isn’t limited to Bing alone. Microsoft is also inviting researchers to scrutinize other AI integrations, including those within Microsoft Edge, the Microsoft Start app, and the Skype Mobile app. Vulnerabilities discovered in these integrations are likewise eligible for bounty rewards.

The bounty program’s primary objective is to identify significant vulnerabilities in the AI-powered Bing experience that could compromise customer security, ensuring the robustness and integrity of the AI system while encouraging research and innovation in the field.

Applicants must be at least 14 years old to participate in the bounty program; minors are required to secure permission from a legal guardian beforehand.

Reflecting on the past year, Microsoft reported in a blog post that it had awarded $13.8 million in bounty rewards to 345 security researchers across the globe. These researchers had collectively found 1,180 vulnerabilities spread across 17 distinct bug bounty programs.

Last year, as part of its bug bounty efforts, Microsoft expanded its coverage to include Exchange on-premises, SharePoint, and Skype for Business. The company also increased the maximum rewards for reporting high-impact security flaws via the Microsoft 365 platform.

This new initiative underscores Microsoft’s commitment to the security and reliability of its AI systems. By opening its products to external scrutiny, the company not only hardens its AI against real-world attacks but also fosters a culture of security research around emerging AI technologies.