Microsoft engineer claims firm's AI tool generates sexual, violent images: Report

By Team Asianet Newsable  |  First Published Mar 7, 2024, 1:39 PM IST

Shane Jones, who has worked at Microsoft for six years, has been testing the company’s AI image generator in his free time and says he is disturbed by his findings. He has warned Microsoft about the sexual and violent content that the product, Copilot Designer, is creating, but says the company isn’t taking appropriate action.


Shane Jones, an AI developer at Microsoft, expressed his concerns in a letter on Wednesday, alleging that there are insufficient safeguards to prevent the company's AI image generator, Copilot Designer, from producing offensive content, such as graphic or lewd images. Jones said he had previously warned Microsoft management but saw no action, prompting him to send the letter to the Federal Trade Commission and Microsoft's board.

In the letter, which he posted on LinkedIn, Jones wrote, "Inside the company, there are systemic issues where the product is creating harmful images that could be offensive and inappropriate for consumers." He lists his title as "principal software engineering manager".


A Microsoft representative rejected the accusation that the company had ignored safety concerns, according to The Guardian, emphasizing that it has "robust internal reporting channels" for addressing issues related to generative AI tools. Shane Jones has not yet responded to the spokesperson's statement.

The letter's main complaint concerns Microsoft's Copilot Designer, an image-generation tool built on OpenAI's DALL-E 3 system, which creates visuals from text prompts. Alongside the rapid development of such tools, concerns have grown about the potential misuse of AI to spread disinformation and generate harmful content that promotes misogyny, racism, and violence.


Microsoft denied the allegations, saying it has dedicated teams assigned to assess safety risks associated with its AI capabilities. The company also said it had set up meetings between Jones and its Office of Responsible AI, indicating that it is prepared to address his concerns internally.

Microsoft introduced Copilot, its "AI companion," last year and has since heavily marketed it as a game-changing way to incorporate AI technologies into both creative and business endeavors.
