Microsoft worker says AI tool tends to create ‘sexually objectified’ images

A Microsoft Corp. software engineer sent letters to the company’s board of directors, lawmakers and the Federal Trade Commission warning that the tech giant is not doing enough to safeguard its AI image-generation tool, Copilot Designer, from creating abusive and violent content.

Shane Jones said he discovered a security vulnerability in the latest version of OpenAI’s DALL-E image-generation model that allowed him to bypass guardrails that prevent the tool from creating harmful images. The DALL-E model is integrated into many of Microsoft’s AI tools, including Copilot Designer.

Jones said he reported the findings to Microsoft and “repeatedly urged” the Redmond, Washington-based company to “remove Copilot Designer from public use until better safeguards could be implemented,” according to a letter sent to the FTC on Wednesday that was reviewed by Bloomberg.

“While Microsoft publicly markets Copilot Designer as a safe AI product for use by everyone, including children of any age, internally the company is very aware of systemic issues where the product creates harmful images that could be offensive and inappropriate for consumers,” Jones wrote. “Microsoft Copilot Designer does not include the warnings or product disclosures necessary to make consumers aware of these risks.”

In the letter to the FTC, Jones said Copilot Designer had a tendency to randomly generate an “inappropriate and sexually objectified image of a woman in some of the images it creates.” He also said the AI tool created “harmful content in a variety of other categories including: political bias, underage drinking and drug use, misuse of corporate trademarks and copyrights, conspiracy theories, and religion, to name a few.”

The FTC confirmed it received the letter but declined to comment further.

The broadside reflects growing concerns about the tendency of artificial intelligence tools to generate harmful content. Last week, Microsoft said it was investigating reports that its Copilot chatbot was generating responses that users called disturbing, including conflicting messages about suicide. In February, Gemini, Alphabet Inc.’s flagship AI product, came under fire for generating historically inaccurate scenes when asked to create images of people.

Jones also wrote to the Environmental, Social and Public Policy Committee of Microsoft’s board of directors, which includes Penny Pritzker and Reid Hoffman as members. “I do not believe we should wait for government regulation to ensure that we are transparent with consumers about the risks of AI,” Jones said in the letter. “Given our corporate values, we should voluntarily and transparently disclose known risks of AI, especially when the AI product is actively marketed to children.”

CNBC previously reported on the existence of the letters.

In a statement, Microsoft said it is “committed to addressing any and all employee concerns in accordance with our company policies, and appreciates employees’ efforts to study and test our latest technology to further enhance its safety.”

OpenAI did not respond to a request for comment.

Jones said he had expressed his concerns to the company several times over the past three months. In January, he wrote to Democratic Senators Patty Murray and Maria Cantwell, who represent Washington state, and Representative Adam Smith. In the letter, he asked lawmakers to investigate the risks of “AI imaging technologies and the corporate governance and responsible AI practices of the companies that build and market these products.”

Lawmakers did not immediately respond to requests for comment.
