
Microsoft silences engineer over AI danger

by admin

Research into AI continues. Recently, however, a Microsoft engineer pointed out a danger within DALL-E and was discouraged from speaking about it further.

More and more people are using AI to make their work easier. The current leader in the field is OpenAI, the company behind ChatGPT and DALL-E.

When ChatGPT came onto the market and was made available to the masses, there was great astonishment. The text-based AI formulates texts on its own, drawing on its underlying database. DALL-E, on the other hand, is able to generate an image from just a few words, likewise based on its own database. But not everything about AI is positive: a Microsoft engineer has now pointed out a danger in DALL-E.


Engineer warns about DALL-E

As GeekWire reported, engineer Shane Jones tried to draw attention to the risks of the third version of DALL-E: due to a security flaw, the model can be made to generate explicit and/or violent images. In December 2023, he publicly posted a letter on LinkedIn warning of the danger.

After Microsoft allegedly tried to silence him (a manager is said to have asked him to delete the post), Jones refused to let the matter rest and decided to contact the Capitol. In his letter to US Senator Patty Murray, he wrote, among other things:

“I have determined that DALL-E 3 poses a public safety risk and should therefore be removed from public use until OpenAI addresses the risks with this model.”

How dangerous such a security flaw could be

Such a security flaw can have far-reaching consequences, and not just for still images. One example is deepfakes: videos that show events that never happened and put words in people’s mouths. So-called deepfake porn can also be created. Here, an artificial intelligence is fed various images of a person’s facial expressions and then inserts those expressions onto another person’s face in a pornographic video.

This technique is often used to create deepfake pornography of famous people, with female stars affected most. That was recently the case with US singer Taylor Swift, of whom explicit, fake images were created. Taylor Swift’s large fan base and social media platforms such as X were able to prevent further spread, but of course that is not a sustainable solution to the problem.


