Microsoft warned that US adversaries are beginning to use artificial intelligence in offensive cyber operations

Microsoft: Adversaries Using Generative AI for Cyber Operations

Tech giant Microsoft has released a report stating that U.S. adversaries, including Iran and North Korea, are beginning to use its generative artificial intelligence (AI) to organize offensive cyber operations. The concern is that such technology could help adversarial governments breach networks and spread misinformation.

Working with its business partner OpenAI, Microsoft identified and disrupted accounts that were using generative AI for malicious purposes. The company emphasized the importance of publicly exposing these operations to raise awareness of the threats posed by large language models.

While cybersecurity companies have long used machine learning for defense, the emergence of large language models such as OpenAI's ChatGPT has intensified the cat-and-mouse game between defenders and attackers. Microsoft, which has invested heavily in OpenAI, noted that the stakes are especially high in a year when more than 50 countries hold elections and misinformation could have a significant impact.

The report provided specific examples of how adversaries have exploited generative AI, including North Korea's use of the models to target foreign think tanks, Iran's use of AI to enhance social engineering and phishing email campaigns, and Russia's use of the models to research satellite and radar technologies.

OpenAI, maker of the GPT-4 model behind its chatbot, acknowledged that making advanced AI technologies publicly available carries risks, but noted that their current capabilities for malicious cyber tasks remain limited.

Cybersecurity researchers are urging that AI be built with security in mind, as the misuse of large language models poses a significant threat to national security.


Critics have argued that the rushed release of advanced AI technologies without sufficient security measures has contributed to the problem. They have called for a more responsible approach to developing and deploying large language models to avoid potential security risks.

As AI and large language models continue to advance, there is growing concern that they could become some of the most powerful weapons in the military arsenal of every nation-state. Microsoft’s alert about the misuse of generative AI technologies indicates the need for increased vigilance and proactive measures to address potential security threats.

(Source: AP)
