The Rise of AI-Infused Hacker Tactics: Exploiting Artificial Intelligence in Cyberattacks
The Use of Artificial Intelligence in Cyberattacks: A Microsoft Perspective
Microsoft has highlighted the increasing use of artificial intelligence by threat actors to accelerate cyberattacks across all stages of the attack lifecycle. According to a recent Microsoft Threat Intelligence report, attackers are utilizing generative AI tools for a variety of malicious activities, including reconnaissance, phishing, malware creation, and post-compromise actions.
Threat actors are employing AI to draft phishing emails, translate content, summarize stolen data, debug malware, and assist with scripting and infrastructure configuration. Microsoft notes that most malicious use of AI centers on language models producing text, code, or media. In this role, AI acts as a force multiplier: it lowers technical barriers and speeds up execution, while human operators retain control over objectives and decisions.
Several threat groups, including North Korean actors Jasper Sleet and Coral Sleet, have been observed incorporating AI into their cyberattacks. These actors use AI in remote IT worker schemes to generate realistic identities, resumes, and communications to gain employment at Western companies and maintain access once hired.
Jasper Sleet leverages generative AI platforms to create fraudulent digital personas by generating culturally appropriate name lists and email address formats. The group also uses AI to extract and summarize required skills from job postings for software development and IT-related roles, tailoring fake identities to specific job requirements.
Threat actors are also using AI to develop malware and build infrastructure, relying on AI coding tools to generate and refine malicious code, troubleshoot errors, and port malware components to other programming languages. Some experiments even show early signs of AI-enabled malware that dynamically generates scripts or modifies its behavior at runtime.
When AI safeguards block malicious requests, threat actors resort to jailbreaking techniques to trick language models into generating malicious code or content. Microsoft advises organizations to treat AI-powered attacks as insider risks: detect abnormal credential use, secure identity systems against phishing, and protect AI systems from becoming targets in future attacks.
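To make the "detect abnormal credential use" recommendation concrete, here is a minimal sketch of a baseline-deviation check. It is a hypothetical illustration, not anything from the Microsoft report: it assumes sign-in events are available as simple dictionaries with `user`, `country`, and `device` fields, and flags sign-ins where both the country and the device fall outside a user's historical baseline.

```python
from collections import defaultdict

def build_baseline(events):
    """Map each user to the set of countries and devices seen in past sign-ins."""
    baseline = defaultdict(lambda: {"countries": set(), "devices": set()})
    for e in events:
        baseline[e["user"]]["countries"].add(e["country"])
        baseline[e["user"]]["devices"].add(e["device"])
    return baseline

def flag_anomalies(baseline, new_events):
    """Flag sign-ins from a country AND a device the user has never used before."""
    flagged = []
    for e in new_events:
        seen = baseline.get(e["user"])
        if seen is None:
            flagged.append(e)  # first-ever sign-in for this user: review manually
            continue
        if (e["country"] not in seen["countries"]
                and e["device"] not in seen["devices"]):
            flagged.append(e)
    return flagged

# Illustrative data (entirely made up for this sketch).
history = [
    {"user": "alice", "country": "US", "device": "laptop-01"},
    {"user": "alice", "country": "US", "device": "phone-07"},
]
recent = [
    {"user": "alice", "country": "US", "device": "laptop-01"},   # matches baseline
    {"user": "alice", "country": "KP", "device": "unknown-vm"},  # deviates on both axes
]
print(flag_anomalies(build_baseline(history), recent))
```

A production system would of course use richer signals (sign-in times, ASN, impossible-travel distance, token anomalies) and a real identity provider's logs; the point here is only the shape of the baseline-versus-new-activity comparison.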
In conclusion, the use of artificial intelligence in cyberattacks is a growing trend that poses significant challenges for defenders. By understanding how threat actors leverage AI tools, organizations can better prepare for and defend against these advanced attacks.