This Sneaky AI Malware Is Hiding In Plain Sight On Your Computer
By 813 Staff

The Hacker News (@TheHackersNews) reported on March 12, 2026, that a new, AI-generated strain of malware is hiding in plain sight on victims' computers.
Source: https://x.com/TheHackersNews/status/2032140456352694636
For a mid-level network administrator at a regional hospital chain, the alert that popped up on his screen last Tuesday was just another false positive. He’d seen dozens of PowerShell scripts that week, most of them benign internal tools. He almost approved it. That single, nearly automatic click would have unleashed “Slopoly,” a new and unusually evasive backdoor, across the entire healthcare system’s network. He paused, and that pause is now a case study in the latest, most insidious wave of AI-powered cyberattacks. According to a report from IBM’s X-Force unit, detailed by @TheHackersNews, the administrator’s hesitation came just as the security team identified the script as a weaponized tool from the threat actor group Hive0163, notable for its entire codebase being generated by artificial intelligence.
Internal documents from IBM’s investigation show that Slopoly represents a significant pivot in offensive cyber operations. Unlike traditional malware, which is hand-coded and contains recognizable signatures, this PowerShell backdoor was almost certainly authored by large language models. The code is modular, efficient, and deliberately written to avoid the classic patterns that trigger security software. Analysts who examined the code say its variable naming conventions and structure lack the stylistic fingerprints of a human programmer, instead exhibiting the oddly formal yet functionally sterile output of an AI. This allows it to slip past defenses that scan for known malicious code snippets, posing a fundamental challenge to detection paradigms built on decades of human-authored threat intelligence.
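To see why novel, machine-generated code defeats snippet matching, consider a toy signature scanner. This is an illustrative sketch, not IBM's or any vendor's actual detection logic: it hashes normalized lines of a script against a database of known-malicious snippets, so a freshly rewritten variant of the same behavior produces no match. The PowerShell strings are invented examples.

```python
import hashlib

# Toy signature scanner: hash each normalized line of a script and
# look it up in a set of known-malicious snippet hashes.
# Illustrative only -- real engines use far richer signatures.

KNOWN_BAD_HASHES = {
    hashlib.sha256(b"iex (new-object net.webclient).downloadstring($u)").hexdigest(),
}

def normalize(line: str) -> str:
    # Lowercase and collapse whitespace so trivial reformatting doesn't evade.
    return " ".join(line.lower().split())

def matches_signature(script: str) -> bool:
    return any(
        hashlib.sha256(normalize(line).encode()).hexdigest() in KNOWN_BAD_HASHES
        for line in script.splitlines()
    )

# A known, hand-copied dropper line is caught...
old_variant = "IEX (New-Object Net.WebClient).DownloadString($u)"
# ...but a freshly generated rewrite of the same behavior is not.
new_variant = "Invoke-Expression ([Net.WebClient]::new().DownloadString($url))"

print(matches_signature(old_variant))  # True
print(matches_signature(new_variant))  # False
```

The point of the sketch: exact-match defenses depend on attackers reusing code, and a model that regenerates functionally equivalent code on every campaign removes that reuse.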
The practical consequence is a dramatic lowering of the barrier to entry for sophisticated attacks. Groups like Hive0163, which may not have possessed elite coding skills, can now use generative AI to produce custom, polymorphic malware on demand. The arrival of such tools in the wild has upended defenders’ assumptions: security teams must now treat every piece of scripted automation as suspect, legitimate or not, paralyzing IT departments that rely on PowerShell for vital system management. The result is a slower, more paranoid operational tempo for every company that depends on automated scripts, which is to say, virtually all of them.
What happens next is an arms race in the AI layer. Security firms are urgently retraining their own AI classifiers to recognize the subtle hallmarks of machine-generated malicious code, a task complicated by the fact that benign administrative scripts are also increasingly AI-assisted. The timeline for widespread adoption of these countermeasures remains uncertain, leaving a critical gap. The major unknown is how quickly other threat groups will adopt and refine this technique. For now, the incident underscores a new reality: the human element in cybersecurity is no longer just about phishing a careless employee, but about out-thinking the machine logic of an opponent that never sleeps and iterates at the speed of a prompt.
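One way to picture what "retraining classifiers on the hallmarks of machine-generated code" might mean in practice is simple stylometry. The sketch below extracts two crude stylistic features from a script, identifier length and full-line comment density, and compares a terse, human-looking fragment against a formally named, AI-looking one. The feature choices and the sample PowerShell strings are assumptions for illustration, not X-Force's model.

```python
import re

# Hypothetical stylometric features a defender might feed a classifier.
# Illustrative assumptions only, not any vendor's actual feature set.

def style_features(script: str) -> dict:
    # PowerShell variables look like $name; capture the name part.
    idents = re.findall(r"\$([A-Za-z_]\w*)", script)
    lines = [l for l in script.splitlines() if l.strip()]
    # Count only full-line comments (lines whose first token is '#').
    comments = [l for l in lines if l.lstrip().startswith("#")]
    return {
        "mean_ident_len": sum(map(len, idents)) / max(len(idents), 1),
        "comment_ratio": len(comments) / max(len(lines), 1),
        "ident_count": len(idents),
    }

# Invented fragments: terse human shorthand vs. formal LLM-style naming.
human_style = "$tmp = 1\n$x2 = $tmp + 1\n"
llm_style = (
    "# Resolve the download target relative to the base directory.\n"
    "$downloadTargetPath = $baseDirectory\n"
    "$responsePayload = Invoke-Request $downloadTargetPath\n"
)

print(style_features(human_style)["mean_ident_len"])  # short, terse names
print(style_features(llm_style)["mean_ident_len"])    # long, formal names
```

A real classifier would learn hundreds of such signals from labeled corpora; the difficulty the article notes is that benign, AI-assisted admin scripts exhibit the same formal style, so these features alone cannot separate malicious from legitimate automation.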

