Microsoft raises the alarm over OpenClaw, a popular AI tool that could allow hackers to hijack corporate networks.
Microsoft Security has issued a critical warning to organizations worldwide: the viral AI tool known as OpenClaw is currently a major security "backdoor."
Although the open-source platform has become a favorite for tech enthusiasts, experts say it now poses a severe threat to corporate data.
Since its launch in November 2025, OpenClaw has exploded in popularity, gaining over 160,000 stars on GitHub.
However, Microsoft researchers point out that the tool essentially allows "untrusted code" to run on company systems. Because it processes commands using sensitive passwords, it is an easy target for cybercriminals.
The scale of the danger is alarming. Security scans have discovered more than 135,000 versions of OpenClaw left unprotected on the open internet.
Even more troubling is the discovery of over 800 malicious "skills" on the platform’s marketplace. These fake tools are designed to look helpful but actually steal passwords and sensitive company files.
Other security giants are also sounding the alarm. CrowdStrike warned that if employees install this tool on work laptops without proper security, it could act as a "powerful AI backdoor" for adversaries.
Bitdefender’s research was even more blunt, revealing that nearly 20% of the tools available for the platform contain hidden viruses.
Big tech companies are not taking chances. At Meta, executives have reportedly warned staff that using OpenClaw on work devices could lead to immediate dismissal.
Governments and universities have also started issuing urgent notices for staff to patch their systems.
If your organization uses this tool, Microsoft says you must isolate it. The recommendation is to run OpenClaw in a "safe zone" where it cannot touch the main company network.
The developers of OpenClaw have released a new update (version 2026.2.12) to fix several of these holes. However, they admitted that even with new defenses, the risk of "prompt injection" attacks where a hacker tricks the AI into breaking its own rules remains a tough problem to solve.

0 Comments