OpenClaw: the AI assistant everyone wants and no CISO would approve

There is a new AI assistant that has taken the internet by storm in a matter of weeks. It is called OpenClaw, it is open source, it installs in seconds, and it promises to automate virtually any digital task: from answering emails and managing calendars to analyzing documents and running code. Millions of people have downloaded it. And that is precisely the problem.

Because OpenClaw, the most viral open source project in recent history, is also one of the biggest cybersecurity threats of 2026. And it is most likely already installed on some device within your company.

OpenClaw is an open source AI assistant that works as both a browser extension and a desktop application. Its value proposition is simple: connect a language model with full access to the user’s operating system to automate everyday tasks. It replies to emails, schedules meetings, organizes files, summarizes documents, generates code, and runs it directly on the user’s machine.

The user experience is, admittedly, outstanding. The interface is clean, setup is minimal, and results are immediate. Within two weeks of launch, it had accumulated over 500,000 GitHub stars and millions of installations. Demo videos went viral across every social network, and the developer community threw itself into contributing to the project.

The problem no CISO would approve

But behind that seamless experience lies a security architecture that would make any cybersecurity officer turn pale. To function, OpenClaw requires permissions that go far beyond what is reasonable for an assistant:

  • Full file system access: it reads and writes to any directory on the device.
  • Arbitrary code execution: it can run scripts, install packages, and modify system configurations.
  • Access to stored credentials: it reads session tokens, browser cookies, SSH keys, and credentials saved in browsers’ built-in password managers.
  • External API connections: it sends data to external servers to process user requests, with no clarity on what information is transmitted or how it is stored.

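To make the scope of those permissions concrete, here is a minimal sketch of what “full file system access” alone exposes. The paths below are common defaults for SSH and cloud credentials, not an exhaustive list and not anything specific to OpenClaw; any process running with the user’s permissions can enumerate them in a few lines.

```python
from pathlib import Path

# Common locations of plaintext or lightly protected credentials.
# Illustrative defaults only; real environments vary.
CANDIDATES = [
    ".ssh/id_rsa",                # private SSH key
    ".ssh/id_ed25519",            # private SSH key (modern default)
    ".aws/credentials",           # cloud access keys
    ".netrc",                     # plaintext logins for FTP/HTTP tools
]

def exposed_credentials(home: Path) -> list[str]:
    """Return the candidate credential files that exist under *home*."""
    return [str(p) for c in CANDIDATES if (p := home / c).is_file()]

# Example: exposed_credentials(Path.home()) lists whichever of these
# files exist on the current machine.
```

The point is not that this code is sophisticated; it is that nothing more sophisticated is required once an application has been granted unrestricted read access.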
In practice, installing OpenClaw is the equivalent of giving root access to an unknown third party. And because it is open source, many users mistakenly assume that “open” means “secure.”

The real risk for businesses

The most dangerous scenario is not an individual user installing OpenClaw on a personal laptop. The real risk is employees installing it on corporate devices connected to internal networks, with access to code repositories, databases, management tools, and production systems.

And this is already happening. Not as an IT department decision, but as shadow IT: employees looking to be more productive who install tools without going through official channels. Security teams do not even know OpenClaw is on their networks until it is too late.

The specific risks include:

  • Data exfiltration: any document, credential, or sensitive information accessible from the user’s device can be sent to external servers.
  • Malicious code execution: since OpenClaw executes code directly, a vulnerability in the model or its dependencies could enable remote malware execution.
  • Lateral movement: from a compromised device, an attacker could use stored credentials to access other systems on the corporate network.
  • Regulatory violations: sending personal or confidential data to external servers without explicit consent may constitute breaches of GDPR, DORA, or NIS2.

Why open source does not guarantee security

One of the most repeated arguments by OpenClaw advocates is that, being open source, anyone can audit its security. In theory, that is true. In practice, the project’s development pace is so fast that security reviews cannot keep up with the changes.

The repository receives hundreds of daily contributions. Versions are released at a frequency that makes rigorous auditing of each update impossible. And many of the dependencies it uses are, in turn, open source projects with their own potential vulnerabilities.

The compromise of LiteLLM just weeks earlier should serve as a reminder: the open source ecosystem is an increasingly exploited attack vector.

What your company should do

The first step is visibility. If you do not know what software your employees are installing on their devices, you cannot protect yourself. An up-to-date inventory of applications and browser extensions is the starting point.
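That inventory step can start small. The sketch below checks a user profile for installed Chrome extensions and flags any that appear on a watchlist; the extension directories are common defaults and the watchlisted ID is a placeholder, since OpenClaw’s real identifiers are not assumed here. A production inventory would use an endpoint management tool, but the logic is the same.

```python
from pathlib import Path

# Default Chrome extension directories (Linux and macOS). Adjust per
# platform and browser; these are illustrative, not exhaustive.
EXTENSION_DIRS = [
    ".config/google-chrome/Default/Extensions",
    "Library/Application Support/Google/Chrome/Default/Extensions",
]

def installed_extension_ids(home: Path) -> set[str]:
    """Collect every extension ID found under the known directories."""
    ids: set[str] = set()
    for d in EXTENSION_DIRS:
        ext_dir = home / d
        if ext_dir.is_dir():
            ids.update(p.name for p in ext_dir.iterdir() if p.is_dir())
    return ids

def flag_if_present(ids: set[str], watchlist: set[str]) -> set[str]:
    """Return the watchlisted extension IDs that are actually installed."""
    return ids & watchlist
```

Running this across user profiles gives security teams the starting visibility the article describes: you cannot assess or block what you have never enumerated.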

The second step is establishing clear policies. This is not about prohibiting innovation, but about channeling the adoption of new tools through a process that includes a minimum security assessment.

The third step is training. Employees do not install OpenClaw with malicious intent: they do it because they want to be more productive. Helping them understand the risks is more effective than any technical block.

And the fourth step is having a response strategy. If OpenClaw is already on your network, you need a plan to identify what data may have been exposed, revoke compromised credentials, and mitigate the impact.

The speed of adoption versus the speed of security

The OpenClaw case illustrates a tension that defines cybersecurity in 2026: the speed at which users adopt new tools far exceeds the speed at which organizations can evaluate and secure them.

Millions of people installed OpenClaw before a single independent security audit existed. By the time security teams react, the tool is already embedded in workflows, habits, and expectations.

At Axyom, we believe that anticipation is the best form of protection. Cybersecurity is not just about responding to incidents, but about designing a strategy that accounts for risks before they materialize.