The Enemy Is Not at the Gates. It's in the Kernel.

Your Trust Model is Obsolete

For the last decade, security architecture has been defined by a simple, powerful idea: Zero Trust. The model correctly assumes the network is hostile and mandates that no actor, human or machine, is trusted by default. Every access request must be authenticated and authorized. This was a necessary and rational evolution from the failed perimeter model.

The complication is that a sophisticated adversary no longer needs to look like an outsider. The most effective attacks are now executed by processes that are already trusted—native system binaries, administrative scripts, and legitimate software components. These actors pass initial authentication checks with valid credentials. Their malicious activity begins long after the gateway has waved them through.

This raises a critical question: how do you defend your systems when the attacker wears the face of a legitimate process? When powershell.exe or wmic.exe is the weapon, conventional access control is irrelevant. To survive in an era where trusted processes are the new attack vectors, security must evolve from governing access to governing intent.

The Attacker is Now a Native Process

The premise that an organization's own tools are the primary weapons in modern breaches is not theoretical. It is a statistical reality documented in the field. This technique, "Living off the Land" (LotL), allows adversaries to operate below the detection threshold of traditional anti-malware solutions by forgoing custom binaries in favor of tools already present on the target system.

Data Point: 75% of Attacks are Malware-Free

According to CrowdStrike's 2024 Global Threat Report, three-quarters of all detected attacks were malware-free. The data is unambiguous: adversaries have shifted their tradecraft from deploying foreign executables to subverting the tools inherent to the operating system. They are not breaking in with battering rams; they are picking the locks with the master keys already hanging on the wall.

Data Point: The Subversion of Legitimate Binaries

This trend is corroborated by Bitdefender, which found that 84% of major attacks involved the use of LotL binaries. These are not obscure edge cases. They are the dominant methodology used by advanced persistent threats to achieve lateral movement, privilege escalation, and data exfiltration. The target is no longer just the user account; it is the execution authority of the system's most fundamental processes.

When a trusted process is the attack vector, security models based on binary signatures and initial authentication become brittle and ineffective. The fight has moved from the network gateway to the process table.

The evidence is clear: the threat is already inside. But our primary tool for observing it is dangerously fragile.

The Brittle Shell of Deep Inspection

The industry's response to the LotL threat has been the proliferation of Endpoint Detection and Response (EDR) and Cloud Workload Protection Platforms (CWPP). These solutions deploy privileged agents that hook deep into the kernel to monitor system calls, process execution, and file system activity. In theory, they provide the necessary visibility to detect the malicious use of legitimate tools.

In practice, they introduce a massive, centralized point of failure.

Case Study: The 19 July 2024 Global Outage

On 19 July 2024, a faulty content update to CrowdStrike's Falcon sensor triggered a global outage, rendering roughly 8.5 million Windows systems, by Microsoft's estimate, inoperable with a Blue Screen of Death (BSOD). Airlines, banks, broadcasters, and hospitals went dark. The trigger was a flawed update processed by a kernel-mode driver in a security product designed to increase system resilience.

This event is not an indictment of a single vendor. It is a systemic indictment of the architectural model. Granting a third-party agent privileged access to the kernel of every critical machine creates an unacceptable dependency and an immense blast radius. A single faulty logic path, whether introduced by error or malicious action, can trigger catastrophic failure at a global scale.

For CISOs: The operational risk of your security stack can now exceed the risk of the threats it is meant to mitigate. Resilience is not just about preventing breaches; it is about surviving the failure of your own controls.

The reliance on deep-packet and deep-process inspection agents creates a brittle shell. It provides a semblance of security, but its failure mode is absolute and its complexity invites disaster. A more resilient architecture is required.

If the agents of observation are too dangerous to scale, we must embed governance in the identity of the actors themselves.

The Mandate: From Governing Access to Governing Intent

The fundamental flaw in current systems is the conflation of authentication with authorization to act. A process like powershell.exe is authenticated as a legitimate Microsoft binary. That authentication grants it a wide range of implicit permissions, which an attacker then exploits.

A resilient system must separate these concepts. The next architectural leap is to bind a process's runtime behavior to its cryptographically verified identity and origin. This is the core of governing intent.

What is "Intent" in a System?

Intent is not an abstract psychological state. In a system architecture context, it is the verifiable alignment of a process's runtime behavior against a cryptographically bound policy. It answers not just "Who are you?" but "What are you supposed to do, based on who you are?"

A process's intent is defined by a policy that dictates its expected behavior: the network endpoints it may contact, the files it may access, the child processes it may spawn, and the system calls it is permitted to make. Deviations are not just logged; they are programmatically denied.
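As a concrete illustration, such a policy can be expressed as an allowlist of expected behaviors. The sketch below is hypothetical (the `IntentPolicy` class and its method names are invented for this article, not a real enforcement API), but it captures the core idea: anything outside the declared behavior is denied, not merely logged.

```python
from dataclasses import dataclass

# Hypothetical intent policy: an allowlist of the behaviors a workload
# is expected to exhibit. All names here are illustrative, not a real API.
@dataclass(frozen=True)
class IntentPolicy:
    allowed_endpoints: frozenset  # network destinations the process may contact
    allowed_paths: frozenset      # filesystem prefixes it may read or write
    allowed_children: frozenset   # child process images it may spawn

    def permits_spawn(self, child: str) -> bool:
        return child in self.allowed_children

    def permits_path(self, path: str) -> bool:
        return any(path.startswith(prefix) for prefix in self.allowed_paths)

# Policy for a hypothetical billing service: it talks to one database,
# reads its own config directory, and spawns no child processes.
policy = IntentPolicy(
    allowed_endpoints=frozenset({"db.internal:5432"}),
    allowed_paths=frozenset({"/etc/billing/"}),
    allowed_children=frozenset(),
)

# A deviation -- spawning powershell.exe -- is programmatically denied.
assert not policy.permits_spawn("powershell.exe")
assert policy.permits_path("/etc/billing/app.conf")
```

In a real system this check would sit in an enforcement point (an LSM hook, a seccomp filter, a sidecar proxy); the data model, however, stays this simple: identity-keyed allowlists of expected behavior.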

Foundational Pillars: Identity, Attestation, and Policy

This model is not theoretical; it is an engineering discipline built on three pillars:

  1. Strong, Ephemeral Identity: Every non-human actor, from a microservice to a cron job, must have a strong, automatically rotated cryptographic identity. Standards like SPIFFE/SPIRE provide this foundation, allowing workloads to securely authenticate to each other without relying on static secrets.

  2. Hardware-Rooted Attestation: The workload must prove it is running on a trusted hardware and software stack. A Trusted Platform Module (TPM) can attest to the boot state of the machine, ensuring the integrity of the operating system and hypervisor before the workload is even allowed to request an identity.

  3. Runtime Policy Enforcement: Once identity is proven and the platform is attested, a granular policy is attached to the workload's session. This policy is enforced at runtime, continuously validating that the process's actions adhere to its declared intent.
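The flow across the three pillars can be sketched as follows. The `spiffe://` identity format is real; the verifier functions and the in-memory policy registry are invented for illustration, standing in for a SPIRE server and a remote-attestation service.

```python
# Sketch of the three-pillar flow: identity -> attestation -> policy.
# The SPIFFE ID scheme is real; everything else here is illustrative.

POLICY_REGISTRY = {
    # Policies are keyed by workload identity, not by IP or hostname.
    "spiffe://example.org/billing": {"endpoints": {"db.internal:5432"}},
}

def platform_attested(tpm_quote: dict) -> bool:
    # Pillar 2: a real verifier checks a TPM quote against known-good
    # PCR values; here we simulate that decision with a flag.
    return tpm_quote.get("pcr_match", False)

def authorize_workload(spiffe_id: str, tpm_quote: dict) -> dict:
    # Pillar 1: the workload presents a strong, rotated identity.
    if spiffe_id not in POLICY_REGISTRY:
        raise PermissionError("unknown workload identity")
    # Pillar 2: refuse to attach a policy to an unattested platform.
    if not platform_attested(tpm_quote):
        raise PermissionError("platform failed attestation")
    # Pillar 3: the session carries the policy enforced at runtime.
    return POLICY_REGISTRY[spiffe_id]

session_policy = authorize_workload(
    "spiffe://example.org/billing", {"pcr_match": True}
)
assert "db.internal:5432" in session_policy["endpoints"]
```

The ordering matters: no policy, and therefore no authority to act, is issued until both the identity and the platform underneath it have been verified.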

This is a profound shift. We move from a state of chasing alerts generated by brittle agents to a state where the system itself enforces expected behavior based on provable identity.

This approach is not a replacement for all other controls. It is a necessary complement.

Network Controls Are Blind to Intent

A common counter-argument is that strong network controls, such as micro-segmentation, solve this problem. If a workload can only talk to its designated database, does its internal behavior matter?

Yes, because the most damaging actions often occur within the trust boundary. Micro-segmentation can prevent a compromised web server from accessing a finance system, but it is blind to the actions that server takes with the data it is supposed to access.

Consider a compromised CI/CD pipeline. Its network permissions are correct; it can access the artifact repository and the production Kubernetes cluster. The attacker, using the pipeline's legitimate credentials, doesn't violate network policy. Instead, they use kubectl to deploy a cryptominer or exfiltrate secrets from the production environment. The network sees legitimate traffic from a trusted source to a permitted destination. Only a system governing intent can detect that the specific kubectl exec command violates the pipeline's expected behavior.
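To make the scenario concrete, here is a hedged sketch of the distinction: the pipeline's intent policy permits deployment commands, so an interactive `kubectl exec` from the same trusted identity, over the same permitted network path, is still denied. The policy contents and helper function are invented for illustration.

```python
# Expected behavior of a hypothetical CI/CD pipeline identity. The network
# path to the cluster is legitimate; only the *command* reveals the attack.
PIPELINE_INTENT = {
    "allowed_commands": {
        ("kubectl", "apply"),
        ("kubectl", "rollout"),
        ("docker", "push"),
    }
}

def command_permitted(policy: dict, argv: list) -> bool:
    # Compare the executable and its first subcommand to the allowlist.
    return tuple(argv[:2]) in policy["allowed_commands"]

# Routine deployment: permitted.
assert command_permitted(
    PIPELINE_INTENT, ["kubectl", "apply", "-f", "deploy.yaml"]
)

# Attacker using stolen pipeline credentials to open a shell in production:
# network policy sees nothing wrong, but the intent policy denies it.
assert not command_permitted(
    PIPELINE_INTENT, ["kubectl", "exec", "-it", "pod", "--", "sh"]
)
```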

Identity without runtime governance is a credential waiting to be stolen. Network controls without process-level visibility are a fence around a city with unlocked doors.

Governing intent is a powerful architectural pattern, but its implementation carries significant trade-offs that demand rigorous engineering.

The Architectural Trade-Offs of Continuous Verification

Adopting a model of continuous intent verification is not a simple software installation. It is a fundamental shift in architectural design that introduces its own set of complexities and risks. Honesty about these trade-offs is a prerequisite for success.

Performance and Overhead

Continuously monitoring the behavior of every process on a system is computationally expensive. Kernel-level monitoring, even when optimized, consumes CPU and memory resources. While the goal is to be more efficient and stable than older EDR agents, the observer effect is real. A poorly implemented policy engine can degrade application performance or, in the worst case, create resource contention that leads to instability.

The False Positive Problem

The primary challenge of any behavioral analysis system is defining "normal." A policy that is too strict will generate a high volume of false positives, blocking legitimate administrative actions and creating operational friction. A policy that is too loose provides a false sense of security. Defining and maintaining accurate policies for thousands of distinct workloads requires significant investment in automation and operational maturity.
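One common way to manage this trade-off, assumed here rather than prescribed by any particular product, is to run new policies in a monitor-only mode before enforcing them, so a mis-modeled admin action costs an audit entry rather than an outage. A minimal sketch:

```python
from enum import Enum

class Mode(Enum):
    MONITOR = "monitor"  # log deviations, never block
    ENFORCE = "enforce"  # block deviations

audit_log = []

def evaluate(action: str, allowed: set, mode: Mode) -> bool:
    """Return True if the action may proceed. Hypothetical engine."""
    if action in allowed:
        return True
    audit_log.append(f"deviation: {action}")
    # In monitor mode a false positive costs a log line, not an outage.
    return mode is Mode.MONITOR

allowed = {"read:/etc/app/"}

# The same legitimate-but-unmodeled admin action under each mode:
assert evaluate("read:/var/log/", allowed, Mode.MONITOR) is True
assert evaluate("read:/var/log/", allowed, Mode.ENFORCE) is False
```

Promoting a policy from `MONITOR` to `ENFORCE` only after its audit log runs quiet is how the strict-versus-loose dilemma becomes an operational process instead of a one-shot guess.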

The Nascent State of the Technology

While the foundational pillars (SPIFFE, TPMs) are mature, the technology to declaratively define and enforce process-level "intent" at scale is still emerging. Significant venture capital is flowing into the non-human identity space, with firms like Oasis Security ($120M) and Astrix Security ($45M) leading the charge. This market validation confirms the criticality of the problem, but it also signals that the definitive, enterprise-grade solutions are still being forged.

For Security Architects: The immediate work is not to buy a product claiming to "verify intent." It is to build the foundational layers: implement universal workload identity, enforce hardware attestation, and begin defining coarse-grained behavioral policies as code.

The path is complex, but the alternative is to cede control of our systems to adversaries who know how to use them better than we do.

Your First Step Toward System Sovereignty

The security landscape is littered with the obsolete fortifications of previous wars. Firewalls, antivirus, and even first-generation Zero Trust Network Access are proving insufficient against an adversary that operates from within. The core principle of security architecture must be updated.

We must assume that any process can be subverted. We must design systems where trust is not a static property granted upon authentication but a continuous, verifiable state. This requires moving beyond coarse network controls and brittle inspection agents toward a model where every workload has a strong cryptographic identity and is bound by an enforceable policy that governs its intent.

This is not a project; it is a change in philosophy. It is the only way to build resilient, sovereign systems capable of withstanding the modern threat. To survive in an era where trusted processes are the new attack vectors, security must evolve from governing access to governing intent.
