
The Motion Paradox: What If the Data Never Moved?

Every security architecture we've built over the last two decades has been designed around a single premise that almost no one questions: data must move. Files must travel from one tenant to another, from one identity to another, from our control to someone else's. Collaboration requires transfer. Sharing means sending. External partnership means extending our perimeter into environments we don't operate. This assumption is so deeply embedded in how we think about information security that it has become invisible, and the cost of that invisibility is enormous.
I've been making a related argument for months. In my piece on the Mythos-ready security program, I wrote that if we accept the industry shift toward containment and resilience, then containment must extend to the data layer, or it isn't really containment at all. Network segmentation, egress filtering, and blast-radius limits are all containment controls, and they are necessary. But they stop at the perimeter. Every sensitive document we share externally is an egress event our containment architecture doesn't touch, and every copy that lands in a third party's environment is an ungoverned node that exists beyond our ability to monitor, revoke, or protect.
That argument is about principles. This piece is about the architecture that operationalizes the principle. Because once we accept that containment must reach the data layer, we are forced to ask a question most security programs have never asked out loud: what if the data never moved?
The Paradigm We Stopped Seeing
For most of the last twenty years, security leaders have been building on top of an accepted paradigm we might call data in motion. In this paradigm, the unit of collaboration is the file, and collaboration itself is understood as the act of sending that file somewhere else. Once the file has moved, our posture becomes reactive: we attempt to limit what happens next.
Look at the security ecosystem this assumption has produced. There is an entire category of enterprise spend devoted to reducing the consequences of data leaving our control. Some tools try to prevent sending in the first place. Some try to wrap the file in controls that travel with it. Some encrypt what we can't contain. Some track where copies go after they've gone. Some produce forensic evidence for incidents we couldn't prevent. These are all intelligent responses, and each of them solves a real problem. But every one of them exists because we accepted, without examining the assumption, that the data had to move.
This is the paradox. We have spent billions of dollars, across decades, building controls that exist because of the paradigm, not controls that question it. The ecosystem is so mature, so well-integrated, and so normalized in our architecture that it feels like the definition of security itself. It isn't. It's a response to a choice we made so long ago we've forgotten it was a choice.
Meanwhile, the consequences of that choice are getting harder to manage, not easier. Third-party ecosystems have grown to a scale where we cannot audit our way to assurance. AI agents are poised to share data across boundaries at machine speed, generating volumes of external collaboration our current controls were never designed to govern. Regulators are beginning to ask not just whether we had a policy, but whether we can prove what happened to our data after it left our control. And the answer, honestly, is usually no. We can describe what should have happened. We cannot demonstrate what actually did.
The Alternative That Was Always Available
The alternative paradigm has always been available to us. We just couldn't see it clearly because the dominant one was in the way. I call it data at rest, access in motion.
In this paradigm, the unit of collaboration is not the file. It is the access event. Data doesn't travel to the collaborator; the collaborator travels to the data, authenticating under governance we control, interacting under policies we enforce, generating audit events we own. One copy. One location. One policy. One audit trail. Full revocability. This is not a new type of control layered on top of the existing paradigm. It is a different architecture entirely, and once it's in place, most of the motion we currently spend so much effort mitigating simply stops happening.
Enforcement happens at both the point of sharing and the point of access. When an internal user adds an external party to a SharePoint site or Teams channel, or sends a link in an email, policy decisions are evaluated before the share is allowed to occur, and then re-evaluated on every subsequent access as context changes. This matters because motion doesn't begin at access. It begins at the moment a share is initiated, and that's where governance has to meet it.
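The two-checkpoint model above can be sketched in a few lines. This is a minimal illustration, not any vendor's API: the names (`ShareRequest`, `AccessContext`, `evaluate_share`, `evaluate_access`) and the specific policy rules are hypothetical, chosen only to show policy evaluated once at share initiation and re-evaluated on every subsequent access as context changes.

```python
from dataclasses import dataclass

@dataclass
class ShareRequest:
    resource: str
    external_party: str
    classification: str  # hypothetical labels: "public", "internal", "confidential"

@dataclass
class AccessContext:
    party: str
    device_managed: bool
    revoked: bool = False

def evaluate_share(req: ShareRequest) -> bool:
    """Point-of-sharing check: run before the share is allowed to occur."""
    return req.classification != "confidential"

def evaluate_access(req: ShareRequest, ctx: AccessContext) -> bool:
    """Point-of-access check: re-run on every access event as context changes."""
    if ctx.revoked or not ctx.device_managed:
        return False
    return evaluate_share(req)  # the original share policy must still hold

share = ShareRequest("Q3-forecast.xlsx", "partner@example.com", "internal")
assert evaluate_share(share)            # allowed at the moment the share is initiated

ctx = AccessContext("partner@example.com", device_managed=True)
assert evaluate_access(share, ctx)      # first access permitted

ctx.revoked = True                      # context changes later
assert not evaluate_access(share, ctx)  # the very next access is denied
```

The point of the sketch is the shape, not the rules: governance fires at share initiation and again at every access, so a change in context takes effect on the next request rather than chasing copies after the fact.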
The implications for a security program are significant. When data never leaves the tenant, a third party's breach is no longer automatically a breach of our data, because we don't have data sitting in their environment waiting to be exfiltrated. When access can be revoked at any moment, the half-life of a compromised relationship collapses from years to seconds. When every access event is logged in a single, unified trail, regulators asking “who touched this document last quarter?” get an answer in minutes instead of weeks. When there is no duplicate to chase, we have eliminated what I've previously called the governance fork: every copy becomes its own policy surface, its own lineage problem, its own exposure.
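To make the audit claim concrete: when every access event lands in one trail against one copy, the regulator's question becomes a single query. The sketch below is illustrative only; the event records, field names, and `who_touched` helper are hypothetical, standing in for whatever log store an organization actually operates.

```python
from datetime import datetime

# One unified trail of access events against the single authoritative copy.
events = [
    {"doc": "Q3-forecast.xlsx", "who": "partner@example.com", "at": datetime(2024, 7, 3)},
    {"doc": "Q3-forecast.xlsx", "who": "auditor@firm.com",    "at": datetime(2024, 8, 19)},
    {"doc": "roadmap.pptx",     "who": "partner@example.com", "at": datetime(2024, 8, 20)},
]

def who_touched(doc: str, start: datetime, end: datetime) -> list[str]:
    """Answer 'who touched this document in this window?' from the unified trail."""
    return sorted({e["who"] for e in events
                   if e["doc"] == doc and start <= e["at"] < end})

q3 = who_touched("Q3-forecast.xlsx", datetime(2024, 7, 1), datetime(2024, 10, 1))
# q3 == ["auditor@firm.com", "partner@example.com"]
```

In the data-in-motion paradigm the same question requires reconstructing which copies exist, where they went, and what each environment logged, which is why the answer takes weeks instead of minutes.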
The posture this produces is dramatically simpler and dramatically more defensible than the one most enterprises operate today. That's not marketing language. It is a description of what changes architecturally when you stop accepting the premise that data has to move.
The Architectural Principle
There's a phrase I've been using, both internally and with customers, that captures the whole thesis in five words: data that stays put stays safe. It sounds simple, and that's deliberate. It is simple. The difficulty isn't understanding the principle. The difficulty is recognizing how much of our current architecture exists to solve problems that the principle, applied consistently, would have prevented in the first place.
This is the architectural choice every CISO has already made, whether they've examined it or not. If our external collaboration still runs primarily on attachments, downloads, and copies flowing into third-party environments, we've chosen the data-in-motion paradigm by default. If our controls focus on what happens after the file leaves our tenant, we've accepted a reactive posture because that was the only posture the paradigm permitted. If our audit process requires multi-week forensic reconstruction to answer basic questions about data access, we're paying the ongoing operational tax of an architecture that was never going to produce clean answers.
None of this is a failure of execution. It's a successful execution of a paradigm that has quietly become obsolete.
A Word on the Exceptions
I'm not arguing that data never moves. Some business processes require it: regulatory submissions, legacy supplier networks, certain cross-border workflows, regulated recordkeeping where the counterparty must retain their own authoritative copy. These are real, and they're not going to disappear because we prefer a cleaner architecture. Business processes win over technology, every time.
The argument isn't that motion is eliminated. The argument is that the default is inverted. Today, motion is the default and containment is the exception. In the new paradigm, containment is the default and motion is the exception. When data does have to move, it moves intentionally, with explicit governance wrapped around the movement, logged as a deliberate decision, subject to the same scrutiny as any other elevated action. That's very different from a world where every share is motion by default and governance is trying to catch up after the fact.
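The inverted default can be expressed as a tiny policy function. This is a sketch under stated assumptions, not a real control: `request_action`, the action names, and the exception flag are all hypothetical, chosen to show containment as the default path and motion as an explicit, logged exception.

```python
# Containment is the default; motion requires a deliberate, recorded exception.
audit_log: list[tuple[str, str, str]] = []

def request_action(action: str, resource: str, exception_granted: bool = False) -> bool:
    if action == "access_in_place":
        audit_log.append(("allow", action, resource))       # the default path
        return True
    if action == "copy_out" and exception_granted:
        audit_log.append(("allow-exception", action, resource))  # governed motion
        return True
    audit_log.append(("deny", action, resource))            # motion denied by default
    return False

assert request_action("access_in_place", "contract.pdf")        # containment: allowed
assert not request_action("copy_out", "contract.pdf")           # motion: denied by default
assert request_action("copy_out", "filing.pdf",
                      exception_granted=True)                   # motion as logged exception
```

Note that the exception path still writes to the same log as everything else: motion isn't invisible, it's elevated, scrutinized like any other privileged action.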
The shift isn't from motion to no motion. It's from motion as the path of least resistance to motion as a governed exception. Every leader I've spoken with recognizes the difference immediately, because they've lived the consequences of the first model.
What the Alternative Looks Like in Practice
The good news is that the access-in-motion architecture isn't theoretical. It's operational, at enterprise scale, in production, today. This is the work we're doing at eSHARE, and I joined specifically because I became convinced that this paradigm shift is the most consequential thing happening in enterprise security that most leaders haven't yet named. Data stays within the customer's Microsoft 365 tenant. External parties authenticate with their own identity and access content under continuous policy enforcement, without joining the tenant and without receiving copies they'll eventually lose control of. Every access event is captured. Every sharing decision is governable. Every revocation takes effect immediately, across channels, without relying on anyone else's cooperation or cleanup.
The point isn't the product. The point is that the alternative paradigm exists, is deployed, and produces measurably different outcomes than the one most organizations are still running despite their best efforts. If this architecture works at scale in regulated industries like life sciences, aerospace, and financial services, the question for every other CISO becomes harder to avoid: why are we still investing in controls that presume the data must leave, when the data doesn't have to leave?
The Choice in Front of Us
The window for this conversation is shorter than it appears. Agentic collaboration is going to multiply external data exchange by orders of magnitude, and it will do so on timelines that make architectural retrofits expensive. Regulators are converging on expectations that assume continuous observability over third-party data handling, and those expectations will not be satisfied by the paradigm we've been running. Board members are beginning to ask questions about blast radius and resilience that the data-in-motion paradigm cannot answer cleanly.
This isn't about adding another control to the existing stack. It's about rethinking the architecture underneath it. The organizations that make the shift now will find themselves with a simpler security posture, a lower cost structure, a more defensible compliance position, and a foundation that scales cleanly into the agentic era. The organizations that don't will spend more each year managing the consequences of a choice they never consciously made.
Containment only works if it covers the data, not just the network. The paradigm that makes that possible is available right now. The question isn't whether the architecture exists. It's whether we're willing to question the premise that got us to where we are.
We've moved into a world of containment. The data has to move there too.


