By River Caudle
The slide on the conference room screen glows with a number that is supposed to elicit applause: 100%.
The presenter, a polished IT executive, smiles as he points to the metric. "Year-over-year," he says, "we have achieved 100% uptime."
In most corporate boardrooms, this is the moment for nodding heads and bonuses. But to anyone who has spent years inside critical infrastructure, this isn't a victory lap. It is a distress signal.
That perfect number implies one of three terrifying possibilities: the environment is too small to matter, the monitoring systems are broken, or, most likely, patch management is non-existent.
In the high-stakes world of Industrial Control Systems (ICS) and Operational Technology (OT), availability is not a trophy to be hoarded. It is a currency to be spent. You spend it on critical maintenance, on defensive architecture, and on failover testing.
When a company brags about never taking a system down, it is admitting that it is hoarding that currency, accumulating a mountain of technical debt that will eventually come due.
And when that debt is called in, it won't be paid in downtime. It will be paid in catastrophe.
The Ontological Gap
The friction between modern IT and industrial OT often feels like a bad marriage: two parties shouting the same words but meaning entirely different things. The source of this friction is a linguistic trap surrounding the word "Security."
Imagine an IT professional walking into a server room. When they say "Security," they are speaking the language of Secrecy. Their nightmare is a data leak. Their primary variable is confidentiality. They want to ensure that secrets stay secret.
Now, imagine an OT engineer walking onto a plant floor, surrounded by high-pressure steam pipes and spinning turbines. When they say "Security," they are speaking the language of Safety. Their nightmare is kinetic. They are worried about people dying. They are worried about a turbine throwing a blade through a concrete wall.
In the OT world, Availability and Integrity are existential. The problem arises when a CISO, trained in the logic of enterprise IT, takes over the industrial environment. They unknowingly commit a category error: optimizing for secrecy in an environment that demands safety.
The Physics of the Internet
This clash of philosophies is nowhere more visible than in the debate over Carrier-Grade NAT (CGNAT) and the definition of the "Edge."
There is a common refrain among IT purists: "NAT is not a security feature. The edge is still the edge."
Technically, they are correct. Network Address Translation (NAT) was designed to conserve IP addresses, not to stop hackers. But this argument reveals a flaw in the IT mindset. It frames architecture as a passive canvas upon which "real" security tools (firewalls, EDRs, SIEMs) are painted.
This framing is dangerous. To understand why, we have to look at the physics of the internet.
Consider two scenarios involving a critical Remote Code Execution (RCE) vulnerability in an edge firewall.
- In Scenario A, the organization relies on public IPs. The firewall is globally routable. When the vulnerability is announced, the exploit is immediately viable from anywhere on earth. The security relies entirely on the patch, which hasn't been applied yet.
- In Scenario B, the organization sits behind the architectural constraint of NAT, using unroutable address space (RFC 1918). The same RCE exists. But the internet backbone drops traffic destined for private ranges like 192.168.0.0/16 or 10.0.0.0/8 because routers on the public internet carry no routes for those prefixes; that traffic simply cannot arrive.
The blast radius in Scenario B is constrained not by software configuration, but by architectural reality. You cannot "misconfigure" unroutable address space out of existence.
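The distinction is easy to demonstrate with Python's standard `ipaddress` module, which encodes the IANA special-purpose registries directly. A minimal sketch (the addresses below are illustrative, not drawn from any real deployment):

```python
# Illustrates the architectural constraint behind Scenario B:
# RFC 1918 addresses are private and not globally routable, so an
# exploit aimed at them from the public internet has nowhere to go.
import ipaddress

addresses = [
    "8.8.8.8",       # a public, globally routable address
    "192.168.1.1",   # RFC 1918 private (192.168.0.0/16)
    "10.0.0.1",      # RFC 1918 private (10.0.0.0/8)
]

for addr in addresses:
    ip = ipaddress.ip_address(addr)
    status = "reachable from the internet" if ip.is_global else "dropped by the backbone"
    print(f"{addr}: private={ip.is_private}, {status}")
```

Note that `is_private` and `is_global` are properties of the address itself, not of any firewall rule or configuration file; that is precisely the author's point about constraints that cannot be misconfigured away.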
The Ultimate Control
This brings us to the most uncomfortable truth for the vendor-driven security industry: Architectural constraints are the only durable controls.
The IT-centric model treats security as software you buy and layer on top of your network. These are "operational controls." They are fragile. They require constant care. They carry their own Common Vulnerabilities and Exposures (CVEs). They expire.
Architecture is different. Architecture is the plumbing.
Architectural controls fail differently. They never need patches. They carry no CVEs. They do not depend on a junior admin maintaining the correct configuration at 3:00 AM. They simply persist.
A firewall is a suggestion; a physical air gap or a one-way gateway is a law. Architecture persists until someone makes a deliberate, physical effort to change it.
Engineering for the Break
Engineering is not about how a system functions when it is perfect. It is about how the system functions when it breaks.
A career spent protecting credit card data produces a philosophy optimized for preventing access. But a career spent protecting chemical processes, power grids, or water treatment plants produces a philosophy optimized for limiting blast radius.
"The edge is still the edge" is a fine theory for a textbook. But when you check the CVE list and see that your firewall has a critical authentication bypass (an event that happens constantly), your architectural choices are the only thing standing between an incident and a disaster.
As IT ownership of OT environments increases, we are seeing a corresponding rise in lateral movement attacks within industrial settings. This is not coincidence; it is causation through a category error. We are applying enterprise logic to critical infrastructure, and it is failing.
Defense in depth requires the assurance that no single control failure will be catastrophic. Operational controls, by their nature, cannot provide that assurance. Only architecture can.
So, the next time someone brags about 100% uptime, don't applaud. Ask them why they haven't spent their currency. Start with architecture. Design for failure.
Because eventually, the software will fail, and the architecture will be the only thing left holding the line.
🌊
River Caudle is the CSO of River Risk Partners. A specialist in industrial cybersecurity for critical infrastructure, he is an active contributor to the Purdue Enterprise Reference Architecture (PERA) and collaborates with the ISA99 committee on the future of industrial security standards.