By River Caudle
The "Standard" OT Security Playbook is Flawed
For years, we’ve operated under a foundational assumption that is as pervasive as it is flawed:
If we secure the network, we secure the process.
We’ve treated the network as the fortress, assuming that if the packets are protected, the plant is safe.
But while we were busy hardening the perimeter, we ignored the fact that the physics at the source - the sensors and transmitters that actually drive the process - remain completely defenseless.
The Data Integrity Gap
The packets crossing our firewalls are often cryptographically secured, but the data inside them can be a complete fabrication. The cybersecurity industry has been focused on the delivery mechanism, while almost entirely ignoring the payload.
After 20 years on the digital side of this problem, I’ve had to reconcile my work with the engineering realities documented by veterans like Joe Weiss (PE, CISM, CRISC, ISA Fellow), who has been archiving control system failures since 1998. When you merge these two perspectives, the industry's blind spot becomes impossible to ignore.
Joe frames the missing link through a specific engineering lens:
Process Measurement Integrity = Authorization + Authentication + Accuracy
- Authorization: Only permitted personnel and systems can modify sensor configurations
- Authentication: The signal actually comes from the sensor it claims to come from
- Accuracy: The measurement reflects physical reality within acceptable tolerances
Now think about a typical 4-20mA pressure transmitter:
- Authorization: None. Anyone with physical access and a HART communicator can connect and change range, span, damping, or engineering units. No login. No credential. No audit log on the device. You can even reverse the control direction by setting the low range higher than the high range.
- Authentication: None. The PLC receiving 12mA has no mechanism to verify whether that signal originated from a legitimate sensor or from an attacker's signal generator spliced into the wire.
- Accuracy: Assumed, not verified. In one documented case at a major manufacturing facility, raw sensor signals were monitored independent of the OT network. The study found that more than half of the process sensors were either inoperable or out of calibration, while the Windows-based SCADA continued to show all readings as normal.
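To make the range-reversal trick concrete, here is a minimal sketch, with invented tag values and ranges, of the linear scaling a PLC applies to a 4-20mA loop, and what the same loop current means after an attacker swaps the range endpoints:

```python
def scale_current_to_eu(current_ma: float, range_low: float, range_high: float) -> float:
    """Standard linear 4-20mA scaling: map loop current to engineering units."""
    fraction = (current_ma - 4.0) / 16.0
    return range_low + fraction * (range_high - range_low)

# Legitimate configuration, 0-100 psi: 16mA means 75 psi.
legit = scale_current_to_eu(16.0, 0.0, 100.0)             # 75.0

# After swapping low and high range via a HART communicator, the
# identical 16mA current reads as 25 psi: rising pressure now
# appears to the control loop as falling pressure.
reversed_reading = scale_current_to_eu(16.0, 100.0, 0.0)  # 25.0
```

Nothing on the wire changes; only the interpretation of the current does, which is why no network tool can see it.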
We've built our OT security strategy on implicit trust of Level 0 devices.
That trust is misplaced.
The Trust Problem We Can't Engineer Around
Here's what makes Level 0 different from everything else in our security stack: you cannot apply verification principles to devices that have no authentication mechanism.
At Levels 2 and above, we verify users, devices, and connections because those systems have the computational capacity to participate in authentication protocols.
Process sensors don't. A 4-20mA signal is just a current. There's no place to put a certificate, no processor to run a challenge-response, and no memory to store credentials (HART protocol overlays notwithstanding; we address those specific configuration risks below).
So process sensors operate on 100% implicit trust. The control system believes whatever the sensor tells it. The safety system believes whatever the sensor tells it. The operator display believes whatever the sensor tells it.
There is no verification. No validation. No cryptographic proof of origin.
This isn't a gap we can close by extending our network security architecture downward. The physics of these devices don't allow it.
That's why process measurement integrity requires a different approach: compensating controls that validate measurements through redundancy, physics-based monitoring, and cross-correlation rather than cryptographic authentication.
Joe captures the consequence of this implicit trust simply:
Garbage in from process sensors = Garbage out from networks
Our network anomaly detection, behavioral analysis, and packet inspection all operate on data that originated at a sensor. If the sensor is wrong, whether from drift, miscalibration, or malicious manipulation, our monitoring tools faithfully process and protect the lie.
What This Looks Like When It Fails
Bellingham, Washington, 1999: The Olympic Pipeline SCADA system failed. By design, the pressure sensors defaulted to "average values" rather than showing a fault condition. Operators saw normal readings. Safety systems saw normal readings. Actual pressure was building catastrophically.
The pipeline ruptured. 237,000 gallons of gasoline ignited. Three people died.
Network monitoring would have seen nothing wrong. The packets containing those "average values" transmitted perfectly. The lie was injected before the data hit the wire.
Florida Combined-Cycle Plant, 2019: A single voltage sensor (potential transformer) provided erroneous input to a steam turbine controller. The controller responded by cycling the turbine, creating 200MW load swings. Those oscillations propagated through the entire Eastern Interconnection. One sensor in Florida caused 50MW power swings in New England.
NERC's Lessons Learned didn't classify this as a cyber incident. But a sensor communicated false data to a control system, which actuated equipment based on that data, causing cascading physical consequences. That meets NIST's definition of a cyber incident. It just doesn't match our network-centric mental model.
Manufacturing Facility, 2022: The study I mentioned earlier. Physics-based monitoring of raw 4-20mA signals, independent of the OT network, revealed that over half the sensors were compromised or out of spec. Feed pumps had performance issues. None of this appeared on the Windows-based SCADA.
Calculated productivity impact: approximately 3% of net output.
For a billion-dollar facility, that's tens of millions annually. Attributed to "normal operations" because the monitoring systems showed green.
The Scope Gap in Our Frameworks
Process sensors are explicitly excluded from most OT security frameworks:
- NERC CIP: Excludes devices with non-routable protocols
- TSA Pipeline Security Directives: Focus on network security, not field instrumentation
- ISA/IEC 62443: Acknowledges Level 0 but provides no adequate controls for legacy devices
- API, AWWA cybersecurity standards: Don't address process sensor security
The SANS "State of OT Security 2025" report quantified visibility across Purdue levels, and Level 0 was not mentioned at all.
We’ve collectively scoped Level 0 out of OT security, making the devices that generate our data invisible to our assessments and tools.
At a March 2025 FERC/NERC workshop, regulators acknowledged that the "non-routable communications" exclusion needs to change. That's progress, 26 years after Bellingham, but it hasn't translated to updated requirements yet.
What Detection Actually Looks Like at Level 0
The honest answer: our standard toolkit doesn't work here. But that doesn't mean we're completely blind.
What doesn't work:
- Network packet inspection (the lie happens before the packet)
- Standard OT anomaly detection (sees network behavior, not signal integrity)
- Vulnerability scanning (these devices don't have CVEs because they're insecure by design)
What does work:
Physics-based signal monitoring: Tap the raw signal before it reaches the I/O card. Compare that analog ground truth against what the DCS/SCADA reports. Companies offering physics-based signal monitoring (such as SIGA OT Solutions) have demonstrated the ability to detect combustion turbine sensor failures that Windows-based HMIs completely miss.
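As a conceptual sketch only (commercial tools do this with dedicated hardware taps; the tag values and tolerance here are invented), comparing the independently tapped loop current against the value the SCADA reports looks like:

```python
def expected_eu(raw_ma: float, range_low: float, range_high: float) -> float:
    """Convert the independently tapped loop current to engineering units."""
    return range_low + (raw_ma - 4.0) / 16.0 * (range_high - range_low)

def signal_integrity_check(raw_ma, scada_value, range_low, range_high, tol):
    """Flag disagreement between the analog ground truth and the SCADA display."""
    deviation = abs(expected_eu(raw_ma, range_low, range_high) - scada_value)
    return ("OK", deviation) if deviation <= tol else ("MISMATCH", deviation)

# Raw tap reads 12mA (50 psi on a 0-100 psi range) while SCADA shows 80 psi:
status, dev = signal_integrity_check(12.0, 80.0, 0.0, 100.0, tol=2.0)
# ("MISMATCH", 30.0)
```

The key design point is that the comparison value never touches the OT network, so a compromised HMI or historian cannot hide the discrepancy.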
Redundant sensing with voting logic: Don't let a single sensor drive critical control actions. High-select or median-select architectures force an attacker to compromise multiple independent sensors simultaneously. That's significantly harder than spoofing one signal.
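A minimal illustration of median-select (2oo3) voting, with a deviation alarm when any channel disagrees with the voted value; the readings and tolerance are illustrative:

```python
def median_select(readings: list) -> float:
    """2oo3 median voting: one faulty or spoofed channel cannot drive the output."""
    assert len(readings) == 3
    return sorted(readings)[1]

def vote_with_alarm(readings: list, max_dev: float):
    """Return the voted value plus the indices of channels that disagree with it."""
    voted = median_select(readings)
    outliers = [i for i, r in enumerate(readings) if abs(r - voted) > max_dev]
    return voted, outliers

# Channel 2 spoofed high: the vote ignores it, and the alarm points at it.
voted, bad = vote_with_alarm([10.1, 10.3, 55.0], max_dev=1.0)  # (10.3, [2])
```

An attacker now has to manipulate two physically separate signals consistently and simultaneously, which is a much harder problem than splicing into one wire.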
HART configuration monitoring: While HART has vulnerabilities, you can monitor for configuration changes. If someone connects a HART communicator and modifies a transmitter's parameters, that change can be logged at the asset management system level. It's not prevention, but it's detection.
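The detection idea can be sketched as a baseline diff; the tag name and parameter keys below are hypothetical (real deployments poll these values through the plant asset management system):

```python
# Last-known-good transmitter configuration (hypothetical tag and parameters).
BASELINE = {
    "PT-101": {"range_low": 0.0, "range_high": 100.0, "damping_s": 0.5, "units": "psi"},
}

def detect_config_change(tag: str, polled: dict) -> dict:
    """Diff a freshly polled configuration against the baseline; return changed keys."""
    baseline = BASELINE.get(tag, {})
    return {k: (baseline.get(k), v) for k, v in polled.items() if baseline.get(k) != v}

# A range swap (the reversal trick) shows up immediately in the diff:
changes = detect_config_change(
    "PT-101",
    {"range_low": 100.0, "range_high": 0.0, "damping_s": 0.5, "units": "psi"},
)
# {'range_low': (0.0, 100.0), 'range_high': (100.0, 0.0)}
```

This is after-the-fact detection, not prevention, but it turns a silent field change into an auditable event.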
Process invariant monitoring: Some sensor lies violate physics. If the flow rate shows zero but the downstream pressure is rising, something's wrong. Cross-correlating related measurements can catch spoofed values that don't make physical sense.
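The flow-versus-pressure example reduces to a simple rule; the thresholds below are illustrative placeholders that a process engineer would set per loop:

```python
def flow_pressure_invariant_violated(flow_gpm: float, dp_psi_per_min: float,
                                     zero_band: float = 0.5,
                                     rising_band: float = 0.1) -> bool:
    """Physically inconsistent state: sensor reports no flow while
    downstream pressure is rising."""
    return abs(flow_gpm) < zero_band and dp_psi_per_min > rising_band

flow_pressure_invariant_violated(0.0, 2.3)    # True: spoofed-zero flow candidate
flow_pressure_invariant_violated(120.0, 2.3)  # False: consistent with real flow
```

A spoofed value that satisfies one sensor's range check can still fail the cross-correlation with its physical neighbors, which is the whole point of invariant monitoring.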
None of these are turnkey solutions. All require engineering involvement to implement correctly. But they're real capabilities that exist today.
Building Bridges to Engineering: A Starting Point
"Work with Engineering" is easy to say. Here's what it looks like in practice.
Who initiates: You do. The security team. Engineering has been excluded from OT security conversations because their devices "don't have IP addresses." They're not going to invite themselves to a party they've been told isn't for them.
Who to contact: Start with the Instrumentation & Controls (I&C) Supervisor or Lead Instrument Technician. Not the VP of Engineering. You want the person who actually calibrates transmitters and troubleshoots control loops.
The opening conversation:
"We're trying to get a better handle on our OT security posture, and I've realized we have a blind spot around field instrumentation. I don't know enough about how sensors work at the physical level to assess the risks properly. Can I buy you coffee and ask some dumb questions?"
That framing matters. You're not auditing them. You're not telling them they have a security problem. You're asking for their expertise.
First meeting agenda:
- Asset inventory: What field devices feed your most critical control loops? Safety systems? Which ones are "smart" (HART/Fieldbus) vs. pure analog?
- Access points: How does someone physically access a transmitter to calibrate it? What tools do they use? Is there any logging?
- Configuration management: When a sensor's range or parameters change, how is that documented? Who approves changes?
- Anomaly experience: Have they seen sensors behave strangely? Drift unexpectedly? Give readings that didn't match physical reality? Engineers often have war stories about "that weird transmitter" that never got classified as security-relevant.
What you're building toward: A joint assessment of critical control loops that examines both network attack paths AND physics-level attack paths. The engineer brings knowledge of signal types, calibration procedures, and process constraints. You bring threat modeling and security architecture. Neither can do this assessment alone.
Five Things to Do With This
- Add Process Measurement Integrity to your assessment framework. When you evaluate a control loop, ask: What authorizes configuration changes to this sensor? What authenticates that the signal is coming from a legitimate source? What validates accuracy against physical reality? If the answers are "nothing, nothing, and nothing," document that gap.
- Map your Level 0 blind spots. Identify which sensors feed safety-critical systems, high-value processes, and regulatory compliance measurements. Those are your highest-risk physics-level attack surfaces.
- Have the engineering conversation. Find your I&C lead. Ask the dumb questions. Start building the relationship that lets you collaborate on risks neither team can see alone.
- Evaluate compensating controls. Physics-based monitoring, redundant sensing with voting logic, HART configuration alerts, process invariant checks. None are perfect. All are better than pure implicit trust.
- Be honest about limitations. When you report on OT security posture, caveat what you can't see. "We have visibility into network traffic at Levels 2-3. We do not currently have mechanisms to detect sensor-level compromise or signal manipulation at Level 0."
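The first item above can be captured as a simple record per control loop; the loop name and field values here are purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class PMIGap:
    """One control loop's answers to the three process measurement
    integrity questions: authorization, authentication, accuracy."""
    loop: str
    authorization: str   # what authorizes sensor configuration changes
    authentication: str  # what proves the signal's origin
    accuracy: str        # what validates the reading against physical reality

    def fully_implicit_trust(self) -> bool:
        """True when every answer is 'nothing': document this loop as a blind spot."""
        return {self.authorization, self.authentication, self.accuracy} == {"nothing"}

gap = PMIGap("FT-201 reactor feed flow", "nothing", "nothing", "nothing")
# gap.fully_implicit_trust() -> True
```

Even a spreadsheet-level version of this record makes the Level 0 gap visible in the same assessments that already track network-layer controls.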
The Gap We Need to Close
Process measurement integrity isn't going to be fixed by firewalls. It's not going to be fixed by network segmentation. Our current OT security tools operate above the layer where these attacks occur.
It's going to be fixed by security practitioners and instrumentation engineers working together. Combining network threat modeling with process physics knowledge to see attack paths that neither discipline sees alone.
We've built our OT security programs on a foundation of devices we can't see, can't authenticate, can't audit, and can't secure with current tools. We've been calling it "out of scope" instead of calling it what it is: our most significant blind spot.
The frameworks will eventually catch up. FERC and NERC are already signaling that the Level 0 exclusion has to change. But framework updates take years.
Your environment is at risk now.
The question isn't whether to address process measurement integrity. It's whether you address it proactively, or wait until a lying sensor teaches you the lesson the hard way.
River Caudle is CSO of River Risk Partners, specializing in industrial cybersecurity and production loss prevention for nuclear, energy, and critical infrastructure sectors.