The outage clock does not care who pressed the patch button or who signed the safety case; it just burns money and trust until operators restore flow and leadership restores confidence. In plants and rail yards, pipelines and power stations, a familiar tug-of-war still plays out between cybersecurity teams trained in IT hygiene and operators accountable for uptime and safety. This roundup pulled together perspectives from leaders across industrial security, drawing on practices commonly seen at Honeywell Process Solutions, Accenture, Wabtec, and OPSWAT, to capture where consensus has formed, where contention remains, and what actually moves organizations from cultural stalemate to shared resilience on the plant floor.
Most participants started from the same sobering baseline: only a small minority of organizations, often cited near 14%, feel fully prepared for OT threats, which means the readiness gap is less a tooling deficit than a cultural one. Rising costs make the gap impossible to ignore: outages average roughly $88,000 per hour, 58% of sites operate under regulatory mandates, and about 26% report violations, while aging systems, stricter oversight, cloud adoption, and geopolitical pressure push risk upward. The result is a widening divide and a clear mandate: translate cyber into operational language, unify data and tools, and design defense so that it disappears during normal operations and performs under stress.
Yet the story is not grim. Leaders described practical ways to align incentives and speed decisions without jeopardizing production. Co-location on the plant floor, shared inventories and monitoring, time-bound vendor access, and board-ready risk quantification formed a repeatable pattern. This is not a call for more controls; it is a case for better choreography—where cybersecurity helps deliver safer, steadier, smarter operations.
What unites leaders—and where they disagree
Participants largely agreed that culture outruns technology in determining outcomes. Security teams tend to lead with vulnerability counts and patch cadence; operations prioritizes deterministic control, safety, and reliability targets. When the same incident is framed as a CVE score on one side and a risk to process continuity on the other, friction follows. The gap deepens when compliance is treated as a checklist rather than a design guide, or when separate tooling for IT and OT fragments visibility and slows triage.
However, views diverged on pace and posture. Some emphasized quick wins—segmentation and secure transfer workflows that reduce risk without touching controllers—while others pushed for broader governance changes at the corporate level before system changes at the site. There was also nuance about incident rehearsals: several favored frequent cross-functional drills, yet some operators preferred smaller, scenario-specific walk-throughs aligned to maintenance windows to limit disruption.
A consistent throughline was language. The most effective engagements recast cyber risk in terms of downtime, safety incidents avoided, mean time to recovery, and the dollars at stake. Notably, leaders reported that this framing accelerates approvals for continuity-centric controls, even among skeptical plant managers.
Inside the culture divide across plants and SOCs
When “patch now” meets “don’t stop the line”
The most familiar flashpoint stems from conflicting cadences. Security teams seek rapid remediation to reduce exposure windows; operations resists changes that could destabilize deterministic processes. That tension is sharper in facilities where change windows are scarce, maintenance backlogs are long, and failure modes are poorly documented. The readiness shortfall reflected in the small fraction of organizations claiming full preparedness is not due to ignorance about threats, but to practical constraints that the plant cannot wish away.
Numbers reinforce the caution. Outages pegged around $88,000 per hour, mandated controls covering most sites, and noticeable rates of violations all shape how much risk each side is willing to accept. Leaders said that violations often stem from insufficient staff or time to integrate controls while meeting production goals, not from indifference. The language mismatch, with security urging immediate patches and operations defending safety envelopes, hardens into a cultural divide unless each side can see the other's tradeoffs in operational terms.
Shared tools, shared data, shared wins
Leaders converged on a simple, powerful remedy: use the same backbone. Joint asset inventories, configuration management databases, shared monitoring, and harmonized change workflows reduce duplication and improve decision quality. During incidents, a single source of truth lets teams agree on scope, triage faster, and avoid conflicting actions. During planning, it becomes easier to integrate cyber tasks into maintenance cycles and lifecycle management.
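As a minimal sketch of what that single source of truth might hold, the record below combines the reliability fields maintenance planners track with the exposure fields security analysts need, so both teams can triage from the same object. The field names and ranking logic are illustrative assumptions, not a reference to any particular CMDB product.

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class OTAsset:
        """One jointly owned inventory record; fields are hypothetical, for illustration."""
        asset_id: str
        description: str
        site: str
        process_zone: str              # segmentation zone the asset lives in
        firmware_version: str
        last_patched: date | None      # None for devices that cannot be patched in place
        next_change_window: date       # maintenance window when changes are allowed
        safety_critical: bool          # touches a safety instrumented function
        known_cves: list[str] = field(default_factory=list)

    def triage_order(assets: list[OTAsset]) -> list[OTAsset]:
        """Rank assets for joint review: safety-critical first, then by exposure
        (CVE count), then by the earliest available change window."""
        return sorted(
            assets,
            key=lambda a: (not a.safety_critical, -len(a.known_cves), a.next_change_window),
        )

The point of the ranking function is not the specific ordering but that it encodes tradeoffs both sides have already agreed to, so prioritization does not have to be renegotiated in the middle of an incident.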
Practical models help. Co-location of analysts near control rooms for critical shifts, cross-training so security staff can read safety cases and process diagrams, and integrated incident rehearsals that include vendors all build muscle memory. Practices frequently associated with Honeywell, Accenture, Wabtec, and OPSWAT reinforce this operational backbone: shared telemetry and inventories, integrated playbooks, and joint after-action reviews. The tradeoff is governance and funding: shared systems need clear ownership and budgets. Yet leaders described the payoff as decisive: fewer blind spots, faster recovery, and measurable reliability gains.
Defense-in-depth without disrupting production
Controls that disappear in normal operations
Defense-in-depth remains the principle, but the implementation must be tuned for OT. Segmentation that mirrors process boundaries, unidirectional gateways where feasible, secure file transfer paths, and out-of-band visibility provide layered protection that does not impede real-time control. Leaders stressed that intrusive network scans or frequent patching on legacy equipment risk causing the outages they aim to prevent, so controls should be continuity-first by design.
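One way to make "segmentation that mirrors process boundaries" concrete is to express permitted zone-to-zone conduits as data and check every proposed flow against that table before a firewall change is raised. The sketch below is loosely in the spirit of the zones-and-conduits idea in ISA/IEC 62443; the zone names, protocols, and rules are hypothetical.

    # Hypothetical zones-and-conduits table: which source zone may talk to which
    # destination zone, over which protocols. Anything not listed is denied.
    ALLOWED_CONDUITS = {
        ("enterprise_it", "dmz"): {"https"},
        ("dmz", "site_operations"): {"opc-ua"},          # historian / analytics pull
        ("site_operations", "basic_control"): {"opc-ua"},
        # Note: no conduit from enterprise_it directly to basic_control,
        # and nothing reaches the safety zone over the network at all.
    }

    def flow_permitted(src_zone: str, dst_zone: str, protocol: str) -> bool:
        """Default-deny check for a proposed network flow between zones."""
        return protocol in ALLOWED_CONDUITS.get((src_zone, dst_zone), set())

    # Example: a vendor tool asking to reach a controller from the corporate network.
    print(flow_permitted("enterprise_it", "basic_control", "https"))  # False
    print(flow_permitted("dmz", "site_operations", "opc-ua"))         # True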
Regional and sector pressures complicate this calculus. Energy and transportation face heavier regulatory scrutiny and increasingly aggressive adversaries, including state-linked campaigns against critical infrastructure. In such environments, recovery readiness becomes as vital as prevention: spares for critical components, golden images for programmable devices, and tested runbooks reduce time to restore. Several noted that rehearsing the recovery path builds trust, convincing operators that cybersecurity is an ally when the plant needs to bounce back quickly.
Challenging the “more controls” instinct
Participants warned against the reflex to stack tools when core processes are not aligned. More controls that generate more alerts without a shared triage workflow create fatigue and erode trust. Conversely, fewer, better-integrated controls supported by common data, defined change windows, and jointly owned metrics tend to improve both security and uptime. The metric that mattered most in these conversations was not the number of vulnerabilities closed but the minutes of unplanned downtime avoided and the speed of safe restoration when incidents occurred.
Connectivity, cloud, and AI without more chaos
Designing a common language of resilience
The expansion of IIoT, cloud-hosted analytics, and AI workloads has added new stakeholders, including data scientists, cloud architects, and integrators, and has widened the attack surface. Leaders argued that identity, monitoring, and configuration visibility must be unified across IT and OT so that all parties operate from the same picture of risk. This requires consolidating identity providers, normalizing telemetry, and mapping dependencies from plant-level assets up to cloud services.
Zero trust principles adapted to OT were cited repeatedly: time-bound vendor access, strong credential management with rapid revocation, and cryptographic validation of updates. These practices reduce persistent exposure and make vendor support auditable. Some groups also flagged post-quantum considerations for long-lived equipment and signed updates, noting that lifecycle decisions made now will persist for years and should anticipate cryptographic shifts.
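As a small illustration of the update-validation piece, the sketch below checks a firmware image against the digest published in a vendor manifest before the image is staged for a change window. It uses only Python's standard library; the manifest format is a hypothetical placeholder, and a real workflow would also verify the manifest's own signature against a pinned vendor key.

    import hashlib
    import json
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        """Stream the file so large firmware images do not need to fit in memory."""
        digest = hashlib.sha256()
        with path.open("rb") as fh:
            for chunk in iter(lambda: fh.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def verify_firmware(image: Path, manifest: Path) -> bool:
        """Compare the image digest with the vendor-published manifest entry.

        The manifest format here is hypothetical: {"filename": ..., "sha256": ...}.
        The manifest itself should arrive over an authenticated channel or carry
        a signature verified against a pinned vendor public key.
        """
        entry = json.loads(manifest.read_text())
        return entry["filename"] == image.name and sha256_of(image) == entry["sha256"]

    # Example: refuse to stage anything whose digest does not match.
    # if not verify_firmware(Path("plc_fw_2.4.1.bin"), Path("plc_fw_2.4.1.manifest.json")):
    #     raise SystemExit("Firmware integrity check failed; do not stage this image.")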
Unified data governance and faster recovery
A recurring insight was that governance has to evolve in lockstep with connectivity. Shared policies for data classification, access, and retention create predictability for everyone, from reliability engineers to analytics teams. Additionally, unified telemetry and configuration baselines speed fault isolation and shorten the path from detection to safe recovery. Leaders observed that when identity, asset data, and monitoring are coherent, incident commanders can make confident decisions without reopening design debates mid-crisis.
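One concrete reading of "configuration baselines speed fault isolation" is a routine drift check: hash the running configuration of each asset and compare it with a stored golden baseline, so responders can quickly distinguish an unauthorized change from an equipment fault. The asset names and configuration fields below are hypothetical.

    import hashlib
    import json

    def config_fingerprint(config: dict) -> str:
        """Hash a normalized (sorted-key) JSON rendering so key ordering never counts as drift."""
        canonical = json.dumps(config, sort_keys=True, separators=(",", ":"))
        return hashlib.sha256(canonical.encode()).hexdigest()

    def drifted_assets(baselines: dict[str, str], current: dict[str, dict]) -> list[str]:
        """Return asset IDs whose running configuration no longer matches the golden baseline."""
        return [
            asset_id
            for asset_id, config in current.items()
            if config_fingerprint(config) != baselines.get(asset_id)
        ]

    # Example with two hypothetical assets: only PLC-7 has drifted.
    baselines = {"PLC-7": config_fingerprint({"setpoint": 72, "fw": "2.4.0"}),
                 "HMI-3": config_fingerprint({"screens": 14, "fw": "9.1"})}
    current = {"PLC-7": {"setpoint": 75, "fw": "2.4.0"},
               "HMI-3": {"screens": 14, "fw": "9.1"}}
    print(drifted_assets(baselines, current))  # ['PLC-7']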
Budget, governance, and board communication
Leaders emphasized that board conversations work when framed in business terms. Value at Risk, scenario analyses, and probability-impact matrices help quantify exposure and show residual risk after proposed mitigations. This framing connects cybersecurity to revenue protection, safety outcomes, and regulatory posture, rather than to abstract technical metrics. It also clarifies tradeoffs—such as whether to fund segmentation first or invest in secure vendor access and monitoring—by comparing risk retired per dollar.
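To make "risk retired per dollar" tangible, the sketch below annualizes scenario losses using the roughly $88,000-per-hour outage figure cited earlier and compares two candidate investments. The probabilities, downtime estimates, and control costs are hypothetical placeholders, not benchmarks.

    OUTAGE_COST_PER_HOUR = 88_000  # rough figure cited in the roundup

    def annualized_loss(prob_per_year: float, downtime_hours: float) -> float:
        """Expected annual loss for one scenario: likelihood x hours down x cost per hour."""
        return prob_per_year * downtime_hours * OUTAGE_COST_PER_HOUR

    def risk_retired_per_dollar(loss_before: float, loss_after: float, control_cost: float) -> float:
        """Expected annual loss removed by a control, per dollar spent on it."""
        return (loss_before - loss_after) / control_cost

    # Hypothetical scenario: ransomware reaching the plant network.
    before = annualized_loss(prob_per_year=0.20, downtime_hours=36)   # ~$633,600/yr expected
    seg    = annualized_loss(prob_per_year=0.20, downtime_hours=12)   # segmentation shortens the outage
    access = annualized_loss(prob_per_year=0.08, downtime_hours=36)   # vendor-access hardening cuts likelihood

    print(round(risk_retired_per_dollar(before, seg,    control_cost=400_000), 2))  # 1.06
    print(round(risk_retired_per_dollar(before, access, control_cost=250_000), 2))  # 1.52

On these made-up numbers the two options land close together, which is exactly the kind of tradeoff a risk council can debate in business terms rather than in CVE counts.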
On governance, integrated IT-OT risk councils were highlighted as a practical mechanism. These bodies approve changes with both security and operations constraints in mind, own shared metrics, and arbitrate budget across functions. Crucially, they align cyber efforts to reliability and performance KPIs, tying investment to outcomes operators already track, such as mean time between failures and mean time to recovery.
Actionable guidance from the field
Leaders converged on a small set of actions that consistently move programs forward. First, embed cybersecurity into maintenance and lifecycle planning, not as a parallel track but as part of everyday reliability work. Second, standardize on shared inventories, configuration databases, and telemetry that both maintenance and security trust. Third, run joint tabletop exercises and live rehearsals that include vendors, and capture lessons in runbooks that align with safety cases.
Access control modernizations stood out as high-return. Replacing persistent vendor VPNs with time-bound sessions, enforcing multi-factor authentication, validating software and firmware integrity, and ensuring rapid credential revocation reduce exposure while preserving support. Finally, treat compliance as a floor: use ISA/IEC 62443 and similar frameworks to guide architecture and governance choices that deliver continuity-first security, rather than box-checking.
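Below is a minimal sketch of what time-bound vendor access with rapid revocation can look like at the policy level, using only Python's standard library. The grant structure and durations are illustrative assumptions; in practice this logic lives in the identity provider and jump host rather than in a script.

    import secrets
    from dataclasses import dataclass
    from datetime import datetime, timedelta, timezone

    @dataclass
    class VendorGrant:
        token: str
        vendor: str
        target_asset: str
        expires_at: datetime
        revoked: bool = False

    def issue_grant(vendor: str, target_asset: str, minutes: int = 60) -> VendorGrant:
        """Create a short-lived, single-asset access grant instead of a standing VPN account."""
        return VendorGrant(
            token=secrets.token_urlsafe(32),
            vendor=vendor,
            target_asset=target_asset,
            expires_at=datetime.now(timezone.utc) + timedelta(minutes=minutes),
        )

    def access_allowed(grant: VendorGrant, asset: str) -> bool:
        """Deny if the grant is revoked, expired, or aimed at a different asset."""
        return (
            not grant.revoked
            and asset == grant.target_asset
            and datetime.now(timezone.utc) < grant.expires_at
        )

    # Example: revocation takes effect immediately, with no credential cleanup later.
    grant = issue_grant("turbine-oem-support", "PLC-7", minutes=90)
    print(access_allowed(grant, "PLC-7"))   # True while the window is open
    grant.revoked = True
    print(access_allowed(grant, "PLC-7"))   # False the moment it is revoked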
Comparing perspectives from the field
Across manufacturing, transportation, and process industries, leaders aligned on continuity-centric design yet differed on sequencing. Some prioritized quick segmentation and secure transfer paths to cut risk immediately; others started with governance and shared tooling to prevent later fragmentation. Organizations with heavy vendor dependence pushed hardest on zero trust access and update validation; sites with complex legacy assets leaned into out-of-band visibility and recovery readiness.
Despite these differences, the common pattern was unmistakable: shared goals, shared tools, and shared accountability lowered friction and raised resilience. Where co-location, cross-training, and integrated rehearsals were routine, teams navigated incidents with fewer surprises and faster restoration. Where those practices were absent, even well-funded controls struggled against cultural drag.
Where this roundup pointed next
By the end, several next steps stood out in clear relief. Programs that translated vulnerabilities into downtime and safety language secured funding faster and sustained executive attention longer. Efforts that built a single source of truth for assets and configurations shortened triage and removed duplicative work. And drills that included vendors exposed brittle access paths and forced the creation of rapid revocation procedures before attackers did.
The roundup also highlighted fresh angles for future work: integrating post-quantum planning into long-lived device lifecycles, expanding identity governance to cover AI engineering workflows, and refining risk metrics so they map directly to reliability targets used by plant leadership. For readers seeking more depth, the best follow-on materials would include sector-specific guidance on segmentation patterns, practical playbooks for vendor access modernization, and case studies that quantify recovery gains from shared inventories and telemetry.