OPC Connectivity Explained

What is OPC tunneling?

OPC tunneling is a technique for moving OPC data across a network by replacing DCOM, the Windows-specific transport that OPC Classic was built on, with a standard TCP connection that works cleanly through firewalls and does not require Windows domain synchronization, matching user accounts, or dynamic port ranges. It is the practical solution for cross-network OPC Classic deployments where DCOM is not workable.

Last reviewed: 2026 | Reading time: ~10 min | Topics: OPC tunneling, DCOM, DMZ, DataHub, DHTP, OPC UA, firewalls


The tunneling software installs on both the OPC server machine and the OPC client machine. Each instance connects locally to its respective OPC application using standard COM, the same way it would if the client and server were on the same machine. The two tunneling instances then communicate with each other over a single, configurable TCP port, effectively creating a peer-to-peer network link that carries the OPC data without DCOM being involved in the network hop at all.
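The two-endpoint arrangement can be sketched with plain TCP sockets: one endpoint listens on the single configurable port, the other connects out and pushes a value change across. This is a toy illustration only, not DataHub's DHTP protocol; the JSON framing and point name are invented, and port 4502 is borrowed from the diagram later in this article.

```python
import json
import socket
import threading

TUNNEL_PORT = 4502  # single configurable port; 4502 per the article's diagram

received = []
ready = threading.Event()

def it_endpoint():
    """IT-side endpoint: accepts the peer link and records incoming updates."""
    with socket.socket() as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", TUNNEL_PORT))
        srv.listen(1)
        ready.set()
        conn, _ = srv.accept()
        with conn, conn.makefile("r") as f:
            received.append(json.loads(f.readline()))

def ot_endpoint(point, value):
    """OT-side endpoint: one outbound TCP connection carries the value change."""
    with socket.socket() as s:
        s.connect(("127.0.0.1", TUNNEL_PORT))
        s.sendall((json.dumps({"point": point, "value": value}) + "\n").encode())

t = threading.Thread(target=it_endpoint)
t.start()
ready.wait()
ot_endpoint("Line1.Temperature", 72.4)
t.join()
```

Note that only one well-known port is involved end to end, which is exactly the property that makes the firewall rule a single line instead of a range.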

From the perspective of the OPC client and the OPC server, everything looks the same as a local COM connection. The client does not know it is talking to a tunnel; the server does not know either. The tunneling layer is transparent, which means existing OPC Classic applications can be networked over firewalls and DMZs without any modification to the client or server software.

OPC tunneling vs OPC UA: OPC UA solves the same cross-network problem natively, because it is built on TCP/IP from the ground up with no DCOM dependency. If both your OPC server and all client applications support OPC UA, that is the preferred path for new deployments. OPC tunneling is the practical solution for environments where one or both sides of the connection are OPC Classic only and cannot or should not be replaced.

Why DCOM fails across networks and firewalls

Understanding why tunneling exists requires understanding what makes DCOM problematic in real plant environments. DCOM was designed in the 1990s for communication between Windows applications on a corporate LAN within the same Windows domain. It was not designed for IT/OT network boundaries, firewall-separated zones, or the security posture that modern industrial environments require.

The core DCOM problem in industrial networks: DCOM uses dynamic port allocation. The client connects to the server's DCOM endpoint on TCP port 135, which then negotiates a random high-numbered port (typically somewhere in the 49152 to 65535 range) for the actual data communication. Firewalls that separate OT and IT networks must either open the entire dynamic port range or block OPC Classic network connections entirely. Neither option is acceptable in a security-conscious environment.
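The scale of the firewall problem is easy to quantify. A minimal sketch using the port numbers from the paragraph above (the tunnel port value is illustrative; any single agreed port works):

```python
# DCOM: the client first hits the endpoint mapper on 135, then the data
# connection lands on a negotiated port somewhere in the dynamic range,
# so the firewall must permit the entire range up front.
DCOM_ENDPOINT_MAPPER = 135
DCOM_DYNAMIC_PORTS = range(49152, 65536)

# Tunneling: exactly one known, configurable TCP port.
TUNNEL_PORTS = [4502]  # illustrative single port

ports_open_for_dcom = 1 + len(DCOM_DYNAMIC_PORTS)   # 135 plus the whole range
ports_open_for_tunnel = len(TUNNEL_PORTS)
```

Opening more than sixteen thousand ports versus one is the difference a firewall administrator actually has to sign off on.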

Beyond the port problem, DCOM has several other failure modes that engineers who have deployed OPC across networks recognize immediately:

Firewall compatibility
DCOM: Fails unless an entire dynamic high port range (49152-65535) is opened, or DCOM port ranges are manually locked down and configured.
Tunneling: Works through any firewall that allows a single configurable TCP port outbound.

Windows domain requirement
DCOM: Requires matching user accounts and passwords on both machines, or both machines in the same Windows domain. Cross-domain or workgroup configurations require manual intervention.
Tunneling: No Windows domain dependency. The connection is authenticated at the tunneling layer with a username and password configured in the tunnel software itself.

Network failure recovery
DCOM: Timeouts are hardcoded and can take minutes to detect a broken connection. Client applications may appear to hang waiting for DCOM to time out.
Tunneling: A configurable heartbeat detects network breaks in as little as 50 milliseconds. The local DataHub instance continues serving clients during the outage using last known values.

Configuration complexity
DCOM: Requires dcomcnfg, Component Services, OPCEnum configuration, firewall rules, user account matching, and authentication level tuning. Any Windows update can break a working configuration.
Tunneling: Three-step configuration in the tunneling software UI. No Windows system-level changes required.

Cross-domain and workgroup
DCOM: Very difficult. Requires identical local user accounts with matching passwords on both machines, and is easy to misconfigure.
Tunneling: Works natively. The two machines do not need to be in the same domain, workgroup, or even the same Windows version.

Inbound firewall ports on OT side
DCOM: Requires inbound ports open on the server side for client-initiated connections.
Tunneling: Can be configured outbound-only from the OT side. The plant DataHub initiates the connection to the IT DataHub, so no inbound ports need to be open on the OT network.

Encryption
DCOM: No built-in data encryption over the network.
Tunneling: SSL/TLS encryption available, with optional password protection for the tunnel connection.

How OPC tunneling works

At its core, OPC tunneling mirrors data between two DataHub instances rather than simply relaying OPC protocol commands across the network. This distinction matters for performance and resilience.

In a standard DCOM connection, every OPC read, write, and subscription command from the client travels across the network to the server, and every response travels back. The network becomes part of the OPC session latency. If the network becomes congested or briefly drops, the OPC session may be interrupted.

In a tunneling architecture, each DataHub instance maintains its own complete data cache. The OPC client on the IT side talks locally to its DataHub via COM, as if the DataHub were a local OPC server. The two DataHub instances synchronize their caches over the TCP tunnel using the DataHub Transfer Protocol (DHTP). When a value changes on the OT side, the slave DataHub detects it, sends it across the tunnel, and the master DataHub updates its local cache and notifies any subscribed clients, all without the client ever waiting on a cross-network OPC call.
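The cache-mirroring idea can be sketched in a few lines. This is a toy model, not DataHub's implementation or API; the class and point names are invented for illustration. The key behaviors it shows are that only real changes cross the tunnel, and that clients are notified from the local cache rather than waiting on a cross-network call.

```python
class MirroredCache:
    """Toy sketch of a change-driven, subscriber-notifying data cache."""

    def __init__(self):
        self.points = {}        # point name -> (value, quality)
        self.subscribers = []   # callbacks invoked on every real change

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def update(self, name, value, quality="Good"):
        # Duplicate values are suppressed: nothing crosses the tunnel,
        # no subscriber is disturbed.
        if self.points.get(name) == (value, quality):
            return
        self.points[name] = (value, quality)
        for cb in self.subscribers:
            cb(name, value, quality)

# Slave (OT side) and master (IT side) each hold a full copy of the data set.
slave, master = MirroredCache(), MirroredCache()

# The "tunnel": every change on the slave is replayed into the master's cache.
slave.subscribe(lambda n, v, q: master.update(n, v, q))

# An IT-side OPC client subscribed to the master cache.
events = []
master.subscribe(lambda n, v, q: events.append((n, v, q)))

slave.update("Pump1.Speed", 1450)
slave.update("Pump1.Speed", 1450)   # duplicate: suppressed, nothing sent
```

After this runs, both caches hold the same value and the client saw exactly one notification, despite two updates on the OT side.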

OPC tunneling architecture: DCOM replaced by a peer-to-peer TCP link
[Diagram] On the OT network / plant floor, a PLC or DCS feeds an OPC DA server (such as TOP Server) over its native protocol. The slave Cogent DataHub, the OT-side tunnel endpoint, connects to that server over local COM and initiates the outbound tunnel connection, so no inbound ports are needed on the OT side. The tunnel crosses the firewall on a single configurable TCP port (4502), SSL/TLS encrypted and password protected. On the IT network / business layer, the master Cogent DataHub, the IT-side tunnel endpoint, serves SCADA, historian, and analytics clients over local COM.

What happens when the network goes down

One of the most practically important differences between DCOM and tunneling is how each handles a network interruption. With DCOM, a broken network connection is not immediately detected. DCOM has hardcoded timeout periods that can take minutes to expire, during which the OPC client may appear to hang, display stale data without a quality change, or require a restart to recover.

With Cogent DataHub tunneling, the heartbeat interval is configurable and can be set as low as 50 milliseconds. When the heartbeat is missed, the local DataHub detects the break immediately, marks all tunnel-sourced data points with a quality of "Not Connected," and continues serving the IT-side clients with that flagged status. When the network recovers, the DataHub automatically reconnects, refreshes the data cache with current values, and resumes normal operation, all without any intervention from an engineer or operator.
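The heartbeat mechanism reduces to a small pattern, sketched below with a hypothetical TunnelMonitor class that is not DataHub's API. Only the 50 millisecond figure and the "Not Connected" quality flag come from the text; everything else is invented for illustration.

```python
import time

class TunnelMonitor:
    """Toy sketch of heartbeat-based break detection on a tunnel endpoint."""

    def __init__(self, cache, heartbeat_ms=50):
        self.cache = cache                    # point name -> (value, quality)
        self.timeout = heartbeat_ms / 1000.0
        self.last_beat = time.monotonic()

    def heartbeat(self):
        """Called whenever the peer's heartbeat message arrives."""
        self.last_beat = time.monotonic()

    def check(self):
        """True while the tunnel is alive; flags cached data when it is not."""
        if time.monotonic() - self.last_beat > self.timeout:
            # Keep last known values, but make the break visible to clients.
            for name, (value, _) in self.cache.items():
                self.cache[name] = (value, "Not Connected")
            return False
        return True

cache = {"Tank1.Level": (63.2, "Good")}
monitor = TunnelMonitor(cache, heartbeat_ms=50)
monitor.heartbeat()
alive_now = monitor.check()      # heartbeat just arrived: still connected
monitor.last_beat -= 1.0         # simulate one silent second on the wire
alive_later = monitor.check()    # break detected, quality flagged
```

The essential point is that clients keep getting data from the local cache during the break, but the quality flag tells them exactly what they are looking at.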

When to use OPC tunneling

OPC tunneling is not the right answer for every situation, but it is the right answer for a well-defined set of scenarios that come up regularly in real plants. Here are the four most common.

1. OPC DA across a firewall or DMZ
The scenario OPC tunneling was built for.
Your OPC DA server is on the OT network. Your historian, SCADA, or analytics platform is on the IT network. A firewall sits between them. DCOM does not work cleanly through the firewall without opening an unacceptable port range. Tunneling solves this with a single outbound TCP connection from the OT side, keeping the OT network firewall locked down. This is the most common OPC tunneling use case, and the primary reason Cogent DataHub's tunneling capability was built.
2. OPC DA between different Windows domains or workgroups
Cross-domain, workgroup, or mixed-version Windows environments.
DCOM requires matching user credentials on both machines. In plants where the OT network Windows machines are managed separately from IT domain controllers, maintaining matching accounts is operationally costly and a common source of authentication failures after password rotations or Windows updates. Tunneling eliminates this dependency entirely. The two DataHub instances authenticate directly with each other using credentials configured in the tunnel software, with no requirement that the underlying Windows accounts match.
3. Collecting OPC data from multiple remote sites
Multi-site aggregation over WAN, VPN, or internet connections.
When OPC servers are at remote sites connected by WAN links, cellular, or VPN, DCOM is completely impractical. The latency alone breaks DCOM's timing model, and the dynamic port requirements are impossible to manage across WAN firewall rules. Tunneling handles WAN connections reliably because it uses standard TCP, tolerates the latency inherent in wide-area links, and can aggregate data from multiple remote DataHub instances into a single data set on the central DataHub. A data center or operations center can collect OPC data from dozens of remote sites through a single managed set of tunnel connections.
4. Windows security updates breaking DCOM
A reactive fix for a recurring operational problem.
Microsoft's KB5004442 security update, released in 2021, changed the authentication requirements for DCOM servers and broke many existing OPC Classic network connections that were relying on lower authentication levels. This was not the first time a Windows security patch disrupted DCOM-based OPC deployments, and it will not be the last. Every time Microsoft tightens DCOM security, OPC Classic network connections that were working require reconfiguration. Replacing DCOM with tunneling removes OPC data networking from the Windows security update maintenance cycle entirely.

Cogent DataHub tunneling: capabilities in detail

Cogent DataHub, distributed by Software Toolbox, is well suited as a tunneling solution for OPC Classic deployments because of the specific engineering choices made in its implementation. These are not marketing features; they are the capabilities that matter when you are managing a tunneled OPC deployment across a production environment.

🔒 SSL/TLS encryption
All tunnel traffic can be encrypted to the highest TLS level supported by both machines, with optional password protection. Meets the security requirements of most IT security teams for cross-network OT data transfer.

50 ms heartbeat detection
Configurable heartbeat detects network breaks as fast as 50 milliseconds. Data quality is immediately flagged. Clients always know when they are receiving live data vs cached data from an interrupted connection.

🔄 Automatic reconnection
When the network recovers, the DataHub reconnects automatically and refreshes the full data set. No operator intervention needed. Unlike DCOM, which may require OPC client or server restarts to recover.

🚪 No inbound OT ports required
The OT-side DataHub can initiate the outbound connection, meaning no inbound firewall ports need to be open on the plant network. The plant controls who connects and when, not the IT side.

📊 Data aggregation
A single DataHub instance can maintain tunnel connections to multiple remote OPC servers simultaneously, aggregating all their data into a single unified data set accessible by IT-side clients over one connection.

🛡 Data diode mode
Enhanced security mode that discards all incoming data without any processing, enforcing strict one-way data flow. Also supports hardware data diodes using TCP emulation for environments requiring physical unidirectionality.

💾 Store and forward
Caches data locally when the network is down and forwards it when connectivity resumes. Critical for historian applications where data gaps during network outages create compliance issues or analysis gaps.

🔁 Read-only or read-write
Each tunnel endpoint can be configured as read-only (data flows one way, no writes allowed from IT to OT) or read-write. Read-only mode is the default recommendation for cross-zone connections, limiting the attack surface.

🌐 Proxy and DMZ support
Works through proxy servers and multi-hop DMZ architectures. A DataHub in the DMZ can relay data from the OT tunnel to the IT side, with all inbound ports on both OT and IT networks closed and only the DMZ exposed.

DA and UA tunneling in one product: Cogent DataHub tunnels OPC DA, OPC UA, OPC A&E, Modbus, MQTT, and more across the same infrastructure. DataHub tunneling is not limited to OPC Classic; it is a general-purpose secure data transport that handles multiple protocols through the same set of tunnel connections, which simplifies the network architecture as deployments grow.

Zero inbound ports: the OT-side outbound tunnel pattern

One of the more counterintuitive but important capabilities of DataHub tunneling is the ability to move data without any inbound firewall ports open on the OT side. This addresses a common conflict between OT teams (who want to keep the plant network completely locked down) and IT teams (who need data from it).

In a standard client-server relationship, the client initiates the connection to the server. If the OPC server is on the OT side and the OPC client is on the IT side, the IT client needs to connect inward to the OT server, which requires an inbound port on the OT firewall.

DataHub reverses this. The OT-side DataHub (acting as the tunnel slave, the authoritative data source) initiates an outbound connection to the IT-side DataHub (the tunnel master). Because the connection originates from inside the OT network going outward, no inbound ports are required on the OT firewall. The IT-side DataHub then exposes the data to IT clients as an OPC server via local COM, with no connection ever coming into the plant network from outside.
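The direction reversal can be demonstrated with plain sockets: the IT side listens and the OT side dials out, yet a subscription request still travels from IT to OT over that OT-initiated connection. This toy sketch is not DHTP; the port number and JSON messages are invented for illustration.

```python
import json
import socket
import threading

results, ready = [], threading.Event()

def it_side(port):
    """IT endpoint: listens, but never dials into the OT network."""
    with socket.socket() as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", port))
        srv.listen(1)
        ready.set()
        conn, _ = srv.accept()
        with conn, conn.makefile("rw") as f:
            # The request travels IT -> OT over the OT-initiated socket.
            f.write(json.dumps({"op": "subscribe", "point": "Flow1"}) + "\n")
            f.flush()
            results.append(json.loads(f.readline()))

def ot_side(port):
    """OT endpoint: its only connection is outbound, through the firewall."""
    with socket.socket() as s:
        s.connect(("127.0.0.1", port))
        with s.makefile("rw") as f:
            req = json.loads(f.readline())
            if req["op"] == "subscribe":
                f.write(json.dumps({"point": req["point"], "value": 12.5}) + "\n")
                f.flush()

t = threading.Thread(target=it_side, args=(4503,))
t.start()
ready.wait()
ot_side(4503)
t.join()
```

The TCP handshake direction (who calls connect) is independent of the request and data direction once the socket exists, which is what makes the zero-inbound-port pattern possible.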

Advanced pattern: OT outbound through a DMZ, with no inbound OT ports
[Diagram] The OT network hosts the OPC server and its DataHub behind an outbound-only firewall with no inbound ports open. A DataHub relay node in the DMZ is the only machine with open listening ports: it accepts the connection from the OT side and passes data onward to the IT side. On the IT network, the master DataHub serves SCADA and historian OPC clients, with no inbound ports open on the IT firewall either.

OPC tunneling vs migrating to OPC UA

Engineers evaluating their options for cross-network OPC connectivity should understand how tunneling and OPC UA compare as solutions, because they address the same problem from different directions.

OPC UA solves the cross-network problem natively: it uses TCP/IP, it has built-in security, it does not depend on DCOM, and it works across firewalls on a single configured port. If both the OPC server and all client applications support OPC UA, a direct UA connection is the cleanest long-term architecture and the recommended path for new deployments.

OPC tunneling is the practical solution when that condition is not met, which describes a large proportion of real plants:

  • The OPC client application only supports OPC DA and the vendor has not released a UA version
  • The OPC DA server is embedded in hardware or legacy software that cannot be replaced in the near term
  • The migration budget or operational risk of replacing working OPC DA infrastructure is not justified for the expected value
  • There are intermediate steps in an IT/OT integration architecture where classic OPC protocols still need to traverse network boundaries

In many deployments, tunneling and OPC UA coexist in the same DataHub instance. The DataHub can simultaneously tunnel OPC DA data from legacy clients and servers while also serving as an OPC UA server to modern clients, or bridging DA data to a UA endpoint for consumption by systems that support it.

A practical migration path: Deploy DataHub tunneling now to solve the immediate DCOM and firewall problem. As client applications are upgraded or replaced with OPC UA-capable versions, those connections can be migrated to direct OPC UA without any disruption to the remaining DA tunneled connections. The DataHub supports both in parallel on the same installation, so the transition does not require a cutover.

Frequently asked questions

Does OPC tunneling work across the internet or just on local networks?

OPC tunneling works across any TCP/IP network, including wide-area networks, cellular connections, VPNs, and the public internet. The DataHub Transfer Protocol (DHTP) that Cogent DataHub uses for its tunnel connections is designed to tolerate the higher latency and variable reliability characteristics of WAN connections, unlike DCOM which breaks down under WAN conditions.

For internet-facing tunnel connections, SSL/TLS encryption and password protection should always be enabled. The DataHub also supports the Cogent DataHub Service for Azure as a cloud relay, which allows plant-side DataHub instances to connect outbound to an Azure-hosted endpoint, enabling remote monitoring and data access without any inbound firewall ports on the plant network and without requiring a VPN.

Does OPC tunneling require any changes to the OPC client or server software?

No. This is one of the key advantages of the tunneling approach. The OPC client and OPC server continue to communicate using their normal OPC interface. From the OPC server's perspective, its DataHub instance looks like just another OPC DA client connecting locally. From the OPC client's perspective, its DataHub instance looks like a local OPC DA server. Neither application needs to know that a tunnel is carrying the data across the network.

This means OPC tunneling can be deployed retroactively to an existing OPC DA system without modifying, reconfiguring, or relicensing any of the existing OPC applications. The DataHub instances are installed alongside the existing software and connect via local COM.

What happens to data quality during a network outage?

When the tunnel connection is interrupted, the DataHub immediately marks all data points sourced from the tunnel with a quality of "Not Connected." This quality change is immediately visible to any OPC clients connected to the IT-side DataHub, giving them accurate information about the reliability of the data they are receiving rather than continuing to display potentially stale values as if they were current.

If store-and-forward is enabled, the DataHub caches any data changes that occur during the outage and forwards them to downstream consumers when the connection is restored. For historian applications, this means the historical record remains complete even across temporary network interruptions.
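Store-and-forward reduces to a small buffering pattern, sketched here under invented names; it is not DataHub's implementation. Updates pass straight through while the tunnel is up, queue in order while it is down, and flush on reconnect so the historical record has no gaps.

```python
from collections import deque

class StoreAndForward:
    """Toy sketch: buffer updates during an outage, flush on reconnect."""

    def __init__(self, send):
        self.send = send          # callable that pushes one update across
        self.connected = True
        self.buffer = deque()

    def update(self, name, value, timestamp):
        if self.connected:
            self.send((name, value, timestamp))
        else:
            self.buffer.append((name, value, timestamp))   # cached locally

    def reconnect(self):
        self.connected = True
        while self.buffer:                  # forward in order: no data gaps
            self.send(self.buffer.popleft())

delivered = []
saf = StoreAndForward(delivered.append)
saf.update("Temp", 20.1, 1)
saf.connected = False            # network outage begins
saf.update("Temp", 20.4, 2)
saf.update("Temp", 20.9, 3)
saf.reconnect()                  # buffered samples arrive, history complete
```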

Can DataHub tunnel data in one direction only? Can I prevent the IT side from writing to OT devices?

Yes. Each tunnel endpoint can be configured independently as read-only or read-write. Configuring the OT-side DataHub as a read-only data source means IT-side applications cannot issue write commands back through the tunnel to OT devices, even if their OPC client software attempts to do so. The DataHub simply discards write requests from the IT direction.
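The write-discarding policy amounts to a single check at the tunnel endpoint. A hypothetical message handler, purely illustrative of the behavior described above (the operation names and cache structure are invented):

```python
def handle_tunnel_message(msg, cache, read_only=True):
    """Toy sketch: a read-only endpoint drops inbound write requests."""
    if msg["op"] == "write":
        if read_only:
            return "discarded"            # never reaches the OT device
        cache[msg["point"]] = msg["value"]
        return "written"
    if msg["op"] == "read":
        return cache.get(msg["point"])
    return None

cache = {"Valve1.Setpoint": 40.0}
read_result = handle_tunnel_message(
    {"op": "read", "point": "Valve1.Setpoint"}, cache)
write_result = handle_tunnel_message(
    {"op": "write", "point": "Valve1.Setpoint", "value": 99.0}, cache)
# Reads succeed; the write is dropped and the cache is unchanged.
```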

For environments that require strict unidirectionality with hardware-level enforcement, DataHub also supports a Data Diode mode that discards all incoming data without any processing, and works with hardware data diodes that use TCP emulation for an additional layer of physical isolation.

Can one DataHub aggregate data from multiple remote OPC servers over separate tunnels?

Yes. A single DataHub instance on the IT side can maintain tunnel connections to multiple DataHub instances at different sites simultaneously. Each site has its own DataHub connected to its local OPC server, and all of them tunnel into the central IT-side DataHub. The central DataHub aggregates all the incoming data into a single unified data set, which any OPC client on the IT network can browse and subscribe to through a single OPC connection.
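The aggregation pattern is essentially namespace prefixing: each site's points merge into the central data set under the site's name, so identically named points at different plants never collide. A minimal sketch with invented names:

```python
def merge_site(central, site, points):
    """Prefix each remote point with its site name for a unified namespace."""
    for name, value in points.items():
        central[f"{site}.{name}"] = value

central = {}
merge_site(central, "PlantA", {"Line1.Temp": 71.8, "Line1.Speed": 1450})
merge_site(central, "PlantB", {"Line1.Temp": 69.3})
all_points = sorted(central)   # one browsable data set for IT-side clients
```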

This is the standard architecture for organizations managing OPC data collection across multiple plants, remote wellheads, substations, or facilities. It replaces what would otherwise be a complex web of individual DCOM connections and provides a single managed point of integration with full visibility into the status of each site's connection.

Is tunneling still relevant now that OPC UA is widely supported?

Yes, for several reasons. First, the installed base of OPC DA infrastructure in operating plants is enormous and is not being replaced quickly. Plants that are running OPC DA reliably see no operational reason to replace it; they see a very real reason to solve the network connectivity problem without a major software migration.

Second, OPC UA resolves the firewall problem only if both the server and every client support it. In mixed environments, where some clients have been upgraded to UA but others remain DA-only, tunneling fills the gap.

Third, DataHub tunneling is not only for OPC DA. It also tunnels OPC UA through networks where direct UA connections would require inbound firewall ports on the OT side, a security concern even for UA. The combination of OPC UA's native security model with DataHub's outbound-connection tunnel architecture gives the strongest security posture for cross-zone OPC data sharing.

Need help solving OPC firewall and DCOM issues?

Software Toolbox has deployed Cogent DataHub tunneling in hundreds of plants facing exactly these problems. Whether you need help with cross-network OPC DA, DMZ architecture design, or evaluating tunneling vs OPC UA migration, our engineers have been there.

Talk to an engineer