What is OPC tunneling?
OPC tunneling is a technique for moving OPC data across a network by replacing DCOM, the Windows-specific transport that OPC Classic was built on, with a standard TCP connection. A tunneled connection works cleanly through firewalls and requires no Windows domain synchronization, no matching user accounts, and no dynamic port ranges.
The tunneling software installs on both the OPC server machine and the OPC client machine. Each instance connects locally to its respective OPC application using standard COM, the same way it would if the client and server were on the same machine. The two tunneling instances then communicate with each other over a single, configurable TCP port, effectively creating a peer-to-peer network link that carries the OPC data without DCOM being involved in the network hop at all.
From the perspective of the OPC client and the OPC server, everything looks the same as a local COM connection. The client does not know it is talking to a tunnel; the server does not know either. The tunneling layer is transparent, which means existing OPC Classic applications can be networked over firewalls and DMZs without any modification to the client or server software.
OPC tunneling vs OPC UA: OPC UA solves the same cross-network problem natively, because it is built on TCP/IP from the ground up with no DCOM dependency. If both your OPC server and all client applications support OPC UA, that is the preferred path for new deployments. OPC tunneling is the practical solution for environments where one or both sides of the connection are OPC Classic only and cannot or should not be replaced.
Why DCOM fails across networks and firewalls
Understanding why tunneling exists requires understanding what makes DCOM problematic in real plant environments. DCOM was designed in the 1990s for communication between Windows applications on a corporate LAN within the same Windows domain. It was not designed for IT/OT network boundaries, firewall-separated zones, or the security posture that modern industrial environments require.
The core DCOM problem in industrial networks: DCOM uses dynamic port allocation. The client connects to the server's DCOM endpoint on TCP port 135, which then negotiates a random high-numbered port (typically somewhere in the 49152 to 65535 range) for the actual data communication. Firewalls that separate OT and IT networks must either open the entire dynamic port range or block OPC Classic network connections entirely. Neither option is acceptable in a security-conscious environment.
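The pattern is easy to demonstrate with plain sockets. The sketch below is illustrative only: it uses a stand-in for port 135 and a toy message format, not the real MSRPC wire protocol, but it shows why a firewall between the two machines must allow a second connection to a port that is unknowable in advance.

```python
# Illustrative sketch of DCOM-style port negotiation with plain sockets.
# Port numbers and message format are stand-ins, not the real MSRPC protocol.
import random
import socket
import threading

EPMAP_PORT = 13135                         # stand-in for TCP 135 (binding 135 needs admin rights)
data_port = random.randint(49152, 65535)   # the data port is picked from the ephemeral range

# Open both listeners up front so the client below cannot race them.
epmap_srv = socket.create_server(("127.0.0.1", EPMAP_PORT))
data_srv = socket.create_server(("127.0.0.1", data_port))

def server():
    # Endpoint mapper: hand the client the randomly chosen data port.
    conn, _ = epmap_srv.accept()
    conn.sendall(str(data_port).encode())
    conn.close()
    # Serve the actual "OPC" traffic on the negotiated port.
    conn, _ = data_srv.accept()
    conn.sendall(b"tag=FT101.PV value=21.5 quality=Good")
    conn.close()

threading.Thread(target=server, daemon=True).start()

# Client side: two connections, the second to an unpredictable high port --
# exactly what a firewall between the machines would have to allow.
with socket.create_connection(("127.0.0.1", EPMAP_PORT)) as s:
    negotiated = int(s.recv(64))
with socket.create_connection(("127.0.0.1", negotiated)) as s:
    print(f"data arrived on randomly negotiated port {negotiated}:", s.recv(1024).decode())
```

A tunnel collapses both steps into one connection on one pre-agreed port, which is why the firewall rule for it is a single line.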
Beyond the port problem, DCOM has several other failure modes that engineers who have deployed OPC across networks recognize immediately:
| Issue | DCOM behavior | Tunneling behavior |
|---|---|---|
| Firewall compatibility | Fails. Requires opening an entire dynamic high port range (49152-65535), or manually locking DCOM to fixed ports | Works. Passes through any firewall that allows a single configurable TCP port outbound |
| Windows domain requirement | Fails. Requires matching user accounts and passwords on both machines, or both machines in the same Windows domain. Cross-domain or workgroup configurations require manual intervention | Works. No Windows domain dependency. Connection is authenticated at the tunneling layer with username and password configured in the tunnel software itself |
| Network failure recovery | Fails. DCOM timeouts are hardcoded and can take minutes to detect a broken connection. Client applications may appear to hang waiting for DCOM to time out | Works. Configurable heartbeat detects network breaks in as little as 50 milliseconds. The local DataHub instance continues serving clients during an outage using last known values |
| Configuration complexity | Fails. Requires dcomcnfg, Component Services, OPCEnum configuration, firewall rules, user account matching, and authentication level tuning. Any Windows Update can break a working configuration | Works. Three-step configuration in the tunneling software UI. No Windows system-level changes required |
| Cross-domain and workgroup | Fails. Requires identical local user accounts with matching passwords on both machines, and is easy to misconfigure | Works. The two machines do not need to be in the same domain, workgroup, or even the same Windows version |
| Inbound firewall ports on OT side | Fails. Requires inbound ports open on the server side for client-initiated connections | Works. Can be configured outbound-only from the OT side: the plant DataHub initiates outbound to the IT DataHub, so no inbound ports need to be open on the OT network |
| Encryption | Fails. No built-in data encryption over the network | Works. SSL/TLS encryption available with optional password protection for the tunnel connection |
How OPC tunneling works
At its core, OPC tunneling mirrors data between two DataHub instances rather than simply relaying OPC protocol commands across the network. This distinction matters for performance and resilience.
In a standard DCOM connection, every OPC read, write, and subscription command from the client travels across the network to the server, and every response travels back. The network becomes part of the OPC session latency. If the network becomes congested or briefly drops, the OPC session may be interrupted.
In a tunneling architecture, each DataHub instance maintains its own complete data cache. The OPC client on the IT side talks locally to its DataHub via COM, as if the DataHub were a local OPC server. The two DataHub instances synchronize their caches over the TCP tunnel using the DataHub Transfer Protocol (DHTP). When a value changes on the OT side, the slave DataHub detects it, sends it across the tunnel, and the master DataHub updates its local cache and notifies any subscribed clients, all without the client ever waiting on a cross-network OPC call.
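A rough sketch of that cache-mirroring idea, with newline-delimited JSON over plain TCP standing in for DHTP (whose real wire format is DataHub's own) and an invented tag name:

```python
# Minimal sketch of cache mirroring over a single TCP port. Newline-delimited
# JSON stands in for DHTP; the tag name and values are invented.
import json
import socket
import threading
import time

TUNNEL_PORT = 4502   # placeholder: the one port the firewall must allow

def it_side():
    """IT-side instance: keeps its own cache current and answers clients locally."""
    cache = {}
    with socket.create_server(("127.0.0.1", TUNNEL_PORT)) as srv:
        conn, _ = srv.accept()
        for line in conn.makefile():
            update = json.loads(line)
            cache[update["tag"]] = update["value"]
            # A local client read never crosses the network; it hits this cache:
            print("client sees:", update["tag"], "=", cache[update["tag"]])

def ot_side():
    """OT-side instance: watches the local cache and pushes only changed values."""
    cache, last_sent = {"FT101.PV": 0.0}, {}
    with socket.create_connection(("127.0.0.1", TUNNEL_PORT)) as tunnel:
        for i in range(5):
            cache["FT101.PV"] = 20.0 + i           # pretend the local OPC server updated
            for tag, value in cache.items():
                if last_sent.get(tag) != value:    # change-driven: send deltas only
                    tunnel.sendall(json.dumps({"tag": tag, "value": value}).encode() + b"\n")
                    last_sent[tag] = value
            time.sleep(0.05)

threading.Thread(target=it_side, daemon=True).start()
time.sleep(0.2)   # let the listener come up before the other side dials in
ot_side()
time.sleep(0.2)   # give the IT side time to drain the last updates
```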
What happens when the network goes down
One of the most practically important differences between DCOM and tunneling is how each handles a network interruption. With DCOM, a broken network connection is not immediately detected. DCOM has hardcoded timeout periods that can take minutes to expire, during which the OPC client may appear to hang, display stale data without a quality change, or require a restart to recover.
With Cogent DataHub tunneling, the heartbeat interval is configurable and can be set as low as 50 milliseconds. When the heartbeat is missed, the local DataHub detects the break immediately, marks all tunnel-sourced data points with a quality of "Not Connected," and continues serving the IT-side clients with that flagged status. When the network recovers, the DataHub automatically reconnects, refreshes the data cache with current values, and resumes normal operation, all without any intervention from an engineer or operator.
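The logic is straightforward to sketch. The class below is a simplified illustration, not DataHub code; the interval, tag name, and quality strings are stand-ins:

```python
# Simplified illustration of heartbeat-based break detection; not DataHub code.
import time

HEARTBEAT_INTERVAL = 0.05   # 50 ms, the documented lower bound

class TunnelCache:
    """IT-side cache that keeps answering reads when the tunnel breaks."""

    def __init__(self):
        self.points = {"FT101.PV": {"value": 21.5, "quality": "Good"}}
        self.last_heartbeat = time.monotonic()

    def on_heartbeat(self):
        # Heartbeat arrived: if we had flagged a break, re-sync the cache.
        if any(p["quality"] == "Not Connected" for p in self.points.values()):
            for p in self.points.values():
                p["quality"] = "Good"           # stand-in for a real cache refresh
        self.last_heartbeat = time.monotonic()

    def read(self, tag):
        # A missed heartbeat marks every tunnel-sourced point "Not Connected"
        # but keeps its last known value, so local clients always get an answer.
        if time.monotonic() - self.last_heartbeat > HEARTBEAT_INTERVAL:
            for p in self.points.values():
                p["quality"] = "Not Connected"
        p = self.points[tag]
        return p["value"], p["quality"]

cache = TunnelCache()
print(cache.read("FT101.PV"))   # (21.5, 'Good') while heartbeats arrive
time.sleep(0.06)                # simulate a missed heartbeat
print(cache.read("FT101.PV"))   # (21.5, 'Not Connected'): flagged, not silently stale
cache.on_heartbeat()            # network recovers
print(cache.read("FT101.PV"))   # (21.5, 'Good') after the automatic re-sync
```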
When to use OPC tunneling
OPC tunneling is not the right answer for every situation, but it is the right answer for a well-defined set of scenarios that come up regularly in real plants. The four most common:
- Moving OPC Classic data across an IT/OT firewall or DMZ where opening DCOM's dynamic port range is not acceptable
- Connecting machines in different Windows domains or in workgroups, where matching local user accounts for DCOM is impractical
- Locked-down OT networks where no inbound firewall ports can be opened on the plant side
- Deployments where one or both ends of the connection are OPC Classic only and cannot be upgraded or replaced
Cogent DataHub tunneling: capabilities in detail
Cogent DataHub, distributed by Software Toolbox, stands out as a tunneling solution for OPC Classic deployments because of the specific engineering choices made in its implementation. These are not marketing features; they are the capabilities that matter when you are managing a tunneled OPC deployment across a production environment.
DA and UA tunneling in one product: Cogent DataHub tunnels OPC DA, OPC UA, OPC A&E, Modbus, MQTT, and more across the same infrastructure. DataHub tunneling is not limited to OPC Classic; it is a general-purpose secure data transport that handles multiple protocols through the same set of tunnel connections, which simplifies the network architecture as deployments grow.
Zero inbound ports: the OT-side outbound tunnel pattern
One of the more counterintuitive but important capabilities of DataHub tunneling is the ability to move data without any inbound firewall ports open on the OT side. This addresses a common conflict between OT teams (who want to keep the plant network completely locked down) and IT teams (who need data from it).
In a standard client-server relationship, the client initiates the connection to the server. If the OPC server is on the OT side and the OPC client is on the IT side, the IT client needs to connect inward to the OT server, which requires an inbound port on the OT firewall.
DataHub reverses this. The OT-side DataHub (acting as the tunnel slave, the authoritative data source) initiates an outbound connection to the IT-side DataHub (the tunnel master). Because the connection originates from inside the OT network going outward, no inbound ports are required on the OT firewall. The IT-side DataHub then exposes the data to IT clients as an OPC server via local COM, with no connection ever coming into the plant network from outside.
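In socket terms, the only thing that changes is which side listens and which side dials. A minimal sketch, with placeholder addresses and port:

```python
# Sketch of the outbound-only pattern: the connection is established from
# inside the OT network, then data flows over it in both directions.
import socket
import threading
import time

TUNNEL_PORT = 4502   # placeholder

def it_master():
    # IT side: passively listens; the only inbound rule lives on the IT firewall.
    with socket.create_server(("127.0.0.1", TUNNEL_PORT)) as srv:
        conn, _ = srv.accept()                   # the plant dials us, never the reverse
        with conn:
            print("IT side received:", conn.recv(1024).decode())
            conn.sendall(b"ack")                 # replies ride the same socket back

def ot_slave():
    # OT side: dials OUT, so the OT firewall needs one outbound allowance and
    # zero inbound ports. Data still flows both ways on the established socket.
    with socket.create_connection(("127.0.0.1", TUNNEL_PORT)) as conn:
        conn.sendall(b"FT101.PV=21.5 quality=Good")
        print("OT side received:", conn.recv(16).decode())

threading.Thread(target=it_master, daemon=True).start()
time.sleep(0.2)   # let the IT listener come up
ot_slave()
```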
OPC tunneling vs migrating to OPC UA
Engineers evaluating their options for cross-network OPC connectivity should understand how tunneling and OPC UA compare as solutions, because they address the same problem from different directions.
OPC UA solves the cross-network problem natively: it uses TCP/IP, it has built-in security, it does not depend on DCOM, and it works across firewalls on a single configured port. If both the OPC server and all client applications support OPC UA, a direct UA connection is the cleanest long-term architecture and the recommended path for new deployments.
OPC tunneling is the practical solution when that condition is not met, which describes a large proportion of real plants:
- The OPC client application only supports OPC DA and the vendor has not released a UA version
- The OPC DA server is embedded in hardware or legacy software that cannot be replaced in the near term
- The migration budget or operational risk of replacing working OPC DA infrastructure is not justified for the expected value
- There are intermediate steps in an IT/OT integration architecture where classic OPC protocols still need to traverse network boundaries
In many deployments, tunneling and OPC UA coexist in the same DataHub instance. The DataHub can simultaneously tunnel OPC DA data from legacy clients and servers while also serving as an OPC UA server to modern clients, or bridging DA data to a UA endpoint for consumption by systems that support it.
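DataHub itself is configured through its UI rather than code, but the bridging idea can be sketched with the open-source asyncua package (pip install asyncua) as a stand-in: values arriving over the tunnel, represented here by a plain dict, are republished as nodes on a UA endpoint that modern clients can browse.

```python
# Sketch of DA-to-UA bridging using the open-source asyncua package as a
# stand-in; DataHub does this internally and is configured via its UI.
# The tunnel_cache dict represents values arriving over the tunnel.
import asyncio
from asyncua import Server

tunnel_cache = {"FT101.PV": 21.5}   # fed by the tunnel in a real deployment

async def main():
    server = Server()
    await server.init()
    server.set_endpoint("opc.tcp://0.0.0.0:4840/bridge/")
    idx = await server.register_namespace("http://example.com/tunnel-bridge")
    obj = await server.nodes.objects.add_object(idx, "TunneledData")
    nodes = {tag: await obj.add_variable(idx, tag, val)
             for tag, val in tunnel_cache.items()}

    async with server:               # UA clients can now connect on port 4840
        while True:
            for tag, node in nodes.items():
                await node.write_value(tunnel_cache[tag])   # mirror cache to UA
            await asyncio.sleep(1)

asyncio.run(main())
```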
A practical migration path: Deploy DataHub tunneling now to solve the immediate DCOM and firewall problem. As client applications are upgraded or replaced with OPC UA-capable versions, those connections can be migrated to direct OPC UA without any disruption to the remaining DA tunneled connections. The DataHub supports both in parallel on the same installation, so the transition does not require a cutover.
