
PI to CONNECT Agent

PI to Data Hub Agent 2.2.1539 release notes

  • Last Updated Mar 18, 2026
  • 6 minute read

Overview

This release covers the PI to Data Hub Agent, a component that is installed on-premises to replicate data, assets, and licensed PI Point counts from PI System to CONNECT data services. Version 2.2.1539, released December 11th, 2023, fixes several bugs and has performance improvements.

Customers who are upgrading to this version should be aware of a behavior change (item 96143 under Fixes). After upgrading, your agent may connect via Windows Authentication rather than a PI trust. Although this behavior change is desirable—Windows Authentication is the more secure option—it is possible that your agent will now connect to Data Archive with a PI identity that does not have permission to read the data in your transfer. The expectation is that few, if any, customers will be adversely affected.

Customers upgrading to this version should also consider changing OperationTimeout to 300 in C:\ProgramData\OSIsoft\PIToOCS\appsettings.json (item 97181 under Enhancements).

Although PI to Data Hub Agent Version 2.2.1539 is an optional upgrade, it is highly recommended for all customers. A summary of the most important fixes and enhancements is highlighted in the list below. For a full list of changes, see the sections Enhancements and Fixes below.

  • If your agent connects to Data Archive 2023 or greater, upgrade to this version. This release contains a fix (item 97857 under Fixes) so that certain license files are now correctly identified as unlimited. Correct identification of the license is essential for the new licensing model, AVEVA PI Data Infrastructure – aggregate tag. More information on this feature can be found in the PI to CONNECT release history.

  • If you are seeing performance issues (slow transfers), upgrade to this version. This release contains performance improvements (item 97168 under Enhancements). For example, in a transfer where the agent was in eastern United States and the CONNECT data services namespace in western United States, there would be enough latency between agent and cloud (~100 milliseconds) to cause significant slowdowns, especially when a large number of string or digital PI Points were included in the transfer or when there were a large number of PI Points in the transfer with a small number of events per archive.

  • If you are seeing data gaps or issues with your transfer appearing “stuck”, upgrade to this version. Specifically, this release contains the following Fixes: 74686 (fix transfer progress), 99974 (agent should retry forever when errors are retriable).

  • If you are seeing data inconsistencies between Data Archive and CONNECT data services, upgrade to this version. Specifically, this release contains the following Fixes: 95557 (fix for range delete with backfill).

For more information on product features and functions, including system requirements and installation/uninstallation instructions, refer to PI to CONNECT Agents documentation.

Enhancements

Work Item

Description

97181

The default operation timeout for the agent has been increased from 30 seconds to 300 seconds (5 minutes). The default was changed because Data Archive RPCs typically have timeouts of 270 seconds (4 minutes and 30 seconds). The mismatch in timeouts led to cascading problems and data loss. The new default takes effect only for new installations and must be manually applied for upgrades. Customers upgrading to this version should consider changing OperationTimeout to 300 in C:\ProgramData\OSIsoft\PIToOCS\appsettings.json to avoid putting unnecessary load on Data Archive when timeouts occur.
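For upgraded installations, the new timeout can be applied by editing the agent's configuration file. A minimal sketch of the relevant fragment follows; this assumes OperationTimeout sits at the top level of appsettings.json, so check the existing file for its actual location and preserve the other keys already present:

```json
{
  "OperationTimeout": 300
}
```

Restart the agent service after saving the file so the new value takes effect.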

97168

When sending historical data to the cloud, messages are now batched together. This significantly improves performance under some conditions. The biggest improvements are seen when there is high latency between agent and cloud (such as a transfer from an agent in eastern US to a namespace in western US) and when a large number of string or digital PI Points are being transferred.

97704

Added a log message that reports the breakdown of the PI Points included in the transfer by point type.

97705

Added a new appsettings.json setting, HistoricalReadTaskCount, to allow the preferred number of Data Archive read threads to be configured. Typically, this value should not be changed except when advised by technical support.

97706

Changed the behavior of an existing appsettings.json setting, HistoricalSendTaskCount, so that it can be increased to at least 12, regardless of processor count. This setting can be used to increase the parallelization of sends to the cloud, possibly improving performance in high-latency scenarios, such as transferring data from an agent in eastern US to a namespace in western US. The benefits of increasing HistoricalSendTaskCount should be weighed against the increased likelihood of sending data out of order to the cloud. Typically, this value should not be changed, except when advised by technical support.
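If technical support advises tuning these task counts, the settings are added to the same appsettings.json file. The fragment below is a sketch only; the values shown are illustrative (not defaults), and the keys are assumed to sit at the top level of the file alongside OperationTimeout:

```json
{
  "HistoricalReadTaskCount": 4,
  "HistoricalSendTaskCount": 12
}
```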

97195

Added extra logging around the "transport message too large" errors so that root cause can be determined from the log message. Example error: Transport message too large to persist in Event Hub and will not be sent! Actual: 253075 Max: '235929'.

Fixes

Work Item

Description

97857

Treat license count of 999,999,999 as an unlimited license for purposes of charging Flex credits for AVEVA PI Data Infrastructure – aggregate tag PI Servers.

97183

The agent now caps the maximum events per query at 150,000 when timeouts occur or when errors occur that might indicate Data Archive is busy. Previously, the 150,000-event cap was applied only when ArcMaxCollect was exceeded (error -11091). The new behavior is to cap requests at 150,000 events when any of the following occur:

  • PI timeout exception

  • Error -11140 (Archive_MaxQueryExecutionSec exceeded)

  • Error -10746 (MaxMessageLength exceeded)

    A customer ran into a situation where switching to 150,000-event chunks would have reduced pressure on Data Archive. While backing up Data Archive, they were querying streams with over 1 million events in a particular archive, running into timeouts on the agent side (PI timeout exception) and exceeding Archive_MaxQueryExecutionSec on the server side.

74686

The transfer could appear to get "stuck" during the Historical transfer step. For example, if there were 10 archives in the transfer, the progress reported in the CONNECT data services portal could indicate, indefinitely, that archive 9 of 10 was being transferred. Although this was only an issue with the progress report, this problem, combined with item 99974 below, would make it appear that the agent was still attempting to fill data gaps when, in actuality, the historical portion of the transfer was already complete.

99974

During the historical transfer step, data gaps could occur if there were communication problems with Data Archive that lasted longer than 5 minutes. This problem has been fixed. The agent now retries indefinitely when retriable errors occur.

78720

If Data Archive was shut down during agent registration, the agent would stop unexpectedly. This problem has been fixed.

98497

When messages with event IDs of 43 and 122 appeared in the event log, the agent could get into a bad state and consume all memory on the computer. The memory growth problem has been mitigated. The agent will now log an error message and move on to the next message in the queue.

95557

Extra or incorrect values could be sent by the PI to Data Hub Agent after a range delete operation followed by a backfill. This operation is common in AF Analytics when recalculating analyses. The incorrect-value problem has been fixed in the agent. Note, however, that there will still be some cases in which extra values appear in the cloud, though these extra values are within the compression deviation tolerance. This work item addresses only agent-side fixes for range delete. A separate bug on the cloud side, described by work item 2824330, caused the cloud service to process range delete and individual delete events out of the order sent by the agent. Both of these problems are now fixed.

98282

When starting a transfer, the transfer job can fail to start if its status is requested from the cloud service before all steps are added to the transfer. This bug could affect the startup phase of any transfer. This problem has been fixed.

97194

Fixed a thread safety issue when reporting agent progress. The error reported in the log was: "System.InvalidOperationException: Collection was modified; enumeration operation may not execute."

97167

Transfer edits could cause a very large number of time ranges to be resent to CONNECT data services. This problem has been fixed.

96143

Windows Authentication is now favored over PI trusts for connections to Data Archive. In all previous versions (2.2.1163 and earlier), the order was reversed: the agent would first try to connect using a PI trust (less secure) before trying to connect using Windows security (more secure).

Security information and guidance

We are committed to releasing secure products. This section is intended to provide relevant security-related information to guide your installation or upgrade decision.

We proactively disclose aggregate information about the number and severity of security vulnerabilities addressed in each release. The tables below provide an overview of security issues addressed and their relative severity based on standard scoring.

Distribution Kits

Product: PI to Data Hub Agent Installation

Software Version: 2.2.1539
