
CONNECT

Data services

  • Last Updated: Oct 29, 2025

CONNECT data services are designed to handle high-throughput ingestion and querying of time-series data. Performance is influenced by data density, namespace topology, and batching strategies.

Change broker

Test bed description

  • Default configuration of 8 Streams data services stream processor instances and 8 Change Broker update processor instances.

  • No formal test data for high-density scenarios.

Performance metrics

| Performance metric | Value | Notes |
| --- | --- | --- |
| Max throughput | 100,000 events/sec | 30,000,000 events over 5 minutes. |
| Sign up activation | <0.6s for 150,000 streams; ~3.1s for 1,000,000 streams | Fast activation times. |
| Stream add/remove | <0.8s for 100,000 streams | Efficient updates. |

Typical use cases

  • High-frequency queries and batching for data transfer efficiency.

Additional notes

  • Using fewer sign ups, each covering more streams, is optimal. Batching improves performance.

  • For large-scale deployments, creating and updating sign ups iteratively is recommended to avoid performance bottlenecks.

  • OMF ingress allows manual batching; grouping 1,000 streams with 10 timestamp pairs each per message is recommended (see the sketch below). PI to AVEVA Data Hub ingress auto-optimizes batching.
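A minimal sketch of the batching recommendation above, assuming hypothetical OMF containers named `stream-0` through `stream-999` and a simple type with `Timestamp` and `Value` properties. The endpoint URL, token handling, and container IDs are placeholders and should be checked against the current OMF and CONNECT data services documentation.

```python
# Sketch: build one OMF "data" message that batches 1,000 containers,
# each with 10 timestamp/value pairs, and send it in a single request.
# Container IDs, type shape, endpoint, and token handling are assumptions.
import json
import gzip
from datetime import datetime, timedelta, timezone

import requests

OMF_ENDPOINT = "https://<region>.datahub.connect.aveva.com/api/v1/Tenants/<tenantId>/Namespaces/<namespaceId>/omf"  # placeholder
ACCESS_TOKEN = "<bearer-token>"  # obtained from the OAuth client-credentials flow

def build_batch(start: datetime, streams: int = 1000, pairs: int = 10) -> list:
    """One OMF data message: a list of {containerid, values[]} groups."""
    message = []
    for s in range(streams):
        values = [
            {"Timestamp": (start + timedelta(seconds=i)).isoformat(), "Value": float(i)}
            for i in range(pairs)
        ]
        message.append({"containerid": f"stream-{s}", "values": values})
    return message

payload = gzip.compress(json.dumps(build_batch(datetime.now(timezone.utc))).encode())
response = requests.post(
    OMF_ENDPOINT,
    data=payload,
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Content-Type": "application/json",
        "Content-Encoding": "gzip",
        "messagetype": "data",      # OMF message classification
        "messageformat": "JSON",
        "omfversion": "1.2",
        "action": "create",
    },
    timeout=30,
)
response.raise_for_status()
```

Sending one compressed message for the whole group keeps the request count low, which is the point of the 1,000-stream, 10-pair grouping.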

Streams data services

Test bed description

  • Ingress tested with 500K streams x 5 properties x 1 event/sec.

  • No formal performance data for high-density scenarios.

Performance metrics

| Performance metric | Value | Notes |
| --- | --- | --- |
| Ingress | 2.5M property values/sec | All events could be queried within 10 seconds. |
| Egress: end-of-stream queries | 25,000 req/sec | 10% of streams changing per minute. |
| Egress: sampled data queries | 5,000 req/sec | 8 hours of data. |
| Egress: summaries | 250 req/sec | 8 hours of data. |
| Egress: data view emulation | 1 query/hour | 200 streams, 1 year of data. |
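The egress rows above correspond to different SDS read calls. The sketch below assumes the SDS REST data endpoints (`.../Data/Last`, `.../Data/Sampled`, `.../Data/Summaries`) with placeholder tenant, namespace, and stream IDs; endpoint paths and query parameters should be verified against the current API reference.

```python
# Sketch: the three read patterns from the egress table, against one stream.
# Base URL, IDs, and token are placeholders; endpoint/parameter names are
# assumptions based on the SDS REST API and should be verified.
import requests

BASE = ("https://<region>.datahub.connect.aveva.com/api/v1"
        "/Tenants/<tenantId>/Namespaces/<namespaceId>/Streams/<streamId>")
HEADERS = {"Authorization": "Bearer <token>"}

# End-of-stream query: latest stored event.
last = requests.get(f"{BASE}/Data/Last", headers=HEADERS, timeout=30)

# Sampled data query: reduce 8 hours of data to ~400 representative events.
sampled = requests.get(
    f"{BASE}/Data/Sampled",
    params={
        "startIndex": "2025-01-01T00:00:00Z",
        "endIndex": "2025-01-01T08:00:00Z",
        "intervals": 400,
        "sampleBy": "Value",   # property used to pick samples
    },
    headers=HEADERS,
    timeout=30,
)

# Summaries query: aggregate 8 hours of data into 8 one-hour summary intervals.
summaries = requests.get(
    f"{BASE}/Data/Summaries",
    params={
        "startIndex": "2025-01-01T00:00:00Z",
        "endIndex": "2025-01-01T08:00:00Z",
        "count": 8,
    },
    headers=HEADERS,
    timeout=30,
)

for r in (last, sampled, summaries):
    r.raise_for_status()
```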

Typical use cases

  • Namespace-based scaling for large datasets.

Additional notes

  • The system sustained these loads with stable latency and resource utilization.

  • Scaling via namespace is effective.

  • Topology impacts performance. Splitting data across multiple namespaces improves performance and organizational clarity (a partitioning sketch follows this list).

  • Increased data density can lead to higher read and write load as well as higher memory usage, which may affect ingress and egress performance.
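One way to act on the namespace guidance above is to assign streams deterministically to a fixed set of namespaces. The namespace names and the routing rule below are illustrative assumptions, not part of the product.

```python
# Sketch: deterministic routing of streams across several namespaces.
# Namespace names and the hashing rule are illustrative assumptions.
import hashlib

NAMESPACES = ["plant-a", "plant-b", "plant-c", "plant-d"]

def namespace_for(stream_id: str) -> str:
    """Stable assignment of a stream to one namespace."""
    digest = hashlib.sha256(stream_id.encode()).digest()
    return NAMESPACES[digest[0] % len(NAMESPACES)]

# Example: route ingress and queries per stream so that no single
# namespace's density (streams x properties x event rate) grows unbounded.
print(namespace_for("line-07.temperature"))
```

In practice the split often follows organizational lines (per plant or per site) rather than a hash, but any stable rule achieves the same goal of bounding per-namespace load.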

Assets

Test bed description

  • Tested in an integration environment.

  • Local testing was attempted with 2 million assets; the system was not functional at that scale.

Performance metrics

| Performance metric | Value | Notes |
| --- | --- | --- |
| Max tested asset count | 1,000,000 | Integration environment testing. Performance degradation was observed at this scale. |
| Max asset size | 4 MB | Typical assets are much smaller. |
| Metadata property count | 60 | A significant ingress rate drop was observed compared to assets with no metadata. |
| Storage limit | 1 TB | Memory constraints are reached before the storage limit. |

Additional notes

  • At 1,000,000 assets, degradation includes slow startup and increased memory usage.

  • Query performance varies by query type: equals is the most efficient, while endswith and similar queries are slower (see the sketch after this list).

  • Use multiple namespaces to improve scalability and reduce memory load per asset processor.
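A minimal sketch of the query-type difference above, using a hypothetical `search_assets` helper against an assumed asset search endpoint. The endpoint path and query syntax are assumptions and should be checked against the Assets API reference.

```python
# Sketch: exact-match vs suffix-style asset searches.
# The endpoint path and query syntax are assumptions; the helper is hypothetical.
import requests

ASSETS_URL = ("https://<region>.datahub.connect.aveva.com/api/v1-preview"
              "/Tenants/<tenantId>/Namespaces/<namespaceId>/Assets")
HEADERS = {"Authorization": "Bearer <token>"}

def search_assets(query: str, count: int = 100) -> list:
    """Hypothetical wrapper around the asset search endpoint."""
    resp = requests.get(
        ASSETS_URL,
        params={"query": query, "skip": 0, "count": count},
        headers=HEADERS,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

# Equality-style search: cheapest, resolves directly against indexed values.
pumps = search_assets('name:"Pump-101"')

# Suffix-style (endswith-like) search: the leading wildcard cannot use the
# index as effectively, so expect lower throughput at scale.
all_101 = search_assets("name:*-101")
```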

Events store

Test bed description

  • Model based on 10 plants, each with 10 production lines and 5 pieces of equipment.

  • 12-month simulation period.

Performance metrics

| Performance metric | Value | Additional context |
| --- | --- | --- |
| Total events per year (per plant) | 70,346,400 | Based on MES transactional model. |
| Total events (10 plants) | 703,464,000 | 12-month period. |
| Processing rate | 184,000 events/hour | Across 10 production plants. |
| Retrieval performance | 50,000 events ≤ 1 second; 100,000 events ≤ 3 seconds; 250,000 events ≤ 5 seconds; 500,000 events ≤ 8 seconds | Scalable access times. |
| Average event size | 22 properties | Range from 10 to 40 properties. |
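Given the retrieval figures above, large result sets are best pulled in pages of at most 50,000 events, the size the table retrieves in about one second. The sketch below uses a hypothetical `fetch_events(offset, count)` client call, since the concrete events store query API is not described here.

```python
# Sketch: page through a large event query in chunks of <= 50,000 events.
# fetch_events is a hypothetical client call, not a documented API.
from typing import Callable, Iterator, List

PAGE_SIZE = 50_000

def iter_events(fetch_events: Callable[[int, int], List[dict]]) -> Iterator[dict]:
    """Yield events page by page until a short (final) page is returned."""
    offset = 0
    while True:
        page = fetch_events(offset, PAGE_SIZE)
        yield from page
        if len(page) < PAGE_SIZE:
            break
        offset += PAGE_SIZE

# Usage: total = sum(1 for _ in iter_events(my_client.fetch_events))
```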
