Plan your data migration from the on-premises EFM database
- Last Updated: Dec 29, 2025
The deleteList table in the AVEVA Measurement Advisor database should be used as a guide for the data migration. AVEVA recommends testing the migration of all tables from your on-premises database to the AVEVA Measurement Advisor tables (installed from the BACPAC file) to verify that the procedure is correct. It is difficult to fix the data once it has been imported.
All SQL commands below should be run against the AVEVA Measurement Advisor EFM database that was imported from the BACPAC file unless otherwise noted.
-
To find the object types that need to be migrated, run the following select statement:
select distinct objectTypeName
from deleteList
where objectTypeName not in ('User')
Note: Do not migrate the following: Baseline unit profiles; Areas, Groups, and their Connectivity for those coming from AVEVA Enterprise SCADA; Users (Users must be reconfigured in CONNECT)
-
For each object type, plan to import the configuration first, followed by the data.
-
To determine the order in which you will migrate your data, run the following SQL statement:
select objectTypeName, tableName, isData, deleteOrder
from deleteList
where isReplicated = 'Y'
and objectTypeName = 'Meter' -- Note: change 'Meter' to the desired objectTypeName
order by objectTypeName, isData, deleteOrder desc
-
Ensure the intended objects exist before their connectivity is migrated. For example, make sure you have created all measurement points, meters, and gas qualities before migrating the mpRefer table. This applies to any refer table that you are migrating.
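As a sanity check before migrating a refer table, a query along the following lines can flag rows that reference an object that has not yet been created. (This is a sketch: the mpRefer and meter column names shown here are assumptions; verify them against your schema.)
-- Illustrative only: flag mpRefer rows whose meterId has no matching meter.
select R.*
from mpRefer R
left join meter M
on M.meterId = R.meterId
where M.meterId is null
A similar left-join check can be run against each refer table you plan to migrate, once per referenced object type.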
-
You may choose to migrate some or all of your data. For example, if you have an object list in your existing measurement system, you may wish to migrate the object list and rules to build it but not migrate the objects that belong to that list. Then, you would simply rebuild the list in the cloud database. Optionally, you might not want to migrate problem or rejected data if that is no longer of interest.
-
Decide on your plan for migrating the transactional data. Ensure your cloud database has the necessary capacity. If your data size is too large and you encounter errors, contact your customer representative to increase the maximum size of your database.
To migrate your data faster, load it in primary key (PK) order. For example, the primary key for the hourlyMeterReading table is meterId, time, auditVersion. If you have five years of data to migrate, it will be faster to migrate all five years for a given meter than to migrate month-by-month batches across all the meters.
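For instance, a per-meter load in primary-key order might look like the following sketch. (The linked-server name onPremSource, the meter id 1001, and the value column are assumptions for illustration; substitute your own source reference and column list. sp_spaceused is a standard SQL Server procedure you can use to check current size and unallocated space in the target database.)
-- Check remaining capacity in the target database.
exec sp_spaceused
-- Illustrative per-meter load in primary-key order (names are assumptions).
declare @meterId int = 1001
insert into hourlyMeterReading (meterId, time, auditVersion, value)
select meterId, time, auditVersion, value
from onPremSource.EFM.dbo.hourlyMeterReading
where meterId = @meterId
order by meterId, time, auditVersion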
-
Decide on your data batching plan. Most configuration can be migrated in a single batch, with the exception of objItem (depending on the volume of changes).
-
Large data tables should be migrated in batches. The batching strategy should take into account the primary key of the table. The primary key can be found by using the object explorer in SQL Server Management Studio, or you can run the following query:
-- In this example, we return the PK for the hourlyMeterReading table.
select O.name as tableName, I.name as indexName, C.name as columnName
from sysobjects O
join sys.indexes I
on O.id = I.object_Id
and I.is_primary_key = 1
join sys.index_columns IC
on O.id = IC.object_id
and IC.index_id = I.index_id
join syscolumns C
on C.id = O.id
and C.colId = IC.column_id
where O.name = 'hourlyMeterReading'
order by O.name, I.name, IC.index_column_id
In this case, meterId, time, auditVersion is returned.
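Given that key, one batching approach is to walk the leading key column (meterId) and copy one meter's rows per batch. The following is an illustrative sketch only: the linked-server name onPremSource and the value column are assumptions, and you should adapt the batch boundary (for example, a range of meters or a time window) to your data volume.
-- Illustrative batching sketch: copy hourlyMeterReading one meter at a time,
-- so each batch aligns with the leading primary-key column.
declare @meterId int =
(select min(meterId) from onPremSource.EFM.dbo.hourlyMeterReading)
while @meterId is not null
begin
insert into hourlyMeterReading (meterId, time, auditVersion, value)
select meterId, time, auditVersion, value
from onPremSource.EFM.dbo.hourlyMeterReading
where meterId = @meterId
order by meterId, time, auditVersion
set @meterId =
(select min(meterId) from onPremSource.EFM.dbo.hourlyMeterReading
where meterId > @meterId)
end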
-