
AVEVA™ Work Tasks

Best Practices for Maintaining AVEVA Work Tasks Databases

  • Last Updated: Jun 16, 2025
  • 7 minute read

A database must be maintained well to achieve optimum performance, reliability, and high availability, and a Database Professional has to perform the required maintenance tasks to achieve this. This document gives a brief overview of the essential maintenance tasks to be performed for the AVEVA Work Tasks database.

Note: It is highly recommended that a Database Professional performs the tasks mentioned in this document. The purpose of the document is to provide guidelines for maintenance activities. Essential maintenance tasks may vary based on specific implementation.

Databases in AVEVA Work Tasks

AVEVA Work Tasks uses the following databases.

  • Farm Database, which stores the Farm Configuration settings.

  • Repository Database, which contains tables pertaining to each repository.

    Note: It is recommended to maintain separate databases for Farm and Repository (datasource).

Figure: Databases used by AVEVA Work Tasks.

Maintenance Plan

Database maintenance includes a defined set of proactive tasks that a Database Professional needs to perform on a periodic basis to help ensure that databases perform optimally and maintain high availability. In many cases, the individual maintenance steps might not, by themselves, seem all that important, but the cumulative effect of performing database maintenance tasks can be substantial.

Note: It is recommended to schedule database maintenance tasks at a frequency appropriate to the environment, the intensity of transactions, and regulatory and organizational requirements.

The following sections describe the most important database maintenance tasks, categorized under high availability and optimum performance.

High Availability and Disaster Recovery

Backup and Restore

Protection of data is of prime importance to any organization. To protect the database, a proper disaster recovery plan and backup process should be in place. Based on the daily transaction count, a full backup should be taken periodically, that is, either weekly or daily.

Note: It is recommended to schedule full backups and transaction log backups periodically, based on the transaction volume.
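
The following is a minimal sketch of such a schedule in T-SQL, assuming a repository database named WorkTasksRepository and local backup paths (both are placeholders for your environment); in practice, these statements would be scheduled as SQL Server Agent jobs:

    -- Weekly (or daily) full backup
    BACKUP DATABASE WorkTasksRepository
        TO DISK = N'D:\Backups\WorkTasksRepository_Full.bak'
        WITH INIT, CHECKSUM;

    -- Frequent transaction log backups between full backups
    BACKUP LOG WorkTasksRepository
        TO DISK = N'D:\Backups\WorkTasksRepository_Log.trn'
        WITH INIT, CHECKSUM;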

Managing MDF Files

An MDF file is the primary data file of a SQL Server database. Maintaining the MDF file is important for optimum performance and availability of the database. Ideally, the MDF file should be placed on the I/O device with the best I/O throughput.

Pre-size the database based on the size requirement. For more information about database sizing, see Guidelines for Sizing Database.

You can also determine the size requirement by performing the following:

  1. Watching the database growth for multiple weeks/months

  2. Calculating the size for a longer period of time

  3. Setting the database size in advance

The two major benefits of pre-sizing databases (a sizing sketch follows this list) are:

  • SQL Server does not have to depend on autogrowth to grow the database, so performance is not affected when autogrowth would otherwise occur.

  • Each autogrowth event can contribute to physical file fragmentation.
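
As a sketch, a database can be pre-sized at creation time; the file names, paths, and sizes below are illustrative placeholders derived from your own sizing estimate:

    -- Pre-size data and log files so autogrowth is rarely triggered
    CREATE DATABASE WorkTasksRepository
    ON PRIMARY
        (NAME = WorkTasksRepository_Data,
         FILENAME = N'D:\Data\WorkTasksRepository.mdf',
         SIZE = 20GB, FILEGROWTH = 1GB)
    LOG ON
        (NAME = WorkTasksRepository_Log,
         FILENAME = N'E:\Logs\WorkTasksRepository.ldf',
         SIZE = 5GB, FILEGROWTH = 512MB);

An existing database can be resized in the same spirit with ALTER DATABASE ... MODIFY FILE.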

Managing LDF Files

An LDF file stores the database transactions that allow a database to be restored to a specific point in time in the event of a failure. SQL Server transaction log files are NTFS files that have the file extension .ldf. A database can have multiple log files that reside on one or more drives. The growth of the file depends on the volume of transactions against the database and on the maintenance strategy.

The backup strategy affects the growth of the LDF files. The LDF file size might increase when the database Recovery Model is set to Full. Therefore, to reduce the LDF file size, it is recommended to schedule a full backup followed by regular transaction log backups, or to shrink the file.

A Database Professional can achieve this using internal database commands such as DBCC SHRINKFILE.
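
For example, assuming the placeholder database and logical file names used earlier, the log can be backed up and then shrunk as follows (the target size is in MB):

    -- Back up the log first so its inactive portion can be released
    BACKUP LOG WorkTasksRepository
        TO DISK = N'D:\Backups\WorkTasksRepository_Log.trn';

    -- Shrink the log file to roughly 1 GB
    DBCC SHRINKFILE (WorkTasksRepository_Log, 1024);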

Note: Do not place the LDF file on the same drive as the MDF file.

Checking for Corruption

Corrupt databases are the worst nightmare of any Database Professional. In any environment, from small business to enterprise, a compromise of the integrity and availability of the data can constitute a business emergency or cause financial and reputational loss. Therefore, early detection and prevention of data corruption is essential, which can be achieved by meticulous planning. Ideally, the DBCC CHECKDB command should be run periodically to check for database corruption.
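
A minimal periodic check, assuming the placeholder database name used earlier, could be:

    -- Full logical and physical integrity check; schedule during low activity
    DBCC CHECKDB (WorkTasksRepository) WITH NO_INFOMSGS, ALL_ERRORMSGS;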

Data Corruption Detection

Following is a standard checklist used for data corruption detection:

  • List the “Page Verify Setting” for all of your Production and System Databases (see the query sketch after this list).

  • Do you regularly and proactively check for potential database corruption?

  • What method(s) do you use to identify potential database corruption?

  • How often do you check for potential database corruption in each Production and System Database?

  • List the average time taken to perform database corruption detection per database.

  • Have you documented what you would do if data corruption is detected?
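
For the first item, the page verify setting of every database can be listed with a query such as the following sketch:

    -- CHECKSUM is the generally recommended page verify option
    SELECT name, page_verify_option_desc
    FROM sys.databases;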

Eliminating Physical File Fragmentation

When the OS writes a file to disk and contiguous clusters are not available, the content is written elsewhere on the disk.

  • When a file is stored in a non-contiguous manner on disk, the file is considered to be physically fragmented.

  • Physical file fragmentation can contribute to an additional load on your I/O subsystem and reduce I/O performance because the disk has to work harder (thrash) to read and write data.

Ways to minimize physical file fragmentation:

  • Ensure there is no physical file fragmentation before creating new database and log files.

  • Pre-size MDF and LDF files instead of allowing autogrowth to automatically size files. This helps prevent physical file fragmentation.

  • Regularly defragment arrays used for backups as they can become fragmented over time.

Physical File Fragmentation

The following need to be checked periodically:

  • List the physical file fragmentation for every array with MDF, LDF, BAK, and TRN files.

  • Monitor File Fragmentation.

  • Regularly perform Physical File Defragmentation.

  • What tool do you use to perform physical file defragmentation?

  • Before creating or growing MDF or LDF files, do you check for Physical Fragmentation first?

Managing Indexes

AVEVA Work Tasks is a highly database-dependent application and ships with all the indexes required for optimum performance of the Workflow Engine. However, because AVEVA Work Tasks can be extended through customization, a customized component may degrade the performance of AVEVA Work Tasks and its components. Therefore, it is advised that the indexing needs of customized components, and the extent of their usage, are reviewed from a performance perspective.

Custom tables are tables created by the user for specific needs, such as new custom Lists, custom application-specific tables, and so on. For example, if an application uses the AVEVA Work Tasks List infrastructure, then users have to create indexes on those custom tables.

You can use the Database Engine Tuning Advisor to arrive at the index requirement. Understanding the characteristics of your database, queries, and data columns can help you design optimal indexes.

It is also recommended to perform the following actions periodically (a fragmentation-handling sketch follows the note below):

  • Detect Index Fragmentation

  • Fix Index Fragmentation

  • Reorganize or Rebuild Indexes

  • Create missing indexes

  • Identify and remove unused indexes

Note: It is recommended to add only the required indexes, and avoid adding too many indexes.
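
A common approach to detecting and fixing index fragmentation uses the sys.dm_db_index_physical_stats dynamic management function; the sketch below runs in the current database, and the thresholds (5% and 30%) and the table name dbo.MyCustomTable are illustrative assumptions:

    -- Report fragmented indexes in the current database
    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           i.name AS index_name,
           ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
        ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    WHERE ips.avg_fragmentation_in_percent > 5;

    -- Typical remediation, per index or table:
    ALTER INDEX ALL ON dbo.MyCustomTable REORGANIZE;  -- moderate fragmentation (5-30%)
    ALTER INDEX ALL ON dbo.MyCustomTable REBUILD;     -- heavy fragmentation (> 30%)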

Maintaining Statistics

SQL Server collects statistical information about indexes and column data stored in the database. These statistics are used by the SQL Server Query Optimizer to choose the most efficient plan for performing INSERT, SELECT, DELETE or UPDATE queries.

The statistical maintenance functionality of SQL Server, AutoStat, may generate unwanted overhead on a production system by performing one of the following actions:

  • Initiating statistical updates during heavy production periods.

  • Initiating an excessively high number of UPDATE STATISTICS processes at a given point in time.

Note: It is recommended to analyze the pros and cons before choosing a statistics update mode, and to maintain statistics based on the requirement.
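
As a sketch, statistics can be refreshed manually, and the asynchronous update mode can be enabled per database (the database name is a placeholder):

    -- Update out-of-date statistics across the current database
    EXEC sp_updatestats;

    -- Let queries proceed while statistics are refreshed in the background
    ALTER DATABASE WorkTasksRepository
        SET AUTO_UPDATE_STATISTICS_ASYNC ON;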

Statistics Maintenance

Following is a standard checklist used for index and statistics maintenance:

  • What method(s) do you use to maintain Index and Column Statistics?

  • How often do you run jobs to maintain Index and Column Statistics?

  • If you use the REBUILD method to defragment indexes, do you also update Statistics?

  • If you use the REORGANIZE method to defragment indexes, do you also update Statistics?

  • If you update Statistics independent of Index Defragmentation, list the average time taken per database.

  • Do you have “AUTO_UPDATE_STATISTICS_ASYNC” turned on? If yes, list those databases.

  • Do you use Filtered Statistics (SQL Server 2008 and later)? If yes, are they documented?

Archiving Data

Growth in the AVEVA Work Tasks database size adversely affects the performance of workflow execution. The database size can be reduced by archiving data.

Note: It is recommended to archive the data based on year, financial year, or so on depending on business and regulatory requirements.

An AVEVA Work Tasks archival script is available for SQL Server. Contact customer support for more details and for the script.

Table Partitioning

SQL Server supports table and index partitioning. The data of partitioned tables and indexes is divided into units that can be spread across more than one filegroup in a database. The data is partitioned horizontally, so that groups of rows are mapped to individual partitions. All partitions of a single index or table must reside in the same database, and the table or index is treated as a single logical entity when queries or updates are performed on it. This can provide better performance.

A significant performance improvement of the AVEVA Work Tasks engine can be achieved on SQL Server Enterprise editions by using the table partitioning feature.
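
The following is a minimal sketch of the feature, using a hypothetical history table partitioned by year; the object names, boundary dates, and single-filegroup mapping are illustrative assumptions (production systems typically spread partitions across several filegroups):

    -- Partition function: boundary values split rows by year
    CREATE PARTITION FUNCTION pfByYear (datetime2)
        AS RANGE RIGHT FOR VALUES ('2023-01-01', '2024-01-01', '2025-01-01');

    -- Partition scheme: map every partition to the PRIMARY filegroup here
    CREATE PARTITION SCHEME psByYear
        AS PARTITION pfByYear ALL TO ([PRIMARY]);

    -- Hypothetical table created on the partition scheme
    CREATE TABLE dbo.WorkItemHistory (
        WorkItemId  bigint    NOT NULL,
        CompletedOn datetime2 NOT NULL
    ) ON psByYear (CompletedOn);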

Conclusion

These are the best practices for maintaining AVEVA Work Tasks databases. The implementation can be customized as per your requirements. Periodic database maintenance by a Database Professional is required to achieve optimum performance, reliability, and availability of the database.

The following are the key points for maintaining AVEVA Work Tasks Databases:

  • Pre-size database based on transactions.

  • Plan periodic database and log file backups.

  • Manage indexes and maintain statistics.

  • Archive data if database growth affects performance.

  • Follow a periodic and systematic backup strategy.

  • Eliminate index fragmentation.

  • Detect corrupted database pages.
