
AVEVA™ Vision AI Assistant

Terms and concepts

  • Last Updated: Feb 12, 2024
  • 5 minute read

Common Terms

The following terms are common in the AVEVA Vision AI Assistant documentation.

Skill

A skill is an Artificial Intelligence (AI) model created for a specific purpose.

A skill follows a lifecycle. For more information, see Skill workflow.

The lifecycle stage of the skill can be identified with a status. For more information, see Skill status.

Dataset

Datasets are input images or videos provided to the skill. Datasets are classified as follows:

  • Training dataset – This is the dataset initially provided to train the skill. A small percentage of this dataset is withheld during training. This subset is referred to as a validation dataset.

  • Validation dataset – After training, the skill prediction is verified with the validation dataset to determine metrics like skill accuracy and confusion matrix.

  • Testing dataset – After the skill is deployed, any images or videos received during runtime are referred to as the testing dataset. For AVEVA Vision AI Assistant, it refers to the images received from the camera specified as the monitoring source.

  • Retraining dataset – On retraining, any feedback or dataset provided to the skill is referred to as retraining dataset. This dataset is used to further tune the skill and improve its prediction.
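The split between the training and validation datasets described above can be sketched as follows. This is a minimal illustration only: the fraction actually withheld by AVEVA Vision AI Assistant is not documented here, so the 20% used below is an assumption.

```python
import random

def split_dataset(images, validation_fraction=0.2, seed=42):
    """Withhold a fraction of the training dataset as a validation dataset."""
    shuffled = images[:]  # copy so the caller's list is untouched
    random.Random(seed).shuffle(shuffled)
    n_val = int(len(shuffled) * validation_fraction)
    # Returns (training set, validation set)
    return shuffled[n_val:], shuffled[:n_val]

train, val = split_dataset([f"img_{i}.png" for i in range(100)])
# With 100 images and a 0.2 fraction: 80 training images, 20 validation images
```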

Deploy

Deploying a skill implies that it is ready to be used in a production setting. The skill has been trained and produces acceptable results.

Downscale, Upscale, or Resize Images

Upscaling resizes an image to a larger size without loss of quality or information. Similarly, downscaling resizes an image to a smaller size without loss of information. When an image is only resized (neither upscaled nor downscaled), its resolution is increased or decreased with some loss of quality.

References to information or loss of quality refer to details of the actual image such as edges of objects, color, and difference in texture.

Epoch

An epoch refers to one cycle through the full training dataset. Training a skill usually takes more than a few epochs. For a Discrete State Detection skill, the training is run for 20 epochs. Each epoch improves the skill's ability to correctly predict the classification or anomaly.
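The 20-epoch training run mentioned above can be sketched as a simple loop. This is illustrative pseudologic only, not the product's actual training code:

```python
def train_skill(dataset, epochs=20):
    """Run `epochs` full passes over the training dataset, counting examples seen."""
    examples_seen = 0
    for epoch in range(epochs):
        for example in dataset:    # one epoch = one cycle through the full dataset
            examples_seen += 1     # a real trainer would update model weights here
    return examples_seen

# 20 epochs over a 50-example dataset processes 1000 examples in total
total = train_skill(list(range(50)))
```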

False Positive Alarms

This occurs when a good image (without anomaly) is wrongly classified as an anomaly and presented as an alarm.

Loss

Loss is the penalty for a bad prediction. Loss is a number indicating how bad the model's prediction was on a single example. If the model's prediction is perfect, the loss is zero. Otherwise, the loss is greater. The goal of training a model is to reduce loss on average across all examples.
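The definition above can be made concrete with a simple squared-error loss. The choice of squared error is an illustrative assumption; the loss function actually used by the product is not documented here.

```python
def squared_loss(prediction, actual):
    """Penalty for a single example: zero when the prediction is perfect."""
    return (prediction - actual) ** 2

def average_loss(predictions, actuals):
    """The training goal: reduce the mean loss across all examples."""
    return sum(squared_loss(p, a) for p, a in zip(predictions, actuals)) / len(actuals)

perfect = squared_loss(1.0, 1.0)        # perfect prediction -> loss of 0.0
overall = average_loss([1.0, 0.0], [1.0, 1.0])  # one perfect, one wrong -> 0.5
```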

Skill Status

Each skill is tagged with a status to identify the stage of the lifecycle the skill is in. For more information, see Skill status.

Retention

AVEVA Vision AI Assistant has implemented retention strategies for classified images and skill predictions:

  • Training – The training dataset is stored indefinitely. Deleting the skill deletes all the training images and skill predictions.

  • Retrain – A copy of the image and prediction is saved indefinitely as part of the training dataset.

  • Skill History – You can select a Skill History starting from one hour to a custom value. Both images and skill prediction are deleted after the retention period:

    • Retention Period = Time of First Prediction + Skill History Value
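The retention formula above can be expressed directly in code. This sketch assumes the Skill History value is a duration in hours; the units configurable in the product may differ.

```python
from datetime import datetime, timedelta

def retention_deadline(first_prediction_time, skill_history_hours):
    """Retention Period = Time of First Prediction + Skill History Value."""
    return first_prediction_time + timedelta(hours=skill_history_hours)

# With a 24-hour Skill History, data from 09:00 on Feb 12 is kept until 09:00 on Feb 13
deadline = retention_deadline(datetime(2024, 2, 12, 9, 0), skill_history_hours=24)
```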

Validation

As part of training a skill, the training data is split into two parts:

  • Training set – Used to train the skill.

  • Validation set – Only used to evaluate the skill's performance.

    Metrics on the training set let you see how your model is progressing in terms of its training, but it is metrics on the validation set that let you get a measure of the quality of your skill – how well it is able to make new predictions based on data it has not seen before.

Discrete State Detection

The following terms are specific to Discrete State Detection skills.

Confidence Score

After a trained skill is previewed or deployed, the skill assesses each image and provides a confidence score. The score conveys how close the skill believes that it has predicted or classified the image correctly. The confidence score improves if the skill is retrained with new images and user feedback.

Confusion Matrix

A Confusion Matrix visualizes the accuracy of a classifier by comparing the actual and predicted classes. For a Discrete State Detection skill, the data is organized in two classes. For example, consider classes Good and Bad with 50 images each.

Every column of the matrix represents a predicted class. Every row of the matrix corresponds with an actual class. In this example, 60 images were predicted as good while 40 were predicted as bad, which differs from the actual totals of 50 for each class.

The matrix is composed of four cells:

  • True Good – Displays the count of good images the skill predicted correctly as good.

  • False Good – Displays the count of bad images the skill predicted as good.

  • False Bad – Displays the count of good images the skill predicted as bad.

  • True Bad – Displays the count of bad images the skill predicted correctly as bad.
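The four cells can be computed by counting (actual, predicted) pairs. The sketch below reproduces the Good/Bad example above (50 actual images per class, 60 predicted as good and 40 as bad); the split of 45/5 and 15/35 within each class is an assumed illustration.

```python
from collections import Counter

def confusion_matrix(actuals, predictions):
    """Count (actual, predicted) pairs for a two-class Good/Bad classifier."""
    counts = Counter(zip(actuals, predictions))
    return {
        "True Good":  counts[("Good", "Good")],
        "False Good": counts[("Bad", "Good")],   # bad images predicted as good
        "False Bad":  counts[("Good", "Bad")],   # good images predicted as bad
        "True Bad":   counts[("Bad", "Bad")],
    }

actuals     = ["Good"] * 50 + ["Bad"] * 50
predictions = ["Good"] * 45 + ["Bad"] * 5 + ["Good"] * 15 + ["Bad"] * 35
matrix = confusion_matrix(actuals, predictions)
# 45 + 15 = 60 images predicted as good; 5 + 35 = 40 predicted as bad
```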

Skill Accuracy

Accuracy is the percentage of classifications that a skill gets correct during validation. For example, if the skill correctly classifies 81 images out of 100, then the skill accuracy is 81%. Accuracy is how well the skill performs overall, unlike confidence, which refers to how confident the skill is in a particular prediction.
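The 81-out-of-100 example above corresponds to this simple calculation:

```python
def skill_accuracy(actuals, predictions):
    """Percentage of validation classifications the skill got right."""
    correct = sum(a == p for a, p in zip(actuals, predictions))
    return 100.0 * correct / len(actuals)

# 81 correct classifications out of 100 validation images -> 81% accuracy
acc = skill_accuracy(["Good"] * 100, ["Good"] * 81 + ["Bad"] * 19)
```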

Anomaly Detection

The following terms are specific to Anomaly Detection skills.

Anomaly Score

After the skill is trained, an anomaly score is calculated for each image in the validation dataset to determine the baseline anomaly score. The anomaly score is a value from 0 to 100, which indicates the significance of the anomaly compared to the training dataset. A high anomaly score indicates a higher probability that the skill detected an anomaly in the current image.

After training, if the tolerance level for the skill is changed, then the skill only displays images with an anomaly score greater than the tolerance score.
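The filtering described above amounts to a threshold on the anomaly score. The sketch below illustrates the idea; the image names and data layout are assumptions, not the product's actual API.

```python
def flag_anomalies(scored_images, tolerance):
    """Keep only images whose anomaly score (0-100) exceeds the tolerance."""
    return [name for name, score in scored_images if score > tolerance]

scored = [("img_a", 12.0), ("img_b", 55.0), ("img_c", 91.5)]
flagged = flag_anomalies(scored, tolerance=50)  # only img_b and img_c are shown
```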

Tolerance False Positive Rate

For an Anomaly Detection skill, there are two possible test outcomes:

  • Positive outcome – The image is correctly classified as an anomaly.

  • Negative outcome – The image is not an anomaly but was predicted as one. This negative outcome is referred to as a False Positive.

    Tolerance False Positive Rate (for a skill) = (False Positive Images/Total Validation Images) * 100

    It is expressed as a percentage.
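The formula above translates directly to code. The 4-out-of-200 figures below are an assumed example:

```python
def tolerance_false_positive_rate(false_positive_images, total_validation_images):
    """Tolerance False Positive Rate = (False Positives / Total Validation Images) * 100."""
    return 100.0 * false_positive_images / total_validation_images

# 4 false positives out of 200 validation images -> a rate of 2.0%
rate = tolerance_false_positive_rate(4, 200)
```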

Tolerance

This tuning parameter can be used in anomaly skills to increase or decrease the sensitivity of the prediction. The higher the tolerance, the less sensitive the skill is in its prediction, resulting in fewer alarms. Conversely, the lower the tolerance, the more sensitive the skill is, resulting in more alarms. The scale is 0 to 100.

User Defined Pipeline

The following terms are specific to User Defined Pipeline skills.

Block

A graphical representation of an action to perform. Each block in a Pipeline has a unique functionality and performs a specific action on the input, such as transforming the image or generating statistics.

Canvas

The development area that you use to build a Pipeline.

Pipeline

A sequence of actions to perform on your training data. This provides a customized event detection skill that meets the requirements for your specific use case.
