How Industrial AI Assistant (Preview) works
- Last Updated: Dec 01, 2025
- 2 minute read
The architecture that supports Industrial AI Assistant (Preview) is based on three key components:
- AVEVA Unified Engineering and AVEVA Administration
  Your data is securely hosted in CONNECT. This is the same secure repository used for all other data shown in AVEVA Unified Engineering and AVEVA Administration, which also host the Industrial AI Assistant (Preview) interface where you can make requests and ask questions.
- Industrial AI Assistant (Preview)
  The application that processes your requests and manages interactions with AVEVA Unified Engineering, AVEVA Administration, and the large language model.
- Large Language Model (LLM)
  Supports the assistant by intelligently assessing each request and translating it into a series of search and data queries. The LLM also summarizes the results within the context of the request so that the most relevant information is returned to the user.
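The division of responsibilities among the three components can be sketched in a few lines of Python. This is purely illustrative: the class names (`DataStore`, `LanguageModel`, `Assistant`) and all methods are hypothetical stand-ins, not part of any AVEVA or CONNECT API.

```python
class DataStore:
    """Stands in for CONNECT: the secure repository that hosts your data."""
    def __init__(self, records):
        self._records = records

    def search(self, keyword):
        # Return records containing the keyword (case-insensitive).
        return [r for r in self._records if keyword.lower() in r.lower()]


class LanguageModel:
    """Stands in for the LLM: assesses a request and summarizes results."""
    def to_queries(self, request):
        # Toy "intelligent assessment": one search query per significant word.
        return [w for w in request.split() if len(w) > 3]

    def summarize(self, request, results):
        # Toy summary of the results in the context of the request.
        return f"Found {len(results)} item(s) relevant to: {request}"


class Assistant:
    """Stands in for Industrial AI Assistant (Preview): the orchestrator
    between the user interface, the data store, and the LLM."""
    def __init__(self, store, llm):
        self.store, self.llm = store, llm

    def ask(self, request):
        results = []
        for query in self.llm.to_queries(request):
            results.extend(self.store.search(query))
        return self.llm.summarize(request, results)
```

For example, `Assistant(DataStore(["Pump P-101 datasheet"]), LanguageModel()).ask("find pump documents")` returns a one-line summary of the matching records. The point of the sketch is the separation of roles: the data never leaves the store except as search results the assistant collates for the LLM.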
When you enter a request into the assistant, the following process occurs:
1. Industrial AI Assistant (Preview) receives the request and prepares the required information for the LLM. This involves a combination of:
   - Applying guardrail instructions that prevent unethical and irrelevant questions from being answered.
   - Preparing a list of available tools and information that can be used to reply to the request. This is based on the solutions you have available in AVEVA Unified Engineering and AVEVA Administration.
2. The LLM then receives the information prepared by the assistant and performs two functions:
   - Guardrail assessment. If a question is inappropriate, a response is generated at this point to close the request.
   - Intent analysis. The LLM looks at the language used in the request and determines the type of information that is required.
3. Industrial AI Assistant (Preview) applies the request to the information available in CONNECT data services and summarizes the relevant content.
4. The LLM receives the summarized information and responds with an answer to the request.
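The steps above can be sketched end to end as a small Python pipeline. All function names, the guardrail word list, and the intent rules are invented for illustration; the real service's internal logic is not public.

```python
# Toy guardrail list: requests mentioning these topics are closed immediately.
BLOCKED_TOPICS = {"bypass", "exploit"}


def apply_guardrails(request):
    """Guardrail assessment: return False if the request is inappropriate."""
    return not any(topic in request.lower() for topic in BLOCKED_TOPICS)


def analyze_intent(request):
    """Intent analysis: determine the type of information required (toy rules)."""
    if "document" in request.lower():
        return "document_search"
    return "data_query"


def collate_and_summarize(request, intent, data_service):
    """Apply the request to the data service and summarize relevant content."""
    hits = data_service.get(intent, [])
    return {"request": request, "intent": intent, "hit_count": len(hits)}


def answer(request, data_service):
    """Run the whole flow: guardrails, intent, summarization, response."""
    if not apply_guardrails(request):
        # Inappropriate question: the request is closed at this point.
        return "This request cannot be answered."
    intent = analyze_intent(request)
    summary = collate_and_summarize(request, intent, data_service)
    # Final step: the LLM turns the summary into a reply (stubbed here).
    return f"{summary['hit_count']} result(s) for intent '{summary['intent']}'"
```

With a toy data service such as `{"document_search": ["P&ID rev C"]}`, a request like "show me the pump documents" is routed to `document_search` and answered from the summary, while a request containing a blocked topic is closed before any data is queried.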

This architecture allows your data to be handled in the following ways:
- User requests (the text you enter in the assistant) may be monitored to troubleshoot errors and to improve responses where user feedback is provided.
- Operational information (such as raw production data and documents) remains stored in AVEVA Unified Engineering and AVEVA Administration. The assistant accesses it only while collating summarized data, and it is not retained.
- Summarized data (the data received by the LLM to process responses) is not retained.