Copyright ©
Mindbreeze GmbH, A-4020 Linz, 2024.
All rights reserved. All hardware and software names used are brand names and/or trademarks of their respective manufacturers.
These documents are strictly confidential. The submission and presentation of these documents does not confer any rights to our software, our services and service outcomes, or any other protected rights. The dissemination, publication, or reproduction hereof is prohibited.
For ease of readability, gender-specific language has been omitted. The corresponding terms and definitions apply to all genders within the meaning and intent of the equal treatment principle.
Retrieval Augmented Generation (RAG) is a natural language processing technique that combines the strengths of query-based and generative artificial intelligence (AI) models. In a RAG-based AI system, a query model is used to find relevant information in existing data sources. The generative model then takes the retrieved information, synthesises the data and transforms it into a coherent, contextual response.
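The two-step flow described above can be sketched as follows. This is a minimal, self-contained illustration: the function names and the toy word-overlap ranking are assumptions for demonstration, not Mindbreeze APIs.

```python
# Minimal sketch of the RAG flow: a query step finds relevant passages,
# a generative step turns them into an answer. Illustrative only.

def retrieve(question: str, index: dict[str, str], top_n: int = 3) -> list[str]:
    """Toy query model: rank indexed passages by word overlap with the question."""
    words = set(question.lower().split())
    ranked = sorted(index.values(),
                    key=lambda text: len(words & set(text.lower().split())),
                    reverse=True)
    return ranked[:top_n]

def generate(prompt: str) -> str:
    """Stand-in for the LLM call; a real system sends the prompt to an LLM."""
    return "Generated answer based on:\n" + prompt

def answer(question: str, index: dict[str, str]) -> str:
    passages = retrieve(question, index)                      # query step
    prompt = "Context:\n" + "\n".join(passages) + "\nQuestion: " + question
    return generate(prompt)                                   # generative step
```

In a production system, the query step is the Mindbreeze search service and the generative step is the configured LLM endpoint; the sketch only shows how the two parts hand data to each other.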
You can find the instructions on how to set up an LLM in Configuration - InSpire AI Chat and Insight Services for Retrieval Augmented Generation.
To configure a Large Language Model (LLM) for your pipelines, open the menu item “RAG” and switch to the area "LLMs".
Click on "Add" and select the respective LLM to configure it. There are currently four LLMs available for integration:
The following settings can be configured for the respective LLM.
When creating an OpenAI LLM, a dialog opens that refers to the Data Privacy Disclaimer. To continue, you must accept this disclaimer.
Attention: When using the OpenAI API, chat inputs from the user and information indexed by your organization are transmitted to the respective endpoints via prompts. The handling of the transmitted information is governed by the data protection regulations of the respective AI provider. Mindbreeze is not responsible for the further data processing. The AI provider is neither a vicarious agent nor a subcontractor of Mindbreeze. We would like to point out that, according to current assessments, the lawful use of AI services is not guaranteed (precautionary note pursuant to GDPR Art. 28 para. 3 sentence 3). For further information and risks, please refer to the relevant privacy policies of the respective AI provider.
For more information, visit https://openai.com/enterprise-privacy.
By confirming the checkbox, you as the Data Controller instruct Mindbreeze to carry out this transmission nevertheless and acknowledge the note as outlined above.
Setting | Description |
API Key (required) | The API Key. |
Model (required) | The name of the OpenAI LLM to use. |
With “Test Connection” you can check whether the given values are valid and a connection can be established.
For the areas “General”, “Prompt” and “Test”, please see the chapter General Parts of the LLM Settings.
When creating an Azure OpenAI LLM, a dialog opens that refers to the Data Privacy Disclaimer. To continue, you must accept this disclaimer.
Attention: When using the Azure OpenAI API, chat inputs from the user and information indexed by your organization are transmitted to the respective endpoints via prompts. The handling of the transmitted information is governed by the data protection regulations of the respective AI provider. Mindbreeze is not responsible for the further data processing. The AI provider is neither a vicarious agent nor a subcontractor of Mindbreeze. We would like to point out that, according to current assessments, the lawful use of AI services is not guaranteed (precautionary note pursuant to GDPR Art. 28 para. 3 sentence 3). For further information and risks, please refer to the relevant privacy policies of the respective AI provider.
For more information, visit https://learn.microsoft.com/en-us/legal/cognitive-services/openai/data-privacy
By confirming the checkbox, you as the Data Controller instruct Mindbreeze to carry out this transmission nevertheless and acknowledge the note as outlined above.
Setting | Description |
URL (required) | URL of the LLM Endpoint. |
API Key (required) | The API Key. |
Setting | Description |
Azure Deployment (required) | Name of the Azure Deployment. |
For the areas “General”, “Prompt” and “Test”, please see the chapter General Parts of the LLM Settings.
Setting | Description |
URL (required) | URL of the LLM Endpoint. |
Setting | Description |
User Message Token, User Message End Token, Assistant Message Token, Assistant Message End Token, Message End Token | To be filled in depending on the model. (Default: only the Message End Token is used, with the value “</s>”.) |
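To illustrate how these tokens frame the conversation, the sketch below assembles a raw prompt from the configured token values. The function and its defaults are an assumption for illustration; the actual token values are model-specific and must be taken from the model's documentation.

```python
def build_prompt(messages, *,
                 user_token="", user_end_token="",
                 assistant_token="", assistant_end_token="",
                 message_end_token="</s>"):
    """Assemble a raw prompt from (role, text) pairs using the configured
    tokens. With the defaults, only the Message End Token "</s>" is
    appended, matching the default described in the table above."""
    parts = []
    for role, text in messages:
        if role == "user":
            parts.append(f"{user_token}{text}{user_end_token}{message_end_token}")
        else:
            parts.append(f"{assistant_token}{text}{assistant_end_token}{message_end_token}")
    return "".join(parts)
```

For a model that expects explicit role markers, the user and assistant tokens would be set accordingly (for example chat-template markers), while the defaults reproduce the simple “</s>”-terminated format.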
For the areas “General”, “Prompt” and “Test”, please see the chapter General Parts of the LLM Settings.
When creating an Aleph Alpha LLM, a dialog opens that refers to the Data Privacy Disclaimer. To continue, you must accept this disclaimer.
Attention: When using the Aleph Alpha API, chat inputs from the user and information indexed by your organization are transmitted to the respective endpoints via prompts. The handling of the transmitted information is governed by the data protection regulations of the respective AI provider. Mindbreeze is not responsible for the further data processing. The AI provider is neither a vicarious agent nor a subcontractor of Mindbreeze. We would like to point out that, according to current assessments, the lawful use of AI services is not guaranteed (precautionary note pursuant to GDPR Art. 28 para. 3 sentence 3). For further information and risks, please refer to the relevant privacy policies of the respective AI provider.
For more information, visit https://aleph-alpha.com/data-privacy/
By confirming the checkbox, you as the Data Controller instruct Mindbreeze to carry out this transmission nevertheless and acknowledge the note as outlined above.
Setting | Description |
API Key (required) | The API Key. |
Model (required) | The name of the Aleph Alpha LLM to use. |
With “Test Connection” you can check whether the given values are valid and a connection can be established.
For the areas “General”, “Prompt” and “Test”, please see the chapter General Parts of the LLM Settings.
Setting | Description |
Name | Name of the Large Language Model. |
Max Answer Length (Tokens) | Limits the number of generated output tokens (1 token ≈ 1 word). The value "0" does not limit the tokens. Limit the answer length to prevent overly long answers and to reduce the load on the LLM endpoint. Attention: Make sure that the prompt length plus the max answer length does not exceed the context length of the model. |
Overwrite Randomness (Temperature) | If activated, the default temperature of the LLM is overwritten with the configured “Randomness of Output”. |
Randomness of Output (Temperature) | Controls the randomness of the generated answer (0 – 100%). Higher values will make the output more creative, while lower values will make it more focused and deterministic. |
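Two of the settings above lend themselves to a quick sanity check. The sketch below verifies the context-length caution (prompt plus answer must fit into the model's context window) and maps the 0–100% randomness value onto a model temperature. Both helpers and the linear mapping are assumptions for illustration; real token counts depend on the model's tokenizer.

```python
def fits_context(prompt_tokens: int, max_answer_tokens: int,
                 context_length: int) -> bool:
    """Check the caution above: prompt length plus max answer length
    must not exceed the model's context length."""
    return prompt_tokens + max_answer_tokens <= context_length

def to_model_temperature(randomness_percent: float,
                         max_temperature: float = 1.0) -> float:
    """Map the 0-100% "Randomness of Output" setting onto a model
    temperature range (linear mapping assumed for illustration)."""
    return randomness_percent / 100.0 * max_temperature
```

For example, a 3500-token prompt with a 1000-token answer limit does not fit into a 4096-token context window, so either the retrieved context or the max answer length would need to be reduced.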
Setting | Description |
Preprompt | A preprompt is used to apply specific roles, intents and limitations to each subsequent prompt of a Model. |
Prompt Examples | See the chapter Prompt Examples. |
These examples are displayed in the Mindbreeze InSpire AI Chat as sample questions. Accordingly, frequently asked questions are suitable as prompt examples. By clicking on a prompt example, this question is automatically entered in the Mindbreeze InSpire AI Chat. A prompt example can be created by clicking on "Add" and filling in the following fields:
Setting | Description | Example |
Title | The title of the prompt example. This text is displayed in the Mindbreeze InSpire AI Chat. | Ask for the number of Mindbreeze Connectors. |
Prompt | The question or instruction entered in the Mindbreeze InSpire AI Chat. | How many connectors does Mindbreeze provide? |
Click on "Save" to save the prompt example. Any number of prompt examples can be created. Once all prompt examples have been created, save the entire LLM to save the changes.
On this page the configuration can be tested. Keep in mind that the generated text is not based on retrieved documents.
After testing the LLM Settings click on “Save” to save the LLM.
To create a pipeline, switch to the section "Generative Pipelines". Click on "Add" to start creating a new pipeline.
The creation of a pipeline is divided into five sections:
The individual sections are described in more detail in the following chapters.
In the section “General”, the following general settings can be set:
Setting | Description |
Name | Name which is displayed in the Mindbreeze InSpire AI Chat. |
Description | Description of the pipeline. |
Version | A generated Version ID. |
Based on version | The previous version, on which this version is based. |
Version Name | When a pipeline is released, a version name must be specified. The version name is not displayed in the Mindbreeze InSpire AI Chat and is used to track changes to the pipeline. The version name should contain a short summary of the changes. |
Version Description | A more detailed description of the changes in the pipeline. |
In the section "Prompt Examples" example questions can be added to a pipeline that are displayed in the Mindbreeze InSpire AI Chat. If no example questions are defined in the pipeline, the example questions are taken from the LLM. If no example questions are defined in the LLM either, no example questions are displayed in the Mindbreeze InSpire AI Chat. For more information on prompt examples, see the chapter Creation of Prompt Examples.
Once the necessary settings have been made, you can continue to the next section by clicking "Next" or by clicking on the desired section in the navigation bar on the left.
This area currently has no impact on the pipeline and can be skipped.
In the "Examples" section, you can create a new dataset or add an existing one. This area currently has no effect on the pipeline. In the future, the datasets in this section can be used to evaluate the pipeline, so that the effects of a change can be analysed.
If a new dataset is created here, it is automatically added to the pipeline. For more information, see the chapter Adding data.
If you have already created one (or more) datasets that you would like to use in this pipeline, you can add them by clicking on "Add an existing Dataset". In the context menu, you can select several datasets and use them in your pipeline by clicking "Add".
Only indices that have the "Semantic Sentence Similarity Search" feature enabled can provide answers for generation in the Mindbreeze InSpire AI Chat.
The retrieval part of the RAG can be configured in the "Retrieval" section. The following settings are available:
Setting | Description |
Search Service | Client Service to use for search. |
Skip SSL Certificate Verification | If this is enabled, the verification of the SSL certificate will not be performed. |
Only Process Content | If this property is set, only responses from the content metadata will be used. Otherwise, responses from all metadata will be processed. |
Maximum Answer Count | The first n answers are processed and used for the prompt. Hint: If n = 0 and prompt logging is disabled in app.telemetry, the answer columns in the CSV logging will have no headers for the answer details. |
Minimal Answer Score [0-1] | Only answers with an answer score greater than this value are processed. |
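The combined effect of "Maximum Answer Count" and "Minimal Answer Score" can be sketched as a simple filter-then-truncate step. The function and the answer structure are illustrative assumptions, not the product's internal representation.

```python
def select_answers(answers, max_count: int, min_score: float):
    """Apply the two retrieval settings above: drop answers whose score
    does not exceed "Minimal Answer Score", then keep the first n
    according to "Maximum Answer Count"."""
    return [a for a in answers if a["score"] > min_score][:max_count]
```

Note that with a maximum answer count of 0 no answers are passed to the prompt, which is consistent with the CSV logging hint above.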
In the section "Constraints", the pipeline can be refined and customized to the respective requirements.
Setting | Description |
Allow Search Request Template Override | Allows the values of the search query template to be overwritten via API requests. Only relevant if the API is used directly. For more information, see api.chat.v1beta.generate Interface Description. |
Search Constraint | When searching using the search service, the value in this field (if any) is also used as a condition in the search. |
Include Data Source | If one or more data sources are included, all other data sources are automatically excluded. |
Exclude Data Source | If one or more data sources are excluded, all other data sources are automatically included. |
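The include/exclude semantics above can be expressed as an allow-list versus a deny-list. The sketch below mirrors that behaviour; giving "Include" precedence when both are set is an assumption for illustration, as the document does not specify the interaction.

```python
def effective_sources(all_sources, include=(), exclude=()):
    """Including sources excludes everything else (allow-list);
    excluding sources keeps everything else (deny-list).
    Precedence of include over exclude is assumed here."""
    if include:
        return [s for s in all_sources if s in include]
    return [s for s in all_sources if s not in exclude]
```

The resulting list corresponds to what the "Effective Data sources" section displays for the selected search service.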
The section "Effective Data sources" provides an overview of the effective data sources of the selected search service.
In the section “Generation”, you can configure the prompt generation, select the LLM and the prompt templates that are filled with the search results and then sent to the configured LLM.
Select the LLM you have created in the setting "Model". By selecting the LLM, you will receive a summary made up of the following points:
Setting | Description |
Model | Defines which LLM is selected. |
Use Conversation History | If this setting is activated, the content from the previous conversations is used for generating the next answer. |
Max used Conversation History Message | This setting is only effective if "Use Conversation History" is active. Limits the number of chat history messages that are used for generation. If the value is "0", all chat history messages are used. This setting ensures that the requests to the LLM do not become too large in the case of longer chats. Recommended values: 1-5. |
Max Answer Length (Tokens) | This setting overwrites the LLM setting “Max Answer Length (Tokens)” if the value is greater than 0. |
Randomness of Output (Temperature) | This setting overwrites the LLM setting “Randomness of Output (Temperature)” if the value is greater than 0. |
Allow Builtin Prompt Template Variables Override | Allows the system prompt template variables ({question}, {summaries}) to be overwritten. Only relevant if the API is used directly. For more information, see api.chat.v1beta.generate Interface Description. |
Summaries Sources (per Result) | The template which processes the received answers into a text for the prompt. Depending on the desired information from the answer, the following placeholders can be inserted:
|
Prompt Template | The template contains the instructions for the LLM. To process the questions and answers in the prompt, the placeholders {question} (the user's question) and {summaries} (the processed search results) should be inserted. |
Prompt Template (if no search results were found) | If the service has activated the setting "Generate with empty Results" and the search service does not find any answers for a question, then an optional prompt can be specified for the generation. The following placeholders can be inserted:
|
Display Retrieved Sources | If this setting is activated, the last retrieved sources according to the setting “Max Retrieved Source Count” are attached at the end of the generated answer text. By default, the setting “Prompt Template” instructs the model to provide the relevant sources, independent of this setting. When this setting is active, it is recommended to adjust the setting “Prompt Template” to avoid duplicate sources in the generated answer. |
Retrieved Source Template | The Template defines how each source should be displayed. The following placeholder must be inserted: {source} for the source. |
Retrieved Sources Template | The template displays the retrieved source template summaries. The following placeholder must be inserted:
|
Max Retrieved Source Count | This setting defines how many retrieved sources should be displayed. |
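How the built-in placeholders are resolved at generation time can be sketched as a simple substitution step. The function is illustrative; joining the per-result summaries with newlines is an assumption, as the exact concatenation used by the product is not specified here.

```python
def fill_prompt(template: str, question: str, summaries: list[str]) -> str:
    """Fill the built-in placeholders {question} and {summaries}.
    {summaries} stands for the concatenated per-result texts produced
    by the "Summaries Sources (per Result)" template."""
    return template.replace("{question}", question).replace(
        "{summaries}", "\n".join(summaries))
```

The filled-in prompt is what is ultimately sent to the configured LLM, which is why overly long summary templates directly increase the prompt length.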
In the "Test" section, you can test the pipeline settings and check whether the settings you have made fulfil the requirements.
A pipeline can have several versions, each of which has a status:
After a pipeline has been created or edited, a draft version of it exists. To finalise the draft version, it must be released. To do this, select the pipeline you have created and click on "Release Version". Enter a version name and, optionally, a version description. Then click on "Release Version".
To use a pipeline in Mindbreeze InSpire AI Chat, it is necessary to release a version. To do this, select a pipeline that already contains a released version and click on "Publish". Only released pipeline versions can be published.
In the dialogue box, select a pipeline version that you want to publish and check the display name and description. If a version of the pipeline has already been published, you will find information on the published version of the pipeline above the selection field.
Then click "Publish" to publish the selected version. After publishing, the version number of the published version should appear in the column "Published".
It is then possible to select and use the created pipeline in the Mindbreeze InSpire AI Chat.
Select a published pipeline and click on "Publish". You will find information about the published version in the dialogue box. Then click on "Unpublish".
The pipeline should no longer have a value for "Published" in the overview. The pipeline is now no longer available in the Mindbreeze InSpire AI Chat.
If you have a Producer-Consumer setup, the RAG configuration can be synchronized to all nodes by clicking “Sync to consumer”.
This area currently has no impact on the pipeline and can be skipped. This area will also be revised in the coming releases.
The creation of datasets is necessary for the evaluation of the pipelines in later development stages of the RAG.
Click on "Create new dataset" and give the dataset a name under "Name of dataset". Then add data to a dataset by clicking on "Add".
The following fields can be filled in:
Column name | Key for CSV/JSON | Description |
Question | question | The question. Mandatory field. |
Answer | answer | The expected answer to the question. |
Source | source | The expected source from which the answer is obtained. |
Score | score | Value of the answer, which describes how good the answer is. |
Answer Passage | answer_passage | The passage from which the answer is taken. |
Click on "Save" to save the data of the dataset.
In addition to the manual creation of data, a file can also be uploaded. To do this, click on "Upload file". Specify whether the data from the file should extend or overwrite the existing entries.
Please note that only one file can be uploaded. The following file types are compatible: JSON and CSV.
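A minimal JSON upload file using the keys from the table above could be produced as follows. The values are placeholders, not real data, and the file name is arbitrary.

```python
import json

# Hypothetical dataset entry using the CSV/JSON keys listed above.
dataset = [{
    "question": "Example question?",                          # mandatory field
    "answer": "Expected answer.",
    "source": "https://example.com/document",
    "score": 0.9,
    "answer_passage": "Passage from which the answer is taken.",
}]

serialized = json.dumps(dataset, indent=2)
with open("dataset.json", "w", encoding="utf-8") as f:
    f.write(serialized)
```

A CSV upload would use the same keys as column headers, with one row per dataset entry.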
Once the file has been successfully loaded, click on "Add". Then save the dataset.