Copyright © Mindbreeze GmbH, A-4020 Linz, 2023.
All rights reserved. All hardware and software names used are trade names and/or trademarks of the respective manufacturers.
These documents are strictly confidential. The transmission and presentation of these documents alone do not create any rights to our software, to our services and service results or to any other protected rights. The transfer, publication or reproduction is not permitted.
For reasons of simpler readability, gender-specific differentiation, e.g. users, is omitted. In the interest of equal treatment, the corresponding terms apply to both genders.
The great popularity and broad applicability of OpenAI's ChatGPT show how relevant Generative AI based on Large Language Models (LLMs) has become. Especially in the enterprise context, this technology can offer great added value. However, its use is hampered by challenges such as hallucinations, keeping data up to date, data security, critical questions regarding intellectual property, and the technical implementation for sensitive data. Mindbreeze offers a solution that combines an Insight Engine with LLMs and compensates for exactly these weaknesses. The result is an ideal basis for Generative AI in the enterprise context.
This basis is called "Natural Language Question Answering", NLQA for short, and combines semantic search with "Question Answering". Semantic search lets you find information by entering complete, natural-language sentences. Question Answering then generates the answers, which are rendered in natural language together with the surrounding context and a source reference.
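The ranking principle behind semantic search can be sketched with toy vectors. The embeddings below are invented for illustration; in Mindbreeze InSpire they are produced by the configured Sentence Transformer model:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "sentence embeddings" (invented values).
documents = {
    "How do I reset my password?": [0.9, 0.1, 0.2],
    "Quarterly sales report 2023": [0.1, 0.8, 0.3],
}
query_embedding = [0.85, 0.15, 0.25]  # embedding of the user's question

# Rank documents by semantic similarity to the question.
ranked = sorted(
    documents,
    key=lambda d: cosine_similarity(query_embedding, documents[d]),
    reverse=True,
)
```

Because ranking works on meaning-bearing vectors rather than exact keywords, a question phrased as a full sentence can match a document that shares no literal terms with it.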
The following chapters explain how to perform the basic configuration for NLQA. For further questions about the configuration and usage of the feature, see the FAQ section in this document. Information about supporting custom language models and handling feedback can be found there.
Attention: A 100% complete and correct search cannot be guaranteed when using semantic search with trained models. Model customization is only supported for dedicated projects in which the customer provides the labeled training and test data. Additionally, the scope of the NLQA feature is currently limited to 50,000 documents per Mindbreeze InSpire appliance.
This section describes the configuration steps necessary to enable Natural Language Question Answering (NLQA).
In Mindbreeze Management Center (MMC), navigate to the "Configuration" menu and switch to the "Indices" tab, then add a new index ("+ Add Index") and activate "Advanced Settings".
In order to enable NLQA, Named Entity Recognition (NER) must be configured. If you have already configured NER in the global Index Settings, you can skip this step.
However, if NER has not yet been configured, please scroll to the "Semantic Text Extraction" section (in the local index settings, not to be confused with the "Global Index Settings").
Configure the following options:
In the next step, enable the "Enable Sentence Transformation" option (in the "Semantic Text Extraction" section).
Other recommended settings:
Configure your data source. Documentation of the data sources supported by Mindbreeze InSpire can be found on help.mindbreeze.com in the "Data Sources" section.
Wait until the indexing is complete. Test the configuration using the standard Mindbreeze InSpire Insight app.
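Besides the Insight app, a configuration can also be exercised programmatically via the api.v2.search interface. The sketch below only constructs a request body; the endpoint URL and the exact field names ("unparsed", "count") are assumptions here, so please verify them against the api.v2.search documentation for your InSpire version before use:

```python
import json

# Hypothetical appliance URL -- replace with your own.
search_url = "https://inspire.example.com/api/v2/search"

# Assumed request schema: a full natural-language question is passed
# as the unparsed query string.
payload = {
    "query": {"unparsed": "How do I request vacation days?"},
    "count": 5,  # maximum number of results to return
}

request_body = json.dumps(payload)
```

Sending this body as a POST request (with appropriate authentication) should return search results, and, with NLQA enabled, extracted answers.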
Yes, NLQA can also be enabled on existing indices. Configure "Named Entity Recognition (NER)" and "Sentence Transformation" as described above. The index must then be rebuilt. Perform one of the following steps:
By default, answers are displayed directly above the search results. The visualization of answers can also be freely customized to your needs or completely disabled. See Development of Insight Apps.
The NLQA feature can be used with the existing Mindbreeze InSpire license. The InSpire beta version of NLQA is limited to 50,000 documents per appliance. This limitation is independent of the Mindbreeze InSpire license. For more information, please contact support@mindbreeze.com.
The basis for the semantic search is formed by transformer-based language models in ONNX format. Thanks to this open standard, pre-trained models or self-trained LLMs can be integrated into Mindbreeze InSpire. The configuration of Custom Sentence Transformer Models is described in the Sentence Transformation documentation.
There are several ways to influence the displayed responses:
Filtering responses based on quality or number:
Answers are sorted by similarity factor, with the best answer at the top. The minimum similarity factor for displaying an answer is 50% by default. This value as well as the maximum number of answers can be set in the respective index and client service configuration.
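The selection logic described above can be sketched as follows (the candidate answers and scores are invented; the threshold and cap correspond to the configurable index and client service settings):

```python
# Answers below the minimum similarity factor (50% by default) are
# dropped; the rest are sorted best-first and capped at a maximum count.
MIN_SIMILARITY = 0.50
MAX_ANSWERS = 3

candidates = [
    ("Answer A", 0.92),
    ("Answer B", 0.41),  # below the threshold -> filtered out
    ("Answer C", 0.67),
    ("Answer D", 0.55),  # above the threshold, but pushed out by the cap
    ("Answer E", 0.81),
]

shown = sorted(
    (a for a in candidates if a[1] >= MIN_SIMILARITY),
    key=lambda a: a[1],
    reverse=True,
)[:MAX_ANSWERS]
# shown == [("Answer A", 0.92), ("Answer E", 0.81), ("Answer C", 0.67)]
```

Raising the minimum similarity factor yields fewer but more reliable answers; lowering it surfaces more candidates at the cost of precision.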
Influencing the order of answers (boosting):
Mindbreeze components for influencing the relevance of search hits, such as the Term2DocumentBoost Transformer and Personalized Relevance, can also be used for boosting answers. In addition, boosts defined via the api.v2.search interface are applied to the relevance of answers.
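As a generic illustration of the boosting principle (not Mindbreeze's internal implementation), a boost can be thought of as a multiplier on the base relevance score, so a boosted answer can overtake one with a higher base score:

```python
# Invented answers and scores for illustration only.
answers = [
    {"title": "Policy FAQ",   "score": 0.70, "boost": 2.0},  # boosted source
    {"title": "Old handbook", "score": 0.80, "boost": 1.0},  # no boost
]

# Apply the boost as a multiplicative factor on the base score.
for a in answers:
    a["effective_score"] = a["score"] * a["boost"]

ranked = sorted(answers, key=lambda a: a["effective_score"], reverse=True)
```

Here "Policy FAQ" (0.70 × 2.0 = 1.40) outranks "Old handbook" (0.80 × 1.0 = 0.80) despite its lower base relevance.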
Yes, there are advanced configuration options for NLQA in Mindbreeze InSpire - Sentence Transformation. These options are typically only relevant to you if you have specific use cases that require special configuration. Please contact support@mindbreeze.com if you need support for your data science project, e.g. if you want to use a different model than the Sentence Transformer model provided by default.
The voting feature lets users give feedback on an answer in the search results. This feedback is also recorded in app.telemetry. The InSpire beta version of NLQA does not currently adjust the relevance of answers automatically based on this feedback.
In addition, other user interactions with Answers are recorded in app.telemetry, for example, when a user clicks on the Answer source document. These interactions, as well as user feedback, can be graphically displayed and evaluated in the app.telemetry Insight App Reporting Dashboard.