Copyright ©
Mindbreeze GmbH, A-4020 Linz, 2023.
All rights reserved. All hardware and software names used are trade names and/or trademarks of the respective manufacturers.
These documents are strictly confidential. The transmission and presentation of these documents alone do not create any rights to our software, to our services and service results or to any other protected rights. The transfer, publication or reproduction is not permitted.
For ease of readability, gender-specific differentiation (e.g. for users) is omitted. In the interest of equal treatment, the corresponding terms apply to both genders.
The popularity and wide range of applications of OpenAI's ChatGPT show how relevant Generative AI based on Large Language Models (LLMs) has become. Especially in an enterprise context, this technology can offer great added value. However, issues such as hallucinations, lack of recency, data security, open questions regarding intellectual property, and the technical handling of sensitive data make its use difficult. Mindbreeze now offers a solution that combines our Insight Engine with LLMs to compensate for exactly these weaknesses. The result is the ideal basis for Generative AI in an enterprise context.
LLMs have exceptional capabilities in processing and generating human language, while Insight Engines overcome the aforementioned hurdles through data recency, connectivity and source validation. This makes it possible to work with complete sentences and to capture the LLM's understanding of the content. Semantic search is based on transformer-based language models in ONNX format. Thanks to this open standard, customers can integrate and use pre-trained models or self-trained LLMs in Mindbreeze InSpire.
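To illustrate the principle behind embedding-based semantic search, the sketch below ranks documents by vector similarity to a query. The toy embed() function is purely illustrative; in practice the vectors come from a transformer model in ONNX format, not from this placeholder.

```python
# Illustrative sketch of embedding-based semantic search. The embed()
# function below is a crude bag-of-characters placeholder; a real
# deployment would obtain vectors from an ONNX transformer model.
import math

def embed(text):
    # Placeholder embedding: count letter frequencies in the text.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord('a')] += 1.0
    return vec

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def semantic_search(query, documents):
    # Rank documents by similarity of their embedding to the query embedding.
    q = embed(query)
    return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)

docs = ["invoice processing guidelines",
        "holiday request form",
        "invoice approval workflow"]
print(semantic_search("how do I process an invoice", docs)[0])
# → invoice processing guidelines
```

The same ranking logic applies unchanged when the placeholder is swapped for a real transformer-based embedding.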
With Mindbreeze InSpire, customers can use Generative AI immediately, as it is seamlessly integrated into the product. Data security plays a key role and is ensured through constant authorisation checks directly against the individual data sources, where connectors guarantee that the content is always up to date. Thanks to the scalable architecture and the customisable relevance models, customers can personalise their interaction with the Insight Engine.
Based on a deep integration of LLMs into the core of the Insight Engine and the semantic search, the Question Answering feature can now generate answers in natural language. Mindbreeze's many years of experience enable high scalability when processing large data sets. Thanks to the integration of multilingual models, Mindbreeze InSpire can generate information in different languages. For example, a user can ask a question in German and receive an answer in English. In addition to the results, users are shown source information so that answers remain traceable while access rights continue to be enforced. To provide the user with the necessary context, the sentences preceding and following the actual answer are also shown.
Through relevance models, the Question Answering feature can also be adapted to the needs of the user. Users are able to independently configure which parameters are used to measure the relevance of the answers and thus influence the resulting answers.
It is also possible to use Question Answering without the use of Generative AI in the standard client. If potential answers are found, the most relevant answer in the context of the hit is also displayed in the standard client.
Mindbreeze users can limit search results by using filters to obtain a more precise list of results. The filters can be configured by the Mindbreeze administrator for different metadata to offer users a better search experience and more efficient access to information.
To improve the usability of the filters, a Reset filter option has been added with the release of version 23.4. It appears above the available filters as soon as a filter is active (see the following screenshot) and resets all active filters with a single click.
JavaScript crawling enables the automatic, script-based simulation of user input when crawling complex websites. This makes it possible to crawl websites with login masks, pop-ups or delayed content. With release 23.4, different scripts can now be defined per sub-URL. This allows complex websites that require a variety of different script behaviours to be crawled with a single crawler.
In addition, the security of scripts has been improved by requiring a host name to ensure that scripts are only executed on websites for which they are intended. Furthermore, it is now possible to access the Mindbreeze credentials from within the script, so that user names and passwords are no longer displayed in the configuration or in the logs.
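A minimal sketch of how per-sub-URL script selection combined with a host-name check might work. The URL patterns, script names and host below are invented for illustration and do not reflect the actual Mindbreeze configuration format.

```python
# Hypothetical sketch of per-sub-URL script dispatch with a host-name
# check, illustrating the ideas behind the 23.4 crawling improvements.
# Patterns, script names and the host are illustrative assumptions.
from urllib.parse import urlparse

# Map URL path prefixes to the crawling script that should run there.
SCRIPT_RULES = [
    ("/login", "handle_login.js"),
    ("/news", "dismiss_cookie_popup.js"),
    ("/", "default.js"),  # fallback for all other sub-URLs
]

ALLOWED_HOST = "www.example.com"  # scripts only run on their intended host

def select_script(url):
    parsed = urlparse(url)
    # Host check: refuse to run any script on an unintended website.
    if parsed.hostname != ALLOWED_HOST:
        return None
    # First matching path prefix wins.
    for prefix, script in SCRIPT_RULES:
        if parsed.path.startswith(prefix):
            return script
    return None

print(select_script("https://www.example.com/news/2023"))  # → dismiss_cookie_popup.js
print(select_script("https://other.example.org/login"))    # → None (host mismatch)
```

Keeping the host check ahead of the pattern match ensures that no script, not even the fallback, runs on a website it was not written for.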
The Mindbreeze Client Service supports authentication using JWT tokens to enable customers to access Mindbreeze InSpire externally. Access rights to documents and other media can now be defined with additional information in the JWT token. Previously, only basic information about the user was stored, such as the user name or email address. It is now possible to add information such as groups (e.g. department) and roles (e.g. job description) to the JWT token.
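For illustration, the following sketch decodes a JWT payload that carries group and role claims alongside the user identity. The claim names and values are example assumptions, and signature verification is deliberately omitted for brevity; a real deployment must verify the token's signature before trusting any claim.

```python
# Illustrative decoding of a JWT payload carrying group and role claims.
# Claim names ("groups", "roles") are examples, not a fixed schema.
# NOTE: no signature verification is performed here; production code
# must verify the signature before trusting the claims.
import base64
import json

def b64url_decode(segment):
    # JWT segments are base64url-encoded without padding; restore it.
    padding = "=" * (-len(segment) % 4)
    return base64.urlsafe_b64decode(segment + padding)

def read_claims(token):
    # A JWT is three dot-separated segments: header.payload.signature.
    header, payload, signature = token.split(".")
    return json.loads(b64url_decode(payload))

# Build a sample token payload with user, group, and role information.
claims = {"sub": "jane.doe@example.com",
          "groups": ["finance"],
          "roles": ["controller"]}
payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
token = "eyJhbGciOiJub25lIn0." + payload + "."

print(read_claims(token)["groups"])  # → ['finance']
```

The extra claims let an authorisation check decide access per document based on group or role membership rather than on the user name alone.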
Access rights management in ServiceNow has been improved in version 23.4, making it possible to manage access to documents more effectively. Employees or departments can now be marked as inactive to prevent access to documents and other media.
Inactive employees are not included in the Principal Resolution Cache, which reduces the cache size and improves performance. Access rights can be managed using the Constraint Query for Users setting.