Release Notes for Mindbreeze InSpire
Version 25.5
Innovations and new features
AI-based Neural Reranking – Optimized answer quality through deeper semantic matching
With the Mindbreeze InSpire 25.5 Release, Mindbreeze customers now have access to AI-based Neural Reranking through configurable transformer language models for optimizing answer quality. Configurable AI models re-evaluate the relevance of answers to a query through deep semantic matching: by applying cross attention, Mindbreeze retrieves more potential answer candidates than originally requested and re-scores each of them for relevance. The answer is then generated and returned based on this new ranking. In addition to the initial evaluation of the answers, Mindbreeze customers can configure boosting rules to tailor the new scoring and ranking to their use cases.

Neural Reranking can be used not only for AI Answers in the Mindbreeze InSpire client, but also for Retrieval Augmented Generation (RAG) and agentic RAG use cases. Mindbreeze customers thus not only receive high-quality content in the LLM context, but also benefit from reduced token consumption and faster answer generation. For maximum flexibility, Neural Reranking can also be activated dynamically per request.
Link to the documentation
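The reranking principle described above can be sketched in a few lines: retrieve more candidates than requested, re-score each (query, candidate) pair, and keep only the best. The scorer below is a toy lexical-overlap stand-in for an actual transformer cross-encoder; it is illustrative only and not part of the Mindbreeze API.

```python
def overlap_score(query: str, candidate: str) -> float:
    """Toy pairwise relevance score: fraction of query terms found in the candidate.
    A real cross-encoder would score the concatenated (query, candidate) pair
    with a transformer model instead."""
    q_terms = set(query.lower().split())
    c_terms = set(candidate.lower().split())
    return len(q_terms & c_terms) / len(q_terms) if q_terms else 0.0

def rerank(query, candidates, top_k, score_fn=overlap_score):
    """Re-score all over-fetched candidates against the query, keep the top_k."""
    scored = sorted(candidates, key=lambda c: score_fn(query, c), reverse=True)
    return scored[:top_k]

# Over-fetch three candidates, then rerank down to the two requested answers.
candidates = [
    "Pricing information for enterprise plans",
    "How to reset your password in the admin console",
    "Password reset steps for end users",
]
top = rerank("reset password", candidates, top_k=2)
```

Because only the top-ranked candidates are passed on, the LLM context stays small, which is where the reduced token consumption in RAG scenarios comes from.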
Technical extensions
Mindbreeze InSpire AI chat functionalities with AI Answers available in Insight Apps
With the Mindbreeze InSpire 25.5 Release, the entire functionality of Insight Services for RAG pipelines can be used with AI Answers in Insight Apps. In particular, retrieval within the RAG pipeline now behaves consistently with other chat applications. The AI Answers component has also been refined: the progress bar has been adjusted and the formatting of answers optimized.
Link to the documentation
Simplified and consistent use of the OpenAI-compatible API for GenAI use cases
With the Mindbreeze InSpire 25.5 Release, customers can now use OpenAI-compatible APIs more easily for their GenAI use cases. For example, Mindbreeze InSpire LLMs can now be selected by model name, and compliance with the OpenAI Chat API has been improved.
Link to the documentation
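To illustrate what "OpenAI-compatible" means in practice, the sketch below assembles a request body following the OpenAI Chat Completions schema. The model name `inspire-llm` is a placeholder, not a documented value; consult the linked documentation for the model names exposed by your appliance.

```python
import json

def build_chat_request(model: str, user_message: str) -> dict:
    """Assemble a request body following the OpenAI Chat Completions schema.
    The Mindbreeze InSpire LLM is selected via the standard "model" field."""
    return {
        "model": model,
        "messages": [
            {"role": "user", "content": user_message},
        ],
    }

body = build_chat_request("inspire-llm", "Summarize our vacation policy.")
payload = json.dumps(body)  # ready to POST to an OpenAI-compatible chat endpoint
```

Because the schema matches the OpenAI Chat API, existing OpenAI client libraries and tooling can be pointed at the appliance without request-format changes.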
Optimized multi-turn chat conversations
With the Mindbreeze InSpire 25.5 Release, multi-turn chat conversations have been optimized. Depending on the configuration and the productive RAG pipeline, the AI assistant can now draw on the chat history and previous conversations more comprehensively and generate an even more context-relevant answer.
Link to the documentation
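The multi-turn idea can be sketched as follows: prior turns are carried along with the new question so the assistant can resolve references like "it" or "that document". The trimming policy shown (keep only the most recent turns) is an illustrative assumption, not the product's actual history-selection logic.

```python
def build_history(turns, new_question, max_turns=4):
    """Carry the most recent conversation turns plus the new user question.
    Trimming to max_turns is a simple illustrative policy to bound context size."""
    recent = turns[-max_turns:]
    return recent + [{"role": "user", "content": new_question}]

history = [
    {"role": "user", "content": "What is Neural Reranking?"},
    {"role": "assistant", "content": "It re-scores retrieved answers with a cross-encoder."},
]
messages = build_history(history, "Can it be enabled per request?")
```

The follow-up question only makes sense together with the earlier turns, which is exactly the context the assistant now has access to.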
Simplified traceability of configuration changes in OCI image snapshots
Mindbreeze now provides even more comprehensive support for customers in continuous deployment. The Mindbreeze SDK can now extract YAML representations from snapshots in OCI image format, making it faster and easier to identify configuration changes. In addition, signing the YAML representation allows a robust review process to be built.
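Once YAML representations have been extracted from two snapshots (via the Mindbreeze SDK, whose exact commands are not shown here), a plain text diff is enough to spot configuration changes. The snippet below is an illustrative sketch using the Python standard library, not part of the SDK itself.

```python
import difflib

def config_diff(old_yaml: str, new_yaml: str) -> list:
    """Return unified-diff lines between two extracted YAML representations."""
    return list(difflib.unified_diff(
        old_yaml.splitlines(), new_yaml.splitlines(),
        fromfile="snapshot-a.yaml", tofile="snapshot-b.yaml", lineterm="",
    ))

# Hypothetical YAML fragments standing in for two extracted snapshot configs.
old = "service:\n  replicas: 2\n  log_level: info"
new = "service:\n  replicas: 3\n  log_level: info"
changes = [line for line in config_diff(old, new) if line.startswith(("+", "-"))]
```

A diff like this can then feed a review step, with the signed YAML representation guaranteeing that the compared inputs are authentic.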
Additional improvements
Support for Producer-Consumer scenarios and Mindbreeze InSpire appliances in Hyperscaler environments
Mindbreeze now offers additional support for using a consumer node as a producer node and for creating Mindbreeze InSpire appliances in hyperscaler environments.
Link to the documentation
Link to the documentation
Security-relevant changes
- Updated: Python dependency (CVE-2024-47081, CVE-2025-3262, CVE-2025-47273, CVE-2025-48379, CVE-2025-48945, CVE-2025-50181, CVE-2025-50182, GITHUB GHSA-5qpg-rh4j-qp35).
- Updated: Chromium to version 138.0.7204.157 (CVE-2025-6556, CVE-2025-6557, CVE-2025-6191, CVE-2025-6192, CVE-2025-6555, CVE-2025-6554, CVE-2025-7656, CVE-2025-6558, CVE-2025-7657).
- Updated: Dell Firmware (CVE-2025-20103, CVE-2025-20054, CVE-2024-45332, CVE-2024-43420, CVE-2025-20623).
- Updated: CoreOS Security to version 42.20250623.3.0 (CVE-2025-23395, CVE-2025-46802, CVE-2025-46803, CVE-2025-46804, CVE-2025-5278, CVE-2025-4598, CVE-2025-6032, CVE-2025-6020, CVE-2024-57970, CVE-2025-1632, CVE-2025-25724, CVE-2024-12718, CVE-2025-4138, CVE-2025-4330, CVE-2025-4517).
Additional changes
- Added: Simplified traceability of configuration changes in OCI image snapshots.
- Added: Binary data support for content processing and improved robustness in the Database connector.
- Added: Model name can be set when creating a Mindbreeze InSpire LLM Service in Insight Services for RAG.
- Added: Docling ContentFilterService.
- Added: Answers can be reranked with cross encoder models.
- Added: Support for Jira Data Center 10.
- Added: SMTP authentication for outgoing mails is supported.
- Added: Content filter plugin can be selected with the plugin ID in the /filterAndIndex request.
- Added: Support for multiple authenticated LLMs in Helm Chart.
- Fix for: Import of very small OCI snapshots in Kubernetes might hang.
- Fix for: Excessive logging in case of "No authorizer for fqCategory".
- Fix for: Errors breaking scheduled bucket cleanup should be logged with ERROR level.
- Fix for: AI Answers behave differently than AI Chat for Single Conversation Chat Use Cases when retrieving in RAG Pipelines.
- Fix for: Microsoft File, Microsoft SharePoint, LDAP, and Documentum Connectors no longer save their state in /tmp by default.
- Fix for: Sporadic 401 responses during remote connector crawling.
- Fix for: Jira Principal Cache username resolving.
- Fix for: Similarity search from static aggregatable precomputed synthesized property returns result, but no answers.
- Fix for: Incomplete app.telemetry instrumentation for some query expressions in client service logpool.
- Fix for: Changing apptelemetryusersgroup in Kubernetes deployments was not effective.
- Fix for: Correct handling of Log Prompt in app.telemetry.
- Fix for: Index process may hang if shutdown happens immediately after startup.
- Fix for: Search issue in the File Manager in the Management Center.
- Fix for: Answer generation didn't stop when pressing the “Stop” button.
- Fix for: Missing answers in RAG if the source has no formatting.
- Fix for: Windows Installer does not include Client Resources.
- Fix for: Keeping the computed event running until the first generation token is received from the /generate request.
- Fix for: Ensuring sourceInfo loading is completed before checking mindbreeze.chat.v1beta service availability for generation.
- Fix for: Failing to show sources in AI answers if title property was not requested in VALUE format but in PROPERTIES format.
- Fix for: Assertion is raised during querying when processing annotations (contextualization) in certain rare cases.
- Improved: Launch of subprocesses was optimized under memory and latency constraints.
- Improved: Robustness of Service Processes by restarting query service for JWKS instability during initial setup in test environments.
- Improved: Robustness when generating with conversation messages.