Java API Interface Description

Copyright ©

Mindbreeze GmbH, A-4020 Linz, 2018.

All rights reserved. All hardware and software names used are brand names and/or trademarks of their respective manufacturers.

These documents are strictly confidential. The submission and presentation of these documents does not confer any rights to our software, our services and service outcomes, or any other protected rights. The dissemination, publication, or reproduction hereof is prohibited.

For ease of readability, gender differentiation has been waived. Corresponding terms and definitions apply within the meaning and intent of the equal treatment principle for both sexes.

Indexing

This section deals with sending objects to Mindbreeze. You'll become acquainted with the components of a crawler and learn what data needs to be known for each object sent.

Sending objects to Mindbreeze

To be able to search for an object, it must first be included in the index. This chapter explains how to send objects from your data source to Mindbreeze. It is very easy to make an object searchable; the following lines are sufficient to store an object with the title "title" and the key "1" in the index:

Indexable indexable = new Indexable();
indexable.setKey("1");
indexable.setTitle("title");
client.filterAndIndex(indexable); // client: a previously created FilterAndIndexClient
When looking at these lines, there are still a few things to consider. First of all, you need to think about which documents from your data source are relevant for the search.

Which objects are present in my data source?

If you want to add a new data source to the search, you should always consider what content will be of interest to the users.

This example uses some CMIS services as the data source. CMIS offers four different object types: Folders, documents, relationships, and policies. In the example shown, only documents are sent.
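The decision above can be sketched as a simple type filter. The following self-contained snippet uses the standard CMIS base type ids; the class and method names (CmisTypeFilter, shouldIndex) are illustrative only and not part of the Mindbreeze SDK:

```java
import java.util.List;

public class CmisTypeFilter {

    // Only documents are sent to the index in this example;
    // folders, relationships and policies are skipped.
    public static boolean shouldIndex(String baseTypeId) {
        return "cmis:document".equals(baseTypeId);
    }

    public static void main(String[] args) {
        List<String> baseTypes = List.of(
            "cmis:folder", "cmis:document", "cmis:relationship", "cmis:policy");
        for (String type : baseTypes) {
            System.out.println(type + " -> " + shouldIndex(type));
        }
    }
}
```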

How are objects sent? Which process is in charge of sending?

Mindbreeze uses crawlers to send objects to the index. A crawler knows the data source and sends the objects it contains to be indexed. There is a crawler for each data source type. Mindbreeze InSpire has a Microsoft Exchange crawler and a Microsoft SharePoint crawler, to name two. In our SDK, we offer the same plugin interface that we use for our crawlers.

As a first step, you should package the example crawler as a plugin and import it into your appliance. Right-click on the build.xml file and select Run As > Ant Build.

This creates the plugin archive in the build directory.

Now the plugin has to be added to the appliance. Open the configuration interface of Mindbreeze and switch to the Plugins tab. Select the zip file and confirm with “Upload”.

The plugin is now installed.

Now create an index and add a new data source.

Further information

For more information, see

Tips for producer-consumer scenarios

When a producer-consumer setup is used, the indexes synchronize at regular intervals. The synchronization ("SyncDelta") takes anywhere from a few seconds to a few minutes depending on the amount of data. For technical reasons, the index can only be used read-only during this short period of time. (The same effect is achieved by manually setting the index to read-only.)

If a FilterAndIndexClient is used during this time period, the indexable is not indexed. Due to asynchronous processing, no exception is thrown during this process.

For this reason, we recommend the following error handling strategies:

Automatic repeat

If the index is currently performing a SyncDelta or is read-only, sending the indexable is automatically retried until it is successfully indexed.

This behavior is activated by setting the configuration property repeat_on_503 to true.

In a crawler, the property must be set as an option in plugins.xml.

In a stand-alone pusher, the property must be set in the configuration object when calling the factory method of FilterAndIndexClientFactory.
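For the stand-alone pusher case, the configuration could be assembled as below. This is a sketch only: the property name repeat_on_503 comes from the text above, but how the configuration object is passed to FilterAndIndexClientFactory depends on your SDK version.

```java
import java.util.Properties;

public class RepeatOn503Config {

    // Builds a configuration that enables automatic retries while the
    // index is performing a SyncDelta or is read-only. Passing this
    // Properties object to the factory method of
    // FilterAndIndexClientFactory is an assumption; check your SDK version.
    public static Properties clientConfig() {
        Properties config = new Properties();
        config.setProperty("repeat_on_503", "true");
        return config;
    }

    public static void main(String[] args) {
        System.out.println(clientConfig().getProperty("repeat_on_503"));
    }
}
```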

Manual repeat

In order to find out if the use of FilterAndIndexClient was successful, a ProcessIndexableListener can be registered:

client.addProcessIndexableListener(new ProcessIndexableListener() {
  @Override
  public void processed(ProcessIndexableEvent event) {
    Indexable indexable = event.getSource();
    boolean wasSuccessful = event.wasSuccessful();
    Operation operation = event.getOperation(); // e.g. FILTER_AND_INDEX or DELETE
    Throwable cause = event.getCause(); // if not successful, this is the exception
    if (!wasSuccessful) {
      // Do error handling here
    }
  }
});
This ProcessIndexableListener is called asynchronously after using the FilterAndIndexClient.
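Inside the listener's error handling, a failed indexable can then be re-sent manually. The following self-contained sketch shows such a retry loop in plain Java; it deliberately avoids the SDK types, and the retry/backoff policy is only an example:

```java
import java.util.function.Supplier;

public class ManualRepeat {

    // Runs the given operation until it succeeds or maxAttempts is
    // reached, with a simple linear backoff between attempts.
    public static <T> T retry(Supplier<T> op, int maxAttempts) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.get();
            } catch (RuntimeException e) {
                last = e;
                try {
                    Thread.sleep(50L * attempt); // linear backoff
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    break;
                }
            }
        }
        throw last;
    }

    public static void main(String[] args) {
        int[] calls = {0};
        // Fails twice (e.g. index read-only), succeeds on the third attempt.
        String result = retry(() -> {
            if (++calls[0] < 3) {
                throw new IllegalStateException("index read-only");
            }
            return "indexed";
        }, 5);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```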

Debugging with Eclipse IDE

To debug using the Eclipse IDE, the following steps have to be carried out:

  1. You’ll need to adjust some properties to ensure that the generated file contains the correct settings. You can find the relevant configuration file in your Mindbreeze SDK installation under SDK/servicesapi/java/

The following settings should be adjusted as necessary:

  1. endpoint: The URL of your appliance
  2. filterid, indexid: The TCP ports of the filter and index service (e.g. 23400, 23101)
  3. username, password: Information that is sent to the appliance in the BASIC authorization header (e.g. login information of the Inspire API)
  4. nodeid: Can be found in the Management Center (e.g. inspire-abc123def567...)
  • Create a project with the command mesjavaplugin (or the corresponding command under Linux):
  • A folder with the following structure is created:
  • Add the new project to your Eclipse Workspace.
  • Right-click the Package Explorer and click “Import...”. Select “General” > “Existing Projects into Workspace”:
  • Then select the new project:

  • Check the properties in the file (correct endpoint, credentials, etc.) and adjust them if necessary.
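The settings described above might then look like the following. This assumes the generated configuration file uses the standard Java properties format; every value below is a placeholder based on the examples in the text:

```properties
endpoint=https://your-appliance.example.com
filterid=23400
indexid=23101
username=apiuser
password=apipassword
nodeid=inspire-abc123def567
```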

    In addition to the generated test class, the mesjavaplugin tool also generates a run configuration in the config folder (mysource-debug.launch). When you run the test class (right-click > Run as > Java Application), Eclipse automatically starts your tests with the generated run configuration. This run configuration contains all necessary JAR files in the classpath; you can find these files in the "rt" folder. By default, all log information is written to C:\tmp\log-default-mysource.txt. To change this path, customize the path in the configureLogger() method.

    Open the “Reporting” section in the Management Center to check whether the requests to the index and filter services have been successful. Under "Performance" > "Applications" > "Filter Service" (or "Index Service") you can display the requests that have been received.

    Filter service:

    Index service:

Complex metadata

The previous sections described the process of indexing simple metadata such as strings and dates. However, as this section shows, more complex data structures can also be indexed.

XHTML fragments

XHTML fragments can be indexed as metadata. This metadata is then displayed as HTML in the search results.

The following example demonstrates the use of ValueParser, which can be used to store an HTML link as metadata:


import com.mindbreeze.enterprisesearch.mesapi.filter.ValueParserFactory;

// Note: newValueParser returns a ValueParser, not a ValueParserFactory.
ValueParser valueParser = ValueParserFactory.newInstance().newValueParser(null);

String xHtmlString = "<a href=\"\">Click me</a>";
Item.Builder value = valueParser.parse(Format.XHTML, null, xHtmlString);

Notes: The xHtmlString has to contain valid XHTML. The XHTML is stored in full (in transformed form) in the index. However, when the search result is displayed as a metadata item, many XHTML elements and attributes are removed to protect the layout from unwanted changes. The following XHTML elements are displayed in the search result: [a, span]. The following XHTML attributes are displayed: all except for [id, class, JavaScript-functions].
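To illustrate the element whitelist described above, the following self-contained sketch keeps only a and span elements and strips all others. This is not the actual Mindbreeze sanitizer, only a plain-Java illustration of the described behavior:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class XhtmlWhitelistSketch {

    // Removes every tag except <a> and <span> (including closing tags),
    // keeping the text content. Illustrative only.
    public static String keepWhitelisted(String xhtml) {
        Pattern tag = Pattern.compile("</?([a-zA-Z0-9]+)[^>]*>");
        Matcher m = tag.matcher(xhtml);
        StringBuilder out = new StringBuilder();
        int last = 0;
        while (m.find()) {
            out.append(xhtml, last, m.start());
            String name = m.group(1).toLowerCase();
            if (name.equals("a") || name.equals("span")) {
                out.append(m.group()); // keep whitelisted tags
            }
            last = m.end();
        }
        out.append(xhtml.substring(last));
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(keepWhitelisted("<div><a href=\"x\">link</a> <b>bold</b></div>"));
    }
}
```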

Notes for query expression transformation service plugins

Required plugins

If an error occurs in a query expression transformation service plugin (exception or timeout), the transformation is skipped and the unchanged query expression is used instead.

However, some plugins perform sensitive tasks, such as displaying and hiding security-relevant metadata or resolving DSL keywords. If such a plugin is faulty, skipping it would be dangerous: security-relevant data could be displayed that a correctly working plugin would have hidden.

For this reason, query expression transformation service plugins can be marked with a “required” flag. Plugins flagged in this way are not skipped in the event of an error; instead, they stop the entire pipeline and no results are displayed during the search (“fail-fast” principle).

The “required” flag can be set for each plugin in plugins.xml as follows:

<!-- within the plugins.Plugin.code.Code section -->