
VNClagoon AI Desktop Client User Manual

This user manual provides step-by-step guidance on installing the VNClagoon AI Desktop client (Electron) and configuring it to use either local LLMs (large language models) or a remote OpenVINO™ Model Server (OVMS) AI backend for AI processing.

Install VNClagoon AI Desktop Client

Currently, the VNClagoon AI Desktop client is only available for Microsoft Windows.

The VNClagoon AI Electron client is installed via an executable setup file. The installation file or a download location for it will be provided by your contact at VNC.

1. Preparation

  1. Download the VNClagoon AI Electron client installer to your Windows machine.
  2. Create a folder where the downloaded LLM files will be stored.
    • This folder will be referred to as Local-LLMs in this guide.
  3. Create a folder for documents that should be considered for Retrieval-Augmented Generation (RAG), so the AI can use the provided information as a knowledge base for "local data".
    • This folder will be referred to as AI-Documents in this guide.
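The two preparation folders can also be created from the command line. The sketch below uses Python; the parent path is an assumption for illustration, not prescribed by the client, so choose any location you like:

```python
from pathlib import Path

# Assumed locations -- the client only needs these folders to exist
# when you select them later in Preferences.
local_llms = Path.home() / "VNClagoonAI" / "Local-LLMs"
ai_documents = Path.home() / "VNClagoonAI" / "AI-Documents"

for folder in (local_llms, ai_documents):
    folder.mkdir(parents=True, exist_ok=True)  # idempotent: safe to re-run
    print(f"ready: {folder}")
```

Running the snippet twice is harmless, since `exist_ok=True` skips folders that already exist.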

2. Installation

To start the installation, double-click the downloaded installer file with Administrator permissions:

Once the installation is finished,

  • a shortcut is created on your desktop and
  • the VNClagoon AI Desktop Client is started automatically

When the client has finished loading, you are prompted to enter

  • the Server URL, where your VNClagoon AI installation is accessible ...

  • ...and after entering the URL and clicking Change, you are prompted to enter the username and password of your account to finish the login process:

You only have to enter the "Server URL" and credentials on the first start of the client. The client will remember your credentials until you actively log out of the application.

Right after the first login, the client will automatically start downloading the files of the default LLM model, which is currently GGML Large V3.

Once the download and verification of the model, and with it the client installation, have been completed, the client informs you via a popup message:
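Verification of a downloaded model typically means comparing the file against a published checksum. The client does this automatically; the sketch below only illustrates the idea using Python's hashlib (the file name and expected digest are hypothetical placeholders):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading it in 1 MiB chunks
    so that multi-gigabyte model files are never loaded into memory at once."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical usage -- file name and checksum are placeholders,
# not real VNC artifacts:
# expected = "ab12..."  # published checksum for the model file
# assert sha256_of(Path("ggml-large-v3.bin")) == expected
```

If the computed digest does not match the published one, the download is incomplete or corrupted and should be repeated.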

3. Configure VNClagoon AI Desktop Client

Configure folder for LLMs

After the initial startup of the client, the folders created in section Preparation need to be configured to serve their purpose.

The great advantage of the standalone Desktop client over the web client is the ability to download and run AI models locally, as long as your hardware supports it.

  1. Click on AI in the menu bar and select Preferences from the opening drop-down menu:

  2. Scroll past the prompt configuration, which you can leave unchanged, to the bottom of the AI Settings menu:

  3. Now select Local LLM,

    which makes the additional folder configuration submenu LLM MODELS appear.

  4. Click on Select Models Folder and select the folder Local-LLMs created in Preparation.

    After the LLM folder has been selected, the client will show the message
    Cannot find models, since the folder is still empty at this point.

  5. Now click the SAVE button to apply the config changes.

Download LLMs

To download additional LLMs, open the AI dropdown in the menu bar again and select Download Models from the opening drop-down menu.
Currently these four models are available for download:

Once you select a model for download from the opening menu, a popup progress bar appears,

which is replaced by an info box when the download has been completed:

Download as many models as you like, but remember that you can only use one at a time.
The respective LLMs differ in size, depending on their complexity:

Bigger, more complex LLMs provide better and more nuanced answers at the cost of compute power, which results in longer response times. A good compromise between complexity and response time is GGML Large V3, which is also the default model.

When you open the Local-LLMs folder in your file browser, you will find all the downloaded models there:
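If you prefer to check the downloaded models from a script rather than the file browser, a simple directory listing is enough. A Python sketch (the folder path is an assumption; point it at wherever you created Local-LLMs):

```python
from pathlib import Path

def list_models(models_folder: Path) -> list[str]:
    """Return the model files in the folder, largest first,
    with their sizes in MiB for a quick comparison."""
    files = [p for p in models_folder.iterdir() if p.is_file()]
    files.sort(key=lambda p: p.stat().st_size, reverse=True)
    return [f"{p.name} ({p.stat().st_size / 2**20:.1f} MiB)" for p in files]

# Hypothetical path -- adjust to your own Local-LLMs location:
# for line in list_models(Path.home() / "VNClagoonAI" / "Local-LLMs"):
#     print(line)
```

Sorting by size gives a rough proxy for the complexity/response-time trade-off described above: the largest file is usually the most capable, and slowest, model.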

Select the LLM you want to use

After all models you would like to use have been downloaded, you need to select which one to use for local AI processing.

  1. Open preferences again:

  2. Scroll down to the Supported Models section and click on it:

  3. From the opening drop-down menu, select the model you want to use for processing ...

  4. When done, click SAVE to apply the configuration:

Finally, restart the VNClagoon AI desktop client to use the configured LLM!

On startup, the client signals via a toaster message that the Local LLM is now ready.

When you open preferences again, you will see that the red notification about the Local LLM has changed to green, informing you that the Local LLM is ready:

4. Use internal Data (RAG)

In addition to a local LLM that you can download and configure, it is also possible to feed documents and texts to the AI. These are then used as an additional knowledge base, added to the LLM's data pool for prompts, via Use internal data.

  1. To use the content of uploaded documents for processing your prompts, you need to activate the feature by clicking the Use Internal Data button once, which changes its color to blue, indicating that internal data usage is now active.

  2. To deactivate it, click the button again; it turns grey, indicating that the feature has been disabled.
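Conceptually, "internal data" works like any Retrieval-Augmented Generation pipeline: when the feature is active, passages relevant to your question are retrieved from your documents and added to the prompt before the LLM answers. The toy keyword-overlap sketch below illustrates that flow only; it is not the client's actual retrieval implementation:

```python
def retrieve(chunks: list[str], question: str, top_k: int = 1) -> list[str]:
    """Rank text chunks by naive keyword overlap with the question
    and return the top_k best matches."""
    q_words = set(question.lower().split())
    scored = sorted(
        chunks,
        key=lambda c: len(q_words & set(c.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(chunks: list[str], question: str) -> str:
    """Prepend retrieved context to the question, as a RAG pipeline would."""
    context = "\n".join(retrieve(chunks, question))
    return f"Context:\n{context}\n\nQuestion: {question}"

docs = ["The office is closed on Fridays.", "Invoices are due within 30 days."]
print(build_prompt(docs, "When are invoices due?"))
```

Real RAG systems replace the keyword overlap with embedding similarity, but the principle is the same: the model answers from your documents, not only from its training data.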

Configure folder for RAG

To use this feature, you have to configure the folder you already created and named AI-Documents in Preparation, where you upload documents and texts for processing:

Open the Preferences menu again via the AI menu bar and scroll down to the bottom of the page.

  1. Now activate the switches Use VNC datasource for prompting and Auto update datasource, to ensure the Documents folder is re-scanned on a regular basis and newly added documents are recognized for processing.
  2. Click Override Private Documents to configure the Documents folder for processing internal data:

  3. In the opening Private Documents popup window, click on Select Documents Folder, which will open the file browser of your OS.

    Now navigate to the folder location of AI-Documents and select it.

  4. The selected path is now displayed. If it is correct, click Override to apply the selection:

  5. This will trigger the folder processing and you will have to wait a few seconds, until the documents have been updated:

  6. When the folder update is complete, the greyed-out buttons for document folder selection (Override Private Documents) and for triggering a manual update (Update Private Documents) become available again. In addition, a toaster message informs you that the update has been completed.
    To save the changes, click the SAVE button:
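The Auto update datasource switch corresponds to a periodic re-scan of the documents folder. Detecting newly added or changed files can be sketched by comparing directory snapshots; this is illustrative only, as the client's actual re-scan mechanism is not documented here:

```python
from pathlib import Path

def snapshot(folder: Path) -> dict[str, float]:
    """Map each file name in the folder to its last-modified timestamp."""
    return {p.name: p.stat().st_mtime for p in folder.iterdir() if p.is_file()}

def new_or_changed(before: dict[str, float], after: dict[str, float]) -> set[str]:
    """Return the names of files that appeared or were modified
    between two snapshots."""
    return {name for name, mtime in after.items() if before.get(name) != mtime}
```

A scheduler would call `snapshot` at a fixed interval and hand the `new_or_changed` result to the document indexer, which is essentially what the auto-update switch enables for you.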

After following this guide, the installation and configuration of the VNClagoon AI Desktop client are complete.

For general usage and functionality, please refer to the Webclient User Manual.