Orchestration with Grounding
Objective
In this lesson, we are going to look at the concept of Grounding in Generative AI. When a question is posed to a large language model, it will answer to the best of its ability based on the general knowledge captured in its training data. However, it is often far more beneficial to instruct the LLM to first consult a specific domain of data before returning a result. This is known as Retrieval Augmented Generation, or RAG, whereby the results are ‘augmented’ by first grounding the LLM.
Grounding provides specialized data retrieval, typically through vector databases, enriching the generation process with specific, context-relevant external data. It combines generative AI capabilities with real-time, precise data to improve decision-making and business operations, enabling targeted AI-driven business solutions.
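The RAG flow described above can be sketched in a few lines: retrieve relevant documents, augment the prompt with them, then generate. This is a minimal illustrative sketch only; the keyword-overlap retrieval and the `generate()` stub stand in for a real vector search and a real LLM call, and none of it reflects the SAP orchestration API.

```python
# Minimal sketch of the Retrieval Augmented Generation (RAG) flow.
# The store, scoring, and generate() stub are illustrative assumptions,
# not the SAP Generative AI Hub API.

def retrieve(query, documents, top_k=2):
    """Rank documents by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(terms & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def generate(prompt):
    """Stand-in for an LLM call; a real system would invoke a model here."""
    return f"Answer based on: {prompt}"

def rag_answer(query, documents):
    # 1. Retrieve context from the domain-specific store (the "grounding" step).
    context = retrieve(query, documents)
    # 2. Augment the prompt with the retrieved context.
    prompt = f"Context: {' | '.join(context)}\nQuestion: {query}"
    # 3. Generate the final answer from the augmented prompt.
    return generate(prompt)

catalog = ["Laptop Pro 15 inch, 16 GB RAM",
           "Ergonomic office chair, black",
           "USB-C docking station, dual monitor"]
print(rag_answer("Which laptop is in the catalog?", catalog))
```

Because the prompt now carries catalog entries, the model answers from the supplied context rather than from its general training data, which is the essence of grounding.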
Try it out!
- In order to use the grounding feature, open the AI Launchpad application and select the doc-grounding resource group where the orchestration has been deployed.

- Expand Generative AI Hub and select Orchestration from the menu list to see the Orchestration Workflow.

- The Orchestration Configurations page will open, where any existing orchestration workflow configurations are displayed.

- Select the Create button to create a new configuration.

- The Orchestration Configuration page will open, where you can configure multiple settings for a range of different modules such as Grounding, Data Masking, Filtering and Translation.

- Toggle the Advanced setting to ON to see the different modules available. You will notice that some of these are mandatory, while others are optional:

Let’s first observe the effects of querying an LLM with no context and no frame of reference.
- Ensure the Grounding Management feature is disabled (switch is set to Off), then go to the Prompt Template section.

Using templates enables you to compose prompts, define placeholders and variables so that different prompts can reuse the same basic configuration, and choose the LLM and configure it with the required settings and parameters. Prompt templates can be uploaded from a JSON file, selected from the prompt library, or retrieved via the Prompt Registry using GitHub.
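To illustrate the idea, a prompt template is essentially a list of role-tagged messages with placeholders. The fragment below is an illustrative sketch only; the field names and the placeholder syntax are assumptions and may differ from the exact schema used by the Prompt Registry.

```json
{
  "template": [
    {
      "role": "system",
      "content": "You are a helpful assistant answering questions about a product catalog."
    },
    {
      "role": "user",
      "content": "{{?question}}"
    }
  ],
  "defaults": {
    "question": "Which monitors are available in the catalog?"
  }
}
```

At run time, the orchestration substitutes the `question` variable into the user message, so the same template can serve many different queries.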
Let’s upload a saved template which has been pre-configured and loaded into the prompt registry.
- In the templating section, choose the highlighted icon to select a template from the library of saved prompt templates:

The list of currently available templates will appear. It’s possible to filter by the type of template (either imperative or declarative) or the search box can be used to find a specific template.
- In the Filter section, select the checkbox next to declarative to reduce the list size. Then select the template named prompt-registry-v1 and choose Select to add it to the templating module.

The template will now load up with pre-defined entries for both the system role and user role. You will also see a variable called question which has been defined and assigned a default value:

- Scroll down to the Model Configuration section to choose a model from the available versions and define its parameters. A model has already been configured for this step, but you can choose from a large selection of other available models in here.

- Open up the list of available models and choose SAP ABAP 1 for this exercise. Selecting the model will automatically add it to the Model Configuration section where further parameters can be tweaked.

Once happy with the model selection and configuration, it is time to test out the prompt.
On the right-hand side, you will see the Orchestration Test Run section, which has already been populated with the input variable from the template.
- Select Run to send the query to your chosen LLM and observe the response.

As this example uses a very targeted query with no context (looking for items in a specific catalog), the model will either admit that it cannot find the items, or it will invent an answer with generic options. Which of the two you see depends on the model and version chosen, along with the parameters defined.
- Save your current configuration by selecting the Save button at the top of the screen.

- Give your configuration the name _Orch, select orchestration as the Scenario Name and then select Save.

In the next section, we can offer the model more context for the query by providing it with a database of information from which to search before providing an answer.
Grounding
Let’s now ‘ground’ the LLM on a repository of data before asking the same question as before.
For this scenario, we have already created a set of embeddings from a product catalog which have been stored as vectors in a HANA Cloud Database.
- Select the Grounding Management menu option within the Generative AI Hub section to view the available data repositories which can be used in this resource group (doc-grounding).

This is a collection of vectorized representations of various documents, CSV files, images and so on, which have been stored in Microsoft SharePoint, S3 buckets, or directly in the vector store of SAP HANA Cloud itself.
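Under the hood, a vector store matches a query to document chunks by embedding both and comparing the vectors, typically by cosine similarity. The toy three-dimensional "embeddings" below are made-up stand-ins for the output of a real embedding model; this is a sketch of the matching step only, not the SAP HANA Cloud vector engine API.

```python
# Illustrative sketch: matching a query to vectorized chunks by cosine
# similarity. The 3-dimensional vectors are fabricated stand-ins for
# real embeddings.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

chunks = {
    "27-inch 4K monitor": [0.9, 0.1, 0.0],
    "wireless keyboard":  [0.1, 0.8, 0.2],
    "laptop stand":       [0.2, 0.3, 0.7],
}

# Pretend embedding of the query "Which monitors do we stock?"
query_embedding = [0.85, 0.15, 0.05]

# Pick the chunk whose embedding is most similar to the query embedding.
best = max(chunks, key=lambda c: cosine(query_embedding, chunks[c]))
print(best)
```

The chunk with the highest similarity score is what the grounding module would hand back to the LLM as context for the final answer.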

- Select any of the repositories to view the data chunks. The repository we use for this workshop is called . Select to open the data and observe the format.

- Download the following {placeholder|gnd_json_cfg} JSON template file and save it locally on your device. Feel free to inspect the file and observe the format and layout.

- Return to the orchestration workflow by selecting Orchestration from the menu and then choose your previously created configuration called _Orch.

- In the orchestration configuration screen, select the three dots at the top right and then choose Upload which will lead to a pop-up window asking for a file to be uploaded.

- Choose the JSON file which you have saved in a previous step, and select Open to load it into the workflow.

- Observe that the Grounding Management module has now been activated and is enabled. There are also two new additions in the configuration section: An Output Variable and a Data Repository.

- Check the Model Configuration section and observe that the selected model has also been updated:

This is because the JSON configuration file has a section in which you can define the model and its parameters. In fact, there are many different options available to define in this configuration file, and it is an effective way of storing different orchestration workflows.
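For orientation, such a configuration file typically nests one section per module. The fragment below is a simplified, illustrative sketch; the field names, model name, and repository ID are assumptions and may not match the exact schema of your downloaded file.

```json
{
  "module_configurations": {
    "llm_module_config": {
      "model_name": "gpt-4o",
      "model_params": { "temperature": 0.1 }
    },
    "grounding_module_config": {
      "type": "document_grounding_service",
      "config": {
        "filters": [
          { "data_repositories": ["<your-repository-id>"] }
        ],
        "input_params": ["question"],
        "output_param": "grounding_output"
      }
    }
  }
}
```

Storing the whole workflow as one JSON document is what makes it easy to version, share, and re-upload orchestration configurations, as you did in the previous step.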
- Select the JSON icon to view the JSON representation of the orchestration workflow. In particular, observe the addition of the data repository used for grounding:

- When ready, select the Run button again to test the query on a model which is now grounded on the IT equipment data repository and observe the results.

- This time you should get a more targeted response suggesting the model was much more aware of the context behind the query:

This simplified example showcases how grounding a foundation model before running queries enhances its performance and reliability by aligning it with specific context and domain knowledge. Rather than retraining the model, this process supplies it with relevant data at query time, which improves its handling of nuanced language and concepts pertinent to specific applications.
Benefits include:
- Increased accuracy
- More relevant query results
- Reduced biases
- Better handling of specialized vocabularies
Ultimately, grounding ensures that the model’s outputs are more precise and contextually appropriate, leading to higher overall effectiveness in various tasks like information retrieval, decision support, and predictive analytics.
Remove Orchestration Configuration
This section will guide you through the process of deleting your orchestration configuration, which is necessary to free up resources on the tenant.
- Select the History icon next to the Save button which will bring up all the current saved versions of your configuration.

- Select the version with a timestamp (should be listed underneath Current Draft) and then choose the three dots beside the History icon.

- In the context menu which appears after choosing the three dots, select Delete.

- A warning message will appear to confirm. Select OK to remove your configuration.

Congratulations! This concludes our workshop on the SAP Generative AI Hub. You are now equipped with the practical knowledge and understanding to begin your own journey with AI Core in SAP BTP.
For further information, please check out the following resources: