Discovering the Generative AI Hub Interface

Objective

After completing this lesson, you will be able to navigate the generative AI hub interface.

Discovering the Generative AI Hub Interface

Once you have accessed the generative AI hub and deployed models, the next step is to get familiar with the user interface of SAP AI Launchpad, the application through which the generative AI hub is accessed. Working in this environment is a great way to explore multiple models and enhance your day-to-day productivity, and it also provides robust features for managing your prompts and LLM application lifecycles, helping you address complex business problems more effectively.

This lesson will provide you with an overview of the SAP AI Launchpad, with a focus on the generative AI hub. You will see how the features of the generative AI hub provide a productive environment for using multiple models, managing your prompts, and orchestrating workflows all in one place in a secure and reliable manner.

SAP AI Launchpad Interface

SAP AI Launchpad is a multi-tenant software as a service (SaaS) application on SAP BTP. You can use SAP AI Launchpad to manage AI use cases, known as scenarios, across multiple instances of AI runtimes, for example, SAP AI Core.

Once you gain access to SAP AI Launchpad, you will see an interface containing the following main apps.

Image of SAP AI Launchpad

Note

Some of these apps are only accessible to administrators and developers in your organization. As a business user, you will access and use the generative AI hub, described later in this lesson.

Workspaces: This app allows technical users to manage API connections and organize resources. Each API connection represents a unique tenant, which can further contain multiple resource groups for streamlined generative AI hub access management.

Generative AI hub: This is the core application we will focus on. It provides all the necessary functionalities for working with generative AI, including managing models, developing prompts, and overseeing the lifecycle of your LLM applications.

SAP AI Core Administrator: This app is used by administrators and developers to manage underlying infrastructure components such as Git repositories, application deployments, Docker registry secrets, and various connection secrets. It ensures secure and authorized access to the generative AI hub's capabilities.

ML Operations: This app is used to manage scenarios, which can be used to create executables and configurations. These executable elements can then be used to deploy models and the orchestration service in the generative AI hub. The app also supports bringing in your own datasets, models, result sets, and other artifacts to manage LLM application prototyping and lifecycle.

Generative AI hub Interface

The generative AI hub within SAP AI Launchpad offers a suite of interfaces, including model library, chat, prompt editor, prompt management, orchestration, and administration.

Image of Model Library screenshot

Model Library

The Model Library shows the list of all the available models in the generative AI hub and serves as a resource for selecting appropriate models. It provides comprehensive information to help you decide which model best fits your use case, considering factors like performance, cost, and specialization.

It provides the following modes for selecting a model.

Catalog Mode:

Model Library - catalog selected

This mode shows all the available models with their metadata. Use filters to refine your selection or search for a model by name. A model card is displayed when you select a model.

Image of Gemini 2.5 Flash interface

The Model Card offers detailed information, including data input types, potential costs, and relevant performance metrics.

Image of GPT-4.1 Model Deployment running

This is where you can also initiate deployment or check the status of a deployed model.

Note

For quick testing, use the model in the Chat or Prompt Editor.

Leaderboard Mode:

Image of Leaderboard Mode

This mode shows model scores across a range of benchmarks. You can apply filters to narrow down your options, search by name, or reorder the list based on specific benchmarks.

Image of Model - Leaderboard scores

You can also compare the context window and associated costs across multiple models and select the best model for your use case. Hover over each benchmark or parameter to understand its significance and make an informed decision. Once you select a model, you will see the same model card as in Catalog mode.

Chart Mode:

Image of Chart Mode

This mode provides a visual comparison between any two benchmarks or parameters.

Image of Chart Mode visual comparison

For example, you can plot context window against output token cost and select the model with the largest context window and the lowest output token cost. You can then review the model card in detail and select the optimum model for your use case. You can also zoom out of the chart for better analysis. This is one of the many valuable features of the generative AI hub that help you optimize your LLM applications and solve business problems at scale.

For more information, visit the SAP AI Launchpad Help Portal - Model Library.

Use Case Example: Imagine you need an LLM to summarize lengthy internal company reports. In the Model Library, you can compare different models based on their "context window" (how much text they can read at once) and "cost per output token." This helps you quickly identify a model that can handle your large documents efficiently and affordably, ensuring your solution is both efficient and budget friendly.
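The trade-off described above can be pictured as a simple filter over model metadata. The sketch below is plain Python over a hypothetical list of models (the names, context windows, and prices are made up for illustration and do not reflect real Model Library data): it picks the cheapest model whose context window is large enough for the documents at hand.

```python
# Hypothetical model metadata, illustrating the kind of comparison
# the Model Library supports. All names and figures are invented.
models = [
    {"name": "model-a", "context_window": 128_000, "output_cost_per_1k": 0.0300},
    {"name": "model-b", "context_window": 200_000, "output_cost_per_1k": 0.0150},
    {"name": "model-c", "context_window": 32_000,  "output_cost_per_1k": 0.0006},
]

def pick_model(models, min_context, max_output_cost):
    """Return the cheapest model that can read at least `min_context` tokens
    while staying within the given output-cost budget, or None."""
    candidates = [
        m for m in models
        if m["context_window"] >= min_context
        and m["output_cost_per_1k"] <= max_output_cost
    ]
    return min(candidates, key=lambda m: m["output_cost_per_1k"], default=None)

# Long internal reports need ~100k tokens of context, budget 0.02 per 1k output.
best = pick_model(models, min_context=100_000, max_output_cost=0.02)
print(best["name"])  # model-b
```

In the Model Library you perform this comparison visually, through filters, the leaderboard, and the chart; the code only makes the selection logic explicit.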

Chat Interface

Image Selected icon for Configuring Chat Settings

The chat interface serves as a conversational platform for interacting with models, enhancing user experience through personalized interactions. You can engage with models conversationally, with the context of previous messages maintained within the session. This feature enables developers to prototype and build applications that can scale efficiently.

You can select any available model using the Configure Chat Settings (gear) icon.

Image of Configure Chat Settings - Model Settings

Use the select option to open the Model Library and choose the appropriate model for your case.

You can also update the available parameters for each model.

Image of Configure Chat Settings - Chat Context

You can also adjust the context history to include more previous messages in the conversation, and use advanced features like message roles and variables to tailor the conversation to your business needs.
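Conceptually, chat context is a rolling list of role-tagged messages sent to the model with each turn. The sketch below is plain Python, not the SAP SDK; it uses the common system/user/assistant role convention to show how a client might keep a system prompt fixed while trimming older turns once the configured history size is reached.

```python
from collections import deque

class ChatHistory:
    """Keep a system prompt plus the last `max_turns` user/assistant messages."""

    def __init__(self, system_prompt, max_turns=10):
        self.system = {"role": "system", "content": system_prompt}
        # deque with maxlen silently drops the oldest turn when full,
        # mimicking a bounded context history setting.
        self.turns = deque(maxlen=max_turns)

    def add(self, role, content):
        self.turns.append({"role": role, "content": content})

    def payload(self):
        # The context sent to the model: system prompt + trimmed history.
        return [self.system, *self.turns]

history = ChatHistory("You are a helpful SAP assistant.", max_turns=4)
history.add("user", "What is SAP BTP?")
history.add("assistant", "SAP Business Technology Platform is ...")
history.add("user", "And what is SAP AI Core?")
print(len(history.payload()))  # 4 (system prompt + 3 messages)
```

Increasing the context history in the chat settings corresponds to raising `max_turns` here: more of the conversation is carried along, at the cost of consuming more of the model's context window.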

Image of Download of chat

You can download your chat by selecting the Download icon. Text data is saved locally in JSON format; note that images are excluded from downloads.

To copy text data from an individual chat message or response, use the Copy icon. Images cannot be copied.
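Because the download is plain JSON, it is easy to post-process locally. The exact export schema is not documented here, so the snippet below assumes a hypothetical structure (a JSON array of role/content message objects) purely for illustration; the real file produced by SAP AI Launchpad may differ.

```python
import json

# Hypothetical example of a downloaded chat file; the real export
# schema from SAP AI Launchpad may differ from this structure.
raw = """[
  {"role": "user", "content": "Summarize our Q3 report."},
  {"role": "assistant", "content": "Q3 revenue grew 8 percent ..."}
]"""

messages = json.loads(raw)

# Collect just the user's questions, e.g. to reuse them as test prompts.
user_messages = [m["content"] for m in messages if m["role"] == "user"]
print(user_messages[0])  # Summarize our Q3 report.
```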

Use Case Example:

You have an idea for an AI assistant that helps sales teams answer common product questions. You can use the Chat interface to quickly test different LLMs with sample questions ("What are the key features of S/4HANA Cloud?" or "How does our product compare to a competitor's?"). This allows you to rapidly prototype conversational flows and see how well various models understand and respond before building out a full application. This is an example of human-to-AI interaction for direct conversational interaction and testing.

Grounding Management

The grounding management interface is used to manage the lifecycle of data pipelines. These data pipelines provide additional context to LLMs using documents stored in SharePoint or in cloud storage such as Amazon S3.

For more information about grounding techniques, visit the Using Advanced AI Techniques with SAP’s Generative AI Hub course.

Prompt Editor

Image of Prompt Editor

The prompt editor provides a comprehensive environment for creating prompts, selecting models, saving prompts, and configuring parameters.

Key Features:

Image of Message Blocks

Message Blocks: This is a dedicated area for writing prompts for the System, Assistant, and User roles. You can add multiple blocks for different roles within a prompt version.

Image of Variable Definitions and the Model Configuration information

Variable Definition: You can add variables to your prompts to make them flexible and customizable across situations. You can reuse a prompt by supplying default values or current values for its variables. When working with the SDK, variables let you inject information at runtime, either through user interfaces or system workflows.

Model Selection: You can select from various models available in the generative AI hub and modify their parameters directly within the editor to achieve the best results tailored to your specific use cases.

Image of Metadata tags and notes capabilities

Metadata: You can add organizational metadata, such as tags and notes, to categorize and manage your prompts effectively.

Image of Mail Categories

Save Prompt and Version Management: The Prompt Editor allows you to save prompts and even assign a collection name to organize them. This ensures traceability and allows for iterative refinement.

Image of Response block

Response: You can see the response and save the results. You can continue editing the prompt in the same window and save multiple versions for future reference.

Use Case Example: Your business needs to generate personalized marketing emails for different customer segments. Using the Prompt Editor, you can craft a master prompt with variables for customer name, product interests, and promotional offers. You can test and save multiple versions of this prompt, each catering to a different segment, ensuring the LLM consistently generates high-quality, targeted emails. This shows how the generative AI hub supports human-to-software interactions via AI.
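The variable mechanism in this use case can be pictured with plain Python templating. The sketch below does not use the SAP SDK or the Prompt Editor's actual syntax; the variable names, defaults, and template text are illustrative. It shows the essential idea: one master prompt, default values, and runtime overrides per customer segment.

```python
from string import Template

# A master prompt with variables, mirroring the Prompt Editor's
# variable definitions. Template text and defaults are illustrative.
master_prompt = Template(
    "Write a short marketing email for $customer_name, who is "
    "interested in $product. Mention the offer: $offer."
)

defaults = {
    "customer_name": "valued customer",
    "product": "SAP S/4HANA Cloud",
    "offer": "10 percent off the first year",
}

def render(overrides=None):
    """Fill the template with defaults, overridden by runtime values."""
    values = {**defaults, **(overrides or {})}
    return master_prompt.substitute(values)

# Runtime values (e.g. from a CRM lookup) override the defaults.
print(render({"customer_name": "Acme Corp"}))
```

In the generative AI hub, the same pattern plays out through the Prompt Editor's variable definitions and default/current values, with the SDK supplying the runtime values.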

This interface is the go-to place for creating, managing, and retrieving prompts for prototyping and prompt lifecycle management.

This interface in the AI hub also supports human-to-AI and software-to-AI interactions. You can use direct prompts or feed software-generated reports for further analysis.

Prompt Management

Once you save a prompt, you can access and reuse it in Prompt Management.

Image of Prompt Management interface

You can see all the versions and models used in the prompt.

Image of Icon to Open Prompt Editor

You can access all the versions and their details and select a specific version to open it in the Prompt Editor.

Use Case Example: A successful prompt was developed to extract key data from invoices for automated processing. In Prompt Management, this validated prompt can be easily found, its history reviewed, and then reused by other development teams across different projects. This ensures consistency, saves development time, and prevents duplication of effort.

Orchestration

This interface is designed for creating and testing orchestration workflows. It enables you to design sophisticated sequences of operations involving LLMs, where prompts can be dynamically populated with various data during inference to execute complex business logic. The next unit will provide more in-depth details about this powerful capability.

Lesson Summary

You've now taken a guided tour of the generative AI hub interface within the SAP AI Launchpad. You've discovered its key components, including the Model Library for model selection, the Chat interface for interactive prototyping, Grounding Management for data integration, the Prompt Editor for crafting and managing prompts, and the Orchestration interface for building complex workflows. This comprehensive understanding of the interface empowers you to efficiently harness the power of LLMs, manage their lifecycle, and build secure, reliable, and high-performing Generative AI applications to drive business value within the SAP ecosystem.