As artificial intelligence rapidly reshapes business landscapes, organizations seek powerful yet secure and integrated ways to harness its potential. For many, that journey begins with understanding the core infrastructure that enables AI-driven innovation.
As a leader in enterprise applications, SAP is committed to infusing intelligent capabilities directly into the core of your business processes through SAP Business AI.
This lesson will introduce you to the fundamental infrastructure that makes this possible: SAP's AI Foundation. You will explore a key component of this foundation, the generative AI hub. You will see its role as your gateway to securely and effectively leverage Large Language Models (LLMs) for enterprise-grade solutions. Whether you're a business leader, a consultant, or a technical expert, understanding this layered approach is crucial for unlocking the full potential of generative AI within your organization.
AI Foundation
AI is a strategic need for organizations. However, as they seek to innovate faster and operate more efficiently, they often face growing complexity in managing AI development, deployment, and governance. This is precisely where SAP's AI Foundation comes in.

The AI Foundation serves as SAP’s AI operating system, enabling organizations to build, extend, and operate custom AI solutions at scale in a secure, responsible, and efficient manner on SAP Business Technology Platform (SAP BTP).
It is the backbone of SAP Business AI, providing the infrastructure, tools, and services needed to orchestrate cutting-edge technology with deep business context.
Why the AI Foundation Matters
Organizations today face several challenges, including high resource demands for onboarding AI models, fragmented tools, and difficulty scaling AI responsibly across regions and regulations. AI Foundation directly addresses these by offering:
- A Unified Development and Orchestration Platform: Providing a central, integrated access point for all AI Foundation tools, simplifying collaboration and governance.
- Built-in Governance and Security: Ensuring ethical, secure, and compliant AI operations from the ground up, significantly reducing risk.
- Seamless Integration: Connecting effortlessly with SAP and non-SAP data, infrastructure, and partner models, allowing for broad applicability and leveraging existing investments.
Core Products and Architecture of AI Foundation

SAP's AI Foundation is built upon a sophisticated, multi-layered architecture designed to support enterprise-grade AI operations and deliver on SAP's core AI promises. It combines well-established components and makes them available through a single entry point: the Unified AI Portal. This portal serves as a centralized access hub for all AI Foundation tools, simplifying collaboration and lifecycle management.
Key products integrated into AI Foundation include:
- Joule Studio: A low-code/no-code environment specifically for building and customizing AI agents and extending Joule skills, empowering developers and business users to innovate quickly.
- Generative AI Hub: This crucial component provides secure and managed access to top-tier LLMs and advanced prompt engineering tools, a core focus of this course. It accelerates AI development by enabling controlled and efficient LLM utilization.
- SAP Document AI: Automates data handling across a wide range of unstructured documents, significantly enhancing the accuracy and quality of data processing within critical business workflows.
- Underlying Technologies:
  - SAP Knowledge Graph: Connects vast amounts of SAP and external data, creating a rich web of contextual intelligence that enables the development of smarter, more relevant AI outcomes.
  - SAP Foundation Model: Produces reliable predictions based on SAP application data, addressing limitations that general-purpose LLMs have with structured business data.

The AI Foundation's architecture is structured across four key layers to ensure robust and scalable AI operations:
- OS Interfaces Layer: This serves as the unified entry point for development, operations, and administration. It is where user-facing tools like the AI Playground and Joule Studio reside, enabling intuitive interaction.
- AI Kernel Layer: This core layer manages AI agents, workloads, models, and their entire lifecycle. It handles critical tasks like resource allocation and scheduling to ensure secure, compliant, and efficient AI operations.
- AI Integration Layer: This layer facilitates seamless connectivity to diverse data sources, various models, and various services. It supports workload management, data integration pipelines, and the sophisticated orchestration of AI models across different environments.
- Peripheral and Data Layer: This foundational layer provides the essential infrastructure by integrating both SAP and non-SAP data sources, leveraging existing IT infrastructure, and connecting to partner models. It also supplies the raw materials and computational power needed for AI processes.
This layered structure ensures that AI Foundation is not just a collection of tools, but a cohesive AI operating system that serves as the backbone for all AI technologies and solutions that SAP internally utilizes. Simultaneously, it empowers customers and developers with a comprehensive suite of tools to build, extend, and run their custom AI solutions and agents at scale. SAP also fosters a strong partner ecosystem through its open approach, allowing seamless integration with leading AI and data partners to leverage best-in-class models and industry expertise.
SAP AI Core
SAP AI Core is a component of the SAP BTP. It enables the management, execution, and operation of AI assets in a standardized, scalable, and hyperscaler-agnostic way.
Resource Groups
SAP AI Core uses resource groups to keep different machine learning resources and tasks separate.
A resource group is a virtual folder for related items inside one SAP AI Core account. When you get started, you automatically get a default resource group. Administrators can add or remove further resource groups using SAP AI Launchpad. Resource groups may be configured according to the specific requirements of individual tenants and their respective use cases.
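The isolation described above is enforced at the API level: calls to the SAP AI Core AI API carry the target resource group in the `AI-Resource-Group` HTTP header. The following minimal sketch shows that idea; the host placeholder, token, and resource group names are illustrative, not values from a real tenant.

```python
# Minimal sketch: scoping SAP AI Core API calls to a resource group.
# The AI API expects the target resource group in the "AI-Resource-Group"
# HTTP header; the base URL and bearer token below are placeholders.

AI_API_BASE = "https://<your-ai-core-host>/v2"  # placeholder host

def scoped_headers(token: str, resource_group: str = "default") -> dict:
    """Build headers that route a request to one resource group."""
    return {
        "Authorization": f"Bearer {token}",
        "AI-Resource-Group": resource_group,  # isolates tenant workloads
        "Content-Type": "application/json",
    }

# Example: list deployments in a team-specific resource group, e.g.
#   requests.get(f"{AI_API_BASE}/lm/deployments",
#                headers=scoped_headers(token, "team-a"))
headers = scoped_headers("dummy-token", "team-a")
print(headers["AI-Resource-Group"])  # -> team-a
```

Because the scoping lives in a header rather than the URL, the same client code can serve multiple tenants simply by switching the resource group value.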
The Generative AI Hub: Powering LLMs within the AI Foundation

The growing landscape of LLMs offers unprecedented potential, but also introduces complexity in model selection, access, and governance. The generative AI hub, part of SAP's AI Foundation within SAP AI Core, is designed to meet these enterprise requirements: it enables systematic, tool-assisted selection, access, and exploration of LLMs, and ensures their output is grounded in business and contextual data.
The generative AI hub is fundamental to SAP's promise of Relevant, Reliable, and Responsible AI when leveraging LLMs:
- Relevant (Grounding in Business Context): The hub enables LLMs to be grounded in your unique business and context data. This means your LLM responses and applications are not generic, but tailored to your organization's real-time information, significantly improving accuracy and direct business applicability. This is made possible through features like grounding powered by the SAP HANA vector engine, which makes it easier to find business documents related to a query and use them as context for the LLM.
- Reliable (Trusted Access and Performance): It consolidates trusted LLM and foundation model access, reducing dependency on a single vendor by supporting diverse models from multiple providers. This ensures the continuous functionality of your AI applications, with features like automatic and manual upgrade options for model versions.
- Responsible (Security and Compliance by Design): The generative AI hub ensures compliance and trustworthiness through built-in governance, security measures, and ethical considerations. It includes critical functionality such as content filtering and data masking to prevent risks like prompt injection and jailbreaking, and it adheres to strict security standards such as SOC 2, NIST, and ISO certifications. Your data is fully protected and isolated in the generative AI hub: SAP does not share customer data with external vendors for model training, customer data is never accessible to other customers, and SAP uses enterprise-grade LLMs that do not store API input or output.
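To make the data masking idea concrete, the toy function below pseudonymizes email addresses before a prompt would be sent to an external LLM. This is purely illustrative: in the generative AI hub, masking is a managed orchestration module, not custom regex code, and the pattern and placeholder token here are assumptions.

```python
import re

# Illustrative sketch only: in the generative AI hub, data masking is a
# managed orchestration module. This toy function merely shows the idea of
# replacing sensitive values before a prompt reaches an external LLM.

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[A-Za-z]{2,}")

def mask_prompt(prompt: str) -> str:
    """Pseudonymize email addresses so the LLM never sees raw PII."""
    return EMAIL.sub("MASKED_EMAIL", prompt)

masked = mask_prompt("Summarize the complaint from jane.doe@example.com.")
print(masked)  # -> Summarize the complaint from MASKED_EMAIL.
```

The real masking module can also restore (unmask) values in the model's response, so downstream consumers still see the original identifiers while the LLM never does.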
Key Capabilities of the Generative AI Hub
The generative AI hub covers the entire lifecycle of generative AI applications:
- Access to multiple models: Provides streamlined access to frontier AI models from various providers and flexible compute resource allocation. This ensures you can easily connect to the best LLMs for your specific use cases.
- Model agnostic development: You can easily switch between models, adapt orchestration configurations on the fly, and manage prompt variants for different LLMs. This means you can drive AI adoption at your own pace, leveraging new and improved models without disrupting existing solutions or migrating to other services.
- Exploration and Development: Offers a comprehensive toolset for building custom AI solutions. This includes a prompt editor, robust prompt management (registry, libraries), and Software Development Kits (SDKs). The integrated AI playground and chat environment allow for secure exploration of different models, metadata, and parameter changes to identify the best-fitting technology.
- SDKs and Client Libraries: Utilize developer-oriented tools, including robust SDKs and client libraries, to efficiently integrate generative AI into your applications and accelerate the development of bespoke AI solutions. SAP Cloud SDK for AI is the official SDK for SAP AI Core, the generative AI hub, and orchestration. It provides model access, enabling the integration of the generative AI hub capabilities with common frameworks like SpringAI, Langchain, and Langgraph.
- Orchestration: Facilitates coordinating and managing various AI components, including LLMs. It enables the execution of multiple AI models, manages data flow, optimizes computational resources, integrates features like content moderation and data masking, and streamlines end-to-end AI workflows.
- Deployment and Delivery: Enables efficient deployment and delivery of AI models and applications, ensuring they are operational and accessible to users. You can also bring your own models (for example, fine-tuned open-source models or proprietary models) and deploy them.
- Governance: Implements robust policies and procedures to manage AI development, deployment, and operation in compliance with regulatory and organizational standards. Features include comprehensive logging for auditing, metering, monitoring, multi-tenancy support, and clear roles and responsibilities.
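Model-agnostic development and orchestration, as described above, boil down to keeping the model choice in configuration rather than in application code. The sketch below shows that shape; the field names (`templating`, `llm`, `model_name`) and model identifiers are illustrative assumptions, not the exact orchestration schema.

```python
# Hedged sketch of model-agnostic orchestration: the exact configuration
# schema is defined by the orchestration service; the field names below
# (templating, llm, model_name) are illustrative assumptions.

def orchestration_config(model_name: str, template: str) -> dict:
    """Bundle prompt template and model choice into one config object."""
    return {
        "templating": {"template": [{"role": "user", "content": template}]},
        "llm": {"model_name": model_name, "model_params": {"temperature": 0.2}},
        # Content filtering / data masking modules could be added here
        # without touching any calling code.
    }

# Switching models is a one-line change; prompts and modules stay the same.
cfg_a = orchestration_config("gpt-4o", "Classify this ticket: {{?ticket_text}}")
cfg_b = orchestration_config("gemini-2.5-pro", "Classify this ticket: {{?ticket_text}}")
print(cfg_b["llm"]["model_name"])  # -> gemini-2.5-pro
```

Because the model is just a field in the configuration, A/B-testing a new LLM or rolling one back does not require code changes or redeployment of the consuming application.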
Support for the Model Lifecycle
The generative AI hub supports different approaches for using AI models:
Self-Hosted Models: Organizations can deploy their own models (such as fine-tuned open-source or proprietary models) and manage model artifacts, dependencies, and the entire lifecycle. This includes overseeing infrastructure, scaling, security, compliance, monitoring, costs, updates, versioning, and performance.
SAP Managed Models: This approach provides pre-integrated, managed access to various LLMs from third-party providers (such as Azure OpenAI GPT-5, Google Gemini 2.5 Pro, AWS Claude Sonnet 4, and others). Users do not need to obtain separate licenses for these LLMs, as SAP manages commercial agreements and technical integration. SAP is also responsible for data privacy, security, and confidentiality requirements concerning the LLMs accessed via the generative AI hub, covering related operational activities.
LLMs are subject to ongoing changes; providers release new versions, retire older ones, and may adjust aspects such as performance and cost. As a result, the specific LLM version an application uses may not always remain available.
The generative AI hub ensures that these LLM version changes do not impact your applications by supporting both automatic and manual upgrades to the latest version of a particular LLM. You can monitor model version deprecation dates and choose an auto-upgrade or manual upgrade strategy to maintain continuous service while benefiting from the latest advancements.
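Teams that prefer manual upgrades typically watch the published deprecation dates and act before a version retires. A minimal policy check might look like the sketch below; the dates and the 30-day lead time are illustrative, and real deprecation dates come from the model catalog, not from code.

```python
from datetime import date

# Sketch of a manual-upgrade policy check. The lead time and the dates used
# below are illustrative; actual deprecation dates are published per model
# version in the generative AI hub's model catalog.

def needs_upgrade(deprecation: date, today: date, lead_days: int = 30) -> bool:
    """Flag a deployment whose model version retires within lead_days."""
    return (deprecation - today).days <= lead_days

# Version retiring in 16 days -> time to plan the upgrade.
print(needs_upgrade(date(2025, 7, 1), date(2025, 6, 15)))  # -> True
```

With auto-upgrade enabled this check is unnecessary, at the cost of giving up control over exactly when a new model version starts serving your traffic.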
SAP Hosted Models: These models are hosted and managed by SAP in its own infrastructure, ensuring seamless integration with SAP systems and applications. SAP handles all aspects of security, compliance, performance optimization, and operational management, allowing organizations to leverage AI capabilities without managing infrastructure or technical complexities. These models are pre-configured for use within the SAP ecosystem, providing reliable and secure AI-driven solutions tailored to business needs.
For the latest information on the models available in the generative AI hub, refer to SAP Note 3437766.
Access Generative AI hub
To get started with the generative AI hub, users must establish access by setting up their SAP BTP global account, provisioning SAP AI Core, and ensuring the appropriate roles and permissions are assigned. The generative AI hub operates within SAP AI Core and is primarily accessed through the SAP AI Launchpad interface or programmatically via APIs and SDKs. With access secured, users can deploy LLMs from various leading providers, configuring essential parameters such as model type and version.
Each deployed LLM is provided with a secure endpoint for integration with enterprise applications, simplifying consumption and management with support for version monitoring and upgrades. This foundational process allows organizations to quickly harness generative AI for business innovation within their SAP environment.
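Consuming such a deployment endpoint amounts to POSTing a chat request with an OAuth token and a resource-group header. The payload builder below mirrors an OpenAI-style chat API as a sketch; the URL pattern in the comment and the payload field names are assumptions, not the exact SAP contract.

```python
import json

# Illustrative sketch: each deployment in the generative AI hub exposes a
# secure inference endpoint. The payload shape below mirrors an OpenAI-style
# chat API and is an assumption, not the exact SAP contract.

def chat_payload(user_msg: str, max_tokens: int = 256) -> str:
    """Serialize a single-turn chat request body."""
    return json.dumps({
        "messages": [{"role": "user", "content": user_msg}],
        "max_tokens": max_tokens,
    })

# A client would POST this body to the deployment URL with an OAuth bearer
# token and the AI-Resource-Group header, along the lines of:
#   POST {ai_api_base}/inference/deployments/{deployment_id}/...
body = chat_payload("Summarize open sales orders for customer 1000.")
print(json.loads(body)["max_tokens"])  # -> 256
```

In practice the SAP Cloud SDK for AI wraps this plumbing (authentication, headers, endpoint resolution), so application code rarely constructs these requests by hand.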
Lesson Summary
You've now explored the foundational infrastructure that enables enterprise-grade AI within the SAP ecosystem. You understand how SAP's AI Foundation acts as the operating system that enables the Build, Extend, and Run of these solutions with built-in relevance, reliability, and responsibility. The generative AI hub is a central component within this foundation. It streamlines secure access to frontier LLMs, provides essential tools for development and deployment, and ensures that your generative AI applications are grounded in your unique business data, delivering trustworthy, context-aware, and high-value outcomes for your organization. It supports model lifecycle management for your LLM applications to ensure business continuity along with the latest model advancements.