Interpreting the Different AI Foundation Components

Objective

After completing this lesson, you will be able to explain why SAP Business AI relies on components such as vector engines, knowledge graphs, and model orchestration, and describe how these components work together to provide context, grounding, and reliable AI outcomes in enterprise scenarios.

As Enterprise Architects navigate further into SAP Business AI, one question appears in most enterprise architecture discussions:

Why do we need components like a Vector Engine, Knowledge Graphs, and other components in Business AI?

The short answer is simple: because context is key. A model without access to enterprise context cannot give accurate, business-aware answers.

To better define the use of these components, it is helpful to analyze a common architectural pattern—Retrieval Augmented Generation (RAG)—which many organizations using SAP solutions rely on to ensure GenAI is usable, accurate, and trustworthy. For Enterprise Architects, understanding this pattern is essential because it explains how AI systems can safely use enterprise knowledge without compromising accuracy or governance.

Large Language Models (LLMs) are impressive, but they respond based on what they learned during their training. Because training data can be incomplete, outdated, or entirely missing company-specific knowledge, RAG enhances the LLM's performance by turning every question into an open-book scenario.

Before the LLM responds, it first retrieves relevant, up-to-date information from approved enterprise sources, such as documents, databases, or knowledge bases, and "grounds" its answers in the references. Grounding creates a more reliable and verifiable response, which is less prone to errors or inaccuracies, commonly referred to as hallucinations.

To efficiently accomplish these goals, AI must search across vast amounts of information in milliseconds. Approved sources are broken into text chunks, converted into vectors, and then stored in a format that enables the system to quickly compute the distance between the question vector and each text-chunk vector, thereby identifying the closest match.
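To make the distance computation concrete, here is a minimal, self-contained Python sketch. The chunk names and three-dimensional vectors are invented for illustration; real embedding models produce vectors with hundreds or thousands of dimensions, and a vector engine performs this comparison at scale:

```python
import math

def cosine_similarity(a, b):
    """Similarity of two embedding vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings" of text chunks (illustrative values only).
chunks = {
    "invoice policy": [0.9, 0.1, 0.0],
    "travel guideline": [0.2, 0.8, 0.1],
    "security notice": [0.0, 0.2, 0.9],
}
question = [0.85, 0.15, 0.05]  # embedding of the user's question

# Identify the chunk whose vector lies closest to the question vector.
best = max(chunks, key=lambda name: cosine_similarity(question, chunks[name]))
print(best)  # → invoice policy
```

A production system stores and indexes these vectors so that the nearest-neighbor lookup stays fast even across millions of chunks, which is exactly the role the Vector Engine plays.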

Within SAP’s Reference Architecture, each foundational component has a precise role:

  • SAP HANA Cloud Vector Engine operates as a curated "book" of the organization’s knowledge. It stores the embeddings (vector representations) and enables rapid semantic search to find the most appropriate passages.

  • SAP Knowledge Graph complements this by representing structured business entities and their relationships (for example, customers → invoices → payments), so the AI can retrieve context with the right meaning and connections, not just similar text.

  • SAP GenAI Hub orchestrates retrieval and provides access to the latest LLMs, from SAP and partners, helping to rapidly locate and apply the most relevant context.

In the high-level reference architecture, these three main components operate collectively.

SAP GenAI Hub provides access to various foundation models. The application, such as one created through the SAP Cloud Application Programming Model, interacts with the AI layer, and the necessary context data is stored as vectors in SAP HANA Cloud, ready for retrieval at speed.

The SAP Generative AI Foundation - SAP Business Technology Platform

A detailed view can be found in the SAP Architecture Center - Generative AI on SAP BTP.

For most enterprises, the richest insights reside not in documents but in structured tables. These structured tables are where the traditional, text-oriented RAG approach has limitations. Knowledge graphs bridge this gap by transforming rows and keys into entities, relationships, and semantics that an LLM can understand, navigate, and analyze.

When combined with the RAG approach, knowledge graphs enable the model to retrieve and utilize structured context with significantly greater accuracy, especially for tasks that rely on precise and critical business data.
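As an illustration of how rows and foreign keys become entities and relationships, the following hypothetical Python sketch models the customers → invoices → payments example as (subject, predicate, object) triples and traverses them. The identifiers and predicate names are invented for illustration and do not reflect the SAP Knowledge Graph API:

```python
# Rows and foreign keys re-expressed as (subject, predicate, object) triples.
triples = [
    ("customer:ACME", "has_invoice", "invoice:4711"),
    ("customer:ACME", "has_invoice", "invoice:4712"),
    ("invoice:4711", "settled_by", "payment:P-001"),
]

def neighbors(entity, predicate):
    """Follow one relationship type outward from an entity."""
    return [o for s, p, o in triples if s == entity and p == predicate]

# Traverse customers -> invoices -> payments.
for invoice in neighbors("customer:ACME", "has_invoice"):
    print(invoice, "->", neighbors(invoice, "settled_by"))
```

Because the relationships are explicit, a retrieval step can follow them with precision, which is what lets an LLM answer questions that hinge on exact business connections rather than textual similarity.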

Typical Context - Embedding Flow

A representative embedding-and-retrieval process includes these steps:

  1. Ingest trusted sources.
  2. Create embeddings and store them in the Vector Engine.
  3. Retrieve the most relevant snippets or graph substructures.
  4. Ground the LLM’s prompt with context.
  5. Generate an answer, citing or linking back to the sources when necessary.
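The five steps above can be sketched end-to-end in plain Python. The `embed` and `similarity` helpers below are deliberately trivial stand-ins for a real embedding model and vector engine, and step 5 is represented only by the grounded prompt that would be sent to an LLM:

```python
def embed(text):
    # Trivial letter-count "embedding", a stand-in for a real embedding model.
    return [text.lower().count(c) for c in "abcdefghijklmnopqrstuvwxyz"]

def similarity(a, b):
    # Dot product as a crude similarity score between two vectors.
    return sum(x * y for x, y in zip(a, b))

# Steps 1-2: ingest trusted sources and store their embeddings.
sources = ["Invoices are due within 30 days.", "Travel must be pre-approved."]
store = [(s, embed(s)) for s in sources]

def answer(question, top_k=1):
    q = embed(question)
    # Step 3: retrieve the most relevant snippets.
    ranked = sorted(store, key=lambda item: similarity(q, item[1]), reverse=True)
    context = "\n".join(text for text, _ in ranked[:top_k])
    # Step 4: ground the prompt with the retrieved context.
    # Step 5: an LLM would generate the final answer from this prompt.
    return f"Context:\n{context}\n\nQuestion: {question}"

print(answer("When are invoices due?"))
```

In a production architecture, the embedding and generation calls would go to models reached through SAP GenAI Hub, and the store would be the SAP HANA Cloud Vector Engine rather than an in-memory list.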

SAP further extends these capabilities by turning structured business data into actionable predictive insights. This is where SAP-RPT-1 comes in.

SAP-RPT-1 is a relational pre-trained transformer model, specifically designed to deliver accurate predictive insights from structured enterprise data. SAP-RPT-1 utilizes in-context learning, enabling users to provide data records directly and receive instant, reliable predictions. With SAP-RPT-1, no separate model training is required.

You can experiment with SAP-RPT-1 by using the free-of-cost SAP RPT playground, available through SAP GenAI Hub.

Lesson Summary

Reliable enterprise AI depends on context, grounding, and orchestration. By understanding the role of components such as the Vector Engine, Knowledge Graphs, SAP GenAI Hub, and SAP-RPT-1, you can explain why SAP Business AI goes beyond standalone models. This distinction is critical when discussing AI architecture and adoption, as it highlights how SAP’s AI approach is designed specifically for enterprise-scale, governed, and mission-critical scenarios, not just generic AI experimentation. You are now equipped to describe how these components work together to deliver accurate, trustworthy, and business-aware AI results in real-world enterprise scenarios.