This lesson moves from theory to practice. We will explore a few orchestration workflows, demonstrating how each one's specific sequence of steps transforms a powerful LLM into a reliable, compliant, and integrated enterprise asset. You will see how these services work together in a structured yet highly adaptable pipeline to deliver robust AI solutions.
Orchestration Workflow
The Orchestration Service is most effective when its features work together in a specific order. A single API call can trigger a sophisticated, multi-step process that prepares data for the LLM and secures the output. The standard workflow is structured as follows:

- User Input: The initial query or data, typically originating from an end-user or an upstream application.
- Grounding: Based on the user input, the system searches designated enterprise data repositories, such as the SAP Help Portal, internal documentation, or master data, and retrieves relevant, factual, up-to-date information. This ensures the LLM’s response is fact-based.
- Templating: The retrieved grounded data, the original user input, and predefined system instructions, which can include LLM persona and rules for response generation, are combined into a comprehensive prompt using a pre-configured template.
- Input Masking: This crucial step scans the templated prompt for Personally Identifiable Information (PII) or other sensitive data (e.g., emails, names, addresses, phone numbers) and pseudonymizes it. This ensures sensitive data is not directly exposed to the LLM.
- Input Filtering: The masked prompt is then scanned by a content safety service (for example, Azure Content Safety) for any harmful, toxic, or inappropriate content before it is sent to the LLM.
- Input Translation: If the primary LLM processes in a language different from the user’s input, the entire filtered and masked prompt is translated into the LLM’s operating language, for example from German to English.
- LLM Processing: The thoroughly prepared, safe, masked, filtered, and translated prompt is sent to the selected LLM, which processes this input and generates its response (typically in its operating language, such as English).
- Output Filtering: The LLM’s generated response is scanned by a content safety service for any harmful, toxic, or inappropriate content before it is returned to the user or a downstream application.
- Output Translation: If the original user input was in a different language, the final, safe LLM response is translated back into the user’s native language, for example from English to German.
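The nine steps above can be sketched as a single pipeline. Every helper below is a trivial stand-in invented for illustration; none of these function names come from an actual SDK:

```python
# Hypothetical sketch of the fixed orchestration sequence; every helper
# is a trivial stand-in for the real module, not an SDK call.
import re

def ground(query):            # pretend retrieval from a knowledge base
    return "notifications are configured under Settings"

def apply_template(q, ctx):   # combine instructions, context, and input
    return f"You are a support assistant.\nContext: {ctx}\nIssue: {q}"

def mask_pii(text):           # pseudonymize e-mail addresses
    return re.sub(r"\S+@\S+", "[profile-email]", text)

def check_safety(text):       # stand-in for a content-safety scan
    assert "forbidden" not in text

def translate(text, to):      # identity stand-in for the translation hop
    return text

def call_llm(prompt):         # stand-in for the model call
    return "Please open Settings to configure notifications."

def orchestrate(user_input):
    context = ground(user_input)                  # 2. grounding
    prompt = apply_template(user_input, context)  # 3. templating
    prompt = mask_pii(prompt)                     # 4. input masking
    check_safety(prompt)                          # 5. input filtering
    prompt = translate(prompt, to="en")           # 6. input translation
    answer = call_llm(prompt)                     # 7. LLM processing
    check_safety(answer)                          # 8. output filtering
    return translate(answer, to="de")             # 9. output translation
```

The point of the sketch is the ordering: data preparation (grounding, templating, masking) happens before the model call, and safety scanning wraps both sides of it.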
While the order of orchestration modules is predetermined to ensure robust security, data integrity, and compliance, the configuration and activation of each module are highly flexible and tailored to your specific use case. This means you control whether a module is active, how it behaves, and with what parameters, allowing the workflow to be both reliable and adaptable.
For instance, you might configure strict input filtering but looser output filtering or specify different PII entities for masking based on the data being processed. This balance between a fixed structure and dynamic configuration is key to the service’s power.
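One way to picture this split between fixed order and flexible configuration is a per-module settings map. The key names and values here are invented for the sketch and do not match the service's real schema:

```python
# Illustrative per-module configuration; key names are invented for the
# sketch and do not match the real orchestration schema.
workflow_config = {
    "grounding":        {"active": True, "repository": "help.sap.com"},
    "masking":          {"active": True, "entities": ["email", "name", "phone"]},
    "input_filtering":  {"active": True, "strictness": "strict"},
    "translation_in":   {"active": True, "target": "en"},
    "output_filtering": {"active": True, "strictness": "relaxed"},
    "translation_out":  {"active": True, "target": "de"},
}

# The module order is fixed; configuration only decides whether a step
# runs and with what parameters.
active_steps = [name for name, cfg in workflow_config.items() if cfg["active"]]
```

Deactivating a module (for example, both translation steps in an all-English scenario) changes `active_steps` but never the relative order of what remains.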
Use Case 1: Automated Technical Support Response
In this case, you will walk through a detailed example of a workflow customized for a specific need.
- The Business Problem: A customer submits a technical support ticket in German, asking for help configuring notifications in SAP Signavio Process Manager and including their email address. The support system needs to provide an accurate, policy-aligned response, protect sensitive customer data, and seamlessly handle the language barrier. If a full answer can’t be given automatically, it should summarize the issue for a human agent.
- The Orchestration Workflow in Action: The support system receives an email, which triggers this workflow with the customer’s German message.
- User Input: The German support issue: "Betreff: Unterstützung benötigt. Nachricht: Hallo, ich benötige Unterstützung mit SAP Signavio. Insbesondere möchte ich Benachrichtigungen im SAP Signavio Process Manager konfigurieren. Bitte kontaktieren Sie mich unter Jane.Janeson@gmx.net."
- Grounding: The system uses keywords from the German user input, such as "SAP Signavio Process Manager" and "Benachrichtigungen", to query official help.sap.com documentation. Relevant articles on configuring notifications are retrieved as context for the issue.
- Templating: A predefined prompt template is populated. It includes a system instruction: "You are a helpful support assistant. First, check if the provided context answers the issue. If yes, provide an email answer based only on the context. If no, summarize the issue (sentiment, key theme, contact) for a human support team." The template also incorporates the original German support issue and the retrieved English (or mixed language) grounding context.
- Input Masking: The masking module identifies "Jane.Janeson@gmx.net" within the templated prompt and pseudonymizes it (for example, replaces it with [profile-email]).
- Input Filtering: The filtering module scans the masked prompt for any harmful or inappropriate content that might be present in the customer’s message. This filtering can be configured as ‘relaxed’ to reduce its stringency and allow a broader range of input; when relaxed, the system tolerates content that might otherwise be flagged, often to avoid blocking legitimate business-specific terms.
- Input Translation: This module translates the German prompt into English, the LLM’s operating language.
- LLM Processing: The LLM (for example, gpt-5) receives the fully prepared, safe, and masked English prompt. Following the system instruction, it checks whether the grounded documentation answers the issue and generates either an email response based on that context or a summary for the human support team, in English.
- Output Filtering: The filtering.output module scans the LLM’s generated English response to ensure it contains no toxic language, bias, or inappropriate suggestions.
- Output Translation: The English output is translated back to German for consumption by the user.
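The masking step in this ticket flow can be sketched as a reversible pseudonymization, so the support system can restore the real address after LLM processing. This is a simplified illustration with an invented regex and placeholder scheme, not the service's actual algorithm:

```python
import re

def pseudonymize(text):
    """Replace e-mail addresses with a placeholder, keeping a lookup
    table so the address can be restored downstream after LLM processing.
    Simplified illustration, not the real masking algorithm."""
    lookup = {}

    def repl(match):
        placeholder = "[profile-email]"
        lookup[placeholder] = match.group(0)  # remember the original value
        return placeholder

    masked = re.sub(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+", repl, text)
    return masked, lookup

ticket = "Bitte kontaktieren Sie mich unter Jane.Janeson@gmx.net."
masked, lookup = pseudonymize(ticket)
# masked -> "Bitte kontaktieren Sie mich unter [profile-email]."
# lookup -> {"[profile-email]": "Jane.Janeson@gmx.net"}
```

Because the LLM only ever sees the placeholder, the customer's address never leaves the trust boundary, yet the final reply can still be sent to the right person.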
- Business Value Delivered:
- Ensured Privacy & Compliance: The customer’s email address is masked and all content is filtered, aligning with data privacy policies.
- Rapid, Accurate Responses: Grounding the answer in official help.sap.com documentation ensures the response reflects current product behavior rather than guesswork.
- Seamless Language Handling: The LLM’s input and output are translated so the customer interacts entirely in German.
- Efficiency: Tickets that can be fully answered are resolved automatically, while the rest reach human agents already summarized, letting support staff focus on complex cases.
Use Case 2: Internal HR Feedback Analysis
In this case, you will see how an orchestration workflow can be created without some optional modules, simplifying a complex workflow and tailoring it to a specific business need.
- The Business Problem: An internal Human Resources department periodically collects employee feedback via survey forms. These forms are typically submitted in English. The HR team needs to analyze the feedback for common themes, sentiment, and potential policy violations, but it must strictly ensure employee anonymity and filter out any inappropriate content. The final summary and analysis need to be in English for internal reporting.
- The Orchestration Workflow in Action: An HR analyst uploads a batch of English feedback forms, triggering the workflow.
- User Input: Employee feedback forms.
- Grounding: The system uses keywords from the feedback (for example, "benefits," "work-life balance") to search and retrieve relevant company HR policies and internal best practices documents from the HR knowledge base. This provides crucial context for the LLM’s analysis.
- Templating: A predefined prompt template is populated. It includes a system instruction: "You are an HR analyst. Summarize the key themes, sentiment, and actionable insights from the employee feedback. Ensure anonymity and highlight any potential policy violations (based on the provided HR policies). Make sure the output is concise and objective." The template integrates the employee feedback and the retrieved HR policies.
- Input Masking: The masking module scans the templated prompt for any identifying information (for example, employee names, IDs, specific department names if considered sensitive for this context) that might have been inadvertently included in the feedback and pseudonymizes it.
- Input Filtering: The filtering module scans this masked prompt for any harmful or inappropriate content that might be present in the raw feedback. This filtering can be configured as ‘relaxed’ to reduce its stringency and allow a broader range of input; when relaxed, the system tolerates content that might otherwise be flagged, often to avoid blocking legitimate business-specific terms.
- Input Translation: (This module is not configured or is effectively skipped for this use case). Since the user input is already in English and the LLM processes in English, translation is not required.
- LLM Processing: The LLM (for example, gpt-5) receives the fully prepared, safe, and masked English prompt. It analyzes the feedback against the grounded HR policies and generates a summary and thematic analysis in English.
- Output Filtering: The filtering.output module scans the LLM’s generated English summary to ensure it contains no toxic language, bias, or inappropriate suggestions.
- Output Translation: (This module is not configured or is effectively skipped for this use case). Since the desired output is English, no translation back to another language is needed.
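The ‘relaxed’ filtering mentioned in the input-filtering step can be pictured as raising a severity threshold. The scores and mode names below are invented for illustration; real content-safety services define their own severity scales:

```python
# Illustrative severity-threshold model of 'strict' vs 'relaxed' filtering.
# Scores and mode names are invented; real services define their own scales.
THRESHOLDS = {"strict": 2, "relaxed": 4}  # block at or above this severity

def passes_filter(severity: int, mode: str) -> bool:
    """Return True if content with the given severity score may pass."""
    return severity < THRESHOLDS[mode]

# A business-specific phrase scored as mildly sensitive (severity 3)
# passes in relaxed mode but would be blocked in strict mode:
relaxed_ok = passes_filter(3, "relaxed")  # True
strict_ok = passes_filter(3, "strict")    # False
```

The same model explains why the HR workflow relaxes input filtering: blunt employee feedback should reach the analysis step, while the output filter still guards what the LLM produces.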
- Business Value Delivered:
- Ensured Anonymity & Compliance: Sensitive PII is masked, and content is filtered, aligning with HR data privacy policies.
- Rapid Insight Generation: This process quickly identifies trends and issues from large volumes of feedback, enabling faster HR response.
- Grounded Analysis: LLM’s analysis is informed by actual company policies, making it relevant and actionable.
- Efficiency: Automates a labor-intensive review process, allowing HR professionals to focus on strategic initiatives.
- Cost Optimization: By omitting unnecessary translation steps, the workflow reduces processing time and associated costs.
Designing and Deploying Orchestration Workflows
For developers, it's important to understand how these complex workflows are defined and executed.
- Unified API and SDK: Developers use the SAP Cloud SDK for AI to define an orchestration template, which is a JSON file that specifies the sequence of services. This templated approach is key to balancing the workflow’s fixed logical sequence with the dynamic, use-case-specific configurations, enabling developers to precisely define how each step operates for their unique requirements.
- Single API Call: Once the template is deployed, the entire multi-step workflow can be executed with a single, unified API call. The application simply sends the initial query to the Orchestration Service endpoint, which manages the entire chain of events internally. This abstracts away the complexity of calling multiple different services and handling data between them.
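To make the single-call idea concrete, here is a sketch of what a client might send. The payload keys and the template reference are placeholders invented for illustration, not the service's documented schema:

```python
import json

# Hypothetical payload shape: the client supplies only the user input and a
# reference to the deployed template; every key here is a placeholder.
def build_request(user_message: str, template_id: str) -> str:
    payload = {
        "template_id": template_id,  # points at the deployed JSON template
        "input": {"user_message": user_message},
    }
    return json.dumps(payload)

body = build_request("Wie konfiguriere ich Benachrichtigungen?", "support-v1")
# The application would POST `body` once to the orchestration endpoint;
# grounding, masking, filtering, translation, and the LLM call all run
# server-side behind that single request.
```

The calling application never touches the intermediate artifacts (masked prompt, translated text, filter verdicts); it sees only the final, safe response.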
This approach dramatically simplifies development, ensures consistency, and allows for rapid deployment and modification of sophisticated AI-driven business processes.
Lesson Summary
You have now seen the tangible value of the Orchestration Service through practical, real-world examples that follow the official SAP workflow. You understand that starting with Grounding is key to enterprise-grade accuracy, and that safeguards like Content Filtering and Data Masking are essential for security and compliance.
This inherent structure provides enterprise-grade reliability and predictability, while the configuration within each module ensures the workflow can be precisely tailored to diverse business problems. By sequencing these services, you can transform general-purpose LLMs into specialized, data-driven, and highly secure business tools, enabling you to build sophisticated and trustworthy AI solutions.