Implementing Prompt Engineering Techniques

Objective

After completing this lesson, you will be able to design a systematic approach to developing and evaluating prompt engineering techniques, starting from a simple baseline.

In this lesson, you will discover how to improve the intelligence and precision of LLM responses through advanced prompt engineering techniques. Building upon the baseline evaluations established previously, you’ll learn to implement powerful strategies like Few-shot Prompting and Meta-prompting using the SAP Cloud SDK for AI, and observe their impact on improving the quality and accuracy of your generative AI applications.

Few-Shot Prompting

Let's implement few-shot prompting and then evaluate the results to see the improvement over the baseline.

We use the following code:

Python
prompt_10 = Template(
    messages=[
        SystemMessage(
            """You are an intelligent assistant. Your task is to extract and categorize messages.

Here are some examples:

{{?few_shot_examples}}

Use the examples when extracting and categorizing the following message.

Extract and return a json with the following keys and values:
- "urgency" as one of {{?urgency}}
- "sentiment" as one of {{?sentiment}}
- "categories" list of the best matching support category tags from: {{?categories}}

Your complete message should be a valid json string that can be read directly and only contain the keys mentioned in the list above. Never enclose it in ```json...```, no newlines, no unnecessary whitespaces."""
        ),
        UserMessage("{{?input}}"),
    ]
)

import random

random.seed(42)
k = 3
examples = random.sample(dev_set, k)

example_template = """<example>
{example_input}

## Output
{example_output}
</example>"""

examples = '\n---\n'.join(
    [
        example_template.format(
            example_input=example["message"],
            example_output=json.dumps(example["ground_truth"]),
        )
        for example in examples
    ]
)

f_10 = partial(send_request, prompt=prompt_10, few_shot_examples=examples, **option_lists)
response = f_10(input=mail["message"])

overall_result["few_shot--llama3.1-70b"] = evalulation_full_dataset(test_set_small, f_10)
pretty_print_table(overall_result)

The code aims to create a prompt template to extract and categorize messages according to their urgency, sentiment, and support category tags. By using randomly selected examples from a development set, it generates a formatted few-shot learning prompt. The prompt is sent to a language model to process and categorize a given input message, and the overall performance of the model is then evaluated and displayed in a table format.

Here’s an expanded explanation for a few parts of the code:

  1. Setting the Random Seed: It sets a random seed using "random.seed(42)" to ensure that the random sampling of the examples is reproducible. This helps in maintaining consistency in experiments and evaluations.
  2. Sampling Examples: The variable "k" is set to 3, indicating the number of examples to sample from the "dev_set" dataset. The "random.sample(dev_set, k)" function selects three random examples from the development set.
  3. Formatting Examples: The selected examples are formatted into a template "example_template". Each example includes the input message and the expected output in JSON format. This formatted string is then joined using "\n---\n" to create a cohesive set of examples.
  4. Partial Function Application: The "partial" function is used to bind the generated prompt and examples to the "send_request" function, creating a function "f_10" that can be called with just the input message. This streamlines the process of sending requests to the model with the necessary context.
  5. Sending Request and Evaluating: The script sends the request using "f_10(input=mail["message"])" with the input message from "mail["message"]". The result is stored and evaluated against a small test dataset "test_set_small". The evaluation results are stored in "overall_result["few_shot--llama3.1-70b"]".
  6. Output Display: Finally, the "pretty_print_table(overall_result)" function is used to display the evaluation results in a formatted table, making it easier to interpret the results.
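To make step 4 concrete, here is a minimal, self-contained sketch of how "functools.partial" pre-binds arguments. Note that "send_request_stub" is a hypothetical stand-in for the SDK's "send_request" helper, used only for illustration:

```python
from functools import partial

def send_request_stub(prompt, input, **context):
    # Hypothetical stand-in: just echoes what it was called with.
    return f"prompt={prompt} input={input} context_keys={sorted(context)}"

# Bind everything except the per-message input, as the lesson code does for f_10.
f_10_like = partial(send_request_stub, prompt="P10", few_shot_examples="<examples>")

print(f_10_like(input="customer message"))
# → prompt=P10 input=customer message context_keys=['few_shot_examples']
```

Each call to "f_10_like" now only needs the message itself; the prompt template and the few-shot examples travel along automatically.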

Response Example:

Code Snippet
                        is_valid_json  correct_categories  correct_sentiment  correct_urgency
=============================================================================================
basic--llama3.1-70b            100.0%               83.5%              30.0%            70.0%
few_shot--llama3.1-70b         100.0%               84.0%              50.0%            90.0%

This is the output for evaluation after implementing few-shot prompting.

You can see improvement in sentiment and urgency assignment.

We established a baseline earlier, and now we can evaluate and compare the results of the refined prompts with the baseline using the test data.
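The internals of the notebook's evaluation helper ("evalulation_full_dataset") are not shown in this lesson. As a rough, hypothetical sketch, such a helper might loop over the test set, parse each model reply as JSON, and compute per-field accuracy (function and field names here are illustrative assumptions):

```python
import json

def evaluate_dataset(test_set, predict):
    """Hypothetical sketch of a per-field evaluation loop."""
    metrics = {"is_valid_json": 0, "correct_sentiment": 0, "correct_urgency": 0}
    for item in test_set:
        raw = predict(input=item["message"])
        try:
            pred = json.loads(raw)
            metrics["is_valid_json"] += 1
        except json.JSONDecodeError:
            continue  # invalid JSON counts against every metric
        gt = item["ground_truth"]
        metrics["correct_sentiment"] += pred.get("sentiment") == gt["sentiment"]
        metrics["correct_urgency"] += pred.get("urgency") == gt["urgency"]
    n = len(test_set)
    return {k: 100.0 * v / n for k, v in metrics.items()}

# Tiny self-contained check with a fake model:
data = [{"message": "m", "ground_truth": {"sentiment": "negative", "urgency": "high"}}]
fake = lambda input: '{"sentiment": "negative", "urgency": "low"}'
print(evaluate_dataset(data, fake))
# → {'is_valid_json': 100.0, 'correct_sentiment': 100.0, 'correct_urgency': 0.0}
```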

Meta-prompting

Here we'll implement meta-prompting: we use a strong model to generate detailed labeling guides for urgency, sentiment, and the support categories, and then embed those guides in the classification prompt.

We use the following code:

Python
example_template_metaprompt = """<example>
{example_input}

## Output
{key}={example_output}
</example>"""

prompt_get_guide = Template(
    messages=[
        SystemMessage(
            """Here are some examples:
---
{{?examples}}
---
Use the examples above to come up with a guide on how to distinguish between {{?options}} {{?key}}.

Use the following format:
```
### **<category 1>**
- <instruction 1>
- <instruction 2>
- <instruction 3>

### **<category 2>**
- <instruction 1>
- <instruction 2>
- <instruction 3>
...
```

When creating the guide:
- make it step-by-step instructions
- Consider that some labels in the examples might be incorrect
- Avoid including explicit information from the examples in the guide

The guide has to cover: {{?options}}
"""
        ),
        UserMessage("{{?input}}"),
    ]
)

guides = {}
for i, key in enumerate(["categories", "urgency", "sentiment"]):
    options = option_lists[key]
    selected_examples_txt_metaprompt = '\n---\n'.join(
        [
            example_template_metaprompt.format(
                example_input=example["message"],
                key=key,
                example_output=example["ground_truth"][key],
            )
            for example in dev_set
        ]
    )
    guides[f"guide_{key}"] = send_request(
        prompt=prompt_get_guide,
        examples=selected_examples_txt_metaprompt,
        input=selected_examples_txt_metaprompt,
        key=key,
        options=options,
        _print=False,
        _model='gpt-4o',
    )

print(guides['guide_urgency'])

This code generates step-by-step guides for different categories—like "categories," "urgency," and "sentiment"—from labeled examples in a dataset.

It creates tailored guides for distinguishing between categories, urgency, and sentiment in text data. It formats examples using a specific template, then sends these examples to a model for generating step-by-step instructions. The guides help users distinguish between these categories based on patterns in the provided examples.

Detailed explanation:

  1. Template Definitions:

    • "example_template_metaprompt": Defines a template to format examples, specifying how to structure input and output within an example.
    • "prompt_get_guide": Outlines a prompt format to request the generation of a guide based on formatted examples. It also specifies the format and requirements for the guide, including making it a step-by-step instruction, accounting for possible incorrect labels, and avoiding explicit replication of the examples.
  2. Guide Preparation:

    • The script iterates over three keys: "categories", "urgency", and "sentiment".
    • For each key, it retrieves relevant options from "option_lists".
  3. Example Selection and Formatting: It formats examples from "dev_set" using the predefined template for each key, embedding the input message and corresponding ground truth.

  4. Guide Generation:

    • It sends a formatted prompt along with the examples to a model (gpt-4o), requesting the generation of a guide for distinguishing between the specified options for each key.
    • It stores the generated guides in a dictionary (guides), with each guide associated with its respective key (for example, "guide_categories", "guide_urgency", "guide_sentiment").

This process ensures that comprehensive and accurate instruction guides are generated for different classification tasks, facilitating the correct categorization of text data.

The last line of the code prints the guide for urgency.

You will see the guide describing rules for each urgency category that can be used in a prompt.

We use the following code to utilize these guides in a prompt.

Python
prompt_12 = Template(
    messages=[
        SystemMessage(
            """You are an intelligent assistant. Your task is to classify messages.

This is an explanation of `urgency` labels:
---
{{?guide_urgency}}
---

This is an explanation of `sentiment` labels:
---
{{?guide_sentiment}}
---

This is an explanation of `support` categories:
---
{{?guide_categories}}
---

Given the following message, extract and return a json with the following keys and values:
- "urgency" as one of {{?urgency}}
- "sentiment" as one of {{?sentiment}}
- "categories" list of the best matching support category tags from: {{?categories}}

Your complete message should be a valid json string that can be read directly and only contain the keys mentioned in the list above. Never enclose it in ```json...```, no newlines, no unnecessary whitespaces.
"""
        ),
        UserMessage("{{?input}}"),
    ]
)

f_12 = partial(send_request, prompt=prompt_12, **option_lists, **guides)
response = f_12(input=mail["message"])

The code updates the system role in the prompt for classifying messages based on urgency, sentiment, and support categories by utilizing predefined guides generated through the meta-prompt code. It then uses a partial function to send this prompt as a request with specific options and guides. Finally, it processes an email message to extract and return these classifications in a JSON format.

Evaluate this prompt and its response, using the following code:

Python
overall_result["metaprompting--llama3.1-70b"] = evalulation_full_dataset(test_set_small, f_12)
pretty_print_table(overall_result)

You can get the following output:

Code Snippet
                             is_valid_json  correct_categories  correct_sentiment  correct_urgency
==================================================================================================
basic--llama3.1-70b                 100.0%               83.5%              30.0%            70.0%
few_shot--llama3.1-70b              100.0%               84.0%              50.0%            90.0%
metaprompting--llama3.1-70b         100.0%               90.0%              30.0%            95.0%

Now, we see that accuracy for urgency and categories has improved; however, sentiment accuracy has fallen back to the baseline level.

Combining Meta-prompting and Few-shot Prompting

We can combine meta-prompting and few-shot prompting using the following code:

Python
prompt_13 = Template(
    messages=[
        SystemMessage(
            """You are an intelligent assistant. Your task is to classify messages.

Here are some examples:
---
{{?few_shot_examples}}
---

This is an explanation of `urgency` labels:
---
{{?guide_urgency}}
---

This is an explanation of `sentiment` labels:
---
{{?guide_sentiment}}
---

This is an explanation of `support` categories:
---
{{?guide_categories}}
---

Given the following message, extract and return a json with the following keys and values:
- "urgency" as one of {{?urgency}}
- "sentiment" as one of {{?sentiment}}
- "categories" list of the best matching support category tags from: {{?categories}}

Your complete message should be a valid json string that can be read directly and only contain the keys mentioned in the list above. Never enclose it in ```json...```, no newlines, no unnecessary whitespaces.
"""
        ),
        UserMessage("{{?input}}"),
    ]
)

f_13 = partial(send_request, prompt=prompt_13, **option_lists, few_shot_examples=examples, **guides)
response = f_13(input=mail["message"])

This code defines a template for an intelligent assistant to classify messages based on urgency, sentiment, and support categories. It uses partial application to customize the request handling with specific examples and guidelines, then processes the input message to return a structured JSON response. This aids in accurate and efficient message classification.

It combines the few-shot examples with the guides generated during meta-prompting.

Evaluate this prompt and its response using the following code:

Python
overall_result["metaprompting_and_few_shot--llama3.1-70b"] = evalulation_full_dataset(test_set_small, f_13)
pretty_print_table(overall_result)

You will receive the following output:

Code Snippet
                                          is_valid_json  correct_categories  correct_sentiment  correct_urgency
===============================================================================================================
basic--llama3.1-70b                              100.0%               83.5%              30.0%            70.0%
few_shot--llama3.1-70b                           100.0%               84.0%              50.0%            90.0%
metaprompting--llama3.1-70b                      100.0%               90.0%              30.0%            95.0%
metaprompting_and_few_shot--llama3.1-70b         100.0%               88.5%              50.0%            90.0%

Now, we see that accuracy for almost all fields is similar to or slightly lower than the best individual technique. In addition, the combined prompt is longer and therefore more expensive to run.

Note

You may get a slightly different response to the one shown here and in all the remaining responses of models shown in this learning journey.

When you execute the same prompt on your machine, an LLM produces varying outputs due to its probabilistic nature, temperature setting, and nondeterministic architecture, leading to different responses even with slight setting changes or internal state shifts.

Evaluation Summary

We need to consider the overall accuracy and quality of a model along with its cost and scale.

At times, smaller models and simpler techniques may give better results.

In the preceding output, we can see that few-shot prompting delivers near-optimal performance with a less expensive prompt.
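To make the trade-off concrete, the following sketch computes the mean accuracy of each technique across the three content fields reported in the table above ("is_valid_json" is 100% for every technique, so it is excluded from the mean). The combined prompt edges out few-shot only marginally on the raw mean, while being considerably longer and more expensive:

```python
# Accuracies copied from the evaluation table above.
results = {
    "basic":                  {"categories": 83.5, "sentiment": 30.0, "urgency": 70.0},
    "few_shot":               {"categories": 84.0, "sentiment": 50.0, "urgency": 90.0},
    "metaprompting":          {"categories": 90.0, "sentiment": 30.0, "urgency": 95.0},
    "metaprompting_few_shot": {"categories": 88.5, "sentiment": 50.0, "urgency": 90.0},
}

# Mean accuracy per technique, rounded to one decimal place.
means = {name: round(sum(m.values()) / len(m), 1) for name, m in results.items()}
print(means)
# → {'basic': 61.2, 'few_shot': 74.7, 'metaprompting': 71.7, 'metaprompting_few_shot': 76.2}
```

Few-shot alone reaches 74.7% mean accuracy versus 76.2% for the combined prompt, a small gain for a substantially larger prompt.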

Let's recap what we have done to solve the business problem so far:

  1. We created a basic prompt in SAP AI Launchpad using an open-source model.
  2. We recreated the prompt using SAP Cloud SDK for AI (Python) to scale the solution.
  3. We created a baseline evaluation method for the simple prompt.
  4. Finally, we used techniques like few shot and meta-prompting to further enhance the prompts.
  5. The results show improvement in the quality of prompt responses after implementing advanced techniques.

Lesson Summary

You’ve successfully implemented and evaluated key prompt engineering techniques: Few-shot Prompting to provide the LLM with context-rich examples, and Meta-prompting to generate explicit instructions and guides for consistent behavior. You also explored combining these methods. Through iterative evaluation, you’ve witnessed how these techniques, used with the SAP Cloud SDK for AI, can significantly enhance the accuracy and quality of LLM responses, moving your solutions closer to business-ready applications, while also understanding the trade-offs in terms of complexity and cost.

Exercise

In the exercises, you will learn to significantly enhance prompt effectiveness and context understanding by implementing advanced techniques like few-shot learning within prompt templates in SAP AI Launchpad.

Finally, you will see how to build secure and reliable AI applications by integrating your refined prompt templates with the orchestration service to implement data privacy and content filtering in workflows in SAP AI Launchpad.

Continuing with the scenario discussed previously, we created basic prompts that assign urgency, sentiment, and categories to customer messages that can be used in software.

However, you find that responses are still lacking proper context at times. You need to refine prompts to achieve better results.

You can refine the prompts using techniques like one-shot and few-shot prompting.

One-shot prompting is the more straightforward of the two. It involves providing the LLM with a single input-output example, along with the instruction and all the necessary context, in one go.

Few-shot prompting is a significantly more powerful technique that involves providing the LLM with a few (typically two to five) examples of input-output pairs within the prompt itself. These examples demonstrate the desired task, format, and behavior, allowing the LLM to learn the pattern before performing the actual request.
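As an illustrative sketch (not the exact template used in the exercise), a few-shot prompt can be assembled programmatically by concatenating the instruction, the worked examples, and the query. The function name below is hypothetical; the tag names mirror the `<ExampleInput>`/`<ExampleOutput>` convention used in Task 1:

```python
def build_few_shot_prompt(instruction, examples, query):
    """Assemble a few-shot prompt: instruction first, then worked
    input/output examples, then the actual query."""
    shots = "\n".join(
        f"<ExampleInput>\n{inp}\n</ExampleInput>\n<ExampleOutput>\n{out}\n</ExampleOutput>"
        for inp, out in examples
    )
    return f"{instruction}\n{shots}\n<UserQuery>\n{query}\n</UserQuery>"

prompt = build_few_shot_prompt(
    "Classify the complaint email.",
    [("Leaky faucet in Apt 301", '{"Urgency": "High"}')],
    "AC struggling in the lobby",
)
print(prompt)
```

The order matters: the examples sit between the instruction and the query, so the model sees the desired pattern immediately before producing its own output.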

Task 1: Implement Few-Shot Prompting using Prompt Templates

We will update the prompt template that you have created earlier with few-shot techniques.

Steps

  1. Ensure that you are logged on to generative AI hub.

  2. Select Prompt Management and then Templates.

  3. Select the All button. You can see your template here. You can also search for your template.

  4. Select the latest version of the template which is 3.0.0. Ensure you select your template and the correct timestamp within it. A good practice is to read the template before using it.

  5. Select the prompt template and then click the Open in Prompt Editor button. Your prompt is ready to use.

  6. Use the following prompt in the User role:

    Code Snippet
    <Instructions>
    Analyze the provided customer email and extract the following details into a JSON object. Ensure all fields are present and correctly typed according to the specifications in <OutputFormat>. Summarize 'Problem_Description' concisely (max 100 words). If any field's value cannot be determined from the email, use 'Unknown' or 'N/A' as appropriate.
    </Instructions>

    <OutputFormat>
    {
      "Complaint_ID": "string (e.g., AUTO-GEN-001)",
      "Complaint_Type": "enum (Plumbing, HVAC, Electrical, Noise, Cleaning, Pest Control, General Maintenance, Other)",
      "Urgency": "enum (High, Medium, Low)",
      "Problem_Description": "string (concise summary, max 100 words)",
      "Affected_Location": "string (e.g., Apartment 301, Main Lobby)",
      "Customer_Sentiment": "enum (Very Negative, Negative, Neutral, Positive)",
      "Suggested_Initial_Action": "string (clear next step for agent)"
    }
    </OutputFormat>

    <ExampleInput>
    Subject: Urgent - Leaky Faucet in Kitchen, Apartment 301

    Dear Facility Management,

    I am writing to report a serious issue in my apartment, 301. The kitchen faucet has been leaking non-stop since last night. It's not just a drip, it's a steady stream, and I'm worried about water damage. I tried to tighten it myself but it didn't help. This is incredibly frustrating, especially since I just moved in last month. Please send someone to fix it immediately.

    Thank you,
    Sarah Jenkins
    </ExampleInput>
    <ExampleOutput>
    {
      "Complaint_ID": "AUTO-GEN-001",
      "Complaint_Type": "Plumbing",
      "Urgency": "High",
      "Problem_Description": "Kitchen faucet in Apartment 301 is leaking continuously since last night, causing concern for water damage. Tenant attempted to fix without success.",
      "Affected_Location": "Apartment 301",
      "Customer_Sentiment": "Very Negative",
      "Suggested_Initial_Action": "Dispatch plumber to Apartment 301 with leaking faucet repair kit immediately."
    }
    </ExampleOutput>

    <ExampleInput>
    Subject: AC not working properly in Main Lobby

    Dear ProCare Support,

    The air conditioning in the main lobby has not been cooling effectively for the past few days. It's making the waiting area very uncomfortable for visitors and staff, especially with the weather getting warmer. It's not completely broken, but definitely struggling. Could someone please take a look at it soon? Thanks.

    Regards,
    Building Manager
    </ExampleInput>
    <ExampleOutput>
    {
      "Complaint_ID": "AUTO-GEN-002",
      "Complaint_Type": "HVAC",
      "Urgency": "Medium",
      "Problem_Description": "Air conditioning in the main lobby is not cooling effectively, causing discomfort for visitors and staff. The unit is struggling but not completely non-functional.",
      "Affected_Location": "Main Lobby",
      "Customer_Sentiment": "Negative",
      "Suggested_Initial_Action": "Schedule HVAC technician to inspect main lobby AC unit within 24-48 hours."
    }
    </ExampleOutput>

    <ExampleInput>
    Subject: Light bulb replacement - Hallway 3rd Floor

    Hi Team,

    Just a quick note that a light bulb in the hallway on the 3rd floor, near apartment 305, seems to have burned out. It's not a critical issue, but it would be great if someone could replace it when convenient. No rush.

    Thanks,
    Resident
    </ExampleInput>
    <ExampleOutput>
    {
      "Complaint_ID": "AUTO-GEN-003",
      "Complaint_Type": "General Maintenance",
      "Urgency": "Low",
      "Problem_Description": "A light bulb in the 3rd floor hallway, near apartment 305, has burned out and needs replacement.",
      "Affected_Location": "3rd Floor Hallway (near Apt 305)",
      "Customer_Sentiment": "Neutral",
      "Suggested_Initial_Action": "Add to general maintenance task list for light bulb replacement during next routine visit."
    }
    </ExampleOutput>

    <UserQuery>
    {{?user_email_placeholder}}
    </UserQuery>

    You can see the <ExampleInput> and <ExampleOutput> tags provide concrete, well-formatted examples of what the LLM should expect as input and what it should produce as output.

  7. Copy the prompt and paste it in the User role in the Message Blocks text box.

  8. Click the Save Template button. The Save Template dialog box is displayed.

  9. Change the Version to 4.0.0.

  10. Click the Save button. The template is saved. You have updated the prompt template with few-shot examples.

Task 2: Use your Prompt Template to Address your Business Problem

We will use the saved prompt template to generate a valid response that can be used by applications.

Steps

  1. Ensure that you are logged on to generative AI hub.

  2. Select Prompt Management and then Templates.

  3. Select the All button. You can see your template here. You can also search for your template.

  4. Select the latest version of the template which is 4.0.0. Ensure that you select your template and correct timestamp within the template. A good practice is to read the template before using it.

  5. Select the prompt template and then click Open in Prompt Editor. Your prompt is ready to use.

  6. Scroll down and then select Variable Definitions.

  7. You need to provide customer messages in this variable. Use the following message:

    Code Snippet
    Subject: Urgent: Ongoing Maintenance Issues at Our Facility

    Dear Support Team,

    I hope this message finds you well. My name is [Sender], and I am the community manager for [Community Name]. I have been overseeing our facility’s operations and maintenance for quite some time now, and I must say, the recent experiences with your maintenance services have been less than satisfactory.

    We have been facing several recurring issues with our electrical and plumbing systems that have not been adequately addressed despite multiple service requests. The lack of timely and effective solutions is causing significant inconvenience to our residents and staff, and it is becoming increasingly difficult to manage the situation.

    To give you a clearer picture, we have had technicians visit our facility on three separate occasions over the past month. Each time, the problem was either temporarily fixed or not resolved at all. This has led to a lot of frustration among our community members, and it is reflecting poorly on our management.

    I am reaching out to request a more permanent and effective solution to these ongoing maintenance issues. We need a thorough inspection and a comprehensive plan to address the root causes of these problems. It is crucial for us to ensure a safe and comfortable environment for everyone in our community.

    I trust that you understand the urgency of this matter and will prioritize our request accordingly. We have always valued the quality of service provided by Facility Solutions, and we hope to see a swift resolution to these issues.

    Thank you for your attention to this matter. I look forward to your prompt response.

    Best regards,
    [Sender]
  8. Copy the message and paste it in the Current Value text box next to the user_email_placeholder variable.

  9. Click the Run button to execute the prompt. A response is generated. You may have to scroll up to see the complete response.


    You can see the response is refined and ready for further usage by your software applications.

    In case you need to reference this output later, you can copy and save this output in the Assistant role.

    Note

    In case you need to use the prompt template in Prompt Editor after adding the assistant role to generate fresh responses, you need to delete the Assistant role.
  10. Select the output in the Response text box and then add a role.

  11. Scroll down and select the Assistant option. Copy the response text. In case you don’t see the text, select the text box and press space, and you will be able to see the complete text.

  12. Click the Save Template button. The Save Template dialog box is displayed.

  13. Change the Version to 4.1.0.


    You have used the updated prompt template to get a better response and see how to retain the output, if needed.

Task 3: Optimize the Template with Variables and Default Values

You can use variables to streamline prompt templates for better readability and reuse. Variables let you apply the same template to multiple values without changing the template itself. Continuing with the Facility Solutions template, you can use variables to swap examples easily: instead of copying different messages each time, you simply change the current value of a variable or rely on its default value.
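Conceptually, each `{{?name}}` placeholder works like simple string substitution — generative AI hub performs this substitution for you at run time using the variable's current or default value. The following sketch is illustrative only (the `render` helper is hypothetical, not part of any SDK):

```python
import re

def render(template, **values):
    """Substitute each {{?name}} placeholder with the supplied value."""
    return re.sub(r"\{\{\?(\w+)\}\}", lambda m: values[m.group(1)], template)

template = "<UserQuery> {{?user_email_placeholder}} </UserQuery>"
print(render(template, user_email_placeholder="AC not working in Main Lobby"))
# → <UserQuery> AC not working in Main Lobby </UserQuery>
```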

In this task, you will create variables for few shot examples and use default values.

Steps

  1. Ensure that you are logged on to generative AI hub.

  2. Select Prompt Management and then Templates. You can see your template here. You can also search for it, if needed.

  3. Select the All radio button. You can see your template here. You can also search for your template.

  4. Select the latest version of the template which is 4.1.0. Ensure you select your template and the correct timestamp within it. A good practice is to read the template before using it.

  5. Select the prompt template and then click the Open in Prompt Editor button. Your prompt is ready to use.

  6. Use the following prompt in the User role:

    Code Snippet
    <Instructions>
    Analyze the provided customer email and extract the following details into a JSON object. Ensure all fields are present and correctly typed according to the specifications in <OutputFormat>. Summarize 'Problem_Description' concisely (max 100 words). If any field's value cannot be determined from the email, use 'Unknown' or 'N/A' as appropriate.
    </Instructions>

    <OutputFormat>
    {
      "Complaint_ID": "string (e.g., AUTO-GEN-001)",
      "Complaint_Type": "enum (Plumbing, HVAC, Electrical, Noise, Cleaning, Pest Control, General Maintenance, Other)",
      "Urgency": "enum (High, Medium, Low)",
      "Problem_Description": "string (concise summary, max 100 words)",
      "Affected_Location": "string (e.g., Apartment 301, Main Lobby)",
      "Customer_Sentiment": "enum (Very Negative, Negative, Neutral, Positive)",
      "Suggested_Initial_Action": "string (clear next step for agent)"
    }
    </OutputFormat>

    {{?few_shot_example_1}}
    {{?few_shot_example_2}}
    {{?few_shot_example_3}}
    <!-- Add more {{few_shot_example_N}} as needed -->

    <UserQuery>
    {{?user_email_placeholder}}
    </UserQuery>

    You can see a few_shot_example variable for each of the three examples; each variable will be replaced with its corresponding <ExampleInput> and <ExampleOutput> pair.

  7. Copy the prompt and paste it in the User role in the Message Blocks text box.

  8. Scroll down to the Variables section.

  9. Add the following values for the Default Value of each variable. Add the following default value for few_shot_example_1:

    Code Snippet
    <ExampleInput>
    Subject: Urgent - Leaky Faucet in Kitchen, Apartment 301

    Dear Facility Management,

    I am writing to report on a serious issue in my apartment, 301. The kitchen faucet has been leaking nonstop since last night. It's not just a drip, it's a steady stream, and I'm worried about water damage. I tried to tighten it myself, but it didn't help. This is incredibly frustrating, especially since I just moved in last month. Please send someone to fix it immediately.

    Thank you,
    Sarah Jenkins
    </ExampleInput>
    <ExampleOutput>
    {
      "Complaint_ID": "AUTO-GEN-001",
      "Complaint_Type": "Plumbing",
      "Urgency": "High",
      "Problem_Description": "Kitchen faucet in Apartment 301 is leaking continuously since last night, causing concern for water damage. Tenant attempted to fix without success.",
      "Affected_Location": "Apartment 301",
      "Customer_Sentiment": "Very Negative",
      "Suggested_Initial_Action": "Dispatch plumber to Apartment 301 with leaking faucet repair kit immediately."
    }
    </ExampleOutput>
  10. Add the following default value for few_shot_example_2:

    Code Snippet
    <ExampleInput>
    Subject: AC not working properly in Main Lobby

    Dear Support team,

    The air conditioning in the main lobby has not been cooling effectively for the past few days. It's making the waiting area very uncomfortable for visitors and staff, especially with the weather getting warmer. It's not completely broken, but definitely struggling. Could someone please take a look at it soon? Thanks.

    Regards,
    Building Manager
    </ExampleInput>
    <ExampleOutput>
    {
      "Complaint_ID": "AUTO-GEN-002",
      "Complaint_Type": "HVAC",
      "Urgency": "Medium",
      "Problem_Description": "Air conditioning in the main lobby is not cooling effectively, causing discomfort for visitors and staff. The unit is struggling but not completely non-functional.",
      "Affected_Location": "Main Lobby",
      "Customer_Sentiment": "Negative",
      "Suggested_Initial_Action": "Schedule HVAC technician to inspect main lobby AC unit within 24-48 hours."
    }
    </ExampleOutput>
  11. Add the following default value for few_shot_example_3:

    Code Snippet
    <ExampleInput>
    Subject: Light bulb replacement - Hallway 3rd Floor

    Hi Team,

    Just a quick note that a light bulb in the hallway on the 3rd floor, near apartment 305, seems to have burned out. It's not a critical issue, but it would be great if someone could replace it when convenient. No rush.

    Thanks,
    Resident
    </ExampleInput>
    <ExampleOutput>
    {
      "Complaint_ID": "AUTO-GEN-003",
      "Complaint_Type": "General Maintenance",
      "Urgency": "Low",
      "Problem_Description": "A light bulb in the 3rd floor hallway, near apartment 305, has burned out and needs replacement.",
      "Affected_Location": "3rd Floor Hallway (near Apt 305)",
      "Customer_Sentiment": "Neutral",
      "Suggested_Initial_Action": "Add to general maintenance task list for light bulb replacement during next routine visit."
    }
    </ExampleOutput>
  12. Add the following default value for user_email_placeholder

    Code Snippet
    Subject: Urgent: Ongoing Maintenance Issues at Our Facility

    Dear Support Team,

    I hope this message finds you well. My name is [Sender], and I am the community manager for [Community Name]. I have been overseeing our facility’s operations and maintenance for quite some time now, and I must say, the recent experiences with your maintenance services have been less than satisfactory.

    We have been facing several recurring issues with our electrical and plumbing systems that have not been adequately addressed despite multiple service requests. The lack of timely and effective solutions is causing significant inconvenience to our residents and staff, and it is becoming increasingly difficult to manage the situation.

    To give you a clearer picture, we have had technicians visit our facility on three separate occasions over the past month. Each time, the problem was either temporarily fixed or left unresolved. This has led to significant frustration among our community members and reflects poorly on our management.

    I am reaching out to request a more permanent and effective solution to these ongoing maintenance issues. We need a thorough inspection and a comprehensive plan to address the root causes of these problems. It is crucial that we ensure a safe and comfortable environment for everyone in our community.

    I trust that you understand the urgency of this matter and will prioritize our request accordingly. We have always valued the quality of service provided by Facility Solutions, and we hope to see a swift resolution to these issues.

    Thank you for your attention to this matter. I look forward to your prompt response.

    Best regards,
    [Sender]

    You have provided default values for all variables.

    3.6
  13. Click the Save Template button. The Save Template dialog box is displayed.

  14. Change the Version to 5.0.0.

  15. Click the Save button. The template is saved. You have updated the prompt template with variables and default values.
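To reason about what the editor does with variables and defaults, the versioned template can be modeled in plain Python. This is an illustrative sketch only: the class, field names, and $-placeholder syntax are our own, not the SAP Cloud SDK for AI API.

```python
# Illustrative model of a versioned prompt template with variables and
# default values. Names and structure are hypothetical, not the SDK's API.
from dataclasses import dataclass, field
from string import Template as StringTemplate


@dataclass
class PromptTemplate:
    name: str
    version: str                       # semantic version, e.g. "5.0.0"
    body: str                          # prompt text with $variable placeholders
    defaults: dict = field(default_factory=dict)

    def render(self, **overrides) -> str:
        # Current values override stored defaults, mirroring the editor.
        values = {**self.defaults, **overrides}
        return StringTemplate(self.body).substitute(values)


template = PromptTemplate(
    name="maintenance_mail_template",
    version="5.0.0",
    body="Extract a JSON summary from this email:\n$user_email_placeholder",
    defaults={"user_email_placeholder": "Subject: Light bulb replacement ..."},
)

print(template.render())  # rendered with the default value
print(template.render(user_email_placeholder="Subject: Flickering light ..."))
```

Passing a current value to render() overrides the stored default, which mirrors the behavior you will use in the next task.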

Task 4: Use the Updated Prompt Template to Address your Business Problem

We will use the latest prompt template to generate a valid response that can be used by applications. You will see how much easier it is to use this template compared to the previous versions.

Steps

  1. Ensure that you are logged on to Generative AI hub.

  2. Select Prompt Management and then Templates.

  3. Select the All button. You can see your template here. You can also search for your template.

  4. Select the latest version of the template, which is 5.0.0. Ensure you select your template and the correct timestamp within it. A good practice is to read the template before using it.

  5. Select the prompt template and then click Open in Prompt Editor. Your prompt is ready to use.

  6. Scroll down and delete the Assistant role.

    3.7
  7. See Variable Definitions. You will see all the variables with default values.

  8. Click Run to execute the prompt. A response is generated.

    3.8

    You can see the generated response. This is a rapid, reliable method for iterating and creating the best solution to your business problems.

    You can edit the default values by providing a current value.

  9. Scroll down to see Variables and add the following message to the Current Value for the user_email_placeholder:

    Code Snippet
    "Subject: Minor issue with light in 2nd floor hallway Dear Facility Management, I wanted to bring to your attention a minor issue in the hallway on the 2nd floor, specifically near apartment 205. The overhead light fixture has been flickering occasionally for the past couple of days. It’s not a critical problem, and there’s still plenty of light, but I thought you should be aware. No need for an immediate visit, but it would be great if someone could look during a routine check. Thank you, A Resident "
  10. Click Run to execute the prompt. A response is generated.

    3.9

    You can see that the JSON output is updated to reflect the current value of the user_email_placeholder variable.

    You have used the prompt template utilizing multiple variables, default values, and current values.

    You have created prompts in the Generative AI hub to solve your business problems, using versatile features like prompt templates, variables, and prompt management to build a foundation for scalable AI solutions.

    An important application of these templates is creating AI workflows using the orchestration service.
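Because the template promises a JSON response, downstream applications should validate the output before consuming it. A minimal sketch, assuming the keys from the example output shown earlier; the parse_complaint helper is our own, not part of any SDK.

```python
# Hedged sketch: validate the JSON string returned by the prompt before
# downstream use. Keys follow the example output shown earlier in this task.
import json

REQUIRED_KEYS = {
    "Complaint_ID", "Complaint_Type", "Urgency", "Problem_Description",
    "Affected_Location", "Customer_Sentiment", "Suggested_Initial_Action",
}


def parse_complaint(raw: str) -> dict:
    record = json.loads(raw)  # raises on malformed JSON
    if not isinstance(record, dict):
        raise ValueError("expected a JSON object")
    missing = REQUIRED_KEYS - record.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    return record
```

Failing fast on a missing key is usually preferable to letting a partial record propagate into a ticketing system.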

Create Workflow Using a Prompt Template and the Orchestration Service

The orchestration service facilitates the development of workflows that integrate various tasks, such as data filtering and anonymization. Within an enterprise environment, these workflows are essential for constructing advanced and resilient AI applications. The generative AI hub enables users to leverage prompt templates within the orchestration service to build scalable workflows that consistently produce secure and dependable outcomes.

We will use prompt templates to create workflows that include data privacy and content filtering for secure, reliable results.
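Conceptually, the orchestration service runs these modules in a fixed order, and any stage can stop the request. The steps below configure this pipeline in the UI; a sketch in plain Python, where every function name and the toy deny-list are illustrative stand-ins rather than the service's real API:

```python
# Illustrative pipeline: data masking -> input filtering -> model call ->
# output filtering. All names here are hypothetical stand-ins; the real
# orchestration service is configured declaratively in the Generative AI hub.

def pseudonymize(text: str) -> str:
    # Stand-in for the Data Masking module.
    return text.replace("John Doe", "RESIDENT_1")


def content_filter(text: str, stage: str) -> str:
    # Stand-in for Input/Output Filtering; blocks on a toy deny-list.
    if "offensive" in text.lower():
        raise ValueError(f"{stage} filter blocked the text")
    return text


def call_llm(prompt: str) -> str:
    # Stand-in for the model selected under Model Configuration.
    return '{"Urgency": "High", "Customer_Sentiment": "Negative"}'


def run_workflow(raw_email: str) -> str:
    staged = pseudonymize(raw_email)
    staged = content_filter(staged, stage="input")
    response = call_llm(staged)
    return content_filter(response, stage="output")


result = run_workflow("Urgent: leak in apartment 305. Regards, John Doe")
```

The key point is ordering: the model only ever sees text that has already been masked and filtered, and its response is filtered again before it reaches the caller.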

Steps

  1. Ensure that you are logged on to Generative AI hub.

  2. Select Orchestration. Orchestration Configurations are displayed.

    4.1
  3. Click the Create button. The Untitled_Configuration page is displayed. You can see basic modules.

    4.2
  4. Click the Advanced slide button to display all modules. You can see all modules. These advanced modules are disabled by default.

  5. We need the data masking and content filtering modules. Activate Data Masking, Input Filtering, and Output Filtering by clicking the respective buttons.

    4.3

    These modules will move to the Activated section.

  6. Click the Select Template button.

    4.4
  7. The Select Template dialog box is displayed. Search for your template and select the latest version, which is 5.0.0.

    4.5
  8. Click the Select button. The template is displayed in the configuration page. The middle pane shows the template and modules. The right pane shows template variables and an option to execute the orchestration workflow.

    4.6
  9. Scroll down in the middle to see Data Masking, and then select Pseudonymize.

  10. Select the fields shown in the following screenshot.

    4.7

    These fields will be pseudonymized before sending the query to the LLM for processing. You can also just anonymize the data.

    We are using pseudonymization because it allows tracking recurring issues for the same apartment or resident over time and linking maintenance histories for operational insights, without directly exposing personal identities to the LLM. True anonymization would break these vital connections.

  11. Scroll down to Input Filtering and then select one of the methods, as shown in the following screenshot.

    4.8

    This will filter the prompt for any harmful or inappropriate content in the raw message; it is configured as medium or ‘relaxed’ to reduce its stringency and allow a broader range of input.

  12. Scroll down to see Model Configuration and then select a model of your choice.

    4.9

    The LLM will receive the fully prepared, safe, and masked prompt.

  13. Scroll down to see Output Filtering and then select one of the methods, as shown in the following screenshot.

    4.10

    This filtering scans the LLM’s generated responses to ensure they contain no toxic language, bias, or inappropriate suggestions. Like the input filter, it is configured as medium or ‘relaxed’ to reduce its stringency and allow a broader range of output.

  14. Scroll down and delete the Assistant role in the template.

  15. You can see default values for all variables in the right pane, which can be changed based on your needs. The workflow is now ready for testing. Click Run. After a few moments, a response is generated.

    4.11

    You can see the generated response.

    Note that the prompt template enabled rapid workflow development and testing by providing predefined roles, prompts, variables, and default values.

  16. You can see the JSON for all the modules using the JSON toggle button at the top. You can also see the trace JSON for the entire execution using the Trace option. Resize the task panes for the best views.

    4.12

    Tracing workflows is essential for debugging errors, identifying bottlenecks, and ensuring accountability within complex, multi-step enterprise operations.

    You can also save the entire workflow using the Save button.

  17. Click the Save button. The Save Orchestration Configuration dialog box is displayed. Use your template name to save the configuration. Select only the orchestration scenario to save the configuration.

    4.13

    You can see this configuration on the Orchestration Configurations page. Use the search button to see your configuration.

    4.14

    The configuration provides a scalable approach for iterating, refining, and scaling workflows to solve your business problems.

    You can open this configuration and download it for further usage.

    4.15

    You have used prompt templates to develop workflow configuration, including data privacy measures and content filtering.
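The pseudonymization behavior used in this workflow can be illustrated with a small sketch: a stable mapping assigns the same placeholder to the same identifier every time, which is exactly what keeps maintenance histories linkable. The Pseudonymizer class below is our own illustration, not the service's implementation.

```python
# Illustrative sketch of pseudonymization with stable placeholders.
# Unlike true anonymization, the same identifier always maps to the same
# placeholder, so recurring complaints from one resident stay linkable.
class Pseudonymizer:
    def __init__(self):
        self._mapping = {}

    def mask(self, identifier: str) -> str:
        if identifier not in self._mapping:
            self._mapping[identifier] = f"RESIDENT_{len(self._mapping) + 1}"
        return self._mapping[identifier]


masker = Pseudonymizer()
first = masker.mask("Jane Roe")    # new placeholder
second = masker.mask("John Doe")   # different placeholder
repeat = masker.mask("Jane Roe")   # same placeholder as the first call
```

Because the mapping is kept outside the LLM, personal identities are never exposed to the model, yet the operator can still join complaint histories on the placeholder.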