Prompt Editor
Querying the Large Language Model (LLM)
This lesson introduces the Prompt Editor, where you can design prompts and run them against a range of established large language models (LLMs).
Prompt engineering is ideal for task-specific scenarios that require precise outputs. Once a prompt is configured, it needs less human intervention for refinement, and it offers versatility in shaping AI responses: modifying the prompt yields varied results.
Try it out!
- Navigate to and expand the Generative AI Hub menu option, where the following prompt-related options are available:
- Prompt Editor: In the prompt editor, you can design prompts and save the responses.
- Prompt Management: In the prompt management page, you can manage all saved prompts.
- Select the Prompt Editor to open the prompt editor dialog screen.

- In the Prompt Editor page, enter a name for the prompt. The prompt can also be categorized with a collection name, but this is optional. Using meaningful names and collections will be helpful when it comes to prompt management. As the following is a translation example, the prompt name is toEnglish and the Collection is called Translation.

- Paste the following text into the message block of the user role in the prompt editor:
Translate the following into English:
Baie organisasies het onbenutte potensiaal om kunsmatige intelligensie
in die werksplek te benut, wat winsgewendheid maksimeer.

- Scroll down and click on the highlighted icon to select the model.
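Behind the editor, the user-role message block corresponds to one entry in a chat-style messages array. A minimal sketch of that structure, assuming an OpenAI-style chat schema (the deployment's exact wire format may differ):

```python
# The prompt pasted into the user-role message block, as a single string.
prompt_text = (
    "Translate the following into English:\n"
    "Baie organisasies het onbenutte potensiaal om kunsmatige intelligensie\n"
    "in die werksplek te benut, wat winsgewendheid maksimeer."
)

# Chat-style messages array: each entry pairs a role with its content.
messages = [{"role": "user", "content": prompt_text}]
```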

The list of models currently available in this deployment will appear. You can filter the models by input type, model provider, provisioning method, and so on.

- Scroll through the list of available models and select one for inference. In this example we will use GPT-5-Mini. Select this model to return to the prompt editor screen.

- Additional parameters for the prompt can also be set here:

- Frequency Penalty: A number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
- Presence Penalty: A number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
- Max Completion Tokens: The maximum number of tokens allowed for the generated answer.
- Temperature: The sampling temperature to use, between 0 and 2. Higher values make the output more random; lower values make it more focused and deterministic.
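The ranges above can be captured in a small validation sketch. This is illustrative only (the parameter names follow the common OpenAI-style spelling; the editor enforces these ranges itself):

```python
def validate_params(params):
    """Check sampling parameters against the documented ranges."""
    ranges = {
        "frequency_penalty": (-2.0, 2.0),
        "presence_penalty": (-2.0, 2.0),
        "temperature": (0.0, 2.0),
    }
    for name, (lo, hi) in ranges.items():
        value = params.get(name)
        if value is not None and not lo <= value <= hi:
            raise ValueError(f"{name}={value} outside [{lo}, {hi}]")
    max_tokens = params.get("max_completion_tokens")
    if max_tokens is not None and max_tokens < 1:
        raise ValueError("max_completion_tokens must be positive")
    return params

# Example: conservative settings for a deterministic translation task.
params = validate_params({
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "max_completion_tokens": 256,
    "temperature": 0.0,
})
```

A temperature of 0 is a reasonable choice for translation, where reproducible output matters more than creative variety.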
- Leave the parameters at the default values, and click on Run to get the output.

- Save the prompt and its response by clicking Save.

- Click on the Reset icon to start a new prompt.

- This time, name the prompt toGerman and keep the Collection name as Translation.
- Edit the prompt to request the following translation:
Translate the following into German:
Baie organisasies het onbenutte potensiaal om kunsmatige intelligensie
in die werksplek te benut, wat winsgewendheid maksimeer.

- Click on Run to execute the prompt and get the German translation.

- Save the prompt by clicking on Save.
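The toEnglish and toGerman prompts differ only in the target language. When building prompts programmatically, that variation can be factored into a small helper; a hypothetical sketch (the function name and structure are ours, not part of the Prompt Editor):

```python
def translation_prompt(target_language, text):
    """Build the user-message text for a translation prompt.
    Only the target language changes between variants."""
    return f"Translate the following into {target_language}:\n{text}"

# The same Afrikaans source text used in both prompts of this lesson.
source = (
    "Baie organisasies het onbenutte potensiaal om kunsmatige intelligensie\n"
    "in die werksplek te benut, wat winsgewendheid maksimeer."
)

to_english = translation_prompt("English", source)
to_german = translation_prompt("German", source)
```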

- A previously saved prompt can be opened by choosing Select.

- Choose the prompt and click on Select to open it in the prompt editor.

The next unit shows how to find, manage and organize all your prompts via the Prompt Management menu.