Chat
This lesson introduces the Generative AI Hub Chat where prompts can be created, managed and executed against a range of established large language models (LLMs).
The Chat is designed to maintain the context and flow of a conversation, ensuring smooth and connected interactions. Its design relies heavily on human feedback for ongoing improvement.
Prerequisites
At least one deployment for a generative AI model is running. For more information, see Create a Deployment for a Generative AI Model.
The AI API connection and resource group that were used in the activation steps are available and selected.
One of the genai_manager, prompt_manager, genai_experimenter, or prompt_experimenter roles, or a role collection that contains one of these roles, has been assigned to the user. For more information, see Roles and Authorizations.
Try it out!
Select this AI Launchpad link to get started!
- Choose the doc-grounding resource group from the list of options on the right-hand side.

- Expand the menu on the left-hand side, then expand Generative AI Hub and choose Chat to start a dialog with an AI model.

Before starting a chat, you can configure the chat settings, including which model to use and which parameters to apply.
- Adjust the model settings by selecting the Configure chat settings icon beside the model name.

In the Model Settings tab, the following options are available:
- Selected Model: Choose from the available foundation models. If you do not choose a model, the default option is used.
- Parameters: Different models support different parameters and values. Full information is available from the model provider.
- Streaming Response: Chat responses are output in real time, as they are generated (only available for certain models).
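For orientation, the model choice and parameters selected here correspond to the fields of a typical chat-completion request. The sketch below is illustrative only: the payload shape and parameter names (`temperature`, `max_tokens`, `stream`) follow common LLM API conventions and are assumptions, not the exact format used by the Generative AI Hub.

```python
# Illustrative only: a hypothetical chat-completion request payload.
# Field names follow common LLM API conventions (an assumption), not
# necessarily the exact wire format used by the Generative AI Hub.

def build_chat_request(model, prompt, temperature=0.7, max_tokens=256, stream=False):
    """Assemble a request payload for the chosen model and parameters."""
    return {
        "model": model,              # the selected foundation model
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,  # higher values give more varied output
        "max_tokens": max_tokens,    # upper bound on response length
        "stream": stream,            # streaming is supported only by certain models
    }

request = build_chat_request("gpt-4o", "Summarize this lesson.", temperature=0.2)
```

Adjusting a setting in the UI is the equivalent of changing one of these fields before sending the request.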


Chat Context Settings
Select the Chat Context tab to adjust the chat context for the chosen model.

The following options are available:
- Context History: The number of previous interactions that form the context for the chat.
- Select Template: Users with an orchestration deployment and existing templates can select a template using the Select Template button.
- Messages: Instructions or context to guide the behavior of the model. Available for selected models.
- Variable Definitions: You can create placeholders in your prompt by declaring them as variables. Real values can be supplied during inference; otherwise, the default value is used.
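To illustrate how variable placeholders with default values behave, here is a minimal sketch. The `{{name}}` placeholder syntax and the helper function are assumptions for illustration, not the hub's actual template engine:

```python
import re

# Hypothetical helper: substitute {{name}} placeholders in a prompt,
# falling back to a declared default when no value is supplied at
# inference time. The {{name}} syntax is an assumption for illustration.

def render_prompt(template, defaults, values=None):
    values = values or {}

    def repl(match):
        name = match.group(1)
        # Prefer the value supplied at inference, then the default.
        return str(values.get(name, defaults.get(name, match.group(0))))

    return re.sub(r"\{\{(\w+)\}\}", repl, template)

template = "Translate {{text}} into {{language}}."
defaults = {"language": "German"}

# No value supplied for "language", so its default is used:
print(render_prompt(template, defaults, {"text": "'hello'"}))
# → Translate 'hello' into German.

# An explicit value overrides the default:
print(render_prompt(template, defaults, {"text": "'hello'", "language": "French"}))
# → Translate 'hello' into French.
```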
Feel free to experiment with the different models and settings available.
- When ready, enter your chat input and select the Send icon to generate a response.

Results
The response to your chat input will then be generated.
You can now:
- Clear the current chat, including the context history, by selecting the Clear button.
- Download your entire chat using the Download button. Your chat is automatically downloaded in JSON format and can be saved locally.
- Copy an individual chat message or response using the Copy icon.
- Continue your chat by sending more messages. You can repeat your chat with changes to the messages, model, and parameters to change the outcome.
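Because the downloaded chat is plain JSON, it can be processed with any JSON tooling. The structure below is a hypothetical example for illustration; inspect a real download for the hub's actual export format:

```python
import json

# Hypothetical chat export: the field names ("role", "content") are an
# assumed structure for illustration, not the hub's documented format.
exported = json.dumps([
    {"role": "user", "content": "What is the Generative AI Hub?"},
    {"role": "assistant", "content": "A hub for working with LLMs in SAP AI Launchpad."},
])

# Re-load the export and print a readable transcript.
for message in json.loads(exported):
    print(f"{message['role']}: {message['content']}")
```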
The next lesson looks at a more structured method for interacting with an LLM: The Prompt Editor.