Webex AI Agent Studio Administration guide
This article provides an overview of Webex AI Agent Studio and its capabilities, and describes AI agent configuration, integration of AI agents with voice and digital channels, and AI agent reports.
Get started with Webex AI Agent Studio
Webex AI Agent Studio is a sophisticated platform that is designed to create, manage, and deploy automated AI agents to fulfill customer service and support needs. Using artificial intelligence, AI agents provide automated assistance to customers before they interact with human agents. These agents support voice interactions with intonation, language understanding, and contextual awareness within conversations. Also, AI agents seamlessly and informatively handle digital channel interactions through text and online chat. Customers benefit from a concierge-like experience, receiving assistance with questions, information retrieval, and minimizing wait times.
Capabilities of Webex AI Agent Studio
- Accurate and timely responses—Provides precise answers to customer inquiries in real time.
- Intelligent task execution—Executes tasks based on customer requests or inputs.
Key benefits for businesses
- Enhanced customer experience—Delivers a real-time conversational experience for customers.
- Personalized interactions—Tailors responses to individual customer needs and preferences.
- Scalability and efficiency—Handles high volumes of customer interactions without requiring additional human agents, leading to improved satisfaction and reduced operational costs.
Understand AI agent types and examples
The following table provides a glimpse of AI agent types and their capabilities:
| AI agent type | Purpose | Capability | Description | How to set up |
|---|---|---|---|---|
| Autonomous | Autonomous AI agents work on their own, deciding and performing tasks without needing direct help from humans. | Perform actions | Make choices based on information and predefined rules. Automate tasks that take a lot of time or are repetitive. | |
| Autonomous | | Answer questions | Autonomous agents can access and use a knowledge repository to provide informative and accurate answers to user queries. | Autonomous AI Agents for answering questions |
| Scripted | Scripted AI agents are programmed to follow a set of predefined rules and instructions. | Perform actions | Scripted agents can perform specific tasks that are clearly defined and structured. | Scripted AI Agents for performing actions |
| Scripted | | Answer questions | Scripted agents can respond to questions based on a user-created training corpus, which is a collection of examples and answers. | Scripted AI Agents for answering questions |
Examples
Both autonomous and scripted AI agents apply to various use cases, depending on the specific requirements and desired capabilities. Some examples include:
- Customer service—Both autonomous and scripted agents can provide customer support, with autonomous agents offering more flexibility and understanding of natural language.
- Virtual assistants—Autonomous agents are well suited for virtual assistant roles as they can manage a wide range of tasks and offer more personalized interactions.
- Knowledge management—Autonomous agents can create and manage knowledge repositories, making information easily accessible to users.
The choice between autonomous and scripted AI agents depends on the complexity of the tasks, the required level of autonomy, and the availability of training data.
Prerequisites
- If you are an existing Webex Contact Center customer, ensure that you meet the following prerequisites:
  - Webex Contact Center 2.0 tenant.
  - Webex Connect is provisioned for your tenant.
  - The voice media platform is the Next-Generation media platform.
- If you don't have a Webex Contact Center tenant, contact your Partner to initiate a Webex Contact Center Trial with the Next-Generation Media Platform.
- Administrators can request a Webex Contact Center developer sandbox to try out AI agents.
Feature enablement
This feature is currently in beta. Customers can sign up for this feature on the Webex Beta Portal by filling in the participation survey for AI agents.
- Currently, only the scripted AI agent functionality is available in the beta phase.
- Autonomous agents are available to select customers only. Requests can be made through your CSM (Customer Success Manager), PSM (Partner Success Manager), or by emailing ask-ccai@cisco.com. Upon approval, autonomous agents are made available in addition to scripted agents for your tenant.
Access Webex AI Agent Studio
To create your AI agents, you must sign in to the Webex AI Agent Studio application. You can sign in using either of the following methods:
Sign-in from Control Hub
- Sign in to Control Hub using the URL https://admin.webex.com.
- From the Services section of the navigation pane, choose Contact Center.
- In Quick links in the right pane, go to the Contact Center suite section.
- Click Webex AI Agent Studio to access the application.
The system opens the Webex AI Agent Studio application in another browser tab, and you're automatically signed in to the application.
Sign-in from Webex Connect
To access the Webex AI Agent Studio application, you should have access to Webex Connect.
- Sign in to the Webex Connect application using the tenant URL provided for your enterprise and your credentials.
By default, the Services page appears as a home page.
- From the App Tray menu of the left navigation pane, click Webex AI Agent Studio to access the application.
The system opens the Webex AI Agent Studio application in another browser tab, and you're automatically signed in to the application.
Home page layout
Welcome to the Webex AI Agent Studio application. When you sign in, the home page displays the following layout:
- Navigation bar—The navigation bar that appears on the left provides access to the following menus:
- Dashboard—Displays a list of AI agents the user has access to, as granted by the enterprise administrator.
- Knowledge—Shows the central knowledge repository or knowledge base, which serves as the brain for autonomous AI agents to respond to customer queries.
- Reports—Lists prebuilt AI agent reports of various types. You can generate or schedule reports according to your business needs.
- Help—Provides access to the Webex AI Agent Studio user guide on the Webex Help Center.
- User profile—The user profile menu allows you to view your profile information and sign out of the application.
The Enterprise Profile page contains information about the AI agent tenant, accessible only to administrators with full-admin access.
- The Overview tab contains the following information:
- Enterprise identifiers—Includes the Webex Org ID, CPaaS Org ID, and Subscription ID for the enterprise. This is available for enterprises with Webex Contact Center integration for the corresponding Webex Connect tenant.
- Profile settings—Contains enterprise name, enterprise unique name, and the Logo URL.
- Global Agent settings—Allows selection of the default agent for voice channel to handle fallback scenarios.
- Data retention summary—Provides a summary of data retention periods for this enterprise.
- In the Teammates tab, you can view and manage the list of teammates who have access to the application. Each user is assigned a role, which determines the actions they can perform based on granted permissions.
Know your Dashboard
On the dashboard, the AI agents are represented by cards. Each card displays basic information, including the AI agent name, last updated by, last updated on, and the engine used for training the agent.
Tasks on AI agent card
Hover over an AI agent card to view the following options:
- Preview—Click Preview to open the AI agent preview widget.
- Ellipsis icon—Click this icon to perform the following tasks:
  - Copy Preview link—Copy the preview link to paste in a new tab and preview the AI agent on the chat widget.
  - Copy Access token—Copy the AI agent's access token for invoking the agent through APIs.
  - Export—Export the AI agent details (in JSON format) to your local folder.
  - Delete—Permanently delete the AI agent from the system.
  - Pin—Pin the AI agent to the first position on the dashboard, or unpin it to move it back to its previous position.
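The copied access token is typically sent as a bearer credential when the agent is invoked programmatically. The endpoint URL and payload shape in the sketch below are illustrative placeholders only, not the documented Webex AI Agent Studio API; consult the platform's API reference for the real values.

```python
import json
import urllib.request

# Both the URL and the payload shape are illustrative placeholders,
# not the documented Webex AI Agent Studio API.
API_URL = "https://example.invalid/ai-agent/message"
ACCESS_TOKEN = "paste-the-copied-access-token-here"

req = urllib.request.Request(
    API_URL,
    data=json.dumps({"text": "Where is my order?"}).encode(),
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",  # token copied from the agent card
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would send the request; it is not called here
# because the endpoint above is a placeholder.
```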
Create a new AI agent
You can create a new AI agent by using the + Create agent option on the top-right corner of the dashboard. You can choose to use a predefined template or create an agent from scratch.
To know how to create scripted and autonomous AI agents, see the following sections:
Import AI agent
You can import an AI agent in JSON format from a list of available AI agents. First, ensure you’ve exported the AI agent in JSON format to your local folder. Follow these steps to import it:
- Click Import agent.
- Click Upload to upload the AI agent file (in JSON format) exported from the platform.
- In the Agent name field, enter the AI agent name.
- (Optional) In the System ID field, edit the system-generated unique identifier.
- Click Import.
Your AI agent is now successfully imported to the Webex AI Agent Studio platform and is available on the dashboard.
Keyword search
The platform provides robust search capabilities to help you easily locate and manage AI agents. You can perform keyword search using the agent name. Enter the agent name or a portion of the name in the search bar. The system displays a list of AI agents that match your search criteria.
Filter by agent type
In addition to keyword search, you can refine your search results by filtering based on the type of AI agent. Choose one of the agent type filters from the drop-down list—Scripted, Autonomous, and All.
Manage knowledge base
A knowledge base is a central repository of information for the Large Language Model (LLM)-powered autonomous AI agents. The autonomous AI agents leverage advanced AI and machine learning technologies to understand, process, and generate human-like text. These AI agents train on vast amounts of data, enabling them to provide detailed and contextually relevant responses. Knowledge bases store the data necessary for the functioning of the autonomous AI agents.
To access the knowledge base:
- Log in to the Webex AI Agent Studio platform.
- On the Dashboard, click the Knowledge icon on the left navigation pane. The knowledge bases page appears.
- You can find a knowledge base based on the following criteria:
- Name of the knowledge base
- Type of the knowledge base
- Knowledge bases updated between specified dates
- Knowledge bases created between specified dates
Click Reset all to reset the search criteria.
- You can also create a new knowledge base. To create a new knowledge base, see Create knowledge base for AI Agent.
Create knowledge base for AI Agent
1. On the Dashboard, click the Knowledge icon on the left navigation pane.
2. On the Knowledge bases page, click +Create Knowledge base in the upper-right corner.
3. On the Create knowledge base page, enter the following details:
4. Click Create. The system creates a knowledge base with the specified name.
5. On the Files tab:
6. On the Documents tab:
7. Navigate to the Information tab to view and track the details of the files that you have uploaded and the documents that you have created.
What to do next
Configure the knowledge base for the Autonomous AI agent for answering questions.
Set up Autonomous AI agent
Autonomous AI agents operate independently without direct human intervention. These agents use advanced algorithms and machine learning techniques to analyze data, learn from their environment, and adapt their actions to achieve specific goals. This section outlines the two primary capabilities of Autonomous AI agent.
Autonomous AI agent for performing tasks
The autonomous AI agents can perform various tasks, including:
- Natural Language Processing (NLP)—Understand and respond to human language in a natural and conversational manner.
- Decision making—Make informed choices based on available information and predefined rules.
- Automation—Automate repetitive or time-consuming tasks.
This section includes the following configuration settings:
Create an Autonomous AI agent for performing actions
1. Log in to the Webex AI Agent Studio platform.
2. On the Dashboard, click +Create agent.
3. On the Create an AI Agent screen, click Start from scratch. You can also choose a predefined template to create your AI agent quickly; filter the AI agent type as Autonomous. In this case, the fields on the Profile page autopopulate.
4. Click Next.
5. In the What type of agent are you building section, click Autonomous.
6. In the What's your agent's main function section, click Perform actions.
7. Click Next.
8. On the Define agent page, specify the following details:
9. Click Create. You've now successfully created the autonomous AI agent for performing actions, which is now available on the Dashboard. On the AI agent header, you can perform the following tasks:

You can also import prebuilt AI agents. For more information, see Import AI agent.
What to do next
Update the profile for the autonomous AI agent.
Update Autonomous AI agent profile
Before you begin
Create an Autonomous AI agent for performing actions.
1. On the Dashboard, click the AI agent that you've created.
2. Navigate to the tab and configure the following details:
3. Click Publish to make the AI agent live.
What to do next
Add the required actions to the AI agent.
Add actions to Autonomous AI agent
The Autonomous AI agents for performing actions are designed to comprehend user intents and act accordingly. For example, in a restaurant there is a need to automate online food order intake. To accomplish the task, you can create an Autonomous AI agent that performs the following actions:
- Get the required information from the customer.
- Transfer the information to the required flow.
The Autonomous AI agent to perform actions works on the following building blocks:
- Action—A functionality that allows the AI agent to connect with external systems to perform complex tasks.
- Entity or slot—Represents a step in fulfilling the user's intent. Slot filling involves asking the customer specific questions to fulfill their intent based on utterances. It is the trigger for an AI agent to start performing an action. Define the input entities as part of slot filling.
- Fulfillment—Determines how the AI agent completes the action. As part of fulfillment, define the output entities for the Autonomous AI agent to generate the answer in a specific format. The system sends the output entities to the flow to continue with the action and complete the task successfully.
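Conceptually, the agent loops over the input entities until every required slot has a value, and then hands the collected slots to fulfillment. The toy sketch below illustrates that flow only; it is not the platform's implementation, and the slot names are made up.

```python
def slot_fill(required_slots, ask, answers=None):
    """Collect a value for every required slot, prompting only for missing ones."""
    filled = dict(answers or {})
    for slot in required_slots:
        while not filled.get(slot):
            filled[slot] = ask(slot)  # e.g., ask the customer a targeted question
    return filled

def fulfill(filled):
    # Fulfillment: shape the collected slots into the output entities
    # that the downstream flow expects.
    return {"order": {"item": filled["item"], "quantity": filled["quantity"]}}

# Simulate a customer answering two questions in order.
customer_answers = iter(["pizza", "2"])
result = fulfill(slot_fill(["item", "quantity"], lambda s: next(customer_answers)))
print(result)  # → {'order': {'item': 'pizza', 'quantity': '2'}}
```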
1. In the Action tab, click +New Action.
2. On the Add a new action page, specify the following details:
What to do next
Depending on the chosen action scope, you either configure slots only, or you configure slots and define fulfillment.
Configure slot filling
Slot filling involves adding the required input entities for the AI engine. In the Slot filling section of the Actions page, add the input entities:
- You can add the entities one by one in table format.
- You can also use a JSON file to define the entities. See A Tour of JSON Schema for details.
Add input entities in table format
1. To add an input entity, click +New input entity.
2. On the Add a new input entity page, specify the following details:
3. Click Add to add the input entity. You can add as many input entities as you need.
4. Use the Controls option to perform the following actions on the entity:
Add entities using JSON editor
You can add the input entities and output entities using the JSON editor. In the JSON editor view, the entities must be defined in a structured JSON format.
For more information, see A Tour of JSON Schema.
Input parameter structure
The input parameters must adhere to the following structure:
- type—Data type of the parameters object. This is always 'object' to denote that the parameters are structured as an object.
- properties—An object where each key represents a parameter and its associated metadata.
- required—An array of strings listing the names of parameters that are mandatory.
properties Object
Each key in the properties object represents an input entity/parameter and contains another object with metadata about that parameter. The metadata should always include the following keywords:
- type—Data type of the parameter. The allowed types are:
  - string—Textual data.
  - integer—Numeric data without decimals.
  - number—Numeric data that can include decimals.
  - boolean—True/false values.
  - array—A list of items, all of which are typically of the same type.
  - object—A complex data structure with nested properties.
- description—A brief explanation of what the entity represents. This helps the AI engine understand the purpose and usage of the parameter. A description that is concise and consistent with the agent's instructions and action description is recommended for better accuracy.
The platform enforces validation for type only. A description is not required for every entity, but adding one is highly recommended. Other useful keywords for entity metadata are:
- enum—Lists the possible values for a parameter. This is useful for parameters that should accept only a limited set of values; developers can define custom lists of values that a parameter accepts.
- pattern—Used with string types to specify a regular expression that the string must match. This is particularly useful for validating specific formats, such as phone numbers, postal codes, or custom identifiers.
- examples—Provides one or more examples of valid values for the parameter. This helps the AI engine understand what kind of data is expected and can be especially useful for interpretation and validation purposes.

Other keywords can make the entity definition more accurate and robust. For more information, see A Tour of JSON Schema.
Example
The following example includes various types of entities and keywords:
```json
{
  "type": "object",
  "properties": {
    "username": {
      "type": "string",
      "description": "The unique username for the account.",
      "minLength": 3,
      "maxLength": 20
    },
    "password": {
      "type": "string",
      "description": "The password for the account.",
      "minLength": 8,
      "format": "password"
    },
    "email": {
      "type": "string",
      "description": "The email address for the account.",
      "pattern": "\\w+([-+.']\\w+)*@\\w+([-.]\\w+)*\\.\\w+([-.]\\w+)*"
    },
    "birthdate": {
      "type": "string",
      "description": "The birthdate of the user.",
      "examples": ["mm/dd/YYYY"]
    },
    "preferences": {
      "type": "object",
      "description": "User preferences settings.",
      "properties": {
        "newsletter": {
          "type": "boolean",
          "description": "Whether the user wants to receive newsletters.",
          "default": true
        },
        "notifications": {
          "type": "string",
          "description": "Preferred notification method.",
          "enum": ["email", "sms", "push"]
        }
      }
    },
    "roles": {
      "type": "array",
      "description": "List of roles assigned to the user.",
      "items": {
        "type": "string",
        "enum": ["user", "admin", "moderator"]
      }
    }
  },
  "required": ["username", "password", "email"]
}
```
This example includes the following entities:
- username—A string type with minimum and maximum length constraint.
- password—A string type with a minimum length and a specific format (password indicates it should be handled securely).
- email—A string type with a regex pattern to ensure it’s a valid email address.
- birthdate—A string type with examples to prescribe the format of the date.
- preferences—An object type with nested properties (newsletter and notifications), including a boolean with a default value and a string with specific allowed values (enum).
- roles—An array type where each item is a string limited to specific values (enum).
The username, password, and email are mandatory as defined by the ‘required’ array.
In this example, the entities have descriptive names and clear descriptions, and follow a consistent structure and naming convention. Follow these best practices to create well-defined entities that are easy for the AI engine to interpret and enforce.
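To see how these keywords behave, here is a stdlib-only sketch that checks a payload against a trimmed-down version of the schema above. It is an illustration of the keywords, not the platform's validation engine (which, as noted earlier, enforces only type).

```python
import re

# Trimmed-down version of the example schema above.
SCHEMA = {
    "type": "object",
    "properties": {
        "username": {"type": "string", "minLength": 3, "maxLength": 20},
        "email": {
            "type": "string",
            "pattern": r"^\w+([-+.']\w+)*@\w+([-.]\w+)*\.\w+([-.]\w+)*$",
        },
        "notifications": {"type": "string", "enum": ["email", "sms", "push"]},
    },
    "required": ["username", "email"],
}

TYPE_MAP = {"string": str, "integer": int, "number": (int, float), "boolean": bool}

def validate(payload, schema):
    """Return a list of error strings; an empty list means the payload is valid."""
    errors = []
    for name in schema.get("required", []):
        if name not in payload:
            errors.append(f"missing required entity: {name}")
    for name, meta in schema.get("properties", {}).items():
        if name not in payload:
            continue
        value = payload[name]
        expected = TYPE_MAP.get(meta.get("type"))
        if expected and not isinstance(value, expected):
            errors.append(f"{name}: expected {meta['type']}")
            continue
        if "minLength" in meta and len(value) < meta["minLength"]:
            errors.append(f"{name}: shorter than {meta['minLength']}")
        if "maxLength" in meta and len(value) > meta["maxLength"]:
            errors.append(f"{name}: longer than {meta['maxLength']}")
        if "enum" in meta and value not in meta["enum"]:
            errors.append(f"{name}: not one of {meta['enum']}")
        if "pattern" in meta and not re.match(meta["pattern"], value):
            errors.append(f"{name}: does not match pattern")
    return errors

print(validate({"username": "jo", "email": "jo@example.com"}, SCHEMA))
# → ['username: shorter than 3']
```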
Define fulfillment
1. Define the fulfillment details for implementing the AI agent in a contact center. Specify the following details:
2. Configure the output entities such that the AI agent generates the result in a format that the flow can understand.
3. To add an output entity, click +New output entity. On the Add a new output entity screen, specify the following details. You can also use a JSON file to add the output entities. For more information, see Add entities using JSON editor.
4. Click Add to add the output entity. You can add as many output entities as you need.
5. Use the Controls option to perform the following actions on the entity:
6. Click Add to complete the configuration.
What to do next
Click Preview to preview the AI agent. For more information, see Preview your Autonomous AI agent. Click Publish to make the AI agent live.
After you configure the AI agent:
- To view the AI agent performance, see View Autonomous AI agent performance using Analytics.
- To view the sessions and history details, see View Autonomous AI agent Sessions and History.
Autonomous AI Agents for answering questions
Autonomous agents can access and use a knowledge repository to provide informative and accurate answers to user queries. This capability is useful in scenarios where the agent needs to:
- Provide customer support—Answer FAQs, troubleshoot issues, and guide customers through processes.
- Offer technical assistance—Provide expert advice on specific topics or domains.
This section includes the following configuration settings:
Create an Autonomous AI agent for answering questions
Before you begin
Ensure that you have created the knowledge base. For more information, see Manage knowledge base.
1. Log in to the Webex AI Agent Studio platform.
2. On the Dashboard, click +Create agent.
3. On the Create an AI Agent screen, click Start from scratch. You can also choose a predefined template to create your AI agent quickly; filter the AI agent type as Autonomous. In this case, the fields on the Profile page auto-populate.
4. Click Next.
5. In the What type of agent are you building section, click Autonomous.
6. In the What's your agent's main function section, click Answer questions.
7. Click Next.
8. On the Define agent page, specify the following details:
9. Click Create. The autonomous AI agent for answering questions is successfully created and is now available on the Dashboard. On the AI agent header, you can perform the following tasks:

You can also import AI agents. For more information, see Import AI agent.
What to do next
Update the profile for the autonomous AI agent.
Update Autonomous AI Agent profile
Before you begin
Create an Autonomous AI Agent for answering questions.
1. On the Dashboard, click the AI agent that you've created.
2. Navigate to the tab and configure the following details:
3. Click Save changes to make the AI agent live.
What to do next
Configure the Knowledge base for the AI agent.
Configure Knowledge Base
Before you begin
Create an Autonomous AI Agent for answering questions.
1. On the Dashboard page, select the AI agent that you've created.
2. Navigate to the Knowledge Base tab.
3. Choose the required knowledge base from the drop-down list.
4. Click Save changes to make the AI agent live.
What to do next
Click Preview to preview the AI agent. For more information, see Preview your Autonomous AI Agent.
After you configure the AI Agent:
- To view the AI Agent performance, see View Autonomous AI Agent performance using Analytics.
- To see the Sessions and History details, see View Autonomous AI Agent Sessions and History.
View Autonomous AI agent session and history
You can view the session and history details of each Autonomous AI agent that you have created. The Sessions page displays the details of sessions established with customers. The History page allows you to view the details of the configuration changes performed on the AI agent.
Sessions
The Sessions page provides a comprehensive record of all interactions between AI agents and users. To navigate to the Sessions page:
- On the Dashboard, click the Autonomous AI agent for which you want to view the session details.
- From the left navigation pane, click Sessions.
The Sessions page appears. Each session is displayed as a record that contains all the messages of the session. This information is useful to audit, analyze, and improve the AI agent.
The sessions table lists all the sessions (rooms) created for that AI agent. The table is paginated when there are more rows than fit on one screen. You can sort or filter any field in the table using the Refine Results section on the left side. The fields represent the following information about a particular session:
- Session ID—The unique room ID or session ID for a conversation.
- Consumer ID—The ID of the consumer who interacted with the AI agent.
- Channels—The channel where the interaction took place.
- Updated At—The time of the room closure.
- Room Metadata—Contains additional information about the room.
Select the required check boxes:
- Hide test sessions—To hide the test sessions and display only the list of live sessions.
- Agent handover happened—To filter the sessions that are handed over to an agent. If agent handover happens, it displays the Headphone icon indicating the handover of the chat to a human agent.
- Error occurred—To filter the sessions in which an error occurred.
- Downvoted—To filter the downvoted sessions.
Click a row in the sessions table for a detailed view of that session. The lock icon indicates that the session is locked and needs to be decrypted. You need permission to decrypt the session. If the Decrypt Access toggle is enabled, you can access any session using the Decrypt content button. However, this functionality applies only when Advanced data protection is enabled for the tenant.
History
The History page allows you to view the details of the configuration changes performed on the AI agent. To view the history of a specific agent:
- On the Dashboard, click the Autonomous AI agent for which you want to view the history.
- From the left navigation pane, click History.
The History page appears with the following tabs:
- Audit Logs—Click the Audit Logs tab to view the changes made to the AI agents.
- Model History—Click the Model History tab to view the various versions of the Autonomous AI agent for performing actions.
Audit Logs
The Audit Logs tab tracks the changes made to the autonomous AI agent. You can view the details of the changes for the past 35 days. The Audit Logs tab displays the following details:
Only users with Admin or AI agent developer roles can access the Audit Logs tab. Users with custom roles that have the 'Get Audit log' permission can also view the audit logs.
- Updated at—The date and time of the change.
- Updated by—The name of the user who incorporated the change.
- Field—The specific section of the AI agent where the change was made.
- Description—Additional information about the change.
You can search for a specific audit log using the Updated by, Field, and Description search options. You can sort the logs based on the Updated at and Updated by fields.
Model History
The Model History tab is available only for the Autonomous AI agent for performing actions.
Whenever you publish the Autonomous AI agent for performing actions, a version of the Autonomous AI agent is saved and is available in the Model History tab. You can view the various versions of the AI agent from the Model History tab.
- Model description—A brief description about the version of the AI agent.
- AI Engine—The AI engine used for that version of the AI agent.
- Updated on—Date and time when the version was created.
- Actions—Allows you to perform the following actions on the AI agent:
  - Load—Load that version of the AI agent. All current changes on the AI agent are lost, and you must perform the configuration again.
  - Export—Export that version of the AI agent.
Preview your Autonomous AI agent
You can preview the autonomous AI agents at the time of creating the AI agent, while editing, and after deploying the agent. You can open the preview from:
- AI agent dashboard—When you hover over an AI agent card, the Preview option for that AI agent becomes visible. Click it to open the preview of the AI agent.
- AI agent header—Click the AI agent card to open the AI agent. The Preview option is always visible in the header section.
- Minimized widget—After a preview is launched and minimized, a chat head widget appears at the bottom right of the page. You can use this option to easily reopen the preview mode.
Webex AI Agent Studio also provides a shareable preview option. Click the menu on the top-right corner and select the Copy Preview Link option. You can share the preview link with other users such as testers or consumers of the AI agent.
Platform preview widget
The preview widget appears on the bottom-right section of the screen. You can provide utterances (or a sequence of utterances) to check the AI agent's responses and ensure it’s functioning correctly.
Also, you can minimize the preview widget, provide consumer information, and initiate multiple rooms to test the AI agent.
Shareable preview widget
The shareable preview widget allows you to share the AI agent with stakeholders and consumers in a presentable manner without developing a custom UI to surface the AI agent. By default, the copied preview link renders the AI agent with a phone casing. You can quickly customize the widget by changing certain parameters in the preview link:
- Widget color—Append the brandColor parameter to the link. You can define simple colors using color names or use the hex code of a color.
- Phone casing—Change the value of the phoneCasing parameter in the link. It is set to true by default and can be disabled by setting it to false.
Example preview link with these parameters:
?bot_unique_name=<your_bot_unique_name>&enterprise_unique_name=<your_enterprise_unique_name>&phoneCasing=<true/false>&brandColor=<color name or hex value>
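If you script the link construction, standard URL encoding keeps the parameters well formed. The base URL and parameter values below are placeholders; use the preview link copied from your dashboard.

```python
from urllib.parse import urlencode

BASE_URL = "https://example.invalid/preview"  # placeholder; use your copied preview link

params = {
    "bot_unique_name": "my_bot",                # hypothetical agent unique name
    "enterprise_unique_name": "my_enterprise",  # hypothetical enterprise unique name
    "phoneCasing": "false",                     # disable the default phone casing
    "brandColor": "1A73E8",                     # hex color value without the leading '#'
}

url = f"{BASE_URL}?{urlencode(params)}"
print(url)
```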
View Autonomous AI Agent performance using Analytics
The AI Agent Analytics section provides a graphical representation of the key metrics to evaluate the AI Agent performance and effectiveness. To generate the analytics of the Autonomous AI Agent:
- Choose the AI Agent from the Dashboard.
- On the left navigation pane, click Analytics. An overview of the AI agent performance appears in both tabular format and graphical representation.
The first section displays the following statistics about sessions and messages for the AI Agent.
- Total sessions and sessions handled by the AI Agent without human intervention.
- Total agent handovers, which is the number of sessions handed over to human agents.
- Daily average sessions
- Total messages (human and AI Agent messages) and how many of those messages came from users.
- Daily average messages
The second section displays the statistics about the users. It provides a count of total users and information about average sessions per user and daily average users.
The third section displays the AI Agent responses and agent handovers.
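These headline numbers reduce to simple ratios over the session and message counts. The figures below are illustrative, not product data, and the metric names are informal shorthand for the statistics described above:

```python
# Illustrative counts, not real product data.
total_sessions = 120
agent_handovers = 18    # sessions handed over to a human agent
total_messages = 960    # human + AI agent messages combined
user_messages = 430     # messages sent by users

# Sessions handled by the AI agent without human intervention, as a share of all sessions.
containment_rate = (total_sessions - agent_handovers) / total_sessions
handover_rate = agent_handovers / total_sessions
user_message_share = user_messages / total_messages

print(f"Containment: {containment_rate:.1%}, "
      f"handover: {handover_rate:.1%}, "
      f"user message share: {user_message_share:.1%}")
# → Containment: 85.0%, handover: 15.0%, user message share: 44.8%
```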
Set up Scripted AI Agent
This section explains how to set up and manage scripted AI agents on the Webex AI Agent Studio platform to ensure they provide accurate responses to user queries and efficiently perform automated tasks.
Scripted AI Agent for performing tasks
Scripted AI agents enhance the no-code agent-building capabilities of the Webex AI Agent Studio platform. They enable multiturn conversations, gathering relevant data from customers to perform specific tasks. This includes:
-
Running simple commands—Follow instructions to complete predefined actions.
-
Processing data—Manipulate and transform data according to specified rules.
-
Interacting with other systems—Communicate with and control other solutions.
This section includes the following configuration settings:
Create a Scripted AI Agent for performing actions
1 |
Log in to the Webex AI Agent Studio platform. |
2 |
On the Dashboard, click + Create agent. |
3 |
On the Create an AI Agent screen, create a new AI agent from scratch. You can also quickly create your AI agent by choosing a predefined template. Filter the AI Agent type as Scripted; in this case, the fields on the Profile page autopopulate. |
4 |
Click Start from scratch and then Next. |
5 |
In the What type of agent are you building? section, click Scripted. |
6 |
In the What's your agent's main function? section, click Perform actions. |
7 |
Click Next. |
8 |
On the Define agent page, specify the following details: |
9 |
Click Create. The scripted AI agent for performing actions is successfully created and is now available on the Dashboard. On the AI Agent header, you can perform the following tasks:
You can also import the pre-built AI agents. For more information, see Import pre-built AI Agent. |
What to do next
Update Scripted AI Agent profile
Before you begin
Create a scripted AI agent for answering questions.
1 |
Log in to the Webex AI Agent Studio platform. |
2 |
From the Dashboard, select the AI agent that you created. |
3 |
Navigate to and configure the following details: |
4 |
Click Save changes to save the settings. |
Manage Entities
Entities are the building blocks of conversations. They are essential elements that the AI agent extracts from user utterances. Entities represent specific pieces of information, such as product names, dates, quantities, or any other significant group of words. By effectively identifying and extracting entities, the AI agent can better understand user intent and provide more accurate and relevant responses.
Entity types
Webex AI Agent Studio offers 11 prebuilt entity types to capture various types of user data. You can also create any of the following custom entities.
Custom Entities
These entities are configurable and allow developers to capture use-case specific information.
-
Custom list—Define lists of expected strings to capture specific data points not covered by prebuilt entities. You can add multiple synonyms against each string. For example, a custom pizza size entity.
-
Regex—Use regular expressions to identify specific patterns and extract corresponding data. For example, a phone number regex that matches 123-123-8789.
-
Digits—Capture fixed-length numerical inputs with high accuracy, especially in voice interactions. We use this as an alternative to Custom and Regex entity types in nonvoice interactions. For example, define a length of five to detect a five-digit account number.
-
Alphanumeric—Capture combinations of letters and numbers, providing accurate recognition for both voice and nonvoice inputs.
-
Free form—Capture flexible data points that are difficult to define or validate.
-
Map location (WhatsApp)—Extract location data that users share on the WhatsApp channel.
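For the Regex entity type above, a concrete pattern helps. The following is a minimal sketch of a phone-number pattern in Python's regex dialect; the exact syntax the platform accepts may differ.

```python
import re

# Hypothetical pattern for a phone-number Regex entity (matches 123-123-8789).
PHONE = re.compile(r"\b\d{3}-\d{3}-\d{4}\b")

utterance = "call me back at 123-123-8789 tomorrow"
match = PHONE.search(utterance)
print(match.group() if match else None)  # → 123-123-8789
```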
System Entities
Entity name | Description | Example input | Example output |
---|---|---|---|
Date | Parses dates in natural language to a standard date format | “july next year” | 01/07/2020 |
Time | Parses time in natural language to a standard time format | 5 in the evening | 17:00 |
Email | Detects email addresses | write to me at info@cisco.com | info@cisco.com |
Phone number | Detects common phone number formats | call me at 9876543210 | 9876543210 |
Monetary units | Parses currency and amount | I want 20$ | 20$ |
Ordinal | Detects ordinal number | Fourth of ten people | 4th |
Cardinal | Detects cardinal number | Fourth of ten people | 10 |
Geolocation | Detects geographic locations (cities, countries etc.) | I went swimming in the Thames in London UK | London, UK |
Person names | Detects common names | Bill Gates of Microsoft | Bill Gates |
Quantity | Identifies measurements, as of weight or distance | We’re 5km away from Paris | 5km |
Duration | Identifies time periods | 1 week of vacation | 1 week |
You can edit created entities from the entities tab. Linking entities to an intent annotates your utterances with detected entities as you add them.
Entity roles
Entity roles are crucial when you need to collect the same entity multiple times within a single intent. By assigning distinct roles to the same entity, you can help the AI agent understand and process user input more accurately.
For example, to book a flight with a layover, you can create an Airport entity with three roles: origin, destination, and layover. By annotating training utterances with these roles, the AI Agent can learn expected patterns and seamlessly handle complex booking requests.
Only Mindmeld (custom and system entities) and Rasa (custom entities only) support entity roles. Administrators must select the Entity roles check box under the advanced settings of the NLU engine selector dialog box to enable entity roles.
Administrators can’t switch from Rasa or Mindmeld to Swiftmatch while entity roles are in use. To disable entity roles in the advanced NLU engine settings, first remove the roles from intents.
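To make the layover example concrete, the sketch below shows how annotations that carry roles could fill three distinct slots from one entity type. The data shapes are illustrative only, not the platform's internal representation.

```python
# Illustrative data only: how annotations with entity roles might fill
# distinct slots for "book a flight from SFO to JFK with a layover in ORD".
annotations = [
    {"entity": "Airport", "role": "origin", "value": "SFO"},
    {"entity": "Airport", "role": "destination", "value": "JFK"},
    {"entity": "Airport", "role": "layover", "value": "ORD"},
]

# One entity type, three roles → three separately addressable slots.
slots = {f'{a["entity"]}.{a["role"]}': a["value"] for a in annotations}
print(slots)
```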
Create an entity with entity roles
Before you begin
1 |
Log in to the Webex AI Agent Studio platform. |
2 |
On the Dashboard, click the scripted AI agent that you've created. |
3 |
Click Training on the left pane. |
4 |
On the Training data page, click the Entities tab. |
5 |
Click Create entity. |
6 |
On the Create entity window, specify the following fields: |
7 |
Enable the Auto suggest slot values toggle to auto complete and provide alternate suggestions for this entity during the conversation. You can only see the Roles field when creating a custom entity if you enable entity roles in the Advanced Settings section of the Change training engine window for RASA and Mindmeld NLU engines. |
8 |
Click Save. You can use Edit and Delete options in the Actions column to perform related actions.
|
What to do next
After you create an entity, you can link roles to an entity.
Link roles to an entity
1 |
Log in to the Webex AI Agent Studio platform. |
2 |
On the Dashboard, click the AI agent that you've created. |
3 |
Click Training in the left pane. |
4 |
On the Training data page, choose an intent to link entities and entity roles. By default, the Intent tab appears.
|
5 |
In the Slots section, click Link entity. |
6 |
Choose the entity role for the entity name. |
7 |
Click Save. You can assign roles to an entity for collecting the same entity twice for an intent. |
Manage Intents
Intent is a core component of the Webex AI Agent Studio platform that enables the AI agent to understand and respond to your input effectively. It represents a specific task or action that you want to accomplish during a conversation. You can define all intents that correspond to the tasks you want to perform. The accuracy of intent classification directly impacts the AI agent's ability to provide relevant and helpful responses. Intent classification is the process of identifying intent based on your input, allowing the AI agent to respond in a meaningful and contextually relevant manner.
System intents
- Default Fallback Intent—An AI agent's capabilities are inherently limited by the intents it is designed to recognize and respond to. While an enterprise can't anticipate every possible question a customer might ask, the default fallback intent helps keep conversations on track.
By implementing a default fallback intent, AI agent developers can ensure that the AI agent gracefully handles unexpected or out of scope queries, redirecting the conversation back to known intents.
AI agent developers need not add specific utterances to the fallback intent. The agent can be trained to automatically trigger the fallback intent when it encounters out-of-scope questions that might otherwise be incorrectly categorized into other intents.
For example, in a banking AI agent, customers might attempt to inquire about loans. If the AI agent is not configured to handle loan-related inquiries, these queries can be incorporated as training phrases within the default fallback intent. When a customer queries about loans at any point in the conversation, the AI agent recognizes the query as falling outside of its defined intents and triggers the fallback response. This ensures a more appropriate response.
The fallback intent:
- shouldn’t have any slots associated with it.
- must use the default fallback template key for its response.
- Help—This intent is designed to address customer inquiries about the AI agent's capabilities. When customers are unsure of what they can accomplish or encounter difficulties during a conversation, they often seek assistance by asking for help.
By default, the response for the help intent is mapped to the Help message template key. However, AI agent developers can customize the response or change the associated template key to provide more tailored and informative guidance. We recommend conveying the AI agent's capabilities at a high level, giving customers a clear understanding of what they can do next.
- Talk to an agent—This intent enables customers to request assistance from a human agent at any stage of their interaction with the AI agent. When this intent is invoked, the system automatically initiates a transfer to a human agent. The default response template for this intent is agent handover. While there are no UI restrictions on changing the response template key, altering it won’t affect the outcome of the human handover.
Small talk intents
All newly created AI agents include four predefined small talk intents to handle common customer greetings, expressions of gratitude, negative feedback, and farewells:
- Greetings
- Thank you
- The AI agent wasn’t helpful
- Goodbye
Create an intent
Before you begin
Before creating an intent, ensure you create entities to link to the intent. For more information, see Create Entity with Entity roles.
1 |
Log in to the Webex AI Agent Studio platform. |
2 |
On the Dashboard, choose an AI agent. |
3 |
Click Training from the left pane. |
4 |
On the Training data page, click Create Intent. |
5 |
On the Create intent window, specify the following details: |
6 |
Check the Required check box if the entity is mandatory. |
7 |
Enter the number of retries allowed for this slot. By default, the number is three. |
8 |
Choose the template key from the drop-down list. |
9 |
In the Response section, enter the final response template key that you want to return to users on completion of the intent. |
10 |
Enable the Reset slots after completion toggle to reset the slot values collected in the conversation once the intent is complete. If this toggle is in disabled status, the slot retains the old values and displays the
same response.
|
11 |
Enable the Update slot values toggle to update the slot value during the conversation with the consumer. The AI agent considers the last value filled in the slot to process the data. If you enable this feature, the system updates values for filled slots whenever customers provide new information for the same slot type.
|
12 |
Enable the Provide suggestions for slots toggle to provide suggestions for slot filling and alternate slot values in the final response, based on user input. |
13 |
Enable the End conversation toggle to close the session after this intent. Webex Connect and voice flows can use this to close a conversation with
consumers.
|
14 |
Click Save. Click Train on the top right of the Training tab to reflect any changes made in intents and entities.
To train Rasa or Mindmeld NLU engines, you need at least two training utterances per intent. Also, each slot must have at least two annotations. If you don't meet these requirements, the system disables the Train button. A warning icon appears next to the affected intent to indicate the issue. However, the default fallback intent is exempt from these requirements. |
What to do next
After you create an intent, some information is required to fulfill the intent. Linked entities dictate how we obtain this information from user utterances. For more information, see Link Entities with Intent.
Link entities with an intent
Before you begin
Ensure that you create entities and link them before adding utterances. This automatically annotates the entities while you add utterances.
1 |
Log in to the Webex AI Agent Studio platform. |
2 |
On the Dashboard, click the AI agent that you've created. |
3 |
Click Training in the left pane. |
4 |
On the Training data page, choose an intent to link entities and entity roles. By default, the Intent tab appears.
|
5 |
In the Slots section, click Link entity. The linked entities appear in the Slots section.
|
6 |
Choose the entity role for the entity name. |
7 |
Click Save. When an entity is marked as required, additional configuration options become available. You can specify the maximum number of times the AI agent can request the missing entity before escalating or providing a fallback response. You can define the template key to be invoked if the required entity isn’t provided within the specified number of retries.
Once an AI agent identifies an intent and gathers all necessary data (slots), it responds using the message associated with the final template key configured for that intent. To start a new conversation or handle subsequent intents without carrying over previous data, you must enable the Reset Slots After Completion toggle. This setting clears all recognized entities from the conversation history, ensuring a fresh start for each new interaction. |
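The retry behavior described above (ask for a missing required entity up to a maximum number of times, then fall back) can be sketched as follows. The function names, the five-digit account slot, and the reply handling are assumptions for illustration, not the platform's API.

```python
import re

def fill_required_slot(user_replies, extract, max_retries=3):
    """Try to extract a required entity from successive user replies.
    Returns the value, or None once retries are exhausted, signaling
    that the configured fallback template key applies."""
    for reply in user_replies[:max_retries]:
        value = extract(reply)
        if value is not None:
            return value
    return None

def extract_account(text):
    # Pretend the required slot is a five-digit account number.
    m = re.search(r"\b\d{5}\b", text)
    return m.group() if m else None

# Simulated conversation: the user fails once, then supplies the value.
print(fill_required_slot(["hello?", "it's 98765"], extract_account))  # → 98765
```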
Generate training data
To get the AI agent working at a reasonable accuracy, manually add training data to its intents. The training data consists of different ways to invoke the same intent. Add at least 15-20 variants for each intent to improve its accuracy. Creating this training corpus manually can be tedious and time-consuming, so you may add only a few variants, or add only keywords instead of meaningful sentences. You can avoid this by generating training data to supplement your existing variants.
To generate training data, follow these steps:
- Enter the intent name and a sample utterance.
- Click Generate.
- Provide a brief description of the intent to guide the AI.
- Specify the desired number of variants and the level of creativity for the AI-generated suggestions.
- Generating many variants at once may impact quality. We recommend a maximum of 20 variants per generation.
- A lower creativity setting can produce less diverse variants.
- The generation process may take a few seconds, depending on the number of variants requested.
- The lightning icon distinguishes AI-generated variants from user-defined training data.
Flagging Generated Variants
To ensure responsible AI usage, developers can flag AI-generated outputs for review. This allows for the identification and prevention of any harmful or biased content. To flag AI-generated outputs:
- Locate the Flagging Option: A flagging option is available for each generated utterance.
- Provide Feedback: When flagging an output, developers can add comments and specify the reason for flagging.
This feature is initially available with a monthly usage limit of 500 generate operations. To accommodate growing needs, developers can contact their account owners to request an increase in this limit.
Create multilingual Intent and Entity
You can create training data in multiple languages. For each language configured for your AI Agent, define utterances that reflect the desired interactions. While the slots remain consistent across languages, the template keys uniquely identify the responses in each language.
Not all languages support all entity types. For more information on the entity types that each language supports, see the Languages versus entities supported table in Supported languages for Scripted AI Agents.
Manage Responses
Responses are the messages that your AI Agent sends to customers in response to their queries or intents. You can create responses that include:
- Text—Plain text messages for direct communication.
- Code—Embedded code for dynamic content or actions.
- Multimedia—Images, audio, or video elements to enhance the user experience.
Responses have two major components:
- Templates—Predefined response structures that are mapped to specific intents.
- Workflows—The logic that determines which template to use based on the identified intent.
Templates for Agent Handover, Help, Fallback, and Welcome are preconfigured. You can change the response message in the corresponding templates.
Response types
The Response Designer section covers how to configure different types of responses.
You can use the Workflows tab to handle asynchronous responses while calling an external API that responds asynchronously. Code the workflows in Python.
Variable substitution
Variable substitution allows you to use dynamic variables as part of response templates. You can use all standard variables (or entities) in a session, along with those that an AI Agent developer can set inside a free-form object such as the datastore field. Represent variables using the syntax ${variable_name}. For example, to use the value of an entity called apptdate, reference ${entities.apptdate} or ${newdfState.model_state.entities.apptdate.value}.
You can personalize responses using variables received from the channel or collected from consumers over the course of a conversation. The autocomplete functionality shows the syntax of variables in the text area when you start typing ${. Selecting the required suggestion auto fills the area with the variable and highlights it.
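A minimal sketch of how such ${dotted.path} placeholder resolution could work is shown below. This is not the platform's implementation; the context dictionary and paths are hypothetical.

```python
import re

def substitute(template, context):
    """Replace ${dotted.path} placeholders with values from a nested dict.
    A minimal sketch of variable substitution, not the platform's code."""
    def resolve(match):
        value = context
        for part in match.group(1).split("."):
            value = value[part]  # walk the dotted path into the nested dict
        return str(value)
    return re.sub(r"\$\{([^}]+)\}", resolve, template)

context = {"entities": {"apptdate": "2024-07-01"}}
print(substitute("Your appointment is on ${entities.apptdate}.", context))
# → Your appointment is on 2024-07-01.
```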
Configure responses using a Response Designer
The response designer offers a user-friendly interface for creating responses without requiring extensive coding knowledge. Two response types are available:
- Conditional Responses: For nondevelopers, this option allows for easy construction of responses that the AI agent delivers to customers.
- Code Snippets: For developers using Python, this option provides flexibility for configuring responses using code.
The response designer is designed to ensure that the user experience caters to the specific channel the AI agent is interacting with.
Response templates
- Text—These are simple text responses. To enhance user experience, the response designer allows multiple text boxes within a single response, enabling you to break down lengthy messages into more manageable sections. Each text box can include various response options. During a conversation, the system randomly selects one of these options and displays it to the user, ensuring a dynamic and engaging interaction.
You can add more response options by clicking the +Add variant button at the bottom of your response.
When you save responses, you might see a warning indicating the number of errors that need correction. The system highlights fields with errors in red. You can use the navigation arrows to easily locate and fix these errors in any channel or response format. For a single card, the corresponding dot turns red to signal the error. For lists and carousels with multiple cards, dot navigation allows you to move through the cards with errors.
- Quick Reply—You can pair text responses with buttons, which can be either text-based or URL links. Text buttons require a title and a payload; when a user clicks the button, the system sends the payload to the AI Agent. URL buttons redirect users to a specific web page.
When a user's query is ambiguous, partial matching allows the AI Agent to suggest relevant articles or intents as options. This feature is available for web and Facebook interactions.
Adding URL quick replies
URL quick reply buttons redirect users to your website for further information or actions. Clicking these quick reply buttons opens the specified URL in a new tab within the same browser window. The system doesn’t send any data back to the AI Agent.
To add a URL quick reply in conditional or fixed response:
- Choose the article or template key for which you want to configure the URL quick reply.
- Click +Add a quick reply. The Button-type pop-up window appears.
Choose URL as the button type in the web channel.
- Specify the title for the button and the URL to which you want to redirect the consumer after they click the button.
- Click Done to add a URL quick reply.
You can also configure URL type buttons through dynamic response type, using snippets of Python code. The system supports these buttons in the preview and shareable preview sections. However, IMIchat's Live chat widget and other third-party channels don’t currently support them.
- Carousel—Rich responses can include a single card or multiple cards arranged in a carousel format. Each card requires a title and can contain an image, a description, and up to three buttons.
You can configure quick reply buttons within the Carousel template with text or URL links. Clicking a URL button redirects the user to the specified website. Clicking a text-based quick reply button sends a configured payload to the bot, triggering the corresponding response.
- Image—A multimedia template where you can configure images by providing URLs.
- Video—Renders videos in the preview based on the configured video URL.
- Code—Used to write Python code for calling APIs, or executing other logic.
Channel aware responses
We support only certain templates for each channel. For example, in SMS we support only text, code (which eventually responds with text), and quick reply templates. Multimedia and carousel templates aren't supported on SMS or other channels that can't render them.
In addition to text, we provide the quick reply template for these channels, regardless of whether the underlying channel supports it (for example, WhatsApp). The platform automatically converts all configured quick replies to numbered text options on channels that don't support quick replies.
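The automatic conversion of quick replies into numbered text options on button-less channels can be sketched as follows. This is an illustration of the idea, not the platform's actual conversion code.

```python
def to_numbered_text(prompt, quick_replies):
    """Degrade quick-reply buttons to numbered text options for channels
    (such as SMS) that can't render buttons. Illustrative sketch only."""
    lines = [prompt]
    lines += [f"{i}. {qr}" for i, qr in enumerate(quick_replies, start=1)]
    return "\n".join(lines)

print(to_numbered_text("What size pizza?", ["Small", "Medium", "Large"]))
```

On a button-capable channel the same options would render as tappable quick replies; here the user is expected to reply with the option number.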
Channel-specific templates
Code snippets
Conditional responses, with their extensive features and diverse templates, can effectively address AI agent needs. For complex use cases or for developers who prefer coding, the Code Snippet response type is available.
Code Snippets allow you to configure responses using Python code. This approach enables you to create all types of responses, including quick replies, text, carousels, images, audio, video, and files, within a response template or article.
You can use the function code defined in the Code Snippet template to set variables that you can then use in other templates. It's important to note that function code can’t directly return responses when used within conditional responses.
Code snippet validation—The platform checks only for syntax errors within the code snippet you're configuring. However, any errors in the response content itself can cause issues for users interacting with the bot on the configured channel. For example, the editor lets you add a "time picker" response for the web channel, but that response can cause errors when a query triggers it.
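As a hedged illustration of a Code Snippet response, the sketch below builds a text-plus-quick-reply payload as a plain Python dictionary. The payload shape, the session argument, and the field names are assumptions for illustration; the platform's actual snippet API may differ.

```python
# Hypothetical shape of a code-snippet response; field names are assumed.
def build_response(session):
    """Build a quick-reply payload, personalizing with a session entity."""
    name = session.get("entities", {}).get("name", "there")
    return {
        "type": "quick_reply",
        "text": f"Hi {name}, what would you like to do next?",
        "options": ["Check balance", "Talk to an agent"],
    }

print(build_response({"entities": {"name": "Asha"}}))
```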
If you don't configure a unique response for different channels, the system uses the web response as the default. The list of templates supported on the web channel are:
- Text—A simple text message that can have multiple variants. The system displays the configured message based on the query.
- Quick Reply—A template with text and clickable buttons.
- Carousel—A collection of cards, with each card having a title, an image URL, and a description.
- Image—A template to configure images by providing URLs.
- Video—A template to configure video by providing the video URL. You can play the video by clicking or tapping on the image.
- File—A template to configure a PDF file by providing the URL to access the file.
- Audio—A template to configure an audio file by providing the audio URL. It also shows the duration of the audio message in the output.
Configure Management settings
Before you begin
Create the Scripted AI Agent.
1 |
Navigate to and configure the following details: |
2 |
Click Save changes to save the settings. |
What to do next
Add languages to the Scripted AI Agent.
Add a language to a scripted AI Agent
Before you begin
Create the Scripted AI Agent.
1 |
Navigate to tab. |
2 |
Click +Add language(s) to add new languages and select the languages from the drop-down list. |
3 |
Click Add to add the language. |
4 |
Enable the toggle under Action to enable the language. |
5 |
Once you've added a language, you can set the language as default. Hover over the language and click Make default. You can't delete or disable a default language. Also, changing the default language can affect how the AI Agent works, including its articles, curation, testing, and preview. |
6 |
Click Save changes. |
Configure Handover settings
Before you begin
Create the Scripted AI Agent.
1 |
Navigate to and configure the following details: |
2 |
Click Save changes to save the handover settings. |
What to do next
Scripted AI Agent for answering questions
Scripted AI agents are knowledge-driven agents whose knowledge base consists of a corpus of questions and answers. A scripted AI agent provides answers based on a user-created training corpus, a collection of example questions and answers. This capability is useful in scenarios where:
- Specific knowledge is required—The agent needs to answer questions within a predefined domain.
- Consistency is important—The agent must provide consistent responses to similar queries.
- Limited flexibility is needed—The agent's responses are constrained by the information in the training corpus.
This section includes the following configuration settings:
Create a Scripted AI Agent for answering questions
1 |
Log in to the Webex AI Agent Studio platform. |
2 |
On the Dashboard, click +Create agent. |
3 |
On the Create an AI Agent screen, click Start from scratch. You can also choose a predefined template to create your AI
agent quickly. You can filter the AI Agent type as Scripted. In this case, the fields on
the Profile page autopopulate. |
4 |
Click Next. |
5 |
In the What type of agent are you building section, click Scripted. |
6 |
In the What's your agent's main function section, click Answer questions. |
7 |
Click Next. |
8 |
On the Define agent page, specify the following details: |
9 |
Click Create. The system creates the scripted AI agent for answering questions, which is now available on the Dashboard.
On the AI Agent header, you can perform the following tasks:
You can also import the prebuilt AI agents. For more information, see Import pre-built AI Agent. |
What to do next
Add articles to the AI agent.
Update Scripted AI Agent profile
Before you begin
Create a scripted AI agent for answering questions.
1 |
Log in to the Webex AI Agent Studio platform. |
2 |
From the Dashboard, select the AI agent that you created. |
3 |
Navigate to and configure the following details: |
4 |
Click Save changes to save the settings. |
Manage Articles
Articles are an important part of scripted AI Agents. An article is the combination of a question, its variations, and a response to this question. Each article has a Default question that identifies it. All the articles together make up the knowledge base or corpus of an AI Agent. When a user makes a query, the system checks the knowledge base and responds with the best answer it finds.
Rasa and Mindmeld NLU engines require a minimum of two training variants (utterances) for an article to be part of the corpus's trained model. If an article has fewer than two variants and you select the Rasa or Mindmeld NLU engine, the Train and Save and Train buttons are unavailable for answering questions. When you rest the pointer on these unavailable buttons, the system displays a message asking you to resolve the issues before training. The system also displays a warning icon next to the article with issues. You can resolve the issues by adding two or more variants for the article; the Train and Save and Train buttons become available once you do. The two-variant minimum doesn't apply to the default articles: partial match message, fallback message, and welcome message.
You can classify articles into categories of your choice; all uncategorized articles remain classified as unassigned. When you create an AI Agent, four default articles are available:
- Welcome message—This contains the first message whenever there's a start of conversation between the customer and the AI Agent.
- Fallback message—The AI Agent shows this message when it can't understand your question.
- Partial match—When the AI Agent recognizes multiple articles with a small difference in scores (as set in Handover and Inferences settings), the agent shows this match message along with the matched articles as options. You can also configure the text response to be displayed along with these options.
- What can you do?—You can configure a summary of the AI Agent's capabilities. The AI Agent displays this message whenever end users ask what it can do.
In addition to these, the system adds the Talk to an agent default article if you enable agent handover from Handover and Inference settings.
All new AI Agents also have four Smalltalk articles that handle user utterances for:
- Greetings
- Thank you!
- The AI Agent wasn’t helpful.
- Goodbye
These articles and responses are available in the AI Agent knowledge base by default when you create a new AI Agent. You can also modify or remove them.
Add articles through UI and default response
An article is the combination of a question, its variations, and a response to this question. The system compares every query to these articles (knowledge base) and displays the answer with the highest confidence level to the user as the response of the AI agent. To add articles:
1 |
Log in to the Webex AI Agent Studio platform. |
2 |
From the Dashboard, choose the AI agent that you created. |
3 |
Navigate to and click Create new article. |
4 |
Add the default variants. |
5 |
Choose any of these default responses for the article. Possible values:
For more information, see the Configure Responses using Response Designer section. |
6 |
Click Save and train. |
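The "highest confidence" matching described above can be illustrated with a toy scorer. Jaccard word overlap here is purely illustrative; the platform's NLU engines (Rasa, Mindmeld, Swiftmatch) use far more sophisticated models, and the article names below are hypothetical.

```python
def score(query, variant):
    """Toy confidence: Jaccard overlap between word sets."""
    q, v = set(query.lower().split()), set(variant.lower().split())
    return len(q & v) / len(q | v)

# Hypothetical knowledge base: article name → training variants.
articles = {
    "store hours": ["what time do you open", "when are you open"],
    "returns": ["how do i return an item", "return policy"],
}

def best_article(query):
    """Return the article whose best variant scores highest for the query."""
    return max(articles, key=lambda a: max(score(query, v) for v in articles[a]))

print(best_article("when do you open"))  # → store hours
```

A real deployment would also apply the fallback and partial-match thresholds described earlier instead of always returning the top article.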
Import from catalogues
1 |
Log in to the Webex AI Agent Studio platform. |
2 |
From the Dashboard, choose the AI agent that you created. |
3 |
Navigate to and click the Ellipsis icon. |
4 |
Click Import from catalogues. |
5 |
Choose the categories of the articles to add to the agent. |
6 |
Click Done. |
Extract FAQs from a link
1 |
Log in to the Webex AI Agent Studio platform. |
2 |
From the Dashboard, choose the AI agent that you created. |
3 |
Navigate to and click the ellipsis icon. |
4 |
Click Extract FAQs from link. |
5 |
Provide the URL where FAQs are hosted and click Extract. |
6 |
Click Import. |
Import from a file
1 |
Log in to the Webex AI Agent Studio platform. |
2 |
From the Dashboard, choose the AI agent that you created. |
3 |
Navigate to and click the Ellipsis icon. |
4 |
Click Import from a file and choose CSV to import the articles from the CSV file. If you’re importing articles from a file in JSON format, choose JSON. |
5 |
Click Browse and select a file that contains all articles. Click Download sample to view the format for specifying the articles. |
6 |
Click Import. |
Add custom synonyms
Many AI agent use cases involve words and phrases that aren't standard English or are specific to a business context. For example, you may want the AI agent to recognize android app, iOS app, and so on. Without synonyms, you must include these terms and their variations in the training utterances for all related articles, leading to redundant data entry.
To overcome this redundancy, you can use custom synonyms within a scripted AI agent for answering questions. At runtime, the platform automatically replaces synonyms of each root word with the root word.
1. Log in to the Webex AI Agent Studio platform.
2. From the Dashboard, choose the AI agent that you created.
3. Navigate to the articles section and click the Ellipsis icon.
4. Click Custom synonyms.
5. Click New Root Word.
6. Configure the root word value and its synonyms, and click Save.
7. Train the AI agent again after adding the synonyms. You can also export the synonyms (in CSV file format) to a local folder and import the file back to the platform.
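The runtime replacement of synonyms with root words can be sketched as follows. This is an illustrative sketch, not the platform's implementation; the root word and synonyms are hypothetical examples.

```python
import re

# Hypothetical synonym map: each root word (key) has synonyms that would be
# normalized to it before intent matching.
SYNONYMS = {
    "mobile app": ["android app", "ios app", "iphone app"],
}

def normalize(utterance: str) -> str:
    """Replace each synonym with its root word in the incoming utterance."""
    result = utterance.lower()
    for root, variants in SYNONYMS.items():
        # Replace longer variants first so shorter ones don't clobber them.
        for variant in sorted(variants, key=len, reverse=True):
            result = re.sub(re.escape(variant), root, result)
    return result

print(normalize("How do I update the iOS app?"))
# how do i update the mobile app?
```

Because every variant collapses to one root word, a single set of training utterances covers all of them.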
Configure Management settings
Before you begin
Create the Scripted AI Agent.
1. Navigate to the management settings and configure the following details:
2. Click Save changes to save the settings.
What to do next
Add languages to the Scripted AI Agent.
Add a language to a scripted AI Agent
Before you begin
Create the Scripted AI Agent.
1. Navigate to the Languages tab.
2. Click +Add language(s) and select the languages from the drop-down list.
3. Click Add to add the language.
4. Enable the toggle under Action to enable the language.
5. After you've added a language, you can set it as the default. Hover over the language and click Make default. You can't delete or disable the default language. Also, changing the default language can affect how the AI agent works, including its articles, curation, testing, and preview.
6. Click Save changes.
Configure Handover settings
Before you begin
Create the Scripted AI Agent.
1. Navigate to the handover settings and configure the following details:
2. Click Save changes to save the handover settings.
What to do next
Preview your Scripted AI Agent
Webex AI Agent Studio allows you to preview your AI agents during and after development. This way, you can test the AI agent's functioning and determine if it generates the desired responses for the respective input queries. You can preview your scripted AI agent in the following ways:
- AI Agent dashboard—Hover over an AI Agent card to view the Preview option for that AI agent. Click Preview to open the AI Agent preview widget.
- AI Agent header—After entering the Edit mode for any AI Agent by clicking on the AI Agent card or the Edit button on the AI Agent card, the Preview option is always visible on the header section.
- Minimized widget—After launching and minimizing a preview, a chat head widget appears at the bottom right of the page. This lets you easily reopen the preview mode.
In addition, you can copy the shareable preview link from within an AI agent. On the AI Agent card, click the Ellipsis icon in the top right, and click Copy Preview Link. You can share this link with other users of the AI agent.
Platform preview widget
The preview widget appears at the bottom right of the screen. You can provide utterances (or a sequence of utterances) to see how the AI agent responds, ensuring it performs as expected. The AI agent preview supports multiple languages and can autodetect the language of utterances to respond accordingly. You can also manually select the language in the preview by clicking the language selector and choosing from the list of available options.
You can maximize the preview widget for a better view. Also, you can provide consumer information and initiate multiple rooms to thoroughly test the AI agent.
Shareable preview widget
The shareable preview widget lets you share the AI agent with stakeholders and consumers without creating a custom UI. By default, the copied preview link renders the AI agent with a phone casing. You can do some quick customization by changing certain parameters in the preview link. The two major customizations are:

- Widget color—Append a brandColor parameter to the link. You can define simple colors using color names or hex color codes.
- Phone casing—Change the value of the phoneCasing parameter in the link. This parameter is set to true by default, but you can disable it by setting it to false.

Example preview link with these parameters: ?botunique_name=<yourbot_unique_name>&enterpriseunique_name=<yourenterprise_unique_name>&root=.&phoneCasing=true&brandColor=_4391DA
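If you generate customized preview links programmatically, the query string can be built as in this sketch. The base URL is omitted because the actual link is copied from the AI Agent card; the placeholder values are stand-ins for the identifiers in your own link.

```python
from urllib.parse import urlencode

# Only the parameters shown in the example link above are used here; the
# placeholder identifiers must be taken from your own copied preview link.
params = {
    "botunique_name": "yourbot_unique_name",
    "enterpriseunique_name": "yourenterprise_unique_name",
    "root": ".",
    "phoneCasing": "false",   # drop the default phone casing
    "brandColor": "_4391DA",  # brand color as given in the example link
}
query = "?" + urlencode(params)
print(query)
```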
Common management sections for Scripted AI Agent
The following sections appear on the left panel of the AI Agent configuration page:
Training
As AI agents evolve and become more complex, changes to their logic or Natural Language Understanding (NLU) can sometimes have unintended consequences. To ensure optimal performance and identify potential issues, the AI agent platform offers a convenient one-click bot testing framework. You can:
- Easily create and run a comprehensive set of test cases.
- Define test messages and expected responses for various scenarios.
- Simulate complex interactions by creating test cases with multiple messages.
Define tests
You can define tests using the following steps:
- Log in to the Webex AI Agent Studio platform.
- On the Dashboard, click the scripted AI agent that you've created.
- Click Testing in the left pane. By default, the Testcases tab appears.
- Select a test case and click Execute selected tests.
Each row in the table represents a test case having the following parameters:
Parameter | Description |
---|---|
Message | A sample message that represents the types of queries and statements you can expect users to send to your AI agent. |
Expected language | The language in which the user interacts with the AI agent. |
Expected article | Specify the article to be displayed in response to a particular user message. To assist you in finding the most relevant article, this column features a Smart auto-complete function. As you enter, the system suggests matching articles based on the text entered so far. |
Reset previous context | Click the check box to isolate test cases and run them independently of any existing AI agent context. When enabled, each test case is simulated in a new session, preventing interference from previous interactions, or stored data. |
Include partial matches | Enable this toggle to consider test cases successful even if the expected articles only partially match the actual response. |
Import from CSV | Import test cases from a comma-separated values (CSV) file. In this case, all existing test cases are overwritten. |
Export to CSV | Export test cases to a comma-separated values (CSV) file. |
Test callbacks | Enable this toggle to simulate incoming callbacks and test the flow behavior without requiring actual incoming calls. This option is available only for scripted AI agents for performing actions. |
Callback in flow | Click the check box in this column to indicate that an intent must trigger a callback. This option is available only for scripted AI agents for performing actions. |
Expected callback template | Specify the template key to activate when the callback occurs. This option is available only for scripted AI agents for performing actions. |
Callback timeout (s) | The maximum amount of time (in seconds) the AI agent waits for a callback response before considering the callback as timed out. The system allows a maximum of 20-second timeout. This option is available only for scripted AI agents for performing actions. |
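If you maintain test cases outside the platform, a CSV like the following sketch can be generated for import. The header names mirror the table above, but the authoritative format comes from the Export to CSV option, so treat these columns as assumptions.

```python
import csv
import io

# Column names mirror the test-case table above; export an existing set with
# Export to CSV to confirm the real header format before importing.
cases = [
    {
        "Message": "What are your opening hours?",
        "Expected language": "en",
        "Expected article": "Opening hours",
    },
    {
        "Message": "Where is my order",
        "Expected language": "en",
        "Expected article": "Order tracking",
    },
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(cases[0]))
writer.writeheader()
writer.writerows(cases)
print(buf.getvalue())
```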
Execute tests
On the Execution tab, click Execute selected tests to initiate a sequential execution of all selected test cases.
You can also execute test cases from the Test cases tab.
To view test cases with specific outcomes, click the desired result (for example, Passed, Passed with partial match, Failed, or Pending) in the summary ribbon. This filters the test case list to display only those matching the selected result.

The session ID associated with each test case is displayed in the results. This allows you to quickly cross-reference test cases and view transaction details. To do this, choose the Transaction Details option in the Actions column.
Execution history
On the History tab, access all executed test cases.
- Click the Download icon from the Actions column to export the executed test data as a CSV file for offline analysis or reporting.
- Review the specific engine and algorithm settings used for each test case execution. This information helps developers optimize the AI agent's performance.
- To view the advanced algorithm configuration settings used for a particular training engine, click the Info icon next to the training engine name. This provides insights into the parameters and settings that influenced the AI agent's behavior during testing.
Sessions
The Sessions section provides a comprehensive record of all interactions between AI agents and customers. Each session includes a detailed history of messages exchanged. You can export session data as a CSV file for offline analysis and auditing. Use this data to analyze user interactions, identify areas for improvement, and refine AI agent responses.
The Sessions view handles large data sets by displaying results in pages. You can use the Refine Results section to filter and sort sessions based on various criteria. Each row in the table displays essential session details, including:
- Channels—The channel where the interaction occurred (for example, chat, voice).
- Session ID—A unique identifier for the session.
- Consumer ID—The unique identifier of the user.
- Messages—The number of messages exchanged during the session.
- Updated at—The last updated system time.
- Metadata—Additional information about the session.
- Hide test sessions—Select this check box to hide the test sessions and display only the list of live sessions.
- Agent handover happened—Select this check box to filter sessions that were handed over to an agent. If an agent handover happens, a headphone icon indicates the handover of the chat to a human agent.
- Error occurred—Select this check box to filter the sessions in which an error occurred.
- Downvoted—Select this check box to filter the downvoted sessions.
Click on a row to access the detailed view of a specific session. Use check boxes to filter sessions based on agent handover, errors, and downvotes. Decrypting sessions requires user-level permission and advanced data protection settings. Click Decrypt content to view the session details.
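Exported session data can be analyzed offline. The following sketch filters a session CSV export for handovers and errors; the column names and yes/no flag values are assumptions about the export layout, based on the fields listed above.

```python
import csv
import io

# Illustrative export with the columns listed above; confirm the real layout
# by inspecting an actual CSV export from the Sessions section.
export = """Session ID,Channels,Messages,Agent handover happened,Error occurred
s-001,chat,12,yes,no
s-002,voice,4,no,yes
s-003,chat,7,no,no
"""

rows = list(csv.DictReader(io.StringIO(export)))
handovers = [r["Session ID"] for r in rows if r["Agent handover happened"] == "yes"]
errors = [r["Session ID"] for r in rows if r["Error occurred"] == "yes"]
print(handovers, errors)  # ['s-001'] ['s-002']
```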
Session details of a particular session in the Scripted AI Agent for answering questions
The Session Details view in a scripted AI agent for answering questions provides a comprehensive breakdown of a specific interaction between a user and the AI Agent.
The Messages section:
- Displays all messages sent by the user during the session.
- Shows the corresponding responses generated by the AI Agent.
- Presents the chronological order of the messages, providing context for the interaction.
The Transaction Info tab:
- Lists the articles that the AI agent identified as relevant to the user's query, including exact and partial matches.
- Displays the similarity scores associated with each identified article, indicating the degree of relevance.
- Presents the results of the underlying algorithms used to process the customer's query and identify relevant articles.
- Displays the number of algorithm results depending on the settings configured in the Handover and Inference tab.
The Other Info section in the Session Details view provides additional context and details about a specific interaction. Here's a breakdown of the information displayed:
- Processed Query—Shows the preprocessed version of the customer's input after the AI Agent's natural language understanding (NLU) pipeline processes it.
- Agent Handover—Indicates whether an agent handover occurred during the session. Check the Agent Handover by Rules check box if specific rules triggered an agent handover.
- Response Type—Specifies the type of response generated by the AI agent, such as a code snippet or a conditional response.
- Response Condition—Indicates the specific condition or rule that triggered the AI Agent's response.
- NLU Engine—Identifies the NLU engine used to process the customer's query (for example, RASA, Switchmatch, or Mindmeld).
- Threshold Scores—Displays the minimum threshold score and partial match score difference configured in the Handover and Inference settings. These values determine when a query is considered out of scope or when it requires agent intervention.
- Advanced Logs—Provides a list of debug logs associated with the specific transaction ID. The system typically retains advanced logs for 180 days.
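The two threshold values can be understood with a small sketch. This is not the platform's actual algorithm; it only illustrates how a minimum threshold score and a partial match score difference could drive the out-of-scope and partial-match decisions described above.

```python
# Sketch only: not the platform's implementation. Scores and thresholds are
# illustrative values.
def classify(scores, min_threshold=0.6, partial_diff=0.1):
    """scores maps each candidate article to its similarity score."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    if not ranked or ranked[0][1] < min_threshold:
        return "out_of_scope"    # fall back or hand over to a human agent
    if len(ranked) > 1 and ranked[0][1] - ranked[1][1] < partial_diff:
        return "partial_match"   # offer the top candidates as options
    return "match"

print(classify({"Refunds": 0.82, "Shipping": 0.45}))  # match
print(classify({"Refunds": 0.55}))                    # out_of_scope
print(classify({"Refunds": 0.78, "Returns": 0.74}))   # partial_match
```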
Session details of a particular session in the Scripted AI Agent for performing actions
The Transaction Info tab in the Scripted AI Agent for performing actions provides a detailed breakdown of a specific interaction, categorizing information into four sections:
Intents Identified section:
- Displays the intents identified for the customer's query.
- Indicates the confidence level associated with each identified intent.
- Lists the slots that are associated with the identified intent. Click the slot to view additional information about its value and how the system extracts it from the user's query.
Entities Identified section lists the entities that the system extracts from the customer's message and associates with the active consumer intent. These entities represent the key pieces of information that the AI Agent identified within the user's query.
The Algorithm Results section provides insights into the underlying processes that led to the AI Agent's response. Here's a breakdown of the information displayed:
- List of Intents—Shows the identified intents and their corresponding similarity scores.
- Entity List—Displays the entities that were extracted from the user's message.
The Other Info displays:
- Agent Handover—Indicates whether an agent handover occurred during the session. Check the Agent Handover by Rules check box if an agent handover was triggered by specific rules.
- Template Key—Indicates the template key associated with the intent that triggered the AI agent's response.
- Response Type—Indicates the type of response generated by the AI agent, such as a code snippet or a conditional response.
- Response Condition—Indicates the specific condition or rule that triggered the AI agent's response.
- NLU Engine—Identifies the NLU engine used to process the customer's query (for example, RASA, Switchmatch, or Mindmeld).
- Threshold Scores—Displays the minimum threshold score and partial match score difference configured in the Handover and Inference settings. The system determines whether a query is out of scope or requires agent intervention based on these values.
- Advanced Logs—Provides a list of debug logs associated with the specific transaction ID. Advanced logs are typically retained for 180 days.
You can also download and view the transaction info in the JSON format using the download option.
The Metadata tab displays:
- NLP Metadata—Review the preprocessing steps applied to the customer's input in the NLP tab.
- Datastore and FinalDF—Access data related to the session in the Datastore and FinalDF tabs for AI Agents.
- Search Functionality—Use the built-in search bar to find specific utterances within a conversation.
History
Whenever you add or modify articles, intents, or entities, it's essential to retrain your scripted AI agent to ensure it's the latest version. After each training session, thoroughly test your AI Agent to verify its accuracy and effectiveness.
The History page allows you to:
- View Training History—Track when you trained a corpus and the changes made.
- Compare Training Engines—Review the training engines used for different iterations and their corresponding training durations.
- Track changes—Monitor changes to settings, articles, responses, NLP, and curation.
- Revert to previous versions—Easily revert to an older training set if needed.
The History section provides convenient tools for managing your knowledge base articles:
- Activate Articles—Make previously inactive articles Live to include them in the AI Agent's responses.
- Edit Articles—Create a new version of an existing article while preserving the original for reference.
- Preview Performance—Evaluate the AI Agent's performance with a specific knowledge base using the Preview feature.
- Download Articles—Export your knowledge base articles as a CSV file for offline analysis or reference. This option is available for Scripted AI Agent for answering questions only.
Audit Logs
The Audit Logs section provides a detailed record of modifications made to your Scripted AI Agent within the past 35 days. To access Audit Logs:
- Navigate to the Dashboard and click the AI agent that you've created.
- Click the History tab to view the AI Agent's history.
- Click the Audit Logs tab to see a detailed log of changes:
- Updated At—The date and time when the change was made.
- Updated By—The user who made the change.
- Field—The section of the bot where the modification occurred (for example, Settings, Articles, and Responses).
- Description—Additional details about the change.
- Use the Updated by and Field search options to find specific audit log entries.
- The Model History tab displays a maximum of 10 corpora for each AI Agent.
Curation
The system adds messages to the Curation console based on the following criteria:
- Fallback Messages—When the AI Agent fails to understand your message and triggers the fallback intent.
- Default Fallback Intent—If you enable this toggle, messages that activate the default fallback intent are sent to the Curation console. This criterion applies only to scripted AI agents for performing actions.
- Downvoted Messages—Messages that users have downvoted during AI Agent previews.
- Agent Handover—Messages that result in a human agent handover due to configured rules.
- From Session—Messages flagged by users as not receiving the desired response from session or room data.
- Low Confidence—Messages with a confidence score falling within the specified low-confidence threshold.
- Partial Match—Messages where the AI Agent couldn't figure out the right intent or response.
Resolve issues
The Issues tab allows you to review and address messages flagged for curation. You can do the following:
- Choose to resolve or ignore issues based on their severity and relevance.
- Examine the original user utterance, the AI Agent's response, and any attached media.
If Advanced Data Protection is enabled in the backend, the system grants decrypted access at the user level.
To resolve an issue, you can:
- Link to an existing article—To connect an issue to an existing article, select the Link option and search for the desired article.
- Create new article—Use the Add to a New Article option to create a new article directly from the Curation Console.
- Ignore issues—Resolve or ignore issues to remove them from the Curation Console.
- You can't link to default articles (welcome message, fallback message, partial match).
- For a scripted AI agent for performing actions, select the appropriate intent from the drop-down list and tag any relevant entities.
- After making changes, retrain your AI Agent to ensure that it reflects the new knowledge in its responses.
- Resolve or ignore multiple issues simultaneously for efficient management.
The Resolved tab displays all issues addressed by the system. You can view a summary of each resolved issue, including whether it was linked to an existing article, resolved by creating a new article or intent, or ignored. If you notice undesired responses that the system didn't flag, you can manually add specific examples to the Curation Console.
To add issues from sessions:
- Identify the utterance—Locate the utterance that triggered the incorrect response.
- Check the curation status—If the issue isn't already in the Curation Console, the system displays the Curation Status toggle.
- Toggle the flag—Enable the Curation Status toggle to add the utterance to the Curation Console for review and resolution.
If the issue is already in the Curation Console, the toggle's appearance changes to show its status.
View your Scripted AI performance using Analytics
The Analytics section provides a graphical representation of key metrics to evaluate the AI agent performance and effectiveness. The key metrics are divided into four sections represented as tabs, namely Overview, Responses, Training, and Curation.
On visiting the analytics screen, developers can select the AI agent they want to see the analytics for. They can customize the analytics view by choosing the channel, date range, and data granularity. By default, the system displays analytics data for the last month for all channels, with each day as a data point.
Overview
The overview contains key metrics and graphs that provide a snapshot of overall AI agent usage and performance to the developers.
- From the Dashboard, choose the AI agent that you've created.
- On the left navigation pane, click Analytics. An overview of the AI agent performance appears in both tabular format and graphical representation.
Sessions and messages
The first section in the overview displays the following statistics about sessions and messages for the AI agent:
- The count of the total sessions and the sessions that the AI agent handles without human intervention.
- Total agent handovers, which is a count of the number of sessions handed over to human agents.
- Daily average sessions
- Total messages (human and AI agent messages) and how many of those messages came from users.
- Daily average messages
The system follows this with a graphical representation of sessions (stacked column representing sessions handled by the AI agent and sessions handed over) and the total responses sent out by the AI agent.
Users
The second section in the overview contains stats about users for the AI Agent. It provides a count of total users and information about average sessions per user and daily average users. This is followed by a graph displaying new and returning users for each unit depending on the selected granularity.
Performance
The third section provides statistics about the AI agent's responses to users. Here you can see the total responses sent by the AI agent and the split between responses where the AI agent:
- Identified the user’s intent.
- Responded with a fallback message.
- Responded with a partial match message.
- Informed the user of an agent handover.
The same data is aggregated in a pie chart, and an area graph provides information based on the selected granularity.
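The metrics in this section can be reproduced from raw counts. The following sketch computes a containment rate and the response split using illustrative numbers, not real data.

```python
# All counts are illustrative, not taken from a real AI agent.
total_sessions = 1200      # total sessions in the selected date range
handed_over = 180          # sessions handed over to human agents

# Containment rate: share of sessions the AI agent handled on its own.
containment = (total_sessions - handed_over) / total_sessions

# Split of AI agent responses by outcome, as shown in the pie chart.
responses = {
    "intent identified": 3400,
    "fallback": 300,
    "partial match": 200,
    "agent handover": 100,
}
total = sum(responses.values())
split = {outcome: count / total for outcome, count in responses.items()}

print(f"containment: {containment:.1%}")          # containment: 85.0%
print({k: f"{v:.1%}" for k, v in split.items()})
```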
Training
The training section represents the 'health' of an AI Agent corpus. We recommend that developers configure 20 or more training utterances for each intent or article in their AI Agents. This section displays all articles and intents as rectangles, with color and size indicating the amount of training data. The closer an intent is to white, the more training data it needs to improve your AI Agent's accuracy.
Responses
This section gives the developers a detailed view of what the users are asking about and how often they are asking it. It graphically shows the most popular articles for AI Agents for answering questions and response templates for AI Agents for performing actions.
Curation
This section visually summarizes the number of curation issues that arise each day and the number that AI agents resolve.
Integrate AI Agents
This section explains how to integrate AI agents with both voice and digital channels to manage customer conversations.
Integrate AI agents with voice and digital channels
After you've created and configured your AI agents in the Webex AI Agent Studio platform, the next step is to integrate them with the voice and digital channels. This integration allows the AI agents to handle both voice-based and digital conversations with your customers, providing a seamless and interactive user experience.
For more information, see the article Integrate AI Agents with Voice and Digital channels.
Manage AI agent reports
This section outlines the overview of AI agent reports, report types, creation of AI agent reports, and report delivery modes.
Understand AI agent reports
The reports feature allows you to generate specific reports from the available report types, or schedule them to generate periodically, and receive them over the available delivery modes. These reports can provide valuable information about user behavior, usage, engagement, product performance, and so on. You can have the desired information delivered to your email, SFTP path, or S3 bucket. You can select a report type from prebuilt options and decide to generate it once immediately or at regular intervals.
When you access the Reports menu from the left navigation pane, the following tabs appear:
- Configure—This tab lists all the reports that are currently active and generated periodically. The following details are available for the list of reports:
  - Active—Whether a user is still subscribed to the report.
  - AI agent—Name of the AI agent associated with the report.
  - Report type—The prebuilt report type you've subscribed to.
  - Frequency—The interval at which you receive the report.
  - Last report generated—The last report that was sent out.
  - Next scheduled date—The next date the report will be sent out.
- History—This tab displays all reports that have been dispatched until now. Click any report on this page to edit its configuration. You can click the Download icon under the Actions column to download these historical reports. On-demand reports that appear in the History tab are available for download only after report generation is complete.
Create an AI agent report
1. Log in to the Webex AI Agent Studio platform.
2. Click Reports from the left navigation bar.
3. Click +New report.
4. Provide the following information to create and configure the report:
AI agent report types
You can choose from a list of prebuilt reports based on the selected AI agent type. This section covers these report types, the sheets included in each report and the columns available in each sheet.
AI agent for answering questions report type
There are three different report types available for an AI agent for answering questions in the application. Different report types help you understand AI agent usage, user questions, and how the AI responds. You can also view the messages that ended up as issues in curation.
Usage behavior and summary

This section shows how often the AI agent accesses articles and categories. You can view the summary, categories, and articles information in separate tabs of the report:
Field | Description |
---|---|
AI agent name | The name of the AI agent. |
Total conversations | Total conversations/sessions handled by an AI agent. |
Conversations with at least one user message | Conversations or sessions where users provided at least one input. |
Total human messages | The messages sent by end users to the AI agent. |
Total AI agent responses | Total messages sent by the AI agent to end users. |
Total partial matches | Cases where there was some ambiguity about the user's message and the AI agent responded with multiple intents as options. |
Conversations sent to agent | Total conversations handed over to a human agent. |
Total upvotes | Total AI agent responses that were upvoted by customers. |
Total downvotes | Total AI agent responses that were downvoted by customers. |
Field | Description |
---|---|
Category name | The name of the category as configured in the AI agent. |
Conversations for the category | The number of conversations or sessions where an article belonging to this category was detected. |
Total responses | The number of times an article belonging to this category was detected. |
Total upvotes | The number of times a response from this category was upvoted. |
Total downvotes | The number of times a response from this category was downvoted. |
Field | Description |
---|---|
Article name | The name of the article (default variant) that is configured in the AI agent. |
Article category | The category this intent belongs to. |
Conversations for the article | The number of conversations or sessions where this article was detected. |
Total responses | The number of times where this article was detected. |
Total upvotes | The number of times the response for this article was upvoted. |
Total downvotes | The number of times the response for this article was downvoted. |
Displays the conversation between the AI agent and the customer along with the similarity score. You can view the following details in the report:
Field | Description |
---|---|
Timestamp | The timestamp for the message. |
Session ID | The unique identifier for the session. |
Consumer ID | The unique identifier for the end user on AI agent. |
Message type | The AI agent message or human message. |
Message text | The content of the message. |
Article | The identifier for the response sent back by the AI agent. |
Category | The intent detected by the AI agent for the customer's message. |
Top match score | The similarity score for the detected intent. |
Matched article 1 | The intent detected by the selected NLU engine. |
Article 1 score | The score for the intent detected. |
Feedback | The user feedback if a message was upvoted or downvoted. |
Feedback comment | The comments left by users when downvoting a message. |
Displays the messages that ended up in curation as issues for various reasons. You can view the following details in the report:
Field | Description |
---|---|
Timestamp | Timestamp for the message. |
Session ID | A unique identifier for the user's session. |
Consumer ID | Unique identifier for the end user on AI agent. |
Human message | Content of the human message. |
AI agent message | Content of the message the AI agent responded with. |
Issue reason | The reason for this message ending up in curation. |
Article | Identifier for the response sent back by the AI agent. |
Category | Intent detected by the AI agent for the user's message. |
Top match score | Similarity score for the detected intent. |
Matched article 1 | Intent detected by the selected NLU engine. |
Article 1 score | Score for the intent detected. |
AI agent for performing tasks report type
There are three different report types available for an AI agent for performing a task in the AI agent builder application. As an AI agent developer, you can create different report types. These help you understand the AI agent's usage, behavior, user queries, and responses. You can also view the messages that ended up as issues in curation.
Shows a summary of conversations with triggered intents and template keys. The summary tab displays the following details:
Field | Description |
---|---|
AI agent name | The name of the AI agent. |
Total conversations | Total conversations or sessions that are handled by the AI agent. |
Conversations with at least one user message | Conversations or sessions where users provided at least one input. |
Total human messages | The messages that are sent by end users to the AI agent. |
Total AI agent responses | Total messages sent by the AI agent to end users. |
Total partial matches | Cases where there was some ambiguity about the user's message and the AI agent responded with multiple intents as options. |
Conversations sent to agent | Total conversations handed over to a human agent. |
Total upvotes | Total AI agent responses upvoted by users. |
Total downvotes | Total AI agent responses downvoted by users. |
You can also view the intent details in the Intents tab of the spreadsheet:
Field | Description |
---|---|
Intent name | The name of the intent as configured in the AI agent. |
Conversations for the intent | Number of conversations or sessions where this intent was invoked. |
Total invocations | The number of times this intent was invoked. |
Total completions | The number of times all the slots were collected and this intent was completed. |
Total upvotes | Total responses that were upvoted for each intent. |
Total downvotes | Total responses that were downvoted for each intent. |
The report also has high-level template details such as:
Field | Description |
---|---|
Template key name | Name of the template as configured in the AI agent. |
Template key intent | Intents where this template key is used. |
Conversations for the template key | Number of conversations where this template key was sent as a response. |
Total responses | The number of times this template key was sent as a response. |
Total upvotes | The number of times the response for this template was upvoted. |
Total downvotes | The number of times the response for this template was downvoted. |
Displays a customer's conversation with the AI agent along with the similarity scores. You can view the following details in the report:
Field | Description |
---|---|
Timestamp | Timestamp for the message. |
Session ID | A unique identifier for the user's session. |
Consumer ID | Unique identifier for the end user on the application. |
Message type | AI agent message or human message. |
Message text | Content of the message. |
Template key | Identifier for the response sent back by the AI agent. |
Intent | Intent detected by the AI agent for the customer's message. |
Top match score | Similarity score for the detected intent. |
Matched intent 1 | Intent detected by the selected NLU engine. |
Intent 1 score | Score for the intent detected. |
Feedback | User feedback if a message was upvoted or downvoted. |
Feedback comment | Comments left by users when downvoting a message. |
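As a sketch of how this report can be analyzed after export, the following snippet tallies the Feedback column described above. The column names mirror the table, but the sample rows and the `analyze_feedback` helper are hypothetical, not part of the product:

```python
import csv
import io
from collections import Counter

# Hypothetical excerpt of a conversation report export; the columns
# follow the field names documented above.
SAMPLE_REPORT = """Session ID,Message type,Intent,Feedback
s1,human message,check_balance,
s1,AI agent message,check_balance,upvote
s2,human message,reset_password,
s2,AI agent message,reset_password,downvote
"""

def analyze_feedback(report_text):
    """Count upvoted and downvoted messages across the report."""
    counts = Counter()
    for row in csv.DictReader(io.StringIO(report_text)):
        feedback = row["Feedback"].strip()
        if feedback:  # human messages carry no feedback value
            counts[feedback] += 1
    return counts

counts = analyze_feedback(SAMPLE_REPORT)
print(counts["upvote"], counts["downvote"])  # 1 1
```

The same pattern extends to any of the columns above, for example grouping by Intent to see which intents attract the most downvotes.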
Displays the messages that ended up in curation as issues for various reasons. This report is relevant only for scripted AI agents. You can view the following details in this report:
Field | Description |
---|---|
Timestamp | Timestamp for the message. |
Session ID | A unique identifier for the customer's session. |
Consumer ID | A unique identifier for the end user on the application. |
Human message | Content of the human message. |
AI agent message | Content of the message the AI agent responded with. |
Issue reason | The reason for this message ending up in curation. |
Template key | Identifier for the response sent back by the AI agent. |
Intent | Intent detected by the AI agent for the user's message. |
Top match score | Similarity score for the detected intent. |
Matched intent 1 | Intent detected by the selected NLU engine. |
Intent 1 score | Score for the intent detected. |
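When triaging a curation export, a common first step is tallying the Issue reason column described above to surface the most frequent failure modes. A minimal sketch, with illustrative rows and reason strings that are hypothetical rather than taken from the product:

```python
from collections import Counter

# Hypothetical curation-report rows keyed by the fields documented above.
curation_rows = [
    {"Human message": "cancel my order", "Issue reason": "No intent matched"},
    {"Human message": "talk to a person", "Issue reason": "Low confidence score"},
    {"Human message": "wht is my balence", "Issue reason": "No intent matched"},
]

def issues_by_reason(rows):
    """Tally curation issues so the dominant failure modes surface first."""
    return Counter(row["Issue reason"] for row in rows)

print(issues_by_reason(curation_rows).most_common(1))
# [('No intent matched', 2)]
```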
Delivery modes of an AI agent report
The efficient and secure delivery of AI agent reports is crucial for informed decision-making and operational excellence. To meet diverse organizational needs, multiple delivery modes are available for AI agent reports: email, Secure File Transfer Protocol (SFTP), and Amazon S3 Bucket. Each mode caters to different requirements, whether the priority is high security, ease of access, or scalable storage. This section outlines the features of each delivery mode, helping you choose the best option for your specific needs.
SFTP
Field | Description |
---|---|
Push reports to a secure location as scheduled | Toggle this on to push the reports to the secure location at the scheduled time. You can provide the following details only after you enable this toggle. |
IP Address | The IP address of the system. |
Username | The username to access the reports. |
Password | The password to access the reports. |
Private key | The private key to access the files. |
Upload path | The path where the files are routed in the system. |
Email
Field | Description |
---|---|
Schedule emails for multiple recipients, separate with semicolon (;) | Toggle this on to add recipients. |
Recipients | The email addresses of all recipients who must receive the reports at the specified time and frequency. |
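Since the recipient list is entered as a single semicolon-separated string, a small sketch of how such a value can be split into individual addresses may be useful. The `parse_recipients` helper is hypothetical, not part of the product:

```python
def parse_recipients(raw):
    """Split a semicolon-separated recipient string into clean addresses,
    dropping empty entries left by trailing or doubled separators."""
    return [addr.strip() for addr in raw.split(";") if addr.strip()]

print(parse_recipients("ops@example.com; reports@example.com;"))
# ['ops@example.com', 'reports@example.com']
```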
S3 Bucket
Field | Description |
---|---|
Upload reports to an S3 bucket as per schedule | Toggle this on to make the S3 fields available and route the reports to the configured S3 bucket. |
AWS access key ID | The access key ID to access the AWS services and resources. |
AWS secret access key | The secret access key to access the AWS services and resources. |
Bucket name | The name of the bucket to which the report is routed. |
Folder name | The name of the folder that is created in the S3 bucket. |
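To illustrate how the folder name relates to where reports land in the bucket, the sketch below builds an S3 object key from the configured folder. The date-stamped naming scheme and the `report_key` helper are assumptions for illustration, not the product's documented layout:

```python
from datetime import date

def report_key(folder_name, report_name, run_date):
    """Build the object key under which a scheduled report would be
    uploaded, assuming a hypothetical date-stamped folder layout."""
    return f"{folder_name}/{run_date.isoformat()}/{report_name}"

print(report_key("ai-agent-reports", "summary.csv", date(2024, 5, 1)))
# ai-agent-reports/2024-05-01/summary.csv
```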
Understand AI Compliance
This section helps you understand AI development, data privacy, security, and safety.
AI development, data privacy, security, and safety
For every AI-powered feature, we undergo an AI Impact Assessment against our Responsible AI principles, and adhere to the Responsible AI Framework, in addition to existing Security, Privacy, and Human Rights by Design processes.
Privacy and Security—We don’t retain customer input data after the inference process, and the third-party model provider, Microsoft, doesn’t access, monitor, or store Cisco customer data. For more detail on feature-specific data retention policies, see the Cisco Trust Portal.
The following is the list of AI transparency notes for all AI features:
Data Sources for Training and Evaluation—Our third-party model provider, Microsoft, doesn't use customer content to improve Azure OpenAI models and doesn’t store or retain Cisco customer data in Azure infrastructure.
Safety and Ethical Considerations—All generative AI features are prone to errors, so we prioritize content safety for AI features by opting in to content filtering provided by Azure OpenAI.
Model Evaluation and Performance—We prioritize the performance and accuracy of AI Assistant by involving humans in the review, testing, and quality assurance of the underlying model.