After configuring suggested responses and setting up your AI Assistant skills, it is crucial to test their effectiveness and continuously monitor their performance. This proactive approach ensures that the AI Assistant provides accurate and relevant suggestions, leading to improved agent efficiency and enhanced customer satisfaction. Webex Contact Center provides comprehensive tools within Webex AI Studio and Analyzer for thorough testing and ongoing monitoring.

This article guides you through the processes of previewing AI Assistant skill responses, monitoring their impact using performance metrics, and utilizing Sessions and History tabs for auditing and debugging.

Prerequisites

Before testing and monitoring suggested responses performance, ensure the following:

  • AI Assistant add-on SKU: Your organization must have purchased the AI Assistant add-on SKU (A-FLEX-AI-ASST) for Webex Contact Center.
  • Webex AI Studio access: You must have the necessary administrator privileges to access the Webex AI Studio platform.
  • Configured AI Assistant skill: An AI Assistant skill must be created, configured, and published in the Webex AI Studio with relevant knowledge bases, instructions, and actions.

    For more information, refer to the Create and manage AI Assistant skills and Configure actions for AI Assistant skills articles.

  • Linked AI Assistant skill to queues: The AI Assistant skill providing suggested responses must be linked to the appropriate queues.

    For more information, refer to the Link AI Assistant skills to queues article.

  • Real-time transcriptions enabled (for voice): For voice interactions, real-time transcription must be enabled for the relevant queues. This is essential for the AI Assistant to process spoken conversations and generate suggestions.

    For more information, refer to the Enable real-time transcripts for agents article.

Previewing an AI Assistant skill

Before deploying an AI Assistant skill to live agents, you can test its behavior in a simulated environment within the Webex AI Studio. This allows you to validate the relevance and accuracy of suggestions. Follow the steps below:

1

Sign in to Control Hub and navigate to Services > Contact Center.

2

Go to AI Assistant under Desktop Experience in the navigation pane.

3

Under the Suggested responses section, click the Manage AI Assistant skills link. The AI Assistant skills dashboard opens in a new browser tab within the Webex AI Studio.

4

In the AI Assistant skills dashboard, click the AI Assistant skill that you want to test. This opens its configuration page.

5

Click the Preview button.

Chat mode preview: The preview within the Webex AI Studio allows you to simulate a chat interaction. You can assume the role of a customer, type queries, and observe how the AI Assistant skill generates suggestions, just as a human agent would see them.
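
If you want to structure your preview testing, a simple test matrix helps track coverage across your knowledge base. The following is a minimal sketch in Python; the queries and expected sources are placeholders that you would replace with entries relevant to your own AI Assistant skill.

    # Hypothetical test matrix for manually previewing an AI Assistant skill.
    # Each entry pairs a sample customer query with the knowledge-base source
    # you expect the suggestion to cite. All values are placeholders.
    preview_tests = [
        {"query": "How do I reset my password?",
         "expected_source": "Account security FAQ"},
        {"query": "What is your refund policy?",
         "expected_source": "Billing and refunds article"},
    ]

    for i, test in enumerate(preview_tests, start=1):
        print(f"Test {i}: type '{test['query']}' into the preview chat")
        print(f"  Verify the suggestion cites: {test['expected_source']}")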

Note for voice channel testing: For a detailed preview of how suggestions appear and function during live voice interactions, you must switch to the Agent Desktop and test the feature in a real-call scenario. This requires real-time transcription to be active for the call.

Utilizing Analyzer for performance monitoring

The suggested responses feature is part of the AI Assistant. You can review AI Assistant reports and KPIs in Analyzer to monitor usage and effectiveness.

For more information, refer to the AI Assistant reports in Analyzer article.

Insights offered by the suggested responses report

This report shows the real-world impact of suggested responses on agent performance and customer satisfaction, helping you gather feedback for continuous improvement and measure the feature's effectiveness. It covers the following key areas:

  • Usage metrics: Track how often agents use the Get Suggestions button, the number of suggestions offered per interaction, and the feature's adoption rate (a worked example follows this list).
  • Impact on KPIs: Monitor changes in Average Handle Time (AHT), consults/transfers, customer satisfaction (CSAT)/Auto CSAT, and First Contact Resolution (FCR) for interactions where suggested responses were used.
  • Action execution: Track the number of unmoderated and moderated actions performed and their success rates.
  • Real-time transcription KPIs: For voice interactions, cross-reference the performance of suggested responses with the real-time transcription KPIs report to ensure the underlying transcription service is performing optimally. For more information, refer to the Enable real-time transcripts for agents article.
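
To make the usage and KPI metrics concrete, the following sketch computes an adoption rate and an AHT delta in Python. All figures and variable names are invented for illustration; the actual values come from the Analyzer report.

    # Illustrative only: placeholder aggregates standing in for Analyzer data.
    interactions_with_suggestions = 1200  # interactions where Get Suggestions was used
    total_interactions = 4000             # all eligible interactions in the period
    aht_with_suggestions = 310.0          # Average Handle Time in seconds
    aht_without_suggestions = 355.0

    adoption_rate = interactions_with_suggestions / total_interactions
    aht_delta = aht_without_suggestions - aht_with_suggestions

    print(f"Adoption rate: {adoption_rate:.1%}")                      # 30.0%
    print(f"AHT reduction: {aht_delta:.0f} seconds per interaction")  # 45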

Utilizing AI Studio for auditing and debugging

Besides the Configuration tab, the Webex AI Studio provides two tabs for auditing and debugging your AI Assistant skills: Sessions and History.

Sessions tab

The Sessions tab provides a detailed record of every interaction where an AI Assistant skill was used for suggested responses. Use it for auditing, debugging, and continuous improvement of AI Assistant skill performance in live interactions. To leverage the insights from this tab effectively, perform the following actions (an illustrative sketch of a session record and filter follows this list):

  1. View and filter interactions: The Sessions page displays a list of all interactions where AI Assistant skills provided suggestions:
      • Search: Use the search bar to find specific interactions by contact session ID, consumer ID, or interaction ID.
      • Filter: Apply filters to narrow down the list by:
        • Contact date: Interactions within a specific time range.
        • Agents, teams, queues: Interactions handled by specific personnel or routed through particular queues.
        • Channel types: Voice or digital interactions.
        • Actions suggested/performed: Interactions where specific actions were proposed or performed.
        • Errors occurred: To filter sessions in which an error occurred.
        • Hide test sessions: To exclude test sessions from your view.
        • Agent handover happened: To filter sessions where an agent handover occurred.
        • Downvoted: To filter sessions that were downvoted by agents.
        • Flagged interactions (if available): Interactions flagged by human agents for review.
  2. Detailed interaction view: Click any interaction in the list to view comprehensive details:
    • Interaction transcript: The full conversation between the human agent and the customer, provided by the real-time transcriptions feature.
    • AI Assistant skill used: Identifies which AI Assistant skill provided suggestions for this interaction.
    • List of suggestions: Shows all suggestions provided to the human agent, linked to the specific customer query that prompted them.
    • Source of suggestions: Displays the top articles, FAQs, or sections from the knowledge base that were used to generate the suggestions. This allows you to verify the accuracy and relevance of the information.
    • Actions suggested and performed: Provides details on which actions were proposed and whether they were performed (in unmoderated or moderated mode).
    • Agent modifications: If a human agent edited any fields in a moderated action before submission, these modifications are recorded.
    • Additional context: Any extra information provided by the human agent to refine suggestions is visible.
    • Metadata: Includes contact session ID, human agent ID/name, team ID/name, queue ID/name, contact date/time, and channel type.
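
To show how the filter criteria and the detailed interaction view fit together, the following Python sketch models session records as plain dictionaries and applies a few of the filters listed above. The field names are hypothetical and may differ from the actual Sessions tab schema.

    from datetime import date

    # Hypothetical session records mirroring the fields described above.
    sessions = [
        {"contact_session_id": "CS-1001", "channel": "voice", "queue": "Billing",
         "contact_date": date(2025, 6, 2), "errors_occurred": False,
         "downvoted": True, "is_test_session": False},
        {"contact_session_id": "CS-1002", "channel": "digital", "queue": "Support",
         "contact_date": date(2025, 6, 3), "errors_occurred": True,
         "downvoted": False, "is_test_session": True},
    ]

    def filter_sessions(records, channel=None, errors_only=False, hide_tests=True):
        """Apply a subset of the Sessions tab filters to local records."""
        results = []
        for record in records:
            if hide_tests and record["is_test_session"]:
                continue  # Hide test sessions filter
            if channel and record["channel"] != channel:
                continue  # Channel types filter
            if errors_only and not record["errors_occurred"]:
                continue  # Errors occurred filter
            results.append(record)
        return results

    # Example: voice sessions only, with test sessions hidden.
    for s in filter_sessions(sessions, channel="voice"):
        print(s["contact_session_id"], s["queue"], s["contact_date"])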

History tab

The History tab provides a record of the configuration changes and versions of your AI Assistant skills. While it is not a performance monitoring tool for live interactions, it is crucial for understanding whether a change in performance coincided with a configuration update.

  • Version history: Tracks the published versions of your AI Assistant skill, allowing you to revert to a previous configuration if needed.
  • Change logs: Provides a detailed log of modifications made to the AI Assistant skill's settings, including who made the change, when, and what was changed. This is essential for auditing and debugging configuration-related issues (a sketch of such an entry follows this list).
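
As an illustration of what a change-log entry captures, the following sketch models one as a Python dictionary. The field names and values are hypothetical; the actual History tab may present this information differently.

    # Hypothetical change-log entry for an AI Assistant skill.
    # All field names and values are placeholders.
    change_log_entry = {
        "skill_name": "Billing Assistant",
        "version": "1.4",
        "changed_by": "admin@example.com",
        "changed_at": "2025-06-03T14:25:00Z",
        "changes": [
            {"setting": "knowledge_base", "old": "billing-kb-v2", "new": "billing-kb-v3"},
            {"setting": "moderated_actions", "old": False, "new": True},
        ],
    }

    # When a KPI shifts, compare its date against entries like this one to
    # see whether a configuration change preceded the shift.
    for change in change_log_entry["changes"]:
        print(f"{change['setting']}: {change['old']} -> {change['new']}")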