
Use Analyzer Reports for Webex AI Quality Management


Analyzer reports help analysts, supervisors, and business users review AI Quality Management data at scale. Instead of opening interactions one by one, you can use Analyzer to review evaluation outcomes, customer experience signals, and performance trends across human agents, AI agents, teams, and queues. This article explains the AI Quality Management reports available in Analyzer, what each report shows, and how to use the available fields and filters to analyze interaction quality and performance.

Overview

AI Quality Management exposes all QM metrics in the Customer Session Record (CSR) and the Agent Session Record (ASR) so that you can create custom reports. You can create two types of custom reports:

  1. A list of all customer interactions and the QM metrics associated with those interactions (CSR)
  2. A list of all agents by team and their performance on key operational and QM metrics (ASR)

Together, these reports help you review interaction-level quality results and compare agent performance over time using the same reporting experience.

Use these reports when you want to identify interactions with failed evaluations, compare aggregated evaluation scores across agents, understand how agents are performing on connected duration, or find interactions with low sentiment scores.

Before you begin

  • Make sure your organization has access to Analyzer and the required AI Quality Management features.
  • Make sure the relevant QM features, such as evaluations, speech analytics, Auto CSAT, and sentiment analysis, are enabled where required.
  • You must have access to Analyzer and the data sets or reports used by your organization for AI Quality Management reporting.
  • Some report values depend on the features enabled for the selected interaction, queue, or agent.

Open AI Quality Management reports in Analyzer

To use AI Quality Management reports in Analyzer:

  1. Sign in to Webex Contact Center and open Analyzer.
  2. Create a custom CSR or ASR report in Visualizations using the available metrics.
  3. Open the report or the report template configured for AI Quality Management.
  4. Select the report interval and apply the filters that are relevant for your analysis.
  5. Review the report data and drill down further by agent, team, queue, or interaction when needed.

QM for Interactions report (CSR)

Create a custom QM for Interactions report when you want an interaction-level view that combines quality metrics with operational metrics and speech-based quality signals. This report helps analysts review how completed interactions were evaluated and compare that data with measures such as evaluation score, sentiment, Auto CSAT, cross talk time, dead air time, connected duration, transfers, and consults.

This report is useful when you want to investigate failed evaluations, identify interactions with low evaluation scores, or build broader quality trend analysis across channels and queues.

Available fields

    • ANI | Caller number or originating address for the interaction.
    • Entry Point | Name of the last entry point used by the interaction.
    • Contact Session ID | Interaction ID.
    • Contact Start Timestamp | Date and time when the contact started.
    • Contact End Timestamp | Date and time when the contact ended.
    • Channel | Telephony, email, chat, or social.
    • Direction | Inbound or outbound.
    • First Queue Name | Name of the queue where the interaction first landed.
    • Final Queue Name | Name of the last queue for the interaction.
    • Agent Name | Last agent in the interaction.
    • Evaluation status | Status of the evaluation for the interaction.
    • Wrap-up Code Name | Wrap-up reason associated with the interaction.
    • Customer Sentiment Score | Sentiment score for the interaction, from –100 to +100; greater than +45 is positive, less than –45 is negative, and –45 to +45 is neutral.
    • Overall Evaluation Score | Overall evaluation score for the interaction.
    • Evaluation Sections Failure Count | Number of sections failed in the evaluation.
    • Cross Talk Time | Duration in HH:MM:SS during which the agent spoke over the customer.
    • Word Ratio | Percentage of time the agent spends speaking in an interaction.
    • Dead Air Time | Total time during the connected portion of the interaction when neither the customer nor the agent is speaking.
    • Connected Duration | Total connected time for the interaction in HH:MM:SS.
    • Transfer Count | Total number of transfers during the interaction.
    • Hold Duration | Total duration the interaction was on hold in HH:MM:SS.
    • Hold Count | Number of holds during an interaction.
    • Wrap Up Duration | Total time spent in wrap-up after handling the interaction in HH:MM:SS.
    • Consult Count | Total number of consults for the interaction.
    • Consult Duration | Total consult duration.
    • Auto CSAT | Auto CSAT score for the interaction.
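If you post-process exported report data, the Customer Sentiment Score thresholds described above translate directly into a small helper. This is a minimal sketch; the function name is illustrative, not part of the product:

```python
def classify_sentiment(score: float) -> str:
    """Map a Customer Sentiment Score (-100 to +100) to a label,
    using the thresholds from the field description: > +45 positive,
    < -45 negative, otherwise neutral."""
    if score > 45:
        return "positive"
    if score < -45:
        return "negative"
    return "neutral"

print(classify_sentiment(60))   # positive
print(classify_sentiment(-50))  # negative
print(classify_sentiment(10))   # neutral
```

Note that the boundary values +45 and –45 fall into the neutral band under this reading of the thresholds.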

Available filters

    • Agent
    • Queue
    • Duration
    • Channel
    • Direction
    • Evaluation status

How to use this report

  • Use Interaction ID or ANI when you want to investigate a specific interaction.
  • Use Queue, Entry Point, Channel, and Direction to compare quality trends across routing paths and interaction types.
  • Use Evaluation status, Overall Evaluation Score, and Evaluation Sections Failure Count to identify interactions that need review or follow-up.
  • Use Auto CSAT, Sentiment, Cross Talk Time, Word Ratio, and Dead Air Time together when you want a broader customer experience view of the interaction.
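If you export CSR report rows (for example, to CSV) for further analysis, flagging interactions that need review can be sketched as below. The dictionary keys mirror the report's field names, and the sample values are illustrative, so adjust them to match your export's actual column headers:

```python
# Sample exported CSR rows; values are made up for illustration.
rows = [
    {"Contact Session ID": "a1", "Evaluation status": "Failed",
     "Overall Evaluation Score": 42, "Evaluation Sections Failure Count": 2},
    {"Contact Session ID": "b2", "Evaluation status": "Passed",
     "Overall Evaluation Score": 91, "Evaluation Sections Failure Count": 0},
]

# Flag interactions with a failed evaluation or any failed sections,
# following the review guidance above.
needs_review = [
    r for r in rows
    if r["Evaluation status"] == "Failed"
    or r["Evaluation Sections Failure Count"] > 0
]

for r in needs_review:
    print(r["Contact Session ID"], r["Overall Evaluation Score"])
```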

Human Agent Performance report (ASR)

Create a custom Human Agent Performance report when you want an aggregated view of agent performance for the selected period. This report combines average evaluation scores, customer experience measures, and operational metrics so you can compare performance across agents and teams.

If an agent worked in more than one team during the selected period, the report shows separate rows for each agent and team combination.
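The one-row-per-agent-and-team behavior can be illustrated by grouping interaction records on the (agent, team) pair and averaging within each group. The record fields and names below are assumptions for illustration, not the report's export schema:

```python
from collections import defaultdict

# Illustrative interaction records; field names are assumed.
interactions = [
    {"agent": "Ana", "team": "Sales", "evaluation_score": 80},
    {"agent": "Ana", "team": "Support", "evaluation_score": 90},
    {"agent": "Ben", "team": "Sales", "evaluation_score": 70},
]

# Group scores by the (agent, team) combination, so an agent who
# worked in two teams produces two separate output rows.
groups = defaultdict(list)
for rec in interactions:
    groups[(rec["agent"], rec["team"])].append(rec["evaluation_score"])

rows = [
    {"agent": a, "team": t, "overall_evaluation_score": sum(s) / len(s)}
    for (a, t), s in sorted(groups.items())
]

for row in rows:
    print(row)
```

Here Ana appears twice, once per team, which matches how the report presents agents who changed teams during the selected period.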

Available fields

  • Agent name | Name of the agent.
  • Team name | Team associated with the agent for the selected interval.
  • Overall Evaluation score | Average evaluation score for the interactions handled by the agent within the selected interval.
  • Evaluations Failure Count | Number of evaluations failed by the agent.
  • Cross talk time | Average time during which the agent spoke over the customer in handled interactions.
  • Word ratio | Percentage of time the agent spoke during an interaction.
  • Dead Air Time | Average time when neither the customer nor the agent was speaking.
  • Connected count | Number of connected interactions handled by the agent.
  • Connected duration | Average connected duration for the agent's interactions.
  • Wrap-up duration | Average time the agent takes to complete wrap-up after an interaction.
  • Transfer count | Total number of transfers initiated by the agent across all interactions.
  • Hold count | Number of times the agent put a caller on hold.
  • Average Hold duration | Average time for which the agent puts a call on hold.
  • Consult count | Total number of consults requested by the agent.
  • Consult duration | Total consult duration.
  • Consult Answer count | Total number of consults answered by the agent.

Available filters

  • Agent
  • Team
  • Duration
  • Channel

How to use this report

  • Use Overall Evaluation score and Evaluations Failure Count together when you want a view of agent performance on evaluation metrics.
  • Use the Hold, Transfer, Consult, Cross talk, Word ratio, and Dead Air metrics to identify performance behaviors that may require deeper review or coaching.