Use Evaluation and Speech Analytics in Webex AI Quality Management

Evaluations and speech analytics help contact centers measure service quality more consistently, monitor agent performance at scale, and focus review efforts on the interactions that matter most. Evaluation forms give supervisors and quality managers a structured way to assess interactions against business-specific standards such as compliance, communication quality, resolution, and professionalism. Evaluation forms are customizable, enabling contact centers to tailor scoring criteria, sections, and questions to match their quality program, operational goals, and customer experience requirements. Speech analytics adds context by highlighting conversational patterns, quality signals, and agent behaviors that may need closer review. This article explains evaluations and speech analytics, shows how to create a customized evaluation form, and describes how to review and manually adjust scores in the interaction player.

Overview

The goal of an evaluation is to measure interaction quality against a defined set of criteria. In Webex AI Quality Management, evaluations combine automated scoring with supervisor input. This helps teams review more interactions, apply the same standards across teams, and preserve room for manual judgment where context matters.

Speech analytics helps supervisors and quality managers monitor agent performance by identifying interactions that may contain compliance gaps, escalation indicators, communication breakdowns, or soft-skill opportunities.

You can use evaluations and speech analytics to:

  • Standardize quality reviews across teams and queues.
  • Use assignment rules to apply the right form to the right interactions.
  • Review AI-generated scores and justifications.
  • Monitor agent performance using conversational insights and quality signals.

Who can create and review Evaluations

Supervisors and quality managers with the necessary permissions can create evaluation forms, assign them to teams, queues, or agents, and create interaction rules. After an evaluation is completed, they can review the results and modify scores if needed.

Before you begin

  • Make sure the Webex AI Quality Management add-on is purchased for your organization.
  • Evaluations and speech analytics must be enabled by your administrator in Control Hub.
  • Verify that you have the permissions required to create evaluation forms.

Enable evaluations and speech analytics in Control Hub

Administrators can activate Evaluations and speech analytics for the entire organization. After the feature is enabled, the Evaluation tab appears under Configuration Manager in Supervisor Desktop.

To enable Evaluations and speech analytics in Control Hub:

  1. Sign in to Control Hub and navigate to Services > Contact Center.
  2. In the navigation pane, go to Desktop Experience > AI Features.
  3. Turn on Evaluations and speech analytics.

No additional permission is required to view evaluation scores.

Create an Evaluation Form

An evaluation form defines how interactions are scored. A form typically includes basic details, assignment rules, sections, and questions that can be auto-evaluated.

1

Open Evaluation form from Configuration Manager in the navigation bar.

2

Click + Create new form.

3

Enter the form details, giving the form a clear name and description.

4

Define the assignment policy. Choose how the evaluation form is assigned to interactions.

Assignment policies can be based on queue, team, agent, or every nth interaction. More than one assignment policy can apply to the same form, which helps contact centers target the form to the right set of interactions. Using every nth interaction is especially useful when you want to randomize evaluations and review a broader sample of interactions instead of a fixed subset.

5

Configure interaction rules, if needed. Interaction rules are optional filters that help refine which interactions receive the form.

You can define rules using multiple criteria such as connected duration, hold duration, and similar interaction measures. For each rule, select an operator such as Equals, Not equal, Greater than, Greater than or equal to, Less than, or Less than or equal to, and then provide the corresponding value. Depending on the interaction data field, the value can be entered as a duration, count, score, or another supported measure. You can add multiple rules to further narrow the set of interactions that qualify for evaluation, as illustrated in the sketch that follows these steps.

6

Add a section. Create one or more sections to group related questions, such as compliance, communication quality, and resolution.

7

Add questions to the section. For each question, enter the question text and choose the question type. Common options include Yes/No, Scale, and Single select.

8

Configure answer values. Define the answer options and the score for each option based on the question type. For Yes/No questions, the form provides fixed answers (Yes and No). For Scale and Single select questions, you can add multiple answer choices and assign a score to each one. This structured setup makes it clear how each response contributes to the overall evaluation score. The maximum score for the question or section is shown at the top of the section. (The sketch that follows these steps shows how answer scores roll up into a section's maximum score.)

9

Save and activate the form. You can save the form as a draft at any time to preserve your progress. When the form is complete, activate it so it can be assigned to matching interactions. From the evaluations table, you can also edit, duplicate, or archive a form.

Once a form is published, you can only modify the form’s assignment policy and interaction rules.
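
To make the relationship between interaction rules, answer values, and scores more concrete, the following sketch models a simplified version of the logic described in steps 5 and 8. It is only a conceptual illustration under assumed names and data shapes (the field names, the Rule structure, and the example questions are hypothetical); it does not represent the product's internal implementation or any public API.

```
from dataclasses import dataclass

# Hypothetical interaction measures, keyed by field name (assumption for illustration).
interaction = {"connected_duration_sec": 420, "hold_duration_sec": 95}

# Operators offered when defining an interaction rule (step 5).
OPERATORS = {
    "equals": lambda a, b: a == b,
    "not_equal": lambda a, b: a != b,
    "greater_than": lambda a, b: a > b,
    "greater_than_or_equal": lambda a, b: a >= b,
    "less_than": lambda a, b: a < b,
    "less_than_or_equal": lambda a, b: a <= b,
}

@dataclass
class Rule:
    field: str       # interaction data field, e.g. "hold_duration_sec"
    operator: str    # one of the OPERATORS keys
    value: float     # duration, count, score, or another supported measure

def qualifies(interaction: dict, rules: list[Rule]) -> bool:
    """An interaction qualifies when it meets one or more of the rules."""
    return any(OPERATORS[r.operator](interaction[r.field], r.value) for r in rules)

# Sections, questions, and answer values (steps 6-8): each answer option carries a score.
form = {
    "Compliance": [
        {"question": "Did the agent read the required disclosure?",
         "type": "yes_no", "answers": {"Yes": 100, "No": 0}},
    ],
    "Communication quality": [
        {"question": "How clearly did the agent explain the resolution?",
         "type": "scale", "answers": {"Poor": 0, "Fair": 50, "Good": 100, "Excellent": 150}},
    ],
}

# The maximum score shown at the top of each section is the sum of each
# question's highest-scoring answer.
for section, questions in form.items():
    max_score = sum(max(q["answers"].values()) for q in questions)
    print(f"{section}: maximum score {max_score}")

rules = [Rule("hold_duration_sec", "greater_than", 60)]
print("Interaction qualifies for evaluation:", qualifies(interaction, rules))
```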

View Evaluation Scores

Evaluation scores are available in the interaction table and interaction player. In the Evaluations tab inside the interaction details panel, supervisors and quality managers can see the average evaluation score, the list of available forms, and the score for each form assigned to the interaction.

To display the Evaluation score column in the Completed interactions table, click the Settings icon in the table toolbar, search for Evaluation score, and select it from the list of available columns. This allows you to view evaluation scores directly in the completed interactions list without opening each interaction.

  1. Open a completed interaction. In the interaction list, open the interaction you want to review.

  2. Go to the Evaluations tab. Review the average evaluation score. If more than one form is assigned to the interaction, the score is the average of all individual form scores.

  3. Select the form you want to review. If more than one form applies to the interaction, choose the relevant form from the form list. Use search when several forms are available.

  4. Inspect AI-generated scores and justifications. Review scores that were calculated automatically and read the supporting justification where available.

  5. Adjust the evaluation based on evidence. Use the recording, transcript, chapters, and additional interaction details to confirm the correct score before you finalize the evaluation.

The final evaluation score for a form is calculated as: (earned score × 100) / total possible score. For example, assume an evaluation form has two questions. Question 1 has a maximum score of 100, and Question 2 has a maximum score of 150. After the interaction is evaluated, the AI assigns a score of 50 for Question 1 and 80 for Question 2 based on the defined criteria.

The evaluation score percentage is calculated as follows: Evaluation score (%) = ((50 + 80) × 100) / (100 + 150) = (130 × 100) / 250 = 52%
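
As a quick sanity check on the formula, this minimal snippet reproduces the worked example above. The variable names are illustrative only and do not correspond to product fields.

```
# Maximum possible score for each question on the form.
max_scores = [100, 150]

# Scores assigned after the interaction is evaluated.
earned_scores = [50, 80]

# Evaluation score (%) = (earned score x 100) / total possible score
evaluation_score = sum(earned_scores) * 100 / sum(max_scores)
print(f"Evaluation score: {evaluation_score:.0f}%")  # -> 52%
```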

Speech Analytics

Speech analytics provides additional interaction signals that help supervisors evaluate conversation quality and agent performance more effectively. The available speech analytics measures include Cross talk, Talk ratio, and Silence monitor.

Cross talk indicates moments when the agent and customer speak at the same time. High cross talk can suggest interruptions, poor turn-taking, or difficulty maintaining a smooth conversation flow. Supervisors can use this measure to identify interactions where the agent may need coaching on active listening, pacing, or allowing the customer enough time to speak.

Talk ratio shows the balance between how much the agent speaks compared to the customer during the interaction. This helps supervisors understand whether the agent is dominating the conversation or giving the customer enough opportunity to explain the issue. Talk ratio can be used to assess communication style, listening behavior, and the agent's ability to guide the conversation effectively without overwhelming the customer.

Silence monitor highlights periods of silence (dead air) during the interaction. Longer or frequent silence can indicate delays, uncertainty, inefficient call handling, difficulty navigating systems, or missed opportunities to keep the customer informed. Supervisors can use this signal to evaluate agent confidence, responsiveness, and overall handling efficiency.

Together, these speech analytics measures help contact centers identify interaction patterns that may affect customer experience and operational performance. They can be used alongside evaluation scores to review communication quality, interaction control, and coaching needs more consistently.
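
To illustrate what these measures capture, the following sketch derives cross talk, talk ratio, and silence from speaker-labeled segments. It is a conceptual illustration only; the segment format is an assumption, and it is not how Webex AI Quality Management computes these values.

```
# Speaker-labeled segments as (speaker, start_sec, end_sec) tuples
# (an assumed format for illustration; not a product data structure).
segments = [
    ("agent", 0.0, 12.0),
    ("customer", 10.0, 25.0),   # overlaps the agent from 10.0-12.0 -> cross talk
    ("agent", 30.0, 55.0),      # 25.0-30.0 has no speech -> silence (dead air)
    ("customer", 55.0, 60.0),
]

def overlap(a, b):
    """Seconds during which two segments overlap."""
    return max(0.0, min(a[2], b[2]) - max(a[1], b[1]))

# Cross talk: time when agent and customer speak simultaneously.
agent = [s for s in segments if s[0] == "agent"]
customer = [s for s in segments if s[0] == "customer"]
cross_talk = sum(overlap(a, c) for a in agent for c in customer)

# Talk ratio: agent speaking time relative to total speaking time.
agent_time = sum(s[2] - s[1] for s in agent)
customer_time = sum(s[2] - s[1] for s in customer)
talk_ratio = agent_time / (agent_time + customer_time)

# Silence: gaps where nobody is speaking.
ordered = sorted(segments, key=lambda s: s[1])
silence, speech_end = 0.0, ordered[0][1]
for s in ordered:
    silence += max(0.0, s[1] - speech_end)
    speech_end = max(speech_end, s[2])

print(f"Cross talk: {cross_talk:.1f}s, talk ratio: {talk_ratio:.0%}, silence: {silence:.1f}s")
```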

To display these speech analytics measures in the Completed interactions table, click the Settings icon in the table toolbar and select the columns for Cross talk, Talk ratio, and Silence monitor. This allows supervisors to review these signals directly from the interaction list before opening a specific interaction.

Frequently Asked Questions

This section answers frequently asked questions about using evaluation forms.

  1. Who can create evaluation forms?

    Supervisors and quality managers can create evaluation forms and review completed evaluations, based on the permissions assigned in your organization.

  2. Can more than one evaluation form apply to the same interaction?

    Yes. An interaction can match more than one evaluation form, and the interaction player shows each assigned form. The overall score reflects the average score across all assigned forms.

  3. Can I assign the form to a combination of agents and queues?

    Yes. You can assign the form to multiple agents, queues, and teams.

  4. How are the scores generated for consults or transfers?

    For consults or transfers, the entire interaction is considered for evaluation. The score is assigned to the agent who attended the last leg of the interaction.

  5. How are forms assigned to an interaction when multiple interaction rules are created for a form?

    An interaction is evaluated when it meets one or more interaction rules.
