The Virtual Agent–Voice (VAV) is a self-service capability in Webex Contact Center that helps you integrate the Interactive Voice Response (IVR) platform with cloud-based AI services. VAV supports human-like interactions that provide intelligent and automated assistance to callers. The VAV capability enables callers to resolve issues quickly and efficiently within the IVR flow, and reduces the number of calls directed to human agents.

VAV uses technologies such as Natural Language Processing (NLP), Automated Speech Recognition (ASR), and text-to-speech (TTS) to understand a caller’s intent and provide personalized and relevant voice responses.

VAV offers the following benefits:

  • Ability to respond to callers' queries quickly and in real time.

  • Ability to route a caller to a live agent if the virtual agent can’t handle the conversation.

Webex Contact Center uses the Contact Center AI (CCAI) services through the service provider-specific integration connector. Customers can use AI services to design virtual agents and create complex IVR call flows.


  • This feature is available with Cisco subscription services only.

  • This feature is supported only for the US data center deployment on the Real Time Media Service (RTMS) voice platform.

Webex Contact Center currently supports the following integration:

Google Dialogflow CX

A Dialogflow CX agent is a virtual agent that handles concurrent conversations with your end users. It is a natural language understanding module that understands the nuances of human language. You can design and build agent bots to handle the types of conversations required for your system. For more information about CX, see the Google documentation.

The conversation between the virtual agent and the caller appears on the Transcript widget of the Agent Desktop.


The conversation appears on the Transcript widget only if the 'Agent Says' fulfillment response is set in Dialogflow CX.
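In the Dialogflow CX console, the 'Agent Says' text is stored as a text message on the fulfillment. The following is a minimal sketch of such a fulfillment response in the Dialogflow CX REST representation; the response text is a hypothetical example:

{
  "messages": [
    {
      "text": {
        "text": ["Thanks for calling. How can I help you today?"]
      }
    }
  ]
}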

Prerequisites

To integrate with the VAV provider, complete the following tasks:

  • Configure the service provider-specific Integration Connector, such as the Google CCAI connector in Control Hub. For more information, see the Configure Google CCAI Connector topic in the Set Up Integration Connectors for Webex Contact Center article.

  • Create the Contact Center AI (CCAI) feature in Control Hub. For more information, see the Create a Contact Center AI configuration article. The system generates the CCAI config ID that you can use in the Flow Control configurations.

The partial response feature keeps a caller engaged by playing an interim message while the webhook processes the request in the background. In Dialogflow CX, a webhook request can take a while to return a response. If there is no interim response while the request is being processed, the caller hears only silence and might hang up. To prevent this, configure a partial response that informs the caller that their request is still being processed. A sketch of such a fulfillment follows the notes below.


  • If the webhook returns the actual response before or during the partial response, the system stops the partial response and plays the final response to the caller.

  • The first prompt response that is received from the Virtual Agent V2 activity does not support partial response.

  • Barge-in cannot be enabled for the partial response prompt, so callers cannot interrupt the prompt while it plays.
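In Dialogflow CX, partial responses are enabled on the fulfillment that calls the webhook, and the interim message configured on that fulfillment is what the caller hears while the webhook runs. The following is a minimal sketch of such a fulfillment in the Dialogflow CX REST representation; the interim prompt text is a hypothetical example, and the webhook value is a placeholder:

{
  "webhook": "<webhook resource name>",
  "returnPartialResponses": true,
  "messages": [
    {
      "text": {
        "text": ["Please hold while I look that up for you."]
      }
    }
  ]
}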

1

In the Dialogflow CX console, choose the project and the agent.

2

On the CX agent screen, go to the Build tab, choose the required flow, and then choose the page in that flow (Start, End Flow, or End Session) where a fulfillment is needed from the contact center application.

The selected page details appear.
3

Under the Routes section, define a route and conditions that satisfy the custom exit criteria for triggering the transition.
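A route condition in Dialogflow CX is a boolean expression over session parameters. The following is a minimal sketch of a route in the Dialogflow CX REST representation; the parameter name and value are hypothetical, and the fulfillment placeholder refers to the custom payload that you define in the next steps:

{
  "condition": "$session.params.order_status = \"ready_for_agent\"",
  "triggerFulfillment": {
    "messages": [
      { "payload": "<Execute_Request custom payload defined in step 5>" }
    ]
  }
}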

4

In this route, under the Fulfillment section, click Add dialogue option and choose Custom payload.


 

Define a custom payload only. Do not add any other dialogue options.

5

Add the custom payload of type Execute_Request that defines the payload to be sent from CX in the following format:

{
  "Execute_Request": {
    "Event_Name": "<Name of the event>",
    "Data": {
      "Params": {
        "<param1 name>": "<param1 value>",
        "<param2 name>": "<param2 value>"
      }
    }
  }
}

 

Ensure that you map this event name to the State Event Name in the Virtual Agent V2 activity in Flow Designer for decision mapping.
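For illustration only, a filled-in payload might look like the following; the event name and the parameter names and values are hypothetical:

{
  "Execute_Request": {
    "Event_Name": "CheckOrderStatus",
    "Data": {
      "Params": {
        "OrderId": "12345",
        "CustomerTier": "Gold"
      }
    }
  }
}

In this sketch, CheckOrderStatus is the value that you would enter as the State Event Name in the Virtual Agent V2 activity.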

6

In the Transition section, choose Page to set the transition to the same page when the flow resumes.

7

Create an event handler and provide the event name in the flow builder application. For more information, see https://cloud.google.com/dialogflow/cx/docs/reference/rest/v3/EventHandler.
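As a reference, an event handler in the Dialogflow CX REST representation pairs an event name with the fulfillment to run when that event is received. The following is a minimal sketch that uses the CustomWelcome event name from the State Event example later in this article; the greeting text is hypothetical:

{
  "event": "CustomWelcome",
  "triggerFulfillment": {
    "messages": [
      {
        "text": {
          "text": ["Welcome back! How can I help you today?"]
        }
      }
    ]
  }
}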

The Virtual Agent V2 activity provides a real-time conversational experience for your contacts. You can add the Virtual Agent V2 activity to the call flow to handle speech-based, AI-enabled conversations. When a caller speaks, the system matches the speech to the best intent in the virtual agent and assists the caller as part of the Interactive Voice Response (IVR) experience.

Outcomes

Indicates the output paths for the activity, based on the outcome of the conversation between the virtual agent and the caller.

  • Handled – The outcome is triggered when the virtual agent execution completes.

  • Escalated – The outcome is triggered when the call must be escalated to a human agent.

Error Handling

Indicates the output path of the activity for any error that occurs during the conversation between the virtual agent and the caller.

Errored – The flow takes this path in any error scenario.

Before you begin

In the Management Portal, complete the following tasks:

1

From the Management Portal navigation bar, choose Routing Strategy > Flow.

2

Click New.

3

In the Flow Name field, enter a unique name.

4

Click Start Building Flow. The Flow Designer window appears.

5

Drag and drop the Virtual Agent V2 activity from the Activity Library to the main flow canvas.

6

In General Settings, perform the following actions:

  1. In the Activity Label field, enter a name for the activity.

  2. In the Activity Description field, enter a description for the activity.

7

In the Conversational Experience settings, choose the Contact Center AI Config name from the Contact Center AI Config drop-down list.

The Contact Center AI Config is populated based on the CCAI feature that is configured on Control Hub.

If you want to override the default input language and output voice for VAV, include Set Variable activities before the Virtual Agent V2 activity in the flow. A sketch after the following lists summarizes the two overrides.

For custom input language, configure the Set Variable activity as follows:

  • Set the variable to Global_Language.

  • Set the variable value to the required language code (for example, fr-CA).

For custom output voice, configure the Set Variable activity as follows:

  • Set the variable to Global_VoiceName.

  • Set the variable value to the required output voice name code (for example, en-US-Standard-D).

For more information about the supported voices and languages in CX, see Supported voices and languages.
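Taken together, the two overrides amount to setting the following variable name and value pairs before the Virtual Agent V2 activity. This is only an illustrative summary that uses the example values above, not an importable flow format:

{
  "Global_Language": "fr-CA",
  "Global_VoiceName": "en-US-Standard-D"
}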

8

In the State Event settings, enter the custom event name and the data in the Event Name and Event Data columns. The State Event is a mechanism to trigger the event handler that is configured on the agent bot. In the agent bot, you can configure how the event must be handled.

  • Event Name (optional) – Indicates the name of the event that is defined on the integrated third-party AI platform.

  • Event Data (optional) – Indicates the JSON data that the system sends (as part of the defined event name) to the integrated third-party AI platform.

You can specify the event name and the data as a static value or an expression. For expressions, use this syntax: {{ variable }}. The following is an example of a state event that is configured to greet the caller with a custom welcome message; a second sketch after it shows the expression form.

Event Name: CustomWelcome

Event Data: {"Name": "John"}
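To build the event data from a flow variable instead of a static value, use the expression syntax. For example, where Global_CustomerName is a hypothetical flow variable:

Event Name: CustomWelcome

Event Data: {"Name": "{{Global_CustomerName}}"}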

9

In Advanced Settings, perform the following actions (a sketch after this list shows how the speech settings relate to standard speech synthesis parameters):

  1. In the Speaking Rate field, enter the numeric value or expression to increase or decrease the rate of speech output.

    • Valid values for the numeric input are in the range from 0.25 to 4.0. The default value is 1.0.

      For example, a value of 0.5 makes the speech output slower than the normal rate, and a value of 2.0 makes it faster than the normal rate.

    • For expressions, you can use the syntax: {{variable}}.

  2. In the Volume Gain field, enter the numeric value or expression to increase or decrease the volume of speech output.

    • Valid values for the numeric input are in the range from –96.0 to 16.0 decibels (dB). The default value is 0.0 dB.

    • For expressions, you can use the syntax: {{variable}}.

  3. In the Pitch field, enter the numeric value or expression to increase or decrease the pitch of speech output.

    • Valid values for the numeric input are in the range from –20.0 to 20.0 hertz (Hz). The default value is 0.0 Hz.

    • For expressions, you can use the syntax: {{variable}}.

  4. In the Termination Delay field, enter the numeric value. This setting enables the virtual agent to complete the last message before the activity stops and the flow moves on to the next step.

    For example, if you want the virtual agent to indicate something to the caller before the system escalates the call to an agent, consider the time it takes to complete the final message before escalation.

    Valid values for the numeric input are in the range from 0 to 30 seconds. The default value is 3 seconds.

  5. Check the Enable Conversation Transcript check box to allow Agent Desktop to display the transcript of the conversation between the virtual agent and the caller.

    The raw transcript is also available through a dynamic URL. You can use an HTTP request to this URL to extract specific sections of the transcript.
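The Speaking Rate, Volume Gain, and Pitch ranges match the standard Google Cloud Text-to-Speech synthesis parameters (speakingRate: 0.25 to 4.0, volumeGainDb: -96.0 to 16.0, pitch: -20.0 to 20.0). As an assumption for illustration only, the default values above correspond to the following synthesis configuration; Termination Delay and the transcript option are Webex Contact Center settings with no equivalent here:

{
  "speakingRate": 1.0,
  "volumeGainDb": 0.0,
  "pitch": 0.0
}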

10

In Activity Output Variables, you can view the list of variables that store the output status of events that occur during the conversation between the virtual agent and the caller.

  • VirtualAgentV2.TranscriptURL – Stores the URL that points to the transcript of the conversation between the virtual agent and the caller. Use the Parse activity to extract the parameters from the Virtual Agent Voice transcript.

  • VirtualAgentV2.MetaData – Stores the JSON data that is received from the agent bot as part of the fulfillment or custom event handling. You can use this data to build more business logic in the flow builder (a hypothetical example follows this list).

  • VirtualAgentV2.StateEventName – Stores the name of the custom event that the system receives from the agent bot after the system triggers a custom state event.
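As an illustration, VirtualAgentV2.MetaData carries whatever JSON the agent bot returns in its custom payload or event handling, so its structure depends entirely on your bot design. The following is a purely hypothetical example that a flow could branch on:

{
  "intentName": "book_appointment",
  "appointmentDate": "2024-05-14",
  "confirmed": true
}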


 
  • Currently, en-US is the only supported language.

  • Only the u-law codec is supported.

  • When a call is transferred to a live agent, the transcript of the conversation between the caller and the virtual agent is displayed in the Transcript widget on the Agent Desktop (only if the Transcript widget is configured on the Agent Desktop).