Virtual Agent–Voice (VAV) is a self-service capability in Webex Contact Center that helps you integrate the Interactive Voice Response (IVR) platform with cloud-based AI services. VAV supports human-like interactions that provide intelligent and automated assistance to callers. The VAV capability enables callers to resolve issues quickly and efficiently in the IVR flow, and reduces the number of calls that are directed to human agents.

VAV uses technologies such as Natural Language Processing (NLP), Automated Speech Recognition (ASR), and text-to-speech (TTS) to understand a caller’s intent and provide personalized and relevant voice responses.

VAV offers the following benefits:

  • Ability to respond to callers' queries in real time.

  • Ability to route a caller to a live agent if the virtual agent can’t handle the conversation.

Webex Contact Center uses the Contact Center AI (CCAI) services through the service provider-specific integration connector. You can use AI services to design virtual agents and create complex IVR call flows.

  • This feature is available with Cisco subscription services only.

Regional Media services

CCAI services support regionalization. CCAI services are available with both default and remote PSTN regions in the Next Generation (RTMS) platform. For more information, see the Configure regional media for Virtual Agent-Voice article.

Supported Integrations

Webex Contact Center currently supports the following integrations:

  • Google Dialogflow CX
  • Google Dialogflow ES

Dialogflow CX

A Dialogflow CX agent is a virtual agent that handles concurrent conversations with your end users. It is a natural language understanding module that understands the nuances of human language. You can design and build agent bots to handle the types of conversations required for your system. For more information about Dialogflow CX, see the Google documentation.

The conversation between the virtual agent and the caller appears on the Transcript widget of the Agent Desktop.

The conversation appears on the Transcript widget only if the 'Agent Says' fulfillment response is set in Dialogflow CX.
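For reference, the text entered in the 'Agent Says' field is returned by the Dialogflow CX agent as a plain text response message, and a webhook fulfillment can return an equivalent message. The following is a minimal sketch of such a webhook (Python with Flask); the route path and message text are illustrative only.

# Minimal sketch of a Dialogflow CX webhook that returns a text response.
# Text entered in the 'Agent Says' field of a fulfillment produces an
# equivalent text response message.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/cx-webhook", methods=["POST"])
def cx_webhook():
    # Text returned here is spoken to the caller and, when conversation
    # transcripts are enabled, appears on the Transcript widget.
    return jsonify({
        "fulfillmentResponse": {
            "messages": [
                {"text": {"text": ["Thanks, let me look that up for you."]}}
            ]
        }
    })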

Prerequisites

To integrate with the VAV provider, complete the following tasks:

  • Configure the service provider-specific Integration Connector, such as the Google CCAI connector in Control Hub. For more information, see the Configure Google CCAI Connector topic in the Set Up Integration Connectors for Webex Contact Center article.

  • Create the Contact Center AI (CCAI) feature in Control Hub. For more information, see the Create a Contact Center AI configuration article. The system generates the CCAI config ID that you can use in the Flow Control configurations.

Supported features

Virtual Agent–Voice with Dialogflow CX supports the following features:

  • Partial response
  • Custom event

Partial response

You can configure the partial response feature in CX to notify users while the Dialogflow agent fetches the response through a webhook. For more information, see Configure Partial Response in Dialogflow CX.

Custom event

You can configure the custom event and custom payload features in CX. The custom payload feature sends payload information from the Google CX application to the client side for processing. The custom event feature lets you invoke a particular event in the CX application through the client-side API. For more information, see Configure custom events in Dialogflow CX.
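For illustration, a custom payload that you define on a CX fulfillment is arbitrary JSON that the agent returns unchanged for client-side processing; it arrives as a payload response message. The field names in the hypothetical payload below are examples only.

{
  "payload": {
    "action": "show_offer",
    "offerId": "SUMMER25",
    "expiresInDays": 7
  }
}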

Configure Virtual Agent V2 Activity

The Virtual Agent V2 activity provides a real-time conversational experience for your contacts. You can add the Virtual Agent V2 activity to the call flow to handle speech-based AI-enabled conversations. When a caller speaks, the system matches the speech to the best intent in the virtual agent. Further, it assists the caller as part of the Interactive Voice Response (IVR) experience.

Outcomes

Indicates the output paths for the activity based on the outcome of the conversation between the virtual agent and the caller.

  • Handled–The outcome is triggered when the virtual agent execution is completed.

  • Escalated–The outcome is triggered when the call must be escalated to a human agent.

Error Handling

Indicates the output path of the activity for any error that occurs during the conversation between the virtual agent and the caller.

Errored–The flow takes this path in any error scenario.

Before you begin

Configure the entry point and choose the routing flow (once the flow is created in the Flow Designer). For more information, see Set up a channel.

1

From the Management Portal navigation bar, choose Routing Strategy > Flow.

2

Click New.

3

In the Flow Name field, enter a unique name.

4

Click Start Building Flow. The Flow Designer window appears.

5

Drag and drop the Virtual Agent V2 activity from the Activity Library to the main flow canvas.

6

In General Settings, perform the following actions:

  1. In the Activity Label field, enter a name for the activity.

  2. In the Activity Description field, enter a description for the activity.

7

In the Conversational Experience settings, choose one of the following options:

  • Static Contact Center AI Config–Choose the CCAI config name from the Contact Center AI Config drop-down list.

    The CCAI config name is populated based on the CCAI feature that is configured in Control Hub.

  • Variable Contact Center AI Config–Choose the flow variable that allows dynamic selection of the CCAI config ID based on the region from which the call originates. This variable maps the PSTN region to the corresponding Google profile region to handle the conversation.

    For more information about how to configure the variable CCAI config, see steps 6 to 8 in the Configure regional media for Virtual Agent-Voice article.
    • For a VAV flow to work, you need to set the global variables in the flow to configure the default input language and output voice for the virtual agent. For more information about how to add global variables in the flow, see Global Variables.
    • If you want to override the default input language and output voice for VAV, include the Set Variable activities before the Virtual Agent V2 activity in the flow.

      For custom input language, configure the Set Variable activity as follows:

      • Set the variable to Global_Language.

      • Set the variable value to the required language code (for example, fr-CA).

      For custom output voice, configure the Set Variable activity as follows:

      • Set the variable to Global_VoiceName.

      • Set the variable value to the required output voice name code (for example, en-US-Standard-D).

      For more information about the supported voices and languages in CX, see Supported voices and languages.

8

In the State Event settings, enter the custom event name and the data in the Event Name - Event Data columns. The State Event is a mechanism to trigger the event handler that is configured on the agent bot. In the agent bot, you can configure how the event must be handled.

  • Event Name–(optional) Indicates the name of the event that is defined on the integrated third-party AI platform.

  • Event Data–(optional) Indicates the JSON data that the system sends (as part of the defined event name) to the integrated third-party AI platform.

You can specify the event name and the data in the form of a static value or expression. For expressions, use this syntax: {{ variable }}. The following is an example of the state event that is configured to greet the caller with a custom welcome message.

Event Name: CustomWelcome

Event Data: {"Name": "John"}

9

In Advanced Settings, perform the following actions:

  1. In the Speaking Rate field, enter the numeric value or expression to increase or decrease the rate of speech output.

    • Valid values for the numeric input are in the range from 0.25 to 4.0. The default value is 1.0.

      For example, with 0.5 set as the value, the output rate of speech becomes slower than the ideal rate. With 2 set as the value, the output rate of speech becomes faster than the ideal rate.

    • For expressions, you can use the syntax: {{variable}}.

  2. In the Volume Gain field, enter the numeric value or expression to increase or decrease the volume of speech output.

    • Valid values for the numeric input are in the range from –96.0 to 16.0 decibels (dB). The default value is 0.0 dB.

    • For expressions, you can use the syntax: {{variable}}.

  3. In the Pitch field, enter the numeric value or expression to increase or decrease the pitch of speech output.

    • Valid values for the numeric input are in the range from –20.0 to 20.0 hertz (Hz). The default value is 0.0 Hz.

    • For expressions, you can use the syntax: {{variable}}.

  4. In the Termination Delay field, enter the numeric value. This setting enables the virtual agent to complete the last message before the activity stops and moves on to the next step in the flow.

    For example, if you want the virtual agent to indicate something to the caller before the system escalates the call to an agent, consider the time it takes to complete the final message before escalation.

    The valid value for the numeric input is in the range from 0 to 30 seconds. The default value is 3 seconds.

    If you configure the Termination Delay value as 0, the system does not play the last audio message to the caller.

  5. Check the Enable Conversation Transcript check box to allow Agent Desktop to display the transcript of the conversation between the virtual agent and the caller.

    The raw transcript is also available through a dynamic URL. You can send an HTTP request to this URL to extract specific sections of the transcript, as shown in the sketch below.
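The following is a minimal sketch of such a request (Python), assuming that the URL stored in VirtualAgentV2.TranscriptURL (see the next step) is reachable from your service and that your environment supplies any required authorization token; the function name and parameters are illustrative.

# Minimal sketch: retrieve the raw conversation transcript from the dynamic
# URL exposed by the Virtual Agent V2 activity. The URL value would normally
# be passed to this service from the flow; the names here are illustrative.
import requests

def fetch_transcript(transcript_url: str, token: str | None = None) -> str:
    headers = {"Authorization": f"Bearer {token}"} if token else {}
    response = requests.get(transcript_url, headers=headers, timeout=10)
    response.raise_for_status()
    return response.text  # raw transcript payload returned by the service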

10

In Activity Output Variables, you can view the list of variables that store the output status of events that occur during the conversation between the virtual agent and the caller.

  • VirtualAgentV2.TranscriptURL–Stores the URL that points to the transcript of the conversation between the virtual agent and the caller.

    Use the Parse activity to extract the parameters from the Virtual Agent Voice transcript.

  • VirtualAgentV2.MetaData–Stores the JSON data that is received from the agent bot as part of fulfillment or custom event handling. You can use this data to build more business logic in the flow builder (see the example later in this step).

  • VirtualAgentV2.StateEventName–Stores the name of the custom event that the system receives from the agent bot after the system triggers a custom state event.

Note the following:

  • Currently, en-US is the only supported language.

  • Only the u-law codec is supported.

  • When a call is transferred to a live agent, the transcript of the conversation between the caller and the virtual agent is displayed in the Transcript gadget in the Agent Desktop (only if the Transcript gadget is configured on the Agent Desktop).
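As referenced earlier in this step, the JSON stored in VirtualAgentV2.MetaData can drive branching later in the flow, typically through a Parse activity followed by a Condition activity. The Python sketch below shows the equivalent extraction for an external service that receives the same data; the payload and field names are hypothetical.

# Hypothetical example: extract a field from the MetaData JSON returned by
# the agent bot and branch on it (the flow equivalent is a Parse activity
# followed by a Condition activity).
import json

metadata_json = '{"orderStatus": "shipped", "eta": "2024-05-14"}'  # illustrative payload
metadata = json.loads(metadata_json)

if metadata.get("orderStatus") == "shipped":
    print("Route the caller to the shipping-status announcement.")
else:
    print("Escalate the call to a live agent for order investigation.")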

Dialogflow ES

A Dialogflow ES agent is a virtual agent that handles concurrent conversations with your end users. It is a natural language understanding module that understands the nuances of human language. You can design and build agent bots to handle the types of conversations required for your system. For more information about Dialogflow ES, see the Google documentation.

The conversation between the virtual agent and the caller appears on the Transcript widget of the Agent Desktop. The conversation appears on the Transcript widget only if responses are set in the 'Responses' section in Dialogflow ES.
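For reference, the text configured in the ES 'Responses' section is returned as the fulfillment text of the matched intent, and a webhook can return equivalent text. The following is a minimal sketch of such a webhook reply, with illustrative message text:

{
  "fulfillmentText": "Your request has been received. Is there anything else I can help with?"
}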

Prerequisites

To integrate with Dialogflow ES, complete the following tasks:

  • You must have the Dialogflow Essentials (ES) edition to integrate with Dialogflow ES.

  • Configure the Google CCAI connector in Control Hub. For more information, see the Configure Google CCAI Connector topic in the Set Up Integration Connectors for Webex Contact Center article.

  • Create the Contact Center AI (CCAI) feature in Control Hub. For more information, see the Create a Contact Center AI configuration article. The system generates the CCAI config ID that you can use in the Flow Control configurations.

Configure Virtual Agent V2 activity

Customers who use the Next Generation platform can configure the Virtual Agent V2 activity in the Flow Designer.

The Virtual Agent V2 activity provides a real-time conversational experience for your contacts. You can add the Virtual Agent V2 activity to the call flow to handle speech-based AI-enabled conversations. When a caller speaks, the system matches the speech to the best intent in the virtual agent. Further, it assists the caller as part of the Interactive Voice Response (IVR) experience.

More than one Virtual Agent V2 activity in the same branch in a call flow is not supported.

Outcomes

Indicates the output paths for the activity based on the outcome of the conversation between the virtual agent and the caller.

  • Handled–The outcome is triggered when the virtual agent execution is completed.

  • Escalated–The outcome is triggered when the call must be escalated to a human agent.

Error Handling

Indicates the output path of the activity for any error that occurs during the conversation between the virtual agent and the caller.

Errored–The flow takes this path in any error scenario.

Default system-level settings

The following settings are defined in the system internally by default. These settings don't appear on the user interface and cannot be changed:

  • An infinite number of retries for handling invalid-input or no-input errors.

  • Barge-in is enabled, so the caller can interrupt the Virtual Agent during the interaction.

  • DTMF termination symbol = #. This setting indicates the end of the input.

  • DTMF 'No-input timeout' = 5 seconds. This setting indicates the time duration for which the Virtual Agent waits for the caller's input.

  • DTMF 'Inter-digit timeout' = 3 seconds. This setting indicates the time duration for which the Virtual Agent waits for the next DTMF input from the caller before the Virtual Agent moves on in the conversation flow.

Before you begin

  • Set up a Dialogflow agent. For more information on building a Dialogflow agent in the Google Cloud, see Build an agent.

  • Configure the Google CCAI connector and create a CCAI configuration in the Control Hub.

  • Configure the entry point and choose the routing flow (once the flow is created in the Flow Designer). For more information, see Set up a channel.

1

Sign in to Control Hub and choose Services > Contact Center > Flows.

2

Click Manage Flows and then click Create Flows.

3

In the Flow Name field, enter a unique name and click Start Building Flow. The Flow Designer window appears.

4

Drag and drop the Virtual Agent V2 activity from the Activity Library to the main flow canvas.

5

In General Settings, perform the following actions:

  1. In the Activity Label field, enter a name for the activity.

  2. In the Activity Description field, enter a description for the activity.

6

In the Conversational Experience settings, choose one of the following Contact Center AI Config options:

  • Static–Choose the CCAI config to handle the conversations within the default PSTN region.

    The Contact Center AI config is populated based on the CCAI feature that is configured in Control Hub.

  • Variable–Choose the CCAI config to handle the conversation in the caller's location when the call originates from a remote or non-default PSTN region. This variable maps the PSTN region to the corresponding Google profile region.

    For more information about how to configure the variable CCAI config, see steps 6 to 8 in the Configure regional media for Virtual Agent-Voice article.
    • For a VAV flow to work, you need to set the global variables in the flow to configure the default input language and output voice for the virtual agent. For more information about how to add global variables in the flow, see Global Variables.
    • If you want to override the default input language and output voice for VAV, include the Set Variable activities before the Virtual Agent V2 activity in the flow.

      For custom input language, configure the Set Variable activity as follows:

      • Set the variable to Global_Language.

      • Set the variable value to the required language code (for example, fr-CA).

      For custom output voice, configure the Set Variable activity as follows:

      • Set the variable to Global_VoiceName.

      • Set the variable value to the required output voice name code (for example, en-US-Standard-D).

      For more information about the supported voices and languages in ES, see Supported voices and languages.

7

In the State Event settings, enter the custom event name and the data in the Event Name - Event Data columns. The state event is a mechanism to trigger the intent without the need for matched text or spoken input. You can define custom events to trigger the intent. For information on how to configure the intent for events in Dialogflow ES, see the Google documentation.

  • Event Name–(optional) Indicates the name of the event that is defined on the integrated third-party AI platform.

  • Event Data–(optional) Indicates the JSON data that the system sends (as part of the defined event name) to the integrated third-party AI platform.

You can specify the event name and the data in the form of a static value or expression. For expressions, use this syntax: {{ variable }}. The following is an example of the state event that is configured to greet the caller with a custom welcome message.

Event Name: CustomWelcome

Event Data: {"Name": "John"}

8

In Advanced Settings, perform the following actions:

  1. In the Speaking Rate field, enter the numeric value or expression to increase or decrease the rate of speech output.

    • Valid values for the numeric input are in the range from 0.25 to 4.0. The default value is 1.0.

      For example, with 0.5 set as the value, the output rate of speech becomes slower than the ideal rate. With 2 set as the value, the output rate of speech becomes faster than the ideal rate.

    • For expressions, you can use the syntax: {{variable}}.

  2. In the Volume Gain field, enter the numeric value or expression to increase or decrease the volume of speech output.

    • Valid values for the numeric input are in the range from –96.0 to 16.0 decibels (dB). The default value is 0.0 dB.

    • For expressions, you can use the syntax: {{variable}}.

  3. In the Pitch field, enter the numeric value or expression to increase or decrease the pitch of speech output.

    • Valid values for the numeric input are in the range from –20.0 to 20.0 hertz (Hz). The default value is 0.0 Hz.

    • For expressions, you can use the syntax: {{variable}}.

  4. In the Termination Delay field, enter the numeric value. This setting enables the virtual agent to complete the last message before the activity stops and moves on to the next step in the flow.

    For example, if you want the virtual agent to indicate something to the caller before the system escalates the call to an agent, consider the time it takes to complete the final message before escalation.

    The valid value for the numeric input is in the range from 0 to 30 seconds. The default value is 3 seconds.

    If you configure the Termination Delay value as 0, the system does not play the last audio message to the caller.

  5. Check the Enable Conversation Transcript check box to allow Agent Desktop to display the transcript of the conversation between the virtual agent and the caller.

    The raw transcript is also available through a dynamic URL. You can send an HTTP request to this URL to extract specific sections of the transcript.

9

In Activity Output Variables, you can view the list of variables that store the output status of events that occur during the conversation between the virtual agent and the caller.

  • VirtualAgentV2.TranscriptURL–Stores the URL that points to the transcript of the conversation between the virtual agent and the caller.

    Use the Parse activity to extract the parameters from the Virtual Agent Voice transcript.

  • The VirtualAgentV2.MetaData and VirtualAgentV2.StateEventName variables are not applicable.

Note the following:

  • Currently, en-US is the only supported language.

  • Only the u-law codec is supported.

  • When a call is transferred to a live agent, the transcript of the conversation between the caller and the virtual agent is displayed in the Transcript gadget in the Agent Desktop (only if the Transcript gadget is configured on the Agent Desktop).

IVR Transcript and Global Variables in Agent Desktop

The administrator can grant an agent access to view the Conversational IVR transcript and to view or edit the global variables (previously known as CAD variables), based on the configurations set in the call flow.

An agent can view the Conversational IVR transcript, and the global variables extracted from it, based on the permissions that the administrator sets in the call flow. For more information on the Conversational IVR transcript in Agent Desktop, see IVR Transcript Widget.

For more information on the global variables in Agent Desktop, see Enter call-associated data variables.