Designing

How to design your assistant on Voiceflow.

Listen Steps

An overview of the Steps used to listen/capture your users' inputs and interactions with your assistant.

Overview

The Listen Steps are the fundamental building blocks for designing a positive, intuitive user experience in your conversations. They are responsible for listening to, capturing, storing and handling user responses.

They are available on every project type and can be used to have your assistant interact with and expect a response from a user, whether through Buttons, Choices, or Captures.

This will be the primary functionality you use to interact with users and to receive and store their responses within your assistants.

Note: Depending on your project type (Chat or Voice), the primary Listen Step available will change. Voice Assistants primarily use the Choice step, while Chat Assistants use the Buttons step.

The Basics

Why Use Choice Paths/Buttons in Conversations?

One of the most important concepts in conversational design is to guide or prompt the user on to the next step of their journey or conversation. Remember, they are interacting with the system to accomplish a task and reach a successful outcome, but they may not know how to get there on their own.

When inviting your end-user to provide a range of responses (wide or limited), you will find that many of those responses fall outside the scope of your bot.

To reduce the amount of unexpected input that users send your conversational assistant, the Choice step allows you to focus on the range of responses your user can easily give.

Note: This scenario operates similarly with the Buttons step for Chat Assistants and other non-voice assistants.

In addition to providing choice-paths or intents to guide your user through, there are a number of other reasons you will want to leverage choice paths in building your conversations.

  • Expect users to be informative - Because users are cooperative, they often offer more information or context than is strictly required or prompted of them.
  • Get the dialog back on track - Your conversation won't always be able to handle cooperative responses. In these cases, rely on pre-built choices or paths for existing parts of your conversation, or on conversational error handling, to steer the dialog back to its intended direction.
  • Move the conversation forward - Every path in your conversation should be intentional; Choices (or Buttons) offer the ability to advance the conversation with pre-defined choices/intents.

Conceptually Imagining Choice Paths

Example: how might we use a Choice path to direct a conversation? Let's say we want to provide the user with options to go left or right:

To create a path for 'right' we would create an intent called "go_right" and provide some sample utterances for what the user might say to signify their intention is to go right.

Some sample utterances for our "go_right" intent could be:

  • "go right"
  • "I want to go right"
  • "right"
  • "please go right"

Now, if the user says any of the sample utterances within the go_right intent, that intent will be triggered and the conversation will follow whatever path has the go_right intent linked.
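
To make this concrete, here is a minimal JavaScript sketch of the idea. The object shape is purely illustrative; it is not Voiceflow's internal intent format, and a real NLU model generalizes well beyond exact string matches:

    // A hypothetical, simplified representation of the "go_right" intent.
    const goRightIntent = {
      name: 'go_right',
      utterances: ['go right', 'I want to go right', 'right', 'please go right'],
    };

    // Naive matcher for illustration: checks for an exact utterance match.
    function matchesIntent(userInput, intent) {
      return intent.utterances.includes(userInput.trim().toLowerCase());
    }

    console.log(matchesIntent('Please go right', goRightIntent)); // true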

Choice

When designing linear and choice-based conversations in Voice Assistant projects, the Choice step is ideal for pre-defined paths and choices.

Note: For Voice Assistant projects, the Choice step can also be used to prompt users open-endedly, with intents acting as its choices.

Creating Choice Paths

Choice steps are made up of defined, intent-based "paths", plus a fallback path for when a user doesn't reply or doesn't match any of the available intents.

When the user reaches a Choice step in the conversation, the assistant will listen for the user’s intent.

Note: In its default state (i.e. when you drag a Choice step from the sidebar), it will appear and function as 'Listening for an intent'; if you click Add Choice and open the settings configuration, you can limit this behavior.

Depending on the user’s intent, the assistant will take them down different paths or intents defined in the conversation.

You can add additional paths to your Choice step by clicking 'Add Choice' each time you want to add a new Choice.

When selected, the choice path will have a 'port' attached to it on the canvas - allowing it to be linked to another block or step.

Adding Intents to Choice Paths

Once you've created a Choice path, you can choose its linked intent. Conceptually, this lets you direct users down conversation paths most relevant to their needs. Essentially, you are defining how your assistant should route users depending on their chosen intent or 'choice'.

For example, if you have an intent of "order pizza" for path 2 and the user activates the order pizza intent, the user will follow path 2.
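
Conceptually, the Choice step behaves like a router keyed on the matched intent. A sketch, with intent names and path labels chosen purely for illustration:

    // Route the matched intent to the Choice path it is linked to.
    function routeChoice(matchedIntent) {
      switch (matchedIntent) {
        case 'order_pizza':
          return 'path 2'; // the path linked to the order_pizza intent
        case 'order_salad':
          return 'path 1';
        default:
          return 'no_match'; // falls through to No Match handling
      }
    }

    console.log(routeChoice('order_pizza')); // "path 2"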

You can choose an existing intent configured in your conversation and/or model, or create a new one within the Choice step. To link an existing intent, click into the path's input, as indicated in the steps above. You can then choose from your project's existing intents in the dropdown menu.

To create a new intent from this step, type a new intent name in the path's input and hit enter or select the Create New Intent button located at the bottom of the sub-menu.

Note: You can modify a path's linked intent without deleting the path's connection point.

Connecting a path (Follow Path)

Now that you've configured your Choice step, you can connect it to a conversation path on canvas.

To link a Choice path, navigate back to the step, click the port of your selected path, and drag the connector to the block you want to link to. You can modify connections by clicking the port of your selected path.

Choice Actions - Intent Actions

In addition to porting a connection in your Choice step, you can use Actions to nest navigation and backend logic in a single Choice within the step. These can be configured under the Actions menu, after selecting an intent.

Under the Choice step, you can perform these nested actions per Choice:

  • Go to Block - Goes to a specific block referenced within the project
  • Go to Intent - Goes to an existing intent contained in the project
  • End - Ends the conversation at its current state
  • Set variable - Allows you to set and change the value of variables
  • API - Allows you to set up, configure and execute API calls & functions
  • Code - Allows you to set up and code custom JavaScript functions & commands (see the sketch below)
Tip: Learn more about Actions in detail here.
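
As an illustration of the Code action, the sketch below shows the kind of custom JavaScript you might run there. It assumes, as a simplification, that project variables (here, order_total and discount, both hypothetical) can be read and assigned by name inside the Code editor; verify the exact variable access rules against the current Code step documentation:

    // Hypothetical Code action: compute a discount and store it in a
    // project variable so later steps can reference it. The declarations
    // below stand in for variables Voiceflow would normally provide.
    let order_total = 120;
    let discount = 0;

    if (order_total > 100) {
      discount = order_total * 0.1; // 10% off orders above 100
    }

    console.log(discount); // 12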

Intent Scoping

By default, your Choice step will be listening for all intents, waiting for the user to prompt and guide the conversation.

If you want to limit the Choice step's scope to only the intents contained in the step, you can click the settings icon located to the left of the Add Choice button and select Intent scoping.

From there, you can set whether the Choice step matches intents across your conversation at the global level, or only at the designated step level.

Configuring for No Match

There may be instances where your user's input doesn't match the intents in your Choice step. For these cases, you can define what happens when the assistant detects a 'No Match'.

In the 'No Match' section of the Choice path, you can choose whether you want to reprompt your users and/or configure a fallback path. You can configure both options below to utilize both features.

  • To add reprompt options, input the intended No Match response in the text input field. You can add additional reprompts by hitting the (+) button or remove with (-)
  • To connect your No Match to a conversation or fallback path, select 'Path' and hit the (+). This will let you select the conversation path for your fallback.

Should you configure a fallback path, you also have the option to add Actions. You can also rename the No Match path for your reference on the Canvas, or remove the path with (-).

Configuring No Reply

If the assistant does not hear your user's response, or the user's response is unintelligible, the No Reply Response occurs. To define your No Reply response, access the more options menu by clicking the ellipses button in the Choice step, and select 'Add no reply'.

You can configure your No Reply response message, the time delay, and connect it to a conversation path.

Should you configure a fallback path, you also have the option to add Actions. You can also rename the No Reply path for your reference on the Canvas, or remove the path with (-).

Tip: No Reply Responses only occur twice and will exit the app if the assistant still doesn't understand after a third attempt.
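
The retry behavior described in this tip can be pictured as a simple counter. This is a conceptual sketch only, not how the runtime is implemented:

    // Conceptual model of No Reply handling: reprompt up to twice,
    // then exit on the third silent attempt.
    function handleNoReply(attempt) {
      const MAX_REPROMPTS = 2;
      return attempt <= MAX_REPROMPTS
        ? 'play the no-reply reprompt and listen again'
        : 'exit the app';
    }

    console.log(handleNoReply(1)); // reprompt
    console.log(handleNoReply(3)); // exit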

Previewing No Match/No Reply

If you configure No Match and/or No Reply on your Choice step, a preview menu will also appear when you navigate to the Choice step on the Canvas.

To preview it, navigate back to the desired Choice step and select the question-mark bubble (?) for No Match or the time bubble for No Reply:

You will notice the configured no-match or no-reply responses appear under the Choice intent/path. Clicking the copy icon copies this content onto your clipboard. The Edit icon will take you to the editor.

Tip: Viewer-level stakeholders on your canvas are still able to access this preview menu, but will not be able to access the Editor and its icon.

Intent Creation & Editing from Choice Steps

You can now enter the workflow for Intent creation, editing, and NLU configurations (ie. entities/utterances) right inside the Choice Step!

To enter this workflow, select your desired Intent in the respective Choice option in the Choice step.

Tip: You will notice you can also preview the required entities associated with the Intent and its utterances.

Then, select the pencil icon located to the left of the Intent name in this selector menu.

Intent Editing - you can configure the intent's name, utterances, and required entities

Configuring Entity Reprompts - clicking your desired entity under the Required entities section in the Editor allows you to configure the entity reprompt messages and configurations. Once you configure an Entity Reprompt, it will appear as a preview in the main-step view on Canvas.

Tip: The Required entities section can also suggest entities detected in your utterances as quick-buttons, letting you add them to your required entities in one click.

Description - you can use this field to pass any notes/comments/metadata that you think is relevant for stakeholders or designers of this intent, or use it as a place to tag/mention your teammates for collaboration.

Previewing Entity Reprompts - when you configure a required entity reprompt, you can preview it in the main step-level view on the Canvas. To preview it, navigate back to the desired Choice step and select the checkmark-nested-braces icon:

You will notice the configured required entity appear under the intent, with the entity reprompt content underneath. Clicking the copy icon copies this content onto your clipboard. The Edit icon will take you to the editor.

Tip: Viewer-level stakeholders on your canvas are still able to access this preview menu, but will not be able to access the Editor and its icon.

This preview menu will also appear when you configure No-Match and/or No-Reply on your Choice step.

Buttons

Functioning similarly to Choice steps, Buttons are commonly used in Chat Assistant (i.e. chatbot) projects to present choice paths, options or decision/input points that help progress the conversation.

In chatbot contexts, they are often buttons that your end user can select in order to progress the conversation.

For Chat Assistant projects, the Buttons step can be used to prompt users with quick replies and present choices. It can also be used to prompt users open-endedly, with intents as its choices, or in combination with buttons.

Note: This step behaves and functions similarly to the Choice step, including associated features such as Intent Editing/Creation workflows, scoping, previewing No Match/No Reply and many more!

Configuring Buttons

By default, your Buttons step will be listening for all intents, waiting for the user to prompt and guide the conversation. You can add additional buttons by clicking Add Button (similar to Add Choice for Choice step above).

You can configure your button labels in the button text input field, and define what happens when it is pressed:

  • Path - you can use the port in your Button Step so that it can be linked to another part of your flow. In the Prototype and Test Tool, when the button is clicked, it will navigate the user down that path in the flow.
  • Attach Intent - selecting an intent from this dropdown will match that intent when the button is pressed. If you have the 'Path' connected, it will still follow the path you have linked. Similar to the Choice step, you can edit your intents directly from this step, as well.
  • Actions - you can use Actions to nest navigation and backend logic in a single Button within the step. This includes Actions & call-to-actions such as Open URL, API Calls, and many more (see the sketch below).
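
Putting these options together, a single configured button can be thought of as a label plus its behavior. The object shape below is purely illustrative (it is not an export format), and the intent, path and URL are hypothetical:

    // Illustrative model of one configured button.
    const orderStatusButton = {
      label: 'Check order status',          // button text shown to the user
      attachedIntent: 'check_order_status', // intent matched when pressed
      path: 'order-status-flow',            // canvas path the button follows
      actions: [{ type: 'open_url', url: 'https://example.com/orders' }],
    };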

Intent Scoping

As indicated above, your Buttons step will be listening for all intents by default, waiting for the user to prompt and guide the conversation.

If you want to limit the Buttons step's scope to only the intents contained in the step, you can click the settings icon located to the left of the Add Button button and select Intent scoping. From there, you can choose between Listen for all intents or Only intents in this step.

No Match in Button-Step

There may be instances where your user submits an input that doesn't match the choices in the Button step. For these cases, you can define what happens when the assistant detects a 'No Match'.

In the 'No Match' section of the Button(s) path, you can choose whether you want to reprompt your users and/or configure a fallback path. You can configure both options below to utilize both features.

  • To add reprompt options, input the intended No Match response in the text input field. You can add additional reprompts by hitting the (+) button or remove with (-)
  • To connect your No Match to a conversation or fallback path, select 'Path' and hit the (+). This will let you select the conversation path for your fallback.

Should you configure a fallback path, you also have the option to add Actions. You can also rename the No Match path for your reference on the Canvas, or remove the path with (-).

No Reply in Button-Step

If the assistant does not process or understand your user's response, or the user's response is unintelligible, the No Reply response occurs. To define your No Reply response, access the more options menu by clicking the ellipses button in the Buttons step, and select 'Add no reply'.

You can configure your No Reply response message, the time delay, and connect it to a conversation path.

Should you configure a fallback path, you also have the option to add Actions. You can also rename the No Reply path for your reference on the Canvas, or remove the path with (-).

Tip: No Reply Responses only occur twice and will exit the app if the assistant still doesn't understand after a third attempt.

Previewing No Match/No Reply on Canvas

Functioning similarly to Choice steps in Voice projects, if you configure No Match and/or No Reply on your Buttons step, a preview menu will appear when you navigate to the Button step on the Canvas.

To preview it, you can navigate back to the main desired Button step and select the question-mark bubble (?) for No Match or time bubble for No Reply.

You will notice the configured no-match or no-reply responses appear under the Button intent/path. Clicking the copy icon copies this content onto your clipboard. The Edit icon will take you to the editor.

Tip: Viewer-level stakeholders on your canvas are still able to access this preview menu, but will not be able to access the Editor and its icon.

Button Layout Options

You can set how you want your buttons to appear in the Prototype and Test Tool, using the options found in the settings menu, under Buttons Layout:

  • Stacked - this will display your buttons stacked vertically, left-aligned in the chat
  • Carousel - this will display your buttons horizontally, so the user can scroll through them in the chat

Capture Step

In order to create seamless, user-friendly conversations, chatbots and voice assistants need to be able to capture information in the ways that humans talk.

The Capture Step lets you build dynamic conversation experiences by capturing and recording all or part of a user's utterance within a selected variable. This can be used to collect a specific piece of information from your user, such as their name or an email address.

After adding a Capture step to your project, you have the option of capturing the “Entire user reply”, or capturing and mapping the user's response to specific entities in their utterance.

You also have the option of adding Actions, No Match, No Reply and can add/configure additional Captures.

Note: The power of the Capture step is leveraging it to personalize conversations by applying captured variables in your responses.

Note that in conversation design, the Capture step is ultimately waiting on user input. This means you can't add steps after a Capture within a block; like a Prompt or Choice step, it must be the last step in its block. The Capture step should end a “turn” in the conversation you’re designing.

Tip: On Voiceflow, there are now multiple ways to save user information. You can use the Capture step and/or the Choice step. Read more about best practices and when to use the Capture vs. Choice step(s) here.

Capturing the Entire User Reply

The Capture step can capture and record a user's entire utterance (user reply) and store it within a selected variable.

Once you’ve selected ‘Entire User Reply’, choose the variable you want to use to store the user’s reply response.

You can select from your existing variables or create a new one. This lets you use the captured information throughout your project.
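
For example, if you store the reply in a variable called name (a variable name chosen here for illustration), a later response can reference it with curly-brace variable syntax:

    User says:       "Sarah"
    Capture:         Entire user reply → {name}
    Later response:  "Nice to meet you, {name}!" → "Nice to meet you, Sarah!"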

Capturing Specific Entities

Alternatively, you can choose to capture entities within your user's response. This option lets you extract specific pieces of information from your user's utterance (e.g. name, plan type, country).

Note: Entities are the same ones that exist in your interaction model and can be reused in intents, or referenced in output steps like speak or text.

Entity Creation & Editing from Capture Steps

You can now enter the workflow for Entity Creation & Editing, Type, Color and NLU configurations (slot values & synonyms) right inside the Capture Step!

  • To Create New Entity, select the bottom-menu option and configure your newly created Entity in the modal that appears
  • To Edit a Selected Entity, select your desired Entity in the entity dropdown menu in the Capture step. Then, select the pencil icon located to the left of the Entity name in this selector menu.

Adding utterances

Once you’ve selected the entity you want to capture, ensure you add a few sample utterances for the entity. This helps the machine learning model identify different ways the user might say the entity and its phrases.

Because these utterances represent expected user responses containing the entity, make sure to include the entity itself and its variations in your sample responses/utterances.

Note: This differs from populating the synonyms/slot types under your NLU Model (M) for the entity itself.
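
For example, for a hypothetical colour entity, the two play different roles:

    Sample utterances (on the Capture step):
      "my favourite colour is {colour}"
      "I like {colour}"
      "{colour}"

    Slot values & synonyms (under the NLU Model):
      red, blue, green, yellow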

Configuring entity prompts

In some cases the user's response may not contain the entity you want to capture. Adding an Entity Prompt lets your assistant follow up and ask the user for the required information.

For example, let's say we want the user to provide their favourite colour. If they respond instead with 'hello', this will trigger the entity prompt and the assistant will request a valid response from the user.

You can input your desired Entity Prompt responses in the field. And similar to Text steps, Entity Prompt fields support markup styling. Should you no longer require an entity reprompt, you can delete it with the (-) icon.

Capture Step - Actions

In addition to configuring your Capture step, you can use Actions to nest navigation and backend logic in a single Capture within the step.

Under the Capture step, you can perform these nested actions per Capture:

  • Go to Block - Goes to a specific block referenced within the project
  • Go to Intent - Goes to an existing intent contained in the project
  • End - Ends the conversation at its current state
  • Set variable - Allows you to set and change the value of variables
  • API - Allows you to set up, configure and execute API calls & functions
  • Code - Allows you to set up and code custom JavaScript functions & commands
Tip: Learn more about Actions in detail here. The API action is illustrated in the sketch below.
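
To illustrate the API action, the sketch below shows the kind of request it could be configured to make. The endpoint, response shape and variable mapping are all hypothetical:

    // Hypothetical API action: look up an order and return its status,
    // which would then be mapped to a project variable for later steps.
    async function lookupOrder(orderNumber) {
      const res = await fetch(`https://api.example.com/orders/${orderNumber}`);
      if (!res.ok) throw new Error(`Lookup failed: ${res.status}`);
      const data = await res.json();
      return data.status; // e.g. "shipped"
    }

    lookupOrder('12345').then((status) => console.log(status));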

No Match/No Reply - Configurations on Capture Step

There may be instances where the user says something completely unrelated (No Match) to what you are trying to capture, or they simply may not respond (No Reply).

  1. If the assistant does not hear your user's response, or the user's response is unintelligible, the No Reply Response occurs.
  2. If the user says something completely unrelated to what we are trying to capture, we can handle those paths with a No Match response, and provide a better experience when it’s not understood.

For either of these cases, you can guide your users to an alternative conversation path with a 'No Match' or 'No Reply' response under the Capture step.

Tip: You can configure No Reply by hitting the settings icon at the footer of the Capture step editor and selecting 'Add no reply'. The instructions below apply to both No Match and No Reply.

Under the No Match portion of your entity capture, you can choose to reprompt your users by adding responses (which can be formatted with Text styling) and configure their randomization.

You can also add a No Match path, so that you're able to connect the path to a section in your project. This will let you select the conversation path for your fallback.

You can rename the label of the No Match path, so that it can be easily referenced on Canvas.

You can also use Actions to nest navigation and backend logic in a No Match within the Capture step.

Tip: You can configure your No Reply response message, the time delay before triggering no reply response(s), and connect it to a conversation path similar to the No Match workflow outlined above.

Adding multiple entity captures

When you’re having a conversation with another person, you might provide or request several pieces of information at once. With the Capture step, multiple entities can be added per step to extract additional information.

Tip: You may encounter scenarios where you expect your user to provide, or they attempt to provide, multiple pieces of information in one response: for example, name and email, name and confirmation number, or email and tracking number, depending on the context of the question.

You can use the capture step to collect multiple entities in your step. To add another captured entity, hit ‘Add Capture’.

Note: You can configure prompts for each of your captured entities. This ensures that the assistant captures all the necessary information in the right entity and slot type.

Each entity can have a prompt attached to it, so if the user doesn’t fill all the entities, we can ask for each one individually before moving on with the flow, ensuring all necessary information is captured in the right place.
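
That per-entity prompting can be pictured as a simple slot-filling loop. A conceptual sketch, with entity names and prompts chosen for illustration:

    // Conceptual slot-filling: prompt for each missing entity before
    // moving on with the flow.
    const required = ['name', 'email'];
    const captured = { name: 'Sarah' }; // email not yet provided
    const prompts = {
      name: 'What is your name?',
      email: 'What email address should I use?',
    };

    for (const entity of required) {
      if (!(entity in captured)) {
        console.log(prompts[entity]); // "What email address should I use?"
      }
    }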

Intent (Legacy)

Note: This walkthrough is of the legacy-version of the Intent step. Intent step now lives under the Event section.

The Intent step lets you create non-linear and flexible conversation paths within your project. It can operate as linearly as the Choice step, or globally, by allowing the user to trigger any available open intent.

Because of this, Intents provide the flexibility to jump between conversations, Topics, Components/Flows and other Intents.

Note: The Intent step requires no direct link and can be activated from anywhere within the project (as long as it's in contextual scope) by its linked intent. Unlike the Choice or Buttons step, you can't link to an Intent step. It only accepts outward connections.

Linking Intents to an Intent Step

Intent steps are constantly listening for their linked intent to be invoked by a user. When their linked intent is invoked, the Intent step is triggered and users are directed to its corresponding conversation path.

To link an intent to an Intent step, you can choose an existing intent or create a new intent.

Pulling Entities from an Intent Step

Intents linked within an Intent step act like a normal intent and can thus pull entities from the user's utterance if the entity values have been defined.
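
For example (with an illustrative intent and entities), a single utterance can both trigger the Intent step and fill its entities:

    User says:        "book a table for four at 7pm"
    Matched intent:   book_table
    Pulled entities:  party_size = "four", time = "7pm"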

Prompt (Legacy)

Important Notice: This step's functionality has been embedded into the Choice and Buttons steps; it is no longer supported as a standalone step and is not available for new projects. Video walkthrough found here.

The Prompt step acts as a stop-and-listen function inside your project. When the Prompt step is traversed, the system will wait for the user to submit an input that matches an intent within your project. Once an intent is matched, the project will jump to the step containing that intent and continue from there.

Tip: The user can also match intents inside Commands from a Prompt step.

The prompt and intent steps are used in tandem to create non-linear conversations, an important conversation design best practice.

Prompt Step - Buttons

Add buttons to the end of your messages in conversations to allow users to quickly trigger intents visually.

Tip: For Chat Assistant projects, you can add buttons to prompt your users with quick replies or choices.

In the buttons section of your Prompt step, you can input labels for your buttons and choose its linked intent. To add additional buttons, hit 'Add Button'.

Prompt Step - No Match

There may be instances where your user submits an input that doesn't match any of the intents in your project. For these cases, you can define what happens when the assistant detects a 'No Match'.

In the 'No Match' section, you can choose whether you want to reprompt your users and/or configure a fallback path.

To add reprompt options, select 'Reprompt' as your No Match type. You can add additional reprompts by hitting the 'Text' button.

To connect your No Match to a conversation path, select 'Path' as your No Match Type. This will let you select the conversation path for your fallback.

Prompt Step - No Reply

If the assistant does not hear your user's response, or the user's response is unintelligible, the No Reply Response occurs. To define your No Reply response, access the more options menu by clicking the ellipses button in the Prompt step, and select 'Add No Reply'.

You can configure your No Reply response message, the time delay, and connect it to a conversation path.

Tip: No Reply Responses only occur twice and will exit the app if the assistant still doesn't understand after a third attempt.
