Designing

How to design your assistant on Voiceflow.

User Input Steps

An overview of the Steps used to capture your users' inputs and interactions with your assistant.

Choice

When designing linear, choice-based conversations, the Choice step is ideal for presenting pre-defined paths and choices.

Why Use Choice Paths in Conversations?

Here's a sample of how we might use a Choice path to direct a conversation:

Let's say we want to provide the user with options to go left or right. To create a path for 'right', we would create an intent called "go_right" and provide some sample utterances for what the user might say to signify their intention is to go right. Some sample utterances for our "go_right" intent could be:

  • "go right"
  • "I want to go right"
  • "right"
  • "please go right"

Now, if the user says any of the sample utterances within the go_right intent, that intent will be triggered and the conversation will follow whatever path the go_right intent is linked to.
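To make this concrete, here is a minimal sketch of an intent as a name plus sample utterances, with a naive matcher. This is purely illustrative TypeScript: Voiceflow's NLU generalizes well beyond exact utterance matches, and the shapes below are assumptions, not Voiceflow's actual data model.

    // Illustrative sketch: an intent is a name plus sample utterances.
    interface Intent {
      name: string;
      utterances: string[];
    }

    const goRight: Intent = {
      name: "go_right",
      utterances: ["go right", "I want to go right", "right", "please go right"],
    };

    // A real NLU model generalizes beyond exact matches; this naive check
    // only illustrates what it means for an intent to be "triggered".
    function matches(intent: Intent, input: string): boolean {
      return intent.utterances.includes(input.trim().toLowerCase());
    }

    console.log(matches(goRight, "Please go right")); // true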

Note: For Voice Assistant projects, the Choice Step can also be used to prompt users with choices.

Creating Choice Paths

Choice steps are made up of pre-defined, intent-linked "paths" and a fallback path for when a user doesn't reply or doesn't match any of the available intents.

When the user reaches a choice step in the conversation, the assistant will listen for the user’s intent. Depending on the user’s intent, the assistant will take them down different paths in the conversation.

You can add additional paths to your Choice step by clicking 'Add Choice'.
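Conceptually, you can think of a Choice step as a routing table from matched intents to paths, with the fallback path taken when nothing matches. A minimal sketch, again using illustrative (not actual) data shapes:

    // Illustrative sketch of a Choice step as a routing table.
    interface ChoicePath {
      intentName: string;   // the intent linked to this path
      nextBlockId: string;  // where the conversation continues
    }

    interface ChoiceStep {
      paths: ChoicePath[];
      fallbackBlockId: string; // taken when no intent matches (No Match)
    }

    function route(step: ChoiceStep, matchedIntent: string | null): string {
      const path = step.paths.find((p) => p.intentName === matchedIntent);
      return path ? path.nextBlockId : step.fallbackBlockId;
    }

    const directions: ChoiceStep = {
      paths: [
        { intentName: "go_right", nextBlockId: "right_block" },
        { intentName: "go_left", nextBlockId: "left_block" },
      ],
      fallbackBlockId: "no_match_block",
    };

    console.log(route(directions, "go_right")); // "right_block"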

Adding Intents to Paths

Once you've created a Choice path, you can choose its linked intent. This lets you direct users down conversation paths most relevant to their needs. Essentially, you are defining how your assistant should route users depending on their chosen intent.

For example, if you have an intent of "order pizza" for path 2 and the user activates the order pizza intent, the user will follow path 2.

You can choose an existing intent or create a new one within the Choice step. To link an existing intent, click into the path's input. You can then choose from your project's existing intents in the dropdown.

To create a new intent from this step, type a new intent name in the path's input and hit enter.

Note: You can modify a path's linked intent without deleting the path's connection point.

Connecting a path

Now that you've configured your Choice step, you can connect it to a conversation path on canvas.

To link a Choice path, click the port of your selected path, then drag the connector to the block you want to link to.

You can modify connections by clicking the port of your selected path.

Configuring for No Match

There may be instances where your user's input doesn't match the intents in your Choice step. For these cases, you can define what happens when the assistant detects a 'No Match'.

In the 'No Match' section of the Choice step, you can choose whether you want to reprompt your users and/or configure a fallback path.

To add reprompt options, select 'Reprompt' as your No Match type. You can add additional reprompts by hitting the 'System' button.

To connect your No Match to a conversation path, select 'Path' as your No Match type. This will let you select the conversation path for your fallback.
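As a rough mental model, the No Match type selects one of two behaviors: stay on the step and reprompt, or exit down a fallback path. A sketch of that choice, with illustrative shapes:

    // Illustrative sketch of the two No Match behaviors.
    type NoMatchConfig =
      | { type: "reprompt"; reprompts: string[] }  // stay on the step and re-ask
      | { type: "path"; fallbackBlockId: string }; // exit to a fallback path

    function handleNoMatch(config: NoMatchConfig, attempt: number): string {
      if (config.type === "reprompt") {
        // Cycle through the configured reprompts on successive attempts.
        return config.reprompts[attempt % config.reprompts.length];
      }
      return `goto:${config.fallbackBlockId}`;
    }

    const noMatch: NoMatchConfig = {
      type: "reprompt",
      reprompts: ["Sorry, did you want to go left or right?"],
    };
    console.log(handleNoMatch(noMatch, 0)); // "Sorry, did you want to go left or right?"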

No Reply

If the assistant does not hear your user's response, or the user's response is unintelligible, the No Reply Response occurs. To define your No Reply response, access the more options menu by clicking the ellipsis button in the Choice step, and select 'Add a No Reply Response'.

You can configure your No Reply response message, the time delay, and connect it to a conversation path.

Tip: No Reply Responses only occur twice; the assistant will exit the app if it still doesn't get a response after a third attempt.
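That retry rule can be sketched as a simple counter. This illustrates the behavior described in the tip, not Voiceflow's implementation:

    // Illustrative sketch: No Reply responses fire at most twice, then the app exits.
    interface NoReplyConfig {
      message: string;
      delayMs: number; // how long to wait before treating silence as a No Reply
    }

    function onSilence(config: NoReplyConfig, noReplyCount: number): string {
      if (noReplyCount < 2) {
        return config.message; // first and second silences: play the No Reply response
      }
      return "exit"; // third silence: exit the app
    }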

Buttons

Buttons are commonly used in Chat Assistant (i.e. chatbot) projects to present choice paths, options, or decision/input points that help progress the conversation.

Note: For Chat Assistant projects, the Button Step can be used to prompt users with quick replies and present choices.

Configuring Buttons

When you add a Button Step, there will be one button visible in your editor by default. You can add additional buttons by clicking 'Add Button'.

You can configure your button labels in the button text input field and define what happens when each button is pressed (see the sketch after this list):

  • Path - this adds a port to your Button Step so it can be linked to another part of your flow. In the Prototype and Test Tool, when the button is clicked, it will navigate the user down that path in the flow.
  • Intent - selecting an intent from this dropdown will match that intent when the button is pressed. If you have the 'Path' option enabled, it will still follow the path you have linked. If you do not have the 'Path' option enabled, it will navigate the user to any open Intent Steps where the intent is set.
  • URL - opens a new tab with that URL when the button is clicked. It can be combined with the 'Path' or 'Intent' option to continue navigating the user through your project. If only the 'URL' option is enabled, then the project will end once the URL is opened.
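The sketch below shows how the three options combine when a button is pressed, following the precedence described in the list above. The shapes and return values are illustrative assumptions:

    // Illustrative sketch of how a button's Path/Intent/URL options combine.
    interface ButtonConfig {
      label: string;
      pathBlockId?: string; // set when the 'Path' option is enabled
      intentName?: string;  // set when the 'Intent' option is enabled
      url?: string;         // set when the 'URL' option is enabled
    }

    function onButtonPress(button: ButtonConfig): string {
      if (button.url) {
        console.log(`open new tab: ${button.url}`);
      }
      if (button.pathBlockId) {
        return `goto:${button.pathBlockId}`; // a linked path is followed first
      }
      if (button.intentName) {
        return `match:${button.intentName}`; // jumps to an open Intent step
      }
      return "end"; // URL only: the project ends once the URL is opened
    }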

No Match in the Button Step

There may be instances where your user submits an input that doesn't match the choices in the Button step. For these cases, you can define what happens when the assistant detects a 'No Match'.

In the 'No Match' section, you can choose whether you want to reprompt your users and/or configure a fallback path.

To add reprompt options, select 'Reprompt' as your No Match type. You can add additional reprompts by hitting the 'Text' button.

To connect your No Match to a conversation path, select 'Path' as your No Match type. This will let you select the conversation path for your fallback.

No Reply in the Button Step

If the assistant does not hear your user's response, or the user's response is unintelligible, the No Reply Response occurs. To define your No Reply response, access the more options menu by clicking the ellipsis button in the Button step, and select 'Add a No Reply Response'.

You can configure your No Reply response message, the time delay, and connect it to a conversation path.

Tip: No Reply Responses only occur twice; the assistant will exit the app if it still doesn't get a response after a third attempt.

Button Layout Options

You can set how you want your buttons to appear in the Prototype and Test Tool, using the options found in the '...' menu, under Buttons Layout:

  • Stacked - this will display your buttons stacked vertically, left-aligned in the chat
  • Carousel - this will display your buttons horizontally, so the user can scroll through them in the chat

Prompt

The Prompt step acts as a stop-and-listen function inside your project. When the Prompt step is traversed, the system will wait for the user to submit an input that matches an intent within your project. Once an intent is matched, the project will jump to the step containing that intent and continue from there.

Tip: The user can also match intents inside commands from a Prompt step.

The Prompt and Intent steps are used in tandem to create non-linear conversations, an important conversation design best practice.
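A minimal sketch of that stop-and-listen behavior, assuming a simplified project model where each open intent maps to the step that owns it:

    // Illustrative sketch of the Prompt step's stop-and-listen behavior.
    interface Project {
      // Maps each open intent to the ID of the step containing it.
      intentSteps: Map<string, string>;
    }

    function onPrompt(project: Project, matchedIntent: string | null): string {
      if (matchedIntent && project.intentSteps.has(matchedIntent)) {
        // Jump to the step containing the matched intent and continue there.
        return project.intentSteps.get(matchedIntent)!;
      }
      return "no_match"; // otherwise fall through to No Match handling
    }

    const project: Project = {
      intentSteps: new Map([["order_pizza", "pizza_block"]]),
    };
    console.log(onPrompt(project, "order_pizza")); // "pizza_block"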

Prompt Step - Buttons

Add buttons to the end of your messages in conversations to allow users to quickly trigger intents visually.

Tip: For Chat Assistant projects, you can add buttons to prompt your users with quick replies or choices.

In the buttons section of your Prompt step, you can input labels for your buttons and choose each button's linked intent. To add additional buttons, hit 'Add Button'.

Prompt Step - No Match

There may be instances where your user submits an input that doesn't match any of the intents in your project. For these cases, you can define what happens when the assistant detects a 'No Match'.

In the 'No Match' section, you can choose whether you want to reprompt your users and/or configure a fallback path.

To add reprompt options, select 'Reprompt' as your No Match type. You can add additional reprompts by hitting the 'Text' button.

To connect your No Match to a conversation path, select 'Path' as your No Match type. This will let you select the conversation path for your fallback.

Prompt Step - No Reply

If the assistant does not hear your user's response, or the user's response is unintelligible, the No Reply Response occurs. To define your No Reply response, access the more options menu by clicking the ellipsis button in the Prompt step, and select 'Add No Reply'.

You can configure your No Reply response message, the time delay, and connect it to a conversation path.

Tip: No Reply Responses only occur twice; the assistant will exit the app if it still doesn't get a response after a third attempt.

Intent

The Intent step lets you create non-linear and flexible conversation paths within your project. It can operate as linearly as the Choice step, or globally, allowing the user to trigger any open intent available. This flexibility lets users jump between conversations, Topics & Components/Flows, and other Intents.

The Intent step requires no direct link and can be activated from anywhere within the project (as long as it's in contextual scope) by its linked intent. Unlike the Choice step, you can't link into an Intent step; it only accepts outward connections.

Linking Intents to an Intent Step

Intent steps are constantly listening for their linked intent to be invoked by a user. When their linked intent is invoked, the Intent step is triggered and users are directed to its corresponding conversation path.

To link an intent to an Intent step, you can choose an existing intent or create a new intent.

Pulling Entities from an Intent Step

Intents linked within an Intent step act like a normal intent and can thus pull entities from the user's utterance if the entity values have been defined.

Capture Step

In order to create a seamless, user-friendly experience, Chatbots and Voice assistants need to be able to capture information in the ways that humans talk.

The Capture Step lets you build dynamic conversation experiences by capturing and recording all or part of a user's utterance within a selected variable. This can be used to collect a specific piece of information from your user, such as their name or an email address.

After adding a Capture step to your project, you have the option of capturing the “Entire user reply” or specific entities in the user's utterance.

Note: The power of the Capture step lies in personalizing conversations by applying captured variables in your responses.

Note that in conversation design, the Capture step is ultimately waiting on user input. This means you're not able to add steps after a Capture within a block - like a Prompt or Choice step, it must be the last step. The Capture step should end a “turn” in the conversation you’re designing.

Tip: On Voiceflow, there are multiple ways to save user information. You can use the Capture step and/or the Choice step. Read more about best practices and when to use the Capture vs. Choice step here.

Capturing the Entire User Reply

The Capture step can capture and record a user's entire utterance (user reply) and store it within a selected variable.

Once you’ve selected ‘Entire User Reply’, choose the variable you want to use to store the user’s reply. You can select from your existing variables or create a new one. This lets you use the captured information throughout your project.
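In effect, capturing the entire reply stores the raw utterance under a variable name. A minimal sketch (the variable store here is an illustrative assumption):

    // Illustrative sketch: storing the entire user reply in a variable.
    const variables: Record<string, string> = {};

    function captureEntireReply(utterance: string, variableName: string): void {
      variables[variableName] = utterance;
    }

    captureEntireReply("Ada Lovelace", "user_name");
    console.log(`Nice to meet you, ${variables["user_name"]}!`);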

Capturing Specific Entities

Alternatively, you can choose to capture entities within your user’s response. This option lets you extract specific pieces of information out of your user’s utterance (e.g. name, plan type, country).

Note: These entities are the same ones that exist in your interaction model and can be reused in intents, or referenced in output steps like Speak or Text.

Adding utterances

Once you’ve selected the entity you want to capture, ensure you add a few sample utterances for the entity. This helps the machine learning model identify different ways the user might say the entity.

Tip: As this is utterance handling and capture of expected user responses containing an entity, ensure that you are using the entity itself and its variations in your sample responses/utterances. Note that this is different from populating the synonyms/slot types under your NLU Model (M) for the entity itself.
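For example, sample utterances for a colour entity might embed the entity placeholder directly. The placeholder syntax below is illustrative; check how entities are referenced in your own project:

    // Illustrative sample utterances for capturing a colour entity.
    const colourUtterances: string[] = [
      "my favourite colour is {colour}",
      "I like {colour} the most",
      "{colour}",
    ];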

Configuring entity prompts

In some cases the user’s response may not contain the entity you want to capture. Adding an Entity Prompt lets your assistant ask the user for the required information.

For example, let’s say we want the user to provide their favourite colour. If they respond instead with ‘hello’, this will trigger the entity prompt and the assistant will request a valid response from the user.

Configuring No Match

There may be instances where the user says something completely unrelated to what you are trying to capture. Handling these cases lets you provide a better experience when the user’s input isn’t understood.

For these cases, you can guide your users to an alternative conversation path with a ‘No Match’ response, under the Capture step.

Under the 'No Match' portion of your entity capture, you can choose whether you want to reprompt your users and/or configure a fallback path.

To add reprompt options, select 'Reprompts' as your No Match type. You can add additional reprompts by hitting the 'System' or 'Audio' (Voice Assistant projects) or 'Text' (Chat Assistant projects) buttons.

To connect a 'No Match' to a conversation path, select 'Path' as your No Match type. This will let you select the conversation path for your fallback. You can also rename the label of your No Match path by selecting the Path option on the No Match section of your Capture step.

No Reply

If the assistant does not hear your user's response, or the user's response is unintelligible, the No Reply Response occurs.

To define your No Reply response, access the more options menu by clicking the ellipsis (...) button, and select 'Add No Reply'.

Tip: You can configure your No Reply response message, the time delay, and connect it to a conversation path similar to the No Match workflow outlined above.

Adding multiple entity captures

When you’re having a conversation with another person, you might provide or request several pieces of information at once. With the Capture step, you can add multiple entities per step to extract additional information.

Tip: You may end up in scenarios where you expect your user to provide, or they attempt to provide, multiple pieces of intended capture information in one response. For example, asking for name and email, name and confirmation number, or email and tracking number, depending on the context of the question.

You can use the Capture step to collect multiple entities in a single step. To add another captured entity, hit ‘Add Capture’.

Note: You can configure prompts for each of your captured entities. This ensures that the assistant captures all the necessary information in the right entity and slot type.

Each entity can have a prompt attached to it, so if the user doesn’t fill all the entities, we can ask for each one individually before moving on with the flow, ensuring all necessary information is captured in the right place.
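That per-entity prompting can be sketched as a simple slot-filling loop: ask for the first unfilled entity until everything is captured. The shapes below are illustrative assumptions:

    // Illustrative sketch of multi-entity capture with per-entity prompts.
    interface EntityCapture {
      entity: string; // e.g. "name", "email"
      prompt: string; // asked while this entity is still missing
      value?: string; // filled from the user's utterance
    }

    function nextPrompt(captures: EntityCapture[]): string | null {
      // Ask for each unfilled entity individually before moving on.
      const missing = captures.find((c) => c.value === undefined);
      return missing ? missing.prompt : null; // null: all captured, continue the flow
    }

    const captures: EntityCapture[] = [
      { entity: "name", prompt: "What's your name?", value: "Ada" },
      { entity: "email", prompt: "What's your email address?" },
    ];
    console.log(nextPrompt(captures)); // "What's your email address?"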
