FAQs

Read the official frequently asked questions (FAQs) about Voiceflow. Get answers to common product & prototyping questions.

FAQs - Designing

Get answers to frequently asked questions about designing and building on Voiceflow.

How do I convert my Chat Assistant project to a Voice Assistant? What about Voice to Chat?

There's currently no way to do this directly in the Voiceflow Creator tool. That said, there is a conversion tool that will convert a Voiceflow Chat Assistant project to a Voice Assistant, or a Voice Assistant project to a Chat Assistant.

How do I search through my project and its flows?

Currently searching through a Voiceflow project for a specific block type, title, or content is not supported.

However, there are a few alternative ways of navigating through a project that might help:

  1. To quickly find the Start block in a project, press ‘S’ on your keyboard. This will bring you directly to the Start block.
  2. To find specific text within a flow, use Cmd+F (Ctrl+F) to open your browser’s ‘find’ function. Any matching text in the project will appear highlighted.
  3. Clicking on a comment in the commenting menu will jump you directly to where that comment is located. To open the commenting menu or to leave a comment on your project, press ‘C’.
Note: If you have multiple topics and/or components in your project, you can also search through them with the Search bar that appears in the respective section under the Layers menu.

When I go to test my project, what is 'Train Assistant'? Why should I train my assistant?

Training your project and its associated NLU is a critical part of creating a high-fidelity testing experience. In Test mode, the Voiceflow tool notifies you when your assistant needs to be trained.

Generally, we recommend you always train your model after adding or modifying your Intents, Utterances, and Entities. This ensures you’re testing with the most up-to-date model. More documentation on this can be found here.

Why should I add synonyms for an entity? What are they, and why would I use them? What about Intents/Utterances?

To deliver an accurate end-user experience and make your overall interaction model (NLU/NLP) smarter, it is crucial to populate it with expected user responses and appropriate entity/intent classifications.

For optimal performance, you should have at least 5-10 Utterances for each Intent and at least 10 values for your Entities.
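
For illustration, here is roughly what a well-populated intent and entity could look like, sketched as plain Python data. The names, utterances, and values below are hypothetical examples, not Voiceflow's internal format:

```python
# A hypothetical "order_drink" intent with a healthy spread of utterances,
# mirroring the 5-10 utterances-per-intent guideline. Illustrative only.
order_drink_intent = {
    "name": "order_drink",
    "utterances": [
        "I'd like a {size} coffee",
        "can I get a {size} latte",
        "give me a {size} cappuccino",
        "one {size} americano please",
        "I'll take a {size} espresso",
    ],
}

# A hypothetical "size" entity: each value carries synonyms so the NLU
# can map "venti" or "L" back to the canonical value "large".
size_entity = {
    "name": "size",
    "values": {
        "small": ["tiny", "short", "S"],
        "medium": ["regular", "grande", "M"],
        "large": ["big", "venti", "L"],
    },
}

print(len(order_drink_intent["utterances"]), "utterances defined")
```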

What's the difference between entities and variables?

Variables and Entities are conceptually similar but have fundamental differences when designing conversations. Both represent the ability to store information to be recalled and/or categorized at a future time.

Every variable created is by default a global variable, meaning it is accessible project-wide. However, you can also create locally scoped variables, called "flow variables", that only persist within a given flow.

Entities, meanwhile, are slots in your project: the data captured is categorized by type, so the slot knows what kind of data it should be capturing. This is especially useful when you are expecting certain response types from users, such as 'Size' or 'Name', since variables cannot be categorized by slot/response type.

For example, if you are expecting to capture and store a user's email, you would create an Entity centered around that data capture/classification and use a pre-built 'Email' Entity Type.

Typically, you would use variables for background functions in your conversational experience, while entities handle the actual conversation itself.
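
As a conceptual sketch only (plain Python, not Voiceflow's runtime), the difference is roughly that between a free-form assignment and a typed, validated capture:

```python
# Conceptual sketch -- a variable is a free-form store; an entity is a
# typed slot that validates what it captures before storing it.
import re
from typing import Optional

# Variable: any value, no classification. Often set by background logic.
session = {}
session["visit_count"] = 3  # e.g. incremented by a Set step

# Entity: the captured text must match an expected type before it is
# stored. Here, a stand-in for the pre-built 'Email' entity type.
EMAIL_PATTERN = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def capture_email(user_input: str) -> Optional[str]:
    """Return the input if it looks like an email, else None (no match)."""
    return user_input if EMAIL_PATTERN.match(user_input) else None

print(capture_email("jane@example.com"))  # captured
print(capture_email("not an email"))      # None -> would trigger a reprompt
```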

Can I copy/paste steps? How about across projects and workspaces?

You can copy/paste a step or a collection of connected steps across projects and workspaces! Select your step(s) by clicking and dragging, then use the Copy/Paste keyboard shortcuts (Ctrl/Cmd + C and Ctrl/Cmd + V), or right-click and select your desired Copy/Paste action.

When you copy your selection, you will get a toast notification confirming that you are copying from the tool.

When leveraging Copy/Paste please note that:

  • Intents are copied, but Variables/Entities will not transfer over (you will have to manually re-add them)
  • Markup & colors are replicated/transferred over
  • Capture steps will get reset across project types (i.e. Voice Assistant vs. Chat Assistant)
Tip: Check project types when copy/pasting, as certain features are unavailable across them (e.g. APL visuals will not paste from Alexa Skill to Google Action projects).

We are also working on making Components (Flows) reusable across workspaces; this is coming soon.

What are all the different things the Button step can do? Why and when would I need to add an intent to it?

For Chat Assistant projects, the Button Step can be used to prompt users with quick replies and present choices.

When you add a Button Step to your canvas, there will be one button visible in your editor by default. You can add additional buttons by clicking Add Button.

You can configure your button labels in the button text input field, and define what happens when it is pressed:

  • Path - this adds a port to your Button Step so it can be linked to another part of your flow. In the Prototype and Test Tool, when the button is clicked, it will navigate the user down that path in the flow.
  • Intent - selecting an intent from this dropdown will match that intent when the button is pressed. If you have the Path option enabled, it will still follow the path you have linked. If you do not have the Path option enabled, it will navigate the user to any open Intent Steps where the intent is set (see the sketch below).
  • URL - opens a new tab with that URL when the button is clicked. It can be combined with the Path or Intent option to continue navigating the user through your project.

Further information on how to create Button steps can be found here.
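
These options can combine, and Path takes precedence over Intent when both are set. Here is a conceptual Python sketch of that click-handling logic; the button fields and labels are hypothetical, not Voiceflow internals:

```python
# Conceptual sketch of how a button's Path / Intent / URL options combine,
# based on the behavior described above.
def handle_button_click(button: dict) -> str:
    actions = []
    if button.get("url"):
        actions.append(f"open new tab: {button['url']}")     # URL, if set
    if button.get("path"):
        actions.append(f"follow path: {button['path']}")     # Path wins over Intent
    elif button.get("intent"):
        actions.append(f"match intent: {button['intent']}")  # fallback routing
    return "; ".join(actions) or "no action configured"

print(handle_button_click({"label": "Pricing", "url": "https://example.com"}))
print(handle_button_click({"label": "Order", "path": "order_flow"}))
print(handle_button_click({"label": "Help", "intent": "help_intent"}))
```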

When does a project trigger the No Match reply?

If you add a No Match reply to your project, it is triggered when the assistant (your project experience) cannot find a matching choice for the user’s input (within the Button or Choice step).

In order to create a No Match reply:

  1. Click on the No Match section at the bottom of your Choice or Button step
  2. Choose how you’d like to treat no matches:
  • To add reprompt options, select Reprompt as your No Match type. You can add additional reprompts by hitting the Text button (Chat Assistant projects).
  • To connect your No Match to a conversation path, select Path as your No Match type. This allows you to select and connect the conversation path for your fallback.
  • To have a number of reprompts and then send the user down a certain path, choose both Reprompt and Path (a sketch of this behavior follows these instructions).

A full breakdown of these instructions on how to create a No Match reply can be found here.
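
The combined Reprompt + Path behavior is easiest to picture as a retry loop. Below is a minimal Python sketch of that control flow; the choices, messages, and reprompt limit are hypothetical:

```python
# Minimal sketch of "reprompt N times, then fall back to a path".
# Not Voiceflow code; it only illustrates the control flow.
VALID_CHOICES = {"pepperoni", "margherita", "hawaiian"}
MAX_REPROMPTS = 2  # hypothetical limit

def handle_choice(replies):
    """replies: a sequence of user inputs (stands in for live input)."""
    for attempt, reply in enumerate(replies):
        if reply.lower() in VALID_CHOICES:
            return f"path:{reply.lower()}"      # matched -> follow that path
        if attempt < MAX_REPROMPTS:
            print("Sorry, which pizza did you want?")  # Reprompt
        else:
            return "path:no_match_fallback"     # Path, once reprompts run out
    return "path:no_match_fallback"

print(handle_choice(["anchovy", "uh", "margherita"]))  # path:margherita
print(handle_choice(["anchovy", "uh", "what"]))        # path:no_match_fallback
```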

How do I use numbers in an utterance? What is the best way to capture numbers?

In voice assistants and conversation design, the best practice is to spell out an expected utterance/response the way it is enunciated.

For numbers, you can either use the pre-built number entity/data type, or, in Voice Assistant projects, spell out the numbers as you expect the user to pronounce them. If you are storing/capturing numbers in Voiceflow, it is highly recommended that you capture them via the number entity data type.
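
For example, a hypothetical ticket-ordering intent could pair spelled-out voice utterances with a number entity for storage (illustrative only, not Voiceflow's internal format):

```python
# Hypothetical utterances for a Voice Assistant "buy_tickets" intent.
# Spell numbers out the way users say them...
voice_utterances = [
    "I want two tickets",
    "get me twenty five tickets",
    "book one ticket please",
]

# ...but capture/store the value with a number entity, so the slot
# normalizes "twenty five" to 25 for later logic.
entity_utterances = [
    "I want {ticket_count} tickets",
    "book {ticket_count} tickets please",
]

for utterance in voice_utterances + entity_utterances:
    print(utterance)
```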

How do I capture information in a Choice or Buttons step? How is this different from the Capture step? When should I use one instead of the other?

In Voiceflow's Creator tool, there are multiple ways to save or handle user information, and you can use Capture and/or Choice steps in multiple scenarios. We've provided a full walkthrough with best practices on when to use the Capture vs. Choice step here.

What are all the ways I can bulk import utterances?

You can now bulk import utterances from an external file into your Voiceflow project.

There are two ways to do this in Voiceflow:

a) Under NLU Model you can bulk import utterances to your Intents and/or Entities via the Bulk Import button

b) Directly from your Intent step, or any other step on your Canvas that calls on Intents such as Choice steps

With both options, you can upload utterances via the in-line editor or a CSV file. More documentation on this process can be found here.
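
If you go the CSV route, the file is typically just one utterance per row. Here is a minimal Python sketch that writes such a file; the exact column layout Voiceflow expects is an assumption here, so verify it against the linked documentation:

```python
# Sketch: generate a CSV of utterances for bulk import, one per row.
# The exact format Voiceflow expects may differ -- check the docs.
import csv

utterances = [
    "what's the weather today",
    "is it going to rain",
    "give me the forecast",
    "how hot is it outside",
]

with open("weather_intent_utterances.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for utterance in utterances:
        writer.writerow([utterance])

print(f"Wrote {len(utterances)} utterances to weather_intent_utterances.csv")
```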

Why are my Logic Step(s) not pulling my entity?

There are a number of reasons why your Logic step(s) may not pull your entity correctly. One of the most common errors is that Logic steps are case-sensitive; this means the entity input needs to match exactly what the entity captures in its slot(s) and associated synonyms.
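
Here is the gotcha expressed in Python terms (the entity name and values are hypothetical examples):

```python
# The comparison a Logic/Condition step performs is exact, so casing matters.
captured_size = "Large"   # what the {size} entity actually captured

print(captured_size == "large")  # False -- the condition silently fails
print(captured_size == "Large")  # True  -- matches the slot value exactly

# If casing can vary, normalize before comparing (e.g. via a Set step).
print(captured_size.lower() == "large")  # True
```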

What is the difference between open intents and closed intents? Why would I use them?

Open (global) intents can be accessed from any part, topic, or flow of your project. Closed (local) intents are only accessible in a specific topic and/or component/flow, and cannot be invoked or accessed anywhere other than that section of the project. You would leverage these features as one way of ensuring intents don’t conflict with each other.

With the new Topics & Components (Flows) update, you can easily localize your project intents and control whether those intents can be accessed from other parts (topics) of your project. Full documentation on this can be found here.

As a designer, how do I represent logic, API calls, or some sort of back-end data in my design (without needing to actually connect an API)?

To demonstrate an API or backend call on your canvas, you can use Text Markup, the Set step, or a real API step. We've demonstrated how to use each of these in this video resource.
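
For example, rather than making a live API call, you can hard-code a 'response' and have a Set step pull fields out of it. A rough Python analogue, with an invented payload and variable names:

```python
# Sketch: faking a backend response in the design, no network required.
# In Voiceflow you would do this with a Set step; here it's plain Python.
mock_api_response = {
    "user": {"first_name": "Jane", "loyalty_points": 1200},
    "store": {"nearest": "Downtown", "open": True},
}

# The "Set step": pull fields out of the mock payload into variables
# that the rest of the flow can reference.
first_name = mock_api_response["user"]["first_name"]
nearest_store = mock_api_response["store"]["nearest"]

print(f"Hi {first_name}, your nearest store is {nearest_store}.")
```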

What's the difference between using a Set step and guided navigation?

In Voiceflow, the Conditions step allows you to add logic to your flows. The logic in Voiceflow is governed by variables. The Set step allows you to manually set the value of a variable in your project. You can learn about Logic, Variables, and how the Set step works here.

To use a Logic step without needing to include or set up your variables, you can turn on the guided navigation setting by following the instructions found here.
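
Put together, a Set step followed by a Conditions step behaves roughly like an assignment followed by an if/else. A minimal Python sketch, with hypothetical variables and thresholds:

```python
# Sketch of Set + Conditions: set a variable, then branch on it.
# Plain Python standing in for Voiceflow steps.
ticket_count = 3          # Set step: assign a value to a variable
price_per_ticket = 12.50
order_total = ticket_count * price_per_ticket  # Set step with an expression

# Conditions step: route the conversation based on the variable.
if order_total > 30:
    print("That qualifies for free delivery!")   # path A
else:
    print(f"Your total is ${order_total:.2f}.")  # path B
```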

What is the best way to design for a DTMF (dial-tone) experience within Voiceflow?

You can now prototype and build DTMF (Dual-Tone Multi-Frequency, or dial-tone) experiences within Voiceflow by following the series of steps found here.
