Keep up with Voiceflow’s latest product updates and improvements
New AI Set Step, fixes & optimizations!
AI Set Step
Like the Generate Response step (now called AI Response), this step allows you to pass a prompt to GPT. The difference is that the result returned from the prompt is saved to a variable, rather than passed to the user as an Assistant Response. This opens up many new use cases:
- Building logical paths in your project based on prompts
- Recognizing user utterances without building intents (or training the test tool NLU)
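To make the branching use case concrete, here is a minimal sketch of the logic the AI Set step enables: classify an utterance with a prompt, save the label to a variable, and branch on it. The `callLLM` function is a hypothetical stand-in (mocked here for illustration); the real step sends the prompt to GPT for you.

```javascript
// Hypothetical stand-in for the model call the AI Set step makes.
// Mocked for illustration: it extracts the quoted message and keyword-matches.
function callLLM(prompt) {
  const message = (prompt.match(/Message: "(.*)"/) || [])[1] || "";
  return message.toLowerCase().includes("refund") ? "refund_request" : "other";
}

// Build a classification prompt and save the result to a variable,
// instead of training an intent for this.
function classifyUtterance(utterance) {
  const prompt =
    'Classify the user message into one of: refund_request, other.\n' +
    `Message: "${utterance}"\n` +
    "Answer with the label only.";
  return callLLM(prompt); // stored in a variable, not spoken to the user
}

// Branch the conversation on the variable's value.
const label = classifyUtterance("I want a refund for my order");
if (label === "refund_request") {
  console.log("route to refund flow");
} else {
  console.log("route to fallback");
}
```

The key point is that the model output lands in a variable you can route on, rather than being read out as a response.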
How to get started:
- To enable these amazing AI steps, just head over to your project settings and look for the AI Assist section
- Toggle the “Generative AI Steps” option, and you’ll be all set to start using the new AI Set step
Note: both AI steps have been moved into a standalone AI Section in the Steps menu.
Bug Fixes 🐛
- Fixed analytics import
- Fixed UI/UX issues for the new dashboard
- Fixed setting for Kanban/Card view
- Fixed email invite link
- Fixed headers section in API step
- Fixed copy intent block name
- Fixed no versions error message
- Removed AI steps when disabled
- Fixed root topic name
- Fixed Analytics dashboard loading
- Fixed utterance duplicates on create intent modal
- Fixed UI on Safari
Product Updates ⭐
- Search optimization
- SDK Analytics
- Updated publish error messages
- Show cards in transcripts
- Filter prototype tool from Analytics dashboard
- Update project members on workspace member change
- Custom Action step UI optimizations
Temporary Performance Issues
We're experiencing some performance issues where select NLU data may not be visible in projects on the public cloud. Our team is actively resolving this issue.
If you have questions or concerns, please email firstname.lastname@example.org.
UPDATE: This was fixed as of Dec 7th @ 12:30pm. Please email support if you have any new errors that you believe may be related.
Changelog, April 21, 2023: Conversation Memory, DFCX Assistant Importer, and more!
Conversation Memory
- Enable the LLM to use the previous interactions in a session as context for your prompt
- With this release, build more sophisticated, human-like assistant interactions (e.g. asking follow-up questions without needing to provide context again)
- Simplify carrying context forward in your conversation
This feature can be enabled on the Response AI or Set AI Step.
Once you have placed your step, you can configure it in the Editor. In the editor, you will have three different options to prompt your Assistant:
- Use Prompt only: when the Step is hit during a user's session, the Prompt you provide will be the only data passed to the LLM to generate a response.
- Use Memory and Prompt: when hit during a user session, the Prompt you provide will be augmented with the previous 8 turns in the conversation, and the LLM will generate a response from both pieces of data.
- Use Memory only (only available on Response AI): this will pass only the previous 8 turns of the conversation to the LLM and allow it to respond without any guidance from you.
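A minimal sketch of how the three options might assemble the data sent to the LLM. This is assumed behavior for illustration only (the step handles it internally); the mode names and `buildLLMInput` function are hypothetical, while the 8-turn window comes from the release notes above.

```javascript
// The release notes state the previous 8 turns are used as memory.
const MEMORY_TURNS = 8;

// Assemble the LLM input for each of the three prompting options.
// `transcript` is an array of conversation turns, oldest first.
function buildLLMInput(mode, transcript, prompt) {
  // Keep only the most recent 8 turns of the conversation.
  const memory = transcript.slice(-MEMORY_TURNS).join("\n");
  switch (mode) {
    case "prompt":        // Use Prompt only
      return prompt;
    case "memory+prompt": // Use Memory and Prompt
      return `${memory}\n\n${prompt}`;
    case "memory":        // Use Memory only (Response AI only)
      return memory;
    default:
      throw new Error(`unknown mode: ${mode}`);
  }
}
```

For example, with a 10-turn transcript, the "memory" modes would include only turns 3 through 10.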
Documentation on this new release
DFCX Assistant Importer
Bring your DFCX content onto the VF canvas without the need for manual migration
This release will enable you to:
- Bring CX content into VF and start building right away
- Optimize your design system for easy export later
- As a next step, if you’re interested, send us your CX agent and we can run it through our importer, give you a completed design with all your CX content, and then get it all connected back to your production platform
Rasa - simple export script
For those using Rasa in production, this sample adapter can take a Voiceflow design and turn it into a Rasa assistant.
- Find the script in our public open-source repo, available for download
- Documentation on this release
Actions: release for all steps
Add Actions from any port and use them with all of our steps
- To add an Action from a port, draw a line and click the canvas. Access the ‘Actions’ option at the bottom of the line’s Step menu.
Call-outs to note:
- The Actions section will not be visible in all Step editors. You’ll continue to see this section on the original steps from our v1 Actions release, but it isn’t included in the editors of the newly supported steps.
- To access an Action's editor, click into the canvas chip
Path to Code Step
- Previously, anyone using the Code Step needed to pair it with a Condition Step and create one-time-use variables, all just to branch the conversation based on the result of the Code Step
- Now this can all be handled in a single Code Step
- Drag in a Code Step, then add or delete paths in the pop-up modal
- View the updated documentation here
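The change above can be sketched as follows: one Code Step both computes a result and picks a path, instead of writing a one-time-use variable for a separate Condition Step to read. The function and path names here are illustrative, not Voiceflow APIs.

```javascript
// Sketch of the single-Code-Step pattern (illustrative names).
// The same step computes a result and selects a path, so no throwaway
// variable or follow-up Condition Step is needed.
function codeStep(orderTotal) {
  // Any logic you previously wrote in the Code Step...
  const qualifiesForFreeShipping = orderTotal >= 50;
  // ...now chooses a path directly instead of just setting a variable.
  return qualifiesForFreeShipping ? "free_shipping_path" : "standard_path";
}

console.log(codeStep(60)); // takes the free-shipping path
console.log(codeStep(20)); // takes the standard path
```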