Customer Story

KQED's interactive podcast series is the first of its kind

Mark Ammendolia
February 19, 2020

KQED reimagines storytelling with new interactive podcast

KQED's new interactive podcast series The Voicebot Chronicles is far from your typical listening experience. The series uses a unique combination of reporting, storytelling and interaction to explore the complex relationship humans have with virtual assistants: the talking machines inhabiting millions of homes throughout the world.

Interacting with voice-enabled devices can be a tiring endeavor. Why doesn't Alexa understand me? Does Google need me to talk louder?

What makes The Voicebot Chronicles so unique is that it tries to answer fundamental questions about the human need to be understood - all while empowering the user to play a role in the investigation.

Each story encompasses numerous interactive elements. Lowell Robinson, Senior Producer of Voice and AI at KQED and the producer of the series, simply calls it "a story about voice that is conducted through your voice."

Though there are many types of interactive voice experiences, we have yet to see a major news outlet create an experience outside of flash briefings, quizzes or games. The Voicebot Chronicles breaks that mold by blending traditional reporting with rich user interaction, so listeners can grasp not only how our 'voicebots' understand us but, more importantly, the inner workings of good conversation and the complexity involved in making it happen.

How KQED used Voiceflow to experiment with new ways to engage

The team at KQED used Voiceflow to help bring this interactive podcast series to life. It enabled them to easily and effectively prototype new iterations of The Voicebot Chronicles for both Alexa and Google. Using Voiceflow's test tool to run interactive prototypes both in the browser and on an Alexa or Google device, the KQED voice team could interact with their new changes in a matter of seconds.

"What I really like about Voiceflow is very quickly I can make a project or an idea come to life and share it. I can push that code, have it on a device and we can go into a room and listen and interact. I just think the idea of taking a prototype and making it happen that fast is pretty special." - Lowell Robinson

Building dynamic conversations for voice user interfaces doesn't have to be hard — especially when you can build these experiences without having to worry about all that code.

Voiceflow's drag-and-drop canvas allowed the team at KQED to see how the story was developing and where each branching narrative connected with another idea or storyline. By using Voiceflow as part of their tech stack, Lowell and his team could make revisions quickly, better organize and manage dialog flows, and spend more time focusing on the important stuff: the design process.

"One thing that I absolutely love about Voiceflow is that conversations are complex...having a tool like Voiceflow and being able to have a map of where these branches are would be hard for me as a visual thinker to imagine doing it another way. For me, Voiceflow allows me to really focus on the creative aspect and not worry so much about the code management." - Lowell Robinson

An interactive storytelling experience

The Voicebot Chronicles officially launched in early February and is now available on Alexa and Google devices. To experience it on Alexa, say, "Alexa, open the Voicebot Chronicles." You can also find it on the Amazon Marketplace.

To interact with The Voicebot Chronicles on Google Assistant, say, "Hey Google, talk to The Voicebot Chronicles." You can also find it on the Google Assistant directory.

For more information on The Voicebot Chronicles, be sure to check out the following article:
The Voicebot Chronicles: KQED Interactive Series Comes to Alexa and Google Assistant


0:01 - 0:11 → Lowell Robinson
So The Voicebot Chronicles talks about how do machines understand us? How do you feel about having a conversation with an inanimate bot?

0:12 - 0:22 → Chloe Veltman
How we as human beings interact with these voicebots that are increasingly becoming part of our lives. So for example, Siri and Alexa and the Google Assistant.

0:23 - 0:34 → Lowell Robinson
It also has kind of deeper dives into stories with people that probably are pretty different than ourselves in terms of like how they interact with technology.

0:35 - 1:01 → Chloe Veltman
So it's not just the regular KQED kind of storytelling where, for example, you might have a reporter such as myself telling a story on the radio or through a podcast. In this case, the person who experiences this story gets to hear some reporting, but also gets to play a role in the storytelling. There are interactive elements that Lowell designed, so you get to play games that deepen your understanding of the stories.

1:02 - 1:06 → Lowell Robinson
So we had quite a few conversations going back and forth about being understood.

1:07 - 1:13 → Chloe Veltman
And we were wondering just in a very ad hoc fashion about what it would be like to tell stories using some of these devices.

1:14 - 1:52 → Lowell Robinson
At a certain point, we went um...we went to our executive editor and we pitched the story idea to him and because of like Chloe's expertise in terms of like reporting and storytelling and my expertise in terms of building things, there was a lot of trust there to say like, 'we don't know what this is, but you can build it'.

What I ended up using Voiceflow for was wanting to be able to create interactives like on the fly. Voiceflow is really fast that way. With the earlier prototypes, I'm always making something in Voiceflow. I'm always trying it that way. I'm always sort of connecting with the patch editor and seeing like, 'Oh, well what happens if we do this'?

1:53 - 2:32 → Lowell Robinson
Or what if we space it out differently like that. So one of the things that I absolutely love about Voiceflow is that conversations are complex. And when you start to script out conversations and you try to imagine where could this go? Having a tool like Voiceflow and being able to have a map of where are these branches going to and how do you keep these things in check.

It'd be hard for me as a visual thinker to imagine doing it another way. One thing that I learned very early with voice and things that people are doing is that there are people who say they know the answer. I'm not saying that they're lying to you, but there's a pretty good chance they don't know the answer.

2:33 - 2:56 → Lowell Robinson
We don't know the answer. That's the part of being involved in something that's new. For us, it's not just shoehorning existing content into a space where it doesn't belong. It's treating the content that we create so that it's the right match for the medium. And so if we're really interacting with voice, if we're really interacting in the space, what is the right content for that space?

2:57 - 3:28 → Lowell Robinson
And what I work on is trying to take a look at this content and reimagine it and create prototypes that you can share out for people to get a better sense of, like, this is what the future could be. But you need to start somewhere. And I think that for me, it's creating experiences that we can interact with and share and try.

That's what we need to do. We need to make that. We need to see that. We need to share that and put that out there.