How to ensure your Alexa Skill is found, used and loved
Chances are you, or someone you know, received and used an Amazon Alexa or Google Home device on Christmas Day. In fact, so many people found one in their present pile and activated it that morning that the Alexa service crashed at around 10am, just when people wanted it to play a selection of Christmas tunes.
The smart speaker market has doubled, with 9.5 million people using them in the UK in 2018, and eMarketer estimates this will grow to 12.6 million in 2019. So chances are, too, that you or your company has either released, or is working on releasing, an Alexa Skill or a Google Action to take advantage of these new channel opportunities. You won’t be alone: as sales increase, so does the number of Skills, with nearly 30,000 now available in the UK.
This proliferation of Skills and Actions mirrors the early growth of the App Store, and it is essential because it allows developers to hone their technical skills and designers to test new concepts in these new technologies. But it also creates choice paralysis and makes Skill discovery more difficult. This risks the time, money and resources spent creating a Skill being lost because it never gets found or used.
So how do you maximise the chances of your new Skill or Action being found, used and loved?
In “How to Build an Alexa Skill That Your Audience Will Love” Will Ezell says:
- Know who your service is designed for
- Solve a real problem
- Make it a joy to interact with
- Combine usability with utility
- Keep it fresh
And to achieve all that, the answer is good old-fashioned user research.
In fact, it’s just like the research strategy you would plan for a new service on any other digital platform, with a few tweaks for the different mode of interaction. Last year we shared some tips on usability testing with voice-activated devices. As more clients talk to us about Skills, we’ve updated them for 2019 to help you build a research plan for Skills that your audience will love.
As Ezell mentions above, the first thing you need to do is find a real problem that can be solved with voice technology, and to know who will be using it so you can meet their needs.
Discovery research comes in different forms depending on what you need to discover:
- Depth interviews give insight into problems people can retell
- Ethnography or in-home observation allows us to directly observe behaviour to spot gaps and opportunities and understand whether voice would be an appropriate solution
- Competitor testing can also help understand where there is a gap in the market – but if there are no competitors then you may want to consider concept testing as discussed below.
Whatever problem you’re thinking of solving with voice technology, you also need to make sure people will feel capable of using it. Voice-activated devices are still very new, so most people are still novices at using them. Even those who might consider themselves experts have limited experience of the range of things they could do. Most people use Amazon Alexa and Google Home for only a limited range of tasks, such as playing music and asking for the weather [also reported in this Reuters research]. Users are comfortable with small tasks, so pick a small problem and focus your Skill on it, so that users can pick it up and integrate it into their Alexa use with only a small step.
Once you have identified your problem, you need to develop a concept to solve it and test how potential users respond to it. The easiest way to get feedback from your potential users is to put something in front of them and let them use it. But rather than spending time developing prototypes at this stage, you can use a Wizard of Oz approach, with a script and a human standing in for the voice-activated device, such as Alexa.
For testing concepts like this, we use a comfy, home-styled lab for the user to relax in, with our Wizard working behind a two-way mirror and our observers watching a feed in a separate room. This allows the Wizard to respond to the user’s body movements, such as moving away from the device, and lets observers discuss what they are seeing without being heard in the lab. Just like testing a visual app or website, users can be left to interact as they would naturally; we then replay an audio recording to probe for insights into their thoughts as they used the device and their overall impressions of the concept.
By the way, this method also works really well for testing IVR and call-centre scripts and concepts.
At the end of a concept testing session, when your user understands the concept, it’s a great time to explore what commands would be useful or expected. Here you can give users a set of commands and ask them to select the ones they would expect to use or ask them to rank or prioritise them and talk through what they would expect them to do.
Once you know how users respond to the concept, and you have the commands that should be included, you can put them together into a prototype to test flows.
At this stage we still use a Wizard of Oz approach, but at a higher fidelity. A great way to do this is with a voice prototyping platform such as Sayspring. This lets you create commands and phrases specific to your project, and create responses in an Alexa voice to be played back to the user in a testing session.
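At its core, a higher-fidelity Wizard of Oz prototype is just a mapping from the utterances you expect participants to try to the scripted responses you will play back. As a rough illustration only (the intent names, trigger phrases and responses below are hypothetical placeholders, not any tool’s real API), that flow logic can be sketched in plain Python:

```python
# Minimal sketch of a Wizard of Oz command matcher: the Wizard types in what
# the participant said, and the tool suggests which scripted response to play.
# All intents, phrases and responses here are illustrative placeholders.

SCRIPT = {
    # intent name: (trigger phrases, scripted response to play back)
    "start_recipe": (["start a recipe", "help me cook"],
                     "Sure. What would you like to cook today?"),
    "next_step": (["next step", "what's next"],
                  "Step two: dice the onions finely."),
    "repeat": (["say that again", "repeat"],
               "Step two: dice the onions finely."),
}


def match_utterance(utterance):
    """Return (intent, response) for the first trigger phrase found in the
    utterance, or a fallback re-prompt when nothing matches."""
    text = utterance.lower()
    for intent, (triggers, response) in SCRIPT.items():
        if any(trigger in text for trigger in triggers):
            return intent, response
    return "fallback", "Sorry, I didn't catch that. Could you rephrase?"


if __name__ == "__main__":
    # The Wizard transcribes the participant; the tool proposes the response.
    print(match_utterance("Alexa, what's next?"))
```

Even a toy matcher like this is useful in a session: logging which utterances fall through to the fallback tells you which trigger phrases real users expect that your script is missing.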
Skill discovery and onboarding
Alongside building your service and testing prototypes, you also need to test how users will find it and start using it. Over the years we’ve observed that users rarely engage with onboarding processes in apps. Even if they flip through the onboarding screens, they don’t retain much of what was said, and if they do remember something, they don’t know how to reactivate the onboarding to refresh their memory. This is also true for voice-activated devices, and perhaps even more so, because our auditory working memory is shorter.
Onboarding and Skill discovery can be tested by building prototypes as discussed above. However, we recommend using more open tasks in the sessions, to allow us to observe what search terms people naturally use, whether and how they choose to engage (or not) with onboarding, and whether they choose to go back and re-find it. In the retrospective, we then learn why they made those decisions and use these insights to inform your designs so they work more effectively in the wild.
Usability and learnability
Usability testing of a Skill can be done in the lab, or at home if the way people use the Skill is context dependent, such as supporting cooking or shopping in their kitchen. Session plans would cover everything users can do with the Skill, and use intervening tasks so that we can return the user to the Skill later in the session and test whether they can remember the commands.
Finally, even when your Skill addresses a problem you’ve validated, and you’ve tested it from concept through to usability, you will want to know whether it will be one of the 3% of Skills still being used in week two. For that you need a different type of research.
Digital ethnography with a diary study design allows you to capture every interaction with your Skill and also emotions, behaviours and needs of those touchpoints as well as those moments when it could have been used but wasn’t. Every diary study has a unique design but
Want to know more?
We hope you have found our tips helpful and inspiring as you plan your Alexa Skill or Google Action. If you would like to know more about our research services for