5 important things to think of when conducting usability testing of voice interaction using voice controlled assistants.

January 11, 2018 12:29 pm

Over the last year there has been a significant increase in the use of voice controlled assistants such as the Amazon Echo and the Google Home, with an estimated 17 million devices purchased in the last three months alone.

As sales of these devices have grown, more companies are starting to develop systems that work with voice interaction, and we are seeing an increase in the different ways people can use their devices. Some of these are new services, such as making calls or sending messages; others are new channels for existing services, such as asking for the weather, ordering online groceries or even booking a taxi.

As researchers, we’re interested in the usability of these devices and how we can develop different ways of testing voice interaction. The key difference between a mobile app or website and a voice interaction system is that the user has no visual cues. Users have to rely on the audio response given by the device rather than seeing visual feedback, and this is something that must be taken into account when conducting usability sessions with voice interaction.

As this is a new technology, there is not a lot of advice around how to run a session with voice interaction devices and how it might differ from a usability session for a website or app. So, drawing on our experience of testing IVR and voice controlled assistants, here are our five top tips.

1. Use participants with different levels of experience using voice controlled assistants.

When planning sessions, it is important to think about the participants you will recruit to take part. As this is a new technology, it is important to recruit users who do not yet own voice controlled devices as well as more experienced users. By doing this you can see any issues beginners have with the system, while also noting any problems within the user journey for someone who is already familiar with the device and how it works.

2. Introduce the device to users

For beginner users, it’s a good idea to start the session with an introduction to the device and how they can start talking to it. This will allow the user to become familiar and relaxed and settle them before you move onto testing your app or Skill. This introduction will also be useful for users who have experience of voice controlled assistants, but may not have used the device you have chosen to test on.
Below are three things you could do to introduce the device to the user, depending on the participant’s level of experience:

3. Consider providing users with a secondary device such as a mobile phone in the session.

If you have ever used a voice controlled device, you’ll know that the commands needed for the different ‘Skills’ built into the device to work successfully can be quite specific, for example ‘Ask the BBC to play Radio One.’ If you do not say ‘Ask the BBC’, the device is simply unable to complete the task. In this case, it is unfair to expect the user to think of this on the spot and get it right, and setting users up to fail can have a negative impact on research sessions. A number of Skills have an onboarding experience that uses the voice controlled device in conjunction with a smartphone, and this process helps users learn the language of the Skill. So, to minimise any negative impact on the session, even if you are testing the post-onboarding experience, consider providing users with a smartphone they can refer to for the basic language.
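To illustrate why the wording is so rigid, here is a minimal sketch of how a Skill’s invocation name and sample utterances might be defined, loosely modelled on the Alexa Skills Kit interaction model. The Skill name, intent and utterances below are hypothetical and are not the real BBC Skill definition.

```python
# Hypothetical interaction model sketch (loosely based on the Alexa Skills Kit
# interaction model). The invocation name is the phrase the user must say
# ("Ask the BBC ...") before the device will route the request to the Skill.
interaction_model = {
    "languageModel": {
        "invocationName": "the bbc",          # user must say "Ask the BBC ..."
        "intents": [
            {
                "name": "PlayStationIntent",  # hypothetical intent name
                "slots": [{"name": "station", "type": "RADIO_STATION"}],
                "samples": [
                    "to play {station}",      # "Ask the BBC to play Radio One"
                    "play {station}",
                ],
            }
        ],
    }
}

# Anything outside these sample phrases (e.g. just "play Radio One" without
# "Ask the BBC") may never reach the Skill at all, which is why participants
# can struggle if they are expected to guess the wording unaided.
```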

4. Allow users time in the session to use the device as they would naturally

This is important as it allows the participants to use the device in their own time, as they would if they were at home. As users don’t have any visual indicators when using these devices, it can often take longer to begin with, so it can be useful to give users this time to try different commands. As users do this, you can gain insight into the features they expect to be available and ask how they expected those features to work. This also allows you to follow up with questions around how they felt if a command was not available or, if it was, what they thought of the way it worked. This feedback can help with further development.

5. Test at all stages of development.

We all know how important it is to do usability testing throughout the product cycle, whether it be a website or a mobile app, but this is even more important when it comes to new technologies. As more people purchase and try these devices, it is essential that the user journey is ready for them. By testing early in the development process you are able to trial different ideas without the expense of fully developing one only to find that it does not work for users.

Exploratory

Starting testing at this stage allows you to trial different ideas without the cost of developing the system. Testing methods at this stage could include:

  • Command prioritisation: Give users a list of commands and ask them to rank them from high to low according to how likely they would be to use a voice controlled assistant for each one (a short sketch of how you might aggregate these rankings follows this list).
  • Trial commands: Give users a list of commands (these could even be the top-ranked commands from the previous task) and ask them to talk through how they would expect them to work.
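As a minimal sketch of the aggregation step for command prioritisation, the hypothetical snippet below averages the rank each participant gives to a set of candidate commands; the commands and rankings are made up for illustration.

```python
from statistics import mean

# Hypothetical rankings: one dict per participant, mapping each candidate
# command to its rank (1 = most likely to use a voice assistant for it).
rankings = [
    {"check the weather": 1, "order groceries": 3, "book a taxi": 2},
    {"check the weather": 1, "order groceries": 2, "book a taxi": 3},
    {"check the weather": 2, "order groceries": 3, "book a taxi": 1},
]

# Average rank per command, lowest first, to see which commands participants
# would most want to carry forward into prototyping.
commands = rankings[0].keys()
averages = {cmd: mean(r[cmd] for r in rankings) for cmd in commands}

for cmd, avg in sorted(averages.items(), key=lambda item: item[1]):
    print(f"{cmd}: average rank {avg:.1f}")
```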

Prototyping

Testing at prototype stage allows you to build up a basic system without the cost of full development. Testing methods at this stage could be:

  • Wizard of Oz method: Something we have done before when testing IVR systems, the Wizard of Oz method allows you to test commands by having one practitioner – the ‘Moderator’ – lead the session face to face with each user, while another practitioner – the ‘Wizard’ – controls the prompts and responses sent to the user via the device (see the sketch after this list).
  • Voice interaction prototype platform: There are tools available to help you create your prototype. Sayspring, for example, lets you build a voice interaction prototype with commands and phrases specific to your project, and enter the responses that will be played back to participants.
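As a minimal sketch of a Wizard of Oz set-up, assuming the open-source pyttsx3 text-to-speech library and a speaker placed near the participant, the Wizard could type each response and have it spoken aloud. The prompts here are placeholders, not a real Skill’s responses.

```python
import pyttsx3

# Wizard of Oz sketch: the Wizard listens to the participant (in person or via
# a live audio feed), types the response the "device" should give, and the
# script speaks it through a speaker placed next to the participant.
engine = pyttsx3.init()
engine.setProperty("rate", 160)  # slow the voice down slightly for clarity

print("Type the device's response and press Enter (blank line to finish).")
while True:
    response = input("wizard> ").strip()
    if not response:
        break
    engine.say(response)
    engine.runAndWait()
```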

Final system test

Testing at this stage is more about the whole experience and should include cross-platform testing to ensure the user journey from voice controlled assistant to app on mobile or tablet is consistent and easy for users to follow. As these devices are linked to a companion app that provides help with learning new Skills, or the settings needed to ‘enable’ certain Skills, it is important that testing covers this whole journey.

Takeaway

Planning usability sessions with voice controlled assistants is similar to traditional usability testing, so you can draw on past experience to plan sessions, but it is also different, so there are new aspects to consider. Like all usability research, testing should start early in the product lifecycle and continue throughout, but as this is a new technology you should consider including beginners as well as experienced users, allowing time to introduce the device, providing secondary devices to help users become familiar with the language, and allowing a little more time in the session for users to complete the tasks. I’m sure that as this technology grows, so will the ways in which we can test its usability, and we must continue to do this throughout the product cycle to ensure it meets users’ expectations and needs.