In our roles as user experience practitioners, we are regularly asked to test with prototypes, ranging from low-fidelity paper prototypes through to high-fidelity, fully interactive pre-launch prototypes. Clients often ask how their prototype compares with others, and how different aspects will work with or affect the testing. So, here are our top five tips for building a prototype that gets the best out of the research.
Content and design
Let’s start with the content and design of the prototype. Although the level of detail included in a prototype varies with its fidelity, at every fidelity the content and design need to support what you are trying to find out.
1. Make them relevant, believable and consistent
When building prototypes for testing, always have the user profile in mind. As you add content, ask yourself: who is my user, who will be testing it, and is the content relevant to them?
For example, if you’re designing a banking app aimed at a general demographic, don’t set the account balance in the prototype to £1 million. Chances are your users won’t have that much money, and the figure will just distract them from the feature or journey you are trying to test.
Similarly, it’s important to keep your content consistent across the different pages of the journey you’re testing. Users notice details, and when these change they can get confused. If you’re testing the flow of a multi-stage form, there’s nothing worse than starting out as John and then realising you have become Jane.
When users get distracted by unrealistic or inconsistent content, there’s a chance they’ll switch off, and we lose the opportunity to observe natural behaviour and learn what’s really important in the session.
2. Be wary of the use of colour
While a colourful prototype may make stakeholders happy, it’s important to ask yourself what you need to find out from testing the prototype and whether the extra effort of adding colour helps or hinders this.
Sometimes adding a bit of colour can lead to results that don’t hold up in the final designs. For example, using colour for calls-to-action in an otherwise monochrome prototype, as shown in the image above, will draw users’ attention, and results will suggest they find it easily. Later on, as the design develops, the same coloured calls-to-action may not be as noticeable in the full-colour design.
3. Turn off design features before testing
Prototyping tools include many features to help designers collaborate and develop designs, such as highlighting where links have been added. These are useful when discussing a design, but they will confuse users asked to work through a journey, and may lead to unexpected results.
For example, InVision includes ‘hotspots’ that flash blue to show which areas of a page are clickable. This is useful during the design discussion stage, but if you want to find out whether users can find the call-to-action, the blue hotspot flash gives it away, and as a result you don’t see the natural behaviour of where users would expect to find it in the design.
4. Test how the prototype will work on the device you’re testing on
Prototyping tools offer a range of ways to deliver the prototype for testing. Some generate HTML pages served from a server; others let you bundle several user journeys and export a package for local use on a mobile or tablet. This is convenient, but it can cause problems in user testing sessions.
One problem is that large files can take a long time to load, risking users becoming disengaged. Another arises when multiple journeys are included in the prototype: users become confused if they accidentally swipe between them and see pages unrelated to the journey you’re trying to observe.
5. Observe, learn and iterate, but not too often
It can often be difficult as a designer to sit and watch users struggle with your design, and after seeing several of them fail with your prototype you may want to take action. We would recommend not changing the design in response to every user, because that makes results hard to compare: each user will have seen a slightly different variation of the design.
Having multiple testing days gives you the opportunity to iterate across the days: if something didn’t work on day one, try testing a solution on day two. This lets you confirm the problem affects several users before you solve it, and gives you the chance to test your solution.