Archive for the ‘Research Thoughts’ Category
Are you concerned that conducting research too far in advance might invalidate the insight? Or that you'll gather insight prior to launch but then lack the resource to implement the changes?
These are the kinds of concerns we hear regularly from clients when user research is considered. The budget will only allow for one round of research, so the timing of that research becomes a worry.
But the reality is, there is no bad time to test. All too often, research is reactive rather than proactive: once a problem occurs, alarm bells ring and one big piece of research is commissioned. However, in order to adapt to changes in behaviour and pre-empt problems, we recommend breaking research down and gathering customer insight little and often. Below we look at the entire design process from discovery through to post-launch, suggest how research could be implemented at each stage, and address some key concerns.
Stage 1: Discovery

This is usually the point at which you’ve decided you want to understand the user’s needs. You may have a product in mind and want to understand the user needs it could address. Or you may have a live product and want to understand more about how it’s used and how it could be improved. Some of the key things you might want to find out at this stage are:
- How your current or proposed product or service solves the needs of its users
- Who your users are, and what distinct groups exist amongst them
- Whether the assumptions you hold about users and their needs are valid
To do this, there is a range of research methods you can use. These can be broken down into research with users and desk research, which require varying levels of time and budget:
Research with users
- In-depth and contextual interviews – requires no development, and allows you to understand user wants and needs
- Competitor usability testing – understand the user experience on competitor sites to raise opportunities and problems to avoid
- Personas – based on research, personas allow you to identify the profiles of your users and represent their needs, motivations and frustrations
- User journey mapping – involves identifying user goals and visualising the process users go through to reach them. Again, this requires no development or functionality
- Competitor analysis – identifying what your competitors are offering, how your current product compares, or understanding gaps in the market
But unfortunately, investing in research early in the design process is still a concern for some clients. A common question we receive is:
What if the research is so far in advance of launch that findings become obsolete?
There’s really no such thing as research being too far in advance. Only by testing early will you be able to validate your concept and understand the changes that need to be made. These will then help to shape your programme of work and show where additional research should be gathered to support development. Even at launch, you may have come a long way since that initial discovery research, but you wouldn’t have got to where you are without it.
Stage 2: Design and iteration

By this point, you have a clear set of goals and any assumptions have been validated. With a concept in mind and some evidence to support the user needs, stage 2 involves considering content, getting lo-fidelity designs in front of users, and iterating on those designs until they are refined.
Some of the research methods you may use at this stage include:
- Card sorting and tree testing – focus on information architecture, and allow users to input into the organisation and structure of your content
- Designing and testing prototypes of increasing fidelity – involves the iterative development of prototypes (from lo-fidelity wireframes, to fully functioning prototypes), to refine designs
- Targeted research focussing on single features – full development of a single feature in order to observe user interaction and refine accordingly
By this stage, the prospect of moving forward can seem quite daunting. You have lots of elements that need addressing but you’re unsure how to prioritise, and how much to bite off at once. A concern we often hear is:
What if there’s too much to change before the next sprint? How do I prioritise what we design and test first?
Although daunting, the key at this stage is to plan ahead. Consider what needs addressing and order these issues based on how likely they are to impact the user experience. From this, you’ll have a good idea of what to focus on first. If you’re still unsure, discuss your ideas with your team, or even with us – based on the results of research or an expert review, we can help you to prioritise what to work on.
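To make the ordering step above concrete, here is a minimal sketch of one common way to rank issues: scoring each by likelihood times severity. The issue names, field names, and scoring scale are purely illustrative assumptions, not part of any SimpleUsability process.

```python
# Illustrative sketch: ranking usability issues by expected impact on the
# user experience, using a simple likelihood x severity score.

def prioritise(issues):
    """Return issues sorted from highest to lowest expected impact."""
    return sorted(issues, key=lambda i: i["likelihood"] * i["severity"], reverse=True)

issues = [
    {"name": "Checkout button hard to find", "likelihood": 5, "severity": 4},
    {"name": "Footer link typo",             "likelihood": 2, "severity": 1},
    {"name": "Form errors unclear",          "likelihood": 4, "severity": 3},
]

for issue in prioritise(issues):
    print(issue["name"])
```

Any scheme that forces a consistent ordering will do; the point is to make the prioritisation explicit rather than ad hoc.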
Stage 3: Pre-launch

As the final stage before your new product goes live, this is when you should be testing the full user journey to uncover any last-minute tweaks or bug fixes.
At this stage your research methods may include:
- Full user journey testing – in-depth usability sessions allow you to assess the full flow, and identify sticking points
- Eye tracking – used in usability sessions to observe natural interaction with the interface, helping you to understand how users navigate the page and make journey choices
This should identify some of the high-priority fixes that may need to take place before launch, but also some less severe issues which can be built into a backlog and addressed post-launch. With that in mind, a key concern at this stage is:
What if I don’t have the development resources to make recommended changes before launch?
If research has been taking place throughout the development process, then, in theory, there should only be minor changes needed at this stage. But we know things don’t always run as smoothly as anticipated, so in these situations, it’s a matter of prioritising the work. After delivering a project at SimpleUsability we often work alongside clients to make recommendations and help to prioritise how and when these recommendations should be implemented. This way you can get those essential problems fixed and begin building a backlog of issues to work through post-launch.
Stage 4: Post-launch

Now your product is live, but there are likely to be some teething issues that need addressing, and you might want to start working through that backlog. Your product should be a work in progress, so conducting research little and often can be useful to check everything is working as it should and to test any new changes that might be going live.
For this reason, the most appropriate research methods at this stage are:
- Regular usability testing – to check the end-to-end user journey is still working effectively, and to explore use with different user groups and functionality
- Eye tracking – to observe natural interaction with the interface, understand how it’s navigated, and what is missed
- Expert review – evaluating your interface to understand likely problems encountered by users
- Competitor analysis and benchmarking – identifying what your competitors are offering, and understanding how your current product compares
With regular use of these methods, your product should be constantly improving. However, post-launch research often raises concerns relating to cost when the live product seems to be functioning perfectly fine. For example:
I know my site is working now, so why should I test if it’s not broken?
‘If it ain’t broke, don’t fix it.’ Unfortunately, this is something we usually hear when research is believed to be wasteful or unnecessary. In reality, no one should wait until something is broken to fix it. With competition and innovation moving quickly, you should constantly be pushing to make your product better: competitors may be implementing changes, and the industry may be shifting, and you don’t want to be at the back of the pack.
We can’t ignore internal concerns and constraints, but getting your peers on board and conducting research little and often is an essential way to stay in touch with users and improve your product. It’s important that research is not purely reactive, commissioned only once a problem needs to be addressed, because by that point fixes require a lot of hard work and are expensive. Instead, research should be a routine part of your design strategy, implemented across the discovery, iteration, pre-launch, and post-launch stages. It doesn’t have to be days of testing with a fully functioning website; little and often should provide the feedback you need for success!
You’re probably thinking: what does Toto have to do with user research? This method of research gets its name from the story by L. Frank Baum, in which the Wizard fools everyone by creating a powerful vision of himself using a set of controls, while he hides behind a curtain concealing the reality.
What is Wizard of Oz testing?
The Wizard of Oz method allows a user to interact with an interface without knowing that the responses are being generated by a human rather than a computer: someone behind the scenes is pulling the levers and flipping the switches.
The Wizard of Oz method allows researchers to test a concept by having one practitioner – the ‘Moderator’ – lead the session face to face with each user, whilst another practitioner – the ‘Wizard’ – controls the responses sent to the user via the chosen device. A classic example, from IBM, tested the concept of a ‘listening typewriter’: the user sat in one room talking into a microphone while the ‘Wizard’ sat behind the scenes typing what the user said, so that it appeared on the user’s screen as if the computer had done it.
As UX researchers we often remind people to test systems at every stage of development, and that includes testing before development has even begun. This can save time, money and those ever so embarrassing moments when products are launched before they are fit and ready for users.
The Wizard of Oz methodology allows you to test users’ reactions to a system before you even have to think about development. This could be a new concept you are unsure will work for your users, or a project that would require a substantial amount of effort to create, where you want to learn more before it makes sense to invest the time and money, and where it cannot be tested with the usual prototyping tools. Wizard of Oz is a flexible approach that allows concepts to be tested and modified without having to worry about potentially tiresome code changes, breaks in a daily testing schedule or full development costs.
Prototype, prototype, prototype! The easiest way to conduct Wizard of Oz testing is to build a simple and easy-to-use prototype that allows the ‘Wizard’ to react quickly to the user’s gestures or actions with the designed response in a single click.
As with any methodology, creating a Wizard of Oz prototype starts with determining what we want to test or explore. Then we need to figure out how to fake the functionality needed to give the user a realistic experience from their viewpoint. For example, you could prototype a jukebox without building the mechanics, and use a hidden person to play the selected songs to the customer.
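The ‘Wizard’ side of such a prototype can be as simple as a lookup from observed user actions to canned responses, with a safe fallback so the illusion never breaks mid-session. The sketch below is hypothetical: the action names and response texts are invented for illustration, not taken from any real study.

```python
# Hypothetical sketch of the 'Wizard' console in a Wizard of Oz test:
# each observed user action is mapped to a scripted response, so the
# system appears to react automatically.

CANNED_RESPONSES = {
    "press_1": "You have selected billing. Please hold.",
    "press_2": "You have selected technical support. Please hold.",
    "say_balance": "Your current balance is ten pounds.",
}

def wizard_respond(action):
    """Return the scripted response for a user action, with a fallback
    reply for anything the script does not cover."""
    return CANNED_RESPONSES.get(action, "Sorry, I didn't catch that. Please try again.")

print(wizard_respond("press_1"))   # scripted path
print(wizard_respond("mumble"))    # fallback keeps the illusion intact
```

In practice the ‘responses’ would be pre-recorded audio files or screen updates triggered by the Wizard, but the mapping logic is the same.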
Our experience with the Wizard of Oz methodology
At SimpleUsability we have used the Wizard of Oz method when testing IVR and SMS systems. For IVR, we saved development time and money by creating an Axure prototype with integrated voice files that played back to the user once they had selected the option most suited to their needs. This allowed our clients to see, in real time, how users got on navigating their IVR systems, and which prompts had a negative effect on the user’s journey. It also allowed us to run a more complex project, trialling different versions of the same prompt within a single project to see which prompts worked best for a specific audience.
We have used the ‘Amazon Polly’ automated voice generator to produce voice files that are quick to make and consistent. This sped up the project because we didn’t have to wait around for voice files to be recorded by a human. It also saved the cost of having voice files professionally recorded before testing had shown which versions worked best.
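For readers curious what that workflow looks like in code, here is a hedged sketch using the AWS SDK for Python (boto3) and Polly’s `synthesize_speech` operation. The helper function, prompt text, and choice of the ‘Amy’ voice are our own illustrative assumptions; the live call is shown in comments because it requires an AWS account and credentials.

```python
# Illustrative sketch: generating a consistent IVR voice prompt with
# Amazon Polly. Only the request parameters are built here, so the
# sketch runs without an AWS account.

def polly_request(text, voice_id="Amy", output_format="mp3"):
    """Build keyword arguments for Polly's synthesize_speech operation."""
    return {"Text": text, "VoiceId": voice_id, "OutputFormat": output_format}

# With boto3 installed and AWS credentials configured, the call would be:
#   import boto3
#   polly = boto3.client("polly")
#   response = polly.synthesize_speech(**polly_request("Press 1 for billing."))
#   with open("prompt_v1.mp3", "wb") as f:
#       f.write(response["AudioStream"].read())

print(polly_request("Press 1 for billing."))
```

Because prompts are just text, alternative wordings can be regenerated in seconds, which is what makes trialling several versions of the same prompt so cheap.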
How can we help you?
We have used the Wizard of Oz methodology for clients from a variety of sectors including financial services and telecoms and can tailor the approach to meet varying business requirements.
In the financial services sector, we worked with our client Arrow Global on a broader programme of work to optimise their online portal by providing customers with an additional channel to perform key actions relating to their account. To further encourage channel shift, we explored the current IVR system and the way in which this could be adapted to appropriately direct customers to the self-service portal.
Our work for EE focused on exploring various concepts for enabling customers to take control of their accounts: firstly, to understand whether this was something they would like to do and, secondly, whether they were able to do it using the IVR system. By using the Wizard of Oz methodology at the concept stage, we were able to save EE significant development costs and time, and then help them optimise the flow for customers managing their account over the phone, reducing the need for call centre support.
If you’re looking for guidance on how to recruit research participants, the internet is your oyster. But when it comes to understanding whether you should reuse your participants, you’ll find little advice on best practice. However, here at SimpleUsability and Research Helper we have 17 years’ experience recruiting participants and are happy to share what we’ve learnt. This article outlines best practice for reusing research participants across different methodologies and different project needs.
Should we reuse participants?
Whether we should reuse participants is something clients often ask. “Should we invite the same users in for each round of research so that they can see our product improving?” Or, “should we make sure we recruit a fresh set of eyes?”
There are a lot of questions to ask when considering whether to reuse participants or recruit a fresh sample, and the outcome is likely to depend heavily on the methodology you are using or the type of product you are testing. So first let’s look at the general principles that apply to reusing research participants:
When you should not reuse your research participants
- If the participant has taken part in a research session within the last 6 months. You want to avoid developing ‘professional participants’ who anticipate the types of problems they think they are expected to find and lose the fresh perspective needed. Both the Nielsen Norman Group and the MRS recommend that participants should not be used more than twice in one year, so we find that waiting 6 months before inviting a participant in again prevents them from becoming research masters.
- For iterative tests when the focus is on the ease of use or first exposure to the system. If a user has previously engaged with an earlier version of the site, system, or app, it’s likely they will remember things from the session, so will not approach the new designs with a completely fresh perspective. Avoid reusing participants where this is the case.
- Research on a similar topic. Even if the system itself is different, sometimes users may recall their previous experiences if the research is of a similar topic. For example, if users have previously completed research for home insurance, we would avoid inviting them in for another session relating to any kind of insurance.
- If the participant has previously been excluded from a study. Whether this is during the session itself or when you come to analysis, participants sometimes need to be excluded due to their dishonesty during the recruitment process, limited feedback during the session, or inappropriate responses. It is always important to record which users have had to be excluded, and a good idea to avoid inviting them in again.
- If the participant has previously been unreliable. We’ve all had times when our circumstances change unexpectedly and we have to cancel our plans. However, if a participant makes a habit of cancelling their sessions last minute, you might as well save yourself the hassle and avoid reusing them in future.
When you could consider reusing your research participants
- When carrying out research on an internal system. In certain cases you may be limited to a particular user group. For example, if evaluating an internal system, your user base is restricted to people who work for the company and use the system. In a situation where you can’t physically find alternative users, you will have to go back to the same participants to gain feedback on new designs.
- When your research is targeted towards a restricted target audience. You may want to see how your product addresses the needs of a particular target audience. For example, when testing for accessibility your audience is already restricted, so when coupled with limited time and resource, it may be difficult to avoid reusing participants. However, where possible we would still recommend avoiding anyone who has been recruited within the last six months.
- For studies on the same system that do not focus on ease of learning. If you’re looking to see how a system works over extended use, or how iterative designs can improve it over time then you might want to reuse the same participants. However, it would be best practice to still include some new participants to ensure learnability is not affecting the research findings.
Based on these general principles, our recommendation would be to avoid reusing participants when rounds of research are less than 6 months apart, or for systems requiring fresh user insight. However, as we’ve highlighted, there are exceptions where reusing participants is appropriate, such as when you are dealing with a restricted user base or wish to carry out a longitudinal study. Ultimately, these guidelines should be weighed on a project-by-project basis to decide whether to reuse participants or recruit new ones for each round of research.
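The general principles above can be condensed into a simple eligibility check. The sketch below is illustrative only: the field names are invented, and the six-month window follows the rule of thumb described above (with the restricted-audience exception applied).

```python
# Illustrative sketch of the participant-reuse guidelines as code.
from datetime import date, timedelta

REUSE_WINDOW = timedelta(days=183)  # roughly six months

def can_reuse(participant, today, restricted_audience=False):
    """Return True if a past participant may be invited back."""
    # Never re-invite participants who were excluded or habitually unreliable.
    if participant.get("excluded") or participant.get("unreliable"):
        return False
    # Within six months of their last session, only a restricted audience
    # (e.g. an internal system's user base) justifies reuse.
    recently_seen = today - participant["last_session"] < REUSE_WINDOW
    if recently_seen and not restricted_audience:
        return False
    return True

p = {"last_session": date(2019, 1, 10), "excluded": False, "unreliable": False}
print(can_reuse(p, today=date(2019, 4, 1)))   # → False (within six months)
print(can_reuse(p, today=date(2019, 10, 1)))  # → True  (window has elapsed)
```

A real recruitment database would also track the research topic, to catch the “similar topic” case above, but the shape of the check is the same.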
I’ve been watching people buy things on Amazon this week. Whatever the sector, when we’re researching users’ needs, Amazon is a great comparison to include in a session to trigger discussions, because it is often held up as having some of the best interaction designs for helping people find and buy the products they want quickly and efficiently.
But one thing that has come up several times this week is users saying how distracting it is.
We were delighted to go along to the second User Research London conference this week. The first conference in 2017 was small, but it sparked an interest and this year it has grown. Researchers love to research, and that includes learning about how others are doing it and thinking about it. So this conference saw many more come together to listen, watch, talk and learn.
The day offered a mix of longer and shorter talks from speakers from around the world, from huge organisations and tiny consultancies. A common thread that came up again and again was deliverables. Not just what we deliver, but how we deliver it and how we shape our research to deliver it in order that we make the most impact we can from our research work.
Within UX research, focus groups get a bad rap. Often this can be justified (for a number of reasons we will come on to explore), but the shaming is not always deserved. Used for the correct reasons and facilitated in the correct manner, focus groups can become another useful tool in your methodological toolbox.
This article sets out to explore the pros and cons of focus groups within UX research, drawing on how and when we use focus groups here at SimpleUsability for context.
‘What we say and what we do are very different’
If you work in UX, this will be a phrase you are very familiar with, and for very good reason. It is well established and evidenced that, as human beings, we are not good at predicting our own behaviour. This is one of the key drivers behind the stigma associated with focus groups: they are often used when they shouldn’t be, when a more suitable methodology is available, and so the results may be misleading.
NUX Camp is now a fixture on the Leeds Digital Festival schedule, ensuring UX, and the skills and techniques to do it well, are part of the North’s largest digital festival. Two of our lead UX practitioners, Amy Martindale and Natalie Crook, went along for the day and tell us here about the highlights and takeaways from the day.
With the growth of Alexa and Google Home, ‘voice’ is the topic everyone is talking about. Researchers and designers alike are trying to find ways to build new voice interfaces and make them work for users still new to the devices and the opportunities they offer. Natalie shares what she learned from the two workshops on designing for voice, the first from Hilary Brownlie and the second from Graham Odds.
Often thought to be a contentious relationship, Bolser and SimpleUsability came together for a presentation at LDF 2018 to share how user testing and Agile can be integrated throughout a project lifecycle.
Our presenters were Dr Lucy Buykx, Senior UX Practitioner, and Amy Martindale, Lead UX Practitioner, from SimpleUsability, together with Bolser’s Hanneka Kilburn, Head of Design, and Theo Wrightman, Scrum Master. SimpleUsability and Bolser work with some of the world’s biggest brands providing complementary services: SimpleUsability is a behavioural research agency that has evolved a robust and insightful UX research methodology built on trusted psychology principles and innovative technology, while Bolser is a full-stack agency that has used an Agile framework with Scrum since 2013.
When was the last time you bought something online? Did it go as expected? How did the process make you feel?
Previously, we have looked at how UX design can benefit from storytelling elements, but in this article, we will briefly discuss how to use storytelling ideas when organising your test sessions to get the most value out of a participant.
As humans, we naturally process information in stories. They are key to engrossing us and helping us to understand other people’s ideas. Because of this, they are excellent for capturing and retaining participants’ attention, dropping them into a scenario and frame of mind that aids our research and, in appropriate methodologies, improving their recall. Stories also help us, as researchers, to differentiate between all the testing sessions we have carried out in a day – the structure of our sessions may be the same, but the stories will always differ.
You say ‘to-may-to’, I say ‘to-mah-to’
You often hear the terms ‘user research’ and ‘market research’ used interchangeably within companies, and product teams often debate the differences between the disciplines. Both contain the word ‘research’ and both aim to understand the ‘user’ or the ‘consumer’ – so, what’s the difference?
At best, people try to distinguish them by the data they work with: “user researchers are the qualitative guys and market researchers are the quantitative guys.” However, this stereotype is incorrect and can be damaging. Just as any worthwhile UX research agency will use a combination of qualitative and quantitative methods, so will a market research agency.
This article sets out to explore the blurring line between the two disciplines, their shared similarities, their differences, and ultimately understand if there is a need for a hard classification. All that matters is the user, right?