Archive for the ‘Research Thoughts’ Category
If you’re looking for guidance on how to recruit research participants, the internet is your oyster. But when it comes to deciding whether you should reuse your participants, you’ll find little advice on best practice. However, here at SimpleUsability and Research Helper we have 17 years’ experience recruiting participants and are happy to share what we’ve learnt. This article outlines best practices for reusing research participants across different methodologies and different project needs.
Should we reuse participants?
Whether we should reuse participants is a question clients often ask. “Should we invite the same users in for each round of research so that they can see our product improving?” Or, “should we make sure we recruit a fresh set of eyes?”
There are a lot of questions to ask when considering whether to reuse participants or recruit a fresh sample, and the answer is likely to depend heavily on the methodology you are using or the type of product you are testing. So first, let’s look at the general principles that apply to reusing research participants:
When you should not reuse your research participants
- If the participant has taken part in a research session within the last 6 months. You want to avoid developing ‘professional participants’ who anticipate the problems they think they are expected to find and lose the fresh perspective needed. Both the Nielsen Norman Group and the MRS recommend that participants take part no more than twice in one year, so we find that waiting 6 months before inviting a participant back prevents them from becoming research masters.
- For iterative tests when the focus is on the ease of use or first exposure to the system. If a user has previously engaged with an earlier version of the site, system, or app, it’s likely they will remember things from the session, so will not approach the new designs with a completely fresh perspective. Avoid reusing participants where this is the case.
- Research on a similar topic. Even if the system itself is different, users may recall their previous experiences if the research covers a similar topic. For example, if users have previously completed research on home insurance, we would avoid inviting them in for another session relating to any kind of insurance.
- If the participant has previously been excluded from a study. Whether this happens during the session itself or when you come to analysis, participants sometimes need to be excluded due to dishonesty during the recruitment process, limited feedback during the session, or inappropriate responses. It is always important to record which participants have had to be excluded, and a good idea to avoid inviting them back.
- If the participant has previously been unreliable. We’ve all had times when our circumstances change unexpectedly and we have to cancel our plans. However, if a participant makes a habit of cancelling their sessions at the last minute, save yourself the hassle and avoid reusing them in future.
When you could consider reusing your research participants
- When carrying out research on an internal system. In certain cases you may be limited to a particular user group. For example, if evaluating an internal system, your user base is restricted to people who work for the company and use the system. In a situation where you can’t physically find alternative users, you will have to go back to the same participants to gain feedback on new designs.
- When your research is targeted towards a restricted target audience. You may want to see how your product addresses the needs of a particular target audience. For example, when testing for accessibility your audience is already restricted, so when coupled with limited time and resource, it may be difficult to avoid reusing participants. However, where possible we would still recommend avoiding anyone who has been recruited within the last six months.
- For studies on the same system that do not focus on ease of learning. If you’re looking to see how a system works over extended use, or how iterative designs can improve it over time then you might want to reuse the same participants. However, it would be best practice to still include some new participants to ensure learnability is not affecting the research findings.
Based on these general principles, our recommendation would be to avoid reusing participants more frequently than every 6 months, or for systems requiring fresh user insight. However, as we’ve highlighted, there are exceptions where reusing participants is appropriate, such as when you are dealing with a restricted user base or wish to carry out a longitudinal study. Ultimately, these guidelines should be weighed on a project-by-project basis to decide whether to reuse participants or recruit new participants for each round of research.
I’ve been watching people buy things on Amazon this week. Whatever the sector, when we’re researching users’ needs, Amazon is a great comparison to include in a session to trigger discussions because it is often held up as having some of the best interaction designs to help people find and buy the products they want quickly and efficiently.
But one thing that has come up several times this week is users saying how distracting it is.
> Read more
We were delighted to go along to the second User Research London conference this week. The first conference in 2017 was small, but it sparked an interest and this year it has grown. Researchers love to research, and that includes learning about how others are doing it and thinking about it. So this conference saw many more come together to listen, watch, talk and learn.
The day offered a mix of longer and shorter talks from speakers from around the world, from huge organisations and tiny consultancies. A common thread that came up again and again was deliverables. Not just what we deliver, but how we deliver it and how we shape our research to deliver it in order that we make the most impact we can from our research work.
What we should deliver was answered concisely by Tom Ablewhite from Foolproof. He gave us a checklist of assets liked and valued by different audiences – product managers, designers, developers – and reminded us that we should shape our message to the audience. To do this, we need to understand not just who they are, but work harder at understanding them and how best to bring the message to them.
Laurissa Wolfman-Hvass from Mailchimp zeroed in on the value of research: “Research provides insight that reduces uncertainty and empowers people to make better, more informed decisions”. But it’s up to us to ensure that we communicate this well, so it can both empower people and support those informed decisions. She gave us permission to learn about our audience using the same skills we already apply when learning about users: develop empathy, observe, ask good questions, and notice what is important to them and what stresses them, so you can help them with your answers from research. An alternative approach came from Cyd Harrell, who talked about using metaphor not only for analysis but also to communicate research insights so they resonate better with our audience.
As researchers and as businesses we all want to see the impact of our work and understand how it has added value. Christina Li and Dr Tim Dixon both addressed impact in their talks. Christina drew on her long career in Government digital services to argue that we should become more comfortable with mixing qual and quant together, both to improve the outputs and to measure the impact of our research and ensuing designs. Flipping the norm around, she says that quant enriches the qual. In his short talk, Tim Dixon gave an intriguing introduction to the Digital Impact Framework that Nomensa are building to shape research and measure its impact. The framework gives us a structure to measure impact externally through social and economic metrics, and internally to the organisation through process and innovation metrics.
Our work is not over when we deliver the research. As researchers we want to see changes made and user experiences improved. Ana Roji talked about transforming our typical research outputs into different forms to provide tangible assets that people can revisit and reuse to keep them engaged. In her talk, she shared the story of creating cards containing insights and case studies to inform people, and opportunities and activities to inspire them to take action and make changes.
On this topic, Paul Andre of Facebook took a step back. He talked about expanding ourselves as researchers – not just learning more methodologies and tools – but developing our knowledge and thinking about the world so we can create a compelling vision of the future to empower and engage stakeholders to make changes.
The day wrapped up with the wonderful Meena Kothandaraman of Boston consultancy Twig+fish. Meena told us it’s our job to socialise the research, to get non-researchers engaged so they can become better consumers, and contributors to research. She shared the Ncredible framework they use for planning and shaping a research project and how it can be used to engage and communicate better with non-researchers through making research processes more transparent.
The day finished with Meena calling us to action to stand up and repeat the researchers’ vow. It was a great end to a great day.
Within UX research, focus groups get a bad rap. Often this can be justified (for a number of reasons we will come on to explore), but the shaming is not always deserved. Used for the right reasons and facilitated in the right manner, focus groups can become another useful tool in your methodological toolbox.
This article sets out to explore the pros and cons of focus groups within UX research, drawing on how and when we use focus groups here at SimpleUsability for context.
‘What we say and what we do are very different’
If you work in UX, this will be a phrase you are very familiar with, and for good reason. It is well established and evidenced that, as human beings, we are not good at predicting our own behaviour. This is one of the key drivers behind the stigma associated with focus groups: they are often used when they shouldn’t be, when a more suitable methodology is available, so the results may be misleading.
> Read more
NUX Camp is now a fixture on the Leeds Digital Festival schedule, ensuring UX, and the skills and techniques to do it well, are part of the North’s largest digital festival. Two of our lead UX practitioners, Amy Martindale and Natalie Crook, went along for the day and tell us here about the highlights and takeaways from the day.
With the growth of Alexa and Google Home, ‘voice’ is the topic everyone is talking about. Researchers and designers alike are trying to find ways to build new voice interfaces and make them work for users still new to the devices and the opportunities they offer. Natalie shares what she learned from the two workshops on designing for voice, the first from Hilary Brownlie and the second from Graham Odds.
> Read more
Though theirs is often thought to be a contentious relationship, user testing and Agile can be integrated throughout a project lifecycle, and Bolser and SimpleUsability came together for a presentation at LDF 2018 to share how.
Our presenters were Dr Lucy Buykx, Senior UX practitioner, and Amy Martindale, Lead UX practitioner, from SimpleUsability, together with Bolser’s Hanneka Kilburn, Head of Design, and Theo Wrightman, Scrum Master. SimpleUsability and Bolser work with some of the world’s biggest brands providing complementary services: SimpleUsability is a behavioural research agency that has evolved a robust and insightful UX research methodology built on trusted psychology principles and innovative technology, while Bolser is a full-stack agency that has worked in an Agile framework with Scrum since 2013.
> Read more
When was the last time you bought something online? Did it go as expected? How did the process make you feel?
Previously, we have looked at how UX design can benefit from storytelling elements, but in this article, we will briefly discuss how to use storytelling ideas when organising your test sessions to get the most value out of a participant.
As humans, we naturally process information in stories. They are the key to engrossing us and helping us to understand other people’s ideas. Because of this, they are excellent for capturing and retaining our participants’ attention, dropping them into a scenario and frame of mind that aids our research and, in appropriate methodologies, improves their recall. Besides helping the participants, stories can also help us, as researchers, to differentiate between all the testing sessions we have carried out on the day – the structure of our sessions may be the same, but the stories will always differ.
> Read more
You say ‘to-may-to’, I say ‘to-mah-to’
You often hear the terms ‘user research’ and ‘market research’ used interchangeably within companies, and product teams often debate the differences between the disciplines. Both contain the word research, and the main aim of each is to understand the ‘user’ or the ‘consumer’ – so, what’s the difference?
At best, people try to distinguish them by the data they work with: “user researchers are the qualitative guys and market researchers are the quantitative guys.” However, this stereotype is incorrect and can be damaging. Just as any worthwhile UX research agency will use a combination of qualitative and quantitative methods, so will a market research agency.
This article sets out to explore the blurring line between the two disciplines, their shared similarities, their differences, and ultimately understand if there is a need for a hard classification. All that matters is the user, right?
> Read more
‘A picture tells a thousand words’
This is certainly the case when it comes to storyboarding. UX designers, researchers and stakeholders need to be able to put themselves in the user’s shoes to consider how they might react to and engage with a product. Relying on imagery over text, storyboards can be a really effective tool within UX to help conceptualise designs, visualise user personas or needs, aid research design, and add validation to research findings. This article will explore how to create a storyboard for UX projects and why storyboards are effective.
> Read more
5 important things to consider when conducting usability testing of voice interactions using voice-controlled assistants.
Over the last year there has been a significant increase in the use of voice-controlled assistants such as the Amazon Echo or the Google Home, with over 17 million devices estimated to have been purchased over the last three months alone.
As sales of these devices have grown, more companies are starting to develop systems that work with voice interaction, and we are seeing an increase in the different ways people can use their devices. Some of these are new services, such as making calls or sending messages; others are new channels for existing services, such as asking for the weather, ordering online groceries or even ordering a taxi.
> Read more