Last week was the annual UX Brighton conference, with a theme of ‘Advancing research’. Two of our Lead UX practitioners, Jake Kitching and Craig Williams, jumped at the chance to make the long journey down to attend. The conference was held in the impressive Brighton Dome Concert Hall, a fitting venue for the calibre of speakers.
The event continues to grow, with last week’s boasting just under 600 attendees and global representation from as far afield as India and Canada. The event was a melting pot of freelancers, agency and in-house UXers, which made for some great conversations.
Last week the annual NUX conference took place at its new home in the Royal Northern College of Music in Manchester. One of our lead UX practitioners, Natalie Crook, was lucky enough to attend and has shared the highlights and takeaways from the day in this article.
It’s Complicated – Designing in The Age of Emergence
The day started with the opening keynote from Christina Wodtke. She presented the Cynefin framework as a tool to support decision-making within the design and development process.
Christina used the framework to remind us that, instead of sticking within our design teams, we should be branching out to involve as many people as possible; when we do this, we remove ourselves from our little design bubbles and broaden our knowledge through crowdsourcing.
How to Re-Shape Projects (without antagonising people)
Next up was Kate Tarling, who focused on how to get to the bottom of an ambiguous client brief, with approaches for understanding what is actually being asked. It is important that we are able to untangle the purpose and context of work in order to anticipate challenges and confusion.
Great Workshops, Great Teams
After a quick refreshment break, we were back in our seats and ready for the next talk by Alison Coward. Alison focused on how to facilitate great workshops by creating a safe and positive space where everyone feels comfortable getting involved and voicing their opinion.
A top tip from Alison was to create space for productive conflict, allowing team members to challenge each other’s ideas, and to leave time for individual thinking.
10 Easy Ways to Irritate Your Design Team
As we approached lunch, we were distracted from our grumbling bellies by the next speaker, Jane Austin. Jane kept our minds occupied with a great talk on how to get the best out of your design team.
She shared 10 ways designers can be irritated by those outside of the design team and how this can lead to an unsuccessful design process. The solution, Jane argued, echoed the underlying theme of the day: we must work together and share our knowledge, whether we are designers, developers, researchers or business people.
Up next was Christopher Murphy. He shared some personal stories about risks he has taken throughout his career, and how taking risks can teach us that sometimes it is ok to break the rules.
Christopher introduced us to the word ‘Shoshin’, which originates from Zen Buddhism and means “beginner’s mind.” It refers to having a level of openness, eagerness, and lack of preconceptions when studying a subject, even at an advanced level, just as a beginner would. This is something we could all benefit from: we should never assume we know everything, and should always be open to learning more from those around us.
Good Intentions and Bad Actors: Unleashing Our True Design Superpowers
Moving into the afternoon, the next talk was by Lisa DeBettencourt. Lisa focused on the ethics of UX and how we as designers and researchers have a responsibility when it comes to the products we create.
The Designer is Present
The closing keynote was by Steve Portigal. Steve discussed how mindfulness has taken over Silicon Valley, with some of the biggest tech companies embracing the practice of understanding what is happening in the present moment.
Steve highlighted the difference between empathy and sympathy and how we must be able to differentiate between the two when it comes to user research. Empathy is essential within user research: it allows us to take the perspective of another person as if it were our own truth, refrain from judgement, recognise emotion in other people, and communicate that understanding back.
And then for the networking…
The day finished with drinks, catching up with friends in the industry, and a head full of inspiring thoughts to take back to the business. Having reflected on the day, I would recommend looking up each of the speakers from this year’s NUX for some promising and thought-provoking insights into the industry.
Like many children growing up, I always wanted to know why things were the way they were. This hasn’t changed in adult life, so it’s ideal that I find myself working as a user experience consultant for SimpleUsability, where I can ask ‘Why?’ all day long.
But in the field of usability, constantly asking a research participant “why?” could become pretty annoying. Moderators of usability studies have to find other ways to uncover why somebody carries out a task in a certain way.
Maybe we should address another question first: “Why ask why?” Technology has provided us with many tools to find out what people are doing, particularly on websites. From analytics, we can see what people are clicking on and even watch where attention is drawn. What this doesn’t tell us is why people clicked there, what else they looked at first but were confused about, and what they missed.
This week at NUX Leeds, Andy Curry, UX Director at Lion & Mason, shared with us UX enthusiasts why he thinks we should all be doing more field research, or essentially ‘getting out of the office!’
This type of research is often overlooked because of the time and cost required to organise research for such a small sample. But Andy shared the power of field research, and its potential to transform a project, to help us understand why he loves it so much. Without further ado, Andy walked us through the 6 key reasons why we should all love it too.
Are you concerned that conducting research too far in advance might invalidate the insight? Or that gathering insight prior to launch will leave you without the resource to implement the changes?
These are the kinds of concerns we hear regularly from clients when user research is considered. The budget will only allow for one round of research, so the timing of that research becomes a worry.
But the reality is, there is no bad time to test. All too often, research is reactive rather than proactive. Once a problem occurs, alarm bells ring and one big piece of research is commissioned. However, in order to adapt to changes in behaviour and pre-empt problems, we recommend breaking research down and gathering customer insight little and often. Below we look at the entire design process from discovery through to post-launch and suggest how research could be implemented at each stage, as well as addressing some key concerns.
Stage 1: Discovery
This is usually the point at which you’ve decided you want to understand your users’ needs. You may have a product in mind and want to understand the user needs it could address. Or you may have a product that’s live and want to understand more about how it’s used and could be improved. Some of the key things you might want to find out at this stage are:
- Understand how your current or proposed product or service solves the needs of its users
- Learn about your users and identify groups amongst them
- Validate assumptions you have about users and their needs
To do this there are a series of research methods you can use. These can be broken down into research with users and desk research, which require varying levels of time and budget:
Research with users
- In-depth and contextual interviews – require no development, and allow you to understand user wants and needs
- Competitor usability testing – understand the user experience on competitor sites to raise opportunities and problems to avoid
- Personas – based on research, personas allow you to identify the profiles of your users and represent their needs, motivations and frustrations
- User journey mapping – involves identifying user goals and visualising the process users go through to reach this goal. Again, requires no development or functionality
Desk research
- Competitor analysis – identifying what your competitors are offering, how your current product compares, or understanding gaps in the market
But unfortunately, investing in research early in the design process is still a concern for some clients. A common question we receive is:
What if the research is so far in advance of launch that the findings become obsolete?
There’s really no such thing as research being too far in advance. Only by testing early will you be able to validate your concept and understand the changes that need to be made. These will then help to shape your programme of work and identify where additional research is needed to support development. By launch you may have come a long way since that initial discovery research, but you wouldn’t have got to where you are without it.
Stage 2: Iteration
By this point, you have a clear set of goals and any assumptions have been validated. With a concept in mind and some evidence to support the user needs, stage 2 involves considering content, getting some lo-fidelity designs in front of users, and iterating on those designs until refined.
Some of the research methods you may use at this stage include:
- Card sorting and tree testing – focus on information architecture, and allows users to input into the organisation and structure of your content
- Designing and testing prototypes of increasing fidelity – involves the iterative development of prototypes (from lo-fidelity wireframes, to fully functioning prototypes), to refine designs
- Targeted research focussing on single features – full development of a single feature in order to observe user interaction and refine accordingly
By this stage, the prospect of moving forward can seem quite daunting. You have lots of elements that need addressing but you’re unsure how to prioritise, and how much to bite off at once. A concern we often hear is:
What if there’s too much to change before the next sprint? How do I prioritise what we design and test first?
Although daunting, the key at this stage is to plan ahead. Consider what needs addressing and order these issues based on how likely they are to impact the user experience. From this, you’ll have a good idea what to focus on first. If you’re still unsure, discuss your ideas with your team, or even with us – based on the results of research or an expert review, we can help you to prioritise what to work on.
Stage 3: Pre-launch
As the final stage before your new product goes live, this is when you should be testing the full user journey to identify any last-minute tweaks or bug fixes needed.
At this stage your research methods may include:
- Full user journey testing – in-depth usability sessions allow you to assess the full flow, and identify sticking points
- Eye tracking – used in usability sessions to observe natural interaction with the interface, helping you to understand how users navigate the page and make journey choices
This should identify some of the high-priority fixes that may need to take place before launch, but also some less severe issues which can be built into a backlog and addressed post-launch. With that in mind, a key concern at this stage is:
What if I don’t have the development resources to make recommended changes before launch?
If research has been taking place throughout the development process, then, in theory, there should only be minor changes needed at this stage. But we know things don’t always run as smoothly as anticipated, so in these situations, it’s a matter of prioritising the work. After delivering a project at SimpleUsability we often work alongside clients to make recommendations and help to prioritise how and when these recommendations should be implemented. This way you can get those essential problems fixed and begin building a backlog of issues to work through post-launch.
Stage 4: Post-launch
Now your product is live, but there are likely to be some teething issues that need addressing, and you might want to start working through that backlog. Your product should be a work in progress, so conducting research little and often can be useful to check everything is working as it should and to test any new changes that might be going live.
For this reason, the most appropriate research methods at this stage are:
- Regular usability testing – to check the end-to-end user journey is still working effectively, and explore use with different user groups and functionality
- Eye tracking – to observe natural interaction with the interface, understand how it’s navigated, and what is missed
- Expert review – evaluating your interface to understand likely problems encountered by users
- Competitor analysis and benchmarking – identifying what your competitors are offering, and understanding how your current product compares
With regular use of these methods, your product should be constantly improving. However, post-launch research often raises concerns relating to cost when the live product seems to be functioning perfectly fine. For example:
I know my site is working now, so why should I test if it’s not broken?
‘If it ain’t broke, don’t fix it.’ Unfortunately, this is something we usually hear when research is believed to be wasteful or unnecessary. In reality, no one should wait until something is broken to fix it. With competition and innovation at their peak, you should constantly be pushing to make your product better. Competitors may be implementing changes and the industry may be shifting, so you don’t want to be at the back of the pack.
We can’t ignore internal concerns and constraints, but getting your peers on board and conducting research little and often is an essential way to stay in touch with users and improve your product. It’s important that research is not just a reaction to a problem that needs to be addressed, as fixing problems after the fact takes a lot of hard work and expense. Instead, research should be a routine part of your design strategy, implemented across the discovery, iteration, pre-launch, and post-launch stages. It doesn’t have to be days of testing with a fully functioning website; little and often should provide the feedback you need for success!
You’re probably thinking: what does Toto have to do with user research? This method of research gets its name from the story by L. Frank Baum. In the story, the Wizard fools everyone by creating a powerful-looking vision of himself using a set of controls, while he hides behind a curtain concealing the reality.
What is Wizard of Oz testing?
The Wizard of Oz method allows a user to interact with an interface without knowing that the responses are being generated by a human rather than a computer: someone behind the scenes is pulling the levers and flipping the switches.
The method involves one practitioner – the ‘Moderator’ – leading the session face to face with each user, whilst another practitioner – the ‘Wizard’ – controls the responses sent to the user via the chosen device. A classic example comes from IBM’s ‘listening typewriter’ study: the user sat in one room talking into a microphone while the ‘Wizard’ sat behind the scenes typing what the user said, so the text appeared on the user’s screen as if the computer had produced it.
As UX researchers we often remind people to test systems at every stage of development, and that includes testing before development has even begun. This can save time, money and those ever so embarrassing moments when products are launched before they are fit and ready for users.
The Wizard of Oz methodology allows you to test users’ reactions to a system before you even have to think about development. This could be a new concept you are unsure will work for your users, or a project that would require substantial effort to build, where you want to learn more before it makes sense to invest the time and money and the concept cannot be tested with the usual prototyping tools. Wizard of Oz is a flexible approach that allows concepts to be tested and modified without having to worry about potentially tiresome code changes, breaks in a daily testing schedule or full development costs.
Prototype, prototype, prototype! The easiest way to conduct Wizard of Oz testing is to build a simple, easy-to-use prototype that lets the ‘Wizard’ react to the user’s gestures or actions with the designed response in a single click.
As with any methodology, creating a Wizard of Oz prototype starts with determining what we want to test or explore. Then we need to figure out how to fake the functionality needed to give the user a realistic experience from their viewpoint. For example, you could prototype a jukebox without creating the mechanics, using a hidden person to play the selected songs to the customer.
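To make this concrete, here is a minimal, hypothetical sketch of the ‘Wizard’ side of such a setup: a small console that lets the hidden operator fire pre-written responses to a participant-facing display over a local socket. The responses, host and port are illustrative assumptions rather than details from any of our projects, and a real study would pair this with a display process listening on that address.

```python
# wizard_console.py - hypothetical sketch of the 'Wizard' control panel.
# A separate participant-facing process (not shown) is assumed to listen on
# HOST:PORT and render each incoming line as if the system generated it.
import socket

# Canned responses the Wizard can send with a single keypress (illustrative).
RESPONSES = {
    "1": "Sorry, I didn't catch that. Could you repeat your request?",
    "2": "Sure - fetching your account balance now.",
    "3": "Your payment has been scheduled. Anything else I can help with?",
}

HOST, PORT = "127.0.0.1", 9000  # assumed address of the participant display


def main() -> None:
    with socket.create_connection((HOST, PORT)) as conn:
        print("Wizard console ready. Keys:", ", ".join(RESPONSES), "(q quits)")
        while True:
            key = input("response> ").strip()
            if key == "q":
                break
            message = RESPONSES.get(key)
            if message is None:
                print("Unknown key - nothing sent to the participant.")
                continue
            # The participant sees only the response, never this console.
            conn.sendall((message + "\n").encode("utf-8"))


if __name__ == "__main__":
    main()
```

Keeping the Wizard’s choices down to a single keypress matters in practice: the illusion breaks if the ‘system’ takes noticeably long to respond.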
Our experience with the Wizard of Oz methodology
At SimpleUsability we have used the Wizard of Oz method when testing IVR and SMS systems. For IVR, we saved development time and money by creating an Axure prototype with integrated voice files that played back to the user once they selected the option most suited to their needs. This allowed our clients to see, in real time, how users got on navigating their IVR systems, and which prompts had a negative effect on the user’s journey. It also allowed us to run a more complex project, trialling different versions of the same prompt within a single project to see which prompts worked best for a specific audience.
We have used the ‘Amazon Polly’ automated voice generator to create voice files that are quick to make and consistent. This sped up the project because we didn’t have to wait for voice files to be recorded by a human, and it saved the cost of having voice files professionally recorded before testing showed which versions worked best.
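As a rough illustration of that step, the snippet below shows how prompt variants might be batch-generated with Polly via the boto3 SDK. The prompt texts, file names and voice choice are hypothetical stand-ins, not the actual prompts from our projects.

```python
# Hypothetical sketch: batch-generating IVR prompt variants with Amazon Polly.
# Assumes AWS credentials are already configured for boto3.
import boto3

polly = boto3.client("polly")

# Illustrative prompt variants - in a real project these would come from the
# IVR scripts being trialled.
PROMPTS = {
    "greeting_v1": "Welcome. For your account balance, press one.",
    "greeting_v2": "Hello. Press one to hear your balance.",
}

for name, text in PROMPTS.items():
    response = polly.synthesize_speech(
        Text=text,
        OutputFormat="mp3",
        VoiceId="Amy",  # one of Polly's British English voices
    )
    # Save the audio so it can be wired into the prototype.
    with open(f"{name}.mp3", "wb") as f:
        f.write(response["AudioStream"].read())
```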
How can we help you?
We have used the Wizard of Oz methodology for clients from a variety of sectors including financial services and telecoms and can tailor the approach to meet varying business requirements.
In the financial services sector, we worked with our client Arrow Global on a broader programme of work to optimise their online portal by providing customers with an additional channel to perform key actions relating to their account. To further encourage channel shift, we explored the current IVR system and the way in which this could be adapted to appropriately direct customers to the self-service portal.
Our work for EE focused on exploring various concepts for enabling customers to take control of their accounts; firstly, to understand whether this was something they would like to do and secondly, whether they were able to do it using the IVR system. By using the Wizard of Oz methodology at the concept stage, we were able to save EE significant development costs and time, and then help them optimise the flow for customers managing their account over the phone, reducing the need for call centre support.
If you’re looking for guidance on how to recruit research participants, the internet is your oyster. But when it comes to understanding whether you should reuse your participants, you’ll find little advice on best practice. However, here at SimpleUsability and Research Helper we have 17 years’ experience recruiting participants and are happy to share what we’ve learnt. This article outlines best practices for reusing research participants across different methodologies and project needs.
Should we reuse participants?
Whether we should reuse participants is something clients often ask. “Should we invite the same users in for each round of research so that they can see our product improving?” Or, “should we make sure we recruit a fresh set of eyes?”
There are a lot of questions to ask when considering whether to reuse participants or recruit a fresh sample, and the outcome is likely to depend heavily on the methodology you are using or the type of product you are testing. So first let’s have a look at the general principles that apply to reusing research participants:
When you should not reuse your research participants
- If the participant has taken part in a research session within the last 6 months. You want to avoid developing ‘professional participants’ who anticipate the types of problems they think they are expected to find and lose the fresh perspective needed. Both the Nielsen Norman Group and the MRS recommend that participants are used no more than twice in one year, so we find that waiting 6 months before inviting a participant in again prevents them from becoming research masters. (A sketch of how these rules might be applied when screening a pool follows this list.)
- For iterative tests when the focus is on the ease of use or first exposure to the system. If a user has previously engaged with an earlier version of the site, system, or app, it’s likely they will remember things from the session, so will not approach the new designs with a completely fresh perspective. Avoid reusing participants where this is the case.
- Research on a similar topic. Even if the system itself is different, users may recall their previous experiences if the research is on a similar topic. For example, if users have previously completed research on home insurance, we would avoid inviting them in for another session relating to any kind of insurance.
- If the participant has previously been excluded from a study. Whether this is during the session itself or when you come to analysis, participants sometimes need to be excluded due to their dishonesty during the recruitment process, limited feedback during the session, or inappropriate responses. It is always important to record which users have had to be excluded, and a good idea to avoid inviting them in again.
- If the participant has previously been unreliable. We’ve all had times when our circumstances change unexpectedly and we have to cancel our plans. However, if a participant makes a habit of cancelling their sessions last minute, you might as well save yourself the hassle and avoid reusing them in future.
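As mentioned above, here is a minimal, hypothetical sketch of how these exclusion rules might be encoded when screening a participant pool. The field names and the six-month cut-off are illustrative assumptions; the ‘similar topic’ rule is deliberately left out, since it needs human judgement.

```python
# Hypothetical screening helper applying the reuse rules above.
from datetime import date, timedelta

COOLING_OFF = timedelta(days=183)  # roughly six months


def eligible(participant: dict, today: date) -> bool:
    """Return True if a past participant may be invited back."""
    # Previously excluded or habitually unreliable participants are out.
    if participant.get("excluded") or participant.get("unreliable"):
        return False
    # Enforce the six-month cooling-off period between sessions.
    last = participant.get("last_session")
    if last is not None and today - last < COOLING_OFF:
        return False
    return True


pool = [
    {"name": "P1", "last_session": date(2019, 1, 10)},
    {"name": "P2", "last_session": date(2018, 3, 5), "unreliable": True},
    {"name": "P3", "last_session": None},  # never took part before
]
print([p["name"] for p in pool if eligible(p, date(2019, 4, 1))])
# -> ['P3'] (P1 is inside the cooling-off window, P2 is unreliable)
```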
When you could consider reusing your research participants
- When carrying out research on an internal system. In certain cases you may be limited to a particular user group. For example, if evaluating an internal system, your user base is restricted to people who work for the company and use the system. In a situation where you can’t physically find alternative users, you will have to go back to the same participants to gain feedback on new designs.
- When your research is targeted towards a restricted target audience. You may want to see how your product addresses the needs of a particular target audience. For example, when testing for accessibility your audience is already restricted, so when coupled with limited time and resource, it may be difficult to avoid reusing participants. However, where possible we would still recommend avoiding anyone who has been recruited within the last six months.
- For studies on the same system that do not focus on ease of learning. If you’re looking at how a system works over extended use, or how iterative designs improve it over time, then you might want to reuse the same participants. However, best practice is still to include some new participants to ensure learnability is not affecting the research findings.
Based on these general principles, our recommendation would be to avoid reusing participants when research is taking place more than every 6 months, or for systems requiring fresh user insight. However, as we’ve highlighted, there are exceptions where reusing participants is appropriate, such as if you are dealing with a restricted user base or wish to carry out a longitudinal study. Ultimately, these guidelines should be considered on a project by project basis in order to decide whether to reuse participants or recruit new participants for each round of research.
I’ve been watching people buy things on Amazon this week. Whatever the sector, when we’re researching users’ needs, Amazon is a great comparison to include in a session to trigger discussions because it is often held up as having some of the best interaction designs to help people find and buy the products they want quickly and efficiently.
But one thing that has come up several times this week is users saying how distracting it is.
We were delighted to go along to the second User Research London conference this week. The first conference in 2017 was small, but it sparked an interest and this year it has grown. Researchers love to research, and that includes learning about how others are doing it and thinking about it. So this conference saw many more come together to listen, watch, talk and learn.
The day offered a mix of longer and shorter talks from speakers from around the world, from huge organisations and tiny consultancies. A common thread that came up again and again was deliverables: not just what we deliver, but how we deliver it and how we shape our research so that our work makes the most impact it can.
Within UX research, focus groups get a bad rap. Often this is justified (for a number of reasons we will come on to explore), but not always. Used for the correct reasons and facilitated in the correct manner, focus groups can become another useful tool in your methodological toolbox.
This article sets out to explore the pros and cons of focus groups within UX research, drawing on how and when we use focus groups here at SimpleUsability for context.
‘What we say and what we do are very different’
If you work in UX, this is a phrase you will be very familiar with, and for good reason. It is well established and evidenced that, as human beings, we are not good at predicting our own behaviour. This is one of the key drivers behind the stigma associated with focus groups: they are often used when they shouldn’t be, when a more suitable methodology is available, and the results may therefore be misleading.