User testing & Agile – Enemies or Friends? Our Leeds Digital Festival event co-hosted with Bolser

April 24, 2018 2:21 pm

User testing and Agile are often thought to have a contentious relationship, so Bolser and SimpleUsability came together for a presentation at LDF 2018 to share how the two can be integrated throughout a project lifecycle.

Our presenters were Dr Lucy Buykx, Senior UX Practitioner, and Amy Martindale, Lead UX Practitioner, from SimpleUsability, together with Bolser’s Hanneka Kilburn, Head of Design, and Theo Wrightman, Scrum Master. SimpleUsability and Bolser work with some of the world’s biggest brands, providing complementary services: SimpleUsability is a behavioural research agency that has evolved a robust and insightful UX research methodology built on trusted psychology principles and innovative technology, while Bolser is a full-stack agency that has worked within an Agile framework with Scrum since 2013.

So, user testing and Agile: friends or not?

The Scrum Agile framework focuses on delivering code releases at a steady velocity. For some, a requirement to add user testing into the mix feels like putting the brakes on that momentum. Our presenters set out to show that working well together enhances rather than detracts from the Agile approach and keeps the user at the centre of concepts, designs, and development. They talked us through the typical stages of a project, starting at sprint zero.

Sprint 0 – Discovery phase

A project starts with a kick-off meeting with the stakeholders to identify the vision and goals and to document the expected business and user value. When planning the project, it’s important to understand the success criteria and whether the client has a fixed deadline or a specific set of features needed for release, and so which parts of the iron triangle of planning (scope, time, and cost) are fixed and which are flexible.

The design team runs a workshop to draw out an understanding of the problem to be solved and the assumptions held at this stage. This informs the discovery research the SimpleUsability team then carries out to understand more about user needs around the problem area, identify key user groups, and, importantly, validate assumptions. These early workshops and research ensure all assumptions are brought into the open and validated early, minimising risk as the project moves forward.

Sprint 1 – Design concepts and Dev knowledge acquisition

With goals set and assumptions ironed out, the teams are now ready to move into Sprint 1. For Bolser, this is where the design work really starts. Hanneka told us that they start to map out the user flow, produce wireframes and begin creating lo-fi prototypes, thinking through the design of features and how they could best suit users’ needs.

From a testing point of view, Amy described this as the proof-of-concept stage. Methods might include card sorting or tree testing to inform the information architecture of a website. The advantage of these methods is that no design or development work is needed, and the feedback can help the design team understand what would work best for users before committing to particular designs.

If wireframes and lo-fi prototypes are ready, this is also a great opportunity to get them in front of users. Amy told us that some clients worry lo-fi designs may skew the usability feedback, but explained that SimpleUsability is very familiar with this type of testing and has developed ways to keep sessions focused on the interactions rather than the appearance.

From Theo, we then heard how, in the background, the dev teams go through ongoing knowledge acquisition at this stage. They’ll be planning the MVP and creating a backlog, as well as researching the best technology to use, so that when they are ready to start they can hit the ground running.

Sprint 2 – Testing sessions and Dev set up

Sprint 2 means the first sets of designs are being created and features begin to take shape. At this stage, designs are sent back to clients for approval.

In the meantime, the researchers at SimpleUsability can begin gathering evidence to inform how the designs progress. This can be done by testing a single design or, where possible, multiple design versions. Testing multiple designs not only gives users something to compare against, but also, if stakeholders are split in their preference for one design approach or another, puts the decision making in the hands of the user. Users will quickly identify the features that are working well and those that aren’t.

With the designs moving forward, this is developers’ opportunity to begin setting up the foundations. They’ll start by setting up a repository and begin piecing together the bare bones of the product ready to move forward.

Sprint 3+ – Iterative design, development and user testing

At this point, the iterative process really starts to gain pace. Developers begin building the first set of user-tested designs, while designers create the second set, which will again be user tested by SimpleUsability.

At the end of the sprint, all completed work is demoed to the client for further feedback. The team also holds a retrospective; here we don’t look at the work completed within the sprint but instead inspect the process. We talk about what worked well and what we can change to improve, which keeps the process smooth, efficient and relevant.

With design and coding sprints underway, the SimpleUsability team can begin iterative user testing, working with designers to rapidly iterate and test designs along the way. At each sprint, they might test new features or longer user journeys, but the key is to be flexible. Whether it be every week or every two weeks, it’s important that testing is a continual process to ensure assumptions are being validated and user needs are at the centre of the design.

Hardening sprint and pre-launch user testing

In the final sprint, no new code is being written at Bolser, so for Hanneka and Theo it’s a matter of adding the last bits of content, ensuring all integrations work across all devices, and fixing any major bugs.

Now that the product has come together, this is an opportunity for Lucy and Amy to test the full user journey pre-launch. Giving an example from e-commerce, Amy explained this could run from logging into an online shopping site right through to checkout and purchase. At this stage, they’ll often be checking the entire flow of the journey and whether the content, now it’s all in place, helps to guide users through the process.

From the launch onwards, it’s important the communication between the two teams and clients continues. This way, the user remains in central focus, allowing any immediate issues to be flagged and ironed out.

So there we had it: a whistle-stop tour of an Agile project timeline with user testing along the way. So what did the audience think: user testing and Agile, are they enemies or friends?

Of course, the majority of the room voted friends, and rightly so. Although the two may clash in places, particularly around budget and time, it really is beneficial to take this approach and integrate user testing throughout. It doesn’t have to be days of testing with a fully functioning website; little and often will give you the feedback you need for success!

Over to the audience…

To finish off, we handed over to the audience to see what queries they had concerning agile and user testing. Below are a couple of the questions asked and some of the great responses given by the panel.

Q: If, in sprint 3, for example, the user testing reveals issues that affect designs in a previous sprint, how do you deal with this?

Hanneka: It’s all about learning in each sprint, but it’s good to make sure you’re doing it collaboratively with all members of the project team. For example, we [designers] sit next to the developers, so we constantly talk to them and involve them. That way, hopefully, they are more invested, so when changes do need to be made they understand and are onboard. We make sure it’s not us vs them.

Lucy: Also, if developers come along to watch the user testing as well, they’ll be more in touch with users and understand that the rework is necessary.

Q: What influenced the title of the talk? Are there enemies in this case?

Hanneka: I think they haven’t traditionally slotted together, and what we’re doing here is working collaboratively and showing you how you can integrate the two.

Q: But in all honesty, it is an enemy to me because it takes too long.

Amy: I think it’s about taking a step back and thinking about the type of testing you need. It doesn’t have to be a week; it may just be an expert looking over the designs for a day and giving some feedback. It is possible to scale back the user testing if necessary, but not ignore it completely, so that you are not going into development blind.

Q: If you have a deadline that can’t move and time is tight, user testing is always the first to go. How do you think this could be better managed?

Lucy: It’s really about changing the mindset of those who think it can take a back seat. They need to understand how validating assumptions early on and throughout the process allows issues to be ironed out and minimises risk for the project.

Amy: The best way to get stakeholders on board is to get them observing research. We find that when clients observe the research for the first time, they soon realise how valuable it is to see real users. The problem is usually getting that initial user testing done so they can observe, but even if it’s post-launch, getting them to observe just once will hopefully get them on board ready for the next project.

Q: When clients want to constantly see complete designs upfront, how do you get them to feel comfortable with just a limited design before moving forward?

Hanneka: This can be problematic, but the key is to educate your clients early. Explain to them how you work and what they can expect upfront. The design demo we provide helps, as we can talk clients through the progress we are making, and the fact we work in small iterations means they can constantly see improvements and changes, which will hopefully give them some faith.