As soon as each session is over, the observers should go back through their notes and make sure that everything they have written is understandable. Working together, the observers and the moderators should quickly discuss what they saw and whether anything needs to change before the next participant arrives. Normally, we try to keep the environment the same for all participants in the study. But if there's an obvious problem with the wording of a task, or something in the product has prevented participants from continuing, then it makes sense to fix it, if possible.
After each participant, you can write up on a whiteboard or a flip chart any quantitative metrics that you're tracking for the study. For instance, you might be counting the number of times participants made errors on certain tasks, or the number of times they referred to help text. You might be capturing rough timings for specific tasks, or asking participants to give you a satisfaction rating. Writing those numbers down while everyone is in the room forces the team to reach consensus on what actually counted as an error, or how long a task actually took.
After you have run all of your participant sessions, it's time to gather the qualitative information together. This information is all of the observation notes that your observers took, including what went well and what needs improvement. Now is the time to pull out all the quotes and behavioral descriptions and write each one on an individual sticky note. Then, start grouping them into themes. As you do this, each observer will be reminded of what they saw during the sessions and will be able to recreate in their mind the issues and events that led to each user quote or behavior.
There's a reason why I like using sticky notes and a blank stretch of wall for this task: it encourages conversation and allows everyone who observed to take part in creating the groups and themes. The conversations that happen during this exercise are the start of potential solutions that you can implement in your product. Now that the study is over, it's time to have those conversations and get people thinking about how to fix the pain points they saw. This discussion might get heated at times, because two individuals may interpret what they saw differently.
However, as long as people keep speaking from participant data, rather than from their own opinions, it's likely that some good potential solutions will emerge. Sometimes you might end up with several possible solutions but not be quite sure which one is best. It's okay if the data from this usability session can't help you decide: you can use a task in your next usability study to gather more information, or to try out a potential solution. It's important to hold this data analysis meeting as soon as possible after the sessions are finished.
It's best to do it as soon as the final participant has left. Leaving it any longer means that people on the team start going off and creating their own solutions, and they'll quickly forget the observations they made.
- What is usability testing?
- Finding the right participants
- Making a screener
- Asking the right questions
- Avoiding bias
- Making a task list
- Creating the test environment
- Running a pilot study
- Moderating sessions
- Capturing real-time observations
- Analyzing and reporting your results