UX Design Process Best Practices: Documentation for Driving Design Forward — Part 5
This is the UX Design Process Best Practices ebook, written by Jerry Cao, Ben Gremillion, Kamil Zięba, and Matt Ellis, and originally published on UXPin.com.
A Practical Approach to Usability Testing
Usability testing makes the difference between design thinking (designing for the user) and the outdated modes of thinking favoring features, businesses, or the product itself. The latter test only at the end to validate their ideas, while design thinkers test throughout the process to generate their ideas.
The cycle of iterating, testing, and implementing the feedback will chip away at all the imperfections until all that remains is the best possible version of your product.
In this chapter, we’ll outline the four steps of usability testing (and the documentation that occurs in each):
1. Define Goals — Determine which questions you want the test to answer.
2. Prepare the Test — Determine which test will answer your questions, then plan the best way to conduct it.
3. Conduct the Test — Recruit participants and administer the test.
4. Present Results — Compile data into an easy-to-understand format and share it with your team members.
We’ll start with the planning phase.
1. Define Goals
The first step of any successful usability test is defining your goals: the questions you want the test to answer. These could be broad, such as:
Which checkout methods are most intuitive to our users?
Or specific, such as:
Which form design works best for increasing e-commerce purchases?
The important thing is that you know why you’re conducting the test. Knowing where you’re going will let you find the best route there.
Naturally, you’ll have a lot of questions about your product, and this curiosity is good. However, remember to limit each test to only the most relevant issue at the moment. Each test should have a central focus for the most accurate results — the more objectives you test at once, the more room for error.
This is another advantage to testing often: you can address each issue with the attention it deserves. If you find yourself with too many questions, make a list and prioritize the questions based on the steps of the design process. You can always save this list for later tests.
Working through these questions will naturally set your mind to generating answers. As David Sherman mentions in his article on usability testing, these potential answers become your test’s hypotheses. Stating the hypotheses outright draws attention to potential bias so you can prevent it, and it also helps you communicate the results later (“we originally thought this, but then discovered that”).
You can generate hypotheses simply by setting aside time for you and your team to try to answer the goal questions on your own.
2. Prepare the Test
There are hundreds of types of usability tests to choose from, each with its own area of expertise and its own limitations. It’s not about knowing which tests work and which don’t; it’s about knowing which will work for a specific need. That’s why defining your goals first is crucial. Broadly, usability tests fall into four categories:
1. Scripted — These tests analyze the user’s interaction with the product based on set instructions, targeting more specific goals and individual elements. (tree testing, hallway usability tests, benchmark testing)
2. Decontextualized — Ideal for preliminary user testing and persona research, these tests don’t necessarily involve the product, but analyze more generalized and theoretical topics, targeting idea generation and broad opinions. (user interviews, surveys, card sorting)
3. Natural (or near-natural) — By analyzing the user in their own environment, these tests examine how users behave and pinpoint their feelings with accuracy, at the cost of control. (field and diary studies, A/B testing, first-click testing, beta testing)
4. Hybrid — These experimental tests forgo traditional methods to take an unparalleled look at the user’s mentality. (participatory design, quick exposure memory testing, adjective cards)
Before you test your users, you might also send out a user survey.
Surveys aren’t usability tests, obviously, but they certainly contribute valuable research to the testing process. You’ll want to include a mix of the following question types:
• Multiple Choice — Whether the user selects from an assortment of prewritten answers or simply answers yes-or-no, multiple-choice questions let you easily categorize answers and give you control for specialized data, but they leave little room for user insight and risk introducing bias. These lean towards quantitative.
• Verbal (or Written) Responses — Asking users open-ended questions and encouraging their elaboration may reveal some insights you had not anticipated. However, you are at the whim of how well the user can articulate themselves, and this data can be difficult to categorize and therefore analyze. These lean towards qualitative.
• Rating Scale — By asking users to rate their feelings on a numeric scale, you’re able to capture qualitative data in a quantitative way. This allows you to analyze a user’s feelings in a concrete way.
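Rating-scale answers can be summarized with basic descriptive statistics. The sketch below is a minimal illustration, with hypothetical 1–5 ratings standing in for real survey responses:

```python
from collections import Counter
from statistics import mean, median

# Hypothetical post-test ratings for a question like
# "How easy was checkout?" (1 = very hard, 5 = very easy)
ratings = [4, 5, 3, 4, 2, 5, 4, 4, 3, 5]

print(f"mean:   {mean(ratings):.1f}")    # 3.9
print(f"median: {median(ratings)}")      # 4.0
# How many users gave each rating, lowest to highest
print("distribution:", dict(sorted(Counter(ratings).items())))
```

Reporting the distribution alongside the mean matters: a polarizing feature can average 3.0 while almost nobody actually answered “3.”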
Once you determine the type of usability test(s) to run, you should send out a descriptive announcement to give your team a heads up. It’s even more helpful, in fact, if you summarize your tactics with a quick planning document.
Research Plan Document
Modified from Tomer Sharon’s One-Pager (fantastically helpful yet lightweight), our research plan document is a formalized announcement with all the necessary details of the testing to both explain what’s happening and invite collaboration.
In addition to keeping the test planners organized, the research plan lets the entire team know all the relevant details about the test, and gives them the chance to contribute their feedback or improvements before the test is conducted. It also appeases stakeholders seeking a bottom-line assessment.
Best Practices
Brevity is the name of the game with research plan documentation. You want to hand your team a slim document around one page to encourage them to actually read it.
While keeping things brief, you’ll want to cover at least these 7 sections:
1. Background — Here is where brevity is important. In a single paragraph, describe the reasons and events leading to the research.
2. Goals — In a sentence or two (or bullets), summarize what the study hopes to accomplish. Phrase the goals objectively and concisely. Instead of “Test how users like our new checkout process,” write “Test how the new checkout process affects conversions for first-time users.”
3. Questions — Here is where you elaborate if needed. List out around 5–7 questions you’d like the study to answer.
4. Tactics — Where, when, and how the test will be conducted. Explain why you’ve chosen this particular test.
5. Participants — Describe the type of user you are studying, including their behavioral characteristics. You could even attach personas (or link to them) for more information.
6. Timeline — The dates for when recruitment starts, when the tests will be expected to take place, and when the results will be ready.
7. Test Script — If your script is ready, include it here.
Check out Sharon’s sample One-Pager to see how it should look.
Encourage your team members to give suggestions or advice so that the test results are helpful to everyone. Find out the questions that they want answered as well.
3. Conduct the Test
After gathering feedback from the team, you’re ready to actually conduct the test. This involves recruiting the right participants, scheduling times, and writing the actual test documentation.
For recruiting users, stick to your target audience defined at the onset of the design process. These are the same types of people who influenced your personas. If you’d like help reaching out to these people, Jeff Sauro, founder of Measuring Usability LLC, lists 7 methods for user recruitment, including online tools. In our experience, we’ve found hallway testing and tools like UserTesting incredibly helpful (part of the inspiration for integrating usability testing into UXPin).
As for your role during the actual test, sometimes you must make the choice between being present (moderated) or allowing the user to work on their own (unmoderated):
• Unmoderated — Unmoderated tests are cheaper, faster, and generally easier to recruit and schedule. They also remove the influence of a moderator, leading to more natural and less biased results. On the downside, there is less opportunity for follow-up questions or supporting users who go astray during tests. Moreover, you put yourself at risk of finding users only interested in the compensation, thus reducing the quality of the results.
• Moderated — While costlier and requiring more effort to organize, moderated tests allow you to “lead” the user, for better or worse. Moderated tests are recommended for rougher prototypes (higher risk of bugs and usability issues) or incredibly complex prototypes (users might need some clarification).
You can also choose to conduct your test on-location or remotely. While every test has different qualities and best practices, the following advice works across the board:
• Make the user comfortable — Even the word “test” makes people a little nervous, so put in extra effort to put them at ease. Remind them that you are testing the product, not their capabilities. A test script helps ensure you hit a few reassuring points at the beginning of each test.
• Test competitor products — If applicable, use competitor products as a frame of reference. This distinguishes whether your user’s opinion exists independent of your product, or as a consequence of it.
• Don’t interfere — Unless the user is at a complete standstill, sit back and allow them to figure out the product on their own and make mistakes. This avoids bias and may reveal insights into user behavior you hadn’t predicted. The best insights usually come from when a user isn’t engaging with the product the way it’s designed. Pay attention to workarounds and let them inspire feature improvement.
• Record the session — This makes a solid reference point for later, when interpreting the results.
• Collaborate — If you have a team observing a moderated user test, Tomer Sharon (mentioned above) suggests creating a Rainbow Spreadsheet, a shared observational sheet that allows everyone to record their own interpretations of the data for quick and easy comparisons later. We used his spreadsheet during our Yelp redesign exercise and found it was very helpful for summarizing results for designers and stakeholders.
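The core idea behind a Rainbow Spreadsheet is simply tallying which observations recur across participants. As a rough code-based sketch (with hypothetical participants and observations), the same roll-up looks like this:

```python
# Each observer logs (participant, observation) pairs during sessions.
# Participant IDs and observations below are invented for illustration.
observations = [
    ("P1", "missed the search field"),
    ("P2", "missed the search field"),
    ("P3", "missed the search field"),
    ("P1", "hesitated at checkout"),
    ("P4", "hesitated at checkout"),
    ("P2", "used browser back button"),
]

# Group: observation -> set of participants who exhibited it
by_issue = {}
for participant, issue in observations:
    by_issue.setdefault(issue, set()).add(participant)

# Most widespread behaviors first - a quick summary for the team
for issue, people in sorted(by_issue.items(), key=lambda kv: -len(kv[1])):
    print(f"{len(people)} participant(s): {issue}")
```

Sorting by how many distinct participants hit each issue (rather than raw mention counts) keeps one vocal observer from skewing the summary.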
Testing mobile products especially requires care and attention, given the technical difficulties involved. For advice specific to mobile, read Rosie Sherry’s article, A Field Guide To Mobile App Testing.
Of course, the way you write the actual tasks will also influence the results.
As the name suggests, user tasks are what the user tries to accomplish during the test.
Well-defined user tasks make the difference between an organized procedure and simply “winging it.” More so than simply telling the user what to do or posing questions, a well-written task accounts for the purpose of the test.
Best Practices
Everything you present to your users during the test — both the content of the question/task, as well as the phrasing — impacts how they respond.
Tasks are either open or closed, and your tests should incorporate a healthy mixture of both:
• Closed — A closed task offers little room for interpretation — the user is given a question with clearly defined success or failure (“Find a venue that can seat up to 12 people.”). These produce quantitative and accurate results.
• Open — By contrast, an open question can be completed in several ways. These are “sandbox” style tasks (“Your friends are talking about Optimal Workshop, but you’ve never used it before. Find out how it works.”) These produce qualitative and sometimes unexpected results.
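Because closed tasks have a clear pass/fail, their results roll up into simple quantitative metrics. Here is a minimal sketch with hypothetical results for one closed task:

```python
# Invented results for one closed task across five participants
results = [
    {"participant": "P1", "success": True,  "seconds": 34},
    {"participant": "P2", "success": True,  "seconds": 51},
    {"participant": "P3", "success": False, "seconds": 120},
    {"participant": "P4", "success": True,  "seconds": 47},
    {"participant": "P5", "success": False, "seconds": 95},
]

successes = [r for r in results if r["success"]]
rate = len(successes) / len(results)
avg_time = sum(r["seconds"] for r in successes) / len(successes)

print(f"success rate: {rate:.0%}")              # 60%
print(f"avg time (successful runs): {avg_time:.0f}s")  # 44s
```

Open tasks resist this kind of roll-up; their value is in the qualitative notes and recordings, not in a single number.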
As for the wording, be careful to avoid bias. Just one wrong word can skew results.
For example, if you want to find the most natural ways users browse your online shop, a task like “It’s 10 days before Christmas and you need to search for a gift for your mother” might lead the user straight to the search function, as opposed to their normal method of browsing by clicking around.
3. Present Results
The point of testing is, of course, to collect data that informs design. But if you don’t present the results effectively, or at all, the entire testing process is a waste.
Usability data can mean, and often does mean, the difference between success and failure. For example, Venmo, a money-exchanging app, took data from its analytics and used it to fix a critical error.
But it was not as simple as that. First, the support team brought the problem to the product team, who collaborated with the data team. Analyzed through Looker, the data revealed the problem in a way everyone could understand.
After that, it was a quick fix.
From the Venmo example, which you can read about here, we can identify the two criteria for success at this stage of the process:
• presenting the results in a way that’s helpful
• sharing the results with the team in a uniform way
This phase is about taking raw data — sometimes even just numbers — and turning it into something useful. It’s not enough just to conduct tests; you have to present the results the right way, and then share them.
The usability report is the way to share the results with the team, so that everyone’s on the same page.
First, the usability report is a universal document (or, more accurately, a collection of documents) that everyone on the team can reference. It makes sharing easy, especially with a cloud folder (see below) where everyone can access the same information.
Collaboration, in general, is a key aspect of this stage, if for no other reason than to remove bias. The more people who comment on the notes, the less they’re colored by any one person’s interpretation. As Alla Kholmatova describes, a second evaluator increases problem detection by 30%–43%.
Furthermore, documentation of the test helps down the road. Later in the design process, you may want to draw on the notes of the testing, in which case you’ll be glad for an easily accessible formal report.
Best Practices
To best organize and make the results readily available, we suggest creating a cloud folder with universal access. At UXPin, any team member can read or reference the latest results at any time.
As you write the report, keep the following tips in mind:
• Avoid vagueness — Mentioning that “Users couldn’t buy the right product” isn’t very helpful since multiple factors might be involved. Perhaps the checkout process was difficult, or the product listings were hard to browse. Explain the root of each issue from an interaction design and visual design perspective (e.g. confusing layouts, a checkout process with too many steps, etc.).
• Prioritize issues — Regardless of how many issues you find, people must know what’s most important. We recommend categorizing the report (e.g. Navigation Issues, Layout Issues, etc.) and then adding color tags depending on severity (Low/Medium/High). List every single issue, but don’t blow any out of proportion. For example, don’t claim that a red CTA button led to poor conversion if the real problem is that the steps of the checkout process don’t make sense.
• Include recommendations — You’re the expert and stakeholders need an action plan. Don’t include any hi-fi prototypes or mockups in the usability report, but definitely suggest a few improvements. To supplement written suggestions, our own UX Researcher Ben Kim also links to lo-fi wireframes or prototypes in a UXPin project dedicated to usability testing.
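The category-plus-severity scheme above is easy to keep sorted if issues are logged as structured records. A minimal sketch, with hypothetical issues as the data:

```python
# Severity ranks: lower number = surfaces earlier in the report
SEVERITY_ORDER = {"High": 0, "Medium": 1, "Low": 2}

# Invented example issues, each tagged with a category and severity
issues = [
    {"category": "Layout",     "severity": "Low",
     "issue": "CTA button color blends into the background"},
    {"category": "Navigation", "severity": "High",
     "issue": "Checkout steps appear out of order"},
    {"category": "Navigation", "severity": "Medium",
     "issue": "Search field is hard to find on product pages"},
]

# High-severity items first, so stakeholders see them immediately
for item in sorted(issues, key=lambda i: SEVERITY_ORDER[i["severity"]]):
    print(f"[{item['severity']:<6}] {item['category']}: {item['issue']}")
```

The same records can later be grouped by category for the body of the report, so one log serves both the executive summary and the detailed sections.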
When presenting the results, include any and all relevant materials. The usability report should be a folder, not a single file. Don’t forget to include things like:
• Formal usability report,
• Supporting charts, graphs, and figures,
• Previous testing documentation (e.g., the list of questions the user was asked),
• Videos or audio tracks of the test (which is why it’s good to record sessions).
Finally, do not treat the usability results as a folder meant to be handed off. The documentation is just the starting point. Schedule a follow-up meeting with the team to review the usability report and relevant data, discussing issues and the outlined recommendations.
Takeaway: Test Early and Test Often
User testing insights speak far louder than guesswork and conjecture. They help guide design decisions and serve as powerful evidence to counter people’s opinions.
Don’t wait until the end of the project to conduct your usability testing.
Once you have a lo-fi prototype, start testing. The data is less about validation and more about inspiration: test early, and test often, so you can actually put the results to use before it’s too late.