Data-driven Design Part III - User Testing & Usability Labs
To offer high-quality, ad-free content to readers, some Explained articles contain affiliate links, through which we may earn a small commission at no additional cost to readers.
Designing a product without users is like tailoring a suit without measurements: you might use the finest materials and expert craftsmanship, but without knowing the wearer's dimensions and preferences, the suit won't fit properly, or the wearer simply might not like the style. This is why every product must sooner or later be tested with real users, ideally well before the actual release.
User testing is a fundamental part of the UX design process, crucial for gathering user feedback and refining products to enhance usability. In this article, we'll look at the processes and tools needed for successful user testing, including how these practices can be conducted remotely or asynchronously. We'll also look at AI tools that can speed up the process significantly. With these techniques, designers can foster a human-centric design approach that truly resonates with users, validating assumptions and ensuring that development time is spent on features that actually matter to users.
This article is the third part of Zuzze's data-driven design trilogy:
- Analytics software: Analyze UX on auto-pilot 24/7
- Heuristic Usability Evaluation: UX expert to the rescue
- User testing: Digital product tested by actual users
What is User Testing?
User testing is a research-driven evaluation process where real users interact with a product, service, website, or app to assess its usability, functionality, and overall user experience. During user testing sessions, participants engage with websites, mobile apps, or prototypes, completing predefined tasks while researchers monitor their actions. This process helps identify:
- Usability challenges
- User pain points
- Navigation issues
- Ambiguous language
- Interface design flaws
- Opportunities for UX optimization
Why is User Testing needed?
Nearly 90% of users don't return after a bad user experience. Too often, development teams spend months or even years designing and developing a product in isolation from users, assuming they know what users want. In reality, the product team or decision makers are rarely actual users of the final product. It is therefore crucial to understand the target group and empathize with their actual needs and desires, rather than asking for the opinion of your boss, colleague, or spouse, who probably were not in the product's target group in the first place.
While user testing may look like a waste of resources in the short term, in the long run companies that test with users are more likely to succeed by creating more intuitive, user-friendly products that meet customer needs and expectations. This iterative process of testing and refinement is essential for improving user satisfaction, increasing conversion rates, optimizing costs, and enhancing overall product performance. Testing early and often ensures that user needs are considered at every stage, reducing costly revisions later and improving overall product success.
When is User Testing conducted?
User testing is not limited to final released products; it can be conducted throughout the product development process whenever validation from the users is needed and there is a risk that the current design may not meet the users' expectations. Here's how user testing fits into different stages:
Early Concept Testing
Before creating a full prototype, user testing elements can be combined with user research to gather early feedback on initial ideas or concepts. Methods like interviews, focus groups, or surveys help validate from the start whether the product concept addresses user needs and pain points. This can be done face-to-face or remotely with tools like Google Meet, Google Forms or TestingTime.
Low-Fidelity Prototypes
Early-stage prototypes, such as paper sketches or wireframes, are tested to evaluate basic functionality and concept feasibility. This helps refine the product direction before significant resources are invested. These can be, for example, wireframes created in Miro or Figma.
Mid- and High-Fidelity Prototypes
As the design evolves, interactive prototypes are tested to assess usability, navigation, and workflows. These tests identify specific design flaws and usability issues that need improvement. These can be, for example, clickable prototypes in Figma.
User Acceptance Testing (UAT)
For a product under development, software teams can set up a dedicated UAT (User Acceptance Testing) environment to share the prototype with selected users and collect early feedback before release. This also lets the team test new features with beta testers even after the main product has been released to production.
Post-Launch Testing
User testing continues after the product is launched to gather feedback on real-world usage, ensuring ongoing optimization and alignment with user expectations. In production, testing can also be combined with A/B testing (split testing), where users who land on the page are directed to two different versions of the site, each with analytics installed. A good analytics setup can do most of the hard work on the live site, which is why moderated user testing is less common in production than in earlier phases of product development. For more details on how to achieve this, check out the first part of Zuzze's data-driven design series, Analytics software: Best Tools for Analytics, Heatmaps and Recordings to track UX Metrics.
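As an illustration of how an A/B split can be wired up (this sketch is not from any specific analytics tool; the function and experiment name are hypothetical), a deterministic hash-based assignment keeps each visitor in the same variant across visits, so the analytics for each version stay consistent:

```python
import hashlib

def ab_variant(user_id: str, experiment: str = "landing-page") -> str:
    """Deterministically assign a user to variant A or B.

    Hashing the user id together with the experiment name keeps the
    split stable across visits and independent between experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# The same user always lands in the same variant:
print(ab_variant("user-123"))
```

In practice, dedicated A/B testing or analytics tools handle this assignment (and the statistics) for you; the point is only that the split must be stable per user, not random per page load.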
User Testing Process
User testing involves a variety of methods to capture detailed insights into user interactions and experiences. One of the main approaches to user testing is moderated usability testing, where a facilitator guides participants through tasks and collects real-time feedback. This method allows for immediate clarification and probing into user thoughts and behaviors. Another one is unmoderated testing, where participants complete tasks independently, often using online platforms. Now, let's have a look at both of these in detail.
Moderated user testing
In moderated user testing, the interaction with the user happens in real time, either face-to-face or remotely via a video call.
Step 1: Set goals & tasks
First, a clear set of objectives is established, defining what the test seeks to uncover about user interactions. Example goals include optimizing navigation or analyzing first-time learnability.
The selected tasks should usually be ordered from simplest to most complex to avoid overwhelming users from the start. At the beginning, there can be time for free exploration to ensure the user is comfortable with the setup and the app before the actual testing tasks start. A typical testing session should not last more than 30-60 minutes to ensure users stay focused, which usually translates to 3-5 tasks per user.
The tasks usually should not spell out how to complete them; instead, they should give context so the user can use the product as they would in real life. An example task for a gym app could be: "I'm coming back home after an intense workout week and would like to know how many times I have visited the gym in the past week." This does not say which button to press or which page to use, but reveals crucial information about how users navigate the application in real life.
Step 2: Recruit participants
Next, a representative sample of the target user demographics is recruited. According to NNGroup, a good starting point for optimizing resources is to test with 5 users, which studies suggest uncovers approximately 85% of usability issues. This is a sweet spot: enough to start detecting patterns between users without spending too much time and budget. However, covering all usability issues usually requires around 15 users. If you have the budget for 15 tests, it's recommended to split them into 3 groups so you have time to iterate and react between rounds to maximize the benefits.
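The 5-user figure comes from the Nielsen–Landauer model, in which the share of problems found by n users is 1 − (1 − L)^n, where L is the probability that a single user encounters a given problem (about 31% on average across NNGroup's projects). A quick sketch of the math:

```python
def problems_found(n_users: int, l: float = 0.31) -> float:
    """Share of usability problems uncovered by n users
    (Nielsen-Landauer model, L = average per-user detection rate)."""
    return 1 - (1 - l) ** n_users

for n in (1, 5, 15):
    print(f"{n:>2} users -> {problems_found(n):.1%}")
# 1 user  -> 31.0%
# 5 users -> 84.4%
# 15 users -> 99.6%
```

Note that L varies by product and task complexity, so these percentages are averages, not guarantees.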
Recruitment can happen in many ways like:
- Recruiting participants in-house or people you know (as long as their profile fits the target group)
- Recruiting participants in location (e.g. at the gym for a gym app)
- Recruiting online, e.g. via social media, especially Facebook groups
- Using a third-party service that recruits participants for you for a fee, such as TestingTime
The crucial part is that the recruited users match the target group. Note that in most cases, external users outside your company expect some kind of reward as a thank-you for their time. This can be, for example, money, a gift card, or a free gift that matches the target group (e.g. for gym-goers, a water bottle or a towel).
Step 3: Prepare test environment
Usability lab
A usability lab is a dedicated and controlled environment designed to conduct usability testing and user research. It enables researchers, designers, and developers to observe how real users interact with a product—such as software, websites, mobile apps, or physical devices—and evaluate its usability, functionality, and overall user experience.
A well-designed lab should mimic real-world settings to ensure participant comfort and authentic interactions. Begin by selecting a quiet location, free from distractions, to maintain focus on the tasks at hand. The lab should be equipped with necessary technology, such as high-quality webcams and screen recording software as well as potential eye-tracking software, to capture user interactions and facial expressions accurately. Audio recording devices are important for capturing verbal feedback, especially during think-aloud sessions. Although participants know they're being recorded, it's best to use cameras as subtly as possible.
Comfortable seating arrangements and ergonomic setups can help participants feel at ease, promoting natural behavior during testing. It's beneficial to have an observation area for stakeholders to view sessions without disrupting participants. This can be achieved through a one-way mirror window or live video feed.
On-site testing
In some cases, moderated user testing can also be done in the actual task context if the environment impacts how the product is used. For example, a workout app could be tested with users at the gym to ensure the lab environment does not bias the results and that user behavior stays natural.
Remote testing
Alternatively, moderated user testing can be conducted remotely by recording the screen and moderating the session in a video call like Google Meet or Zoom. This method allows researchers to reach a broad audience, eliminating geographical constraints and reducing logistical complexities. This approach is usually more cost-effective, and with AI note-takers like Fireflies.ai in the call, AI-generated summaries can save hours compared to gathering and analyzing notes manually.
Step 4: Testing & Facilitation
Clear communication is vital, so ensure that facilitators provide concise instructions and remain unobtrusive, while encouraging participants to express themselves freely. Facilitators can employ various methods, such as direct observation or think-aloud protocols, where users verbalize their thoughts during the test. This qualitative data is invaluable for identifying pain points and areas for improvement.
During sessions, facilitators should remain attentive, listening actively to both verbal and non-verbal cues from participants. Asking open-ended questions can elicit deeper insights without leading responses. Facilitators must balance intervention with observation, knowing when to let participants explore independently and when to offer guidance.
Step 5: Analysis
After user testing, it's time to summarize the findings and insights. Reviewing the sessions is essential for capturing nuanced feedback. Nowadays, this step can be sped up significantly by taking the transcript of the recording (e.g. from Google Meet) and analyzing it with AI tools like ChatGPT, Gemini, Claude, or Perplexity.
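Before handing a long transcript to an AI tool, it can help to pre-flag candidate pain points so the prompt can focus on them. A naive, illustrative sketch (the transcript format and the keyword list are assumptions for this example, not output of any specific tool):

```python
# Hypothetical transcript in "MM:SS Speaker: utterance" format.
transcript = """\
00:12 P1: Okay, I'm on the home screen now.
01:05 P1: Hmm, I'm confused, where is the weekly summary?
02:40 P1: Oh, I found it, but that icon is really unclear.
03:15 P1: This part was easy."""

# Naive pain-point markers; a real AI analysis does far better than
# keyword matching, this only pre-filters lines worth highlighting.
MARKERS = ("confused", "unclear", "stuck", "can't", "where is")

pain_points = [
    line for line in transcript.splitlines()
    if any(marker in line.lower() for marker in MARKERS)
]

for line in pain_points:
    print(line)
```

The timestamps kept in each flagged line make it easy to jump back to the corresponding moment in the recording.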
Unmoderated user testing
In unmoderated user testing, the user performs tasks independently and asynchronously, without a real-time human moderator, usually from the comfort of their home. Unmoderated testing is usually faster to execute but may require more preparation than moderated testing. The test can include screenshots of UI elements or prototypes that need validation.
Compared to moderated user testing, the limitations of unmoderated user testing are that participants may not recover from errors without guidance causing unfinished test results. Additionally, participants tend to be less engaged and behave less realistically in tasks that depend on creativity, decision making, or emotional responses.
Step 1: Define goals and tasks
Just like with moderated testing, it's crucial to understand the goal of the study and define the tasks and their metrics. Because a facilitator won't be supporting the user, defining these carefully is essential and often takes more time than in moderated user testing. It's also common practice to include open-ended question(s) at the end, so the user has a chance to explain some of their results in more detail in case they did not understand something or got stuck.
Step 2: Select testing software
Testing software should be selected based on the requirements you set in Step 1. Several tools provide what's needed for unmoderated user testing, often including integrated analytics that let teams track metrics like task completion rates and time on task. Here are some of the tools used in unmoderated testing:
- Maze.co
- Optimal Workshop
- Lookback
- UserBrain
- PlayBookUX
- Userlytics
- dscout
- Userfeel
- UserZoom
- Loop11
- UserTesting
You can find a detailed feature comparison between some of these tools in this analysis by NNGroup. These tools are equipped with features for observing and annotating user sessions, and some also offer additional user research tools like card sorting, surveys, etc.
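If your tool of choice exposes raw results rather than computing the metrics for you, completion rate and time on task are straightforward to derive yourself. A minimal sketch with made-up data (the record format is an assumption for this example):

```python
from statistics import median

# Hypothetical unmoderated test results: one record per participant.
# "seconds" is time on task; None means the participant gave up.
results = [
    {"participant": "P1", "seconds": 42},
    {"participant": "P2", "seconds": 75},
    {"participant": "P3", "seconds": None},  # did not finish
    {"participant": "P4", "seconds": 58},
    {"participant": "P5", "seconds": 66},
]

completed = [r["seconds"] for r in results if r["seconds"] is not None]
completion_rate = len(completed) / len(results)
# Median is preferred over mean for time on task, since a single slow
# participant can skew the average badly.
median_time = median(completed)

print(f"Task completion rate: {completion_rate:.0%}")  # 80%
print(f"Median time on task: {median_time:.0f} s")     # 62 s
```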
Step 3: Pilot test
Before spending money on the actual testing, it's good practice to test the test itself to ensure, for instance, that the instructions are not ambiguous or easily misunderstood. This also checks the overall test flow and shows whether the order of the tasks needs to change. Finally, technology is not always straightforward, and a pilot run will usually reveal technical issues like missing images or prototypes not working correctly. Some of this can be covered by doing the test yourself, but to avoid bias it's better to ask someone else. While it's best to pilot with real users, the pilot tester can also be someone outside the target group.
Step 4: Recruit participants & Test
Many testing platforms can recruit matching testers from their own panel with the help of screening questions you set, but most also allow sending the test link to participants you have recruited outside the platform.
Step 5: Analysis
Most testing platforms provide built-in analysis tools, but you can also analyze the findings manually by taking notes, marking timestamps, and creating visual reports from the results.
How to become a tester?
Most of the platforms above offer money or gifts to users who participate in user testing sessions. If you're looking to earn some extra income on the side, becoming a tester is one option. Typical rewards start from around $1 (simple short surveys) and can go up to several hundred dollars (interviews, screen recordings, on-site testing, etc.). I have personally used the following platforms and confirmed they are legit:
- TestingTime: General surveys & interviews
- GrapeData: Specialized in B2B surveys
- NorstatPanel: Specialized in B2C surveys
Looking for help in creating products users love?
Zuzze is a design and development powerhouse focused on user-centric design aiming to create products that users are queuing up for from the start. Click here to learn more.