
Updated on: 25 Feb 2025

How to Conduct Web Usability Testing: A Practical Guide to Improving Your Website

15 min to read


In the 1980s, Sony introduced a bright yellow Walkman and brought in a focus group to test it. The participants loved it — they said the bold color was modern and exciting. But when asked to choose one to take home, almost everyone picked the black version instead.

Why? Because what people say and what they actually do aren’t always the same.

[Image: Yellow vs. black Walkman Sports. Source]

This is the core lesson of web usability testing: user behavior often contradicts expectations. Surveys and feedback forms can only tell you so much. To truly understand how people interact with your website, you need to observe them in action.

Ever had users tell you your site is "easy to navigate," but then you see them struggling to find the checkout button? That’s why usability testing matters. As experts in UI/UX design and user research, we at Eleken have developed our own approach to testing, and we’ll share it throughout this guide.

In this guide, we’ll walk you through a step-by-step process to help you:

  • Identify usability issues before they cost you conversions
  • Choose the right testing methods and tools
  • Make data-driven design decisions that actually improve user experience

Let’s get started!

What is web usability testing?

Website usability testing is the process of observing real users as they interact with a website to identify usability issues. It’s not about asking people what they think; it’s about watching what they actually do.

At its core, a usability study helps answer crucial questions:

  • Can users easily find what they need?
  • Are there any frustrating roadblocks in their journey?
  • Do they complete tasks quickly and without confusion?

As a result, business owners can identify a variety of usability problems, such as:

  • A clothing store’s checkout button is buried under other content → Users abandon their carts.
  • A contact form asks for too much unnecessary information → Visitors quit halfway.
  • A homepage has too many competing elements → Users leave without clicking anything.

To sum up, if a website is hard to use, visitors won’t stick around — they’ll leave. Usability testing for websites prevents this by catching issues before they cost conversions.

Now that you know what usability testing is, the next challenge is figuring out how to do it effectively. You wouldn’t just throw a user in front of your website and hope for insights. That’s like a doctor running every possible medical test without knowing what they’re looking for.

A good usability test starts with a clear plan. You need to know what you’re testing, who you’re testing with, and how you’ll measure success. Let’s break it down step by step.

Step-by-step guide: How to do website usability testing

Jumping into website user testing without a structure is a recipe for confusing results and wasted effort. If you don’t define your goals, you’ll collect random feedback that doesn’t lead to meaningful improvements. If you test the wrong people, you’ll fix things that weren’t broken in the first place.

To avoid these pitfalls, follow this structured nine-step process. 

Step 1: Define your objectives

Before you run a usability test, you need to know exactly what you’re testing for. If you don’t define a clear goal, you’ll end up with scattered insights, wasted time, and no actionable improvements.

Start with a question

Good UX research starts by answering a specific question, such as:

  • Why are users abandoning their carts before checkout?
  • Are users struggling to find key features in the navigation?
  • Does our mobile checkout take longer than expected?

If you’re testing everything at once, you’re testing nothing at all. Keep it focused.

Bad objective: "I want to see how users interact with my website."
Good objective: "I want to identify why users abandon the checkout process and find ways to reduce drop-offs by 20%."

A well-defined objective in your UX research plan gives you a clear problem to solve and helps you design better tests.

[Image: An example of a research plan you may use for testing]

Step 2: Identify your target audience

Not all feedback is created equal. If you test the wrong users, you’ll end up solving problems that don’t actually matter to your real audience.

Imagine running a usability test for an enterprise analytics dashboard, but all your test participants are casual smartphone users who have never worked with data visualization tools. Their feedback might be valid, but it won’t help improve the experience for actual data analysts who rely on the tool daily.

This is why choosing the right participants is just as important as running the test itself.

Define your ideal test participants

Start by identifying who your real users are:

  • Demographics – Age, location, job role, tech familiarity
  • Behavior – How do they use your site? What tasks do they need to complete?
  • Pain points – What frustrations do they currently have?

For example, when conducting user testing for Kepler by Stradigi, a low-code AI product, we chose existing users with experience creating ML models, which helped us identify which improvements were worth implementing.

[Image: A user flow we created to improve Kepler’s dashboard based on user testing results]

You might be wondering, though, where to find test participants.

Finding testers

Recruiting participants can be tricky, but here are a few reliable methods:

  1. Use your existing users

Your best test subjects are real users already engaging with your website. You can recruit them by:

  • Sending a quick email invitation to active customers
  • Adding a website pop-up offering an incentive for participation
  • Running a short social media call for testers
  2. Leverage usability testing platforms

If you need a broader reach, platforms like UserTesting or Maze let you find participants who match specific criteria. We’ll cover UX research tools later in this article; most of them offer a pool of testers.

[Image: Maze offers a service for finding the right audience]
  3. Pre-screen participants with a quick survey

Not every volunteer will be a good fit. Before you bring someone in for a test, send them a short screening questionnaire to make sure they match your target user profile. Ask relevant user interview questions like:

  • How often do you shop online?
  • What other project management tools have you used?
  • How experienced are you with financial planning apps?

This ensures that your test participants reflect the real-world audience who will actually use your site. 

But what if you can’t find enough testers? The 5-user rule might put your mind at ease. 

What is the rule of 5 usability testing?

The rule of 5 in usability testing is the idea that testing with just five users is enough to uncover 80% of usability issues. This concept, introduced by Jakob Nielsen, is based on research showing that after five participants, each additional tester reveals fewer and fewer new insights.

[Image: The rule of 5 in usability testing explained on a graph. Source]
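
The curve comes from Nielsen and Landauer’s problem-discovery model: the share of usability problems found with n users is 1 - (1 - L)^n, where L is the average probability (about 31% in their studies) that a single user uncovers any given problem. Here’s a quick sketch that reproduces the curve:

```python
# Nielsen & Landauer's problem-discovery model: the share of usability
# problems found with n test users is 1 - (1 - L)^n, where L is the
# probability that a single user uncovers any given problem
# (roughly 0.31 in their original studies).

def problems_found(n_users: int, discovery_rate: float = 0.31) -> float:
    """Fraction of all usability problems uncovered by n_users."""
    return 1 - (1 - discovery_rate) ** n_users

for n in (1, 3, 5, 10, 15):
    print(f"{n:>2} users -> {problems_found(n):.0%} of problems found")
```

With five users, the model lands at roughly 84% of problems found, which is where the "80%" rule of thumb comes from.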

That means you don’t need a huge sample size. Test with more users than that, and you’ll spend extra time for diminishing returns. Here’s how Mister_Anthropy from Reddit explains it:

[Image: A Reddit user explains the rule of 5 in usability testing]

However, if you only have 5–10 users, your focus should be on observations, not overcomplicated usability metrics. Instead of drowning in numbers, track what actually matters:

  • Task completion rate – Can users complete key actions?
  • Time on task – Do users struggle with speed?
  • Errors & frustrations – Where do they get stuck?
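
With a sample this small, these metrics take only a few lines to compute. Here’s a minimal sketch over made-up session records; the participants, timings, and error counts are all hypothetical:

```python
# Hypothetical results from five usability-test sessions for one task.
# Each record: (participant, completed_task, seconds_on_task, error_count)
sessions = [
    ("P1", True,  45, 0),
    ("P2", True,  80, 2),
    ("P3", False, 120, 4),  # gave up before finishing
    ("P4", True,  50, 1),
    ("P5", True,  95, 3),
]

completed = [s for s in sessions if s[1]]

completion_rate = len(completed) / len(sessions)          # task completion rate
avg_time = sum(s[2] for s in completed) / len(completed)  # time on task (successful runs only)
total_errors = sum(s[3] for s in sessions)                # errors & frustrations

print(f"Completion rate: {completion_rate:.0%}")
print(f"Avg. time on task: {avg_time:.1f} s")
print(f"Total errors observed: {total_errors}")
```

Note that time on task is averaged over successful attempts only; an abandoned session says more about *where* the user got stuck than about speed.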

When does the 5-user rule not apply?

The 5-user rule works best for qualitative usability testing, where participants are observed completing tasks. But for quantitative website UX testing — where success rates, time-on-task averages, and quantitative benchmarks are the focus — NN Group recommends at least 40 users for reliable data.

Additionally, the rule of five is most effective when testing a homogeneous user group. If your product serves multiple user segments, separate tests may be needed to capture different behaviors and pain points.
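
As an illustration of why quantitative tests need bigger samples (this is standard sampling math, not NN Group’s exact derivation), here is how the margin of error of a measured task-completion rate shrinks as the sample grows:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion p measured on n users."""
    return z * math.sqrt(p * (1 - p) / n)

# A measured 80% completion rate is far less trustworthy with 5 users
# than with 40: the confidence interval is almost three times as wide.
for n in (5, 20, 40):
    moe = margin_of_error(0.8, n)
    print(f"n={n:>2}: 80% ± {moe:.0%}")
```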

Step 3: Choose the right testing method

Now that you’ve defined your goals and identified your target users, it’s time to decide how you’ll conduct usability testing. There are several UX research methods, and choosing the right one depends on your budget, timeline, and research depth.

Moderated vs. unmoderated usability testing

The first big decision is whether to conduct moderated or unmoderated usability testing. 

Moderated testing is like an interview — a facilitator observes the user, asks follow-up questions, and provides guidance if needed. This method is great for gathering detailed insights about user behavior and frustrations.

Unmoderated testing is more like a take-home exam — users complete tasks independently while their actions are recorded. It’s faster and cheaper but doesn’t allow for real-time clarifications.

Here’s a quick comparison of these two types of usability testing:

[Image: Comparison table of moderated and unmoderated usability testing]

If you want to deeply understand why users struggle, go with moderated testing. If you need broad feedback quickly and affordably, unmoderated testing is a better option.

Remote vs. in-person usability testing

Next, decide whether it will be remote usability testing or in person.

Remote usability testing is conducted online, allowing users to complete tasks from their own devices. It’s cost-effective, convenient, and scalable, making it ideal for websites and apps.

In-person usability testing takes place in a physical setting, where researchers can observe facial expressions, body language, and other subtle cues that remote testing might miss. It’s useful for detailed behavioral analysis but can be more expensive and time-consuming.

[Image: Comparison table of remote and in-person usability testing]

For most websites and apps, remote testing is the go-to choice. There are plenty of tools that allow you to gather insights without requiring users to be in the same location. That’s what we’re going to cover next.

Step 4: Select the best usability testing tools

The right usability testing tool can make your process smoother, faster, and more insightful. But with so many options available, how do you choose the best one?

It all depends on your testing method, budget, and research goals. Below, we’ll break down some of the top tools and what they’re best suited for.

[Image: Comparison table of the best website usability testing tools]

1. Maze – Best for unmoderated usability testing

Maze is a fast, remote usability testing tool that lets you collect insights from users at scale. It’s great for early-stage prototype testing and quick feedback loops.

  • Features: Click tests, A/B tests, heatmaps, misclick tracking
  • Pricing: Free for small tests, paid plans start at $99/month

2. UserTesting – Best for moderated live feedback

UserTesting connects you with real users and lets you conduct live usability tests with video recordings and think-aloud sessions. It’s best for detailed UX research.

  • Features: Video recordings, real-time feedback, panel of participants
  • Pricing: No fixed pricing, custom plans based on usage

3. Hotjar – Best for heatmaps & behavior tracking

Hotjar helps analyze how users interact with your website through heatmaps, session recordings, and surveys. It’s ideal for website usability analysis.

  • Features: Click tracking, scroll depth analysis, feedback polls
  • Pricing: Free for basic use, paid plans start at $32/month

4. Lookback – Best for remote interviews

Lookback is a moderated usability testing tool that allows researchers to observe, record, and interact with participants in real time.

  • Features: Live interviews, screen recording, chat feature
  • Pricing: Starts at $25/month

5. BrowserStack – Best for cross-browser testing

BrowserStack lets you test websites across different browsers and devices to ensure compatibility and usability.

  • Features: Real-device testing, automated screenshots, mobile app testing
  • Pricing: Plans start at $29/month

6. UXtweak – Best for prototype & website testing with AI analysis

UXtweak offers a full-suite usability testing platform with tools for prototype validation, tree testing, and first-click analysis.

  • Features: AI-powered analysis, session replay, survey tools
  • Pricing: Free for small projects, paid plans start at $113/month

7. Useberry – Best for no-code prototype testing

Useberry specializes in prototype usability testing without requiring coding skills. It integrates well with Figma, Adobe XD, and Sketch.

  • Features: Click tracking, path analysis, user flows
  • Pricing: Free for limited tests, paid plans start at $67/month

8. Lyssna (formerly UsabilityHub) – Best for preference tests & design surveys

Lyssna focuses on quick preference tests, first-click tests, and design surveys, making it useful for validating design decisions early on.

  • Features: A/B testing, 5-second tests, preference surveys
  • Pricing: Free plan available, paid plans start at $75/month

9. Userlytics – Best for advanced usability studies with video recordings

Userlytics offers both moderated and unmoderated testing with advanced video and sentiment analysis tools.

  • Features: AI sentiment analysis, screen recording, custom panels
  • Pricing: Custom pricing based on usage

As for Eleken, we mostly use Maze for unmoderated and Lookback for moderated testing. Each of the mentioned tools has its pros and cons, though, so weigh your own needs and available resources when choosing.

For a more comprehensive list, check out our article on 15 best tools for usability testing.

Step 5: Design test scenarios and tasks

A usability test is only as effective as the tasks you give participants. If your test scenarios are vague or unrealistic, you’ll get inconclusive or misleading feedback. Well-designed tasks, on the other hand, provide clear, actionable insights into how users interact with your website.

What are test scenarios and tasks?

  • Test scenarios: Real-world situations that set the context for the usability test (e.g., “You want to buy a pair of running shoes online.”).
  • Tasks: Specific actions users must complete within the scenario (e.g., “Find and purchase a pair of running shoes in size 10.”).

How to write effective test scenarios

A good test scenario should:

  • Be realistic – Reflect actual user goals, not artificial instructions.
  • Avoid leading questions – Don’t hint at the correct answer.
  • Focus on key user flows – Prioritize essential tasks over edge cases.

Here are some examples of test scenarios and tasks:

  1. Scenario: You need a new laptop for work. Task: Find and compare two laptops under $1,000.
  2. Scenario: You’re planning a trip. Task: Search for a round-trip flight from New York to London.
  3. Scenario: You want to cancel a subscription. Task: Locate the cancellation option in your account settings.

For moderated usability testing, you’ll also need to come up with quality user testing questions.

In moderated testing, a facilitator asks questions before, during, and after the test to gather deeper insights. Here’s how you can structure them:

1. Pre-test questions (to understand user background & expectations)

  • Have you used similar websites/apps before?
  • How would you normally complete this type of task?
  • What do you expect to happen when you click [X]?

2. During-test questions (to capture real-time thought processes)

  • What are you looking for right now?
  • Was this where you expected to find [feature]?
  • How confident do you feel about your next step?

3. Post-test questions (to gather overall impressions & pain points)

  • How easy or difficult was this task?
  • What, if anything, frustrated you during this process?
  • If you could change one thing about this experience, what would it be?

One last tip: before running a full usability test, conduct a pilot test with one or two users. This helps you spot unclear instructions, refine your scenarios, and ensure the test runs smoothly.

Step 6: Conduct the usability test

With your test scenarios and questions ready, we’ve finally come to the most interesting part: it’s time to run the website usability test.

Before the test: Set up for success

  • Pilot the test – Run a small trial with a few users (you may ask your team members to participate) to catch unclear instructions or technical issues.
  • Create a distraction-free environment – Make sure participants feel comfortable and aren’t rushed.
  • Check your recording tools – If you’re using screen recording (e.g., Hotjar, Lookback), test it beforehand to avoid data loss.

During the test: Observe, but don’t interfere

For moderated testing, follow these key principles:

  • Encourage users to think aloud – Ask them to describe their thoughts as they complete tasks.
  • Avoid leading questions – Let users navigate naturally without hints.
  • Stay neutral – If they get stuck, resist the urge to help immediately. Instead, ask, “What are you expecting to happen here?”

For unmoderated testing, ensure that:

  • Instructions are crystal clear – Users won’t have a facilitator to clarify them.
  • Tasks mimic real-life scenarios – They should reflect what users normally do, not artificial test cases.
  • Sessions are recorded – This lets you review user interactions later.

What to observe during the test

Whether moderated or unmoderated, focus on:

  • Where users hesitate or get stuck
  • How long key tasks take
  • Any moments of visible frustration (e.g., clicking repeatedly, backtracking, abandoning tasks)
  • Unexpected behaviors – Users often do things designers never anticipate.

After the test: Organize your findings

  • Take notes immediately – Jot down key observations while they’re fresh.
  • Time-stamp key moments – If using video recordings, mark usability issues for easy review.
  • Categorize issues – Organize feedback into themes like navigation problems, unclear labels, or slow task completion.

Step 7: Analyze the results

Once your web usability test is complete, it’s time to make sense of the data. The goal isn’t just to collect insights; it’s to turn them into actionable improvements for your website or product. Here’s what you need to do step-by-step:

1. Organize your findings

Start by categorizing usability issues into common themes:

  • Navigation problems – Users struggled to find key features.
  • Confusing labels – Terminology wasn’t clear or intuitive.
  • Task failures – Users couldn’t complete an essential action.
  • Performance issues – Slow load times or technical bugs affected usability.

As many people in online research communities recommend, use a rainbow spreadsheet to track and visualize patterns across multiple participants.

[Image: An example rainbow spreadsheet used to organize findings from usability testing. Source]
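
A rainbow spreadsheet is essentially a participant-by-issue matrix: one row per issue, one colored column per participant. If you prefer code to spreadsheets, the same tally can be sketched in a few lines (the issues and observations below are hypothetical):

```python
from collections import Counter

# Hypothetical observations: which issues each participant ran into.
observations = {
    "P1": ["checkout button hard to find", "unclear form labels"],
    "P2": ["checkout button hard to find"],
    "P3": ["slow page load", "checkout button hard to find"],
    "P4": ["unclear form labels"],
    "P5": ["checkout button hard to find", "slow page load"],
}

# Count how many participants hit each issue -- the "rainbow" columns.
issue_counts = Counter(issue for issues in observations.values() for issue in issues)

for issue, count in issue_counts.most_common():
    print(f"{count}/{len(observations)} participants: {issue}")
```

An issue hit by four of five participants is almost certainly systemic; one hit by a single participant may just be noise.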

2. Prioritize usability issues

Not all issues have the same impact. Some problems completely block users, while others are minor annoyances. Use a prioritization framework like RICE (Reach, Impact, Confidence, Effort):

  • Reach: How many users are affected?
  • Impact: How badly does it hurt the user experience?
  • Confidence: How certain are we that this is a real issue?
  • Effort: How much work is needed to fix it?

[Image: Example of prioritizing usability issues with the RICE framework]
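
RICE boils down to a single formula: score = (Reach × Impact × Confidence) / Effort. Here’s a minimal sketch for ranking usability issues by that score; the issues and numbers are hypothetical:

```python
# RICE score = (Reach * Impact * Confidence) / Effort.
# Reach: users affected per period; Impact: 0.25-3 scale;
# Confidence: 0-1; Effort: person-weeks. All values below are made up.
issues = [
    {"name": "Checkout button hard to find", "reach": 900,  "impact": 3, "confidence": 0.9, "effort": 1},
    {"name": "Contact form too long",        "reach": 400,  "impact": 2, "confidence": 0.8, "effort": 2},
    {"name": "Cluttered homepage",           "reach": 1200, "impact": 1, "confidence": 0.5, "effort": 4},
]

def rice(issue: dict) -> float:
    return issue["reach"] * issue["impact"] * issue["confidence"] / issue["effort"]

# Highest score first: fix these issues first.
for issue in sorted(issues, key=rice, reverse=True):
    print(f"{rice(issue):7.1f}  {issue['name']}")
```

Notice how the high-reach homepage issue still ranks last: low confidence and high effort drag the score down, which is exactly the trade-off RICE is designed to surface.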

3. Turn insights into action

For each major usability issue, document:

  • What happened? (E.g., “Users couldn’t find the checkout button.”)
  • Why is it a problem? (E.g., “It was positioned in an unexpected location.”)
  • Recommended fix (E.g., “Move the checkout button to the top right, where users expect it.”)

4. Share your findings with stakeholders

Make your results easy to understand for designers, developers, and decision-makers. If you present findings as raw numbers and spreadsheets, they might be ignored. Instead, as many practitioners on Reddit recommend, frame your insights as a story:

  • Screenshots and video clips to highlight key moments
  • User quotes to reinforce pain points
  • A concise summary of critical issues and recommended fixes

Instead of reporting, “Users took longer than expected to complete checkout,” try: “Users struggled to find the checkout button, leading to confusion and frustration. One participant said, ‘I kept looking in the top right corner, but it wasn’t there.’”

This makes it clear, actionable, and memorable.

Step 8: Implement changes

After website usability analysis, the next step is to turn insights into real improvements. Fixing every issue at once is rarely feasible, so it’s important to prioritize changes strategically.

Use your prioritization to guide implementation

Since you’ve already categorized issues based on impact and effort, use this as a roadmap for making changes. Start with high-impact, low-effort fixes — these are the quick wins that can immediately improve usability. More complex changes that require significant development can be planned for future iterations.

Collaborate with designers and developers

Usability testing findings need to be translated into actionable design and development tasks. To make this process smoother:

  • Provide clear recommendations with supporting evidence (e.g., test recordings, heatmaps)
  • Explain the user frustration behind each issue, not just the symptom
  • Work with developers to balance feasibility and usability improvements

Track changes and measure impact

Making changes isn’t the end of the process; you need to measure whether they actually improve usability. After implementing fixes:

  • Track key performance indicators like task completion rates and time on task
  • Gather user feedback to validate improvements

The goal isn’t just to fix problems; it’s to create a continuous feedback loop where usability improvements are regularly tested and refined.

Step 9: Retest and iterate

The UX research process isn’t a one-and-done event. Even after implementing changes, new usability issues can emerge, user expectations may shift, and your product will continue evolving. That’s why usability testing should be an ongoing cycle.

1. Run follow-up tests to validate improvements

After making changes, test again to ensure they’ve had the desired effect. If users still struggle, you may need to refine the solution or try a different approach.

To guide your follow-up tests, ask:

  • Did the changes actually solve the identified usability problems?
  • Are users completing tasks more efficiently and with less frustration?
  • Did any new usability issues arise as a result of the changes?

2. Build usability testing into your design process

Integrate testing into your ongoing product development cycle.

[Image: UX testing in the product development process]

Ways to do this include:

  • Running usability tests before major redesigns to prevent usability issues from creeping in
  • Testing new features early in development to catch problems before launch
  • Scheduling regular usability check-ins (e.g., quarterly tests) to monitor long-term UX trends

3. Keep gathering user feedback

Usability testing is just one way to improve UX. Keep learning from your users through:

  • Customer support inquiries – Common complaints often highlight usability problems
  • On-site surveys – Quick polls can capture pain points in real time
  • Analytics tools – Metrics like bounce rates and session durations can indicate usability issues

By continuously testing, refining, and listening to users, you’re not just fixing problems—you’re shaping a product that evolves with real user needs. Usability isn’t a box to check off; it’s an ongoing conversation between your design and the people who use it.

Conclusion: From testing to transformation

Usability testing isn’t just a process — it’s a mindset. It helps bridge the gap between how you think users interact with your product and how they actually do. When done right, it turns assumptions into insights, roadblocks into opportunities, and frustrating experiences into seamless interactions.

Here’s a quick recap of the usability testing journey:

  1. Define your objectives – Focus on a specific problem or question.
  2. Identify your target audience – Test with the right users, not just any users.
  3. Choose the right testing method – Moderated, unmoderated, remote, or in-person.
  4. Select the best usability testing tools – Use the right software for your needs.
  5. Design test scenarios and tasks – Make them realistic and focused.
  6. Conduct the usability test – Observe without interfering, and document insights.
  7. Analyze the results – Identify patterns and prioritize issues.
  8. Implement changes – Turn insights into action, working with your team.
  9. Retest and iterate – Test again to validate improvements and uncover new issues.

Usability testing is not about seeking perfection — it’s about building a product that evolves with real user needs. The more you test, the more you learn, and the better the experience becomes.

To sum up, if there’s one key takeaway from this guide, it’s this: usability testing is never truly done. It’s an ongoing cycle that ensures your product keeps up with real user needs — not just assumptions.

Need expert guidance? At Eleken, we know how to run usability tests that turn data into real design improvements. If you're looking for a team that understands UX inside and out, let's talk.

Written by: Kateryna Mayka

Senior content writer at Eleken UI/UX design agency. Kateryna has 4 years of experience translating complex design concepts into accessible content for SaaS businesses.
