6 Usability Testing Examples & Case Studies

Interested in analyzing real-world examples of successful usability tests?

In this article, we’ll examine six examples of usability testing that produced substantial results.

Conducting usability testing takes only seven simple steps and does not require a massive budget. Yet it can achieve remarkable results for companies across all industries.

If you’re someone who cannot be convinced by theory alone, this is the guide for you. These are tried-and-tested case studies from well-known companies that showcase the true power of a successful usability test.

Here are the usability testing examples and case studies we’ll be covering in this article:

  • Ryanair
  • McDonald’s
  • SoundCloud
  • AutoTrader.com
  • Udemy
  • Halo: Combat Evolved

Example #1: Ryanair

Ryanair is one of the world’s largest airline groups, carrying 152 million passengers each year. In 2014, the company launched Ryanair Labs, a digital innovation hub seeking to “reinvent online traveling”. To make this dream a reality, they went on a recruiting spree that resulted in a team of 200+ members. This team included user experience specialists, data analysts, software developers, and digital marketers – all working towards a common goal of improving the user experience of the Ryanair website.

What made matters more complicated, however, was that Ryanair’s website and app together received 1 billion visits per year. Working with a website this large, combined with the paper-thin profit margins of around 5% in the airline industry, Ryanair had no room for error. To make matters even more stressful, one of the first missions for the new team was launching an entirely new website with a superior user experience.

To give you a visual idea of what they were up against, take a look at their old website design:

Ryanair’s old website design

Not great, not terrible. But the website undoubtedly needed a redesign for the 21st century.

This is what the Ryanair team set out to accomplish:

  • Reducing the number of steps needed to book a flight on the website;
  • Allowing customers to store their travel documents and payment cards on the website;
  • Delivering a better mobile device user experience for both the website and app.

With these goals in mind, they chose remote, unmoderated usability testing for their user tests. This by itself was a change, as the Ryanair team had relied on in-lab, face-to-face testing until that point.

By collaborating with the UX agency UserZoom, however, new opportunities opened up for Ryanair. With UserZoom’s massive roster of user testers, Ryanair could access large amounts of qualitative and quantitative usability data – data that they badly needed during the design process of the new website.

By going with remote unmoderated usability testing, the Ryanair team managed to:

  • Reduce the time spent on usability testing;
  • Conduct simultaneous usability tests with hundreds of users and without geographical barriers;
  • Increase the overall reach and scale of the tests;
  • Carry out tests across many devices, operating systems, and multiple focus groups.

With continuous user testing, the new website was taken through alpha and beta testing in 2015. The end result of all this work was the vastly improved look, functionality, and user experience of the new website:

Ryanair's new website design

Even before launch, Ryanair knew that the new website was superior. Usability tests had shown that to be the case and they had no need to rely on “educated guesses”. This usability testing example demonstrates that a well-executed testing plan can give remarkable results.

Source: Ryanair case study by UserZoom

Example #2: McDonald’s

McDonald’s is one of the world’s largest fast-food restaurant chains, with a staggering 62 million daily customers. Yet, McDonald’s was late to embrace the mobile revolution, as its smartphone app only launched in August 2015. In comparison, Starbucks’ smartphone app was already a booming success and accounted for 20% of its overall revenue in 2015.

Considering the competition, McDonald’s had some catching up to do. Before the launch of their app in the UK, they decided to hire UK-based SimpleUsability to identify any usability problems before release. The test plan involved conducting 20 usability tests, where the task scenarios covered the entire customer journey from end to end. In addition to that, the test plan included 225 end-user interviews.

Not exactly a large-scale usability study considering the massive size of McDonald’s, but it turned out to be valuable nonetheless. A number of usability issues were detected during the study:

  • Poor visibility and interactivity of the call-to-action buttons;
  • Communication problems between restaurants and the smartphone app;
  • Lack of order customization and favoriting impaired the overall user experience.

Here’s what the McDonald’s mobile app looks like today:

McDonald’s mobile app

This case study demonstrates that investing even a tiny percentage of a company’s resources into usability testing can result in meaningful insights.

Source: McDonald’s case study by SimpleUsability

Example #3: SoundCloud

SoundCloud is the world’s largest music and audio distribution platform, with over 175 million unique monthly listeners. In 2019, SoundCloud hired test IO, a Berlin-based usability testing agency, to conduct continuous usability testing for the SoundCloud mobile app. With SoundCloud’s rigorous development schedule, the company needed regular human user testers to make sure that all new updates worked across all devices and OS versions.

The key research objectives for SoundCloud’s regular usability studies were to:

  • Provide a user-friendly listening experience for mobile app users;
  • Identify and fix software bugs before wide release;
  • Improve the mobile app development cycle.

In the very first usability tests, more than 150 usability issues (including 11 critical issues) were discovered. These issues likely wouldn’t have been caught through internal bug testing alone, because the user testers ran the app on a plethora of devices and from many geographical locations (144 devices across 22 countries). Without remote usability testing, a testing effort of this scale would have been very difficult and expensive to achieve.

Today, SoundCloud’s mobile app looks like this:

SoundCloud usability testing example

This case study demonstrates the power of regular usability testing in products with frequent updates. 

Source: SoundCloud case study (.pdf) by test IO

Example #4: AutoTrader.com

AutoTrader.com is one of the world’s largest online marketplaces for buying and selling used cars, with over 28 million monthly visitors. The mission of AutoTrader’s website is to empower car shoppers during the research process by giving them all the tools necessary to make informed decisions about vehicle purchases.

Sounds fantastic.

However, with competitors such as CarGurus gaining increasing amounts of market share in the online car shopping industry, AutoTrader had to reinvent itself to stay competitive.

In e-commerce, competitors with a superior website can gain massive followings in an instant. Fifty years ago this was not the case – well-established car marketplaces had massive car parks all over the country, and a newcomer had few ways to compete.

Nowadays, however, it’s all about user experience. Digital shoppers will flock to whichever site offers a better user experience. Websites unwilling or unable to improve their user experience over time will get left in the dust. No matter how big or small they are.

Going back to AutoTrader, the majority of its website traffic comes from organic Google search, meaning that in addition to website usability, search engine optimization (SEO) is a major priority for the company. According to John Mueller from Google, changing the layout of a website can affect rankings, which is why AutoTrader had to be careful about making any large-scale changes to its website.

AutoTrader did not have a large team of user researchers nor a massive budget dedicated to usability testing. But they did have Bradley Miller – Senior User Experience Researcher at the company. To test the usability of AutoTrader, Miller decided to partner with UserTesting.com to conduct live user interviews with AutoTrader users.

Through these live user interviews, Miller was able to:

  • Find and connect with target personas;
  • Communicate with car buyers from across the country;
  • Reduce the costs of conducting usability tests while increasing the insights gained.

From these remote live interviews, Miller learned that the customer journey almost always begins at a single source: search engines. Here, it’s important to note that search engines rarely direct users to the homepage. Instead, they drive traffic to the inner pages of websites. In the case of AutoTrader, for example, only around 20% of search engine traffic goes to the homepage (data from SEMrush).

These insights helped AutoTrader redesign their inner pages to better match the customer journey. They no longer assumed that inner-page visitors arrived with contextual knowledge of the website. Instead, they started treating each page as a potential point of entry by providing more contextual information right there on the inner page.

This usability testing example demonstrates not only the power of user interviews but also the importance of understanding your customer journey and SEO.

Source: AutoTrader case study by UserTesting.com

Example #5: Udemy

Udemy is one of the world’s largest online learning platforms, with over 40 million students across the world. The e-learning giant also has a massively popular smartphone app, and this usability testing example focused on Udemy’s smartphone users.

To find out when, where, and why users chose the mobile app over the desktop version, Udemy conducted user tests. As Udemy is a 100% digital company, it chose fully remote, unmoderated user testing as its testing method.

Test participants were asked to record short videos showing where they were located and what tasks they were focused on at the time of learning and recording.

What the researchers found was that their initial theory – “users prefer the mobile app while on the go” – was false. The majority of mobile app users were in fact stationary, using the app at home on the couch or in a cafeteria. The key findings of this user test informed the next year’s product and feature development.

This is what Udemy’s mobile app looks like today:

Udemy’s mobile app

This usability testing case study demonstrates that a company’s perception of target audience behavior does not always match the behavior of the real end-users. And, that is why user testing is crucial.

Source: Udemy case study by UserTesting.com

Example #6: Halo: Combat Evolved

“Halo: Combat Evolved” was the first video game in the massively popular Halo franchise. It was developed by Bungie and published by Microsoft Game Studios in 2001. Within 10 years of its release, the Halo games had sold more than 46 million copies worldwide and generated more than $5 billion in video game and hardware sales for Microsoft. Crediting it all to the usability test we’re about to discuss may be a bit of a stretch, but usability testing the game during development was undeniably one of the factors that helped the franchise take off like a rocket.

In this usability study, the Halo team gathered a focus group of console gamers to try out a prototype of the game and see whether they had fun playing it. And if they did not have fun, the team wanted to find out what prevented them from doing so.

In the usability sessions, the researchers placed test subjects (players) in a large outdoor environment with enemies waiting for them across the open space.

The designers of the game expected the players to sprint closer towards the enemies, sparking a massive battle full of action and excitement. But the test participants had a different plan in mind. Instead of putting themselves in danger by sprinting closer, they stayed at maximum distance from the enemies and shot from far across the outdoor space. While this was a safe and effective strategy, it proved to be rather uneventful and boring for the players.

To entice players to engage in combat up close, the user researchers decided that changes would have to be made. Their solution: changing the size and color of the aiming indicator in the center of the screen to notify players when they were too far away from enemies.

Here, you can see the finalized aiming indicator in action:

The finalized aiming indicator in Halo: Combat Evolved

Subsequent usability tests proved these changes to be effective, as the majority of user testers now engaged in combat from a closer distance.
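As a toy illustration, the distance-based reticle feedback described above boils down to a simple rule: when the target is beyond effective range, change the indicator's appearance to nudge the player closer. This is a hypothetical sketch with invented values, not Bungie's actual implementation:

```python
# Hypothetical sketch of distance-based reticle feedback. The range
# value and draw attributes are invented for illustration; Bungie's
# real implementation is not public in this form.

ENGAGE_RANGE = 30.0  # assumed effective combat range, arbitrary units

def reticle_state(distance_to_target: float) -> dict:
    """Return how the aiming indicator should be drawn at a given range."""
    if distance_to_target > ENGAGE_RANGE:
        # Target too far away: shrink and grey out the indicator,
        # signaling the player to move closer before engaging.
        return {"color": "grey", "scale": 0.6}
    # In range: full-size red indicator signals the player can engage.
    return {"color": "red", "scale": 1.0}

print(reticle_state(45.0))  # prints {'color': 'grey', 'scale': 0.6}
print(reticle_state(12.0))  # prints {'color': 'red', 'scale': 1.0}
```

The point of the mechanic is that the feedback is continuous and ambient: players learn the effective range without ever reading a tutorial screen.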

User testing is not restricted to any particular industry, OS, or platform. Testing user experience is an invaluable tool for any product – not just for websites or mobile apps. 

This example of usability testing from the video game industry shows that players (users) will optimize the fun out of a game if given the chance. It’s up to the designers to bring the fun back through well-designed game mechanics and notifications.

Source: “Designing for Fun – User-Testing Case Studies” by Randy J. Pagulayan


Usability Testing Case Studies: Validate Assumptions and Build Software with Confidence

We define usability and examine some usability testing case studies to demonstrate the benefits.  

As we’ve said before, one of the most important benefits of software prototyping is the early ability to conduct usability testing. The truth of the matter is that no one will use your product if it’s not easy and intuitive or if it doesn’t solve a problem that users have in the first place.

The easiest way to make sure your software project meets these requirements is with usability testing, and the most effective way to implement usability testing early in the development process is with a prototype .

What Is Usability Testing?

Usability testing is the process of studying potential end-users as they interact with a product prototype. It occurs before you develop and launch a product, and is an essential planning step that can guide a product’s features, functions, and purpose. Developing with a clear purpose and research-based data ensures your goals and plans align with what end users want and need – and, as a result, that your product is more likely to succeed. Usability testing is a type of user research, and like all user research it is instrumental in building more informed products that contribute to a business’ long-term success.

Intentionally observing real-life people as they interact with a product is an important step in effective user experience design that should not be missed. Without usability testing, it’s very difficult to determine or validate that your product will provide something people are willing to pay for. Companies that don’t invest in this type of upfront testing often create products that are built around their own goals, as opposed to those of their customers, which do not always align. People don’t simply want products just because they exist, and users sometimes approach applications in unexpected ways. Thus, usability testing is key for confidence building during product development.

In this post, we look at a few usability testing examples to illustrate how the process works and why it’s so essential to the overall development process.


User Testing Case Studies

Usability Testing Case Study #1: Cisco – Usability Testing for User Experience

We worked with Cisco’s developer program group to craft a new, more immersive user experience for Cisco DevNet, their developer resources website. Their usability case study illustrates how we tackled their challenge, and the instrumental role that an effective prototyping strategy played in the process.

The Challenge

The depth and breadth of content on Cisco’s DevNet had spawned hundreds of micro-sites, each with different organizational structures and their own navigation paradigms. Existing visitors to the site would only visit a few specific pages, meaning they were never exposed to newly released tools and technologies. Also, new visitors struggled to discover where to begin or how to find the resources most relevant to them. Users were missing out on a lot of valuable resources, and the user experience was less than ideal.

ClickModel® Usability Testing

Cisco wanted to implement a new user experience to the homepage of DevNet in order to make it easier to dive from the homepage deep within the site’s resources to find information on a particular tool or technology. We were charged with prototyping the proposed user experience, so that Cisco could conduct usability testing with developer focus groups. To build our prototype, we implemented our ClickModel tool.

At Praxent, prototyping the user experience allows stakeholders and users to give feedback before the software development process begins.

Confidence to Move Forward with Development

The ClickModel prototype emulated the new site that would appear to users. The prototype prompted insightful feedback from the developer focus groups regarding both the proposed information architecture and the priority and placement of various navigational elements on the homepage and subsequent interior landing pages. The prototype also made it easier to collect feedback on the utility of a proposed color-coding scheme for sorting resources into major technology categories.

This feedback and testing allowed Cisco’s DevNet project to course correct in the Structure, Skeleton, and Surface areas before they spent significant money building in the wrong direction. Cisco took their prototype in-house and moved forward decisively and with confidence to create better resources for the developer community.

DeveloperProgram.com runs developer programs for some of the world’s largest technology and telecoms companies. We rely on our partner Praxent who understands our business, our clients, the developer’s needs, and are able to articulate that into a portal design that is easy to navigate and understand, with the foresight to create an infrastructure that allows for untethered growth. The design team is a pleasure to work with, quickly comprehending our needs and converting that to tangible deliverables, on time and always outstanding.

— Steve Glagow, Executive Vice President • DeveloperProgram.com

Usability Testing Case Study #2: NORCAL

Responsive Data Displays with Usability Testing

In the wake of a corporate merger, NORCAL, a provider of medical professional liability insurance, was looking to build a new online portal. The portal would allow their insurance brokers to review their book of business and track which policyholders were behind on payments. Their billing department was inundated with phone inquiries from brokers who needed information about specific policyholder accounts, which was hindering their ability to attend to important billing tasks.

NORCAL’s insurance brokers are constantly on the go, so it was crucial that the proposed portal not only be accessible on smartphones and tablets, but be optimized specifically for use on those devices.

A native app solution was discussed, but NORCAL determined early on that they wanted to invest in a responsive web application that could be accessed on desktops and mobile devices by both their internal teams and brokers in the field.

Prototyping to the Rescue

The primary user experience challenge tackled during the engagement was how to display complex data tables in a way that would be equally useful on large desktop screens and handheld smartphone screens. Since multi-touch smartphone devices don’t have cursors, they can’t display information using hover states the way a desktop computer can.

During the ClickModel process, we prototyped various on- and off-screen methods of data interaction displays for NORCAL’s team to review and test. This provided a few real-life usability testing examples of how they might tackle their problem.

Praxent prototypes the user experience across smartphone, tablet, laptop, and desktop devices to arrive at a responsive web design that works in various contexts.

Interacting with the clickable, tappable prototype on both desktop and mobile devices gave NORCAL crucial insight to determine what pieces of data were most essential to be displayed on the smaller smartphone screens and which additional data fields would be displayed only on desktop screens.

The ClickModel iterative prototyping process provided a clear-cut way for stakeholders from billing, marketing, and engineering to communicate effectively about the user experience. This led to important consensus and direction regarding feature requirements and scope, which was able to guide their project as they moved forward.

What Next? Getting Started With Usability Testing Studies for UX

As you can see, there are many benefits to having a prototype that looks, feels, and acts real. In the two usability testing case studies above, ClickModel was an effective tool for building such prototypes, helping clients garner the information and data-backed insight they needed to proceed with confidence. Learn more about our testing process, and how it also leads to the reliable project estimates that are so important as you move forward with development.



The Beginner’s Guide to Usability Testing [+ Sample Questions]

Clifford Chi

Published: July 28, 2021

In practically any discipline, it's a good idea to have others evaluate your work with fresh eyes, and this is especially true in user experience and web design. Otherwise, your partiality for your own work can skew your perception of it. Learning directly from the people that your work is actually for — your users — is what enables you to craft the best user experience possible.


UX and design professionals leverage usability testing to get user feedback on their product or website’s user experience all the time. In this post, you'll learn:

  • What usability testing is
  • Its purpose and goals
  • Scenarios where it can work
  • Real-life examples and case studies
  • How to conduct one of your own
  • Scripted questions you can use along the way

What is usability testing?

Usability testing is a method of evaluating a product or website’s user experience. By testing the usability of their product or website with a representative group of their users or customers, UX researchers can determine if their actual users can easily and intuitively use their product or website.

UX researchers will usually conduct usability studies on each iteration of their product from its early development to its release.

During a usability study, the moderator asks participants in their individual user session to complete a series of tasks while the rest of the team observes and takes notes. By watching their actual users navigate their product or website and listening to their praises and concerns about it, they can see when the participants can quickly and successfully complete tasks and where they’re enjoying the user experience, encountering problems, and experiencing confusion.

After conducting their study, they’ll analyze the results and report any interesting insights to the project lead.
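To make the note-taking and analysis concrete, here's a minimal sketch of how task results from one session might be recorded and tallied. The data structure, task names, and numbers are illustrative assumptions, not any particular team's tooling:

```python
# Illustrative record of one participant's usability session.
from dataclasses import dataclass

@dataclass
class TaskResult:
    task: str        # task the participant was asked to complete
    completed: bool  # did they finish it successfully?
    seconds: float   # time spent on the task
    notes: str       # observer's free-form comments

# Hypothetical observations from a single session.
session = [
    TaskResult("Find the pricing page", True, 42.0, "Hesitated at the nav bar"),
    TaskResult("Request a demo", False, 180.0, "Gave up after three clicks"),
]

completion_rate = sum(r.completed for r in session) / len(session)
print(f"Completion rate: {completion_rate:.0%}")  # prints "Completion rate: 50%"
```

Keeping the observer's free-form notes next to the pass/fail outcome is what lets the team explain *why* a task failed, not just that it did.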


What is the purpose of usability testing?

Usability testing allows researchers to uncover any problems with their product's user experience, decide how to fix these problems, and ultimately determine if the product is usable enough.

Identifying and fixing these early issues saves the company both time and money: Developers don’t have to overhaul the code of a poorly designed product that’s already built, and the product team is more likely to release it on schedule.

Benefits of Usability Testing

Usability testing has five major advantages over the other methods of examining a product's user experience (such as questionnaires or surveys):

  • Usability testing provides an unbiased, accurate, and direct examination of your product or website’s user experience. By testing its usability on a sample of actual users who are detached from the amount of emotional investment your team has put into creating and designing the product or website, their feedback can resolve most of your team’s internal debates.
  • Usability testing is convenient. To conduct your study, all you have to do is find a quiet room and bring in portable recording equipment. If you don’t have recording equipment, someone on your team can just take notes.
  • Usability testing can tell you what your users do on your site or product and why they take these actions.
  • Usability testing lets you address your product’s or website’s issues before you spend a ton of money creating something that ends up having a poor design.
  • For your business, intuitive design boosts customer usage and their results, driving demand for your product.

Usability Testing Scenario Examples

Usability testing sounds great in theory, but what value does it provide in practice? Here's what it can do to actually make a difference for your product:

1. Identify points of friction in the usability of your product.

As Brian Halligan said at INBOUND 2019, "Dollars flow where friction is low." This is just as true in UX as it is in sales or customer service. The more friction your product has, the more reason your users will have to find something that's easier to use.

Usability testing can uncover points of friction from customer feedback.

For example: "My process begins in Google Drive. I keep switching between windows and making multiple clicks just to copy and paste from Drive into this interface."

Even though the product team may have had that task in mind when they created the tool, seeing it in action and hearing the user's frustration uncovered a use case that the tool didn't account for. It might lead the team to solve this problem by creating an easy import feature or a way to access Drive within the interface, reducing the number of clicks the user needs to make to accomplish their task.

2. Stress test across many environments and use cases.

Our products don't exist in a vacuum, and sometimes development environments can't account for every variable. Getting the product out and tested by users can uncover bugs that you may not have noticed while testing internally.

For example: "The check boxes disappear when I click on them."

Let's say that the team investigates why this might be, and they discover that the user is on a browser that's not commonly used (or a browser version that's outdated).

If the developers only tested across the browsers used in-house, they may have missed this bug, and it could have resulted in customer frustration.

3. Provide diverse perspectives from your user base.

While individuals in our customer bases have a lot in common (in particular, the things that led them to need and use our products), each individual is unique and brings a different perspective to the table. These perspectives are invaluable in uncovering issues that may not have occurred to your team.

For example: "I can't find where I'm supposed to click."

Upon further investigation, it's possible that this feedback came from a user who is color blind, leading your team to realize that the color choices did not create enough contrast for this user to navigate properly.

Insights from diverse perspectives can lead to design, architectural, copy, and accessibility improvements.

4. Give you clear insights into your product's strengths and weaknesses.

You likely have competitors in your industry whose products are better than yours in some areas and worse than yours in others. These variations in the market lead to competitive differences and opportunities. User feedback can help you close the gap on critical issues and identify what positioning is working.

For example: "This interface is so much easier to use and more attractive than [competitor product]. I just wish that I could also do [task] with it."

Two scenarios are possible based on that feedback:

  • Your product can already accomplish the task the user wants. You just have to make it clear that the feature exists by improving copy or navigation.
  • You have a really good opportunity to incorporate such a feature in future iterations of the product.

5. Inspire you with potential future additions or enhancements.

Speaking of future iterations, that comes to the next example of how usability testing can make a difference for your product: The feedback that you gather can inspire future improvements to your tool.

It's not just about rooting out issues but also envisioning where you can go next that will make the most difference for your customers. And who best to ask but your prospective and current customers themselves?

Usability Testing Examples & Case Studies

Now that you have an idea of the scenarios in which usability testing can help, here are some real-life examples of it in action:

1. User Fountain + Satchel

Satchel is a developer of education software, and their goal was to improve the experience of the site for their users. Consulting agency User Fountain conducted a usability test focusing on one question: "If you were interested in Satchel's product, how would you progress with getting more information about the product and its pricing?"

During the test, User Fountain noted significant frustration as users attempted to complete the task, particularly when it came to locating pricing information. Only 80% of users were successful.

Usability Test Example: User Fountain + Satchel

Image Source

This led User Fountain to create the hypothesis that a "Get Pricing" link would make the process clearer for users. From there, they tested a new variation with such a link against a control version. The variant won, resulting in a 34% increase in demo requests.

By testing a hypothesis based on real feedback, friction was eliminated for the user, bringing real value to Satchel.
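The reported lift is simple arithmetic. As a back-of-the-envelope check (the conversion rates below are invented to match the reported 34% figure, not User Fountain's raw data):

```python
# Relative lift of a variant over a control, as used in A/B tests.

def lift(control_conv: float, variant_conv: float) -> float:
    """Relative improvement of the variant's conversion over the control's."""
    return (variant_conv - control_conv) / control_conv

# Hypothetical rates: control converts 5.0% of visitors, variant 6.7%.
improvement = lift(0.050, 0.067)
print(f"{improvement:.0%} more demo requests")  # prints "34% more demo requests"
```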

2. Kylie.Design + Digi-Key

Ecommerce site Digi-Key approached consultant Kylie.Design to uncover which site interactions had the highest success rates and what features those interactions had in common.

They conducted more than 120 tests and recorded:

  • Click paths from each user
  • Which actions were most common
  • The success rates for each

Usability Test Example: Kylie.Design + Digi-Key

This data, along with the written and verbal feedback provided by participants, informed the new design, which increased purchaser success rates from 68.2% to 83.3%.

In essence, Digi-Key was able to identify their most successful features and double down on them, improving both the experience and their bottom line.
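When you report success-rate lifts like these, it helps to check whether the difference could plausibly be sampling noise. Here is a rough sketch of a two-proportion z-test in Python; the per-variant sample sizes and counts below are illustrative, not Digi-Key's actual data:

```python
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-proportion z-test: is the difference between two
    task success rates likely to be more than sampling noise?"""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Illustrative numbers only: 120 sessions split across two designs
z, p = two_proportion_z(success_a=41, n_a=60,   # ~68% on the old design
                        success_b=50, n_b=60)   # ~83% on the new design
print(f"z = {z:.2f}, p = {p:.3f}")
```

With only 60 sessions per variant, even a 15-point lift hovers around the conventional significance threshold, which is one reason quantitative usability claims need decent sample sizes.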

3. Sparkbox + An Academic Medical Center

An academic medical center in the Midwest partnered with consulting agency Sparkbox to improve the patient experience on their homepage, where some features were suffering from low engagement.

Sparkbox conducted a usability study to determine what users wanted from the homepage and what didn't meet their expectations. From there, they were able to propose solutions to increase engagement.

Usability Test Example: Sparkbox + Medical Center

For example, one key action was the ability to access electronic medical records. The new design based on user feedback increased the success rate from 45% to 94%.

This is a great example of putting the user's pains and desires front and center in a design.

The 9 Phases of a Usability Study

1. Decide which part of your product or website you want to test.

Do you have any pressing questions about how your users will interact with certain parts of your design, like a particular interaction or workflow? Or are you wondering what users will do first when they land on your product page? Gather your thoughts about your product or website’s pros, cons, and areas of improvement, so you can create a solid hypothesis for your study.

2. Pick your study’s tasks.

Your participants' tasks should reflect your users' most common goals when they interact with your product or website, like making a purchase.

3. Set a standard for success.

Once you know what to test and how to test it, make sure to set clear criteria to determine success for each task. For instance, when I was in a usability study for HubSpot’s Content Strategy tool, I had to add a blog post to a cluster and report exactly what I did. Setting a threshold of success and failure for each task lets you determine if your product's user experience is intuitive enough or not.

4. Write a study plan and script.

At the beginning of your script, you should include the purpose of the study, if you’ll be recording, some background on the product or website, questions to learn about the participants’ current knowledge of the product or website, and, finally, their tasks. To make your study consistent, unbiased, and scientific, moderators should follow the same script in each user session.

5. Delegate roles.

During your usability study, the moderator has to remain neutral, carefully guiding the participants through the tasks while strictly following the script. Whoever on your team is best at staying neutral, not giving in to social pressure, and making participants feel comfortable while pushing them to complete the tasks should be your moderator.

Note-taking during the study is also just as important. If there’s no recorded data, you can’t extract any insights that’ll prove or disprove your hypothesis. Your team’s most attentive listener should be your note-taker during the study.

6. Find your participants.

Screening and recruiting the right participants is the hardest part of usability testing. Most usability experts suggest testing only five participants per study, but your participants should also closely resemble your actual user base. With such a small sample size, it's hard to replicate your actual user base in your study.

To recruit the ideal participants for your study, create the most detailed and specific persona you possibly can, and incentivize them to participate with a gift card or another monetary reward.

Recruiting colleagues from other departments who would potentially use your product is another option. But you don't want any of your team members to know the participants, because their personal relationship can create bias -- since they want to be nice to each other, the researcher might help a user complete a task, or the user might not want to constructively criticize the researcher's product design.

7. Conduct the study.

During the actual study, you should ask your participants to complete one task at a time, without your help or guidance. If the participant asks you how to do something, don’t say anything. You want to see how long it takes users to figure out your interface.

Asking participants to “think out loud” is also an effective tactic -- you’ll know what’s going through a user’s head when they interact with your product or website.

After they complete each task, ask for their feedback, like if they expected to see what they just saw, if they would’ve completed the task if it wasn’t a test, if they would recommend your product to a friend, and what they would change about it. This qualitative data can pinpoint more pros and cons of your design.

8. Analyze your data.

You’ll collect a ton of qualitative data after your study. Analyzing it will help you discover patterns of problems, gauge the severity of each usability issue, and provide design recommendations to the engineering team.

When you analyze your data, make sure to pay attention to both the users’ performance and their feelings about the product. It’s not unusual for a participant to quickly and successfully achieve your goal but still feel negatively about the product experience.
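One simple way to gauge severity during analysis is to weight each issue by how badly it blocks users and by how many participants hit it. The sketch below is purely illustrative; the issue names, severity ratings, and counts are invented for demonstration:

```python
# Illustrative sketch: rank usability issues by severity x frequency.
# Issue names and scores below are made up for demonstration.
issues = [
    {"issue": "Could not find pricing link", "severity": 3, "hit_by": 4, "n": 5},
    {"issue": "Confusing form labels",       "severity": 2, "hit_by": 3, "n": 5},
    {"issue": "Slow page load noticed",      "severity": 1, "hit_by": 5, "n": 5},
]

for item in issues:
    # Simple priority score: severity (1-3 scale) weighted by the
    # share of participants who ran into the problem.
    item["priority"] = item["severity"] * item["hit_by"] / item["n"]

ranked = sorted(issues, key=lambda i: i["priority"], reverse=True)
for item in ranked:
    print(f'{item["priority"]:.1f}  {item["issue"]}')
```

A ranking like this gives the engineering team a defensible order of attack instead of a flat list of complaints.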

9. Report your findings.

After extracting insights from your data, report the main takeaways and lay out the next steps for improving your product or website’s design and the enhancements you expect to see during the next round of testing.

The 3 Most Common Types of Usability Tests

1. Hallway/Guerilla Usability Testing

This is where you set up your study somewhere with a lot of foot traffic. It allows you to ask randomly selected people who have most likely never even heard of your product or website -- like passers-by -- to evaluate its user experience.

2. Remote/Unmoderated Usability Testing

Remote/unmoderated usability testing has two main advantages: it uses third-party software to recruit target participants for your study, so you can spend less time recruiting and more time researching. It also allows your participants to interact with your interface by themselves and in their natural environment -- the software can record video and audio of your user completing tasks.

Letting participants interact with your design in their natural environment with no one breathing down their neck can give you more realistic, objective feedback. When you're in the same room as your participants, they may put more effort into completing your tasks because they don't want to seem incompetent around an expert. Your perceived expertise can also lead them to try to please you instead of being honest when you ask for their opinion, skewing their reactions and feedback about your user experience.

3. Moderated Usability Testing

Moderated usability testing also has two main advantages: interacting with participants in person or through a video call lets you ask them to elaborate on their comments if you don't understand them, which is impossible to do in an unmoderated usability study. You'll also be able to help your users understand the task and keep them on track if your instructions don't initially register with them.

Usability Testing Script & Questions

Following one script or even a template of questions for every one of your usability studies wouldn't make any sense -- each study's subject matter is different. You'll need to tailor your questions to the things you want to learn, but most importantly, you'll need to know how to ask good questions.

1. When you [action], what's the first thing you do to [goal]?

Questions such as this one give insight into how users are inclined to interact with the tool and what their natural behavior is.

Julie Fischer, one of HubSpot's Senior UX researchers, gives this advice: "Don't ask leading questions that insert your own bias or opinion into the participants' mind. They'll end up doing what you want them to do instead of what they would do by themselves."

For example, "Find [x]" is better than "Are you able to easily find [x]?" The latter inserts a connotation that may affect how participants use the product or answer the question.

2. How satisfied are you with the [attribute] of [feature]?

Avoid leading the participants by asking questions like "Is this feature too complicated?" Instead, gauge their satisfaction on a Likert scale that provides a number range from highly unsatisfied to highly satisfied. This will provide a less biased result than leading them to a negative answer they may not otherwise have had.
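As a rough illustration of how you might summarize those Likert answers afterward (the question wording and response values below are hypothetical), report the distribution and a "top-2-box" share rather than just the average:

```python
from statistics import mean, median
from collections import Counter

# Hypothetical 1-5 Likert responses to "How satisfied are you with
# the speed of the search feature?"; 1 = highly unsatisfied.
responses = [4, 5, 3, 4, 2, 4, 5, 3]

print("mean:", round(mean(responses), 2))    # central tendency
print("median:", median(responses))          # robust to skewed answers
print("distribution:", Counter(responses))   # full shape, not just the average

# "Top-2-box": share of respondents answering 4 or 5
top2 = sum(r >= 4 for r in responses) / len(responses)
print(f"top-2-box: {top2:.0%}")
```

Reporting the full distribution guards against a middling mean hiding a polarized split between delighted and frustrated users.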

3. How do you use [feature]?

There may be multiple ways to achieve the same goal or utilize the same feature. This question will help uncover how users interact with a specific aspect of the product and what they find valuable.

4. What parts of [the product] do you use the most? Why?

This question is meant to help you understand the strengths of the product and what about it creates raving fans. This will indicate what you should absolutely keep and perhaps even lead to insights into what you can improve for other features.

5. What parts of [the product] do you use the least? Why?

This question is meant to uncover the weaknesses of the product or the friction in its use. That way, you can rectify any issues or plan future improvements to close the gap between user expectations and reality.

6. If you could change one thing about [feature] what would it be?

Because it's so similar to #5, you may get some of the same answers. However, you'd be surprised by the aspirational things your users might say here.

7. What do you expect [action/feature] to do?

Here's another tip from Julie Fischer:

"When participants ask 'What will this do?' it's best to reply with the question 'What do you expect it do?' rather than telling them the answer."

Doing this can uncover user expectations as well as clarity issues with the copy.

Your Work Could Always Use a Fresh Perspective

Letting another person review and possibly criticize your work takes courage -- no one wants a bruised ego. But most of the time, when you allow people to constructively criticize or even rip apart your article or product design, especially when your work is intended to help these people, your final result will be better than you could've ever imagined.

Editor's note: This post was originally published in August 2018 and has been updated for comprehensiveness.


Usability Testing 101

By Kate Moran, Nielsen Norman Group · December 1, 2019

Usability testing is a popular UX research methodology.

Definition: In a usability-testing session, a researcher (called a “facilitator” or a “moderator”) asks a participant to perform tasks, usually using one or more specific user interfaces. While the participant completes each task, the researcher observes the participant’s behavior and listens for feedback.

The phrase “usability testing” is often used interchangeably with “user testing.”

In This Article:

  • Why Usability Test
  • Elements of Usability Testing
  • Types of Usability Testing
  • Cost of Usability Testing
  • NN/g Resources for Usability Testing

The goals of usability testing vary by study, but they usually include:

  • Identifying problems in the design of the product or service
  • Uncovering opportunities to improve
  • Learning about the target user's behavior and preferences


Why do we need to do usability testing? Won't a good professional UX designer know how to design a great user interface? Even the best UX designers can't design a perfect — or even good enough — user experience without iterative design driven by observations of real users and of their interactions with the design.

There are many variables in designing a modern user interface and there are even more variables in the human brain. The total number of combinations is huge. The only way to get UX design right is to test it.

There are many different types of usability testing, but the core elements in most usability tests are the facilitator, the tasks, and the participant.


The facilitator administers tasks to the participant. As the participant performs these tasks, the facilitator observes the participant’s behavior and listens for feedback. The facilitator may also ask followup questions to elicit detail from the participant.


Facilitator

The facilitator guides the participant through the test process. She gives instructions, answers the participant’s questions, and asks followup questions.

The facilitator works to ensure that the test results in high-quality, valid data, without accidentally influencing the participant's behavior. Achieving this balance is difficult and requires training.

(In one form of remote usability testing, called remote unmoderated testing, an application may perform some of the facilitator's roles.)

Tasks

The tasks in a usability test are realistic activities that the participant might perform in real life. They can be very specific or very open-ended, depending on the research questions and the type of usability testing.

Examples of tasks from real usability studies:

  • Your printer is showing "Error 5200". How can you get rid of the error message?
  • You're considering opening a new credit card with Wells Fargo. Please visit wellsfargo.com and decide which credit card you might want to open, if any.
  • You've been told you need to speak to Tyler Smith from the Project Management department. Use the intranet to find out where they are located. Tell the researcher your answer.

Task wording is very important in usability testing. Small errors in the phrasing of a task can cause the participant to misunderstand what they're asked to do or can influence how participants perform the task (a psychological phenomenon called priming).

Task instructions can be delivered to the participant verbally (the facilitator might read them) or can be handed to the participant written on task sheets. We often ask participants to read the task instructions out loud. This helps ensure that the participant reads the instructions completely, and it helps the researchers with their notetaking, because they always know which task the user is performing.

Participant

The participant should be a realistic user of the product or service being studied. That might mean that the user is already using the product or service in real life. Alternatively, in some cases, the participant might just have a similar background to the target user group, or might have the same needs, even if he isn't already a user of the product.

Participants are often asked to think out loud during usability testing (called the "think-aloud method"). The facilitator might ask the participants to narrate their actions and thoughts as they perform tasks. The goal of this approach is to understand participants' behaviors, goals, thoughts, and motivations.


Qualitative vs. Quantitative

Usability testing can be either qualitative or quantitative.

  • Qualitative usability testing focuses on collecting insights, findings, and anecdotes about how people use the product or service. It is best for discovering problems in the user experience, and it is more common than quantitative usability testing.
  • Quantitative usability testing focuses on collecting metrics that describe the user experience. Two of the metrics most commonly collected are task success and time on task. It is best for collecting benchmarks.
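To make the quantitative side concrete, here is a small sketch (with invented session data) that computes the two metrics just mentioned. The geometric mean is often preferred for task times because time data tends to be skewed by a few very slow sessions:

```python
from math import exp, log

# Hypothetical sessions for one task: (completed?, seconds taken)
sessions = [(True, 42), (True, 58), (False, 120), (True, 35), (True, 90)]

# Task success: share of participants who completed the task
success_rate = sum(ok for ok, _ in sessions) / len(sessions)

# Time on task: geometric mean of successful attempts only;
# less sensitive to one very slow participant than the arithmetic mean.
times = [t for ok, t in sessions if ok]
geo_mean_time = exp(sum(log(t) for t in times) / len(times))

print(f"success rate: {success_rate:.0%}")
print(f"time on task (geometric mean): {geo_mean_time:.1f}s")
```

Benchmarks like these only become meaningful when you re-run the same tasks on a later design and compare.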

The number of participants needed for a usability test varies depending on the type of study. For a typical qualitative usability study of a single user group, we recommend using five participants to uncover the majority of the most common problems in the product.
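The five-user guideline traces back to Nielsen and Landauer's problem-discovery model: the share of problems found with n users is 1 - (1 - L)^n, where L is the probability that a single user exposes a given problem (about 31% across the projects in their data). A quick sketch of the curve:

```python
def share_of_problems_found(n_users, l=0.31):
    """Expected share of usability problems uncovered by n users,
    assuming each user independently exposes a given problem with
    probability l (~0.31 in Nielsen & Landauer's data)."""
    return 1 - (1 - l) ** n_users

for n in (1, 3, 5, 15):
    print(f"{n:>2} users -> {share_of_problems_found(n):.0%} of problems")
```

Around five users the curve already covers roughly 85% of problems, which is why additional sessions in the same round tend to yield diminishing returns.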

Remote vs. In-Person Testing

Remote usability tests are popular because they often require less time and money than in-person studies. There are two types of remote usability testing: moderated and unmoderated.

  • Remote moderated usability tests work very similarly to in-person studies. The facilitator still interacts with the participant and asks her to perform tasks, but the facilitator and participant are in different physical locations. Moderated tests are usually performed using screen-sharing software like Skype or GoToMeeting.
  • Remote unmoderated usability tests do not have the same facilitator–participant interaction as in-person or moderated tests. The researcher uses a dedicated online remote-testing tool to set up written tasks for the participant. The participant then completes those tasks alone, on her own time. The testing tool delivers the task instructions and any followup questions. After the participant completes her test, the researcher receives a recording of the session, along with metrics like task success.


Simple, "discount" usability studies can be inexpensive, though you usually must pay a few hundred dollars as incentives to participants. The testing session can take place in a conference room, and the simplest study will take 3 days of your time (assuming that you have already learned how to do it, and you have access to participants):

  • Day 1: Plan the study
  • Day 2: Test the 5 users
  • Day 3: Analyze the findings and convert them into redesign recommendations for the next iteration

On the other hand, more-expensive research is sometimes required, and the cost can run into several hundred thousand dollars for the most elaborate studies.

Things that add cost include:

  • Competitive testing of multiple designs
  • International testing in multiple countries
  • Testing with multiple user groups (or personas)
  • Quantitative studies
  • Use of fancy equipment like eyetrackers
  • Needing a true usability lab or focus group room to allow others to observe
  • Wanting a detailed analysis and report about the findings

The return on investment (ROI) for advanced studies can still be high, though usually not as high as that for simple studies.

NN/g Resources for Usability Testing

  • Qualitative Usability Testing (Study Guide)
  • User Testing: Why & How (Video)
  • How to Conduct Usability Studies (Report)
  • How to Set Up a Desktop Usability Test (Video)
  • How to Set Up a Mobile Usability Test (Video)
  • Turning User Goals into Task Scenarios for Usability Testing (Article)
  • Usability Testing for Mobile Is Easy (Article)

Facilitating a Usability Test

For hands-on training and help honing your facilitation skills, check out  our full-day course on usability testing .

  • Talking with Participants During a Usability Test (Article)
  • User Testing Facilitation Techniques (Video)
  • Team Members Behaving Badly During Usability Tests (Article)
  • Thinking Aloud: The #1 Usability Tool (Article)

Recruiting Participants

  • Recruiting Test Participants for Usability Studies (Article)
  • Why You Only Need to Test with 5 Users (Article)
  • How Many Test Users in a Usability Study? (Article)
  • Usability Testing with 5 Users: Design Process (Video)
  • Usability Testing with 5 Users: ROI Criteria (Video)
  • Usability Testing with 5 Users: Information Foraging (Video)
  • Employees as Usability-Test Participants (Article)
  • Using Usability-Test Participants Multiple Times (Video)
  • Obtaining Consent for User Research (Article)

Remote Usability Testing

For detailed help planning, conducting, and analyzing remote user testing, check out our full-day seminar: Remote Usability Testing.

  • Remote Usability Tests: Moderated and Unmoderated (Article)
  • Remote Moderated Usability Tests: How and Why to Do Them (Article)
  • Remote Unmoderated User Tests: How and Why to Do Them (Article)
  • Tools for Unmoderated Usability Testing (Article)

Special Usability Testing Studies or User Groups

  • Quantitative vs. Qualitative Usability Tests (Article)
  • Conducting Usability Testing with Real Users' Real Data (Article)
  • How to Conduct Usability Studies for Accessibility (Report)
  • Paper Prototyping: Getting User Data Before You Code (Article)
  • Paper Prototyping 101 (Video)
  • Beyond the NPS: Measuring Perceived Usability (Article)
  • International Usability Testing (Article)
  • Usability Testing with Minors (Article)


Usability Testing: Everything You Need to Know (Methods, Tools, and Examples)

As you crack into the world of UX design, there’s one thing you absolutely must understand and learn to practice like a pro: usability testing.

Precisely because it’s such a critical skill to master, it can be a lot to wrap your head around. What is it exactly, and how do you do it? How is it different from user testing? What are some actual methods that you can employ?

In this guide, we’ll give you everything you need to know about usability testing—the what, the why, and the how.

Here’s what we’ll cover:

  • What is usability testing and why does it matter?
  • Usability testing vs. user testing
  • Formative vs. summative usability testing
  • Attitudinal vs. behavioral research
  • Five essential usability testing methods: performance testing, card sorting, tree testing, the 5-second test, and eye tracking
  • How to learn more about usability testing

Ready? Let’s dive in.

1. What is usability testing and why does it matter?

Simply put, usability testing is the process of discovering ways to improve your product by observing users as they engage with the product itself (or a prototype of it). It's a UX research method focused specifically on—you guessed it—the usability of your products. And what is usability? Usability is a measure of how easily users can accomplish a given task with your product.

Usability testing, when executed well, uncovers pain points in the user journey and highlights barriers to good usability. It will also help you learn about your users’ behaviors and preferences as these relate to your product, and to discover opportunities to design for needs that you may have overlooked.

You can conduct usability testing at any point in the design process once you've turned initial ideas into design solutions, but the earlier the better. Test early and test often! You can conduct some kind of usability testing with low- and high-fidelity prototypes alike—and testing should continue after you've got a live, out-in-the-world product.

2. Usability testing vs. user testing

Though they sound alike and share a similar end goal, usability testing and user testing are two different things. We'll look at the differences in a moment, but first, here's what they have in common:

  • Both share the end goal of creating a design solution to meet real user needs
  • Both take the time to observe and listen to the user to hear from them what needs/pain points they experience
  • Both look for feasible ways of meeting those needs or addressing those pain points

User testing essentially asks if this particular kind of user would want this particular kind of product—or what kind of product would benefit them in the first place. It is entirely user-focused.

Usability testing, on the other hand, is more product-focused and looks at users’ needs in the context of an existing product (even if that product is still in prototype stages of development). Usability testing takes your existing product and places it in the hands of your users (or potential users) to see how the product actually works for them—how they’re able to accomplish what they need to do with the product.

3. Formative vs. summative usability testing

Alright! Now that you understand what usability testing is, and what it isn’t, let’s get into the various types of usability testing out there.

There are two broad categories of usability testing that are important to understand—formative and summative. These have to do with when you conduct the testing and what your broad objectives are: the overarching impact the testing should have on your product.

Formative usability testing: 

  • Is a qualitative research process 
  • Happens earlier in the design, development, or iteration process
  • Seeks to understand what about the product needs to be improved
  • Results in qualitative findings and ideation that you can incorporate into prototypes and wireframes

Summative usability testing:

  • Is a research process that’s more quantitative in nature
  • Happens later in the design, development, or iteration process
  • Seeks to understand whether the solutions you are implementing (or have implemented) are effective
  • Results in quantitative findings that can help determine broad areas for improvement or specific areas to fine-tune (this can go hand in hand with competitive analysis )

4. Attitudinal vs. behavioral research

Alongside the timing and purpose of the testing (formative vs. summative), it’s important to understand two broad categories that your research (both your objectives and your findings) will fall into: behavioral and attitudinal.

Attitudinal research is all about what people say—what they think and communicate about your product and how it works. Behavioral research focuses on what people do—how they actually interact with your product and the feelings that surface as a result.

What people say and what people do are often two very different things. These two categories help us define those differences, choose our testing methods more intentionally, and categorize our findings more effectively.

5. Five essential usability testing methods

Some usability testing methods are geared more towards uncovering either behavioral or attitudinal findings; but many have the potential to result in both.

Of the methods you’ll learn about in this section, performance testing has the greatest potential for targeting both—and will perhaps require the greatest amount of thoughtfulness regarding how you approach it.

Naturally, then, we'll spend a little more time on that method than on the other four, though that in no way diminishes their usefulness! Here are the methods we'll cover:

  • Performance testing
  • Card sorting
  • Tree testing
  • The 5-second test
  • Eye tracking

These are merely five common and/or interesting methods—this is not a comprehensive list of every method you can use to get inside the hearts and minds of your users. But it's a place to start. So here we go!

Performance testing

In performance testing, you sit down with a user and give them a task (or set of tasks) to complete with the product.

This is often a combination of methods and approaches that will allow you to interview users, see how they use your product, and find out how they feel about the experience afterward. Depending on your approach, you’ll observe them, take notes, and/or ask usability testing questions before, after, or along the way.

Performance testing is by far the most talked-about form of usability testing—especially as it’s often combined with other methods. Performance testing is what most commonly comes to mind in discussions of usability testing as a whole, and it’s what many UX design certification programs focus on—because it’s so broadly useful and adaptive.

While there’s no one right way to conduct performance testing, there are a number of approaches and combinations of methods you can use, and you’ll want to be intentional about it.

It’s a method that you can adapt to your objectives—so make sure you do! Ask yourself what kind of attitudinal or behavioral findings you’re really looking for, how much time you’ll have for each testing session, and what methods or approaches will help you reach your objectives most efficiently.

Performance testing is often combined with user interviews . For a quick guide on how to ask great questions during this part of a testing session, watch this video:

Even if you choose not to combine performance testing with user interviews, good performance testing will still involve some degree of questioning and moderating.

Performance testing typically results in a pretty massive chunk of qualitative insights, so you’ll need to devote a fair amount of intention and planning before you jump in.

Maximize the usefulness of your research by being thoughtful about the task(s) you assign and what approach you take to moderating the sessions. As your test participants go about the task(s) you assign, you’ll watch, take notes, and ask questions either during or after the test—depending on your approach.

Four approaches to performance testing

There are four ways you can go about moderating a performance test , and it’s worth understanding and choosing your approach (or combination of approaches) carefully and intentionally. As you choose, take time to consider:

  • How much guidance the participant will actually need
  • How intently participants will need to focus
  • How guidance or prompting from you might affect results or observations

With these things in mind, let’s look at the four approaches.

Concurrent Think Aloud (CTA)

With this approach, you’ll encourage participants to externalize their thought process—to think out loud. Your job during the session will be to keep them talking through what they’re looking for, what they’re doing and why, and what they think about the results of their actions.

A CTA approach often uncovers a lot of nuanced details in the user journey, but if your objectives include anything related to the accuracy or time for task completion, you might be better off with a Retrospective Think Aloud.

Retrospective Think Aloud (RTA)

Here, you’ll allow participants to complete their tasks and recount the journey afterward. They can complete tasks in a more realistic time frame and with more realistic accuracy, though there will certainly be nuanced details of participants’ thoughts and feelings you’ll miss out on.

Concurrent Probing (CP)

With Concurrent Probing, you ask participants about their experience as they’re having it. You prompt them for details on their expectations, their reasons for particular actions, and their feelings about results.

This approach can be distracting, but combined with CTA, it lets you allow participants to complete the tasks while prompting only when you see a particularly interesting aspect of their experience you’d like to know more about. Again, if accuracy and timing are critical objectives, you might be better off with Retrospective Probing.

Retrospective Probing (RP)

If you note that a participant says or does something interesting as they complete their task(s), you can note it and ask them about it later—this is Retrospective Probing. This is an approach very often combined with CTA or RTA to ensure that you’re not missing out on those nuanced details of their experience without distracting them from actually completing the task.

Whew! There’s your quick overview of performance testing. To learn more about it, read on to the final section of this article: How to learn more about usability testing.

With this under our belts, let’s move on to our other four essential usability testing methods.

Card sorting is a way of testing the usability of your information architecture. You give users cards labeled with the names and short descriptions of the main items/sections of the product, then ask them to sort the cards into piles according to which items seem to go best together. In an open card sort, participants also create and name the groups themselves; in a closed card sort, they place the cards into categories you’ve defined in advance.

Rather than structuring your site or app according to your understanding of the product, card sorting allows the information architecture to mirror the way your users are thinking.

This is a great technique to employ very early in the design process as it is inexpensive and will save the time and expense of making structural adjustments later in the process. And there’s no technology required! If you want to conduct it remotely, though, there are tools like OptimalSort that do this effectively.
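Analyzing card-sort results usually starts with a similarity (co-occurrence) tally: how often each pair of cards ended up in the same pile across participants. Here’s a minimal Python sketch of that tally; the participant data, card labels, and the `co_occurrence` helper are all hypothetical:

```python
from collections import Counter
from itertools import combinations

def co_occurrence(sorts):
    """Count how often each pair of cards lands in the same pile.

    `sorts` holds one entry per participant; each entry is a list of
    piles, and each pile is a list of card labels.
    """
    pairs = Counter()
    for piles in sorts:
        for pile in piles:
            # sort labels so each pair is counted under one canonical key
            for a, b in combinations(sorted(pile), 2):
                pairs[(a, b)] += 1
    return pairs

# Three hypothetical participants sorting five cards
sorts = [
    [["Pricing", "Plans"], ["Blog", "Guides", "FAQ"]],
    [["Pricing", "Plans", "FAQ"], ["Blog", "Guides"]],
    [["Pricing", "Plans"], ["Blog", "Guides"], ["FAQ"]],
]

pairs = co_occurrence(sorts)
print(pairs[("Plans", "Pricing")])  # 3: grouped together by everyone
print(pairs[("FAQ", "Pricing")])    # 1: weak association
```

Pairs that nearly every participant groups together (here, Pricing and Plans) are strong candidates to live under the same section of your information architecture.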

For more on how to conduct card sorting, watch this video:

Tree testing is a great follow-up to card sorting, but it can be conducted on its own as well. In tree testing, you create a visual information hierarchy (or “tree”) and ask users to complete a task using the tree. For example, you might ask users, “You want to accomplish X with this product. Where do you go to do that?” Then you observe how easily users are able to find what they’re looking for.

This is another great technique to employ early in the design process. It can be conducted with paper prototypes or spreadsheets, but you can also use tools such as TreeJack to accomplish this digitally and remotely.
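Tree-test results are commonly summarized as direct success, indirect success (reaching the target after backtracking), and failure. A small sketch of that scoring, with hypothetical click paths and node labels:

```python
def score_tree_test(paths, target):
    """Classify recorded click paths for one tree-test task.

    direct:   reached the target without revisiting any node
    indirect: reached the target after backtracking
    fail:     ended up somewhere else in the tree
    """
    results = {"direct": 0, "indirect": 0, "fail": 0}
    for path in paths:
        if path[-1] != target:
            results["fail"] += 1
        elif len(path) == len(set(path)):  # no node visited twice
            results["direct"] += 1
        else:
            results["indirect"] += 1
    return results

# Hypothetical task: "Where would you update your billing details?"
paths = [
    ["Home", "Account", "Billing"],                      # straight there
    ["Home", "Settings", "Home", "Account", "Billing"],  # backtracked via Home
    ["Home", "Settings", "Profile"],                     # wrong destination
]
print(score_tree_test(paths, "Billing"))  # {'direct': 1, 'indirect': 1, 'fail': 1}
```

A low direct-success rate on an important task is a sign the labels or hierarchy don’t match users’ expectations.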

In the 5-second test, you expose your users to one portion of your product (one screen, probably the top half of it) for five seconds and then interview them to see what they took away regarding:

  • The product/page’s purpose and main features or elements
  • The intended audience and trustworthiness of the brand
  • Their impression of the usability and design of the product

You can conduct this kind of testing in person rather simply, or remotely with tools like UsabilityHub.

This one may seem somewhat new, but it’s been around for a while—though the tools and technology around it have evolved. Eye tracking on its own isn’t enough to determine usability, but it’s a great complement to your other usability testing measures.

In eye tracking, you literally track where most users’ eyes land on the screen you’re designing. This matters because you want to make sure that the elements users’ eyes are drawn to are the ones that communicate the most important information. This is a difficult one to conduct in any kind of analog fashion, but there are a lot of tools out there that make it simple—CrazyEgg and HotJar are both great places to start.

6. How to learn more about usability testing

There you have it: your 15-minute overview of the what, why, and how of usability testing. But don’t stop here! Usability testing and UX research as a whole have a deeply humanizing impact on the design process. It’s a fascinating field to discover, and the results of this kind of work have the power to keep companies, design teams, and even the lone designer accountable to what matters most: the needs of the end user.

If you’d like to learn more about usability testing and UX research, take the free UX Research for Beginners Course with CareerFoundry. This tutorial is jam-packed with information that will give you a deeper understanding of the value of this kind of testing as well as a number of other UX research methods.

You can also enroll in a UX design course or bootcamp to get a comprehensive understanding of the entire UX design process (of which usability testing and UX research are an integral part). For guidance on the best programs, check out our list of the 10 best UX design certification programs. And if you’ve already started your learning process and you’re thinking about the job hunt, here are the top 5 UX research interview questions to be ready for.

For further reading about usability testing and UX research, check out these other articles:

  • How to conduct usability testing: a step-by-step guide
  • What does a UX researcher actually do? The ultimate career guide
  • 11 usability heuristics every designer should know
  • How to conduct a UX audit

Usability Testing

What is usability testing?

Usability testing is the practice of testing how easy a design is to use with a group of representative users. It usually involves observing users as they attempt to complete tasks and can be done for different types of designs. It is often conducted repeatedly, from early development until a product’s release.

“It’s about catching customers in the act, and providing highly relevant and highly contextual information.”

— Paul Maritz, CEO at Pivotal


Usability Testing Leads to the Right Products

Through usability testing, you can find design flaws you might otherwise overlook. When you watch how test users behave while they try to execute tasks, you’ll get vital insights into how well your design/product works. Then, you can leverage these insights to make improvements. Whenever you run a usability test, your chief objectives are to:

1) Determine whether testers can complete tasks successfully and independently.

2) Assess their performance and mental state as they try to complete tasks, to see how well your design works.

3) See how much users enjoy using it.

4) Identify problems and their severity.

5) Find solutions.

While usability tests can help you create the right products, they shouldn’t be the only tool in your UX research toolbox. If you just focus on the evaluation activity, you won’t improve the usability overall.


There are different methods for usability testing. Which one you choose depends on your product and where you are in your design process.

Usability Testing is an Iterative Process

To make usability testing work best, you should:

1) Plan the test –

a. Define what you want to test. Ask yourself questions about your design/product. What aspect(s) of it do you want to test? You can make a hypothesis from each answer. With a clear hypothesis, you’ll have the exact aspect you want to test.

b. Decide how to conduct your test – e.g., remotely. Define the scope of what to test (e.g., navigation) and stick to it throughout the test. When you test aspects individually, you’ll eventually build a broader view of how well your design works overall.

2) Set user tasks –

a. Prioritize the most important tasks to meet objectives (e.g., complete checkout), no more than 5 per participant. Allow a 60-minute timeframe.

b. Clearly define tasks with realistic goals.

c. Create scenarios where users can try to use the design naturally. That means you let them get to grips with it on their own rather than direct them with instructions.

3) Recruit testers – Know who your users are as a target group. Use screening questionnaires (e.g., Google Forms) to find suitable candidates. You can advertise and offer incentives. You can also find contacts through community groups, etc. If you test with only 5 users, you can still reveal 85% of core issues.
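The “5 users reveal 85% of issues” figure comes from Nielsen and Landauer’s problem-discovery model: a study with n participants is expected to uncover 1 − (1 − L)^n of the problems, where L (typically around 0.31) is the share a single participant finds. A quick check of the numbers:

```python
def problems_found(n_users, l_single=0.31):
    """Nielsen/Landauer problem-discovery model: expected share of
    usability problems uncovered by a study with n_users participants,
    assuming a single participant uncovers the fraction l_single."""
    return 1 - (1 - l_single) ** n_users

for n in (1, 3, 5, 10):
    print(n, round(problems_found(n), 2))
# 5 users already uncover ~0.84 of problems (the oft-quoted "85%");
# doubling to 10 users adds comparatively little per extra participant.
```

This diminishing-returns curve is why several small, iterative rounds of testing usually beat one large study.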

4) Facilitate/Moderate testing – Set up testing in a suitable environment. Observe and interview users. Notice issues. See if users fail to see things, go in the wrong direction, or misinterpret rules. When you record usability sessions, you can more easily count the number of times users become confused. Ask users to think aloud and tell you how they feel as they go through the test. From this, you can check whether your mental model as a designer is accurate: does what you think users can do with your design match what these test users show?

If you choose remote testing, you can moderate via Google Hangouts, etc., or use unmoderated testing. Remote-testing software lets you carry out both moderated and unmoderated sessions and gives you the benefit of tools such as heatmaps.


Keep usability tests smooth by following these guidelines.

5) Assess user behavior – Use these metrics:

Quantitative – time users take on a task, success and failure rates, effort (how many clicks users take, instances of confusion, etc.)

Qualitative – users’ stress responses (facial reactions, body-language changes, squinting, etc.), subjective satisfaction (which they give through a post-test questionnaire) and perceived level of effort/difficulty
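The quantitative metrics above can be tallied straight from per-session logs. A minimal sketch, with hypothetical session records for a single task:

```python
from statistics import mean

# Hypothetical session log: one record per participant for one task
sessions = [
    {"success": True,  "seconds": 74,  "clicks": 9},
    {"success": True,  "seconds": 102, "clicks": 14},
    {"success": False, "seconds": 180, "clicks": 22},
]

success_rate = mean(1 if s["success"] else 0 for s in sessions)
completed = [s for s in sessions if s["success"]]
avg_time = mean(s["seconds"] for s in completed)   # time on task, successes only
avg_clicks = mean(s["clicks"] for s in completed)  # rough effort proxy

print(f"success rate: {success_rate:.0%}")                     # 67%
print(f"avg time: {avg_time:.0f}s, avg clicks: {avg_clicks}")  # 88s, 11.5
```

Averaging time and clicks over successful sessions only is one common convention; whichever you choose, report it consistently so later test rounds stay comparable.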

6) Create a test report – Review video footage and analyzed data. Clearly define design issues and best practices. Involve the entire team.

Overall, you should test not your design’s functionality, but users’ experience of it. Some users may be too polite to be entirely honest about problems. So, always examine all data carefully.

Learn More about Usability Testing

Take our course on usability testing.

Here’s a quick-fire method to conduct usability testing.

See some real-world examples of usability testing.

Take some helpful usability testing tips.

Questions related to Usability Testing

To conduct usability testing effectively:

Start by defining clear, objective goals and recruit representative users.

Develop realistic tasks for participants to perform and set up a controlled, neutral environment for testing.

Observe user interactions, noting difficulties and successes, and gather qualitative and quantitative data.

After testing, analyze the results to identify areas for improvement.

For a comprehensive understanding and step-by-step guidance on conducting usability testing, refer to our specialized course on Conducting Usability Testing .

Conduct usability testing early and often, from the design phase to development and beyond. Early design testing uncovers issues when they are more accessible and less costly to fix. Regular assessments throughout the project lifecycle ensure continued alignment with user needs and preferences. Usability testing is crucial for new products and when redesigning existing ones to verify improvements and discover new problem areas. Dive deeper into optimal timing and methods for usability testing in our detailed article “Usability: A part of the User Experience.”

Incorporate insights from William Hudson, CEO of Syntagm, to enhance usability testing strategies. William recommends techniques like tree testing and first-click testing for early design phases to scrutinize navigation frameworks. These methods are exceptionally suitable for isolating and evaluating specific components without visual distractions, focusing strictly on user understanding of navigation. They're advantageous for their quantitative nature, producing actionable numbers and statistics rapidly, and being applicable at any project stage. Ideal for both new and existing solutions, they help identify problem areas and assess design elements effectively.

To conduct usability testing for a mobile application:

Start by identifying the target users and creating realistic tasks for them.

Collect data on their interactions and experiences to uncover issues and areas for improvement.

For instance, consider the concept of ‘tappability’ as explained by Frank Spillers, CEO: focusing on creating task-oriented, clear, and easily tappable elements is crucial.

Employing correct affordances and signifiers, like animations, can clarify interactions and enhance user experience, avoiding user frustration and errors. Dive deeper into mobile usability testing techniques and insights by watching our insightful video with Frank Spillers.

For most usability tests, the ideal number of participants depends on your project’s scope and goals. Our video featuring William Hudson, CEO of Syntagm, emphasizes the importance of quality in choosing participants as it significantly impacts the usability test's results.

He shares insightful experiences and stresses carefully selecting and recruiting participants to ensure constructive and reliable feedback. The process involves meticulous planning and execution: identifying and discarding data from non-contributive participants so that meaningful, trustworthy insights are gathered to improve the interactive solution, be it an app or a website. Remember the emphasis on participants’ attentiveness and consistency while performing tasks to avoid compromising the results. Watch the full video for a more comprehensive understanding of participant recruitment and usability testing.

To analyze usability test results effectively, first collate the data meticulously. Next, identify patterns and recurrent issues that indicate areas needing improvement. Utilize quantitative data for measurable insights and qualitative data for understanding user behavior and experience. Prioritize findings based on their impact on user experience and the feasibility of implementation. For a deeper understanding of analysis methods and to ensure thorough interpretation, refer to our comprehensive guides on Analyzing Qualitative Data and Usability Testing . These resources provide detailed insights, aiding in systematically evaluating and optimizing user interaction and interface design.

Usability testing is predominantly qualitative, focusing on understanding users' thoughts and experiences, as highlighted in our video featuring William Hudson, CEO of Syntagm. 

It enables insights into users' minds, asking why things didn't work and what's going through their heads during the testing phase. However, specific methods, like tree testing and first-click testing , present quantitative aspects, providing hard numbers and statistics on user performance. These methods can be executed at any design stage, providing actionable feedback and revealing navigation and visual design efficacy.

To conduct remote usability testing effectively, establish clear objectives, select the right tools, and recruit participants fitting your user profile. Craft tasks that mirror real-life usage and prepare concise instructions. During the test, observe users’ interactions and note their challenges and behaviors. For an in-depth understanding and guide on performing unmoderated remote usability testing, refer to our comprehensive article, Unmoderated Remote Usability Testing (URUT): Every Step You Take, We Won’t Be Watching You .

Some people use the two terms interchangeably, but User Testing and Usability Testing, while closely related, serve distinct purposes. User Testing focuses on understanding users' perceptions, values, and experiences, primarily exploring the 'why' behind users' actions. It is crucial for gaining insights into user needs, preferences, and behaviors, as elucidated by Ann Blandford, an HCI professor, in our enlightening video.

She elaborates on the significance of semi-structured interviews in capturing users' attitudes and explanations regarding their actions. Usability Testing primarily assesses users' ability to achieve their goals efficiently and complete specific tasks with satisfaction, often emphasizing the ease of interface use. Balancing both methods is pivotal for comprehensively understanding user interaction and product refinement.

Usability testing is crucial as it determines how usable your product is, ensuring it meets user expectations. It allows creators to validate designs and make informed improvements by observing real users interacting with the product. Benefits include:

Clarity and focus on user needs.

Avoiding internal bias.

Providing valuable insights to achieve successful, user-friendly designs. 

By enrolling in our Conducting Usability Testing course, you’ll gain insights from the extensive experience of Frank Spillers, CEO of Experience Dynamics, learning to develop test plans, recruit participants, and convey findings effectively.

Explore our dedicated Usability Expert Learning Path at Interaction Design Foundation to learn Usability Testing. We feature a specialized course, Conducting Usability Testing , led by Frank Spillers, CEO of Experience Dynamics. This course imparts proven methods and practical insights from Frank's extensive experience, guiding you through creating test plans, recruiting participants, moderation, and impactful reporting to refine designs based on the results. Engage with our quality learning materials and expert video lessons to become proficient in usability testing and elevate user experiences!

Literature on Usability Testing

Here’s the entire UX literature on Usability Testing by the Interaction Design Foundation, collated in one place:

Learn more about Usability Testing

Take a deep dive into Usability Testing with our course Conducting Usability Testing .

Do you know if your website or app is being used effectively? Are your users completely satisfied with the experience? What is the key feature that makes them come back? In this course, you will learn how to answer such questions—and with confidence too—as we teach you how to justify your answers with solid evidence.

Great usability is one of the key factors to keep your users engaged and satisfied with your website or app. It is crucial you continually undertake usability testing and perceive it as a core part of your development process if you want to prevent abandonment and dissatisfaction. This is especially important when 79% of users will abandon a website if the usability is poor, according to Google! As a designer, you also have another vital duty—you need to take the time to step back, place the user at the center of the development process and evaluate any underlying assumptions. It’s not the easiest thing to achieve, particularly when you’re in a product bubble, and that makes usability testing even more important. You need to ensure your users aren’t left behind!

As with most things in life, the best way to become good at usability testing is to practice! That’s why this course contains not only lessons built on evidence-based approaches, but also a practical project . This will give you the opportunity to apply what you’ve learned from internationally respected Senior Usability practitioner, Frank Spillers, and carry out your own usability tests .

By the end of the course, you’ll have hands-on experience with all stages of a usability test project— how to plan, run, analyze and report on usability tests . You can even use the work you create during the practical project to form a case study for your portfolio, to showcase your usability test skills and experience to future employers!

All open-source articles on Usability Testing

7 Great, Tried and Tested UX Research Techniques


How to Conduct a Cognitive Walkthrough


How to Conduct User Observations


Mobile Usability Research – The Important Differences from the Desktop


How to Recruit Users for Usability Studies


Best Practices for Mobile App Usability from Google


Unmoderated Remote Usability Testing (URUT) - Every Step You Take, We Won’t Be Watching You


Making Use of the Crowd – Social Proof and the User Experience


Agile Usability Engineering


Four Assumptions for Usability Evaluations





A Complete Usability Testing Guide - Methods, Questions & More


We’re heading into the holiday season of 2023, and still, the words that Philip Kotler said decades ago ring truer than ever: “customer is king.”

With the approval of your kings (and queens), your product - app, website, software, or any other offering to them - is bound to do well in the market. Which brings us to the question:

How can you tell what goes on in your customer’s minds?

Put another way, is your product going to help your target segment overcome their challenges?

Before you go all-in and launch, ensure that you’ve made a sure bet - with usability testing.

This guide answers ‘what is usability testing’ and delves into some methods, questions, and ways to analyze the data.

TL;DR tip: Scroll to the end for a convenient list of FAQs about usability testing.

  • Usability Testing - What Is It?
  • How You Benefit From Usability Testing
  • Common Usability Testing Methods
  • Remote [Moderated/Unmoderated]
  • Benchmark Comparison
  • Open-Ended [Exploratory]
  • Contextual [Informed users]
  • Guerilla [Random public]
  • What Questions to Ask During Usability Testing
  • Best Practices To Follow for Usability Testing
  • Mistakes To Avoid in Usability Testing

Usability testing answers your questions about any and all parts of your product’s design. From the moment users land on your product page to the time they show exit intent, you can get feedback about user experience.

Using this feedback, you can refine prototypes into products that delight your customers and improve conversion rates. It also works well if you want to improve a product that’s already on the market.

Usability testing is essential because it makes your website or product easier to use. With so much competition out there, a user-friendly experience can be the difference between converting the visitors to customers or making them bounce off to your competitors.

For example, the bounce rate on a web page that takes longer than 3 seconds to load can jump up to 38%, meaning more people leave the site without converting.

In the same way, a product with a complicated information architecture will add to the learning curve for the users. It may confuse them enough to abandon it and move to a more valuable product.

When unbiased testers or users tell you that your product “feels like it’s missing something,” but they can’t tell you what that “something” is, isn’t it frustrating? It’s a pity you can’t turn into Sherlock Holmes and ‘deduce’ what they mean by that vague statement.

Philip Kotler even quoted the author of Sherlock Holmes, Sir Arthur Conan Doyle, in his book Kotler on Marketing: How to Create, Win, and Dominate Markets: “It is a capital mistake to theorize before one has data.”

How cool would it be to turn the subjective, vague survey responses into objective, undeniable data? This data can be the source of quite ‘elementary’ as well as pretty deep insights that you’ve been seeking.

That’s where usability testing comes in.

It puts the users in the driver's seat to guide you to all the points which can deter them from having a good experience.


Be it a prototype or working product, you can use usability tests at any stage to streamline the flows and other elements.

It’s time to see what benefits usability testing can bring to your product development cycle. Irrespective of the stage at which your product is, you can run usability testing to pinpoint critical issues and optimize it to improve user experience.

1. Identify Issues With Process Flows

There are so many processes & flows on your website or product, from creating an account to checking out. How would you know if they are easy to use and understand or not?

For example, a two-step checkout process may seem like a better optimization strategy than a three-step process. You can also use A/B testing to see which one brings more conversion.

But the question is, which is more straightforward to understand? Even though a two-step checkout involves fewer steps than a three-step checkout, it may pack more into each step and complicate the process, leaving first-time shoppers confused.

That is where usability testing helps to find the answer. It helps to simplify the processes for your users.

Case Study- How McDonald's improved its mobile ordering process

In 2016, McDonald's rolled out their mobile app and wanted to optimize it for usability. They ran the first usability session with 15 participants and at the same time collected survey feedback from 150 users after they had used the app.

It allowed them to add more flexibility to the ordering process by introducing new collection methods. They also added a feature that let users place an order from wherever they were and collect the food upon arriving at the restaurant. This improved customer convenience and avoided congestion at the restaurants.

Following the success of their first usability test iteration, they ran successive usability tests before releasing the app nationwide.

They wanted to test the end-to-end process for all collection methods to understand the app usability, measure customer satisfaction, identify areas of improvement, and check how intuitive the app was.

They ran 20 usability tests for all end-to-end processes and surveyed 225 first-time app users.

With the help of the findings, the app received a major revamp to optimize User Interface (UI) elements, such as more noticeable CTAs. The testing also uncovered new collection methods, which were added to the app.

2. Optimize Your Prototype

Just like in the case of McDonald's in the previous point, usability testing can help you evaluate whether your prototype or proof of concept meets the user's expectations or not.

It is an important step during the early development stages to improve the flows, information architecture, and other elements before you finalize the product design and start working on it.

It saves you from the painstaking effort of tearing down what you've built because it does not agree with your users.

Case Study - SoundCloud used testing to build an optimized mobile app for users

When SoundCloud moved its focus from desktop to mobile app, it wanted to build a user-friendly experience for the app users and at the same time explore monetization options. With increased demand on the development schedule because of the switch from website to mobile, the SoundCloud development team decided to use usability testing to discover issues and maintain a continuous development cycle.

The first remote usability test iteration involving 150 testers from 22 countries found over 150 bugs that affected real users. It also allowed the team to scale up the testing to include participants from around the world.

SoundCloud now follows an extensive testing culture to release smoother updates with fewer bugs. It helps the team add new features with better mobile compatibility and streamline their revenue models.

Bonus Read: Step by Step: Testing Your Prototype

3. Evaluate Product Expectations

With usability testing, you can map whether the product performs as intended in the actual users' environment and whether it is easy to use. The question testing answers is: does the product have all the features users need to complete their goals in their preferred conditions?

Case Study - Udemy maps expectations and behavior of mobile vs. desktop app users

Udemy, an online education platform, follows a user-centric approach and learns from its users' feedback, which has helped the team build customer insights into the product roadmap.

The team wanted to understand the behavioral differences between users from different platforms such as desktop and mobile to map their expectations.

So, they decided to create a diary study using remote unmoderated testing as it would allow participants from different user segments and provide rich insights into the product.

They asked the participants to use a mobile camera to show what they were doing while using the app to study their behavior and the environment.

The data from the studies were centralized into one place for the teams to take notes of the issues and problems that people faced while using the Udemy platform.

With remote testing, the Udemy team got insights into how students used the app in their environment, whether on mobile or desktop. For example, the initial assumption that people used the mobile app on the go was proven wrong. Instead, the users were stationary even when they were on the Udemy mobile app.

It also shed new light on the behavior of mobile users, helping the team feed the insights into future product and feature planning.

Bonus Read: Product Feedback Survey Questions & Examples

4. Optimizing Your Product or Website

With every usability test, you can find major and minor issues that hinder the user experience and fix them to optimize your website or product.

Case Study - Satchel uses usability testing to optimize their website for conversions

Satchel, an online learning platform, wanted to test the usability of its website and feed the findings into its conversion rate optimization process. The test recruited five participants to perform specific tasks and answer questions, helping the team review the functionality and usability of key user journeys.

The finding revealed one major usability issue with the website flow of one process in which users were asked to fetch the pricing information.


The test indicated a high lostness score (0.6). Lostness compares the number of steps users actually took against the optimal number of steps needed to complete a task; 0 means a perfectly efficient path, and higher scores mean users were getting lost or taking longer than expected. Some participants even got frustrated, which, for real users, would likely mean churn.
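As an aside, one widely cited formulation of lostness (from hypertext usability research, often referenced by NNGroup) is based on the pages a user visits. The numbers below are hypothetical, purely to illustrate the calculation:

```python
import math

def lostness(unique_pages: int, total_pages: int, optimal_pages: int) -> float:
    """Lostness L = sqrt((N/S - 1)^2 + (R/N - 1)^2), where
    N = unique pages visited, S = total page views (with revisits),
    R = minimum number of pages required for the task.
    0 is a perfectly efficient path; higher scores mean the user
    wandered off the optimal route."""
    n, s, r = unique_pages, total_pages, optimal_pages
    return math.sqrt((n / s - 1) ** 2 + (r / n - 1) ** 2)

# A user who needed only 3 pages but racked up 9 page views
# across 7 distinct pages:
print(round(lostness(7, 9, 3), 2))  # → 0.61
```

A score in this range, like Satchel's 0.6, signals that users are taking far from the optimal path.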


Using the insights, the team decided to test a new design that added pricing and a 'book a demo' link to the navigation menu.

The results showed a 34% increase in 'book a demo' requests, validating their hypothesis.

The example clearly shows how different tests can be used to develop a working hypothesis and validate it with statistical confidence.
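To check an uplift like this for statistical confidence, a two-proportion z-test is a common choice. The visitor and conversion counts below are hypothetical illustrations, not Satchel's actual data:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two conversion rates
    (control A vs. variant B), using a pooled standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical traffic: 1,000 visitors per design, demo requests
# rising from 100 to 134 (a ~34% relative increase).
z = two_proportion_z(100, 1000, 134, 1000)
print(round(z, 2))  # |z| > 1.96 → significant at the 95% level
```

With small usability-test samples the z-statistic will rarely clear the bar, which is why hypotheses from qualitative tests are usually validated on live traffic.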

Bonus Read: Website Optimization Guide: Strategies, Tips & Tools

5. Improve User Experience

In all the case studies discussed above, the ultimate aim of usability testing was to improve the user experience. If users find your website or product intuitive and easy to use, they are more likely to convert into customers.

Case study - Autotrader.com improves user experience with live interviews

Autotrader.com, an online marketplace for car buyers and sellers, was looking to improve users' car buying experience on the website. To understand user behavior and map their journey, the team used live conversations to connect with recent customers.

Remote interviews made it possible to test users from across the country to understand how behavior changed with geography, demographics, and other socioeconomic factors. It helped the testing team connect with users across different segments to compare their journeys.

They discovered one shared experience - both new and experienced customers found the process of finding a new car very exhausting.

“Live Conversation allows me to do journey mapping type interviews and persona type work that I couldn’t do before because of staff and budget constraints— bringing these insights into the company faster and at a much lower cost.”

Bradley Miller, Senior User Experience Researcher at Autotrader

Live interviews also helped uncover new insights about the consumer shopping process. The team found out that most consumers started their car buying journey from search engine queries.

Contrary to earlier assumptions, users were not deliberately seeking out a third-party car website, which meant they could land on any page of the site. It became clear that the landing pages needed to be revamped to provide a more customer-centric experience for visitors.

The team redesigned each page to act as a starting point for the visitor journey, dispelling the assumption that people already knew about the website.


Before you conduct a usability test, it is crucial to understand different usability testing types to pick the one that suits your needs and resource availability.

There are mainly six types of usability testing:

1. Moderated & Unmoderated Usability Testing

2. Remote & In-Person Usability Testing

Remote testing is not the same as unmoderated testing: a moderator may still be present during a remote usability test.

3. Qualitative & Quantitative Usability Testing

4. Benchmark or Comparison Usability Testing

Benchmark or Comparison testing is done to compare two or more design flows to find out which works best for the users.

For example, you can test two different designs for your shopping cart menu -

  • one that appears after hovering the cursor over the cart icon
  • the other that shows as a dropdown after clicking on it.

It is a great way to test different solutions to the same problem/issue to find the optimal solution preferred by your users.

You can run benchmark testing at any stage of your product development cycle.


Now that you know about usability tests types, let's discuss different usability testing methods.

Each method has different applicability and approach towards testing the participants. You can use multiple methods in conjunction to get deeper insights into your users.

1. Lab Testing

Lab usability tests are conducted in the presence of a moderator under controlled conditions. The users perform tasks on the website, product, or software while the moderator observes them, taking notes on their actions and behavior. The moderator may ask the users to explain their actions to collect more information.

The designers, developers, or other personnel related to the project may be present during the test as observers. They do not interfere with the testing conditions.


Advantages:

  • It lets you observe the users closely and interact with them personally to collect in-depth insights.
  • Since the test is performed under controlled conditions, it offers a standardized testing environment for all users.
  • It is an excellent method to test product usability early in the development stage. You can also perform concept usability testing called paper prototyping using wireframes.

Limitations:

  • It is one of the most expensive and time-consuming testing methods.
  • The sample size is usually small (5-10 participants), which may not reflect the consensus of your entire customer base.

Tips for conducting effective lab usability tests:

  • Always make the participants aware that you are testing the website or product and not testing them. It will help to alleviate the stress in the users' minds.
  • Keep your inquiries neutral and don't ask leading questions. You can pose open prompts like 'Tell me more about it' or 'Do you have anything to add about the task?'
  • Prioritize user behavior over their answers. Sometimes the feedback does not reflect their actual experience. The user may have had a bad experience but provide four or five-star ratings to be polite.

2. Paper Prototyping

Paper prototyping is an early-stage lab usability testing performed before the product, website, or software is put into production. It uses wire diagrams and paper sketches of the design interface to perform the usability test.


Different paper screens are created for the multiple scenarios in the test tasks. Participants are given the tasks and point to the elements they would click on the paper interface. A person acting as the 'computer' (a developer or moderator) then swaps the paper sketches to reveal the new layout, snippets, or dropdowns, just as they would appear in the product UI.

The moderator observes the user's behavior and may ask questions about their actions to get more information about the choices.

Example Usability Test with a Paper Prototype

Advantages:

  • It is one of the fastest and cheapest methods to optimize the design process.
  • No coding or designing is required. It helps to test the usability of the design before putting effort into creating a working prototype.
  • Since the usability issue is addressed in the planning phase, it saves time and effort in the development cycle.

Limitations

  • The paper layouts are very time-consuming to prepare.
  • It requires a controlled environment to perform the test properly, which adds more cost.

3. Moderated Card Sorting

Card sorting is helpful in optimizing the information architecture on your website, product, or software. This usability testing method lets you test how users view the information and its hierarchy on your website or product.

In moderated card sorting, the users are asked to organize different topics (labels) into categories. Once they are done, the moderator tries to find out the logic behind their grouping. Successful card sorting requires around 15 participants.

When to use card sorting for usability testing?

You can use card sorting to:

  • Streamline the information architecture of your website or product.
  • Design a new website or improve existing website design elements, such as a navigation menu.

Types of card sorting tests:

Whether it’s a moderated or an unmoderated test, there are three types of card sorting:

  • Open card sorting - Participants group the cards into categories and name those categories themselves.
  • Closed card sorting - Participants sort the cards into predefined categories.
  • Hybrid card sorting - Participants sort the cards into predefined categories but can also create categories of their own.

Advantages of card sorting:

  • It is a user-focused method that helps to create streamlined flows for users.
  • It is one of the fastest and most inexpensive ways to optimize the website’s information architecture.

Limitations:

  • It is highly dependent on users’ perception, especially free sorting.
  • There can be instances when each user creates different categories without common attributes, leading to a failed test.

4. Unmoderated Card Sorting

In unmoderated card sorting, the users sort the cards alone, without a moderator. You can set up the test remotely or in lab conditions. It is much quicker and less expensive than a moderated sorting method.

Unmoderated card sorting is usually done using an online sorting tool like Trello or Mural. The tool records the user behavior and actions for analysis later.
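The analysis behind a card sort usually boils down to a similarity (co-occurrence) matrix: how often two cards were placed in the same group. A minimal sketch in Python, with made-up participant sorts:

```python
from itertools import combinations
from collections import Counter

# Hypothetical results: each participant's grouping of cards.
# (Category names don't matter for similarity, only co-placement.)
sorts = [
    {"Billing": ["Invoices", "Refunds"], "Help": ["FAQ", "Contact"]},
    {"Money": ["Invoices", "Refunds", "Contact"], "Support": ["FAQ"]},
    {"Payments": ["Invoices", "Refunds"], "Support": ["FAQ", "Contact"]},
]

# Count how often each pair of cards lands in the same group.
pair_counts = Counter()
for sort in sorts:
    for group in sort.values():
        for a, b in combinations(sorted(group), 2):
            pair_counts[(a, b)] += 1

# Similarity = share of participants who grouped the pair together.
for pair, count in pair_counts.most_common():
    print(pair, round(count / len(sorts), 2))
```

Pairs with high similarity (here, Invoices and Refunds were grouped together by everyone) are strong candidates for the same category in your navigation.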

5. Tree Testing or Reverse Card Sorting

If card sorting helps you design the website hierarchy, tree testing lets you test the efficiency of a given website architecture design.

You can evaluate how easily users can find the information from the given categories, subcategories, and topics.

The participants are asked to use the categories and subcategories to locate the desired information in a given task. The moderator assesses the user behavior and the time taken to find the information.

You can use tree testing to:

  • Test if the designed groups make sense to the people.
  • See if the categories are easy to navigate.
  • Find out what problems people face while using the information hierarchy.
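Tree-test results are typically summarized with a success rate (did the participant end at the right node?) and a directness rate (did they get there without backtracking?). A minimal sketch with hypothetical results:

```python
# Hypothetical tree-test results: for each participant, whether they
# found the correct node and whether they took a direct path.
results = [
    {"success": True,  "direct": True},
    {"success": True,  "direct": False},
    {"success": False, "direct": False},
    {"success": True,  "direct": True},
    {"success": True,  "direct": True},
]

n = len(results)
success_rate = sum(r["success"] for r in results) / n
directness = sum(r["direct"] for r in results) / n
print(f"Success: {success_rate:.0%}, Directness: {directness:.0%}")
```

A high success rate with low directness suggests the right labels exist but the hierarchy makes people backtrack before finding them.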

Example of Card Sorting & Tree Testing

6. Guerilla Testing

Guerilla testing requires you to approach random people in public places, such as parks or coffee houses, and ask them to take the test. Since it eliminates the need to find qualified participants and a testing venue, it is one of the most time-efficient and cost-effective methods of collecting rich insights about your design prototype or the concept itself. The acceptable sample size is between 6 and 12 participants.

Advantages:

  • It can be used as an ad hoc usability testing method to gather user insights during the early stages of development.
  • It is an inexpensive and fast way of collecting feedback, as you don’t need to hire a specific target audience or a moderator to conduct the test.
  • You can even use a paper prototype to conduct guerilla testing to optimize your design.

Limitations:

  • Since participants are chosen at random, they may not represent your actual audience sample.
  • The test needs to be short, as people may be reluctant to give much time to it. The usual length for guerilla testing is 10-15 minutes per session.

7. Session Recordings or Screen Recording

Session recording is a very effective way to visualize the user interactions on your functional website or product. It is one of the best unmoderated remote usability testing methods to identify the visitors' pain points, bugs, and other issues that might prevent them from completing the actions.

This type of testing requires screen recording tools such as SessionCam. Once set up, the tool anonymously records the users' actions on your website or product. You can analyze the recording later to evaluate usability and user experience.

It can help you visualize the user's journey to examine the checkout process, perform form analysis, uncover bugs and broken pathways, or any other issues leading to a negative experience.

Advantages:

  • It doesn't require hiring participants. You can test the website with your core audience.
  • Since the data is collected anonymously, visitors are not interrupted at any time.
  • You can use screen recording in conjunction with other methods, like surveys, to explore the reasons behind users' actions and collect their feedback.

Remote Screen recording with qualified participants

You can also use screen recording with specific participants and the think-aloud method, where people say their thoughts aloud as they perform the given tasks during the test.

In this method, the participants are selected and briefed before the test. It requires more resources than anonymous session recording. It's a fantastic method to collect in-the-moment feedback and actual thoughts of the participants.

8. Eye-Tracking Usability Test

Eye-tracking testing utilizes a pupil tracking device to monitor participants' eye movements as they perform the tasks on your website or product. Like session recording, it is an advanced testing technique that can help you collect nuanced information often missed by inquiry or manual observation.

The eye-tracking device follows the users' eye movements to measure the location and duration of a user's gaze on your website elements.

The results are rendered in the form of:

  • Gaze plots (pathway diagrams) - The size of the bubble represents the duration of the user's gaze at the point.


  • Gaze replays - Recording of how the user processed the page and its elements.
  • Heatmaps - A color spectrum indicating the most gazed portions of the webpage by all the users. You may have to test up to 39 users to create reliable heatmaps.


This type of testing is useful when you want to understand how users perceive the design UI.

  • You can evaluate how users are scanning your website pages.
  • Identify the elements that stand out and grab users' attention first.
  • Identify the ideal portions of the website that attract users to place your CTAs, banners, and messaging.

9. Expert Reviews

Expert reviews involve a UX expert reviewing the website or product for usability and compliance issues.

There are different ways to conduct an expert review:

  • Heuristic evaluation - The UX expert examines the website or product against accepted heuristic usability principles.
  • Cognitive walkthrough - The expert steps through tasks from a user's perspective to gauge the system's usability.

A typical expert review comprises the following elements:

  • Compilation of areas where the design excels in usability.
  • List of points where the usability heuristics and compliance standards fail.
  • Possible fixes for the indicated usability problems.
  • Criticality of the usability issues to help the team prioritize the optimization process.

An expert review can be conducted at any stage of the product development. It is an excellent method to uncover the issues with product design and other elements quickly.

But since it requires industry experts and in-depth planning, this type of testing can add substantial cost and time to your design cycle.

10. Automated Usability Evaluation

This last method is more of a proof of concept than a working usability testing methodology. Various papers and studies call for an automated usability tool that can iron out the limitations of conventional testing methods.

Here are two interesting studies that outline the possibilities and applications of an automated usability testing framework.

  • Automated usability testing framework
  • USEFul – A Framework To Automate Website Usability Evaluation

Conventional testing methods, though effective, carry various shortcomings such as inefficiency of the moderator, high resource demands, time consumption, and observer bias.

With automated usability testing, the tool would be self-sufficient, performing the following functions on its own:

  • Point out major usability issues just like conventional methods.
  • Carry out analysis and calculations by itself, providing quicker results.
  • Provide more accurate and reliable data.
  • Allow for increased flexibility and customization of the test settings to favor all the stages of development.

It would allow developers and researchers to reduce development time, as the testing and optimization iterations could be carried out simultaneously.

One of the most common questions about usability testing is: 'When can I do it?' The answer is anytime during the product life cycle - during the planning stage, the design stage, and even after release.

1. Usability Testing During the Planning Stage

Whether you are creating a new product or redesigning it, conducting usability tests during the planning or initial design stage can reveal useful information that can prevent you from wasting time in the wrong place.

It's when you are coming up with the idea of the product or website design. So testing it out can help you dispel initial assumptions and refine the product flows while still on paper.

For example, you can test whether the information architecture you are planning will be easy to understand and navigate for users. Since nothing is committed, it will help restructure it if needed without much effort.


You can use usability testing methods like paper prototyping, lab testing, and card sorting to test your design concept.

2. Usability Testing During the Design or Development Stage

Now that you have moved into the development stage and produced a working prototype, you can conduct tests to do behavioral research.

At this point, usability testing aims to find out how the functionality and design come together for the users.

  • With a clickable prototype, you can uncover issues with flows as well as design elements.
  • You can study actual user behavior as they interact with your product or website to gain deeper insights into their actions.

While usability testing during the planning stage gives you qualitative insights, a design prototype also lets you measure quantitative usability metrics, such as:

  • Task completion time
  • Success rate
  • Number of clicks or scrolls

This data can help validate the design and make the necessary adjustments to the process flows before continuing to the next phase of development.

3. Usability Testing After Product Release

There is always room for improvement, so usability testing is just as crucial after the product launch.

You may want to optimize the current design or add new features to improve the product or website.

It is beneficial to test the redesign or update for usability issues before deploying it. It will help to evaluate if the new planned update works better or worse than the current design.

Running a successful usability test depends on multiple factors such as time constraints, budget, and tools available at your disposal.

Though each usability testing method has a slightly different approach due to the testing conditions and depth of research, they share some common attributes, as explained in this section.

Let’s explore the eight common steps to conduct a usability test.

Step 1: Determine What to Test

Irrespective of the usability testing method, the first step is to draw the plan for the test. It includes finding out what to test on your website or product.

It can be the navigation menu, checkout flow, new landing page design, or any other crucial process.

If it is a new website design, you probably have the design flow in mind. You can create a prototype or wire diagram depicting the test elements.

But if you are trying to test the usability of an existing website or product flow, you can use the data from various tools to find friction points.

a. Google Analytics (GA)

Use the GA reports and charts as the starting point to narrow your scope. You can locate the pages with low conversions and high bounce rates, compare the difference between desktop vs. mobile website performance, and compare the traffic sources and other details.


b. Survey feedback

The next step is to deploy surveys at the desired points and use the survey feedback to uncover the issues and problems with these pages.

  • An issue with the navigation menu.
  • Payment problems during checkout, like payment failure or a pending order status even after successful payment.
  • Issues with the shopping cart.
  • A missing feature on the website or webpage.

You can choose from the below list of different types of survey tools based on your requirements:

1. 25 Best Online Survey Tools & Software

2. Best Customer Feedback Tools

3. 30 Best Website Feedback Tools You Need

4. 11 Best Mobile In-App Feedback Tools


c. Tickets, emails, and other communication mediums

Complete the circle by collating the data from tickets, live chat, emails, and other interaction points.

These can be valuable, especially when you are hosting a SaaS product. Customers’ emails can reveal helpful information about bugs and glitches in the process flows and other elements.

Once you have the data, compile it in a single place and start marking the issues based on frequency of mention, criticality, number of requests, and other factors. This will let you prioritize which element to test. Plus, it will help set clear measurement goals.
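The prioritization step can be as simple as a weighted score over the compiled issue log. The issues, counts, and weights below are hypothetical; tune the weighting to your own context:

```python
# Hypothetical issue log compiled from surveys, tickets, and chat.
issues = [
    {"issue": "Checkout payment failure", "mentions": 42, "criticality": 3},
    {"issue": "Confusing navigation menu", "mentions": 18, "criticality": 2},
    {"issue": "Missing wishlist feature",  "mentions": 7,  "criticality": 1},
]

# A simple priority score: weight criticality heavily, then add frequency.
for item in issues:
    item["priority"] = item["criticality"] * 10 + item["mentions"]

# Highest-priority issues become candidates for the usability test.
for item in sorted(issues, key=lambda i: i["priority"], reverse=True):
    print(f'{item["issue"]}: {item["priority"]}')
```

However you weight it, the point is to make the "what to test first" decision explicit and repeatable rather than ad hoc.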

Step 2: Set Target Goals

It is necessary to set the goals for the test to examine its success or failure. The test goal can be qualitative or quantitative in nature, depending on what you want to test.

Let's say you want to test the usability of your navigation menu. Start by asking yourself questions to identify the purpose of your test. For example:

  • Are users able to find the 'register your product' tab easily?
  • What is the first thing users notice when they land on the page?
  • How much time does it take to find the customer support tab in the navigation?
  • Is the menu easy to understand?

Once you have the specific goals in mind, assign suitable metrics to measure during the test.

For example:

  • Successful task completion: Whether the participants were able to complete the given task or not.
  • Time to complete a task: The time taken to complete the given task.
  • Error-free rate: The share of participants who completed the task without making any errors.
  • Customer ratings and feedback: Customer feedback after completing the task or test, such as satisfaction ratings, ease of use, star ratings, etc.

These metrics will help to establish the outcome of the test and plan the iteration.
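Computing these metrics from per-participant results takes only a few lines. A minimal Python sketch with hypothetical data for a single task:

```python
# Hypothetical per-participant results for one task.
results = [
    {"completed": True,  "seconds": 48,  "errors": 0, "rating": 5},
    {"completed": True,  "seconds": 95,  "errors": 2, "rating": 3},
    {"completed": False, "seconds": 120, "errors": 4, "rating": 2},
    {"completed": True,  "seconds": 60,  "errors": 0, "rating": 4},
]

n = len(results)
success_rate = sum(r["completed"] for r in results) / n
avg_time = sum(r["seconds"] for r in results) / n
error_free_rate = sum(r["errors"] == 0 for r in results) / n
avg_rating = sum(r["rating"] for r in results) / n

print(f"Success rate: {success_rate:.0%}")        # 75%
print(f"Avg. completion time: {avg_time:.1f}s")   # 80.8s
print(f"Error-free rate: {error_free_rate:.0%}")  # 50%
print(f"Avg. satisfaction: {avg_rating:.1f}/5")   # 3.5/5
```

Tracking the same numbers across test iterations lets you tell whether a redesign actually moved the needle.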

Here is a sample goal template you can use in a usability test:

[Image: sample goal template]

This is how it will look once filled in:

[Image: filled goal template]

Step 3: Identify the Best Method

The next step is to find the most suited method to run the test and plan the essential elements for the chosen usability testing method.

  • If you are in the initial stage of design, you can use paper prototyping.
  • If it is a new product, go for lab usability testing to get detailed insight into user behavior and product usability.
  • If you want to restructure the website hierarchy, you can use card sorting to observe how users interact with the new information structure.
  • If it is proprietary software, you can also conduct an expert review to see if it meets all the compliance measures.

Once you have decided on the method, it is time to think about the overheads.

  • If it is an in-person moderated test, you need a moderator, venue, and participants. You would also need to calculate the length for each session and the equipment required.
  • If it is a remote moderated task, find the right tool to run the test. It should be able to connect the moderator and participants through a suitable medium like phone or video. At the same time, it should allow the moderator to observe the participants' behavior and actions to ask follow-up questions.
  • If it is a remote unmoderated test, the usability testing tool would have to explain the instructions, schedule the tasks, guide the participants to each task and record the necessary behavioral attributes simultaneously.

Step 4: Write the Usability Tasks

Writing the Pre-Test Script

Along with the task for the actual test, prepare a pre-test and introductory script to get to know about the participant (user persona) and tell them the purpose of the usability test. You can create scenarios to help the participants relate the product or website to their real-world experience.

Suppose you are testing a SaaS-based project management system. You can use the following warm-up questions to build user personas:

  • What is your current role at your company?
  • Have you used project management software before?
  • If yes, for how long? Are you currently using it?
  • If no, do you know what a project management system does?

Use the information to introduce the participant to the test's purpose and tell them about the product if they have never heard of the concept.

Writing the Test Tasks

Probably the most important part of usability testing, tasks are designed as scenarios that prompt the participant to find required information, get to a specific product page, or perform some other action.

A task can be a realistic scenario, straightforward instructions to complete a goal, or a use case.


Pro tip: Use the data from customer feedback and your knowledge of customer behavior to come up with practical tasks.

Using the previous example, let's say you have to create tasks for usability testing of your project management tool. The first scenario could look like this:

'You are a manager of a dev-ops team with 20 people. You have to add each team member to your main project - 'Theme development.' How will you do it?'

This scenario will help you assess the following:

  • Can the participant find the teams section from the navigation menu?
  • Can they find the correct project in the team menu, which shows the project name - Theme development?
  • How fast can they find the required setting?

The second scenario could be:

'Once you have added the team members, you want to assign a task to two lead developers, Jon and Claire, under the Theme development project. The deadline for the task needs to be next Friday. How will you do it?'

Use this scenario to test the following:

  • How easy is it to navigate the menu?
  • Is the design of the task form easy to follow?
  • How easily can the participant find all the fields in the task form, such as deadline, task name, developer name, etc.?

If the test is moderated, ask follow-up questions to find the reason behind user actions. If the test is unmoderated, use a screen recording tool or eye-tracking mechanism to record users' actions.

Remember, the sequence of the tasks and the associated scenarios will depend on the elements you want to test for usability.

  • For project management software, the primary function is to assign tasks, track productivity and monitor the deadlines.
  • For an e-commerce website, the main function is conversions, so your tasks and scenarios would be oriented towards letting the users place an order on the website.

Step 5: Find the Participants

There are multiple ways to choose the participants for your usability test.

  • Use your website audience: If you have a website, you can add survey popups to screen the visitors and recruit the right participants for the test. Once you have the required number of submissions, you can stop the popup.


  • Recruit from your social media platforms: You can also use your social channels to find the right participants.
  • Hire an agency: You can use a professional agency to find the participants, especially if you are looking for SMEs and a specific target audience, like people working in the IT industry who have experience with a project management tool.

To increase the chances of participation, always offer your participants an incentive, such as gift cards or discount codes.

Step 6: Run a Pilot Test

With everything in place, it is time to run a pre-test simulation to see if everything is working as intended or not. A pilot test can help you find issues with the scenarios, equipment, or other test-related processes. It is a quality check of your usability test preparation.

  • Choose a candidate who is not related to the project. It can be a random person or a member of a different team not involved with the project.
  • Perform the test as if they were the actual participant. Go through all the test sessions and equipment to check everything works fine.

With pilot testing, you can check:

  • If the scenarios are task-focused and easy to understand.
  • If there is any faulty equipment.
  • If the pre- and post-test questions are up to the mark.
  • If the testing conditions are ideal.

Step 7: Conduct the Usability Test

If it is an in-person moderated test, start with the warmup questions and introductions from the pre-test script. Make sure the participants are relaxed.

Start with an easier task to help the participants feel comfortable. Then transition into more specific tasks. Make sure to ask for their feedback and explore the reasons behind their actions, such as:

  • How was your experience completing this task?
  • What did you think of the design overall?
  • Would you like to say something about the task?

For the remote unmoderated tests, make sure that the instructions are clear and concise for the participants.

You can also include post-test questions for the participants, such as:

  • Did I forget to ask you about anything?
  • What are the three things you liked the most about the product/website/software?
  • Did it lack anything?
  • On a scale from 1 to 10, how easy was the product/website/software to use?

Step 8: Analyze the Results

Once the test is over, it is time to analyze the results and turn the raw data into actionable insights.

a. Start by going over the recordings, notes, and transcripts, and organize the data points in a single spreadsheet. Note down each error the user encountered and the associated task.

b. One way to organize your data is to list the tasks in one column, the issues encountered in each task in the next column, and then add the participant's name next to each issue. This will show how many users faced the same problem.

c. Also, calculate the quantitative metrics for each task, such as success rate, average completion time, error-free rate, and satisfaction ratings. These will help you track the goals of the test as defined in Step 2.

d. Next, mark each issue based on its criticality. According to NNGroup, issues can be graded on five severity ratings ranging from 0-4 based on their frequency, impact, and persistence:

  • 0 = I don't agree that this is a usability problem at all
  • 1 = Cosmetic problem only: need not be fixed unless extra time is available on the project
  • 2 = Minor usability problem: fixing this should be given low priority
  • 3 = Major usability problem: important to fix, so should be given high priority
  • 4 = Usability catastrophe: imperative to fix before the product can be released

e. Create a final report with the highest-priority issues at the top and the lowest-priority ones at the bottom. Add a short, clear description of each issue and where and how it occurred. You can attach evidence, such as recordings, to help the team reproduce it on their end.

f. Add the proposed solutions to your report. Work with other teams to discuss the issues, identify possible solutions, and include them in the usability testing report.

g. Once done, share the report with the different teams so they can optimize the product/website/software for better usability.
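To make steps c and d concrete, here is a minimal sketch of how per-task metrics and severity ordering might be computed. All task names, timings, and issues below are hypothetical examples, not output from any particular tool:

```python
# Hypothetical per-task results: 1 = task completed, 0 = task failed
results = {
    "find_pricing_page": [1, 1, 0, 1, 1],
    "complete_checkout": [1, 0, 0, 1, 0],
}
completion_seconds = {
    "find_pricing_page": [32, 41, 90, 28, 35],
    "complete_checkout": [120, 200, 180, 95, 210],
}

# Step c: success rate and average completion time per task
for task, outcomes in results.items():
    success_rate = sum(outcomes) / len(outcomes) * 100
    avg_time = sum(completion_seconds[task]) / len(completion_seconds[task])
    print(f"{task}: {success_rate:.0f}% success, {avg_time:.0f}s average")

# Step d: issues tagged with NNGroup-style severity (0-4), most critical first
issues = [
    {"task": "complete_checkout", "issue": "coupon field hidden", "severity": 3},
    {"task": "find_pricing_page", "issue": "menu label unclear", "severity": 2},
    {"task": "complete_checkout", "issue": "payment error on submit", "severity": 4},
]
for item in sorted(issues, key=lambda i: i["severity"], reverse=True):
    print(item["severity"], item["task"], "-", item["issue"])
```

In practice the raw numbers come from your recordings and notes spreadsheet; the point is that each metric in step c is a simple per-task aggregate, and the prioritized report in step e falls straight out of the severity sort.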

Do you feel that you know everything there is to know about your product and its users?

If you answered yes, you may ask: what, then, is the purpose of a usability test?

No matter how much you know about your customers, it isn’t wise to ignore the possibility that there is more to be learned about them, or about any shortcomings in your product.

That is why what you ask, when you ask, and how you ask it are of utmost importance.

Here are a few examples of usability testing questions to help you form your own.

Questions for ‘first glance’ testing

Check if your design communicates what the product/website is at first glance.

  • What do you think this tool/ website is for?
  • What do you think you can do on this website/ in this app?
  • When (in what situations) do you think you would use this?
  • Who do you think this tool is for? / Does this tool suit your purposes?
  • Does this tool resemble anything else you have seen before? If yes, what?
  • What, if anything, doesn’t make sense here? Feel free to type in this text box.

Pro-Tip: If you’re testing digitally with a feedback tool like Qualaroo, you can even time your questions to pop up after a pre-set time spent on-site for a more accurate first glance test.

Questions for specific tasks or use cases

Develop task-specific questions for common user actions (depending upon your industry).

  • How did you recognize that the product was on sale? (E-commerce and retail)
  • What information did you feel was missing, if any? (E-commerce and retail)
  • What payment methods would you like to be added to those already accepted? (E-commerce and retail/SaaS)
  • How did you decide that the plan you have picked was the right one for you? (SaaS)
  • Do you think booking a flight on this website was easier or more difficult than on other websites you have used in the past? (Travel)
  • Did sending money via this app feel safe? (Banking/FinTech)
  • Do you think data gathered by this app is reliable, safe, and secure from breaches or hacks? (Internet)

Pro-tip: If you want to test the ease with which users perform specific tasks (like the ones listed above), consider structuring your tasks as scenarios instead of questions.

Questions for assessing product usability

Ask these questions after users complete test tasks to understand usability better.

  • Was there anything that surprised you? If yes, what?
  • Was there anything you expected but was not there?
  • What was difficult or strange about this task, if anything?
  • What did you find easiest about this task?
  • Did you find everything you were looking for? / What was missing, if anything?
  • Was there anything that didn’t look the way you expected? If so, what was it?
  • What was unnecessary, if anything? / Was anything out of place? If so, what was it?
  • If you had a magic wand, what would you change about this experience/task?
  • How would you rate the difficulty level of this task?
  • Did it take you more or less time than you expected to complete this task?
  • Would you normally spend this amount of time on doing this task?

Pro-tip: If you are getting users to complete more than one task, limit yourself to no more than 3 questions after each task to help prevent survey fatigue.

Questions for evaluating the holistic (overall) user experience

Finalize testing with broad questions that collect new information you haven’t considered.

  • Try to list the features you saw in our tool/product.
  • Do you feel this application/tool/website is easy to use?
  • What would you change in this application/website, if anything?
  • How would you improve this tool/website/service?
  • Would you be interested in participating in future research?

Pro-tip: No matter which way you phrase your final questions, we recommend using an open-ended answer format so that you can provide users with a space to share feedback more freely. Doing so allows them to flesh out their experience during testing and might even inadvertently entice them to bring up issues that you may never have considered.

If you’re wondering how to conduct usability testing for the first time, or without having to jump through hoops and wrangle code, you can simply stroll over to Qualaroo’s survey templates.

We have created customizable templates for usability testing, like SUPR-Q (Standardized User Experience Percentile Rank Questionnaire, with or without NPS), UMUX (Usability Metric for User Experience, with 2 positive & 2 negative statements), and UMUX Lite (2 positive statements).

SUPR-Q is a validated way to measure the general user experience on a website or application. It includes 8 questions within 4 areas: usability, trust/credibility, loyalty (including NPS), and appearance. However, it doesn’t identify bottlenecks or problems with navigation or specific elements of the interface. A score of 50 (the 50th percentile) is the comparison benchmark for assessing your product’s UX.

UMUX allows you to measure the general usability of a product (software, website, or app). It has 4 statements for users to rate on a 5- or 7-point Likert scale. However, it isn’t generally used to measure specific characteristics like usefulness or accessibility, nor for identifying navigation issues, bottlenecks, or problems that are related to specific elements of your product’s interface.

On a related note, if you have launched a product aimed specifically at smartphone users and you wish to understand the contextual in-app user experience (UX), simply take these 3 steps.

Even though there are multiple usability test methods, they share some general guidelines to ensure the accuracy of the test and results. Let’s discuss some of the do’s and don’ts to keep in mind while planning and conducting a usability test.

1. Always Do a Pilot Test

The first thing to remember is to run a quality check of the usability test before going live. You wouldn't want anything to fall apart during actual testing, be it a broken link, faulty equipment, or ineffective questions and tasks: that would mean wasted time and resources.

Use a tester who is not associated with the usability test, such as a member of another team in your organization. Run the usability test simulation under actual conditions to gauge the efficiency of your tasks and prototype. Pilot testing can help uncover previously undetected bugs.

It can help to:

  • Measure the session time and inspect the quality of the equipment and other parameters.
  • Test whether the prototype is designed according to the task, and add any missing flows or elements.
  • Test the questions and tasks to gauge whether users understand them easily and whether the scenarios are clear.

2. Leverage the Observer Position

It is always a good practice to let your team attend the usability test as observers. It can produce a two-pronged effect:

  • The team can learn firsthand about user experience and how people interact with your product.
  • It can help them follow the user journey and observe the points of friction, so they can come up with optimal solutions with that journey in mind.

Pro tip: Be careful not to disturb or talk to the participants. The observer's role is to be invisible.

3. Determine the Sample Size for Your Usability Test

According to NNGroup, five users in any usability test can reveal around 85% of the usability problems.

However, considering other external factors, there are different acceptable sample sizes for different usability testing methods.

  • Guerilla testing requires 5-8 participants.
  • Eye-tracking requires at least 39 users.
  • Card sorting can produce reliable results with 15 participants.

Pro tip: If you aim to measure usability for multiple audience segments, the sample size increases accordingly to include representation for each segment.

So, use the correct sample size for your usability test to make the results statistically significant.
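The five-user guideline traces back to Nielsen's formula for the share of problems found with n users, 1 − (1 − L)^n, where L ≈ 0.31 is the average probability that a single user uncovers a given problem. A quick sketch of the math (the 0.31 figure is Nielsen's published average, not a universal constant):

```python
# Share of usability problems found with n participants, using
# Nielsen's average per-user detection rate L = 0.31
def problems_found(n, detection_rate=0.31):
    return 1 - (1 - detection_rate) ** n

for n in (1, 3, 5, 10, 15):
    print(f"{n:2d} users -> {problems_found(n):.0%} of problems found")
```

With five users this works out to roughly 84-85%, which is why adding ever more participants to a single round yields diminishing returns; several smaller iterative rounds tend to be a better use of the same budget.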

4. Recruit More Users

Not all participants may show up for the usability test, whether it is in-person or remote. That's why it is helpful to recruit more participants than your target sample size. It will ensure that testing reaches the required statistical significance and you obtain reliable results.

5. Always Have a Quick Post-Session Questionnaire

A post-test interview is a potential gold mine for deeper insights into user behavior. Users are more relaxed than during the test and can provide meaningful feedback about their difficulties in performing the tasks, their delights with the product, and the overall experience. Plus, it presents the opportunity to ask follow-up questions you may have missed during the tasks.

Pro tip: Include the post-test interview period when calculating the total session time. For example, if you plan each session to be 10-15 minutes long, reserve 2-3 minutes for post-test questions.

6. Make the Participants Feel Comfortable

If your participants are nervous or stressed out, they won't be able to perform the tasks at their best, which means skewed test results. So, try to make the participants feel relaxed before they start the test.

One way is to have a small introduction round during the pre-test session. Instead of strictly adhering to the question sheet, ask a few general friendly questions as the moderator to establish a relationship with the user. From there, you can smoothly transition into the testing phase without putting too much pressure on them.

7. Mitigate the Observer Effect

The “observer effect” or “Hawthorne effect” is when people in studies change their behavior because they are being watched. In moderated usability testing, participants may get nervous or shy away from criticizing the product. They may not share their actual feedback or ask the questions that come to mind. All of this can lead to test failure or unreliable results.

So, make sure that the moderator does not influence the participants. A simple trick is to pretend that you are writing something instead of constantly watching over the participants.

The observer effect is one more reason to have a friendly pre-test conversation: tell the participants to ask questions when they don't understand something and to share their feedback openly. Discuss the test's purpose so they understand that their feedback is valuable for making the product better.

The overarching purpose of this usability testing guide was to help answer one essential question: how do I create the best product that satisfies customers (and, as a bonus, outshines the competition)? We hope it shed light on the possible ways to answer it. Plus, here are a few pitfalls that are best avoided as you search for the answers:

1. Creating Incorrect or Convoluted Scenarios

The success of the usability test depends on the tasks and scenarios you provide to the participants. If the scenarios are hard to understand, the participants may get confused, leading to a drop in the task success rate caused not by usability problems but by the questions themselves. The problem is compounded in unmoderated usability testing, where a stuck participant cannot turn to a moderator.

That's why it is essential to keep your sentences concise and clear so that the tester can follow the instructions easily. A pilot test is an excellent way to test the quality of the questions and make changes.

2. Asking Leading Questions

Leading questions are those that carry a response bias. They can unintentionally steer participants in a specific direction, such as towards a step you want them to take or an element you want them to select.

Leading questions nullify the usability test because they help participants reach the answer. So, test your scenarios and questions during the pilot run to weed out any such questions.

3. Not Assigning Proper Goals and Metrics

It is essential to set the goals clearly to deliver a successful usability test. Whether the goals are qualitative or quantitative, assign suitable metrics to measure them properly.

For example, if the task aims to test the usability of your navigation menu, you may want to see whether the users can find the information or not. But to quantify this assessment, you can also calculate the success rate and time taken by participants to complete their tasks.

While the qualitative analysis will reveal points about user experience, the quantitative data will help calculate the reliability of your findings. In this way, you can approach the test results objectively.

4. Testing With Incorrect Audience

One of the biggest mistakes in usability testing is using an incorrect audience sample, which leads to inaccurate results.

For example, if you use friends or coworkers who already know the product/software, they may not face the same problems that actual first-time users would.

In the same way, if they are entirely unaware of the product fundamentals, they might get stuck at points your actual audience could easily navigate.

To recruit the right audience, start by focusing on the question: who will be the actual users of the elements under test? Are they new users, verified customers, or some other user segment?

Once you have your answers, you can set the proper goals for the test.

5. Interrupting or Interacting With the Participants

Another grave mistake is repeatedly interrupting the participants. A usability test is meant to observe users testing the product/software without any outside influence. The moderator can ask questions and guide them if necessary.

Constantly bugging the participants may make them nervous or frustrated, disturbing the testing environment and skewing the results.

6. Not Running a Pilot Test

As mentioned before, a pilot test is a must in usability testing. It helps weed out the issues with your test conditions, scenarios, equipment, test prototype, and other elements.

With pilot testing, you can sort out these problems before you start the test.

7. Guiding Users During the Test

The purpose of usability testing is to simulate actual user behavior to measure how easy it would be for them to use the product. If you guide the users through the scenario, you are compromising the test results.

The moderator can help the participants understand the scenario, but should not help them complete the task or point them towards the solution.

8. Forming Premature Conclusions

Another mistake to avoid is drawing conclusions from the test results of the first two or three users. Observe all the participants before reaching any conclusions.

Also, do not rush the testing process. You may be tempted to feel you have all the information after testing a few participants, but stay the course. Going through all the participants establishes the reliability of your results and may point you towards new issues and problems.

Bonus Read: 30 Best A/B Testing Tools Compared

9. Running Only One Testing Phase

Experimentation and testing is an iterative process. Plus, since the sample size in usability testing is usually small, it is a big mistake to treat the results from one test phase as definitive.

The same goes for the solutions implemented for the issues found in the test. There can be many solutions to a single problem, so how will you know which one works best?

The only way to find out is to run successive tests after implementing the solution to optimize product usability. Without iterations, you cannot tell if the new solution is better or worse than the previous one.

It is true what they say: experience is the best teacher. As you do more tests, you will gain a better understanding of what usability testing is actually about - creating a perfect product.

It stands to reason that the easier it is for your prospective customers to use your product, the more sales you will see. Usability testing helps you eliminate unforeseen glitches and improve user experience by collecting pertinent user feedback for actionable insights. To get the best insights, Qualaroo makes usability testing a delightful experience for your testers.

Irrespective of its size, every organization needs to hone its ability to listen to its customers by creating a robust VoC strategy suited to its internal business model and existing VoC feedback.

Each Voice of Customer technique can be used on its own or coupled with other methods for optimum results. But for your efforts to turn into the desired results, you need the right tool by your side. With Qualaroo surveys, you can start collecting real-time, unbiased feedback and procure qualitative insights from your quantitative data.



11 Usability testing templates for better UX

Whether you’re looking to start usability testing or have conducted usability tests before, there’s always room to streamline your process, and maximize your results.

Usability testing templates are the answer. A tried-and-tested template is an invaluable tool in any user researcher's toolkit, but before we dive into our top 11 templates—some we built and some we borrowed—let’s cover the what and why of usability testing.

What is usability testing?

Usability testing is the process of evaluating the usability of your product by asking real users to complete a list of tasks while observing and noting their interactions.

Usability testing can be conducted in person or as remote research via different usability testing tools , and can be either moderated or unmoderated—meaning the researcher is either present or absent from the process.

What’s the purpose of usability testing?

The aim of usability testing is to understand if your design is usable and intuitive enough for users to accomplish their goals. Usability tests enable you to gather feedback—both quantitative and qualitative—from users to help improve UX and ensure your product meets user wants and needs.

When planning your usability test, you’ll hone in on a specific goal or area of the product you’re researching. Whether it’s information architecture, a new feature interface, or the layout of a mobile app—there’s a relevant usability testing template out there ready to springboard you toward insight-driven product development.

11 Usability testing templates

Take a look at these 11 free templates for a helpful quick-start to everything from planning your usability test to analyzing the results.

1. Template for usability testing plan

It’s safe to say a usability testing plan is a necessity, and this simple template from Milanote is a great resource.

It’s best used for outlining your research goals and how you’ll get there—including participants and how you’ll recruit them, usability testing methods, and reporting formats.

Planning for usability testing is half the battle when it comes to getting consistent and accurate insights—it’s the strategy behind the action. It maximizes the benefit of usability testing by providing a step-by-step process for organizing, conducting, and reporting on usability tests.

This plan template includes everything you need to start planning your research—what are you waiting for?

Use this template

2. Template for early prototype usability testing

Run an early prototype test

Easily test your early-stage prototypes with users using this template. Gather key insights, expectations, frustrations, and more from your target audience to help shape product decisions and get your team moving in the right direction.

See this template

Conducting usability testing on prototypes helps you iron out the wrinkles before submitting a prototype for development. It’s a great way to understand how you can improve your product from the get-go, and avoid spending time and effort building products that don’t work as users want and expect them to.

To use this template, simply input your product’s info—such as what you’re working on and the task to be completed—to create your usability test. This can then be shared with your test participants and will generate a report with all the results.

3. Template for usability testing a new product

Usability testing a new product

Validate usability across your wireframes and prototypes with real users early on. Use this pre-built template to capture valuable feedback on accessibility and user experience so you can see what’s working (and what isn’t).

Usability testing your product assesses how easily users can navigate the interface to find what they need. It enables you to identify pain points to iterate and improve on, whilst also validating design decisions by learning what worked well for users.

You can incorporate both qualitative and quantitative testing into product usability testing to ensure you not only identify issues—such as the length of time to complete a task—but also gather details on the user experience. It’s not just the what that matters—but also the why .

This template enables you to do both by including a task—for quantitative data, such as time spent on the task—and related open-ended follow-up questions—for qualitative data, such as what users expected to see. Your quantitative data identifies the issue, and your qualitative data explains why it’s happening.

4. Template for testing mobile app usability

Test mobile app usability

Help deliver a friction-free product experience for users on mobile. Test mobile app usability to discover pain points and validate expectations, so your users can scroll happily (and your Product team can keep smiling too).

If you’re testing a mobile app, this one’s for you. It’s a ready-to-use template pre-populated with open questions, yes-no questions, and opinion scales—alongside the usability task itself.

Mobile and desktop vary greatly when it comes to what users expect from the interface, information architecture, and interactions. Ensuring your app is up to scratch requires ample testing and feedback, with a template specifically focused on mobile usability.

5. Template for testing feature discovery

Feature discoverability template

Looking to increase the findability of specific features in your product? This template will help you determine feature discoverability with users, as well as highlight important pain points that need to be addressed.

This feature discoverability template helps you test the navigation process through your product and how discoverable certain features are. Having great features is one thing, but they’re of little use if users can’t find them.

Running a template like this helps identify any roadblocks hindering discoverability, and reveals potential reasons behind poor feature performance.

What’s more, this template—and all Maze templates—enable you to add conditional logic flows to your usability tests. Put simply, this means you can build dynamic tests that adapt to previous user responses and avoid asking non-applicable questions.

6. Template for testing website sign-up flow usability

Test your website sign-up flow

Easily test your website sign-up flow with this template. Discover how easy it is for customers to navigate and ultimately sign up to your product or subscription, and learn if any points of friction come between you and them. This template can be run with both lo-fi or hi-fi wireframes or prototypes—simply add your own prototype when prompted.

Zooming in on the sign-up process, this usability template helps you understand what users expect from your website. It identifies how users feel about navigation and design through usability tasks and a series of open and closed follow-up questions.

The results of this test then enable you to optimize your website conversion flows by understanding the underlying issues and challenges your users might be experiencing.

7. Template for testing the usability of product features

Test feature usability

Whether you’re introducing a new feature to your product, or want to test if an existing one is working, this feature usability template will help capture the feedback you need. Discover if users can complete tasks set and tackle any problem areas you uncover ahead of launch to save your team valuable time and resources.

Designing an intuitive feature interface is a big ask, but it’s made a lot easier with usability testing. Usability testing means you don’t have to take a shot in the dark at how you think users want to engage with features, and instead can shed light on what they actually want.

This template enables you to gather feedback on product feature usability with open questions—such as what did you find difficult about the task?—and opinion scales—where users rate the ease of the experience from very easy to very difficult.

It’s most beneficial to catch issues during the first stages of development, so you want to conduct usability testing on feature prototypes early, often, and between iterations.

8. Template for System Usability Scale testing

System Usability Scale (SUS) Template

Discover if your product is meeting user expectations by using this template. Shaped around the reputable System Usability Scale Framework, you’ll collect valuable insights into what’s working (and what isn’t) when it comes to your product. Then use the feedback to fill in the gaps and make impactful improvements where it matters most.

Next up, the system usability scale template allows you to get an understanding of the overall usability of your product and features. Consisting of a ten-question survey with scaled responses, you can quickly assess and identify a benchmark for your product’s broad usability.

System usability scale surveys are easy to administer, and provide accurate results with relatively small sample sizes. While this kind of template isn’t diagnostic—meaning it will identify issues but not necessarily explain them—it's a great starting point to get an idea of where your testing efforts would be most beneficial.
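For reference, the standard SUS scoring procedure is simple enough to sketch: each of the ten items is answered on a 1-5 scale, odd-numbered (positively worded) items contribute score minus 1, even-numbered (negatively worded) items contribute 5 minus score, and the total is multiplied by 2.5 to give a 0-100 result. The responses below are a made-up example:

```python
# Standard SUS scoring: ten responses on a 1-5 agreement scale.
# Odd-numbered items are positively worded (score - 1); even-numbered
# items are negatively worded (5 - score); the sum is scaled by 2.5.
def sus_score(responses):
    assert len(responses) == 10, "SUS requires exactly ten responses"
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Hypothetical respondent
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0 (above the ~68 average)
```

A score around 68 is the commonly cited average benchmark, so the hypothetical respondent above rates the product well above average.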

9. Template for SEQ test for usability insights

Run a Single Ease Question (SEQ) test

Determine the difficulty of completing usability tasks with this Single Ease Question template. Gather quick quantitative feedback from your users and scope out potential friction to make more impactful changes to your product.

The single ease question (SEQ) test involves asking users to assess task difficulty on a scale from one (very difficult) to seven (very easy). It enables you to get highly-specific feedback from users directly after they’ve completed a usability task.

Acting on SEQ test feedback ensures you’re building a frictionless product that users can easily navigate. These insights help you execute design decisions with confidence, and make changes to existing features to ensure a top user experience.
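Scoring an SEQ test is equally lightweight: average the 1-7 ratings collected right after the task and compare against a benchmark. The ratings below are made up for illustration:

```python
# Mean SEQ rating for one task (1 = very difficult, 7 = very easy)
def mean_seq(ratings):
    return sum(ratings) / len(ratings)

task_ratings = [6, 7, 5, 6, 4, 7]  # hypothetical post-task responses
print(f"SEQ: {mean_seq(task_ratings):.1f}")
```

A commonly cited cross-study average for SEQ is around 5.5, so tasks scoring meaningfully below that deserve a closer qualitative look.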

10. Template for usability survey

Collect insights on features

Get an accurate reading on how your features measure up with users. Gather usability feedback quickly so you can make the right changes—faster.

This usability feedback template is a versatile way to get user feedback from existing users. Using multiple choice, yes/no, and open-ended questions, you’re able to ask users about their experience with your product.

Insights gathered from a usability survey help shape your product roadmap and optimize existing features within your product. Feedback is an essential part of development—making this template ideal, as it can be adapted to a wide variety of features and processes.

11. Template for usability testing reporting

Last, but by no means least, we’ve got the usability testing report template. This is the template you want for reviewing your results—it’s ideal to give context, share testing methodologies, and highlight findings to your team or organizational leaders.

It’s important to consider the difference between reporting and analytics when gathering your test results—as a report only gathers your findings, it doesn’t explain them. Make sure you also analyze usability findings to uncover the why—a good user research tool helps visualize your data to aid this process—before putting together key takeaways and next steps to share with your team.

Sharing your usability research findings and insights with other stakeholders is a key part of being an effective UX leader. This usability report template makes space for all the key aspects of your research process to back up your plan of action.

Usability testing your product

Usability testing is an essential part of creating intuitive products that serve and delight users. It tests your designs with real people for real feedback you can form into actionable insights.

Stick with these 11 templates to guide you through your usability testing journey—whether that’s before or after the product launch—and check out these usability testing examples for some inspiration on where to start. With the right tools, preparation, and checklists, usability testing doesn’t need to be a mystery.

Frequently asked questions about usability testing templates

What is a usability testing template?

A usability testing template is a pre-made outline of a usability test that suits a specific research focus, such as reviewing mobile app navigation or a website sign-up process. There is a wide variety of usability templates available to help conduct faster, more reliable, and more efficient usability testing.

Why use a usability testing template?

A usability testing template speeds up the research process by including tasks and questions that are typically needed in each type of usability study. By using templates, you can streamline your preparation and research process, as well as ensure usability tests remain consistent across a project.

What are the goals of usability testing?

The goal of usability testing is to understand how users interact with your product—specifically any difficulties they encounter. This helps improve your product by enabling you to identify friction and optimize the user experience throughout.

How to use the System Usability Scale in modern UX


We would have fewer unused products on the market if more product owners paid attention to user experience and usability testing. Many think about it, but most designers and product owners believe it’s too expensive, requiring more funds and a larger workforce than they have.


What if I told you there was a way to conduct usability testing without spending a fortune or requiring a large team? Would you give it a shot? Let’s dive into the world of the System Usability Scale.

If you’re a designer or product owner aiming to determine usability levels, you should consider using the System Usability Scale.

In this article, we’ll cover:

  • What is the System Usability Scale?
  • The relevance of the SUS to product development and design
  • The step-by-step process for the System Usability Scale
  • System Usability Scale form and calculator
  • How to use the System Usability Scale: Case study
  • Ways to improve the System Usability Scale
  • Interpreting and analyzing System Usability Scale results

What is the System Usability Scale?

Developed in 1986 by John Brooke, the System Usability Scale is a ten-question template used to assess usability. It was first used in systems engineering before spreading to other areas of technology.

The original idea was to measure the usability of engineering systems. Today, the System Usability Scale is used to measure efficiency and effectiveness through usability.

The System Usability Scale (SUS) is a reliable “quick and dirty” tool for measuring usability. It comprises a ten-item questionnaire with five response options for respondents, ranging from strongly agree to strongly disagree. It allows you to evaluate various products and services, such as hardware, software, mobile devices, websites, and applications, and is not limited to only website usability.


The relevance of the SUS to product development and design

The System Usability Scale is beneficial to product owners and designers because:

  • It is used for administering usability tests to users: as a product owner or designer, you need to conduct a usability test on your final product to see if and how it can be used. The SUS is a tool for identifying the level of usability
  • It can be used to gather reliable results from small groups: the ten-question template has been curated in a way that touches most areas of a product’s usability and gives users the ability to express their concerns about a product. It is also very reliable with small groups of testers
  • It is valid: as a result of the questions included in the template, the questionnaire can be used to effectively differentiate between usable and unusable systems according to users
  • It is cheap: the System Usability Scale is quick to implement, very cheap to administer, and affordable to use, particularly online. Depending on the setting and environment, it can also be printed out on paper and distributed, read out, and scored by the moderator
  • It provides an accurate rating: the System Usability Scale is one of the most effective methods for assigning a straightforward and reasonably precise score to your website. It provides you with a score that you can use to confirm the level of usability. In the later parts of this article, I will describe how to calculate and analyze the ratings

That’s why we still use the System Usability Scale. If you want to gauge customer sentiment, it’s cheap, easy, and reliable.

Let me show you how the SUS works in action.

The step-by-step process for the System Usability Scale

The System Usability Scale (SUS) implementation process entails a series of steps to effectively measure and evaluate the usability of a system or product. By following this process, your organization can gain valuable insights into user perceptions and identify areas for improvement. Remember that this is a template; you can adjust it to fit your organization.

1. Get acquainted with the SUS

If you are hearing about the SUS for the first time, begin by learning about its history, purpose, and structure. Examine the original SUS questionnaire and take note of any areas you may want to modify.


2. Pick an administration method

The next step would be to determine the mode of administering the SUS. The SUS can be delivered through paper-based questionnaires, online surveys, or software integration. Choose the best method for your resources, target audience, and research goals.

3. Determine the procedure for usability testing

Plan the overall protocol for usability testing. Determine the tasks or scenarios participants will complete, the sample size, and the recruitment process. Ensure that the usability testing protocol aligns with the study’s goals and provides enough data to evaluate usability effectively.

4. If necessary, modify the SUS

Depending on your unique needs and context, the SUS may need to be modified to better suit your system or target users. Maintain the validity and reliability of the scale, and make sure any modifications are supported by research.

5. Administer the SUS

If you have successfully gone through the steps mentioned above, you can go ahead and set the date, invite the users for testing, and start the test. During the usability testing sessions, give the participants the SUS questionnaire.

The SUS is typically administered after the testing session to capture overall impressions of usability. Give participants clear instructions on how to rate each item, and make sure they understand the scale.

System Usability Scale form and calculator template

To make things easier on you, we’ve made a template with the questions for the System Usability Scale, a Google Form to copy to send to users, and a sheet to calculate your SUS.

Download the System Usability Scale here.


How to use the System Usability Scale: Case study

Here’s a hypothetical case study of how the System Usability Scale can be applied to a product: a financial institution recently launched a mobile banking application to provide convenient and user-friendly access to banking services.

The institution has, however, received user complaints about the app’s usability. Users struggle to navigate the interface, are confused by certain features, and are frustrated with error messages. The institution decides to use the SUS to conduct a usability evaluation, identify pain points, and improve the user experience.

Step 1: Organize a team

The first step is to set up a usability testing team if the organization does not already have one. In a more organized company, the research is done by the design team, composed of UX researchers and designers.

This team decides on the study’s objectives, the sample size, and the tasks participants will perform during the testing session. The team decides to bring in twenty existing customers with varying experiences with mobile banking applications.

Step 2: Invite the participants

The participants are placed in a controlled testing environment and asked to complete a series of tasks using the mobile banking application. These tasks could range from transferring funds between accounts to reviewing transaction history and setting up recurring payments. The team observes participants’ interactions, notes their difficulties, and documents any usability issues.

Step 3: Run the test

The SUS questionnaire is administered after the participants complete the usability tasks. Participants rate their agreement with usability statements like, “I found the mobile banking app unnecessarily complex” and “I felt confident using the app to perform banking tasks.” Participants rate each statement on a five-point Likert scale.

Step 4: Analyze the SUS data

The team collects and analyzes the SUS response data. They compute each participant’s SUS score by applying the SUS scoring formula to their ratings across the ten items (the scoring procedure is described below). The team also goes over the qualitative feedback provided by test participants, paying particular attention to specific pain points and areas for improvement mentioned in their comments.

Step 5: Identify the pain points

The team identifies the pain points that users experienced with the mobile banking app based on SUS scores and qualitative feedback. They noticed that several participants had difficulty locating specific features, were confused by the app’s terminology, and were frustrated by error messages that lacked clear explanations.

Following the above case study, this is how the scoring of this usability test goes:

On a five-point Likert scale, participants rate their agreement with each of the ten SUS items, ranging from strongly disagree to strongly agree. The scale is represented by the following:

  • Strongly Disagree
  • Disagree
  • Neither Agree nor Disagree
  • Agree
  • Strongly Agree

Assign numeric values to the Likert scale responses to make score calculation easier. We can use the following numbers:

  • Strongly Disagree: 1
  • Disagree: 2
  • Neither Agree nor Disagree: 3
  • Agree: 4
  • Strongly Agree: 5

For calculating SUS scores, we follow these steps for each participant:

  • Subtract 1 from the score of each of the odd-numbered items (1, 3, 5, 7, 9)
  • Subtract the value of each of the even-numbered questions from 5 (2, 4, 6, 8, 10)
  • Add up the total score of the new values
  • Multiply the result by 2.5 to get a value between 0 and 100

For instance, suppose a participant’s responses were as follows:

  • Item 1: Strongly Agree
  • Item 2: Disagree
  • Item 3: Strongly Agree
  • Item 4: Strongly Disagree
  • Item 5: Agree
  • Item 6: Disagree
  • Item 7: Agree
  • Item 8: Neither Agree nor Disagree
  • Item 9: Strongly Agree
  • Item 10: Disagree

The calculation would be:

  • Odd items (1, 3, 5, 7, 9): (5-1) + (5-1) + (4-1) + (4-1) + (5-1) = 18
  • Even items (2, 4, 6, 8, 10): (5-2) + (5-1) + (5-2) + (5-3) + (5-2) = 15
  • Total: 18 + 15 = 33

So the SUS score = 33 * 2.5 = 82.5
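The four scoring steps are easy to automate. Here’s a minimal Python sketch (the function name is my own, not part of any SUS standard) that reproduces the worked example:

```python
def sus_score(responses):
    """Compute a SUS score (0-100) from ten Likert ratings (1-5).

    responses[0] is item 1 through responses[9] as item 10.
    Odd-numbered items contribute (rating - 1); even-numbered
    items contribute (5 - rating); the sum is scaled by 2.5.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly ten responses")
    total = sum(
        (r - 1) if i % 2 == 1 else (5 - r)
        for i, r in enumerate(responses, start=1)
    )
    return total * 2.5

# The participant from the example above (items 1-10):
print(sus_score([5, 2, 5, 1, 4, 2, 4, 3, 5, 2]))  # 82.5
```

Running this for every participant gives you the per-user scores needed for the analysis steps that follow.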

Following that, we interpret the SUS scores ranging from 0 to 100:

  • Higher scores indicate better usability, while lower scores indicate areas for improvement
  • To evaluate the overall usability of the mobile banking app, compare the mean SUS score across all participants
  • To understand the variability in user perceptions, consider the standard deviation
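To sketch what that comparison might look like in practice (the scores below are invented for illustration), Python’s statistics module gives the mean and standard deviation directly:

```python
from statistics import mean, stdev

# Hypothetical SUS scores (0-100) from eight participants.
scores = [82.5, 70.0, 65.0, 90.0, 55.0, 72.5, 68.0, 77.5]

avg = mean(scores)
sd = stdev(scores)  # sample standard deviation

print(f"mean SUS: {avg:.1f}")  # compare against the 68 average
print(f"std dev:  {sd:.1f}")   # higher = more divergent user perceptions
```

A high standard deviation is a hint to segment the results: some user groups may be struggling far more than the mean suggests.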

Ways to improve the System Usability Scale

Testing and validation are essential to ensure the validity and dependability of any updates or improvements. Here are some ways I would improve the current scale:

Use simple English

By simplifying the questionnaire’s language, you can make it more accessible to a broader range of participants, ensuring that their feedback is accurately captured.

The current SUS questionnaire, as simple as it looks, may be quite tricky for non-English speakers or those with low English proficiency. For instance, one question on the SUS is, “I find the system to function smoothly and to be well integrated.” It can be reworded for simplicity as, “I find the system to be working well and to be well-connected.”

Include a broader range of items

The SUS comprises ten question items. One method for improving the SUS is to include a wider variety of question items covering other usability aspects. Things such as accessibility, mobile responsiveness, speed, task efficiency, and learnability could be included.

Since the initial design was made for engineering systems, an update could be made for its current use. An example of a question that reflects accessibility and responsiveness is, “I find the system easy to use on all devices.”

Use more open-ended questions

Including more open-ended questions can yield qualitative insights. Incorporating prompts for users to provide specific feedback, suggestions for improvement, or examples of their experiences can improve usability evaluations and offer deeper insights into user perceptions.

For example, ask “what” questions like, “What would you change about this system?”

Cultural and demographic factors

Usability testing is done in various cultural and demographic contexts. Adjusting the SUS to account for cultural and demographic differences can help ensure its applicability and accuracy across user groups. Adding illustrations or graphics would be a significant improvement here.

User feedback

Finally, usability assessment tools such as the SUS should be treated as living documents that evolve in response to user needs and technological advancements. Collecting feedback from users, usability professionals, and researchers on an ongoing basis can provide valuable insights into how to refine and improve the SUS over time.

The above suggestions refer to website product testing, not systems engineering. You can download the System Usability Scale here.

Interpreting and analyzing System Usability Scale results

Raw SUS scores can be difficult to interpret, so normalizing your score to a percentile rank is the best approach. To convert the raw score range of 0–40 to 0–100, each of the participant’s question scores is converted to a new number, the new numbers are added together, and the total is multiplied by 2.5.

Although the scores range from 0–100, they are not percentages and should only be considered in terms of their percentile ranking. The average score on the System Usability Scale is 68. If your score is less than 68, there are likely serious usability issues with your website that you should address. If your score is higher than 68, you are above average—though there is always room to improve.

Calculate the SUS scores

Each participant rates the SUS’s ten items on a five-point Likert scale ranging from strongly disagree to strongly agree. To determine a SUS score, subtract 1 from each odd-numbered item’s rating (1, 3, 5, 7, 9), subtract each even-numbered item’s rating from 5 (2, 4, 6, 8, 10), and add the results together. To get a value between 0 and 100, multiply the sum by 2.5.

Analyze individual SUS scores

Begin by looking at each participant’s SUS score separately. Higher ratings indicate better-perceived usability, while lower ratings indicate areas for improvement. Examine the scores for patterns and outliers to identify common trends or extreme user experiences.

Benchmark or baseline comparison

If available, compare the mean SUS score to established benchmarks or a baseline from a previous evaluation. This allows you to compare the system’s performance to industry standards or previous iterations. A higher score than the benchmark indicates better usability, whereas a lower score indicates room for improvement.

Determine your strengths and weaknesses

Identify the system’s usability strengths and weaknesses based on SUS scores, benchmark comparisons, standard deviation, and qualitative feedback. Concentrate on areas with lower scores, higher variability, or user feedback that consistently highlights challenges or frustrations. These are the primary pain points and opportunities for improvement.

Make recommendations

Using the identified strengths and weaknesses, make actionable recommendations to improve the system’s usability. Prioritize the recommendations based on the severity and impact of the pain points and the ease with which the changes can be implemented. Consider design changes and any necessary changes to the system’s functionality, terminology, or navigation.

An overview of how your scores should compare is provided below:

  • 80.3 or higher is a pass on usability, which means people found your website easy to use
  • Around 68 is average—your website is doing okay, but there is always room for improvement
  • Anything less than 51 is proof that you need to focus on the usability and effectiveness of your website, as users do not find it easy to use
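Those cutoffs (80.3, 68, 51) can be wrapped in a small helper; note that the band labels here are my own shorthand, not an official grading scheme:

```python
def sus_band(score):
    """Classify a SUS score (0-100) into rough usability bands,
    using the commonly cited cutoffs of 80.3, 68, and 51."""
    if not 0 <= score <= 100:
        raise ValueError("SUS scores range from 0 to 100")
    if score >= 80.3:
        return "excellent"
    if score >= 68:
        return "average or better"
    if score >= 51:
        return "below average"
    return "poor"

print(sus_band(82.5))  # excellent
```

A helper like this is handy when reporting results to stakeholders who want a quick verdict rather than a raw number.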

The System Usability Scale is a useful tool for assessing system and product usability. As an organization or individual, you can gather meaningful data and insights to improve user experiences by following the SUS implementation process. You can also enhance the overall usability of your systems and products by iterating and incorporating user feedback continuously, increasing user satisfaction and engagement.

The System Usability Scale is not diagnostic and will not tell you which specific problems you face, but it will give you a red or green light to know how badly your usability needs work.



Tzu Hsu Chu


Masters in Educational Technology at the University of British Columbia

Tipping Point – Open Education Resource Textbooks Case Study

Creation of an open education resource textbook with interactive H5P elements for the FREN1205 – French Conversation course in the Modern Languages Department at Langara College

Introduction

For the case of technological displacement, we were curious to explore the tendency and shift from physical textbooks to digital Open Education Resources (OERs) in higher education institutions. We were specifically interested in the tensions and opportunities that arose from the transition to online teaching and learning after the pandemic, especially with the normalization of online and hybrid e-learning. 

We are grounding this inquiry of technological displacement in the case study of the creation of OER textbook with interactive H5P elements for a French conversation course at Langara College. In this assignment, we analyze the usability aspect of OERs from the instructor and student perspective, as well as explore the concerns of artificial intelligence, and issues surrounding digital labor in the process of creating OERs in higher education institutions. 

Motivation and Background

The FREN1205 – French Conversation course at Langara College is offered in-person with the utilization of a digital OER textbook Le Français Interactif created by the instructor Mirabelle Tinio. To support our work, we had the opportunity to speak with the instructor to learn more about the case study. All case study context provided in this assignment came from this conversation. Below are some of the motivators for the creation of the OER textbook from both the students’ and instructor’s perspectives. 

From the student perspective, the education landscape was drastically transformed during the emergency transition to online teaching and learning at the beginning of the pandemic in 2020. The effects could still be seen as institutions gradually resumed in-person teaching and learning in 2021: student surveys reflected that having additional supportive resources available online helped with their learning process and overall experience taking online courses. In addition, students reflected that physical textbooks were expensive and inaccessible, especially ones that were ‘single-use’ for an individual course, and were less inclined to make such purchases.

From the instructor’s perspective, many factors contributed to the transition from a physical textbook to a digital OER. The instructor we interviewed had been teaching the French conversation course for at least the past 12 years. Though the original textbook provided activities and exercises for everyday conversation scenarios, she found that the content was not up-to-date or culturally relevant enough for the students in the classroom. The instructor therefore found herself turning to other available language learning resources to patch together a curriculum plan that included vocabulary, grammar structure, and socio-cultural activities. The process was rather time-consuming, and she was never really satisfied with the existing resources.

With both students and instructor identifying that the current resources were not meeting their needs, it became clear that another resource should be introduced to solve the problem of learning resources for this course. Here, we can use the concept of technological utility to demonstrate, in part, why a tipping point occurred. Utility asks the question of if the technology fulfills the users’ needs or if it does what the users need it to do (Issa & Isaias, 2015, p. 4). Physical textbooks were not meeting the learners’ and instructor’s utility needs, therefore, a new technology needed to be introduced. 

At the same time, while working part-time in the Educational Technology Department, the instructor saw many other instructors utilizing Pressbooks and other OER platforms to input resources into Brightspace, a learning management system. The existing integration with the learning management system and the potential for further adaptation were additional motivators for developing her own OER textbook for the class.

The Tipping Point

The opportunity and tipping point presented itself when BCcampus Open Education Foundation Grant for Institutions applications were open for project proposals for specifically utilizing H5P for Pressbooks in 2021. The grant was intended for British Columbia post-secondary institutions wishing to explore, initiate or relaunch open educational practices, resources, support and training on their campuses. Through this grant, the instructor was able to secure additional funding and support for creating the French Conversation OER textbook. 

Multi-modality, Interactivity and Flexibility  

Learning languages is an activity that is inherently multimodal and incorporates a combination of multi-sensory and communicative modes (Dressman, 2019). The utilization of online OERs makes it possible to include multimedia and interactive H5P elements such that students can actively engage with the learning content, allows for more diversity in learning methods, as well as increasing the accessibility of course content. 

Though the OER textbook included many different chapters and topics, each unit contained a similar format: the learning objectives, pre-test questionnaire, vocabulary, practice exercises, oral comprehension exercises, a post-test evaluation questionnaire, and self-reflection. This repeated format increases the OER’s usability because it is quickly learnable and memorable (Issa & Isaias, 2015, p. 33). The OER therefore creates a smoother user experience with less friction or frustration to navigate to the content than the physical textbooks, demonstrating again why this tipping point occurred (Issa & Isaias, 2015, p. 30).

The goal was to make the learning content accessible to both students and instructors with maximum flexibility and adaptability. Students could preview the units and prepare ahead of time before the classes; or review the units and practice on areas for further improvement, all at their own pace, with self-assessments available. Instructors can supplement the course delivery with additional resources, in-class activities or outing experiences, and utilize the textbook in a non-linear manner tailored to the needs and pace of the students in the classroom. 

Living Texts 

The content in the OER included resources that the instructor created as well as showcased content that previous students created; it can be seen as a co-created ‘living text’ (Philips, 2014), serving both as a pedagogical tool and as a co-creation of knowledge within the classroom.

For example, in the activity “Interview a Francophone”, the instructor uploaded recorded interview videos of previous students’ work, both as an exemplar of what the assignment would look like when current students approached the activity themselves and as an exercise for current students to practice their listening comprehension and understanding of French conversation in context. The instructor identified that this was also meant to make students feel appreciated for their active contribution to the course, and recognized students as part of the co-construction of literacy knowledge through this kind of interaction (Philips, 2014).

Creating an OER that operates as a living text supports increased usability because it allows learner (user) feedback to be implemented as it is offered. A living text can push back against the challenge of “configuring the user”, where designers imagine the “right way” for a user to engage with their technology instead of being open to how users actually will engage with it (Woolgar, 1990). Because this OER can be adapted in response to user feedback, there is not only one “right way” to use the resource; instead, it can increase usability for a wider variety of users as instructors adapt it based on learner feedback. The instructor noted that keeping an OER like this up to date is very important, especially if the OER is described to learners as a living text that is responsive to their needs.

Equity, Diversity and Inclusion 

As mentioned above, the multi-modality, interactivity, and flexibility of the living text contribute to a classroom climate that reflects the equity, diversity, and inclusion of the students currently taking the courses. This approach takes into consideration the positionality, lived experiences, interests, and abilities of students within the classroom, and their agency as active participants in their own learning. For example, in the aforementioned “Interview a Francophone” activity, the crowd-sourced, collaborative effort of the different interviewees allows students to see the different kinds of ‘francophone-ness’ outside the mainstream Eurocentric depiction of French-speaking people, which matters especially given the deep-rooted history of the French language as a tool of colonization.

By embracing inclusive pedagogical approaches and recognizing students’ diverse contributions, this approach to creating OER textbooks creates a supportive and accessible learning environment, fosters a sense of belonging, and affirms the value of students’ unique contributions to the learning process. 

Challenges 

Current Concerns: Teamwork Makes the Dream Work

One major challenge that the instructor encountered during the creation of this OER textbook was the lack of support at the institutional level, especially since new technological adaptations require more incentives and supporting resources to push for incorporation and use within the college and, furthermore, across institutions. Though the instructor did collaborate with other language instructors from the Modern Languages Department and advisors from the Educational Technology Department, they strongly suggest creating a community of practice across institutions to support this work’s sustainability. The production of a brand new OER like this (as well as its ongoing maintenance) involves significantly more time and energy than maintaining the status quo of using physical textbooks. There is a risk that the instructor’s digital labour in producing this kind of resource goes unrecognized by the institution because it is unseen.

On a practical and logistical level, such a community ensures that courses are leveled and aligned across institutions, which matters especially for the transferability of courses and credits in pathway programs such as those at Langara College. As a more aspirational endeavour, it promotes collaboration and a commitment to sharing knowledge and resources; encourages accountability, peer review, and the continuous development of teaching and learning practices; and enables the community to build on each other’s work, fostering a culture of openness and collaboration in education.

Future Concerns: The Rise of Artificial Intelligence and the Impact of Digital Labour

Though the BCcampus grant did provide funding for the instructor to develop the OER textbook, more support is needed when it comes to compensating the invisible work that is added to the already existing duties of a teaching faculty member. With the increased digitization of instruction within higher education comes an expectation of an accelerated pace of work (Woodcock, 2018, p. 135). There can be an expectation within institutions, even an implicit one, that work becomes “easier” as a result of digital resources like this OER textbook. This can result in expanding work and time pressures for instructors who have digitized aspects of their work.

Another risk for instructors is the value that is placed on published work in advancing an academic career (Woodcock, 2018, p. 136). The motivation to create open access work can be reduced if the institution the academic works within rewards published work. While an OER like the one described in this case is a different kind of open access work than a journal article, its creation and upkeep draw on the same labour hours for an instructor. The instructor must be significantly committed to the creation of the OER if there is limited institutional support, as described in this case, and if there is institutional pressure to spend time on other, more valued work, such as publishing in a more prestigious journal.

Finally, there is a tension inherent in the use of artificial intelligence in relation to OERs. As this case study shows, producing and maintaining OERs can be time-, labour-, and resource-intensive. With the rise of large language models like ChatGPT in the past year, there is potential to employ such AI tools to support the creation of OERs, which might seem to reduce the human labour needed to create an OER like Le Français Interactif. However, we also know that AI tools like ChatGPT do not appropriately cite sources and can even ‘make up’ information. Uncited sources are problematic because they effectively steal intellectual property from other academics, and false information is problematic because it diminishes the reliability and utility of the OER.

Even more concerning is that AI language models are trained on data that can be biased and can produce content embedded with this bias (Buolamwini, 2019). For an OER project like the one outlined in our case study, producing resources in “partnership” with an AI tool could therefore run counter to the desire to create more culturally relevant and inclusive resources. More relevant to this case study, regarding language translation, AI tools like DeepL can be helpful but are not yet at the point where they can translate as effectively as a human who speaks multiple languages. For these reasons, instructors might be wary of using AI tools as “co-authors” for OERs, to ensure the quality of the instructional or learning resource remains high.

This case study demonstrates how the creation of an OER textbook for the FREN1205 – French Conversation course at Langara College exemplifies a pivotal shift in educational resources toward digital platforms. This tipping point is a response to the evolving needs of both students and instructors in the post-pandemic era of education. Ideally, an OER textbook offers learners enhanced accessibility, flexibility, and inclusivity within their educational experience. However, challenges such as institutional support for digital labour and concerns surrounding the rise of artificial intelligence underscore the importance of institutional buy-in and ethical considerations as we integrate OER textbooks into the student experience.

Buolamwini, J. (2019, February 7). Artificial intelligence has a problem with gender and racial bias. Time. https://time.com/5520558/artificial-intelligence-racial-gender-bias/

Dressman, M. (2019). Multimodality and language learning. In M. Dressman & R. W. Sadler (Eds.), The handbook of informal language learning (pp. 39-55). John Wiley & Sons, Ltd. https://doi.org/10.1002/9781119472384.ch3

Issa, T., & Isaias, P. (2015). Usability and human computer interaction (HCI). In Sustainable design (pp. 19-35). Springer.

Phillips, L. G., & Willis, L. (2014). Walking and talking with living texts: Breathing life against static standardisation. English Teaching: Practice and Critique, 13(1), 76.

Woodcock, J. (2018). Digital labour in the university: Understanding the transformations of academic work in the UK. tripleC: Communication, Capitalism & Critique, 16(1), 129-142.

Woolgar, S. (1990). Configuring the user: The case of usability trials. The Sociological Review, 38(1, Suppl.), S58-S99.
