5 Real-life usability testing examples & approaches to apply

Get a feel for what an actual test looks like with five real-life usability test examples from Shopify, Typeform, ElectricFeel, Movista, and Trint. You'll learn about these companies' test scenarios, the types of questions and tasks these designers and UX researchers asked, and the key things they learned.

If you've been working through this guide in order, you should now know pretty much everything you need to run your own usability test. All that’s left is to get your designs in front of some users.

Just arrived here? Here’s a quick recap to make sure you have the context you need:

  • Usability testing is the practice of conducting tests with real users to see how easily they can navigate your product, understand how to use it, and achieve their goals
  • There are many usability testing methods. Picking the right one is crucial for getting the insights you need.
  • Qualitative usability testing involves more open-ended questions, and is good for sourcing ideas or validating early assumptions
  • Quantitative testing is good for testing a higher number of people, which is useful for fine-tuning your design once you have a high-fidelity prototype
  • If it’s too difficult to organize in-person tests, remote usability testing is a fast and cost-effective way to get the info you need
  • Guerrilla usability testing is a great option for some fast, easy insights from real people
  • Ask usability testing questions before, during, and after your test to give more context and detail to your results

Why you need usability testing studies & examples

While it’s essential to learn about each aspect of usability testing, it can be helpful to get a feel for what an actual test looks like before creating your first test plan. Creating user testing scenarios to get the feedback you need comes naturally after you’ve run a few tests, but it’s normal to feel less confident at first. Remember: running usability tests isn’t just useful for identifying usability problems and improving your product’s user experience—it’s also the best way to fine-tune your usability testing process.

For inspiration, this chapter contains real-world examples of usability tests, with some advice from designers and UX researchers on writing usability tasks and scenarios for testing products.

If you’re not sure whether you are at the right stage of the design process to conduct usability studies, the answer is almost certainly: yes!

It’s important to test your design as early and often as possible. As long as you have some kind of prototype, running a usability test will help you avoid relying on assumptions by involving real users from the beginning. So start testing early.

The scenarios, questions, and tasks you should create, as well as the overall testing process, will vary depending on the stage you’re at. Let’s look at five examples of usability tests at different stages in the design process.


Discovery phase usability test example: Shopify

The Shopify Experts Marketplace is a platform that connects Shopify merchants with trusted Shopify experts who have demonstrated proven expertise in the services they offer. All partners on the Experts Marketplace are experienced and skilled Shopify partners who help merchants grow their businesses by providing high-quality services.

Feature being tested

When Shopify merchants look for a Shopify-recommended service provider, the first page they find is the Expert profile. There, they can find an overview of services provided, recent client testimonials, examples of past work, and more. If a merchant finds the expert profile page easy to navigate, they’re more likely to reach out to experts and potentially hire them.

Usability testing approach

The Shopify team wanted to make sure they were including all the relevant information in the right place. To do so, they first gathered insights about what merchants would need to know about Experts from generative user interviews.

Once they knew what information was most important, they moved on to evaluative research and conducted card sorting and tree testing studies to evaluate the information architecture of the product.

At that stage of the research process, usability testing was the best way to understand how Expert profiles could create more value for users. Melanie Buset, User Experience Researcher at Spotify and former User Experience Researcher at Shopify, explains:

Now that we knew what information we needed to surface, we needed to evaluate how and where we surfaced this information. Usability testing provided us with insight into how well we were meeting users’ expectations.

Melanie Buset, User Experience Researcher

Melanie worked closely with the designer on the team to identify what the research questions should be. Based on these questions, the team created a UX research plan and a discussion guide for the usability test. After piloting the plan with coworkers, they recruited the participants and ran the actual test.

Through usability testing, Melanie and the team were able to gather actionable feedback and implement changes quickly. They continued testing until users felt they had access to the most relevant information about Experts and felt ready to potentially hire them.

Test scenario

"Imagine that you’re interested in hiring a Shopify Expert to help with developing a marketing campaign.”

The team wanted to recreate a scenario that would be as close to the real world as possible. For this purpose, they selected participants who had previously been interested in hiring a Shopify Expert.

Task and question examples

Participants were first given a set of general tasks and asked to think aloud as much as possible and to share any feedback throughout the session. Melanie would ask them to show her how they would go about trying to find someone to hire via the Experts Marketplace and go through the process as if they were ready to hire someone.

If the participants got lost or weren't sure how to proceed, she would gently encourage them to try to find another way to accomplish their goal or to share what they would expect to do or see.

The team also asked participants more specific questions, such as:

  • What information is helping you determine if an Expert is a good fit for your needs?
  • What does this button mean? What happens if you click on it?

Unsure about what to ask in your usability test? Take a look at our guide to writing usability questions + examples 💭

The key thing they learned

After testing, we learned so much about what’s important to people when they’re looking to hire a freelancer for their business and specific projects. For example, people want to know upfront how freelancers will communicate with them, and they prefer profiles that feel more human and less transactional.

Ready-to-use Maze Templates for product discovery phase

Run a product discovery survey

Base your next product moves on your users’ needs, not your assumptions. With this template, you can develop a clear picture of what your audience wants so you can work on faster solutions to their problems.

See this template

Discover jobs to be done

Tap into the minds of your customers and discover their desired outcomes with this easy-to-use JTBD survey. Collect valuable, actionable feedback to help build more customer-centric products that help users achieve their goals—and reduce their pain points.

Early-stage usability test example: ElectricFeel

ElectricFeel's product is a software platform for entrepreneurs and public transport companies to launch, grow, and scale fleets of shared electric bicycles and mopeds. It includes a mobile app for riders to rent vehicles and a system for mobility companies to run day-to-day fleet operations.

When a new rider signs up to the ElectricFeel app, a fleet management team member from the mobility company has to verify their personal info and driver’s license before they can rent a vehicle.

The ElectricFeel team hypothesized that if they could make this process smoother for fleet management teams, they could reduce the time between someone registering and taking their first ride. This would make the overall experience for new riders more frictionless.

The idea to improve the rider activation process came from a wider user testing initiative, which the team saw as a vital first step before they started working on new designs. Product designer, Gerard Marti, explains:

To address the gap between how you want your product to be received and how it is received, it’s key to understand your users’ day-to-day experience.

Gerard Marti, Product Designer at ElectricFeel

After comparing the results of user persona workshops conducted both within the company and with real customers, the team used the insights to sketch wireframes of the new rider activation user interface.

Then Gerard ran some usability tests with fleet managers to validate whether the new designs actually made it easier to verify new riders, tweaking the design based on people’s feedback.

The next step in their process is conducting quantitative tests on alternative designs, then continuing to test and iterate the option that wins with more quantitative testing. Gerard sees quantitative testing as a vital step towards validating designs with real human behavior:

What people say and what they actually end up doing is not always the same. While opinions are important and you should listen to them, behavior is what matters in the end.

“You have four riders in the pipeline waiting to be accepted.”

Gerard would often leave the scenario at just this, as he wanted to observe the order in which users perceive each element of the design without sending them in a direction with a goal.

When testing early versions of designs, leaving the usability test scenario open lets you find out whether users naturally understand the purpose of the screen without prompting.

To generate a conversational and open atmosphere with participants, Gerard starts with open questions that don’t invite criticism or praise from the user:

  • What do you see on the screen?
  • What do you think this is for?

He then moves on to asking about specific elements of the design:

  • What information do you find is most valuable?
  • Are pictures or text more important for you?

By asking users to evaluate individual elements of the design, Gerard invites participants to give deeper consideration to their thought process when activating riders. This yields crucial insights on how the fundamentals of the interface should be designed.

After testing, we realized that people scan the page, look for the name, then check the image to see if it matches. So while we assumed the picture ID should be on the right, this insight revealed that it should be on the left.

Mid-stage usability test example: Typeform

Typeform is a people-friendly online form and survey maker. Its unique selling point is its focus on design, which aims to make the experience for respondents as smooth and interactive as possible. As a result, typeforms have a high completion rate.

Since completion rates are a big deal for Typeform users, being able to see the exact questions where people leave your form was a highly requested feature for a long time. Typeform’s interface asks respondents one question at a time, so this is especially important. The feature is now called ‘Drop-off Analysis’.

Product tip ✨

Before you even start designing a prototype for a usability test, do research to discover the kind of products, features, or solutions that your target audience needs. Maze Discovery can help you validate ideas before you start designing.

Yuri Martins, Product Designer at Typeform, explains the point when his team felt like it was time to test their designs for the new Drop-off Analysis feature:

We had a lot of different ideas and drawings for how the feature could work. But we felt like we couldn’t commit to any of them without input from users to see things from their perspective.

Yuri Martins, Product Designer at Typeform

Fortunately, they had already contacted users and arranged some moderated tests one or two weeks before this point, anticipating that they’d need user feedback after the first design sprints. By the time the tests rolled around, Yuri had designed “a few alternative ways that users could achieve their objectives” in Figma.

Since the team wanted to iterate the design fast, they tested each prototype, then created a new updated version based on user feedback for the next testing session a day or two later. Yuri says they “kept running tests until we saw that feedback was repeating itself in a positive way.”

Finding participants is often the biggest obstacle to conducting usability tests. So schedule them in advance, then spend the following weeks refining what you’d like to test.

“One of your typeforms has already collected a number of responses. The info you see appears in the ‘Results’ page.”


This scenario was designed to be relatable for Typeform users who had already:

  • Made a typeform
  • Shared it and collected responses
  • Visited the ‘Results’ page to check on their responses

Choosing a scenario that appeals to this group of users ensured the feedback was as relevant as possible, as the people being tested were more likely to use the Drop-off Analysis feature to analyze their typeform’s results further.

Typeform’s Drop-off Analysis prototypes only existed in Figma at this point, which meant that users couldn’t interact with the design to complete usability tasks.

Instead, Yuri and the team came up with broader, more open-ended tasks and questions that aimed to test their assumptions about the design:

  • Tell us what you understand about the information on this page.
  • Describe anything missing that you would need to fully interpret the interface.

After the general questions, they asked questions about specific elements of the design to get feedback where they needed it most:

  • At the drop-off point, what do you understand?
  • What would you expect to see here?
  • Does this information make sense to you?

This example shows that you don’t need a fully functional prototype to start testing your assumptions. For useful qualitative feedback midway through the design process, tweak your questions to be more open-ended.

Maze is fully integrated with Figma, so you can easily upload your designs and create an unmoderated usability test with your Figma prototype. Learn more.

We’d assumed that people would want to know how many respondents dropped off at each question. But by usability testing, we discovered that people were much more concerned with the percentage of respondents who dropped off—not the total number.

Late-stage usability test example: Movista

Movista is workforce management software used by retail and manufacturing suppliers. It helps its users coordinate and execute tasks both in-store and in the field with a mobile app.

As part of a wider design update on their entire product, Movista is about to launch a new product for communications, messaging, chats, and sending announcements. This will let people in-store communicate better with people out in the field.

Movista’s new comms feature is at a late stage of the design process, so they tested a high-fidelity prototype. Product designer, Matt Elbert, explains:

For the final round of usability testing before sending our designs to be developed, we wanted to test an MVP that’s as close as possible to the final product.

Matt Elbert, Product Designer at Movista

By this point, the team was confident about the fundamental aspects of the design. These tests were to iron out any final usability issues, which can be harder to identify earlier in the process. By testing with a higher number of people, they hoped to get more statistically significant results to validate their designs before launch.

The team used Maze to conduct remote testing with their prototype, which included an overall goal broken down into tasks, and questions to find out how easy or difficult the previous step was.

“You have received new messages. Navigate to your messages.”

The usability tests would often begin in different parts of the product, with participants given a clear navigational goal. This prompts people to act straight away—without getting sidetracked by other areas of the app.

Matt advises people to be specific when using testing tools for unmoderated tests, as you won’t be there to make sure the user understands what you’re asking them to do.

The general format of the usability test was giving people a very specific task, then following up with an open question to ask participants how it went.

  • How would you delete the message, “yeah, what’s up?” that you sent to Mark Fuentes?
  • How did you find the experience of completing that task?

Matt and the team would also sometimes ask questions before a task to see if their designs matched users’ expectations:

  • What options would you expect to be available in the menu on the top-right corner of the message?

“Questions like this are super useful because this is such a new feature that we don’t know for sure what people’s priorities are,” said Matt. The team would rank people’s responses, then consider including different options if there was consistent demand for them.

Finally, Matt says it’s important to always include an invitation for participants to share any last thoughts at the end:

Some people might take a long time to complete a task because they’re checking out other areas of the product—not because they found it difficult. Letting people express their overall opinion stops these instances from skewing your test results.

Based on the insights we got from final results and feedback, we ended up shifting the step of selecting a recipient to much earlier in the process.

Live website usability test example: Trint

Trint is a speech-to-text platform for transcription and content creation. The tool uses artificial intelligence to automatically transcribe audio and video from different file formats and generate editable and shareable transcripts.

The ultimate goal of any B2B website is to attract visitors and convert them into loyal customers. The Trint team wanted to optimize their conversion funnel, and testing the website for usability was the best way to diagnose problems and find the right solutions.

The product team at Trint was already using quantitative data to understand what was happening on the website. They used Mixpanel to look at the conversion rates at every step of the funnel. However, it was never enough information to make design decisions. Lidia Sambito, UX Researcher at Trint, explains:

We had to use other pieces of evidence like usability testing to learn how people experienced our marketing funnel and how they felt throughout the customer journey before we were in a position to make the right changes.

Lidia Sambito, UX Researcher at Trint

Lidia worked closely with the product manager and the designer to identify the research questions and plan the sessions. She then recruited the participants and ran the usability test.

The test was run using Zoom. Lidia asked the participants to share their screens and moderated the sessions while the product designer was taking notes. All the sessions were recorded, and the observers could leave their comments by using the Realtime Transcription feature in Trint.

After each session, there was a 30-minute debrief with the team to discuss key takeaways, issues, and surprises. This helped the team reflect on what happened during the session and lay the groundwork for the larger synthesis.

To successfully synthesize the research findings, Lidia listened to the sessions, transcribed them using Trint, and then coded the data using different tags, such as pain points, needs, or goals. Finally, she held a workshop with the designer, engineer, and data scientist to identify common themes for each page of the onboarding process.
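Tagging and counting coded observations like this can be done in a spreadsheet, a research repository, or a few lines of code. As a rough illustration only (the page names, tags, and notes below are invented examples, not Lidia's actual data), here is a minimal Python sketch that tallies tagged observations per page to surface the most common theme ahead of a synthesis workshop:

```python
from collections import Counter, defaultdict

# Each observation: (page, tag, note) captured while reviewing session recordings.
# All values here are hypothetical examples.
observations = [
    ("homepage", "need", "Wanted to try the product as early as possible"),
    ("signup", "pain point", "Onboarding survey felt too long"),
    ("signup", "pain point", "Too many screens before reaching the product"),
    ("pricing", "goal", "Wanted to estimate cost per hour of audio"),
]

themes = defaultdict(Counter)
for page, tag, _ in observations:
    themes[page][tag] += 1

# Surface the most common tag per page to guide the synthesis workshop
for page, counts in themes.items():
    tag, freq = counts.most_common(1)[0]
    print(f"{page}: most frequent tag is '{tag}' ({freq} mention(s))")
```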

This research helped us understand how potential users move across the acquisition funnel and the most painful points of their experience. We identified the main problems and tackled them through ideation, prototyping, and more research.

"You have many files to transcribe and your colleague mentioned a software called Trint. He suggested you take a look at it."

Lidia and the team wanted to make the scenario as realistic as possible. They decided to use an open-ended scenario, giving participants minimal explanation about how to perform the task. The key was to see how users would spontaneously interact with the website.

During the test, the participants were asked to share their comments and thoughts while thinking out loud. The main tasks were:

  • Walk me through how you would use Trint for the first time
  • Show me what you would do next

Lidia would also ask participants more specific questions to get deeper insights. Here are some examples:

  • What information is helping you determine if Trint is a good fit for your needs?
  • Tell us what you understand about the information on this page
  • Are pictures, videos, or text important for you?

We saw that the participants wanted to see and try out the product as early as possible. Still, it took several screens to get to the product. I recommended removing the onboarding survey. We also worked on the website's content to make it easier for people to understand what Trint is about.

Key usability testing takeaways

The examples above offer a heap of insight into how to conduct your usability test, so let’s end with a rundown of the main takeaways:

  • Conduct usability testing early, and often: Users want to try a product out asap, and while it may be nerve-wracking to send a fresh product out there, it’s a great opportunity to gather feedback early in the design process. But don’t let that be your only usability test! Take the feedback, iterate, and test again.
  • Check your biases, and be open to change: Don’t go into your usability test with opinions and expectations set in stone. Like any user research or testing, it’s a good idea to record your assumptions ahead of time. That way, if something comes up unexpectedly—for example, users don’t navigate the platform in the way you expect—you can run with it and consider new options, rather than feeling stuck in your ways or heartbroken over an idea. Remember, the user should always be at the center of the design.
  • Don’t be afraid of a practice run: Usability tests are most effective when they run smoothly, so iron out any wrinkles by conducting a dry run before the real thing. Use colleagues or connections to double check your test, including any questions or software used. A test run may feel like an additional step, but it’s a lot quicker and cheaper than redoing your real test when an error occurs!

Frequently asked questions about usability testing examples

What is an example of usability testing?

Usability testing is a proven method to evaluate your product with real people by getting them to complete a list of tasks while observing and noting their interactions. For example, if you're designing a website for an e-commerce store that sells beauty products, a good way to test your design would be to ask the users to try to buy a particular hair care product.

By observing how users interact with your product, where they click, how long it takes them to select the specific product, and by listening to their feedback, you will be able to identify usability issues and areas of improvement.

How is usability testing performed?

Typically, during a usability test, users complete a set of tasks with a prototype or live product while observers watch and take notes of their interactions. The ultimate goal is to discover usability problems and see how users experience your product.

To run a successful usability test, you need to create a prototype and write an effective usability testing script to outline the goal of your research and the questions and tasks you're going to ask the users. You also need to recruit the participants, run the test, and finally analyze and report your test results.

What is usability testing?

Usability testing is the process of testing your product with real users, by asking them to complete a list of tasks while noting their interactions. The purpose of usability testing is to understand whether your product is usable for people to navigate and achieve their goals.

How do you carry out usability testing?

Usability testing can be carried out in a number of ways. The most common methods include online usability testing platforms, guerrilla testing, lab usability testing, and phone/video interviews.

Start usability testing with Maze templates

Usability testing a new product

Validate usability across your wireframes and prototypes with real users early on. Use this pre-built template to capture valuable feedback on accessibility and user experience so you can see what’s working (and what isn’t).

Test mobile app usability

Help deliver a friction-free product experience for users on mobile. Test mobile app usability to discover pain points and validate expectations, so your users can scroll happily (and your Product team can keep smiling too).


6 Usability Testing Examples & Case Studies

Interested in analyzing real-world examples of successful usability tests?

In this article, we’ll examine six examples of usability testing that produced substantial results.

Conducting usability testing takes only seven simple steps and does not have to require a massive budget. Yet it can achieve remarkable results for companies across all industries.

If you’re someone who cannot be convinced by theory alone, this is the guide for you. These are tried-and-tested case studies from well-known companies that showcase the true power of a successful usability test.

Here are the usability testing examples and case studies we’ll be covering in this article:

  • Ryanair
  • McDonald’s
  • SoundCloud
  • AutoTrader.com
  • Udemy
  • Halo: Combat Evolved

Example #1: Ryanair

Ryanair is one of the world’s largest airline groups, carrying 152 million passengers each year. In 2014, the company launched Ryanair Labs, a digital innovation hub seeking to “reinvent online traveling”. To make this dream a reality, they went on a recruiting spree that resulted in a team of 200+ members. This team included user experience specialists, data analysts, software developers, and digital marketers – all working towards a common goal of improving the user experience of the Ryanair website.

What made matters more complicated, however, was that Ryanair’s website and app together received 1 billion visits per year. Working with a website this large, combined with the airline industry’s paper-thin profit margins of around 5%, meant Ryanair had no room for error. To make matters even more stressful, one of the first missions for the new team included launching an entirely new website with a superior user experience.

To give you a visual idea of what they were up against, take a look at their old website design:


Not great, not terrible. But the website undoubtedly needed a redesign for the 21st century.

This is what the Ryanair team set out to accomplish:

  • Reducing the number of steps needed to book a flight on the website;
  • Allowing customers to store their travel documents and payment cards on the website;
  • Delivering a better mobile device user experience for both the website and app.

With these goals in mind, they chose remote and unmoderated usability testing for their user tests. This by itself was a change, as the Ryanair team had relied on in-lab, face-to-face testing until that point.

By collaborating with the UX agency UserZoom, however, new opportunities opened up for Ryanair. With UserZoom’s massive roster of user testers, Ryanair could access large amounts of qualitative and quantitative usability data, which they badly needed during the design process of the new website.

By going with remote unmoderated usability testing, the Ryanair team managed to:

  • Reduce the time spent on usability testing;
  • Conduct simultaneous usability tests with hundreds of users and without geographical barriers;
  • Increase the overall reach and scale of the tests;
  • Carry out tests across many devices, operating systems, and multiple focus groups.

With continuous user testing, the new website was taken through alpha and beta testing in 2015. The end result of all this work was the vastly improved look, functionality, and user experience of the new website:

Ryanair's new website design

Even before launch, Ryanair knew that the new website was superior. Usability tests had shown that to be the case and they had no need to rely on “educated guesses”. This usability testing example demonstrates that a well-executed testing plan can give remarkable results.

Source: Ryanair case study by UserZoom

Example #2: McDonald’s

McDonald’s is one of the world’s largest fast-food restaurant chains, with a staggering 62 million daily customers. Yet McDonald’s was late to embrace the mobile revolution, as their smartphone app launched rather recently – in August 2015. In comparison, Starbucks’ smartphone app was already a booming success and accounted for 20% of its overall revenue in 2015.

Considering the competition, McDonald’s had some catching up to do. Before the launch of their app in the UK, they decided to hire UK-based SimpleUsability to identify any usability problems before release. The test plan involved conducting 20 usability tests, where the task scenarios covered the entire customer journey from end to end. In addition to that, the test plan included 225 end-user interviews.

Not exactly a large-scale usability study considering the massive size of McDonald’s, but it turned out to be valuable nonetheless. A number of usability issues were detected during the study:

  • Poor visibility and interactivity of the call-to-action buttons;
  • Communication problems between restaurants and the smartphone app;
  • Lack of order customization and favoriting impaired the overall user experience.

Here’s what the McDonald’s mobile app looks like today:


This case study demonstrates that investing even a tiny percentage of a company’s resources into usability testing can result in meaningful insights.

Source: McDonald’s case study by SimpleUsability

Example #3: SoundCloud

SoundCloud is the world’s largest music and audio distribution platform, with over 175 million unique monthly listeners. In 2019, SoundCloud hired test IO, a Berlin-based usability testing agency, to conduct continuous usability testing for the SoundCloud mobile app. With SoundCloud’s rigorous development schedule, the company needed regular human user testers to make sure that all new updates work across all devices and OS versions.

The key research objectives for SoundCloud’s regular usability studies were to:

  • Provide a user-friendly listening experience for mobile app users;
  • Identify and fix software bugs before wide release;
  • Improve the mobile app development cycle.

In the very first usability tests, more than 150 usability issues (including 11 critical issues) were discovered. These issues likely wouldn’t have been discovered through internal bug testing, because the user testers tried the app on a wide range of devices and from many geographical locations (144 devices and 22 countries). Without remote usability testing, a testing scale this large would have been very difficult and expensive to achieve.

Today, SoundCloud’s mobile app looks like this:


This case study demonstrates the power of regular usability testing in products with frequent updates. 

Source: SoundCloud case study (.pdf) by test IO

Example #4: AutoTrader.com

AutoTrader.com is one of the world’s largest online marketplaces for buying and selling used cars, with over 28 million monthly visitors. The mission of AutoTrader’s website is to empower car shoppers in the research process by giving them all the tools necessary to make informed decisions about vehicle purchases.

Sounds fantastic.

However, with competitors such as CarGurus gaining increasing amounts of market share in the online car shopping industry, AutoTrader had to reinvent itself to stay competitive.

In e-commerce, competitors with a superior website can gain massive followings in an instant. Fifty years ago this was not the case – well-established car marketplaces had massive car parks all over the country, and a newcomer had few ways to compete.

Nowadays, however, it’s all about user experience. Digital shoppers will flock to whichever site offers a better user experience. Websites unwilling or unable to improve their user experience over time will get left in the dust. No matter how big or small they are.

Going back to AutoTrader, the majority of its website traffic comes from organic Google search, meaning that in addition to website usability, search engine optimization (SEO) is a major priority for the company. According to John Mueller from Google, changing the layout of a website can affect rankings, and that is why AutoTrader had to be careful with making any large-scale changes to their website.

AutoTrader did not have a large team of user researchers or a massive budget dedicated to usability testing. But they did have Bradley Miller – Senior User Experience Researcher at the company. To test the usability of AutoTrader, Miller decided to partner with UserTesting.com to conduct live user interviews with AutoTrader users.

Through these live user interviews, Miller was able to:

  • Find and connect with target personas;
  • Communicate with car buyers from across the country;
  • Reduce the costs of conducting usability tests while increasing the insights gained.

From these remote live interviews, Miller learned that the customer journey almost always begins from a single source: search engines. Here, it’s important to note that search engines rarely direct users to the homepage. Instead, they drive traffic to the inner pages of websites. In the case of AutoTrader, for example, only around 20% of search engine traffic goes to the homepage (data from SEMrush).

These insights helped AutoTrader redesign their inner pages to better match the customer journey. They no longer assumed that any inner-page visitor already had broader contextual knowledge of the website. Instead, they started to treat each page as if it were the initial point of entry by providing more contextual information right there on the page.

This usability testing example demonstrates not only the power of user interviews but also the importance of understanding your customer journey and SEO.

Source: AutoTrader case study by UserTesting.com

Example #5: Udemy

Udemy is one of the world’s largest online learning platforms with over 40 million students across the world. The e-learning giant also has a massively popular smartphone app, and the usability testing example in question was aimed at the smartphone users of Udemy.

To find out when, where, and why Udemy users chose to opt for the mobile app rather than the desktop version, Udemy conducted user tests. As Udemy is a 100% digital company, they chose fully remote unmoderated user testing as their testing method. 

Test participants were asked to take small videos showing where they were located and what tasks they were focused on at the time of learning and recording. 

What the user researchers found was that their initial theory of “users prefer using the mobile app while on the go” was false. Instead, what they found was that the majority of mobile app users were stationary. Udemy users, for various reasons, used the mobile app at home on the couch, or in a cafeteria. The key findings of this user test were utilized for the next year’s product and feature development.

This is what Udemy’s mobile app looks like today:


This usability testing case study demonstrates that a company’s perception of target audience behavior does not always match the behavior of the real end-users. And, that is why user testing is crucial.

Source: Udemy case study by UserTesting.com

Example #6: Halo: Combat Evolved

“Halo: Combat Evolved” was the first video game in the massively popular Halo franchise. It was developed by Bungie and published by Microsoft Game Studios in 2001. Within 10 years of its release, the Halo games sold more than 46 million copies worldwide and generated more than $5 billion in video game and hardware sales for Microsoft. Owing it all to the usability test we’re about to discuss may be a bit of a stretch, but usability testing the game during development was undeniably one of the factors that helped the franchise take off like a rocket.

In this usability study, the Halo team gathered a focus group of console gamers to try out their game’s prototype to see if they had fun playing the game. And, if they did not have fun – they wanted to find out what prevented them from doing so. 

In the usability sessions, the researchers placed test subjects (players) in a large outdoor environment with enemies waiting for them across the open space.

The designers of the game expected the players to sprint closer towards the enemies, sparking a massive battle full of action and excitement. But the test participants had a different plan in mind. Instead of putting themselves in danger by sprinting closer, they would stay at a maximum distance from the enemies and shoot from far across the outdoor space. While this was a safe and effective strategy, it proved to be rather uneventful and boring for the players.

To entice players to enjoy combat up close, the user researchers decided that changes would have to be made. Their solution – changing the size and color of the aiming indicator in the center of the screen to notify players when they were too far away from enemies. 

Here, you can see the finalized aiming indicator in action:


Subsequent usability tests proved these changes to be effective, as the majority of user testers now engaged in combat from a closer distance.

User testing is not restricted to any particular industry, OS, or platform. Testing user experience is an invaluable tool for any product – not just for websites or mobile apps. 

This example of usability testing from the video game industry shows that players (users) will optimize the fun out of a game if given the chance. It’s up to the designers to bring the fun back through well-designed game mechanics and notifications.

Source: “Designing for Fun – User-Testing Case Studies” by Randy J. Pagulayan

The Beginner’s Guide to Usability Testing [+ Sample Questions]

Clifford Chi

Published: July 28, 2021

In practically any discipline, it's a good idea to have others evaluate your work with fresh eyes, and this is especially true in user experience and web design. Otherwise, your partiality for your own work can skew your perception of it. Learning directly from the people that your work is actually for — your users — is what enables you to craft the best user experience possible.


UX and design professionals leverage usability testing to get user feedback on their product or website’s user experience all the time. In this post, you'll learn:

  • What usability testing is
  • Its purpose and goals
  • Scenarios where it can work
  • Real-life examples and case studies
  • How to conduct one of your own
  • Scripted questions you can use along the way

What is usability testing?

Usability testing is a method of evaluating a product or website’s user experience. By testing the usability of their product or website with a representative group of their users or customers, UX researchers can determine if their actual users can easily and intuitively use their product or website.

UX researchers will usually conduct usability studies on each iteration of their product from its early development to its release.

During a usability study, the moderator asks participants in their individual user session to complete a series of tasks while the rest of the team observes and takes notes. By watching their actual users navigate their product or website and listening to their praises and concerns about it, they can see when the participants can quickly and successfully complete tasks and where they’re enjoying the user experience, encountering problems, and experiencing confusion.

After conducting their study, they’ll analyze the results and report any interesting insights to the project lead.



What is the purpose of usability testing?

Usability testing allows researchers to uncover any problems with their product's user experience, decide how to fix these problems, and ultimately determine if the product is usable enough.

Identifying and fixing these early issues saves the company both time and money: Developers don’t have to overhaul the code of a poorly designed product that’s already built, and the product team is more likely to release it on schedule.

Benefits of Usability Testing

Usability testing has five major advantages over the other methods of examining a product's user experience (such as questionnaires or surveys):

  • Usability testing provides an unbiased, accurate, and direct examination of your product or website’s user experience. Because the sample of actual users you test with is detached from the emotional investment your team has put into creating and designing the product or website, their feedback can resolve most of your team’s internal debates.
  • Usability testing is convenient. To conduct your study, all you have to do is find a quiet room and bring in portable recording equipment. If you don’t have recording equipment, someone on your team can just take notes.
  • Usability testing can tell you what your users do on your site or product and why they take these actions.
  • Usability testing lets you address your product’s or website’s issues before you spend a ton of money creating something that ends up having a poor design.
  • For your business, intuitive design boosts customer usage and their results, driving demand for your product.

Usability Testing Scenario Examples

Usability testing sounds great in theory, but what value does it provide in practice? Here's what it can do to actually make a difference for your product:

1. Identify points of friction in the usability of your product.

As Brian Halligan said at INBOUND 2019, "Dollars flow where friction is low." This is just as true in UX as it is in sales or customer service. The more friction your product has, the more reason your users will have to find something that's easier to use.

Usability testing can uncover points of friction from customer feedback.

For example: "My process begins in Google Drive. I keep switching between windows and making multiple clicks just to copy and paste from Drive into this interface."

Even though the product team may have had that task in mind when they created the tool, seeing it in action and hearing the user's frustration uncovered a use case that the tool didn't compensate for. It might lead the team to solve for this problem by creating an easy import feature or way to access Drive within the interface to reduce the number of clicks the user needs to make to accomplish their task.

2. Stress test across many environments and use cases.

Our products don't exist in a vacuum, and sometimes development environments are unable to compensate for all the variables. Getting the product out and tested by users can uncover bugs that you may not have noticed while testing internally.

For example: "The check boxes disappear when I click on them."

Let's say that the team investigates why this might be, and they discover that the user is on a browser that's not commonly used (or a browser version that's outdated).

If the developers only tested across the browsers used in-house, they may have missed this bug, and it could have resulted in customer frustration.

3. Provide diverse perspectives from your user base.

While individuals in our customer bases have a lot in common (in particular, the things that led them to need and use our products), each individual is unique and brings a different perspective to the table. These perspectives are invaluable in uncovering issues that may not have occurred to your team.

For example: "I can't find where I'm supposed to click."

Upon further investigation, it's possible that this feedback came from a user who is color blind, leading your team to realize that the color choices did not create enough contrast for this user to navigate properly.

Insights from diverse perspectives can lead to design, architectural, copy, and accessibility improvements.

4. Give you clear insights into your product's strengths and weaknesses.

You likely have competitors in your industry whose products are better than yours in some areas and worse than yours in others. These variations in the market lead to competitive differences and opportunities. User feedback can help you close the gap on critical issues and identify what positioning is working.

For example: "This interface is so much easier to use and more attractive than [competitor product]. I just wish that I could also do [task] with it."

Two scenarios are possible based on that feedback:

  • Your product can already accomplish the task the user wants. You just have to make it clear that the feature exists by improving copy or navigation.
  • You have a really good opportunity to incorporate such a feature in future iterations of the product.

5. Inspire you with potential future additions or enhancements.

Speaking of future iterations, that brings us to the next way usability testing can make a difference for your product: the feedback you gather can inspire future improvements to your tool.

It's not just about rooting out issues but also envisioning where you can go next that will make the most difference for your customers. And who best to ask but your prospective and current customers themselves?

Usability Testing Examples & Case Studies

Now that you have an idea of the scenarios in which usability testing can help, here are some real-life examples of it in action:

1. User Fountain + Satchel

Satchel is a developer of education software, and their goal was to improve the experience of the site for their users. Consulting agency User Fountain conducted a usability test focusing on one question: "If you were interested in Satchel's product, how would you progress with getting more information about the product and its pricing?"

During the test, User Fountain noted significant frustration as users attempted to complete the task, particularly when it came to locating pricing information. Only 80% of users were successful.


This led User Fountain to create the hypothesis that a "Get Pricing" link would make the process clearer for users. From there, they tested a new variation with such a link against a control version. The variant won, resulting in a 34% increase in demo requests.

By testing a hypothesis based on real feedback, friction was eliminated for the user, bringing real value to Satchel.
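The case study reports only the relative lift, but if you have the raw visitor and conversion counts from a test like this, a quick two-proportion test can indicate whether the variant's win is likely to be real rather than noise. The numbers below are invented for illustration and are not from the Satchel study:

```python
# Two-proportion z-test on invented numbers (the real study reports only a
# 34% relative lift in demo requests for the "Get Pricing" variant).
from statsmodels.stats.proportion import proportions_ztest

demo_requests = [134, 100]   # variant with "Get Pricing" link, control
visitors = [2000, 2000]      # sessions exposed to each version

z_stat, p_value = proportions_ztest(demo_requests, visitors)
lift = (demo_requests[0] / visitors[0]) / (demo_requests[1] / visitors[1]) - 1

print(f"Relative lift: {lift:.0%}")   # ~34% with these made-up numbers
print(f"p-value: {p_value:.3f}")      # a small p-value suggests a real effect
```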

2. Kylie.Design + Digi-Key

Ecommerce site Digi-Key approached consultant Kylie.Design to uncover which site interactions had the highest success rates and what features those interactions had in common.

They conducted more than 120 tests and recorded:

  • Click paths from each user
  • Which actions were most common
  • The success rates for each


This data, along with the written and verbal feedback provided by participants, informed the new design, which increased purchaser success rates from 68.2% to 83.3%.

In essence, Digi-Key was able to identify their most successful features and double down on them, improving the experience and their bottom line.

3. Sparkbox + An Academic Medical Center

An academic medical center in the Midwest partnered with consulting agency Sparkbox to improve the patient experience on their homepage, where some features were suffering from low engagement.

Sparkbox conducted a usability study to determine what users wanted from the homepage and what didn't meet their expectations. From there, they were able to propose solutions to increase engagement.


For example, one key action was the ability to access electronic medical records. The new design based on user feedback increased the success rate from 45% to 94%.

This is a great example of putting the user's pains and desires front-and-center in a design.

The 9 Phases of a Usability Study

1. Decide which part of your product or website you want to test.

Do you have any pressing questions about how your users will interact with certain parts of your design, like a particular interaction or workflow? Or are you wondering what users will do first when they land on your product page? Gather your thoughts about your product or website’s pros, cons, and areas of improvement, so you can create a solid hypothesis for your study.

2. Pick your study’s tasks.

Your participants' tasks should be your user’s most common goals when they interact with your product or website, like making a purchase.

3. Set a standard for success.

Once you know what to test and how to test it, make sure to set clear criteria to determine success for each task. For instance, when I was in a usability study for HubSpot’s Content Strategy tool, I had to add a blog post to a cluster and report exactly what I did. Setting a threshold of success and failure for each task lets you determine if your product's user experience is intuitive enough or not.
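One lightweight way to apply such a threshold is to record each participant's attempt as a pass or fail and compare each task's success rate against the criterion you set. The tasks, thresholds, and outcomes in this sketch are hypothetical, purely to illustrate the idea:

```python
# Hypothetical tasks, thresholds, and outcomes -- not data from an actual study.
tasks = {
    "add_post_to_cluster": 0.8,   # at least 80% of participants must succeed
    "report_steps_taken": 0.6,
}

# 1 = completed the task unaided, 0 = failed or needed help
results = {
    "add_post_to_cluster": [1, 1, 0, 1, 1],
    "report_steps_taken": [1, 0, 0, 1, 1],
}

for task, threshold in tasks.items():
    outcomes = results[task]
    rate = sum(outcomes) / len(outcomes)
    verdict = "meets criterion" if rate >= threshold else "needs work"
    print(f"{task}: {rate:.0%} success -> {verdict}")
```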

4. Write a study plan and script.

At the beginning of your script, you should include the purpose of the study, if you’ll be recording, some background on the product or website, questions to learn about the participants’ current knowledge of the product or website, and, finally, their tasks. To make your study consistent, unbiased, and scientific, moderators should follow the same script in each user session.

5. Delegate roles.

During your usability study, the moderator has to remain neutral, carefully guiding the participants through the tasks while strictly following the script. Whoever on your team is best at staying neutral, not giving in to social pressure, and making participants feel comfortable while pushing them to complete the tasks should be your moderator.

Note-taking during the study is also just as important. If there’s no recorded data, you can’t extract any insights that’ll prove or disprove your hypothesis. Your team’s most attentive listener should be your note-taker during the study.

6. Find your participants.

Screening and recruiting the right participants is the hardest part of usability testing. Most usability experts suggest you should only test five participants during each study, but your participants should also closely resemble your actual user base. With such a small sample size, it’s hard to replicate your actual user base in your study.

To recruit the ideal participants for your study, create as detailed and specific a persona as you possibly can, and incentivize participants with a gift card or another monetary reward.

Recruiting colleagues from other departments who would potentially use your product is also another option. But you don’t want any of your team members to know the participants because their personal relationship can create bias -- since they want to be nice to each other, the researcher might help a user complete a task or the user might not want to constructively criticize the researcher’s product design.

7. Conduct the study.

During the actual study, you should ask your participants to complete one task at a time, without your help or guidance. If the participant asks you how to do something, don’t say anything. You want to see how long it takes users to figure out your interface.

Asking participants to “think out loud” is also an effective tactic -- you’ll know what’s going through a user’s head when they interact with your product or website.

After they complete each task, ask for their feedback, like if they expected to see what they just saw, if they would’ve completed the task if it wasn’t a test, if they would recommend your product to a friend, and what they would change about it. This qualitative data can pinpoint more pros and cons of your design.

8. Analyze your data.

You’ll collect a ton of qualitative data after your study. Analyzing it will help you discover patterns of problems, gauge the severity of each usability issue, and provide design recommendations to the engineering team.

When you analyze your data, make sure to pay attention to both the users’ performance and their feelings about the product. It’s not unusual for a participant to quickly and successfully achieve your goal but still feel negatively about the product experience.
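If you log each observed issue with a severity rating, a small script (or a pivot table) can group duplicates and rank problems for the engineering team. The issues and severity scale below are made up for illustration:

```python
from collections import Counter

# (issue, severity) pairs noted across sessions; both columns are invented
# examples and the severity scale is an assumption.
issues = [
    ("pricing link hard to find", "high"),
    ("pricing link hard to find", "high"),
    ("checkboxes disappear when clicked", "critical"),
    ("unclear button copy", "low"),
]

severity_rank = {"critical": 0, "high": 1, "medium": 2, "low": 3}
counts = Counter(issues)

# Most severe problems first, with how often each was observed
for (issue, severity), n in sorted(counts.items(), key=lambda kv: severity_rank[kv[0][1]]):
    print(f"[{severity}] {issue} (seen {n} time(s))")
```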

9. Report your findings.

After extracting insights from your data, report the main takeaways and lay out the next steps for improving your product or website’s design and the enhancements you expect to see during the next round of testing.

The 3 Most Common Types of Usability Tests

1. Hallway/Guerrilla Usability Testing

This is where you set up your study somewhere with a lot of foot traffic. It allows you to ask randomly selected people who have most likely never even heard of your product or website -- like passers-by -- to evaluate its user experience.

2. Remote/Unmoderated Usability Testing

Remote/unmoderated usability testing has two main advantages: it uses third-party software to recruit target participants for your study, so you can spend less time recruiting and more time researching. It also allows your participants to interact with your interface by themselves and in their natural environment -- the software can record video and audio of your user completing tasks.

Letting participants interact with your design in their natural environment with no one breathing down their neck can give you more realistic, objective feedback. When you’re in the same room as your participants, it can prompt them to put more effort into completing your tasks since they don’t want to seem incompetent around an expert. Your perceived expertise can also lead them to please you instead of being honest when you ask for their opinion, skewing their reactions and feedback about your user experience.

3. Moderated Usability Testing

Moderated usability testing also has two main advantages: interacting with participants in person or through a video call lets you ask them to elaborate on their comments if you don't understand them, which is impossible to do in an unmoderated usability study. You'll also be able to help your users understand the task and keep them on track if your instructions don't initially register with them.

Usability Testing Script & Questions

Following one script or even a template of questions for every one of your usability studies wouldn't make any sense -- each study's subject matter is different. You'll need to tailor your questions to the things you want to learn, but most importantly, you'll need to know how to ask good questions.

1. When you [action], what's the first thing you do to [goal]?

Questions such as this one give insight into how users are inclined to interact with the tool and what their natural behavior is.

Julie Fischer, one of HubSpot's Senior UX researchers, gives this advice: "Don't ask leading questions that insert your own bias or opinion into the participants' mind. They'll end up doing what you want them to do instead of what they would do by themselves."

For example, "Find [x]" is a better than "Are you able to easily find [x]?" The latter inserts connotation that may affect how they use the product or answer the question.

2. How satisfied are you with the [attribute] of [feature]?

Avoid leading the participants by asking questions like "Is this feature too complicated?" Instead, gauge their satisfaction on a Likert scale that provides a number range from highly unsatisfied to highly satisfied. This will provide a less biased result than leading them to a negative answer they may not otherwise have had.
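To make the output of a Likert-style question easy to act on, you can score the responses numerically and summarize them. Here is a minimal sketch; the 5-point labels, the example feature, and the response values are illustrative, not from any particular study.

```python
from statistics import mean

# Illustrative 5-point Likert scale for "How satisfied are you with the search filters?"
LIKERT_LABELS = {
    1: "Highly unsatisfied",
    2: "Unsatisfied",
    3: "Neutral",
    4: "Satisfied",
    5: "Highly satisfied",
}

responses = [4, 5, 3, 4, 2, 5, 4]  # one numeric answer per participant

print("mean satisfaction:", round(mean(responses), 2))
print("satisfied or better:", f"{sum(r >= 4 for r in responses) / len(responses):.0%}")
```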

3. How do you use [feature]?

There may be multiple ways to achieve the same goal or utilize the same feature. This question will help uncover how users interact with a specific aspect of the product and what they find valuable.

4. What parts of [the product] do you use the most? Why?

This question is meant to help you understand the strengths of the product and what about it creates raving fans. This will indicate what you should absolutely keep and perhaps even lead to insights into what you can improve for other features.

5. What parts of [the product] do you use the least? Why?

This question is meant to uncover the weaknesses of the product or the friction in its use. That way, you can rectify any issues or plan future improvements to close the gap between user expectations and reality.

6. If you could change one thing about [feature] what would it be?

Because it's so similar to #5, you may get some of the same answers. However, you'd be surprised by the aspirational things your users might say here.

7. What do you expect [action/feature] to do?

Here's another tip from Julie Fischer:

"When participants ask 'What will this do?' it's best to reply with the question 'What do you expect it do?' rather than telling them the answer."

Doing this can uncover user expectations as well as clarity issues with the copy.

Your Work Could Always Use a Fresh Perspective

Letting another person review and possibly criticize your work takes courage -- no one wants a bruised ego. But most of the time, when you allow people to constructively criticize or even rip apart your article or product design, especially when your work is intended to help these people, your final result will be better than you could've ever imagined.

Editor's note: This post was originally published in August 2018 and has been updated for comprehensiveness.


Usability Testing: Everything You Need to Know (Methods, Tools, and Examples)


As you crack into the world of UX design, there’s one thing you absolutely must understand and learn to practice like a pro: usability testing.

Precisely because it’s such a critical skill to master, it can be a lot to wrap your head around. What is it exactly, and how do you do it? How is it different from user testing? What are some actual methods that you can employ?

In this guide, we’ll give you everything you need to know about usability testing—the what, the why, and the how.

Here’s what we’ll cover:

  • What is usability testing and why does it matter?
  • Usability testing vs. user testing
  • Formative vs. summative usability testing
  • Attitudinal vs. behavioral research
  • Five essential usability testing methods (performance testing, card sorting, tree testing, the 5-second test, and eye tracking)
  • How to learn more about usability testing

Ready? Let’s dive in.

1. What is usability testing and why does it matter?

Simply put, usability testing is the process of discovering ways to improve your product by observing users as they engage with the product itself (or a prototype of the product). It's a UX research method focused specifically on—you guessed it—the usability of your products. And what is usability ? Usability is a measure of how easily users can accomplish a given task with your product.

Usability testing, when executed well, uncovers pain points in the user journey and highlights barriers to good usability. It will also help you learn about your users’ behaviors and preferences as these relate to your product, and to discover opportunities to design for needs that you may have overlooked.

You can conduct usability testing at any point in the design process once you've turned initial ideas into design solutions, but the earlier the better. Test early and test often! You can conduct some kind of usability testing with low- and high-fidelity prototypes alike—and testing should continue after you've got a live, out-in-the-world product.

2. Usability testing vs. user testing

Though they sound similar and share a somewhat similar end goal, usability testing and user testing are two different things. We’ll look at the differences in a moment, but first, here’s what they have in common:

  • Both share the end goal of creating a design solution to meet real user needs
  • Both take the time to observe and listen to the user to hear from them what needs/pain points they experience
  • Both look for feasible ways of meeting those needs or addressing those pain points

User testing essentially asks if this particular kind of user would want this particular kind of product—or what kind of product would benefit them in the first place. It is entirely user-focused.

Usability testing, on the other hand, is more product-focused and looks at users’ needs in the context of an existing product (even if that product is still in prototype stages of development). Usability testing takes your existing product and places it in the hands of your users (or potential users) to see how the product actually works for them—how they’re able to accomplish what they need to do with the product.


3. Formative vs. summative usability testing

Alright! Now that you understand what usability testing is, and what it isn’t, let’s get into the various types of usability testing out there.

There are two broad categories of usability testing that are important to understand— formative and summative . These have to do with when you conduct the testing and what your broad objectives are—what the overarching impact the testing should have on your product.

Formative usability testing: 

  • Is a qualitative research process 
  • Happens earlier in the design, development, or iteration process
  • Seeks to understand what about the product needs to be improved
  • Results in qualitative findings and ideation that you can incorporate into prototypes and wireframes

Summative usability testing:

  • Is a research process that’s more quantitative in nature
  • Happens later in the design, development, or iteration process
  • Seeks to understand whether the solutions you are implementing (or have implemented) are effective
  • Results in quantitative findings that can help determine broad areas for improvement or specific areas to fine-tune (this can go hand in hand with competitive analysis )

4. Attitudinal vs. behavioral research

Alongside the timing and purpose of the testing (formative vs. summative), it’s important to understand two broad categories that your research (both your objectives and your findings) will fall into: behavioral and attitudinal.

Attitudinal research is all about what people say—what they think and communicate about your product and how it works. Behavioral research focuses on what people do—how they actually interact with your product and the feelings that surface as a result.

What people say and what people do are often two very different things. These two categories help us define those differences, choose our testing methods more intentionally, and categorize our findings more effectively.

5. Five essential usability testing methods

Some usability testing methods are geared more towards uncovering either behavioral or attitudinal findings; but many have the potential to result in both.

Of the methods you’ll learn about in this section, performance testing has the greatest potential for targeting both—and will perhaps require the greatest amount of thoughtfulness regarding how you approach it.

Naturally, then, we’ll spend a little more time on that method than the other four, though that in no way diminishes their usefulness! Here are the methods we’ll cover:

These are merely five common and/or interesting methods—it is not a comprehensive list of every method you can use to get inside the hearts and minds of your users. But it’s a place to start. So here we go!

Performance testing

In performance testing, you sit down with a user and give them a task (or set of tasks) to complete with the product.

This is often a combination of methods and approaches that will allow you to interview users, see how they use your product, and find out how they feel about the experience afterward. Depending on your approach, you’ll observe them, take notes, and/or ask usability testing questions before, after, or along the way.

Performance testing is by far the most talked-about form of usability testing—especially as it’s often combined with other methods. Performance testing is what most commonly comes to mind in discussions of usability testing as a whole, and it’s what many UX design certification programs focus on—because it’s so broadly useful and adaptive.

While there’s no one right way to conduct performance testing, there are a number of approaches and combinations of methods you can use, and you’ll want to be intentional about it.

It’s a method that you can adapt to your objectives—so make sure you do! Ask yourself what kind of attitudinal or behavioral findings you’re really looking for, how much time you’ll have for each testing session, and what methods or approaches will help you reach your objectives most efficiently.

Performance testing is often combined with user interviews.

Even if you choose not to combine performance testing with user interviews, good performance testing will still involve some degree of questioning and moderating.

Performance testing typically results in a pretty massive chunk of qualitative insights, so you’ll need to devote a fair amount of intention and planning before you jump in.

Maximize the usefulness of your research by being thoughtful about the task(s) you assign and what approach you take to moderating the sessions. As your test participants go about the task(s) you assign, you’ll watch, take notes, and ask questions either during or after the test—depending on your approach.
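If you want light quantitative structure alongside your notes, a simple per-task log of completion and time on task is enough to start with. The sketch below is only illustrative: the participant IDs, task name, and numbers are made up, and it isn't tied to any specific tool.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class TaskResult:
    participant: str
    task: str
    completed: bool
    seconds: float   # time on task
    notes: str = ""  # free-form observations from the session

def summarize(results, task):
    """Roll up completion rate and average time on task for one task."""
    rows = [r for r in results if r.task == task]
    return {
        "task": task,
        "participants": len(rows),
        "completion_rate": mean(1.0 if r.completed else 0.0 for r in rows),
        "avg_seconds": mean(r.seconds for r in rows),
    }

results = [
    TaskResult("P1", "find_pricing", True, 48.0, "hesitated on the nav label"),
    TaskResult("P2", "find_pricing", False, 120.0, "gave up and used footer search"),
    TaskResult("P3", "find_pricing", True, 65.0),
]
print(summarize(results, "find_pricing"))
```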

Four approaches to performance testing

There are four ways you can go about moderating a performance test , and it’s worth understanding and choosing your approach (or combination of approaches) carefully and intentionally. As you choose, take time to consider:

  • How much guidance the participant will actually need
  • How intently participants will need to focus
  • How guidance or prompting from you might affect results or observations

With these things in mind, let’s look at the four approaches.

Concurrent Think Aloud (CTA)

With this approach, you’ll encourage participants to externalize their thought process—to think out loud. Your job during the session will be to keep them talking through what they’re looking for, what they’re doing and why, and what they think about the results of their actions.

A CTA approach often uncovers a lot of nuanced details in the user journey, but if your objectives include anything related to the accuracy or time for task completion, you might be better off with a Retrospective Think Aloud.

Retrospective Think Aloud (RTA)

Here, you’ll allow participants to complete their tasks and recount the journey afterward . They can complete tasks in a more realistic time frame  and degree of accuracy, though there will certainly be nuanced details of participants’ thoughts and feelings you’ll miss out on.

Concurrent Probing (CP)

With Concurrent Probing, you ask participants about their experience as they're having it. You prompt them for details on their expectations, reasons for particular actions, and feelings about the results.

This approach can be distracting, but used in combination with CTA, you can let participants complete the tasks and prompt them only when you notice a particularly interesting aspect of their experience that you'd like to know more about. Again, if accuracy and timing are critical objectives, you might be better off with Retrospective Probing.

Retrospective Probing (RP)

If a participant says or does something interesting as they complete their task(s), you can note it and ask them about it later—this is Retrospective Probing. It's an approach very often combined with CTA or RTA to ensure that you're not missing out on those nuanced details of their experience without distracting participants from actually completing the task.

Whew! There’s your quick overview of performance testing. To learn more about it, read to the final section of this article: How to learn more about usability testing.

With this under our belts, let’s move on to our other four essential usability testing methods.

Card sorting

Card sorting is a way of testing the usability of your information architecture. You give users cards labeled with the names and short descriptions of the main items/sections of the product, then ask them to sort the cards into piles according to which items seem to go best together. In an open card sort, participants name the groups themselves; in a closed card sort, they sort the cards into categories you've defined in advance. You can go even further by asking them to combine piles into larger groups and to name those groups.

Rather than structuring your site or app according to your understanding of the product, card sorting allows the information architecture to mirror the way your users are thinking.

This is a great technique to employ very early in the design process as it is inexpensive and will save the time and expense of making structural adjustments later in the process. And there’s no technology required! If you want to conduct it remotely, though, there are tools like OptimalSort that do this effectively.
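One common way to analyze card sort results, whichever tool you use, is a similarity (co-occurrence) matrix: for each pair of cards, the share of participants who placed them in the same pile. A minimal sketch, with made-up card names and sorts:

```python
from itertools import combinations
from collections import Counter

# Each participant's sort is a list of piles; each pile is a set of card labels (illustrative).
sorts = [
    [{"Pricing", "Plans"}, {"Docs", "Tutorials", "API reference"}],
    [{"Pricing", "Plans", "Docs"}, {"Tutorials", "API reference"}],
    [{"Pricing", "Plans"}, {"Docs", "API reference"}, {"Tutorials"}],
]

pair_counts = Counter()
for piles in sorts:
    for pile in piles:
        for a, b in combinations(sorted(pile), 2):
            pair_counts[(a, b)] += 1

# Similarity = share of participants who put the two cards in the same pile.
n_participants = len(sorts)
for (a, b), count in pair_counts.most_common():
    print(f"{a} + {b}: {count / n_participants:.0%}")
```

Pairs with high similarity are strong candidates to live under the same section of your information architecture.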


Tree testing

Tree testing is a great follow-up to card sorting, but it can be conducted on its own as well. In tree testing, you create a visual information hierarchy (or "tree") and ask users to complete a task using the tree. For example, you might ask users, "You want to accomplish X with this product. Where do you go to do that?" Then you observe how easily users are able to find what they're looking for.

This is another great technique to employ early in the design process. It can be conducted with paper prototypes or spreadsheets, but you can also use tools such as TreeJack to accomplish this digitally and remotely.

The 5-second test

In the 5-second test, you expose your users to one portion of your product (one screen, probably the top half of it) for five seconds and then interview them to see what they took away regarding:

  • The product/page’s purpose and main features or elements
  • The intended audience and trustworthiness of the brand
  • Their impression of the usability and design of the product

You can conduct this kind of testing in person rather simply, or remotely with tools like UsabilityHub .

Eye tracking

This one may seem somewhat new, but it's been around for a while–though the tools and technology around it have evolved. Eye tracking on its own isn't enough to determine usability, but it's a great complement to your other usability testing measures.

In eye tracking you literally track where most users’ eyes land on the screen you’re designing. The reason this is important is that you want to make sure that the elements users’ eyes are drawn to are the ones that communicate the most important information. This is a difficult one to conduct in any kind of analog fashion, but there are a lot of tools out there that make it simple— CrazyEgg and HotJar are both great places to start.


6. How to learn more about usability testing

There you have it: your 15-minute overview of the what, why, and how of usability testing. But don't stop here! Usability testing and UX research as a whole have a deeply humanizing impact on the design process. It's a fascinating field to discover, and this kind of work has the power to keep companies, design teams, and even the lone designer accountable to what matters most: the needs of the end user.

If you’d like to learn more about usability testing and UX research, take the free UX Research for Beginners Course with CareerFoundry. This tutorial is jam-packed with information that will give you a deeper understanding of the value of this kind of testing as well as a number of other UX research methods.

You can also enroll in a UX design course or bootcamp to get a comprehensive understanding of the entire UX design process (of which usability testing and UX research are an integral part). For guidance on the best programs, check out our list of the 10 best UX design certification programs . And if you've already started your learning process and you're thinking about the job hunt, here are the top 5 UX research interview questions to be ready for.

For further reading about usability testing and UX research, check out these other articles:

  • How to conduct usability testing: a step-by-step guide
  • What does a UX researcher actually do? The ultimate career guide
  • 11 usability heuristics every designer should know
  • How to conduct a UX audit

Usability Testing Case Studies: Validate Assumptions and Build Software with Confidence

We define usability and examine some usability testing case studies to demonstrate the benefits.  

As we’ve said before, one of the most important benefits of software prototyping is the early ability to conduct usability testing. The truth of the matter is that no one will use your product if it’s not easy and intuitive or if it doesn’t solve a problem that users have in the first place.

The easiest way to make sure your software project meets these requirements is with usability testing, and the most effective way to implement usability testing early in the development process is with a prototype .

What Is Usability Testing?

Usability testing is the process of studying potential end-users as they interact with a product prototype. Usability testing occurs before you develop and launch a product, and is an essential planning step that can guide a product’s features, functions and purpose. Developing with a clear purpose and research-based data will ensure your goals and plans are in alignment with what an end user wants and needs, and as a result that your product will be more likely to succeed. Usability testing is a type of user research, and like all user research is instrumental in building more informed products that contribute to a business’ long term success.

Intentionally observing real-life people as they interact with a product is an important step in effective user experience design that should not be missed. Without usability testing, it’s very difficult to determine or validate that your product will provide something people are willing to pay for. Companies that don’t invest in this type of upfront testing often create products that are built around their own goals, as opposed to those of their customers, which do not always align. People don’t simply want products just because they exist, and users sometimes approach applications in unexpected ways. Thus, usability testing is key for confidence building during product development.

In this post, we look at a few usability testing examples to illustrate how the process works and why it’s so essential to the overall development process.


User Testing Case Studies

Usability Testing Case Study #1: Cisco

Usability testing for user experience

We worked with Cisco’s developer program group to craft a new, more immersive user experience for Cisco DevNet, their developer resources website. Their usability case study illustrates how we tackled their challenge, and the instrumental role that an effective prototyping strategy played in the process.

The Challenge

The depth and breadth of content on Cisco’s DevNet had spawned hundreds of micro-sites, each with different organizational structures and their own navigation paradigms. Existing visitors to the site would only visit a few specific pages, meaning they were never exposed to newly released tools and technologies. Also, new visitors struggled to discover where to begin or how to find the resources most relevant to them. Users were missing out on a lot of valuable resources, and the user experience was less than ideal.

ClickModel® Usability Testing

Cisco wanted to implement a new user experience for the DevNet homepage, making it easier to dive from the homepage deep into the site's resources to find information on a particular tool or technology. We were charged with prototyping the proposed user experience so that Cisco could conduct usability testing with developer focus groups. To build our prototype, we implemented our ClickModel tool.

At Praxent, prototyping the user experience allows stakeholders and users to give feedback before the software development process begins.

Confidence to Move Forward with Development

The ClickModel prototype emulated the new site as it would appear to users. The prototype prompted insightful feedback from the developer focus groups regarding both the proposed information architecture and the priority and placement of various navigational elements on the homepage and subsequent interior landing pages. The prototype also made it easier to collect feedback on the utility of a proposed color-coding scheme for sorting resources into major technology categories.

This feedback and testing allowed Cisco’s DevNet project to course correct in the Structure, Skeleton, and Surface areas before they spent significant money building in the wrong direction. Cisco took their prototype in-house and moved forward decisively and with confidence to create better resources for the developer community.

DeveloperProgram.com runs developer programs for some of the world’s largest technology and telecoms companies. We rely on our partner Praxent who understands our business, our clients, the developer’s needs, and are able to articulate that into a portal design that is easy to navigate and understand, with the foresight to create an infrastructure that allows for untethered growth. The design team is a pleasure to work with, quickly comprehending our needs and converting that to tangible deliverables, on time and always outstanding.

— Steve Glagow, Executive Vice President • DeveloperProgram.com

Usability Testing Case Study #2: NORCAL

Responsive data displays with usability testing.

In the wake of a corporate merger, NORCAL, a provider of medical professional liability insurance, was looking to build a new online portal. The portal would allow their insurance brokers to review their book of business and track which policyholders were behind on payments. Their billing department was inundated with phone inquiries from brokers who needed information about specific policyholder accounts, which was hindering their ability to attend to important billing tasks.

NORCAL’s insurance brokers are constantly on the go, so it was crucial that the proposed portal not just be accessible by mobile smartphones and tablets, but the portal be optimized specifically for use on those devices.

A native app solution was discussed, but NORCAL determined early on that they wanted to invest in a responsive web application that could be accessed on desktops and mobile devices by both their internal teams and brokers in the field.

Prototyping to the Rescue

The primary user experience challenge tackled during the engagement was how to display complex data tables in a way that would be equally useful on large-screen desktop computers and handheld smartphone screens. Since multi-touch smartphone devices don't have cursors, they can't display information using hover states the way a desktop computer can.

During the ClickModel process, we prototyped various on- and off-screen methods of data interaction displays for NORCAL’s team to review and test. This provided a few real-life usability testing examples of how they might tackle their problem.

Praxent prototypes the user experience across smartphone, tablet, laptop, and desktop devices to arrive at a responsive web design that works in various contexts.

Interacting with the clickable, tappable prototype on both desktop and mobile devices gave NORCAL crucial insight to determine what pieces of data were most essential to be displayed on the smaller smartphone screens and which additional data fields would be displayed only on desktop screens.

The ClickModel iterative prototyping process provided a clear-cut way for stakeholders from billing, marketing, and engineering to communicate effectively about the user experience. This led to important consensus and direction regarding feature requirements and scope, which was able to guide their project as they moved forward.

What Next? Getting Started With Usability Testing Studies for UX

As you can see, there are many benefits to having a prototype that looks, feels, and acts real. In the two usability testing case studies above, ClickModel was an effective tool for building such prototypes, helping clients garner the information and data-backed insight they needed to proceed with confidence. Learn more about our testing process and how it also leads to the reliable project estimates that are so important as you move forward with development.



Desktop usability video

You’re on a business trip in Oakland, CA. You've been working late in downtown and now you're looking for a place nearby to grab a late dinner. You decided to check Zomato to try and find somewhere to eat. (Don't begin searching yet).

  • Look around on the home page. Does anything seem interesting to you?
  • How would you go about finding a place to eat near you in Downtown Oakland? You want something kind of quick, open late, not too expensive, and with a good rating.
  • What do the reviews say about the restaurant you've chosen?
  • What was the most important factor for you in choosing this spot?
  • You're currently close to the 19th St Bart station, and it's 9PM. How would you get to this restaurant? Do you think you'll be able to make it before closing time?
  • Your friend recommended you to check out a place called Belly while you're in Oakland. Try to find where it is, when it's open, and what kind of food options they have.
  • Now go to any restaurant's page and try to leave a review (don't actually submit it).

What was the worst thing about your experience?

It was hard to find the bart station. The collections not being able to be sorted was a bit of a bummer

What other aspects of the experience could be improved?

Feedback from the owners would be nice

What did you like about the website?

The flow was good, lots of bright photos

What other comments do you have for the owner of the website?

I like that you can sort by what you are looking for and i like the idea of collections

Mobile usability video

You're going on a vacation to Italy next month, and you want to learn some basic Italian for getting around while there. You decided to try Duolingo.

  • Please begin by downloading the app to your device.
  • Choose Italian and get started with the first lesson (stop once you reach the first question).
  • Now go all the way through the rest of the first lesson, describing your thoughts as you go.
  • Get your profile set up, then view your account page. What information and options are there? Do you feel that these are useful? Why or why not?
  • After a week in Italy, you're going to spend a few days in Austria. How would you take German lessons on Duolingo?
  • What other languages does the app offer? Do any of them interest you?

What was the worst thing about your experience?

I felt like there could have been a little more of an instructional component to the lesson.

What other aspects of the experience could be improved?

It would be cool if there were some feature that could allow two learners studying the same language to take lessons together. I imagine that their screens would be synced and they could go through lessons together and chat along the way.

What did you like about the app?

Overall, the app was very intuitive to use and visually appealing. I also liked the option to connect with others.

What other comments do you have for the owner of the app?

Overall, the app seemed very helpful and easy to use. I feel like it makes learning a new language fun and almost like a game. It would be nice, however, if it contained more of an instructional portion.


A case study in competitive usability testing (Part 1)


This post is Part 1 of a 2-part competitive usability study. In Part 1, we deal with how to set up a competitive usability testing study. In Part 2 , we showcase results from the study we ran, with insights on how to approach your data and what to look for in a competitive UX study.

There are many good reasons to do competitive usability testing. Watching users try out a competitor’s website or app can show you what their designs are doing well, and where they’re lacking; which features competitors have that users really like; how they display and organize information and options, and how well it works.

A less obvious, but perhaps even more valuable, reason is that competitive usability testing improves the quality of feedback on your own website or app. Giving users something to compare your interface to sharpens their critiques and increases their awareness.

Read more: 5 secrets to comparative usability testing

If a user has only experienced your website’s way of doing something, for example, it’s easy for them to take it for granted. As long as they were able to complete what was asked of them, they may have relatively little to say about how it could be improved. But send them to a competitor’s site and have them complete the same tasks, and they’ll almost certainly have a lot more to say about whose way was better, in what ways, and why they liked it more.

Thanks to this effect alone, the feedback you collect about your own designs will be much more useful and insight-dense.

Quantifying the differences

Not only can competitive user testing get you more incisive feedback on what and how users think, it’s also a great opportunity to quantitatively measure the effectiveness of different pages, flows, and features on your site or app, and to quantify users’ attitudes towards them.

Quantitative metrics and hard data provide landmarks of objectivity as you plan your roadmap and make decisions about your designs. They deepen your understanding of user preferences and strengthen your ability to gauge the efficacy of different design choices.

When doing competitive UX testing – whether between your products and a competitor’s, or between multiple versions of your own products – quantitative metrics are a valuable baseline that provide quick, unambiguous answers and lay the groundwork for a thorough qualitative analysis.


Domino’s vs Pizza Hut: A competitive user testing case study

We revisited our old Domino’s vs Pizza Hut UX faceoff , this time with 20 test participants, to see what we would find – not just about the UX of ordering pizza online, but also about how to run competitive usability tests, and how to use quantitative data in your competitive study.

Why 20 users? It’s the minimum sample size to get statistically reliable quantitative data, as NNGroup and other UX research experts have demonstrated. In our post-test survey, we included a number of new multiple choice, checkbox-style, and slider rating questions to get some statistically sound quantitative data points.
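To get a feel for how much precision a sample of 20 buys you on a binary metric such as task completion, you can compute a confidence interval around the observed rate. The sketch below uses the adjusted Wald interval, one commonly recommended approach for small usability samples; the observed numbers are illustrative, not from this study.

```python
from math import sqrt

def adjusted_wald(successes, n, z=1.96):
    """Approximate 95% confidence interval for a completion rate (adjusted Wald)."""
    p = (successes + z * z / 2) / (n + z * z)
    half = z * sqrt(p * (1 - p) / (n + z * z))
    return max(0.0, p - half), min(1.0, p + half)

# 16 of 20 testers completing a task (80% observed) still leaves a fairly wide interval:
low, high = adjusted_wald(successes=16, n=20)
print(f"observed 80%, 95% CI roughly {low:.0%} to {high:.0%}")
```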

Read more: How many users should I test with?

Setup of the study

The first choice you need to make when setting up a competitive UX study is whether to test each interface with separate groups of users, or send the same users to each one.

As described above, we prefer sending the same users to both if possible, so that they can directly compare their experiences with a sharp and keenly aware eye. We recommend trying this method if it’s feasible for your situation, but there are a few things to consider:

1. Time: How long will it take users to go through both (or all) of the interfaces you’re testing? If the flows aren’t too long and the tasks aren’t too complicated, you can safely fit 2 or even 3 different sites or apps into a single session.

The default session duration for TryMyUI tests is 30 minutes, which we’ve found to be a good upper limit. The longer the session goes, the more your results could degrade due to tester fatigue, so keep this in mind and make sure you’re not asking too much of your participants.

2. Depth: There will necessarily be a trade-off between how many different sites or apps users visit in a single session, and how deeply they interact with each one. If you need users to go into serious depth, it may be better to use separate groups for each different interface.

3. Scale: To get statistically reliable quantitative data, at least 20 users should be reviewing each interface. If every tester tries out both sites during their session, you only need 20 in all. If you use different batches of testers per site, you would need 40 total users to compare two sites.

So if you don’t have the ability or bandwidth to recruit and test with lots of users, you may want to simplify each flow such that they can fit into a single session; but if your team can handle larger numbers, you can have 20 visit each site separately (or even have some users visit multiple sites, and others go deeper into a single one).

For our Domino’s vs Pizza Hut test, we chose to send the same users to both sites so they could directly compare their experience on each. This wasn’t too much of a challenge, as ordering pizza is a relatively simple flow that doesn’t require intense or deep interaction, and the experience of both sites could fit easily into a 30-minute window.

Learn more: user testing better products and user testing new products


Accounting for bias

As with any kind of usability testing, it’s critical to be aware of potential sources of bias in your test setup. In addition to the typical sources, competitive testing can also be biased by the order of the websites.

There’s several ways that this bias can play out: in many cases, users are biased in favor of the first site they use, as this site gets to set their expectations of how things will look and work, and where different options or features might be found. When the user moves on to the next website, they may have a harder time simply because it’s different from the first one.

On the other hand, users may end up finding the second site easier if they had to struggle through a learning curve on the first one . In such cases, the extra effort they put in to understand key functions or concepts on the first site might make it seem harder, while simultaneously giving them a jump-start on understanding the second site.

Lastly, due to simple recency effects , the last interface might be more salient in users’ minds and therefore viewed more favorably (or perhaps just more extremely).

To account for bias, we set up 2 tests: one going from A→B, and one from B→A , with 10 users per flow. This way, both sites would get 20 total pairs of eyes checking them out, but half would see each site first and half of them second.

No matter whether the site order would bias users in favor of the second platform or the first, the 10/10 split would balance these effects out as much as possible.

The other benefit of setting up the study this way is that we would get to observe how brand new visitors and visitors with prior expectations would view and interact with each site. Both Domino’s and Pizza Hut would get their share of open-minded new orderers and judging, sharp-eyed pizza veterans.
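Mechanically, the counterbalancing is simple: shuffle the participant list and alternate the site order. A minimal sketch of that assignment (the participant IDs and the seed are illustrative, and this is not how any particular testing platform assigns testers):

```python
import random

SITES = ["Domino's", "Pizza Hut"]

def assign_orders(participants, seed=7):
    """Counterbalance site order: half the testers see A then B, half see B then A."""
    shuffled = participants[:]
    random.Random(seed).shuffle(shuffled)
    return {
        person: (SITES if i % 2 == 0 else list(reversed(SITES)))
        for i, person in enumerate(shuffled)
    }

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 testers in total
for person, order in assign_orders(participants).items():
    print(person, "->", " then ".join(order))
```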

Writing the task script

We re-used the same task script from our previous Domino’s vs Pizza Hut test, which has been dissected and explained in an old blog post here . You can read all about how we chose the wording for those tasks in that post.

You can do a quick skim of the task list below:

Scenario: You’re having a late night in with a few friends and people are starting to get hungry, so you decide to order a couple of pizzas for delivery.

  • Have you ordered pizza online before? Which website(s) did you use?
  • Does this site have any deals you can take advantage of for your order?
  • Customize your pizzas with the toppings, sauce, and crust options you would like.
  • Finalize your order with any other items you want besides your pizzas.
  • Go through the checkout until you are asked to enter billing information.
  • Please now go to [link to second site] and go through the pizza ordering process there too. Compare your experience as you go.
  • Which site was easier to use, and why? Which would you use next time you order pizza online?

We also could have broken down Task 6 into several more discrete steps – for example, mirroring the exact same steps we wrote for the first website. This would have allowed us to collect task usability ratings, time on task, and other user testing metrics that could be compared between the sites.

However, we decided to keep the flow more free-form and let users chart their own course through the second site. You can choose between a looser task script and a more structured one based on the kinds of data you want to collect for your study.

The post-test survey

After users complete the tasks during their video session, we have them respond to a post-test survey . This is where we posed a number of different rating-style and multiple-choice type questions to try and quantify users’ attitudes and determine which site performed better in which areas.

Our post-test survey:

After completing both flows and giving feedback on each step, we wanted the users to unequivocally choose one of the websites. This way we could instantly see the final outcome from each of the tests, without trying to parse unemphatic verbal responses from the videos.

For each test, we listed the sites in the order they were experienced, to avoid creating any additional variables between the tests.

  • How would you rate your experience on the Domino’s website, on a scale of 1 (Hated it!) to 10 (Loved it!)? (slider rating, 1-10)
  • How would you rate your experience on the Pizza Hut website, on a scale of 1 (Hated it!) to 10 (Loved it!)? (slider rating, 1-10)

Here again we showed the questions in an order corresponding to the order from the video session. First users rated the site they started on, then they rated the site they finished on.
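Once the surveys are in, the slider ratings can be summarized per site and split by which site the tester saw first, which makes any order effect visible at a glance. A small sketch with made-up numbers, not the study's actual data:

```python
from statistics import mean

# One row per completed post-test survey (illustrative values only).
responses = [
    {"order": "dominos_first",  "dominos": 8, "pizzahut": 6},
    {"order": "dominos_first",  "dominos": 7, "pizzahut": 7},
    {"order": "pizzahut_first", "dominos": 9, "pizzahut": 5},
    {"order": "pizzahut_first", "dominos": 6, "pizzahut": 6},
]

def avg_rating(site, order=None):
    """Average 1-10 slider rating for a site, optionally filtered by test order."""
    rows = [r for r in responses if order is None or r["order"] == order]
    return round(mean(r[site] for r in rows), 2)

for site, first, second in [("dominos", "dominos_first", "pizzahut_first"),
                            ("pizzahut", "pizzahut_first", "dominos_first")]:
    print(site,
          "| overall:", avg_rating(site),
          "| when seen first:", avg_rating(site, first),
          "| when seen second:", avg_rating(site, second))
```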

  • Overall mood/feel of the site
  • Attractive pictures, images, and illustrations
  • Ease of navigating around the site
  • Clarity of information provided by the site
  • None of the above

For the fourth question, we listed several different aspects of the user experience to see which site held the edge in each. Users could check any number of options, and we also included a “none of the above” option.

In this case, we asked users to select areas in which the second site they had tested was superior to the first. We felt that since users might tend to favor the first site they experienced, it would be most illuminating to see where they felt the second site had been better.

If we were to run this test again, we would include more options to pick from that later came up in our results, such as the availability of appealing promotions/deals, and the choices of pizza toppings and customizations and other food options.

  • What is the #1 most important thing about a pizza website, to you? (free response)

Since we knew that we probably wouldn’t think of every possible area of the experience that users cared about, we followed up by asking a free-response question about what users prioritized the most in their online ordering experience. This allowed us to get more insight into the previous question, and build a deeper understanding of each user’s mindset while viewing their videos and other responses.

  • Several times a week
  • About once a week
  • Once or twice a month
  • Less than once a month
  • Domino’s
  • Papa John’s
  • Little Caesars

The final 2 questions were just general information-gathering questions. We were interested to see what kind of backgrounds the testers had (and were maybe also a little excited to try out more of the new post-test question types ).

Besides expanding the options in question 4, the other thing we would change about the post-test survey if we ran this study again would be to ask more free-response type questions. We found that with so many quantitative type questions, we actually missed out on some qualitative answers that would have been useful (especially in conjunction with the data we did get).

Some example questions we would add, which we thought of after getting the results in, are:

  • What did you like the best about the [Domino’s/Pizza Hut] website?
  • What did you like the least about the [Domino’s/Pizza Hut] website?
  • Do you feel that your experience on the two sites would influence your choice between Domino’s and Pizza Hut if you were going to order online from one of them in the future?

Wrapping up Part 1

Besides the task script and post-test survey, the rest of the setup just consisted of choosing a target demographic – we selected users in the US, ages 18-34, making under $50,000. Once the tests were finalized, we launched them and collected the 20 results in less than a day.

In Part 2 of this series, we’ll go over the results of the study, including the quantitative data we got, the contents of the videos, and what we learned about doing competitive usability testing.

Part 2: Results


By Tim Rotolo

Tim Rotolo is a co-founder at Trymata, and the company's Chief Growth Officer. He is a born researcher whose diverse interests include design, architecture, history, psychology, biology, and more. Tim holds a Bachelor's Degree in International Relations from Claremont McKenna College in southern California. You can reach him on Linkedin at linkedin.com/in/trotolo/ or on Twitter at @timoroto



A Complete Usability Testing Guide - Methods, Questions & More

Create perfect products that delight your customers

We’re heading into the holiday season of 2023, and still, the words that Philip Kotler said decades ago ring truer than ever: “ customer is king .”

With the approval of your kings (and queens), your product - app, website, software, or any other offering to them - is bound to do well in the market. Which brings us to the question:

How can you tell what goes on in your customer’s minds?

Put another way, is your product going to help your target segment overcome their challenges?

Before you go all-in and launch, ensure that you’ve made a sure bet - with usability testing.

This guide answers ‘what is usability testing’ and delves into some methods, questions, and ways to analyze the data.

TL;DR tip: Scroll to the end for a convenient list of FAQs about usability testing.

  • Usability Testing - What Is It?
  • How You Benefit From Usability Testing
  • Common Usability Testing Methods
  • Remote [Moderated/Unmoderated]
  • Benchmark Comparison
  • Open-Ended [Exploratory]
  • Contextual [Informed users]
  • Guerilla [Random public]
  • What Questions to Ask During Usability Testing
  • Best Practices To Follow for Usability Testing
  • Mistakes To Avoid in Usability Testing

Usability testing answers your questions about any and all parts of your product’s design. From the moment users land on your product page to the time they show exit intent, you can get feedback about user experience.

Using this feedback, you can refine prototypes into products that delight your customers and improve conversion rates. This also works well if you want to improve a product that is already on the market.

Usability testing is essential because it makes your website or product easier to use. With so much competition out there, a user-friendly experience can be the difference between converting the visitors to customers or making them bounce off to your competitors.

For example, the bounce rate on a web page that loads slower than 3 seconds can jump up to 38% , meaning more people leaving the site without converting.

In the same way, a product with a complicated information architecture will add to the learning curve for the users. It may confuse them enough to abandon it and move to a more valuable product.

When you’re told by unbiased testers or users that your product “ feels like it’s missing something ,” but they can’t tell you what “ something ” is, isn’t it frustrating? It’s a pity you can’t turn into Sherlock Holmes and ‘deduce’ what they mean by that vague statement.

Philip Kotler even quoted the author of Sherlock Holmes, Sir Arthur Conan Doyle: “ It is a capital mistake to theorize before one has data. ” in his book Kotler on Marketing: How to Create, Win, and Dominate Markets .

How cool would it be to turn the subjective, vague survey responses into objective, undeniable data? This data can be the source of quite ‘elementary’ as well as pretty deep insights that you’ve been seeking.

That’s where usability testing comes in.

It puts the users in the driver's seat to guide you to all the points which can deter them from having a good experience.


Be it a prototype or working product, you can use usability tests at any stage to streamline the flows and other elements.

It’s time to see what benefits usability testing can bring to your product development cycle. Irrespective of the stage at which your product is, you can run usability testing to pinpoint critical issues and optimize it to improve user experience.

1. Identify Issues With Process Flows

There are so many processes & flows on your website or product, from creating an account to checking out. How would you know if they are easy to use and understand or not?

For example, a two-step checkout process may seem like a better optimization strategy than a three-step process. You can also use A/B testing to see which one brings more conversions.

But the question is, which is more straightforward to understand? Even though a two-step checkout involves fewer steps than a three-step checkout, it may complicate the checkout process more, leaving first-time shoppers confused.

That is where usability testing helps to find the answer. It helps to simplify the processes for your users.

Case Study - How McDonald's improved its mobile ordering process

In 2016, McDonald's rolled out their mobile app and wanted to optimize it for usability. They ran the first usability session with 15 participants and at the same time collected survey feedback from 150 users after they had used the app.

It allowed them to add more flexibility to the ordering process by introducing new collection methods. They also added a feature that let users place an order from wherever they wanted and collect the food upon arriving at the restaurant. This improved customer convenience and avoided congestion at the restaurants.

Following the success of their first usability test iteration, they ran successive usability tests before releasing the app nationwide.

They wanted to test the end-to-end process for all collection methods to understand the app usability, measure customer satisfaction, identify areas of improvement, and check how intuitive the app was.

They ran 20 usability tests for all end-to-end processes and surveyed 225 first-time app users.

With the help of the findings, the app received a major revamp to optimize the User Interface or UI elements, such as more noticeable CTAs. The testing also uncovered new collection methods which were added to the app.

2. Optimize Your Prototype

Just like in the case of McDonald's in the previous point, usability testing can help you evaluate whether your prototype or proof of concept meets the user's expectations or not.

It is an important step during the early development stages to improve the flows, information architecture, and other elements before you finalize the product design and start working on it.

It saves you from the painstaking effort of tearing down what you've built because it does not agree with your users.

Case Study - SoundCloud used testing to build an optimized mobile app for users

When SoundCloud moved its focus from desktop to mobile app, it wanted to build a user-friendly experience for the app users and at the same time explore monetization options. With increased demand on the development schedule because of the switch from website to mobile, the SoundCloud development team decided to use usability testing to discover issues and maintain a continuous development cycle.

The first remote usability test iteration involving 150 testers from 22 countries found over 150 bugs that affected real users. It also allowed the team to scale up the testing to include participants from around the world.

Soundcloud now follows an extensive testing culture to release smoother updates with fewer bugs. It helps them to add new features with better mobile compatibility to streamline their revenue models.

Bonus Read: Step by Step: Testing Your Prototype

3. Evaluate Product Expectations

With usability testing, it is possible to map whether the product performs as intended in the actual users' environment. You can test whether the product is easy to use or not. The question that testing answers is: does it have all the features to help the user complete their goal in their preferred conditions?

Case Study - Udemy maps expectations and behavior of mobile vs. desktop app users

Udemy, an online education platform, always follows a user-centric approach and learns from its users' feedback. It has helped them to put customer insights into the product roadmap.

The team wanted to understand the behavioral differences between users from different platforms such as desktop and mobile to map their expectations.

So, they decided to create a diary study using remote unmoderated testing as it would allow participants from different user segments and provide rich insights into the product.

They asked the participants to use a mobile camera to show what they were doing while using the app to study their behavior and the environment.

The data from the studies were centralized in one place for the teams to take note of the issues and problems that people faced while using the Udemy platform.

With remote testing, the Udemy team got insights into how students used the app in their environment, whether on mobile or desktop. For example, the initial assumption that people used the mobile app on the go was proven wrong. Instead, the users were stationary even when they were on the Udemy mobile app.

It also shed new light on the behavior of mobile users, helping the team feed the insights into future product and feature planning.

Bonus Read: Product Feedback Survey Questions & Examples

4. Optimize Your Product or Website

With every usability test, you can find major and minor issues that hinder the user experience and fix them to optimize your website or product.

Case Study - Satchel uses usability testing to optimize their website for conversions

Satchel, an online learning platform, wanted to test the usability of their website and feed the findings into their conversion rate optimization process. The test recruited five participants to perform specific tasks and answer questions to help the team review the functionality and usability of key user journeys.

The findings revealed one major usability issue in the flow where users were asked to find pricing information.


It indicated a high lostness score (0.6), a measure that compares the optimal number of steps needed to complete a task with the number of steps users actually took; a score above roughly 0.5 means users are getting lost or taking far longer than expected. Some participants even got frustrated, which would likely translate into churned customers with real users.
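
For reference, here is a minimal sketch of how a lostness-style score can be calculated, assuming the commonly cited formula from Smith (1996); the case study above does not specify its exact calculation, so treat the function and numbers below as purely illustrative.

```python
# Illustrative only: the lostness measure commonly attributed to Smith (1996).
# The Satchel case study does not disclose its exact calculation.
import math

def lostness(unique_pages: int, total_pages: int, required_pages: int) -> float:
    """Return a lostness score: 0 is a perfect path, higher means more lost.

    unique_pages   -- N: different pages visited while attempting the task
    total_pages    -- S: total pages visited, counting revisits
    required_pages -- R: minimum pages needed on the optimal path
    """
    return math.sqrt(
        (unique_pages / total_pages - 1) ** 2
        + (required_pages / unique_pages - 1) ** 2
    )

# Example: the optimal path needs 3 pages, but a participant visited
# 8 pages in total across 6 different pages.
score = lostness(unique_pages=6, total_pages=8, required_pages=3)
print(round(score, 2))  # ~0.56, above the ~0.5 'users are lost' threshold
```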


Using the insights, the team decided to test a new design that added pricing and a 'book a demo' link to the navigation menu.

The results showed a 34% increase in 'book a demo' requests, validating their hypothesis.

The example clearly shows how different tests can be used to develop a working hypothesis and test it out with statistical confidence.

Bonus Read: Website Optimization Guide: Strategies, Tips & Tools

5. Improve User Experience

In all the studies discussed above, the ultimate aim of usability testing was to improve user experience. After all, if users find your website or product engaging and easy to use, they are more likely to convert into customers.

Case study - Autotrader.com improves user experience with live interviews

Autotrader.com, an online marketplace for car buyers and sellers, was looking to improve users' car buying experience on the website. To understand user behavior and map their journey, the team used live conversations to connect with recent customers.

Remote interviews made it possible to test users from across the country to understand how behavior changed with location, demographics, and other socioeconomic factors. It helped the testing team connect with users across different segments to compare their journeys.

They discovered one shared experience - both new and experienced customers found the process of finding a new car very exhausting.

“Live Conversation allows me to do journey mapping type interviews and persona type work that I couldn’t do before because of staff and budget constraints— bringing these insights into the company faster and at a much lower cost.”

Bradley Miller, Senior User Experience Researcher at Autotrader

Live interviews also helped uncover new insights about the consumer shopping process. The team found out that most consumers started their car buying journey from search engine queries.

Users were not specifically seeking out a third-party car website, as previously assumed. That meant they could land on any page of the website. It became clear that the landing pages needed to be revamped to provide a more customer-centric experience to visitors.

The team redesigned each page to act as a starting point for the visitor journey, dispelling the assumption that people already knew about the website.

Qualaroo's Guide to Collecting User Feedback for Digital Products

Before you conduct a usability test, it is crucial to understand different usability testing types to pick the one that suits your needs and resource availability.

Usability tests are commonly grouped into the following types:

1. Moderated & Unmoderated Usability Testing

2. Remote & In-Person Usability Testing

Note that remote is not the same as unmoderated: a moderator may still be present during a remote usability test.

3. Qualitative and Quantitative Usability Testing


4. Benchmark or Comparison Usability Testing

Benchmark or Comparison testing is done to compare two or more design flows to find out which works best for the users.

For example, you can test two different designs for your shopping cart menu -

  • one that appears after hovering the cursor over the cart icon
  • the other that shows as a dropdown after clicking on it.

It is a great way to test different solutions to the same problem/issue to find the optimal solution preferred by your users.

You can run benchmark testing at any stage of your product development cycle.

benchmark

Now that you know about usability tests types, let's discuss different usability testing methods.

Each method has a different applicability and approach to testing participants. You can use multiple methods in conjunction to get deeper insights into your users.

1. Lab Testing

Lab usability tests are conducted in the presence of a moderator under controlled conditions. The users perform the tasks on the website, product, or software, and the moderator observes them making notes of their actions and behavior. The moderator may ask the users to explain their actions to collect more information.

The designers, developers, or other personnel related to the project may be present during the test as observers. They do not interfere with the testing conditions.


Advantages:

  • It lets you observe the users closely and interact with them personally to collect in-depth insights.
  • Since the test is performed under controlled conditions, it offers a standardized testing environment for all users.
  • It is an excellent method to test product usability early in the development stage. You can also perform concept usability testing called paper prototyping using wireframes.

Limitations of lab usability testing

  • It is one of the most expensive and time-consuming testing methods.
  • The sample size is usually small (5-10 participants), which may not represent your entire customer base.

Tips for conducting effective lab usability tests:

  • Always make the participants aware that you are testing the website or product and not testing them. It will help to alleviate the stress in the users' minds.
  • Keep your inquiries neutral. Don't ask users leading questions. Instead, use neutral prompts like 'Tell me more about that' or 'Is there anything you'd like to add about the task?'
  • Prioritize user behavior over their answers. Sometimes the feedback does not reflect their actual experience. The user may have had a bad experience but provide four or five-star ratings to be polite.

2. Paper Prototyping

Paper prototyping is an early-stage lab usability testing performed before the product, website, or software is put into production. It uses wire diagrams and paper sketches of the design interface to perform the usability test.


Different paper screens are created for the scenarios in the test tasks. The participants are given the tasks and point to the elements they would click on the paper model of the interface. A person acting as the 'computer' (a developer or moderator) then swaps the paper sketches to show the new layout, snippet, or dropdown, just as it would appear in the product UI.

The moderator observes the user's behavior and may ask questions about their actions to get more information about the choices.

Example Usability Test with a Paper Prototype

Advantages:

  • One of the fastest and cheapest methods to optimize the design process.
  • No coding or designing is required. It helps to test the usability of the design before putting effort into creating a working prototype.
  • Since the usability issue is addressed in the planning phase, it saves time and effort in the development cycle.

Limitations

  • The paper layouts are very time-consuming to prepare.
  • It requires a controlled environment to perform the test properly, which adds more cost.

3. Moderated Card Sorting

Card sorting is helpful in optimizing the information architecture on your website, product, or software. This usability testing method lets you test how users view the information and its hierarchy on your website or product.

In moderated card sorting, the users are asked to organize different topics (labels) into categories. Once they are done, the moderator tries to find out the logic behind their grouping. Successful card sorting requires around 15 participants.

When to use card sorting for usability testing?

You can use card sorting to:

  • Streamline the information architecture of your website or product.
  • Design a new website or improve existing website design elements, such as a navigation menu.

Types of card sorting tests:

Whether it’s a moderated or unmoderated card sorting test, there are three types of card sorting tests:

These are open, closed, and hybrid card sorting.

Advantages of card sorting:

  • It is a user-focused method that helps to create streamlined flows for users.
  • It is one of the fastest and least expensive ways to optimize your website's information architecture.

Limitations:

  • It is highly dependent on users’ perception, especially free (open) sorting.
  • There can be instances when each user creates different categories with no common attributes, leading to a failed test.

4. Unmoderated Card Sorting

In unmoderated card sorting, the users sort the cards alone, without a moderator. You can set up the test remotely or in lab conditions. It is much quicker and less expensive than moderated sorting.

Unmoderated card sorting is usually done using an online sorting tool like Trello or Mural. The tool records the user behavior and actions for analysis later.

5. Tree Testing or Reverse Card Sorting

If card sorting helps you design the website hierarchy, tree testing lets you test the efficiency of a given website architecture design.

You can evaluate how easily users can find the information from the given categories, subcategories, and topics.

The participants are asked to use the categories and subcategories to locate the desired information in a given task. The moderator assesses the user behavior and the time taken to find the information.

You can use tree testing to:

  • Test if the designed groups make sense to the people.
  • See if the categories are easy to navigate.
  • Find out what problems people face while using the information hierarchy.

Example of Card Sorting & Tree Sorting

6. Guerilla Testing

Guerilla testing requires you to approach random people in public places, such as parks, coffee shops, or malls, and ask them to take the test. Since it eliminates the need to find qualified participants and a testing venue, it is one of the most time-efficient and cost-effective ways to collect rich insights about your design prototype or the concept itself. The acceptable sample size is between 6 and 12 participants.

  • It can be used as an ad hoc usability testing method to gather user insights during the early stages of development.
  • It is an inexpensive and fast way of collecting feedback as you don’t need to hire a specific target audience or moderator to conduct the test.
  • You can even use the paper prototype to conduct guerilla testing to optimize your design.
  • Since participants are chosen at random, they may not represent your actual audience sample.
  • The test needs to be short in length as people may be reluctant to give much time for the test. The usual length for guerilla testing is 10-15 minutes per session.

7. Session Recordings or Screen Recording

Session recording is a very effective way to visualize the user interactions on your functional website or product. It is one of the best unmoderated remote usability testing methods to identify the visitors' pain points, bugs, and other issues that might prevent them from completing the actions.

This type of testing requires screen recording tools such as SessionCam. Once set up, the tool anonymously records the users' actions on your website or product. You can analyze the recording later to evaluate usability and user experience.

It can help you visualize the user's journey to examine the checkout process, perform form analysis, uncover bugs and broken pathways, or any other issues leading to a negative experience.

  • It doesn't require hiring the participants. You can test the website using your core audience.
  • Since the data is collected anonymously, the visitors are not interrupted at any time.
  • You can use screen recording in conjunction with other methods like surveys to explore the reasons behind users' actions and collect their feedback.

Remote screen recording with qualified participants

You can also use screen recording with specific participants and the think-aloud method, where people say their thoughts out loud as they perform the given tasks during the test.

In this method, the participants are selected and briefed before the test. It requires more resources than anonymous session recording. It's a fantastic method to collect in-the-moment feedback and actual thoughts of the participants.

8. Eye-Tracking Usability Test

Eye-tracking testing utilizes a pupil tracking device to monitor participants' eye movements as they perform the tasks on your website or product. Like session recording, it is an advanced testing technique that can help you collect nuanced information often missed by inquiry or manual observation.

The eye-tracking device follows the users' eye movements to measure the location and duration of a user's gaze on your website elements.

The results are rendered in the form of:

  • Gaze plots (pathway diagrams) - The size of the bubble represents the duration of the user's gaze at the point.


  • Gaze replays - Recording of how the user processed the page and its elements.
  • Heatmaps - A color spectrum indicating the portions of the webpage gazed at most by all the users. You may have to test around 39 users to create reliable heatmaps.


This type of testing is useful when you want to understand how users perceive the design UI.

  • You can evaluate how users are scanning your website pages.
  • Identify the elements that stand out and grab users' attention first.
  • Identify the ideal portions of the website that attract users to place your CTAs, banners, and messaging.

9. Expert Reviews

Expert reviews involve a UX expert reviewing the website or product for usability and compliance issues.

There are different ways to conduct an expert review:

  • Heuristic evaluation - The UX expert examines the website or product against accepted heuristic usability principles.
  • Cognitive walkthrough - The expert steps through tasks from a user's perspective to gauge the system's usability.

A typical expert review comprises the following elements:

  • Compilation of areas where the design excels in usability.
  • List of points where the usability heuristics and compliance standards fail.
  • Possible fixes for the indicated usability problems
  • Criticality of the usability issues to help the team prioritize the optimization process.

An expert review can be conducted at any stage of the product development. It is an excellent method to uncover the issues with product design and other elements quickly.

But since it requires industry experts and in-depth planning, this type of testing can add substantial cost and time to your design cycle.

10. Automated Usability Evaluation

This last method is more of a proof of concept than a working usability testing methodology. Various papers and studies call for an automated usability tool that can iron out the limitations of conventional testing methods.

Here are two interesting studies that outline the possibilities and applications of an automated usability testing framework.

  • Automated usability testing framework
  • USEFul – A Framework To Automate Website Usability Evaluation

Conventional testing methods, though effective, carry various shortcomings such as inefficiency of the moderator, high resource demands, time consumption, and observer bias.

With automated usability testing, the tool would be able to perform the following functions on its own:

  • Point out major usability issues just like conventional methods.
  • Carry out analysis and calculations by itself, providing quicker results.
  • Provide more accurate and reliable data.
  • Allow for increased flexibility and customization of the test settings to favor all the stages of development.

It would allow developers and researchers to reduce development time, as testing and optimization iterations could be carried out simultaneously.

One of the most common questions about usability testing is 'when can I do it?' The answer: anytime during the product life cycle, whether during the planning stage, the design stage, or even after release.

1. Usability Testing During the Planning Stage

Whether you are creating a new product or redesigning it, conducting usability tests during the planning or initial design stage can reveal useful information that can prevent you from wasting time in the wrong place.

It's when you are coming up with the idea of the product or website design. So testing it out can help you dispel initial assumptions and refine the product flows while still on paper.

For example, you can test whether the information architecture you are planning will be easy to understand and navigate for users. Since nothing is committed, it will help restructure it if needed without much effort.


You can use usability testing methods like paper prototyping, lab testing, and card sorting to test your design concept.

2. Usability Testing During the Design or Development Stage

Now that you have moved into the development stage and produced a working prototype, you can conduct tests to do behavioral research.

At this point, usability testing aims to find out how the functionality and design come together for the users.

  • With a clickable prototype, you can uncover issues with flows as well as design elements.
  • You can study actual user behavior as they interact with your product or website to gain deeper insights into their actions.

While usability testing during the planning stage gives you mostly qualitative insights, with a design prototype you can also measure quantitative usability metrics such as:

  • Task completion time
  • Success rate
  • Number of clicks or scrolls

This data can help validate the design and make the necessary adjustments to the process flows before continuing to the next phase of development.

3. Usability Testing After Product Release

There is always room for improvement, so usability testing is just as crucial after product launch.

You may want to optimize the current design or add new features to improve the product or website.

It is beneficial to test the redesign or update for usability issues before deploying it. It will help to evaluate if the new planned update works better or worse than the current design.

Running a successful usability test depends on multiple factors, such as time constraints, budget, and the tools at your disposal.

Though each usability testing method has a slightly different approach due to the testing conditions and depth of research, they share some common attributes, as explained in this section.

Let’s explore the eight common steps to conduct a usability test.

Step 1: Determine What to Test

Irrespective of the usability testing method, the first step is to draw the plan for the test. It includes finding out what to test on your website or product.

It can be the navigation menu, checkout flow, new landing page design, or any other crucial process.

If it is a new website design, you probably have the design flow in mind. You can create a prototype or wire diagram depicting the test elements.

But if you are trying to test the usability of an existing website or product flow, you can use the data from various tools to find friction points.

a. Google Analytics (GA)

Use the GA reports and charts as the starting point to narrow your scope. You can locate the pages with low conversions and high bounce rates, compare the difference between desktop vs. mobile website performance, and compare the traffic sources and other details.
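
As a purely illustrative example, here is how such friction-point candidates could be shortlisted from an exported analytics report; the file name and column names ("page", "sessions", "bounce_rate", "conversion_rate") are hypothetical and not part of any Google Analytics API.

```python
# Hypothetical sketch: shortlist problem pages from an exported analytics CSV.
# Column names are made up for illustration; adapt them to your own export.
import pandas as pd

df = pd.read_csv("page_performance.csv")

# Flag pages with meaningful traffic but a high bounce rate and low conversions.
candidates = df[
    (df["sessions"] >= 500)
    & (df["bounce_rate"] >= 0.60)
    & (df["conversion_rate"] <= 0.01)
].sort_values("sessions", ascending=False)

print(candidates[["page", "sessions", "bounce_rate", "conversion_rate"]].head(10))
```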


b. Survey feedback

The next step is to deploy surveys at the desired points and use the survey feedback to uncover the issues and problems with these pages.

  • It can be an issue with the navigation menu.
  • Payment problems during checkout like payment failure or pending order status even after successful payment.
  • Issues with the shopping cart.
  • Missing feature on the website or webpage

You can choose from the below list of different types of survey tools based on your requirements:

1. 25 Best Online Survey Tools & Software

2. Best Customer Feedback Tools

3. 30 Best Website Feedback Tools You Need

4. 11 Best Mobile In-App Feedback Tools


c. Tickets, emails, and other communication mediums

Complete the circle by collating the data from tickets, live chat, emails, and other interaction points.

These can be valuable, especially when you are hosting a SaaS product. Customers’ emails can reveal helpful information about bugs and glitches in the process flows and other elements.

Once you have the data, compile it in one place and start ranking the issues based on how often they are mentioned, their criticality, the number of requests, and other factors. This will let you prioritize which element to test. Plus, it will help set clear measurement goals.

Step 2: Set Target Goals

It is necessary to set the goals for the test to examine its success or failure. The test goal can be qualitative or quantitative in nature, depending on what you want to test.

Let's say you want to test the usability of your navigation menu. Start by asking yourself questions to identify the purpose of your test. For example:

  • Are users able to find the 'register your product' tab easily?
  • What is the first thing users notice when they land on the page?
  • How much time does it take to find the customer support tab in the navigation?
  • Is the menu easy to understand?

Once you have the specific goals in mind, assign suitable metrics to measure during the test.

For example:

  • Successful task completion: Whether the participants were able to complete the given task or not.
  • Time to complete a task: The time taken to complete the given task.
  • Error-free rate: The number of participants who were able to complete the task without making any error.
  • Customer ratings and feedback: Customer feedback after completing the task or test, such as satisfaction ratings, ease of use, star ratings, etc.

These metrics will help to establish the outcome of the test and plan the iteration.
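
To make these metrics concrete, here is a minimal sketch (not from the original article) of how they might be computed from per-participant task results; the data and field names are hypothetical.

```python
# Hypothetical per-participant results for one task.
from statistics import mean

results = [
    {"success": True,  "seconds": 48,  "errors": 0, "rating": 5},
    {"success": True,  "seconds": 95,  "errors": 2, "rating": 3},
    {"success": False, "seconds": 180, "errors": 4, "rating": 2},
    {"success": True,  "seconds": 60,  "errors": 0, "rating": 4},
    {"success": True,  "seconds": 72,  "errors": 1, "rating": 4},
]

completion_rate = sum(r["success"] for r in results) / len(results)
error_free_rate = sum(r["errors"] == 0 for r in results) / len(results)
avg_time_success = mean(r["seconds"] for r in results if r["success"])
avg_rating = mean(r["rating"] for r in results)

print(f"Task completion rate: {completion_rate:.0%}")   # 80%
print(f"Error-free rate:      {error_free_rate:.0%}")   # 40%
print(f"Avg. time on task:    {avg_time_success:.0f}s (successful attempts)")
print(f"Avg. satisfaction:    {avg_rating:.1f}/5")
```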

Here is a sample goal template you can use in the usability test:

[Image: sample goal template]

This is how it will look once filled in:

[Image: filled goal template]

Step 3: Identify the Best Method

The next step is to find the most suited method to run the test and plan the essential elements for the chosen usability testing method.

  • If you are in the initial stage of design, you can use paper prototyping.
  • If it is a new product, go for lab usability testing to get detailed information into user behavior and product usability.
  • If you want to restructure the website hierarchy, you can use card sorting to observe how users interact with the new information structure.
  • If it is proprietary software, you can also conduct an expert review to see if it meets all the compliance measures.

Once you have decided on the method, it is time to think about the overheads.

  • If it is an in-person moderated test, you need a moderator, venue, and participants. You would also need to calculate the length for each session and the equipment required.
  • If it is a remote moderated task, find the right tool to run the test. It should be able to connect the moderator and participants through a suitable medium like phone or video. At the same time, it should allow the moderator to observe the participants' behavior and actions to ask follow-up questions.
  • If it is a remote unmoderated test, the usability testing tool would have to explain the instructions, schedule the tasks, guide the participants to each task and record the necessary behavioral attributes simultaneously.

Step 4: Write the Usability Tasks

Writing the Pre-Test Script

Along with the task for the actual test, prepare a pre-test and introductory script to get to know about the participant (user persona) and tell them the purpose of the usability test. You can create scenarios to help the participants relate the product or website to their real-world experience.

Suppose you are testing a SaaS-based project management system. You can use the following warm-up questions to build user personas:

  • What is your current role at your company?
  • Have you used project management software before?
  • If yes, for how long? Are you currently using it?
  • If no, do you know what a project management system does?

Use the information to introduce the participant to the test's purpose and tell them about the product if they have never heard of the concept.

Writing the Test Tasks

Tasks are probably the most important part of usability testing. They are designed as scenarios that prompt the participant to find required information, get to a specific product page, or complete some other action.

A task can be a realistic scenario, straightforward instructions to complete a goal, or a use case.


Pro tip: Use the data from customer feedback and your knowledge of customer behavior to come up with practical tasks.

Using the previous example, let's say you have to create a task for usability testing of your project management tool. The first scenario could look like this:

'You are a manager of a dev-ops team with 20 people. You have to add each team member to your main project - 'Theme development.' How will you do it?'

This scenario will help you assess the following:

  • Can the participant find the teams section from the navigation menu?
  • Can they find the correct project in the team menu, which shows the project name - Theme development?
  • How fast can they find the required setting?

The second scenario could be:

'Once you have added the team members, you want to assign a task to two lead developers, Jon and Claire, under the Theme development project. The deadline for the task needs to be next Friday. How will you do it?'

Use this scenario to test the following:

  • How easy is it to navigate the menu?
  • Is the design of the task form easy to follow?
  • How easily can the participant find all the fields in the task form, such as deadline, task name, developer name, etc.?

If the test is moderated, ask follow-up questions to find the reason behind user actions. If the test is unmoderated, use a screen recording tool or eye-tracking mechanism to record users' actions.

Remember, the sequence of tasks and the associated scenarios will depend on the elements you want to test for usability.

  • For project management software, the primary functions are assigning tasks, tracking productivity, and monitoring deadlines.
  • For an e-commerce website, the main function is conversions, so your tasks and scenarios would be oriented towards letting the users place an order on the website.

Step 5: Find the Participants

There are multiple ways to choose the participants for your usability test.

  • Use your website audience: If you have a website, you can add survey popups to screen the visitors and recruit the right participants for the test. Once you have the required number of submissions, you can stop the popup.


  • Recruit from your social media platforms: You can also use your social channels to find the right participants.
  • Hire an agency: You can use a professional agency to find the participants, especially if you are looking for SMEs and a specific target audience, like people working in the IT industry who have experience with a project management tool.

To increase the chances of participation, always offer your participants an incentive, such as gift cards or discount codes.

Step 6: Run a Pilot Test

With everything in place, it is time to run a pre-test simulation to see if everything works as intended. A pilot test can help you find issues with the scenarios, equipment, or other test-related processes. It is a quality check of your usability test preparation.

  • Choose a candidate who is not related to the project. It can be a random person or a member of a different team not involved with the project.
  • Perform the test as if they were the actual participant. Go through all the test sessions and equipment to check everything works fine.

With pilot testing, you can check:

  • Whether the scenarios are task-focused and easy to understand.
  • Whether any equipment is faulty.
  • Whether the pre- and post-test questions are up to the mark.
  • Whether the testing conditions are ideal.

Step 7: Conduct the Usability Test

If it is an in-person moderated test, start with the warmup questions and introductions from the pre-test script. Make sure the participants are relaxed.

Start with an easier task to help the participants feel comfortable, then transition into more specific tasks. Make sure to ask for their feedback and explore the reasons behind their actions, for example:

  • How was your experience completing this task?
  • What did you think of the design overall?
  • Would you like to say something about the task?

For the remote unmoderated tests, make sure that the instructions are clear and concise for the participants.


You can also include post-test questions for the participants, such as:

  • Did I forget to ask you about anything?
  • What are the three things you liked the most about the product/website/software?
  • Did it lack anything?
  • On a scale from 1 to 10, how easy was the product/website/software to use?

Step 8: Analyze the Results

Once the test is over, it is time to analyze the results and turn the raw data into actionable insights.

a. Start by going over the recordings, notes, and transcripts, and organize the data points in a single spreadsheet. Note down each error the user encountered and the associated task.

b. One way to organize your data is to list the tasks in one column, the issues encountered in each task in the next column, and then add the participant's name next to each issue. It will help you point out how many users faced the same problem (a minimal sketch of this tabulation follows these steps).

c. Also, calculate the quantitative metrics for each task, such as success rate, average completion time, error-free rate, and satisfaction ratings. This will help you track the goals of the test as defined in Step 2.

d. Next, mark each issue based on its criticality. According to NNGroup, the issues can be graded on five severity ratings ranging from 0-4 based on their frequency, impact, and persistence:

0 = I don't agree that this is a usability problem at all

1 = Cosmetic problem only: Need not be fixed unless extra time is available on project

2 = Minor usability problem: Fixing this should be given low priority

3 = Major usability problem: Important to fix, so should be given high priority

4 = Usability catastrophe: Imperative to fix this before product can be released

e. Create a final report with the highest-priority issues at the top and the lowest-priority ones at the bottom. Add a short, clear description of each issue, including where and how it occurred. You can add evidence like recordings to help the team reproduce the issues at their end.

f. Add the proposed solutions to your report. Work with other teams to discuss the issues, identify possible solutions, and include them in the usability testing report.


g. Once done, share the report with different teams to optimize the product/website/software for improving usability.
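
As referenced in step b above, here is a minimal, purely illustrative sketch of tabulating which issues affected how many participants and at what severity; the tasks, issues, and ratings below are made up.

```python
# Hypothetical raw observations: (task, issue, participant, severity 0-4).
from collections import defaultdict

observations = [
    ("Find pricing", "Couldn't locate pricing link",  "P1", 3),
    ("Find pricing", "Couldn't locate pricing link",  "P3", 3),
    ("Find pricing", "Confused by menu labels",       "P2", 2),
    ("Book a demo",  "Form validation error unclear", "P1", 4),
    ("Book a demo",  "Form validation error unclear", "P4", 4),
]

summary = defaultdict(lambda: {"participants": set(), "severity": 0})
for task, issue, participant, severity in observations:
    entry = summary[(task, issue)]
    entry["participants"].add(participant)
    entry["severity"] = max(entry["severity"], severity)

# Highest severity first, then the issues affecting the most participants.
for (task, issue), entry in sorted(
    summary.items(),
    key=lambda kv: (kv[1]["severity"], len(kv[1]["participants"])),
    reverse=True,
):
    print(f"[severity {entry['severity']}] {task}: {issue} "
          f"({len(entry['participants'])} participants)")
```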

Do you feel that you know everything there is to know about your product and its users?

If you answered yes, you may ask: then what is the purpose of a usability test?

No matter how much you know about your customers, it isn’t wise to ignore the possibility that there is more to be learned about them, or about any shortcomings in your product.

That is why what you ask, when you ask, and how you ask is of the utmost importance.

Here are a few examples of usability testing questions to help you form your own.

Questions for ‘first glance’ testing

Check if your design communicates what the product/website is at first glance.

  • What do you think this tool/ website is for?
  • What do you think you can do on this website/ in this app?
  • When (in what situations) do you think would you use this?
  • Who do you think this tool is for? / Does this tool suit your purposes?
  • Does this tool resemble anything else you have seen before? If yes, what?
  • What, if anything, doesn’t make sense here? Feel free to type in this text box.

Pro-Tip: If you’re testing digitally with a feedback tool like Qualaroo, you can even time your questions to pop up after a pre-set time spent on-site for a more accurate first glance test.

Questions for specific tasks or use cases

Develop task-specific questions for common user actions (depending upon your industry).

  • How did you recognize that the product was on sale? (E-commerce and retail)
  • What information did you feel was missing, if any? (E-commerce and retail)
  • What payment methods would you like to be added to those already accepted? (E-commerce and retail/SaaS)
  • How did you decide that the plan you have picked was the right one for you? (SaaS)
  • Do you think booking a flight on this website was easier or more difficult than on other websites you have used in the past? (Travel)
  • Did sending money via this app feel safe? (Banking/FinTech)
  • Do you think data gathered by this app is reliable, safe, and secure from breaches or hacks? (Internet)

Pro-tip: If you want to test the ease with which users perform specific tasks (like the ones listed above), consider structuring your tasks as scenarios instead of questions.

Questions for assessing product usability

Ask these questions after users complete test tasks to understand usability better.

  • Was there anything that surprised you? If yes, what?
  • Was there anything you expected that wasn’t there?
  • What was difficult or strange about this task, if anything?
  • What did you find easiest about this task?
  • Did you find everything you were looking for? / What was missing, if anything?
  • Was there anything that didn’t look the way you expected? If so, what was it?
  • What was unnecessary, if anything? / Was anything out of place? If so, what was it?
  • If you had a magic wand, what would you change about this experience/task?
  • How would you rate the difficulty level of this task?
  • Did it take you more or less time than you expected to complete this task?
  • Would you normally spend this amount of time on doing this task?

Pro-tip: If you are getting users to complete more than one task, limit yourself to no more than 3 questions after each task to help prevent survey fatigue.

Questions for evaluating the holistic (overall) user experience

Finalize testing with broad questions that collect new information you haven’t considered.

  • Try to list the features you saw in our tool/product.
  • Do you feel this application/tool/website is easy to use?
  • What would you change in this application/website, if anything?
  • How would you improve this tool/website/service?
  • Would you be interested in participating in future research?

Pro-tip: No matter which way you phrase your final questions, we recommend using an open-ended answer format so that you can provide users with a space to share feedback more freely. Doing so allows them to flesh out their experience during testing and might even inadvertently entice them to bring up issues that you may never have considered.

If you’re wondering how to conduct usability testing for the first time, or without having to jump through hoops and write code, you can simply stroll over to Qualaroo’s survey templates.

We have created customizable templates for usability testing, like SUPR-Q (Standardized User Experience Percentile Rank Questionnaire, with or without NPS), UMUX (Usability Metric for User Experience, 2 positive & 2 negative statements), and UMUX Lite (2 positive statements).

SUPR-Q is a validated way to measure the general user experience on a website or application. It includes 8 questions across 4 areas: usability, trust/credibility, loyalty (including NPS), and appearance. However, it doesn’t identify bottlenecks or problems with navigation or specific elements of the interface. Scores are percentile ranks, so 50 (the average) is the benchmark for assessing your product’s UX.


UMUX allows you to measure the general usability of a product (software, website, or app). It has 4 statements for users to rate on a 5- or 7-point Likert scale. However, it isn’t generally used to measure specific characteristics like usefulness or accessibility, nor for identifying navigation issues, bottlenecks, or problems that are related to specific elements of your product’s interface.


On a related note, if you have launched a product aimed specifically at smartphone users and you wish to understand the contextual in-app user experience (UX), simply take these 3 steps.

Even though there are multiple usability test methods, they share some general guidelines to ensure the accuracy of the test and results. Let’s discuss some of the do’s and don’ts to keep in mind while planning and conducting a usability test.

1. Always Do a Pilot Test

The first thing to remember is to do a quality check of the usability test before going live. You wouldn't want anything to fall apart during actual testing, be it a broken link, faulty equipment, or ineffective questions and tasks; any of these would waste time and resources.

Use a tester who is not associated with the usability test, such as a member of another team in your organization. Run the usability test simulation under the actual conditions to gauge the efficiency of your tasks and prototype. Pilot testing can help uncover previously undetected bugs.

It can help to:

  • Measure the session time, inspect the quality of the equipment and other parameters.
  • Test whether the prototype is designed according to the task. You can add missing flows or elements.
  • Test the questions and tasks. You can gauge whether users understand them easily and whether the scenarios are clear.

2. Leverage the Observer Position

It is always a good practice to let your team attend the usability test as observers. It can produce a two-pronged effect:

  • The team can learn firsthand about user experience and how people interact with your product.
  • It can help them follow the user journey and observe the points of friction which can aid them in coming up with the optimal solutions keeping the user journey in mind.

Pro tip: Be careful not to disturb or talk to the participants. The observer's role is to be invisible.

3. Determine the Sample Size for Your Usability Test

According to NNGroup, five users in a usability test can reveal around 85% of the usability problems.

However, considering other external factors, there are different acceptable sample sizes for different usability testing methods.

  • Guerilla testing would require 5-8 testers.
  • Eye-tracking requires at least 39 users.
  • Card sorting can produce reliable results with 15 participants.

Pro tip: If you aim to measure usability for multiple audience segments, the sample size will increase accordingly to include representation for each segment.

So, use the correct sample size for your usability test to make the results statistically significant.
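
For context, the "5 users" figure comes from a problem-discovery model often attributed to Nielsen and Landauer: if each participant has probability p of encountering a given problem, n participants are expected to uncover 1 − (1 − p)^n of the problems. The sketch below simply evaluates that formula with the commonly used p ≈ 0.31 and is illustrative, not part of the NNGroup material.

```python
# Illustrative: proportion of usability problems expected to be found,
# using the 1 - (1 - p)^n discovery model with p = 0.31 per participant.
def problems_found(n_participants: int, p: float = 0.31) -> float:
    return 1 - (1 - p) ** n_participants

for n in (1, 3, 5, 8, 15):
    print(f"{n:>2} participants -> ~{problems_found(n):.0%} of problems found")
```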

4. Recruit More Users

Not all participants may show up for the usability test, whether it is in-person or remote. That's why it is helpful to recruit more participants than your target sample size. It will ensure that testing reaches the required statistical significance and you obtain reliable results.

5. Always Have a Quick Post-Session Questionnaire

A post-test interview is a potential gold mine to collect deeper insights into user behavior. The users are more relaxed than during the test to provide meaningful feedback about their difficulties in performing the task, delights about the product, and overall experience. Plus, it also presents the opportunity to ask follow-up questions that you may have missed during the tasks.

Pro tip: Budget for the post-test interview when calculating the total session time. For example, if you are planning each session to be 10-15 minutes long, keep 2-3 minutes for post-test questions.

6. Make the Participants Feel Comfortable

If your participants are nervous or stressed out, they won't be able to perform the tasks in the best way, which means skewed test results. So, try to make the participants feel relaxed before they start the test.

One way is to have a small introduction round during the pre-test session. Instead of strictly adhering to the question sheet, ask a few general friendly questions as the moderator to establish a relationship with the user. From there, you can smoothly transition into the testing phase without putting too much pressure on them.

7. Mitigate the Observer Effect

The “observer effect” or “Hawthorne effect” is when people in studies change their behavior because they know they are being watched. In moderated usability testing, participants may get nervous or shy away from criticizing the product. They may not share their actual feedback or ask the questions that come to mind. All of these behavioral traits can lead to test failure or unreliable results.

So, make sure that the moderator does not influence the participants. A simple trick is to pretend that you are writing something instead of constantly watching over the participants.

The observer effect is one more reason to have a friendly pre-test conversation and tell the participants to ask questions when they don't understand something and share their feedback openly. Discuss the test's purpose so they understand their feedback is valuable to make the product better.

The overarching purpose of this usability testing guide was to help answer one essential question: how do I create the best product that satisfies customers (and, as a bonus, outshines the competition)? We hope it shed light on the possible ways to answer that question. Plus, here are a few pitfalls that are best avoided as you search for the answers:

1. Creating Incorrect or Convoluted Scenarios

The success of the usability test depends on the tasks and scenarios you give the participants. If the scenarios are hard to understand, the participants may get confused, leading to a drop in the task success rate caused not by usability problems but by the questions themselves. The problem is compounded in unmoderated usability testing, where the participant cannot approach a moderator if stuck.

That's why it is essential to keep your sentences concise and clear so that the tester can follow the instructions easily. A pilot test is an excellent way to check the quality of the questions and make changes.

2. Asking Leading Questions

Leading questions are those that carry a response bias in them. These questions can unintentionally steer the participants in a specific direction. It can point towards a step that you may want the participants to take or an element you want them to select.

Leading questions nullify the usability test as they help the participants to reach the answer. So, test your scenarios and questions during the pilot run to weed out such questions (if any).

3. Not Assigning Proper Goals and Metrics

It is essential to set the goals clearly to deliver a successful usability test. Whether the goals are qualitative or quantitative, assign suitable metrics to measure them properly.

For example, if the task aims to test the usability of your navigation menu, you may want to see whether the users can find the information or not. But to quantify this assessment, you can also calculate the success rate and time taken by participants to complete their tasks.

While the qualitative analysis will reveal points about user experience, the quantitative data will help calculate the reliability of your findings. In this way, you can approach the test results objectively.

4. Testing With Incorrect Audience

One of the biggest mistakes in usability testing is using an incorrect audience sample, which leads to inaccurate results.

For example, if you use friends or coworkers who already know about the product/software, they may not face the same problems that actual first-time users would experience.

In the same way, if they are entirely unaware of the product fundamentals, they might get stuck at points that your actual audience would easily navigate.

To recruit the right audience, start by focusing on the question - who will be the actual users of the test elements? Are they new users, verified customers, or any other user segment?

Once you have your answers, you can set the proper goals for the test.

5. Interrupting or Interacting With the Participants

Another grave mistake is repeatedly interrupting the participants. The usability test is aimed at observing users testing the product/software without any outside influence. The moderator can ask questions and guide them if necessary.

Constantly bugging the participants may make them nervous or frustrated, disturbing the testing environment and providing false results.

6. Not Running a Pilot Test

As mentioned before, a pilot test is a must in usability testing. It helps weed out the issues with your test conditions, scenarios, equipment, test prototype, and other elements.

With pilot testing, you can sort out these problems before you start the test.

7. Guiding Users During the Test

The purpose of usability testing is to simulate actual user behavior to measure how easy it would be for them to use the product. If you guide the users through the scenario, you are compromising the test results.

The moderator can help the participants understand the scenario, but they may not help them complete the task or point towards the solution.

8. Forming Premature Conclusions

Another mistake to avoid is drawing conclusions from the results of the first two or three users. It is necessary to observe all the participants before drawing any conclusions.

Also, do not rush the testing process. You may be tempted to feel that you have all the information after testing a few participants, but review the data from every participant to establish the reliability of your results. It may point you towards new issues and problems.

Bonus Read: 30 Best A/B Testing Tools Compared

9. Running Only One Testing Phase

Experimentation and testing is an iterative process. Plus, since the sample size in usability testing is usually small, it is a big mistake to treat the results from one test phase as definitive.

The same applies to the solutions you implement for the issues found in the test. There can be many solutions to a single problem, so how will you know which one works best?

The only way to find out is to run successive tests after implementing the solution to optimize product usability. Without iterations, you cannot tell if the new solution is better or worse than the previous one.

It is true what they say: experience is the best teacher. As you do more tests, you will gain a better understanding of what usability testing actually is about - creating a perfect product.

It stands to reason that the easier it is for your prospective customers to use your product, the more sales you will see. Usability testing helps you eliminate unforeseen glitches and improve the user experience by collecting pertinent user feedback for actionable insights. To get the best insights, Qualaroo makes usability testing a delightful experience for your testers.

Irrespective of its size, every organization needs to hone its ability to listen to customers by creating a robust Voice of the Customer (VoC) strategy suited to its internal business model and existing VoC feedback.

Each Voice of the Customer technique can be used on its own or coupled with others for optimum results. But for your efforts to turn into the desired results, you need the right tool by your side. With Qualaroo surveys, you can get started with collecting real-time, unbiased feedback and procure qualitative insights from your quantitative data.

Do you want to run usability tests?

Qualaroo surveys gather insights from real users & improve product design


Usability Testing

What is usability testing?

Usability testing is the practice of testing how easy a design is to use with a group of representative users. It usually involves observing users as they attempt to complete tasks and can be done for different types of designs. It is often conducted repeatedly, from early development until a product’s release.

“It’s about catching customers in the act, and providing highly relevant and highly contextual information.”

— Paul Maritz, CEO at Pivotal


Usability Testing Leads to the Right Products

Through usability testing, you can find design flaws you might otherwise overlook. When you watch how test users behave while they try to execute tasks, you’ll get vital insights into how well your design/product works. Then, you can leverage these insights to make improvements. Whenever you run a usability test, your chief objectives are to:

1) Determine whether testers can complete tasks successfully and independently .

2) Assess their performance and mental state as they try to complete tasks, to see how well your design works.

3) See how much users enjoy using it.

4) Identify problems and their severity .

5) Find solutions .

While usability tests can help you create the right products, they shouldn’t be the only tool in your UX research toolbox. If you just focus on the evaluation activity, you won’t improve the usability overall.


There are different methods for usability testing. Which one you choose depends on your product and where you are in your design process.

Usability Testing is an Iterative Process

To make usability testing work best, you should:

1) Plan –

a. Define what you want to test. Ask yourself questions about your design/product. What aspect/s of it do you want to test? You can make a hypothesis from each answer. With a clear hypothesis, you’ll have the exact aspect you want to test.

b. Decide how to conduct your test – e.g., remotely. Define the scope of what to test (e.g., navigation) and stick to it throughout the test. When you test aspects individually, you’ll eventually build a broader view of how well your design works overall.

2) Set user tasks –

a. Prioritize the most important tasks to meet objectives (e.g., complete checkout), no more than 5 per participant. Allow a 60-minute timeframe.

b. Clearly define tasks with realistic goals .

c. Create scenarios where users can try to use the design naturally . That means you let them get to grips with it on their own rather than direct them with instructions.

3) Recruit testers – Know who your users are as a target group. Use screening questionnaires (e.g., Google Forms) to find suitable candidates. You can advertise and offer incentives. You can also find contacts through community groups, etc. If you test with only 5 users, you can still reveal 85% of core issues.

4) Facilitate/Moderate testing – Set up testing in a suitable environment. Observe and interview users. Notice issues. See if users fail to see things, go in the wrong direction, or misinterpret rules. When you record usability sessions, you can more easily count the number of times users become confused. Ask users to think aloud and tell you how they feel as they go through the test. From this, you can check whether your designer’s mental model is accurate: does what you think users can do with your design match what these test users show?

If you choose remote testing, you can moderate via Google Hangouts, etc., or use unmoderated testing. Dedicated remote-testing software lets you carry out both moderated and unmoderated tests and gives you the benefit of tools such as heatmaps.


Keep usability tests smooth by following these guidelines.

1) Assess user behavior – Use these metrics:

Quantitative – time users take on a task, success and failure rates, effort (how many clicks users take, instances of confusion, etc.)

Qualitative – users’ stress responses (facial reactions, body-language changes, squinting, etc.), subjective satisfaction (which they give through a post-test questionnaire) and perceived level of effort/difficulty

2) Create a test report – Review video footage and analyzed data. Clearly define design issues and best practices. Involve the entire team.

Overall, you should test not your design’s functionality, but users’ experience of it . Some users may be too polite to be entirely honest about problems. So, always examine all data carefully.

Learn More about Usability Testing

Take our course on usability testing.

Here’s a quick-fire method to conduct usability testing.

See some real-world examples of usability testing.

Take some helpful usability testing tips.

Questions related to Usability Testing

To conduct usability testing effectively:

Start by defining clear, objective goals and recruit representative users.

Develop realistic tasks for participants to perform and set up a controlled, neutral environment for testing.

Observe user interactions, noting difficulties and successes, and gather qualitative and quantitative data.

After testing, analyze the results to identify areas for improvement.

For a comprehensive understanding and step-by-step guidance on conducting usability testing, refer to our specialized course on Conducting Usability Testing .

Conduct usability testing early and often, from the design phase to development and beyond. Early design testing uncovers issues when they are more accessible and less costly to fix. Regular assessments throughout the project lifecycle ensure continued alignment with user needs and preferences. Usability testing is crucial for new products and when redesigning existing ones to verify improvements and discover new problem areas. Dive deeper into optimal timing and methods for usability testing in our detailed article “Usability: A part of the User Experience.”

Incorporate insights from William Hudson, CEO of Syntagm, to enhance usability testing strategies. William recommends techniques like tree testing and first-click testing for early design phases to scrutinize navigation frameworks. These methods are exceptionally suitable for isolating and evaluating specific components without visual distractions, focusing strictly on user understanding of navigation. They're advantageous for their quantitative nature, producing actionable numbers and statistics rapidly, and being applicable at any project stage. Ideal for both new and existing solutions, they help identify problem areas and assess design elements effectively.

To conduct usability testing for a mobile application:

Start by identifying the target users and creating realistic tasks for them.

Collect data on their interactions and experiences to uncover issues and areas for improvement.

For instance, consider the concept of ‘tappability’ as explained by Frank Spillers, CEO: focusing on creating task-oriented, clear, and easily tappable elements is crucial.

Employing correct affordances and signifiers, like animations, can clarify interactions and enhance user experience, avoiding user frustration and errors. Dive deeper into mobile usability testing techniques and insights by watching our insightful video with Frank Spillers.

For most usability tests, the ideal number of participants depends on your project’s scope and goals. Our video featuring William Hudson, CEO of Syntagm, emphasizes the importance of quality in choosing participants as it significantly impacts the usability test's results.

He shares insightful experiences and stresses carefully selecting and recruiting participants to ensure constructive and reliable feedback. The process requires meticulous planning and execution to identify and discard data from non-contributive participants, so that meaningful and trustworthy insights are gathered to improve the interactive solution, be it an app or a website. Remember the emphasis on participants’ attentiveness and consistency while performing tasks, to avoid compromising the results. Watch the full video for a more comprehensive understanding of participant recruitment and usability testing.

To analyze usability test results effectively, first collate the data meticulously. Next, identify patterns and recurrent issues that indicate areas needing improvement. Utilize quantitative data for measurable insights and qualitative data for understanding user behavior and experience. Prioritize findings based on their impact on user experience and the feasibility of implementation. For a deeper understanding of analysis methods and to ensure thorough interpretation, refer to our comprehensive guides on Analyzing Qualitative Data and Usability Testing . These resources provide detailed insights, aiding in systematically evaluating and optimizing user interaction and interface design.
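
As a rough illustration of the prioritization step described above, here is a minimal sketch that ranks findings by severity and frequency against the effort to fix them. The scoring scheme and the example findings are assumptions for illustration only; adapt the weights to your own team's conventions.

```python
# Hypothetical usability findings scored by severity (1-3), how many participants
# hit the issue, and an effort-to-fix estimate (1-3). The scoring scheme is one
# common convention, not something prescribed by the guides referenced above.
findings = [
    {"issue": "Users miss the 'Save' button",        "severity": 3, "hits": 7, "participants": 8, "effort": 1},
    {"issue": "Filter labels unclear",                "severity": 2, "hits": 4, "participants": 8, "effort": 2},
    {"issue": "Onboarding tour dismissed too easily", "severity": 1, "hits": 2, "participants": 8, "effort": 3},
]

def priority(f):
    # Higher severity and frequency raise priority; higher effort lowers it.
    frequency = f["hits"] / f["participants"]
    return (f["severity"] * frequency) / f["effort"]

for f in sorted(findings, key=priority, reverse=True):
    print(f"{priority(f):.2f}  {f['issue']}")
```
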

Usability testing is predominantly qualitative, focusing on understanding users' thoughts and experiences, as highlighted in our video featuring William Hudson, CEO of Syntagm. 

It enables insights into users' minds, asking why things didn't work and what's going through their heads during the testing phase. However, specific methods, like tree testing and first-click testing , present quantitative aspects, providing hard numbers and statistics on user performance. These methods can be executed at any design stage, providing actionable feedback and revealing navigation and visual design efficacy.

To conduct remote usability testing effectively, establish clear objectives, select the right tools, and recruit participants fitting your user profile. Craft tasks that mirror real-life usage and prepare concise instructions. During the test, observe users’ interactions and note their challenges and behaviors. For an in-depth understanding and guide on performing unmoderated remote usability testing, refer to our comprehensive article, Unmoderated Remote Usability Testing (URUT): Every Step You Take, We Won’t Be Watching You .

Some people use the two terms interchangeably, but User Testing and Usability Testing, while closely related, serve distinct purposes. User Testing focuses on understanding users' perceptions, values, and experiences, primarily exploring the 'why' behind users' actions. It is crucial for gaining insights into user needs, preferences, and behaviors, as elucidated by Ann Blandford, an HCI professor, in our enlightening video.

She elaborates on the significance of semi-structured interviews in capturing users' attitudes and explanations regarding their actions. Usability Testing primarily assesses users' ability to achieve their goals efficiently and complete specific tasks with satisfaction, often emphasizing the ease of interface use. Balancing both methods is pivotal for comprehensively understanding user interaction and product refinement.

Usability testing is crucial as it determines how usable your product is, ensuring it meets user expectations. It allows creators to validate designs and make informed improvements by observing real users interacting with the product. Benefits include:

Clarity and focus on user needs.

Avoiding internal bias.

Providing valuable insights to achieve successful, user-friendly designs. 

By enrolling in our Conducting Usability Testing course, you’ll gain insights drawn from the extensive experience of Frank Spillers, CEO of Experience Dynamics, and learn to develop test plans, recruit participants, and convey findings effectively.

Explore our dedicated Usability Expert Learning Path at Interaction Design Foundation to learn Usability Testing. We feature a specialized course, Conducting Usability Testing , led by Frank Spillers, CEO of Experience Dynamics. This course imparts proven methods and practical insights from Frank's extensive experience, guiding you through creating test plans, recruiting participants, moderation, and impactful reporting to refine designs based on the results. Engage with our quality learning materials and expert video lessons to become proficient in usability testing and elevate user experiences!

Literature on Usability Testing

Here’s the entire UX literature on Usability Testing by the Interaction Design Foundation, collated in one place:

Learn more about Usability Testing

Take a deep dive into Usability Testing with our course Conducting Usability Testing .

Do you know if your website or app is being used effectively? Are your users completely satisfied with the experience? What is the key feature that makes them come back? In this course, you will learn how to answer such questions—and with confidence too—as we teach you how to justify your answers with solid evidence .

Great usability is one of the key factors in keeping your users engaged and satisfied with your website or app. It is crucial that you continually undertake usability testing and treat it as a core part of your development process if you want to prevent abandonment and dissatisfaction. This is especially important given that, according to Google, 79% of users will abandon a website if the usability is poor! As a designer, you also have another vital duty—you need to take the time to step back, place the user at the center of the development process, and evaluate any underlying assumptions. It’s not the easiest thing to achieve, particularly when you’re in a product bubble, and that makes usability testing even more important. You need to ensure your users aren’t left behind!

As with most things in life, the best way to become good at usability testing is to practice! That’s why this course contains not only lessons built on evidence-based approaches, but also a practical project . This will give you the opportunity to apply what you’ve learned from internationally respected Senior Usability practitioner, Frank Spillers, and carry out your own usability tests .

By the end of the course, you’ll have hands-on experience with all stages of a usability test project— how to plan, run, analyze and report on usability tests . You can even use the work you create during the practical project to form a case study for your portfolio, to showcase your usability test skills and experience to future employers!

All open-source articles on Usability Testing

7 Great, Tried and Tested UX Research Techniques

How to Conduct a Cognitive Walkthrough


How to Conduct User Observations


Mobile Usability Research – The Important Differences from the Desktop


How to Recruit Users for Usability Studies


Best Practices for Mobile App Usability from Google


Unmoderated Remote Usability Testing (URUT) - Every Step You Take, We Won’t Be Watching You


Making Use of the Crowd – Social Proof and the User Experience


Agile Usability Engineering


Four Assumptions for Usability Evaluations





Nielsen Norman Group

Turn user goals into task scenarios for usability testing.

Marieke McCloskey · January 12, 2014


The most effective way of understanding what works and what doesn’t in an interface is to watch people use it. This is the essence of usability testing. When the right participants attempt realistic activities, you gain qualitative insights into what is causing users to have trouble. These insights help you determine how to improve the design.

Also, you can measure the percentage of tasks that users complete correctly as a way to communicate a site’s overall usability.
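
If you do report a completion percentage, small usability samples deserve a margin of error. The sketch below is a hedged example, not part of the original article: it uses the Wilson score interval, a common choice for small-sample completion rates, and the counts are illustrative.

```python
from math import sqrt

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a task completion rate, suited to small samples."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    margin = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - margin, centre + margin

# Illustrative numbers: 7 of 9 participants completed the task.
low, high = wilson_interval(successes=7, n=9)
print(f"Completion: {7/9:.0%}, 95% CI roughly {low:.0%}-{high:.0%}")
```
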

In This Article:

  • What users need to be able to do
  • Engage users with task scenarios

In order to observe participants you need to give them something to do. These assignments are frequently referred to as tasks. (During testing I like to call them “activities” to avoid making the participants feel like they’re being tested.)

Rather than simply ordering test users to "do X" with no explanation, it's better to situate the request within a short scenario that sets the stage for the action and provides a bit of explanation and context for why the user is "doing X."

Before you can write the task scenarios used in testing, you have to come up with a list of general user goals that visitors to your site (or application) may have. Ask yourself: What are the most important things that every user must be able to accomplish on the site?

For example, nngroup.com users must be able to accomplish 3 main goals:

  • Find articles on a specific topic
  • Sign up for UX Week seminars
  • Learn about our consulting services

Once you’ve figured out what the users' goals are, you need to formulate task scenarios that are appropriate for usability testing. A task scenario is the action that you ask the participant to take on the tested interface. For example, a task scenario could be:

You're planning a vacation to New York City, March 3 − March 14. You need to buy both airfare and hotel. Go to the American Airlines site and jetBlue Airlines site and see who has the best deals.

Task scenarios need to provide context so users engage with the interface and pretend to perform business or personal tasks as if they were at home or in the office.

Poorly written tasks often focus too much on forcing users to interact with a specific feature, rather than seeing if and how the user chooses to use the interface. A scenario puts the task into context and, thus, ideally motivates the participant.

The following 3 task-writing tips will improve the outcome of your usability studies.

1. Make the Task Realistic

User goal: Browse product offerings and purchase an item.
Poor task: Purchase a pair of orange Nike running shoes.
Better task: Buy a pair of shoes for less than $40.

Asking a participant to do something that he wouldn’t normally do will make him try to complete the task without really engaging with the interface. Poorly written tasks make it more difficult for participants to suspend disbelief about actually owning the task. In the example, the participant should have the freedom to compare products based on his own criteria.

Coming up with realistic tasks will depend on the participants that you recruit and on the features that you test. For example, if you test a hotel website, you need to make sure that the participants would be the ones in their family responsible for travel research and reservations.

Alternatively, you can decide to let the participants define their own tasks. For example, you could recruit users who are in the process of buying a car and let them continue their research during the session, instead of giving them a task scenario. ( Field studies are ideal for observing users in their own environment as they perform their own tasks, but field studies are more expensive and time consuming.)

2. Make the Task Actionable

User goal: Find movie and show times.
Poor task: You want to see a movie Sunday afternoon. Go to www.fandango.com and tell me where you’d click next.
Better task: Use www.fandango.com to find a movie you’d be interested in seeing on Sunday afternoon.

It’s best to ask the users to do the action , rather than asking them how they would do it. If you ask “How would you find a way to do X?” or “Tell me how you would do Y” the participant is likely to answer in words, not actions. And unfortunately, people’s self-reported data is not as accurate as when they actually use a system. Additionally, having them talk through what they would do doesn’t allow you to observe the ease or frustration that comes with using the interface.

You can tell that the task isn’t actionable enough if the participant turns to the facilitator, takes her hand off the mouse, and says something like “I would first click here, and then there would be a link to where I want to go, and I’d click on that.”

3. Avoid Giving Clues and Describing the Steps

User goal: Look up grades.
Poor task: You want to see the results of your midterm exams. Go to the website, sign in, and tell me where you would click to get your transcript.
Better task: Look up the results of your midterm exams.

Step descriptions often contain hidden clues as to how to use the interface. For example, if you tell someone to click on Benefits in the main menu, you won’t learn if that menu label is meaningful to her. These tasks bias users’ behavior and give you less useful results.

Task scenarios that include terms used in the interface also bias the users. If you’re interested in learning if people can sign up for the newsletter and your site has a large button labeled Sign up for newsletter, you should not phrase the task as “Sign up for this company's weekly newsletter.” It's better to use a task such as: “Find a way to get information on upcoming events sent to your email on a regular basis.”

Avoiding words used in the interface is not always easy or natural and can even be confusing to users, especially if you try to derive roundabout ways to describe something that already has a standard, well-known name. In that case, you may want to use the established term. Avoiding clues does not mean being vague. For example, compare the following 2 tasks:

Poor task: Make an appointment with your dentist.
Better task: Make an appointment for next Tuesday at 10am with your dentist, Dr. Petersen.

You might think that this second task violates the guideline for tasks to be realistic if the user's dentist isn't really Dr. Petersen. However, this is one of those cases in which users are very good at suspending disbelief and proceeding to make the appointment just as they would with a differently-named dentist. You might need to have the user pretend to be seeing Dr. Petersen if you're testing a paper prototype or other early prototype design that includes only a few dentists.

If the task scenario is too vague, the participant will likely ask you for more information or will want to confirm that she is on the right path. Provide the participant with all the information that she needs to complete a task, without telling her where to click. During a usability test, mimic the real world as much as possible. Recruit representative users and ensure that each task scenario:

  • is realistic and typical for how people actually use the system when they are on their own time, doing their own activities
  • encourages users to interact with the interface
  • doesn’t give away the answer.

Related Courses

Usability Testing

Learn how to plan, conduct, and analyze your own studies, whether in person or remote

Remote User Research

Collect insights without leaving your desk

Related Topics

  • User Testing

Learn More:


Data vs. Findings vs. Insights

Sara Ramaswamy · 3 min


Usability Test Facilitation: 6 Mistakes to Avoid

Kate Moran and Maria Rosala · 6 min


Help Users Think Aloud

Kate Kaplan · 4 min

Related Articles:

Qualitative Usability Testing: Study Guide

Kate Moran · 5 min

Project Management for User Research: The Plan

Susan Farrell · 7 min

Employees as Usability-Test Participants

Angie Li · 5 min

Team Members Behaving Badly During Usability Tests

Hoa Loranger · 7 min

Affinity Diagramming for Collaboratively Sorting UX Findings and Design Ideas

Kara Pernice · 9 min

Avoid Leading Questions to Get Better Insights from Participants

Amy Schade · 4 min

12 Usability Testing Templates: Checklist & Examples



Usability testing can be tough. 

That’s why it’s important to pick the right tools and templates for your specific use case. 

In this post, we are giving away 12 free usability testing templates for various use cases that you can easily copy or download to implement immediately with your team. 

Find the right usability testing template for your use cases in this list: 

  • Usability Test Plan Template
  • Usability Checklist Template
  • Usability Task Template
  • Prototype Usability Template
  • New Product/Feature Usability Template
  • Sign-up Usability Template
  • Checkout Process Usability Template
  • Content Navigation Usability Template
  • Accessibility Testing Template
  • Usability Survey Template
  • Helpdesk Usability Template
  • Homepage Usability Template

In usability testing, templates are useful for:

  • Consistency: Re-using the same template or usability testing script for different test cases means that every aspect of usability testing is consistently covered. This way, your team never misses essential details and actions that need to be taken.
  • Efficiency: With a clear, simply structured format such as a template, you can customize each one for specific use cases. And, in case you don’t have time for that, we’ve got 12 different usability testing templates for the most common use cases that dev agencies and in-house teams test for regularly.
  • Communication: Between your team and any testers, templates ensure everyone is on the same page, with a straightforward process, methodology, and goals.

Usability testing is a crucial step of software development.

Based on the outcome of the usability testing report, your product or website’s functionality might still need several iterations.

All of this is made easier when using the right tools to collect quantitative and qualitative data from every user during testing.

Tools with features like session replay and automatic collection of technical data—like Marker.io —are useful for usability testing. 

General Usability Testing Templates

We’ve included two categories of free templates for usability testing and collecting user feedback in this post: general and use-case specific. 

For more general use cases, we’ve included:

  • A usability test plan template (standard operating procedure for all usability testing);
  • A usability checklist template (a simple-form version of the above you can check off during a project);
  • And a usability task template (that can be adapted and customized for specific use cases).

We’ve also included task examples for common use cases, such as checkout process, website navigation, and more!

Either way, we have you covered.

Let’s dive in.


What’s this template for?

A usability test plan template is the working document or standard operating procedure (SOP) that is a single source of truth for your entire usability testing process. 

What’s included? 

Within this template, you need to include: 

  • Goals, scope, business requirements, and key performance indicators (KPIs). Include everything, such as why we are building a new website, app, or product features. What user/client pain point are we solving? What are the technical objectives?
  • Who’s on the usability test team? Include everyone involved and relevant information: internal QA team members, testers, demographics, experience level, and skillset—the project's who, what, why, and when.
  • Testing environment: Hardware, operating systems, browsers, and devices. List them here, including how you will establish a suitable testing environment and tools to monitor testers and collect feedback.
  • Usability testing project milestones and deliverables. Outline every phase of the usability testing plan here.
  • User profiles and personas. Who are our end-users? Fill in detailed demographic information to determine who should be testing this website, software product, or app.
  • How to write test tasks. Clear instructions on how to write a usability testing script. Include everything that should go into it.
  • Recording test results. How are we recording usability test results? What tool(s) are we using? Document this here.
  • Implementation of usability testing results. And finally, the process for implementing any bug fixes and UX changes users notice, including informing them and any clients or stakeholders once the usability testing phase is complete.

How to use this usability test plan template?

You can download the Usability Test Plan Template here . 

This document is 100% editable—simply: 

  • Make a copy;
  • Fill in the blanks with your details;
  • Save and share internally!

Need a usability testing tool? 

Try Marker.io free for 15 days : Collect visual feedback, technical data, bug reports, and loads more with our powerful, easy-to-use usability testing tool. 

Pricing: From $39/mo.

Usability Testing Checklist


What’s this template for? 

This template is a general usability checklist. As every product, SaaS tool, app, and website is different, it will need to be adapted for your project(s). 

As a starting point, you can include the following in a usability checklist: 

  • Navigation: Is the app or software navigation and UX intuitive and easy to use? Can testers find their way around the app without difficulty?
  • Readability: Is the text/copy legible and easily understood? Are fonts and colors consistent across all pages?    
  • Accessibility: Is the product or app accessible to users with disabilities, such as visual impairments?  Are we adhering to web accessibility guidelines and best practices? 
  • Error handling and bugs: Are error messages, such as 404 links, correctly displayed, and are they clear and helpful for users?
  • Performance: Does the software/product run as expected across different devices, operating systems, or browsers? If so, does it perform well across these testing environments?
  • Load times and speed: Are page load times as expected across every page of the app or product?
  • User control: Does the product give users control over their interactions? Does it feel responsive? Can they take the actions you want them to take?
  • User flow and UX: Is the user flow logical and intuitive? Can users complete tasks within a reasonable timeframe? When do they experience frustration?
  • Self-help and documentation: Can users easily find self-help documents and content, and is it accessible and understandable for non-technical users?

How to use this usability checklist template?

You can download the Usability Checklist Template here . 

This document is 100% editable—simply make a copy, fill in the blanks with your details, save, and share internally!

Or you can copy and paste the checklist above into a Google or Notion Doc, and then use it whenever it’s needed. 


A usability task template is also known as a usability testing script. It’s designed to evaluate the product’s usability and observe user interactions and decision-making processes.

A usability task template always includes:

  • Task name, product, testing environment, and the user performing the test tasks. 
  • Instructions: What are the relevant tasks, and how should users perform them?
  • Success criteria: What’s a pass/fail mark for each task? 
  • Space for notes. 

To achieve the results you want from usability testing, it’s mission-critical that:

  • Tasks should reflect the most common user scenarios/actions . Out-of-the-ordinary scenarios shouldn’t be included because the aim is to find out how the average user navigates and performs actions on your product or website.
  • Instructions should be clear, concise, and as simple as possible —no jargon or niche language. In particular, if you’re dealing with a non-technical audience. 
  • Instructions should be unbiased. Otherwise, you could lead the testers to the wrong conclusions/actions, or lead them too easily to perform the tasks as expected.
  • Include a range of tasks of varying degrees of difficulty (e.g., from “Login” to “Create a new filter for [X] on your dashboard”). This way, you get the widest range of data possible.
  • Tasks should be replicable . Make your tasks more general and not specific to a certain type of user—again, for a wider range of data.
  • Measurable success criteria. What’s a pass/fail, and ultimately, how does this test help us improve our website, app, or software? (A minimal structured example of such a task record follows this list.)
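
As referenced above, here is a minimal sketch of how one entry in a usability testing script could be captured as structured data so that results stay consistent and comparable across tasks. All field names and values are illustrative assumptions, not part of the downloadable template.

```python
from dataclasses import dataclass, field

@dataclass
class UsabilityTask:
    """One entry in a usability testing script; field names are illustrative."""
    name: str
    product: str
    environment: str          # e.g. "Chrome on Windows 11"
    instructions: str
    success_criteria: str     # the pass/fail mark for the task
    notes: list[str] = field(default_factory=list)

task = UsabilityTask(
    name="Create a dashboard filter",
    product="Example web app",
    environment="Chrome on Windows 11",
    instructions="Starting from the dashboard, create a filter that shows only last month's orders.",
    success_criteria="Filter created and applied without facilitator help, within 3 minutes.",
)
task.notes.append("Hesitated over the filter icon for ~10 seconds.")
print(task)
```
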

How to use this usability task template?

It’s easy. You can:

Download the Usability Task Template here . 

Use Case-specific Templates

You can use these premade templates for specific user experience (UX) workflows, such as signing up to a web app, onboarding, product search, accessibility, and loads more. 

We’ve made each template as easy to customize as possible.

Prototype Usability Testing Template


Use this template to run usability testing on your early-stage product or app prototypes. Source the insights you need from alpha and beta testers to see whether they’re able to navigate the user flow and UX and perform the tasks you expect.

User research at this stage will influence the product development roadmap, feature and functionality iterations, and even the go-to-market strategy. 

For almost any product, you need to test for these user experience expectations: 

  • Navigation, UX, and user flow: Is it easy enough for users to navigate? 
  • User experience across different devices, browsers, and operating systems; 
  • Accessibility and readability of text and copy within the product or app; 
  • Can users perform the expected tasks that align with business goals for the product or app?
  • Do users have enough control within the app? 
  • Can users easily find and use self-help documentation or contact customer support? 

How to use this prototype usability template?

It’s simple to get started with this: 

Download our free Prototype Usability Testing Template here. 

New Feature Usability Testing Template


This template helps you find out whether your product would benefit from new features and functionality.

Whether you’ve already developed new features or have an MVP (minimum viable product) version of a new feature, it can be helpful to source user feedback before committing to the development phase. 

You can even use this template to validate user feedback and usability with simple wireframes of proposed new features in the product roadmap. 

As a minimum, you can ask users during the testing phase: 

  • Do you understand what pain point we are trying to solve with this new product or feature? 
  • Can you navigate the UX and user flow easily enough? 
  • Would you use this new product or feature? 
  • Can users perform the expected tasks that align with business goals for the product or app?

How to use this new product/feature usability testing template?

Get started by downloading our free New Product/Feature Usability Testing Template here. 

Need a usability testing tool?

Collect visual feedback, technical data, bug reports, and loads more with our powerful, easy-to-use usability testing tool— try Marker.io for free today .

Sign Up Usability Testing Template


Sign-up flows are crucial—and testing the most basic feature of your app can uncover friction points you may not have thought of during development. 

You want to make sure your users understand and can fill out fields easily, see where they get stuck, and find areas of improvement for your sign-up process.

What’s included?  

For this template, there are simple questions that need answering: 

  • Can you sign up for our product or app?
  • How easy did you find the sign-up process? Did you get frustrated at any point?
  • Are there any fields you don’t find important to fill out? Why?
  • Does it work across different devices, browsers, or operating systems? 

How to use our free web app sign-up usability testing template?

Download our Web App Sign-up Usability Testing Template here. 

Checkout Process Usability Testing Template


For any eCommerce website or app, the checkout process is crucial. Online stores live or die according to whether users complete a purchase or abandon the cart at checkout. 

Anything you can do to improve the checkout conversion rate will increase top-line revenue, sales, and, ultimately, profits. 

A checkout process usability testing template tests the UX and user flow of an eCommerce site's checkout. It includes:

  • A series of checkout tasks that align with the different ways customers can buy products (e.g., card, PayPal, or others).
  • Different checkout user flows, including Guest or Logged-in users. 
  • Pass/fail tasks to identify pain points or anything that causes friction when users are going through checkout. 
  • Quantitative, data-driven feedback and qualitative questions about how users feel about the checkout user flow. 

How to use this checkout process usability testing template?

All you need to do is download our Checkout Process Usability Testing Template here. 

Try Marker.io free for 15 days : Collect visual feedback, technical data, bug reports, and loads more with our powerful, easy-to-use usability testing tool.

Content Navigation Usability Testing Template


How easy is your website or in-app content to navigate? That’s the question you can answer with the right tools and our usability testing template for content navigation. 

  • Simple navigational tasks and questions to identify whether users can navigate around the website, app, or product easily enough. 
  • Questions about whether users have found what you’ve asked them to look for.
  • Navigational tasks aligned with different user personas and for numerous testing environments. 

How to use this content navigation usability testing template?

Download our Content Navigation Usability Testing Template here. 

Accessibility Testing Template

Web accessibility is a way of ensuring anyone with a disability, such as auditory, cognitive, neurological, physical, speech, or visual, can access and use any website and app as easily as someone without a disability. 

Find out more about what this means here: W3C Website Accessibility Initiative (WAI) . 

W3C sets the gold standard for global website accessibility initiatives.

As the WAI states: “Accessibility is essential for developers and organizations that want to create high-quality websites and web tools, and not exclude people from using their products and services.” 

Making websites and apps accessible to everyone is a smart business move and, in many countries, is a legal requirement. 

What’s included?

As this is a more specific use case, you might need to partner with a provider who can have your website or app tested by users with disabilities, temporary disabilities, and situational limitations. 

In general, you’ll be looking to test stuff like:

  • Website is usable while zoomed in
  • Links are clearly recognizable, clear color contrast across the entire site
  • Logical structure

At the same time, we’ve included a useful template you can use, following the same outline as the Usability Checklist Template, adapted for the disabilities outlined above. 

How to use this accessibility testing template?

Get started by downloading our free Accessibility Testing Template here. 

Usability Survey Template

A usability survey asks users for qualitative feedback during user testing. It’s helpful for understanding how they rate your product compared to similar products, such as competitors’.

Getting user opinions can help shape the product roadmap, functionality, features, and even the go-to-market strategy. 

Include questions such as: 

  • How positive or negative was the experience of using our product or app? (A rating scale, e.g., 1-5 or 1-10, is useful for gauging opinions.)
  • How likely are you to use our product or app again?
  • Would you recommend us to a friend?

How to use our free usability survey template?

Download our Usability Survey Template here. 

Help and Support Docs Usability Template


How easily can users find self-help and support documents and content? 

The last thing you want is for users to churn because they don’t understand how to seek the help they need. 

Included in this template are simple tasks that test whether users can find self-help and support documents and content. It includes questions such as: 

  • What problems have you encountered? 
  • Did you find the right self-help support easily enough? 
  • Were the self-help documents and content understandable? 
  • Could you follow the steps in the self-help section to resolve your problem? 
  • If not, what else could we include to make this process easier? 

How to use this help & support docs usability template?

Download our Help & Support Docs Usability Template here. 

Are you using unmoderated testers? Get the feedback you need, including bug reports, technical data, and loads more— try Marker.io for free today .

Website Homepage Usability Template


This is a template for finding out how users feel about a website homepage, including whether it’s visually appealing and easy to navigate. 

It’s useful to include a series of questions, such as: 

  • Based on our homepage, do you understand what our company does? 
  • Did you find the homepage easy to navigate? 
  • Can you find everything we asked you to?

How to use this website homepage usability template?

Download our Website Homepage Usability Template here. 

Frequently Asked Questions

What is usability testing?

You’ve finished building an amazing new website, app, or software solution. 

What's next?

It needs testing. That’s where usability testing comes into the picture. You can do this internally, as part of your usual QA testing , and many web dev agencies and in-house teams do that. 

But you also need to see how real users navigate your website or use your product. 

Product or project managers can give testers a checklist of tasks to see whether their interactions align with expected outcomes. 

Usability testing can be conducted remotely, in-person, crowd-sourced, moderated, or unmoderated, and there are numerous tools, checklists, and templates you can use for usability testing. 

What are the benefits of usability testing?

The benefit of usability testing is that you can see, in real-time, whether users can complete tasks on a new website, app, or software product. 

Usability testing gives web dev agencies and QA teams crucial feedback to improve their UX.

This ensures the product is easy to use and navigate, accessible, and free from bugs.

How do you structure a usability test?

Before implementing any usability test, you need to be clear on the specific goals you want to achieve.

Once those are clear, there are dozens of templates you can use (like those in this article) and tools, such as Marker.io, for usability testing. 

And then, follow this simple usability testing process: 

  • Plan the test and goals. 
  • Provide a timescale for the testing phase. 
  • Source testers (such as those you can hire through testing tools and platforms).
  • Prepare the usability testing script, or questionnaire, based on any of the free templates and checklists above.
  • Invite testers to try out (specific areas of) the product or website, following the instructions in the testing questionnaire. 
  • Get quantitative and qualitative feedback from the testers via the questionnaire and any usability testing tools you’ve deployed. 
  • Implement the relevant improvements, and inform testers that their feedback was appreciated. 
  • Let the client or internal customers know that the usability testing phase is complete and relevant fixes/changes have been made. 

As you can see, every usability testing template is different. 

For basic website testing, we recommend the usability checklist template. It covers everything you need from your testers. 

For checking how accessible a website or app is, you’d need the accessibility testing template, and for testing eCommerce website checkouts, you can use the checkout process usability testing template. 

We hope you found this list of usability checklists and templates helpful!

Let us know if we missed something!




How UX Researchers Can 4X Their Usability Test Response Rates With Userpilot


Usability testing is an invaluable resource for UX researchers…but only if you’re able to recruit participants in the first place. This is a problem that our own UX researcher at Userpilot, Lisa, faced when she tried recruiting participants the traditional way.

But when things don’t work out, we are reminded of the importance of eating your own dog food, aka using your own product to solve the problem you set out to address for your customers.

Learn how Lisa (and you) can quadruple your UX research response rates with Userpilot.

  • Challenge : Lisa, our UX researcher, found it difficult to recruit participants for usability tests via email since B2B users are busy individuals with cluttered inboxes. This challenge was critical because it could lead to inaccurate product decisions, delays in development , and missed opportunities in a competitive industry.
  • Solution : Lisa realized she needed a channel with less distraction and more engagement, leading her to choose our own product for inviting users. Using Userpilot’s survey functionality, Lisa created an interview invite survey and triggered it to the right user segment inside Userpilot.
  • Results : In only a few days, Lisa was able to recruit 19 participants when she only expected to speak to 5 users. Hence, achieving four times better results than expected by inviting users inside the app.
  • If you, too, are looking to streamline your user research process, there is no tool better than Userpilot. It enables you to collect customer feedback in-app, analyze user behavior using multiple reports, and conduct product experiments. Book a demo to learn more.


Challenge: Recruiting usability test participants

After leaving Microsoft for Userpilot, Lisa set out to conduct usability test interviews to learn how users utilized our popular customer segmentation feature. She began to recruit participants by emailing them, but much to Lisa’s dismay, all she got was crickets.

[Image: usability test recruitment email]

This was not a problem at Microsoft because when you have millions of B2C users, it’s far easier to recruit participants with a $100 voucher incentive. There are even websites created specifically for enlisting user interview participants, cutting researchers’ effort in half.

[Image: tool for recruiting interview participants]

However, as Lisa experienced, this was not the case in the B2B space. B2B users, especially on the executive level, are busy individuals with even busier inboxes filled with spam, cold emails , meeting invites, and whatnot.

This left Lisa with a challenge because when you’re working in a competitive industry and a high-growth startup, you can’t afford to lose time getting users on board with your UX research efforts.

What’s the business impact of this challenge for SaaS teams?

Being unable to recruit participants for usability testing is a problem no UX researcher would want to face.

Usability tests are used to validate design decisions, monitor ease of use, and identify areas for improvement . But if you’re unable to gather a decent pool of participants, you bear the risk of your decisions being inaccurate, which may lead to costly changes post-launch.

But even before that, difficulty in enlisting participants can lead to delays in conducting usability tests, which can prolong product development timelines. Not only will this impact time-to-market, but it will also result in missed opportunities if your competitors beat you to the finish line.

Solution: Finding a different channel to recruit test participants

This delay in recruiting participants made one thing clear – there was a need for a better channel, one with fewer distractions and greater engagement. For Lisa, the solution became evident: leveraging our own product, i.e., Userpilot , as the channel.

Using our product to recruit interview participants was a logical choice. Unlike other channels with spam messages or advertisements, our app offered a focused environment where participants’ attention wasn’t divided. And, since our product was already integral to their work, it served as a familiar channel for engagement .

Lisa then used Userpilot’s in-app survey functionality to create a close-ended survey in mere minutes to recruit participants.

[Image: user testing invite survey]

The best part? Lisa was able to send this interview invite to the right user segment – those users who had already previously used the segmentation feature.

She didn’t have to use another tool for this purpose. This was done no-code, right inside Userpilot .

[Image: survey segmentation in Userpilot]

Results: 4X more response rates for in-app invitations

So, was Lisa’s decision to recruit participants in-app fruitful?

In just a few days, Lisa was able to recruit 19 participants when she only wanted to speak to 5 people. She saw 4x better results than she expected by inviting users inside the app (using Userpilot, of course).

[Image: results of the usability test survey]

This high response rate is not just limited to interview invite surveys. The team also launched a feature feedback survey and got 55 quality responses in just 2 days.

[Image: feature feedback survey results]

How to perform user research with Userpilot?

If you want to conduct user research seamlessly like Lisa, Userpilot is the solution you need. Here are a few features that will streamline your UX research efforts.

Collect customer feedback through surveys

We’ve already touched upon Userpilot’s survey functionality but that was just the tip of the iceberg. There are several survey templates you can customize for different purposes: market research, product-market fit , NPS – you name it!

You can trigger these surveys for relevant segments and choose where and when to display them. If you have a diverse audience, you can localize these surveys by translating them into different languages.

And if you still want to send email surveys (even though we proved why you shouldn’t), you can integrate Userpilot with HubSpot and Salesforce to send emails to highly targeted audiences.

Analyze customer behavior data over time

Analyzing user behavior can provide valuable insights, especially if some users struggle to articulate their feedback effectively.

Here are a few analytics features Userpilot offers that you can use and for which purposes:

  • Funnel analysis – to identify conversion and friction points across the customer journey.
  • Trend analysis – to monitor how user behavior patterns vary over time.
  • Cohort retention analysis – to track retention rates of similar user cohorts .
  • Path analysis – to discover the shortest path to value .
  • Feature heatmaps – to identify features of high and low engagement.
  • Custom event tracking – to track multiple user actions as if they were one.
  • Analytics dashboards – to monitor important metrics and reports in visual dashboards .

[Image: trend analysis in Userpilot]

Conduct A/B testing for product experiments

With A/B testing in Userpilot, you can ascertain how product changes impact user behavior and preferences. You can compare different variants of in-app flows to determine which version performs better and for which customer segment.

You can also conduct multivariate tests to compare more than two in-app flows.

[Image: A/B testing in Userpilot]

This example showed us that there is no better channel for collecting customer feedback than in-app, as it offers a less cluttered environment and enables you to reach your most active users.

If you want to streamline your feedback efforts in-app, there is no better tool than Userpilot. Book a demo to see it in action.


Usability Case Study: Wireframe Usability Testing

30 November, 2010

Think it’s too costly to integrate usability testing in the early stages of web development? Think it requires fully designed prototypes? Think again. Usability testing in the early stages of web development can be both efficient and cost-effective. With wireframes you can easily ensure that you’ve streamlined the user experience before even completing your site.

The Media Department at a university in Sweden recently used Loop11 to run usability testing on wireframes. Two different prototypes of a tourism website were tested. By running this quick investigation, the researcher discovered how users would naturally navigate and interact with the site before the design phase commenced. The testing yielded some interesting results.

In the early stages of the project—just after drafting the outline and organisation of a tourist website for a major city in Sweden—the team wanted feedback from users. Using two prototypes made of very low-fidelity wireframes, the team saw how the user experience differed across different versions of the same site.

[Wireframe images: Prototype 1 and Prototype 2]

These two prototypes were presented to 60 participants in total.

All participants completed a series of six tasks: find an events list, locate city maps, learn more about language courses, and other actions usually performed by visitors to a tourism website. The project recorded whether tasks were completed successfully and how long each task took.

At first blush, with only minor tweaks to layout and information architecture, the two prototypes might not seem distinct enough to yield significant test results. But, as we know, even the smallest change can make a huge difference to the overall web experience.

The Results

On the whole, Prototype 1 performed best. Prototype 1 demonstrated a task completion rate of 58%, while only 51% of the tasks were completed successfully on Prototype 2. Looking closely at each prototype, however, there are some nuances.

Four tasks on Prototype 1 benefited from higher rates of task completion, but most of those tasks actually took significantly longer to complete on Prototype 1. On the other hand, Prototype 2 had only two tasks with higher rates of task completion. And overall, there was only one task on Prototype 1 that was completed more quickly than on the second prototype. Looking for student accommodation took a full 24 seconds longer on Prototype 2.

The Interpretation

Generally speaking, the first prototype appears to be more usable, but it does take users longer on Prototype 1 to reach their destination. So instead of using Prototype 1 wholesale as the ultimate guiding draft for the final site, the researchers can take a closer look at their results and their website.

The researchers might ask why, though many tasks were easier to complete on Prototype 1, they took longer to complete. To do this, they might take a critical look at layout, link naming, or site organisation on Prototype 1. Or, since both prototypes demonstrated better completion rates on a few tasks, the team might explore ways to combine the best of both into one site.

By using the results, this team can use the newly gained insight to revisit fundamental decisions on web design.

The Meaning of It All

The project demonstrates how you can gain significant insight into usability with only the barest of wireframes.

It allows teams to refine page layout, navigation paths, and information architecture without investing great funds in fully designed prototypes. Easy and quick, a wireframe usability test helps flesh out foundational details before designers and developers commit to creating a polished, finalised site. Wireframe usability tests—despite their simplicity—can help teams seriously question, rethink, and fine-tune a site’s overall experience.
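
For readers who want to sanity-check a gap like 58% vs. 51%, the sketch below runs a quick two-proportion z-test. The per-prototype attempt counts are an assumption (the case study reports 60 participants and six tasks but not the exact split between prototypes), so treat the output as illustrative only.

```python
from math import sqrt, erf

def two_proportion_z(success_a: int, n_a: int, success_b: int, n_b: int) -> tuple[float, float]:
    """Two-proportion z-test; returns (z statistic, two-sided p-value)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Assumed split: 30 participants x 6 tasks per prototype (not stated in the case study).
z, p = two_proportion_z(success_a=round(0.58 * 180), n_a=180,
                        success_b=round(0.51 * 180), n_b=180)
print(f"z = {z:.2f}, p = {p:.2f}")
```

With these assumed counts the difference is not statistically conclusive on its own, which is one more reason to pair completion rates with the per-task timings and qualitative observations discussed above.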


Artificial intelligence and medical education: application in classroom instruction and student assessment using a pharmacology & therapeutics case study

Kannan Sridharan & Reginald P. Sequeira

BMC Medical Education, volume 24, Article number: 431 (2024). Open access. Published: 22 April 2024.

Artificial intelligence (AI) tools are designed to create or generate content from their trained parameters using an online conversational interface. AI has opened new avenues in redefining the role boundaries of teachers and learners and has the potential to impact the teaching-learning process.

In this descriptive, proof-of-concept, cross-sectional study we explored the application of three generative AI tools to the theme of drug treatment of hypertension to generate: (1) specific learning outcomes (SLOs); (2) test items (A-type and case-cluster MCQs, SAQs, and OSPEs); and (3) test standard-setting parameters for medical students.

Analysis of AI-generated output showed profound homology but divergence in quality and responsiveness to refining search queries. The SLOs identified key domains of antihypertensive pharmacology and therapeutics relevant to stages of the medical program, stated with appropriate action verbs as per Bloom's taxonomy. Test items often had clinical vignettes aligned with the key domain stated in search queries. Some A-type MCQs had construction defects, multiple correct answers, and dubious appropriateness to the learner's stage. ChatGPT generated explanations for test items, enhancing their usefulness for supporting self-study by learners. Integrated case-cluster items had focused clinical case description vignettes, integration across disciplines, and targeted higher levels of competencies. The response of AI tools on standard-setting varied. Individual questions for each SAQ clinical scenario were mostly open-ended. The AI-generated OSPE test items were appropriate for the learner's stage and identified relevant pharmacotherapeutic issues. The model answers supplied for both SAQs and OSPEs can aid course instructors in planning classroom lessons, identifying suitable instructional methods, and establishing rubrics for grading, and can serve learners as a study guide. Key lessons learnt for improving AI-generated test item quality are outlined.

Conclusions

AI tools are useful adjuncts to plan instructional methods, identify themes for test blueprinting, generate test items, and guide test standard-setting appropriate to learners’ stage in the medical program. However, experts need to review the content validity of AI-generated output. We expect AIs to influence the medical education landscape to empower learners, and to align competencies with curriculum implementation. AI literacy is an essential competency for health professionals.


Background

Artificial intelligence (AI) has great potential to revolutionize the field of medical education from curricular conception to assessment [ 1 ]. AIs used in medical education are mostly generative AI large language models that were developed and validated based on billions to trillions of parameters [ 2 ]. AIs hold promise for incorporation into history-taking, assessment, diagnosis, and management of various disorders [ 3 ]. While applications of AIs in undergraduate medical training are being explored, huge ethical challenges remain in terms of data collection, maintaining anonymity, consent, and ownership of the provided data [ 4 ]. AIs hold a promising role amongst learners because they can deliver a personalized learning experience by tracking their progress and providing real-time feedback, thereby enhancing their understanding in the areas they find difficult [ 5 ]. Indeed, a recent survey has shown that medical students have expressed interest in acquiring competencies related to the use of AIs in healthcare during their undergraduate medical training [ 6 ].

Pharmacology and Therapeutics (P&T) is a core discipline embedded in the undergraduate medical curriculum, mostly in the pre-clerkship phase. However, the application of therapeutic principles forms one of the key learning objectives during the clerkship phase of the undergraduate medical career. Student assessment in P&T relies on test items such as multiple-choice questions (MCQs), integrated case cluster questions, short answer questions (SAQs), and the objective structured practical examination (OSPE). It has been argued that AIs possess the ability to communicate an idea more creatively than humans [ 7 ]. With access to vast training datasets, AI platforms hold promise for playing a crucial role in the conception of test items for any discipline in the undergraduate medical curriculum. Additionally, AIs may provide an optimized curriculum for a program, course, or topic addressing multidimensional problems [ 8 ], although robust evidence for this claim is lacking.

The existing literature has evaluated the knowledge, attitudes, and perceptions around adopting AI in medical education, and integrating AIs into medical education is a pressing need across all health professional education. However, the academic medical fraternity is facing challenges in incorporating AIs into the medical curriculum due to factors such as inadequate grounding in data analytics, a lack of high-quality evidence favoring the utility of AIs in medical education, and a lack of funding [ 9 ]. Open-access AI platforms are available free to users without any restrictions. Hence, as a proof of concept, we chose to explore the utility of three AI platforms to identify specific learning objectives (SLOs) related to the pharmacology of hypertension management for medical students at different stages of their medical training.

Study design and ethics

The present study is an observational, cross-sectional study conducted in the Department of Pharmacology & Therapeutics, College of Medicine and Medical Sciences, Arabian Gulf University, Kingdom of Bahrain, between April and August 2023. Ethics committee approval was not sought because the study involved no interaction with humans and no collection of personal data.

Study procedure

We conducted the present study in May-June 2023 with the Poe© chatbot interface created by Quora© that provides access to the following three AI platforms:

Sage Poe [ 10 ]: A generative AI search engine developed by Anthropic © that conceives a response based on the written input provided. Quora has renamed Sage Poe as Assistant © from July 2023 onwards.

Claude-Instant [ 11 ]: A retrieval-based AI search engine developed by Anthropic © that collates a response based on pre-written responses amongst the existing databases.

ChatGPT version 3.5 [ 12 ]: A generative architecture-based AI search engine developed by OpenAI © trained on large and diverse datasets.

We queried the chatbots to generate SLOs, A-type MCQs, integrated case cluster MCQs, integrated SAQs, and OSPE test items in the domain of systemic hypertension related to the P&T discipline. Separate prompts were used to generate outputs for pre-clerkship (preclinical) phase students and for students at the time of graduation (before starting residency programs). Additionally, we evaluated the ability of these AI platforms to estimate the proportion of students correctly answering these test items. We used the following queries for each of these objectives (a minimal programmatic sketch of how such prompts could be batched follows the query list below):

Specific learning objectives

Can you generate specific learning objectives in the pharmacology discipline relevant to undergraduate medical students during their pre-clerkship phase related to anti-hypertensive drugs?

Can you generate specific learning objectives in the pharmacology discipline relevant to undergraduate medical students at the time of graduation related to anti-hypertensive drugs?

A-type MCQs

In the initial query used for A-type of item, we specified the domains (such as the mechanism of action, pharmacokinetics, adverse reactions, and indications) so that a sample of test items generated without any theme-related clutter, shown below:

Write 20 single best answer MCQs with 5 choices related to anti-hypertensive drugs for undergraduate medical students during the pre-clerkship phase of which 5 MCQs should be related to mechanism of action, 5 MCQs related to pharmacokinetics, 5 MCQs related to adverse reactions, and 5 MCQs should be related to indications.

The MCQs generated with the above search query were not based on clinical vignettes. We queried again to generate MCQs using clinical vignettes specifically because most medical schools have adopted problem-based learning (PBL) in their medical curriculum.

Write 20 single best answer MCQs with 5 choices related to anti-hypertensive drugs for undergraduate medical students during the pre-clerkship phase using a clinical vignette for each MCQ of which 5 MCQs should be related to the mechanism of action, 5 MCQs related to pharmacokinetics, 5 MCQs related to adverse reactions, and 5 MCQs should be related to indications.

We attempted to explore whether AI platforms can provide useful guidance on standard-setting. Hence, we used the following search query.

Can you do a simulation with 100 undergraduate medical students to take the above questions and let me know what percentage of students got each MCQ correct?

Integrated case cluster MCQs

Write 20 integrated case cluster MCQs with 2 questions in each cluster with 5 choices for undergraduate medical students during the pre-clerkship phase integrating pharmacology and physiology related to systemic hypertension with a case vignette.

Write 20 integrated case cluster MCQs with 2 questions in each cluster with 5 choices for undergraduate medical students during the pre-clerkship phase integrating pharmacology and physiology related to systemic hypertension with a case vignette. Please do not include ‘none of the above’ as the choice. (This modified search query was used because test items with ‘None of the above’ option were generated with the previous search query).

Write 20 integrated case cluster MCQs with 2 questions in each cluster with 5 choices for undergraduate medical students at the time of graduation integrating pharmacology and physiology related to systemic hypertension with a case vignette.

Integrated short answer questions

Write a short answer question scenario with difficult questions based on the theme of a newly diagnosed hypertensive patient for undergraduate medical students with the main objectives related to the physiology of blood pressure regulation, risk factors for systemic hypertension, pathophysiology of systemic hypertension, pathological changes in the systemic blood vessels in hypertension, pharmacological management, and non-pharmacological treatment of systemic hypertension.

Write a short answer question scenario with moderately difficult questions based on the theme of a newly diagnosed hypertensive patient for undergraduate medical students with the main objectives related to the physiology of blood pressure regulation, risk factors for systemic hypertension, pathophysiology of systemic hypertension, pathological changes in the systemic blood vessels in hypertension, pharmacological management, and non-pharmacological treatment of systemic hypertension.

Write a short answer question scenario with questions based on the theme of a newly diagnosed hypertensive patient for undergraduate medical students at the time of graduation with the main objectives related to the physiology of blood pressure regulation, risk factors for systemic hypertension, pathophysiology of systemic hypertension, pathological changes in the systemic blood vessels in hypertension, pharmacological management, and non-pharmacological treatment of systemic hypertension.

OSPE test items

Can you generate 5 OSPE pharmacology and therapeutics prescription writing exercises for the assessment of undergraduate medical students at the time of graduation related to anti-hypertensive drugs?

Can you generate 5 OSPE pharmacology and therapeutics prescription writing exercises containing appropriate instructions for the patients for the assessment of undergraduate medical students during their pre-clerkship phase related to anti-hypertensive drugs?

Can you generate 5 OSPE pharmacology and therapeutics prescription writing exercises containing appropriate instructions for the patients for the assessment of undergraduate medical students at the time of graduation related to anti-hypertensive drugs?
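The study itself used the Poe chat interface, so each of the prompts above was entered interactively rather than through code. For readers who want to batch similar prompts, the sketch below shows one possible way to do so using the OpenAI Python client as a stand-in for ChatGPT; the model name, the batching approach, and the API usage are assumptions for illustration and are not part of the study's method.

```python
# Illustrative sketch only: the study used the Poe web interface manually, not an API.
# This shows one way to batch similar prompts against ChatGPT via the OpenAI Python
# client (pip install openai; an OPENAI_API_KEY environment variable is assumed).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompts = [
    "Can you generate specific learning objectives in the pharmacology discipline "
    "relevant to undergraduate medical students during their pre-clerkship phase "
    "related to anti-hypertensive drugs?",
    "Write 20 single best answer MCQs with 5 choices related to anti-hypertensive "
    "drugs for undergraduate medical students during the pre-clerkship phase.",
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed stand-in for "ChatGPT version 3.5"
        messages=[{"role": "user", "content": prompt}],
    )
    # Each output would still need expert review for content validity, as the paper stresses.
    print(response.choices[0].message.content)
```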

Both authors independently evaluated the AI-generated outputs, and a consensus was reached. We cross-checked the veracity of answers suggested by the AIs against the Eighth Joint National Committee guidelines (JNC 8) and Goodman & Gilman's The Pharmacological Basis of Therapeutics (2023), a reference textbook [ 13 , 14 ]. Errors in the A-type MCQs were categorized as item construction defects, multiple correct answers, and uncertain appropriateness to the learner's level. Test items in the integrated case cluster MCQs, SAQs, and OSPEs were evaluated with the Preliminary Conceptual Framework for Establishing Content Validity of AI-Generated Test Items, based on the following domains: technical accuracy, comprehensiveness, education level, and lack of construction defects (Table 1). The responses were categorized as complete or deficient for each domain.
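To keep such a review reproducible, the four framework domains can be captured as a simple rating record per test item. A minimal sketch follows; the domain names and the complete/deficient scale come from the paper, while the field names, item identifiers, and example ratings are hypothetical.

```python
# Minimal sketch of a content-validity rating record for one AI-generated test item.
# Domain names and the complete/deficient scale follow the paper's framework;
# the item identifier and example ratings are hypothetical.
from dataclasses import dataclass

DOMAINS = (
    "technical_accuracy",
    "comprehensiveness",
    "education_level",
    "lack_of_construction_defects",
)

@dataclass
class ItemRating:
    item_id: str   # e.g. "ChatGPT-MCQ-07" (hypothetical label)
    ratings: dict  # domain -> "complete" or "deficient"

    def is_acceptable(self) -> bool:
        # An item passes only if every domain is rated "complete".
        return all(self.ratings.get(d) == "complete" for d in DOMAINS)

example = ItemRating(
    item_id="ChatGPT-MCQ-07",
    ratings={**{d: "complete" for d in DOMAINS}, "lack_of_construction_defects": "deficient"},
)
print(example.is_acceptable())  # False: one domain is deficient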

Results

The pre-clerkship phase SLOs identified by Sage Poe, Claude-Instant, and ChatGPT are listed in the electronic supplementary materials 1–3, respectively. In general, a broad homology in SLOs generated by the three AI platforms was observed. All AI platforms identified appropriate action verbs as per Bloom's taxonomy to state the SLO; action verbs such as describe, explain, recognize, discuss, identify, recommend, and interpret are used to state the learning outcome. The specific, measurable, achievable, relevant, time-bound (SMART) SLOs generated by each AI platform varied slightly. All key domains of antihypertensive pharmacology to be achieved during the pre-clerkship (pre-clinical) years were relevant for graduating doctors. The SLOs addressed the classes of antihypertensive drugs recommended by current JNC treatment guidelines, their mechanisms of action, pharmacokinetics, adverse effects, indications/contraindications, dosage adjustments, monitoring therapy, and principles of monotherapy and combination therapy.

The SLOs to be achieved by undergraduate medical students at the time of graduation, as identified by Sage Poe, Claude-Instant, and ChatGPT, are listed in electronic supplementary materials 4–6, respectively. The identified SLOs emphasize the application of pharmacology knowledge within a clinical context, focusing on competencies needed to function independently in early residency stages. These SLOs go beyond knowledge recall and mechanisms of action to encompass competencies related to clinical problem-solving, rational prescribing, and holistic patient management. The SLOs generated require higher cognitive ability of the learner: action verbs such as demonstrate, apply, evaluate, analyze, develop, justify, recommend, interpret, manage, adjust, educate, refer, design, initiate, and titrate were frequently used.

The MCQs for the pre-clerkship phase identified by Sage Poe, Claude-Instant, and ChatGPT are listed in the electronic supplementary materials 7–9, respectively, and those identified with the search query based on a clinical vignette are in electronic supplementary materials 10–12.

All MCQs generated by the AIs in each of the four domains specified [mechanism of action (MOA), pharmacokinetics, adverse drug reactions (ADRs), and indications for antihypertensive drugs] are quality test items with potential content validity. The test items on MOA generated by Sage Poe included themes such as the renin-angiotensin-aldosterone system (RAAS), beta-adrenergic blockers (BB), calcium channel blockers (CCB), potassium channel openers, and centrally acting antihypertensives. On pharmacokinetics, the themes included high oral bioavailability and hepatic metabolism [angiotensin receptor blocker (ARB), losartan], long half-life and renal elimination [angiotensin converting enzyme inhibitor (ACEI), lisinopril], metabolism by both liver and kidney (BB, metoprolol), rapid onset and short duration of action (direct vasodilator, hydralazine), and long-acting transdermal drug delivery (centrally acting, clonidine). Regarding the ADR theme, dry cough, angioedema, and hyperkalemia caused by ACEIs in susceptible patients, reflex tachycardia caused by the CCB amlodipine, and orthostatic hypotension caused by the CCB verapamil were addressed. Clinical indications included the drug of choice for hypertensive patients with concomitant comorbidity, such as diabetes (ACEI, lisinopril), heart failure with low ejection fraction (BB, carvedilol), hypertensive urgency/emergency (combined alpha and beta receptor blocker, labetalol), stroke in patients with a history of recurrent stroke or transient ischemic attack (ARB, losartan), and preeclampsia (methyldopa).

Broadly similar themes under each domain were identified by the Claude-Instant AI platform, with a few notable exceptions: hydrochlorothiazide (instead of clonidine) in the MOA and pharmacokinetics domains; under the ADR domain, ankle edema (amlodipine) and sexual dysfunction and fatigue in males due to alpha-1 receptor blockers; and under clinical indications, the best initial monotherapy for clinical scenarios such as a 55-year-old male with Stage 2 hypertension, a 75-year-old man with Stage 1 hypertension, a 35-year-old man with Stage 1 hypertension working night shifts, and a 40-year-old man with Stage 1 hypertension and hyperlipidemia.

As with Claude-Instant AI, ChatGPT-generated test items on MOA were mostly similar. However, under the pharmacokinetic domain, immediate- and extended-release metoprolol, the effect of food to enhance the oral bioavailability of ramipril, and the highest oral bioavailability of amlodipine compared to other commonly used antihypertensives were the themes identified. Whereas the other ADR themes remained similar, constipation due to verapamil was a new theme addressed. Notably, in this test item, amlodipine was an option that increased the difficulty of this test item because amlodipine therapy is also associated with constipation, albeit to a lesser extent, compared to verapamil. In the clinical indication domain, the case description asking “most commonly used in the treatment of hypertension and heart failure” is controversial because the options listed included losartan, ramipril, and hydrochlorothiazide but the suggested correct answer was ramipril. This is a good example to stress the importance of vetting the AI-generated MCQ by experts for content validity and to assure robust psychometrics. The MCQ on the most used drug in the treatment of “hypertension and diabetic nephropathy” is more explicit as opposed to “hypertension and diabetes” by Claude-Instant because the therapeutic concept of reducing or delaying nephropathy must be distinguished from prevention of nephropathy, although either an ACEI or ARB is the drug of choice for both indications.

It is important to align student assessment with the curriculum; in a PBL curriculum, MCQs with a clinical vignette are preferred. Modifying the query to generate MCQs with a clinical vignette on the domains specified previously gave appropriate output from all three AI platforms evaluated (Sage Poe, Claude-Instant, ChatGPT). The scenarios generated had good clinical fidelity and a good educational fit for pre-clerkship students.

The errors observed with AI outputs on the A-type MCQs are summarized in Table  2 . No significant pattern was observed except that Claude-Instant© generated test items in a stereotyped format such as the same choices for all test items related to pharmacokinetics and indications, and all the test items in the ADR domain are linked to the mechanisms of action of drugs. This illustrates the importance of reviewing AI-generated test items by content experts for content validity to ensure alignment with evidence-based medicine and up-to-date treatment guidelines.

The test items generated by ChatGPT had the advantage of supplied explanations, rendering them more useful for learners to support self-study. The following examples illustrate this assertion: "A patient with hypertension is started on a medication that works by blocking beta-1 receptors in the heart (metoprolol)." Metoprolol is a beta blocker that works by blocking beta-1 receptors in the heart, which reduces heart rate and cardiac output, resulting in a decrease in blood pressure. However, this explanation is incomplete because there is no mention of other, less important mechanisms, such as the effect of beta receptor blockers on renin release. Also, these MCQs were mostly of the recall type: "Which of the following medications is known to have a significant first-pass effect?" The explanation reads: propranolol is known to have a significant first-pass effect, meaning that a large portion of the drug is metabolized by the liver before it reaches systemic circulation; losartan, amlodipine, ramipril, and hydrochlorothiazide do not have a significant first-pass effect. However, it is also important to extend the explanation by stating that the first-pass effect of propranolol does not lead to total loss of pharmacological activity, because the metabolite hydroxypropranolol also has potent beta-blocking activity. Another MCQ test item had a construction defect: "A patient with hypertension is started on a medication that can cause photosensitivity. Which of the following medications is most likely responsible?" Options included: losartan, amlodipine, ramipril, hydrochlorothiazide, and hydrochlorothiazide/triamterene. The explanation states that hydrochlorothiazide (HCTZ) can cause photosensitivity, which is increased sensitivity to sunlight that can cause skin rashes and sunburns, whereas losartan, amlodipine, ramipril, and HCTZ/triamterene are less likely to cause photosensitivity. However, it is irrational to claim that HCTZ/triamterene, one of the wrong options, is less likely to cause photosensitivity. The difficulty index of such test items is likely unacceptable in high-stakes tests where equitable psychometric decisions must be made.

The responses from Sage Poe, Claude-Instant, and ChatGPT on guidance for standard-setting methods are listed in the electronic supplementary materials 13 – 15 . Sage Poe offered this information with the caveat that “the results of this simulation were purely hypothetical and do not reflect the actual performance of any real students”. The simulation estimates of examinees correctly answering test items ranged from 60 to 90% for the mechanism of action, 65–85% for pharmacokinetics, 60–80% for adverse reactions, and 75–90% for clinical indications of antihypertensives.

In contrast, Claude-Instant admitted that “Performing a full simulation with 100 medical students is beyond the scope of what I can do as an AI assistant. I can generate sample MCQ questions and answers, but I do not have access to a population of medical students to administer the questions. A true simulation that administers questions to real students and analyzes the results would require access to a database of medical students, a platform to deliver the questions, and code to analyze the responses. As an AI assistant, I lack these capabilities. While I cannot perform a full simulation, here are some considerations for administering MCQ questions to medical students: provide details on the purpose and instructions for the test to ensure students understand what is expected out of them; consider including some easier questions in addition to more challenging ones to boost students’ confidence and motivation; avoid ambiguous wording and ensure each question has only one unambiguously correct answer option; when analyzing results, look for questions factors that may have impacted performance, like unclear wording or answer options that are too similar. Focusing on weak questions can help improve future tests; compare performance across different student groups to identify areas where more teaching may be needed; provide detailed answer explanations and review sessions after the test to help students learn from any errors and reinforce correct concepts”. However, ChatGPT offered the simulation result estimates for the 20 MCQ on antihypertensive drugs: mechanism of action ranged from 67 to 92%, pharmacokinetics 63–86%, adverse effects 65–82%, and clinical indications 64–89%. Furthermore, it also stated that “Overall, the performance of the students was quite good, with most questions having a response rate of over 70%. However, there were some questions where the projected correct response rate was rather low, such as Question #5 (Mechanism of Action of Hydrochlorothiazide; see Electronic Supplementary Material 12 ) and Question 18 (Indications for Verapamil; see Electronic Supplementary Material 10 ). This may suggest areas where students need more focused education or review.”
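As Sage Poe itself cautioned, these simulated percentage-correct figures are hypothetical. Once real students sit the items, the corresponding empirical statistic is the classical difficulty index, the proportion of examinees answering an item correctly, against which AI estimates could be compared. A minimal sketch, assuming a hypothetical response matrix with one row per student (the data below is invented for illustration):

```python
# Illustrative sketch: compute the classical difficulty index (proportion correct)
# per item from real student responses, the empirical counterpart to the AI's
# simulated estimates. The response data below is invented for illustration.
responses = [
    {"Q1": True,  "Q2": False, "Q3": True},
    {"Q1": True,  "Q2": True,  "Q3": False},
    {"Q1": False, "Q2": True,  "Q3": True},
    {"Q1": True,  "Q2": False, "Q3": True},
]

for item in sorted(responses[0]):
    p = sum(student[item] for student in responses) / len(responses)
    flag = "  <- review: very easy or very hard" if p > 0.9 or p < 0.3 else ""
    print(f"{item}: difficulty index p = {p:.2f}{flag}")
```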

We asked the AI assistants to generate 20 integrated case cluster MCQs, with 2 test items per cluster and five options each, for undergraduate medical students in the pre-clerkship phase, integrating pharmacology and physiology related to systemic hypertension with a case vignette; the responses by Sage Poe, Claude-Instant, and ChatGPT are listed in the electronic supplementary materials (16–18). In all instances, the test items generated had focused case descriptions in the form of a clinical vignette, and horizontal integration across the pathophysiology of hypertension and the pharmacology of antihypertensive drugs. These test items mostly targeted the 'knows (knowledge)' or 'knows how (competence)' level on Miller's pyramid and are suitable for assessing the clinical competence of pre-clerkship medical students, especially in an integrated PBL curriculum. The AI assistants generated excellent clinical vignettes and themes; however, most of the cluster MCQs by ChatGPT had "None of the above" as an option, which is often considered a test item construction flaw. Notwithstanding these limitations, case cluster integrated test items are valuable for learners to integrate their knowledge of different basic medical sciences and apply it to clinical sciences. This integrated approach can be used for both instructional and student assessment purposes to make the course more meaningful. Indeed, one of the basic tenets of PBL is curriculum integration.

In the next query, we asked the AI assistants to write integrated case cluster MCQs with 2 test items per cluster and 5 options for undergraduate medical students at the time of graduation, integrating the pharmacology and physiology of hypertension with a case vignette. The responses are listed in the electronic supplementary materials 19–21. Claude-Instant provided only partial responses to both queries. Overall, the information provided in the case vignettes requires the learner to interpret the clinical information and make therapeutic decisions. Such test items are appropriate for assessing the minimum competency of medical school graduates to start internship or residency programs; these MCQs are generally comparable to test items used in USMLE Step 2. The evaluation of the integrated case cluster test items provided by all the AI platforms is summarized in Table 3. In general, the AI outputs had at least one construction defect and did not include all the core sub-topics to a sufficient degree.

When asked to simulate the above-generated test items to 100 undergraduate medical students at the time of graduation and provide a percentage of students who are likely to correctly answer each item, the AI assistant responded:

“ As an AI language model, I cannot simulate the responses of 100 undergraduate medical students likely to obtain the correct answer for each question. However, I can provide some general information about the difficulty level of each question based on the complexity of the concepts being tested and the level of integration between pharmacology and physiology ”.

The AI assistant went on to clarify that questions requiring knowledge of basic pharmacology principles, such as the mechanism of action of specific drugs, are likely to be easier for students to answer correctly, whereas test items that require an understanding of the physiological mechanisms underlying hypertension, and of how they correlate with symptoms, are likely to be more challenging. The AI assistant sorted the test items into two categories accordingly. Overall, the difficulty level of a test item depends on the level of integration between pharmacology and pathophysiology: items that require an understanding of both pharmacological and physiological mechanisms are likely to be more challenging, demanding a strong foundation in both disciplines to answer integrated case-cluster MCQs correctly.

Short answer questions

The SAQs generated by Sage Poe, Claude-Instant, and ChatGPT in response to the search query for the pre-clerkship phase are listed in the electronic supplementary materials 22–24 for difficult questions and 25–27 for moderately difficult questions.

It is apparent from these case vignette descriptions that the short answer question format varied. Accordingly, the scope for asking individual questions for each scenario is open-ended. In all instances, model answers are supplied; these help the course instructor plan classroom lessons, identify appropriate instructional methods, and establish rubrics for grading the answer scripts, and they serve as a study guide for students.

We then wanted to see to what extent AI can differentiate the difficulty of the SAQ by replacing the search term “difficult” with “moderately difficult” in the above search prompt: the changes in the revised case scenarios are substantial. Perhaps the context of learning and practice (and the level of the student in the MD/medical program) may determine the difficulty level of SAQ generated. It is worth noting that on changing the search from cardiology to internal medicine rotation in Sage Poe the case description also changed. Thus, it is essential to select an appropriate AI assistant, perhaps by trial and error, to generate quality SAQs. Most of the individual questions tested stand-alone knowledge and did not require students to demonstrate integration.

The responses of Sage Poe, Claude-Instant, and ChatGPT for the search query to generate SAQs at the time of graduation are listed in the electronic supplementary materials 28 – 30 . It is interesting to note how AI assistants considered the stage of the learner while generating the SAQ. The response by Sage Poe is illustrative for comparison. “You are a newly graduated medical student who is working in a hospital” versus “You are a medical student in your pre-clerkship.”

Some questions were retained, deleted, or modified to align with competency appropriate to the context (Electronic Supplementary Materials 28 – 30 ). Overall, the test items at both levels from all AI platforms were technically accurate and thorough addressing the topics related to different disciplines (Table  3 ). The differences in learning objective transition are summarized in Table  4 . A comparison of learning objectives revealed that almost all objectives remained the same except for a few (Table  5 ).

A similar trend was apparent with test items generated by other AI assistants, such as ChatGPT. The contrasting differences in questions are illustrated by the vertical integration of basic sciences and clinical sciences (Table  6 ).

Taken together, these in-depth qualitative comparisons suggest that AI assistants such as Sage Poe and ChatGPT consider the learner’s stage of training in designing test items, learning outcomes, and answers expected from the examinee. It is critical to state the search query explicitly to generate quality output by AI assistants.

The OSPE test items generated by Claude-Instant and ChatGPT appropriate to the pre-clerkship phase (without mentioning “appropriate instructions for the patients”) are listed in the electronic supplementary materials 31 and 32 and with patient instructions on the electronic supplementary materials 33 and 34 . For reasons unknown, Sage Poe did not provide any response to this search query.

The five OSPE items generated were suitable for assessing the prescription writing competency of pre-clerkship medical students. The clinical scenarios identified by the three AI platforms were comparable; for Claude-Instant these included a 65-year-old male with hypertension and impaired glucose tolerance, a 55-year-old woman with hypertension and chronic kidney disease (CKD), a 45-year-old man with resistant hypertension and obstructive sleep apnea, and a 35-year-old woman with gestational hypertension at 32 weeks. Incorporating appropriate instructions facilitates the learner's ability to educate patients and maximize safe and effective therapy. The OSPE items required students to write a prescription with guidance to start conservatively and to choose an appropriate antihypertensive drug class (and drug) based on the patient's profile, specifying drug name, dose, dosing frequency, quantity to be dispensed, patient name, date, refills, and cautions as appropriate, in addition to the prescriber's name, signature, and license number. In contrast, ChatGPT identified clinical scenarios that included patients with hypertension and CKD, hypertension and bronchial asthma, gestational diabetes, hypertension and heart failure, and hypertension and gout, with guidance on dosage titration, warnings to be aware of, safety monitoring, and the frequency of follow-up and dose adjustment. These test items are designed to assess learners' knowledge of the P&T of antihypertensives, as well as their ability to provide appropriate instructions to patients. These clinical scenarios for writing prescriptions assess students' ability to choose an appropriate drug class, write prescriptions with proper labeling and dosing, reflect drug safety profiles and risk factors, and make modifications to meet the requirements of special populations. The prescription is required to state the drug name, dose, dosing frequency, patient name, date, refills, and cautions or instructions as needed, with a conservative starting dose, once- or twice-daily dosing based on the drug, and instructions to titrate the dose slowly if required.
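Because every OSPE item grades the same required prescription elements, a simple checklist can help standardize scoring across examiners. A minimal sketch follows; the element names mirror those listed above, while the one-point-per-element scoring and the naive string matching are assumptions made purely for illustration.

```python
# Minimal sketch of a prescription-writing checklist for scoring OSPE answers.
# Required elements mirror those listed above; the scoring scheme and the
# string-matching shortcut are assumptions for illustration only.
REQUIRED_ELEMENTS = [
    "drug name", "dose", "dosing frequency", "quantity to dispense",
    "patient name", "date", "refills", "cautions",
    "prescriber name", "signature", "license number",
]

def score_prescription(annotated_answer: str) -> int:
    """Count how many required elements appear in an examiner-annotated answer."""
    text = annotated_answer.lower()
    return sum(element in text for element in REQUIRED_ELEMENTS)

annotated = "drug name: amlodipine; dose: 5 mg; dosing frequency: once daily; patient name: present; date: present; signature: present"
print(f"{score_prescription(annotated)} / {len(REQUIRED_ELEMENTS)} elements present")
```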

The responses from Claude-Instant and ChatGPT for the search query related to generating OSPE test items at the time of graduation are listed in electronic supplementary materials 35 and 36 . In contrast to the pre-clerkship phase, OSPEs generated for graduating doctors’ competence assessed more advanced drug therapy comprehension. For example, writing a prescription for:

(1) A 65-year-old male with resistant hypertension and CKD stage 3: to optimize the antihypertensive regimen, the answer required starting an ACEI and a diuretic, titrating the dosage over two weeks, considering adding spironolactone or substituting the ACEI with an ARB, and closely monitoring serum electrolytes and kidney function.

(2) A 55-year-old woman with hypertension and paroxysmal arrhythmia: the answer required switching the ACEI to an ARB due to cough, adding a CCB or beta blocker for rate control, adjusting the dosage slowly, and monitoring for side effects.

(3) A 45-year-old man with masked hypertension and obstructive sleep apnea: the answer required adding a centrally acting antihypertensive at bedtime, increasing the dosage as needed based on home blood pressure monitoring, and referral for CPAP if not already in use.

(4) A 75-year-old woman with isolated systolic hypertension and autonomic dysfunction: the answer required stopping the diuretic and switching to an alpha blocker, with upward dosage adjustment and combination with other antihypertensives as needed based on postural blood pressure changes and symptoms.

(5) A 35-year-old pregnant woman with preeclampsia at 29 weeks: the answer required doubling the methyldopa dose, considering adding labetalol or nifedipine based on severity, and educating the patient on signs of worsening and the need to follow up immediately for any concerning symptoms.

These case scenarios are designed to assess the ability of the learner to comprehend the complexity of antihypertensive regimens, make evidence-based regimen adjustments, prescribe multidrug combinations based on therapeutic response and tolerability, monitor complex patients for complications, and educate patients about warning signs and follow-up.

A similar output was provided by ChatGPT, with clinical scenarios such as prescribing for patients with hypertension and myocardial infarction; hypertension and chronic obstructive pulmonary airway disease (COPD); hypertension and a history of angina; hypertension and a history of stroke, and hypertension and advanced renal failure. In these cases, wherever appropriate, pharmacotherapeutic issues like taking ramipril after food to reduce side effects such as giddiness; selection of the most appropriate beta-blocker such as nebivolol in patients with COPD comorbidity; the importance of taking amlodipine at the same time every day with or without food; preference for telmisartan among other ARBs in stroke; choosing furosemide in patients with hypertension and edema and taking the medication with food to reduce the risk of gastrointestinal adverse effect are stressed.

The AI outputs on OSPE test items were observed to be technically accurate, thorough in addressing core sub-topics suitable for the learner's level, and free of construction defects (Table 3). Both AIs provided model answers with explanatory notes, which facilitates the use of such OSPEs by learners for self-assessment and formative assessment purposes. The detailed instructions are helpful for creating optimized, evidence-based therapy regimens and for providing appropriate instructions to patients with complex medical histories. One can rely on multiple AI sources to identify and shortlist suitable case scenarios and OSPE items, and to seek guidance on expected model answers with explanations. From a teaching and learning perspective, model answer guidance framed in terms of antihypertensive drug classes (rather than a specific drug of a given class) is more appropriate. We believe that these scenarios can be refined further by providing a focused case history along with relevant clinical and laboratory data to enhance clinical fidelity and bring a closer fit to the competency framework.

Discussion

In the present study, AI tools generated SLOs that comply with current principles of medical education [ 15 ]. AI tools are valuable in constructing SLOs and are therefore especially useful where training in medical education is perceived as inadequate, particularly in the early stages of an academic career. Data suggest that only a third of academics in medical schools have formal training in medical education [ 16 ], which is a limitation. Evaluating the credibility of alternatives, such as AI, for generating appropriate course learning outcomes is therefore worthwhile.

We observed that the AI platforms in the present study generated quality test items suitable for different assessment purposes. The AI-generated outputs were similar, with minor variation. We used generative AIs in the present study, which can generate new content from their training dataset [ 17 ]. Problem-based and interactive learning approaches are referred to as "bottom-up": learners obtain first-hand experience in solving cases first and then engage in discussion with educators to refine their understanding and critical thinking skills [ 18 ]. We suggest that AI tools can support this approach in imparting the core knowledge and skills related to Pharmacology and Therapeutics to undergraduate medical students. A recent scoping review of 13 studies evaluating the barriers to writing quality test items concluded that motivation, time constraints, and scheduling were the most common [ 19 ]. AI tools can be valuable here, given how quickly they generate quality test items and the faculty time they save. However, as observed in the present study, AI-generated test items nevertheless require scrutiny by faculty members for content validity. Moreover, it is important to train faculty in AI technology-assisted teaching and learning. The General Medical Council recommends taking every opportunity to raise the profile of teaching in medical schools [ 20 ]. Hence, both the academic faculty and the institution must consider investing resources in AI training to ensure appropriate use of the technology [ 21 ].

The AI outputs assessed in the present study had errors, particularly with A-type MCQs. One notable observation was that the AI tools were often unable to differentiate between ACEIs and ARBs. AI platforms access a range of structured and unstructured data, in addition to images, audio, and video, and can therefore commit errors by extracting details from unauthenticated sources. Chanda and Banerjee [ 22 ] created a framework identifying 28 factors for reconstructing the path of AI failures and for determining corrective actions. This is an area of interest for AI technical experts to explore. It also reiterates the need for human examination of test items before using them for assessment purposes.

There are concerns that AIs can memorize and provide answers from their training dataset, which they are not supposed to do [ 23 ]. Hence, the use of AIs-generated test items for summative examinations is debatable. It is essential to ensure and enhance the security features of AI tools to reduce or eliminate cross-contamination of test items. Researchers have emphasized that AI tools will only reach their potential if developers and users can access full-text non-PDF formats that help machines comprehend research papers and generate the output [ 24 ].

AI platforms may not always have access to all standard treatment guidelines. However, in the present study, it was observed that all three AI platforms generally provided appropriate test items regarding the choice of medications, aligning with recommendations from contemporary guidelines and standard textbooks in pharmacology and therapeutics. The prompts used in the study were specifically focused on the pre-clerkship phase of the undergraduate medical curriculum (and at the time of their graduation) and assessed fundamental core concepts, which were also reflected in the AI outputs. Additionally, the recommended first-line antihypertensive drug classes have been established for several decades, and information regarding their pharmacokinetics, ADRs, and indications is well-documented in the literature.

Different paradigms and learning theories have been proposed to support AI in education. These paradigms include AI-directed (learner as recipient), AI-supported (learner as collaborator), and AI-empowered (learner as leader), which are based on behaviorism, cognitive-social constructivism, and connectivism/complex adaptive systems, respectively [ 25 ]. AI techniques have the potential to stimulate and advance the instructional and learning sciences. More recently, a three-level model that synthesizes and unifies existing learning theories to model the roles of AI in promoting the learning process has been proposed [ 26 ]. The different components of our study rely on these paradigms and learning theories as their theoretical underpinning.

Strengths and limitations

To the best of our knowledge, this is the first study evaluating the utility of AI platforms in generating test items related to a discipline in the undergraduate medical curriculum. We evaluated the AIs' ability to generate outputs related to most types of assessment in the undergraduate medical curriculum. The key lessons learnt from the present study for improving AI-generated test item quality are outlined in Table 7. We used a structured framework for assessing the content validity of the test items. However, we demonstrated this using only a single case study (hypertension) as a pilot experiment. We chose to evaluate anti-hypertensive drugs because this is a core learning objective and hypertension is one of the most common disorders relevant to undergraduate medical curricula worldwide. It would be interesting to explore the output from AI platforms for other common (and uncommon or region-specific) disorders, non-core or semi-core objectives, and disciplines other than Pharmacology and Therapeutics. Another area of interest would be the content validity of test items generated for different curricula (such as problem-based, integrated, case-based, and competency-based) during different stages of the learning process. Also, we did not attempt to evaluate the generation of flowcharts, algorithms, or figures for test items. A further potential area for exploring the utility of AIs in medical education would be repeated procedural practices, such as the administration of drugs through different routes by trainee residents [ 27 ]. Several AI tools have been identified for potential application in enhancing classroom instruction and assessment, pending validation in prospective studies [ 28 ]. Lastly, we did not administer the AI-generated test items to students or assess their performance, and so cannot comment on test item discrimination and difficulty indices. Additionally, the generalizability of the findings to other complex areas in the same discipline, as well as to other disciplines, needs to be confirmed, which paves the way for future studies. The conceptual framework used in the present study for evaluating AI-generated test items needs to be validated in a larger population. Future studies may also evaluate the variation in AI outputs when the same queries are repeated.

Conclusions

Notwithstanding ongoing discussions and controversies, AI tools are potentially useful adjuncts for optimizing instructional methods, test blueprinting, test item generation, and guidance for test standard-setting appropriate to learners' stage in the medical program. However, experts need to critically review the content validity of AI-generated output. These challenges and caveats must be addressed before widespread use of AI in medical education can be advocated.

Data availability

All the data included in this study are provided as Electronic Supplementary Materials.

References

Tolsgaard MG, Pusic MV, Sebok-Syer SS, Gin B, Svendsen MB, Syer MD, Brydges R, Cuddy MM, Boscardin CK. The fundamentals of Artificial Intelligence in medical education research: AMEE Guide 156. Med Teach. 2023;45(6):565–73.


Sriwastwa A, Ravi P, Emmert A, Chokshi S, Kondor S, Dhal K, Patel P, Chepelev LL, Rybicki FJ, Gupta R. Generative AI for medical 3D printing: a comparison of ChatGPT outputs to reference standard education. 3D Print Med. 2023;9(1):21.

Azer SA, Guerrero APS. The challenges imposed by artificial intelligence: are we ready in medical education? BMC Med Educ. 2023;23(1):680.

Masters K. Ethical use of Artificial Intelligence in Health Professions Education: AMEE Guide 158. Med Teach. 2023;45(6):574–84.

Nagi F, Salih R, Alzubaidi M, Shah H, Alam T, Shah Z, Househ M. Applications of Artificial Intelligence (AI) in Medical Education: a scoping review. Stud Health Technol Inf. 2023;305:648–51.


Mehta N, Harish V, Bilimoria K, et al. Knowledge and attitudes on artificial intelligence in healthcare: a provincial survey study of medical students. MedEdPublish. 2021;10(1):75.

Mir MM, Mir GM, Raina NT, Mir SM, Mir SM, Miskeen E, Alharthi MH, Alamri MMS. Application of Artificial Intelligence in Medical Education: current scenario and future perspectives. J Adv Med Educ Prof. 2023;11(3):133–40.

Garg T. Artificial Intelligence in Medical Education. Am J Med. 2020;133(2):e68.

Matheny ME, Whicher D, Thadaney IS. Artificial intelligence in health care: a report from the National Academy of Medicine. JAMA. 2020;323(6):509–10.

Sage Poe. Available at: https://poe.com/Assistant (Accessed on 3rd June 2023).

Claude-Instant. Available at: https://poe.com/Claude-instant (Accessed on 3rd June 2023).

ChatGPT. Available at: https://poe.com/ChatGPT (Accessed on 3rd June 2023).

James PA, Oparil S, Carter BL, Cushman WC, Dennison-Himmelfarb C, Handler J, Lackland DT, LeFevre ML, MacKenzie TD, Ogedegbe O, Smith SC Jr, Svetkey LP, Taler SJ, Townsend RR, Wright JT Jr, Narva AS, Ortiz E. 2014 evidence-based guideline for the management of high blood pressure in adults: report from the panel members appointed to the Eighth Joint National Committee (JNC 8). JAMA. 2014;311(5):507–20.

Eschenhagen T. Treatment of hypertension. In: Brunton LL, Knollmann BC, editors. Goodman & Gilman’s the pharmacological basis of therapeutics. 14th ed. New York: McGraw Hill; 2023.

Shabatura J. Using Bloom's taxonomy to write effective learning outcomes. https://tips.uark.edu/using-blooms-taxonomy/ (Accessed on 19th September 2023).

Trainor A, Richards JB. Training medical educators to teach: bridging the gap between perception and reality. Isr J Health Policy Res. 2021;10(1):75.

Boscardin C, Gin B, Golde PB, Hauer KE. ChatGPT and generative artificial intelligence for medical education: potential and opportunity. Acad Med. 2023. https://doi.org/10.1097/ACM.0000000000005439 . (Published ahead of print).

Duong MT, Rauschecker AM, Rudie JD, Chen PH, Cook TS, Bryan RN, Mohan S. Artificial intelligence for precision education in radiology. Br J Radiol. 2019;92(1103):20190389.

Karthikeyan S, O’Connor E, Hu W. Barriers and facilitators to writing quality items for medical school assessments - a scoping review. BMC Med Educ. 2019;19(1):123.

Developing teachers and trainers in undergraduate medical education. Advice supplementary to Tomorrow’s Doctors. (2009). https://www.gmc-uk.org/-/media/documents/Developing_teachers_and_trainers_in_undergraduate_medical_education___guidance_0815.pdf_56440721.pdf (Accessed on 19th September 2023).

Cooper A, Rodman A. AI and Medical Education - A 21st-Century Pandora’s Box. N Engl J Med. 2023;389(5):385–7.

Chanda SS, Banerjee DN. Omission and commission errors underlying AI failures. AI Soc. 2022;17:1–24.

Narayanan A, Kapoor S. ‘GPT-4 and Professional Benchmarks: The Wrong Answer to the Wrong Question’. Substack newsletter. AI Snake Oil (blog). https://aisnakeoil.substack.com/p/gpt-4-and-professional-benchmarks (Accessed on 19th September 2023).

Brainard J. As scientists face a flood of papers, AI developers aim to help. Science. 21 November 2023. doi:10.1126/science.adn0669.

Ouyang F, Jiao P. Artificial intelligence in education: the three paradigms. Computers Education: Artif Intell. 2021;2:100020.

Gibson D, Kovanovic V, Ifenthaler D, Dexter S, Feng S. Learning theories for artificial intelligence promoting learning processes. Br J Edu Technol. 2023;54(5):1125–46.

Guerrero DT, Asaad M, Rajesh A, Hassan A, Butler CE. Advancing Surgical Education: the Use of Artificial Intelligence in Surgical Training. Am Surg. 2023;89(1):49–54.

Lee S. AI tools for educators. EIT InnoEnergy Master School Teachers Conference. 2023. https://www.slideshare.net/ignatia/ai-toolkit-for-educators?from_action=save (Accessed on 24th September 2023).


Author information

Authors and affiliations

Department of Pharmacology & Therapeutics, College of Medicine & Medical Sciences, Arabian Gulf University, Manama, Kingdom of Bahrain

Kannan Sridharan & Reginald P. Sequeira


Contributions

RPS– Conceived the idea; KS– Data collection and curation; RPS and KS– Data analysis; RPS and KS– wrote the first draft and were involved in all the revisions.

Corresponding author

Correspondence to Kannan Sridharan .

Ethics declarations

Ethics approval and consent to participate

Not applicable, as there was no interaction with humans and no personal data was collected in this research study.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Sridharan, K., Sequeira, R.P. Artificial intelligence and medical education: application in classroom instruction and student assessment using a pharmacology & therapeutics case study. BMC Med Educ 24 , 431 (2024). https://doi.org/10.1186/s12909-024-05365-7


Received : 26 September 2023

Accepted : 28 March 2024

Published : 22 April 2024

DOI : https://doi.org/10.1186/s12909-024-05365-7


  • Medical education
  • Pharmacology
  • Therapeutics





