Picking the Right Tool for Your Remote User Testing

With an abundance of remote testing tools available, it's not always easy to choose the right one(s). Here, author and web designer Matt Milosavljevic provides an overview of the types of tools available, common use cases, and potential pitfalls to look out for.

Picture of various tools

Image courtesy Svadilfari

With an ever-expanding utility belt of user research tools, finding the right one for your needs and getting the most out of it can be daunting.

The last couple of years have seen explosive growth in remote user testing. What used to cost thousands of dollars and take weeks can now be set up in a couple of minutes for under ten dollars. As a result, the number of people testing has been steadily climbing, especially among those from marketing, design, and business backgrounds. While it’s great to see so many people testing, those with little to no experience running user tests often don’t get the most out of them.

For those starting out with user testing, this article aims to provide an overview of the types of tools available, common use cases, and potential pitfalls to watch out for. The information should make it easier to find exactly the right tool to help you achieve your testing goals.

Information architecture analysis tools

Picture of card sorting

Image courtesy CannedTuna

Information architecture forms the foundation of your website, so it’s something that you want to get right. Tools that help you assess your architecture can produce both quantitative and qualitative results.

With card sorting, users are asked to create logical groupings of elements. This type of tool produces qualitative results and works best when no information architecture has been put into place.
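
Card sorting produces open-ended groupings rather than a single score, so most tools aggregate the results for you. Purely as an illustration of that aggregation, here is a minimal Python sketch, with invented card labels and participants and not tied to any of the tools named below, that counts how often two cards end up in the same group.

```python
# Hypothetical card-sort aggregation: each participant's sort is a list of
# groups (sets of card labels); the co-occurrence counter records how often
# two cards were placed in the same group across all participants.
from collections import Counter
from itertools import combinations

sorts = [
    [{"Returns", "Contact"}, {"Laptops", "Phones"}],    # participant 1
    [{"Returns", "Laptops", "Phones"}, {"Contact"}],    # participant 2
    [{"Returns", "Contact"}, {"Laptops"}, {"Phones"}],  # participant 3
]

co_occurrence = Counter()
for sort in sorts:
    for group in sort:
        for pair in combinations(sorted(group), 2):
            co_occurrence[pair] += 1

for (a, b), count in co_occurrence.most_common():
    print(f"{a} + {b}: grouped together by {count} of {len(sorts)} participants")
```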

Tree testing, on the other hand, helps assess existing information architecture. It works by asking users to find specific sections by navigating a basic node tree. The results produced are often quantitative, such as the number of clicks taken.
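
To make the tree-testing mechanics concrete, here is a small hypothetical sketch, again in Python and not based on any particular product, of how a tool might validate the path a participant clicks through a node tree and record whether they found the target and in how many clicks.

```python
# Hypothetical tree test: the information architecture is a nested dict of
# category labels, and each participant's result is the ordered list of nodes
# they clicked while looking for a target section.
site_tree = {
    "Home": {
        "Products": {"Laptops": {}, "Phones": {}},
        "Support": {"Returns": {}, "Contact": {}},
    }
}

def path_exists(tree, path):
    """Confirm the clicked path is a valid walk down the tree."""
    node = tree
    for label in path:
        if label not in node:
            return False
        node = node[label]
    return True

def score_tree_test(clicked_path, target):
    """Return (success, clicks_taken) for one participant."""
    success = bool(clicked_path) and clicked_path[-1] == target
    return success, len(clicked_path)

# Task: "Where would you go to return a faulty phone?"
path = ["Home", "Support", "Returns"]
assert path_exists(site_tree, path)
print(score_tree_test(path, "Returns"))  # -> (True, 3)
```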

Your testers are an important consideration with tests of this type. Technical users, for example, may group content differently from less technical users. Testing with a representative cross-section of your audience helps you avoid biased results.

Tools in this category include: PlainFrame, OptimalSort, TreeJack, WebSort, and Buy-A-Feature.

Click analysis tools

Click tests are very popular, and there are literally dozens of apps available for testing both live sites and mock-ups. This type of test works by presenting users with an interface and a task to carry out. Users are then instructed to click on the interface in order to successfully complete the task.

This type of test works best with interfaces that have a clearly defined purpose. Ideally these are either landing pages or pages that contain calls to action (e.g. purchasing an item or making a donation). Testing home pages that have multiple valid paths often yields poor results. Similarly, it’s best to avoid using click analysis on pages with forms, where problems with understanding content may result in users abandoning the test without clicking.
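
To give a feel for the data a click test collects, here is a brief hypothetical sketch, not modelled on any of the tools listed below, that checks recorded click coordinates against the bounding box of a call to action and reports a success rate.

```python
# Hypothetical click test: each participant's click is an (x, y) point, and a
# "hit" means the click landed inside the call to action's bounding box.

# Invented bounding box for a "Donate now" button: (left, top, right, bottom)
TARGET_REGION = (420, 300, 620, 360)

clicks = [(450, 320), (130, 80), (610, 355), (500, 900)]  # four participants

def hit(click, region):
    x, y = click
    left, top, right, bottom = region
    return left <= x <= right and top <= y <= bottom

hits = sum(hit(c, TARGET_REGION) for c in clicks)
print(f"Success rate: {hits}/{len(clicks)} = {hits / len(clicks):.0%}")  # 2/4 = 50%
```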

Tools in this category include: Usabilla, IntuitionHQ, Clicktest, and ChalkMark.

Content analysis tools

This type of test helps you assess how well a user understands the purpose of a web page or interface. These tests often produce results that are more qualitative in nature, and while they can take longer to properly analyze, they’re well worth the effort.

They work by giving users either a task to complete or a scenario explaining why they are viewing an interface. Users then either make observations or carry out an interaction, after which they describe what they saw or did.

Picture of magnifying glass over a book

Image courtesy Alpha Six

These tools are excellent for soliciting feedback and gauging how clear your content is. With the addition of screen recording, as found in UserTesting, it is possible to use these tools to test workflows as well. Keep in mind that simple instructions tend to yield better results; asking users to perform overly complicated tasks can quickly lead to frustration and abandoned tests.

Tools in this category include: Feedback Army, UserTesting, and Fivesecondtest.

Conversion analysis tools

Picture of bar chart showing upward trend

Image courtesy kevinzhengli

Conversion analysis tests usually involve asking the user to complete a process made up of one or more steps. Funnels come into play when there are two or more steps. As users carry out the task, their clicks and associated metrics are recorded for later analysis.

This type of tool is most effective for testing a well-defined process such as registration or purchase. It is important to carefully select the starting point for the test and to focus on the process at hand. For instance, a purchase process begins at a product or product listing page, so your users should be shown one of those screens when commencing the test.
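
The arithmetic behind funnel analysis is simple drop-off between consecutive steps. Here is a short hypothetical sketch with invented step names and counts, not output from Loop11, Navflow, or any other tool, showing the kind of summary these services produce.

```python
# Hypothetical purchase funnel: how many participants reached each step, and
# the drop-off between consecutive steps.
funnel = [
    ("Product page",   200),
    ("Add to cart",    120),
    ("Checkout",        80),
    ("Order complete",  52),
]

for (step, count), (_, prev) in zip(funnel[1:], funnel):
    drop = 1 - count / prev
    print(f"{step:<15} {count:>4} reached ({drop:.0%} drop-off from previous step)")

overall = funnel[-1][1] / funnel[0][1]
print(f"Overall conversion: {overall:.0%}")  # 52 of 200 starters, i.e. 26%
```

A steep drop at one step is usually the cue to run a qualitative test (or a click test) on that screen to find out why users leave.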

Tools in this category include: Loop11 and Navflow.

General rules of thumb

  • Test with real users as early as possible.
  • Test iteratively with small groups; Jakob Nielsen makes the following recommendations:
    • 5 users for qualitative tests
    • 20 users for quantitative tests
    • 15 users for card sorting
  • Use different testers on successive iterations.
  • Testing should be a continuous process and occur during all stages of development.

Hopefully this article has given you insight into some of the usability testing tools available and will help you get the most out of your future testing sessions.


About the Author

Matt Milosavljevic

Matt Milosavljevic is both a web designer and usability nut at Angry Monkeys, makers of the UsabilityHub suite of tools. On the rare occasion he is not sitting in front of a computer, he enjoys cycling…though he once fell off a bike while completely stationary.


21 Comments

  • Joshua Lay

    Thanks for the great list.

    The information architecture tools are all new to me. I can’t wait to try them out.

    Websort seems especially useful to me right now.

    Cheers

    • Matt Milosavljevic

      Really happy to hear that. Hopefully you get some really good data from those card sorting exercises.

  • rokked

    When testing with users, don’t put too much weight on their opinions. User opinions, while valid, often come from vastly different places, are misinformed, or are just made up on the spot.

    It is often more important to observe a user rather than listen to them.

    • Matt Milosavljevic

      Excellent point. People will often give you an idealised account of what they would do given the opportunity. So it’s important when soliciting feedback to carefully select questions so as not to bias their responses.

    • Farhad

      Fair point, but two considerations:

      1) you can coach testers to verbalize their thoughts and impressions exactly, especially when doing remote testing.

      2) just observing *what* they do does not give much insight into *why* they do it. Getting to the *why* really requires articulation by the tester.

  • Reza

    Very informative and helpful. Thanks very much.

  • Gema

    This is exactly what I’ve been researching! There are so many tools out there these days that it’s hard to keep up. The related articles will also be very helpful. Thank you Matt, and thank you UXbooth!

  • Chris Dyer

    Great article! I use UsabilityHub quite a bit. Here are a couple of tips that I think are important when using FiveSecondTest.com. First, when you create the instructions for the test, make sure you set some context. For example, say something like “Imagine you are shopping for your next vacation and land on this page.” That puts the user in the right frame of mind given that they will only have five seconds to examine the page.
    Second, in your follow-up questions, make sure that they relate to the initial instructions you provided. For example, if your instructions ask users to focus on the search form, don’t ask them about the contact us link.

  • Imran Khan

    Good one!!! QA needs to get on these tools =) I know more now about them.

    Thanks for sharing

  • Jasmine

    I think this article is very insightful and informative. I have learned a lot from it. I look forward to reading more from you about this and other related topics.

  • Mark

    This article packed a punch for its short length and provided some great links. Thanks!

    A tip for testing:
    In the spirit of small-group, iterative testing, it often makes sense to test your test before you test your design/IA/calls to action/etc. Having a group of five people take a test isn’t enough to give you the best, most-reliable data, but it is enough to show you how testers might be misinterpreting the test or confused by it. Once you have your small batch of data, look for unexpected results and see whether they’re surprising but informative or just misguided. You can also include a final question (if you’re doing a qualitative test) asking testers whether anything was unclear or confusing.

    Doing this is especially important when your money or site credits are on the line. It makes a lot more sense to spend a tiny bit extra so you don’t waste a whole lot more with bad data.

    • Matt Milosavljevic

      Yeah that’s actually a great tip and works across the board for any type of user research you may carry out.

  • Dave

    Never underestimate the importance of mapping your user testers to your target audience. What I mean is, if your site is targeted at elderly people, there is no point using automated online user testing services, for example, whose testers are primarily of a different demographic. You can often specify certain criteria for testers using online user testing services, but it is rare that your average granny has signed up to be an online user tester!

  • Hannahness

    Re: usability tips…I would recommend adding a list of template questions to the fivesecond test that will increase the quantity of actionable data that you can glean from it. I have a hard time coming up with questions that provide useful answers.

  • Farhad

    Great overview of the tools and their uses.

    Just wanted to add our tools to the mix of qualitative testing tools.

    openhallway.com enables you to send your site & tasks to prospective testers and get narrated videos back. It’s good if you already have testers.

    trymyui.com provides testers for you to enable 1-click usability testing.

    Both offer a free introductory plan.

  • Matt

    Testing for usability can be done in different ways before and after a site is launched. When testing, you have to be careful to remove biases from the results. Remember, bias in a small sample can ruin a test. You have to set the test up so that the participant acts as much like an actual visitor as possible. You, your clients, bosses, and partners all know what is on the page and are not “looking” for anything the way a visitor would. This makes it difficult to be objective.
    Removing potential biases:
    When user testing, you have to keep in mind that those who are taking a test may not be your targeted audience (this is even more so for business-to-business companies). As Chris Dyer mentioned above, to help introduce an audience to your test you need to give them a backstory. My clients are almost all B2B, so my tests tend to be very niche. The trick is to write a brief backstory that doesn’t bias the results (doesn’t completely spell out what to do) but does provide enough information to get accurate results. Testing the test can be a useful way to identify if you are biasing your test with too much info.
    Pre and Post Launch Testing:
    A/B split tests are excellent post-launch tests. You can test different variations of a page and let actual visitor data show you the best version. This is not without some risk, but this data will be extremely valuable. Using fivesecondtest.com or theclicktest.com or another usability testing tool to identify the best A and B can minimize the risk of lost conversions. Navflow.com can be very useful for identifying navigation problems before you get to A/B testing; this will help ensure that your A/B test results will be as accurate as possible. Tracking actual clicks (with heat mapping) during an A/B test can be done for $10 with crazyegg.com. Tracking clicks will identify what visual elements get your visitors’ attention.
    A/B tests and click tracking can be very useful, but testing before the launch will help ensure that you put your best foot forward and don’t have any glaring problems that will drive visitors away.

  • Matthew Kammerer

    Congrats to @ChrisDyer & @mrwweb – You’ve won a 6 month UsabilityHub solo subscription! Thanks for all the insightful comments!

  • Arslan

    Great overview of the tools and their uses.

  • Bud Michael

    I have been introduced to a company called Userlytics that has a tool/service for doing remote unmoderated qualitative UX testing. Are you familiar with them?
