Encouraging Negative Feedback During User Testing

Have you ever sat in on a user testing session, watching a user really struggle with the task at hand, only to have them tell you at the end that everything was easy and straightforward? How do you encourage these participants to be negative? I've discovered a few techniques that might help.

[Photo: a grumpy-looking baby, by Gagilas]


A colleague and I were running some usability tests on a registration form last week. The form was far from perfect and we could see our participants struggling through the process. However, when it came to debriefing them at the end of the tests, one by one they responded positively, informing us that the form was very simple and straightforward to use. Shocked and confused, we started to discuss different techniques to coax some negativity out of them.

The first thing we did was try to understand why our participants might feel pushed to be positive. Considering the situation we were putting them in highlighted three areas that might lead them away from negative feedback.

People don’t want to insult someone’s work

Firstly, we realized that asking someone to critique a stranger's work is especially difficult if the user thinks the stranger who designed it is sitting right next to them. So, we started beginning our sessions by informing the participants that we had nothing to do with the company that built the form being tested. This puts users at ease by assuring them that we have no personal attachment to the work being tested and that any criticism would not offend us at all.

We’re not testing them

The next point we noted was that some of the less confident people being tested seemed very apologetic when making “mistakes.” It suddenly occurred to us that our participants could be thinking that the troubles they were encountering were down to their own human error, not the poor design of the form. To solve this issue, we clearly stated at the start of each session that we were testing the system, not them, and that we knew the system had issues; we expected them to encounter problems along the way. Giving them these pieces of information really seemed to set some of our testers free, allowing them to blame every mistake or confusion on the system.

Users focus on the end goal

In a forced situation, where you are paying someone to complete a pre-arranged process, participants are more likely to focus on completing the task at hand. It seemed that our testers didn’t remember the small issues along the way as long as they completed what was being asked of them. To combat this, we ran the users back through the form, one stage at a time, and quizzed them on the issues we had noted during the initial task. This reminded the users of all the little problems they had encountered and allowed them to take us through their thought processes, identifying exactly what caused each problem.

You’re the expert

At the end of the day, the user can only tell you so much. However, the things they can show you are limitless. At the end of the session, it’s a good idea to ask the participants how they would improve things, because it gives them a greater feeling that they are helping you out and providing value. After all, if they could give you all the answers, it would be them doing the testing. The main role of the participants is to highlight all of the problems with the process. It doesn’t matter if they walk away at the end of the session telling you the form was a breeze and they loved it, as long as you have been able to note all of the areas where they stumbled.

With this in mind, it’s always a good idea to have a second person sit in on a user testing session, making notes on everything the user is doing. Or, if you have the budget and facilities, record or stream the session to review at a later date. In this particular set of tests, I sat a few meters behind the participant and facilitator, watching and making notes on everything that was going on. It’s also a good idea to ask the user to think out loud while completing the task, because it gives the person taking notes extra information to interpret.

Having someone there taking notes means that it really doesn’t matter what the user’s final opinion of the process is. As long as you have written down all the problems that tripped them up, you have what you came for.

To summarize

  • Assure the participants that you did not design the process being tested.
  • Inform the individuals that it’s the system being tested and not them.
  • Run them back through the process step-by-step and bring up any issues that you noted.
  • Have someone else on hand to note everything the user is doing.
  • At the end of the day, it’s the problems you want to find, not what your participants felt about the process.

Your thoughts

Those are just a few very small things that you can do to try to encourage more feedback from your users. What techniques do you use to draw out feedback from your user testing sessions? I’d love to hear them.

About the Author

Michael Wilson

Michael Wilson (AtiKuSDesign) is the creator and editor of the web and graphic design inspiration blog D-Lists. He lives and breathes design, spending 90% of his life online looking at a screen, finding inspiration everywhere. Adding to his passion for design, he is an experienced front-end and WordPress developer. He’s recently taken to the world of UI design with a keen interest in user experience. Follow him on Twitter or follow his complete set of online ramblings via his Flavors page.

21 Comments

  • Marin Todorov Reply

    Your bullet list hits the nail on the head. I’ve seen this as well; especially when presented with a more complex UI, testers would intentionally use only the very simple and straightforward features in order not to embarrass themselves in front of the observer.

    Great article, thanks for writing

  • Aaron Irizarry Reply

    Nice write up Michael, I recently faced some of the same challenges during some testing sessions.

    By encouraging the user to think out loud and that no one was going to be offended by comments, I was able to get more authentic answers from the users.

    Something I also did which I am not sure would work for everyone, every time was engage the tester in some light general conversation before the test began. Lightening the atmosphere really helped the tester feel a bit more comfortable.

    thanks for the write up

    ~ Aaron I

  • Derek Pennycuff Reply

    All good stuff, but in-house testing is my only option. So I’d be lying if I told them I’m not connected to the project. I’m also out of luck on having someone else to observe things. I rely on Silverback for that (little good that does for debriefing, but it’s better than nothing). The trend over the past decade has been hyper-specialization, but there are a few of us Jack-of-all-trade types out there. That puts us in tough situations, especially when we’re an in-house team of one (or a few) with little (or no) budget for outsourcing services such as usability testing. I hope that Steve Krug is right and even my half-assed testing is better than no testing at all.

    • Michael Wilson Reply

      Thanks for the comments guys.

      Derek, Steve Krug is indeed correct, any testing is better than no testing.

      I’m lucky enough to work for an agency who have the budget and the clients to perform some excellent user testing sessions.

      However, not everyone has that; the fact that you are conducting your own user tests already puts you ahead of a great deal of the web designers out there.

      Keep going with it

    • David Hamill Reply

      Derek, there’s no need to lie. Just be selective with the truth. You can say, “I didn’t build it and I’m not precious about it in any way, so don’t feel that you can’t be critical of it. After all, I’m trying to find ways of improving it.”

      If you built it but didn’t design it, you can swap “build” for “design”. Essentially, you just want to distance yourself from the thing you’re testing as much as you can.

  • Jacob Creech Reply

    Great article – I’m curious to know how the testing session you conducted worked out in the end? I think these are really good ideas for in-person testing, though.

    As Derek says, budget and time constraints can often put a halt to testing, but there are all sorts of neat tools out there depending on your price range, and how much time you have for the process. I guess everyone just needs to figure out what suits their purpose at a given point in time.

    • Michael Wilson Reply

      Jacob,

      The user sessions were really great. We discovered a lot of problems that we hadn’t recognized ourselves.

      We then built a prototype with some changes implemented to fix the problems and re-tested that. It was a massive improvement and our 2nd round of tests went really well and because most of the bigger problems were now fixed, we started to discover a few minor issues that we could also rectify.

      I agree with you about the different types of user testing. We try to conduct personal user tests as much as possible, because we find they provide the best research. However, if there is no time or budget available for this then we will try some cheaper/quicker services.

  • Owen Hodda Reply

    All good methodologies you ended up using, Michael, but it does beg the question: does it matter what your participants say?
    I know you make this point at the end, but it definitely deserves highlighting: if you can see that people are struggling with the form, but at the end of the session they say it was fine, then you still know they struggled with the form.
    When asked, most people would say Craigslist is ugly and hard to use, yet millions do use it. Most people say how easy their iPhone is to use, yet when we observe them in the lab they often can’t complete a range of pretty standard activities.
    Observing what people do and not what they say is the primary role of anybody undertaking user research.

  • Lyn Bain Reply

    Another point that I always make with users is that “they’re just about to do a redesign” (if it has already been launched or looks really polished) or “it’s really early in the process” and tell the user that the team is planning to do a lot of work anyway so feedback will just help make sure the right work is being done. I found that users would say things like “well, I wasn’t going to say this because it sounds like a lot of work…” I wanted to make sure that the users know that they aren’t causing “extra” work by saying something negative. I believe it helps.

    • Michael Wilson Reply

      Lyn, you make an excellent point. I’m going to add that to my list of things to mention when user testing.

      Thanks for the tip!

  • Brad Nunnally Reply

    You nailed it right on the head with “People don’t want to insult someone’s work.” It’s by far the hardest hump to get over when conducting usability studies. The tips you offer are spot on though. Each one is standard for Study Scripts that I write. I’ve even stopped users and asked them “Now tell me how you really feel.” Once the grins and chuckles die down, they get a bit more relaxed and open up.

  • Lourens Reply

    “it’s always a good idea to have a second person in on a user testing session, making notes on everything the user is doing. Or if you have the budget and facilities, record/stream the session to be reviewed at a later date.”

    What about a webcam? Most people forget after a few minutes they are being filmed (as long as they can’t see the footage during the test).

  • John Hyde Reply

    I start with a dummy warm-up task on a different website.

    For example, “find today’s weather forecast for Scotland on the BBC website”.

    This breaks the ice and helps to get the user talking about what she is doing and thinking.

  • John Hyde Reply

    For me, the ONLY value coming from a usability test is insight into WHY people are doing or not doing things.

    This is why getting users to think aloud is vital.

    Seeing a user struggle for 40 seconds on a “salutation” field is one thing. When she says “what does salutation mean?” then it’s priceless. It’s the insight you wanted.

  • Cliff Tyllick Reply

    Michael, this is good advice. When participants tell me everything went great and I know it didn’t, I start with, “OK, I’ll jot that down. But to give me a little more detail, talk me through what you were thinking and feeling at these points.” I might start with something that was easy, but then I go to something they found hard. Almost always, without my having to say anything else, they will progressively change their evaluation from “super” to “good” to “so-so” to “bad” to “really bad.”

    In the future, I might try to apply Rolf Molich’s advice for feedback: at least one positive for every three negatives. In other words, if there are a large number of places to review, hit something they found easy, then three things they found hard, then something they found easy, and so on.

    After all, I don’t really care whether they decide to rate the task super easy or absolutely undoable. The most important results are what they did and what they reveal about why they did it. To me, the difficulty rating is just another vehicle for coaxing that information out of them.

  • DESIGN Reply

    Decent article. All true, but nothing new here.

    FYI – the author bio alludes to a troll who takes from others rather than using them as “inspiration”.

  • Mike Reply

    We recently launched http://soulofathens.com and relied on volunteers for our user testing environment. We are in a university setting and were able to attract a wide range of testers with varying computer skills.

    We did two rounds, in which the first round we told our users very little about the project or the status of the site’s development. In the second round we gave the users a little more information about the site and mentioned it was in the middle of the development process.

    It was very interesting to observe the first round of users to see where they clicked and where they found difficulties. But we found we received more valuable information from the second round of users when they knew more about the site upfront. I feel that the first round of users thought they were testing a final product and were reluctant to share flaws. The second group was much more open about issues after they knew the site was still in development and could still be influenced.

    • Michael Wilson Reply

      That’s an interesting idea there.

      I agree that informing them that it’s a work in progress helps people to give feedback. It lets them know that their opinions can be taken into account and that you realize it’s not perfect yet.

  • Angela Colter Reply

    I would add two reasons to the four you’ve already listed:

    1. It’s no worse than what they’re used to.
    2. They may not realize they had a problem, failed the task, etc.

    I’ve had both of these issues come up repeatedly during usability testing. The first one — it’s no worse than what I’m used to — can make it really hard to get people to judge a site as a blank slate. Was the form hard to fill out? Yeah. Was it harder to fill out than every other online form you’ve ever used? No, not really. I find that participants are often loath to criticize a site for doing things that they are used to sites doing.

    The second one requires the moderator to be on the ball. The participant may have failed the task — provided incorrect information, didn’t finish it, etc. — but simply wasn’t aware of any feedback that indicated to them that they hadn’t actually accomplished what they set out to do.

    Ultimately, user feedback about their experience using a site or form has to take a back seat to what we’ve observed them doing.

  • Farhad Reply

    Our experience at TryMyUi matches the points you’ve made.

    At TryMyUi we provide one-click testing for remote usability studies so we provide the testers. We qualify the testers by having them go through a test and then grade them in part based upon how comfortable they are articulating what they didn’t like or struggled with. We’ve gone through several thousand qualifications to date, and like you indicated, you often see a prospective tester struggle on part of a test, and then, in the end-of-test questionnaire indicate that all was fine. And this even though they are remote and so the “fear of offending” is less of an issue.

    The good news is that it’s fairly easy to coach them to be transparent and direct in relaying their thoughts and impressions.

  • Kelly Jones Reply

    I’ve also found that the think out loud approach works really well. Also, when introducing the product or site let them know that the testing is done completely in isolation from the design and that feedback, both positive and negative will help the designers and developers get it spot on.
