Essential Ingredients for Successful User Studies

The aganki team recently took it upon ourselves to conduct some user studies for our product. We’ll admit, we were pretty nervous. We put it off for a while, but there’s really only so far a team can go working at a tiny table in a hometown coffee shop before needing real user feedback. We’d done the surveys. We’d talked to our target audience casually. We’d agonized over every last detail of our product as part of the target audience ourselves, but the time had come for a true test. We realized we’d rather get real user feedback now than head so far in the wrong direction that turning back would cost us any semblance of our original timeline.

We thought about the best way to carry out this incredibly important research, and spent a lot of time worrying about things that ultimately seemed unnecessary. Rather than over-complicate things, we tried to keep it simple. Based on what we did, how it turned out, and what we changed as a result, we’ve come up with what we think are the 5 essential steps to carrying out a successful research panel. We think this will be especially useful for others in our situation – small teams with limited resources who like to hit the ground running and just get things done.

We’ll be introducing the steps one by one so we can talk about each in depth. Below is a list of all five steps, and then we’ll be diving into the first one!

  1. Know What You Need to Know – determining the goals and metrics of your panel
  2. Test the Waters Yourself – building a working prototype of your product that will stand up to testing
  3. Make ‘Em an Offer They Can’t Refuse – getting target users in the door to actually test for you
  4. Information Overload – properly recording and using responses
  5. Analysis and Implementation – what to make of all this data, and how it actually helps you further your goals


Part 1: Know What You Need to Know

It sounds a bit strange, doesn’t it? If you knew what you needed to know, you’d know it, and wouldn’t need a research panel. What we’re really trying to say here, though, is that you can’t ask questions until you know what it is you want to find out. The very first thing we did was sit down and discuss the things we thought would be the biggest issues, but we were forced to acknowledge that users might see issues we didn’t see. So we came up with a game plan that would allow us to address things we were specifically concerned with, as well as those issues we couldn’t necessarily foresee.

What we did:

  • Created a list of tasks based on the features of our platform, to make sure the layout and design of the site, as well as the logic behind our features, made sense to our target audience. We gave users one task at a time and recorded how they accomplished it, or whether they couldn’t complete it at all. The idea was to catch issues that came up repeatedly – usually a sign of bad design or bad explanations.
  • Allowed users to explore the site on their own so they could discover features they hadn’t encountered while completing their assigned tasks. This way everyone got a complete picture of the site and all its functions!
  • Asked follow-up questions after the free exploration, covering the vaguer or more personal topics such as how users liked the design, their favorite features, things they thought were missing, etc. The reason we were adamant about this was to address tunnel vision – we wanted to make sure we weren’t just focused on how well the features we had implemented were working, but also on whether we had included the right features at all.
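If you want to keep your task notes in something more structured than a notebook, a session log can be very simple. The sketch below is purely illustrative – the task names, fields, and helper functions are our own invention, not any particular tool – but it shows the idea of recording one outcome per task and then surfacing the issues that come up repeatedly:

```python
# Minimal sketch of a per-session task log, assuming each session walks a
# user through a fixed list of tasks and records the outcome of each.
# All names here are hypothetical examples, not from a specific tool.

TASKS = [
    "Create an account",
    "Find the search feature",
    "Share an item with a friend",
]

def record_task(log, task, completed, notes=""):
    """Append one task outcome to a session's log."""
    log.append({"task": task, "completed": completed, "notes": notes})

def recurring_issues(sessions, threshold=2):
    """Tasks that failed in at least `threshold` sessions --
    usually a sign of bad design or bad explanations."""
    failures = {}
    for log in sessions:
        for entry in log:
            if not entry["completed"]:
                failures[entry["task"]] = failures.get(entry["task"], 0) + 1
    return [task for task, count in failures.items() if count >= threshold]

# Example: two sessions, both failing on the sharing task.
session_a, session_b = [], []
record_task(session_a, TASKS[2], False, "couldn't find the share button")
record_task(session_b, TASKS[2], False, "expected sharing under settings")
print(recurring_issues([session_a, session_b]))  # ['Share an item with a friend']
```

A spreadsheet works just as well, of course – the point is to record outcomes consistently enough that patterns across sessions become visible.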

As we progressed through our first panel, we not only caught some issues we had anticipated, we also caught some which we had not. Interestingly, however, we also caught issues with how we were conducting the panel itself. After carefully reviewing each team member’s feedback, we added a few more things to the process for the second time around.

Changes we made:

  • After coming up with the list of tasks the first time around, we had split it into two sets so no one user had to spend too much time going through the session. We realized, however, that each of us had a different approach to the user testing, which created a bias in the answers we received. We decided, for the next panel, that each team member would alternate between the two sets of tasks from one session to the next, giving us a more accurate picture.
  • We realized that because of the complex nature of our platform, it could be difficult for users to navigate the system, no matter how good the design. A product like this would normally ship with some kind of simple tutorial or demo, which we didn’t want to put in the work to develop at this point. So what we decided to do was provide half of the users with a quick explanation or tour of the site, and allow the other half to carry on without this demo. This would allow us to identify which issues were easily corrected with a guide, and which were deep-seated issues that required a system-level change!
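Both of these changes boil down to counterbalancing: making sure every combination of task set and demo condition gets covered evenly rather than letting each facilitator drift into their own habits. A quick sketch of one way to plan that out (the set names and field names here are hypothetical):

```python
# Sketch of a counterbalanced assignment plan, assuming two task sets
# and a yes/no demo condition. All names are illustrative.

def assign_conditions(participants):
    """Rotate participants through every combination of task set and
    demo condition so no single pairing dominates the results."""
    plan = []
    for i, p in enumerate(participants):
        plan.append({
            "participant": p,
            "task_set": "Set A" if i % 2 == 0 else "Set B",
            # Flip the demo condition every two participants so that
            # each task set appears both with and without the demo.
            "gets_demo": (i // 2) % 2 == 0,
        })
    return plan

plan = assign_conditions(["P1", "P2", "P3", "P4"])
for row in plan:
    print(row)
```

With four participants this yields all four combinations exactly once; with more, the pattern simply repeats. Even writing the rotation on an index card works – what matters is deciding the assignment in advance rather than ad hoc.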

We conducted several panels and made changes after each one, discovering new things every time. There’s almost no end to the improvements you can make, but our hope is that this can give you an idea of what types of things to be on the lookout for!

Our main takeaways:

  1. It’s important to know what you want to get out of a research panel. What are you hoping users will tell you? What parts of your product are you worried about? Is there a chance there’s something you haven’t considered at all?
  2. It’s almost as important to step away and try to take in the whole picture. When working on a project, it’s really easy to get caught up in the minuscule details of it. Not only that, but everything about a product makes sense to the person who built it. Stepping away and trying to view things with a fresh pair of eyes helps you foresee what newbies to your product may struggle with or dislike or miss altogether!
  3. Finally, how you ask the question is almost as important as what you ask. Be careful not to ask leading questions. Keep questions focused, but open-ended. Account for biases as much as you can! Change questions that aren’t giving you the kind of answers you’re looking for – that doesn’t mean change questions to only get positive answers, but change them to get useful ones, positive or negative.

Of course, we didn’t only change how we conducted the panel, but actually made changes to our platform itself based on the feedback we received each time, so we could then test these changes at the next panel. We’ll talk more about implementing feedback in upcoming steps, but for now we leave you to think about the initial step of deciding what you want to learn from your users!

What are some questions you’ve found useful to ask your test audience?