User Research ain't no Magic Bullet | May 7, 2012

User research is an extremely powerful tool (or set of tools) for understanding customer needs and behaviours. As digital projects grow in size and complexity, the risk of building the wrong thing increases. So it makes sense to spend a portion of your budget ensuring that you’ve done your due diligence and are investing wisely.

Marketing teams have known this for years, so few companies would dream of launching a product or service without first understanding the market and ensuring product-market fit. However, for some reason this hasn’t filtered into the world of digital products. Large corporations are willing to sign off on six-figure projects without the necessary groundwork, while VC-backed start-ups are happy to take a “suck-it-and-see” approach.

At Clearleft we’re big fans of user research and it forms a regular part of our design methodology. Hell, we’ve even created a tool to help promote one form of the research process: usability testing. So I really did think twice about writing an article that looks at a negative side of research when it’s still so underutilised in our field. However, I’m seeing a disturbing trend that shows no sign of abating, so I wanted to bring it up here.

That problem is the over-reliance on user research.

I’m seeing more and more HCI graduates in the UK using research as a crutch. Some express this as an inability to make rational design decisions without first doing the “necessary research”, even when the problems and solutions are staring them in the face. It’s as if they’re scared of making a wrong design decision, so they use “research” as an excuse to delay the inevitable and as a safety net in case things go wrong.

When the risks of making a bad decision are high, it makes sense to be cautious. However, with budgets constantly squeezed, it takes an expert to know when to use a particular tool and what effect it’ll have on the rest of the project. Too little research and you’re designing in the dark (rarely a good thing). Too much research and you end up stealing budget from other parts of the project and crippling your ability to deliver, or at least to deliver anything other than a nicely formatted and bound research report.

I’m also seeing a lot of people confuse the purpose of user research. Good research throws light on a problem so that you can see all the necessary components and make an informed decision. It’s about insight and empathy: all those good things we designers are supposed to be experts in. However, I’m seeing far too many practitioners ignore the design process and let the research make their decisions for them.

This “research-directed” rather than “research-informed” approach to design will usually result in a better product, but you’ll quickly hit a local maximum. Good designers will let research lead them to solutions. Great designers will feed all of their data points into a process of “design synthesis” and use it to make great leaps of “inductive design reasoning”.

Lastly, I’m seeing too many graduates who loved doing research at university and want to further their studies at their clients’ expense. In fact, there are whole companies that seem set up purely for this purpose, running three-month “ethnographic” studies and 100-person usability tests just to show what any good designer could have told them on day one. This approach is wasteful and damages our ability to commission deeper research when it’s truly needed.

I think these issues are part of the reason why Lean Start-up is gaining so much attention at the moment. Developers have seen the uncertainty that too much up-front research can introduce into a project. They see it as a blocker that they need to route around by “lowering the cycle-time to validated learning”. So when the risks of failure are low, putting together a well-considered test case is often more cost-effective than doing lots of up-front research. It can also offer more concrete answers than other forms of research.

The problem, of course, is that there is no “one-size-fits-all” approach to designing digital projects. Some questions can be more effectively and efficiently answered through up-front research, while others can be realised through a well-constructed test case. Some problems can even be solved simply by thinking about them.

As designers, we need to get much better at knowing which tools to use and when. We also need to be wary whenever we find ourselves relying too heavily on a single technique, as that way leads to dogma. There are no magic bullets after all, just highly skilled gunmen with a range of tools at their disposal.

Posted at May 7, 2012 6:54 PM

Comments

Roger Attrill said on May 7, 2012 7:38 PM

Thank you for this article. For me, the big takeaway here is that great designers will feed all of their data points into a process of “design synthesis” and use it to make great leaps of “inductive design reasoning”.

Mike Monteiro says ‘If it helps you do your job, it’s part of your job’. The problem is, many do not think outside the box enough when deciding what it is that helps them do their job, and thus end up relying on the more tangible data, such as user research, becoming more analyst than designer in the process.

I think it’s always good to be reminded that while evidence-based design is undoubtedly important, we should question everything; source inspiration from both parallel and perpendicular technologies; and allow ‘room to breathe’ between the data and the design, so that the direction of one is not constrained by the rigidity of the other. Without that space to innovate, we would all do the same research, find the same results, and design the same product.

Justin Kirby said on May 16, 2012 11:11 AM

As an author and digital entrepreneur who has worked on 2,000+ digital communication projects since 1994, I can’t help thinking of the broader industry in terms of the parable of the six blind men and the elephant. Over the years I’ve worked with multinationals and their systems integrators, digital agencies (who Andy recently criticised), high-profile web designers and ‘undesigners’, code wallahs, as well as entrepreneurs on their bootstrapped start-ups. They are all looking at the same or similar problems, but often from very different perspectives. As such, it’s difficult to compare apples with apples, and there’s no zookeeper who can provide the overarching view.

So despite the Lean Start-up arena being informed by UX design, or at least using similar terminology, I’d argue that the context of the user research they conduct with their functioning prototype is different from that conducted by UX designers with clients large enough to afford them.

Let’s take advocacy, for example. I’ve heard and read UX designers (who should know better) praise and promote the Net Promoter Score metric, despite it not being statistically valid in the context they specify. Recommendation rates are seen very differently by bootstrapped digital start-ups looking for viral growth of their user base in order to determine whether their model is both valid and sustainable.

So I don’t think it’s really a matter of UX designers relying on too much or too little user research; I think it’s more a question of what research is being used by whom, and for what and why. Certainly, it’s got to be more thought out than using a high-profile metric incorrectly just to try and appear credible or more scientific.