User Research ain't no Magic Bullet | May 7, 2012
User research is an extremely powerful tool (or set of tools) for understanding customer needs and behaviours. As digital projects grow in size and complexity, the risk of building the wrong thing grows with them. So it makes sense to spend a portion of your budget to ensure that you’ve done your due diligence and are investing wisely.
Marketing teams have known this for years, so few companies would dream of launching a product or service without first understanding the market and ensuring product-market fit. However, for some reason this hasn’t filtered into the world of digital products. Large corporations are willing to sign off on six-figure projects without the necessary groundwork, while VC-backed start-ups are happy to take a “suck it and see” approach.
At Clearleft we’re big fans of user research and it forms a regular part of our design methodology. Hell, we’ve even created a tool to help promote one form of the research process—usability testing. So I really did think twice about writing an article that looks at a negative side to research, when it’s still so underutilised in our field. However I’m seeing a disturbing trend that shows no sign of abating, so I wanted to bring it up here.
That problem is the over-reliance on user research.
I’m seeing more and more HCI graduates in the UK using research as a crutch. Some express this as an inability to make rational design decisions without first doing the “necessary research”. This is true even when the problems and solutions are staring them in the face. It’s as if they’re scared of making a wrong design decision, so they use “research” as an excuse to delay the inevitable and as a safety net in case things go wrong.
When the risks of making a bad decision are high, it makes sense to be cautious. However, with budgets constantly being squeezed, it takes an expert to know when to use a particular tool and what effect it’ll have on the rest of the project. Too little research and you’re designing in the dark (rarely a good thing). Too much research and you end up stealing budget from other parts of the project and crippling your ability to deliver—or at least crippling your ability to deliver anything other than a nicely formatted and bound research report.
I’m also seeing a lot of people confuse the purpose of user research. Good research is used to throw light on a problem so that you can illuminate all the necessary components and make an informed decision. It’s about insight and empathy—all those good things we designers are supposed to be experts in. However I’m seeing far too many practitioners ignore the design process and let the research make their decisions for them.
This “research directed” rather than “research informed” approach to design will usually result in a better product, but you’ll quickly hit a local maximum. Good designers will let research lead them to the solutions. Great designers will feed all of their data points into a process of “design synthesis” and use it to make great leaps of “inductive design reasoning”.
Lastly, I’m seeing too many graduates who loved doing research at university and want to further their studies at their clients’ expense. In fact there are whole companies that seem set up purely for this purpose. Three-month “ethnographic” studies and 100-person usability tests just to show what any good designer could have told them on day one. This approach is wasteful and damages our ability to commission deeper research when it’s truly needed.
I think these issues are part of the reason why Lean Start-up is gaining so much attention at the moment. Developers have seen the delays that too much up-front research can inflict on a project. They see it as a blocker that needs to be routed around by “lowering the cycle time to validated learning”. So when the risks of failure are low, putting together a well-considered test case is often more cost effective than doing lots of up-front research. It can also offer more concrete answers than other forms of research.
The problem, of course, is that there is no “one size fits all” approach to designing digital products. Some questions can be more effectively and efficiently answered through up-front research, while others are best answered through a well-constructed test case. Some problems can even be solved simply by thinking about them.
As designers, we need to get much better at knowing which tools to use and when. We also need to be wary whenever we find ourselves relying too heavily on a single technique, as that way leads to dogma. There are no magic bullets after all, just highly skilled gunmen with a range of tools at their disposal.