Why can’t designers solve more meaningful problems? | July 17, 2016

Every few months, somebody in our industry will question why designers don’t use their talents to solve more meaningful problems, like ridding the world of illness, hunger or debt. The statement is often illustrated with a story of how a team from IDEO or Frog spent three months in a sub-Saharan village creating a new kind of water pump, a micro-payment app, or a revolutionary healthcare delivery service. The implication being that if these people can do it, why can’t you?

As somebody who believes in the power of design, I understand where this sentiment comes from. I also understand the frustration that comes from seeing smart and talented people seemingly wasting their skills on yet another image-sharing platform or social network for cats. However, this simple belief that designers should do more with their talent comes loaded with assumptions that make me feel very uncomfortable.

Firstly, let me state that I think designers are a genuinely caring group of people who got into this industry to have some visible impact on the world. They may not be saving lives on a daily basis, but they are making our collective experiences slightly more pleasant and less sucky. They do this by observing the world around them, being attuned to the needs of individuals, spotting hidden annoyances and frustrations, and then using their visual problem-solving skills to address them. As a result, designers are often in a permanent state of dissatisfaction with the world.

Designers also regularly find themselves overwhelmed by the volume of problems they are exposed to and expected to solve. This is partly down to the fact that companies still don’t understand the full value of design, and fail to resource accordingly. However, it’s also down to the designer’s natural urge to please, which often causes them to take on too much work and spread themselves far too thin.

The message that designers aren’t trying hard enough to solve the really big, meaningful problems taps into this deep insecurity, making them feel even worse about the lack of impact they are having than they already do. As somebody who cares about the industry, I feel we should be trying to lighten the load, rather than piling ever harder-to-achieve expectations onto an already stressed-out workforce.

I also worry about who gets to define what counts as “meaningful” work. For some people, meaningful may mean taking six months off to help solve the refugee crisis (an amazing thing to do, I’m sure you’ll agree). For others, it may mean impacting hundreds of millions of people by working at Facebook or Twitter. That may seem facile to some, but both of these platforms have been used to connect isolated communities, empower individuals and, in some cases, topple regimes. So who are we to judge what “meaningful” means to other people?

Many designers I speak to do actually want to have a bigger impact on the world, but don’t know where to start. It’s not quite as easy as giving up your day job, travelling to a crisis zone, and offering your services as a UX designer. It turns out that a lot of the world favours doctors, nurses and engineers over interaction designers and app developers. I sometimes feel there’s a whiff of Silicon Valley libertarianism tied up in the idea that designers should be solving the really big problems: the kind of things that universities, governments and NGOs have been struggling with for decades.

There is also a sense of privilege that comes with this notion. While some designers may be in the position to take a pay cut to join an NGO, or invest their savings into starting a community interest company, that’s not true of everybody. Designers may be better paid than many in society, but they still have mortgages to cover, families to look after, and personal lives to lead.

By comparison, many of the people I see extolling these notions have been very fortunate in their careers, and have the time and resources to tackle problems they find meaningful. Some have run successful companies for many years, while others are living on the proceeds of their stock options. Most are tackling these problems for the right reasons, but I can’t help thinking that some are doing so out of guilt: to make amends for all the cigarette and alcohol adverts they worked on as young designers, or to justify the payout they got for being at the right company at the right time.

There is definitely an element of “mid-career crisis” here: the sense that we should all be doing more with our lives than we actually are, and making a bigger impact before our time is up. However, it’s much easier to have these thoughts, and spot these opportunities, towards the end of one’s career, and then judge younger designers for not seeing what you yourself couldn’t see at that stage of your life.

Ironically, I believe there are already a large number of designers choosing to work for the greater good. Organisations like GDS in the UK and Code for America in the US have done a fantastic job of recruiting some of the best and brightest from the tech world to help improve government and foster civic engagement. Other well-known designers have given up their time to work on political campaigns, or donated their skills to charity. This is nothing new. Many famous graphic designers, type designers and advertising executives donate part of their time to good causes, be it fundraising drives, charity campaigns or education.

Less well known, but no less important, are the tens of thousands of designers who work for organisations like Amnesty International, Greenpeace and the WWF: people who actively choose to work for organisations they feel are making a positive impact in the world. Then we have the individual designers, working under the radar for lesser-known charities. Much of their work goes unreported. You’ll never see them on stage at a typical web design conference, or writing an article for your favourite digital magazine. But don’t let this lack of visibility fool you into thinking great work isn’t going on; projects like Falling Whistles and the Lucky Iron Fish are just the tip of the iceberg.

So why aren’t more designers choosing to solve large, difficult and meaningful problems? I think a big part of the reason is sociological. We look to our peers and our industry leaders to understand the career options available to us, and to see what success looks like. If all the evidence says that being a successful designer means working for a well-funded start-up, gaining a large Twitter following, and waiting for it to IPO, that’s what people will do.

If we really want designers to be solving bigger problems, two things need to happen. First off, the people who currently own those problems need to recognise the value of design, and make it easier for designers to get involved. I think conferences like TED and publications like HBR have helped with this endeavour, but it’s still not obvious how designers can get involved and move the needle in a meaningful way.

Secondly, we need to create an alternative success narrative that shows it’s possible to be an amazing designer by doing meaningful work, without having done the rounds at a well-known design consultancy or large tech company. We need to break the idea that solving big, important and meaningful problems is the preserve of the design elite, and instead create alternative role models for budding new designers to follow.

Comments (2)

What the hell is design thinking anyway? | April 4, 2016

In a meeting a couple of weeks ago, one of my colleagues asked me to define “design thinking”. The question felt like a potential bear trap (after all, “design thinking” isn’t some new or distinct form of cognitive processing that didn’t exist before we designers laid claim to it), but I decided to blunder in regardless.

For me, design thinking is essentially a combination of three things: abductive reasoning, concept modelling, and the use of common design tools to solve uncommon problems.

If you’re unfamiliar with abductive reasoning, it’s worth checking out this primer by Jon Kolko. Essentially it’s the least well known of the three forms of reasoning (deductive, inductive and abductive), and the one most closely associated with creative problem solving.

Deductive reasoning is the traditional form of reasoning you’ll be familiar with from pure maths or physics. You start with a general hypothesis, then use evidence to prove (or disprove) its validity. In business, this type of thinking is probably how your finance department plans its budget, e.g. to generate this much profit we need to invest this much in staff, this much in raw materials and this much in buying attention.

Inductive reasoning is the opposite of deductive reasoning: you use observation and experimentation to derive a general hypothesis from a set of specific observations. In business, inductive reasoning is often the preserve of the customer insight and marketing teams, e.g. we believe our customers will behave this way, based on a survey sample of x number of people.

By comparison, abductive reasoning is a form of reasoning where you make inferences (or educated guesses) based on an incomplete set of information in order to come up with the most likely explanation. This is how doctors come up with their diagnoses, how many well-known scientists formed their hypotheses, and how most designers work. Interestingly, it’s also the method the fictional detective Sherlock Holmes used, despite being misattributed as deductive reasoning by Sir Arthur Conan Doyle.
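
To make the distinction a little more concrete, here is a deliberately toy sketch of abductive inference, written in Python with entirely made-up diagnoses and symptoms: given an incomplete set of observations, pick the candidate explanation that accounts for them best.

# A toy illustration of abductive reasoning: given incomplete observations,
# choose the candidate explanation that accounts for them best.
# The diagnoses and symptoms below are invented purely for illustration.
observations = {"fever", "cough"}

candidate_explanations = {
    "common cold": {"cough", "sore throat", "runny nose"},
    "flu": {"fever", "cough", "aches"},
    "hay fever": {"runny nose", "itchy eyes"},
}

def most_likely_explanation(observed, candidates):
    # Reward symptoms the candidate explains, and lightly penalise
    # symptoms it predicts that we haven't actually observed.
    def score(expected):
        return len(observed & expected) - 0.5 * len(expected - observed)
    return max(candidates, key=lambda name: score(candidates[name]))

print(most_likely_explanation(observations, candidate_explanations))  # prints "flu"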

Abductive reasoning is a skill, and one that can be developed and finessed over time. It’s a skill many traditional businesses fail to understand, preferring the logical certainty of deductive reasoning or the statistical comfort of inductive reasoning. Fortunately, that’s starting to change as more and more companies embrace the “design thinking” movement.

So what else does design thinking entail, other than abductive reasoning? Well, as I mentioned earlier, I believe the second component is the unique ability designers have to model complex problems, processes, environments and solutions as visual metaphors rather than linguistic arguments. This ability allows designers to both understand and communicate complex, multifaceted problems in simple, easy-to-understand formats, be they domain maps, personas, service diagrams or something else entirely.

All too often, businesses are seduced into thinking that everybody is in alignment because a complex concept has been described in a language-heavy PowerPoint presentation, only to realise that everybody is holding a slightly different image of the situation in their heads. This is because, despite its amazing power, language is incredibly nuanced and open to interpretation (and manipulation). Some of our biggest wins as a company have involved creating graphic concept maps in the form of posters that can be hung around the office to ensure everybody understands the problem and is aligned on the solution. We call this activity design propaganda, and it’s a vital part of the design process.

A simpler incarnation is the design thinker’s tendency to “design in the open” and cover their walls with research, models and early prototypes. Making this work tangible allows them to scan the possibility space, looking for unmade connections and drawing inferences that would have been impossible through language alone.

The final aspect of “design thinking” is the set of tools we designers have developed to help think through these complex conceptual problems. These tools include a wealth of research techniques, prototyping activities and design games, not to mention processes and frameworks like “lean” and “agile”. Designers are often better equipped than typical management consultants and MBAs to tackle the sorts of problems businesses are starting to experience. This is just one of the reasons consultants and business leaders have started turning to programmes like Singularity University and the d.school to become versed in the language and practice of design thinking.

It’s really good news that “design thinking” is starting to gain wider adoption, but this success comes with a small warning. While we designers helped pioneer and popularise the practice of “design thinking”, we may eventually lose out to the traditional purveyors of corporate strategy. Why?

Because despite having the skills necessary to deliver these functions, we designers have shied away from the term and resisted immersing ourselves fully in the business world. The large internal consultancies still have the business connections, they speak the same language, and they are now starting to adopt the best practices of our field. So unless we get out of our beautifully designed and ergonomically friendly ivory towers, we may find it’s a hollow and short-lived victory for design after all.

Comments (0)

Product shearing layers and the "double-diamond" approach to design | April 26, 2015

The organising principles of agile are based around the needs of developers. So processes and systems are broken down into units of functionality and fed to development teams along a pipeline of delivery.

We all know that estimating big tech projects is a crapshoot, so the focus with agile is on just-in-time decision making, problem solving and optimising throughput. With so many unknowns, this is a much more rational and realistic approach than trying to plan and estimate everything up front.

Unfortunately, while this organising principle performs well for developers, it can be problematic for designers who need to tackle things as part of a coherent system, rather than a series of functional chunks.

Agile allows for iteration, so one way of tackling this problem is to give in to the inherent uncertainty and allow the design to emerge over time. You slowly acquire design debt, in the hope that you’ll have time at the end of the project to bring all the disparate pieces together. Sometimes this happens, but often it gets relegated in favour of more functionality.

I believe this is one of the reasons why so many established tech companies struggle to produce a holistic user experience and end up creating a disjointed UI instead. Lots of small pieces loosely joined rather than a coherent and considered system.

Lean UX has attempted to add a layer of design thinking to the process. However, the minimal shippable unit is still based around features rather than systems, and stories rather than journeys or experiences.

With this method, experiments are run in the wild on real users rather than on paper. This has the benefit of giving you real rather than approximated feedback. However, it can also lead to significant amounts of technical debt, and a poorly considered product sitting in the hands of your users for longer than is absolutely necessary. This probably doesn’t matter in a new start-up with just a few users, but it can be much more damaging for an established company with millions of customers expecting you to deliver on your brand promise. How often have we seen the MVP end up being the final product?

By comparison, the traditional UX approach sees problems iterated on with paper and prototypes rather than live users and working code, allowing you to iterate in a faster and more cost effective way. The trick is for these sketches to remain as recommendations rather than specifications, which is often not the case.

Of course these two approaches aren’t mutually exclusive, but I’d like to see Lean companies do more of their learning with prototypes and less at the expense of real users. Not everything has to be deduced from first principles, and there is a huge canon of design knowledge to draw upon here.

The tension between design and development reminds me of the famous shearing layers diagram that Stewart Brand used to explain the different speeds at which buildings evolve, with the interior moving faster than the shell.

While developers find it easier to break off pieces of functionality, encapsulate them and then tie everything together as they go, designers require a higher vantage point in order to map out the entire system to do their jobs well. The business often appreciates this vantage point as well.

In typical, functionality-driven agile environments, a single product will be broken down across multiple product teams, each with its own product manager, design lead and team of developers to service it. These smaller “products” will usually focus on “slices” of a user journey rather than the whole experience, which is another reason why many products feel somewhat disjointed.

The speed of progress is dictated by the speed at which the developers can ship, forcing designers to operate at a pace they’re often uncomfortable with. It also forces them to focus their talents on production and delivery rather than strategic thinking, which may be fine for junior designers but can be both isolating and demoralising for more experienced practitioners.

Ironically, a small group of designers is usually better able to service the needs of a large number of developers by working as a team across the whole product, rather than being separated out into individual product teams. However, this approach is often branded as “waterfall” and dismissed by many agile proponents.

Now, if you have a fairly unreconstructed design team who are more comfortable dictating than collaborating, they may have a point. The goal here isn’t to hand the dev team a spec document in the form of a set of wireframes or a working prototype and simply get them to build what you’ve specified without question.

However, I do believe we’re entering a post-agile world where product teams can adopt the best parts of waterfall and agile, without having to pick one or the other and stick to it dogmatically. Instead, let’s be aware of the differing shearing layers and adopt an approach that works for all parties.

In his recent IA Summit talk, Peter Merholz spoke about the “double diamond” approach, which is the method I personally favour.

At Clearleft we typically undertake an initial definition phase where we talk to the business, interview potential users, develop a product strategy, sketch out the key user journeys and create the basics of a design system. This system isn’t set in stone, but it provides just enough of an overview to set the general direction and avoid us acquiring too much design debt.

During this initial phase of the project, the team can be fairly small and efficient: maybe just one UX designer, one UI designer and one creative technologist. We can test ideas quickly on paper or with low-fidelity prototypes, and discard work that proves ineffective. We’re iterating, but at a pace dictated by the needs of the product rather than the artificial tempo of a “sprint”. We’re not looking for perfection, but we do hope to get the main design problems solved to a level of fidelity all parties (business and tech) are happy with.

Once the plan is fleshed out, we’re more than happy for the tech team to work in whatever way best suits them, be that Scrum, Kanban or some other variant of agile. With a better understanding of the whole, it becomes easier to break things down into chunks and iterate on the individual stories. Designs will change, and the language will evolve, but at a pace that works better for both parties.

This “double diamond” approach places the needs of designers at the forefront of the first “diamond” and sees them leading the initial strategic engagement. The second “diamond” flips this on its head and sees design servicing the needs of development and production.

I’m sure some people will claim that this is already part of the agile canon, be it “iteration zero”, “dual-track agile” or some other methodological variation. Personally, I really don’t care, just as long as design gets to dictate the process during the formative phases while development drives production.

Comments (0)

It's Design all the way down | September 14, 2012

A lot of the discussions I have about our profession end with somebody saying “Well, it’s all just design really”, or “it’s just good design and bad design”. This is a great way of ending a conversation when you’re bored and have a bus to catch. It’s the designer’s equivalent of Godwin’s law.

For the most part I agree that user-centred design and task-centred design are really just Design. Graphic design, product design and architectural design are also Design. You could even argue that engineering and programming are forms of Design, if you believe that Design is ultimately about making decisions that affect the final manifestation of a thing.

However it’s not an especially helpful statement.

Fashion design, jewellery design and architectural design differ because of the medium and the way that medium is enjoyed. These differences in medium combine with history to produce vastly differing approaches, and it’s these differing approaches that I find interesting.

Within digital design, there are various approaches (or schools), including user-centred design, task-centred design and genius design. All of these approaches have evolved to address different needs, and in turn produce slightly different outcomes. None of these approaches is an island, and good designers will often mix and match techniques. However, every designer uses a slightly different mixture and comes up with a vast array of results, some with more success than others.

Defining good and bad design is even harder. Is good design just a matter of aesthetics and personal taste, or can something be described as “good design” if it’s highly functional and fit for purpose but looks like crap? (I’m thinking of eBay, Amazon and a host of other websites here.)

Interestingly, it’s mostly senior people who use the “it’s just design” argument, and I think there is a good reason for this. Like the Buddha reaching enlightenment and realising that we’re all basically interconnected, designers at the peak of their careers start looking across disciplines and noticing the similarities. We are all part of this big interconnected thing called Design.

Congratulations. You’ve reached design Nirvana. Let’s all hold hands and pat each other on the back (not at the same time, obviously).

At this point many designers ascend to design heaven (or up their own arses) and detach themselves from the suffering of man. However, just like the Buddha, I think a few of these design gods would benefit from coming back down to earth and helping their fellow designers reach a similar state of mind.

Once you’ve reached enlightenment, you can’t go around telling people how obvious and interconnected everything is, or you’ll start sounding like David Icke (the lizards are responsible for everything, honest). Instead, the way to lead people to that understanding is to provide them with models of the world that expand their understanding and lead them to their own lightbulb moment, much in the same way that physics teachers explain Newtonian mechanics before moving on to quantum mechanics and string theory.

This is why I find conversations about the nature of design useful. They allow designers to expand their horizons in different directions until their models start to overlap, and to apply different lenses to their practice in order to understand how the various moving parts work, and where they fit in.

Sure, it’s all just Design in the end, but that doesn’t make user-centred design, task-centred design or any of the other schools of design any less useful or relevant.

Comments (3)

Visual Designers Are Just As Important As UX Designers | July 19, 2011

As I explained in my previous post, user experience design is a multidisciplinary activity which includes psychology, user research, information architecture, interaction design, graphic design and a host of other disciplines. Due to the complexity of the field a user experience team will typically be made up of individuals with a range of different specialisms.

On larger teams you’ll find people who focus on one specific area, such as user research or information architecture. You may even find people who specialise in specific activities such as usability testing or wireframing. This level of specialism isn’t possible in smaller teams, so practitioners tend to group related activities together.

Conceptually I believe you can break design into tangible and abstract activities. Tangible design typically draws on the artistic skills of the designer and results in some kind of visually pleasing artefact. This is what most people imagine when they think of design and it covers graphic design, typography and visual identity.

However, there is also a more abstract type of design which concerns itself with structure and function over form. The output from this type of design tends to be more conceptual in nature: wireframes, site maps and the like. One type of design isn’t any more valuable or important than the other; they’re just different.

When products and teams reach a certain size or level of complexity, one person can’t undertake all these roles. When this happens, natural divisions occur. So in small to mid sized teams it’s quite common to describe people who specialise in tangible design as visual designers, while those who focus on more abstract activities are known as user experience designers.

Now we all know that visual design is an undeniable part of the way people experience a product or service, so it may feel a little odd that user experience designers don’t actually design the entire experience. It may also be confusing that when user experience designers talk about “the UX” of a product, they are often referring to the more abstract essence of the product as described through wireframes, site maps and the like.

This ambiguity leads many visual designers to misunderstand what user experience design is, especially if they’ve never worked alongside a dedicated user experience designer. It has also led a lot of visual designers to mistakenly believe that because the work they create results in some kind of user experience, that makes them a user experience designer. While this may be true in a purely philosophical sense, it isn’t what people mean when they talk about user experience designers (try applying for a senior UX position without understanding user research, IA and interaction design and see how far you get).

The term “user experience architect” was coined in the early 1990s, but the roots of the discipline reach back to the 1940s and the fields of human factors and ergonomics. We’ve had dedicated user experience consultancies for the last ten years, and internal divisions before that. We’ve got numerous professional conferences attended by people who have been working in UX for much of their professional lives. In short, user experience design is a distinct and well-understood discipline that stretches back many years and isn’t simply a new buzzword to describe “the right way to design”.

Over the last 12 months I’ve come across far too many visual designers describing themselves as user experience designers because they don’t fully understand the term. Instead they’ve seen a few articles that explain how UX is the new black and decided to rebrand themselves.

I’ve also come across many fantastic visual designers who feel pressured into becoming user experience designers because they think this is the only way to progress their careers. It seems that, due to a lack of supply, user experience design has somehow come to represent a higher order of design, or design done right. At best this is nonsense, and at worst it is actually damaging to people’s careers.

So here’s the truth. Good visual designers are just as hard to find as good user experience designers. They have exactly the same status in the industry and earn pretty much the same rates. So you don’t need to become a user experience designer in order to take your career to the next level. Instead, surround yourself with experts, hone your skills and take pride in your work. With so few good designers out there, don’t go throwing away much-prized and hard-earned skills under the mistaken belief that you must become a UX designer in order to grow, because that’s just not the case.

Comments (23)

Three talks touching on science fiction's view of the future | April 6, 2011

Chris Noessel & Nathan Shedroff from UX Week.

Toby Barnes from Interesting North.

Matt Webb from The Do Lectures.

Comments (0)

Redesign outrage | April 4, 2011

It’s surprisingly common for redesigns to cause outrage amongst their users. People complain that they weren’t consulted, criticise the quality and appropriateness of the new solution, and state that “if it ain’t broke, don’t fix it.” However, if you check back on the site a while later, you’ll often see the most critical detractors become the most vocal supporters. Why is this?

I think there are three fundamental cognitive biases at play here.

First off, we have the concept of status quo bias: the idea that people tend not to change existing behaviour unless the incentive to change is compelling. You could argue, for example, that many people chose not to switch from DVD to Blu-ray because the benefits of higher-definition viewing just weren’t attractive enough. In the context of a redesign, many people may not understand why it was even necessary, as the existing site allowed them to do everything they needed and wanted to do.

Next up we have loss aversion: the idea that people prefer to avoid losses rather than acquire gains. So in the context of a redesign, people’s sense of loss may be overshadowing the benefits they have gained.

Lastly we have something called the endowment effect. This bias says that people often place a higher value on something they own than something they don’t. This may have something to do with the memories associated with that item. So in the context of a redesign, users will probably have bonded with the old site, while the new site has yet to create an emotional attachment.

Of course all of these cognitive biases are intertwined so it’s very difficult to tell which ones are having an effect and to what level. I’m also sure there are other factors at play here so I’d be interested to see if anybody has done any original research in this area.

This post was inspired by a recent interview in the Independent.

Comments (8)

Practical wisdom | April 4, 2011

A few evenings ago I watched a really interesting TED talk by Barry Schwartz on practical wisdom.

Although his examples were rooted in education and law, I couldn’t help but feel that practical wisdom was also the core of good design. After all, what is design except the ability to improvise novel solutions to new problems based on your knowledge of a set of rules and your ability to apply them with flexibility?

The talk also made me think about my own personal feelings towards project management. I believe that project management processes are often used as a series of inflexible rules (or sticks) intended to ensure that average teams reach a minimum level of performance. However, this has the opposite effect on good people, constricting them and eventually demotivating them. I’ve seen this happen with numerous friends who wanted to do good work but ended up being crushed by industrial-age management and forced to leave in order to protect their own sanity.

Instead, I think it’s important to hire good people and give them the flexibility to set their own agendas and apply their own rules. This is obviously one of the goals of the agile manifesto: reduce bureaucracy and let the genuine good nature of designers and developers flourish. Sadly, a lot of agile processes seem to be reinstating these rules in order to manage less experienced teams, starting the cycle all over again.

Barry Schwartz talked about two kinds of people who find themselves in this situation. One type of person tries to work within the constraints of the system and bend or subvert the rules in a way which allows them to do good work. Many of the best University educators I’ve met fall into this category. Then you have the change agents. The people who are so incensed by the rules that they want to create systematic change. These are the people who interest me the most. The people who can come into organisations, tear down the walls and build up new structures and new teams who are able to effect real progress.

So in order to become better designers we need to think flexibly, learn through doing and cultivate that sense of practical wisdom.

Comments (0)

Big design up front | March 24, 2011

Like most designers and developers, we’ve come to the conclusion that big design up front doesn’t work. Six-month requirement-gathering exercises which result in thousand-page specifications don’t work. In the time it has taken to produce these requirements, the business landscape has almost certainly changed. So new requirements appear, and designers and developers are forced to battle scope creep and keep these documents alive, while at the same time trying to build something that is ever shifting and changing.

So instead we’ve seen a move to agile development, and an almost zealous backlash against detailed planning of any kind. However, just because big design up front doesn’t work, that doesn’t mean we should ditch design planning altogether. As a race we tend to flip-flop between polar opposites rather than exploring the middle ground. The problem doesn’t lie with requirements gathering, design or planning; it’s about the amount you do.

Too much planning and you get bogged down in nuances; sometimes it’s just easier and quicker to design something than it is to discuss it. Too much documentation and you end up spending more time managing the documentation than you do managing the design. The converse is also true. Too little documentation and it’s easy for large teams to lose their way, and for the fidelity of the solution to suffer. Just as with too much planning, too little planning leads to inefficiency, as work that was done several sprints ago needs to be redone based on decisions made further down the line.

So I don’t think the argument should be agile vs waterfall. Instead it’s about knowing the skills, abilities and interests of your team and initiating a level of planning which is appropriate for the project at hand. There’s no one-size-fits-all approach to developing good products, so I really wish we’d stop chasing the Holy Grail and having holy wars in the process.

Instead, let’s go back to the core commandments of agile and prefer conversations to documentation, while understanding that in some instances documentation is necessary. Similarly, zero design won’t work, while designing everything up front is a fiction. You need to find the right level of fidelity, and tweak the smaller issues as you go along.

It’s all about balance, people, so let’s start finding ours.

Comments (6)

The Power of Info-graphics | January 4, 2007

There has been an interesting story circulating in the press today about food labelling. The government are trying to encourage food manufacturers to label food in such a way that shoppers can clearly tell which of a number of similar products is the healthiest, just by glancing at them.

The Food Standards Agency realised that the current labelling system (while very good by international standards) is still quite complicated. If you want to choose between two products for health reasons, you need to spend a considerable amount of time looking at the two labels, and even then it is difficult to tell which is better unless you know exactly how much salt, fat or sugar you are supposed to eat each day.

Old-style, information-heavy food label

Two rival labelling systems have emerged. The first is called the traffic light system, and studies have shown that it provides shoppers with a clear indication of which product is the least healthy. It works through a colour-coding system: green is healthy, amber is medium and red is unhealthy. Four main metrics are communicated: the amount of fat, saturates, sugar and salt an item contains. So by quickly glancing at a product you can tell whether it is “unhealthy” by the amount of red and amber displayed on the info-graphic.

Traffic light label on a bag of chips (fries)

Traffic light label on a pizza box

Traffic light label using the alternative pie chart format

The problem is, it turns out that when faced with the traffic lights, shoppers naturally (and some would say instinctively) avoid the products displaying a lot of red. This has obviously upset many manufacturers, who prefer a less emotive system called the GDA system.

This system shows how much of an adult’s guideline daily amount (GDA) of calories, sugar, fat, saturates and salt the product contains. The info-graphics the manufacturers prefer don’t include the traffic light colours, making them much less emotive. They argue that these info-graphics provide more information to the shopper and lead to a more informed decision.

The new GDA label

GDA label from a box of Nestlé cereal

Supporters of the traffic light system say that the GDA system is flawed because many people don’t have the time, ability or inclination to do mathematical calculations while shopping. This is an interesting argument from a usability, user-centred design and accessibility standpoint, and one that is actually supported by user testing. They argue that when you are in a hurry, the traffic light system gives the shopper the information they need at a glance, and is therefore superior.

However, supporters of the GDA system counter with the argument that some products, like cheese, which are naturally high in fat and would therefore always carry a red label, can still be eaten as part of a healthy diet as long as the GDA isn’t exceeded.

I find it very interesting that a story about info-graphic design has been all over the TV and newspapers today. I also think it is very interesting how the two different camps are reacting to the two different types of info-graphic. To throw salt (sugar, fat and saturates) onto the wound, one option would be to combine both techniques. It would be very simple to add colour to the GDA info-graphic, but desaturate it slightly to make it less emotive. That way you would still be able to see which elements were high, medium or low at a glance while hopefully placating the manufacturers. I was planning to knock up an example but I’ve just got a new laptop and Adobe are forcing me to phone them up again to prove that I own my copy of Photoshop. This is starting to get tedious.
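
For what it’s worth, here is a rough sketch of how the combined label might be calculated, expressed in Python rather than Photoshop. The GDA figures and colour thresholds are illustrative numbers plugged in for the example, not the official FSA values.

# Combine the two systems: express each nutrient as a percentage of the
# guideline daily amount (GDA) per portion, but colour-code it traffic-light
# style so the at-a-glance reading is preserved.
# Both the GDA figures and the per-100g thresholds are illustrative only.
GDA = {"fat": 70.0, "saturates": 20.0, "sugar": 90.0, "salt": 6.0}  # grams per day

# (upper bound for green, upper bound for amber) per 100g; above amber is red
THRESHOLDS = {
    "fat": (3.0, 17.5),
    "saturates": (1.5, 5.0),
    "sugar": (5.0, 22.5),
    "salt": (0.3, 1.5),
}

def traffic_light(nutrient, per_100g):
    green_max, amber_max = THRESHOLDS[nutrient]
    if per_100g <= green_max:
        return "green"
    if per_100g <= amber_max:
        return "amber"
    return "red"

def combined_label(per_100g, portion_grams):
    # Return a (colour, percentage of GDA per portion) pair for each nutrient.
    label = {}
    for nutrient, amount in per_100g.items():
        per_portion = amount * portion_grams / 100.0
        pct_gda = round(100.0 * per_portion / GDA[nutrient])
        label[nutrient] = (traffic_light(nutrient, amount), pct_gda)
    return label

# A hypothetical ready meal: nutrient values per 100g, eaten as a 400g portion
print(combined_label({"fat": 8.0, "saturates": 3.5, "sugar": 6.0, "salt": 0.9}, 400))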

Comments (19)