Developers “Own” The Code, So Shouldn’t Designers “Own” The Experience? | August 24, 2016

We’ve all been there. You spent months gathering business requirements, working out complex user journeys, crafting precision interface elements and testing them on a representative sample of users, only to see a final product that bears little resemblance to the desired experience.

Maybe you should have been more forceful and insisted on an agile approach, despite your belief that the organization wasn’t ready? Perhaps you should have done a better job with your pattern portfolios, ensuring that the developers used your modular code library rather than creating five different variations of a carousel. Or, maybe you even should’ve sat next to the development team every day, making sure what you designed actually came to pass.

Instead you’re left with a jumble of UI elements, with all the subtlety stripped out. Couldn’t they see that you worked for days getting the transitions just right, only for them to drop in a default animation library? And where on earth did that extra check-out step come from? I bet marketing threw that in at the last minute. You knew integration was going to be hard and compromises would need to be made, but we’re supposed to be making the users’ lives easier here, not the tech team’s.

When many people are involved in a project, it is very important to make sure that they have a common understanding of the problem and its solution.

Of course, there are loads of good reasons why the site is this way. Different teams with varying levels of skill working on different parts of the project, a bunch of last-minute changes shortening the development cycle, and a whole host of technical challenges. Still, why couldn’t the development team come and ask for your advice on their UI changes? You don’t mess with their code, so why do they have to change your designs around? Especially when the business impact could be huge! You’re only round the corner and would have been happy to help if they had just asked.

While the above story may be fictional, it’s a sentiment I hear from all corners of the design world, whether in-house or agency side. A carefully crafted experience ruined by a heavy-handed development team.

This experience reminds me of a news story I saw on a US local news channel several years ago. A county fair was running an endurance competition where the last person remaining with their hand on a pickup truck won the prize. I often think that design is like a massive game of “touch the truck”, with the development team always walking away with the keys at the end of the contest. Like the last word in an argument, the final person to come in contact with the site holds all the power and can dictate how it works or what it looks like. Especially if they claim that the particular target experience isn’t “technically possible”, which is often shorthand for “really difficult”, “I can’t be bothered doing it that way” or “I think there’s a better way of doing it, so I’m going to pull the dev card”.

Now I know I’m being unfairly harsh about developers here and I don’t mean to be. There are some amazingly talented technologists out there who really care about usability and want to do the best for the user. However, it often feels as though there’s an asymmetric level of respect between disciplines due to a belief that design is easy and therefore something everybody can have an opinion on, while development is hard and only for the specially initiated. So while designers are encouraged (sometimes expected) to involve everybody in the design process, they often aren’t afforded the same luxury.

To be honest, I don’t blame them. After all, I know just enough development to be dangerous, so you’d be an idiot if you wanted my opinion on database structure and code performance (other than that I largely think performance is a good thing). Then again, I do know enough to tell when the developers are fudging things, and it’s always fun to come back to them with a working prototype of something they said was impossible or would take months to implement — but I digress.

The problem is, I think a lot of developers are in the same position about design — they just don’t realize it. So when they make a change to an interface element based on something they heard at a conference a few years back, they may be lacking important context. Maybe this was something you’d already tested and discounted because it performed poorly. Perhaps you chose this element over another for a specific reason, like accessibility? Or perhaps the developers’ opinions were just wrong, based on how they experience the web as superusers rather than as an average Joe.

Now let’s get something straight here. I’m not saying that developers shouldn’t show an interest in design or have input into the design process. I’m a firm believer in cross-functional pairing and think that some of the best usability solutions emanate from the tech team. There are also a lot of talented people out there who span a multitude of disciplines. However, at some point the experience needs to be owned, and I don’t think it should be owned by the last person to open the HTML file and “touch the truck”.

So, if good designers respect the skill and experience great developers bring to the table, how about a little parity? If designers are happy for developers to “own the code”, why not show a similar amount of respect and let designers “own the experience”?

Everybody has an opinion. However, it’s not a good enough reason to just dive in and start making changes.

Doing this is fairly simple. If you ever find yourself in a situation where you’re not sure why something was designed in a particular way, and think it could be done better, don’t just dive in and start making changes. Instead, go and ask the designer why it was done that way. Similarly, if you hit a technical roadblock and think it would make your lives easier to design something a different way, go talk to your designer. They may be absolutely fine with your suggested changes, or they may want to go away and think about some other ways of solving the same problem.
After all, collaboration goes both ways. So if you don’t want designers to start “optimizing” your code on the live server, outside your version control processes, please stop doing the same to their design.

Originally published at www.smashingmagazine.com on August 9, 2016.

Comments (0)

Are we moving towards a post-Agile age? | August 23, 2016

Agile has been the dominant development methodology in our industry for some time now. While some teams are just getting to grips with Agile, others have extended it to the point that it’s no longer recognisable as Agile. In fact, many of the most progressive design and development teams are Agile only in name. What they are actually practicing is something new, different, and innately more interesting. Something I’ve been calling post-Agile thinking. But what exactly is post-Agile, and how did it come about?

The age of Waterfall

Agile emerged from the world of corporate IT. In this world it was common for teams of business analysts to spend months gathering requirements. These requirements would be thrown into the PRINCE2 project management process, from which a detailed specification—and Gantt chart—would eventually emerge. The development team would come up with a budget to deliver the required spec, and once the budget had been negotiated down by the client, work would start.

Systems analysts and technical architects would spend months modelling the data structure of the system. The more enlightened companies would hire Information Architects—and later UX Designers—to understand user needs and create hundreds of wireframes describing the user interface.

Humans are inherently bad at estimating future states and have the tendency to assume the best outcome—this is called estimation bias. As projects grow in size, they also grow in surface area and visibility, gathering more and more input from the organisation. As time marches on, the market changes, team members come and go, and new requirements get uncovered. Scope creep inevitably sets in.

To manage scope creep, digital teams required every change in scope to come in the form of a formal change request. Each change would be separately estimated, and budgets would dramatically increase. This is the reason you still hear of government IT projects going over budget by hundreds of millions of dollars. The Waterfall process, as it became known, makes this almost inevitable.

Ultimately, the traditional IT approach put too much responsibility in the hands of planners and middle managers, who were often removed from the day-to-day needs of the project.

The age of Agile

In response to the failures of traditional IT projects, a radical new development philosophy called Agile began to emerge. This new approach favoured just-in-time planning, conversations over documentation, and running code, effectively trying to counter all the things that went wrong with the typical IT project. The core tenets of this new philosophy were captured in the Agile Manifesto, a document which has largely stood the test of time.

As happens with most philosophies, people started to develop processes, practices and rituals to help explain how the tenets should be implemented in different situations. Different groups interpreted the manifesto differently, and specific schools started to emerge.

The most common Agile methodology we see on the web today is Scrum, although Kanban is another popular approach.

Rather than spending effort on huge scope documents which invariably change, Agile proponents will typically create a prioritised backlog of tasks. The project is then broken down into smaller chunks of activity which pull tasks from the backlog. These smaller chunks are easier to estimate and allow for much more flexibility. This opens up the possibility for regular re-prioritisation in the face of a changing market.
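To make the mechanics concrete, here is a minimal, hypothetical sketch of that pull model in Python. The task names, priorities and sprint capacity are invented purely for illustration, not taken from any particular Agile framework.

```python
# A toy model of a prioritised backlog feeding small chunks of work (sprints).
# All tasks, priorities and the capacity figure below are hypothetical.

backlog = [
    ("Checkout flow", 8),        # (task, priority): higher number = more important
    ("Search filters", 5),
    ("Carousel tweaks", 2),
    ("Accessibility audit", 9),
]

def pull_next_sprint(backlog, capacity=2):
    """Pull the highest-priority tasks into the next chunk of work, leaving the
    rest in the backlog to be re-prioritised before the following sprint."""
    backlog.sort(key=lambda item: item[1], reverse=True)
    sprint, backlog[:] = backlog[:capacity], backlog[capacity:]
    return sprint

print(pull_next_sprint(backlog))  # [('Accessibility audit', 9), ('Checkout flow', 8)]
print(backlog)                    # what remains gets re-prioritised next time round
```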

Agile—possibly unknowingly—adopted the military concepts of situational awareness and commander’s intent to move day-to-day decision making from the planners to the front-line teams. This effectively put control back in the hands of the developers.

This approach has demonstrated many benefits over the traditional IT project. But over time, Agile has become decidedly less agile as dogmas have crept in. Today many Agile projects feel as formal and conservative as the approaches they overthrew.

The post-Agile age

Perhaps we’re moving towards a post-Agile world? A world that is informed by the spirit of Agile, but has much more flexibility and nuance built in.

This post-Agile world draws upon the best elements of Agile, while ditching the dogma. It also draws upon the best elements of Design Thinking and even—God forbid—the dreaded Waterfall process.

People working in a post-Agile way don’t care which canon an idea comes from, as long as it works. The post-Agile practitioner cherry-picks from the best tools available, rather than sticking with a rigid framework. Post-Agile is less of a philosophy and more of a toolkit that has been built up over years of practice.

I believe Lean Startup and Lean UX are early manifestations of post-Agile thinking. Both of these approaches sound like new brands of project management, and each has its own dogma. If you dig below the surface, both of these practices are surprisingly lacking in process. Instead they represent a small number of tools—like the business model canvas—and a loose set of beliefs such as testing hypotheses in the most economical way possible.

My initial reaction to Lean was to perceive it as the emperor’s new clothes for this very reason. It came across as a repackaging of what many designers and developers had been doing already. With a general distrust for trademarks and brand names, I naturally pushed back.

What I initially took as a weakness, I now believe is its strength. With very little actual process, designers and developers around the world have imbued Lean with their own values, added their own processes, and made it their own. Lean has become all things to all people, the very definition of a post-Agile approach.

I won’t go into detail about how this relates to other movements like post-punk, post-modernism, or the rise of post-factual politics, although I do believe they have similar cultural roots.

Ultimately, post-Agile thinking is what happens when people have lived with Agile for a long time and start to adapt the process. It’s the combination of the practices they have adopted, the ones they have dropped, the new tools they have rolled in, as well as the ones they have rolled back.

Post-Agile is what comes next. Unless you truly believe that Scrum or Kanban is the pinnacle of design and development practice, there is always something new and more interesting around the corner. Let’s drop the dogma and enter this post-Agile world.

Comments (23)

Renting software sucks | August 15, 2016

Back in the olden days (c. 2000) people used to own software. When a new version of Photoshop or Fireworks came out, you’d assess the new features to decide whether they were worth the price of the upgrade. If you didn’t like what you saw, you could skip a generation or two, waiting until the company had a more compelling offering.

This gave consumers a certain amount of purchasing power, forcing software providers to constantly tweak their products to win customer favour. Of course, not every tweak worked, but the failures were often as instructive as the successes.

This started to change around 2004, when companies like 37signals released Basecamp, their Software as a Service project management tool. The price points were low—maybe only a few dollars a week—reducing the barrier to entry and spreading the cost over a longer period.

Other products quickly followed: accounting tools, invoicing tools, time-tracking tools, prototyping tools, testing tools, analytics tools, design tools. Jump forward to today, and the average freelancer or small design agency could have subscriptions to over a dozen such tools.

Subscription works well for products you use on a daily basis. For designers this could be Photoshop or InVision; for accountants this could be Xero or Float; and for consumers this could be Spotify or Netflix.

Subscription also encourages use—it encourages us to create habits in order to get our money’s worth. Like the free buffet at an all-inclusive hotel, we keep going back for more, even when we’re no longer hungry.

In doing so, subscription also locks us in, making it psychologically harder for us to try alternatives. Making it less likely for us to try that amazing local restaurant because we’ve already paid for our meals and need to beat the system. The sunk cost fallacy in all its glory.

Problems with the rental model become more apparent when you’re forced to rent things you use infrequently, like survey products or recruitment tools. You pay to maintain the opportunity of use, rather than for use itself.

We recently did an audit of all the small monthly payments going out of the company, and it’s amazing how quickly they mount up. Twenty dollars here and forty dollars there can become thousands each year if you’re not careful. Even more amazing are the number of products we barely used. Products that somebody signed up for a few years back and forgot to cancel.

You could blame us for our lack of diligence. However the gym membership model of rental is explicitly designed to elicit this behaviour. To encourage people to rent the opportunity, safe in the knowledge that the majority of members won’t overburden the system. Unclear billing practices and disincentives for unsubscribing—”if you leave you’ll lose all your data”—are designed for this very purpose.

Then you have the legacy tools. Products that you rarely use, but still need access to. Photoshop is a great example of this. Even if you’ve decided to move to Sketch, you know many of your clients still use Photoshop. In the olden days you would have kept an older version on your machine, costing you nothing. These days you need to maintain your Creative Cloud account across multiple team members, costing you thousands of dollars for something you rarely use.

This article was sparked by a recent Twitter storm I witnessed where Sketch users raised the idea of a rental model and vilified people who felt paying $30 a month for professional software (which currently retails at $99) was too much.

While I understand the sentiment—after all Sketch is the tool many designers use to make their living—you can’t take this monthly cost in isolation. Instead you need to calculate lifetime cost. As we all know from the real world, renting is always more expensive than ownership in the long term.
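As a back-of-the-envelope illustration, here is a quick lifetime-cost comparison in Python. The $30 monthly fee and $99 licence echo the figures above, but the five-year horizon and the single paid upgrade are assumptions made purely for the sake of the example.

```python
# A rough, hypothetical comparison of lifetime cost: renting vs owning a tool.
# The $30/month and $99 figures echo the example above; everything else is assumed.

def subscription_cost(monthly_price: float, years: int) -> float:
    return monthly_price * 12 * years

def ownership_cost(licence_price: float, paid_upgrades: int) -> float:
    # Assume each paid upgrade costs roughly the same as the original licence.
    return licence_price * (1 + paid_upgrades)

years = 5
print(subscription_cost(30, years))  # 1800.0, i.e. $1,800 of rent over five years
print(ownership_cost(99, 1))         # 198.0, i.e. buy once and pay for one upgrade
```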

You also have to consider rental costs in relationship to every other piece of rented software our industry considers necessary. With this number continuously increasing—but no sign of legacy tool rental declining—entering the digital industry is becoming an increasingly costly prospect for new designers and developers.

The thing I find strange is that, while we’ve been trained to believe renting is the norm for software over the past 10 years, few of us think this way about physical goods. We mostly still buy houses, cars, computers and music systems, rather than renting or leasing them. Many believe ownership offers some kind of noble status, despite the environmental cost of owning atoms over bits.

When we do rent physical products, it’s rarely on a subscription basis. Instead we’ll rent an apartment in New York through AirBnB for a weekend, a Zipcar for an afternoon or a Tasker for an hour.

I’m not saying software rental is always bad. I’d just like to see more diversity in SaaS business models. I’d welcome the ability to subscribe to mature services I use on a regular basis, but rent less common tools on a per-use basis. I’d also like to retain ownership of certain tools, like Sketch, as I think this is a better model for innovation.

We talk a lot about user-centered design in the digital world. Isn’t it about time we considered business models through the same critical lens?

Comments (0)

Why can’t designers solve more meaningful problems? | July 17, 2016

Every few months, somebody in our industry will question why designers don’t use their talents to solve more meaningful problems, like alleviating illness, hunger or debt. This statement will often be illustrated with a story of how a team from IDEO or Frog spent 3 months in a sub-Saharan village creating a new kind of water pump, a micro-payment app, or a revolutionary healthcare delivery service. The implication being that if these people can do it, why can’t you?

As somebody who believes in the power of design, I understand where this sentiment comes from. I also understand the frustration that comes from seeing smart and talented people seemingly wasting their skills on another image sharing platform or social network for cats. However this simple belief that designers should do more with their talent comes loaded with assumptions that make me feel very uncomfortable.

Firstly, let me state that I think designers are a genuinely caring group of people who got into this industry to have some visible impact on the world. They may not be saving lives on a daily basis, but they are making our collective experiences slightly more pleasant and less sucky. They do this by observing the world around them, being attuned to the needs of individuals, spotting hidden annoyances and frustrations, and then using their visual problem-solving skills to address them. As a result, designers are often in a permanent state of dissatisfaction with the world.

Designers also regularly find themselves overwhelmed by the volume of problems they are exposed to and expected to solve. This is partly down to the fact that companies still don’t understand the full value of design, and fail to resource it accordingly. However, it’s also down to the designer’s natural urge to please, which often causes them to take on too much work and spread themselves far too thin.

The message that designers aren’t trying hard enough to solve the really big, meaningful problems taps into this deep insecurity, making them feel even worse about the lack of impact they are having. As somebody who cares about the industry, I feel we should be trying to help lighten the load, rather than piling increasingly difficult-to-achieve expectations onto an already stressed-out workforce.

I also worry about who gets to define what counts as “meaningful” work. For some people, meaningful may mean taking 6 months off to help solve the refugee crisis—an amazing thing to do, I’m sure you agree. For others it may mean impacting hundreds of millions of people by working at Facebook or Twitter. That may seem facile to some, but both these platforms have been used to connect isolated communities, empower individuals, and in some cases, topple regimes. So who are we to judge what “meaningful” means to other people?

Many designers I speak to do actually want to have a bigger impact on the world, but don’t know where to start. It’s not quite as easy as giving up your day job, traveling to a crisis zone, and offering your services as a UX designer. It turns out that a lot of the world favours doctors, nurses and engineers over interaction designers and app developers. I sometimes feel there’s a whiff of Silicon Valley libertarianism tied up in the idea that designers should be solving the really big problems; the kind of things that Universities, Governments and NGOs have been struggling with for decades.

There is also a sense of privilege that comes with this notion. While some designers may be in the position to take a pay cut to join an NGO, or invest their savings into starting a community interest company, that’s not true of everybody. Designers may be better paid than many in society, but they still have mortgages to cover, families to look after, and personal lives to lead.

By comparison, many of the people I see extolling these notions have been very fortunate in their careers, and have the time and resources to tackle problems they find meaningful. Some have run successful companies for many years, while others are living on the proceeds of their stock options. Most are tackling these problems for the right reasons, but I can’t help thinking that some are doing so out of guilt. Doing so to make amends for all the cigarette and alcohol adverts they worked on as a young designer, or to justify the payout they got for being at the right company at the right time.

There is definitely an element of “mid-career crisis” in the sense that we should all be doing more with our lives than we actually are; making a bigger impact before our time is up. However it’s much easier to have these thoughts, and to see these opportunities, towards the end of one’s career, and then judge younger designers for missing what they themselves couldn’t see at that stage in their own lives.

Ironically I believe there are a large number of designers choosing to work for the greater good. Organisations like GDS in the UK, and Code for America in the US, have done a fantastic job of recruiting some of the best and the brightest from the tech world to help improve government and foster civic engagement. Other well known designers have given up their time to work on political campaigns, or donated their skills to charity. This is nothing new. Many famous graphic designers, type designers and advertising executives donate part of their time to good causes, be it fundraising drives, charity campaigns, or education.

Less well known, but no less important, are the tens of thousands of designers who work for organisations like Amnesty International, Greenpeace and the WWF. People who actively choose to work for companies they feel are making a positive impact in the world. Then we have the individual designers, working under the radar for lesser-known charities. Much of their work goes unreported. You’ll never see them on stage at a typical web design conference, or writing an article for your favourite digital magazine for instance. But don’t let this lack of visibility fool you into thinking great work isn’t going on; projects like Falling Whistles and the Lucky Iron Fish are just the tip of the iceberg.

So why aren’t more designers choosing to solve large, difficult, and meaningful problems? I think a big part of the reason is sociological. We look to our peers and our industry leaders to understand the career options available to us, and see what success looks like. If all the evidence says that being a successful designer means working for a well funded start-up, gaining a large Twitter following, and waiting till they IPO, that’s what people will do.

If we really want designers to be solving bigger problems, two things need to happen. First off, the people who currently own those problems need to recognise the value of design, and make it easier for designers to get involved. I think conferences like TED and publications like HBR have helped with this endeavour, but it’s still not obvious how designers can get involved and move the needle in a meaningful way.

Secondly, we need to create an alternative success narrative that shows it’s possible to be an amazing designer by doing meaningful work, without having done the rounds at a well known design consultancy or large tech company. We need to break the idea that solving big, important and meaningful problems is the preserve of the design-elite, and instead create alternate role models for budding new designers to follow.

Comments (2)

Universal wage | July 2, 2016

Stories of mass underemployment due to the rise of Artificial Intelligence have been popping up all over the place over the past 18 months. It would be easy to dismiss them as crackpot theories, were it not for the credibility of their authors, from scientists like Stephen Hawking to industrialists like Elon Musk.

Self-driving cars seem to have gone from science-fiction fantasy to real-world fact in a matter of months, and the world’s transport workers are right to be concerned. Uber are already talking about making their drivers redundant with fleets of self-driving taxis, while various local governments are experimenting with autonomous bus services. However, the real employment risk comes from the huge swathes of haulage vehicles which could be made redundant. This won’t happen soon, but I suspect our roads will be 30% autonomous vehicles by 2030.

While it’s easy to assume that AI will only affect blue collar jobs, as we saw with the automation of manufacturing, I’m not so sure. I’m currently using an Artificially Intelligent PA to book my meetings and manage my calendar. It’s fairly crude at the moment, but it won’t be long before internet agents will be booking my travel, arranging my accommodation, and informing the person I’m meeting that I’m stuck in traffic. All things that are possible today.

Jump forward 20 years and I can see a lot of professional classes affected by digital disruption and the move to AI. In this brave new future, how will governments cope with rising unemployment?

One idea that’s been raised by both right and left is that of a Universal Wage. Put simply, every citizen would automatically receive a small subsistence payment at the start of each month. This would be enough to cover basic expenses like food and accommodation, but it wouldn’t guarantee a high quality of life, so most people would still choose to top up their incomes through work.

Unlike unemployment benefits, people don’t lose their universal wage when they work, removing a huge disincentive for many people. Instead it provides greater flexibility in the type of work people are able to do. For instance, carers could fit work around their caring duties, or students around college. As such, the Universal Wage supports the current trend we’re seeing towards the gig economy.
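A tiny, entirely hypothetical calculation shows the shape of that disincentive. The payment amount and the withdrawal rate below are made-up figures for illustration only, not drawn from any real benefits system.

```python
# Toy comparison of take-home income: means-tested benefit vs universal wage.
# The 800/month payment and 70% withdrawal rate are invented for illustration.

PAYMENT = 800.0          # monthly payment in both scenarios
WITHDRAWAL_RATE = 0.7    # rate at which the means-tested benefit is clawed back

def income_means_tested(earnings: float) -> float:
    remaining_benefit = max(0.0, PAYMENT - earnings * WITHDRAWAL_RATE)
    return earnings + remaining_benefit

def income_universal_wage(earnings: float) -> float:
    return earnings + PAYMENT  # the payment is never withdrawn, whatever you earn

for earnings in (0, 500, 1000):
    print(earnings, income_means_tested(earnings), income_universal_wage(earnings))
# Under means-testing, earning an extra 500 adds only 150 to your income;
# under a universal wage, every unit earned is kept on top of the payment.
```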

This may seem like an impossibly expensive solution, but various economic studies have shown it to be just about feasible today with only a marginal rise in tax. The reason the cost isn’t higher comes in part from the savings the scheme would provide to the state. No more judging benefits on means, or policing infractions. Just a simple monthly payment for all.

The left love this policy for the social equality it brings. People can now spend their time in education and training, raising families and caring for loved ones, or exploring the arts. The right like it for their own reasons: it empowers individual entrepreneurship while simultaneously reducing the size of government.

Several Universal Wage experiments are taking place around the world at the moment, so it will be interesting to see what the results show.

Comments (0)

What the hell is design thinking anyway? | April 4, 2016

In a meeting a couple of weeks ago, one of my colleagues asked me to define “design thinking”. This question felt like a potential bear trap—after all, “design thinking” isn’t a new or distinct form of cognitive processing that didn’t exist before we designers laid claim to it—but I decided to blunder in regardless.

For me, design thinking is essentially a combination of three things: abductive reasoning, concept modelling, and the use of common design tools to solve uncommon problems.

If you’re unfamiliar with abductive reasoning, it’s worth checking out this primer by Jon Kolko. Essentially it’s the least well known of the three forms of reasoning (deductive, inductive and abductive), and the one that’s associated with creative problem solving.

Deductive reasoning is the traditional form of reasoning you’ll be familiar with from pure maths or physics. You start with a general hypothesis, then use evidence to prove (or disprove) its validity. In business, this type of thinking is probably how your finance department plans its budget i.e. to generate this much profit we need to invest this much in staff, this much in raw materials and this much in buying attention.

Inductive reasoning is the opposite of deductive reasoning, using experimentation to derive a general hypothesis from a set of specific observations. In business, inductive reasoning is often the preserve of the customer insight and marketing team i.e. we believe our customers will behave this way, based on a survey sample of x number of people.

By comparison, abductive reasoning is a form of reasoning where you make inferences (or educated guesses) based on an incomplete set of information in order to come up with the most likely solution. This is how doctors come up with their diagnoses, how many well-known scientists formed their hypotheses, and how most designers work. Interestingly it’s also the method fictional detective Sherlock Holmes used, despite being misattributed as deductive reasoning by Sir Arthur Conan Doyle.
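To make the distinction concrete, here is a minimal sketch of abduction as “inference to the best available explanation”. The causes and symptoms are entirely made up for the example; real diagnosis is obviously far more involved.

```python
# A toy illustration of abductive reasoning: given incomplete evidence, pick the
# explanation that accounts for most of it. Causes and symptoms are hypothetical.

KNOWN_CAUSES = {
    "flu": {"fever", "cough", "fatigue"},
    "hay fever": {"sneezing", "itchy eyes"},
    "common cold": {"cough", "sneezing", "sore throat"},
}

def best_explanation(observations):
    # Abduction yields the most plausible explanation, not a certain one.
    def evidence_explained(cause):
        return len(KNOWN_CAUSES[cause] & observations)
    return max(KNOWN_CAUSES, key=evidence_explained)

print(best_explanation({"cough", "fever"}))  # "flu": the likeliest guess, not a proof
```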

Abductive reasoning is a skill, and one that can be developed and finessed over time. It’s a skill many traditional businesses fail to understand, preferring the logical certainty of deductive reasoning or the statistical comfort of inductive reasoning. Fortunately that’s starting to change, as more and more companies start to embrace the “design thinking” movement.

So what else does design thinking entail, other than abductive reasoning? Well, as I mentioned earlier, I believe the second component is the unique ability designers have to model complex problems, processes, environments and solutions as visual metaphors rather than linguistic arguments. This ability allows designers to both understand and communicate complex and multifaceted problems in simple and easy-to-understand formats, be they domain maps, personas, service diagrams or something else entirely.

All too often businesses are seduced into thinking that everybody is in alignment, by describing complex concepts in language-heavy PowerPoint presentations, only to realise that everybody is holding a slightly different image of the situation in their heads. This is because, despite its amazing power, language is incredibly nuanced and open to interpretation (and manipulation). Some of our biggest wins as a company have involved creating graphic concept maps in the form of posters that can be hung around the office to ensure everybody understands the problem and is aligned on the solution. We call this activity design propaganda, and it’s a vital part of the design process.

A simpler incarnation is the design thinker’s tendency to “design in the open” and cover their walls with their research, models, and early prototypes. By making this work tangible, they can scan the possibility space, looking for unmade connections and drawing inferences that would have been impossible through language alone.

The final aspect of “design thinking” is the tools we designers have developed to help think through these complex conceptual problems. These tools include a wealth of research techniques, prototyping activities and design games, not to mention processes and frameworks like “lean” and “agile”. Designers are often better equipped than typical management consultants and MBAs to tackle the sorts of problems businesses are starting to experience. This is just one of the reasons consultants and business leaders have started turning to programs like the Singularity University and dSchool, to become versed in the language and practice of design thinking.

It’s really good news that “design thinking” is starting to gain wider adoption, but this success comes with a small warning. While we designers helped pioneer and popularise the practice of “design thinking”, we may eventually lose out to the traditional purveyors of corporate strategy. Why?

Because despite having the skills necessary to deliver these functions, we designers have shied away from the term, and resisted immersing ourselves fully in the business world. The large consultancies still have the business connections, they speak the same language, and they are now starting to adopt the best practices of our field. So unless we get out of our beautifully designed and ergonomically friendly ivory towers, we may find it’s a hollow and short-lived victory for design after all.

Comments (0)