Developers “Own” The Code, So Shouldn’t Designers “Own” The Experience? | August 24, 2016
We’ve all been there. You spent months gathering business requirements, working out complex user journeys, crafting precision interface elements and testing them on a representative sample of users, only to see a final product that bears little resemblance to the desired experience.
Maybe you should have been more forceful and insisted on an agile approach, despite your belief that the organization wasn’t ready? Perhaps you should have done a better job with your pattern portfolios, ensuring that the developers used your modular code library rather than creating five different variations of a carousel. Or, maybe you even should’ve sat next to the development team every day, making sure what you designed actually came to pass.
Instead you’re left with a jumble of UI elements, with all the subtlety stripped out. Couldn’t they see that you worked for days getting the transitions just right, only for them to drop in a default animation library? And where on earth did that extra check-out step come from? I bet marketing threw that in at the last minute. You knew integration was going to be hard and compromises would need to be made, but you’re supposed to be making users’ lives easier here, not the tech team’s.
Of course, there are loads of good reasons why the site is this way. Different teams with varying levels of skill working on different parts of the project, a bunch of last-minute changes shortening the development cycle, and a whole host of technical challenges. Still, why couldn’t the development team come and ask for your advice on their UI changes? You don’t mess with their code, so why do they have to change your designs around? Especially when the business impact could be huge! You’re only round the corner and would have been happy to help if they had just asked.
While the above story may be fictional, it’s a sentiment I hear from all corners of the design world, whether in-house or agency side: a carefully crafted experience ruined by a heavy-handed development team.
This experience reminds me of a news story I saw on a US local news channel several years ago. A county fair was running an endurance competition where the last person remaining with their hand on a pickup truck won the prize. I often think that design is like a massive game of “touch the truck”, with the development team always walking away with the keys at the end of the contest. Like the last word in an argument, the final person to come in contact with the site holds all the power and can dictate how it works or what it looks like. Especially if they claim that the particular target experience isn’t “technically possible”, which is often shorthand for “really difficult”, “I can’t be bothered doing it that way” or “I think there’s a better way of doing it, so I’m going to pull the dev card”.
Now I know I’m being unfairly harsh about developers here and I don’t mean to be. There are some amazingly talented technologists out there who really care about usability and want to do the best for the user. However, it often feels as though there’s an asymmetric level of respect between disciplines due to a belief that design is easy and therefore something everybody can have an opinion on, while development is hard and only for the specially initiated. So while designers are encouraged (sometimes expected) to involve everybody in the design process, they often aren’t afforded the same luxury.
To be honest, I don’t blame them. After all, I know just enough development to be dangerous, so you’d be an idiot if you wanted my opinion on database structure and code performance (other than that I largely think performance is a good thing). Then again, I do know enough to tell when the developers are fudging things, and it’s always fun to come back to them with a working prototype of something they said was impossible or would take months to implement — but I digress.
The problem is, I think a lot of developers are in the same position when it comes to design — they just don’t realize it. So when they change an interface element based on something they heard at a conference a few years back, they may be lacking important context. Maybe this was something you’d already tested and discounted because it performed poorly. Perhaps you chose this element over another for a specific reason, like accessibility? Or perhaps the developers’ opinions were just wrong, based on how they experience the web as superusers rather than as the average Joe.
Now let’s get something straight here. I’m not saying that developers shouldn’t show an interest in design or input into the design process. I’m a firm believer in cross-functional pairing and think that some of the best usability solutions emanate from the tech team. There are also a lot of talented people out there who span a multitude of disciplines. However, at some point the experience needs to be owned, and I don’t think it should be owned by the last person to open the HTML file and “touch the truck”.
So, if good designers respect the skill and experience great developers bring to the table, how about a little parity? If designers are happy for developers to “own the code”, why not show a similar amount of respect and let designers “own the experience”?
Doing this is fairly simple. If you ever find yourself in a situation where you’re not sure why something was designed in a particular way, and think it could be done better, don’t just dive in and start making changes. Go and talk to your designer first. Similarly, if you hit a technical roadblock and think a different design would make your life easier, raise it with them too. They may be absolutely fine with your suggested changes, or they may want to go away and think about other ways of solving the same problem.
After all, collaboration goes both ways. So if you don’t want designers to start “optimizing” your code on the live server, outside your version control processes, please stop doing the same to their design.
Originally published at www.smashingmagazine.com on August 9, 2016.
Are we moving towards a post-Agile age? | August 23, 2016
Agile has been the dominant development methodology in our industry for some time now. While some teams are just getting to grips with Agile, others have extended it to the point that it’s no longer recognisable as Agile. In fact, many of the most progressive design and development teams are Agile only in name. What they are actually practicing is something new, different, and innately more interesting. Something I’ve been calling post-Agile thinking. But what exactly is post-Agile, and how did it come about?
The age of Waterfall
Agile emerged from the world of corporate IT. In this world it was common for teams of business analysts to spend months gathering requirements. These requirements would be fed into the PRINCE2 project management method, from which a detailed specification—and Gantt chart—would eventually emerge. The development team would come up with a budget to deliver the required spec, and once they had been negotiated down by the client, work would start.
Systems analysts and technical architects would spend months modelling the data structure of the system. The more enlightened companies would hire Information Architects—and later UX Designers—to understand user needs and create hundreds of wireframes describing the user interface.
Humans are inherently bad at estimating future states and have the tendency to assume the best outcome—this is called estimation bias. As projects grow in size, they also grow in surface area and visibility, gathering more and more input from the organisation. As time marches on, the market changes, team members come and go, and new requirements get uncovered. Scope creep inevitably sets in.
To manage scope creep, digital teams required every change in scope to come in the form of a formal change request. Each change would be separately estimated, and budgets would dramatically increase. This is the reason you still hear of government IT projects going over budget by hundreds of millions of dollars. The Waterfall process, as it became known, makes this almost inevitable.
Ultimately, the traditional IT approach put too much responsibility in the hands of planners and middle managers, who were often removed from the day-to-day needs of the project.
The age of Agile
In response to the failures of traditional IT projects, a radical new development philosophy called Agile began to emerge. This new approach favoured just-in-time planning, conversations over documentation, and running code; effectively trying to counter all the things that went wrong with the typical IT project. The core tenets of this new philosophy were captured in the Agile Manifesto, a document which has largely stood the test of time.
As happens with most philosophies, people started to develop processes, practices and rituals to help explain how the tenets should be implemented in different situations. Different groups interpreted the manifesto differently, and specific schools started to emerge.
The most common Agile methodology we see on the web today is Scrum, although Kanban is another popular approach.
Rather than spending effort on huge scope documents which invariably change, Agile proponents will typically create a prioritised backlog of tasks. The project is then broken down into smaller chunks of activity which pull tasks from the backlog. These smaller chunks are easier to estimate and allow for much more flexibility. This opens up the possibility for regular re-prioritisation in the face of a changing market.
Agile—possibly unknowingly—adopted the military concepts of situational awareness and command intent to move day-to-day decision making from the planners to the front-line teams. This effectively put control back in the hands of the developers.
This approach has demonstrated many benefits over the traditional IT project. But over time, Agile has become decidedly less agile as dogma crept in. Today many Agile projects feel as formal and conservative as the approaches they overthrew.
The post-Agile age
Perhaps we’re moving towards a post-Agile world? A world that is informed by the spirit of Agile, but has much more flexibility and nuance built in.
This post-Agile world draws upon the best elements of Agile, while ditching the dogma. It also draws upon the best elements of Design Thinking and even—God forbid—the dreaded Waterfall process.
People working in a post-Agile way don’t care which canon an idea comes from, as long as it works. The post-Agile practitioner cherry-picks from the best tools available, rather than sticking to a rigid framework. Post-Agile is less a philosophy and more a toolkit built up over years of practice.
I believe Lean Startup and Lean UX are early manifestations of post-Agile thinking. Both of these approaches sound like new brands of project management, and each has its own dogma. Yet if you dig below the surface, both of these practices are surprisingly light on process. Instead they represent a small number of tools—like the business model canvas—and a loose set of beliefs, such as testing hypotheses in the most economical way possible.
My initial reaction to Lean was to perceive it as the emperor’s new clothes for this very reason. It came across as a repackaging of what many designers and developers had been doing already. With a general distrust for trademarks and brand names, I naturally pushed back.
What I initially took as a weakness, I now believe is its strength. With very little actual process, designers and developers around the world have imbued Lean with their own values, added their own processes, and made it their own. Lean has become all things to all people, the very definition of a post-Agile approach.
I won’t go into detail how this relates to other movements like post-punk, post-modernism, or the rise of post-factual politics; although I do believe they have similar cultural roots.
Ultimately, post-Agile thinking is what happens when people have lived with Agile for a long time and start to adapt the process. It’s the combination of the practices they have adopted, the ones they have dropped, the new tools they have rolled in, as well as the ones they have rolled back.
Post-Agile is what comes next. Unless you truly believe that Scrum or Kanban is the pinnacle of design and development practice, there is always something new and more interesting around the corner. Let’s drop the dogma and enter this post-Agile world.
Renting software sucks | August 15, 2016
Back in the olden days (c. 2000) people used to own software. When a new version of Photoshop or Fireworks came out, you’d assess the new features to decide whether they were worth the price of the upgrade. If you didn’t like what you saw, you could skip a generation or two, waiting until the company had a more compelling offering.
This gave consumers a certain amount of purchasing power, forcing software providers to constantly tweak their products to win customer favour. Of course, not every tweak worked, but the failures were often as instructive as the successes.
This started to change around 2004, when companies like 37signals released Basecamp, their Software as a Service project management tool. The price points were low—maybe only a few dollars a week—reducing the barrier to entry and spreading the cost over a longer period.
Other products quickly followed: accounting tools, invoicing tools, time-tracking tools, prototyping tools, testing tools, analytics tools, design tools. Jump forward to today, and the average freelancer or small design agency could have subscriptions to over a dozen such tools.
Subscription works well for products you use on a daily basis. For designers this could be Photoshop or InVision; for accountants this could be Xero or Float; and for consumers this could be Spotify or Netflix.
Subscription also encourages use—it encourages us to create habits in order to get our money’s worth. Like the free buffet at an all-inclusive hotel, we keep going back for more, even when we’re no longer hungry.
In doing so, subscription also locks us in, making it psychologically harder for us to try alternatives. We’re less likely to try that amazing local restaurant because we’ve already paid for our meals and need to beat the system. The sunk cost fallacy in all its glory.
Problems with the rental model become more apparent when you’re forced to rent things you use infrequently, like survey products or recruitment tools. You pay to maintain the opportunity of use, rather than for use itself.
We recently did an audit of all the small monthly payments going out of the company, and it’s amazing how quickly they mount up. Twenty dollars here and forty dollars there can become thousands each year if you’re not careful. Even more amazing are the number of products we barely used. Products that somebody signed up for a few years back and forgot to cancel.
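To make the arithmetic concrete, here is a minimal sketch of the kind of audit described above. The tool names and prices are invented for illustration; the point is simply how quickly modest monthly fees compound into a four-figure annual bill.

```python
# Hypothetical monthly SaaS subscriptions for a small design agency.
# Names and prices are illustrative assumptions, not real quotes.
subscriptions = {
    "project management": 25,
    "prototyping": 40,
    "analytics": 20,
    "surveys": 35,
    "time tracking": 15,
    "design suite": 50,
}

monthly_total = sum(subscriptions.values())  # dollars per month
annual_total = monthly_total * 12            # dollars per year

print(f"${monthly_total}/month -> ${annual_total}/year")
```

Six tools averaging around $30 each already costs over $2,000 a year, which is how “twenty dollars here and forty dollars there” quietly becomes thousands.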
You could blame us for our lack of diligence. However, the gym membership model of rental is explicitly designed to elicit this behaviour: it encourages people to rent the opportunity, safe in the knowledge that the majority of members won’t overburden the system. Unclear billing practices and disincentives for unsubscribing—”if you leave you’ll lose all your data”—are designed for this very purpose.
Then you have the legacy tools. Products that you rarely use, but still need access to. Photoshop is a great example of this. Even if you’ve decided to move to Sketch, you know many of your clients still use Photoshop. In the olden days you could have kept an older version on your machine, costing you nothing. These days you need to maintain your Creative Cloud account across multiple team members, costing you thousands of dollars for something you rarely use.
This article was sparked by a recent Twitter storm I witnessed where Sketch users raised the idea of a rental model and vilified people who felt paying $30 a month for professional software (which currently retails at $99) was too much.
While I understand the sentiment—after all, Sketch is the tool many designers use to make their living—you can’t take this monthly cost in isolation. Instead you need to calculate lifetime cost. As we all know from the real world, renting almost always works out more expensive than ownership in the long term.
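The lifetime-cost point can be sketched with the figures mentioned above: a hypothetical $30/month rental versus the $99 one-off licence. These numbers are taken from the Twitter discussion, not from any vendor’s actual pricing plan.

```python
# Break-even sketch: one-off licence vs. monthly rental.
# Figures ($99 licence, $30/month) are illustrative assumptions.
purchase_price = 99   # one-off licence, in dollars
monthly_rent = 30     # subscription cost per month, in dollars

# First month in which cumulative rent exceeds the purchase price.
break_even_month = purchase_price // monthly_rent + 1

# Cumulative rental cost over a five-year working life of the tool.
five_year_rent = monthly_rent * 12 * 5

print(f"rental overtakes purchase in month {break_even_month}")
print(f"five years of rental: ${five_year_rent} vs. ${purchase_price} to own")
```

At these prices the rental overtakes the one-off licence within four months, and over five years it costs more than eighteen times the purchase price, which is why the monthly figure alone tells you very little.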
You also have to consider rental costs in relationship to every other piece of rented software our industry considers necessary. With this number continuously increasing—but no sign of legacy tool rental declining—entering the digital industry is becoming an increasingly costly prospect for new designers and developers.
The thing I find strange is that, while we’ve been trained to believe renting is the norm for software over the past 10 years, few of us think this way about physical goods. We mostly still buy houses, cars, computers and music systems, rather than renting or leasing them. Many believe ownership offers some kind of noble status, despite the environmental cost of owning atoms over bits.
When we do rent physical products, it’s rarely on a subscription basis. Instead we’ll rent an apartment in New York through Airbnb for a weekend, a Zipcar for an afternoon or a Tasker for an hour.
I’m not saying software rental is always bad. I’d just like to see more diversity in SaaS business models. I’d welcome the ability to subscribe to mature services I use on a regular basis, but rent less common tools on a per-use basis. I’d also like to retain ownership of certain tools, like Sketch, as I think this is a better model for innovation.
We talk a lot about user-centered design in the digital world. Isn’t it about time we considered business models through the same critical lens?