Recurring billing and forgetfulness as a dark pattern | May 1, 2018
Five years ago my company was pitching for a newspaper project, so I decided to sign up for a bunch of newspaper subscriptions, including the Times. I was really impressed with the thought and consideration that had gone into both their Web offering and iPad app, so I used it constantly for about a month. Once the pitch had finished, I stopped using the service, and my attention drifted elsewhere.
Today, I received a threatening email from the Times saying that my last payment hadn’t gone through and if I didn’t do something about it, they were ominously going to “take action”. I immediately felt a knot in my stomach. Had I accidentally been paying for the subscription all this time? How could I have not known? I felt stupid, I felt gullible, I felt angry and betrayed.
It would be easy to blame me for my own stupidity, and I do take a good deal of responsibility. After all, shouldn’t I be checking my bank statements each month for erroneous payments? I’m sure a lot of people do this with their personal accounts, but I genuinely struggle to find the time. It’s even harder with company accounts. With so many transactions going through each month, it’s easy to miss small anomalies, especially when the person managing the accounts isn’t the person making the payments. When this happens, associations easily become divorced.
For the majority of online services I use, this isn’t a problem. I’ll sign up with an email address and be sent a handy invoice or billing reminder each time money is taken. This lets me know that I’m still a subscriber, and reminds me to use the service and get the value I’m paying for.
This wasn’t the case with my Times subscription. They kept dutifully taking money from my account once a month, without informing me this was happening. Understanding that memory is fallible, it’s inevitable that people will eventually forget some of the subscriptions they’ve signed up for, and without these billing reminders, people like myself will find themselves accidentally paying for services long after they’ve stopped being useful.
It’s entirely possible that this was simply an oversight from the Times, and there was no malicious intent. However it’s interesting to think that of the two possible approaches—remind or don’t remind—the former massively favours the customer as it prevents them from forgetting subscriptions, while the latter hugely favours the company as they constantly get to extract money from people who have accidentally forgotten their subscription, and would otherwise cancel. So you can’t help but wonder whether an ambitious product manager made a deliberate decision to avoid subscription reminders, in an attempt to maximise revenue.
Even if this was a genuine oversight, and not a deliberate dark pattern, I cannot be the first person to have forgotten they were paying for a service they weren’t using. Especially with the increasingly ageing population of Times readers. As such, I suspect the customer service team must get a handful of such calls each week, if not each day.
Oh, and did I mention that the only way to unsubscribe was by phone, despite being able to sign up online? This is another well-known dark pattern: making it super easy to subscribe to something, but relatively hard to unsubscribe. As such, I’m sure a good portion of people phoning up to unsubscribe get fed up waiting 15 to 20 minutes for an answer, decide they’ll phone back another time, then dutifully forget again.
Being a known problem, this would be a relatively simple thing to fix. For instance, you could send customers a gentle reminder on the anniversary of their subscription. That way, if customers were to forget, this would give them an opportunity to re-engage with your service, or cancel if they no longer wanted to use it.
A more sophisticated approach would be to notice that people hadn’t logged in after a set time, like three months, and put their accounts on hold. This is a really nice approach as it shows a duty of care to your customers, while still keeping the option open that they’ll re-engage.
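Both ideas could be combined into a single check that runs before each payment is taken. The sketch below is purely illustrative — the function name, the field layout and the 90-day threshold are my own assumptions, not anything the Times actually does:

```python
from datetime import date, timedelta

INACTIVITY_LIMIT = timedelta(days=90)  # roughly the three months suggested above

def next_billing_action(signup_date, last_login, today=None):
    """Decide what to do before taking this month's payment.

    Returns one of:
      'hold'   - pause billing; the customer appears to have drifted away
      'remind' - bill as normal, but send an anniversary reminder email
      'bill'   - bill as normal
    """
    today = today or date.today()
    # Duty-of-care check: pause billing for customers who've stopped logging in.
    if today - last_login > INACTIVITY_LIMIT:
        return "hold"
    # Gentle nudge on the anniversary of the original signup
    # (ignores the 29 February edge case for brevity).
    if signup_date.replace(year=today.year) == today:
        return "remind"
    return "bill"
```

So an active customer gets billed silently, an active customer on their signup anniversary gets a reminder, and a customer who hasn’t logged in for three months has their account put on hold rather than being quietly charged.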
Of course this all depends on having the will to make these improvements, especially if the result could mean a drop in income. So for many companies it’s just more convenient—and more profitable—to ignore these edge cases and keep taking money from people you know no longer use your service. To that end, it feels like the fallibility of human memory is a dark pattern many companies are using for their own benefit.
Norwegian A.I. Retreat | October 1, 2017
Last week I took a group of friends and colleagues over to the Norwegian Fjords for a three-day retreat. We’d all been working hard the past year, and were feeling pretty burnt out. So everybody jumped at the chance to breathe in the clean Nordic air, marvel at the beautiful surroundings, and get a sense of inspiration and perspective.
We’d booked the wonderful Juvet Hotel, a place I’ve wanted to stay ever since seeing it used as the location for Ex Machina. We needed an area of focus for the retreat, and considering the location, Artificial Intelligence seemed like the obvious choice.
Once the preserve of science fiction, A.I. is starting to weave its way into our lives. At the moment there’s a lot of hype and speculation; overactive marketing teams trying to convince us that adding the letters “A” and “I” to a product instantaneously makes it better. While that isn’t always the case, it’s helping lots of start-ups attract more investment and increase their valuations.
At the same time there is also a lot of fear, uncertainty and doubt being generated by the media. Rarely a day goes by without some news story proclaiming an end-of-days scenario, only for that same publication to dismiss the fears as irrational a week later. It seems that if A.I. is good for one thing, it’s boosting circulation.
With opinions fluctuating between utopia and dystopia, I was struggling to get an accurate view of the field. As such, one goal of the retreat was to gain a more realistic understanding of A.I. away from the media hyperbole. We did this by starting the retreat with a simple domain mapping exercise, to ensure that we all had a shared vocabulary and understood the general direction of travel.
My other hope was to bring more diversity of thought to the conversation, and break away from the usual circle of computer scientists and technocrats. Rather than domain experts, we were a group of interested parties, comprised of people from the arts, humanities, academia and design. We weren’t necessarily the people inventing this brave new world, but we would be the type of people called upon to make it palatable to the general public through story telling and design.
At the outset of most technological revolutions, the focus is understandably on the benefits it can bring. For industrial advances, these benefits are often in the form of reduced costs, increased productivity, and increased shareholder value. It’s only later that the social effects become clear.
As a group of user-centered designers and humanist technologists, it became evident that our interest lay in the effects A.I. would have on people and society as a whole, rather than the more immediate and obvious benefits to productivity and commerce. Over the course of around a dozen unconference style conversations, from brainstorming dystopian futures to discussing robot ethics, a pattern of concerns started to emerge. We’ve tried to capture the outputs of these discussions as a series of open questions, which we hope to share soon.
One big topic of discussion was the fear that A.I. and robotics may bring about large-scale under-employment. Past technological revolutions have sparked similar fears, and humanity has always been able to adapt. Engines surpassed human power, production line technology automated mundane and repetitive tasks, and computers allowed people to outsource data storage and processing. Each time this has happened, we’ve been able to find new and meaningful work to replace the lost jobs, driving productivity ever forward. However, could A.I. be different? If we can finally outsource human cognition to the machine, is there anywhere left to go?
A related problem is the nature of the work we’ll end up doing as a result. A.I. has the ability to remove mundane tasks and let us focus on the fun and creative parts of our work. It also has the ability to create a generation of workers whose sole job is to babysit machines, only stepping in when some sort of exception is thrown up. While this may be efficient, it’s not a great route to job satisfaction. As a result, A.I. could very well eat into the middle of the jobs market, pushing some people up the skills ladder, and others down.
Another big problem was the realisation that as systems become more and more sophisticated, they become more difficult to understand. It’s no longer just a case of viewing source and checking the code, but also understanding the training data. If the training data is biased, because society can be biased, the results may be skewed and difficult to detect. This could result in new jobs like A.I. trainers and data bias consultants to ensure that new A.I.s are being fair with their decisions.
We briefly talked about robot skeuomorphism; how a lot of household robots are currently designed to look vaguely humanoid. This has certain benefits, such as signalling the robots’ capabilities to their users. If the robot has eyes, you presume it can see; if it has ears, you presume it can hear; and if it has legs, you assume it can walk. A lot of robots also demonstrate rather childlike characteristics, like big eyes and short stature, partly to communicate a level of simplicity and demonstrate that they aren’t a threat. At the moment the form is largely a consequence of engineers trying to create robots that can do similar tasks to humans by duplicating their movements. However, over time I believe that robots will move away from the humanoid form and develop shapes which are better designed and more suited to their particular tasks.
We also touched on the area of ethics and morals. For instance, should A.I.s be forced to adopt a human moral code, and if so, what would that actually be? If we did manage to create some form of super intelligence, would we have to grant them human-like rights, or could we still consider them a utility, like a car or a toaster? If they were treated like utilities, wouldn’t that raise some rather uncomfortable questions?
On a slightly more mundane, but possibly more near-term note: should we encourage people to be polite towards A.I.s in the same way we are polite to people? Obviously the current crop of A.I.s wouldn’t really care if we say please or thank you. However, by ignoring these common social behaviours, we may be baking in problems for the future. One of the attendees mentioned how they accidentally started talking to their partner like they were talking to their Alexa, while several parents noted their kids had started to behave in similar ways. Imagine a future where robot pets and helpers were common. Could we imagine a scenario where mistreating a robot pet changed the way we treated actual animals? If so, would we eventually have to consider legislation to prevent robot abuse?
Obviously a lot of these questions are just thought experiments, and are a long way off at the moment. However, I truly believe that when I’m old enough to need some form of home care, there’s a good chance I’ll be looked after in part by a robot. So many of these challenges may well be here in our lifetimes, and a few may arrive sooner than we think.
Of course the trip wasn’t just pondering theoretical questions. It was as much a holiday as anything else. So as well as lots of stimulating conversations over good food and slightly overpriced beer, we had plenty of time to enjoy our surroundings. This included a lovely forest walk along King Olav’s Way, a hike up a mountain to visit a glacier, and even a chance to see the Northern Lights. These shared experiences helped us bond as a group, while the beautiful scenery helped put things in perspective and provided the space to think.
Considering the short amount of time we had at our disposal, it’s amazing how much ground we managed to cover and how productive we felt by the end. We had started the retreat with the 20 of us sitting around discussing what we hoped to get out of the next two days. We finished on a similar note, explaining what we’d all taken away. Everybody had a different story to tell, but we all left with new connections, deeper friendships, a better understanding of the emerging field of A.I., and a newfound love of the Norwegian Fjords. So much so that we’ve already booked next year’s trip. I for one can’t wait.
Twitter and the end of kindness | September 13, 2017
When you see somebody with spinach in their teeth, the kind thing to do is to tell them privately. If you tell them to their face, in front of a group of friends and strangers, you get the same end result; the spinach gets removed. However in doing so you bring attention to the problem, and shame the participant in the process. So what could have been an act of kindness, quickly turns into an act of cruelty and public humiliation.
There was a time, not so long ago, when you would contact a company directly if you had a problem with a product or service. Maybe the product got lost in the post or wasn’t as advertised, maybe the hotel room wasn’t as expected, or the food didn’t come up to scratch. In these situations you’d tell the waiter or manager, drop the company an email, or call customer support.
These days, when you see a problem, the first reaction is often to reach for Twitter and share your frustration with the world. With large companies this often comes from experience. We’ve all had conversations with banks, utility companies and airlines, which have gone nowhere, so we end up venting our frustration online.
While it’s easy for companies to brush private conversations under the carpet, it’s much more difficult to do in public, so we’ve quickly learned that if we take our criticisms to Twitter, there’s a better chance they will get dealt with.
I’ve had this experience myself. After several frustrated phone conversations with my airline of choice, I took my complaints to Twitter. They immediately responded, took ownership of the problem and sorted it out straight away. I’m now on some kind of airline social media watchlist (the good kind), reinforcing the fact that if I complain on Twitter my problem will get solved faster than phoning customer services.
Complaining on social media encourages the best customer service from a company keen to avoid a public relations disaster. This is something large companies could have avoided by delivering consistently great customer service through traditional channels. As this hasn’t happened, publicly shaming companies has become the go-to way to ensure good customer service.
If this stopped with large companies, or companies with whom you’ve experienced an irreconcilable service failure, I wouldn’t mind. However, this has become the standard behaviour with everybody now, from big companies to small companies, from celebrities to friends. Rather than contacting people directly, we’ve started using public shaming as a tool to correct behaviour.
I see it regularly on Twitter. A friend or follower tweets you to highlight some small problem. Maybe there’s a broken page on your website, a typo in your recent Medium post, or you accidentally referenced the wrong user in a recent tweet.
It would be super easy to email or DM the person, but instead you post to their public timeline. Most of the time you mean well, and are simply trying to help. However, by posting publicly you draw other people’s attention to the problem, forcing them to act out of shame and embarrassment rather than gratitude.
People usually post to the public timeline because it represents the least amount of effort to do a good thing. You don’t have to switch panes in your Twitter app, go hunting for their email address, or ask if they’d mind following you so you can direct message them. You can get it off your mind as quickly as possible and move on.
However, sometimes it feels like there’s an ulterior motive. That there’s a small amount of joy to be had from spotting that the person you’re following has done something wrong, and flagging it up in public. That you get the public perception of doing a good deed (which is always nice) while making a small but pointed statement that they’re not perfect in front of their friends. It’s as though you’ve spotted the spinach in their teeth, but decided that the kindest thing to do was to point it out loudly in a crowd, in front of a thousand of their closest friends.
Personally I’d prefer to know rather than not know, so I’m definitely not suggesting people stop pointing out these small errors and transgressions. However I think we should think twice before posting these things publicly, and if time allows, reach out to your friend or follower directly first. That way you’ll avoid accidentally embarrassing them, or making them feel that they have to act from a sense of public pressure.
More importantly it’s the kind and polite thing to do. It’s also going to make that person think more warmly of you, as you’ve done them a favour without seeking any recognition, while maintaining their dignity and public reputation at the same time.
It’s only a small behaviour nudge, but from now on I’m going to do my best to approach people directly first, whether it’s large companies, small businesses, website owners, followers or friends, when I notice something is amiss. I urge you to do the same.
The Golden Age of UX may be over, but not for the reasons stated | August 5, 2017
Last week an article entitled The Golden Age of UX is Over popped onto my radar, after causing a bit of a stir amongst the design community. If I was being generous I’d say it was a genius title, designed to spark debate amongst UX designers. If I was being slightly less generous, I’d say it was a devilishly brilliant piece of click-bait, designed to drive traffic to an agency site. Either way I had a feeling the article would annoy me, so I spent the next couple of days actively ignoring it. However, temptation finally got the better of me and I ended up taking the bait.
On the whole I agree with the sentiment of the title that the “Golden Age of UX” probably is over. I say that as somebody who has been working in the space since the early noughties, set up one of the first UX practices in the UK, and curates the longest running UX conference in Europe.
The field of UX started life as a small but emergent community of practice, on the fringes of conferences like SXSW and the IA Summit. It grew through the blogs of early pioneers, and through the work of consultancies like Adaptive Path and Clearleft. The community accreted around new conferences like UX Week and UX London, which, in their early years, attracted almost the entirety of the UX communities in their respective locations.
I would argue that the quality of innovation, the quality of discourse and the quality of change in the UX space peaked somewhere between 2008 and 2012. This for me could arguably be described as the golden age of UX.
As with any gold rush, news of the find spreads quickly, and as more people rush in to make their fortunes, resources get depleted. By the middle of the teens, UX hyperinflation started to occur. Every freelancer and every agency added UX to their titles, without really understanding what the term meant. “UX Designer” featured in lists of the most in-demand new professions, and recruiters rushed to fill the gap, often with disastrous effects. While the number of people who self-identified as UX designers carried on climbing, a deep and detailed understanding of what UX actually was started to ebb away.
The meaning of UX got muddied. Was it the same as UI? Was it another name for interaction design? Where did strategy, research and IA fit in? UX vs UI memes started to form on Twitter, arguments erupted about the existence of unicorns, and seemingly nobody could agree on anything anymore.
For years I fought to maintain a clear definition of UX, one that linked back to the community of practice from which it sprang. However, the tidal wave of misunderstanding and misrepresentation became too big to fight, so I eventually gave up trying. UX had become so watered down and misunderstood that popular perception no longer represented the community I knew. I became resigned to the fact that meaning changes based on usage, and if the majority of people see UX as this lightweight blending of prototyping and UI, devoid of any deep research, awareness of business needs, or commercial imperative, so be it.
It was this latter sentiment that annoyed me about the “Golden Age” article: it criticised a discipline based on cargo-cult thinking. In truth, UX has always taken account of business needs and market forces, while Lean start-up was little more than a reformation of user-centred design for a business audience. As a result, the article was less about the golden age being over, and more the dawning realisation that its authors may have confused a badly drawn map with the territory.
It’s also worth noting that most people tend to associate a “golden age” with their formative years, whether it’s movies, music, or the discovery of a new career. So it’s possible that the golden age may be over for some, but for others it’s just beginning.
Design Leadership Slack Team | July 30, 2017
I recently started a Slack Team for Design Leaders. We currently have over 450 members; mostly Heads, Directors and VPs of Design from prominent tech companies and large traditional organisations. We’ve been very careful building the community. As a result the signal to noise ratio is remarkably high. Recent conversations have included:
- Discussions around recruitment and whether design tasks are a good idea.
- Various design leaders sharing their career progression ladders.
- An ongoing debate around the perfect team structure.
- Whether managing Millennials required a different set of skills (the general conclusion being they don’t).
- The challenges of managing fast growing teams.
- Tactics for your first 90 days in role.
The criteria for joining are fairly straightforward.
- You’re a senior design leader of an in-house team.
- You’re either the most senior design representative in your company, or a member of the design leadership team (at larger orgs).
- You’ve moved away from delivery and aren’t a “player-coach.”
- You’re a genuinely nice person who wants to contribute to the evolving field of knowledge we’re creating.
If this sounds like you, drop me an email and I’ll hook you up.
Once you’ve joined, please feel free to lurk for a while and get a feel for the place. When you’re ready to jump in, please introduce yourself to the group, letting folks know a little about your background, the company you work for, the team you look after, and the leadership challenges that are interesting you at the moment.
We expect members of the Slack channel to respect each others privacy. If you do wish to disclose anything discussed here, please use Chatham House Rule and refrain from identifying individuals or their companies.
Members of this group are generally kind, considerate, constructive and helpful. Heated discussions may occur, but they should always be conducted with respect and a desire to advance the conversation. If you witness any disrespectful behaviour, please inform me or one of the admins. We aim to resolve any conflicts peacefully and in a positive manner. However, on the rare occasion this isn’t possible, we may ask individuals to leave.
The Real Value of Original Research | May 17, 2017
User-centred designers typically start a new project with a research phase. This allows them to understand the product or service through the eyes of their customers, explore the limits of the problem space, and come up with recommendations that feel at least partially informed. All useful things from a design perspective.
Sometimes organisations baulk at the idea of doing research, causing the design team to launch into their typical spiel about the value of their approach. In my experience, these objections are rarely about the value of research itself, but more around whether original research is necessary on this occasion.
All organisations of a certain size carry out research as a matter of course. They probably have a marketing department segmenting customers, understanding customer sentiment, and testing new propositions through surveys and focus groups. They also have an analytics team tracking user behaviour, testing the effectiveness of campaigns, and pinpointing areas for improvement. In preparation for this project, the product managers and BAs almost certainly did their own research to help build the business case. They probably have more information than they know what to do with.
Most organisations feel they have a pretty good handle on what’s going on inside their company; they just need you to fix it. They claim there’s no need to do more research. Instead, they will provide you with access to their analytics package, the results of the user testing report they commissioned nine months ago, and copies of their marketing personas. This, combined with a briefing meeting, should be enough to get your team up to speed.
On the surface this makes sense. After all, why pay for original research if you already have the answers you need? Better to save the money and spend it coming up with a solution, especially when resources are scarce.
This attitude is completely understandable, but it hides an unusual and counterintuitive truth about the value of design research. Design research is rarely about the acquisition of new knowledge and information. Instead, the real value of design research comes from the process of gathering and analysing the results. It’s this analysis phase where the data gets processed, information gets turned into knowledge, and understanding becomes tacit.
Existing research will have been gathered to answer general business questions, so it won’t necessarily provide the insights the design team need. Instead, design research is done with a specific product or service improvement in mind; it adds nuance to the problem at hand, and allows the designer to weigh up different options and understand how the various solutions may play out.
Knowledge gained from original research is far more impactful than that gathered elsewhere. Remembering the conversation you had with a frustrated customer becomes part of the narrative, and the resulting insight becomes internalised. This is a very different experience from reading a data point in somebody else’s report, which can easily be downplayed or forgotten.
In psychology this phenomenon is known as embodied cognition—the idea that you think through problems using your whole self, rather than just your mind. This means you learn more by experiencing something in person, than you do by reading about it in a book or report.
For most designers, original research isn’t about gathering facts and data. Instead, it’s the process they use to understand the task at hand. Like warming up before the big race, research allows you to engage body and mind, get the creative muscles working, and be flexible and limber enough to tackle the challenge ahead. It isn’t something you can effectively outsource to your coach or team captain. It’s a vital part of the pre-race process.
This was originally posted on UXMas.