Specialism, Ego and The Dunning-Kruger Effect | February 19, 2014
Every few weeks I see a discussion emerge that tries to dismiss the need for specialists in our industry, or deny their existence entirely. It usually goes something along the lines of “I’m a [insert discipline] and I do my own [insert activity], so [insert specialism] is unnecessary or doesn’t exist”.
While it’s great to have people with a broad range of skills and abilities, it’s also a little hurtful to people who have dedicated their careers to being good at a particular thing, as it implies that all their choices and hard work were a waste of time.
Sometimes this conversation spins into the area of job titles and their general inability to sum up exactly what an individual does. Other times it has us dismissing fairly well understood disciplines or arguing over how to define the damned thing. The conversation usually ends up with somebody saying something like “Well I’m just going to call myself a Designer/Developer”, as if picking the broadest and most generic term adds clarity to the conversation.
The problem is that I really do understand the sentiment. If you’ve been working in the field of design for a very long time at a reasonably high level, everything starts to look the same. Whenever I see product designers, architects or moviemakers talk about their process, the similarities are uncanny. As such it’s no surprise when very experienced people pronounce that it’s design (or development) all the way down.
However when you start to unpick each discipline, you discover that while the thought processes are similar, the individual activities and craft skills are often very different. You also realise that scale has a big influence.
If you’re working on relatively simple projects, it’s entirely possible for a talented individual or small team of generalists to create something amazing. You see this in everything from Web design and indie-publishing to residential architecture and independent filmmaking.
For somebody who has built a successful career in this space, it’s very easy to look at large design agencies, architectural firms or film companies and boggle at all the specialists they have. After all, do Hollywood movies really need a best boy, key grip and clapper loader when you’ve just produced a short that you wrote, filmed and directed yourself?
It seems excessive, and maybe it is. I do think some industries have gone crazy with specialists, so there’s always room to assess whether a certain level of specialism is necessary for your particular requirements or situation.
However therein lies the problem! People really do have the habit of making statements about the whole industry based on the small corner they inhabit. I know, as I’m as guilty of this as most. As such we see lots of comments dismissing the need for, or even the existence of, certain specialisms, not because they don’t actually exist, but because those individuals haven’t yet hit the limits of their abilities, the point where having those specialisms would help.
This is actually a fairly common cognitive bias known as the Dunning-Kruger effect, which sees people overestimate their own level of knowledge while failing to recognise the skills of others. So if you’ve ever watched a man try to build a fire, cook a BBQ or put up a shelf, you’ll know what I mean.
In its most passive state, the Dunning-Kruger effect manifests itself as naivety and may actually be a key driver for learning. After all, isn’t it more enticing to think that if you start learning a new skill, you’ll get good at it quickly, rather than realising that you’re going to need 10,000 hours to perfect your craft?
As such we can help people get over this hump by expanding their worldview and explaining the deep specialisms of others. It doesn’t mean that you have to become a specialist yourself, but it’s useful to know when you’re reaching the limits of your own abilities, if only to inform where you go next.
However I think the Dunning-Kruger effect can have a more divisive role. I’m sure we’ve all come across the egotistical designer or creative director who rates their own abilities above all else. This approach often leads to really bad design decisions to the detriment of the product or its users.
I’ve seen similar issues in other fields, like developers feeling they are the most qualified people to design the UX because they are expert technology users, or interaction designers believing they will make great visual designers because it’s just a case of learning Photoshop, right? This is an interesting area where Dunning-Kruger overlaps with the halo effect, making people think that because they do one thing really well, they must be equally good at other things.
I think this attitude is also holding a lot of people back. I’ve met plenty of talented practitioners over the years who had the opportunity to be great, were it not for the fact that they already believed they were. A lot of this is environmental. For instance if you happen to be great at delivering mid-sized projects, or happen to be the best designer in an above-average agency, it’s easy to think that you’ve got it nailed. However put that person in a world-class team or on a hugely complex project and watch them struggle.
I think this is why some of the best designers I know are going to work for big Silicon Valley tech companies. It forces them to move out of their comfort zone and up their game.
For me, the very best practitioners usually exhibit the opposite of the Dunning-Kruger effect, known as imposter syndrome. With this cognitive bias, people often have so much visibility of the great work and skill going on in their sector that they never feel they match up. So they genuinely feel that at some stage they will be unmasked as an imposter.
This bias has a number of benefits in our sector as it forces people to up their game and learn new things, while at the same time making them realise that they will never know everything. As such, people with imposter syndrome tend to specialise. It also means that people are incredibly critical of their work and are constantly striving to improve.
However the imposter syndrome also has its negative effects, like never believing that you’re worthy, or giving undue credit to people demonstrating Dunning-Kruger-like behaviour. So in its worst manifestation I’ve seen really amazing people stuck in mediocre jobs because they don’t truly believe how great they are.
As such the key learning is to try and develop a well-rounded view of the industry, the skills you have, and the skills and expertise of others. So please, no more tweets or articles like the ones I’ve described, decrying a particular skill, discipline or job title. It turns out they are very unhelpful, and more often than not wrong.
Better design through Web Governance | February 8, 2014
I meet a lot of in-house designers in the course of my travels, and the same frustration keeps bubbling up: “How can I convince the company I work for to take my expertise seriously?” It seems that companies have a habit of hiring highly talented people and then taking away the decision-making authority they need to do their jobs.
Quite often the people at the top of the business know what is broken and are trying desperately to fix it, while the people at the coal face can see the solutions but are unable to act. So what’s going on here?
It seems to me that there’s a middle tier of management responsible for turning strategy into tactics. It’s their job to understand the business goals and communicate them to the experts in a way that ensures the problems find a good solution. If this was their only responsibility, I think we’d be in a good place. However a lot of the time this middle tier also starts filtering solutions, and this is where things begin to go wrong.
I’m a firm believer that the people with the most experience in a particular facet of a business should be the ones making the decisions for that facet. As such it would be nonsensical for the tech team to be making core financial decisions, just as it would be for the finance department to drive the technical infrastructure. So why do product managers, designers and UX practitioners constantly find their recommendations being overridden by managers from other departments with little experience in digital?
I think one of the problems lies in the hierarchical approach to management, which is a holdover from the industrial age. There has always been the assumption that as you rise up the hierarchy you gain more knowledge than the people below you, and are therefore more capable of making important decisions.
However in the knowledge age this process is often reversed, with the people at the top forced to rely on the experts below them. Sadly a lot of mid level managers still believe they are in the former model and end up prioritising their opinions over the expertise of others.
This is one reason why I really like the idea of Web Governance. The idea is simple – to put in place a governance strategy that explains how decisions get made in the digital sphere.
Web Governance allows an organisation to identify the experts in a range of different disciplines and cede responsibility for those areas to them, even if they happen to be lower in the organisational hierarchy. For instance, the governance document may state that a senior stakeholder has responsibility for delivering a set of business objectives and metrics, but that UI decisions are the ultimate responsibility of the head of UX.
Imagine working in an organisation where the head of UX actually has genuine responsibility for the user experience of the product, and can turn down poor ideas if they can’t be demonstrated to be in the service of a specific set of business outcomes.
Of course, there will be times when these issues clash, so the governance document needs to include information about who needs to be consulted on various decisions. However the goal here is to encourage discussion and negotiation over blanket control based on status alone.
The main thing here is to clearly set out the roles and responsibilities of each individual, rather than have them implied by status or inferred by domain. It’s also about breaking out of the traditional corporate hierarchy and allowing experts to have decision making responsibilities that can override more senior members in certain well defined areas.
Web Governance feels like an effective solution to me, and all the documentation I’ve read on the subject so far seems extremely logical and positive. So if you’re struggling to get your expertise heard, maybe it’s time to start thinking about a governance strategy.
Paying Speakers is Better for Everybody | August 16, 2013
When I attend a conference I’m not there for the food or the venue, I’m there for the content (and occasionally the after parties). So it amazes me that conference organisers typically pay for everything but the thing people are there to see. That’s right, despite the often high ticket costs, very few events pay speakers for their time. I think this is bad for conference goers, event organisers, speakers and the industry as a whole. I’ll explain.
When speakers don’t get paid for their time, it’s really hard for them to justify putting much effort into their talks. I’ve been to plenty of conferences where speakers have rushed their preparation and ended up delivering a mediocre performance. They’ll joke that they wrote the talk the evening before, and will duck out of the speakers’ dinner early to finish off their slides. This shows a certain amount of contempt for the audience, many of whom have had to fight for the budget to attend, or save up out of their own pocket. However it’s really not the speakers’ fault. Even first-time speakers are busy people, and if you’re not able to justify spending the time to write, hone and practise your talk during working hours, the quality will suffer.
Another justified criticism I hear is that conferences are full of the same old voices. Interestingly enough, I believe paying all of the speakers, and not just the experienced ones, would help balance this out. This is because many first-time speakers give up after their first couple of attempts because they just don’t see the value in speaking. Maybe it took them much longer to write the talk than they expected and their work or home life suffered, or maybe the fame and fortune the conference organisers promised didn’t actually materialise. If potentially great new speakers don’t see the conference circuit as a viable and sustainable ecosystem, they just won’t take part. I think this is a potentially huge loss.
From the organiser’s perspective, conferences are very expensive, so if they can avoid any additional costs, they will. The venue, catering and AV team most definitely won’t work for free, but it’s relatively easy to convince a speaker to work for nothing, so many of them do. The usual argument is that the conference organisers aren’t making any money, so why should the speakers? As a conference organiser myself, I don’t think this argument holds water, for the reasons already stated. Compared to the other costs involved, speaker remuneration is actually very low, and I’m sure most attendees would be happy to pay an extra £10 or £20 to ensure the speakers had enough time to write their talks and deliver good content.
The other argument is that the speakers will be getting exposure and possibly work. This may be true in a few instances, but I’ve never had anybody give me work as a direct result of a conference. I’m not saying it doesn’t happen, but it’s not as common as conference organisers would have you believe. In fact this argument is a bit like sleazy movie moguls doing screen tests with young models for exposure and a shot at the big time — a shot that rarely ever comes.
In truth, it takes a speaker at least a week to prepare a new talk, if not longer. You’ve then got to add on the time spent out of the office travelling to, and being at, the event. So even if you pay them $500 or $1,000, it’s unlikely they’ll be making a profit; it just makes the loss of income easier to justify. As such, exposure shouldn’t be used as an excuse not to pay; it’s just the icing on the cake if it materialises.
As an organiser I think paying speakers is actually a very good idea, whether they ask for it or not. This is because it changes the relationship from a voluntary one to a business one. When you’re not paying somebody, you really can’t expect them to put a lot of effort into their talks, help you promote the event or respond to your emails quickly (a constant bugbear for organisers). However by paying speakers for their services, you set up a different relationship and a level of expectation that makes your life easier and the quality of your event better. We’re not talking huge piles of cash in unmarked bills, by the way. Sometimes a few hundred dollars or an Amazon voucher is enough to make a speaker feel valued.
Now I’m not saying that speakers should always charge to speak. Far from it. There are plenty of situations where it’s not practical or even desirable, such as small community events or the local university. There are also plenty of speakers who are paid by their companies to speak as part of their jobs, and so don’t expect payment. However if an event organiser is charging for attendance and paying other suppliers, I think it’s reasonable for speakers to expect to be treated similarly.
When you don’t pay your speakers, they will often try and get value back by other means like pitching their product, service or upcoming book. This is especially common in the tech and start-up arena where many of the speakers will be promoting their companies, looking for investment opportunities or attempting to hire. So I’m sure we’ve all sat through sessions which were essentially thinly veiled product pitches. I’m not saying this doesn’t happen when you pay people to speak, but it tends to be a lot less overt. Instead, folks tend to focus more on sharing useful content than gaining additional value.
On a broader level, I think conference organisers wield a huge amount of influence in our community, and not paying speakers sends the wrong message about the value we place on a person’s time and expertise. It’s basically saying that your experience is worthless and you should only get paid to push pixels or deliver code. This is the same problem I have with speculative design work, free “design competitions” and unpaid internships. So as community leaders, I think it’s important for conference organisers to help define the industry they want to be part of, rather than simply save a few pounds because they know they can get away with it.
That being said, it’s also the responsibility of every speaker to ask for a fee and turn down the event if it’s not forthcoming, just as it’s your responsibility to be paid for your design work and turn down creative pitches if the client doesn’t want to pay. If you don’t behave this way, it’s not just yourself that you’re hurting, but every other speaker (or designer) out there. Conferences can get away with not paying their speakers because speakers allow it to happen.
When I first started speaking it was very rare for people to actually offer to pay me to speak. However when I went back to conferences with a fee, they almost always agreed. At the very least it was the start of a negotiation. So I think speakers should be a little bolder and ask for speaker fees.
Ultimately I think the default setting should be for speakers to expect to be paid, and for conference organisers to expect to be asked to pay. Not exactly a radical suggestion, I’m sure you’d agree. This creates a market and helps ensure quality and longevity. As things currently stand, most conference organisers expect everybody except the biggest names to speak for free, and do a good job of making people feel guilty if they ask. Consequently only a few people jump the chasm to become “big names”, and end up speaking at every conference under the sun.
Want more quality and diversity in your conferences? Pay your speakers.
Does TfL deliberately profit from user error? | April 15, 2013
Today I got a £20 penalty fare from TfL (Transport for London) because it turned out that I didn’t have enough credit on my Oyster card. I typically use the underground, where when this happens you’re stopped at the barriers, giving you clear feedback and preventing you from making a costly error.
However I rarely use the DLR/Overground, which is barrier-free, and had never before found myself there without enough credit. It turns out that when this happens the machine beeps twice rather than once. Unfortunately (for me) I wasn’t aware of this, so I simply heard a beep, assumed everything was OK and got on my train.
I presume there was also a message on the machine, and if I were to complain I would be told that it was my duty to read the display. Of course we all know that the context of use (busy platform, unfamiliar surroundings, contactless payment and rushing for a train) makes glancing at a tiny display unlikely.
Sure this was user error, but a user error that could easily be avoided if the system was designed correctly. For a start, it would be very easy to change the tone of the error message from a friendly, encouraging beep to a low-toned, culturally understood buzz.
Secondly it would be easy to put the card into debit and allow users to top up on their next trip. This is what many other transit systems around the world do and what I thought the underground did as well.
Sadly TfL makes user errors extremely easy, and as it profits from them I suspect there is little incentive to change.
I’ve experienced similar issues when booking rail tickets at train station kiosks. They always seem to present the most expensive ticket first (a one-way peak fare to London) rather than the most popular fare, presumably in the hope that a percentage of people will succumb to human error in their rush to buy a ticket and end up spending more money.
In most customer-facing situations, when user error happens you only need to look towards a friendly customer service representative to get the issue resolved. Sadly with TfL there seems to be an immediate assumption of fare evasion, so rather than assistance you get slapped with a fine.
Even this would be OK if the treatment you received was friendly and apologetic. But in my experience it’s usually the opposite: cold, rude and unsympathetic. So what started as a small and easily dismissed error ends up leaving you angry at an institution you spend hundreds if not thousands of pounds with, while casting a shadow over the rest of your day.
Privatisation (the legacy of Margaret Thatcher) was supposed to give us consumers better customer service and more choice. Instead it feels like we’ve inherited the worst of capitalism (profiteering) and the worst of state control (poor customer service).
Why The Same Old Faces? | March 27, 2013
In an earlier post I discussed one reason why some people may perceive a lack of new faces on the speaker circuit — namely that by the time you reach the point in your career where you’re being asked to speak at conferences, you will most likely have had so much exposure already that you’ll no longer feel like a new voice.
This being said, there is a small but growing number of people who are continually asked to write articles, comment on news stories or speak at conferences. Is this due to lazy editors and event curators, or to the existence of an “old boys’ network” that aims to exclude outsiders in favour of its own?
While it’s easy to assume that the road is blocked by others, sadly the truth is usually more mundane. Being an awesome designer or developer doesn’t necessarily make you a great writer or speaker. I’ve met some truly outstanding practitioners who show almost no interest or ability in sharing their knowledge on the public stage. Conversely I’ve met plenty of—often only slightly above average—designers and developers who have an amazing ability to tell stories and communicate ideas.
It turns out that the ability to inspire, inform and entertain is pretty rare, so is it any wonder why these people are approached time and again? In fact, wouldn’t it be a little strange if conference organisers and publishers routinely ignored people with a track record in favour of less experienced people?
It also turns out that being knowledgeable in a particular topic doesn’t make you automatically attractive to conferences and magazines. Especially if there are dozens of other people talking about the same thing. Being a recognised authority in a subject is attractive to commercial organisations as it helps increase sales and minimise risk. So it’s important to build a strong following, whether that’s because you were the first, the best or simply the most prolific. Self promotion isn’t necessarily a bad thing, just as long as it has some substance to back it up.
One reason for seeing the same old faces is because they are the ones offering to write content or speak at events. There seems to be an unhealthy belief that it’s solely the responsibility of publishers and conference organisers to discover talent. However that’s not true. It’s also down to the individuals to promote themselves, and some of the most recognisable faces happen to be the ones that put themselves out there time and again.
Reliability is another big factor here. One of the reasons I get asked to comment a lot in magazines is because I respond quickly and have something relevant to say. This feels like such a small thing, but if you’re working to a deadline and you know somebody is slow to respond and variable in quality, you’ll simply stop asking. We’ve had similar issues with speakers. You’ll set deadlines for speakers to send in bio information, provide talk descriptions and confirm flights. People are really busy these days so you have to make allowances, but if folks are continually late sending you information, you eventually stop asking, no matter how good they are.
These are just some of the many reasons why you see the same people cropping up time and again. It’s not that they are necessarily the best designers and developers out there, or that they have the most cutting edge things to say. It’s usually because they put themselves out there, can spin a good yarn, respond to their emails in a timely manner, consistently deliver the goods and a host of other pedestrian reasons.
Should Programming be Taught at Schools? | March 25, 2013
There’s a lot of buzz around technology education at the moment.
The old ICT courses, which taught children to be passive consumers, are being overturned as schools in the UK are encouraged to set up their own curricula with programming at their core. At the same time, after-school clubs are growing in popularity, with projects like Code Club operating in nearly a thousand British schools. This boom has been thanks, in part, to services like Codecademy and Scratch, which have revolutionised the way people learn to program, and to projects like the Raspberry Pi, which hark back to the golden age of the BBC Micro.
While I don’t necessarily buy into the Rushkoffian rhetoric of “program or be programmed”, I see huge benefits in learning to code. For instance it’s a practical and engaging way of teaching other skills like maths and physics, while the problem-solving techniques you pick up are highly transferable. I also think it can provide young people with a sense of agency and purpose which is often lacking in their lives (computer games often fill this role). So as somebody in the technology industry I see this trend as a very positive move. However I also wonder if this could just be a case of selective bias?
Classicists argue that Latin is one of the most important subjects to be taught at school, as it’s the basis for all modern languages. Similarly, business leaders argue that finance, law and entrepreneurship should take a central place in the school curriculum. We even have sports people and celebrity chefs calling for health and nutrition to feature more prominently in schools. I bet if we asked people in most vocations, from engineers and architects to TV presenters and ballet dancers, they’d be able to provide a string of tangible benefits their profession can teach. As such I struggle to tell how valuable learning programming at school really is, or how we should balance it against other subjects.
I also worry about the expectations we’re setting by teaching programming as a core subject. Are we raising a generation of children on the dream of becoming the next Internet entrepreneur, only to end up creating an underclass of poorly paid Microserfs? What’s more, do we really want our education policy dictated by the Facebooks and Googles of this world, just to ensure they have a plentiful supply of engineers?
It’s a tough question and one that has me sitting on the fence. The benefits to me are immediate and obvious. However I still can’t shake the concern that the downside will only become apparent 5 or 10 years down the line when it’s Java (pun intended) programmers serving our coffee in Starbucks rather than geography graduates.