d.Construct 2007 is go! | May 21, 2007
If you haven’t already heard, we quietly launched the d.Construct 2007 website last week. Because this year’s theme is “Designing the User Experience”, we thought it would be fun to mimic the user experience process and evolve the site over time. The site is currently in its wireframe stage, and will slowly progress as the event approaches.
I’m really excited about this year’s event as we’ve got an amazing line-up of speakers from the world of usability, information architecture and design. These include the likes of Cameron Moll, Tom Coates, Peter Merholz and Jared Spool.
Sessions will range from practical talks dealing with agile design methodologies and UX techniques, through to philosophical musings on the personality of tech products. We’re also planning to run two days of workshops at the start of the event, so you can learn microformats from Tantek Celik, or interaction design from Peter Merholz. What’s more, booking any of these workshops will guarantee your place at d.Construct, before the conference tickets have even gone on sale.
Because d.Construct is a community event, I hope you’ll help us build up buzz and get the word out there. To help, we’ve created a series of snazzy buttons you can put on your site. You can even mod them yourself if you’d like.
I’m really looking forward to seeing you all at d.Construct this year, so keep an eye on this site and the event RSS feed for more announcements soon.
XTech 2007 | May 20, 2007
The theme of this year’s event was “The Ubiquitous Web”, and there were some fantastic sessions on this subject from Matt Biddulph, Adam Greenfield and Matt Webb. Matt Biddulph talked about the prototyping opportunities of Second Life, and how it was a great environment to test out spimes and other location-aware devices. Matt demonstrated the Flickr photo frame he’d built for himself, as well as a complicated visualisation he created for Nature magazine. He also talked about IBM using ball location data from Wimbledon to replay the matches in Second Life. However, the thing that really got the geek audience excited was the Arduino hardware hacks he’d been doing. I wouldn’t be surprised if half the audience went out and bought kits in time for the next hack day in London.
Adam Greenfield gave a predictably excellent talk based on the themes from his book, Everyware. In it he discussed everything from the unnecessarily complicated digital locks in Korea (who do you call if you forget your passcode?), through to the fantastic usability of the Hong Kong underground Octopus cards, beloved by so many user experience people. Finally, Matt Webb closed the event off with an inspiring keynote on his vision of interaction design.
One of the great things about XTech is the fact that it’s co-hosted by the W3C. As such, there were lots of important W3C people in attendance, and some very interesting discussions to be had. The session I was most looking forward to, and the one I was most disappointed by, was the session on HTML5. As we all know, HTML5 is a pretty hot topic at the moment, and one I’m going to deal with in a later post.
While the panellists introduced themselves I worked up a series of questions about doctypes, presentational elements, timeframes and the lessons we could learn from other interface languages like MXML and XUL. However, I thought I’d open with a quick question about the divergence between XHTML2 and HTML5. I was expecting a short discussion about the different aims of each language, and their various feature sets. What I ended up with was a 20-minute discussion about namespacing and error handling that completely missed the point of the question. If I hadn’t known it already, I came to the realisation that the W3C is all about creating specifications for browser manufacturers, and not about providing tools for us web developers. But like I said, more on that later.
Thankfully, Molly brought things back down to earth with an excellent talk on the issues both developers and browser manufacturers were facing. In language the average developer (i.e. me) could understand, she demonstrated how different browsers handle something as simple as mixing rgb colour value units. The letter of the spec says that this is illegal, and so the rules should be ignored. Some browsers follow this draconian error handling and refuse to apply the rules, even though they could. Other, more pragmatic browsers attempt to display what the developer was intending, even if they break the spec. The difference in implementation makes it difficult for developers to obtain predictable results, and difficult for new browsers to decide how to handle these errors. One argument is that the browser manufacturers should stick to the spec, even if it’s wrong. A more sensible approach would be to change the spec. But that’s another story. I hope Molly posts her slides up soon, as it was a very interesting session.
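To sketch the kind of thing I mean (these are my own illustrative values, not the ones from Molly’s slides): the CSS grammar says the three arguments to rgb() must be either all integers or all percentages, never a mixture.

```css
/* Valid: all three values are integers */
p { color: rgb(255, 128, 0); }

/* Valid: all three values are percentages */
p { color: rgb(100%, 50%, 0%); }

/* Invalid per the spec: integers and percentages mixed.
   A strict browser should drop this declaration entirely,
   while a more forgiving one will guess at the author's
   intent and render something anyway. */
p { color: rgb(255, 50%, 0); }
```

It’s exactly this sort of divergence, on something so trivial, that makes cross-browser results so unpredictable.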
On a more social note, I had a great time hanging out in Paris. I managed to catch up with old friends as well as making some new ones. Being a vegetarian in Paris was pretty hard work, so I ate a lot of bread and salad while I was there. One day it dawned on me that I hadn’t visited Paris for 7 years. It’s so easy getting over to Paris from London on Eurostar these days, there really is no excuse. So I’ve vowed to go back for a weekend this summer, and explore some of the great museums and galleries the city has to offer. Can’t wait.
CSS2.2 | May 6, 2007
The early pace of CSS development was pretty impressive. First proposed by Hakon Lie in Oct 1994, CSS1 became one of the first W3C recommendations in Dec 1996. Nipping at its heels, CSS2 became an official recommendation in May 1998, just 18 months later. By June 1999 the first 3 draft modules of CSS3 had been published, and in their groundbreaking book published that same year, Bert Bos and Hakon Lie postulated that CSS3 would arrive sometime in late 1999.
Over 7 years later, and we’re still waiting. Which raises the question: what went wrong?
For a recent conference, I decided to do a talk on CSS3. While researching all the cool CSS3 features modern browsers support, I became intrigued as to why things were taking so long. I started reading up on the W3C, how it was structured, how you became a member and exactly who was on the CSS working group. I started speaking to existing members and invited experts, reading blog posts from critics and people who had resigned, and looking at every bit of public information I could find.
Organisations pay thousands of dollars to join the W3C, and in return get to set the agenda on forthcoming technologies. While most of the companies involved are eager to shape the future of the internet in a positive direction, they all have their own agendas. Some obviously want to build better browsers, while others are worried about backwards compatibility and engineering problems. Some organisations have a vested interest in technologies such as SVG, while others are more concerned with opening the web up to different platforms like mobile phones and TV. By paying to be a member of the W3C, companies are able to get some of the brightest minds in the industry working on the issues important to their business, and who can blame them?
CSS3 has been in development in its current form since early 2000. There are currently 5 modules at “Candidate Recommendation” status, and a further 6 at “Last Call” status. This sounds good, until you realise that the selectors module reached “Candidate Recommendation” as far back as 2001, and got rolled back to “Last Call” in 2005. Some of the current modules are set to be rolled back, while other modules like the “Box Model” module haven’t been touched since 2002. Of the 40 or so modules, only the TV profile and media queries modules are nearing completion. Lucky us.
There are various reasons why this is taking so long. Many of the issues are technical and can’t be avoided: problems when testing, issues with backwards compatibility and bugs in browser implementations. However, there also seems to be a lot of politics involved. Discussions are getting bogged down in the same old arguments that occur time and again, priorities have been given to the wrong areas, and companies have been pursuing their own agendas.
Despite being broken down into separate modules, the scope of CSS3 is vast. As well as trying to look at the needs of the current web, the W3C are trying to anticipate the future. One of the big issues is internationalisation, which brings up problems most of us haven’t even heard of before. Tibetan-style text justification, anybody? Also, with the project taking so long, the W3C are working in a constantly shifting environment. What may have been true about the web back in 2000 may not be true today, next year or in the next decade.
My fear is that the W3C has bitten off more than it can chew, and this is having a negative effect on the web. We currently live in a world of live texture mapping and rag doll physics. And yet as web developers, we don’t even have the ability to create rounded corner boxes programmatically. The W3C are so concerned with shaping the future, I’m worried that they may have forgotten the present. Forgotten the needs of the average web designer and developer.
I’ve been thinking about this for a while, and wonder if we need an interim step. If CSS3 is as big and complicated as the development timeline suggests, maybe we need something simpler? Something that gives us designers and developers the tools we need today, and not the tools we need in five or ten years. Maybe we should take all of the immediately useful parts of CSS3, such as multiple background images, border radius and multi-column layout. Maybe we should take all the CSS3 properties, values and selectors currently supported by the likes of Safari, Opera and Firefox. Maybe we should take all of this information and build a simpler, interim specification we can start using now. Maybe, just maybe, it’s time for CSS2.2?
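To make that concrete, here’s a rough sketch of the kind of stylesheet a hypothetical CSS2.2 might bless. The vendor prefixes reflect my reading of what Safari and Firefox were shipping at the time of writing; the exact property names and support are an assumption, not anything from a spec.

```css
/* Rounded corners, already implemented behind vendor prefixes */
.box {
  -moz-border-radius: 10px;     /* Firefox */
  -webkit-border-radius: 10px;  /* Safari */
  border-radius: 10px;          /* the unprefixed CSS3 property */
}

/* Multi-column layout */
.article {
  -moz-column-count: 2;
  -webkit-column-count: 2;
  column-count: 2;
}

/* Multiple background images on a single element
   (Safari-only territory at the moment) */
.banner {
  background: url(top.png) repeat-x top,
              url(bottom.png) repeat-x bottom;
}
```

None of this is science fiction; it’s all sitting in browsers today, waiting for a spec stable enough to rely on.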
Over to you.