Google PageRank Explained and Uncovered | March 31, 2004
Google PageRank is a very interesting thing. On the surface, it’s quite easy to understand. The PageRank (or PR) of a site is related to the PR of all the sites pointing to it. Each page pointing to your site gives you a little bit of PR, so the more sites you have pointing to you, the higher your PR will be. Sites with higher PRs have more PR to give, so it’s beneficial to be linked to by sites with a high PR.
However, scratch the surface and things get much more involved. There are quite a few articles about PageRank; here are a few of the better ones.
- Google Pagerank Uncovered Online
- Google’s PageRank Explained and how to make the most of it
- The Google Pagerank Algorithm and How It Works
If you wanted to check your PageRank, you previously had to download the Google Toolbar, which was only available for IE on the PC. However, you can now check your PageRank on the web, using this handy Page Rank Calculator.
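For the curious, the basic idea can be sketched in a few lines of code. This is just a toy illustration of the iterative calculation the articles above describe — the three-page link graph and the 0.85 damping factor are my own assumptions for the example, not Google’s actual data:

```python
# Toy PageRank by iteration: each page repeatedly shares its
# current PR equally among the pages it links out to.
links = {
    "a": ["b", "c"],  # page a links out to b and c
    "b": ["c"],
    "c": ["a"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    pr = {p: 1.0 / n for p in pages}  # start everyone equal
    for _ in range(iterations):
        pr = {
            p: (1 - damping) / n + damping * sum(
                pr[q] / len(links[q]) for q in pages if p in links[q]
            )
            for p in pages
        }
    return pr

ranks = pagerank(links)
```

Page "c" is linked to by both "a" and "b", so it ends up with the highest rank — the "more sites pointing to you" effect in action.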
Blog Updates Part 2 | March 31, 2004
This site started out as a place to show off the travel pics in my gallery. I installed Movable Type and created my blog design just for a bit of fun. Never thinking people would actually visit my blog, I hid it away in its own directory, with no way of accessing it from the main site.
As the popularity of my blog increased, I realised I needed to move it to the root of the site, and make the URLs more meaningful at the same time. However, loads of people were linking to pages on this site, so I wanted to do it as smoothly as possible. The weekend before last I decided to bite the bullet and make the transfer.
The first step was to change the config so that the siteURL, archiveURL (and their respective paths) pointed to the new locations.
I wanted the date based archives to be of the format
To do this I changed the Monthly archive template to
I wanted the individual entries to be in the correct monthly directories and follow the format
There seemed to be a few ways of doing this, but the simplest involved making each post an index page in its own directory.
<$MTArchiveDate format="%Y/%m"$>/<$MTEntryTitle dirify="1"$>/index.php
The category pages were quite simple. They would follow the format
and this was accomplished by changing the category archive template to:
This allowed all the pages to be accessed using
However, they still appeared as
on the pages. To rectify this, I used a bit of PHP which I found on Stopdesign.
<a href="<?php echo preg_replace("#index\.php$#","","<$MTEntryPermalink$>"); ?>"><$MTArchiveTitle$></a>
Which removes the index.php bit using a regular expression.
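If you want to see what that substitution does outside of MT, here's the equivalent in Python — the permalink is made up, and I've escaped the dot for strictness:

```python
import re

permalink = "http://www.example.com/archives/2003/08/city_of_god/index.php"

# strip a trailing "index.php", leaving the bare directory URL
clean = re.sub(r"index\.php$", "", permalink)
print(clean)
```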
I had originally been storing post related images in the old archives folder. However, this didn't seem like a good idea. Instead, I created an images folder in the new archives folder and then I copied all the images to this new folder.
Obviously my posts were still referencing the old images. This was fine for now, as the images were still there. However, I'd want to get rid of them at some stage and clean out my old blog folder, so I decided to do a quick URL rewrite using a .htaccess file in the old archives folder.
RewriteEngine on
RewriteRule (.*)\.gif$ /archives/images/$1\.gif [R=301,L]
RewriteRule (.*)\.jpg$ /archives/images/$1\.jpg [R=301,L]
Which simply redirects any image files to the corresponding images in the new images folder.
People would still be linking to old posts, so I needed to redirect them to their new locations. As the names had changed as well as the locations, there weren't any clever rewrites I could use. I'd have to list out each old file and its corresponding new file as a redirect. Having quite a few posts, writing all the redirects by hand would have taken ages. Luckily I found a few sites that suggested using MT to create the .htaccess file. To do this, I created a new template named htaccess, containing the following code.
<MTEntries sort_order="ascend">
Redirect permanent /blog/archives/<$MTEntryID pad="1"$>.html http://www.andybudd.com/archives/<$MTArchiveDate format="%Y/%m"$>/<$MTEntryTitle dirify="1"$>/
</MTEntries>
Rebuilding this template created a .htaccess file that looked something like this
Redirect permanent /blog/archives/000006.html http://www.andybudd.com/archives/2003/08/city_of_god/
Redirect permanent /blog/archives/000005.html http://www.andybudd.com/archives/2003/08/finally_got_round_to_it/
Redirect permanent /blog/archives/000014.html http://www.andybudd.com/archives/2003/08/skillswap_meeting/
...
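If you'd rather not generate the file through MT, the same mapping is easy to script directly. Here's a rough equivalent in Python — the entry data is invented for illustration, but the output format matches the Redirect lines above:

```python
# each entry: (MT entry ID, "YYYY/MM" archive date, dirified title)
entries = [
    (6, "2003/08", "city_of_god"),
    (5, "2003/08", "finally_got_round_to_it"),
    (14, "2003/08", "skillswap_meeting"),
]

# MT pads entry IDs to six digits, hence %06d
lines = [
    "Redirect permanent /blog/archives/%06d.html "
    "http://www.andybudd.com/archives/%s/%s/" % (entry_id, date, slug)
    for entry_id, date, slug in entries
]
print("\n".join(lines))
```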
With all the individual entries now being redirected, I had a few more pages, such as my blog index, archive page, links page etc to redirect, and the move was almost complete.
Mostly done now, I just wanted to tie up a quick loose end. With meaningful URLs, people often try to navigate a site by "hacking" the URL. For instance, if I see this URL
I may try and backtrack to see where this
takes me. The first one works fine, as it takes you to the monthly archive index page. However, the latter just takes you to a directory listing. To avoid this I created a new .htaccess file in the archives directory with the following contents.
RewriteEngine on
RewriteRule 200./$ /archives/ [R=301,L]
Which basically redirects any of my yearly folders back to the main archive page, keeping things nice and neat and hackable.
If you were browsing this site the weekend before last, you may have noticed things jumping around a bit. If you've bookmarked any pages on this site, it may be worth updating your bookmarks. I'm pretty sure that all the old pages are redirecting to the new pages, but you can never be 100% sure. If you spot any bugs, or have any suggestions, please let me know.
From Hero to Zero | March 30, 2004
My girlfriend bought me a copy of Zero7’s new CD for my birthday the other week. Putting the CD on at work, I was surprised to find that it started skipping all over the place. A little dismayed I had a quick look at the case and discovered the answer. It wasn’t actually a CD, by which I mean that it was copy protected.
From what I understand (stop me if I’m wrong) copy protection basically involves deliberately adding errors to a disk. Most low end CD players simply don’t register these errors, but better quality CD players and those meant for reading data get confused and either skip, or don’t play at all.
Testing the disk at home, it worked fine in my cheap CD player, but was an absolute no go on my iMac. In fact it basically jammed the whole CD player up and forced me to manually eject the thing while it was still spinning.
As I pretty much only listen to music using iTunes at home these days, and have stopped using a Discman in favour of an iPod, not being able to transfer my music onto my computer is a big problem. If I’d bought it for myself, I would probably have taken it back to the shop and asked for a proper CD, but as it was a present, that’s not something I can really do. It looks like more and more disks in the UK are being released with copy protection, so it’s well worth bearing this in mind next time you’re shopping for music.
Blog Updates Part 1 | March 26, 2004
Since setting up this site, there have been a number of things about it that have bugged me. I kept adding them to my todo list, but never quite got round to doing them. By themselves, they were all quite small things, but together they amounted to quite a lot of work. They were also things that could potentially mess up the site, so for a while I thought, "if it ain't broke, don't fix it". However, last week I finally decided to bite the bullet and fix some of these annoyances.
The first annoyance was the way my templates were structured in MT. When I set this site up I was very much an MT newbie, so built it on the back of the default templates. Since then, I've pretty much overhauled the individual templates, but the overall structure remained the same.
One of the things that annoyed me was the amount of duplication. When I build a PHP site, I'm used to having header and footer files as well as various other includes like nav bars, side bars etc. However, in this site, everything was duplicated. This meant, if I wanted to change the name of a file say, I'd have to change the link in the nav bar of every page.
So first off, I created a new template module called "Nav" which looked something like this.
<div id="nav"><a href="<MTBlogRelativeURL>index.php"><img src="<MTBlogRelativeURL>images/home.gif" alt="Home" width="26" height="19" border="0" title="Home" /></a> ...
And then called this module from every template page.
Next I created a generic Header and Footer template module. I realised the header would be a problem, as it obviously contained some meta data like the title of each page. Using Brad Choate's MTIfEmpty I tried to do something like this:
<title><$MTBlogName$><MTIfNotEmpty var="ArchiveTitle">: <MTArchiveTitle></MTIfNotEmpty></title>
Thinking that if the page was an archive page, the title would get displayed, but if it wasn't, you'd just get shown the blog name. Unfortunately when I tried to rebuild the index page, I'd get an error message saying that <MTArchiveTitle> was being used in a place that it shouldn't be used. There is probably some simple way of doing this, but in the end I gave up and created two Header template modules: one for the archive pages and one for all the other pages.
Next thing to do was to create a generic side bar. Before, I only had a side bar on my home page. However I wanted to have a side bar on every page. Again I created a new template module and called it in from every template page. This worked fine on my homepage and archive index pages, but the links went all screwy on the monthly and category pages. My "Last 6 posts" started displaying the last six posts from that category, rather than the last six posts from the whole blog. Again, I'm sure there was a simple way of getting it to work in MT, but as a stop gap, I created another side bar template module, without the "Last 6 posts", to use on the sub pages. What I'll probably do is create an index template instead and then use a PHP include to attach it to each page. I'd have preferred to find an elegant MT-only solution, but couldn't figure one out.
I've been using MTW3CValidate on my index page to validate the code when I rebuild it. If the code is valid, a W3C button gets displayed, whereas nothing gets displayed if it doesn't validate. I added the <MTW3CValidate> tags to my header and footer, but noticed that Safari was timing out when rebuilding the pages. I tried adding the tags to the individual templates, but rebuilds were still timing out for the individual pages. It would work intermittently on other browsers with longer time-outs, but ended up being far too patchy to use.
So I ran into a few problems, and things aren't working quite as I'd like them to. However the template structure makes much more sense now that I've created header, footer, nav and sidebar template modules. This was all in preparation for my next step, updating the archiving format so that my posts had meaningful URLs, and moving the whole blog up a directory. However I'll leave that till Blog Updates Part 2.
The Incredible Dancing Google Ads | March 26, 2004
If you're viewing this page using Firefox (also Moz 1.6) on Windows, you may notice a rather strange problem. If you roll over a link in the main content area of my blog (but not the nav or the side bar), the Google ads appear to jump up the page for a split second, and then jump down again. This also happens when you roll off a link.
I have absolutely no idea what's going on, but would love to get to the bottom of the problem. If you've got any ideas, please let me know. In the meantime, I'd just like to thank all the people who've emailed me about it. There have been quite a few, so sorry I've not been able to thank you in person, but I really appreciate the heads up.
AllTheWeb R.I.P. | March 26, 2004
It's official. AllTheWeb is now dead. Searches on AllTheWeb now use Yahoo's new crawler. They've even managed to bugger up what was once a really nice CSS-based layout.
My Porn Site | March 26, 2004
SXSW | March 24, 2004
SXSW sounded great this year. So good in fact, that I plan to organise a Brighton contingent to go over next year. The quality of the panels sounded excellent, however it’s the social angle that seemed the biggest hit.
The majority of blog chatter focused less on the actual event, and more on the people they’d met. The chance to meet in person the people whose books you’ve read, whose work you’ve admired and whose RSS feeds you’ve been subscribed to for the last few years. That would be the big draw for me. I know just from attending local meet-ups how cool it is to finally put a name to a face after so many years of reading people’s posts. SXSW seemed to do this to the power of 10.
This really brings home the power of the web, and in particular the power of blogging. It’s all about people. About fostering a community and creating new links and new friendships based on interest, not just geographic location. So I definitely plan to visit next year, for the social aspect alone.
In between all of the socialising, the panel discussions sounded great. These links have been in my todo file for a few days now, and have already filtered through the blogosphere. However, if you’ve missed them, you should really check out the following SXSW presentations.
- HiFi Design with CSS by Douglas Bowman
- HiFi Design with CSS by Dave Shea
- CSS: The Good, the Bad, & the Ugly by Douglas Bowman
- CSS: The Good, the Bad, & the Ugly by Dave Shea
- CSS: The Good, the Bad, & the Ugly by Tantek Celik
- High Accessibility, High Design by Joe Clark
- The Frontiers of User Experience by Jeffrey Veen
- I Don’t Care About Accessibility by Jeffrey Veen
- Interface Design by Jason Fried
Sub:lime Discussion | March 21, 2004
Thanks for all the comments about my recent Zen Garden rip. Who'd have known Metallica were such avid readers of my blog ;-)
I've been in email contact with both the designer who used my design and the owners of the site. Both have been extremely reasonable and the site got taken down and redesigned almost immediately.
It was clear from my correspondence that the designer in question was under the impression that the CC license gave him the right to use my design in any way he saw fit. My initial feeling was that he was wrong, but after reading some of your comments, I'm not so sure.
When I submitted my design, I read through the FAQs, the CC licence and the comments Dave puts in the CSS files. All designs are given the same licence and comments, so this is not something I could change or add to.
It was my understanding that the CSS was under a "Share and share alike" licence, but the design was not. This made perfect sense to me at the time. Being a designer and a developer, I see the design and the code as two completely different things. The design is the visual idea, the CSS is simply the mechanism I chose to represent that idea in.
As a developer, I'm not precious about my code. I'm happy for people to download it and play with it to see how it works. God knows I've done this before. I'm happy for people to grab chunks of code to use on their own sites, or use the code as a building block for something new. After all, this is how many people learn web design, and one of the reasons the Garden was set up. Hell, it's one of the reasons I submitted my design in the first place.
However as a designer, I see the design as a completely different, and very personal thing. While I'm happy for people to use my code, I'm less happy for people to use my designs without permission. Call me whatever you want, that's just the way I feel.
Usually it's pretty easy to spot a copy. I get quite a few people emailing me, telling me about a new sub:lime clone they have found. If they were better than the original, I'd be pretty chuffed. However they rarely are. A few people said that I should be happy that the designs are always worse than the original. However it just makes me sad. As I explained to Dave S, it's like when you design a great looking site for a client and then come back in 6 months to see that it's been messed up. It's not a nice feeling seeing good things go bad.
Exactly when something stops being your design and starts being a completely new work is a different matter. I'm all for people taking inspiration. In fact I'd encourage it. Building on somebody else's work is a good, creative and positive thing to do. Simply copying the design is lazy.
I think using music as an analogy is an interesting idea. I'd be more than happy for you to take a complete copy of sub:lime and either keep a copy on your computer or even publish it on your own version of the CSS Zen Garden. Using the music analogy, this would equate to file sharing. I'm also happy for people to take the design and use it as a base for creating something completely new and different (remixing). However I'm not happy for people to add their own content to my design and then do whatever they want with it. This would be like downloading a song, changing a few of the words and then using it for your company jingle, or trying to sell it on to somebody else as your own work.
After reading some of the comments on my original post, I revisited the CC license and realised that the distinction between the design and the code wasn't as clear cut as I'd first thought. While I still believe that the design is different from the code, I now agree that the design is implied by the code, and that, if you release the code under a "Share and share alike" license, you're effectively releasing the design as well.
Despite what the rest of the comments in the code may say, having your design released under such a licence removes all control you have over the design. Effectively, the design isn't yours any more. You'd have no right to ask people not to use your design. They could sell it on, they could post it to a site such as www.oswd.org; they could basically do anything, as long as the licence remains the same. It also appears that once something is released under a CC licence, it can't be changed.
Some people have suggested that my decision to remove my design from the Garden was down to me not being able to handle the fact that people will always rip off designs. This is not the case. I'm perfectly aware that there will always be people out there who will copy others' work.
The reason I chose to remove my design is because the license it's been released under actively encourages this kind of copying.
Cool Tee's | March 20, 2004
Brighton based T-shirt designers Knofler have just launched their summer range. So if you want some distinctive threads this summer, they are well worth checking out.
I Take it All Back | March 18, 2004
When Apple announced the iPod mini, I was underwhelmed to say the least. I thought the colours looked a little tacky, and I really didn’t think there was a need for a smaller iPod. I mean, it’s not as though the regular iPod has ever been described as bulky. My main complaint however, was the price point. For $50 more, you could get a “proper” iPod, and I really thought most people would just pay the extra cash to get something with much more space.
However, Pete got hold of one while he was in SF and I have to say, they are pretty cool. First off, they’re tiny. After seeing one in real life, I’m beginning to see the benefit of their size. They’re so small you can just keep one in your pocket permanently, and so light that you won’t even notice it’s there. No more walking around lopsided, with a pocket full of phones and mp3 players. Another cool feature which Pete pointed out is that you can transfer all the images on your digital camera straight to your iPod. No need for extra kit like the Belkin Media Reader. If you’ve got a digital camera like the EOS 300D, this alone would be a reason to get the iPod mini.
They are still a little pricey for the UK market, but if you happen to be in the States, you can pick them up for around £125 (if they’ve not all sold out), which is a bargain. If they were that price in the UK, they would literally be flying off the shelves. Suffice to say, I’ve added one to my wish list and have my fingers crossed that somebody out there will get me one for my birthday in a few days time.
Well, it was worth a try anyway :-)
More Community Minded | March 18, 2004
A few of you may remember a spat I had with a few individuals on my local mailing list a couple of weeks back. The spat left me feeling distinctly negative towards the local web community, and as a result I signed off the list and was thinking seriously about packing my SkillSwap project in.
Since leaving the list, there have been more spats and more people have left, some of whom have been there from the start. At one point, it really looked like the list was going to implode. Luckily it looks like the message may finally be getting across to the individuals concerned, and things seem to be calming down. Over the last couple of days, I’ve been to two local web community events, and was extremely impressed with positive vibe at both.
On Monday, I ran a SkillSwap event entitled “The Business of Freelancing”. The event was extremely popular and heavily oversubscribed. This was partly down to the excellent speakers we had in the shape of Jonathan Hirsh and Tom Nixon, and partly due to the timeliness of the talk. The web business really seems to be picking up at the moment and quite a few people are taking the leap from full time work into freelancing. There seems to be much more business around at the moment, something confirmed by the speakers and the attendees alike.
The talk went down really well. Both presenters had plenty of material, and each could have done a whole SkillSwap just on their own. However, having two presenters worked really well, as each came from a different background and had a different approach.
Jon’s talk was very much focused on the practicalities of being a freelancer. He talked about having good contracts, book keeping and tax, trading statuses etc, as well as more prosaic topics such as networking and elevator pitches. Tom’s talk revolved around what potential employers want from a freelancer. He discussed the best way to approach agencies looking for work, and gave examples of good (and bad) emails people had sent him when replying to job adverts. He talked about reliability and the fact that companies will hire a freelancer to “take the pain away”. If you’re interested in finding out more about these talks, you can download the presentations from the SkillSwap archive.
After the presentations, we all went for drinks and were joined by a number of other local web folk. The pub talk was all extremely positive and it was good to put more faces to names as it were.
Then last night the BNM had a social down at Riki Tik’s. I was really impressed by the turnout. Loads of people came down, and I had another matching faces to names session. There were a lot of people there who I didn’t recognise, so I’m sure there must have been a big lurker contingent.
Overall, a couple of very positive nights that have helped revive my faith in the local web community. It’s really easy to mistake the loud voices of a few vocal community members as a general feeling of negativity. However in truth, it’s only a few people resonating these negative vibes, and on the whole, Brighton has a very active web community.
Quick Quiz: H1's and Logos | March 18, 2004
Wrapping your logo in an H1. Good or bad?
New Standards Compliant Website | March 13, 2004
The Work Included “Redesign of existing website to current web-standards, graphic design, search engine optimization”.
“Octane was contracted to redesign the web site for Clocking Edge Software’s award winning Hormonal Forecaster program. Already recognized as a Five Star program by Ziff Davis, HFC needed a web site that would be search engine compliant, user friendly, and striking in design. We developed a naturalistic theme that is easy to navigate and very quick loading, while still maintaining standards compliance and google-friendliness.”
And there was me thinking that I created the naturalistic theme and graphic design. I have to say that I’m not massively bothered about people using a version of sub:lime on their personal sites. But I really draw the line at having my work ripped off by another web design agency and then passed off as their own!
Amazon Wishlist | March 9, 2004
When I designed my blog, I added my Amazon wishlist more because everybody else did it, than because I thought people would want to buy me stuff. However I figured, nothing ventured, nothing gained. Somebody might find something on this site that helps them, and decide to buy me something as a way of saying thanks.
However I’ve never been bought anything and to be honest, that didn’t come as a surprise. Then last week, I was browsing my wishlist and saw a link saying “Items already purchased for you are hidden from view. Reveal purchased items”. Cool, I thought, somebody has bought me something. Sure enough, I see that some kind soul has bought me a couple of books off my list.
Hang on a minute, I thought. I’ve not actually been sent any books. So I shoot off an email to Amazon asking if they can tell me what’s going on. The following day I get a polite email back saying no, they can’t tell me what’s going on, I have to speak to the person who bought me the items. I tried to explain that I didn’t know who bought me the items, but got sent back a reply saying that Amazon can’t tell me who bought the books for me. I try to explain that I wasn’t asking them to tell me who sent me the books, but that surely they must know, and couldn’t they find out what’s going on. I get a reply back saying basically exactly the same thing. Sorry, but we can’t tell you who sent you the books. End of conversation.
So if the person or persons who bought me these books is reading this, I’d just like to say a big thanks, as they are both books I’ve wanted for a while. However is there any chance you could drop Amazon an email to let them know that the books haven’t been received :-(
Beautiful Version 2 Entries | March 8, 2004
I have to admit, I wasn’t expecting such a high standard of entries when Paul (whitespace) Scrivens announced his monthly Version 2 competition. However, on looking at the list of submissions for the February contest, I have to say I was impressed.
The standard is extremely high, and below is a list of my favourites.
And The Winner is... | March 7, 2004
The judging is over, the votes are cast, and our esteemed judges have come to an almost unanimous decision. So it’s with much pleasure that I’d like to announce the 85th PGA Championship site as Web Standards Awards Site of the Month for February.
Despite some people’s misgivings, the quality of February’s award winners has been universally high. We are always looking out for sites that are visually appealing and marry usability and accessibility with creative use of web standards. If you have been involved in creating such a site, or feel a particular site should be considered for an award, please don’t hesitate to send in your submission.
Web Superstars | March 5, 2004
Every creative field has its superstars, from Music to Movies, Architecture to Interior Design. Who are the people in the web industry that really ring your bell? Would you walk a million miles for one of Zeldman's smiles or climb every mountain to hang out with Jakob Nielsen?
If you were stuck on a desert island with just a laptop, a WiFi connection and a lifetime's supply of beer and junk food, which web superstars would you want to be marooned with, and why?
Minor Additions | March 5, 2004
Over the last few weekends, I've been moving all of my links from a static HTML page, into a new MT links blog. Now that's done, I've added a new "Latest Links" header to my side bar. So now, rather than having to write lists of links in my main blog, I'll be posting all the well designed CSS sites I find, straight to my links blog.
AllTheWeb Bites the Dust? | March 4, 2004
When I started using the web, there were a large number of popular search engines and directories to choose from. I started using WebCrawler, Magellan, then I moved to Yahoo, AltaVista, Lycos and HotBot. I’d have a favourite, but would still tend to use 2 or 3 search engines at any one time. Then along came Google and slowly word of mouth made it the Search Engine of choice for the majority of web users.
As more and more people started to use Google, the other search engines began to lose market share and either fold or get subsumed by other search services. A few smaller search engines like Teoma appeared, but they never managed to achieve the popularity of Google.
Apart from Google, the only other Search Engine I use is AllTheWeb. AllTheWeb is a great search engine and is definitely on par with Google in terms of usability and relevancy. In fact, as Google’s results seem to be getting worse and worse by the day, I always saw AllTheWeb as a real competitor to Google. So it’s a shame that Yahoo, who acquired AllTheWeb a while back, look like they are going to drop AllTheWeb.
Rather than improving their position in the market, by dumping AllTheWeb Yahoo seems to be getting rid of Google’s closest competitor in terms of size and relevancy, if not market share.
Idiots Guide to Backing-up MySQL on OS X | March 3, 2004
At work we use an old G4 as our file and development server. At the end of last week, while synchronising one of our local MySQL databases with the live database, I accidentally overwrote a local table with old data. Luckily we back up, so this wasn't a problem. However I thought some of you might find it useful to know how we do this, just in case you ever end up in a similar predicament.
Backing up a local MySQL database on OS X is pretty simple. To do a backup, you make use of the mysqldump command. This simply dumps the contents of the specified databases into a text file as INSERT statements. The below example simply dumps the contents of my clowns database to the current directory.
(I'm using the ¬ symbol to indicate where a line wraps)
mysqldump --user=andy --password=secret¬ clowns > clowns.sql
If you want, you can add a number of parameters to change the way mysqldump behaves. In this example, I'm locking the database before running the dump, adding a drop table and then naming the file based on the day of the week. If I run this command each day, I'll end up storing 7 days of backups locally.
mysqldump --user=andy --password=secret clowns¬ --lock-tables --add-drop-table > clowns`date +%u`.sql
If I wanted, I could name the file based on the actual date and store one file for each day. This would probably make more sense, but storage space is pretty tight, so I'm happy with just the last 7 days.
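The only difference between the two schemes is the format string passed to date. A quick sketch (these are standard strftime-style flags, though it's worth checking your system's man page):

```shell
#!/bin/sh
# %u is the day of the week (1-7), so these names recycle every week
weekly_name="clowns`date +%u`.sql"

# %Y%m%d is the full date, so you'd keep one file per day indefinitely
daily_name="clowns`date +%Y%m%d`.sql"

echo "$weekly_name"
echo "$daily_name"
```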
The next thing to do is make a shell script file. Open up a blank document, name it mysqlbackup.sh and then add the following lines.
#!/bin/sh
mysqldump --user=andy --password=secret clowns¬
--lock-tables --add-drop-table > clowns`date +%u`.sql;
In my backup script, I've added a bit of logic and error reporting so I know what's going on. If you wanted, you could also turn things like the username and password into variables to make editing the script easier.
Here is an example of the script I'm using.
#!/bin/sh

# echo start message
echo "Backup Script Processing"

# navigate to backup dir
if cd /Users/andy/clownbackups/
then
    echo "...successfully navigated to backup dir"
else
    echo "could not locate backup directory"
    exit 0
fi

# echo message
echo "exporting SQL dump"

if
    # dump the db into a .sql file
    mysqldump --user=andy --password=secret clowns¬
    --lock-tables --add-drop-table > clowns`date +%u`.sql;
then
    # echo success message
    echo "SQL dump successful"
else
    # echo error message
    echo "mysqldump error"
    exit 0
fi
You can test your script out by running it from the terminal using the sh command. You'll see the output echo'd to the screen and if it's been successful, your mysql dump will appear in the directory you specified.
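If you want to try the save-then-run workflow without a live MySQL server, a stand-in script built around a plain echo behaves the same way (testbackup.sh is just a throwaway name):

```shell
# Write a minimal stand-in script -- just an echo, since mysqldump
# needs a running server -- then execute it with sh
cat <<'EOF' > testbackup.sh
#!/bin/sh
echo "Backup Script Processing"
EOF

sh testbackup.sh
```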
Now this is pretty cool, but nobody wants to open up the terminal and run this script every day before backing up to disk (you do back up every day, don't you?).
To get round this, you need to set up a cron job. When I first heard about cron jobs, they filled me with fear (as you can tell, I'm not hugely server savvy). However, cron jobs are actually very simple: a cron job is just a task you want to run automatically at a set time. OS X runs a number of cron jobs by default, mostly involving cleaning up logs and temporary files. The file that manages the system cron jobs is called crontab (short for cron table) and on OS X is located in /private/etc.
This file is probably owned by root, so you'll need to sudo pico into it to make any changes. Once open, you'll see that it's a simple table of jobs (hence the name). The columns on the left set the frequency, and the command to run is on the right. Here is an example of my default crontab file.
# /etc/crontab
SHELL=/bin/sh
PATH=/etc:/bin:/sbin:/usr/bin:/usr/sbin
HOME=/var/log
#
#minute hour    mday    month   wday    who     command
#
15      3       *       *       *       root    daily
30      4       *       *       6       root    weekly
30      5       1       *       *       root    monthly
What we need to do is add a new line to this file that tells OS X to run the back-up script at a set time. We usually back up at 5:30pm, so I'm going to run my mysql backup script at 5:15pm and then save the file. If you're not comfortable doing this manually, Cronnix is a useful OS X GUI for editing your cron jobs.
15 17 * * * root ¬ sh /private/etc/mysqlbackup.sh¬ >> /private/etc/cron_error.log 2>&1
In the above example I'm also sending the output to an error log so I can make sure the script is executing correctly. When I run the script, the mysqlbackup file appears in my backup folder. However, if I open it, it's empty. Looking at the error log tells me that the mysqldump command couldn't be found. This may seem strange, as the same shell script runs fine from the command line. The reason lies in environment variables.
When you log in to the terminal, one of the first things that happens is OS X goes off and looks for a file called .tcshrc (if you're using tcsh) which contains customisation information about the user. One of these bits of information is a path, which tells OS X where certain programs are stored. If you run a cron job, you need to add this path information to your crontab. If you look at the above example of my crontab file, you'll see the following line.
PATH=/etc:/bin:/sbin:/usr/bin:/usr/sbin
What you need to do is add the path to mysql to the end of this line. When I do this, my path becomes:
PATH=/etc:/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/mysql/bin
Depending on how you installed MySQL, your path information may be different. If you're not exactly sure where mysql is stored, you could either do a quick locate or have a look in your .tcshrc file (it's quite interesting to see what's in there anyway). With this set, your specified databases should get backed up every day and you'll no longer have to worry about overwriting or losing data. For more fun, you can change your shell script to back up and download from a remote server.
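A quick way to see the mismatch is to print a search path one directory per line; run this in your shell and compare it with the short PATH line at the top of crontab:

```shell
# Show the current shell's search path, one directory per line.
# Any command living in a directory that's missing from crontab's
# PATH will be "not found" when cron runs the script.
echo "$PATH" | tr ':' '\n'
```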
Word of warning though: I'm no *NIX expert, so all the usual disclaimers apply. Don't talk to strangers, always wear clean underwear if you're going out, and make sure that you create copies of any system files you're editing. There are probably better ways of accomplishing this, but the above method is pretty simple and quite a good example of what can be done with a little command line tinkering on OS X.
Bloggers Weekend | March 2, 2004
As you may already be aware, myself, Jeremy, Richard, Jon and Stuart had a bloggers weekend in Dorset at the invitation of Dunstan Orchard. The weekend was loads of fun, and decidedly less geeky than expected.
In between all the eating, drinking and tech talk, there was a whole bunch of photography going on. Spurred on by some amazing pics Dunstan took the night before we arrived, we went out in the middle of the night armed with a bunch of torches and Dunstan's EOS 10D. After much jumping around we ended up with a load of crazy pics, one of which you can see below.