Backing up MySQL is not very hard -- you just run mysqldump and write the output to a file -- but the way it behaves out of the box isn't really suited to production environments.

At first, running mysqldump only took a few seconds, so it wasn't a big deal that everything locked up. But over time our backups got so big that the entire site would go down for half an hour whenever a backup ran, and even with lots of caching that meant somebody was getting an error page every single day. The problem is that mysqldump locks the database tables before it exports them, so that no new rows get inserted while the export is in progress.
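For reference, a plain invocation like the one below is the kind of backup that takes those locks (user, password, host, and database name are placeholders) -- mysqldump's default options include --lock-tables, which read-locks the tables for roughly the duration of the dump.

    # The naive dump: it works, but the default --lock-tables behavior
    # blocks writes to the database while it runs.
    mysqldump -uUser -pPass -hHost database > backup.bak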

The solution is to use the --single-transaction argument, which gives you a consistent backup without locking everything up. What actually happens is that mysqldump starts a new transaction with a consistent snapshot of the data, so the entire dump reflects a single point in time while other sessions keep inserting and updating as usual.
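Under the hood this boils down to a couple of SQL statements. The sketch below, run through the mysql client with the same placeholder credentials, is roughly what mysqldump sets up before it starts reading data -- not the exact statement sequence, just the mechanism:

    # Roughly the transaction setup mysqldump performs with --single-transaction.
    # "some_table" is a hypothetical table standing in for the dump queries;
    # everything selected inside the transaction sees the same snapshot,
    # while other connections keep writing normally.
    mysql -uUser -pPass -hHost database -e "
      SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
      START TRANSACTION WITH CONSISTENT SNAPSHOT;
      SELECT COUNT(*) FROM some_table;
      COMMIT;"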

Note: the only caveat is that your database tables need to be using InnoDB rather than MyISAM. InnoDB has been the default storage engine since MySQL 5.5, so you're probably fine.
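If you want to double-check before relying on this, you can list each table's storage engine with a quick query (using the placeholder database name from the command below):

    # List the storage engine for every table in the database; anything still
    # on MyISAM won't get the consistent-snapshot treatment.
    mysql -uUser -pPass -hHost -e \
      "SELECT table_name, engine FROM information_schema.tables WHERE table_schema = 'database';"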

So now that we've got that figured out, just add the argument to your normal backup routine, like this:

    mysqldump -uUser -pPass -hHost --single-transaction database > backup.bak
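For a typical setup that just means dropping it into cron. A sketch of a nightly job (the schedule, path, and credentials are placeholders) might look like:

    # Run at 3:00 AM every night; gzip keeps the growing dumps manageable.
    # In a crontab, % has to be escaped as \% for the date format to work.
    0 3 * * * mysqldump -uUser -pPass -hHost --single-transaction database | gzip > /backups/database-$(date +\%F).sql.gz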

And now your backups will be much more reliable.