[dundee] MySQL backup woes

Kris Davidson davidson.kris at gmail.com
Mon Feb 15 12:00:24 UTC 2010


Yeah, I considered just stopping MySQL and copying the directory, but
I'd like to avoid that if possible. I think part of the problem is
that one of the databases has 34,077 tables, and most of the
additional options I try fail with:

mysqldump: Got error: 29: File './path/to/file.MYD' (Errcode: 24) when
using LOCK TABLES
mysqldump: Got error: 1016: File './path/to/file.MYD' (errno: 24) when
using LOCK TABLES

I've tried upping the open files limit to 8192, but obviously it still
isn't going to like it.
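
For the record, errno 24 is "too many open files", and with ~34,000
MyISAM tables (each open table keeps its .MYD and .MYI file
descriptors around) a limit of 8192 is never going to be enough once
LOCK TABLES grabs a whole database at once. As a rough sketch of what
I think needs raising - the values are untested guesses, assuming the
stock Lenny MySQL 5.0 packages:

  # /etc/mysql/my.cnf, [mysqld] section
  open_files_limit = 120000
  table_cache      = 16384     # called table_open_cache on 5.1+

  # after restarting MySQL, check what the server actually ended up with
  mysql -u root -p -e "SHOW GLOBAL VARIABLES LIKE 'open_files_limit'"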

I feel the need to say I'm not responsible for the mess, I'm just
trying to tidy it up. The last guy used to back up the databases using
Navicat on a local PC.
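
If I do end up going the stop-and-copy route I mentioned above, I'm
assuming it would be roughly this (paths are guesses for a stock
Debian install, and the backup destination is made up):

  /etc/init.d/mysql stop
  tar czf /backup/mysql-$(date +%F).tar.gz /var/lib/mysql
  /etc/init.d/mysql start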

Kris

On 15 February 2010 10:58, Gavin Carr <gavin at openfusion.com.au> wrote:
> Yep, that seems horribly slow. I do something pretty similar on a
> larger server (8G RAM) with fast disks:
>
>  nice mysqldump --defaults-file=~/.my-mysql.cnf --opt --skip-dump-date $db | gzip > $db.sql.gz
>
> and my 8.5GB db dumps in 9 minutes.
>
> Maybe try adding --quick to your dump, and/or switch from
> --single-transaction to --opt (which basically means using --lock-tables
> instead). --single-transaction is better if and only if all your tables
> are innodb, I think?
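>
> For your loop that would be something along these lines (untested,
> and the output filename is just a guess at your variable names):
>
>   nice $MYSQLDUMP -u ${MYSQLUSER} -p${MYSQLPASSWORD} --opt --quick $DB | $GZIP > ${BACKUP}/${DB}.sql.gz
>
> And you can see which engines you've actually got with:
>
>   mysql -u ${MYSQLUSER} -p${MYSQLPASSWORD} -e "SELECT engine, COUNT(*) FROM information_schema.tables GROUP BY engine"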
>
> Cheers,
> Gavin
>
>
>
> On Mon, Feb 15, 2010 at 10:22:44AM +0000, Kris Davidson wrote:
>> Forgot to say I'm using
>>
>> for DB in $($MYSQL -u ${MYSQLUSER} -p${MYSQLPASSWORD} -e 'show databases' -s --skip-column-names); do
>>   $MYSQLDUMP -u ${MYSQLUSER} -h localhost -p${MYSQLPASSWORD} --single-transaction $DB | $GZIP > ${BACKUP}/${DBFILE}
>> done
>>
>> to back up. I've tried not compressing, using a single .sql file,
>> using multiple files, etc.
>>
>> Kris
>>
>> On 15 February 2010 10:05, Kris Davidson <davidson.kris at gmail.com> wrote:
>> > Got a Dell PowerEdge DX-250 (quad-core Intel Xeon 2.40GHz, 1GB ECC
>> > DDR2 RAM, 160GB SATA 7,200RPM HD) running Debian Lenny with the
>> > 2.6.26-2-686 kernel (previously upgraded from an Etch install). It
>> > doesn't really run much - DNS, Apache and FTP. It's not under heavy
>> > load.
>> >
>> > Anyway, I've been cleaning and sorting the server out and decided to
>> > schedule some backups:
>> >
>> > 2,643,619 files into a 6GB archive
>> > 34 databases, 600MB's worth, into a 100MB archive
>> >
>> > Now the files back up in about 2 hours while the databases take 5
>> > hours. That seems excessive to me; I'm used to long backup times on
>> > large databases (10GB+), but these don't seem all that large.
>> >
>> > Just wondering if anyone has any advice or suggestions?
>> >
>> > kris
>> >
>>


