[GLLUG] crossloading a mysql db - improvements to shell script?
Karanbir Singh
mail-lists at karan.org
Mon Apr 7 08:50:50 UTC 2014
On 04/07/2014 09:24 AM, tid wrote:
> Hi Folks,
>
> I'm seeking to improve a database crossload from one mysql server to
> another and am looking for any ideas / suggestions.
>
> Previously, the client was using :
>
> -------------------------------------------------
> dump fromserver db1 > file1.sql
> dump fromserver db2 > file2.sql
> cat file1.sql | mysql toserver db1
> cat file2.sql | mysql toserver db2
> -------------------------------------------------
>
> which I've sped up by:
>
> -------------------------------------------------
> dump fromserver db1 > file1.sql &
> dump fromserver db2 > file2.sql &
> wait
> cat file1.sql | mysql toserver db1 &
> cat file2.sql | mysql toserver db2 &
> wait
> -------------------------------------------------
>
> I can obviously do this:
>
> dump fromserver db1 | mysql toserver db1
>
> at which point the network becomes the bottleneck. Is it possible to
> improve performance by bonding network interfaces? The only fly in the
> ointment here is that the mysql server is an AWS RDS instance, and
> therefore I can't add interfaces to it - only to the server where my
> script runs.
>
> Any thoughts / suggestions / guffaws of laughter gratefully received.
>
Maatkit has a parallel dump and load (mk-parallel-dump / mk-parallel-restore);
they parallelise at the table level and can gzip on the fly. Also, why write
to a file and then cat it into the toserver later? Send it as a stream on the
fly instead (use nc for further perf wins).
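A minimal sketch of that streaming idea ("fromserver", "toserver" and "db1"
are the placeholder names from the thread; mysqldump/mysql options and the nc
port are illustrative, so test the pipe shape locally first):

```shell
# The dump never needs to touch disk: compress in flight so the wire
# carries gzip'd SQL. Verify the pipe shape locally with a stand-in stream:
printf 'CREATE TABLE t (id INT);\n' | gzip -c | gunzip -c

# With real servers, the producer and consumer become mysqldump and mysql:
#   mysqldump -h fromserver db1 | gzip -c | ssh toserver 'gunzip -c | mysql db1'
#
# Or, trading ssh's encryption overhead for raw throughput, run a nc pair
# (port 3307 is arbitrary; start the listener on toserver first):
#   toserver:   nc -l 3307 | gunzip -c | mysql db1
#   fromserver: mysqldump -h fromserver db1 | gzip -c | nc toserver 3307
```

The first line is runnable as-is and shows the gzip/gunzip pair round-trips
the stream byte-for-byte; the commented variants swap in the real endpoints.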
- KB
--
Karanbir Singh
+44-207-0999389 | http://www.karan.org/ | twitter.com/kbsingh
GnuPG Key : http://www.karan.org/publickey.asc