[Sussex] Running out of memory :(

Richie Jarvis richie at helkit.com
Wed Jul 2 10:47:38 UTC 2008


Hi All,

I am trying to set up a new machine based on the ADM5120 'system on a 
chip' controller - it's running Midge 
(http://midge.vlad.org.ua/wiki/Main), which works very well.  My plan is 
to connect my 750GB USB drive to the unit and do a daily rsync to it 
in order to back up my 750GB RAID unit - eventually moving the system 
off-site for added security.

Anyway, I've set it up on my workbench and everything runs fine.  I can 
mount the USB drive, run rsync (once the package has been installed via 
ipkg), and start moving data.

The problem is that the poor little system keeps running out of memory.  
From what I understand, rsync builds a file list on both sides of the 
transfer in order to determine what to send, and holds that 
data in memory, which is where I believe the problems start.  
I've already split the rsyncs up to try to work around the problem, but 
there still appears to be far too much data in memory, as I get this:

__alloc_pages: 0-order allocation failed (gfp=0x1f0/0)
__alloc_pages: 0-order allocation failed (gfp=0xf0/0)
__alloc_pages: 0-order allocation failed (gfp=0x1d2/0)
__alloc_pages: 0-order allocation failed (gfp=0xf0/0)
__alloc_pages: 0-order allocation failed (gfp=0x1d2/0)
VM: killing process zebra
SOFTDOG: Initiating system reboot.

After that, the system reboots, my drive is no longer mounted (there 
seems to be a problem with the automount), and rsync isn't loaded.  
Obviously, once I've got everything tested and working properly, I'll 
have to rebuild the system image to include rsync.
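For scale: the rsync FAQ's rule of thumb is that rsync (before 3.0) holds 
roughly 100 bytes of RAM per file in the transfer list, so on a board with 
only a few MB of memory even a modest tree adds up.  A back-of-envelope 
check (the 500,000-file count below is purely illustrative):

```shell
# Rough estimate of file-list memory, using the rsync FAQ's rule of
# thumb of ~100 bytes of RAM per file (rsync versions before 3.0).
files=500000                      # hypothetical file count on the RAID
bytes_per_file=100
list_mb=$(( files * bytes_per_file / 1024 / 1024 ))
echo "file list alone: ~${list_mb} MB"    # ~47 MB for this example
```

That is comfortably more than the RAM on most ADM5120 boards, which would 
explain the allocation failures above.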

Anyway - can any of you fearfully intelligent folks out there think of 
a way to script the generation of a file list, and then pass it to 
rsync in small chunks so this thing doesn't blow up?
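One possible shape for that script - a minimal sketch, assuming your rsync 
supports --files-from (present since 2.6.0); the paths, chunk size and 
sample tree are made up for illustration, and 'echo' stands in for the 
real rsync call so you can see what would run:

```shell
#!/bin/sh
# Sketch: build the file list cheaply with find, split it into small
# chunks, and hand each chunk to rsync via --files-from, so rsync never
# holds the whole list in memory at once.

# Self-contained demo tree (on the real box SRC would be the RAID mount
# and DEST the USB drive).
SRC=$(mktemp -d); DEST=$(mktemp -d); TMP=$(mktemp -d)
for i in 1 2 3 4 5 6 7; do touch "$SRC/file$i"; done

# 1. Build the full file list, relative to $SRC.
( cd "$SRC" && find . -type f ) | sort > "$TMP/filelist"

# 2. Split it into chunks of 3 entries -> $TMP/chunk.aa, chunk.ab, ...
#    (on the real system a chunk size of a few thousand would be sane).
split -l 3 "$TMP/filelist" "$TMP/chunk."
chunks=$(ls "$TMP"/chunk.* | wc -l)

# 3. Run one small rsync per chunk.  Drop the leading 'echo' to do it
#    for real, e.g.: rsync -a --files-from="$list" "$SRC/" "$DEST/"
for list in "$TMP"/chunk.*; do
    echo rsync -a --files-from="$list" "$SRC/" "$DEST/"
done

rm -rf "$SRC" "$DEST" "$TMP"
```

Each rsync invocation then only has to keep one chunk's worth of entries 
in memory, at the cost of a little extra per-connection overhead.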

Thanks,

Richie
