[Klug-general] disk space issue

alan alan at hipnosi.org
Mon Apr 29 15:21:56 UTC 2013


du -sh /var/log/*
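
or, to sort the biggest offenders to the bottom (GNU sort):

du -sh /var/log/* | sort -h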

Check out certain logs like btmp (the failed-login record you see when
you run the command 'lastb').
Hacking attempts can swell it considerably: a publicly available IP can
attract 2GB of failed-login records a year.
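
To see how big yours has grown and roughly how many entries it holds
(lastb needs root), something like:

ls -lh /var/log/btmp
lastb | wc -l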

Some distros do not have a well-configured default logrotate.conf
(CentOS 5, for example); your conf may need something like this added:

/var/log/btmp {
     missingok
     monthly
     minsize 1M
     create 0600 root utmp
     rotate 1
}
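
You can dry-run that stanza before relying on it - logrotate's -d flag
is debug mode and only prints what it would do:

logrotate -d /etc/logrotate.conf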

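As for the ~30G that df says is used but you can't find (below): a
process may still be holding deleted files open - df counts those but
du can't see them. Since /var is its own mount point here, something
like this will list them:

lsof +L1 /var

(+L1 selects open files with a link count under 1, i.e. deleted;
restarting the offending process frees the space.)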

On 29/04/13 15:55, Sharon Kimble wrote:
> On Mon, 29 Apr 2013 10:15:18 +0100
> Dan Attwood <danattwood at gmail.com> wrote:
>
>> ahh yes, maths was never a strong point of mine.
>>
>> so it looks like the disks are good now. Time to look more into what
>> suddenly ate the space up.
> You don't run 'rsnapshot', do you? Because that’s notorious for eating
> up free space on the hard drive!
>
> Sharon.
>>
>> On 29 April 2013 10:11, Matthew Tompsett <matthewbpt at gmail.com> wrote:
>>
>>> Really? It seems to add up to me.
>>>
>>> 342G + 24G + (0.01 * 370)G = 369.7G  ~  370G
>>>
>>> On 29 April 2013 09:49, Dan Attwood <danattwood at gmail.com> wrote:
>>>> ok just tested that on a dev server and that worked, so I've pushed
>>>> it to live.
>>>>
>>>> That's given me some more breathing room. However I don't think
>>>> this is the root cause of the problem, as a df -h still shows:
>>>>
>>>>
>>>> Filesystem      Size  Used Avail Use% Mounted on
>>>> /dev/sdb1       370G  342G   24G  94% /var
>>>>
>>>> so there are still 30-odd gigs being gobbled up somewhere
>>>>
>>>>
>>>>
>>>>
>>>> On 29 April 2013 09:45, alan <alan at hipnosi.org> wrote:
>>>>> Note that you replace /dev/hdXY with /dev/sdb1 (or whatever your
>>>>> partition is called)
>>>>>
>>>>> Just tested lowering it (on a non-production server) and got
>>>>> another 100GB available on /data straight away,
>>>>> so seems OK to do it live.
>>>>>
>>>>> root@otp:~# df -h /dev/sdb1
>>>>> Filesystem      Size  Used Avail Use% Mounted on
>>>>> /dev/sdb1       1.8T   52G  1.7T   3% /data
>>>>>
>>>>> root@otp:~# tune2fs -m 1 /dev/sdb1
>>>>> tune2fs 1.42 (29-Nov-2011)
>>>>> Setting reserved blocks percentage to 1% (4883778 blocks)
>>>>>
>>>>> root@otp:~# df -h /dev/sdb1
>>>>> Filesystem      Size  Used Avail Use% Mounted on
>>>>> /dev/sdb1       1.8T   52G  1.8T   3% /data
>>>>> root@otp:~#
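>>>>>
>>>>> (you can check the setting before and after with:
>>>>> tune2fs -l /dev/sdb1 | grep 'Reserved block count')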
>>>>>
>>>>>
>>>>> On 29/04/13 09:32, Dan Attwood wrote:
>>>>>
>>>>>> Have you tried running df --sync
>>>>> didn't know about that. But I've run it and it makes no difference
>>>>>
>>>>>> ext filesystems reserve 5% of the available space
>>>>> The link talks about ext3 - the drive is ext4; does that make a
>>>>> difference?
>>>>> Also, if I were to run the tune2fs -c 0 -i 1m /dev/hdXY command, is
>>>>> that something that then happens instantly, or will it cause
>>>>> downtime?
>>>>>
>>>>>
>>>>>
>>>>> On 29 April 2013 09:26, Alan <alan at hipnosi.org> wrote:
>>>>>> ext filesystems reserve 5% of the available space
>>>>>> reasons and solution explained here:
>>>>>>
>>>>>> https://wiki.archlinux.org/index.php/Ext3#Reclaim_Reserved_Filesystem_Space
>>>>>> I hope I have not misunderstood, with relevance to VMs...
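>>>>>>
>>>>>> (5% of your 370G is ~18.5G, which is just about the gap: 348G
>>>>>> used + 3G avail + ~18.5G reserved comes to ~370G)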
>>>>>>
>>>>>> On Mon, 29 Apr 2013 09:13:58 +0100
>>>>>> Dan Attwood <danattwood at gmail.com> wrote:
>>>>>>
>>>>>>> hi all, hopefully someone can point me to a good solution to
>>>>>>> this.
>>>>>>>
>>>>>>> I have a VM server running on VMware. Recently it started to run
>>>>>>> out of space on its /var disk - which is a thin-provisioned disk.
>>>>>>> We gave it some more space and I rebooted the server into GParted
>>>>>>> and expanded the disks into the new free space.
>>>>>>>
>>>>>>> Today I've come in to find that the /var disk had run out of
>>>>>>> space completely. I did a df -h and can see the following:
>>>>>>>
>>>>>>> Filesystem      Size  Used Avail Use% Mounted on
>>>>>>> /dev/sdb1       370G  348G  3.0G 100% /var
>>>>>>>
>>>>>>>
>>>>>>> so the 370 gig disk has only used 348 gigs and yet is 100% full.
>>>>>>> My immediate thought was that I had run out of inodes, however:
>>>>>>>
>>>>>>> Filesystem        Inodes  IUsed     IFree IUse% Mounted on
>>>>>>> /dev/sdb1       24576000 430482  24145518    2% /var
>>>>>>>
>>>>>>> so I have loads of them free.
>>>>>>>
>>>>>>> I also rebooted the server into GParted and double-checked
>>>>>>> the disk partition, and ran a disk check from there - this
>>>>>>> flagged up no errors.
>>>>>>>
>>>>>>> I've now gone through and deleted some stuff to give me some
>>>>>>> breathing room, but I really need that space back.
>>>>>>>
>>>>>>> Does anyone have any suggestions please?
>>>>>>>
>>>>>>> Dan
>>>>>>



