<div dir="ltr">Ahh yes, maths was never a strong point of mine. <div><br></div><div>So it looks like the disks are good now. Time to look more into what suddenly ate up the space.</div></div><div class="gmail_extra"><br>
<br><div class="gmail_quote">On 29 April 2013 10:11, Matthew Tompsett <span dir="ltr"><<a href="mailto:matthewbpt@gmail.com" target="_blank">matthewbpt@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Really? It seems to add up to me.<br>
<br>
342G + 24G + (0.01 * 370)G = 369.7G ~ 370G<br>
<div class="HOEnZb"><div class="h5"><br>
On 29 April 2013 09:49, Dan Attwood <<a href="mailto:danattwood@gmail.com">danattwood@gmail.com</a>> wrote:<br>
> OK, just tested that on a dev server and it worked, so I've pushed it to live.<br>
><br>
> That's given me some more breathing room. However, I don't think this is the<br>
> root cause of the problem, as df -h still shows:<br>
><br>
><br>
> Filesystem Size Used Avail Use% Mounted on<br>
> /dev/sdb1 370G 342G 24G 94% /var<br>
><br>
> so there are still 30-odd gigs being gobbled up somewhere.<br>
><br>
><br>
><br>
><br>
> On 29 April 2013 09:45, alan <<a href="mailto:alan@hipnosi.org">alan@hipnosi.org</a>> wrote:<br>
>><br>
>> Note that you should replace /dev/hdXY with /dev/sdb1 (or whatever your<br>
>> partition is called).<br>
>><br>
>> Just tested lowering it (on a non-production server) and got another 100GB<br>
>> available on /data straight away,<br>
>> so it seems OK to do it live.<br>
>><br>
>> root@otp:~# df -h /dev/sdb1<br>
>> Filesystem Size Used Avail Use% Mounted on<br>
>> /dev/sdb1 1.8T 52G 1.7T 3% /data<br>
>><br>
>> root@otp:~# tune2fs -m 1 /dev/sdb1<br>
>> tune2fs 1.42 (29-Nov-2011)<br>
>> Setting reserved blocks percentage to 1% (4883778 blocks)<br>
>><br>
>> root@otp:~# df -h /dev/sdb1<br>
>> Filesystem Size Used Avail Use% Mounted on<br>
>> /dev/sdb1 1.8T 52G 1.8T 3% /data<br>
>> root@otp:~#<br>
>><br>
>><br>
>> On 29/04/13 09:32, Dan Attwood wrote:<br>
>><br>
>> > Have you tried running df --sync<br>
>><br>
>> Didn't know that one, but I've run it and it makes no difference.<br>
>><br>
>> > ext filesystems reserve 5% of the available space<br>
>><br>
>> The link talks about ext3 - the drive is ext4; does that make a<br>
>> difference?<br>
>> Also, if I were to run the tune2fs -c 0 -i 1m /dev/hdXY command, is that<br>
>> something that happens instantly, or will it cause downtime?<br>
>><br>
>><br>
>><br>
>> On 29 April 2013 09:26, Alan <<a href="mailto:alan@hipnosi.org">alan@hipnosi.org</a>> wrote:<br>
>>><br>
>>> ext filesystems reserve 5% of the available space<br>
>>> reasons and solution explained here:<br>
>>><br>
>>> <a href="https://wiki.archlinux.org/index.php/Ext3#Reclaim_Reserved_Filesystem_Space" target="_blank">https://wiki.archlinux.org/index.php/Ext3#Reclaim_Reserved_Filesystem_Space</a><br>
>>><br>
>>> I hope I have not misunderstood, with relevance to VMs...<br>
>>><br>
>>> On Mon, 29 Apr 2013 09:13:58 +0100<br>
>>> Dan Attwood <<a href="mailto:danattwood@gmail.com">danattwood@gmail.com</a>> wrote:<br>
>>><br>
>>> > Hi all, hopefully someone can point me to a good solution to this.<br>
>>> ><br>
>>> > I have a VM server running on VMware. Recently it started to run out of<br>
>>> > space on its /var disk - which is a thin-provisioned disk. We gave it<br>
>>> > some more space, and I rebooted the server into gparted and expanded the<br>
>>> > disk into the new free space.<br>
>>> ><br>
>>> > Today I've come in to find that the /var disk had run out of space<br>
>>> > completely. I did a df -h and can see the following:<br>
>>> ><br>
>>> > Filesystem Size Used Avail Use% Mounted on<br>
>>> > /dev/sdb1 370G 348G 3.0G 100% /var<br>
>>> ><br>
>>> ><br>
>>> > so the 370 gig disk has only used 348 gigs and yet is 100% full.<br>
>>> ><br>
>>> > My immediate thought was that I had run out of inodes; however:<br>
>>> ><br>
>>> > Filesystem Inodes IUsed IFree IUse% Mounted on<br>
>>> > /dev/sdb1 24576000 430482 24145518 2% /var<br>
>>> ><br>
>>> > so I have loads of them free.<br>
>>> ><br>
>>> > I also rebooted the server into gparted and double-checked the disk<br>
>>> > partition, and ran a disk check from there - this flagged up no<br>
>>> > errors.<br>
>>> ><br>
>>> > I've now gone through and deleted some stuff to give me some breathing<br>
>>> > room,<br>
>>> > but I really need that space back.<br>
>>> ><br>
>>> > Does anyone have any suggestions, please?<br>
>>> ><br>
>>> > Dan<br>
>>><br>
>>><br>
>>> _______________________________________________<br>
>>> Kent mailing list<br>
>>> <a href="mailto:Kent@mailman.lug.org.uk">Kent@mailman.lug.org.uk</a><br>
>>> <a href="https://mailman.lug.org.uk/mailman/listinfo/kent" target="_blank">https://mailman.lug.org.uk/mailman/listinfo/kent</a><br>
>><br>
>><br>
>><br>
>><br>
><br>
><br>
><br>
<br>
</div></div></blockquote></div><br></div>