<br><br>
<div><span class="gmail_quote">On 1/31/07, <b class="gmail_sendername">Brendan Whelan</b> <<a href="mailto:b_whelan@mistral.co.uk">b_whelan@mistral.co.uk</a>> wrote:</span>
<blockquote class="gmail_quote" style="PADDING-LEFT: 1ex; MARGIN: 0px 0px 0px 0.8ex; BORDER-LEFT: #ccc 1px solid">Vic,<br><br>Yes, I did delete the files as each was over 10GB and I needed to get the<br>system back in action. However, I then trapped the first entries in the
<br>newly created log file and fixed the cause of the excess messaging.</blockquote>
<div> </div>
<div>There's a booby trap there: deleting a log file a running process still has open doesn't actually free the disk space until that process closes the file or is restarted (this has to do with how the fopen and unlink functions in C interact, look them up if you're curious, but it's not an uncommon problem in dealing with mongo large log files). It's often good to have a log rotation tool in place, and for server class machines a bit of monitoring of over-filled file systems. (That sort of thing is what Nagios and SNMP and MRTG were designed for!)
</div>
<div> </div>
<div>The daily cron jobs that are part of most Linux distributions are very good for this, and can easily be modified to support new types of log files. I consider log rotation a vital part of building the full software package for any daemon I manage, and it can be helpful to get a log analysis utility like LogWatcher configured to look for whackiness like you describe in the log files, so that you get a nightly report of "we saw 10,000,000 entries just like this".
</div>
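<div>For the rotation side, a minimal sketch of a logrotate drop-in (the daemon name, log path, and pidfile here are hypothetical, substitute your own):</div>

```
# /etc/logrotate.d/mydaemon  -- hypothetical names, adjust to your setup
/var/log/mydaemon.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    postrotate
        # Tell the daemon to reopen its log, so the renamed old file
        # is actually released and its space can be reclaimed.
        /bin/kill -HUP `cat /var/run/mydaemon.pid 2>/dev/null` 2>/dev/null || true
    endscript
}
```

<div>The postrotate step matters for exactly the reason above: without a signal to reopen the log, the daemon keeps writing to the rotated (or deleted) file and the space never comes back.</div>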
<div> </div>
<div>What was it, anyway?</div><br> </div>