[Gllug] top-like command for disk io ?

Nix nix at esperi.org.uk
Wed Mar 14 20:50:14 UTC 2007


On 14 Mar 2007, tethys at gmail.com stated:

> On 3/14/07, salsaman <salsaman at xs4all.nl> wrote:
>
>> I looked at the dtrace link, and it is only for Linux apps running on
>> Solaris.
>>
>> What I need is a simple program that, when I see the disk light going, I
>> can check which process(es) are causing the disk activity.
>
> Like I said, SystemTap.

Nah, there's already a specialized tracer for this.

Turn on CONFIG_BLK_DEV_IO_TRACE in your kernel config, and build and
install blktrace from
git://git.kernel.org/pub/scm/linux/kernel/git/axboe/blktrace.git
(it may be available in tarballs and RPMs and things as well ---
in fact it almost certainly is --- but this is how I got it.)
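As a rough sketch, checking the kernel option and building from that tree might look like the following (paths and the install step are assumptions about a typical setup; the clone is commented out so nothing is actually fetched):

```shell
# Hypothetical check-and-build steps; adjust paths for your distribution.
repo=git://git.kernel.org/pub/scm/linux/kernel/git/axboe/blktrace.git

# Is block-layer tracing enabled in the running kernel's config?
grep CONFIG_BLK_DEV_IO_TRACE "/boot/config-$(uname -r)" 2>/dev/null || true

# Fetch and build (requires git and a C toolchain):
# git clone "$repo"
# cd blktrace && make && sudo make install
```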

There are several different tools, some of which allow offline
collection and separate analysis of traces. `btrace' is an online live
tool; `blktrace' does the low-level collection; `btt' does
post-processing of collected traces and provides overall statistics.
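The offline collect-then-analyse workflow might look something like this (the device name, output basename, and 10-second window are illustrative, and the whole thing is guarded so it degrades to a message on machines without blktrace or root access):

```shell
# Illustrative offline trace-and-analyse workflow (device and names assumed).
dev=/dev/sda
out=mytrace
if command -v blktrace >/dev/null 2>&1 && [ -b "$dev" ] && [ "$(id -u)" -eq 0 ]; then
    blktrace -d "$dev" -o "$out" -w 10   # collect events for 10 seconds
    blkparse -i "$out" -d "$out.bin"     # render text, plus a binary dump for btt
    btt -i "$out.bin"                    # overall statistics from the dump
else
    echo "blktrace/root/$dev not available here; commands shown for illustration"
fi
```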

There's LaTeX documentation.

An example: say I want to know about disk activity on my RAID array:

# btrace /dev/md1
[...]
936,600256  0     6585     6.734440762   930  A   R 6511642 + 2 <- (253,5) 76127642
  9,1    0     6586     6.734441497   930  Q   R 76127642 + 2 [nfsd]
936,600256  0     6587     6.734456048   930  A   R 6511646 + 2 <- (253,5) 76127646
  9,1    0     6588     6.734456707   930  Q   R 76127646 + 2 [nfsd]
936,600256  0     6589     6.734473413   930  A   R 6511648 + 2 <- (253,5) 76127648
  9,1    0     6590     6.734474491   930  Q   R 76127648 + 2 [nfsd]
936,600256  0     6591     6.734494275   930  A   R 6511682 + 8 <- (253,5) 76127682
  9,1    0     6592     6.734495406   930  Q   R 76127682 + 8 [nfsd]
936,600256  0     6593     6.734513245   930  A   R 6511690 + 8 <- (253,5) 76127690
  9,1    0     6594     6.734513759   930  Q   R 76127690 + 8 [nfsd]
936,600256  0     6595     6.734524649   930  A   R 6511698 + 8 <- (253,5) 76127698
  9,1    0     6596     6.734525319   930  Q   R 76127698 + 8 [nfsd]
936,600256  0     6597     6.734612042   930  A   R 6511706 + 8 <- (253,5) 76127706
  9,1    0     6598     6.734612674   930  Q   R 76127706 + 8 [nfsd]
936,600256  0     6599     6.734623112   930  A   R 6511714 + 8 <- (253,5) 76127714
  9,1    0     6600     6.734623579   930  Q   R 76127714 + 8 [nfsd]
936,600256  0     6601     6.734630638   930  A   R 6511722 + 8 <- (253,5) 76127722
  9,1    0     6602     6.734631178   930  Q   R 76127722 + 8 [nfsd]
936,600256  0     6603     6.738865070   928  A   R 6453216 + 2 <- (253,5) 76069216
  9,1    0     6604     6.738866654   928  Q   R 76069216 + 2 [nfsd]
CPU0 (9,1):
 Reads Queued:       2,993,    6,977KiB  Writes Queued:         290,      314KiB
 Read Dispatches:        0,        0KiB  Write Dispatches:        0,        0KiB
 Reads Requeued:         0               Writes Requeued:         0
 Reads Completed:        0,        0KiB  Writes Completed:        0,        0KiB
 Read Merges:            0               Write Merges:            0
 Read depth:             0               Write depth:             0
 IO unplugs:            11               Timer unplugs:          11

Throughput (R/W): 0KiB/s / 0KiB/s
Events (9,1): 6,604 entries
Skips: 0 forward (0 -   0.0%)


Looks like a lot of NFS traffic, a big file read in fact, from device
253,5, which is my LVMed /home. :)

(The docs describe what the fields are for. The sequence numbers and
time are obvious; there are also block counts, event types and so on and
so forth.)
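To answer the original question (which processes are causing the disk
activity), the queue events can be tallied per process name. A rough
sketch, with field positions assumed from the sample output above and a
pasted-in excerpt standing in for live output:

```shell
# Sum sectors queued per process, keyed on the [nfsd]-style tag of Q events.
# Sample data is an excerpt from the btrace output above, not live output.
btrace_sample='  9,1    0     6586     6.734441497   930  Q   R 76127642 + 2 [nfsd]
  9,1    0     6588     6.734456707   930  Q   R 76127646 + 2 [nfsd]
  9,1    0     6592     6.734495406   930  Q   R 76127682 + 8 [nfsd]'
printf '%s\n' "$btrace_sample" |
  awk '$6 == "Q" { gsub(/[][]/, "", $NF); count[$NF] += $10 }
       END { for (p in count) printf "%s: %d sectors queued\n", p, count[p] }'
# prints: nfsd: 12 sectors queued
```

In real use you would pipe `btrace /dev/md1' (or blkparse output) into the
awk stage instead of a saved sample.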

-- 
`In the future, company names will be a 32-character hex string.'
  --- Bruce Schneier on the shortage of company names
Gllug mailing list  -  Gllug at gllug.org.uk
http://lists.gllug.org.uk/mailman/listinfo/gllug


More information about the GLLUG mailing list