[sclug] Dell desktop for 175GBP inc. delivered - offer closes 2/11/2005

John Stumbles john at stumbles.org.uk
Thu Oct 27 16:25:39 UTC 2005


Alex Butcher wrote:
> On Thu, 27 Oct 2005, John Stumbles wrote:
> 
>> BUT.... I think I need more RAM. `top` shows:
>> Mem:    256792k total,   254736k used,     2056k free,      944k buffers
>> Swap:   979956k total,   314992k used,   664964k free,    43416k cached
>> suggesting that even with 512M I'd still be swapping (and I haven't 
>> even got OOo open, let alone multiple users' GUIs running).
> 
> 
> Quite possibly, but it could just be stuff that gets swapped out shortly
> after being started, and never/rarely gets woken and brought back into RAM
> (e.g. cups, apache). Note that you have ~43MB of disc cache, which suggests
> that this is already the case.

I was quietly ignoring the right-hand columns as I haven't a clue what
those figures mean :-/
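(If I've understood it right, buffers and cached are page cache the kernel can reclaim under pressure, so they effectively count towards free memory. A quick sanity check using the numbers from the top output quoted above:)

```shell
# Rough arithmetic from the top line above (all figures in kB).
# "used" looks alarming, but buffers + cached are reclaimable, so the
# memory actually available for applications is nearer free+buffers+cached.
free_kb=2056; buffers=944; cached=43416
echo "effectively available: $(( free_kb + buffers + cached )) kB"
```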

> Probably best to have a look at the output of 'vmstat 1', particularly the
> 'si' and 'so' columns to determine whether you're actively swapping.

Not sure exactly what I'm looking for here, but running `vmstat 1` while
bringing firefox to the foreground and then minimising it again, I get this:

>  0  1 396368   2092   1104  47976   68  288   412   368 1467  2380 16  2  0 82
>  0  2 396588   2384   1116  47776  420  536   668   592 1173  1855 22  5  0 73
>  0  4 396768   2060   1076  47972  896  540  1260   540 1307  2441 10  4  0 86
>  0  2 397124   2072   1016  48252  660 1016  1852  1032 1215  2095 22  6  0 72
>  0  2 397188   2060    976  49028  248  196  1248   196 1279  2110  4  2  0 94
>  0  2 397320   2172    876  49208  808  320  2232   320 1216  1971 18  5  0 77
>  0  2 397284   2284    768  48804 1220    0  1620     0 1279  2434 11  3  0 86
>  1  1 397284   2060    700  48288 1576    0  1900    68 1187  2080 30  4  0 66
>  0  2 397700   2076    696  48156  896  592  1008   640 1338  2228  7  4  0 89
>  0  1 397700   2076    680  48156  648    0  1144     0 1455  2112 27  6  0 67
> procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
>  r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
>  3  1 397772   2164    648  48080  668   76  1016    92 1290  2429 10  2  0 88
>  2  1 397828   2132    640  47400  936  896  1124   896 1239  2584 49  8  0 43
>  1  2 397916   3480    640  46888 1460  472  1460   472 1406  2233 14  3  0 83
>  0  2 397916   2228    644  46368 2676   56  2676   104 1526  1868 20  7  0 73
>  0  1 397836   2156    604  45444 1764   40  1784    68 1588  2384 23  8  0 69
>  0  1 397836   2160    604  45468    0    0    24     0 1202  2182 28  4  0 68
>  0  1 397836   2176    604  45492   24    0    48     0 1155  2353  9  3  0 88
>  0  1 397836   2064    604  45604    0    0   112     0 1266  2488 28  6  0 66
>  0  1 397836   2144    604  45600   32   12    32    24 1211  2198 11  2  0 87

The si and so columns were a lot higher than when the system is just
pootling along.
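(In the default vmstat layout those are columns 7 and 8, which makes them easy to pull out with awk; here's the same filter run over one of the lines above, just to show which fields it grabs:)

```shell
# si/so are columns 7 and 8 in default vmstat output (kB/s swapped
# in from / out to disc). Sustained non-zero values mean the box is
# actively thrashing, not just holding stale pages in swap.
line=" 0  2 396588   2384   1116  47776  420  536   668   592 1173  1855 22  5  0 73"
echo "$line" | awk '{ print "si=" $7 " kB/s  so=" $8 " kB/s" }'
# To watch it live, skipping the two header lines:
#   vmstat 1 | awk 'NR>2 { print "si="$7, "so="$8 }'
```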


> More likely, you're heavily IO-bound. Are you using SCSI or ATA discs? If
> ATA, have you enabled DMA (use hdparm to check)?

# hdparm /dev/hda

/dev/hda:
  multcount    =  0 (off)
  IO_support   =  1 (32-bit)
  unmaskirq    =  1 (on)
  using_dma    =  1 (on)
  keepsettings =  0 (off)
  readonly     =  0 (off)
  readahead    = 256 (on)
  geometry     = 16383/255/63, sectors = 120034123776, start = 0
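(So DMA is already on, but multcount is 0, which might be worth a try. A sketch of what I gather one could experiment with -- the -m16 value is a guess, and the right figure depends on what the drive reports:)

```shell
# Hedged tuning sketch: using_dma is already 1 above, but multcount=0.
# Check what the drive supports before changing anything:
hdparm -i /dev/hda        # look for MaxMultSect=... in the output
# Enable multiple-sector I/O (16 is a common value, not universal):
hdparm -m16 /dev/hda
# Benchmark cached and buffered reads before and after:
hdparm -tT /dev/hda
```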


> If you're using a 2.6
> kernel, have you tried using a different IO scheduler; I'm not convinced
> that the default - cfq - is good for anything but busy server loads. I'd
> suggest trying the 'deadline' and 'as' schedulers, at least.
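(For the record, on 2.6 kernels that support runtime elevator switching the scheduler is exposed per-device under sysfs; the currently active one is shown in brackets. A sketch, assuming the disc is hda:)

```shell
# Show available schedulers; the active one is bracketed,
# e.g. "noop anticipatory deadline [cfq]":
cat /sys/block/hda/queue/scheduler
# Switch to deadline at runtime (as root):
echo deadline > /sys/block/hda/queue/scheduler
# Older 2.6 kernels only take it at boot, via the kernel
# command line: elevator=deadline
```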

---8<---

Before getting too deeply into tuning, are there issues that could
result in order-of-magnitude improvements on my existing system? It's
currently taking ~5s to bring a minimised application to the
foreground, whereas I'd expect it to be nearer 0.5s (is that
reasonable?)
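(Back-of-envelope, with assumed figures: the vmstat trace above shows swap-in bursts of roughly 1-2 MB/s, so if raising firefox has to fault in, say, 8 MB of its working set from swap, the redraw time is bounded below by something like:)

```shell
# Assumed figures: ~8 MB faulted in at ~1.5 MB/s swap-in rate
# (the si column above peaks around 1000-2600 kB/s).
awk 'BEGIN { printf "%.1f s\n", 8 / 1.5 }'
```

which is about the 5s I'm seeing, so the delay may simply be swap-in throughput rather than anything tunable elsewhere.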

-- 
John Stumbles                                      mobile 0780 866 8204
plumbing:heating:electrical:property maintenance     home 0118 954 2406
