[Gllug] IDE issues

Christian Smith csmith at micromuse.com
Thu Apr 18 16:44:49 UTC 2002


On 16 Apr 2002, Stephen Harker wrote:

>[root at abe steve]# hdparm -d0 -c0 -m0 /dev/hda
>
>/dev/hda:
> setting 32-bit I/O support flag to 0
> setting multcount to 0
> setting using_dma to 0 (off)
> HDIO_SET_DMA failed: Operation not permitted
> multcount    =  0 (off)
> I/O support  =  0 (default 16-bit)
> using_dma    =  0 (off)
>[root at abe steve]# hdparm -Tt /dev/hda
>
>/dev/hda:
> Timing buffer-cache reads:   128 MB in  0.46 seconds =278.26 MB/sec
> Timing buffered disk reads:  64 MB in 20.68 seconds =  3.09 MB/sec
>[root at abe steve]# hdparm -d1 -c1 -m16 /dev/hda
>
>/dev/hda:
> setting 32-bit I/O support flag to 1
> setting multcount to 16
> setting using_dma to 1 (on)
> HDIO_SET_DMA failed: Operation not permitted
> multcount    = 16 (on)
> I/O support  =  1 (32-bit)
> using_dma    =  0 (off)
>[root at abe steve]# hdparm -Tt /dev/hda
>
>/dev/hda:
> Timing buffer-cache reads:   128 MB in  0.48 seconds =266.67 MB/sec
> Timing buffered disk reads:  64 MB in 10.47 seconds =  6.11 MB/sec
>[root at abe steve]#
>
>For some reason, I can't enable DMA mode at all :-/
>And the buffered disk read rate is terrible. Admittedly it's better
>than it was, but it should still be a lot better. Maybe the kernel
>doesn't recognise the ATA133 chipset and so can't use the DMA part. It
>works fine under *cough* XP :-/

On the plus side, you've got a nice fast buffer-cache interface :) (DDR?)

My guess is that the kernel lacks support for the chipset. Which chipset is it?
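
If you're not sure, something like this should tell you (a rough sketch;
the exact dmesg wording varies from kernel to kernel):

  # identify the IDE controller on the PCI bus
  lspci | grep -i ide
  # see whether the kernel's IDE driver claimed it and set up (U)DMA
  dmesg | grep -i -e ide -e hda

If the controller only shows up via the generic IDE driver, the kernel
probably has no specific support for it, which would explain the
HDIO_SET_DMA failure above.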

>
>Do I put this in rc.local or something to reactivate at boot or what?

On RH 7, check out /etc/sysconfig/harddisks, or /etc/sysconfig/harddiskhda
to tune hda on its own. These are read by /etc/rc.sysinit at boot.
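
From memory it looks something like this (treat the variable names as a
sketch rather than gospel; the stock file ships with comments documenting
them):

  # /etc/sysconfig/harddisks -- applied by /etc/rc.sysinit
  USE_DMA=1        # roughly hdparm -d1
  MULTIPLE_IO=16   # roughly hdparm -m16
  EIDE_32BIT=1     # roughly hdparm -c1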

Not sure about other distros.
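
Failing that, dropping the hdparm line you already ran into rc.local (or
whatever your distro runs late in the boot sequence) will do the job:

  # re-apply disk settings at boot; -d1 will keep failing until the
  # kernel actually supports the chipset's DMA
  /sbin/hdparm -d1 -c1 -m16 /dev/hda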

>
>Steve

-- 
    /"\
    \ /    ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL 
     X                           - AGAINST MS ATTACHMENTS
    / \



-- 
Gllug mailing list  -  Gllug at linux.co.uk
http://list.ftech.net/mailman/listinfo/gllug



