[Gllug] Slow over 2meg Satellite

Andy Farnsworth farnsaw at stonedoor.com
Tue Sep 16 13:22:16 UTC 2003


The issue with satellite links for internet access is the extremely
long ping times.  If you are using geosynchronous satellites, they
orbit at about 22,300 miles.  A ping must cross that distance four
times: 22,300 up + 22,300 down for the request, then the same again
for the reply, about 89,200 miles in all.  At the speed of light
(~186,000 miles per second) that is just under half a second, roughly
480 ms; add to that the normal latency of the net, and seeing 700 ms
times is pretty normal.  Several of these services offer a
Windows-only driver for the satellite interface that sends packets
without waiting for the ACK to return.  This speeds up throughput
immensely, unless you have a very lossy period, in which case nothing
will speed you up.  As an item of note, they used to do this in the
hardware / firmware of the satellite transceiver, but they moved it to
the driver (i.e. software) so they could dumb down the hardware and
make it cheaper.
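
To put rough numbers on why throughput stalls even when the pipe looks
idle: a sender that waits for ACKs can never move more than one window
of data per round trip, so its ceiling is window size divided by RTT,
and the bandwidth-delay product tells you how big the window must be
to fill the link.  A back-of-the-envelope check in plain shell (the
2 Mbit/s and 700 ms figures are taken from the original post; the
22,300-mile altitude is the usual geostationary figure):

	#!/bin/sh
	# Propagation delay: a ping crosses ~22,300 miles four times,
	# at the speed of light (~186,000 miles per second).
	awk 'BEGIN { printf "ping RTT: ~%.0f ms\n", 4 * 22300 / 186000 * 1000 }'

	# Bandwidth-delay product: bytes in flight needed to fill a
	# 2 Mbit/s link at 700 ms RTT, vs. the ceiling of a 64 KB window.
	awk 'BEGIN {
	    bdp = (2 * 1000 * 1000 / 8) * 0.7
	    printf "bandwidth-delay product: ~%.0f KB\n", bdp / 1024
	    printf "64 KB window ceiling: ~%.0f KB/s\n", 65536 / 0.7 / 1024
	}'

That prints roughly 480 ms, 171 KB and 91 KB/s: a plain 64 KB TCP
window cannot fill a 2 Mbit pipe at this RTT, which is exactly the
limit those drivers (and window scaling, below) work around.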

Andy Farnsworth


> -----Original Message-----
> From: gllug-bounces at linux.co.uk [mailto:gllug-bounces at linux.co.uk]On
> Behalf Of andy at mac1systems.com
> Sent: September 15 2003 18:27
> To: gllug at linux.co.uk
> Subject: [Gllug] Slow over 2meg Satellite
>
>
> Hi,
>
> One of my systems is acting as the main server (NATing and SQUIDing) in
> Africa.  It's connected to the UK via a satellite link.
>
> Has anyone experience of using satellite links and how to get a decent
> throughput?  It seems it's a common problem: reasonably fast pipes
> with a large delay (700 ms in this case).
>
> I've googled for days now and tried various things (mainly those discussed
> at http://www.psc.edu/networking/perf_tune.html which most other articles
> seem to reference).
>
> I've increased the buffers and turned on SACK, timestamps and window
> scaling using the following:
>
> 	echo 1 > /proc/sys/net/ipv4/tcp_timestamps
> 	echo 1 > /proc/sys/net/ipv4/tcp_window_scaling
> 	echo 1 > /proc/sys/net/ipv4/tcp_sack
>
> 	echo 8388608 > /proc/sys/net/core/wmem_max
> 	echo 8388608 > /proc/sys/net/core/rmem_max
> 	echo "4096 87380 4194304" > /proc/sys/net/ipv4/tcp_rmem
> 	echo "4096 65536 4194304" > /proc/sys/net/ipv4/tcp_wmem
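>
> (If these help, a sketch for making them stick across reboots,
> assuming the box runs procps' sysctl and reads /etc/sysctl.conf at
> boot, would be the equivalent entries:
>
> 	# /etc/sysctl.conf -- applied at boot, or by hand via "sysctl -p"
> 	net.ipv4.tcp_timestamps = 1
> 	net.ipv4.tcp_window_scaling = 1
> 	net.ipv4.tcp_sack = 1
> 	net.core.wmem_max = 8388608
> 	net.core.rmem_max = 8388608
> 	net.ipv4.tcp_rmem = 4096 87380 4194304
> 	net.ipv4.tcp_wmem = 4096 65536 4194304
>
> which do the same thing as the echo lines above.)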
>
> But it doesn't seem to have had any effect.  Tests from a PC
> connected to the server, and FTP run directly from the server itself,
> show throughput of only about 20 KB/s at best (and often a lot less),
> even though only about 30% of the pipe is being used.
>
> If I go to the test site mentioned on the psc.edu site
>
> telnet syntest.psc.edu 7960
>
> I get
> =======================================================
> ! Variable        : Val       : Warning (if any)
> !=======================================================
> SACKEnabled       : 3         :
> TimestampsEnabled : 1         :
> CurMSS            : 1448      :
> WinScaleRcvd      : 0         : WARN - 0 WinScale Received
> CurRwinRcvd       : 5840      :
> !
> ! End of SYN options
> !
> This shows window scaling is not being negotiated.  But
> 	cat /proc/sys/net/ipv4/tcp_window_scaling
> returns 1.
>
> Is there anything else I need to do to get window scaling working?
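>
> (A way to double-check what's actually on the wire, assuming tcpdump
> is installed: window scaling is only negotiated in the SYN exchange,
> so capturing the SYNs shows whether the option is being sent at all:
>
> 	# print SYN packets with their TCP options; look for "wscale"
> 	tcpdump -n -v 'tcp[13] & 2 != 0'
>
> If no wscale option shows up in the outgoing SYN, or the far end
> never echoes one back, the connection falls back to an unscaled
> 16-bit window regardless of the sysctl setting.)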
>
> Thanks
>
> Andy


-- 
Gllug mailing list  -  Gllug at linux.co.uk
http://list.ftech.net/mailman/listinfo/gllug



