[Nottingham] DNS question for specifying redundant servers
Graeme Fowler
graeme at graemef.net
Thu Apr 3 22:22:52 BST 2008
On Thu, 2008-04-03 at 21:35 +0100, Martin wrote:
> Answers for mail servers and web servers please...
For mail servers - MX servers at least, so inbound mail - it's built in
to the record definition. You have:
$ORIGIN domain.com.
@	IN MX 10 mail.domain.com.
@	IN MX 20 mail2.domain.com.
and so on, plus the associated A records for those hosts. If
mail.domain.com is unreachable, sending SMTP servers will try
mail2.domain.com and so on, in ascending order of preference value (the
lowest number is tried first). If they're all unreachable, the protocol
has retries built in until either one becomes reachable or the retry
timeout expires, in which case the punter gets a bounce message.
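The ordering logic above can be sketched in a few lines of Python (this is
not a real MTA, just an illustration; the hostnames mirror the zone snippet
and are made up):

```python
# How a sending SMTP server orders MX records: lower preference
# value = more preferred, tried first.
mx_records = [("mail2.domain.com", 20), ("mail.domain.com", 10)]

def delivery_order(records):
    """Return MX hosts sorted most-preferred first (lowest number wins)."""
    return [host for host, pref in sorted(records, key=lambda r: r[1])]

# A sender walks this list, falling back to the next host on failure.
print(delivery_order(mx_records))
# -> ['mail.domain.com', 'mail2.domain.com']
```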
For web servers it is *much* more difficult. Given that you may have web
servers in (say) three different locations, on three different networks,
with three different IP addresses, it's difficult to do proper IP
failover between them.
The "poor man's solution" is to define multiple A records for your web
servers with a short TTL (say 300 seconds). In theory, if a given server
fails then within about 150 seconds (you'll have to look this up, but
resolvers generally re-query somewhere between TTL/2 and the full TTL)
there's a good chance - with one of N records dead, roughly (N-1)/N -
that a given client will pick a different A record and connect to a
working host.
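A quick sketch of that "poor man's" round robin (the addresses are from
the documentation ranges and purely illustrative):

```python
import random

# The zone publishes several A records; each client picks one.
a_records = ["192.0.2.1", "192.0.2.2", "192.0.2.3"]
dead = {"192.0.2.1"}  # suppose this server has just failed

def fresh_lookup():
    """Simulate a client whose cached record has expired re-resolving."""
    return random.choice(a_records)

# With one of N records pointing at a dead host, a fresh lookup lands
# on a live server with probability (N-1)/N -- here 2 in 3.
live_odds = sum(1 for r in a_records if r not in dead) / len(a_records)
```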
Unfortunately, this is blown into a cocked hat by the many ISPs whose
resolvers don't honour low TTLs, instead imposing a minimum of their own
choosing - often a day - to reduce the load on their nameservers. This
can mean that once a client picks an A record for a host which has since
failed, it'll stick with that record for 12 hours or more and be unable
to reach you.
However, there's an easy workaround which minimises the effect of this
type of failure... if you have webservers in three different locations,
hey! Why not have DNS servers there too!
And here's the tricky bit: if you request the A record for
www.domain.com from ns1.domain.com, you get the IP associated with that
location. If you request it from ns2.domain.com, you get the one
associated there - and so on. If any of the webservers go down, you can
rejig your DNS accordingly to feed clients to the other locations.
Bumping the SOA serial number makes sure your own secondaries pick up
the change quickly; resolver caches still only refresh when the TTL
expires.
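The per-location answer trick can be sketched like this (all names and
addresses are invented for illustration; a real deployment would do this
with per-server zone files, not Python):

```python
# Each nameserver hands out the A record for its own site; on failure
# the operator "rejigs" the zones to point at a surviving site.
zone_views = {
    "ns1.domain.com": "192.0.2.10",    # site 1's webserver
    "ns2.domain.com": "198.51.100.10", # site 2's webserver
    "ns3.domain.com": "203.0.113.10",  # site 3's webserver
}

def answer(nameserver, failed=frozenset()):
    """Return the A record this nameserver would serve for www.domain.com."""
    ip = zone_views[nameserver]
    if ip in failed:
        # this zone has been repointed at a surviving location
        ip = next(v for v in zone_views.values() if v not in failed)
    return ip
```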
This, however, does not help clients who already have a record in their
resolver cache (or that of a recalcitrant ISP) which refuses to change,
and that's where clever people like Akamai make $$$.
Akamai use a mix of technologies which involves announcing the same
networks, under their AS numbers, from multiple locations which are not
directly linked; if one becomes unreachable then hey - this guy over
here still is. There are several content providers using Akamai for
"global load
balancing" or "geographic load balancing". Their technology at that
level is very similar to that used for some of the root name servers,
namely BGP "anycast" - making a single address appear at several
locations simultaneously. It isn't quite that simple, but that's the net
effect.
So to summarise, there are simple and cheap ways which work (sort of),
and expensive and complex ways which work (almost the whole time); there
isn't one simple panacea available. Which is a shame - as we'd all use
it if there was.
Graeme