[Gllug] Single IP, HTTP accelerator fronting multiple webservers

Richard Jones rich at annexia.org
Thu Feb 23 18:38:52 UTC 2006


On Thu, Feb 23, 2006 at 06:00:41PM +0000, Tethys wrote:
> Richard Jones writes:
> >TUX / "Red Hat Content Accelerator" is another possibility, but I'm
> >not so sure that I want to run kernel stuff (absolute performance is
> >not an issue for us).
> 
> So why do you need an HTTP accelerator if performance isn't a problem?
> If you're just returning static content, then the tiniest box you can
> buy will *easily* saturate any Internet bandwidth you can afford, so
> you don't need a cache. If you're returning dynamic content, then it's
> generally not cachable anyway, so once again you won't see any benefit
> from a cache.

OK, well that's a fair point.  The actual reason is to do with
isolating the web servers from each other.  We want to do this for
several reasons:

* Each webserver uses different, incompatible technology --
e.g. mod_caml doesn't work with Apache 2.0, but we want to begin
upgrading our other web services to Apache 2.0.

* Fault isolation -- if one server suddenly comes under attack from a
spammer/idiot running a script against it, we want the other servers
to keep going.

* Permissions -- we may want to give outside users access to a
particular webserver, but we don't want them to be able to interfere
with our other servers or have general accounts on all the virtual
machines.
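
For concreteness, here's a rough sketch of that arrangement in Squid
config terms -- this uses the 2.6-style accel/vhost directives (in
2.5 the httpd_accel_* directives play the same role), and the backend
IPs and ports are invented, not our real setup:

```
# Sketch only -- backend IPs/ports are placeholders.
# Squid listens on the single public IP in accelerator mode
# and picks a backend by the request's Host: header.
http_port 80 accel vhost

# One backend (virtual machine) per site
cache_peer 192.168.1.10 parent 8080 0 no-query originserver name=merjis
cache_peer 192.168.1.11 parent 8080 0 no-query originserver name=notepad

# Route each virtual host to its own backend
acl merjis_site dstdomain merjis.com
acl notepad_site dstdomain team-notepad.com
cache_peer_access merjis allow merjis_site
cache_peer_access notepad allow notepad_site
```

Each backend can then run whatever Apache version and modules it
likes, and a runaway script hammering one of them doesn't take the
others down.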

> The only scenario I can think of where it would help is if you're
> dynamically generating static content. In that case, why not just
> pregenerate static pages once a day from cron or something, and serve
> them up as regular static content, which would give you the same
> performance improvement

Much of our normal content is generated on the fly from a database,
and I can't see us changing that -- every page on
http://merjis.com, for example, is generated that way.  On the other
hand, fronting a dynamic site with a caching HTTP accelerator
achieves much the same effect as pregenerating static content,
because the accelerator caches the pages into files for you.  The
application doesn't need to worry about this, and there is a clearer
logical separation of layers.
I'm never happy about having webserver apps writing into files -
there's just too much scope for security problems, locking issues,
data corruption, lack of transactional support, etc.  Our webserver
apps only write to a PostgreSQL database.
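
To make that concrete: all the application has to do is emit the
right cache headers, and the accelerator handles storage.  A minimal
sketch (Python here for brevity, not our actual mod_caml code, and
cacheable_headers is a made-up helper):

```python
# Sketch only: marking a dynamically generated page as cacheable by a
# fronting accelerator, instead of the app writing pregenerated files.
from email.utils import formatdate
import time

def cacheable_headers(max_age):
    """Response headers telling an upstream cache (e.g. Squid) it may
    store this page and serve it for max_age seconds."""
    return {
        "Cache-Control": "public, max-age=%d" % max_age,
        "Expires": formatdate(time.time() + max_age, usegmt=True),
    }

# On a cache miss the page body still comes from the database; on a
# hit, the accelerator answers without touching the app at all.
headers = cacheable_headers(600)   # cache for 10 minutes
print(headers["Cache-Control"])    # -> public, max-age=600
```

The app stays read-only with respect to the filesystem; the cache's
files belong to the accelerator, which already handles the locking
and expiry.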

> without the need for an extra component to
> get in the way (and potentially fail)?

This is a worry, but I'm pretty happy with Squid - never seen it fall
over.  Has anyone had reliability problems with it?

Rich.

-- 
Richard Jones, CTO Merjis Ltd.
Merjis - web marketing and technology - http://merjis.com
Team Notepad - intranets and extranets for business - http://team-notepad.com
-- 
Gllug mailing list  -  Gllug at gllug.org.uk
http://lists.gllug.org.uk/mailman/listinfo/gllug



