[Gllug] SQL performance tuning (was: RE: Restore RAID0 /var)

Richard Jones rich at annexia.org
Fri Aug 28 08:29:47 UTC 2009


On Thu, Aug 27, 2009 at 02:31:37PM +0100, Walter Stanish wrote:
> Thanks for your contribution but:
>  - the question *was* asked within the context of
>    optimisation for an upcoming web application deployment
>  - the initial answer already highlighted the fact that
>    some strategies were obviously quicker to implement
>    than others
>  - there have already been similar follow ups... here's +1...

One issue with optimizing databases specifically is that the query
plan, and the time taken to run queries, can be affected greatly by
the distribution and quantity of the data in the database itself.  I
can think of a couple of web apps I've written which performed well in
development on small test datasets, but crawled when deployed against
the real database.

If there's a big separation between the developers and the DBAs / live
database, then such problems can be hard to solve.
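The effect above can be sketched with a small, self-contained example.
This uses SQLite rather than PostgreSQL purely because it needs no
server; the table name, sizes and query here are made up for
illustration.  The point is that the same query gets a different plan
once an index exists, and on a production-sized table the full scan is
exactly what "crawls":

```python
# Illustrative sketch: the planner's choice depends on what indexes
# (and how much data) exist.  SQLite used here only for convenience.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany(
    "INSERT INTO users (email) VALUES (?)",
    ((f"user{i}@example.com",) for i in range(100_000)),
)

def plan(sql):
    # EXPLAIN QUERY PLAN rows are (id, parent, notused, detail);
    # only the human-readable 'detail' column matters here.
    return [row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql)]

query = "SELECT id FROM users WHERE email = 'user99999@example.com'"

before = plan(query)   # full table scan over 100,000 rows
conn.execute("CREATE INDEX idx_users_email ON users (email)")
after = plan(query)    # index lookup instead

print(before)
print(after)
```

Run against a toy dataset both plans are fast, which is why the problem
only shows up once realistic data volumes are loaded.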

> I'm curious as to what percentage of PgSQL databases you
> estimate actually hit the disk for indexes?  I'd have
> thought that on the vast majority of web-app backends
> indexes will easily fit in memory... and in those 
> cases where the database size was so large that this
> were impossible, buying more RAM would most often be
> a more effective AND cheaper fix (both initial cost
> and running costs) than adding disks to the equation.
> Maybe I have this one wrong, though.
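The back-of-envelope arithmetic behind that claim can be sketched as
follows.  Every number here is an illustrative assumption, not a
measurement: a B-tree entry costs very roughly tens of bytes including
per-tuple overhead, and the row count and RAM size are invented:

```python
# Rough estimate: does a B-tree index fit in RAM?  All figures below
# are assumptions for illustration, not from any real deployment.
rows = 50_000_000        # assumed table size for a busy web app
bytes_per_entry = 40     # very rough: key bytes + per-entry overhead
index_size_gb = rows * bytes_per_entry / 1024**3

ram_gb = 16              # assumed server memory

print(f"estimated index size: {index_size_gb:.1f} GB")
print(f"fits in {ram_gb} GB RAM: {index_size_gb < ram_gb}")
```

Even at fifty million rows the estimate comes out around 2 GB, which is
comfortably inside commodity RAM, supporting the "buy more RAM" line of
argument for most web-app backends.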

Conversely, PostgreSQL is great because with reasonable care you can
deploy small instances which perform quite well.  The main database at
Merjis, which served multiple light web apps _and_ some OLAP analysis
stuff, was for a time deployed on a Xen VM with 256MB of RAM :-)
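For a memory-constrained instance like that, the tuning is mostly a
matter of keeping the per-backend and shared allocations small.  A
hedged sketch of a postgresql.conf fragment for a ~256MB VM follows;
these values are plausible assumptions, not the actual Merjis settings:

```
# Illustrative settings for a ~256MB instance (assumptions, not
# a record of any real deployment)
shared_buffers = 32MB
work_mem = 1MB
maintenance_work_mem = 16MB
effective_cache_size = 128MB
max_connections = 20
```

The key trade-off is that work_mem is charged per sort/hash per
backend, so on a small box it has to stay low even if individual
queries would benefit from more.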

Rich.

-- 
Richard Jones
Red Hat
-- 
Gllug mailing list  -  Gllug at gllug.org.uk
http://lists.gllug.org.uk/mailman/listinfo/gllug



