[Chester LUG] A few bits and pieces

Stuart Burns stuart.james.burns at gmail.com
Fri Jun 19 10:24:56 UTC 2009


Hi Guys,

Thanks for that info. In answer to a few questions:

40 GB is a big data set, and that doesn't even include the tiles. Part of
the issue, I suspect, is that we are trying to use a bit of software that
was designed for small sites to run reports covering the whole of the
country, so we probably have one of the largest data sets ever seen in this
respect.

The data comes from the SAN via an onboard (built into the machine) fibre
connection at 4 Gb/s. The dual gigabit NICs sound like a good idea. However,
the server is a Sun server of a certain vintage. It may well be worth
looking at the speed of the link into the DB. Would replicating the
data on the local machine be an idea?

What I may do is discuss with these people (they are map geeks, not computer
geeks) whether we can somehow break down the data sets. It does seem
rather inefficient. If I get a gold star, I shall buy you all a pint ;)
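On breaking the data sets down: if the source files can be keyed by region, each report run could touch only its region's slice instead of scanning the whole country. A rough sketch of the idea, assuming (hypothetically — the real layout depends on the GIS package) that filenames carry a region prefix like `chester-roads.dat`:

```shell
#!/bin/sh
# Sketch: split a flat dump of region-prefixed files into per-region
# directories, so a report for one region reads only that directory.
# Uses a temporary tree as a stand-in for the real 40 GB data set.
data=$(mktemp -d)
touch "$data/chester-roads.dat" "$data/chester-rivers.dat" "$data/wrexham-roads.dat"

for f in "$data"/*.dat; do
    region=$(basename "$f" | cut -d- -f1)   # text before the first dash
    mkdir -p "$data/$region"
    mv "$f" "$data/$region/"
done

ls "$data"
```

Whether the reporting software can then be pointed at a single region's directory is the question to put to the map geeks.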

It is not ideal, but the step up to the daddy of the spatial software is, I
suspect, priced in six figures, and we aren't talking the last two being
pence!

Definitely some things to think about. If we can get the price down a tad,
we could buy two of them! ;)




2009/6/19 Andrew Williams <andy at tensixtyone.com>

> Being a BI myself, hopefully I can put some good input in :)
>
> On Fri, Jun 19, 2009 at 10:25:42AM +0100, Stuart Burns wrote:
> > At work we do a lot of GIS reporting (creating reports based on 3D maps).
> > Now I was just speaking to the guy who runs these reports (they get a
> > dozen or so a week, depending on what is happening) and he can't cope.
> >
> > Until now they have been running on a Core Duo laptop with 1 GB of RAM. I
> > kid you not. Each report takes 48 hours. I suggested they get a super
> > machine ;)
> >
> > The data set (40 GB) comes from a SAN.
>
> That's a massive dataset for a single report; does it use the whole set
> or just a subset?
>
> > Can anyone suggest a high-end rig (money more or less no object)? Put
> > simply, for each 10% quicker it can go, it could save up to two weeks of
> > pedal-to-the-metal computing per year.
> >
> > I was thinking along the lines of a top-of-the-line AMD (this program can
> > do 64-bit computing but doesn't use multiple CPUs that efficiently).
> >
> > Around 24 GB RAM.
> > 2 x 15K 500 GB SATA drives in a RAID 0 (stripe) config.
> > 1 Gb NIC, as the data comes in over fibre.
> > Bonded FC4 fibre cards.
> > 1 GB dual-head video card with onboard acceleration.
>
> I suspect that 90% of the runtime will be grabbing data from that large
> set, so I think you should look at dual 1 Gb NICs, if your network
> infrastructure supports it.
>
> I guess it depends on the app and how it actually handles the data, but
> wads of RAM and CPU time will help it along. I think Core Duos will give
> you more bang per CPU; if the app has multi-CPU issues then you want a
> processor that can really hammer the calculations on a single core.
>
>
> --
> Andrew Williams
> w: http://tensixtyone.com/
> e: andy at tensixtyone.com
>
>
> _______________________________________________
> Chester mailing list
> Chester at mailman.lug.org.uk
> https://mailman.lug.org.uk/mailman/listinfo/chester
>
>

