[Gllug] HugePages
Steve Parker
steve at steve-parker.org
Thu Oct 28 17:02:53 UTC 2010
All the documentation I can find about huge pages talks about reserving
only a small fraction of the total RAM for them - e.g.,
Documentation/vm/hugetlbpage.txt has an example of "echo 20 >
/proc/sys/vm/nr_hugepages", and
http://www.cyberciti.biz/tips/linux-hugetlbfs-and-mysql-performance.html
says "On busy server with 16/32GB RAM, you can set this to 512 or higher
value" - i.e., 512 x 2MB pages is only 1GB of a 16GB or even 32GB machine.
If a dedicated 16GB database server can make use of hugepages, why would
anyone allocate only 1GB rather than 12GB or more? Sure, software not
written for hugepages can't use that memory, and it can't be swapped
out, but if the machine exists only to run a database, why not give it
all the RAM it needs in the much more efficient hugepages layout?
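If you did want to hand a dedicated box most of its RAM as hugepages,
the arithmetic is simple enough (again a sketch, assuming 2MB pages;
the mount point is illustrative):

    # 12GB / 2MB per page = 6144 pages
    echo 6144 > /proc/sys/vm/nr_hugepages
    # mount hugetlbfs so applications can map the pages
    mkdir -p /mnt/huge
    mount -t hugetlbfs none /mnt/huge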
The reason I ask is that we have a 72GB server running JRockit, with
48GB+ JVMs in a 60GB block of hugepages. With no swapping, and 3GB of
the remaining 12GB of "normal" memory free, performance is dismal - an
"ls" in an empty directory can take many seconds. If we reduce the
hugepages reservation, things seem much better. But the apps guys say
their JVM has to live entirely in hugepages, so if there are only 20GB
of hugepages, their JVM can't hold more than 20GB of data in RAM (and
that is the design we have been told we must work with).
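In case it helps anyone reproduce or diagnose this, the obvious places
to look (standard kernel interfaces, nothing JRockit-specific; a
sketch, not a full diagnosis) would be:

    # reserved vs free huge pages
    grep Huge /proc/meminfo
    # fragmentation of the remaining "normal" memory zones
    cat /proc/buddyinfo
    # reclaim/scanning activity while "ls" stalls
    vmstat 1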
Does anybody have much experience with hugepages? This is RHEL 5.3, on
2 x quad-core hyperthreaded Intel Xeon 5570s.
Steve