[Gllug] HugePages

James Courtier-Dutton james.dutton at gmail.com
Thu Oct 28 23:28:01 UTC 2010


On 28 October 2010 18:02, Steve Parker <steve at steve-parker.org> wrote:
> All the documentation I can find about huge pages talks about reserving
> a small fraction of the total RAM for huge pages - e.g.,
> Documentation/vm/hugetlbpage.txt has an example of "echo 20 >
> /proc/sys/vm/nr_hugepages";
> http://www.cyberciti.biz/tips/linux-hugetlbfs-and-mysql-performance.html
> says "On busy server with 16/32GB RAM, you can set this to 512 or higher
> value." - i.e., 1GB of a 16GB or even 32GB machine.
>
> If a dedicated 16GB database server can make use of hugepages, why would
> anyone allocate only 1GB, not 12GB or even more? Sure, software not
> written for hugepages can't use that memory, and it can't be swapped
> out, but if the machine only exists to run a database, why not give it
> all the RAM it needs, in the much more efficient hugepages layout?
>
> The reason I ask is that we have a 72GB server running JRockit with
> 48GB+ JVMs in a 60GB block of hugepages; with no swapping, and 3GB free
> of the 12GB "normal" memory that is left, performance is dismal - "ls"
> in an empty directory can take many seconds. If we bring down the
> hugepages, things seem much better. But the apps guys say that their JVM
> has to be all in hugepages, so if there's only 20GB of hugepages, then
> their JVM can't store more than 20GB of data in RAM (which is the design
> we have been told we must work with).
>
> Does anybody have much experience with hugepages? This is RHEL 5.3, on
> 2 x quad-core hyperthreaded Intel Xeon 5570s.
>

It sounds to me as though you are suffering from memory pressure.
Some things to look for:
1) Apart from the JVMs, what else is using memory? If you have large
filesystems, the kernel will use RAM to speed up lookups: dentry and
inode caches, plus the page cache for file data. The larger and busier
the filesystem, the more RAM those caches and allocation tables
occupy. (The sketch below pulls the relevant counters out of
/proc/meminfo.)
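
As a quick way to see where the memory is going, here is a minimal
sketch (assuming Linux with /proc mounted; the field names are the
ones found in /proc/meminfo on kernels of this era). It just filters
the counters most relevant here: free memory, the page cache, kernel
slab caches, and the HugePages_* pool statistics. "cat /proc/meminfo"
gives you the same data; the program only trims it down.

/* Minimal sketch: print the /proc/meminfo fields most relevant to
 * hugepage and cache pressure. Assumes Linux with /proc mounted. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/proc/meminfo", "r");
    char line[256];

    if (!f) {
        perror("fopen /proc/meminfo");
        return 1;
    }
    while (fgets(line, sizeof(line), f)) {
        /* MemFree, Cached and Slab show what "normal" memory is
         * left; HugePages_* show the state of the huge pool. */
        if (strncmp(line, "MemFree", 7) == 0 ||
            strncmp(line, "Cached", 6) == 0 ||
            strncmp(line, "Slab", 4) == 0 ||
            strncmp(line, "HugePages_", 10) == 0)
            fputs(line, stdout);
    }
    fclose(f);
    return 0;
}

Run it while the JVMs are under load and watch how much of the
machine is tied up in caches versus the huge pool.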
2) Google a bit to understand how memory allocation works.
A quick example:
An application requests 4 bytes of memory. The kernel assigns a free
page to that application, and the allocator carves a small section
out of the page to hold the 4 bytes. If the page is 4K, that request
for 4 bytes has actually consumed 4K of RAM. The next request for 4
bytes from the same application can be satisfied from the unused
space in that page. But if a different process requests 4 bytes, it
cannot be allocated from the same page as the previous process, so a
second 4K page is allocated to the new process.
So, one process has requested 8 bytes of memory and another process
has requested 4 bytes, but this has actually used up 8K of RAM.
Now scale that up to huge pages: if the huge pages are 1GB (although
in your case they are probably 2MB), all it would take is 72
processes requesting 4 bytes of memory each to consume 72GB of RAM.
This can explain why a process uses far more RAM than you would
otherwise expect. (The sketch below makes the per-page granularity
visible.)
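
Here is a minimal sketch of that granularity (assuming Linux; I use
mmap() rather than malloc() so the kernel's whole-page granting is
visible, and read the resident set size from /proc/self/statm).
Storing 4 bytes makes a whole page resident, so the difference
between the two RSS readings should be roughly one page.

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Resident set size in kB, from the second field of
 * /proc/self/statm (resident pages). */
static long resident_kb(void)
{
    long size = 0, resident = -1;
    FILE *f = fopen("/proc/self/statm", "r");

    if (!f)
        return -1;
    if (fscanf(f, "%ld %ld", &size, &resident) != 2)
        resident = -1;
    fclose(f);
    return resident < 0 ? -1 : resident * (sysconf(_SC_PAGESIZE) / 1024);
}

int main(void)
{
    char *p;

    printf("page size:  %ld bytes\n", sysconf(_SC_PAGESIZE));
    printf("RSS before: %ld kB\n", resident_kb());

    /* Ask for 4 bytes; the kernel can only hand out whole pages. */
    p = mmap(NULL, 4, PROT_READ | PROT_WRITE,
             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        return 1;
    memcpy(p, "abcd", 4);   /* touching the page makes it resident */

    printf("RSS after:  %ld kB\n", resident_kb());
    return 0;
}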

3) Memory fragmentation.
This is one of the worst problems, and a very difficult one to fix.
Quick example:
An application requests 4 bytes of memory (call it request 1) and is
assigned one 4K page. The application then requests more small
amounts of memory (requests 2-200). On request 201 it asks for 3 more
bytes; the first 4K page is now full, so a second 4K page is assigned
to hold request 201. The application then frees the memory from
requests 2-200, so only requests 1 and 201 still exist.
The application has been allocated 8K of RAM, but the surviving
requests total just 7 bytes. Each page of RAM has to stay allocated
until the process has freed every request against that page, so
although the application only needs 7 bytes of memory, it is holding
2 pages, and thus 8K of RAM.
Expand this case to 2MB huge pages and those same 7 bytes are now
holding 4MB of RAM. So, by using hugepages, you have just turned an
8K RAM resource hog into a 4MB RAM resource hog. (The sketch below
demonstrates the effect with ordinary pages.)
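
Here is a minimal sketch of that effect with ordinary 4K pages
(assuming Linux and glibc malloc; the block count and size are
arbitrary, just large enough for the effect to show up in RSS). It
allocates a run of blocks, frees everything except the first and the
last, and shows that resident memory barely drops, because the heap
cannot be trimmed past the last live allocation.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define NBLOCKS 10000   /* arbitrary: roughly 40MB of heap */
#define BLKSIZE 4096

/* Resident set size in kB, as in the previous sketch. */
static long resident_kb(void)
{
    long size = 0, resident = -1;
    FILE *f = fopen("/proc/self/statm", "r");

    if (!f)
        return -1;
    if (fscanf(f, "%ld %ld", &size, &resident) != 2)
        resident = -1;
    fclose(f);
    return resident < 0 ? -1 : resident * (sysconf(_SC_PAGESIZE) / 1024);
}

int main(void)
{
    static char *blocks[NBLOCKS];
    int i;

    for (i = 0; i < NBLOCKS; i++) {
        if ((blocks[i] = malloc(BLKSIZE)) == NULL)
            return 1;
        memset(blocks[i], 1, BLKSIZE);  /* make the pages resident */
    }
    printf("RSS, all blocks live:    %ld kB\n", resident_kb());

    /* Free everything except the first and last block. The heap
     * cannot shrink past the last live allocation, so the freed
     * pages stay with the process. */
    for (i = 1; i < NBLOCKS - 1; i++)
        free(blocks[i]);
    printf("RSS, only 2 blocks live: %ld kB\n", resident_kb());

    free(blocks[0]);
    free(blocks[NBLOCKS - 1]);
    return 0;
}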

So, this is one of the reasons why using too many hugepages is a bad idea.

Java does in fact help you here: although a native x86 application
cannot get out of the memory fragmentation problem described above,
the Java memory manager can, because its garbage collector can move
objects from one page to another without the Java application having
to know. This is one of the advantages of a software VM over the
hardware-implemented virtual memory a native process relies on.

You will have to investigate where your memory pressure problem
actually lies; hopefully it is not due to (3), because fixing that
generally means reworking the application.

Kind Regards

James