Tuesday, February 19, 2008

JVM Performance Tuning

Last week was JBoss World, and it was exciting to be a part of it. I gave a presentation on performance tuning our Enterprise Application Platform (EAP), and it was packed; people were sitting on the floor in pretty much all the available space. What struck me about the presentation, and about many of the discussions I had with individuals afterwards, is that JVM tuning is a big topic. So, I thought I would share some of what I learned over the past couple of months while preparing for my presentation.

In preparing for my presentation, I wrote an EJB 3 application, wrote a load test for it, and applied optimizations to various configuration parameters within the EAP, the JVM, and the operating system. One JVM and OS setting in particular made a huge difference in throughput, and it's something that I wanted to share here.

When using a 64-bit OS, in my case Fedora 8 and RHEL 5.1, I wanted to investigate large page memory support, or HugeTLB as it's referred to within the Linux kernel. What I found was very scarce documentation, and that documentation was too incomplete to actually make it work. I also found that it makes a huge difference in the overall throughput and response times of an application when using heap sizes above 2GB.

So, without further ado, let's dive into how to set this up. These instructions are for Linux, specifically for Fedora 8 and RHEL 5.1, but the results should be generally applicable to any 64-bit OS and 64-bit JVM that supports large page memory (which all the proprietary UNIXes do, and I found an MSDN article describing how to use this on 64-bit Windows).

You must have root access for these settings. First, you need to set the kernel parameter for shared memory to be at least as large as the amount of memory you want to set aside for the JVM to use as large page memory. Personally, I like to just set it to the total amount of memory in the server, so I can play with different heap sizes without having to adjust this every time. You set this by putting the following entry into /etc/sysctl.conf:

kernel.shmmax = n

where n is the number of bytes. So, if you have a server with 8GB of RAM, you would set it to 8589934592 (1024*1024*1024*8), which is 8GB.
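As a sanity check, the arithmetic is easy to do in the shell (the 8GB figure is just this example's server size):

```shell
# kernel.shmmax for a server with 8GB of RAM: 1024^3 bytes per GB, times 8
echo $(( 1024 * 1024 * 1024 * 8 ))   # prints 8589934592
```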

Second, you need to set a virtual memory kernel parameter to tell the OS how many large memory pages you want to set aside. You set this by putting the following entry into /etc/sysctl.conf:

vm.nr_hugepages = n

where n is the number of pages, based on the huge page size listed in /proc/meminfo. If you cat /proc/meminfo, you will see the large page size of your particular system; it varies depending on the architecture of the system. Mine is an old Opteron system with a page size of 2048 KB, as shown by the following line in /proc/meminfo:

Hugepagesize: 2048 kB

I wanted to set aside 6GB, so I set the parameter to 3072, which is (1024*1024*1024*6)/(1024*1024*2): 6GB divided by 2MB, since 2048 KB is 2MB.
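The same calculation in shell form, assuming the 2048 KB page size from my system:

```shell
# Number of huge pages needed for a 6GB pool with 2MB (2048 KB) pages
echo $(( (1024 * 1024 * 1024 * 6) / (1024 * 1024 * 2) ))   # prints 3072
```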

After this, you need to set another virtual memory parameter to give your process permission to access the shared memory segment. In /etc/group, I created a new group called hugetlb (you can call it whatever you like, as long as it doesn't collide with any other group name), which got a gid of 501 on my system (yours will vary, depending on whether you use the GUI tool, as I did, or the command line, and on what groups you already have defined). You put that group id in /etc/sysctl.conf as follows:

vm.hugetlb_shm_group = gid

where gid, in my case, was 501. You also add that group to whatever user id the JVM will be running as. In my case this was a user called jboss.
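If you prefer the command line to the GUI tool, the group setup can be sketched like this (the group name hugetlb and user jboss are just the examples from this post, and the gid you get will almost certainly differ):

```shell
# Create the group; the system assigns it the next free gid
groupadd hugetlb
# Add the JVM user to the new group (supplementary group, -aG appends)
usermod -aG hugetlb jboss
# Show the group entry, so you know the gid to put into vm.hugetlb_shm_group
getent group hugetlb
```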

Now, that concludes the kernel parameter setup, but there is still one more OS setting, which raises the user's security limits to allow it to lock the shared memory into physical memory (the memlock limit). Large page shared memory is locked into memory and cannot be swapped to disk, which is another major advantage of using large page memory: having your heap space swapped to disk can be catastrophic for an application. So, you set this parameter in /etc/security/limits.conf as follows:

jboss soft memlock n
jboss hard memlock n

where n is equal to the number of huge pages set in vm.nr_hugepages, times the page size from /proc/meminfo, which in my example would be 3072*2048 = 6291456. This concludes the OS setup, and now we can actually configure the JVM.
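The memlock value can be derived the same way (limits.conf expresses memlock in KB, which is why multiplying the page count by the 2048 KB page size gives the right units):

```shell
# memlock limit in KB: number of huge pages times the 2048 KB page size
echo $(( 3072 * 2048 ))   # prints 6291456
```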

The JVM parameter for the Sun JVM is -XX:+UseLargePages (for BEA JRockit it's -XXlargePages, and for IBM's JVM it's -lp). If you have everything set up correctly, then you should be able to look at /proc/meminfo after starting the JVM and see that the large pages are being used.

A couple of additional caveats and warnings. First, you can have the kernel settings take effect dynamically by using sysctl -p. However, if the server has been running for almost any length of time, you may not get all the pages you requested, because large pages require contiguous physical memory; you may have to reboot for the settings to take effect. Second, when you allocate this memory, it is removed from the general memory pool and is not accessible to applications that don't have explicit support for large page memory and aren't configured to access it. So, what kind of results can you expect?

Well, in my case, I was able to achieve an over 3x improvement in my EJB 3 application, of which fully 60 to 70% was due to using large page memory with a 3.5GB heap. Notably, a 3.5GB heap without large pages didn't provide any benefit over smaller heaps. Besides the throughput improvements, I also noticed that GC frequency was cut by two-thirds, and GC time was cut by a similar percentage (each individual GC event was much shorter in duration). Of course, your mileage will vary, but this one optimization is worth looking at for any high-throughput application.

Good luck!

65 comments:

hotsun said...

Andrig,
Thank you for the good info about OS and JVM tuning.
You said you achieved over a 3x improvement after tuning. Could you provide a little more info about the original settings before tuning?
What tuning tools are you using?
Thanks in advance.

Jim

Andrig T Miller said...

The 3x improvement was due to other configuration changes besides the JVM that were specific to my application running on the JBoss EAP 4.2 release. They were mostly adjustments to the pool sizes for the EJB 3 objects.

Having said that, the changes for the JVM resulted in around 70% of that total change in throughput.

The starting point was using -server -ms3584m -mx3584m.

This is simply putting the HotSpot JVM into server mode, and using a minimum and maximum heap size of 3.5GB.

After this I added -XX:+UseLargePages. Of course, in order for this to work, you have to go through the OS setup instructions in the blog. That was all I did. I believe in keeping things as simple as possible as your starting point, and adding one thing at a time to see what it does for you. The simpler you keep the options, the better off you are going to be, IMHO.

rostiarso said...

Thanks for the tips Andrig.

Additional info for those who are still stuck with 2.4 kernels (e.g. RHEL 3): use the vm.hugetlb_pool parameter to set the huge page pool size (in MB). So to configure a 6GB pool, the sysctl parameter is vm.hugetlb_pool=6144

Andrig T Miller said...

Thanks for the additional RHEL 3 information. That will probably help quite a few folks, considering the cycle time to upgrade the OS that most organizations have.

Thanks again!

arnold_mad said...

Hi !

I tried to set up my JBoss the same way you did, but I always get this error when starting up the JVM:

Java HotSpot(TM) 64-Bit Server VM warning: Failed to reserve shared memory (errno = 12).

Do you know what could be wrong?

Andrig T Miller said...

You get errno=12 when you don't have HugeTLB set up correctly, or when you don't have enough pages.

If you show me specifically what you set up on the Linux side of things, and what you are passing on the JVM command line, I can probably tell you what's wrong.

Someone said...

Hi Andrig,

Hopefully you still read your blog (and this post!). I have a RH Linux machine that is configured for large pages. If I run my application (with 24gb) using Jrockit, I can access the large pages. If I use Sun's JVM, I get the errno = 12. Any idea what to look at?

Andrig T Miller said...

Are you sure you are really accessing the large pages with JRockit? If you look at /proc/meminfo, what do you see? You should see non-zero values in HugePages_Rsvd, HugePages_Free, etc.

I wouldn't be surprised if it's not working with JRockit either, but failing silently, versus the Sun JVM.

Besides verifying that it is actually working with JRockit, via the HugePages_ values in /proc/meminfo, I would take a look at:

/etc/security/limits.conf - do you have the user that you are running the JVM as defined there with the proper values for memlock?

/etc/sysctl.conf - do you have the following defined:

# Change maximum shared memory segment size to 8GB
kernel.shmmax = 8589934592

# Add the gid to the hugetlb_shm_group to give access to the users
vm.hugetlb_shm_group = 501

# Add 6GB of in 2MB pages to be shared between the JVM and MySQL
vm.nr_hugepages = 3072

Of course your values might be different, and the group will have to be added in /etc/group and assigned to the specific user you are running the JVM as.

Anyway, if you want to post the values you have in /etc/sysctl.conf, /etc/security/limits.conf, and /etc/group, and tell me what user you are running the JVM as, I can probably spot what is wrong. Oh, also /proc/meminfo, and how much memory you want to use as large pages.

cadmo said...

hi andrig - thanks for the very useful post.

i have applied your suggestion to my jboss system (sun jvm1.5 on redhat 4 on AMD) and it is working at the OS level, but the JVM does not seem to use any large pages.

the -XX:+UseLargePages is used.

any suggestion?

Andrig T Miller said...

Hmm... How do you know that the JVM is not using the large pages?

If you include the text of what's in /proc/meminfo, that might be helpful as well.

gdimitrov said...

Very helpful.
It is important to update the kernel, glibc, and maybe other packages in order for this to work.
I tried it with the original RHEL 4 update 2 and had to update to the latest update 6 before it worked, even though I had done all the settings correctly beforehand.

Andrig T Miller said...

Thanks for the tip. Yes, keeping your RHEL installation up to date is definitely key to making sure this works. I'm not surprised there were issues, and that you had to upgrade to update 6 for it to work.

rafaelcba said...

Hello!

I set up my environment as you described in this post, but when I start my JVM it shows the following WARN:

Java HotSpot(TM) 64-Bit Server VM warning: Failed to reserve shared memory (errno = 22).

My JAVA_OPTS is:

JAVA_OPTS: -Dprogram.name=run.sh -Dflag.kill.jboss= -server -Duser.language=pt -Duser.region=BR -Dfile.encoding=ISO8859_1 -XX:+UseParallelGC -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/var/log/JVMGC/gc.log -XX:+DisableExplicitGC -XX:+UseSpinning -Xms64g -Xmx64g -XX:NewSize=16g -XX:MaxNewSize=16g -XX:SurvivorRatio=6 -XX:+PrintTenuringDistribution -XX:+UseLargePages -XX:PermSize=512m -XX:MaxPermSize=512m -XX:+CMSClassUnloadingEnabled -XX:+UseTLAB -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000 -Dcom.sun.management.jmxremote.port=7777 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.net.preferIPv4Stack=true

my /proc/meminfo is:
HugePages_Total: 32768
HugePages_Free: 32768
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB

my system limits is:
# ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 1056768
max locked memory (kbytes, -l) 67108864
max memory size (kbytes, -m) unlimited
open files (-n) 99999
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 1056768
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited

I am using Debian (Linux Version 2.6.26-2-amd64).

Can you help me?

Thanks.

Andrig T Miller said...

In looking at this rather long set of options, here is what is wrong. While I have found conflicting documentation around this, the truth is you have only allocated 64GB of huge pages, but your command line is asking for that plus another 512MB, because of the perm size parameter. You need to add another half gigabyte of huge pages for this to work; the perm size is additive to your maximum heap setting.

rafaelcba said...

Hello.

This same configuration worked fine on RHEL 5.3 64-bit.

On Debian amd64 the maximum JVM HEAP which I could set was 8GB. I don't know why yet :(

Thanks.

Andrig T Miller said...

Interesting that it worked at all. For the Debian side of the equation, do you know how the kernel was built? There is certainly something different, but I don't have a clue what.

Darin Pope said...

I'm having issues getting the large pages to be used. Here's the error:

Java HotSpot(TM) 64-Bit Server VM warning: Failed to reserve shared memory (errno = 12).

We are running Tomcat 6.0.18 running Sun JDK 1.6.0_12 configured with a 6GB heap on a CentOS 5.3 server with just under 8GB. I would like to use 6GB for large pages.

I have followed your instructions, including rebooting after making all the changes, but the error persists.

Here's all my pertinent info:

JVM opts:
-verbose:gc
-XX:+UseLargePages
-XX:+UseParallelGC
-XX:+UseParallelOldGC
-XX:ParallelGCThreads=2
-XX:NewRatio=2
-XX:+PrintGCDetails
-Xloggc:/opt/tomcat-instance/tomcat1/logs/gc_parallel.log
-Xms6144m
-Xmx6144m

/proc/meminfo:
MemTotal: 7927716 kB
MemFree: 1147532 kB
Buffers: 14740 kB
Cached: 169492 kB
SwapCached: 0 kB
Active: 290448 kB
Inactive: 149664 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 7927716 kB
LowFree: 1147532 kB
SwapTotal: 4095992 kB
SwapFree: 4095992 kB
Dirty: 24 kB
Writeback: 0 kB
AnonPages: 255864 kB
Mapped: 20124 kB
Slab: 22000 kB
PageTables: 4116 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
CommitLimit: 4914120 kB
Committed_AS: 6802352 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 2940 kB
VmallocChunk: 34359733659 kB
HugePages_Total: 3072
HugePages_Free: 3072
HugePages_Rsvd: 0
Hugepagesize: 2048 kB

OS:
Linux vm-tomcat-prd16 2.6.18-128.7.1.el5 #1 SMP Mon Aug 24 08:21:56 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux

/etc/sysctl.conf
# Kernel sysctl configuration file for Red Hat Linux
#
# For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
# sysctl.conf(5) for more details.

# Controls IP packet forwarding
net.ipv4.ip_forward = 0

# Controls source route verification
net.ipv4.conf.default.rp_filter = 1

# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0

# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0

# Controls whether core dumps will append the PID to the core filename
# Useful for debugging multi-threaded applications
kernel.core_uses_pid = 1

# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1

# Controls the maximum size of a message, in bytes
kernel.msgmnb = 65536

# Controls the default maxmimum size of a mesage queue
kernel.msgmax = 65536

# Controls the maximum shared segment size, in bytes
#kernel.shmmax = 68719476736

# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 4294967296

#
# settings for huge page support
#
# http://andrigoss.blogspot.com/2008/02/jvm-performance-tuning.html
#
# since we have just under 8GB of memory on the server, set shmmax to 7GB:
kernel.shmmax = 7516192768
vm.nr_hugepages = 3072
vm.hugetlb_shm_group = 506

/etc/security/limits.conf:
tomcat soft memlock 629146
tomcat hard memlock 629146

/etc/group:
tomcat:x:505:
hugetlb:x:506:tomcat

id tomcat:
uid=505(tomcat) gid=505(tomcat) groups=505(tomcat),506(hugetlb) context=user_u:system_r:unconfined_t

ulimit -a:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 65536
max locked memory (kbytes, -l) 629146
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited

Andrig T Miller said...

Darin,

Your kernel parameter for setting the maximum amount of shared memory is commented out:

# Controls the maximum shared segment size, in bytes
#kernel.shmmax = 68719476736

Remove the # from kernel.shmmax and try it again.

Sorry for the late reply. My inbox has been a mess lately.

Brett Cave said...

it was added later on, with a new size. kernel.shmmax = 7516192768

Andrig T Miller said...

You know I think the problem is that you actually don't have enough large pages configured. You are allocating 6GB on the heap, and you have 6GB of large pages, which seems right on the surface, but what I have learned is that the PermGen space is additive to the 6GB of space for the -Xmx size. So, you need to add some more pages to the kernel. I believe the default is 64MB for PermGen, so add at least 32 more pages to the hugepages configuration for the kernel.

Anonymous said...

You say "where n is equal to the number of huge pages, set in vm.nr_hugepages, times the page size from /proc/meminfo, which in my example would be, 3072*2048 = 629146."

3072*2048=6291456

You are missing the 5 and making a WAY different number. People will do their own equations, but I figured I would help out with the minor typo.

Andrig T Miller said...

Thanks for catching the typo. I fixed it so it's correct!

Kirk said...

So I am getting the "Java HotSpot(TM) 64-Bit Server VM warning: Failed to reserve shared memory (errno = 12)." error only on shutdown. Everything starts up correctly: Tomcat reserves hugepages just fine, and runs just fine too. It only happens when I shut down the server, and I do not want any errors before I deploy this change to my production environment. I even bumped the amount of hugepages up to 12GB just to see; it still errors on shutdown. I see nothing in /var/log/messages or any other file I can think of looking in.

Any ideas?

Andrig T Miller said...

It should be impossible to get the error on shutdown, versus startup. I'm wondering if it is really accessing the hugepages at all. After startup, and when running some tests, what do you see in /proc/meminfo, in the following two fields:

HugePages_Free
HugePages_Rsvd

If HugePages_Free is the same as HugePages_Total, and HugePages_Rsvd is 0, then the JVM is not using the Huge Pages or large pages.

Mohan said...

Hi Andrig,

I am getting the "Java HotSpot(TM) 64-Bit Server VM warning: Failed to reserve shared memory (errno = 12)." error only on shutdown. Tomcat starts up correctly and reserves hugepages. It only happens when I shut down the server. Below is the output of meminfo before and after Tomcat startup. Any ideas what the issue could be?

[tomcat@appser01 ~]$ cat /proc/meminfo | grep -i huge
HugePages_Total: 3072
HugePages_Free: 2987
HugePages_Rsvd: 2519
Hugepagesize: 2048 kB
[tomcat@appser01 ~]$ /opt/tomcat/bin/startup.sh
Using CATALINA_BASE: /home/tomcat
Using CATALINA_HOME: /opt/tomcat
Using CATALINA_TMPDIR: /home/tomcat/temp
Using JRE_HOME: /usr
Using CLASSPATH: /opt/tomcat/bin/bootstrap.jar
Java HotSpot(TM) 64-Bit Server VM warning: Failed to reserve shared memory (errno = 12).
[tomcat@appser01 ~]$ cat /proc/meminfo | grep -i huge
HugePages_Total: 3072
HugePages_Free: 3072
HugePages_Rsvd: 0
Hugepagesize: 2048 kB
[tomcat@appser01 ~]$ /opt/tomcat/bin/shutdown.sh
Using CATALINA_BASE: /home/tomcat
Using CATALINA_HOME: /opt/tomcat
Using CATALINA_TMPDIR: /home/tomcat/temp
Using JRE_HOME: /usr
Using CLASSPATH: /opt/tomcat/bin/bootstrap.jar
[tomcat@appser01 ~]$ cat /proc/meminfo | grep -i huge
HugePages_Total: 3072
HugePages_Free: 3034
HugePages_Rsvd: 2566
Hugepagesize: 2048 kB
[tomcat@appser01 ~]$

Andrig T Miller said...

Umh... This doesn't make any sense. How could you have some of the large pages showing up as reserved before you startup Tomcat, and then have no pages reserved after you startup Tomcat? Then once again have large pages reserved after you shut it down?

You can only get the errno=12 on a shmget system call, when it tries to reserve the heap space. It's not possible to get this on stopping the process, since that system call will not be made.

It almost looks like you actually already have the JVM started, and then you try to start it up again, which of course you would get this error, because there wouldn't be enough large pages left to allocate.

Mohan said...

Hey Andrig,

Thanks for your reply. However, I didn't have any JVM running before starting Tomcat. I managed to find the solution and have posted it on my site; here is the link (http://www.mohancheema.net/appserver/java-hotspottm-64-bit-server-vm-warning-failed-to-reserve-shared-memory-errno-12). Please check it and let me know if it is correct.

Thanks for your article, it helped me a lot.

Regards,

Mohan

Andrig T Miller said...

Mohan,

Based on your blog post about your solution, I learned something new recently: you can set the memlock for the user to "unlimited", like the following:

soft memlock unlimited
hard memlock unlimited

This makes it much easier, because you don't have to change the value if you later want to change the number of large pages being allocated, and avoids possible mistakes to the calculation.

EDO said...

I am trying to allocate RAM with Xms = Xmx on a SLES 10 x64 running under VMware.

When stopping the JVM the following error is thrown:

Java HotSpot(TM) 64-Bit Server VM warning: Failed to reserve shared memory (errno = 12).

The VM has 8GB of RAM, and it is reserved.

The VM sees 8GB, and it can be allocated at runtime via the Xmx setting.

On another virtual SLES 10 with 16GB of RAM reserved via VMware, I don't have a problem allocating RAM; even when setting the hugepages and shmmax only by echo, it works fine.

echo 8000 > /proc/sys/vm/nr_hugepages

echo 8589934592 > /proc/sys/kernel/shmmax

Using the echo commands on the other SLES 10 shows no effect in /proc/meminfo at all.

Here are my configs; the first is the SLES 10 where Xms fails to allocate.

# more /apps/liferay-portal-5.2.5/tomcat-5.5.27/bin/setenv.sh
JAVA_HOME=/apps/java5
JRE_HOME=/apps/java5
JAVA_OPTS="$JAVA_OPTS -Xms3G -Xmx3G -XX:NewRatio=3 -XX:MaxPermSize=256m -XX:SurvivorRatio=20 -Dsun.rmi.dgc.client.gcInterval=1800000 -Dsun.rmi.dgc.server.gcInterval=1800000 -XX:+UsePa
rallelGC -XX:ParallelGCThreads=4 -XX:+UseLargePages -Xloggc:/apps/gc.log -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCApplicationConcurrentTime -XX:+PrintGC -XX:+PrintGCTimeStamps -
XX:+PrintGCDetails -Dfile.encoding=UTF8 -Duser.timezone=GMT+2 -Djava.security.auth.login.config=$CATALINA_HOME/conf/jaas.config -Dorg.apache.catalina.loader.WebappClassLoader.ENABLE_C
LEAR_REFERENCES=false"


more /etc/sysctl.conf
kernel.shmmax=7516192768
vm.nr_hugepages=3072
vm.hugetlb_shm_group=1000

more /etc/security/limits.conf


#@student - maxlogins 4
* soft memlock unlimited
* hard memlock unlimited
tomcat soft memlock 6291456
tomcat hard memlock 6291456
# End of file


# cat /proc/meminfo
MemTotal: 7928752 kB
MemFree: 737004 kB
Buffers: 0 kB
Cached: 417368 kB
SwapCached: 0 kB
Active: 487428 kB
Inactive: 324072 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 7928752 kB
LowFree: 737004 kB
SwapTotal: 2097144 kB
SwapFree: 2097020 kB
Dirty: 0 kB
Writeback: 0 kB
AnonPages: 397208 kB
Mapped: 72180 kB
Slab: 62136 kB
CommitLimit: 2915792 kB
Committed_AS: 748576 kB
PageTables: 3292 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 7028 kB
VmallocChunk: 34359731271 kB
HugePages_Total: 3072
HugePages_Free: 2305
HugePages_Rsvd: 897
Hugepagesize: 2048 kB


After a lot of playing around, I found that when I lower the number of huge pages to 3200 on the second machine, the same limitation occurs.

I can then only allocate 2900MB for Xms and Xmx.

When I increase the number of huge pages to 8000 again, I can allocate more than 3GB of RAM on JVM startup.

This is very strange - any ideas?

Greetings
EDO

Andrig T Miller said...

I'm wondering how VMware handles large pages, and whether you are really getting the 6GB of large pages you have configured. Large page memory has to be contiguous and cannot be swapped, and with a virtual environment this may not work the way it would on bare metal. I did find this link:

https://www.vmware.com/files/pdf/large_pg_performance.pdf

It actually says that with older versions of ESX, large page memory was emulated (ouch!). I'm not sure what version of VMware you are using, but I would recommend reading all the information you can find on large pages and VMware.

Anonymous said...

I was really happy to read that you achieved over a 3x improvement after tuning. Could you provide a little more info about the original settings before tuning?
What tuning tools are you using?

Andrig T Miller said...

The only "tools" I used, in this case, was JMeter for driving the load. Everything else was test after test after test, changing one thing at a time and measuring the result. This was quite a long time ago, when I posted this, but since then I have been experimenting with using oprofile, which is a Linux system side profiler. It shows CPU utilization, and can show the JIT code in the JVM as long as you have the debug info packages installed (at least for OpenJDK).

Arex said...

I'm curious about how I would configure large pages on Windows.

These are my current configurations.

-XX:+UseFastAccessorMethods -Xmn=1444m -Xss2048k -XX:-BytecodeVerificationLocal -XX:-BytecodeVerificationRemote -XX:+UseCodeCacheFlushing -XX:MaxPermSize=128m -XX:+UseLargePages -XX:+UseParallelOldGC -XX:ParallelGCThreads=2 -XX:+UseParallelGC -XX:+UseCompressedOops -XX:+DoEscapeAnalysis -XX:+OptimizeStringConcat -XX:+UseStringCache -server -Xmx6144m -Xms6144m

Andrig T Miller said...

I'm not a Windows user, but I found the following for configuring the permissions for using large page memory on Windows 2003:

// Windows large page support is available on Windows 2003. In order to use
// large page memory, the administrator must first assign additional privilege
// to the user:
// + select Control Panel -> Administrative Tools -> Local Security Policy
// + select Local Policies -> User Rights Assignment
// + double click "Lock pages in memory", add users and/or groups
// + reboot
// Note the above steps are needed for administrator as well, as administrators
// by default do not have the privilege to lock pages in memory.
//
// Note about Windows 2003: although the API supports committing large page
// memory on a page-by-page basis and VirtualAlloc() returns success under this
// scenario, I found through experiment it only uses large page if the entire
// memory region is reserved and committed in a single VirtualAlloc() call.
// This makes Windows large page support more or less like Solaris ISM, in
// that the entire heap must be committed upfront. This probably will change
// in the future, if so the code below needs to be revisited.

This comes from the source file in OpenJDK for the Hotspot VM.

ALex said...

I've done all that already for windows :[
There wasn't much of an increase in performance :P

Going to try playing around with this value
-XX:LargePageSizeInBytes

Inas Labib said...

Hi,
Thanks for your great post.
I have a question: after I did what you said to tune my JVM for high load, and changed the number of huge pages, that amount of memory stays reserved even when my application is not running. Is that normal? If so, how can I monitor my RAM usage to see how much RAM my application is actually using?

Andrig T Miller said...

When allocating large pages, yes they will be reserved regardless of whether your application is running or not. That is normal. It's something I talk about when I do performance talks. Only applications that attached to the shared memory segment with the appropriate flags (like the JVM when using -XX:+UseLargePages) will be able to access that memory.

Daniel Mace said...

I realize this blog is a bit old, but it was one of the first I came across for setting up what I needed on an Ubuntu 10.10 machine. Darin's post made me realize that I had to append kernel.shmall to my /etc/sysctl.conf, as the current directions only allowed me to create a JVM of ~1.5GB max; anything larger would result in "Failed to reserve shared memory (errno = 22)." On a 24GB machine with 16GB set aside for huge pages (8192 pages of 2048 kB each), I set my shmall to shmmax/PAGE_SIZE (where the page size on my machine is 4096). I haven't had any problems requesting large JVMs since making this change.

--------------------------------

kernel.shmmax = 25769803776
kernel.shmall = 6291456
vm.nr_hugepages = 8192
vm.hugetlb_shm_group = 516
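The shmall arithmetic above can be sketched as a quick shell check (page size assumed to be 4096 here; verify on your machine with getconf PAGE_SIZE):

```shell
# kernel.shmall is counted in system pages (PAGE_SIZE), not bytes,
# so derive it from shmmax instead of guessing
shmmax=25769803776   # kernel.shmmax in bytes (24 GB here)
page_size=4096       # assumed; confirm with: getconf PAGE_SIZE
shmall=$(( shmmax / page_size ))
echo "kernel.shmall = $shmall"
```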

Mohammad said...

Great post. I have an Ubuntu 10.04 server with 8GB of memory on VMware ESX. With the large page configuration, I got a ~3x performance improvement.

Thanks for sharing.

jagan said...

Hi andrig,

Your document is indeed a good one. We are facing this error: Java HotSpot(TM) Server VM warning: Failed to attach shared memory (errno = 12). We are using 24 JVMs, each with a 2048M Xmx, and our Java is a 32-bit JVM.

We have set memlock to unlimited, shmmax to 24 * (heap size + perm size), and nr_hugepages to shmmax (in MB) / 2 MB.

Can you help us find the root cause?

Thanks in advance

-Jagan

Andrig T Miller said...

Jagan, I recently ran into this problem as well. That errno (12, ENOMEM) means the kernel could not allocate the shared memory. What I discovered is that you probably have to make the shared memory max much higher. You may also have to increase the number of shared memory segments. Don't be afraid to set the shared memory max to about three times the physical memory size of the server.
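Purely as an illustration (all numbers here are hypothetical, for a machine with 16 GB of RAM), an /etc/sysctl.conf fragment along those lines might look like:

```shell
# Hypothetical /etc/sysctl.conf fragment, following the advice above:
# shmmax at ~3x physical memory, plus headroom on segment count
kernel.shmmax = 51539607552      # 3 x 16 GB, in bytes
kernel.shmmni = 4096             # max number of shared memory segments
vm.nr_hugepages = 6144           # 12 GB worth of 2 MB huge pages
vm.hugetlb_shm_group = 516      # gid allowed huge page shm (site-specific)
```

Apply the changes with sysctl -p, then retry the JVMs.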

Anonymous said...

Hi Andrig,

Is there any limitation on the heap size for large pages in a 32-bit environment?
I am able to configure large pages successfully for heap sizes under 2GB, but when I tried a 3GB configuration it threw the following error:
Java HotSpot(TM) Server VM warning: Failed to attach shared memory (errno = 12).
Any clue on this?

our configuration values:
shmmax = 214748364800
nr_hugepages = 36400
memlock unlimited in limits.conf

Thanks in advance

Andrig T Miller said...

It shouldn't work with the 32-bit JVM at all. In my own testing I get an error (I forget exactly what the message was). Are you sure you are using the 32-bit JVM?

Lionel said...

Hi Adrig,
I configured my current JBoss to use large pages; the server has 8GB of memory and I configured 3GB for large pages. After restarting the server and starting JBoss, I didn't see any change from the huge page configuration. It doesn't seem to be working, even though I followed the instructions.
/proc/meminfo always shows:
HugePages_Total: 1500
HugePages_Free: 1500
Hugepagesize: 2048 kB

It seems they are not being used at all.
Here is my config detail:
JBoss JVM (run.conf):
JAVA_OPTS="-Xms2560m -Xmx2560m -XX:PermSize=1024m -XX:MaxPermSize=1024m -XX:+UseLargePages -XX:LargePageSizeInBytes=2m -XX:+UseParallelGC -XX:ParallelGCThreads=4 -Dsun.rmi.dgc.client.gcInterval=1800000 -Dsun.rmi.dgc.server.gcInterval=1800000"
#huge page config at /etc/sysctl.conf
kernel.shmmax=3145728000
vm.nr_hugepages=1500
vm.hugetlb_shm_group=501

#memlock config at /etc/security/limits.conf
jserver soft memlock 3072000
jserver hard memlock 3072000

$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
pending signals (-i) 1024
max locked memory (kbytes, -l) 3072000
max memory size (kbytes, -m) unlimited
open files (-n) 65535
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 65535
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited

Any advice? Did I miss anything?
Also, I'm running RHEL4 64-bit.

Andrig T Miller said...

The number of huge pages needs to account for your heap and perm gen too. Based on your JVM settings, it doesn't look like you have enough large pages configured: you are using 3.5 GB of heap + perm gen, but only 1500 x 2 MB = 3 GB of huge pages.
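A quick back-of-the-envelope check against the settings quoted above (2 MB huge pages assumed, small JVM overhead ignored):

```shell
# Heap + perm gen must fit inside the huge page pool
heap_mb=2560        # -Xmx from the JAVA_OPTS above
permgen_mb=1024     # -XX:MaxPermSize from the JAVA_OPTS above
hugepage_mb=2       # Hugepagesize from /proc/meminfo
needed=$(( (heap_mb + permgen_mb) / hugepage_mb ))
echo "huge pages needed: $needed (configured: 1500)"
```

Since 1792 needed pages exceed the 1500 configured, the JVM silently falls back to regular pages, which is why HugePages_Free never drops.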

Lionel said...

Hi Adrig,

Thanks for the advice. I am now able to use large pages after adjusting the memory used for the heap and perm gen. You can also verify whether the JVM's settings actually run with UseLargePages using this command:
#java -Xms2560m -Xmx2560m -XX:PermSize=1024m -XX:MaxPermSize=1024m -XX:+UseLargePages -version
If an error message shows:
ERROR 1:
Java HotSpot(TM) 64-Bit Server VM warning: Failed to reserve shared memory (errno = 12).
This means the system supports large pages but is not configured for their use.
ERROR 2:
Java HotSpot(TM) 64-Bit Server VM warning: Failed to reserve shared memory (errno = 38).
This means the system doesn't support large pages.