At work, there’s been some debate on how best to structure our J2EE servers to maximise utilisation of a scarce resource (per-CPU licenses!). The crux of the debate, for some reason, was how we could run more JVM instances per server.
To do that, we needed to understand how Windows (we use Windows 2K Server as a host platform) allocates memory to processes, and how it behaves when the amount of physical memory gets to a decent size (8-16GB).
I found this summary by Raymond Chen to be extremely informative, and what it boils down to is this:
- You can configure Windows to use more than 4GB of physical memory (via PAE).
- Windows processes, despite being given a 4GB virtual address space, can typically only access 2GB of it (the other 2GB is reserved for the kernel). This explained the JVM heap size limit of about 1.5GB we’d been hitting.
- You can make Windows allow a process to access 3GB of address space with the /3GB switch, but that space isn’t contiguous (there’s a block reserved around the 2GB mark that can’t be shifted). This explained why the switch wasn’t supported by Sun (or BEA): the JVM heap needs to be a contiguous chunk of virtual address space, and with /3GB it can’t be. (A rough way to probe the practical heap ceiling is sketched after this list.)
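To get a feel for that practical ceiling, here’s a minimal sketch (mine, not from Raymond’s article) that just keeps allocating heap until the JVM gives up. The class name, chunk size and the suggested -Xmx value are illustrative; the exact number you see will depend on the JVM version, the -Xmx setting and how fragmented the process address space is, but on a 32-bit Windows JVM it tends to top out well short of 2GB, around the 1.5GB mark we kept hitting.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Rough probe of how much heap a 32-bit JVM will actually hand out.
 * Try running it with a large maximum heap, e.g.:
 *
 *     java -Xmx1800m HeapProbe
 *
 * Either the JVM refuses to start (it can't reserve that much
 * contiguous address space) or the allocations stop short of 2GB.
 */
public class HeapProbe {
    public static void main(String[] args) {
        final int chunkSize = 16 * 1024 * 1024; // allocate in 16MB lumps
        List<byte[]> chunks = new ArrayList<byte[]>();
        long total = 0;
        try {
            while (true) {
                chunks.add(new byte[chunkSize]);
                total += chunkSize;
            }
        } catch (OutOfMemoryError oom) {
            chunks.clear(); // let go of the probe data so we can still print
            System.out.println("Allocated roughly " + (total >> 20)
                    + "MB before running out of heap");
        }
    }
}
```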
So it boiled down to this: we can run as many JVMs as we like, as long as they all fit in physical memory (we don’t want to be paging in and out all the time), and as long as each one stays under 2GB in size.
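As a quick sanity check of that conclusion, here’s the capacity arithmetic as a trivial sketch. Every figure in it (8GB of RAM, 1GB of OS headroom, 1.5GB heaps, 0.25GB of per-JVM overhead) is an assumption for illustration, not a measurement from our boxes.

```java
/**
 * Back-of-the-envelope estimate of how many JVMs fit in physical
 * memory without paging. All figures below are illustrative
 * assumptions, not measured values.
 */
public class JvmCapacity {
    public static void main(String[] args) {
        double physicalGb = 8.0;        // RAM in the box (assumed)
        double osHeadroomGb = 1.0;      // left for the OS and services (assumed)
        double heapGb = 1.5;            // -Xmx per JVM, under the ~2GB per-process cap
        double perJvmOverheadGb = 0.25; // permgen, thread stacks, native code (rough guess)

        int jvms = (int) Math.floor((physicalGb - osHeadroomGb)
                / (heapGb + perJvmOverheadGb));
        System.out.println("Roughly " + jvms + " JVMs fit without paging");
    }
}
```

On those assumptions that works out to four JVMs on an 8GB box; the same arithmetic on 16GB gives roughly twice that.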