A memory pool is a logical division of main memory or storage that is reserved for processing a job or group of jobs. On the iSeries™ server, all main storage can be divided into logical allocations called memory pools. By default, the system manages the transfer of data and programs into memory pools.
The memory pool from which user jobs get their memory is always the same pool that limits their activity level. (The activity level of a memory pool is the number of threads that can be active at the same time in that pool.) Exceptions are system jobs (such as Scpf, Qsysarb, and Qlus), which get their memory from the Base pool but use the machine pool activity level. Similarly, subsystem monitors get their memory from the first pool in the subsystem description but use the machine pool activity level. This ensures that a subsystem monitor can always run, regardless of the activity level setting.
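The following Python sketch is purely illustrative: the MemoryPool and Job classes, the pool sizes, and the job names are assumptions used to model the relationship described above (a job normally takes both its memory and its activity level from one pool, while system jobs and subsystem monitors take memory from one pool but count against the machine pool's activity level); it is not an actual iSeries interface.

    class MemoryPool:
        def __init__(self, name, size_mb, activity_level):
            self.name = name
            self.size_mb = size_mb                 # storage reserved for this pool
            self.activity_level = activity_level   # threads that can be active at once
            self.active_threads = 0

        def try_activate(self):
            """Return True if another thread may become active in this pool."""
            if self.active_threads < self.activity_level:
                self.active_threads += 1
                return True
            return False

    class Job:
        def __init__(self, name, memory_pool, activity_pool=None):
            # Most jobs take memory and activity level from the same pool.
            # System jobs and subsystem monitors take memory from one pool
            # but use the machine pool's activity level.
            self.name = name
            self.memory_pool = memory_pool
            self.activity_pool = activity_pool or memory_pool

    machine = MemoryPool("*MACHINE", size_mb=512, activity_level=20)
    base = MemoryPool("*BASE", size_mb=1024, activity_level=5)

    user_job = Job("QPADEV0001", memory_pool=base)                        # memory and activity level from *BASE
    system_job = Job("QSYSARB", memory_pool=base, activity_pool=machine)  # memory from *BASE, activity level from *MACHINE

    print(user_job.activity_pool.name)    # *BASE
    print(system_job.activity_pool.name)  # *MACHINE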
You can control how much work can be done in a subsystem by controlling the number and size of its pools. The larger and more numerous the pools in a subsystem, the more work that subsystem can do.
Shared memory pools allow the system to distribute interactive users' jobs across multiple subsystems while those jobs still run in the same memory pool.
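As a minimal, self-contained sketch of the shared-pool idea (the SharedPool class, the pool values, and the QINTER2 subsystem name are hypothetical, introduced only for illustration), two subsystem descriptions can reference the same pool object, so interactive jobs routed to either subsystem run in, and compete within, one memory pool:

    class SharedPool:
        def __init__(self, name, size_mb, activity_level):
            self.name = name
            self.size_mb = size_mb
            self.activity_level = activity_level

    # One shared pool for interactive work.
    interact = SharedPool("*INTERACT", size_mb=2048, activity_level=40)

    # Two subsystem descriptions, each pointing at the same shared pool.
    subsystems = {
        "QINTER":  [interact],
        "QINTER2": [interact],   # hypothetical second interactive subsystem
    }

    # A job started in either subsystem runs against the same memory pool.
    assert subsystems["QINTER"][0] is subsystems["QINTER2"][0]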
If data is already in main storage, it can be referenced regardless of which memory pool it is in. However, if the needed data is not in any memory pool, it is brought into the memory pool of the job that referenced it (this is known as a page fault). As data is transferred into a memory pool, other data is displaced and, if it has changed, is automatically written to auxiliary storage (this is called paging). A memory pool should be large enough to keep paging at a reasonable level, because a high paging rate degrades performance.
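The following sketch models a page fault and paging conceptually. The PagedPool class, the AUX_STORAGE dictionary, and the least-recently-used displacement policy are assumptions made for illustration only; they do not reflect the actual storage management implementation.

    from collections import OrderedDict

    AUX_STORAGE = {"A": 1, "B": 2, "C": 3}   # stands in for auxiliary (disk) storage

    class PagedPool:
        def __init__(self, name, capacity_pages):
            self.name = name
            self.capacity = capacity_pages
            self.pages = OrderedDict()        # page_id -> {"data": ..., "changed": bool}

        def reference(self, page_id):
            if page_id in self.pages:
                self.pages.move_to_end(page_id)          # already in main storage
                return self.pages[page_id]["data"]
            # Page fault: the data is not in the pool, so bring it in.
            if len(self.pages) >= self.capacity:
                victim_id, victim = self.pages.popitem(last=False)   # displace the oldest page
                if victim["changed"]:
                    AUX_STORAGE[victim_id] = victim["data"]          # paging: write changed data back
            data = AUX_STORAGE.get(page_id)
            self.pages[page_id] = {"data": data, "changed": False}
            return data

    pool = PagedPool("*BASE", capacity_pages=2)
    pool.reference("A")
    pool.reference("B")
    pool.reference("C")   # page fault: "A" is displaced to make room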