Memory pools

A memory pool is a logical division of main memory or storage that is reserved for processing a job or group of jobs. On the iSeries™ server, all main storage can be divided into logical allocations called memory pools. By default, the system manages the transfer of data and programs into memory pools.

The memory pool from which user jobs get their memory is always the same pool that limits their activity level. (The activity level of a memory pool is the number of threads that can be active in the pool at the same time.) Exceptions to this are system jobs (such as SCPF, QSYSARB, and QLUS), which get their memory from the base pool but use the machine pool activity level. Additionally, subsystem monitors get their memory from the first pool defined in the subsystem description but use the machine pool activity level. This ensures that a subsystem monitor can always run, regardless of the activity level setting.
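
As a minimal sketch (the pool name and activity level value are illustrative, not recommendations), you can view the current pool sizes, maximum active threads, and fault rates with WRKSYSSTS, and change the activity level of a shared pool with CHGSHRPOOL:

   /* Display pool sizes, maximum active (activity level), and fault rates */
   WRKSYSSTS
   /* Change the activity level of the shared interactive pool             */
   CHGSHRPOOL POOL(*INTERACT) ACTLVL(20)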

Why use memory pools

You can control how much work a subsystem can do by controlling the number and size of its pools. The larger the pools in a subsystem, the more work its jobs can do.
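
For example, the pools a subsystem uses are defined in its subsystem description. The sketch below is illustrative only: the library, subsystem name, pool size, and activity level are hypothetical values, not recommendations.

   /* Hypothetical subsystem MYLIB/MYSBS: pool 1 uses the base pool, and    */
   /* pool 2 is a private pool with an illustrative size and activity level */
   CHGSBSD SBSD(MYLIB/MYSBS) POOLS((1 *BASE) (2 100000 5))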

Using shared memory pools allows the system to distribute jobs for interactive users across multiple subsystems while still allowing their jobs to run in the same memory pool.
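
As a sketch (the library, subsystem, and shared pool names are hypothetical), two subsystem descriptions can reference the same shared pool, so jobs routed to either subsystem run in one memory pool:

   /* Both subsystems route their user jobs into the shared pool *SHRPOOL1 */
   CHGSBSD SBSD(MYLIB/SBSA) POOLS((1 *BASE) (2 *SHRPOOL1))
   CHGSBSD SBSD(MYLIB/SBSB) POOLS((1 *BASE) (2 *SHRPOOL1))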

Multiple pools in a subsystem help you control how jobs compete for system resources. With more than one pool, you can separate types of work so that the amount of work done and the response time for each type can be managed independently. For example, during the day you might want interactive jobs to get good response time, so you make the interactive pool larger. At night, when many batch jobs run, you make the batch pool larger instead.
Note: Although tuning and managing your system can improve the efficiency of the flow of work through your iSeries server, it cannot compensate for inadequate hardware resources. Consider a hardware upgrade if the demands of your workload exceed what your hardware can deliver.
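
One way to make the day and night adjustments described above is to change the shared pool sizes with CHGSHRPOOL, for example from jobs scheduled to run in the morning and evening. The sketch below assumes interactive work runs in *INTERACT and batch work in *SHRPOOL1; the sizes and activity levels are illustrative only.

   /* Morning: favor interactive work (values are illustrative)  */
   CHGSHRPOOL POOL(*INTERACT) SIZE(800000) ACTLVL(30)
   CHGSHRPOOL POOL(*SHRPOOL1) SIZE(200000)
   /* Evening: favor batch work running in *SHRPOOL1              */
   CHGSHRPOOL POOL(*INTERACT) SIZE(200000) ACTLVL(10)
   CHGSHRPOOL POOL(*SHRPOOL1) SIZE(800000)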

How data is handled in memory pools

If data is already in main storage, it can be referenced regardless of the memory pool it is in. However, if the needed data is not in any memory pool, it is brought into the memory pool of the job that referenced it (this is known as a page fault). As data is transferred into a memory pool, other data is displaced; if the displaced data has been changed, it is automatically written to auxiliary storage (this is called paging). A memory pool should be large enough to keep the paging rate at a reasonable level, because a high rate degrades performance.
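
To judge whether a pool is large enough, you can watch the database and non-database fault and page rates for each pool. WRKSYSSTS displays them interactively, and the Retrieve System Status (QWCRSSTS) API listed under Related information returns similar status information programmatically. A minimal sketch (the pool name and size are illustrative):

   /* Watch fault and page rates for each pool over an interval            */
   WRKSYSSTS
   /* If a shared pool shows sustained high fault rates, give it more storage */
   CHGSHRPOOL POOL(*INTERACT) SIZE(1000000)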

Related concepts
Manage memory pools
Related information
Retrieve System Status (QWCRSSTS) API
Manage iSeries performance
Basic performance tuning
Applications for performance management
Experience report: The Performance Adjuster