Use disk pools for improved performance

If you are using user disk pools to improve system performance, consider dedicating a disk pool to a single object that is very active. In this case, you can configure the disk pool with only one disk unit.
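
Objects reside in a disk pool by way of the library that contains them. The following commands are a minimal sketch, assuming that user disk pool 2 has already been configured with a single disk unit and using the hypothetical names HOTLIB, APPLIB, and HOTFILE:

    /* Create a library in user disk pool 2, which is assumed to      */
    /* contain only one disk unit. HOTLIB is a hypothetical name.     */
    CRTLIB LIB(HOTLIB) TYPE(*PROD) ASP(2) +
           TEXT('Library for one very active object')

    /* Move the very active object into that library so that it       */
    /* resides in the dedicated disk pool.                            */
    MOVOBJ OBJ(APPLIB/HOTFILE) OBJTYPE(*FILE) TOLIB(HOTLIB)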

However, placing a single device parity-protected unit in a user disk pool typically does not improve performance, because the performance of that unit is affected by the other disk units in its device parity set.

Allocating one user disk pool exclusively to the journal receivers that are attached to the same journal can improve journaling performance. When the journal and the journaled objects are in a separate disk pool from the attached journal receivers, there is no contention for journal receiver write operations, and the units that are associated with the receiver disk pool do not have to be repositioned before each read or write operation.
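
One possible way to set this up, assuming that user disk pool 3 is reserved for journal receivers and using the hypothetical names RCVLIB, APPLIB, APPJRN, and APPRCV0001, is to create the receiver library and receiver in the dedicated pool while the journal stays with the journaled objects:

    /* Library for the journal receivers in user disk pool 3, which   */
    /* is assumed to be reserved for receivers only.                   */
    CRTLIB LIB(RCVLIB) TYPE(*PROD) ASP(3)

    /* The receiver is allocated from the library's disk pool.         */
    CRTJRNRCV JRNRCV(RCVLIB/APPRCV0001) +
              TEXT('Receiver in dedicated user disk pool')

    /* The journal remains in the application library, in a different  */
    /* disk pool, and is attached to the receiver created above.       */
    CRTJRN JRN(APPLIB/APPJRN) JRNRCV(RCVLIB/APPRCV0001)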

The system spreads journal receivers across multiple disk units to improve performance. The journal receiver can be placed on up to ten disk units in a disk pool. If you specify the RCVSIZOPT(*MAXOPT1) or RCVSIZOPT(*MAXOPT2) journal option, the system can place the journal receiver on up to 100 disk units in a disk pool. If you add more disk units to the disk pool while the system is active, the system determines whether to use the new disk units for journal receivers the next time the change journal function is performed.
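
The RCVSIZOPT value is specified on the journal rather than on the receiver. As a sketch, using the hypothetical journal APPLIB/APPJRN, the option can be changed while a new receiver is attached, and the change journal function can be repeated after disk units are added:

    /* Attach a new system-generated receiver and allow receivers to  */
    /* be spread across more disk units.                               */
    CHGJRN JRN(APPLIB/APPJRN) JRNRCV(*GEN) RCVSIZOPT(*MAXOPT2)

    /* After adding disk units to the disk pool, perform the change    */
    /* journal function again so the system can decide whether to use  */
    /* the new units for the next receiver.                            */
    CHGJRN JRN(APPLIB/APPJRN) JRNRCV(*GEN)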

Another way to improve performance is to make sure that the user disk pool contains enough storage units to support the number of physical input and output operations that are performed against the objects in the pool. You might have to experiment by moving objects to a different user disk pool and then monitoring performance to see whether the storage units are used excessively. For more information about using the Work with Disk Status (WRKDSKSTS) command to determine whether the storage units have excessive use, see Work Management. If the units have excessive use, consider adding more disk units to the user disk pool.
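
For example, disk unit activity can be displayed interactively or printed for later review; the % Busy value on the Work with Disk Status display indicates how heavily each unit is used:

    /* Display current disk activity, including the % Busy column.     */
    WRKDSKSTS

    /* Print the same information to a spooled file for later review.  */
    WRKDSKSTS OUTPUT(*PRINT)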