There are two basic environments in which you can take advantage of independent disk pools: a multisystem environment managed by an iSeries™ cluster, and a single-system environment with a single iSeries server.
Another option that can be leveraged in a multisystem environment is geographic mirroring. Geographic mirroring allows you to maintain two identical copies of an independent disk pool at two sites that are geographically separated. The independent disk pools at the separate sites can be switchable or dedicated.
Independent disk pools allow you to isolate certain maintenance functions. When you need to perform disk management functions that normally require the entire system to be at dedicated service tools (DST), you can instead perform them by simply varying off the affected independent disk pool.
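For example, rather than bringing the whole server to DST, the independent disk pool can be taken offline and brought back online from the command line. The following is a minimal sketch using the VRYCFG command; the device description name IASP1 is a placeholder for your own independent disk pool device.

```
/* Vary off the independent disk pool device so that disk       */
/* management functions can run against it in isolation.        */
VRYCFG CFGOBJ(IASP1) CFGTYPE(*DEV) STATUS(*OFF)

/* ... perform the disk management functions here ...           */

/* Vary the independent disk pool back on when finished.        */
VRYCFG CFGOBJ(IASP1) CFGTYPE(*DEV) STATUS(*ON)
```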
The following table compares dedicated independent disk pools and independent disk pools in a multisystem environment.
| Consideration | Dedicated: single system | Multisystem environment: multisystem cluster | Multisystem environment: logical partitions in a cluster |
|---|---|---|---|
| iSeries cluster required | No | Yes | Yes |
| Connectivity between systems | Not applicable | HSL loop | Virtual OptiConnect |
| Location of disk units | Any supported internal or external disk units | External expansion unit (tower) | IOP on shared bus |
| Switchability | No | Yes, between systems | Yes, between partitions |
| Switchable entity | None | Expansion unit | IOP |
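As a sketch of what the switchability shown in the table looks like in practice, an administrative switchover of a switchable independent disk pool can be requested with the CHGCRGPRI (Change Cluster Resource Group Primary) command against the device cluster resource group that contains the disk pool. The cluster name MYCLUSTER and the cluster resource group name IASPCRG are placeholders for your own configuration.

```
/* Administrative switchover: make the first backup node the    */
/* new primary owner of the switchable independent disk pool.   */
CHGCRGPRI CLUSTER(MYCLUSTER) CRG(IASPCRG)
```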
In a hardware switching environment, one node in the device domain owns the independent disk pool, and all the other nodes in the device domain show that the independent disk pool exists. In a geographic mirroring environment, one node at each site owns a copy of the independent disk pool. When an independent disk pool is created or deleted, the node that creates or deletes it informs all the other nodes in the device domain of the change. If clustering is not active between the nodes, or if a node is in the midst of a long-running disk pool configuration change, that node is not updated and becomes inconsistent with the rest of the nodes. Nodes must be consistent before a failover or switchover. Ending and restarting clustering ensures that the configuration is consistent.
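If a node has become inconsistent, ending and restarting clustering on that node resynchronizes its view of the independent disk pool configuration. A minimal sketch using the cluster node commands follows; the cluster name MYCLUSTER and node name NODE02 are placeholders.

```
/* End clustering on the inconsistent node...                   */
ENDCLUNOD CLUSTER(MYCLUSTER) NODE(NODE02)

/* ...then start it again so the node picks up the current      */
/* independent disk pool configuration from the device domain.  */
STRCLUNOD CLUSTER(MYCLUSTER) NODE(NODE02)
```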
For more on switchable and dedicated independent disk pools, including example configurations for each of these environments, see Examples: Independent disk pool configurations.