Configure geographic mirroring with switchable independent disk pools

To configure geographic mirroring, you must first configure your cross-site mirroring (XSM) environment and create the independent disk pool that you want to mirror. Before using iSeries™ Navigator, you should also define up to four one-to-one bidirectional data port TCP/IP routes as part of the connection between all the nodes in the cluster resource group. Geographic mirroring allows you to maintain an exact copy of the independent disk pool on a system at a different location for protection and availability. Configuring your independent disk pool to be switchable between nodes at the same site in the cluster allows for greater availability options. See Example: Independent disk pools with geographic mirroring.

The following example shows geographic mirroring between two sites, with both sites using switchable independent disk pools. These configuration steps correlate to the graphic. You might also configure one site to contain switchable independent disk pools while the other site uses a dedicated independent disk pool. If this is the case, adapt the instructions to fit your specific environment.

Geographic mirroring for an independent disk pool between New York and San Francisco

To configure geographic mirroring with switchable independent disk pools using the iSeries Navigator, follow these steps:

  1. Plan and configure your data port TCP/IP routes. See Communications requirements and Customize TCP/IP with iSeries Navigator.
  2. Create a cluster containing nodes A and B.
  3. Make your hardware switchable. If you have stand-alone expansion units or IOPs that contain disk units that are to be included in an independent disk pool, you must authorize the expansion unit or IOP to grant access to other nodes at the same site.
  4. Create a switchable hardware group. A switchable hardware group, also known as a device CRG, defines the switchable independent disk pool. This is what manages the switching of the device. This wizard takes you through the steps to create a new switchable hardware group. It will also guide you through the New Disk Pool wizard which assists you in creating a new disk pool and adding disk units to it for the cluster.
    Note: If switchable software products that conform to specific iSeries Navigator cluster guidelines were installed when you ran the New Cluster wizard in step 2, the New Cluster wizard might have already prompted you to create a switchable hardware group. If the New Cluster wizard did not detect that a switchable software product was installed, then you have not yet created the switchable hardware group.
  5. Add nodes C and D to the cluster and to the same device domain as nodes A and B. This enables the independent disk pool to switch (swap roles) between nodes at both sites:
    1. In iSeries Navigator, expand Management Central.
    2. Expand Clusters.
    3. Expand the cluster for which you need to add a node.
    4. Right-click Nodes, and select Add Node.
      Note: Clusters configured through iSeries Navigator can be made up of a maximum of four nodes. If four nodes already exist in the cluster, the Add Node option is disabled. If your clustering needs to extend beyond four nodes, you can use cluster resource services application programming interfaces (APIs) and CL commands to support up to 128 nodes. However, only four nodes are supported through the iSeries Navigator interface.
  6. Add nodes C and D to the device domain:
    1. In iSeries Navigator, expand Management Central.
    2. Expand Clusters.
    3. Expand the cluster containing the node you want to add to the device domain.
    4. Click Nodes.
    5. In the right pane, right-click the required node (node C) and select Properties.
    6. On the Clustering page, in the Device Domain field, enter the name of the device domain that node A and node B exist in and click OK.
    Repeat this process to add node D to the same device domain as nodes A, B, and C.
  7. Add nodes C and D to the switchable hardware group:
    1. Right-click the newly created switchable hardware group and select Properties.
    2. Select the Recovery Domain tab.
    3. Click Add.
    4. Select the node and click OK. Repeat for each node.
  8. Define your geographic mirroring sites in the recovery domain:
    1. Right-click your switchable hardware group and select Properties.
    2. Select the Recovery Domain tab.
    3. Select the primary node and click Edit.
    4. In the site name field, specify the primary site for the production copy.
    5. Click Add to specify the data port IP addresses of the primary node.
    6. On the Edit Node dialog box, specify the data port IP addresses for the primary node that you set up in step 1, Plan and configure your TCP/IP routes, and click OK. You can configure up to four data port IP addresses. You should consider configuring multiple communication lines to allow for redundancy and the highest throughput. The same number of ports used here should be used on all nodes.
    7. On the General tab, click OK.
    8. Repeat the previous steps to specify the site name and IP address for all other nodes in the switchable hardware group.
  9. After you have completed the XSM prerequisites, follow these steps to configure geographic mirroring:
    1. In iSeries Navigator, expand My Connections (or your active environment).
    2. Expand your iSeries server > Configuration and Service > Hardware > Disk Units > Disk Pools.
    3. If the Geographic Mirroring columns are not displayed, click the Disk Pool you want to mirror, and select View > Customize this view > Columns, then select the columns with the suffix "- Geographic Mirroring" from the Columns available to display list.
    4. Right-click the disk pool you want to mirror, and select Geographic Mirroring > Configure Geographic Mirroring.
    5. Follow the wizard's instructions to configure geographic mirroring.
      Note: The disk pools you select to geographically mirror must be in the same switchable hardware group. If you want to geographically mirror disk pools in more than one switchable hardware group, you will need to complete the wizard one time for each switchable hardware group.
  10. Print your disk configuration. Print your disk configuration and keep it in case a recovery situation occurs. Also, record the relationship between the independent disk pool name and number.
You have now configured geographic mirroring. The remaining steps are required to prepare the independent disk pool for use in this environment.
  1. Start the switchable hardware group. Starting the switchable hardware group enables device resiliency for the group.
  2. Make the disk pool available. To access the disk units in an independent disk pool, you must make the disk pool available (vary it on).
  3. Wait for resync to complete.
  4. Perform a test switchover. Before you add data to the disk pool, perform a test switchover on the switchable hardware group you created to ensure that each node in the recovery domain can become the primary node.
Note: If you remove a node from a device domain after you configure geographic mirroring, the removed node takes any production copies or mirror copies that it owns. These are changed to non-geographic mirrored disk pools.

Using CL commands and APIs

You can use CL commands and APIs to create a switchable independent disk pool; however, some tasks require that you use iSeries Navigator. To configure geographic mirroring with switchable independent disk pools using CL commands and APIs, follow these steps:
  1. Plan and configure your TCP/IP routes on all nodes, as follows:
    • Node A should have routes to C and D.
    • Node B should have routes to C and D.
    • Node C should have routes to A and B.
    • Node D should have routes to A and B.
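As a sketch, the route planning above might translate into ADDTCPRTE (Add TCP/IP Route) commands run on each node; all addresses and masks below are placeholders for your environment:

```CL
/* Hypothetical example, run on node A: add a route to the subnet that    */
/* holds node C's data port interface. Repeat on every node for each      */
/* remote node's data port network, and define the matching reverse       */
/* routes on nodes C and D so the connections work in both directions.    */
ADDTCPRTE RTEDEST('10.1.2.0') SUBNETMASK('255.255.255.0') +
          NEXTHOP('10.1.1.1')
```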
  2. Create the cluster. Create the cluster with required nodes using the CRTCLU (Create Cluster) command.
  3. Start the nodes that comprise the cluster. Start the nodes in the cluster using the STRCLUNOD (Start Cluster Node) command.
  4. Create the device domain. You must create the device domain for all nodes involved in switching an independent disk pool using the ADDDEVDMNE (Add Device Domain Entry) command. All nodes must be in the same device domain.
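Steps 2 through 4 might look like the following sketch; the cluster, node, and device domain names and the interface addresses are examples only:

```CL
/* Create a four-node cluster (example names and interface addresses) */
CRTCLU CLUSTER(MYCLU) NODE((NODEA ('10.1.1.10')) (NODEB ('10.1.1.11')) +
       (NODEC ('10.1.2.10')) (NODED ('10.1.2.11')))

/* Start each cluster node; repeat for NODEB, NODEC, and NODED */
STRCLUNOD CLUSTER(MYCLU) NODE(NODEA)

/* Add each node to the same device domain; repeat for the other nodes */
ADDDEVDMNE CLUSTER(MYCLU) DEVDMN(MYDMN) NODE(NODEA)
```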
  5. Create the device descriptions. Device descriptions must be created on all nodes that will be in the cluster resource group (CRG). Use the CRTDEVASP (Create Device Description (ASP)) command. On the command line in the character-based interface, enter CRTDEVASP. In the Resource Name and the Device Description fields, enter the name of the independent disk pool you plan to create.
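For example, if the independent disk pool will be named MYIASP (an example name), the device description command to run on every node that will be in the CRG is:

```CL
/* The device description name and resource name both match the planned */
/* independent disk pool name.                                          */
CRTDEVASP DEVD(MYIASP) RSRCNAME(MYIASP)
```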
  6. Create the cluster resource group. Create the device CRG with the nodes, their roles in the recovery domain, and the independent disk pool device descriptions, using the CRTCRG (Create Cluster Resource Group) command. You must also specify a site name and up to four data port IP addresses for each node in the recovery domain.
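A heavily hedged sketch of the device CRG creation follows. The exact RCYDMN element layout (node, role, backup sequence, site name, data port IP addresses) and the CFGOBJ element format vary by release, so verify them against the CRTCRG command help before use; all names and addresses here are examples:

```CL
/* Sketch only: confirm RCYDMN and CFGOBJ element formats on your release */
CRTCRG CLUSTER(MYCLU) CRG(MYCRG) CRGTYPE(*DEV) EXITPGM(*NONE) +
       USRPRF(*NONE) +
       RCYDMN((NODEA *PRIMARY *LAST NEWYORK ('10.1.1.10')) +
              (NODEB *BACKUP 1 NEWYORK ('10.1.1.11')) +
              (NODEC *BACKUP 2 SANFRAN ('10.1.2.10')) +
              (NODED *BACKUP 3 SANFRAN ('10.1.2.11'))) +
       CFGOBJ((MYIASP))
```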
  7. Make your hardware switchable. If you have stand-alone expansion units or IOPs that contain disk units that are to be included in an independent disk pool, you must authorize the expansion unit or IOP to grant access to other nodes at the same site (iSeries Navigator required).
  8. Create a disk pool. After the server has fully restarted and clustering is active, create the disk pool on the node that owns the disk units using the New Disk Pool wizard. Name the independent disk pool to match the device description resource name that you specified in step 5. As you add disk units, it is best to localize disk units in the same expansion unit or IOP. Also, do not spread the disk pool across device parity sets (iSeries Navigator required).
  9. Follow these steps to configure geographic mirroring:
    1. In iSeries Navigator, expand My Connections (or your active environment).
    2. Expand the iSeries server that is the primary node.
    3. Expand Configuration and Service.
    4. Expand Hardware.
    5. Expand Disk Units.
    6. Expand Disk Pools.
    7. Right-click the Disk Pool you want to mirror and select Geographic Mirroring > Configure Geographic Mirroring.
    8. Follow the wizard's instructions to configure geographic mirroring.
      Note: The disk pools you select to geographically mirror must be in the same switchable hardware group. If you want to geographically mirror disk pools in more than one switchable hardware group, you will need to complete the wizard one time for each switchable hardware group.
      Note: The mirror copy and the production copy must be at different sites. For example, with two sites, AB and CD, if the production copy is on node A at site AB, the mirror copy must be on node C or D at site CD.
  10. Print your disk configuration. Print your disk configuration to have in case of a recovery situation. See How to display your disk configuration in Backup and Recovery. Also, record the relationship between the independent disk pool name and number.
You have now configured geographic mirroring. The remaining steps are required to prepare the independent disk pool for use in this environment.  
  1. Start the cluster resource group. Start the cluster resource group to enable device resiliency using the STRCRG (Start Cluster Resource Group) command.
  2. Make the disk pool available. To access the disk units in an independent disk pool, you must vary on the disk pool using the VRYCFG (Vary Configuration) command. Varying on also reestablishes connections, so that any new route definitions can take effect.
  3. Wait for resync to complete.
  4. Perform a test switchover. Before you add data to the disk pool, perform test switchovers on the switchable hardware group you created to ensure that each node in the recovery domain can become the primary node. Use the CHGCRGPRI (Change CRG Primary) command.
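The command-driven finishing steps above might be sketched as follows, using example cluster, CRG, and disk pool names; adjust them to your environment:

```CL
/* 1. Enable device resiliency for the device CRG */
STRCRG CLUSTER(MYCLU) CRG(MYCRG)

/* 2. Vary on the independent disk pool on the primary node */
VRYCFG CFGOBJ(MYIASP) CFGTYPE(*DEV) STATUS(*ON)

/* 3. After resynchronization completes, test a switchover */
CHGCRGPRI CLUSTER(MYCLU) CRG(MYCRG)
```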