<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html
PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html lang="en-us" xml:lang="en-us">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<meta name="security" content="public" />
<meta name="Robots" content="index,follow" />
<meta http-equiv="PICS-Label" content='(PICS-1.1 "http://www.icra.org/ratingsv02.html" l gen true r (cz 1 lz 1 nz 1 oz 1 vz 1) "http://www.rsac.org/ratingsv01.html" l gen true r (n 0 s 0 v 0 l 0) "http://www.classify.org/safesurf/" l gen true r (SS~~000 1))' />
<meta name="DC.Type" content="topic" />
<meta name="DC.Title" content="Configure geographic mirroring with switchable independent disk pools" />
<meta name="DC.Relation" scheme="URI" content="rzalyconfigure.htm" />
<meta name="copyright" content="(C) Copyright IBM Corporation 2002, 2006" />
<meta name="DC.Rights.Owner" content="(C) Copyright IBM Corporation 2002, 2006" />
<meta name="DC.Format" content="XHTML" />
<meta name="DC.Identifier" content="rzalyconfiguregeographic" />
<meta name="DC.Language" content="en-us" />
<!-- All rights reserved. Licensed Materials Property of IBM -->
<!-- US Government Users Restricted Rights -->
<!-- Use, duplication or disclosure restricted by -->
<!-- GSA ADP Schedule Contract with IBM Corp. -->
<link rel="stylesheet" type="text/css" href="./ibmdita.css" />
<link rel="stylesheet" type="text/css" href="./ic.css" />
<title>Configure geographic mirroring with switchable independent disk pools</title>
</head>
<body id="rzalyconfiguregeographic"><a name="rzalyconfiguregeographic"><!-- --></a>
<!-- Java sync-link --><script language="Javascript" src="../rzahg/synch.js" type="text/javascript"></script>
<h1 class="topictitle1">Configure geographic mirroring with switchable independent disk pools</h1>
<div><p>To configure geographic mirroring you must
first configure your cross-site mirroring (XSM) environment and create the
independent disk pool that you want to mirror. Before using the iSeries™ Navigator,
you should also define up to four one-to-one data port TCP/IP routes in each direction
as part of the connection between all the nodes in the cluster resource group.
Geographic mirroring allows you to maintain an exact copy of the independent
disk pool on a system at a different location for protection and availability
purposes. Configuring your independent disk pool to be switchable between
nodes at the same site in the cluster allows for greater availability options.
See <a href="rzalyexamplegeomirror.htm">Example: Independent disk pools with geographic mirroring</a>.</p>
<p>The following example shows geographic mirroring between two sites, with both
sites using switchable independent disk pools. These configuration steps correlate
to the graphic. You might instead configure one site to contain switchable independent
disk pools while the other site uses a dedicated independent disk pool. If
this is the case, adapt the instructions to fit your specific environment.</p>
<img src="rzaly507.gif" alt="Geographic mirroring for an independent disk pool between New York and San Francisco" /><p>To configure geographic mirroring with switchable independent disk pools
using the iSeries Navigator,
follow these steps:</p>
<div class="p"><ol><li>Plan and configure your data port TCP/IP routes. See <a href="rzalycommunications.htm">Communications requirements</a> and <a href="../rzai2/rzai2custom.htm">Customize TCP/IP with iSeries Navigator</a>.</li>
<li><a href="rzalycreatecluster.htm">Create a cluster</a> containing
nodes A and B. </li>
<li><a href="rzalymakehardwareswitchable.htm">Make your hardware switchable</a>. If
you have stand-alone expansion units or IOPs that contain disk units that
are to be included in an independent disk pool, you must authorize the expansion
unit or IOP to grant access to other nodes at the same site. </li>
<li><a href="rzalycreatecrg.htm">Create a switchable hardware group</a>. A switchable hardware
group, also known as a device CRG, defines the switchable independent disk
pool. This is what manages the switching of the device. This wizard takes
you through the steps to create a new switchable hardware group. It will also
guide you through the New Disk Pool wizard which assists you in creating a
new disk pool and adding disk units to it for the cluster. <div class="note"><span class="notetitle">Note:</span> If you
had switchable software products installed that conform to specific iSeries Navigator
cluster guidelines when you ran the New Cluster wizard in step 2,
the New Cluster wizard might have already prompted you to create a switchable
hardware group. If the New Cluster wizard did not detect an installed switchable
software product, then you have not yet created the switchable
hardware group.</div>
</li>
<li>Add nodes C and D to the cluster and to the same device domain nodes A
and B are in. This will enable independent disk pool to switching (swap roles)
between nodes at both sites:<ol type="a"><li>In iSeries Navigator,
expand <strong>Management Central</strong>.</li>
<li>Expand <strong>Clusters</strong>.</li>
<li>Expand the cluster for which you need to add a node.</li>
<li>Right-click Nodes, and select <strong>Add Node</strong>.<div class="note"><span class="notetitle">Note:</span> Clusters configured
through iSeries Navigator
can be made up of a maximum of four nodes. If four nodes already exist in
the cluster, the <strong>Add Node</strong> option is disabled. If your clustering needs
to extend beyond four nodes, you can use cluster resource services application
programming interfaces (APIs) and CL commands to support up to 128 nodes. However,
only four nodes are supported through the iSeries Navigator interface.</div>
</li>
</ol>
</li>
<li>Add nodes C and D to the device domain:<ol type="a"><li>In iSeries Navigator,
expand <strong>Management Central</strong>.</li>
<li>Expand <strong>Clusters</strong>.</li>
<li>Expand the cluster containing the node you want to add to the device domain.</li>
<li>Click <strong>Nodes</strong>.</li>
<li>In the right pane, right-click the required node (node C) and select <strong>Properties</strong>. </li>
<li>On the <strong>Clustering</strong> page, in the <strong>Device Domain</strong> field, enter
the name of the device domain that node A and node B exist in and click <strong>OK</strong>. </li>
</ol>
Repeat this process to add node D to the same device domain as nodes
A, B, and C.</li>
<li>Add nodes C and D to the switchable hardware group:<ol type="a"><li>Right-click the newly created switchable hardware group and select <strong>Properties</strong>.</li>
<li>Select the <strong>Recovery Domain</strong> tab.</li>
<li>Click <strong>Add</strong>. </li>
<li> Select the node and click <strong>OK</strong>. Repeat for each node.</li>
</ol>
</li>
<li>Define your geographic mirroring sites in the recovery domain:<ol type="a"><li>Right-click your switchable hardware group and select <strong>Properties</strong>.</li>
<li>Select the <strong>Recovery Domain</strong> tab.</li>
<li>Select the primary node and click <strong>Edit</strong>.</li>
<li>In the site name field, specify the primary site for the production copy.</li>
<li>Click <strong>Add</strong> to specify the data port IP addresses of the primary
node.</li>
<li>On the Edit Node dialog box, specify the data port IP addresses for the
primary node that you set up in step 1, Plan and configure your TCP/IP routes,
and click <strong>OK</strong>. You can configure up to
four data port IP addresses. Consider configuring multiple communication
lines to allow for redundancy and the highest throughput. Use the same number
of data ports on all nodes. </li>
<li>On the General tab, click <strong>OK</strong>.</li>
<li>Repeat the previous steps to specify the site name and IP address for
all other nodes in the switchable hardware group.</li>
</ol>
</li>
<li>After you have completed the XSM prerequisites, follow these steps to
configure geographic mirroring:<ol type="a"><li>In iSeries Navigator,
expand <strong>My Connections</strong> (or your active environment).</li>
<li>Expand your iSeries <span class="menucascade"><span class="uicontrol">server</span> &gt; <span class="uicontrol">Configuration and Service</span> &gt; <span class="uicontrol">Hardware</span> &gt; <span class="uicontrol">Disk Units</span> &gt; <span class="uicontrol">Disk
Pools</span></span>.</li>
<li>If the Geographic Mirroring columns are not
displayed, click the disk pool you want to mirror, and select <span class="menucascade"><span class="uicontrol">View</span> &gt; <span class="uicontrol">Customize this view</span> &gt; <span class="uicontrol">Columns</span></span>, then select the columns with the suffix "- Geographic Mirroring"
from the <strong>Columns available to display</strong> list.</li>
<li>Right-click the disk pool you want to mirror, and select <span class="menucascade"><span class="uicontrol">Geographic Mirroring</span> &gt; <span class="uicontrol"> Configure Geographic
Mirroring</span></span>.</li>
<li>Follow the wizard's instructions to configure geographic mirroring.<div class="note"><span class="notetitle">Note:</span> The
disk pools you select to geographically mirror must be in the same switchable
hardware group. If you want to geographically mirror disk pools in more than
one switchable hardware group, you will need to complete the wizard one time
for each switchable hardware group.</div>
</li>
</ol>
</li>
<li><a href="rzalyprintgraphview.htm">Print your disk configuration</a>. Print your
disk configuration and keep it in case a recovery situation occurs. Also, record
the relationship between the independent disk pool name and number. </li>
</ol>
</div>
<div class="p">You have now configured geographic mirroring. The remaining steps are
required to prepare the independent disk pool for use in this environment. <ol><li><a href="rzalystartcrg.htm">Start the switchable hardware group</a>. Start the switchable
hardware group to enable device resiliency.</li>
<li><a href="rzalymakediskpoolavailable.htm">Make a disk pool available</a>. To
access the disk units in an independent disk pool, you must make the disk
pool available (vary it on).</li>
<li>Wait for resync to complete. </li>
<li><a href="../rzaig/rzaigmanageperformswitchover.htm">Perform
a test switchover</a>. Before you add data to the disk pool, perform a
test switchover on the switchable hardware group you created to ensure that
each node in the recovery domain can become the primary node.</li>
</ol>
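<p>Although the steps above use iSeries Navigator, you can also watch the vary-on and resynchronization progress from the command line. A minimal sketch, assuming a hypothetical disk pool device named MYASP (this name does not appear in the steps above):</p>

```cl
/* Work with the configuration status of the independent disk pool
   device description; wait until the device reports an available
   status before relying on the mirror copy */
WRKCFGSTS CFGTYPE(*DEV) CFGD(MYASP)
```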
<div class="note"><span class="notetitle">Note:</span> If you remove a node from a device domain after you configure geographic
mirroring, the removed node takes with it any production copies or mirror copies that
it owns. These copies become non-geographically mirrored disk pools.</div>
</div>
<div class="section"><h4 class="sectiontitle">Using CL commands and APIs</h4><p>To configure geographic
mirroring with switchable independent disk pools using CL commands and APIs,
follow these steps:</p>
<div class="p"><blockquote>You can use CL commands and APIs to create
a switchable independent disk pool; however, some tasks require that
you use iSeries Navigator.
<ol><li>Plan and configure your TCP/IP routes on all nodes, as follows:<ul><li>Node A should have routes to C and D.</li>
<li>Node B should have routes to C and D.</li>
<li>Node C should have routes to A and B.</li>
<li>Node D should have routes to A and B.</li>
</ul>
</li>
<li><strong>Create the cluster.</strong> Create the cluster with the required nodes using
the <a href="../cl/crtclu.htm">CRTCLU (Create
Cluster) command</a>. </li>
<li><strong>Start the nodes that comprise the cluster.</strong> Start
the nodes in the cluster using the <a href="../cl/strclunod.htm">STRCLUNOD (Start Cluster Node) command</a>.</li>
<li><strong>Create the device domain.</strong> You must create the device domain for
all nodes involved in switching an independent disk pool using the <a href="../cl/adddevdmne.htm">ADDDEVDMNE (Add Device
Domain Entry) command</a>. All nodes must be in the same device domain. </li>
<li><strong>Create the device descriptions.</strong> Device descriptions must be created
on all nodes that will be in the cluster resource group (CRG). Use the <a href="../cl/crtdevasp.htm">CRTDEVASP (Create Device
Description (ASP)) command</a>. On the command line in the character-based
interface, enter CRTDEVASP. In the <strong>Resource Name</strong> and the <strong>Device
Description</strong> fields, enter the name of the independent disk pool you plan
to create. </li>
<li><strong>Create the cluster resource group.</strong> Create the device CRG with the
nodes, their roles in the recovery domain, and the independent disk pool device
descriptions. You must also specify a site name and up to four data port IP
addresses for each node in the recovery domain.</li>
<li><strong><a href="rzalymakehardwareswitchable.htm">Make your hardware switchable</a></strong>.
If you have stand-alone expansion units or IOPs that contain disk units that
are to be included in an independent disk pool, you must authorize the expansion
unit or IOP to grant access to other nodes at the same site <strong>(iSeries Navigator
required)</strong>.</li>
<li><a href="rzalycreatediskpool.htm">Create a disk pool</a>. Create the
disk pool on the node that owns the disk units using the New Disk Pool wizard
when the server is fully restarted. Make sure clustering is active before
you start. Name the independent disk pool to match the device description
resource name that you specified in step 5. As you add disk units, it is best
to localize disk units in the same expansion unit or IOP. Also, do not spread
the disk pool across device parity sets <strong>(iSeries Navigator required)</strong>.</li>
<li>Follow these steps to configure geographic mirroring:<ol type="a"><li>In iSeries Navigator,
expand <strong>My Connections</strong> (or your active environment).</li>
<li>Expand the iSeries server
that is the primary node.</li>
<li>Expand <strong>Configuration and Service</strong>.</li>
<li>Expand <strong>Hardware</strong>.</li>
<li>Expand <strong>Disk Units</strong>.</li>
<li>Expand <strong>Disk Pools</strong>.</li>
<li>Right-click the Disk Pool you want to mirror and select <span class="menucascade"><span class="uicontrol">Geographic Mirroring</span> &gt; <span class="uicontrol">Configure Geographic
Mirroring</span></span>.</li>
<li>Follow the wizard's instructions to configure geographic mirroring.<div class="note"><span class="notetitle">Note:</span> The
disk pools you select to geographically mirror must be in the same switchable
hardware group. If you want to geographically mirror disk pools in more than
one switchable hardware group, you will need to complete the wizard one time
for each switchable hardware group.</div>
<div class="note"><span class="notetitle">Note:</span> The mirror copy and the production
copy must be at different sites. For example, with two sites AB and CD, if the
production copy is on node A at site AB, the mirror copy must be on node C
or node D at site CD. </div>
</li>
</ol>
</li>
<li><strong>Print your disk configuration</strong>. Print your disk configuration and
keep it in case a recovery situation occurs. See How to display your disk configuration
in <a href="../books/sc415304.pdf">Backup and
Recovery</a>. <img src="wbpdf.gif" alt="Link to PDF" /> Also, record the relationship between the independent disk pool name
and number. </li>
</ol>
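<p>As a sketch only, steps 1 through 5 above might look like the following command sequence. The cluster name MYCLUSTER, device domain MYDOMAIN, disk pool name MYASP, node names NODEA through NODED, and all IP addresses are hypothetical; exact parameters can vary by release, so prompt each command (F4) and review its help before running it. The device CRG in step 6 is created with the CRTCRG command, whose recovery domain, site name, and data port parameters are best entered through the prompter.</p>

```cl
/* Step 1 (sketch): add one data port TCP/IP route per remote node,
   for example from node A toward node C (addresses are hypothetical) */
ADDTCPRTE RTEDEST('10.2.1.1') SUBNETMASK('255.255.255.255') +
          NEXTHOP('10.1.1.254')

/* Step 2: create the cluster with its nodes and interface addresses */
CRTCLU CLUSTER(MYCLUSTER) NODE((NODEA ('10.1.1.1')) (NODEB ('10.1.1.2')) +
       (NODEC ('10.2.1.1')) (NODED ('10.2.1.2')))

/* Step 3: start every cluster node */
STRCLUNOD CLUSTER(MYCLUSTER) NODE(NODEA)
STRCLUNOD CLUSTER(MYCLUSTER) NODE(NODEB)
STRCLUNOD CLUSTER(MYCLUSTER) NODE(NODEC)
STRCLUNOD CLUSTER(MYCLUSTER) NODE(NODED)

/* Step 4: place all four nodes in the same device domain */
ADDDEVDMNE CLUSTER(MYCLUSTER) DEVDMN(MYDOMAIN) NODE(NODEA)
ADDDEVDMNE CLUSTER(MYCLUSTER) DEVDMN(MYDOMAIN) NODE(NODEB)
ADDDEVDMNE CLUSTER(MYCLUSTER) DEVDMN(MYDOMAIN) NODE(NODEC)
ADDDEVDMNE CLUSTER(MYCLUSTER) DEVDMN(MYDOMAIN) NODE(NODED)

/* Step 5: on each node, create the device description whose resource
   name matches the independent disk pool you will create */
CRTDEVASP DEVD(MYASP) RSRCNAME(MYASP)
```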
<div class="p">You have now configured geographic mirroring. The remaining steps
are required to prepare the independent disk pool for use in this environment.  <ol><li><strong>Start the cluster resource group</strong>.
Start the cluster resource group to enable device resiliency using the <a href="../cl/strcrg.htm">STRCRG (Start Cluster
Resource Group) command</a>. </li>
<li><strong>Make the disk pool available</strong>. To
access the disk units in an independent disk pool, you must vary on the disk
pool using the <a href="../cl/vrycfg.htm">VRYCFG
(Vary Configuration) command</a>. Varying on the disk pool also reconnects connections,
so that any new route definitions can take effect.</li>
<li>Wait for resync to complete.</li>
<li><strong>Perform a test switchover</strong>. Before you add data to the disk pool,
perform test switchovers on the switchable hardware group you created to ensure
that each node in the recovery domain can become the primary node. Use the <a href="../cl/chgcrgpri.htm">CHGCRGPRI (Change CRG
Primary)</a> command.</li>
</ol>
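<p>Continuing with the hypothetical names MYCLUSTER, MYASP, and a CRG named MYCRG (none of which appear in the steps above), these final steps might be sketched as:</p>

```cl
/* Start the device CRG to enable device resiliency */
STRCRG CLUSTER(MYCLUSTER) CRG(MYCRG)

/* Make the independent disk pool available (vary it on) */
VRYCFG CFGOBJ(MYASP) CFGTYPE(*DEV) STATUS(*ON)

/* After resynchronization completes, test the switchover so that a
   backup node takes over as primary; repeat for each backup node */
CHGCRGPRI CLUSTER(MYCLUSTER) CRG(MYCRG)
```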
</div>
</blockquote>
</div>
</div>
</div>
<div>
<div class="familylinks">
<div class="parentlink"><strong>Parent topic:</strong> <a href="rzalyconfigure.htm">Configure independent disk pools</a></div>
</div>
</div>
</body>
</html>