Scenario: Consolidate servers using switchable independent disk pools

Situation

Your company's network currently uses 30 small servers distributed within a single region, all in the same time zone, using the same language, and running the same releases of the operating system and application code. The amount of time and effort you spend maintaining the small systems and keeping them at the same operating system and application release levels is significant.

Objectives

To reduce the resources required to maintain and administer your servers, you want to consolidate by reducing the number of servers in your network.

The objectives of this scenario are as follows:
  • To consolidate from 30 small servers to one larger server at a central location
  • To maintain data independence for each geographic region

Details

None of the 30 small servers in your network require more than four disk units.

Prerequisites and assumptions

One potential consolidation solution for your network is logical partitioning (LPAR). In this scenario, however, consolidating the 30 locations with logical partitioning is not ideal because:
  • The effort required to manage the partitions is approximately the same as the effort required to manage 30 distributed systems.
  • Each partition requires an input/output processor (IOP) to support a load source for the partition. As a result, the consolidated system requires 30 IOPs.
  • Additional expansion units are required to hold the IOPs for the 30 partitions. Because each location uses only a few disk units, the expansion units might be nearly empty.
As a result, the LPAR solution is not economically justifiable for this scenario.

A better way to consolidate servers in this scenario is to use switchable independent disk pools. By creating one switchable independent disk pool for each of the 30 branch offices, you can reduce the number of IOPs from 30 to 7 and require only two expansion units. This is an economically attractive alternative.

Design

To understand how to use switchable independent disk pools, see Create a switchable independent disk pool. In addition to the planning and configuration steps for implementing switchable independent disk pools, the following strategies help ensure that users at the respective branch offices can seamlessly access their data:
  • To ensure that users access the correct set of data, adjust the run-time environment so that users from different branch offices connect to the data in their corresponding independent disk pool. This can be accomplished through a simple adjustment to user profiles and to the job descriptions that the user profiles specify.

    All user profiles from a particular branch office will use one job description. The job description will specify the independent disk pool that contains the users' data and will set the library list that each job uses. With these simple changes, each user is directed to the correct set of data.
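    As a sketch of this approach, assuming a hypothetical branch office library DALLASLIB, independent disk pool group DALLASIASP, job description DALLASJOBD, and user profile DALLASUSR1 (all names are illustrative), the changes might look like the following CL commands:

    ```
    /* Create a job description for the branch office. INLASPGRP sets   */
    /* the initial ASP group so that jobs resolve objects in the        */
    /* branch's independent disk pool; INLLIBL sets the library list    */
    /* that each job will use.                                          */
    CRTJOBD JOBD(DALLASLIB/DALLASJOBD) +
            INLASPGRP(DALLASIASP) +
            INLLIBL(DALLASLIB QGPL QTEMP)

    /* Point each user profile from this branch office at that job     */
    /* description.                                                    */
    CHGUSRPRF USRPRF(DALLASUSR1) JOBD(DALLASLIB/DALLASJOBD)
    ```

    With these two changes, every job started by a Dallas user inherits the ASP group and library list that lead to the Dallas data.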

  • Another run-time issue is the resolution of duplicate subsystem and job queue names. Each branch office uses a cloned subsystem description to run batch jobs, and each subsystem uses job queues that have the same names as those on the other branch office subsystems. If a single subsystem and a single set of job queues were used in the consolidated environment, jobs submitted by users from different branch offices would all be placed on the same set of queues and initiated by a single subsystem. This results in work flow that is inconsistent with the run-time environment of the distributed systems.

    To resolve this problem, the subsystems will be given unique names. Then, a command to start all of the subsystems will be added to the startup program. Finally, each of the job queues used by a subsystem will be moved into a library that is unique to the job description used by that branch office. Because unqualified job queue names are resolved through the library list, any application that submits batch jobs requires no changes in order to reach its branch's unique queue.
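    As an illustration, using the same hypothetical Dallas names as above, the subsystem and job queue changes might be made with commands like the following (the library, subsystem, and pool values are assumptions, not prescribed names):

    ```
    /* Create the branch's job queue, keeping its original name but    */
    /* placing it in a library that appears in the branch job          */
    /* description's library list.                                     */
    CRTJOBQ JOBQ(DALLASLIB/QBATCH)

    /* Create a uniquely named subsystem description for the branch    */
    /* and attach the branch's job queue to it.                        */
    CRTSBSD SBSD(DALLASLIB/DALLASSBS) POOLS((1 *BASE))
    ADDJOBQE SBSD(DALLASLIB/DALLASSBS) JOBQ(DALLASLIB/QBATCH) +
             MAXACT(*NOMAX)

    /* Start each branch subsystem from the system startup program.    */
    STRSBS SBSD(DALLASLIB/DALLASSBS)
    ```

    Because SBMJOB defaults to the job queue named in the job description, and that name is resolved through the job's library list, a job submitted by a Dallas user to QBATCH lands on DALLASLIB/QBATCH and is initiated by the Dallas subsystem.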