Tuesday, February 26, 2013

Live Partition Mobility


PowerVM Live Partition Mobility (LPM) allows an active or inactive partition to be moved from one system to another with no application downtime.

Requirements:
  1. POWER6 or newer systems.
  2. PowerVM Enterprise Edition.
  3. If the source and destination systems are managed by the same HMC, the HMC must be at version 7 release 3.2 or later.
  4. If the source and destination systems are managed by separate HMCs, the following three requirements apply:
  5. Both HMCs must be connected to the same network.
  6. Both HMCs must be at version 7, release 3.4.
  7. An SSH connection must be set up correctly between the two HMCs.
  8. Both systems must have a VIO server installed at version 1.5.1.1 or later.
  9. POWER6-based systems must be at firmware level 01Ex320 or later.
  10. The systems must have the same logical memory block size. This can be checked through the ASMI.
  11. At least one VIO server on the source and one on the destination must be set as a Mover Service Partition (MSP).
  12. Both VIO servers should have their clocks synchronized.
  13. The amount of memory available on the destination system must be greater than or equal to the amount of memory used by the mobile partition on the source system.
  14. If the mobile partition uses dedicated processors, the destination system must have at least that number of processors available.
  15. If the mobile partition is assigned to a shared processor pool, the destination system must have enough spare entitlement available to allocate to the mobile partition in the destination shared processor pool.
  16. The destination system must have at least one VIO server that has access to all the network and storage resources used by the migrating partition.
  17. The destination system must have a virtual switch configured with the same name as on the source system.
  18. I/O resources must be virtual. Virtual SCSI devices must be backed by SAN volumes.
  19. Partitions using IVE cannot be migrated; the mobile partition can use only virtual Ethernet.
  20. The partition must not be designated as a redundant error path reporting partition.
  21. The partition must not be part of an LPAR workload group.
  22. The partition must have a unique name.
  23. Additional virtual adapter slots for the partition must not be marked as required in the partition profile.
  24. The OS must be at AIX 5.3 TL7 or later.
  25. The partition must not use the barrier synchronization register (BSR) or huge pages.
  26. If the source system is configured with dual VIO servers, the destination system needs two or more VIO servers as well.

LPM Validation Process:

  1. Active partition migration capability and compatibility check
  2. RMC check
  3. Partition readiness
  4. System resource availability
  5. Virtual adapter mapping
  6. OS and application readiness check


Use a high-speed Ethernet link between the systems; a link of at least 1 Gbps is preferable.

Migration Validation: This process verifies that the migration of a given partition from one server to another specified server meets all the compatibility requirements. Select Partition → Operations → Mobility → Validate, fill in the remote HMC, user and destination server, and click Validate. Once validation is successful, select Migrate.

Through the command line: migrlpar -o m -m <source system> -t <destination system> --id <partition ID> --redundantvios 1 --mpio 1
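
A minimal command-line sketch of validating and then migrating a partition (the system names and partition ID below are hypothetical):

Validate the migration: migrlpar -o v -m SourceSystem -t DestinationSystem --id 3
Perform the migration: migrlpar -o m -m SourceSystem -t DestinationSystem --id 3
Check the migration status: lslparmigr -r lpar -m SourceSystem

When the destination system is managed by a different HMC, migrlpar also accepts --ip and -u to specify the remote HMC address and user.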

Active Memory Expansion


AME (Active Memory Expansion):

AME provides the capability to expand the effective memory capacity of a system beyond its physical memory capacity.

AME relies on compression of in-memory data to increase the amount of data that can be placed into memory and thus expand the effective memory capacity of a POWER7 system.

The in-memory data compression is managed by the OS, and the compression is transparent to applications and users.

When AME is enabled for an LPAR, the operating system compresses a portion of the LPAR's memory and leaves the remaining portion uncompressed. This results in memory effectively being broken up into two pools: a compressed pool and an uncompressed pool.

When an application needs data that is compressed, the OS automatically decompresses the data and moves it from the compressed pool to the uncompressed pool. When the uncompressed pool is full, the OS compresses data and moves it from the uncompressed pool to the compressed pool.

When configuring AME, a memory expansion factor must be set for the LPAR.

LPAR Expanded Memory Size = Real Memory Size of LPAR * Memory Expansion Factor

AME requires: POWER7 systems, HMC V7R7.1.0.0, eFW 7.1, and AIX 6.1 TL4 SP2.

AME will not compress file pages or pinned virtual memory pages.

The AME Planning Tool (amepat) monitors an existing AIX workload and provides an indication of whether the workload is a good candidate for AME.

AME requires CPU resources for memory compression and decompression activity. The amount of CPU required for compression and decompression varies with the workload.

AME Planning Tool: This tool will monitor a workload’s memory usage and data compressibility over a user-configurable period of time. The tool will then generate a report with a list of possible AME configurations for this workload. This tool is available as part of AIX 6.1 TL4 SP2.
Example: amepat 120 (monitor the workload for 120 minutes)

Report sections: System Configuration, System Resource Statistics, AME Modeled Statistics (expansion factor, modeled true memory size, modeled memory gain, CPU usage estimate), and AME Recommendation.

AME Configuration: In the LPAR profile → Memory tab, enable Active Memory Expansion and enter the expansion factor value recommended by amepat.
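
The same setting can be sketched from the HMC command line by updating the partition profile; the managed system, partition and profile names below are hypothetical, and mem_expansion is assumed to be the profile attribute that carries the AME factor:

chsyscfg -r prof -m ManagedSystem -i "name=default_profile,lpar_name=test_lpar,mem_expansion=1.4"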

Memory Deficit: If an LPAR is configured with a memory size of 20 GB and a memory expansion factor of 1.5, the total target expanded memory size is 30 GB. However, if the workload compresses at a factor of only 1.4, the expanded memory size that can actually be achieved is 27.2 GB, a shortfall of 2.8 GB. This 2.8 GB shortfall is referred to as the memory deficit.

Memory expansion factor can be changed dynamically.

Monitoring of AME can be done through amepat. Report sections: System Configuration (partition details), System Resource Statistics (CPU utilization, virtual memory size, true memory in use, pinned memory, file cache size, available memory), and AME Statistics (AME CPU usage, compressed memory, compression ratio).

vmstat -c provides compression and decompression statistics.
lparstat -c provides an indication of the CPU utilization for AME compression and decompression activity.
svmon gives a view of memory usage broken down into compressed and uncompressed memory.
topas shows memory compression statistics.
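
For example (the interval and count values below are arbitrary):

Three samples at 5-second intervals of AME paging statistics: vmstat -c 5 3
Three samples showing the CPU spent on compression and decompression: lparstat -c 5 3
Global memory usage, including the compressed pool when AME is active: svmon -G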

AME and AMS can be used together. If the LPAR is enabled with AMS, its actual physical memory can change dynamically, but this does not affect the LPAR's expanded memory size.

Page Loaning: Page loaning is a feature that enables the hypervisor and the OS to collaborate on memory management in an AMS environment. When the hypervisor needs to reclaim physical memory from the LPAR, the vmo tunable ams_loan_policy controls the aggressiveness of the page loaning.

Normally AIX satisfies page loaning requests by paging out pages from memory to disk. In the case of an LPAR where both AMS and AME are enabled, AIX relies on memory compression to satisfy page loaning requests.
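
A short sketch of checking and adjusting the tunable with vmo (the value 2 is illustrative; valid values and their meanings should be confirmed for your AIX level):

Display the current setting: vmo -o ams_loan_policy
Change it and make the change persistent across reboots: vmo -p -o ams_loan_policy=2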

Friday, October 7, 2011

Tivoli Storage Manager Overview

TSM Version: 5.3.2

IBM Tivoli Storage Manager (TSM) stores copies of data offline.

It protects hundreds of computers running a variety of operating systems.

Components:

· Administrative interface

· TSM Server

· Scheduler

· Backup-Archive Client

· TSM Database

· TSM Recovery log

· Storage Pools

· Policy-Based Management

· Tape Library

Administrative interface: The TSM Administration Center, which operates on the Integrated Solutions Console (ISC), provides a task-oriented GUI for storage administrators. It supports tasks such as creating server maintenance scripts, scheduling, adding storage devices, setting policy domains, managing users, and viewing the health monitor.

TSM Server: The role of the TSM server is to store the backup or archive data from the backup-archive clients that it supports to storage media. It also maintains a database of information to keep track of the data it manages, including policy management objects.

Scheduler: Administrator defined schedules allow for the automation of Tivoli storage manager server and backup-archive client operations.

Backup-Archive Client: The TSM backup-archive client is a service that sends data to, and retrieves data from, the TSM server. The backup-archive client must be installed on every machine that needs to transfer data to server-managed storage (storage pools).

TSM Database: TSM saves information in the TSM database about each file, raw logical volume, or database that it backs up or archives. This information includes the file name, size, and management class. The data itself is stored in a storage pool.

TSM Recovery Log: The recovery log keeps track of all changes made to the database. If a system outage were to occur, a record of the changes would be available for recovery.

Storage Pools: Storage pools are collections of like media that provide storage for backed up, archived and migrated files.

Policy-Based Management: Business policy is used to centrally manage backup-archive client data. Policies are created by the administrator and stored in the database on the server.

Tape Library: TSM supports a variety of library types, including manual libraries, SCSI libraries, 349X and 358X libraries.

Backup-Restore functionality:

TSM can perform backups of both files and raw LVs. When backing up files, the TSM server database keeps a list of all files and their attributes (time, date, size, access control lists).

Backup: Creates a copy of a file to protect against the operational loss or destruction of that file. Customers control backups by defining the backup frequency and the number of versions kept.

Restore: Places backup copies of files onto a customer-designated system. By default, the most recent backup version of each requested file is restored.
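
A minimal backup-archive client sketch (the file system, file and directory names are hypothetical):

Incremental backup of a file system: dsmc incremental /home
Selective backup of a single file: dsmc selective /etc/hosts
Restore a directory tree to its original location: dsmc restore "/home/user1/*" -subdir=yes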

4 levels of backups:

· Byte level ( Small amounts of data)

· Block Level (bigger amount of data)

· File level ( normal files)

· Image level ( includes file system and files)

TSM uses a progressive backup methodology, also known as incremental backup.

Long term storage capabilities through Archive-Retrieve Function:

Archiving is useful when you want to store data that is infrequently accessed but must still be kept available. TSM has the capability of archiving data for 30 years.

· Archive: Creates a copy of a file or set of files. This feature enables customers to keep unlimited archive copies of a file.

· Retrieve: Allows users to copy an archived file from the storage pool back to the workstation (see the command sketch after this list).
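
A minimal sketch of both operations (the paths and description text are hypothetical):

Archive a directory tree: dsmc archive "/data/reports/*" -subdir=yes -description="FY2011 reports"
Retrieve it to another location: dsmc retrieve "/data/reports/*" /tmp/restored/ -subdir=yes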

Administration Center on the Integrated Solutions Console: The GUI for managing IBM TSM administrative functions is called the Administration Center.

Automation Capabilities: TSM includes a central scheduling component that allows the automatic processing of administrative commands and backup-archive client operations during a specific time period when the schedule is activated (see the command sketch after the list below).

Scheduling is split into 2 categories:

· Administrative scheduling

· Backup-archive client scheduling.
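
A minimal sketch of both schedule types from the administrative command line (the domain, schedule, node and device class names are hypothetical):

Define a daily incremental client schedule: define schedule PROD_DOM DAILY_INCR action=incremental starttime=21:00 duration=2 durunits=hours period=1 perunits=days
Associate client nodes with it: define association PROD_DOM DAILY_INCR NODE1,NODE2
Define an administrative schedule for a full database backup: define schedule DBBACKUP type=administrative cmd="backup db devclass=LTOCLASS type=full" active=yes starttime=06:00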

Data storage and Data Management:

Types of storage media on which TSM stores data: Storage media can be disk, optical, and tape volumes assigned to storage pools.

Storage pools contain backup files, archived files, and space-managed files. These storage pools are chained together to create a storage hierarchy; the disk pool is usually first in the chain, followed by tape, as sketched below.
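
A minimal sketch of a two-level disk-to-tape hierarchy (all pool, device class and volume names are hypothetical, and a tape library named TAPELIB is assumed to be defined already):

Define a tape device class: define devclass LTOCLASS devtype=lto library=TAPELIB
Define the tape pool: define stgpool TAPEPOOL LTOCLASS maxscratch=50
Define the disk pool chained to tape: define stgpool DISKPOOL DISK nextstgpool=TAPEPOOL highmig=80 lowmig=20
Add a disk volume to the disk pool: define volume DISKPOOL /tsm/diskvol01 formatsize=4096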

Policy-Based Approach: Backup-archive client data is managed by business policy. Policies are created by the administrator and stored in the database on the server.

Policy Domain: A group of nodes managed by the same set of policy constraints as defined by the policy sets. A node may be defined to only one policy domain per server.

Policy Set: A collection of management class definitions. A policy domain may contain a number of policy sets.

Management Class: A collection of management attributes called copy groups. There are two sets of management class attributes: backup and archive.

Copy Group: Management attributes describing backup and archive characteristics. There is a backup copy group and an archive copy group (see the sketch below).
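
A minimal sketch of building this hierarchy on the server (the names, destinations and retention values are hypothetical):

define domain PROD_DOM
define policyset PROD_DOM PROD_PS
define mgmtclass PROD_DOM PROD_PS STANDARD_MC
define copygroup PROD_DOM PROD_PS STANDARD_MC type=backup destination=DISKPOOL verexists=3
define copygroup PROD_DOM PROD_PS STANDARD_MC type=archive destination=TAPEPOOL retver=365
assign defmgmtclass PROD_DOM PROD_PS STANDARD_MC
activate policyset PROD_DOM PROD_PS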

TSM Licensing:

3 License types: tsmbasic.lic, tsmee.lic and dataret.lic

Tuesday, August 30, 2011

Atape Driver Installation & Upgrade

From AIX:
List the tape devices: lsdev -Cc tape
Remove the drive and library devices: rmdev -dl <device_name>
For example, if the drive device name is /dev/rmt1: rmdev -dl rmt1

Upgrade Atape
FTP the Atape driver in binary mode from IBM Fix Central:
http://www.ibm.com/support/fixcentral/

Remove the older Atape driver (optional): installp -u Atape.driver

Install and commit the Atape driver. For example, if you downloaded the file to /tmp/Atape.9.3.5.0.bin: installp -acXd /tmp/Atape.9.3.5.0.bin all

Configure the tape device: cfgmgr -v (-v is not required but will show where it hangs if it does)

Verify the new devices are Available: lsdev -Cc tape

(Note: While not always absolutely necessary, it is strongly recommended to reboot the system after upgrading the Atape.driver)

Thursday, May 19, 2011

HMC Critical Consolidated Backup

Backup Critical Console Data :

The Backup Critical Console Data task is used to back up the HMC configuration and profile data to formatted DVD-RAM media. The backup media is used only when recovering the HMC from a software or hardware problem. The same backup can also be run from the HMC command line (see the sketch after the following list). The following data is included on the backup DVD-RAM media:

  • User configuration
  • User preferences, including each user’s home directory
  • HMC configuration files that record the following customizations:
    • TCP/IP
    • Rexec/ssh facility setting
    • Remote virtual terminal setting
    • Time zone setting
  • HMC log files located in the /var/log directory
  • Service functions settings, such as Inventory Scout, Service Agent, and Service Focal Point
  • Partition profile data backup
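
A minimal command-line sketch of the same backup using bkconsdata (the FTP host, user and target directory are hypothetical, and the exact options may vary by HMC level):

Back up to DVD media: bkconsdata -r dvd
Back up to a remote FTP server: bkconsdata -r ftp -h ftpserver -u hmcbackup -d /hmc/backups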

Procedure to automate Critical Console Data backup:

You can automate the Backup Critical Console Data task using the HMC scheduler. Unless you plan on rotating the DVD media, only the most-recent backup image will be on the DVD. To schedule the Backup Critical Console Data task, do the following:

  1. In the Navigation area, expand the HMC Management folder.
  2. In the Navigation area, click the HMC Configuration icon.
  3. In the Contents area, click Schedule Operations.
  4. From the list, select the HMC you want to back up and click OK.
  5. Select Options > New.
  6. In the Add a Scheduled Operation window, select Backup Critical Console Data and click OK.
  7. In the appropriate fields, enter the time and date that you want this backup to occur.
  8. Click the Repeat tab and select the intervals at which you want the backup to repeat and press Enter.
  9. When you have set the backup time and date, click Save. When the Action Completed window opens, click OK. A description of the operation displays in the Scheduled Operations window.

AIX Troubleshooting

1. How to force a failover of an EtherChannel?
# /usr/lib/methods/ethchan_config -f Etherchannel_Device

2. How to add a backup adapter to an existing EtherChannel device?
# /usr/lib/methods/ethchan_config -a -b Etherchannel_Device Ethernet_Adapter

3. How to change the address to ping attribute of an EtherChannel?
# /usr/lib/methods/ethchan_config -c Etherchannel_Device netaddr New_Ping_IP_Addr

4. How to list the available major numbers in a system?
# lvlstmajor

5. How to list the major number of a volume group?
# lvgenmajor rootvg

6. Consider a situation where a PV contains a VG that has not yet been imported,
and you need to find the attributes of that volume group before importing or varying it on.
Answer the questions below:

a. How to list the maximum number of logical volumes allowed in the VG?
# lqueryvg -p PVname -N

b. How to show the PP size?
# lqueryvg -p PVname -s

c. How to show the number of free PPs in the VG?
# lqueryvg -p PVname -F

d. How to show the current number of LVs in the VG?
# lqueryvg -p PVname -n

e. How to list the current number of PVs in the VG?
# lqueryvg -p PVname -c

f. How to list the total number of VGDAs for the VG?
# lqueryvg -p PVname -D

g. How to list each LVID, LV name, and state for each logical volume?
# lqueryvg -p PVname -l

h. How to list each PVID, number of VGDAs, and state for each PV in the VG?
# lqueryvg -p PVname -P

i. How to list all the attributes with tags for the VG?
# lqueryvg -p PVname -At

j. How to list the VGID from that physical volume?
# lqueryvg -p PVname -v

7. How do you move a physical partition (actually, it just moves the data between PPs)?
# lmigratepp -g VGID -p old_PVID -n old_PPNum -P new_PVID -N new_PPNum

8. How to retrieve the VG name for a particular LV from the ODM?
# getlvodm -b LVID

9. How to retrieve all configured PVs from the ODM?
# getlvodm -C

10. How to retrieve the major number for a VGID from the ODM?
# getlvodm -d VGID

11. How to retrieve the logical volume allocation characteristics for an LVID from the ODM?
# getlvodm -c LVID

12. How to retrieve the free configured PVs from the ODM?
# getlvodm -F

13. How to retrieve the strip size for an LVID from the ODM?
# getlvodm -F LVID

14. How to retrieve the PV name for a PVID from the ODM?
# getlvodm -g PVID

15. How to retrieve all VG names from the ODM?
# getlvodm -h

16. How to retrieve the VGID for a PVID from the ODM?
# getlvodm -j PVID

17. How to retrieve the LVs and LVIDs for a VG name or VGID from the ODM?
# getlvodm -L VGDescriptor

18. How to retrieve the LVID/LV name for an LV name or LVID from the ODM?
# getlvodm -l LVDescriptor

19. How to retrieve the mount point for an LVID from the ODM?
# getlvodm -m LVID

20. How to retrieve the stripe width for an LVID from the ODM?
# getlvodm -N LVID

21. How to retrieve the PVID/PV name for a PV name or PVID from the ODM?
# getlvodm -p PVDescriptor

22. How to retrieve the PV names, PVIDs and VGs of all configured PVs from the ODM?
# getlvodm -P

23. How to retrieve the relocatable flag for an LVID from the ODM?
# getlvodm -r LVID

24. How to retrieve the VG state for a VG from the ODM?
# getlvodm -s VGDescriptor

25. How to retrieve the timestamp for a VG from the ODM?
# getlvodm -T VGDescriptor

26. How to retrieve the VG name for a VGID from the ODM?
# getlvodm -t VGID

27. How to retrieve the auto-on value for a VG name or VGID from the ODM?
# getlvodm -v VGDescriptor

28. How to retrieve the VGID for a VG name?
# getlvodm -v VGDescriptor

29. How to retrieve the PV names and PVIDs for a VG from the ODM?
# getlvodm -w VGDescriptor

30. How to retrieve the LV type for an LVID from the ODM?
# getlvodm -y LVID

31. How to retrieve the concurrent capable flag for a VG from the ODM?
# getlvodm -X VGDescriptor

32. How to retrieve the auto-on concurrent flag for a VG from the ODM?
# getlvodm -x VGDescriptor

33. How to display the contents of the LVCB?
# getlvcb -A LVName

34. How to list the number of copies of an LV from the LVCB?
# getlvcb -c LVName

35. How to list the file system name of an LV from the LVCB?
# getlvcb -f LVName

36. How to list the label of an LV from the LVCB?
# getlvcb -L LVName

37. How to display the type of the file system from the LVCB?
# getlvcb -t LVName

38. How to display the upper limit from the LVCB?
# getlvcb -u LVName

39. How to list the current defragmentation state of a file system?
# defragfs -q Filesystem

40. How to list the current and future (if defragmented) state of a file system?
# defragfs -r Filesystem

41. How to defragment a file system?
# defragfs Filesystem

42. How to run fsck on 2 file systems simultaneously on different drives?
# dfsck FileSystem1 FileSystem2

43. How to list the superblock, i-node map, and disk map information for a file system?
# dumpfs Filesystem

44. Where is the magic file located?
/etc/magic

45. How do you remove a file system's entry from /etc/filesystems?
# imfs -x -l LVName

46. How do you list the inode number and the last update/modify/access timestamps of a file?
# istat FileName

47. How do you update the i-node table and write buffered files to the hard disk?
# sync

48. How do you list the file systems in a volume group?
# lsvgfs VGName

49. How do you redefine the set of PVs of a VG in the ODM?
# redefinevg -d PVName VGName

50. How do you replace a PV in a VG?
# replacepv SourcePV DestinationPV

Flashcopy Commands


## INITIATING FLASH COPY ON DS8000 USING MKFLASH ##

mkflash -dev -record 0014:0030 0015:0031 000D:0032 000E:0033 000F:0034 0108:0035 0109:0036 010A:0037 0003:0038 0004:0039 0005:003A 0006:003B 0007:003C 0008:003D 0009:003E 000A:003F 0100:0040 0101:0041 0102:0042 0103:0043 001C:0044 0114:0045 0115:0046 0116:0047 002B:0048 002F:0049 010D:004A 010E:004B 0106:004F 000C:004E 000B:004D 010F:004C 001E:0050 001F:0051 0020:0052 0021:0053

## CHECK FLASH COPY STATUS ON DS8000 USING LSFLASH ##

lsflash -dev -l 0014:0030 0015:0031 000D:0032 000E:0033 000F:0034 0108:0035 0109:0036 010A:0037 0003:0038 0004:0039 0005:003A 0006:003B 0007:003C 0008:003D 0009:003E 000A:003F 0100:0040 0101:0041 0102:0042 0103:0043 001C:0044 0114:0045 0115:0046 0116:0047 002B:0048 002F:0049 010D:004A 010E:004B 0106:004F 000C:004E 000B:004D 010F:004C 001E:0050 001F:0051 0020:0052 0021:0053

## INITIATING INCREMENTAL FLASH COPY ON DS8000 USING RESYNCFLASH ##

resyncflash -dev -l 0014:0030 0015:0031 000D:0032 000E:0033 000F:0034 0108:0035 0109:0036 010A:0037 0003:0038 0004:0039 0005:003A 0006:003B 0007:003C 0008:003D 0009:003E 000A:003F 0100:0040 0101:0041 0102:0042 0103:0043 001C:0044 0114:0045 0115:0046 0116:0047 002B:0048 002F:0049 010D:004A 010E:004B 0106:004F 000C:004E 000B:004D 010F:004C 001E:0050 001F:0051 0020:0052 0021:0053

## REMOVING FLASH COPY MAPPINGS ON DS8000 USING RMFLASH ##

rmflash -dev -l 0014:0030 0015:0031 000D:0032 000E:0033 000F:0034 0108:0035 0109:0036 010A:0037 0003:0038 0004:0039 0005:003A 0006:003B 0007:003C 0008:003D 0009:003E 000A:003F 0100:0040 0101:0041 0102:0042 0103:0043 001C:0044 0114:0045 0115:0046 0116:0047 002B:0048 002F:0049 010D:004A 010E:004B 0106:004F 000C:004E 000B:004D 010F:004C 001E:0050 001F:0051 0020:0052 0021:0053