Tuesday, April 19, 2011

NIM Short Notes

NIM

Master: The machine where you set up and maintain your NIM environment.

Client: A machine that can be a target for NIM master operations, such as installation and updates.

NIM Classes: Machines, Network, Resources, Groups

Group: Collection of machines or resources

Resources : lpp_source, SPOT, mksysb, bosinst_data, script, image_data, installp_bundle

lsnim - lists the contents of the NIM database on the master

lsnim -c machines -> shows the machine names

lsnim -l -> shows detailed information about all NIM objects

/etc/bootptab: This file is used by the bootpd daemon. When no operations are in progress, this file is empty. It is updated automatically by the NIM master when a NIM operation is executed that requires the client machine to boot from a NIM SPOT.
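A NIM-generated entry looks roughly like this (the hostname, addresses, and boot file path below are hypothetical, for illustration only):

```
# /etc/bootptab entry added by NIM (all values are illustrative)
# bf=boot file, ip=client IP, ht=hardware type, sa=server (master) address,
# gw=gateway, sm=subnet mask
lpar55:bf=/tftpboot/lpar55:ip=10.1.1.55:ht=ethernet:sa=10.1.1.1:gw=10.1.1.254:sm=255.255.255.0:
```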

/etc/exports: Any installation, boot, mksysb, or savevg operation requires NFS. This file is updated with the locations that are NFS-exported from the master to the client, and the permissions associated with those exports.

/etc/hosts: It maps a system's hostname to an IP address. If an IP address does not match up to the correct hostname, the installation fails.

/etc/niminfo: This file should always exist on the NIM master. It is built when you first initialize the NIM environment, and it is required to run nim commands and perform NIM operations. If the /etc/niminfo file is accidentally deleted, you can rebuild it.

/tftpboot: The main purpose of this directory is to hold the boot images that are created by nim when a boot or installation is initiated. This directory also holds informational files about the clients that are having a boot or installation operation performed.

SPOT: Shared Product Object Tree. It is a directory of code (installed filesets) that is used during the client boot procedure. Its content is equivalent to the /usr file system (binaries, executables, libraries, header files, and shell scripts).

Boot images (kernels) are stored in the /tftpboot directory.

lsnim -t spot -> lists the available SPOTs

To find the oslevel -r output of a SPOT, use lsnim -l <spot>. If the SPOT and mksysb are not at the same level, installation will only work if the SPOT is at a higher level than the mksysb.

lpp_source: Similar to the AIX install CDs. It contains AIX Licensed Program Products (LPPs) in Backup File Format.

An lpp_source with the attribute simages=yes contains the images needed to create a SPOT and to install the AIX base operating system.

An lpp_source with the attribute simages=no cannot be used to install the base AIX operating system.

List lpp_source resources: lsnim -t lpp_source

mksysb: This resource is a file containing an image of the root volume group of a machine.

It is used to restore a machine.

Defining a mksysb resource: nim -o define -t mksysb -a source= -a server=master -a location= <resource name>

lsnim -t mksysb
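As a worked example with the elided attributes filled in (client name and image path are assumed, modeled on the mksysb-creation command later in these notes):

```shell
# Hypothetical values: client lpar5, image stored under /export/images.
# mk_image=yes tells NIM to create the backup from the running client;
# omit it (and -a source) when registering an already existing image file.
nim -o define -t mksysb \
    -a server=master \
    -a source=lpar5 \
    -a mk_image=yes \
    -a location=/export/images/mksysb.lpar5 \
    mksysb_lpar5
```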

bosinst_data: A bosinst_data resource is a flat ASCII file, like the bosinst.data file used for restoring system backup images from tape or CD/DVD. This resource enables non-prompted push/pull installation of multiple machines at the same time.

script: Contains the commands to perform customization, such as file system resizing and additional user creation.

To start a nim environment

  1. Select a machine to be the master
  2. Install AIX on the master
  3. Install the NIM filesets: bos.sysmgt.nim.master, bos.sysmgt.nim.spot
  4. Configure the selected machine as the NIM master using smitty nimconfig (mention network name and interface): nimconfig -a netname=net_10_1_1 -a pif_name=en0 -a netboot_kernel=mp -a cable_type=tp -a client_reg=no
  5. When a machine is added to the NIM environment, the /etc/niminfo file is created.
  6. To rebuild the NIM master /etc/niminfo file, use the nimconfig -r command
  7. To rebuild and recover a NIM client /etc/niminfo file, use the niminit command. Ex: niminit -a master= -a name=
  8. Create file systems for NIM -> the lpp_source and SPOT resources are directories, and the related file systems must be created.
  9. Define the basic resources (lpp_source, SPOT) -> smitty nim_mkres
  10. Define the client (smitty nim_mkmac)
  11. Start the client installation (smitty nim_task_inst)
  12. Verify /etc/bootptab
  13. Verify that the boot files were created in /tftpboot
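The sequence above can be sketched end to end as a command-line session; the network name, resource names, and client name are illustrative, taken from examples elsewhere in these notes:

```shell
# 3. On the freshly installed master, install the NIM filesets
installp -acgXd /dev/cd0 bos.sysmgt.nim.master bos.sysmgt.nim.spot

# 4. Initialize the master (this creates /etc/niminfo)
nimconfig -a netname=net_10_1_1 -a pif_name=en0 -a netboot_kernel=mp \
          -a cable_type=tp -a client_reg=no

# 8-9. After creating the file systems, define the basic resources
nim -o define -t lpp_source -a server=master \
    -a location=/export/lpp_source/lpp5300 -a source=/dev/cd0 lpp5300
nim -o define -t spot -a server=master -a source=lpp5300 \
    -a location=/export/spot spot5300

# 10. Define the client
nim -o define -t standalone -a if1="net_10_1_1 lpar55 0 ent0" LPAR55

# 11-13. Allocate resources, start the install, and verify
nim -o allocate -a spot=spot5300 -a lpp_source=lpp5300 LPAR55
nim -o bos_inst -a source=rte -a accept_licenses=yes LPAR55
cat /etc/bootptab
ls -l /tftpboot
```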

NIM Daemons: nimesis, nimd, bootpd, tftpd

The NIM master uses the bootpd and tftpd daemons.

The bootpd daemon also uses the /etc/bootptab file when a NIM client is configured to boot from the NIM master.

The tftpd daemon uses the /etc/tftpaccess.ctl file to determine which directory hierarchies it is allowed to serve.

The /var/adm/ras directory contains the NIM master log files.

The /var/adm/ras/nimlog file contains information about failed NIM operations.

alog command to view the NIM logs: alog -f /var/adm/ras/nimlog -o

Estimation of minimum disk requirements: lpp_source -> 6 GB, SPOT -> 2 GB, mksysb -> 40 GB

File System Hierarchy:

/tftpboot: Holds the NIM master boot images (kernels) and info files for the NIM clients.

/export/lpp_source: Used for storing versions of AIX base-level filesets in version-specific directories

/export/spot: Used for storing non-/usr SPOTs

/export/images: This is used for storing system backup images. Images can be created by NIM mksysb.

/export/mksysb: This directory holds the mksysb image files to install on clients - approx. 1.5 GB per image.

/export/res: for bosinst_data, image_data and scripts.

Example: /export/53 contains lppsource_53TL6 and spot_53TL6

NIM server size depends on how many versions of AIX filesets, TLs, PTFs, and Service Packs you keep.

Filesets for the NIM master: bos.net.tcp.server, bos.net.nfs.server, bos.sysmgt.nim.master, bos.sysmgt.nim.spot.

Master config: smitty nim -> Configure NIM Environment -> Advanced Configuration -> Initialize NIM master only (give details like network name and interface)

Making the lppsource:

  • Copy software from the CD or DVD into the /export/53 file system: smitty bffcreate (give input device, software package to copy, directory for storing the software package)
  • Define it as a NIM resource: smitty nim -> Configure the NIM Environment -> Advanced Configuration -> Create Basic Installation Resources -> Create a New LPP_source (give resource server, LPP_source name, LPP_source directory)

Making the SPOT: smitty nim -> Configure the NIM Environment -> Advanced Configuration -> Create Basic Installation Resources -> Create a New SPOT (give resource server, input device, SPOT name, and SPOT directory)

NIM Configuration:

Define a client machine: smitty nim -> Perform NIM Administration Tasks -> Manage Machines -> Define a Machine (NIM machine name, machine type (standalone), hardware platform type (chrp), kernel to use for network boot (mp), cable type (tp))

Display NIM network objects: lsnim -l -c networks

The Basic NIM installation resources:

1) For rte installation: one NIM lpp_source and one SPOT

2) For mksysb installation: a mksysb resource and a SPOT

Define an lpp_source: nim -o define -t lpp_source -a server=master -a location=/export/lpp_source/lpp5300 -a source=/dev/cd0 lpp5300

Creating a NIM lpp_source from a directory -> nim -o define -t lpp_source -a server=master -a location=/export/lpp_source/lpp5300 lpp5300

Removing a NIM lpp_source: nim -o remove lpp5300

Check a NIM lpp_source: nim -Fo check lpp5304

Creating a NIM SPOT: nim -o define -t spot -a server=master -a location=/export/spot -a source=lpp5300 -a installp_flags=-aQg spot5300

Listing filesets in a SPOT: nim -o lslpp -a filesets=all -a lslpp_flags=-La spot6100-01

nim -o lslpp spot6100-01

Listing fixes in a SPOT: nim -o fix_query spot6100-01

TL of a SPOT: lsnim -l spot6100-01 | grep oslevel

Listing client filesets: nim -o lslpp -a filesets=all -a lslpp_flags=-La client

Removing the NIM SPOT -> nim -o remove spot5300

Checking the SPOT -> nim -o check spot5300

Resetting the NIM SPOT -> nim -Fo check spot5300

Create a NIM client -> nim -o define -t standalone -a if1="net_10_1_1 lpar55 0 ent0" LPAR55

Define NIM machines using smit nim_mkmac

Removing a NIM client definition: nim -o remove LPAR55

Installing NIM Clients:

Base Operating System Installation

System Clone installation

Automated customization after a generic BOS install.

BOS install through NIM:

  • nim -o allocate -a spot=spot5304 -a lpp_source=lpp5304 LPAR55
  • Initiate the install: nim -o bos_inst -a source=rte -a installp_flags=agX -a accept_licenses=yes LPAR55
  • If the installation is unsuccessful, you need to reallocate the resources
  • Reset and deallocate NIM resources: nim -Fo reset LPAR55, nim -Fo deallocate -a subclass=all LPAR55
  • View the progress of the installation: nim -o showlog -a log_type=boot LPAR55

Using SMIT to install a standalone client: smitty nim_bosinst -> select a target for the operation -> select the installation type -> select the lpp_source -> select the SPOT

After initial program load -> SMS menu -> Setup Remote IPL -> Interpartition Logical LAN -> Select IP Parameters (client IP, server IP, gateway, subnet mask) -> Ping Test -> Execute Ping Test -> Select Boot Options -> Select Install/Boot Device (Network) -> Select normal boot mode

Steps to migrate the NIM master to AIX 5L V5.3

  1. Unmount all NFS mounts
  2. Document the AIX and NIM master configuration (snap -ac, lsnim)
  3. Perform a NIM database backup -> smitty nim_backup_db -> Backup the NIM database
  4. Perform a mksysb of the NIM master
  5. Insert the AIX 5L V5.3 CD volume 1 into the CD drive

Creating mksysb from NIM client:

nim -o define -t mksysb -a server=master -a source=lpar5 -a mk_image=yes -a location=/export/images/mksysb.lpar5 mksysb_lpar5

Backup the VIO server:

backupios -file /home/padmin/viobackup/VIO.mksysb -mksysb

Restoring the VIO server:

Define the mksysb resource: smitty nim_mkres (select mksysb) -> define the SPOT resource: smitty nim_mkres (select spot) -> perform the BOS installation

NIM Commands

nimconfig -a pif_name=en0 -a netname=net1 -> to initialize the NIM master with network name net1

nimconfig -r -> to rebuild the /etc/niminfo file, which contains the variables for NIM

nim -o define -t lpp_source -a source=/dev/cd0 -a server=master -a location=/export/lpp_source/lpp_source1 lpp_source1 -> to define the lpp_source1 image in the /export/lpp_source/lpp_source1 directory from source cd0

nim -o define -t mksysb -a server=master -a location=/resources/mksysb.image mksysb1 -> to define the mksysb resource mksysb1 from source /resources/mksysb.image on the master

nim -o remove inst_resource -> to remove the resource

nim -o showres lpp_source6100 -> listing the contents of the lpp_source

nim -o showres -a instfix_flags=T lppsource_61_01 -> listing the fixes contained in the lpp_source

nim -o check lpp_source1 -> to check the status of the lpp_source lpp_source1

nim -o allocate -a spot=spot1 -a lpp_source=lpp_source1 node1 -> to allocate the resources spot1 and lpp_source1 to the client node1

nim -o bos_inst node1 -> to initiate the BOS installation on node1 with the allocated resources

nim -o dkls_init dcmds -> to initialize the machine dcmds for diskless operation

nim -o dtls_init dcmds -> to initialize the machine dcmds for dataless operation

nim -o cust dcmds -> to perform a customize operation on the machine dcmds

nim -o diag dcmds -> to initialize the machine dcmds for a diag operation

nim -o maint dcmds -> to perform a maintenance operation on the machine dcmds

nim -o define -t standalone -a platform=rspc -a if1="net1 dcmds xxxxx" -a cable_type1=bnc dcmds -> to define the machine dcmds as standalone, with platform rspc, network net1, cable type bnc, and MAC address xxxxx

nim -o unconfig master -> to unconfigure the NIM master

nim -o allocate -a spot=spot1 dcmds -> to allocate the resource spot1 to the machine dcmds

nim -o deallocate -a spot=spot1 dcmds -> to deallocate the resource spot1 from the machine dcmds

nim -o remove dcmds -> to remove the machine dcmds after removing all resources associated with it

nim -o reboot dcmds -> to reboot the client dcmds

nim -o define -t lpp_source -a location=/software/lpp1 -a server=master -a source=/dev/cd0 lpp1 -> to define the lpp_source lpp1 on the master in the /software/lpp1 directory from source device /dev/cd0

lsnim -> to list the NIM resources

lsnim -l dcmds -> to list detailed information about the object dcmds

lsnim -O dcmds -> to list the operations the dcmds object can support

lsnim -c resources dcmds -> to list the resources allocated to the machine dcmds

nimclient -> the client version of the nim command (a user can obtain the same results as with nim on the server)

NIM Master Configuration:

NIM installation:

File sets required for NIM installation:

  • bos.sysmgt.nim.master
  • bos.sysmgt.nim.client
  • bos.sysmgt.nim.spot

Put volume 1 of your media in the drive and run installp -acgXd /dev/cd0 bos.sysmgt.nim, OR use smit install_all

Initial setup: smit nim_config_env

Initializing the NIM master: nimconfig -a pif_name=en0 -a master_port=1058 -a netname=master_net -a cable_type=bnc

Or smitty nimconfig.

lsnim -l master -> you will see information about the NIM master

lsnim -l | more -> The boot resource created a /tftpboot directory to hold all of your boot images. All NIM clients that are on the same subnet as this master will be assigned to the master_net network.

Set up the first lpp_source resource: create a file system called /export/nim/lpp_source.

nim -o define -t lpp_source -a location=/export/nim/lpp_source/53_05 -a server=master -a comments='5300-05 lpp_source' -a multi_volume=yes -a source=/dev/cd0 -a packages=all 5305_lpp

Or

smit nim_mkres -> select lpp_source

If you wish to add other volumes, you can:

A) bffcreate the volumes into the lpp_source

B) Use NIM to add the volumes: smitty nim_res_op -> select the lpp_source -> select update -> give the target lpp_source and the source

lsnim -l 5305_lpp

Rstate: If this is not set to 'ready for use', you cannot use this resource. Running a check on the lpp_source (nim -o check <lpp_source>) will clear this up.

Set up the first SPOT resource: create a file system called /export/nim/spot.

nim -o define -t spot -a server=master -a source=5305_lpp -a location=/export/nim/spot -a auto_expand=yes -a comments='5300-05 spot' 5305_spot

OR

smitty nim_mkres -> select SPOT.

lsnim -l 5305_spot

Unconfiguring the NIM master: nim -o unconfig master

Installing software on a client: smitty nim (or smit nim_inst_latest) -> Perform NIM Software Installation and Maintenance Tasks -> Install and Update Software -> Install Software -> select the client and the lpp_source.

Updating client software to the latest level: smitty nim (or nim_update_all) -> Perform NIM Software Installation and Maintenance Tasks -> Install and Update Software -> Update Installed Software to Latest Level -> select the client, then select the lpp_source.

Alternate disk install for new TLs: smitty nim (or nim_alt_clone) -> Perform NIM Software Installation and Maintenance Tasks -> Alternate Disk Installation -> Clone the rootvg to an Alternate Disk (select target machine and disk)

Alternate disk install for a new release: smit nim (or nimadm_migrate) -> Perform NIM Software Installation and Maintenance Tasks -> Alternate Disk Installation -> NIM Alternate Disk Migration -> Perform NIM Alternate Disk Migration (select client, disk name, lpp_source and SPOT name)

Performing installs from the client: smit nim (or nim_client_inst) -> install and upgrade software

RTE installation:

  • Requires lpp_source and spot
  • Default is to install the BOS.autoi bundle
  • Define the client on the NIM master
  • Prepare the NIM master to supply RTE install resources to the client
  • Initiate the installation from the client

Defining the client: smit nim_mkmac (give hostname and press Enter), then give: machine type -> standalone, hardware platform type -> chrp, communication protocol needed by client -> nimsh, cable type -> N/A.

Client on a new network: smit nim_mkmac -> give hostname and press Enter. Type of network attached to the network install interface -> ent (Ethernet network), Enter. Give NIM network -> network2, subnet mask -> 255.255.255.0, and the default gateway used by the machine and the master.

Setting up the master to install: smit nim_bosinst -> select target machine -> select the installation type (rte) -> select the lpp_source -> select the SPOT -> install the base OS on standalone clients

Checking the NIM master: lsnim -l client; tail -1 /etc/bootptab (the bf field in /etc/bootptab specifies the boot file that will be transferred to the client using TFTP after the client contacts the master using BOOTP); ls -l /tftpboot (actually a symbolic link); showmount -e (shows exported file systems)
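The checks above, collected in one place (the client name is assumed):

```shell
# Verify that the master is ready to netboot the client (client name assumed)
lsnim -l lpar55          # check the Cstate/info fields of the client object
tail -1 /etc/bootptab    # bf= field names the boot file sent to the client via TFTP
ls -l /tftpboot          # boot image plus the client .info file (symbolic links)
showmount -e             # SPOT and lpp_source directories should be NFS-exported
```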

Typical Install Sequence:

  • Client initiates a BOOTP request to the NIM server.
  • NIM server responds with information about the boot file (from the bootptab file)
  • Client initiates TFTP of the boot file from the NIM server.
  • Client runs the boot file
  • Client NFS-mounts the SPOT and lpp_source
  • The operating system is installed

Accessing SMS: Open HMC -> select the LPAR -> Activate -> select profile (default) -> click Open a terminal window -> Advanced -> select boot mode SMS -> OK -> select Remote IPL -> select the adapter -> select internet protocol version (IPv4) -> select network service (BOOTP) -> set the IP parameters (client IP, server IP, gateway, subnet mask) -> set the bootlist (Select Install/Boot Device -> select Network -> select network service (BOOTP) -> select the normal boot mode) -> exit (are you sure you want to exit SMS? yes).

Monitoring progress on the master: lsnim -l client (info -> prompting_for_data_at_console)

Installation: Main BOS installation menu (select install now with default settings)

To view the bosinst log -> nim -o showlog -a log_type=bosinst client.

Listing valid operations for an object type: lsnim -Pot master

Listing valid operations for an object: lsnim -O client

Rebuilding the /etc/niminfo file on the master: nimconfig -r

On a client: niminit -a name=client -a master=master

Backing up the NIM database: smitty nim_backup_db (default value is /etc/objrepos/nimdb.backup)

Restore the previously created backup: smitty nim_restore_db

NIM Log files:

/var/adm/ras/nimlog

/var/adm/ras/nim.installp

/var/adm/ras/nimsh.log

/var/adm/ras/bosinstlog

/var/adm/ras/nim.setup

High availability (alternate NIM master)

/etc/niminfo: Lists the active NIM master, and a list of valid alternate masters.

Configure an alternate NIM master: smit niminit_altmstr

Synchronizing the NIM database: smit nim_altmstr (select Synchronize alternate master's NIM DB)

Installation & Upgrades

Install a Client Using NIM:

  • Configure the NIM master
  • Define the basic NIM resources
  • Define the NIM client you want to install
  • smit nim_bosinst
  • Select a target for the BOS installation operation
  • Select rte installation for the installation type
  • Select the lpp_source resource for the BOS installation
  • Select the SPOT resource for the BOS installation
  • Select a bosinst_data resource that is capable of performing a non-prompted BOS installation
  • Select a resolv_conf resource
  • Select the accept new license agreements option and select yes
  • Press Enter
  • Check the status using lsnim -l <client name>

Clone a rootvg using alternate disk installation:

  • Check the status of the physical disks (lspv); use hdisk1 as the alternate disk.
  • Check the fileset bos.alt_disk_install.rte using lslpp. If it is not installed, install it using geninstall -d /dev/cd0 bos.alt_disk_install.rte
  • smit alt_clone -> select hdisk1

VIO updates:

  • Applying updates from a local hard disk: log in to the VIO server as padmin -> create a directory -> transfer the update files using ftp (or mount remote_machine_name:directory /mnt) -> commit previous updates (updateios -commit) -> apply the update (updateios -accept -install -dev <directory>) -> verify the update using ioslevel -> reboot the VIO server (shutdown -restart)
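A sketch of the same update sequence as commands; the fix directory name is assumed:

```shell
# On the VIO server, logged in as padmin (directory name is illustrative)
mkdir updates                          # create a directory for the fix files
# ...transfer the update files here with ftp, or NFS-mount them instead:
# mount remote_machine:/export/viofixes /mnt
updateios -commit                      # commit any previously applied updates
updateios -accept -install -dev /home/padmin/updates
ioslevel                               # verify the new level
shutdown -restart                      # reboot the VIO server
```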

Migration to AIX Version 5.3:

Prechecks:

  • System requirements: physical memory: 128 MB, paging space: 512 MB, disk space: 2.2 GB
  • Take a snap of the system using snap -a
  • Document your hardware using lsdev -CHc memory, bootinfo -r, lsdev -CHc disk, lsdev -CHc adapter, df -k
  • Document your software: oslevel, lslpp -la, lslicense, lsattr -El sys0
  • Take a system backup and back up the data VGs

Migration Process:

  • Change the boot list using bootlist -m normal cd0
  • Insert the AIX 5L base CD into the CD-ROM drive
  • Boot the system from the CD. You will get the installation menu.
  • The default method of installation is migration
  • Select the hdisk, and select advanced options for 32-bit or 64-bit

Migration using alternate disk install:

  • The OS is copied to another disk on your NIM client
  • The copied rootvg is then migrated to AIX 5L.

Process: smit nim (or nimadm_migrate) -> Perform NIM Software Installation and Maintenance Tasks -> Alternate Disk Installation -> NIM Alternate Disk Migration -> Perform NIM Alternate Disk Migration (select client, disk name, lpp_source and SPOT name)

Migration to AIX Version 6.1:

  • Take a mksysb backup of rootvg on bootable media
  • Make a copy of: /etc/inetd.conf, /etc/inittab, /etc/motd, /usr/dt/config/Xservers
  • Check and remove restricted tunables in /etc/tunables/nextboot
  • Ensure the root user's primary authentication method is system: lsuser -a auth1 root; chuser auth1=system root
  • Users who are logged in must log off
  • Check the error log: errpt
  • Verify the processor capacity (32/64-bit): prtconf -c
  • Insert the 6.1 DVD and mount it: mount -v cdrfs -o ro /dev/cd0 /mnt
  • Copy the file /mnt/usr/lpp/bos/pre_migration to /tmp and run /tmp/pre_migration. The output will be stored in /home/pre_migration.yymmddhhmmss
  • shutdown -F
  • Manually turn on the system and boot from the DVD
  • Select the current console as the system console
  • Select the English language
  • Select change/show installation settings and make sure the installation type is migration
  • After the migration, run /usr/lpp/bos/post_migration
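The pre-migration checks above can be sketched as a session (device names and paths as in the list):

```shell
# Pre-migration checks before booting from the AIX 6.1 DVD
lsuser -a auth1 root                   # confirm root's primary auth method
chuser auth1=system root               # set it to system if needed
errpt | more                           # review the error log
prtconf -c                             # 32-bit or 64-bit processor capacity?
mount -v cdrfs -o ro /dev/cd0 /mnt     # mount the AIX 6.1 DVD read-only
cp /mnt/usr/lpp/bos/pre_migration /tmp
/tmp/pre_migration                     # output lands in /home/pre_migration.*
shutdown -F                            # then boot from the DVD manually
```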

Migrating HACMP cluster to 5.3:

  • Enough disk space is required on / (1.2 MB) and /usr (120 MB)
  • A minimum of 128 MB RAM is required

Pre migration Steps:

  • Take a snapshot
  • Take system backup
  • Run lppchk -v; check the ML, errpt, df -k, lsps -s

Rolling Migration:

  • From the working cluster, save a snapshot
  • Take a mksysb
  • Create an alt_disk_install clone
  • Stop HACMP with takeover. Check that the resource groups moved to another node; confirm with clfindres.
  • Install the latest AIX fixes
  • Update and verify the RSCT levels
  • Remove and replace SDD (stopsrc -s sddsrv, rmdev -dl dpo -R, uninstall SDD with smitty remove, install the latest SDD)
  • Run smitty update_all
  • Reboot the node
  • Repeat the steps on the other nodes
  • Check the cluster state with lssrc -ls clstrmgrES

Snapshot Migration:

  • Stop HACMP on all nodes
  • Run smitty remove and deinstall cluster.*
  • Migrate the AIX/RSCT
  • Install the HACMP packages on all nodes
  • Reboot all nodes
  • Convert the snapshot: /usr/es/sbin/cluster/conversion/clconvert_snapshot -v 5.2 -s snapshot.odm
  • Apply the snapshot: smitty hacmp -> Extended Configuration -> Snapshot Configuration -> Apply a Cluster Snapshot -> select the snapshot and press Enter
  • Start cluster services one node at a time.

Post Migration steps:

  • Verify and synchronize the cluster configuration
  • Do failover test

HMC & LPAR Short Notes

HMC AND LPAR

An HMC device is required to perform LPAR, DLPAR, and CUoD configuration and management.

A single HMC can manage 48 i5 systems and 254 LPARs.

A partition can have a maximum of 64 virtual processors.

A mix of dedicated and shared processors within the same partition is not supported.

Sharing a pool of virtualized processors is known as Micro Partitioning technology

The maximum number of physical processors on p5 is 64.

In Micro-Partitioning technology, the minimum capacity is 1/10 of a processor (0.1 processing units).

Virtual Ethernet enables inter partition communication without a dedicated physical network adapter.

The virtual IO server owns the real resources that are shared with other clients.

Shared Ethernet adapter is a new service that acts as a layer 2 network switch to route network traffic from a virtual Ethernet to a real network adapter.

On the p5-595: max number of processors - 64, max memory size - 2 TB, dedicated processor partitions - 64, shared processor partitions - 254.

The HMC model for the p5-595 is 7310-C04 or 7310-CR3

HMC Functions: LPAR, DLPAR, Capacity on demand without reboot, Inventory and microcode management, Remote power control.

One HMC supports 254 partitions.

A Partition Profile is used to allocate resources such as processing units, memory and IO cards to a partition. Several partition profiles may be created for the same partition.

System profile is a collection of partition profiles. A partition profile cannot be added to a system profile if the partition resources are already committed to another partition profile.

To change from one system profile to another, all the running partitions must be shutdown.

To find the current firmware level: lscfg -vp | grep -p 'Platform Firmware:'

Simultaneous multithreading: instructions from the OS are loaded simultaneously into the processor and executed.

DLPAR: DLPAR allows us to add, move, or remove processor, memory, and IO resources to, from, or between active partitions manually, without having to restart or shut down the partition.

Unused processing units are available in the shared processor pool.

Dedicated processors are whole processors that are assigned to a single partition. The minimum no. of dedicated processors you must assign is one processor.

When a partition with dedicated processors is powered down, their processors will be available to the shared processor pool. This capability is enabled by “Allow idle processors to be shared”.

Idle processors from active partitions with dedicated processors can be used by any uncapped partition that requires additional processing units to complete its jobs.

Shared processor minimum processing unit is 0.1

Capped : The processor usage never exceeds the assigned processing capacity.

Uncapped : Processing capacity may be exceeded when the shared processor pool has spare processing units.

Weight is a number in the range 0-255. If there are 3 processing units available in the shared processor pool, partition A has an uncapped weight of 80, and B has a weight of 160, then LPAR A will receive 1 processing unit and B will receive 2 processing units (spare capacity is distributed in proportion to weight).
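As a worked example of the proportional-share arithmetic (using the pool size and weights from the note above), a small shell snippet can reproduce the numbers:

```shell
# Spare capacity is split in proportion to uncapped weight:
#   share = spare * weight / (sum of competing weights)
# Values below are the example from the note: 3.0 spare units, weights 80 and 160.
spare=3.0
weight_a=80
weight_b=160
share_a=$(awk -v s="$spare" -v a="$weight_a" -v b="$weight_b" \
          'BEGIN { printf "%.1f", s * a / (a + b) }')
share_b=$(awk -v s="$spare" -v a="$weight_a" -v b="$weight_b" \
          'BEGIN { printf "%.1f", s * b / (a + b) }')
echo "LPAR A gets $share_a processing units"   # 1.0
echo "LPAR B gets $share_b processing units"   # 2.0
```

Note this is only the arithmetic model implied by the note; the hypervisor's actual dispatching also depends on how much capacity each uncapped partition is demanding at the time.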

Minimum memory is the minimum amount of memory that is needed by the logical partition to start.

Desired memory is the requested amount of memory for the partition; the partition will receive an amount of memory between the minimum and the desired value. Desired memory is the amount of memory the LPAR should have when it is powered on. If the managed system does not have the desired amount of memory available, the uncommitted memory resources that are available will be assigned to the LPAR when it is activated.

You can't increase the memory beyond the maximum value.

Installed memory is the total no. of memory units that are installed on the managed system

Creating a new LPAR:

Server and Partition -> Server Management -> right-click Partitions -> Create -> Logical Partition

Give the partition ID (numeric, between 1 and 254) and name (max 31 characters)

Give the partition type (AIX or Linux, i5/OS, VIO)

Select workload management group: No

Give profile name

Specify the Min, Desired and Max memory

Select the dedicated/shared processors

If you select dedicated, give the min, desired, and max processors

If you select shared, give the min, desired, and max processing units and click Advanced

Click the radio button (capped/uncapped) and give the virtual processors (min, desired, max)

If you select uncapped, give the weight also.

Allocate physical IO resources: select the IO and click Add as required/Add as desired.

IO resources can be configured as required or desired. A required resource is needed for the partition to start when the profile is activated. Desired resources are assigned to the partition if they are available when the profile is activated.

And select the console, location code

To create another profile: right-click the partition -> Create -> Profile -> give the profile ID.

Change the default profile: right-click the partition -> Change Default Profile -> choose the profile.

Restart options:

DUMP: Initiate a main storage or system memory dump on the logical partition, and restart the logical partition when complete.

Immediate: As quickly as possible, without notifying the logical partition.

DUMP Retry: Retry a main storage or system memory dump on the logical partition, and restart the logical partition when complete.

Shutdown options:

Delayed: Shut down the logical partition by starting the delayed power-off sequence.

Immediate: As quickly as possible, without notifying the logical partition.

DLPAR:

DLPAR can be performed against the following types :

Physical Adapters

Processors

Memory

VIO Adapters

Right-click the partition -> Dynamic Logical Partitioning -> Physical Adapter Resources -> Add/Move/Remove

Licensed Internal Code updates: To install Licensed Internal Code fixes on your managed systems for the current release, click "Change Licensed Internal Code for the current release".

To upgrade the Licensed Internal Code on your managed systems to a new release, click "Upgrade Licensed Internal Code to a new release".

HMC security: Servers and clients communicate over the Secure Sockets Layer (SSL), which provides server authentication, data encryption, and data integrity.

HMC serial number: lshmc -v

To format the DVD-RAM media

The following steps show how to format the DVD-RAM disk:

1. Place a DVD-RAM disk in to the HMC DVD drive.

2. In the HMC Navigation area, under your managed system, click Licensed Internal Code Maintenance.

3. Then click HMC Code Update.

4. In the right-hand window, click Format Removable Media.

5. Then select the Format DVD radio button.

6. Select Backup/restore.

7. Then click the Format button.

The DVD-RAM disk should be formatted in a few seconds, after which you will receive a "Format DVD has been successfully completed - ACT0001F" message.

Back up to formatted DVD media

Use the following steps to back up the Critical Console Data (CCD) to the formatted DVD media:

1. In the HMC Navigation area, click Licensed Internal Code Maintenance.

2. Then click the HMC Code Update.

3. In the right-hand window, click Back up Critical Console Data.

4. Select the Back up to DVD on local system radio button and click the Next button.

5. Enter some valid text in the description window and click OK.