Tuesday, April 19, 2011

WPAR Short notes

WPAR & 6.1

It’s a software based virtualization solution for creating and managing multiple individual AIX OS environments within a single AIX based LPAR.

Live Partition Mobility: a PowerVM feature that provides the ability to migrate a running LPAR between physical systems.

WPARs reduce the number of managed LPARs

Inside a WPAR, an application has the following benefits:

Private execution environments

Dedicated network addresses and filesystems.

Interprocess communication that is restricted to processes executing only in the same WPAR

System WPAR: It's an instance of AIX. It contains dedicated writable filesystems and system service daemons. It can share the global environment's /usr and /opt filesystems in read-only mode.

Application WPAR: A WPAR that hosts only a single application or process. It shares the file systems of the global environment and does not run any system service daemons.

It is not possible to log in to an application partition, either locally or remotely.

Global Environment: It owns all physical and virtual resources of the LPAR and allocates resources to the WPARs. Most performance and tuning activities are performed from this environment. The system administrator must be logged in to the global environment to create, activate and manage WPARs.

Processes: A process running inside a WPAR can only see other processes in the WPAR.

Processes running in other WPARs and in the global environment are invisible to it. Processes can only access resources that are explicitly available inside the WPAR.

Users: Application WPARs inherit their user profiles from the global environment, so they have the same privileges as the global environment. System WPARs maintain an independent set of users.

Resources: Resources created or owned by the global environment can only be used by the global environment unless they are explicitly shared with a WPAR. Resources created or owned by a WPAR are visible only to that WPAR and to the global environment. To isolate file systems between system WPARs, a separate directory tree is created under /wpars for each WPAR. Inside this directory each WPAR maintains its own home, tmp and var directories. A system WPAR also mounts the global environment's /opt and /usr filesystems read-only. Application WPARs do not create their own filesystems, so they are usually allowed access to the filesystems owned by the global environment.

Each system WPAR is assigned its own network address. WPARs running under the same AIX instance communicate via the loopback interface.

When to use workload partitions:

  • Improve application availability
  • Simplify OS and APP management
  • Manage application resource utilization

The upper limit on the number of WPARs that can run within an LPAR is 8192.

WPAR administration:

  • To use main WPAR menu: smit wpar
  • To use application WPAR menu: smit manage_appwpar
  • To use system WPAR menu: smit manage_syswpar

Create System WPAR: mkwpar –n wpar001

mkwpar -n wpar001 -N address=9.3.5.182

First, the OS creates and mounts the WPAR's file systems. Next it populates them with the necessary system files. Finally it synchronizes the root part of the installed software. When the creation of the new WPAR is complete, it is left in the Defined state.

Starting a WPAR:

lswpar (WPAR in the Defined state)

Name     State  Type  Hostname  Directory
wpar001  D      S     wpar001   /wpars/wpar001

startwpar wpar001 (mounts the file systems and adds the IP address)

lswpar (WPAR now in the Active state)

Name     State  Type  Hostname  Directory
wpar001  A      S     wpar001   /wpars/wpar001

You can log in to the WPAR using clogin from the global environment, or via telnet. clogin does not depend on a TCP/IP connection.

To determine whether you are inside a WPAR or in the global environment, run the uname -W command. It returns 0 if you are in the global environment, and a value other than 0 if you are inside a WPAR.
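A quick way to use this in a script; a minimal sketch (the messages are illustrative):

# uname -W returns 0 in the global environment, a non-zero WPAR id inside a WPAR
if [ "$(uname -W)" -eq 0 ]; then
    echo "running in the global environment"
else
    echo "running inside WPAR number $(uname -W)"
fi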

Stopping a WPAR: shutdown -F (stops the WPAR from inside the WPAR)

stopwpar wpar001 (stops the WPAR from the global environment)

-F forces the WPAR shutdown (when stopping from the global environment)

-N shuts down immediately.

Rebooting a WPAR: shutdown -Fr (reboots the WPAR from inside the WPAR)

rebootwpar wpar001 (reboots it from the global environment)

Changing a WPAR:

You can change a WPAR's name only when the WPAR is in the Defined state, using chwpar -n (see the sketch below).

chwpar -n newname wpar001
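A minimal sketch of the rename flow (the new name wpar002 is illustrative):

stopwpar wpar001              # the WPAR must be in the Defined state before renaming
chwpar -n wpar002 wpar001     # rename wpar001 to wpar002
lswpar wpar002                # confirm the new name and state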

Broken state: the state a WPAR is left in when an operation on it fails.

Investigation:

Check the logs (/var/adm/ras, /var/adm/WPARs)

Check the processes with ps -@ (it shows processes by WPAR)

Removing a WPAR: verify the WPAR is in the Defined state, take a backup, then run rmwpar wpar001 (see the sketch below).
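One possible removal sequence, sketched with an illustrative backup file path:

savewpar -f /backups/wpar001.bk wpar001   # back up the WPAR while it is still active
stopwpar wpar001                          # bring it to the Defined state
lswpar wpar001                            # confirm state D
rmwpar wpar001                            # remove the WPAR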

WPAR states:

Defined (D): the WPAR has been created but not started.

Active (A): the normal running state.

Broken (B): a failure has occurred.

Transitional (T): the WPAR is in the process of changing from one state to another.

Paused (P): reached when a WPAR has been successfully checkpointed or restored.

Checkpointable (mobile) WPARs are created with the -c flag.

Creating an application WPAR: execute the application within the WPAR using wparexec. Example: wparexec /hga/myapp

The wparexec command starts myapp immediately after creation. This type of WPAR exists only while the application is running; when the application ends, the WPAR also ends and all of its resources are freed.

If the application WPAR has a dependency on a filesystem that is not mounted, it will mount the file system automatically.

lswpar (Transitional state)

Name   State  Type  Hostname  Directory
myapp  T      A     myapp     /

lswpar (Active state)

Name   State  Type  Hostname  Directory
myapp  A      A     myapp     /

lswpar (the entry disappears once the application ends)

File Systems:

Types of File systems: namefs, jfs, jfs2, NFS.

By default the system creates /, /tmp, /home and /var as jfs2, and /opt, /proc and /usr as namefs.

Creating a file system for a running WPAR: crfs -v jfs2 -m /wpars/wpar001/newfs -u wpr00 -a logname=INLINE -a size=1G

Changing a file system: chfs -a size=512M /wpars/wpar001/newfs

Backing up the global environment: stop all WPARs, then run the mksysb, mkdvd, or mkcd command with the -N flag, as sketched below.
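A minimal sketch of that sequence; the tape device /dev/rmt0 and the WPAR names are illustrative, and -N is the flag mentioned above:

stopwpar wpar001              # stop each running WPAR first
stopwpar wpar002
mksysb -i -N /dev/rmt0        # then back up the global environment with -N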

IBM workload partition manager for AIX is a tool for monitoring and managing WPARs.

AIX 6.1

Workload Partition Manager (an extra software package that needs to be installed)

  • Live application mobility (move a partition from one system to another)
  • Automatically moves partitions if necessary

AIX 6 requirements: POWER4, POWER5 or POWER6 hardware

WPAR: a lightweight miniature AIX running inside AIX. It is not hypervisor-based partitioning.

WPARs share the global system resources with the global copy of AIX: they share the AIX OS kernel, and they share processors, memory and I/O adapters from the global resources.

Each WPAR shares /usr and /opt with the global AIX read-only.

Private filesystems: /, /tmp, /var, /home.

Its own network ip address and hostname

A separate administrative and security domain

2 types of wpar

· System

· Application

Live application mobility: Moving a running wpar to another machine or LPAR.

  • Installing a new machine (moving a WPAR is a very fast way)
  • Multi-system workload balancing (load balancing of CPUs, memory and I/O)
  • Use mobility when upgrading a machine (AIX or firmware) or for repairs

System wpar: it’s a copy of aix

  • Create it and it goes to the Defined state; start (activate) it; it can be stopped and, if no longer required, removed.
  • It's a complete virtualized OS environment (runs multiple services and applications)
  • Runs services like inetd, cron, syslog
  • Own root user, users and groups.
  • Does not share any file systems with other wpars or global system.

Application wpar:

  • Isolate an individual application
  • Lightweight: one process, which can start further processes.
  • Created and started in seconds
  • Starts when created. Automatically removed when application stops.
  • Shares global file systems
  • Good for HPC (high-performance computing), i.e. long-running applications

Wpar manager:

  • Install the WPAR agent and it will talk to all WPARs on a machine
  • The WPAR manager can see the WPARs running on the machine
  • By using a web browser it can communicate with the WPARs
  • A web server is running
  • It’s a graphical interface
  • Create, remove and moving wpars
  • Start & stop them
  • Monitoring & reporting
  • Manual relocation
  • Automated relocation

Workload application mobility: relocate

  • On the WPAR manager, select the WPAR -> click relocate -> select the target AIX
  • chkptwpar -k -> freezes the WPAR, saves the WPAR processes and state to a statefile on NFS, and kills the WPAR processes once they are no longer needed
  • restartwpar: this command takes the statefile, rebuilds the WPAR processes and state, and starts the WPAR.

Reasons for using wpars:

  • Reducing system administration time: many applications on one AIX instance reduces installation and updating of AIX, monitoring, backup, recovery, etc.
  • Application encapsulation: treat apps as isolated units -> create/remove, start/stop, checkpoint/resume
  • Rapid environment creation for a new application
  • Reduced cost: only one copy of AIX plus shared access to AIX disks
  • Simple to move an application to a different machine: application mobility, performance balancing

Starting and stopping wpar:

  • Access the WPAR manager web console. It is a secure link: https://hostname:14443/ibm/console, then log on.
  • Managed systems (an entire physical server or an LPAR) and workload partitions are under the resource views tab.
  • A WPAR in the Active state is running; not running -> Defined; a green tick means mobility is enabled; Transitional state -> an operation is in progress.
  • Select WPARs in the Defined state -> actions -> start -> OK
  • Select WPARs in the Active state -> actions -> stop -> select normal stop / hard stop / force WPAR to stop -> OK
  • Monitor the action using monitoring -> task activity
  • (OR) run /usr/sbin/stopwpar -h sec_wpar on the global system.

Application Mobility(moving wpar between machines):

  • Check whether the WPAR is mobility-enabled; if not, it cannot be moved.
  • Select the WPAR -> actions -> relocate -> click browse -> OK
  • Monitor the activity using task activity from the monitoring tab.

Creating a WPAR(quick way):

  • New -> give the WPAR name -> give the hostname -> give the managed system -> select system/application -> if it is an application WPAR, give the application name and select/deselect enable mobility -> if it is a system WPAR, select/deselect use private /usr,/opt and enable mobility.
  • Give the NFS server and remote directory if you select enable mobility.
  • (OR) /usr/sbin/mkwpar -c -h wparname -n wp13 -M dev=/nfs/wp13root directory=/ host=managed_system mountopts=rw vfs=nfs -R active=yes -S
  • It is in the Defined state, so actions -> start.

Creating wpar(detailed way):

  • Guided activities -> Create workload partition -> next -> select partition type (system/application) -> give the partition name -> next -> deploy this WPAR to an existing managed system -> give the managed system name -> give the password -> click "start workload partition when system starts" and "start the WPAR immediately upon deployment" -> next -> enable relocation -> give the network details -> give the NFS server name and remote directory

Mobility between power4, power5 and power6 machines:

  • Compatibility check: select the WPAR -> click on actions -> compatibility (it shows managed systems that meet the basic requirements for relocating the selected WPAR)
  • A WPAR cannot be moved between different machine types, for example POWER4 to POWER5. First stop the WPAR and remove it with the "preserve local file systems on server" option; the WPAR is then in the Undeployed state. Then click the WPAR -> actions -> deploy -> enter the target system -> click "start the WPAR immediately upon deployment" and "preserve file systems" -> OK.

WPAR properties:

  • Change properties: select the WPAR -> actions -> view/modify WPAR
  • Change the processors using resource control

Access and controlling via the command line:

  • lswpar -> gives WPAR details (name, state, type, hostname, directory)
  • startwpar mywpar
  • stopwpar -hN mywpar
  • lswpar -L mywpar
  • mkwpar -n first
  • mkwpar -n <name> -h <hostname> -N netmask=<netmask> address=<address>
  • -c for checkpoint (mobility)
  • -M directory=/ vfs=nfs host=9.9.9.9 dev=/nfs/wp13 /opt
  • startwpar wp13
  • clogin wp13

Application Mobility:

Source AIX: /opt/mcr/bin/chkptwpar wp13 -d /wpars/wp13/tmp/state -o /wpars/wp13/tmp/checkpoint.log -k

rmwpar -p wp13

Target AIX: /opt/mcr/bin/restartwpar wp13 -d /wpars/wp13/tmp/state -o /wpars/wp13/tmp/restart.log

Running application wpar:

wparexec -n temp -h hostname /usr/bin/sleep 30

Process: starting wpar, mounting, loading, stopping

Comparing WPAR & Global AIX:

  • WPAR: df -n (/, /home, /tmp, /var -> NFS mounts; /opt, /usr -> read-only)
  • host wp13 -> shows the hostname and IP address
  • All IP addresses of WPARs are placed as IP aliases on the global AIX
  • No physical volumes are available in WPARs.
  • No paging space is available in WPARs.
  • All processes running inside WPARs are also visible as processes on the global AIX.
  • ps -ef -@ | pg (an extra column shows the WPAR name)
  • ps -ef -@ wp13 | pg
  • topas -@ on the global AIX
  • topas inside a WPAR gives some results for the WPAR and some for the global AIX: yellow values are global AIX, white values are the WPAR.

VIO Short Notes


PowerVM: allows you to increase the utilization of servers. PowerVM includes logical partitioning, Micro-Partitioning, system virtualization, VIO, the hypervisor and so on.

Simultaneous Multi Threading : SMT is an IBM microprocessor technology that allows 2 separate H/W instruction streams to run concurrently on the same physical processor.

Virtual Ethernet : VLAN allows secure connection between logical partitions without the need for a physical IO adapter or cabling. The ability to securely share Ethernet bandwidth across multiple partitions increases H/W utilization.

Virtual SCSI: VSCSI provides secure communication between the partitions and the VIO server. The combination of VSCSI and VIO capabilities allows you to share storage adapter bandwidth and to subdivide single large disks into smaller segments. The adapters and disks can be shared across multiple partitions, increasing utilization.

VIO server: allows physical resources to be shared by a group of partitions. The VIO server can use both virtualized storage and network adapters, making use of VSCSI and virtual Ethernet.

Redundant VIO server: AIX or Linux partitions can be clients of one or more VIO servers at the same time. A good strategy to improve availability for sets of client partitions is to connect them to 2 VIO servers. One reason for redundancy is the ability to upgrade to the latest technologies without affecting production workloads.

Micro-Partitioning: sharing processing capacity among one or more logical partitions. The benefit of Micro-Partitioning is that it allows significantly increased overall utilization of processor resources. A micro-partition must have at least 0.1 processing units. The maximum number of partitions on any System p server is 254.

Uncapped Mode : The processing capacity can exceed the entitled capacity when resources are available in the shared processor pool and the micro partition is eligible to run.

Capped Mode : The processing capacity can never exceed the entitled capacity.

Virtual Processors :A virtual processor is a representation of a physical processor that is presented to the operating system running in a micro partition.

If a micro-partition has 1.60 processing units and 2 virtual processors, each virtual processor will have 0.80 processing units.

Dedicated processors : Dedicated processors are whole processors that are assigned to dedicated LPARs . The minimum processor allocation for an LPAR is one.

IVM(Integrated virtualization manager): IVM is a h/w management solution that performs a subset of the HMC features for a single server, avoiding the need of a dedicated HMC server.

Live Partition Mobility: allows you to move running AIX or Linux partitions from one physical POWER6 server to another without disruption.

VIO

Version for VIO 1.5

The VIO command line interface is IOSCLI.

The oem_setup_env command switches from the restricted shell to the underlying root AIX environment.

The command for configuration through smit is cfgassist

Initial login to the VIO server is padmin

Help for vio commands ex: help errlog

Hardware requirements for creating VIO :

  1. Power 5 or 6
  2. HMC
  3. At least one storage adapter
  4. If you want to share a physical disk, one large physical disk
  5. Ethernet adapter
  6. At least 512 MB memory

The latest VIO version is 2.1 fix pack 23.

Copying the virtual IO server DVD media to a NIM server:

mount /cdrom

cd /cdrom

cp /cdrom/bosinst.data /nim/resources

Execute the smitty installios command

Using smitty installios you can install the VIO S/w.

The topas -cecdisp flag shows detailed disk statistics.

The viostat -extdisk flag shows detailed disk statistics.

wkldmgr and wkldagent handle the workload manager. They can be used to record performance data, which can be viewed with wkldout.

chtcpip: command for changing TCP/IP parameters

viosecure: command for handling the security settings

mksp: creates a storage pool

chsp: adds or removes physical volumes from the storage pool

lssp: lists information about storage pools

mkbdsp: attaches storage from a storage pool to a virtual SCSI adapter

rmbdsp: removes storage from a virtual SCSI adapter and returns it to the storage pool

The default storage pool is rootvg.
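A hedged usage sketch of these pool commands; the pool name clientsp, disk hdisk3, size and adapter vhost0 are illustrative:

mksp -f clientsp hdisk3                                   # create a storage pool on hdisk3
lssp                                                      # list the storage pools
mkbdsp -sp clientsp 10G -bd lv_client1 -vadapter vhost0   # carve 10 GB from the pool and map it to vhost0
rmbdsp -sp clientsp -bd lv_client1                        # later, remove the mapping and return the storage to the pool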

Creation of VIO server using HMC version 7 :

Select the managed system -> Configuration -> Create Logical Partition -> VIO server

Enter the partition name and ID.

Check the mover service box if the VIO server partition to be created will be supporting partition mobility.

Give a partition profile name ex:default

Processors: You can assign entire processors to your partition for dedicated use, or you can assign partial processor units from the shared processor pool. Select shared.

Specify the minimum, desired and maximum processing units.

Specify minimum, desired and maximum virtual processors, and select uncapped with a weight of 191.

The system will try to allocate the desired values

The partition will not start if the managed system cannot provide the minimum amount of processing units.

You cannot dynamically increase the amount of processing units to more than the maximum.

Assign the memory in the same way: minimum, desired and maximum.

The minimum amount of memory cannot be less than 1/64 of the maximum (for example, with a maximum of 64 GB the minimum must be at least 1 GB).

I/O: select the physical I/O adapters for the partition. Required means the partition will not be able to start unless these adapters are available to it. Desired means the partition can also start without these adapters. A required adapter cannot be moved in a dynamic LPAR operation.

VIO server partition requires a fiber channel adapter to attach SAN disks for the client partitions. It also requires an Ethernet adapter for shared Ethernet adapter bridging to external networks.

VIO requires minimum of 30GB of disk space.

Create the virtual Ethernet and SCSI adapters: increase the maximum number of virtual adapters to 100.

The maximum number of adapters must not be set to more than 1024.

In actions -> select create -> Ethernet adapter; give the adapter ID and VLAN ID.

Select the "Access external network" check box to use this adapter as a gateway between the internal and external networks.

Also create the virtual SCSI adapter.

VIO server S/W installation :

  1. Place the CD/DVD in P5 Box
  2. Activate the VIO server by clicking the activate. Select the default partition
  3. Then check the "Open terminal window or console session" option, click Advanced, then OK.
  4. Under the boot mode drop down list select SMS.

After installation is complete, log in as padmin and press "a" (to accept the software maintenance agreement terms).

license -accept to accept the license.

Creating a shared Ethernet adapter

  1. lsdev -virtual (check the virtual Ethernet adapter)
  2. lsdev -type adapter (check the physical Ethernet adapter)
  3. use the lsmap -all -net command to check the slot numbers of the virtual Ethernet adapter
  4. mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1
  5. lsmap -all -net
  6. use cfgassist or the mktcpip command to configure TCP/IP, for example:
  7. mktcpip -hostname vio_server1 -inetaddr 9.3.5.196 -interface ent3 -netmask 255.255.244.0 -gateway 9.3.4.1

Defining virtual disks

Virtual disks can either be whole physical disks, logical volumes or files. The physical disks can be local or SAN disks.

Create the virtual disks

  1. log in as padmin and run the cfgdev command to rebuild the list of visible devices.
  2. lsdev -virtual (make sure the virtual SCSI server adapters are available, e.g. vhost0)
  3. lsmap -all -> to check the slot numbers and vhost adapter numbers
  4. mkvg -f -vg rootvg_clients hdisk2 -> creates the rootvg_clients VG
  5. mklv -lv dbsrv_rvg rootvg_clients 10G

Creating virtual device mappings:

  1. lsdev -vpd | grep vhost
  2. mkvdev -vdev dbsrv_rvg -vadapter vhost2 -dev dbsrv_rvg
  3. lsdev -virtual
  4. lsmap -all

The fget_config -Av command is provided on the IBM DS4000 series for a listing of LUN names.

Virtual SCSI Optical devices:

A dvd or cd device can be virtualized and assigned to client partitions. Only one VIO client can access the device at a time.

Steps :

  1. Assign the DVD drive to the VIO server.
  2. Create a server SCSI adapter using the HMC.
  3. Run the cfgdev command to get the new vhost adapter. Check using lsdev -virtual.
  4. Create the virtual device for the DVD drive (mkvdev -vdev cd0 -vadapter vhost3 -dev vcd).
  5. Create a client scsi adapter in each lpar using the HMC.
  6. Run the cfgmgr

Moving the drive :

  1. Find the vscsi adapter using lscfg | grep Cn (n is the slot number)
  2. rmdev -Rl vscsin
  3. run cfgmgr in the target LPAR

Use the dsh command to find which LPAR is currently holding the drive.

Unconfiguring the dvd drive :

  1. rmdev -dev vcd -ucfg
  2. lsdev -slots
  3. rmdev -dev pci5 -recursive -ucfg
  4. cfgdev
  5. lsdev -virtual

Mirroring the VIO rootvg:

  1. chvg -factor 6 rootvg (rootvg can then include up to 5 PVs with 6096 PPs each)
  2. extendvg -f rootvg hdisk2
  3. lspv
  4. mirrorios -f hdisk2
  5. lsvg -lv rootvg
  6. bootlist -mode normal -ls

Creating Partitions :

  1. Create new partition using HMC with AIX/linux
  2. give partition ID and Partition name
  3. Give proper memory settings(min/max/desired)
  4. Skip the physical IO
  5. give proper processing units (min/desired/max)
  6. Create virtual ethernet adapter ( give adapter ID and VLAN id)
  7. Create virtual SCSI adapter
  8. In optional settings

· Enable connection monitoring

· Automatically start with managed system

· Enable redundant error path reporting

  9. Under boot modes, select normal

Advanced Virtualization:

Providing continuous availability of VIO servers : use multiple VIO servers for providing highly available virtual scsi and shared Ethernet services.

IVM supports a single VIO server.

Virtual scsi redundancy can be achieved by using MPIO and LVM mirroring at client partition and VIO server level.

Continuous availability for VIO

  • Shared Ethernet adapter failover
  • Network interface backup in the client
  • MPIO in the client with SAN
  • LVM Mirroring

Virtual Scsi Redundancy:

Virtual scsi redundancy can be achieved using MPIO and LVM mirroring.

Client is using MPIO to access a SAN disk, and LVM mirroring to access 2 scsi disks.

MPIO: used for a highly available virtual SCSI configuration. The disks on the storage subsystem are assigned to both Virtual I/O Servers. MPIO for virtual SCSI devices supports failover mode only.

Configuring MPIO:

  • Create 2 virtual IO server partitions
  • Install both VIO servers
  • Change fc_err_recov to fast_fail and dyntrk (lets AIX tolerate cabling changes) to yes: chdev -dev fscsi0 -attr fc_err_recov=fast_fail dyntrk=yes -perm
  • Reboot the VIO servers
  • Create the client partitions. Add virtual Ethernet adapters
  • Use the fget_config command (fget_config -vA) to get the LUN-to-hdisk mappings.
  • Use the lsdev -dev hdisk -vpd command to retrieve the information.
  • The reserve_policy for each disk must be set to no_reserve (chdev -dev hdisk2 -attr reserve_policy=no_reserve).
  • Map the hdisks to vhost adapters (mkvdev -vdev hdisk2 -vadapter vhost0 -dev app_server).
  • Install the client partitions.
  • Configure the client partitions
  • Testing MPIO

Configure the client partitions:

  • Check the MPIO configuration (lspv, lsdev -Cc disk)
  • Run lspath
  • Enable the health check mode (chdev -l hdisk0 -a hcheck_interval=50 -P)
  • Enable the vscsi client adapter path timeout (chdev -l vscsi0 -a vscsi_path_to=30 -P)
  • Change the priority of a path (chpath -l hdisk0 -p vscsi0 -a priority=2)

Testing MPIO:

  • lspath
  • Shut down VIOS2
  • lspath
  • Start VIOS2 again
  • lspath

LVM Mirroring: This is for setting up highly available virtual scsi configuration. The client partitions are configured with 2 virtual scsi adapters. Each of these virtual scsi adapters is connected to a different VIO server and provides one disk to the client partition.

Configuring LVM Mirroring:

  • Create 2 virtual IO partitions, select one Ethernet adapter and one storage adapter
  • Install both VIO servers
  • Configure the virtual scsi adapters on both servers
  • Create client partitions. Each client partition needs to be configured with 2 virtual scsi adapters.
  • Add one or two virtual Ethernet adapters
  • Create the volume group and logical volumes on VIO1 and VIO2
  • A logical volume from the rootvg_clients VG should be mapped to each of the 4 vhost devices (mkvdev -vdev nimsrv_rvg -vadapter vhost0 -dev vnimsrv_rvg)
  • lsmap -all
  • When you bring up the client partitions you should have hdisk0 and hdisk1. Mirror the rootvg.
  • lspv
  • lsdev -Cc disk
  • extendvg rootvg hdisk1
  • mirrorvg -m rootvg hdisk1
  • Test LVM mirroring

Testing LVM mirroring:

  • lsvg -l rootvg
  • Shut down VIOS2
  • lspv hdisk1 (check the PV state and stale partitions)
  • Reactivate the VIOS and run varyonvg rootvg
  • lspv hdisk1
  • lsvg -l rootvg

Shared Ethernet adapter: it can be used to connect a physical network to a virtual Ethernet network, allowing several client partitions to share one physical adapter.

Shared Ethernet Redundancy: protects against temporary failure of communication with external networks. Approaches to achieve continuous availability:

  • Shared Ethernet adapter failover
  • Network interface backup

Shared Ethernet adapter failover: It offers Ethernet redundancy. In a SEA failover configuration 2 VIO servers have the bridging functionality of the SEA. They use a control channel to determine which of them is supplying the Ethernet service to the client. The client partition gets one virtual Ethernet adapter bridged by 2 VIO servers.

Requirements for configuring SEA failover:

  • One SEA on one VIOs acts as the primary adapter and the second SEA on the second VIOs acts as a backup adapter.
  • Each SEA must have at least one virtual Ethernet adapter with the "access external network" flag (trunk flag) checked. This enables the SEA to provide bridging functionality between the 2 VIO servers.
  • This adapter on both SEAs has the same PVID.
  • The priority value defines which of the 2 SEAs will be the primary and which the secondary. An adapter with priority 1 has the highest priority.

Procedure for configuring SEA failover:

  • Configure a virtual Ethernet adapter via DLPAR. (ent2)
    • Select the VIOS -> click the task button -> choose DLPAR -> virtual adapters
    • Click actions -> Create -> Ethernet adapter
    • Enter the slot number for the virtual Ethernet adapter into adapter ID
    • Enter the port virtual LAN ID (PVID). The PVID allows the virtual Ethernet adapter to communicate with other virtual Ethernet adapters that have the same PVID.
    • Select IEEE 802.1Q
    • Check the box "access external network"
    • Give the virtual adapter a low trunk priority
    • Click OK.
  • Create another virtual adapter to be used as a control channel on VIOS1 (give it another VLAN ID, do not check the box "access external network") (ent3)
  • Create the SEA on VIOS1 with the failover attribute (mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1 -attr ha_mode=auto ctl_chan=ent3). Example: ent4
  • Create a VLAN Ethernet adapter on the SEA to communicate with the external VLAN-tagged network (mkvdev -vlan ent4 -tagid 222). Example: ent5
  • Assign an IP address to the SEA VLAN adapter on VIOS1 using mktcpip.
  • Repeat the same steps on VIOS2 (give the higher trunk priority: 2).

Client LPAR Procedure:

  • Create client LPAR same as above.

Network interface backup: NIB can be used to provide redundant access to external networks when 2 VIO servers are used.

Configuring NIB:

  • Create 2 VIO server partitions
  • Install both VIO servers
  • Configure each VIO server with one virtual Ethernet adapter. Each VIO server needs to be on a different VLAN.
  • Define SEA with the correct VLAN ID
  • Add virtual Scsi adapters
  • Create client partitions
  • Define the EtherChannel in the client using smitty etherchannel (see the sketch below)
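For reference, that last step can also be done from the AIX command line in the client; a hedged sketch, assuming ent0 is the primary virtual adapter and ent1 the backup:

mkdev -c adapter -s pseudo -t ibm_ech -a adapter_names=ent0 -a backup_adapter=ent1   # creates the NIB EtherChannel pseudo-device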

Configuring multiple shared processor pools:

Configuration -> Shared processor pool management -> select the pool name -> ...

VIOs Security:

Enable basic firewall settings: viosecure -firewall on

View all open ports in the firewall configuration: viosecure -firewall view

View the current security settings: viosecure -view -nonint

Change system security settings to the default: viosecure -level default

List all failed logins: lsfailedlogin

Dump the global command log: lsgcl

Backup:

Create a mksysb file of the system on an NFS mount: backupios -file /mnt/vios.mksysb -mksysb

Create a backup of all structures of VGs and/or storage pools: savevgstruct vdiskvg (data will be stored to /home/ios/vgbackups)

List all backups made with savevgstruct: restorevgstruct -ls

Back up the system to an NFS-mounted file system: backupios -file /mnt

Performance Monitoring:

Retrieve statistics for ent0: entstat -all ent0

Reset the statistics for ent0: entstat -reset ent0

View disk statistics: viostat 2

Show a summary for the system: viostat -sys 2

Show disk stats by adapter: viostat -adapter 2

Turn on disk performance counters: chdev -dev sys0 -attr iostat=true

topas -cecdisp

Link aggregation on the VIO server:

Link aggregation means you can give one IP address to two network cards connected to two different switches for redundancy; only one network card is active at a time. (A VIOS command-line sketch follows these steps.)

Devices -> Communication -> EtherChannel / IEEE 802.3ad Link Aggregation -> Add an EtherChannel / Link Aggregation

Select ent0 and mode 8023ad

Select the backup adapter for redundancy, e.g. ent1

A new adapter named ent2 will be created automatically.

Then assign the IP address: smitty tcpip -> Minimum configuration and startup -> select ent2 -> enter the IP address
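On the VIOS restricted shell the same aggregation can be created without smitty; a hedged sketch, assuming ent0 and ent1 are the physical ports:

mkvdev -lnagg ent0 ent1 -attr mode=8023ad   # creates the Link Aggregation device, e.g. ent2
lsdev -type adapter                         # confirm the new ent device, then assign the IP with mktcpip as usual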

VLANs:

AIX Troubleshooting

Troubleshooting AIX and HACMP

Core dump:

  • Find core dump files: /usr/samples/findcore/corepath, getvfsname
  • Debug and analyze the core: snapcore -d /tmp/coredir core.16928.24200405

Boot Process:

  • To check the boot process: alog -t boot -o
  • Failure to locate a boot image: the boot image on the disk may be corrupted. Access rootvg from bootable media (select start maintenance mode for system recovery -> access a root VG -> 0 to continue) -> run the bosboot command

Corrupted FS / corrupted JFS log device / failing fsck / bad disk: boot from the CD-ROM or mksysb tape -> select start maintenance mode for system recovery -> access rootvg -> format the default JFS log using /usr/sbin/logform /dev/hd8 -> run fsck -y against /dev/hd1, hd2, hd3, hd4 and hd9var (if fsck finds any errors, repair the FS using fsck -p /dev/hd#) -> lslv -m hd5 (to find the boot disk) -> recreate the boot image using bosboot -ad /dev/hdisk# and bootlist -m normal hdisk# -> shutdown -Fr

Remove much of the system configuration and save it to a backup directory: mount /dev/hd4 /mnt; mount /dev/hd2 /usr; mkdir /mnt/etc/objrepos/bak; cp /mnt/etc/objrepos/Cu* /mnt/etc/objrepos/bak; umount all; exit

Save the clean ODM database: savebase -d /dev/hdisk#

Check file system sizes using: df /dev/hd3; df /dev/hd4

Check whether the /etc/inittab file is missing.

Check all permissions: ls -al / .profile /etc/environment /etc/profile

Check ls -al /bin /bin/bsh /bin/sh /lib /u /unix

Check with ls -l that /etc/fsck and /sbin/rc.boot exist.

No login prompt: ps ax | grep console -> check whether the getty process is running; lscons

System dump:

  • Estimating dump size: sysdumpdev -e
  • To view the current dump device: sysdumpdev -l (/dev/hd6)
  • To specify the primary dump device: sysdumpdev -P -p /dev/hd7
  • To specify the secondary dump device: sysdumpdev -P -s /dev/hd7
  • Create a dump device: estimate the size with sysdumpdev -e; mklv -y hd7 -t sysdump rootvg 7
  • Check the dump resources used by the system dump: /usr/lib/ras/dumpcheck -p
  • Change the size of the dump device: chps -s 1 hd6
  • Always allow system dump: sysdumpdev -k
  • Get the last dump information: sysdumpdev -L

TCP/IP troubleshooting:

  • traceroute shows each gateway that the packet traverses on its way to the target host. traceroute uses the UDP protocol and ping uses ICMP. If you receive an answer from the local gateway, the problem lies with the remote host; if you receive nothing, the problem is in the local network.
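A minimal check sequence along those lines (the host names are illustrative):

ping -c 4 remotehost        # ICMP echo test to the target host
traceroute remotehost       # UDP probes; shows each gateway on the path
ping -c 4 local_gateway     # an answer here but nothing from the target points at the remote side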

NFS troubleshooting:

  • Verify that the network connections
  • Verify inetd, portmap and biod daemons are running in the client
  • Verify valid mount point exists
  • Verify that the server is up and running using rpcinfo -p server
  • Verify the mountd, portmap and nfsd daemons are running on the NFS server using rpcinfo -u server mount, rpcinfo -u server portmap and rpcinfo -u server nfs
  • Check the /etc/exports file using showmount -e server
  • Identifying the cause of slow access times for NFS: stopsrc -s biod; startsrc -s biod
  • Use the nfsstat -s and nfsstat -c commands to determine if the client or server is retransmitting large blocks.
  • NFS error messages: mountd will not start, server not responding: port mapper failure – RPC timed out, mount: access denied, mount: you are not allowed

LVM Troubleshooting:

  • VG lost:
    1. NON rootvg
      • exportvg data_vg
      • remove the bad disk from the ODM using rmdev -l hdisk# -d
      • create new disks and reboot
      • if you have a savevg backup: restvg -f /dev/rmt0 hdisk#
      • if you don't have a savevg backup, recreate the VG, LVs and FSs
      • restore the FS data using restore -rqvf /dev/rmt0
    2. Rootvg

· shutdown the system and replace the bad disks

· boot in maintenance mode

· restore from a mksysb image (power off the machine -> turn on the power -> place the bootable media -> press 5 / F5 -> when the installation screen appears select start maintenance mode for system recovery -> select install from a system backup)

· import each VG into a new ODM.
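Importing the data volume groups afterwards is sketched below; the VG name and disk are illustrative:

importvg -y datavg hdisk2    # rebuild the VG definition in the new ODM from the disk's VGDA
lsvg -l datavg               # verify the logical volumes are back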

Boot Problem Management:

LED 553: Access the rootvg. Issue df -k. Check whether /tmp, /usr or / are full.

LED 553: Access the rootvg. Check /etc/inittab (empty, missing or corrupt?). Check /etc/environment.

LED 551, 555, 557: Access the rootvg. Re-create the BLV: bosboot -ad /dev/hdiskx

LED 551, 552, 554, 555, 556, 557: Access rootvg before mounting the rootvg filesystems. Re-create the JFS log (run fsck afterwards): logform /dev/hd8

LED 552, 554, 556: Run fsck against all rootvg filesystems. If fsck indicates errors (not an AIX V4 filesystem), repair the superblock: each filesystem has two superblocks, one in logical block 1 and a copy in logical block 31, so copy block 31 to block 1: dd count=1 bs=4k skip=31 seek=1 if=/dev/hd4 of=/dev/hd4

LED 551: Access rootvg and unlock the rootvg: chvg -u rootvg

LED 523 - 534: ODM files are missing or inaccessible. Restore the missing files from a system backup.

LED 518: Mount of /usr or /var failed? Check /etc/filesystems. Check the network (remote mount), the filesystems (fsck) and the hardware.