Clustered LDOM Configuration




Presentation Transcript

Clustered LDOM Configuration (T4, Solaris Cluster 4, Solaris 11):

Dwai Lahiri

Setting up the Primary/control Domain:

Set up the virtual console concentrator:

  # ldm add-vconscon port-range=5000-5100 primary_vcc primary

Set vcpus:

  # ldm set-vcpu 8 primary

Set memory:

  # ldm set-memory 16g primary

Create the initial configuration and reboot:

  # ldm add-config initial
  # ldm list-config
  factory-default
  initial [current]
  # shutdown -y -i6 -g0
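The control-domain setup above is a fixed sequence, so it can be scripted. A minimal sketch (the DRY_RUN guard is an addition, not from the slides; ldm only exists on the control domain itself, so the default here just prints the commands):

```shell
#!/bin/sh
# Sketch: apply the control-domain settings from the slide in order.
# DRY_RUN=1 (the default) prints each command instead of executing it.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "$@"          # dry run: show what would be executed
    else
        "$@"               # real run: execute on the control domain
    fi
}

run ldm add-vconscon port-range=5000-5100 primary_vcc primary
run ldm set-vcpu 8 primary
run ldm set-memory 16g primary
run ldm add-config initial
```

Saving the configuration with add-config before the reboot is what makes the settings persist across power cycles.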

Setting up VSwitches:

NOTE: The current configuration uses 2 aggregation groups (agg1 and agg2), each combining a pair of 1GbE NICs.

  # ldm add-vsw pvid=107 vid=103,104 net-dev=agg1 linkprop=phys-state inter-vnet-link=on primary_vsw0 primary
  # ldm add-vsw pvid=107 vid=103,104 net-dev=agg2 linkprop=phys-state inter-vnet-link=on primary_vsw1 primary

NOTE: After the two vswitches are created, Solaris 11 creates the following aliases for the virtual switch devices vsw0 and vsw1 respectively:

  # dladm show-phys | grep vsw
  net43   Ethernet   up   1000   full   vsw0
  net13   Ethernet   up   1000   full   vsw1
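Since both add-vsw invocations differ only in the backing aggregation and the vswitch index, they can be generated from a list. A sketch (names and VLAN IDs are the ones from the slide):

```shell
#!/bin/sh
# Sketch: emit one ldm add-vsw command per link aggregation, numbering
# the vswitches primary_vsw0, primary_vsw1, ... in argument order.
gen_vsw_cmds() {
    i=0
    for agg in "$@"; do
        echo "ldm add-vsw pvid=107 vid=103,104 net-dev=$agg linkprop=phys-state inter-vnet-link=on primary_vsw$i primary"
        i=$((i + 1))
    done
}

gen_vsw_cmds agg1 agg2
```

Piping the output through sh (or replacing echo with direct execution) would apply it on the control domain.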

List configured services and bindings:

  # ldm list-services
  # ldm list-bindings

SAN Storage Configuration:

MPxIO is enabled for the FC interfaces by running stmsboot -e -D fp and rebooting. With Solaris 11, all MPxIO-aliased controllers are c0, which avoids the complications of different 'ctd' names for the same LUN in a clustered/shared-storage model.

   8. c0t600601601E7A2200DC500196AF4EE111d0 <DGC-RAID 5-0430 cyl 51198 alt 2 hd 256 sec 16>
      /scsi_vhci/disk@g600601601e7a2200dc500196af4ee111
   9. c0t600601601E7A2200E4FA3D52AF4EE111d0 <DGC-RAID 5-0430 cyl 32766 alt 2 hd 4 sec 16>
      /scsi_vhci/disk@g600601601e7a2200e4fa3d52af4ee111
  10. c0t600601601E7A22005E6DE97DAF4EE111d0 <DGC-RAID 5-0430 cyl 51198 alt 2 hd 256 sec 16>
      /scsi_vhci/disk@g600601601e7a22005e6de97daf4ee111
  11. c0t600601601E7A220040C300ACAF4EE111d0 <DGC-RAID 5-0430 cyl 51198 alt 2 hd 256 sec 16>
      /scsi_vhci/disk@g600601601e7a220040c300acaf4ee111
  12. c0t600601601E7A220046C3F042AF4EE111d0 <DGC-RAID 5-0430 cyl 32766 alt 2 hd 4 sec 16>
      /scsi_vhci/disk@g600601601e7a220046c3f042af4ee111
  13. c0t600601601E7A220094500DBEAF4EE111d0 <DGC-RAID 5-0430 cyl 51198 alt 2 hd 256 sec 16>
      /scsi_vhci/disk@g600601601e7a220094500dbeaf4ee111
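The MPxIO naming is mechanical: the GUID between the "c0t" and the trailing "d0" of the ctd name reappears, lowercased, after "disk@g" in the /scsi_vhci physical path. A sketch of that mapping (an assumption inferred from the format output above, not a documented API):

```shell
#!/bin/sh
# Sketch: derive the /scsi_vhci physical path from an MPxIO ctd name.
guid_path() {
    # strip the c0t prefix and d0 suffix, then lowercase the GUID
    guid=$(echo "$1" | sed 's/^c0t//; s/d0$//' | tr 'A-Z' 'a-z')
    echo "/scsi_vhci/disk@g$guid"
}

guid_path c0t600601601E7A2200DC500196AF4EE111d0
```

Because the GUID is burned into the LUN rather than the path, this name is the same on every node, which is exactly why it suits shared storage.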

Clustering the CDs (control domains):

  # pkg publisher
  PUBLISHER     TYPE     STATUS   URI
  ha-cluster    origin   online
  solaris       origin   online
  solaris       mirror   online

NOTE: A local repository for the ha-cluster pkgs has been set up on node-01 and shared out on the network. The cluster software configuration is as follows.

Install the pconsole software:

  # pkg install terminal/pconsole

Install the full HA Cluster set:

  # pkg install ha-cluster-full

Install the full HA Cluster data services set:

  # pkg install ha-cluster-data-services-full

Do this on all members of the cluster. After this step is completed on each node, run scinstall from one of the nodes of the cluster:

  # scinstall

Reboot the first node manually. After the reboot, log into the node and run clquorum reset (to take the node out of installation mode), at which point the node will join the cluster.
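The three pkg installs have to land on every cluster node. A sketch that emits the per-node commands (the node names and the use of ssh are illustrative; in practice the pconsole tool installed above gives a parallel console to all nodes):

```shell
#!/bin/sh
# Sketch: print the pkg install commands needed on each cluster node.
gen_install_cmds() {
    for node in "$@"; do
        for p in terminal/pconsole ha-cluster-full ha-cluster-data-services-full; do
            echo "ssh root@$node pkg install $p"
        done
    done
}

gen_install_cmds node-01 node-02
```

Keeping the package set identical on every node before running scinstall avoids version-skew surprises during cluster formation.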

HA LDOM Creation:

Configure the LDOM to reset on control domain failures. NOTE: Run the next command on all nodes that will control the guest LDOM.

  # ldm set-domain failure-policy=reset primary

Set the domain master for the guest domain:

  # ldm set-domain master=primary ldg1
  # ldm set-var auto-boot?=false ldg1

Add a shared-disk server service:

  # ldm add-vds primary_shared_vds1 primary

Create an LDOM:

  # ldm add-domain ha_ldom1
  # ldm list
  NAME      STATE     FLAGS   CONS   VCPU  MEMORY  UTIL  UPTIME
  primary   active    -n-cv-  UART   8     16G     2.4%  1h 30m
  ldg1      active    -n----  5000   16    16G     0.4%  10d 23h 47m
  ha_ldom1  inactive  ------

Add vcpus:

  # ldm add-vcpu 8 ha_ldom1

Add memory:

  # ldm add-memory 16g ha_ldom1

Create LDOM (continued):

Add a shared LUN to the VDS (the volume name, shown here as <volname>, was garbled in the transcript):

  # ldm add-vdsdev /dev/did/dsk/d9s2 <volname>@primary_shared_vds1

Add a vdisk to the new LDOM:

  # ldm add-vdisk vdisk1 <volname>@primary_shared_vds1 ha_ldom1

Set primary as domain master:

  # ldm set-domain master=primary ha_ldom1

Disable auto-boot:

  # ldm set-var auto-boot?=false ha_ldom1
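The export-then-attach pairing above is the part that matters: the same volume name ties the vdsdev (backend) to the vdisk (guest-visible device). A sketch, using vol1 as an illustrative volume name since the original was garbled in the transcript:

```shell
#!/bin/sh
# Sketch: wire a shared DID device into a guest domain as a vdisk.
# First export the backend through the VDS, then attach it to the guest.
add_shared_vdisk() {
    did=$1; guest=$2
    echo "ldm add-vdsdev /dev/did/dsk/$did vol1@primary_shared_vds1"
    echo "ldm add-vdisk vdisk1 vol1@primary_shared_vds1 $guest"
}

add_shared_vdisk d9s2 ha_ldom1
```

Using the DID device (/dev/did/dsk/...) rather than a raw ctd path keeps the backend name identical on every cluster node, which is what allows the guest to fail over.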

Create Global FS for LDOM migration password files:

Create a global (clustered/PxFS) configuration filesystem:

  # format /dev/did/rdsk/d14
  partition> p
  Current partition table (unnamed):
  Total disk cylinders available: 32766 + 2 (reserved cylinders)

  Part      Tag    Flag     Cylinders     Size        Blocks
    0 unassigned    wm      0             0           (0/0/0)           0
    1 unassigned    wu      0             0           (0/0/0)           0
    2     backup    wu      0 - 32765     1023.94MB   (32766/0/0) 2097024
    3 unassigned    wm      0             0           (0/0/0)           0
    4 unassigned    wm      0             0           (0/0/0)           0
    5 unassigned    wm      0             0           (0/0/0)           0
    6 unassigned    wm      0 - 32764     1023.91MB   (32765/0/0) 2096960
    7 unassigned    wm      0             0           (0/0/0)           0

  partition> l
  Ready to label disk, continue? y

Make the FS:

  # newfs /dev/did/rdsk/d14s6

Add an entry for this filesystem to /etc/vfstab on every node in the cluster:

  /dev/global/dsk/d14s6  /dev/global/rdsk/d14s6  /global/config  ufs  2  no  logging

Create the RG:

  # clrg create global_config_rg

Create the HAStoragePlus resource:

  # clrs create -g global_config_rg -t HAStoragePlus -p FilesystemMountPoints=/global/config -p AffinityOn=TRUE global_config_hasp_rs

Verify that the global FS is mounted on all nodes of the cluster, then test failover of the RG across the nodes (while ensuring that the global mount points remain available).
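Since the vfstab entry must be identical on every node, generating it beats retyping it. A sketch that builds the seven-field line (device-to-mount, device-to-fsck, mount point, FS type, fsck pass, mount-at-boot, options) from the DID slice used above:

```shell
#!/bin/sh
# Sketch: emit the /etc/vfstab line for a global (PxFS) filesystem
# backed by a DID device slice.
vfstab_line() {
    did=$1; mnt=$2
    printf '%s\t%s\t%s\tufs\t2\tno\tlogging\n' \
        "/dev/global/dsk/$did" "/dev/global/rdsk/$did" "$mnt"
}

vfstab_line d14s6 /global/config
```

Appending the output to /etc/vfstab on each node (and creating the mount point) keeps the entries consistent cluster-wide.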

Configure HA-LDOM Cluster Resource:

Add the SUNW.ldom data service to the cluster:

  # clresourcetype register SUNW.ldom

Stop the LDOM manually:

  # ldm stop ha_ldom1

Create the cluster RG and RS:

  # clrg create test_ldom_rg
  # clrs create -g test_ldom_rg -t SUNW.ldom -p password_file=/global/config/passwd -p Domain_name=ha_ldom1 test_ldom_ldm_rs

Set the cluster resource migration mode:

  # clrs set -p Migration_type=MIGRATE test_ldom_ldm_rs

NOTE: The two modes available are "NORMAL" and "MIGRATE". MIGRATE instructs the HA-LDOM agent to attempt a warm or live migration, while NORMAL instructs the agent to do a failover. If a warm/live migration fails, a failover migration is used regardless of the mode that is set.
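Because Migration_type accepts only the two values named in the note, a wrapper can reject typos before they reach clrs. A sketch (the function name is mine; it echoes the clrs command rather than executing it):

```shell
#!/bin/sh
# Sketch: validate the Migration_type value before handing it to clrs set.
# Only NORMAL and MIGRATE are meaningful to the SUNW.ldom agent.
set_migration_type() {
    case $1 in
        NORMAL|MIGRATE)
            echo "clrs set -p Migration_type=$1 test_ldom_ldm_rs"
            ;;
        *)
            echo "invalid Migration_type: $1 (use NORMAL or MIGRATE)" >&2
            return 1
            ;;
    esac
}

set_migration_type MIGRATE
```

Since the agent falls back to a failover when a warm/live migration fails anyway, MIGRATE is the less disruptive default for planned moves.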