Clustered LDOM Configuration generic


Clustered LDOM Configuration (T4, Solaris Cluster 4, Solaris 11):

Clustered LDOM Configuration (T4, Solaris Cluster 4, Solaris 11) Dwai Lahiri

Setting up the Primary/control Domain:

Setting up the Primary/control Domain
Set up the vconsole service --
root@node-01:~# ldm add-vconscon port-range=5000-5100 primary_vcc primary
Set vcpus --
root@node-01:~# ldm set-vcpu 8 primary
Set memory --
root@node-01:~# ldm set-memory 16g primary
Create the initial configuration and reboot --
root@node-01:~# ldm add-config initial
root@node-01:~# ldm list-config
factory-default
initial [current]
root@node-01:~# shutdown -y -i6 -g0
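After the reboot it is worth confirming that the saved configuration is active and that the control domain kept the intended resources. This is a minimal sanity check of my own, not part of the original slides --
root@node-01:~# ldm list-config
root@node-01:~# ldm list -o console,cpu,memory primary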

Setting up VSwitches:

Setting up VSwitches
NOTE: The current configuration uses 2 aggregation groups (agg1 and agg2), each combining a pair of 1GbE NICs.
root@node-01:~# ldm add-vsw pvid=107 vid=103,104 net-dev=agg1 linkprop=phys-state inter-vnet-link=on primary_vsw0 primary
root@node-01:~# ldm add-vsw pvid=107 vid=103,104 net-dev=agg2 linkprop=phys-state inter-vnet-link=on primary_vsw1 primary
NOTE: After creating the two vswitches, Solaris 11 creates the following aliases corresponding to the virtual NICs vsw0 and vsw1 respectively --
root@node-01:~# dladm show-phys | grep vsw
net43 Ethernet up 1000 full vsw0
net13 Ethernet up 1000 full vsw1
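The guest-side virtual NICs are not shown in this deck; as a sketch only (the guest name ldg1 and interface names vnet0/vnet1 are assumptions), a vnet would be attached to each vswitch along these lines --
root@node-01:~# ldm add-vnet pvid=107 vid=103,104 linkprop=phys-state vnet0 primary_vsw0 ldg1
root@node-01:~# ldm add-vnet pvid=107 vid=103,104 linkprop=phys-state vnet1 primary_vsw1 ldg1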

List configured services and bindings:

List configured services and bindings
ldm list-services
ldm list-bindings
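Both subcommands also accept a domain name to narrow the output, for example (using the primary domain) --
root@node-01:~# ldm list-services primary
root@node-01:~# ldm list-bindings primary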

SAN Storage Configuration:

SAN Storage Configuration
MPxIO is enabled for the FC interfaces by running stmsboot -e -D fp and rebooting. With Solaris 11, all MPxIO-aliased controllers are c0, which avoids the complication of different 'ctd' names for the same LUN in a clustered/shared-storage model.
8.  c0t600601601E7A2200DC500196AF4EE111d0 <DGC-RAID 5-0430 cyl 51198 alt 2 hd 256 sec 16>
    /scsi_vhci/ssd@g600601601e7a2200dc500196af4ee111
9.  c0t600601601E7A2200E4FA3D52AF4EE111d0 <DGC-RAID 5-0430 cyl 32766 alt 2 hd 4 sec 16>
    /scsi_vhci/ssd@g600601601e7a2200e4fa3d52af4ee111
10. c0t600601601E7A22005E6DE97DAF4EE111d0 <DGC-RAID 5-0430 cyl 51198 alt 2 hd 256 sec 16>
    /scsi_vhci/ssd@g600601601e7a22005e6de97daf4ee111
11. c0t600601601E7A220040C300ACAF4EE111d0 <DGC-RAID 5-0430 cyl 51198 alt 2 hd 256 sec 16>
    /scsi_vhci/ssd@g600601601e7a220040c300acaf4ee111
12. c0t600601601E7A220046C3F042AF4EE111d0 <DGC-RAID 5-0430 cyl 32766 alt 2 hd 4 sec 16>
    /scsi_vhci/ssd@g600601601e7a220046c3f042af4ee111
13. c0t600601601E7A220094500DBEAF4EE111d0 <DGC-RAID 5-0430 cyl 51198 alt 2 hd 256 sec 16>
    /scsi_vhci/ssd@g600601601e7a220094500dbeaf4ee111
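Before the LUNs are put under the cluster, multipathing can be verified with the standard Solaris 11 tools; a short check along these lines (not from the slides) --
root@node-01:~# stmsboot -L
root@node-01:~# mpathadm list lu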

Clustering the CDs:

Clustering the CDs
root@node-02:~/.ssh# pkg publisher
PUBLISHER     TYPE    STATUS  URI
ha-cluster    origin  online  http://node-01.uscc.com/
solaris       origin  online  http://pkg.oracle.com/solaris/release/
solaris       mirror  online  http://pkg-cdn1.oracle.com/solaris/release/
NOTE: Here we have set up a local repository for the ha-cluster pkgs on node-01 and shared it out on the network.
The cluster software configuration is as follows --
Install the pconsole software --
root@node-02:~/.ssh# pkg install terminal/pconsole
Install the full HA Cluster set --
root@node-02:~/.ssh# pkg install ha-cluster-full
Install the full HA Cluster Data Services set --
root@node-02:~/.ssh# pkg install ha-cluster-data-services-full
Do this on all members of the cluster.
After this step is completed on each node of the cluster, run scinstall from one of the nodes of the cluster.
root@node-01:~# scinstall
Reboot the first node manually. After the reboot, log into the node and run clquorum reset (to take the node out of installation mode), at which point the node will join the cluster.
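The pkg publisher listing above assumes the local ha-cluster repository has already been added on each node; on a fresh node it would be configured roughly like this (the URI is the site-specific one shown above), and cluster membership can be checked after scinstall and the reboot --
root@node-02:~# pkg set-publisher -g http://node-01.uscc.com/ ha-cluster
root@node-01:~# clnode status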

HA LDOM Creation:

HA LDOM Creation
Configure the LDOM to reset on control domain failures.
NOTE: Run the next command on all nodes that will control the guest LDOM.
root@node-02:~# ldm set-domain failure-policy=reset primary
Set the domain master for the guest domain --
# ldm set-domain master=primary ldg1
# ldm set-var auto-boot?=false ldg1
Add a shared virtual disk server (vds) service --
root@node-02:~# ldm add-vds primary_shared_vds1 primary
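As a sketch of how to confirm these settings took effect (assuming the domain output subset of ldm list includes failure-policy and master on this release) --
root@node-02:~# ldm list -o domain primary
root@node-02:~# ldm list -o domain ldg1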

Create an LDOM:

Create an LDOM
root@node-02:~# ldm add-domain ha_ldom1
root@node-02:~# ldm list
NAME      STATE     FLAGS   CONS  VCPU  MEMORY  UTIL  UPTIME
primary   active    -n-cv-  UART  8     16G     2.4%  1h 30m
ldg1      active    -n----  5000  16    16G     0.4%  10d 23h 47m
ha_ldom1  inactive  ------
Add vcpus --
root@node-02:~# ldm add-vcpu 8 ha_ldom1
Add memory --
root@node-02:~# ldm add-memory 16g ha_ldom1
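A quick check that the CPU and memory were allocated as intended (my own verification step, not in the slides) --
root@node-02:~# ldm list -o cpu,memory ha_ldom1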

Create LDOM (continued):

Create LDOM (continued)
Add a shared LUN to the VDS --
root@node-02:~# ldm add-vdsdev /dev/did/dsk/d9s2 d9@primary_shared_vds1
Add a vdisk to the new LDOM --
root@node-02:~# ldm add-vdisk vdisk1 d9@primary_shared_vds1 ha_ldom1
Set primary as domain-master --
root@node-02:~# ldm set-domain master=primary ha_ldom1
Disable auto-boot --
root@node-02:~# ldm set-var auto-boot?=false ha_ldom1
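Before the domain is handed over to the cluster agent, it is common to bind it and boot it once by hand to prove the configuration; a minimal sketch, where 5001 stands in for whatever console port ldm list reports for ha_ldom1 --
root@node-02:~# ldm bind ha_ldom1
root@node-02:~# ldm start ha_ldom1
root@node-02:~# ldm list ha_ldom1
root@node-02:~# telnet localhost 5001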

Create Global FS for LDOM migration password files:

Create Global FS for LDOM migration password files
Create a Global (Clustered/PxFS) configuration filesystem --
format /dev/did/rdsk/d14
partition> p
Current partition table (unnamed):
Total disk cylinders available: 32766 + 2 (reserved cylinders)
Part      Tag    Flag     Cylinders        Size            Blocks
  0 unassigned    wm       0               0         (0/0/0)           0
  1 unassigned    wu       0               0         (0/0/0)           0
  2     backup    wu       0 - 32765    1023.94MB    (32766/0/0) 2097024
  3 unassigned    wm       0               0         (0/0/0)           0
  4 unassigned    wm       0               0         (0/0/0)           0
  5 unassigned    wm       0               0         (0/0/0)           0
  6 unassigned    wm       0 - 32764    1023.91MB    (32765/0/0) 2096960
  7 unassigned    wm       0               0         (0/0/0)           0
partition> l
Ready to label disk, continue? y
Make the FS --
root@node-02:/var/tmp/safe# newfs /dev/did/rdsk/d14s6
Make entries corresponding to this filesystem in /etc/vfstab on every node in the cluster --
/dev/global/dsk/d14s6 /dev/global/rdsk/d14s6 /global/config ufs 2 no logging
Create the RG --
root@node-02:/var/tmp/safe# clrg create global_config_rg
Create the HAStoragePlus resource --
root@node-02:/# clrs create -g global_config_rg -t HAStoragePlus -p FilesystemMountPoints=/global/config -p AffinityOn=TRUE global_config_hasp_rs
Verify that the global FS is mounted on all nodes of the cluster.
Test failover of the RG across the nodes (while ensuring that the global mountpoints remain available).
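One way those last two verification steps might be run, sketched with the node names used elsewhere in this deck --
root@node-02:~# clrg online -eM global_config_rg
root@node-02:~# clrs status global_config_hasp_rs
root@node-02:~# df -h /global/config
root@node-02:~# clrg switch -n node-01 global_config_rg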

Configure HA-LDOM Cluster Resource:

Configure HA-LDOM Cluster Resource
Add the SUNW.ldom data service to the cluster --
clresourcetype register SUNW.ldom
Stop the LDOM manually --
ldm stop ha_ldom1
Create the cluster RG and RS --
clrg create test_ldom_rg
clrs create -g test_ldom_rg -t SUNW.ldom -p password_file=/global/config/passwd -p Domain_name=ha_ldom1 test_ldom_ldm_rs
Set the cluster resource migration mode --
clrs set -p Migration_type=MIGRATE test_ldom_ldm_rs
NOTE: The two modes available are "NORMAL" and "MIGRATE". MIGRATE instructs the HA-LDOM agent to attempt a warm or live migration, while NORMAL instructs the agent to do a failover. If a warm/live migration fails, the agent falls back to a failover migration, irrespective of the mode that is set.
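To finish, the resource group is brought online and a switchover is attempted; with Migration_type=MIGRATE the agent should try to live-migrate ha_ldom1 between the control domains, using the password file on /global/config to authenticate against the target node. A sketch, in the same bare-command style as this slide --
clrg online -eM test_ldom_rg
clrs status test_ldom_ldm_rs
clrg switch -n node-01 test_ldom_rg
ldm list ha_ldom1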
