Monday, August 2, 2010

Solaris 10 Zones Configured with VCS

Configuration Steps


To build a standard VCS cluster to support zones, the following must be done:

a. Build the servers (including VxVM/VCS)

b. Configure/enable the FSS scheduler

c. Configure/enable processor pools and the default pool

d. Set up Veritas volumes

e. Create the zone

f. Create projects / assign users within the zone

g. Set up VCS service groups

h. Enable application agents to monitor resources within the zone



1. Build Servers (Including VxVM/VCS)

When building a VCS cluster to support zones, the latest Solaris 10 x86 build should be used. VCS supports zones starting with the 4.1 MP1 release.



2. Enable the FSS Scheduler

Once the servers have been built, enable FSS (the Fair Share Scheduler) by executing the following command:



# dispadmin -d FSS



Move All existing processes into FSS scheduler:



# priocntl -s -c FSS -i all
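The two commands above can be wrapped in one small script. This is a sketch only: it prints each command rather than executing it (set DRY_RUN=0 on a real Solaris 10 host), since dispadmin and priocntl exist only on Solaris.

```shell
#!/bin/sh
# Sketch: enable FSS as the default scheduling class and migrate all
# running processes into it. Prints the commands by default; set
# DRY_RUN=0 to execute them (Solaris 10 only).
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "$@"              # show what would run
    else
        "$@"                   # execute for real
    fi
}

run dispadmin -d FSS           # make FSS the default class (persists across boots)
run priocntl -s -c FSS -i all  # move all existing processes into FSS now
```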





3. Enable Processor Pools and the Default Pool

Create the default system pool. All CPUs are assigned to this pool unless an application has specifically requested a dedicated set of CPUs that it does not want to share with other applications.



Execute the following commands to enable pools:



# pooladm -e

# poolcfg -c discover

# pooladm -c



Set the default pool's scheduler to the FSS class:



# poolcfg -c 'modify pool pool_default (string pool.scheduler="FSS")'

# pooladm -c
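As with FSS, the pool commands above can be collected into a single dry-run script. A sketch (prints only, since pooladm/poolcfg are Solaris-specific; note the displayed poolcfg line loses its original shell quoting):

```shell
#!/bin/sh
# Sketch: the processor-pool setup sequence as one script. Prints each
# command; set DRY_RUN=0 on a real Solaris 10 host to execute.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" -eq 1 ]; then echo "$@"; else "$@"; fi
}

run pooladm -e                   # enable the pools facility
run poolcfg -c discover          # build /etc/pooladm.conf from current hardware
run pooladm -c                   # activate the configuration
run poolcfg -c 'modify pool pool_default (string pool.scheduler="FSS")'
run pooladm -c                   # re-activate so the default pool uses FSS
```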



4. Set Up Veritas Volumes

Before setting up zones, first create the necessary volumes and filesystems:

Initialize Devices

# /etc/vx/bin/vxdisksetup -i <device>



Initialize DiskGroups

# vxdg init DG#_<name> <device>



Initialize Volumes

# vxassist -g <dg> make <volume> <size>



Create Filesystems

# mkfs -F vxfs -o largefiles /dev/vx/rdsk/<dg>/<volume>



The recommended filesystem size for the zone root is 16 GB:

# vxassist -g <dg> make zone_os 16g
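For a zone with several data volumes, the vxassist/mkfs pairs can be generated in a loop. A sketch: the disk group name DG1 and the 10g data-volume size are illustrative assumptions, and it only prints the commands rather than running them.

```shell
#!/bin/sh
# Sketch: print the vxassist/mkfs command pairs for a zone's volumes.
# The DG name (DG1) and the 10g data size are assumptions for illustration.
dg=DG1

make_vol() {    # make_vol <volume> <size>
    echo "vxassist -g $dg make $1 $2"
    echo "mkfs -F vxfs -o largefiles /dev/vx/rdsk/$dg/$1"
}

make_vol zone_os 16g                       # zone root: recommended 16 GB
for v in ora01 ora02 ora03 oraarch oratemp; do
    make_vol "$v" 10g
done
```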



5. Create the Zone

Create the zone using the create_zone script from the jumpstart server (/jumpstart_10x86v0/stdbuild/scripts/create_zone):



# mount <jumpstart-server>:/jumpstart_10x86v0/stdbuild /mnt

# cd /mnt/scripts

# ./create_zone



A zone name must be specified. Usage:



./create_zone -z <zonename> [ -l <localisation> ] [ -c <cpu-shares> ] [ -m <memory> ]

[ -s <shares> ] -p <zonepath> [ -i <ip-address> ]

[ -e <interface> ] -n

examples:

./create_zone -z pe-as1-d -l dselab -n

./create_zone -z pe-as1-d -l dselab -c 2 -n

./create_zone -z pe-as1-d -l dselab -c 2 -m 2G -n

./create_zone -z pe-as1-d -l dselab -m 2G -s 200 -p /zone_os -n

-v [for verbose debug output]



Unless -n is used, the script only displays the commands it would run.



The script also creates the project for the zone on the server where it was run, in /etc/project:

dseds1d:100:dse-ds1-d:::project.cpu-shares=(priv,100,none);project.pool=pool_default

dseas1d:101:dse-as1-d:::project.cpu-shares=(priv,100,none);project.pool=pool_default



Example:

# ./create_zone -z dse-ds1-d -l dselab -s 200 -p /dse-ds1/zone_os -I 30.6.25.20 -e ce0 -n



After the create script completes, the rest of the standard build is applied during the first zone boot. Once the zone has been completely built, additional filesystems can be added (either by lofs or by direct mount under the /zone_os/<zonename>/root/ directory).



Example:



# zonecfg -z dse-ds1-d

zonecfg:dse-ds1-d> add fs

zonecfg:dse-ds1-d:fs> set dir=/dse-ds1/ora01

zonecfg:dse-ds1-d:fs> set special=/dse-ds1/ora01

zonecfg:dse-ds1-d:fs> set type=lofs

zonecfg:dse-ds1-d:fs> end

zonecfg:dse-ds1-d> add fs

zonecfg:dse-ds1-d:fs> set dir=/dse-ds1/ora02

zonecfg:dse-ds1-d:fs> set special=/dse-ds1/ora02

zonecfg:dse-ds1-d:fs> set type=lofs

zonecfg:dse-ds1-d:fs> end

zonecfg:dse-ds1-d> add fs

zonecfg:dse-ds1-d:fs> set dir=/dse-ds1/ora03

zonecfg:dse-ds1-d:fs> set special=/dse-ds1/ora03

zonecfg:dse-ds1-d:fs> set type=lofs

zonecfg:dse-ds1-d:fs> end

zonecfg:dse-ds1-d> add fs

zonecfg:dse-ds1-d:fs> set dir=/dse-ds1/oraarch

zonecfg:dse-ds1-d:fs> set special=/dse-ds1/oraarch

zonecfg:dse-ds1-d:fs> set type=lofs

zonecfg:dse-ds1-d:fs> end

zonecfg:dse-ds1-d> add fs

zonecfg:dse-ds1-d:fs> set dir=/dse-ds1/oratemp

zonecfg:dse-ds1-d:fs> set special=/dse-ds1/oratemp

zonecfg:dse-ds1-d:fs> set type=lofs

zonecfg:dse-ds1-d:fs> end

zonecfg:dse-ds1-d> commit
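The five identical add fs stanzas above can be generated from a list of mount points instead of being typed interactively. A sketch: it prints zonecfg subcommands, which could be saved to a file and applied with zonecfg -z dse-ds1-d -f <file>.

```shell
#!/bin/sh
# Sketch: emit one lofs 'add fs' stanza per mount point, ending with
# 'commit'. Save the output to a file and feed it to zonecfg -f.
gen_fs() {      # gen_fs <dir> [<dir> ...]
    for d in "$@"; do
        printf 'add fs\nset dir=%s\nset special=%s\nset type=lofs\nend\n' "$d" "$d"
    done
    echo commit
}

gen_fs /dse-ds1/ora01 /dse-ds1/ora02 /dse-ds1/ora03 \
       /dse-ds1/oraarch /dse-ds1/oratemp
```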



Modify zone.cpu-shares (this is updated automatically if -s is included with the create_zone script):

zonecfg:dse-ds1-d> add rctl

zonecfg:dse-ds1-d:rctl> set name=zone.cpu-shares

zonecfg:dse-ds1-d:rctl> add value (priv=privileged,limit=100,action=none)

zonecfg:dse-ds1-d:rctl> end

zonecfg:dse-ds1-d> commit
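The same rctl change can be applied non-interactively: put the subcommands in a file and run zonecfg -z dse-ds1-d -f <file>. A minimal sketch of such a command file:

```
add rctl
set name=zone.cpu-shares
add value (priv=privileged,limit=100,action=none)
end
commit
```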

After rebooting the zone, the partitions should be mounted:

# df -k

Filesystem kbytes used avail capacity Mounted on

/ 16777216 3938350 12036933 25% /

/dev 16777216 3938350 12036933 25% /dev

/orape-ds1/ora01 10485760 19652 9811983 1% /orape-ds1/ora01

/orape-ds1/ora02 10485760 19651 9811985 1% /orape-ds1/ora02

/orape-ds1/ora03 10485760 19651 9811985 1% /orape-ds1/ora03

/orape-ds1/oraarch 10485760 19651 9811985 1% /orape-ds1/oraarch

/orape-ds1/oratemp 10485760 19651 9811985 1% /orape-ds1/oratemp

proc 0 0 0 0% /proc

ctfs 0 0 0 0% /system/contract

swap 5235304 268 5235036 1% /etc/svc/volatile

mnttab 0 0 0 0% /etc/mnttab

/usr/lib/libc/libc_hwcap2.so.1

16777216 3938350 12036933 25% /lib/libc.so.1

fd 0 0 0 0% /dev/fd

swap 1048576 0 1048576 0% /tmp

swap 20480 20 20460 1% /var/run





If dedicating CPUs, you can assign a CPU pool to the zone:

bash-3.00# zonecfg -z dse-ds1-d

zonecfg:dse-ds1-d> set pool=dse-ds1-d



Otherwise, add it to the default pool, which should be using FSS (the Fair Share Scheduler):



zonecfg:dse-ds1-d> set pool=pool_default

zonecfg:dse-ds1-d> commit



All projects should be created on every cluster node and in the zone, so that all shares are listed properly.



6. Create Projects / Assign Users within the Zone

Log in to the zone and set up the projects, users, and shares.

projadd -c <comment> <projectname> (the project name cannot contain '-')

# projadd -c dse-ds1-d dseds1d



Assign CPU shares to the project/zone:

# projmod -sK "project.pool=pool_default" dseds1d

# projmod -sK "project.cpu-shares=(priv,100,none)" dseds1d

Assign users to the project:

# projmod -U oracle,orape01 dseds1d





7. Set Up VCS

Configure VCS per the usual standards and best practices. After creating the service groups, and before creating the zone resource, first shut down the zone.



VCS Example:



Create Cluster:

haclus -modify Administrators root

haclus -modify ClusterName dse-cluster2



Create MultiNIC Group/Resource:

hagrp -add acb-mnic

hagrp -modify acb-mnic SystemList dse-clust1-da 0 dse-clust1-db 1

hagrp -modify acb-mnic AutoStart 0



hares -add MNIC_acb-mnic_bge0_bge1 MultiNICB acb-mnic

hares -modify MNIC_acb-mnic_bge0_bge1 Device bge0 0 bge1 1

hares -modify MNIC_acb-mnic_bge0_bge1 NetMask "255.255.255.0"

hares -modify MNIC_acb-mnic_bge0_bge1 IgnoreLinkStatus 0





Create ServiceGroup:

hagrp -add dse-ds1-d

hagrp -modify dse-ds1-d SystemList dse-clust1-da 0 dse-clust1-db 1



Create DiskGroup

hares -add DG1_dse-ds1-d DiskGroup dse-ds1-d

hares -modify DG1_dse-ds1-d DiskGroup DG1_dse-ds1-d



Create Proxy Resource:

hares -add PXY_dse-ds1-d_bge0_bge1 Proxy dse-ds1-d

hares -modify PXY_dse-ds1-d_bge0_bge1 TargetResName MNIC_acb-mnic_bge0_bge1



Create Volume Resources

hares -add VOL_DG1_dse-ds1-d_admin Volume dse-ds1-d

hares -modify VOL_DG1_dse-ds1-d_admin Volume admin

hares -modify VOL_DG1_dse-ds1-d_admin DiskGroup DG1_dse-ds1-d



hares -add VOL_DG1_dse-ds1-d_ora01 Volume dse-ds1-d

hares -modify VOL_DG1_dse-ds1-d_ora01 Volume ora01

hares -modify VOL_DG1_dse-ds1-d_ora01 DiskGroup DG1_dse-ds1-d



hares -add VOL_DG1_dse-ds1-d_ora02 Volume dse-ds1-d

hares -modify VOL_DG1_dse-ds1-d_ora02 Volume ora02

hares -modify VOL_DG1_dse-ds1-d_ora02 DiskGroup DG1_dse-ds1-d



hares -add VOL_DG1_dse-ds1-d_ora03 Volume dse-ds1-d

hares -modify VOL_DG1_dse-ds1-d_ora03 Enabled 1

hares -modify VOL_DG1_dse-ds1-d_ora03 Volume ora03

hares -modify VOL_DG1_dse-ds1-d_ora03 DiskGroup DG1_dse-ds1-d



hares -add VOL_DG1_dse-ds1-d_oraarch Volume dse-ds1-d

hares -modify VOL_DG1_dse-ds1-d_oraarch Volume oraarch

hares -modify VOL_DG1_dse-ds1-d_oraarch DiskGroup DG1_dse-ds1-d



hares -add VOL_DG1_dse-ds1-d_oratemp Volume dse-ds1-d

hares -modify VOL_DG1_dse-ds1-d_oratemp Volume oratemp

hares -modify VOL_DG1_dse-ds1-d_oratemp DiskGroup DG1_dse-ds1-d



Create Mount Resources:

hares -add MNT_dse-ds1-d_admin Mount dse-ds1-d

hares -modify MNT_dse-ds1-d_admin MountPoint "/dse-ds1"

hares -modify MNT_dse-ds1-d_admin BlockDevice "/dev/vx/dsk/DG1_dse-ds1-d/admin"

hares -modify MNT_dse-ds1-d_admin FSType vxfs

hares -modify MNT_dse-ds1-d_admin FsckOpt "%-y"



hares -add MNT_dse-ds1-d_ora01 Mount dse-ds1-d

hares -modify MNT_dse-ds1-d_ora01 MountPoint "/dse-ds1/ora01"

hares -modify MNT_dse-ds1-d_ora01 BlockDevice "/dev/vx/dsk/DG1_dse-ds1-d/ora01"

hares -modify MNT_dse-ds1-d_ora01 FSType vxfs

hares -modify MNT_dse-ds1-d_ora01 FsckOpt "%-y"



hares -add MNT_dse-ds1-d_ora02 Mount dse-ds1-d

hares -modify MNT_dse-ds1-d_ora02 MountPoint "/dse-ds1/ora02"

hares -modify MNT_dse-ds1-d_ora02 BlockDevice "/dev/vx/dsk/DG1_dse-ds1-d/ora02"

hares -modify MNT_dse-ds1-d_ora02 FSType vxfs

hares -modify MNT_dse-ds1-d_ora02 FsckOpt "%-y"



hares -add MNT_dse-ds1-d_ora03 Mount dse-ds1-d

hares -modify MNT_dse-ds1-d_ora03 MountPoint "/dse-ds1/ora03"

hares -modify MNT_dse-ds1-d_ora03 BlockDevice "/dev/vx/dsk/DG1_dse-ds1-d/ora03"

hares -modify MNT_dse-ds1-d_ora03 FSType vxfs

hares -modify MNT_dse-ds1-d_ora03 FsckOpt "%-y"

hares -add MNT_dse-ds1-d_zone Mount dse-ds1-d

hares -modify MNT_dse-ds1-d_zone MountPoint "/dse-ds1/zone_os"

hares -modify MNT_dse-ds1-d_zone BlockDevice "/dev/vx/dsk/DG1_dse-ds1-d/zone"

hares -modify MNT_dse-ds1-d_zone FSType vxfs

hares -modify MNT_dse-ds1-d_zone FsckOpt "%-y"
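Since each Mount resource differs only in its name, mount point, and volume, the blocks above can be generated with a small helper instead of being typed out. A sketch that only prints the hares commands (group and disk group names are taken from this example):

```shell
#!/bin/sh
# Sketch: print the hares commands for one Mount resource per call.
# Group and disk group names match the example in the text.
grp=dse-ds1-d
dg=DG1_dse-ds1-d

gen_mount() {   # gen_mount <suffix> <mountpoint> <volume>
    r="MNT_${grp}_$1"
    echo "hares -add $r Mount $grp"
    echo "hares -modify $r MountPoint \"$2\""
    echo "hares -modify $r BlockDevice \"/dev/vx/dsk/$dg/$3\""
    echo "hares -modify $r FSType vxfs"
    echo "hares -modify $r FsckOpt \"%-y\""
}

gen_mount admin /dse-ds1         admin
gen_mount ora01 /dse-ds1/ora01   ora01
gen_mount ora02 /dse-ds1/ora02   ora02
gen_mount zone  /dse-ds1/zone_os zone
```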



Create ZONE Resource:

hares -add Zone_dse-ds1-d Zone dse-ds1-d

hares -modify Zone_dse-ds1-d ZoneName dse-ds1-d



Create Links:

hares -link DG1_dse-ds1-d PXY_dse-ds1-d_bge0_bge1

hares -link MNT_dse-ds1-d_admin DG1_dse-ds1-d

hares -link MNT_dse-ds1-d_admin VOL_DG1_dse-ds1-d_admin

hares -link MNT_dse-ds1-d_ora01 MNT_dse-ds1-d_admin

hares -link MNT_dse-ds1-d_ora01 VOL_DG1_dse-ds1-d_ora01

hares -link MNT_dse-ds1-d_ora02 MNT_dse-ds1-d_admin

hares -link MNT_dse-ds1-d_ora02 VOL_DG1_dse-ds1-d_ora02

hares -link MNT_dse-ds1-d_ora03 MNT_dse-ds1-d_admin

hares -link MNT_dse-ds1-d_ora03 VOL_DG1_dse-ds1-d_ora03

hares -link MNT_dse-ds1-d_zone MNT_dse-ds1-d_admin

hares -link MNT_dse-ds1-d_zone VOL_DG1_dse-ds1-d_zone

hares -link VOL_DG1_dse-ds1-d_ora01 DG1_dse-ds1-d

hares -link VOL_DG1_dse-ds1-d_ora02 DG1_dse-ds1-d

hares -link VOL_DG1_dse-ds1-d_ora03 DG1_dse-ds1-d

hares -link VOL_DG1_dse-ds1-d_oraarch DG1_dse-ds1-d

hares -link VOL_DG1_dse-ds1-d_oratemp DG1_dse-ds1-d

hares -link VOL_DG1_dse-ds1-d_admin DG1_dse-ds1-d

hares -link VOL_DG1_dse-ds1-d_zone DG1_dse-ds1-d

hares -link Zone_dse-ds1-d MNT_dse-ds1-d_zone

hares -link Zone_dse-ds1-d MNT_dse-ds1-d_ora01

hares -link Zone_dse-ds1-d MNT_dse-ds1-d_ora02

hares -link Zone_dse-ds1-d MNT_dse-ds1-d_ora03



Add a new cluster user, with group administrator privileges for the group containing the zone.



# hauser -add dse-adm -priv Administrator -group dse-ds1-d

Enter New Password:

Enter Again:



Ensure the local zone can resolve the host name of the global zone, either through DNS or through the /etc/hosts file.






Log in to the local zone (zlogin <zonename>):

# zlogin -C -e T dse-ds1-d

[Connected to zone 'dse-ds1-d' console]



dse-ds1-d console login: root

Password:

Sep 25 07:24:15 dse-ds1-d login[23334]: ROOT LOGIN /dev/console

Last login: Mon Sep 25 07:21:24 on console

Machine: dse-ds1-d, SunOS 5.10, built 21 Sep 2006



Set the environment variable VCS_HOST to the host name of the global zone.



# export VCS_HOST=dse-clust1-da





Issue the command /opt/VRTSvcs/bin/halogin <user> <password>:



# /opt/VRTSvcs/bin/halogin dse-adm dse-adm



(Fail over and repeat the same steps on the other cluster nodes.)





8. Enable Application Agents in VCS to Monitor Resources within a Zone

Application agents that monitor resources within a zone need the ContainerType and ContainerName attributes defined.



To update the existing Oracle/Sqlnet resource types to support zones:

# haconf -makerw

# haattr -add -static Oracle ContainerType -string -scalar Zone

# haattr -add Oracle ContainerName

# hatype -modify Oracle ArgList -add ContainerName

# haattr -add -static Sqlnet ContainerType -string -scalar Zone

# haattr -add Sqlnet ContainerName

# hatype -modify Sqlnet ArgList -add ContainerName



Example Oracle/Sqlnet definitions:

# hares -add ORA_dse-ds1-d_PE10GSOL Oracle dse-ds1-d

# hares -modify ORA_dse-ds1-d_PE10GSOL Critical 0

# hares -modify ORA_dse-ds1-d_PE10GSOL SID PE10GSOL

# hares -modify ORA_dse-ds1-d_PE10GSOL ContainerName dse-ds1-d



# hares -add SQL_dse-ds1-d_LISTENER_PE10GSOL Sqlnet dse-ds1-d

# hares -modify SQL_dse-ds1-d_LISTENER_PE10GSOL Critical 0

# hares -modify SQL_dse-ds1-d_LISTENER_PE10GSOL LISTENER LISTENER_PE10GSOL

# hares -modify SQL_dse-ds1-d_LISTENER_PE10GSOL ContainerName dse-ds1-d



Create Application Links to Zone:

# hares -link ORA_dse-ds1-d_PE10GSOL Zone_dse-ds1-d

# hares -link SQL_dse-ds1-d_LISTENER_PE10GSOL Zone_dse-ds1-d





9. Sybase Raw Devices within a Zone Running on VxVM

Currently, there is an issue with zone visibility of VxVM volumes from the global zone. The following is a workaround until it is addressed:



1. Use the ls -l command to find the major and minor numbers of the raw volume.



In this example, the volume u1 is in the DG1_dse-ds1-d disk group within the global zone. The raw device in the global zone that corresponds to u1 is /dev/vx/rdsk/DG1_dse-ds1-d/u1.



Running the ls -l command on this device shows that the major number is 289 and the minor number is 45000:



# ls -l /dev/vx/rdsk/DG1_dse-ds1-d

crw------- 1 root root 289, 45000 Aug 24 11:13 u1



2. Use the mknod command to create an entry for the VxVM volume in the non-global zone. In this example, the major number is 289 and the minor number is 45000:



# cd /dse-ds1/zone_os/dev

# mknod u1 c 289 45000
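To avoid reading the numbers off by eye, the major/minor pair can be extracted from the ls -l output with awk. A sketch; the sample line mirrors the output shown above:

```shell
#!/bin/sh
# Sketch: extract "major minor" from 'ls -l' output on a raw device.
# Field 5 is the major number with a trailing comma; field 6 the minor.
major_minor() {
    awk '{ sub(/,$/, "", $5); print $5, $6 }'
}

sample='crw-------   1 root  root  289, 45000 Aug 24 11:13 u1'
echo "$sample" | major_minor      # prints: 289 45000
# On a real host:  ls -l /dev/vx/rdsk/DG1_dse-ds1-d/u1 | major_minor
```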



3. Log in to the non-global zone and check the device entry for the volume:



# zlogin -l root dse-ds1-d



[Connected to zone 'dse-ds1-d' pts/6]

Last login: Thu Aug 24 14:31:41 on pts/5

Sun Microsystems Inc. SunOS 5.10 Generic January 2005



# ls -l /dev/u1

crw-r--r-- 1 root root 289, 45000 Aug 24 16:10 /dev/u1



4. Verify that you can perform I/O with the device entry in the non-global zone:



# dd if=/dev/u1 of=/dev/null

2048+0 records in

2048+0 records out
