Thursday, September 16, 2010

veritas volume manager Question

VxVM Disk




Q-1 How to add a disk to a disk group in Veritas Volume Manager?

Ans: To add the physical disk c0t0d0 to the disk group homedg, calling it disk90, in Veritas Volume Manager:

# vxdg -g homedg adddisk disk90=c0t0d0



Q-2 How to remove a disk from a disk group in Veritas Volume Manager?

Ans: To remove a disk, disk90, from a disk group, homedg, in Veritas Volume Manager:

# vxdg -g homedg rmdisk disk90



Q-3 How to clear an import in Veritas Volume Manager after a crash?

Ans: # vxdisk clearimport c0t0d0s0



Q-4 How to list all disks and display their status in VxVM?

Ans: # vxdisk list



Q-5 How to remove a grayed out or obsolete disk, or remove a disk from a disk group in Veritas Volume Manager?

Ans: # vxdisk rm disk01



Q-6 How to remove a disk so that it is no longer under Veritas Volume Manager control?

Ans: # vxdisk rm c0t0d0



Q-7 How to add or bring a disk under Veritas Volume Manager control?

Ans: To add or bring a disk under Veritas Volume Manager control:

# vxdiskadd c0t0d1

or

# vxdisksetup -i c0t0d1



Note: It might help to newfs the s2 slice of the disk and perform a vxdctl enable to get VxVM to add the disk.
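A minimal sketch of that workaround, assuming the stubborn disk is c0t0d1 (adjust the device name to suit):

# newfs /dev/rdsk/c0t0d1s2

# vxdctl enable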



Q-8 How to remove a volume and any associated subdisks and plexes in VxVM?

Ans: # vxedit -rf rm volume_name





Q-9 How to rename the disk disk90 to be called disk80 in the group rootdg in Veritas Volume Manager?

Ans: # vxedit -g rootdg rename disk90 disk80



Q-10 How to set disk90 in the group homedg to be a hot spare in Veritas Volume Manager?

Ans: # vxedit -g homedg set spare=on disk90



Q-11 How to set the comment field of subdisk subdisk01-01 to "comments are here"?

Ans: # vxedit set comment="comments are here" subdisk01-01





Q-12 How to set the user to ep, the group to epgrp, and the mode to rw-rw-rw on the volume vg01?

Ans: # vxedit set user=ep group=epgrp mode=0666 vg01



Disk Group



Q-1 How to display the default disk group?

Ans: # vxdg defaultdg



Q-2 How to set the default disk group?

Ans: # vxdctl defaultdg <diskgroup>



Q-3 How to display disk group information?

Ans: # vxdg list

or, for a particular disk group:

# vxdg list <diskgroup>



Q-4 How to display free disk space in disk group?

Ans: # vxdg free

or, for a particular disk group:

# vxdg -g <diskgroup> free



Q-5 How to create a disk group?

Ans: # vxdg init <diskgroup> <disk_name>=<device> cds=on|off



Q-6 How to create a disk group with a specified disk group version no.?

Ans: # vxdg -T <version> init <diskgroup> <disk_name>=<device>



Q-7 How to convert a non-CDS disk group to a CDS disk group, or vice versa?

Ans: # vxdg -g <diskgroup> set cds=on|off



Q-8 How to import a disk group?

Ans: # vxdg import <diskgroup>



Q-9 How to import a destroyed disk group?

Ans: First you must know the DG ID of the destroyed disk group. You can get the DG ID by displaying one of the disks that belonged to the destroyed DG.

# vxdg import <dgid>
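For example, assuming the destroyed group lived on disk c1t1d0s2, display the disk to find the dgid and import by that ID (the dgid shown here is hypothetical):

# vxdisk -s list c1t1d0s2

# vxdg import 1157043451.15.myhost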



Q-10 How to disable/deport a disk group?

Ans: # vxdg deport <diskgroup>



Q-11 How to rename a disk group during import operation?

Ans: # vxdg -t -n <newname> import <diskgroup>



Q-12 How to rename a disk group during deport operation?

Ans: # vxdg -n <newname> deport <diskgroup>



Q-13 How to clear locks on disk group during import?

Ans: # vxdg -C import <diskgroup>



Q-14 How to forcibly import a disk group?

Ans: # vxdg -f import <diskgroup>



Q-15 How to move disk group objects from one DG to another?

Ans: # vxdg -o expand move <sourcedg> <targetdg> <object>

# vxdg -o expand move datadg newdatadg disk01

It will move all the objects associated with disk01 from datadg to newdatadg.



Q-16 How to split a disk group to form a new disk group?

Ans: # vxdg -o expand split <sourcedg> <targetdg> disk01 disk02

It will create a new DG with two specified disks.



Q-17 How to join two DGs into one?

Ans: # vxdg join <sourcedg> <targetdg>



Q-18 How to destroy a disk group?

Ans: # vxdg destroy <diskgroup>



Q-19 How to upgrade a Disk group?

Ans: # vxdg upgrade <diskgroup>

It will upgrade the DG to the highest DG version supported by VxVM.

Or

# vxdg -T <version> upgrade <diskgroup>

To upgrade to a specified version number.



Mixed Questions:



Q-1 How to get volume information in Veritas Volume Manager?

Ans: # vxinfo



Q-2 How to set the number of kernel threads in Veritas Volume Manager?

Ans: # vxiod set 10



Note: This is the daemon that allows for extended I/O calls without blocking calling processes. As this is a kernel thread you cannot see it with the ps command, so you have to use the vxiod command to see if it is running.
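Running vxiod with no arguments reports the current count; the output looks roughly like this:

# vxiod

10 volume I/O daemons running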



Q-3 How to create a plex from a subdisk in Veritas Volume Manager?

Ans: # vxmake plex <plexname> sd=<subdisk>



Q-4 How to make a subdisk from a disk in Veritas Volume Manager?

Ans: To make a subdisk called subdisk-80 at the beginning of the disk disk80 of size 10000 blocks:

# vxmake sd subdisk-80 disk80,0,10000



If you wanted to put another subdisk on this disk then you would have an offset of the size of the previous subdisk (10000 in our case):

# vxmake sd subdisk-81 disk80,10000,20000



Q-5 How to set a plex offline in Veritas Volume Manager?

Ans: # vxmend off <plexname>





Q-6 How to set a plex online in Veritas Volume Manager?

Ans: # vxmend on <plexname>



Q-7 How to set a plex to a clean state in Veritas Volume Manager?

Ans: # vxmend fix clean plex-name



Q-8 How to mirror all the volumes on the disk rootdisk to disk90 in Veritas Volume Manager?

Ans: # vxmirror rootdisk disk90



Q-9 How to rebuild the partition table after a root disk failure in Veritas Volume Manager?

Ans: To rebuild the partition table after recovering from a root disk failure after re-mirroring the disk in Veritas Volume Manager:

# vxmksdpart -g rootdg diskpart 1 0x03 0x01



Q-10 How to attach a plex to a volume in Veritas Volume Manager?

Ans: # vxplex att <volume> <plexname>



Q-11 How to display all the available information in Veritas Volume Manager?

Ans: # vxprint -ht



Q-12 How to display all the information about plexes in Veritas Volume Manager?

Ans: # vxprint -l <plexname>

OR

# vxprint -lp



Q-13 How to display all the information about subdisks in Veritas Volume Manager?

Ans: # vxprint -l <subdisk>

OR

# vxprint -st



Q-14 How to display all the information about volumes in Veritas Volume Manager?

Ans: # vxprint -l volumename

OR

# vxprint -vl

OR

# vxprint -vt



Q-15 How to list all the volumes on a boot disk in Veritas Volume Manager?

Ans: # vxprint -t -v -e 'aslist.aslist.sd_disk="boot-disk-name"'



Q-16 How to bring a volume back online in background mode in Veritas Volume Manager?

Ans: # vxrecover -b volume



Q-17 How to recover a volume in Veritas Volume Manager?

Ans: # vxrecover -s <volume>



Q-18 How to recover all volumes in Veritas Volume Manager?

Ans: To start recovery of all volumes in Veritas Volume Manager:

# vxrecover -s



Q-19 How to add a log disk to a volume in Veritas Volume Manager?

Ans: # vxsd aslog <plex> <log_subdisk>



Q-20 How to join subdisks in Veritas Volume Manager?

Ans: To join subdisk-88 and subdisk-77 to create the new bigger subdisk-99:

# vxsd join subdisk-88 subdisk-77 subdisk-99



Q-21 How to move the contents of a subdisk to another in Veritas Volume Manager?

Ans: To move the contents of subdisk-90 to subdisk-80 in Veritas Volume Manager:

# vxsd mv subdisk-90 subdisk-80



Q-22 How to report disk statistics in Veritas Volume Manager?

Ans: # vxstat -d



Q-23 How to trace all the I/O on the selected volume in Veritas Volume Manager?

Ans: # vxtrace <volume>



Q-24 How to start the Veritas Volume Manager GUI?

Ans: # vxva



Q-25 How to put a volume in maintenance mode in Veritas Volume Manager?

Ans: # vxvol maint <volume>



Q-26 How to stop a volume in a disk group in Veritas Volume Manager?

Ans: # vxvol -g <diskgroup> stop <volume>



Q-27 How to check which tasks are running in Veritas Volume Manager?

Ans: # vxtask list

Or

# vxtask monitor



Q-28 How to change the naming scheme?

Ans: # vxddladm set namingscheme=ebn|osn



Q-29 How to get the list of all enclosures?

Ans: # vxdmpadm listenclosure all



Q-30 How to check how many disks are in a particular enclosure?

Ans: # vxdmpadm getdmpnode enclosure=<enclosure_name>



Q-31 How to get the path of a particular disk or how to check the enclosure of a particular device?

Ans: # vxdmpadm getsubpaths dmpnodename=<dmpnode>

Or

# vxdisk list <disk>



Q-32 How to restart VxVM configuration Daemon?

Ans: # vxconfigd -k



Q-33 How to find the actual disk device name while disks are showing in enclosure based naming scheme?

Note: When disks are shown in the enclosure-based scheme, disk names appear as EMC0_0, EMC0_1 rather than c0t0d0.

Ans: # vxdisk list -e



Q-34 How to start a failed VxVM object, which failed due to a change from the OS-based naming scheme to enclosure-based naming?

Ans: (1) First deport the disk group.

(2) Run the command given below:

# /etc/vx/bin/vxdarestore

(3) Now import the disk group.
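Putting the whole sequence together, assuming a disk group named datadg:

# vxdg deport datadg

# /etc/vx/bin/vxdarestore

# vxdg import datadg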



Q-35 How to reserve a disk for a special purpose, or vice versa? (A reserved disk can't be used for normal volume operations)



Ans: # vxedit -g <diskgroup> set reserve=on <diskname>

And

# vxedit -g <diskgroup> set reserve=off <diskname>



Q-36 How do you determine by how much a Veritas Volume can be expanded?

Ans: # vxassist -g <diskgroup> maxgrow <volume>



Q-37 How do you grow a Veritas VXVM volume?

Ans: # vxresize -g <diskgroup> <volume> +5g



Q-38 How many partitions are created in a disk when we initialize the disk under VxVM?

Ans: 2 partitions:

Private region created on slice 3

Public region created on slice 4



Q-39 What is the length of Private Region?

Ans: VxVM 5.0 = 32 MB

VxVM 4.0 = 1 MB



Q-40 How do you determine Volume Status in Veritas VxVM?

Ans: # vxprint -htv



Q-41 Why would you deport a diskgroup in VxVM?

Ans: Normally we deport a disk group when we want to import it on another host.

# vxdg deport <diskgroup>



Q-42 Interactive front end to the vxdisk program in VxVM?

Ans: # vxdiskadm



Q-43 How to display free space on the disks in Veritas Volume Manager?

Ans: # vxdg free



Q-44 How to find how much a volume can be grown by in Veritas Volume Manager?

Ans: # vxassist maxgrow <volume>



Q-45 How to find the largest raid5 partition you can have in Veritas Volume Manager?

Ans: # vxassist maxsize layout=raid5



Q-46 How to find the largest stripe you can have in VxVM?

Ans: # vxassist maxsize layout=stripe



Q-47 How to move a volume to another disk except a particular one in Veritas Volume Manager?

Ans: To move a volume vg01 to any other disk except disk90 in Veritas Volume Manager:

# vxassist move vg01 !disk90



Q-48 How to set a preferred plex to read from in Veritas Volume Manager?

Ans: # vxvol rdpol prefer <volume> <preferred_plex>



Q-49 How to set a round robin read policy on the volume in Veritas Volume Manager?

Ans: # vxvol rdpol round volume_name



Q-50 How to verify and enable largefile support on a vxfs filesystem?

Ans: To verify if largefile support is enabled on a VXFS filesystem:

# fsadm -F vxfs /dir_name



If you need to enable largefile support:

# fsadm -F vxfs -o largefiles /dir_name



Q-51 How to add a log disk for a volume in Veritas Volume Manager?

Ans: To add a log disk for a raid5 or mirror of a volume in Veritas Volume Manager:

# vxassist addlog volume-name



Q-52 How to encapsulate the root disk?

Ans: We can encapsulate the root disk with the vxdiskadm command:

# vxdiskadm

Then choose the "Encapsulate one or more disks" menu option.



Q-53 How to mirror the root volume?

Ans: We can mirror all the volumes needed to boot with the below command:

# vxrootmir c0t1d0

It will mirror all the file systems needed to boot onto the c0t1d0 disk.



Q-54 How to remove rootability?

Ans: We can un-encapsulate the root disk and remove all the file systems needed to boot the system from VxVM control with the vxunroot command. This utility makes the necessary changes to boot the system without VxVM support.

# vxunroot



Q-55 How to create a mirror on a previously defined volume in Veritas Volume Manager?

Ans: Example to use the disks disk80 and disk90 to make a mirror on the volume called vg01:

# vxassist mirror vg01 disk80 disk90



Example to make a new 50 MB mirrored volume called vg01 using any two free disks:

# vxassist make vg01 50m layout=mirror



Q-56 How to create a raid5 volume in Veritas Volume Manager?

Ans: To create a raid5 volume in Veritas Volume Manager using any available disks:

# vxassist make vg01 100m layout=raid5



Q-57 How to create a volume in Veritas Volume Manager?

Ans: Example to make a volume called vg01 of size 100m using any available disk:

# vxassist -g <diskgroup> make vg01 100m



Example to make a volume called vg01 to be 100m big using the disk disk80:

# vxassist -g <diskgroup> make vg01 100m disk80



Q-58 How to create a volume with a mirror and log in Veritas Volume Manager?

Ans: Example to make a volume named vg01 with a 50 MB stripe using disks disk80 and disk90, mirror this onto a striped mirror using disks disk92 and disk95, and use a log subdisk:

# vxassist -g <diskgroup> make vg01 50m layout=mirror,stripe,log disk80 disk90 disk92 disk95



Q-59 How to create a volume with a mirror in Veritas Volume Manager?

Ans: Example to make a volume vg01 with a 50 MB mirror using the two disks disk80 and disk90:

# vxassist -g <diskgroup> make vg01 50m layout=mirror disk80 disk90



Q-60 How to grow the size of a volume in Veritas Volume Manager?

Ans: Examples: to grow the volume vg01 to a total of 2000 512-byte sectors:

# vxassist growto vg01 2000

OR, to grow it by 2000 sectors:

# vxassist growby vg01 2000



Q-61 How to mirror a volume on any free disk in Veritas Volume Manager?

Ans: To mirror a volume vg01 on any free disk in Veritas Volume Manager:

# vxassist mirror vg01



Q-62 How to mirror volumes in a disk group in Veritas Volume Manager?

Ans: Example to mirror the volume vol80 onto vol90 in the disk group rootdg:

# vxassist -g rootdg mirror vol80 vol90



Q-63 How to shrink the size of a volume in Veritas Volume Manager?

Ans: Examples: to shrink the volume vg01 to a total of 2000 512-byte sectors:

# vxassist shrinkto vg01 2000

OR, to shrink it by 2000 sectors:

# vxassist shrinkby vg01 2000



Q-64 How to verify the main daemon for Veritas Volume Manager?

Ans: vxconfigd is the main daemon of Veritas Volume Manager and must be running at all times. It is started at system startup.

We can check its status as shown below:

# vxdctl mode



Or we can verify it is running with a ps command:

# ps -ef | grep vxconfigd



Q-65 How to enable, disable or verify the vxconfigd daemon in Veritas Volume Manager?



To verify the vxconfigd daemon in Veritas Volume Manager:

# vxdctl mode



To enable the vxconfigd daemon:

# vxdctl enable



To disable the vxconfigd daemon:

# vxdctl disable





Q-66 How to upgrade VxVM?

Ans:





Q-67 What is vxbootsetup utility?

Ans: The vxbootsetup utility configures physical disks so that they can be used to boot the system. Before vxbootsetup is called to configure a disk, mirrors of the root, swap, /usr and /var volumes (if they exist) should be created on the disk. These mirrors should be restricted mirrors of the volume. The vxbootsetup utility configures a disk by writing a boot track at the beginning of the disk and by creating physical disk partitions in the UNIX VTOC that match the mirrors of the root, swap, /usr and /var.

With no medianame arguments, all disks that contain usable mirrors of the root, swap, /usr and /var volumes are configured to be bootable. If medianame arguments are specified, only the named disks are configured.

vxbootsetup requires that the root volume is named rootvol and has a usage type of root. The swap volume is required to be named swapvol and to have a usage type of swap. The volumes containing /usr and /var (if any) are expected to be named usr and var, respectively.
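A hedged usage sketch, assuming the boot mirrors were created on a disk with the media name disk01 in rootdg:

# vxbootsetup -g rootdg disk01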

Q-68 What is vxrootmir utility?

Ans: The vxrootmir script creates mirrors of volumes required in booting. It creates a mirror for rootvol, swapvol and standvol. It also creates mirrors of usr, var and home if they exist as separate volumes on the boot disk. The mirror is created on the specified disk media device.

The specified disk media device should have enough space to contain the mirrors for all the source volumes mentioned above, or else it will fail. Also, the corresponding slices must be free because they are used to create the partitions.

All disk partitions for the new volume mirrors are created.



This script is called by the vxmirror command if the root disk is required to be mirrored. It is also called from the vxdiskadm menus through the choice of the mirror volumes on a disk operation.
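A hedged usage sketch, assuming the mirror target disk has the media name disk01:

# /etc/vx/bin/vxrootmir disk01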



Q-69 What is vxmirror utility?

Ans: The vxmirror command provides a mechanism to mirror all the contents of a specified disk, to mirror all currently un-mirrored volumes in the specified disk group, or to change or display the current defaults for mirroring. All volumes that have only a single plex (mirror copy), will be mirrored by adding an additional plex.

Volumes containing subdisks that reside on more than one disk will not be mirrored by vxmirror.

vxmirror is generally called from the vxdiskadm menus. It is not an interactive command and once called, will continue until completion of the operation or until a failure is detected.
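A hedged usage sketch, consistent with Q-8 earlier in this post, mirroring everything on rootdisk to disk90:

# vxmirror -g rootdg rootdisk disk90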

Q-70 What is vxunroot utility?

Ans: The vxunroot script causes the root, swap, usr and var file systems to be converted back into being accessible directly through disk partitions instead of through volume devices. Other changes made to ensure the booting of the system from the root volume are also removed such that the system will boot with no dependency on the Volume Manager.



For vxunroot to work properly, all but one plexes of rootvol, swapvol, usr and var should be removed. The plexes left behind for the above volumes should be the ones created by vxrootmir or the original ones created when the root disk was encapsulated. This will ensure that the underlying subdisks will have equivalent partitions defined for them on the disk. If none of these conditions are met, the vxunroot operation will fail and none of the volumes will be converted to disk partitions.



Q-71 How to recover from the root disk and root-mirror disk failure?

Ans:

1) Boot the system into single-user mode from the Solaris installation CD.

ok boot cdrom -s



2) Use the format command to create partitions on the new root disk (c0t0d0). These should be identical in size to those on the original root disk before encapsulation, unless you are using this procedure to change their sizes.

3) Create the file system on this slice:

# newfs /dev/rdsk/c0t0d0s0



4) Mount the root slice on /a

# mount /dev/dsk/c0t0d0s0 /a



5) Now restore the / file system from backup.



6) Now run the installboot command to install bootblk.

# cd /usr/platform/`uname -i`/lib/fs/ufs

# installboot bootblk /dev/rdsk/c0t0d0s0

Or

# installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c0t0d0s0



7) Now restore /usr file system if it is there.



8) Create /etc/vx/reconfig.d/state.d/install-db file to prevent the configuration daemon from starting.

# touch /a/etc/vx/reconfig.d/state.d/install-db



9) Now comment out the two lines shown below in the /a/etc/system file:

# cp /a/etc/system /a/etc/system.orig

# vi /a/etc/system

* set vxio:vol_rootdev_is_volume=1

* rootdev:/pseudo/vxio@0:0



10) Now edit the vfstab file to replace the volume device names with the disk slices.
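For example, the root entry would change roughly like this (the bootdg and c0t0d0s0 names here are illustrative):

/dev/vx/dsk/bootdg/rootvol /dev/vx/rdsk/bootdg/rootvol / ufs 1 no -

becomes:

/dev/dsk/c0t0d0s0 /dev/rdsk/c0t0d0s0 / ufs 1 no -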



11) Now remove the /a/dev/vx/dsk/bootdg and /a/dev/vx/rdsk/bootdg

# rm /a/dev/vx/dsk/bootdg

# rm /a/dev/vx/rdsk/bootdg



12) Now reboot the system. The system will boot into multi-user mode.

# init 6







Q-72 How to recover the VxVM configuration after reinstallation of Solaris?

Ans: Reinstallation is necessary if all copies of your root disk are damaged, or if certain critical files are lost due to file system damage. Disconnect all the disks that are not involved in the reinstallation process.



1) Install the Solaris OS



2) Install the VxVM software and also install the VxVM license.



3) Recover the VxVM configuration:

1) touch the /etc/vx/reconfig.d/state.d/install-db file

2) Shut down the system

3) Reattach the disks which were removed before installation.

4) Reboot the system and when the system comes up, bring it into single user mode.

5) Remove the /etc/vx/reconfig.d/state.d/install-db file

6) Start the VxVM I/O daemons.

# vxiod set 10

7) Start the VxVM daemon in disabled mode

# vxconfigd -m disable

8) Initialize the vxconfigd daemon

# vxdctl init

9) Initialize the DMP subsystem

# vxdctl initdmp

10) Now start the vxconfigd daemon

# vxdctl enable

11) Now reboot the system



The configuration preserved on the disks not involved in the reinstallation has now been recovered.





Q-73 How to convert the SVM meta devices into VxVM volumes?

Ans: There are four utilities used to convert SVM metadevices into VxVM volumes.



1) Run the preconvert utility to analyze the current SVM configuration:

# preconvert

The preconvert utility analyzes the current Solaris Volume Manager configuration and builds a description for the new VxVM configuration. preconvert does not make any changes to the Solaris Volume Manager configuration or to the host system.



2) Now run the showconvert utility to display the preconvert conversion plan in a readable format:

# showconvert



3) Now run the convertname utility to display the VxVM volume names:

# convertname /dev/md/dsk/d12

Note: The convertname utility takes Solaris Volume Manager device paths as arguments (metadevice paths or raw disk paths) and returns the VxVM volume path for the device as it will show after the conversion.



4) Now run the doconvert utility to start the actual conversion process

# doconvert



5) Now reboot the system to apply the changes.

Monday, September 6, 2010

Add multiple users in Solaris

1: Create a file called username and add all the new usernames to it.
2: Run the script below:

#!/bin/sh

# Starting UID; it is incremented before use, so the first user gets UID 202.
x=201

for i in `cat /tmp/username`

do

x=`expr $x + 1`

# Create the user with the next UID, group 10 (staff), a home
# directory under /export/home, and the Bourne shell.
useradd -u $x -g 10 -m -d /export/home/$i -s /bin/sh $i

done

ssh login without password

First log in on 192.168.30.72 as user1 and generate a pair of authentication keys. Do not enter a passphrase:


----------------------------------------------------------------

user1@192.168.30.72:~> ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/home/a/.ssh/id_rsa):

Created directory '/home/a/.ssh'.

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/a/.ssh/id_rsa.

Your public key has been saved in /home/a/.ssh/id_rsa.pub.

The key fingerprint is:

3e:4f:05:79:3a:9f:96:7c:3b:ad:e9:58:37:bc:37:e4 a@A

--------------------------------------------------------------------



Now use ssh to create a directory ~/.ssh as user1 on 192.168.30.178. (The directory may already exist, which is fine):

-----------------------------------------------------------------------

user1@192.168.30.72:~> ssh user1@192.168.30.178 mkdir -p .ssh

user1@192.168.30.178's password:

------------------------------------------------------------------------

Finally, append user1's new public key to user1@192.168.30.178:.ssh/authorized_keys and enter user1's password one last time:

-----------------------------------------------------------------------------------------------------

user1@192.168.30.72:~> cat .ssh/id_rsa.pub | ssh user1@192.168.30.178 'cat >> .ssh/authorized_keys'

user1@192.168.30.178's password:

----------------------------------------------------------------------



From now on you can log into 192.168.30.178 as user1 from 192.168.30.72 without a password:

------------------------------------------------------------------------

user1@192.168.30.72:~> ssh user1@192.168.30.178

Monday, August 2, 2010

solaris 10 Zones configured with VCS

Configuration Steps


To Build a standard VCS Cluster to Support Zones, the following must be done:

a. Build Servers (Including VxVM/VCS)

b. Configure / Enable FSS Scheduler

c. Configure / Enable Default Processor Pools

d. Configure VCS Servicegroups

e. Create Zone



4. Build Servers (Including VCS)

When building a VCS Cluster to support Zones, the latest Solaris 10 x86 build should be used. VCS supports Zones starting with the 4.1 MP1 release.



5. Enable FSS Scheduler

Once the servers have been built, enable the FSS (Fair Share Scheduler) by executing the following commands:



# dispadmin -d FSS



Move All existing processes into FSS scheduler:



# priocntl -s -c FSS -i all
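To confirm the change, dispadmin with no class argument prints the default scheduling class; the output should look roughly like:

# dispadmin -d

FSS (Fair Share)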





6. Enable Processor pools, and default pool

Create the default system pool; this will be the default pool to which all CPUs are assigned unless an application has specifically requested a set number of CPUs and does not want to share with other applications (when CPUs are available).



Execute the following commands to enable pools:



# pooladm -e

# poolcfg -c discover

# pooladm -c



Set default pools scheduler to FSS class:



# poolcfg -c 'modify pool pool_default (string pool.scheduler="FSS")'

# pooladm -c
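To verify, pooladm run with no arguments prints the active in-kernel pool configuration; pool_default should show pool.scheduler set to FSS:

# pooladm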



7. Setup Veritas Volumes

Prior to setting up Zones, you should first set up the proper volumes/filesystems:

Initialize devices:

# /etc/vx/bin/vxdisksetup -i <device>



Initialize disk groups:

# vxdg init DG#_<name> <disk>=<device>



Initialize volumes:

# vxassist -g <diskgroup> make <volume> <size>



Create filesystems:

# mkfs -F vxfs -o largefiles /dev/vx/rdsk/<diskgroup>/<volume>



The recommended filesystem size for a zone is 16g:

# vxassist -g <diskgroup> make zone_os 16g
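Then create the filesystem on the new volume; the disk group name below (DG1_dse-ds1-d, matching the VCS examples later in this post) is illustrative:

# mkfs -F vxfs -o largefiles /dev/vx/rdsk/DG1_dse-ds1-d/zone_os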



8. Create zone

Create the Zone using the create_zone script from the jumpstart server: /jumpstart_10x86v0/stdbuild/scripts/create_zone:



# mount <jumpstart_server>:/jumpstart_10x86v0/stdbuild /mnt

# cd /mnt/scripts

# ./create_zone



You need to specify a zone name:



./create_zone -z [ -l localisation ] [ -c ] [ -m ]

[ -s ] -p [ -i ]

[-e ] -n

example:

./create_zone -z pe-as1-d -l dselab -n

./create_zone -z pe-as1-d -l dselab -c 2 -n

./create_zone -z pe-as1-d -l dselab -c 2 -m 2G -n

./create_zone -z pe-as1-d -l dselab -m 2G -s 200 -p /zone_os -n

-v [for debug verbose]



Unless -n is used, the script will only show the commands it would run.



The script will create the project for the Zone on the server where it was run.

/etc/project:

dseds1d:100:dse-ds1-d:::project.cpu-shares=(priv,100,none);project.pool=pool_default

dseas1d:101:dse-as1-d:::project.cpu-shares=(priv,100,none);project.pool=pool_default



Example:

# ./create_zone -z dse-ds1-d -l dselab -s 200 -p /dse-ds1/zone_os -I 30.6.25.20 -e ce0 -n



After the create script completes, the rest of the standard build will be applied during the first zone boot. Once the zone has been completely built, you can include additional filesystems (either by lofs or by direct mount under the /zone_os/<zone_name>/root/ directory).



Example:



# zonecfg -z dse-ds1-d

zonecfg:dse-ds1-d> add fs

zonecfg:dse-ds1-d:fs> set dir=/dse-ds1/ora01

zonecfg:dse-ds1-d:fs> set special=/dse-ds1/ora01

zonecfg:dse-ds1-d:fs> set type=lofs

zonecfg:dse-ds1-d:fs> end

zonecfg:dse-ds1-d> add fs

zonecfg:dse-ds1-d:fs> set dir=/dse-ds1/ora02

zonecfg:dse-ds1-d:fs> set special=/dse-ds1/ora02

zonecfg:dse-ds1-d:fs> set type=lofs

zonecfg:dse-ds1-d:fs> end

zonecfg:dse-ds1-d> add fs

zonecfg:dse-ds1-d:fs> set dir=/dse-ds1/ora03

zonecfg:dse-ds1-d:fs> set special=/dse-ds1/ora03

zonecfg:dse-ds1-d:fs> set type=lofs

zonecfg:dse-ds1-d:fs> end

zonecfg:dse-ds1-d> add fs

zonecfg:dse-ds1-d:fs> set dir=/dse-ds1/oraarch

zonecfg:dse-ds1-d:fs> set special=/dse-ds1/oraarch

zonecfg:dse-ds1-d:fs> set type=lofs

zonecfg:dse-ds1-d:fs> end

zonecfg:dse-ds1-d> add fs

zonecfg:dse-ds1-d:fs> set dir=/dse-ds1/oratemp

zonecfg:dse-ds1-d:fs> set special=/dse-ds1/oratemp

zonecfg:dse-ds1-d:fs> set type=lofs

zonecfg:dse-ds1-d:fs> end

zonecfg:dse-ds1-d> commit



Modify zone.cpu-shares (this is updated automatically if -s is included with the create_zone script):

zonecfg:dse-ds1-d> add rctl

zonecfg:dse-ds1-d:rctl> set name=zone.cpu-shares

zonecfg:dse-ds1-d:rctl> add value (priv=privileged,limit=100,action=none)

zonecfg:dse-ds1-d:rctl> end

zonecfg:dse-ds1-d> commit

After rebooting the ZONE, the partitions should be mounted:

# df -k

Filesystem kbytes used avail capacity Mounted on

/ 16777216 3938350 12036933 25% /

/dev 16777216 3938350 12036933 25% /dev

/orape-ds1/ora01 10485760 19652 9811983 1% /orape-ds1/ora01

/orape-ds1/ora02 10485760 19651 9811985 1% /orape-ds1/ora02

/orape-ds1/ora03 10485760 19651 9811985 1% /orape-ds1/ora03

/orape-ds1/oraarch 10485760 19651 9811985 1% /orape-ds1/oraarch

/orape-ds1/oratemp 10485760 19651 9811985 1% /orape-ds1/oratemp

proc 0 0 0 0% /proc

ctfs 0 0 0 0% /system/contract

swap 5235304 268 5235036 1% /etc/svc/volatile

mnttab 0 0 0 0% /etc/mnttab

/usr/lib/libc/libc_hwcap2.so.1

16777216 3938350 12036933 25% /lib/libc.so.1

fd 0 0 0 0% /dev/fd

swap 1048576 0 1048576 0% /tmp

swap 20480 20 20460 1% /var/run





If dedicating CPUs, you can assign a CPU pool to the zone:

bash-3.00# zonecfg -z dse-ds1-d

zonecfg:dse-ds1-d> set pool=dse-ds1-d



else, add it to the global pool, which should be using FSS (Fair Share Scheduler).



zonecfg:dse-ds1-d> set pool=pool_default

zonecfg:dse-ds1-d> commit



All projects should be created on all cluster nodes and in the Zone, so that all shares are listed properly.



9. Create Project / Assign users within Zone:

Log in to the ZONE and set up projects, users, and shares.



projadd -c <comment> <projectname> (the project name cannot contain "-")

# projadd -c dse-ds1-d dseds1d



Assign CPU shares to the project/Zone:

# projmod -sK "project.pool=pool_default" dseds1d

# projmod -sK "project.cpu-shares=(priv,100,none)" dseds1d


Set up the project in the Zone and assign users to it:

projmod -U <users> <projectname>



# projmod -U oracle,orape01 dseds1d
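To verify the shares took effect (a quick check; prctl output format varies a little between updates):

# prctl -n project.cpu-shares -i project dseds1d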





10. Setup VCS

Configure VCS as per the usual standards/best practices. After creating the Servicegroups, and prior to creating the ZONE resource, you should first shut down the ZONE.



VCS Example:



Create Cluster:

haclus -modify Administrators root

haclus -modify ClusterName dse-cluster2



Create MultiNIC Group/Resource:

hagrp -add acb-mnic

hagrp -modify acb-mnic SystemList dse-clust1-da 0 dse-clust1-db 1

hagrp -modify acb-mnic AutoStart 0



hares -add MNIC_acb-mnic_bge0_bge1 MultiNICB acb-mnic

hares -modify MNIC_acb-mnic_bge0_bge1 Device bge0 0 bge1 1

hares -modify MNIC_acb-mnic_bge0_bge1 NetMask "255.255.255.0"

hares -modify MNIC_acb-mnic_bge0_bge1 IgnoreLinkStatus 0





Create ServiceGroup:

hagrp -add dse-ds1-d

hagrp -modify dse-ds1-d SystemList dse-clust1-da 0 dse-clust1-db 1



Create DiskGroup

hares -add DG1_dse-ds1-d DiskGroup dse-ds1-d

hares -modify DG1_dse-ds1-d DiskGroup DG1_dse-ds1-d



Create Proxy Resource:

hares -add PXY_dse-ds1-d_bge0_bge1 Proxy dse-ds1-d

hares -modify PXY_dse-ds1-d_bge0_bge1 TargetResName MNIC_acb-mnic_bge0_bge1



Create Volume Resources

hares -add VOL_DG1_dse-ds1-d_admin Volume dse-ds1-d

hares -modify VOL_DG1_dse-ds1-d_admin Volume admin

hares -modify VOL_DG1_dse-ds1-d_admin DiskGroup DG1_dse-ds1-d



hares -add VOL_DG1_dse-ds1-d_ora01 Volume dse-ds1-d

hares -modify VOL_DG1_dse-ds1-d_ora01 Volume ora01

hares -modify VOL_DG1_dse-ds1-d_ora01 DiskGroup DG1_dse-ds1-d



hares -add VOL_DG1_dse-ds1-d_ora02 Volume dse-ds1-d

hares -modify VOL_DG1_dse-ds1-d_ora02 Volume ora02

hares -modify VOL_DG1_dse-ds1-d_ora02 DiskGroup DG1_dse-ds1-d



hares -add VOL_DG1_dse-ds1-d_ora03 Volume dse-ds1-d

hares -modify VOL_DG1_dse-ds1-d_ora03 Enabled 1

hares -modify VOL_DG1_dse-ds1-d_ora03 Volume ora03

hares -modify VOL_DG1_dse-ds1-d_ora03 DiskGroup DG1_dse-ds1-d



Create Mount Resources:

hares -add MNT_dse-ds1-d_admin Mount dse-ds1-d

hares -modify MNT_dse-ds1-d_admin MountPoint "/dse-ds1"

hares -modify MNT_dse-ds1-d_admin BlockDevice "/dev/vx/dsk/DG1_dse-ds1-d/admin"

hares -modify MNT_dse-ds1-d_admin FSType vxfs

hares -modify MNT_dse-ds1-d_admin FsckOpt "%-y"



hares -add MNT_dse-ds1-d_ora01 Mount dse-ds1-d

hares -modify MNT_dse-ds1-d_ora01 MountPoint "/dse-ds1/ora01"

hares -modify MNT_dse-ds1-d_ora01 BlockDevice "/dev/vx/dsk/DG1_dse-ds1-d/ora01"

hares -modify MNT_dse-ds1-d_ora01 FSType vxfs

hares -modify MNT_dse-ds1-d_ora01 FsckOpt "%-y"



hares -add MNT_dse-ds1-d_ora02 Mount dse-ds1-d

hares -modify MNT_dse-ds1-d_ora02 MountPoint "/dse-ds1/ora02"

hares -modify MNT_dse-ds1-d_ora02 BlockDevice "/dev/vx/dsk/DG1_dse-ds1-d/ora02"

hares -modify MNT_dse-ds1-d_ora02 FSType vxfs

hares -modify MNT_dse-ds1-d_ora02 FsckOpt "%-y"



hares -add MNT_dse-ds1-d_zone Mount dse-ds1-d

hares -modify MNT_dse-ds1-d_zone MountPoint "/dse-ds1/zone_os"

hares -modify MNT_dse-ds1-d_zone BlockDevice "/dev/vx/dsk/DG1_dse-ds1-d/zone"

hares -modify MNT_dse-ds1-d_zone FsckOpt "%-y"



Create ZONE Resource:

hares -add Zone_dse-ds1-d Zone dse-ds1-d

hares -modify Zone_dse-ds1-d ZoneName dse-ds1-d



Create Links:

hares -link DG1_dse-ds1-d PXY_dse-ds1-d

hares -link MNT_dse-ds1-d_admin DG1_dse-ds1-d

hares -link MNT_dse-ds1-d_admin VOL_dse-ds1-d_admin

hares -link MNT_dse-ds1-d_ora01 MNT_dse-ds1-d_admin

hares -link MNT_dse-ds1-d_ora01 VOL_DG1_dse-ds1-d_ora01

hares -link MNT_dse-ds1-d_ora02 MNT_dse-ds1-d_admin

hares -link MNT_dse-ds1-d_ora02 VOL_DG1_dse-ds1-d_ora02

hares -link MNT_dse-ds1-d_ora03 MNT_dse-ds1-d_admin

hares -link MNT_dse-ds1-d_ora03 VOL_DG1_dse-ds1-d_ora03

hares -link MNT_dse-ds1-d_zone MNT_dse-ds1-d_admin

hares -link MNT_dse-ds1-d_zone VOL_dse-ds1-d_zone

hares -link VOL_DG1_dse-ds1-d_ora01 DG1_dse-ds1-d

hares -link VOL_DG1_dse-ds1-d_ora02 DG1_dse-ds1-d

hares -link VOL_DG1_dse-ds1-d_ora03 DG1_dse-ds1-d

hares -link VOL_DG1_dse-ds1-d_oraarch DG1_dse-ds1-d

hares -link VOL_DG1_dse-ds1-d_oratemp DG1_dse-ds1-d

hares -link VOL_dse-ds1-d_admin DG1_dse-ds1-d

hares -link VOL_dse-ds1-d_zone DG1_dse-ds1-d

hares -link Zone_dse-ds1-d MNT_dse-ds1-d_zone

hares -link Zone_dse-ds1-d MNT_dse-ds1-d_ora01

hares -link Zone_dse-ds1-d MNT_dse-ds1-d_ora02

hares -link Zone_dse-ds1-d MNT_dse-ds1-d_ora03



Add a new cluster user, with group administrator privileges for the group containing the zone.



# hauser -add dse-adm -priv Administrator -group dse-ds1-d

Enter New Password:

Enter Again:



Ensure the local zone can resolve the host name of the global zone, either through DNS or through the /etc/hosts file.



Make sure the global zone's host name is in DNS or the local /etc/hosts file.



Log into the local zone (zlogin <zone>):

# zlogin -C -e T dse-ds1-d

[Connected to zone 'dse-ds1-d' console]



dse-ds1-d console login: root

Password:

Sep 25 07:24:15 dse-ds1-d login[23334]: ROOT LOGIN /dev/console

Last login: Mon Sep 25 07:21:24 on console

Machine: dse-ds1-d, SunOS 5.10, built 21 Sep 2006



Set the environment variable VCS_HOST to the host name of the global zone.



# export VCS_HOST=dse-clust1-da





Issue the /opt/VRTSvcs/bin/halogin command with the VCS user name and password:



# /opt/VRTSvcs/bin/halogin dse-adm dse-adm



(Fail over and do the same thing on the other cluster nodes.)





11. Enable Application Agents in VCS to monitor resources within a Zone

Application Agents that monitor resources within a Zone need to have the ContainerType and ContainerName attributes defined.



To update existing Oracle/Sqlnet resources to support Zones:

# haconf -makerw

# haattr -add -static Oracle ContainerType -string -scalar Zone

# hatype -modify Oracle ArgList -add ContainerName

# haattr -add -static Sqlnet ContainerType -string -scalar Zone

# hatype -modify Sqlnet ArgList -add ContainerName



# haattr -add Oracle ContainerName

# haattr -add Sqlnet ContainerName

# hatype -modify Oracle ArgList -add ContainerName

# hatype -modify Sqlnet ArgList -add ContainerName



Example Oracle/Sqlnet definitions:

# hares -add ORA_dse-ds1-d_PE10GSOL Oracle dse-ds1-d

# hares -modify ORA_dse-ds1-d_PE10GSOL Critical 0

# hares -modify ORA_dse-ds1-d_PE10GSOL SID PE10GSOL

# hares -modify ORA_dse-ds1-d_PE10GSOL ContainerName dse-ds1-d



# hares -add SQL_dse-ds1-d_LISTENER_PE10GSOL Sqlnet dse-ds1-d

# hares -modify SQL_dse-ds1-d_LISTENER_PE10GSOL Critical 0

# hares -modify SQL_dse-ds1-d_LISTENER_PE10GSOL LISTENER LISTENER_PE10GSOL

# hares -modify SQL_dse-ds1-d_LISTENER_PE10GSOL ContainerName dse-ds1-d



Create Application Links to Zone:

# hares -link ORA_dse-ds1-d_PE10GSOL Zone_dse-ds1-d

# hares -link SQL_dse-ds1-d_LISTENER_PE10GSOL Zone_dse-ds1-d





12. Sybase RAW Devices within a Zone running on VxVM

Currently, there is an issue with Zone visibility and VxVM volumes from the global zone. The following is a workaround until it is addressed:



Use the ls -l command to find out the major and minor numbers of the raw volume



In this example, the volume u1 is in the DG1_dse-ds1-d disk group within the global zone. The raw device in the global zone that corresponds to u1 is /dev/vx/rdsk/DG1_dse-ds1-d/u1.



Running the ls -l command on this device shows that the major number is 289, and the minor number is 45000:



# ls -l /dev/vx/rdsk/DG1_dse-ds1-d

crw------- 1 root root 289, 45000 Aug 24 11:13 u1



Use the mknod command to create an entry for the VxVM volume in the non global zone. In this example, the major number is 289 and the minor number is 45000:



# cd /dse-ds1/zone_os/dev

# mknod u1 c 289 45000



5. Log into the non global zone and check the device entry for the volume:



# zlogin -l root dse-ds1-d



[Connected to zone .dse-ds1-d. pts/6]

Last login: Thu Aug 24 14:31:41 on pts/5

Sun Microsystems Inc. SunOS 5.10 Generic January 2005



# ls -l /dev/u1

crw-r--r-- 1 root root 289, 45000 Aug 24 16:10 /dev/u1



6. Verify that you can perform I/O with the device entry in the non global zone:



# dd if=/dev/u1 of=/dev/null

2048+0 records in

2048+0 records out
Introduction


This article presents step-by-step procedures for creating a JumpStart server using the Solaris 10 OS for x86/x64 platforms. This version of the Solaris 10 OS is used both for the OS running on the server and for the OS on the JumpStart installation server. Both AMD Opteron and Intel processor-based machines were used as servers and clients with no preference given as to which processor type was used for each.



Instructions are provided to:



Create a JumpStart Installation Server

Create the Client Configuration Files

Share the Installation and Configuration Directories

Create the Client tftpboot Files

Configure and Run the DHCP Server

Perform a Hands-Off JumpStart Installation

Pre-boot Execution Environment (PXE) must be enabled on the clients in order to allow them to boot from the network. On some clients, PXE needs to be enabled in the BIOS.



A bug in the creation of the client boot file is addressed in the Final Clean-Up section.







Solaris JumpStart Procedure

These instructions are for setting up a JumpStart installation server that will install the Solaris 10 OS for x86/x64 platforms on two clients.



In this exercise, the node name of the JumpStart server is stinger2 and its IP address is 172.16.64.194. The default router's IP address is 172.16.64.1. The network address where the JumpStart server resides is 172.16.64.0.



If you need to, you can download the Solaris 10 OS.



1. Create a JumpStart Installation Server



a. Create an installation directory on the server:



# mkdir -p /export/install

b. Put the Solaris 10 OS for x86/x64 platforms DVD in the DVD player on the server. Create the installation server by going to the Solaris_10/Tools directory on the DVD and running the setup_install_server command. The Solaris software is copied to the newly created directory. Specify the absolute path name as the argument.



# cd /cdrom/cdrom0/Solaris_10/Tools

# ./setup_install_server /export/install

Verifying target directory...

Calculating the required disk space for the Solaris_10 product

\
/-\
/-

Calculating space required for the installation boot image

\
/-\
/-

Copying the CD image to disk...

\
/-\
/-

Copying Install Boot Image hierarchy...

\
/-\
/-

Copying /boot x86 netboot hierarchy...

\
/-\
/-

Install Server setup complete

#

c. Verify that the install directory has been populated.



# du -sk /export/install



3083278 /export/install

d. Remove the DVD from the DVD player.



# cd /;eject

2. Create the Client Configuration Files



The client configuration files are used to control a custom JumpStart installation.



a. Create a configuration directory where the files will reside:



# mkdir /export/config

b. Create the sysidcfg file:



The first file to create is the sysidcfg file. This file must be properly formatted with correct information or the file is ignored and the hands-off JumpStart installation is aborted. The installation then defaults to the standard Solaris interactive installation.



A JumpStart client looks for a file named sysidcfg before the OS installation begins. This file must be named "sysidcfg", so in order to have multiple versions of the file, each sysidcfg file must reside in a separate directory. Each client can have its own sysidcfg file, or multiple clients can use the same sysidcfg file. The sysidcfg file is assigned to a client by the add_install_client command. The following shows the creation of two sysidcfg files:



# cd /export/config

# mkdir sysidcfg1

# cd sysidcfg1

# vi sysidcfg



system_locale=en_US.ISO8859-1

timezone=US/Pacific

timeserver=localhost

terminal=vt100

name_service=NONE

security_policy=NONE

root_password=<encrypted_password>

network_interface=bge0 {hostname=client1

netmask=255.255.255.0

protocol_ipv6=no

default_route=172.16.64.1}



:wq



# cd ../

# mkdir sysidcfg2

# cd sysidcfg2

# vi sysidcfg



system_locale=en_US.ISO8859-1

timezone=US/Pacific

timeserver=localhost

terminal=vt100

name_service=NONE

security_policy=NONE

root_password=<encrypted_password>

network_interface=bge0 {hostname=client2

netmask=255.255.255.0

protocol_ipv6=no

default_route=172.16.64.1}



:wq

c. Create the rules file:



The next file to create is the rules file. This is a text file that contains a rule for each client or group of clients on which the Solaris OS will be installed. Each line of the rules file tells JumpStart which begin, profile, and finish files to use for each client or group of clients.



There is only one rules file. It can contain multiple lines depending upon how many unique configurations are needed. The following shows the contents of a rules file that contains information for two separate clients:



# cd /export/config

# vi rules



hostname client1 begin1 profile1 finish1

hostname client2 begin2 profile2 finish2



:wq

The rules file is used by the check script when it creates the rules.ok file. Successful creation of the rules.ok file is required for custom JumpStart installations.



d. Create the begin file:



The begin file is a user-defined Bourne shell script that is used to perform tasks on the client before the installation of the Solaris OS begins. Typical tasks include creating derived profiles and the backing up of files before upgrading.



Multiple begin files can be used if desired. The following shows the creation of two begin files:



# cd /export/config

# vi begin1



#!/bin/sh

echo "Begin Script for JumpStart client1..."



:wq



# vi begin2



#!/bin/sh

echo "Begin Script for JumpStart client2..."



:wq



# chmod 755 begin*

During installation on the client, output from the begin file is written to /tmp/begin.log. After the installation is done, the log file can be found in /var/sadm/system/logs/begin.log.



The Solaris 10 Installation Guide includes a Custom JumpStart Environment Variables section that describes variables you can use in begin scripts.



e. Create the finish file:



The finish file is a user-defined Bourne shell script that is used to perform tasks on the client after the installation of the Solaris OS has completed. This script is typically used to add additional files, add packages and patches, customize the root environment, and install additional software.



Multiple finish files can be used if desired. The following shows the creation of two finish files:



# cd /export/config

# vi finish1



#!/bin/sh

echo "Finish Script for JumpStart client1..."

echo "Get rid of the nfs prompt during the initial boot"

touch /a/etc/.NFS4inst_state.domain



:wq



# vi finish2



#!/bin/sh

echo "Finish Script for JumpStart client2..."

echo "Get rid of the nfs prompt during the initial boot"

touch /a/etc/.NFS4inst_state.domain



:wq



# chmod 755 finish*

The Solaris installation program mounts the client systems' file systems on /a. The finish script can be used to add, change, or remove files with respect to /a. These file systems remain mounted on /a until the initial system reboot.



The JumpStart directory, that is, /export/install, is mounted on the directory that is specified by the SI_CONFIG_DIR variable. The directory is set to /tmp/install_config by default. Files can be copied from the JumpStart directory to the client by commands run in the finish script. Files that are to be added to the installed system are placed in the JumpStart directory and are then accessible by the client.



The following line in the finish script copies a file to the newly installed file system hierarchy on the client:



cp /tmp/install_config/<filename> /a/<path>/<filename>

f. Create the profile file:



The profile file is a text file that defines how the Solaris OS is installed on a client.



Multiple profile files can be created. Several clients can use the same profile file, or each client can have its own profile file. The following shows the creation of two profile files:



# cd /export/config

# vi profile1



# install_type MUST be first

install_type initial_install



# start with the minimal required number of packages

cluster SUNWCXall

cluster SUNWCapache delete

cluster SUNWCpcmc delete

cluster SUNWCpcmcx delete

cluster SUNWCthai delete

cluster SUNWClp delete

cluster SUNWCnis delete

cluster SUNWCppp delete



# format the entire disk for Solaris

fdisk all solaris all



# define how the disk is partitioned

partitioning explicit

filesys rootdisk.s0 6144 /

filesys rootdisk.s1 1024 swap

filesys rootdisk.s7 free /state/partition1



# install systems as standalone

system_type standalone



# specify patches to install

patch 119281-06 nfs 172.16.64.194:/export/patches



# specify packages to install

package SPROcc add nfs 172.16.64.194:/export/packages



:wq



# vi profile2



# install_type MUST be first

install_type initial_install



# start with the minimal required number of packages

cluster SUNWCXall

cluster SUNWCapache delete

cluster SUNWCpcmc delete

cluster SUNWCpcmcx delete

cluster SUNWCthai delete

cluster SUNWClp delete

cluster SUNWCnis delete

cluster SUNWCppp delete



# format the entire disk for Solaris

fdisk all solaris all



# define how the disk is partitioned

partitioning explicit

filesys rootdisk.s0 6144 /

filesys rootdisk.s1 4096 swap

filesys rootdisk.s7 free /state/partition1



# install systems as standalone

system_type standalone



# specify patches to install

patch 119281-06 nfs 172.16.64.194:/export/patches



# specify packages to install

package SPROcc add nfs 172.16.64.194:/export/packages



:wq

g. Create the check script:



The check script is used to validate that the rules and profile files are correctly set up. First copy the check script to the local directory, that is, /export/config, as shown:



# cd /export/config

# cp /export/install/Solaris_10/Misc/jumpstart_sample/check .

h. Run the check script:



# ./check

Validating rules...

Validating profile profile1...

Validating profile profile2...

The custom JumpStart configuration is ok.

If no errors are found, the rules.ok file is created. This file is the same as the rules file but with its comments and blank lines removed. The check script adds the following comment to the end of the rules.ok file:



# version=2 checksum=<num>

3. Share the Installation and Configuration Directories



a. Modify dfstab to share the JumpStart directories.



b. Edit the /etc/dfs/dfstab file:



# vi /etc/dfs/dfstab



# Place share(1M) commands here for automatic execution

# on entering init state 3.

#

# Issue the command 'svcadm enable network/nfs/server' to

# run the NFS daemon processes and the share commands, after

# adding the very first entry to this file.

#

# share [-F fstype] [ -o options] [-d "<text>"]

# <pathname> [resource]

# for example,

# share -F nfs -o rw=engineering -d "home dirs" /export/home2



share -F nfs -o ro,anon=0 /export/install

share -F nfs -o ro,anon=0 /export/config

share -F nfs -o ro,anon=0 /export/patches

share -F nfs -o ro,anon=0 /export/packages



:wq



c. Start the NFS server.



# /etc/init.d/nfs.server start

d. Share the directories.



# shareall

# share

- /export/install ro,anon=0 ""

- /export/config ro,anon=0 ""

- /export/patches ro,anon=0 ""

- /export/packages ro,anon=0 ""

e. Verify file sharing.



# showmount -e localhost

export list for localhost:

/export/install (everyone)

/export/config (everyone)

/export/patches (everyone)

/export/packages (everyone)

4. Create the Client tftpboot Files



a. Run the add_install_client script for each client.



b. Go to the location of the add_install_client script:



# cd /export/install/Solaris_10/Tools

c. Run the add_install_client script for each client on the network that performs a JumpStart installation. Ensure that you use the correct arguments for each client. The -e argument is the MAC address for the client and the -p argument shows the directory name of the sysidcfg file that is used by the client. The following shows running add_install_client for two separate clients:



# ./add_install_client \

-d \

-e 00:0a:e4:37:16:4d \

-s 172.16.64.194:/export/install \

-c 172.16.64.194:/export/config \

-p 172.16.64.194:/export/config/sysidcfg1 i86pc



enabling tftp in /etc/inetd.conf

Converting /etc/inetd.conf

enabling network/tftp/udp6 service

copying boot file to /tftpboot/pxegrub.I86PC.Solaris_10-1



If not already configured, enable PXE boot by creating

a macro named 01000AE429C1FD with:

Boot server IP (BootSrvA) : 172.16.64.194

Boot file (BootFile) : 01000AE429C1FD



# ./add_install_client \

-d \

-e 00:0a:e4:2a:33:f8 \

-s 172.16.64.194:/export/install \

-c 172.16.64.194:/export/config \

-p 172.16.64.194:/export/config/sysidcfg2 i86pc



enabling tftp in /etc/inetd.conf

Converting /etc/inetd.conf

enabling network/tftp/udp6 service

copying boot file to /tftpboot/pxegrub.I86PC.Solaris_10-1



If not already configured, enable PXE boot by creating

a macro named 01000AE42A33F8 with:

Boot server IP (BootSrvA) : 172.16.64.194

Boot file (BootFile) : 01000AE42A33F8

The Boot server IP and Boot file values are used later when macros are created while dhcpmgr is running.



5. Configure and Run the DHCP Server



a. Run dhcpmgr:



# /usr/sadm/admin/bin/dhcpmgr

Java Accessibility Bridge for GNOME loaded.

The first screen appears:








b. Select Configure as a DHCP Server and click OK.



The DHCP Configuration Wizard appears:







c. Select Text Files and click Next.







d. Verify the storage path and click Next.







e. Select a nameservice and click Next.







f. Verify the lease information and click Next.







g. Verify the DNS domain information and click Next.







h. Verify the network information and click Next.







i. Select the network type and routing option and click Next.







j. Verify the NIS domain information and click Next.







k. Verify the NIS+ domain information and click Next.







l. Review the settings and click Finish.



The DHCP Manager appears and you are asked to start the Address Wizard:







m. Click Yes.



The Address Wizard appears:







n. Type the number of IP addresses and click Next.







o. Verify the server information and click Next.







p. Verify the IP addresses and click Next.







q. Verify the client configuration information and click Next.







r. Select the lease type and click Next.







s. Review the settings and click Finish.



With dhcpmgr still running, create the BootFile and BootSrvA macros. To access the Create Macros form, first select the Macros tab on the DHCP Manager form. Then select Edit->Create on the top menu.



The Create Macro form appears:







t. Create the BootFile portion of the macro by typing into the Name field the name that was generated by the add_install_client script for the first client. This name is also used in the Option Value field. After typing the information, click Add.







u. Create the BootSrvA portion of the macro by typing the network IP address of the JumpStart server into the Option Value field. After typing the information, click Add. Then click OK to complete the creation of the first macro.



v. Repeat the same process to create the second macro.







x. Click Add.







y. Click Add and then click OK.



z. After generating the second macro, select File->Exit in the DHCP Manager window to end the dhcpmgr utility.



Final Clean-Up



When you ran the add_install_client script, the script created a menu.lst file for each client.



Due to a bug in the creation of the /tftpboot/menu.lst file, you must add the following text after kernel/unix on line 4:



- install dhcp

Therefore, change the file from this:



default=0

timeout=30

title Solaris_10 Jumpstart

kernel /I86PC.Solaris_10-1/multiboot kernel/unix -B \

install_config=172.16.64.194:/export/config, \

sysid_config=172.16.64.194:/export/config/sysidcfg1, \

install_media=172.16.64.194:/export/install, \

install_boot=172.16.64.194:/export/install/boot

module /I86PC.Solaris_10-1/x86.miniroot

to this:



default=0

timeout=4

title Solaris_10 Jumpstart

kernel /I86PC.Solaris_10-1/multiboot kernel/unix - install dhcp -B \

install_config=172.16.64.194:/export/config, \

sysid_config=172.16.64.194:/export/config/sysidcfg1, \

install_media=172.16.64.194:/export/install, \

install_boot=172.16.64.194:/export/install/boot

module /I86PC.Solaris_10-1/x86.miniroot



Perform a Hands-Off JumpStart Installation



Boot the clients.



After the prompt is displayed, press F12 on the client's keyboard.



Network Boot Request....



CLIENT MAC ADDR: 00 0A E4 2A 33 F8 GUID: 11223344 556 7788 99AA \

BBCCDDEEFF00



DHCP....\
/-\
/-

If everything has been set up correctly, the installation runs to completion. If a problem occurs, the installer exits and drops into a shell. The cause of the error is recorded in the install_log file.







Post Installation

The following are the log files for the JumpStart installation:



/var/sadm/system/logs/install_log

begin_log

finish_log

sysidtool.log

Solaris multipathing

I've got this Sun box running Solaris 8 that I've managed to get three 10/100 hme Ethernet cards into. All are connected to the 100Mbit switch that runs most of our LAN. This is the quick guide to how I set up multipathing on those three interfaces. As clearly stated in Sun's docs, each interface involved in failover must be assigned to a group (I use the same group for all three; it can be named just about whatever you want) and assigned an additional IP address for in.mpathd to use for testing whether the interface is up or not. While these additional IPs will only be relevant to this host, they must NOT be in use anywhere else on your subnet.


Conventions

Network: 10.0.0.0/24

Hostname: acadie

Domain: internal

Interfaces: hme0 hme1 hme2

Failover group name: mofo

Main "live" address: 10.0.0.101

hme0 "test" address: 10.0.0.110

hme1 "test" address: 10.0.0.111

hme2 "test" address: 10.0.0.112

Configuration files

/etc/hosts:

#

# Internet host table

#

127.0.0.1 localhost loghost

10.0.0.101 acadie.internal acadie

10.0.0.110 acadie-hme0

10.0.0.111 acadie-hme1

10.0.0.112 acadie-hme2

/etc/netmasks:

10.0.0.0 255.255.255.0

/etc/hostname.hme0:

acadie netmask + broadcast + up \

group mofo \

addif acadie-hme0 netmask + broadcast + \

deprecated -failover up

/etc/hostname.hme1:

acadie-hme1 netmask + broadcast + \

group mofo \

deprecated -failover standby up

/etc/hostname.hme2:

acadie-hme2 netmask + broadcast + \

group mofo \

deprecated -failover standby up

Command line

The above configuration is all that is required to make this configuration persistent across reboots. If, however, you are in the position of having to implement this on a running machine without rebooting, you pretty much just run `ifconfig` for each interface, with the arguments shown in the /etc/hostname.hme? files above.

For example, if you're already up and running on hme0, and want to add hme1 and hme2 as failover interfaces to hme0:

acadie# ifconfig hme0

hme0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2

inet 10.0.0.101 netmask ffffff00 broadcast 10.0.0.255

ether 8:0:20:c5:10:15

Assign hme0 to your failover group, and add an alias to it for the testing address:

acadie# ifconfig hme0 group mofo

acadie# ifconfig hme0 addif 10.0.0.110 netmask 255.255.255.0 \

broadcast 10.0.0.255 -failover deprecated up

Then add hme1 and hme2 in:

acadie# ifconfig hme1 plumb 10.0.0.111 netmask 255.255.255.0 \

broadcast 10.0.0.255 group mofo deprecated -failover standby up

acadie# ifconfig hme2 plumb 10.0.0.112 netmask 255.255.255.0 \

broadcast 10.0.0.255 group mofo deprecated -failover standby up

Note: You can substitute in hostnames for the IP addresses in those ifconfig commands, provided they are in /etc/hosts.

acadie# ifconfig -a

lo0: flags=1000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4> mtu 8232 index 1

inet 127.0.0.1 netmask ff000000

hme0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2

inet 10.0.0.101 netmask ffffff00 broadcast 10.0.0.255

groupname mofo

ether 8:0:20:c5:10:15

hme0:1: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 2

inet 10.0.0.110 netmask ffffff00 broadcast 10.0.0.255

hme1: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 3

inet 10.0.0.111 netmask ffffff00 broadcast 10.0.0.255

groupname mofo

ether 8:0:20:c5:c0:53

hme2: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 4

inet 10.0.0.112 netmask ffffff00 broadcast 10.0.0.255

groupname mofo

ether 0:60:5b:e:2:dd
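
To prove that failover actually works before you need it, you can detach an interface and watch the live address move. This is just a sketch, using our example interface name hme0, and assumes your Solaris release ships the if_mpadm(1m) utility:

acadie# if_mpadm -d hme0

(10.0.0.101 should fail over to hme1 or hme2; an ifconfig -a will show hme0 marked OFFLINE)

acadie# if_mpadm -r hme0

(reattaches hme0 and fails the live address back)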

Solaris 10 mirror root

I wonder how many out there have been handed a Solaris 10 system with no free slice for the metadbs. Sure, on older versions of Solaris it's easy: disable swap and re-slice it into a smaller swap plus a metadb slice. Well, it doesn't quite work that smoothly in the latest Solaris 10 (update 3 as I write this), thanks to the safety code that protects you from re-labeling a mounted drive. I'm sure someone is going to mention the "NOINUSE_CHECK=1" environment variable; it sounds like it should work, but in update 3 they apparently added a few more checks, and it doesn't. My next thought was to pull out an install disk and fix it that way, but of course the client hadn't located his null modem cable, I'm 8,000 miles away, and here I am stuck on an ssh login to the machine. He wants root mirrored with SVM, it's the first thing he's dropped in my lap, and I'm trying to prove I'm not just an overrated quack. I know most of the consultants out there reading this have been there.







So here is what I tried, and the actual solution that everyone can use. I'm not going to give the commands and error messages along the way, because I didn't think to save them.







My first idea was to resize /export/home rather than swap (who knows, he may need all his swap), so I disabled swap, told the system not to mount /export/home via the vfstab, and rebooted. I ran format and it complained about slices being in use, so I tried 'NOINUSE_CHECK=1 format'. It didn't complain as much, so I knew the environment variable was doing something; I modified the slices and tried again to write the label. No luck. I asked on #solaris, and they came up with prtvtoc, which prints the current partition table in sector format; you can then modify it and write it back out with fmthard. Since I was a bit under pressure and had never had to resort to editing slices in sectors, I tried it a couple of times but just couldn't get the math right: fmthard kept complaining that the slices didn't start on a cylinder boundary, and I needed to come up with a better solution.







WARNING: COMMANDS MENTIONED HERE CAN DESTROY DATA, RUIN YOUR LOVE LIFE, GET YOU FIRED, CAUSE HAIR LOSS, GIVE YOU A HEART ATTACK, AND EVEN MAKE YOUR DOG HATE YOU AND PEE IN YOUR SHOES IF YOU USE THEM WITHOUT KNOWING WHAT YOU ARE DOING AND YOUR DATA IS NOT 100% BACKED UP. NO WARRANTY IS IMPLIED; DON'T BLAME ME, NO MATTER WHAT HAPPENS TO YOUR SYSTEM.







The solution I found turns out to be fairly simple.







Step 1: save a copy of the first drive's partition table to a text file.



#prtvtoc /dev/rdsk/c0t0d0s2 > oldparttable







Step 2: write it to the drive that will be the mirror and pray that it has the same geometry as the current root drive. I got lucky and it was identical. It may be necessary to put an SMI label on the drive first if it's new or previously carried a non-Solaris OS.







#fmthard -s oldparttable /dev/rdsk/c0t1d0s2







Step 3: use format to modify the slices on the non-mounted drive.







Step 4: save a copy of the modified partition table from the non-mounted drive.







#prtvtoc /dev/rdsk/c0t1d0s2 > newparttable







Step 5: write the new partition table to the mounted drive.







#fmthard -s newparttable /dev/rdsk/c0t0d0s2







Step 6: reboot the system so it sees that the root drive has a new partition table.







Step 7: newfs the /export/home slice.
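
Something like the following, assuming /export/home lives on slice 7 (adjust to your own layout):

#newfs /dev/rdsk/c0t0d0s7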







Step 8: modify vfstab to mount /export/home again and restore the data. Thankfully it was a fresh install, so there was no data to restore.
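
The vfstab entry would look something like this, again assuming slice 7:

/dev/dsk/c0t0d0s7 /dev/rdsk/c0t0d0s7 /export/home ufs 2 yes -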







Create the metadb’s and mirror the root slice as documented in numerous other places on the net. So in the end I saved a 4 hour reinstall and was able to complete the task without a serial cable.







DONE.

what is superblock and how to recover it

The scope of this article is not the basics of file systems; it is about troubleshooting a corrupt file system.

So, when you create a file system on a hard drive, it is subdivided into multiple file system blocks.

Blocks are used:

1. To store user data

2. To store the file system's metadata

(Metadata is essentially the structure of your file system; it contains the superblock, inodes and directories.)



Superblock - Every file system (ext2, ext3, etc.) has a superblock. The superblock contains information about the file system, such as:

* File system type

* Size

* Status

* Information about other metadata



By now you can guess how important the superblock is: if it is corrupt, you may not be able to use the partition at all, or you will get errors while trying to mount the file system.

The following are common symptoms when the superblock is corrupt or the disk has bad sectors:

- The file system refuses to mount

- The file system hangs

- Sometimes the file system mounts, but behaves strangely



These kinds of errors occur for a bunch of reasons. Most of the time fsck handles them fine:

$e2fsck -f /dev/hda3



(the -f option forces checking even if the file system seems clean)



But what do you do when fsck doesn't work because the superblock itself is lost?

Note that Linux maintains multiple redundant copies of the superblock in every file system. You can find their locations with the following command:

$dumpe2fs /dev/hda6 | grep -i superblock

dumpe2fs 1.32 (09-Nov-2002)

Primary superblock at 1, Group descriptors at 2-2

Backup superblock at 8193, Group descriptors at 8194-8194

Backup superblock at 24577, Group descriptors at 24578-24578

Backup superblock at 40961, Group descriptors at 40962-40962

Backup superblock at 57345, Group descriptors at 57346-57346

Backup superblock at 73729, Group descriptors at 73730-73730



To repair the file system using an alternate superblock:

$e2fsck -f -b 8193 /dev/hda6



(Take a backup using dd before running these commands.)
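
For example, a raw image of the partition onto some other disk with enough free space; /backup/hda6.img here is just a placeholder path:

$dd if=/dev/hda6 of=/backup/hda6.img bs=4096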



If you are using Sun Solaris: in my experience, frequent power failures can put you through hell :-( . I run an old SPARC, and once a month I have to fsck it using the commands from my last blog post. If your Solaris box has lost its superblock, boot from CD-ROM or the network, then retrieve the locations of your file system's backup superblocks with the following command:

$newfs -N /dev/rdsk/devicename



Now run fsck against an alternate superblock:

$fsck -F ufs -o b=block-number /dev/rdsk/devicename
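
For example, UFS keeps its first backup superblock at block 32, so assuming the damaged file system sits on c0t0d0s6 (a placeholder device):

$fsck -F ufs -o b=32 /dev/rdsk/c0t0d0s6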



Okay guys, hope this information helps somebody.

Replacing failed disk devices with the Solaris Volume Manager

The following example shows how to replace a disk named c0t0d0 using the cfgadm(1m) and metareplace(1m) utilities. The first step is to remove (if they exist) any meta state databases on the disk that is being replaced. To list the locations of all meta state databases, run metadb(1m) with the “-i” option:


$ metadb -i

If meta state databases exist on the disk, you can run metadb(1m) with the “-d” option to remove them. The following example deletes all meta state databases on slice 7 of the disk (c0t0d0) that we are going to replace:

$ metadb -d c0t0d0s7

Once the meta state databases are removed, you can use cfgadm(1m)’s “-c unconfigure” option to remove an occupant (an entity that lives in a receptacle) from Solaris:

$ cfgadm -c unconfigure c0::dsk/c0t0d0

Once Solaris unconfigures the device, you can physically replace the disk. Once the drive is replaced, you can run cfgadm(1m) with the “-c configure” option to let Solaris know the occupant is available for use:

$ cfgadm -c configure c0::dsk/c0t0d0

Once Solaris knows the drive is available, you will need to write a VTOC to the new drive with fmthard(1m) or format(1m). This adds a disk label to the drive, which defines the partition types and sizes (a common way to do this is sketched after the next example). Once a valid VTOC is installed, you can invoke the trusty old metareplace(1m) utility to replace the faulted meta devices. The following example will replace the device associated with meta device d10 and cause the meta device to start synchronizing data from the other half of the mirror (if RAID level 1 is used):

$ metareplace -e d10 c0t0d0s0
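
For the VTOC step above, a common trick is to copy the label from the surviving half of the mirror, assuming it has the same geometry as the replacement disk (c0t1d0 here is the assumed surviving disk):

$ prtvtoc /dev/rdsk/c0t1d0s2 | fmthard -s - /dev/rdsk/c0t0d0s2

Don't forget to re-create any meta state databases you deleted earlier, e.g. with metadb -a -c 3 c0t0d0s7 (slice 7 as in the example above).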

Solaris Performance Monitoring & Tuning - iostat , vmstat & netstat



Introduction to iostat , vmstat and netstat

This document is primarily written with reference to Solaris performance monitoring and tuning, but these tools are available in other Unix variants as well, with slight syntax differences.

iostat, vmstat and netstat are the three most commonly used tools for performance monitoring. They come built into the operating system and are easy to use. iostat stands for input/output statistics and reports statistics for I/O devices such as disk drives; vmstat gives statistics for virtual memory; and netstat gives network statistics.

The following paragraphs describe these tools and their usage for performance monitoring; if you need more information, there are some very good Solaris performance monitoring books available at www.besttechbooks.com.

Table of contents:

1. iostat

• syntax

• example

• results and solutions

2. vmstat

• syntax

• example

• results and solutions

3. netstat

• syntax

• example

• results and solutions

4. Next steps

________________________________________

Input Output statistics ( iostat )

iostat reports terminal and disk I/O activity and CPU utilization. The first line of output covers the time period since boot; each subsequent line covers the prior interval. The kernel maintains a number of counters to keep track of these values.

iostat's activity class options default to tdc (terminal, disk, and CPU). If any other options are specified, this default is completely overridden; i.e., iostat -d will report statistics only about the disks.



syntax:

The basic syntax is: iostat [option] interval count

option - lets you specify the device class for which information is needed: disk, CPU or terminal (-d, -c, -t or -tdc). The -x option gives extended statistics.

interval - the time period in seconds between two samples. iostat 4 will report data every 4 seconds.

count - the number of times the data is reported. iostat 4 5 will report data at 4-second intervals, 5 times.









Example

$ iostat -xtc 5 2

extended disk statistics tty cpu

disk r/s w/s Kr/s Kw/s wait actv svc_t %w %b tin tout us sy wt id

sd0 2.6 3.0 20.7 22.7 0.1 0.2 59.2 6 19 0 84 3 85 11 0

sd1 4.2 1.0 33.5 8.0 0.0 0.2 47.2 2 23

sd2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0

sd3 10.2 1.6 51.4 12.8 0.1 0.3 31.2 3 31





The fields have the following meanings:



disk name of the disk

r/s reads per second

w/s writes per second

Kr/s kilobytes read per second

Kw/s kilobytes written per second

wait average number of transactions waiting for service (queue length)

actv average number of transactions actively being serviced (removed from the queue but not yet completed)

svc_t average service time, in milliseconds

%w percent of time there are transactions waiting for service (queue non-empty)

%b percent of time the disk is busy (transactions in progress)





Results and Solutions:

The values to look at in the iostat output are:

• Reads/writes per second (r/s , w/s)

• Percentage busy (%b)

• Service time (svc_t)

If a disk shows consistently high reads/writes, its percentage busy (%b) is greater than 5 percent, and its average service time (svc_t) is greater than 30 milliseconds, then one of the following actions needs to be taken (a quick filter for spotting such disks is sketched after this list):

1.) Tune the application to use disk I/O more efficiently by modifying its disk queries and using the available cache facilities of the application servers.

2.) Spread the file system across two or more disks using the disk striping feature of Volume Manager / DiskSuite etc.

3.) Increase the value of the system parameter ufs_ninode, the number of inodes to be held in the inode cache. Inodes are cached globally (for UFS), not on a per-file-system basis.

4.) Move the file system to another, faster disk/controller, or replace the existing disk/controller with a faster one.
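
As a rough way to spot such disks, something like the following can be left running. This is only a sketch: the column numbers ($8 for svc_t, $10 for %b) assume the plain iostat -x layout shown above, and the header rows never satisfy both tests, so only offending disks print:

$ iostat -x 30 | awk '$8 > 30 && $10 > 5 {print $1, "svc_t=" $8, "%b=" $10}'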

Virtual Memory Statistics ( vmstat )

vmstat - vmstat reports virtual memory statistics covering process, virtual memory, disk, trap, and CPU activity.

On multi-CPU systems, vmstat averages the activity across the CPUs in its output. Without options, vmstat displays a one-line summary of virtual memory activity since the system was booted.



syntax:

The basic syntax is: vmstat [option] interval count

option - lets you specify the type of information needed, such as paging (-p), cache flushing (-c) or interrupts (-i). If no option is specified, information about processes, memory, paging, disk, interrupts and CPU is displayed.

interval - the time period in seconds between two samples. vmstat 4 will report data every 4 seconds.

count - the number of times the data is reported. vmstat 4 5 will report data at 4-second intervals, 5 times.

Example

The following command displays a summary of what the system is doing every five seconds.



example% vmstat 5

procs memory page disk faults cpu

r b w swap free re mf pi po fr de sr s0 s1 s2 s3 in sy cs us sy id

0 0 0 11456 4120 1 41 19 1 3 0 2 0 4 0 0 48 112 130 4 14 82

0 0 1 10132 4280 0 4 44 0 0 0 0 0 23 0 0 211 230 144 3 35 62

0 0 1 10132 4616 0 0 20 0 0 0 0 0 19 0 0 150 172 146 3 33 64

0 0 1 10132 5292 0 0 9 0 0 0 0 0 21 0 0 165 105 130 1 21 78

The fields of vmstat's display are

procs

r in run queue

b blocked for resources (I/O, paging, etc.)

w swapped



memory (in Kbytes)

swap - amount of swap space currently available

free - size of the free list



page ( in units per second).

re page reclaims - see -S option for how this field is modified.

mf minor faults - see -S option for how this field is modified.

pi kilobytes paged in

po kilobytes paged out

fr kilobytes freed

de anticipated short-term memory shortfall (Kbytes)

sr pages scanned by clock algorithm



disk ( operations per second )

There are slots for up to four disks, labeled with a single letter and number.

The letter indicates the type of disk (s = SCSI, i = IPI, etc) . The number is

the logical unit number.



faults

in (non clock) device interrupts

sy system calls

cs CPU context switches



cpu - breakdown of percentage usage of CPU time. On multiprocessors this is an average across all processors.

us user time

sy system time

id idle time



Results and Solutions:

A. CPU issues:

The following columns have to be watched to determine whether there is a CPU issue:

1. Processes in the run queue (procs r)

2. User time (cpu us)

3. System time (cpu sy)

4. Idle time (cpu id)

procs cpu

r b w us sy id

0 0 0 4 14 82

0 0 1 3 35 62

0 0 1 3 33 64

0 0 1 1 21 78

Problem symptoms:

1.) If the number of processes in the run queue (procs r) is consistently greater than the number of CPUs on the system, the system will slow down, as there are more runnable processes than available CPUs.

2.) If this number is more than four times the number of available CPUs, the system is facing a shortage of CPU power and processes will be greatly slowed down.

3.) If the idle time (cpu id) is consistently 0 and the system time (cpu sy) is double the user time (cpu us), the system is facing a shortage of CPU resources.



Resolution :

Resolving these kinds of issues involves tuning application procedures to make efficient use of the CPU and, as a last resort, upgrading the CPUs or adding more CPUs to the system.





B. Memory Issues:

Memory bottlenecks are identified by the scan rate (sr), the number of pages scanned by the clock algorithm per second. If the scan rate is continuously over 200 pages per second, there is a memory shortage (a quick check for this is sketched below).
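
A simple way to keep an eye on this; just a sketch, with the sr position ($12) taken from the default vmstat layout shown earlier:

$ vmstat 5 | awk 'NR > 2 && $12 > 200 {print "possible memory shortage: sr =", $12}'

NR > 2 skips the two header lines; the first data line is the since-boot summary, which may trip the test once and can be ignored.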



Resolution :

1. Tune the applications & servers to make efficient use of memory and cache.

2. Increase system memory .

3. Implement priority paging on pre-Solaris 8 systems by adding the line "set priority_paging=1" to /etc/system. Remove this line if upgrading from Solaris 7 to 8 while retaining the old /etc/system file.

________________________________________

Network Statistics (netstat)

netstat displays the contents of various network-related data structures, depending on the options selected.

Syntax :

netstat