Wednesday, February 20, 2013

Linux new disk management

After attaching a new disk to a running system, a SCSI bus rescan can be issued by typing the following commands (replace host# with the SCSI host number, e.g. host0; the three dashes are wildcards for channel, target, and LUN):
echo "- - -" > /sys/class/scsi_host/host#/scan
fdisk -l
tail -f /var/log/messages
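If you do not know which host number the new disk hangs off, a minimal sketch (assuming bash and that every host can safely be rescanned) is to loop over all SCSI hosts:

# rescan every SCSI host; "- - -" wildcards channel, target and LUN
for h in /sys/class/scsi_host/host*; do
    echo "- - -" > "$h/scan"
done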


How Do I Delete a Single Device Called /dev/sdc?

In addition to re-scanning the entire bus, a specific device can be added, or an existing device deleted, using the following command (generic form first, then a concrete example for /dev/sdc):
# echo 1 > /sys/block/devName/device/delete
# echo 1 > /sys/block/sdc/device/delete
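To confirm the device is really gone, re-check the kernel's SCSI device list (sdc should no longer appear):

# cat /proc/scsi/scsi
# fdisk -l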
How Do I Add a Single Device Called /dev/sdc?

To add a single device explicitly, use the following syntax:


# echo "scsi add-single-device " > /proc/scsi/scsi

here,

    : Host
    : Bus (Channel)
    : Target (Id)
    : LUN numbers

For e.g. add /dev/sdc with host # 0, bus # 0, target # 2, and LUN # 0, enter:
# echo "scsi add-single-device 0 0 2 0">/proc/scsi/scsi
# fdisk -l
# cat /proc/scsi/scsi
Sample Outputs:

Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
  Vendor: VMware,  Model: VMware Virtual S Rev: 1.0
  Type:   Direct-Access                    ANSI SCSI revision: 02
Host: scsi0 Channel: 00 Id: 01 Lun: 00
  Vendor: VMware,  Model: VMware Virtual S Rev: 1.0
  Type:   Direct-Access                    ANSI SCSI revision: 02
Host: scsi0 Channel: 00 Id: 02 Lun: 00
  Vendor: VMware,  Model: VMware Virtual S Rev: 1.0
  Type:   Direct-Access                    ANSI SCSI revision: 02

Step #3: Format a New Disk

Now you can create a partition using fdisk and format it using the mkfs.ext3 command:
# fdisk /dev/sdc
# mkfs.ext3 /dev/sdc3
Step #4: Create a Mount Point And Update /etc/fstab

# mkdir /disk3
Open /etc/fstab file, enter:
# vi /etc/fstab
Append as follows:

/dev/sdc3               /disk3           ext3    defaults        1 2

Save and close the file.
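To confirm the new fstab entry without rebooting (a quick check using the mount point created above):

# mount -a
# df -h /disk3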
Optional Task: Label the partition

You can label the partition using e2label. For example, to give the new partition the label /backupDisk, enter:
# e2label /dev/sdc3 /backupDisk

Linux System Service configuration


A typical Linux system can be configured to boot into one of several runlevels (0 through 6). During the boot process the init process looks in the /etc/inittab file to find the default runlevel. Having identified the runlevel, it proceeds to execute the appropriate startup scripts located in the /etc/rc.d sub-directory.

For example, if you have a runlevel of 5 configured then the init process will work through the list of startup scripts located in /etc/rc.d/rc5.d. These startup scripts start with either the letter "S" or "K" followed by a number and then a (hopefully) descriptive word. For example the startup script for NFS (Networked File System) is typically S60nfs, whilst the startup script for the YUM system might be called K01yum.

Scripts prefixed with a "K" are invoked first (with a "stop" argument) to kill services, and then those prefixed with an "S" are invoked (with a "start" argument) to start services. The number in the filename controls the order in which the scripts are executed within each group ("S" or "K"). You wouldn't, for example, want to start NFS before the basic networking is up and running. It is also worth noting that the files in the rc.d sub-directories are not the actual scripts themselves but rather symbolic links to the actual files located in /etc/rc.d/init.d.
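You can see this layout for yourself by listing one of the runlevel directories (the exact entries will vary from system to system):

    ls -l /etc/rc.d/rc5.d | head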

There are a number of ways to control what services get started without having to delve into the /etc/rc.d sub-directories yourself.

The command line tool chkconfig (usually located in /sbin) can be used to list and configure which services get started at boot time. To list all service settings run the following command:

    /sbin/chkconfig --list

This will display a long list of services showing whether or not they are started up at various runlevels. You may want to narrow the search down using grep. For example to list the entry for the HTTP daemon you would do the following:

    /sbin/chkconfig --list | grep httpd

which should result in something like:

    httpd           0:off   1:off   2:off   3:on    4:off   5:off    6:off

Alternatively you may just be interested to know what gets started for runlevel 3:

    /sbin/chkconfig --list | grep '3:on'

chkconfig can also be used to change the settings. If we wanted the HTTP service to start up when we are at runlevel 5, we would issue the following command:

    /sbin/chkconfig --level 5 httpd on
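Several runlevels can be changed in one go, and the result verified, by naming the service directly (a small example using the same httpd service):

    /sbin/chkconfig --level 35 httpd on
    /sbin/chkconfig --list httpd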

A number of graphical tools are also available for administering services. On RedHat 9 you can run the following command:

    redhat-config-services

The equivalent command on RedHat Fedora Core is:

    system-config-services

The above graphical tools allow you to view which services will start for each runlevel, add or remove services for each runlevel and also manually start or stop services.

Another useful tool, if you do not have a graphical desktop running or access via a remote X server, is the ntsysv command. ntsysv resides in /sbin on most systems. Whilst it is a convenient tool when you don't have an X server running, the one drawback of ntsysv is that, by default, it only allows you to change the settings for the current runlevel.

Tuesday, February 19, 2013

Veritas cluster software upgrade and maintenance

UPGRADE MAINTENANCE PROCEDURE

Here's a procedure to upgrade VCS or shutdown VCS during
hardware maintenance.

1. Open the VCS configuration, freeze each Service Group, and close the config.

   haconf -makerw
   hagrp -freeze <group> -persistent
   haconf -dump -makero
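   To freeze every group in one pass, a minimal sketch (assuming a
   Bourne-style shell; hagrp -list prints the group name in its first field):

   for grp in $(hagrp -list | awk '{print $1}' | sort -u); do
       hagrp -freeze "$grp" -persistent
   done

   The matching -unfreeze loop can be used in step 14.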

2. Shutdown VCS but keep services up.

   hastop -all -force

3. Confirm VCS has shut down on each system.

   gabconfig -a

4. Confirm GAB is not running on any disks.

   gabdisk -l  (use this if upgrading from VCS 1.1.x)

   gabdiskhb -l
   gabdiskx -l

   If any disk heartbeats are configured, remove them from the disks on each system.

   gabdisk -d  (use this if upgrading from VCS 1.1.x)

   gabdiskhb -d
   gabdiskx -d

5. Shutdown GAB and confirm it's down on each system.

   gabconfig -U
   gabconfig -a

6. Identify the GAB kernel module number and unload it
   from each system.

   modinfo | grep gab
   modunload -i <module-id>
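   The same step as a one-liner sketch (assuming the module name contains
   "gab"; modinfo prints the module id in its first column):

   modunload -i $(modinfo | awk '/gab/ {print $1}')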

7. Shutdown LLT. On each system, type:

   lltconfig -U

   Enter "y" if any questions are asked.

8. Identify the LLT kernel module number and unload it from
   each system.

   modinfo | grep llt
   modunload -i <module-id>

9. Rename VCS startup and stop scripts on each system.

   cd /etc/rc2.d
   mv S70llt s70llt
   mv S92gab s92gab
   cd /etc/rc3.d
   mv S99vcs s99vcs
   cd /etc/rc0.d
   mv K10vcs k10vcs

10. Make a backup copy of /etc/VRTSvcs/conf/config/main.cf.
    Make a backup copy of /etc/VRTSvcs/conf/config/types.cf.

    Starting with VCS 1.3.0, preonline and other trigger scripts must
    be in /opt/VRTSvcs/bin/triggers. Also, all preonline scripts in
    previous versions (such as VCS 1.1.2) must now be combined in one
    preonline script.

11. Remove old VCS packages.

    pkgrm VRTScsga VRTSvcs VRTSgab VRTSllt VRTSperl VRTSvcswz

    If you are upgrading from 1.0.1 or 1.0.2, you must also remove the package
    VRTSsnmp, and any packages containing a .2 extension, such as VRTScsga.2,
    VRTSvcs.2, etc.

    Also remove any agent packages such as VRTSvcsix (Informix),
    VRTSvcsnb (NetBackup), VRTSvcssor (Oracle), and VRTSvcssy (Sybase).

    Install new VCS packages.

    Restore your main.cf and types.cf files.

12. Start LLT, GAB and VCS.

    cd /etc/rc2.d
    mv s70llt S70llt
    mv s92gab S92gab
    cd /etc/rc3.d
    mv s99vcs S99vcs
    cd /etc/rc0.d
    mv k10vcs K10vcs

    /etc/rc2.d/S70llt start
    /etc/rc2.d/S92gab
    /etc/rc3.d/S99vcs start

13. Check on status of VCS.

    hastatus
    hastatus -sum

14. Unfreeze all Service Groups.

    haconf -makerw
    hagrp -unfreeze <group> -persistent
    haconf -dump -makero

Saturday, February 16, 2013

Solaris zone step by step

root@frneucvt01-r1# ls -ltr
total 2
drwxr-xr-x   2 root     root         512 Apr  7 12:16 frneucvt01-r2
root@frneucvt01-r1# ls
frneucvt01-r2
root@frneucvt01-r1# bash
root@frneucvt01-r1# zonecfg -z frneucvt01-r2
frneucvt01-r2: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:frneucvt01-r2> create
zonecfg:frneucvt01-r2> set zonepath=/export/zones/frneucvt01-r2
zonecfg:frneucvt01-r2> set autoboot=true
zonecfg:frneucvt01-r2> verify
zonecfg:frneucvt01-r2> commit
zonecfg:frneucvt01-r2> exit

Note: the default 'create' template already produces a sparse-root zone that inherits /lib, /platform, /sbin and /usr, so adding those inherit-pkg-dir resources by hand only yields "An inherit-pkg-dir resource with the dir '...' already exists" errors.

root@frneucvt01-r1# zoneadm -z frneucvt01-r2 install
(installation output omitted; it ends by naming the install log file, which
contains a log of the zone installation.)
root@frneucvt01-r1#


root@frneucvt01-r1# zoneadm list -v
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              native   shared
root@frneucvt01-r1# zoneadm list -iv
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              native   shared
   - frneucvt01-r2    installed  /export/zones/frneucvt01-r2    native   shared
root@frneucvt01-r1# zoneadm -z frneucvt01-r2 ready
zoneadm: zone 'frneucvt01-r2': WARNING: The zone.cpu-shares rctl is set but
zoneadm: zone 'frneucvt01-r2': FSS is not the default scheduling class for
zoneadm: zone 'frneucvt01-r2': this zone.  FSS will be used for processes
zoneadm: zone 'frneucvt01-r2': in the zone but to get the full benefit of FSS,
zoneadm: zone 'frneucvt01-r2': it should be the default scheduling class.
zoneadm: zone 'frneucvt01-r2': See dispadmin(1M) for more details.




root@frneucvt01-r1# zlogin -C frneucvt01-r2
[Connected to zone 'frneucvt01-r2' console]

Then, from a second terminal, boot the zone:

root@frneucvt01-r1# zoneadm -z frneucvt01-r2 boot

[NOTICE: Zone booting up]

What type of terminal are you using?
 1) ANSI Standard CRT
 2) DEC VT52
 3) DEC VT100
 4) Heathkit 19
 5) Lear Siegler ADM31
 6) PC Console
 7) Sun Command Tool
 8) Sun Workstation
 9) Televideo 910
 10) Televideo 925
 11) Wyse Model 50
 12) X Terminal Emulator (xterms)
 13) CDE Terminal Emulator (dtterm)
 14) Other
Type the number of your choice and press Return: 3
Creating new rsa public/private host key pair
Creating new dsa public/private host key pair


"/etc/ssh/sshd_config" 15 edit this file 



root@frneucvt01-r1# zonecfg -z frneucvt01-r2
zonecfg:frneucvt01-r2> add fs
zonecfg:frneucvt01-r2:fs> set dir=/data_ora                          (path inside the zone)
zonecfg:frneucvt01-r2:fs> set special=/DATA/frneucvt01-r2/data_ora   (path in the global zone)
zonecfg:frneucvt01-r2:fs> set type=lofs
zonecfg:frneucvt01-r2:fs> end
zonecfg:frneucvt01-r2> commit
zonecfg:frneucvt01-r2> exit

root@frneucvt01-r1# mount -F lofs /DATA/frneucvt01-r2/product_weblogic/ /export/zones/frneucvt01-r2/root/product_weblogic/


root@frneucvt01-r1# mount -F lofs /DATA/frneucvt01-r2/data_ora  /export/zones/frneucvt01-r2/root/data_ora


root@frneucvt01-r1# mount -F lofs /DATA/frneucvt01-r3/u01  /export/zones/frneucvt01-r3/root/u01


root@frneucvt01-r1# mount -F lofs /DATA/frneucvt01-r3/u02  /export/zones/frneucvt01-r3/root/u02
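To confirm a loopback mount is visible inside the running zone, a quick check via zlogin (df -k is safe on any Solaris release):

root@frneucvt01-r1# zlogin frneucvt01-r2 df -k /data_ora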


global # newfs /dev/md/rdsk/d100
newfs: construct a new file system /dev/md/rdsk/d100: (y/n)? y
Warning: 1280 sector(s) in last cylinder unallocated
/dev/md/rdsk/d100:      1024000 sectors in 712 cylinders of 15 tracks, 96 sectors
        500.0MB in 45 cyl groups (16 c/g, 11.25MB/g, 5440 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
 32, 23168, 46304, 69440, 92576, 115712, 138848, 161984, 185120, 208256,
 806720, 829856, 852992, 876128, 899264, 922400, 945536, 968672, 991808,
 1014944,
global # zonecfg -z zone1
zonecfg:zone1> add fs
zonecfg:zone1:fs> set dir=/u01
zonecfg:zone1:fs> set special=/dev/md/dsk/d100
zonecfg:zone1:fs> set raw=/dev/md/rdsk/d100
zonecfg:zone1:fs> set type=ufs
zonecfg:zone1:fs> end
zonecfg:zone1> exit

At this point we could reboot the zone and have the new file system mounted during zone boot. However, there is no need to restart the zone because the file system can be mounted into the running zone from the global zone. The only thing we have to do now is create the mount point ourselves:

global # mkdir /export/zones/zone1/root/u01


global # mount /dev/md/dsk/d100 /export/zones/zone1/root/u01

global # zonecfg -z zone1
zonecfg:zone1> add net
zonecfg:zone1:net> set physical=hme0
zonecfg:zone1:net> set address=192.168.1.13/24
zonecfg:zone1:net> end
zonecfg:zone1> exit
global # ifconfig hme0 addif 192.168.1.13 netmask + broadcast + zone zone1 up
Created new logical interface hme0:3
Setting netmask of hme0:3 to 255.255.255.0

The key point here is the 'zone' option of ifconfig. Running ifconfig -a inside the zone shows that we now have the extra network interface. And without having to reboot the zone! 

zone1 # ifconfig -a
lo0:5: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
hme0:2: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 129.159.206.38 netmask ffffffc0 broadcast 129.159.206.63
hme0:3: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 192.168.1.13 netmask ffffff00 broadcast 192.168.1.255


# save -vvv -D7 -S backup-par2 /opt



Clone a zone
From a global zone:

zoneadm -z <zonename> halt
zonecfg -z <zonename> export -f zone.cfg

Modify the zone.cfg file as needed. In particular, change zonepath and the IP address(es). Then create the new zone from the edited file:

zonecfg -z <newzone> -f zone.cfg


To move a zone to another machine instead, halt and detach it, tar up its zonepath, and copy zonename.tar to the new host. On the new host execute:

tar -xf zonename.tar
zonecfg -z <zonename>
zonename: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:zonename> create -a <zonepath>
zonecfg:zonename> info

Make any necessary adjustments to the configuration. Then

zonecfg:zonename> exit
zoneadm -z <zonename> attach


To create a whole root zone, remove all inherit-pkg-dir resources as shown below:

zonecfg -z <zonename>
zonecfg:zonename> remove inherit-pkg-dir dir=/sbin
zonecfg:zonename> remove inherit-pkg-dir dir=/usr
zonecfg:zonename> remove inherit-pkg-dir dir=/platform
zonecfg:zonename> remove inherit-pkg-dir dir=/lib
zonecfg:zonename> exit
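To verify that none remain, the info subcommand can be scoped to a resource type:

zonecfg -z <zonename> info inherit-pkg-dir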


The proposed interface would be

#zoneadm -z zonename lock

With the zone locked, no changes in the global zone affect the
non-global zones.

#zoneadm -z zonename unlock

Packages installed after the zone is unlocked would then be installed in the
non-global zone at the same time, as happens now.


Existing System Setup 

SunFire T1000 with a single sparse root zone (zone1) installed in /export/zones/zone1. The objective is to create a clone of zone1 called zone2 
but using a different IP address and physical network port. I am not using any ZFS datasets (yet).

Procedure 

1. Export the configuration of the zone you want to clone/copy

# zonecfg -z zone1 export > zone2.cfg

2. Change the details of the new zone that differ from the existing one (e.g. IP address, data set names, network interface etc.)

# vi zone2.cfg

3. Create a new (empty, unconfigured) zone in the usual manner based on this configuration file

# zonecfg -z zone2 -f zone2.cfg

4. Ensure that the zone you intend to clone/copy is not running

# zoneadm -z zone1 halt

5. Clone the existing zone

# zoneadm -z zone2 clone zone1
Cloning zonepath /export/zones/zone1...
Cloning a 1GB zone took around 5 minutes.
 

6. Verify both zones are correctly installed

# zoneadm list -vi
ID NAME STATUS PATH
0 global running /
- zone1 installed /export/zones/zone1
- zone2 installed /export/zones/zone2

7. Boot the zones again (and reverify correct status)

# zoneadm -z zone1 boot
# zoneadm -z zone2 boot
# zoneadm list -vi
ID NAME STATUS PATH
0 global running /
5 zone1 running /export/zones/zone1
6 zone2 running /export/zones/zone2

8. Configure the new zone via its console (very important)

# zlogin -C zone2


The above step is required to configure the locale, language, and IP settings of the new zone.
It also creates the system-wide RSA key pairs for the new zone, without which you cannot SSH into it.
If this step is not done, many of the services on the new zone will not start and you may observe /etc/.UNCONFIGURED errors in certain log files.
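As an alternative to answering the console prompts interactively, a sysidcfg file can be dropped into the cloned zone before its first boot. A minimal sketch with hypothetical values (adjust the hostname, timezone, and encrypted root password hash to suit):

# cat > /export/zones/zone2/root/etc/sysidcfg <<'EOF'
system_locale=C
terminal=dtterm
network_interface=primary { hostname=zone2 }
security_policy=NONE
name_service=NONE
timezone=US/Pacific
root_password=<encrypted-password-hash>
EOF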


Delegating ZFS File system to a Non-Global Zone


Delegating a dataset gives the non-global zone control over the file system's properties
and the privilege to perform activities such as creating snapshots and clones of it.


[root@geekyfacts]# zonecfg -z tzone
zonecfg:tzone> add dataset
zonecfg:tzone:dataset> set name=testpool/zonefs
zonecfg:tzone:dataset> end
zonecfg:tzone> commit
zonecfg:tzone> exit
[root@geekyfacts]#
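Once the zone is rebooted, the delegated dataset is visible inside it and can be managed there. A small sketch of what the zone administrator can now do:

tzone# zfs create testpool/zonefs/data
tzone# zfs snapshot testpool/zonefs/data@before
tzone# zfs list -r testpool/zonefs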


Example: Zones + Raw Devices

global# zonecfg -z zone1
zonecfg:zone1> add device
zonecfg:zone1:device> set match=/dev/rdsk/c0d0s6
zonecfg:zone1:device> end
zonecfg:zone1> add device
zonecfg:zone1:device> set match=/dev/dsk/c0d0s6
zonecfg:zone1:device> end
zonecfg:zone1> verify
zonecfg:zone1> commit
zonecfg:zone1> ^D

>Adds a raw device directly into the non-global zone
>Creates a device node for the new device
>Match can include wildcards and is evaluated each time the zone boots

zone1# newfs /dev/rdsk/c0d0s6
zone1# mount /dev/dsk/c0d0s6 /opt/local


Reporting Memory Utilization and the Memory Cap Enforcement Threshold

# rcapstat -g
    id project   nproc    vm   rss   cap    at avgat   pg  avgpg
376565    rcap       0    0K    0K   10G    0K    0K   0K     0K
physical memory utilization: 55%   cap enforcement threshold: 0%
    id project   nproc    vm   rss   cap    at avgat   pg  avgpg
376565    rcap       0    0K    0K   10G    0K    0K   0K     0K
physical memory utilization: 55%   cap enforcement threshold: 0%
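To set or change the cap being reported, rcapadm can be used. A hedged example for a zone-level cap (this assumes Solaris 10 8/07 or later, where zone memory caps exist; for project-based caps, set the rcap.max-rss attribute on the project instead):

# rcapadm -z tzone -m 10g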

For a Linux branded zone:

global# zonecfg -z myzone "create -t SUNWlx; set zonepath=/export/myzone_root"


global# zoneadm -z myzone install -d <path-to-linux-install-media>

Solaris ZFS cheat sheet with examples

zpool create tank disk1 disk2

zpool create tank mirror disk1 disk2 mirror disk3 disk4

zpool create tank raidz disk1 disk2 disk3 disk4 

zpool status -v tank

zpool create -n tank mirror disk1 disk2 (dry run: does not create the pool, just validates the layout)

zpool destroy tank (to destroy the pool)

zpool add tank mirror disk1 disk2 


zpool add -n tank mirror disk1 disk2

zpool add tank raidz disk1 disk2 disk3

zpool add tank log mirror disk2 disk3 


zpool remove tank disk1 disk2 (only hot spares, cache and log devices can be removed)

zpool attach tank disk1 disk2

zpool attach tank disk2 disk3 (creates a 3-way mirror)

zpool detach tank disk2

zpool offline tank disk2

zpool offline -t tank disk2 (temporary offline; after a reboot it comes back online)

zpool online tank disk1

Clearing storage pool device errors

zpool clear tank

zpool clear tank disk1

Replacing a disk

zpool replace tank disk1 (if the new disk is in the same physical location)

zpool replace tank disk1 newdisk1 (if the replacement is a different disk)

Spare pool

zpool create tank mirror disk1 disk2 spare disk3 disk4

zpool add -f tank  spare disk3 disk4


zpool remove tank disk3 (to remove a hot spare from the pool)


zpool status -x tank


zpool get all tank (to list all properties of the pool)

zpool set autoreplace=on tank 


zpool get autoreplace tank 


To get I/O statistics

zpool iostat 

zpool iostat tank 2

zpool iostat -v 


zpool status -x 


Recovering a destroyed pool

zpool destroy tank

zpool import -D (lists destroyed pools)

zpool import -Df tank

zpool upgrade -v (to check the available upgrade version)


Creating file system

zfs create tank/home

zfs create -o mountpoint=/export/zfs tank/home


zfs destroy tank/home



zfs destroy -f tank/home

zfs rename tank/home/maybee tank/ws/maybee

zfs list

zfs list -r pool/home/marks

zfs list /pool/home/marks

zfs list -o name,sharenfs,mountpoint

zfs set quota=50g tank/home/marks

zfs get all tank



zfs snapshot tank/home/ashu@friday (takes a snapshot of only that file system)

zfs snapshot -r tank/home@friday (recursively snapshots all descendant file systems)

zfs destroy tank/home/ashu@friday 

zfs rename tank/home/ashu@friday tank/home@today

zfs rename -r tank/home@friday tank/home@today (renames the whole snapshot tree)


ls -ltr /tank/home/ashu/.zfs/snapshot

zfs list -t snapshot


zfs list -r -t snapshot -o name,creation tank/home


Rolling back a snapshot

zfs rollback tank/home/ashu@friday

zfs rollback -r tank/home/ashu@friday (-r forcibly destroys snapshots more recent than the rollback target)


zfs create tank/test

zfs create tank/test/productA

zfs snapshot tank/test/productA@today

zfs clone tank/test/productA@today tank/test/productAbeta

zfs list -r tank/test

zfs promote tank/test/productAbeta

zfs list -r tank/test


zpool scrub tank

zpool status -v tank 


zpool scrub -s tank (stopping the scrub)

ok boot cdrom -s

zpool import -R /a rpool

installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t0d0s0


zfs create -V 2G rpool/swap1

swap -a /dev/zvol/dsk/rpool/swap1

swap -l





Go to /usr/share/lib/zoneinfo and compile the timezone source files with the following command:

bash# zic src/asia



As root, fire up the following command:

bash# export TZ=Asia/Jakarta



To activate the timezone at boot time, edit file /etc/TIMEZONE, change the TZ value to Asia/Jakarta
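For example, the relevant line of /etc/TIMEZONE would then read:

TZ=Asia/Jakarta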

Then sync the system clock against an NTP server:

bash# ntpdate -b id.pool.ntp.org


To set the hardware clock's timezone as well, run the following commands at the prompt:

#rtc -z Asia/Calcutta                 ### replace "Asia/Calcutta" with your valid zonename
#rtc -c