Monday, October 7, 2013

VCS 6.0 configuration with Solaris 11 zfs and zones


VCS configuration with Solaris 11 ZFS and zones:

 
1: Create the zpool:
zpool create spoo c2t2d0

2: Create the ZFS file system:
zfs create spoo/mnt

3: Create a sparse zone:
zonecfg -z testzone

testzone> create
testzone> set zonepath=/spoo/mnt/testzone
testzone> set ip-type=shared
testzone> remove anet
testzone> add net
testzone:net> set address=192.168.0.200
testzone:net> set configure-allowed-address=false
testzone:net> set physical=net0
testzone:net> set defrouter=192.168.0.1
testzone:net> end
testzone> set pool=default_pool
testzone> verify
testzone> commit
testzone> exit

4: On solaris-3, enable the resource pools facility and boot the zone:

solaris-3# pooladm -e
solaris-3# zoneadm -z testzone boot
solaris-3# zlogin -C testzone

Then complete the zone's initial configuration from the console.
5: After that, configure VCS with the commands below (a quick verification sketch follows the commands):

haconf -makerw
hagrp -add testgrp
hagrp -modify testgrp SystemList solaris-3 0 solaris-4 1
hagrp -modify testgrp AutoStartList solaris-3
hagrp -modify testgrp Parallel 0
hares -add vcspool Zpool testgrp
hares -modify vcspool Critical 1
hares -modify vcspool ChkZFSMounts 1
hares -modify vcspool FailMode continue
hares -modify vcspool ForceOpt 1
hares -modify vcspool ForceRecoverOpt 0
hares -modify vcspool PoolName spoo
hares -modify vcspool AltRootPath /
hares -modify vcspool ZoneResName vcszone
hares -modify vcspool DeviceDir -delete -keys
hares -modify vcspool Enabled 1
hazonesetup -g testgrp -r vcszone -z testzone -p abc123 -a -s solaris-3,solaris-4
hares -link vcszone vcspool
haconf -dump -makero
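To verify the configuration, check the service group and resource states (a quick sanity check; the exact output depends on your cluster):

hastatus -sum
hagrp -state testgrp
hares -state vcspool
hares -state vcszone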

6: As the final configuration step, copy the index and testzone.xml files from solaris-3:/etc/zones to solaris-4:/etc/zones.

7: On solaris-4, edit /etc/zones/index with vi and change the state of testzone to configured:

testzone:configured:/spoo/mnt/testzone:

 8: Probe the resources of testgrp on both servers.

9: Halt testzone on solaris-3 and export spoo:

zoneadm -z testzone halt
zpool export spoo

10: Now enable the group (an optional failover test follows):

hagrp -enable testgrp -sys solaris-3
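Optionally, test the failover by switching the group to the second node (this assumes the group is currently online on solaris-3):

hagrp -switch testgrp -to solaris-4
hagrp -state testgrp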

Linux NFS troubleshooting

NFS (Network File System) is a widely used but fairly primitive protocol that allows computers to share files over a network. The main problems with NFS are that it relies on the inherently insecure UDP protocol, transactions are not encrypted, and hosts and users cannot be easily authenticated. Below we describe a number of measures you can apply to mitigate these security problems.

Let us clarify how the NFS service operates. An NFS server has a file system (or directory), called the NFS file system (or NFS directory), which it exports to an NFS client. The NFS client must then import (mount) the exported file system before it can access it. We annotate each measure below with 'on server', 'on client', 'on client & server', or 'misc', meaning the step is performed on the NFS server, on the NFS client, on both client and server, or elsewhere, respectively.
NFS file systems should be installed on a separate disk or partition (on server)
By placing exported file systems on a separate partition or disk, we can ensure that malicious users cannot fill up the entire system disk by writing large files onto it, which could otherwise crash other services running from the same disk.
Prevent normal users on an NFS client from mounting an NFS file system (on server)
This can be done by adding the 'secure' parameter to an entry in /etc/exports, for example:
/home nfs-client(secure)
where the directory /home is the file system to be exported to the NFS client located at address nfs-client (specify the IP address or domain name of your NFS client).

Export an NFS file system in an appropriate permission mode (on server)
Let's say that you only need read-only permission on your exported NFS file system. Then the file system should be exported as read-only to prevent unintended or even intended modifications on those files. This is done by specifying parameter 'ro' in /etc/exports.
/home nfs-client(ro)

Restrict exporting an NFS file system to a certain set of NFS clients (on server)
Specify only a specific set of NFS clients that will be allowed to mount an NFS file system. If possible, use numeric IP addresses or fully qualified domain names, instead of aliases.
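For example, to export /home read-only to one named client and to an authorised subnet (both addresses below are placeholders):
/home nfs-client(ro,root_squash) 10.226.43.0/24(ro,root_squash)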
Use the 'root_squash' option in /etc/exports on the NFS server if possible (on server)
When this option is used, the user ID 'root' on the NFS client is mapped to the user 'nobody' on the NFS server. This prevents the root user on the NFS client from exercising superuser privilege on the NFS server and thus, for example, illegally modifying files on it. Here is an example:
/home nfs-client(root_squash)

Disable suid (superuser ID) on an NFS file system (on client)
Add the 'nosuid' option (no set-user-ID privilege) to the relevant entry in /etc/fstab (this file determines which NFS file systems are mounted automatically at startup). This prevents files with suid bits set on the NFS server, e.g. Trojan horse files, from being executed with elevated privileges on the NFS client, which could lead to a root compromise on the client; it also protects against root on the client accidentally executing such files. Here is an example of 'nosuid'; an entry in /etc/fstab on the client may contain:
nfs-server:/home /mnt/nfs nfs ro,nosuid 0 0

where nfs-server is the IP address or domain name of the NFS server and /home is the directory on the NFS server to be mounted to the client computer at the directory /mnt/nfs. Alternatively, the 'noexec' option can be used to disable any file execution at all.
nfs-server:/home /mnt/nfs nfs ro,nosuid,noexec 0 0

Install the most recent patches for NFS and portmapper (on client & server)
NFS has featured in the top ten most common vulnerabilities reported by CERT and has been heavily exploited. This means that the NFS server and portmapper on your system must be kept up to date with security patches.
Perform encryption over NFS traffic using SSH (on client & server)
Apart from using Secure Shell (SSH) for secure remote access, we can use it to tunnel traffic between an NFS client and server so that the NFS traffic is encrypted. The steps below show how to encrypt NFS traffic using SSH.
Here is the simple diagram to show the concept of how NFS and SSH services cooperate.
nfs-client                        nfs-server
mount --- SSH <=================> SSHD --- NFS

From this figure, when you mount an NFS directory from a client computer, you mount through SSH. Once the mount is done, NFS traffic in both directions is encrypted and therefore secure.
In the figure the NFS server is located at address nfs-server (use either the IP address or domain name of your NFS server instead), and the NFS client is at address nfs-client. Make sure that in both systems you have SSH and NFS related services already installed so you can use them.
The configuration on the NFS server and on the NFS client is described in the two sections below.

NFS server configuration

The two steps below are performed on the NFS server.

Export an NFS directory to itself
For example, if the NFS server's IP address is 10.226.43.154 and the NFS directory to be exported is /home, then add the following line to /etc/exports
/home 10.226.43.154(rw,root_squash)

The reason for exporting the directory /home to itself, instead of to the NFS client's IP address in the ordinary fashion, is that, as the figure above shows, we feed the NFS data on the server to the SSHD running at 10.226.43.154 rather than directly to the client computer. The NFS data is then forwarded securely to the client computer through the tunnel.
Note that the exported directory allows read and write access (rw). root_squash means that whoever initiates the mount to this directory does not obtain root privilege on the NFS server.
Restart NFS and SSH daemons
Using Red Hat 7.2, you can manually start NFS and SSHD by issuing the following commands:
#/sbin/service nfs restart
#/sbin/service sshd restart

If you want to have them started automatically at startup time, with Red Hat 7.2 add the two lines below to the startup file /etc/rc.d/rc.local.
/sbin/service nfs start
/sbin/service sshd start

The term nfs in the commands above is a shell script that will start off two services, namely, NFS and MOUNTD.

NFS client configuration

The three sections below show what we have to do on the NFS client.

Find the ports of NFS and MOUNTD on the NFS server

Let's say you are now on the NFS client computer. To find the NFS and MOUNTD ports on the NFS server, use the command:
#rpcinfo -p nfs-server
   
   program vers proto   port
   100000    2   tcp    111  portmapper
   100000    2   udp    111  portmapper
   100003    2   tcp   2049  nfs
   100003    2   udp   2049  nfs
   100021    1   udp   1136  nlockmgr
   100021    3   udp   1136  nlockmgr
   100021    4   udp   1136  nlockmgr
   100011    1   udp    789  rquotad
   100011    2   udp    789  rquotad
   100011    1   tcp    792  rquotad
   100011    2   tcp    792  rquotad
   100005    2   udp   2219  mountd
   100005    2   tcp   2219  mountd
   
Note the lines containing nfs and mountd. Under the port column are their ports: nfs uses port 2049 and mountd uses port 2219.

Setup the tunnel using SSH

On the NFS client computer, bind an SSH port to NFS port 2049.
#ssh -f -c blowfish -L 7777:nfs-server:2049 -l tony nfs-server /bin/sleep 86400
#tony@nfs-server's password:
#
where:
-c blowfish means SSH will use the algorithm blowfish to perform encryption.

-L 7777:nfs-server:2049 means binding the SSH client at port 7777 (or any other port that you want) to communicate with the NFS server at address nfs-server on port 2049.

-l tony nfs-server means that when logging in to the server at address nfs-server (specify either its IP address or domain name), the login name tony is used to authenticate on the server.

/bin/sleep 86400 keeps the SSH session (and hence the tunnel) alive for 1 day (86,400 seconds) without spawning a shell on the client computer. You can specify any larger number.

The line with #tony@nfs-server's password: will prompt the user tony for a password to continue authentication for the user.
Also on the NFS client computer, bind another SSH port with MOUNTD port 2219.
#ssh -f -c blowfish -L 8888:nfs-server:2219 -l tony nfs-server /bin/sleep 86400
#tony@nfs-server's password:
#
where:
-L 8888:nfs-server:2219 means binding this SSH client at port 8888 (or any other port that you want but not 7777 because you already used 7777) to communicate with the NFS server at address nfs-server on port 2219.
On the NFS client computer, mount the NFS directory /home through the two SSH ports 7777 and 8888 at a local directory, say /mnt/nfs.
#mount -t nfs -o tcp,port=7777,mountport=8888 localhost:/home /mnt/nfs

Normally the mount command mounts the remote NFS directory (/home) from the remote host's IP address (or domain name) onto the local directory (/mnt/nfs). The reason we mount from localhost instead of nfs-server here is that the data, after decryption at the left end of the tunnel (see the figure above), arrives on the localhost, not on the remote host.
Alternatively, to predefine the mount in /etc/fstab, add the following line (note the noauto option: the SSH tunnels must be established before the file system can be mounted, so it is not mounted automatically at boot and you mount it manually as shown below):
localhost:/home /mnt/nfs/ nfs tcp,rsize=8192,wsize=8192,intr,rw,bg,nosuid,port=7777,mountport=8888,noauto
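After the SSH tunnels are up, the file system can then be mounted by its mount point:
#mount /mnt/nfs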

Allow only traffic from authorised NFS clients to the NFS server (on server)
Suppose an NFS server provides only the NFS service and nothing else, so only three ports need to be reachable on the server: RPC portmapper (port 111), NFS (port 2049), and mountd (port 2219 in our example). We can then filter the traffic that reaches the NFS server. Using the iptables firewall running locally on the NFS server (iptables must be installed to use the following commands), allow only traffic from authorised NFS clients to the server.
Allow traffic from an authorised subnet 10.226.43.0/24 to the ports Portmapper, NFS, and Mountd.
#iptables -A INPUT -i eth0 -s 10.226.43.0/24 -p tcp --dport 111 -j ACCEPT
#iptables -A INPUT -i eth0 -s 10.226.43.0/24 -p tcp --dport 2049 -j ACCEPT
#iptables -A INPUT -i eth0 -s 10.226.43.0/24 -p tcp --dport 2219 -j ACCEPT

(iptables requires a protocol to be specified with --dport; repeat the rules with -p udp if your clients mount over UDP.)

Deny everything else.
#iptables -A INPUT -i eth0 -s 0/0 -p tcp --dport 111 -j DROP
#iptables -A INPUT -i eth0 -s 0/0 -p tcp --dport 2049 -j DROP
#iptables -A INPUT -i eth0 -s 0/0 -p tcp --dport 2219 -j DROP
#iptables -A INPUT -i eth0 -s 0/0 -j DROP

Basically, the NFS service operates through the portmapper, so blocking portmapper port 111 also effectively blocks access to NFS on port 2049.
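To confirm the rules are in place, list the INPUT chain (the exact output depends on your rule set):
#iptables -L INPUT -n -v --line-numbers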
Alternatively, you can use the TCP wrapper to filter access to your portmapper by adding the line:
portmapper: 10.226.43.0/24
to /etc/hosts.allow to allow access to portmapper only from subnet 10.226.43.0/24.

Also add the line below to /etc/hosts.deny to deny access to all other hosts not specified above.
portmapper:ALL

Filter out Internet traffic to the NFS service on the routers and firewalls (misc)
In some cases, for organisations whose computers are visible on the Internet, if the NFS service is also visible then Internet traffic to ports 111 (portmapper), 2049 (NFS), and 2219 (mountd) should be blocked on your routers or firewalls to prevent unauthorised access to those ports. With iptables set up as your firewall, use rules such as:
#iptables -A INPUT -i eth0 -d nfs-server -p tcp --dport 111 -j DROP
#iptables -A INPUT -i eth0 -d nfs-server -p tcp --dport 2049 -j DROP
#iptables -A INPUT -i eth0 -d nfs-server -p tcp --dport 2219 -j DROP

Use the software tool NFSwatch to monitor NFS traffic (misc)
NFSwatch allows you to monitor NFS packets (traffic) flowing between the NFS client and server. It can be downloaded from ftp://ftp.cerias.purdue.edu/pub/tools/unix/netutils/nfswatch/. One good reason to monitor is that if some malicious activity is going on or has already taken place, the log created by NFSwatch can be used to trace how and where it came from. To monitor NFS packets between nfs-server and nfs-client, use the command:
   #nfswatch -dst nfs-server -src nfs-client

all hosts                   Wed Aug 28 10:12:40 2002   Elapsed time:   00:03:10
Interval packets:      1098 (network)        818 (to host)          0 (dropped)
Total packets:        23069 (network)      14936 (to host)          0 (dropped)
                      Monitoring packets from interface lo
                     int   pct   total                       int   pct   total
ND Read                0    0%        0 TCP Packets          461   56%    13678
ND Write               0    0%        0 UDP Packets          353   43%     1051
NFS Read             160   20%      271 ICMP Packets           0    0%        0
NFS Write              1    0%        1 Routing Control        0    0%       36
NFS Mount              0    0%        7 Address Resolution     2    0%       76
YP/NIS/NIS+            0    0%        0 Reverse Addr Resol     0    0%        0
RPC Authorization    166   20%      323 Ethernet/FDDI Bdcst    4    0%      179
Other RPC Packets      5    1%       56 Other Packets          2    0%      131
                                 1 file system
File Sys        int   pct   total
tmp(32,17)        0    0%     15
   

Specify the IP address (or domain name) of the source (-src) and that of the destination (-dst).

Tuesday, June 4, 2013

Solaris-9 to 10 live upgrade




 root@sol1 ~ $ sudo mkdir /media/solaris10-iso
root@sol1 ~ $
root@sol1 ~ $ sudo mount /media/ACER/Users/sol1/Downloads/Solaris-10-u7-ga-x86x64-dvd.iso /media/solaris10-iso/ -t iso9660 -o loop
root@sol1 ~ $
root@sol1 ~ $ ls -lrt /media/solaris10-iso/
total 490
-r--r--r-- 1 root root 487593 2009-02-26 01:52 JDS-THIRDPARTYLICENSEREADME
-r--r--r-- 1 root root 6582 2009-02-26 01:55 Copyright
-r-xr-xr-x 1 root root 257 2009-03-31 00:20 installer
dr-xr-xr-x 2 root root 2048 2009-03-31 00:34 License
dr-xr-xr-x 7 root root 2048 2009-03-31 00:34 Solaris_10
dr-xr-xr-x 3 root root 2048 2009-03-31 00:34 boot
root@sol1 ~ $
root@sol1 ~ $ df -h |grep solaris
/dev/loop0 2.2G 2.2G 0 100% /media/solaris10-iso
root@sol1 ~ $
root@sol1 ~ $ cat /etc/exports |tail -1
/media/solaris10-iso SERVER1(ro,root_squash) SERVER2(ro,root_squash) SERVER3(ro,root_squash) SERVER4(ro,root_squash)
root@sol1 ~ $
root@sol1 ~ $ sudo exportfs -a



exportfs: /etc/exports [2]: Neither ‘subtree_check’ or ‘no_subtree_check’ specified for export “SERVER1:/media/solaris10-iso”.
Assuming default behaviour (‘no_subtree_check’).
NOTE: this default has changed since nfs-utils version 1.0.x

root@sol1 ~ $
root@sol1 ~ $ exportfs



/media/solaris10-iso
SERVER4

root@sol1 ~ $
root@sol1 ~ $ showmount -e
Export list for sol1:
/media/solaris10-iso SERVER1,SERVER2,…
/home/sol1/Virtual_Machines/Virtualbox_Share SERVER1,SERVER2,…

###Mounting the shared ISO image as NFS mountpoint in Solaris 9 Server###


bash-2.05# uname -a
SunOS rskvmsol9 5.9 Generic_112234-03 i86pc i386 i86pc
bash-2.05# /etc/init.d/nfs.client start
bash-2.05# ps -ef |grep -i nfs
root 172 1 0 03:27:24 ? 0:00 /usr/lib/nfs/lockd
daemon 170 1 0 03:27:23 ? 0:00 /usr/lib/nfs/statd
root 324 286 0 03:39:11 pts/1 0:00 grep -i nfs
bash-2.05#
bash-2.05# mkdir /solaris10-iso
bash-2.05#
bash-2.05# mount -F nfs NFS_SERVER:/media/solaris10-iso /solaris10-iso/
bash-2.05#
bash-2.05# df -h |grep solaris
NFS_SERVER:/media/solaris10-iso 2.2G 2.2G 0K 100% /solaris10-iso

###Solaris Live Upgrade (From Solaris 9 to Solaris 10)###


bash-2.05# uname -a
SunOS rskvmsol9 5.9 Generic_112234-03 i86pc i386 i86pc
bash-2.05#
bash-2.05# cat /etc/release
Solaris 9 12/02 s9x_u2wos_10 x86
Copyright 2002 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 05 November 2002
bash-2.05#
bash-2.05# df -h
Filesystem size used avail capacity Mounted on
/dev/dsk/c0d0s0 992M 466M 466M 51% /
/proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
fd 0K 0K 0K 0% /dev/fd
swap 884M 16K 884M 1% /var/run
/dev/dsk/c0d0s5 481M 15K 433M 1% /opt
swap 884M 0K 884M 0% /tmp
/dev/dsk/c0d0s7 3.9G 9K 3.9G 1% /export/home
10.176.80.232:/media/solaris10-iso
2.2G 2.2G 0K 100% /solaris10-iso

###Added a new disk for performing live upgrade###

bash-2.05# devfsadm -c disk
bash-2.05# format
Searching for disks…done
AVAILABLE DISK SELECTIONS:
0. c0d0
/pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0
1. c0d1
/pci@0,0/pci-ide@1,1/ide@0/cmdk@1,0
Specify disk (enter its number): ^D
bash-2.05#
bash-2.05# format
Searching for disks…done
AVAILABLE DISK SELECTIONS:
0. c0d0
/pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0
1. c0d1
/pci@0,0/pci-ide@1,1/ide@0/cmdk@1,0
Specify disk (enter its number): 1
AVAILABLE DRIVE TYPES:
0. DEFAULT
1. other
Specify disk type (enter its number): 0
selecting c0d1
No current partition list
No defect list found
[disk formatted, no defect list found]
FORMAT MENU:
disk – select a disk
type – select (define) a disk type
partition – select (define) a partition table
current – describe the current disk
format – format and analyze the disk
fdisk – run the fdisk program
repair – repair a defective sector
show – translate a disk address
label – write label to the disk
analyze – surface analysis
defect – defect list management
backup – search for backup labels
verify – read and display labels
save – save new disk/partition definitions
volname – set 8-character volume name
! – execute , then return
quit
format> p
Please run fdisk first.
format> fdisk
No fdisk table exists. The default partition for the disk is:
a 100% “SOLARIS System” partition
Type “y” to accept the default partition, otherwise type “n” to edit the
partition table.
y
format> ^D
bash-2.05#
bash-2.05# prtvtoc /dev/dsk/c0d0s0 | fmthard -s - /dev/rdsk/c0d1s0
fmthard: New volume table of contents now in place.
bash-2.05#
#Now the new disk is ready for creating the new boot environment
#Next we need to patch the Solaris 9 server before installing the Live Upgrade packages from Solaris 10 (the release to which the operating environment is being upgraded)
#If you skip patching, you can still create the new BE successfully with lucreate, but you cannot upgrade the inactive BE to Solaris 10 as desired: luupgrade will hit patching errors that prevent it from proceeding. I verified this in practice, so it is strongly recommended to patch the running OS (as suggested by http://sunsolve.sun.com/search/document.do?assetkey=1-61-206844-1)
bash-2.05# cd /var/tmp/solaris9_patches/
bash-2.05# ls -lrt
total 24
drwxr-xr-x 3 root other 512 Aug 30 2003 114483-04
drwxr-xr-x 3 root other 512 Sep 12 2005 115690-01
drwxr-xr-x 4 root other 512 Sep 15 2005 120465-01
drwxr-xr-x 3 root other 512 Aug 28 2006 114330-02
drwxr-xr-x 3 root other 512 Nov 13 2007 114194-11
drwxr-xr-x 4 root other 512 Apr 3 2008 137478-01
drwxr-xr-x 3 root other 512 May 29 2008 115167-08
drwxr-xr-x 6 root other 512 Jun 29 2009 114568-27
drwxr-xr-x 4 root other 512 Aug 21 21:19 114637-05
drwxr-xr-x 12 root other 512 Sep 23 21:57 114273-04
drwxr-xr-x 26 root other 1024 Jan 26 22:23 122301-48
bash-2.05# patchadd 120465-01/
Checking installed patches…
ERROR: This patch requires patch 117172-17
which has not been applied to the system.
Patchadd is terminating.
#Hence this patch 117172-17 (kernel patch) was also downloaded and installed first
#This kernel patch needs to be applied in single-user mode and the system must be restarted immediately afterwards
bash-2.05# ls -lrt
total 24
drwxr-xr-x 3 root other 512 Aug 30 2003 114483-04
drwxr-xr-x 24 root other 1024 Jan 22 2005 117172-17
drwxr-xr-x 3 root other 512 Sep 12 2005 115690-01
drwxr-xr-x 4 root other 512 Sep 15 2005 120465-01
drwxr-xr-x 3 root other 512 Aug 28 2006 114330-02
drwxr-xr-x 3 root other 512 Nov 13 2007 114194-11
drwxr-xr-x 4 root other 512 Apr 3 2008 137478-01
drwxr-xr-x 3 root other 512 May 29 2008 115167-08
drwxr-xr-x 6 root other 512 Jun 29 2009 114568-27
drwxr-xr-x 4 root other 512 Aug 21 21:19 114637-05
drwxr-xr-x 12 root other 512 Sep 23 21:57 114273-04
drwxr-xr-x 26 root other 1024 Jan 26 22:23 122301-48
#Run the patchadd command for all of these patches in the following order (see the loop sketch after this list):
#init s && patchadd 117172-17 && init 6
#patchadd 120465-01 && init 6
#patchadd 114568-27 (No need of installing the Patch 115690-01 as it was obsoleted by 114568-27)
#patchadd 114194-11
#patchadd 115167-08
#patchadd 114483-04
#patchadd 137478-01
#patchadd 114330-02
#init 6
#These patches should be sufficient for proceeding further with live upgrade#
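#For instance, the non-kernel patches can be applied in one pass with a small shell loop (a sketch; run it after the 117172-17 and 120465-01 steps above, which each require a reboot):
cd /var/tmp/solaris9_patches
for p in 114568-27 114194-11 115167-08 114483-04 137478-01 114330-02
do
    patchadd $p
done
init 6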
bash-2.05# pkginfo |grep lu
system SUNWcqhpc COMPAQ Hot Plug PCI controller driver
system SUNWctlu Print utilities for CTL locales
system SUNWdthez Desktop Power Pack Help Volumes
application SUNWj2pi Java Plug-in
system SUNWmdr Solaris Volume Manager, (Root)
system SUNWmdu Solaris Volume Manager, (Usr)
system SUNWpiclu PICL Libraries, and Plugin Modules (Usr)
system SUNWvolg Volume Management Graphical User Interface
system SUNWvolr Volume Management, (Root)
system SUNWvolu Volume Management, (Usr)
system SUNWxwhl X Window System & Graphics Header links in /usr/include
#So there is no existing installation of the Live Upgrade packages. Hence mount the Solaris 10 DVD and install the Live Upgrade packages on the system
bash-2.05#
bash-2.05# cd /solaris10-iso/Solaris_10/Tools/Installers/
bash-2.05# ./liveupgrade20 -noconsole -nodisplay
Sun Microsystems, Inc.
Binary Code License Agreement
Live Upgrade
READ THE TERMS OF THIS AGREEMENT AND ANY PROVIDED
SUPPLEMENTAL LICENSE TERMS (COLLECTIVELY “AGREEMENT”)
CAREFULLY BEFORE OPENING THE SOFTWARE MEDIA PACKAGE. BY
OPENING THE SOFTWARE MEDIA PACKAGE, YOU AGREE TO THE TERMS
OF THIS AGREEMENT. IF YOU ARE ACCESSING THE SOFTWARE
ELECTRONICALLY, INDICATE YOUR ACCEPTANCE OF THESE TERMS BY
SELECTING THE “ACCEPT” BUTTON AT THE END OF THIS
AGREEMENT. IF YOU DO NOT AGREE TO ALL THESE TERMS,
PROMPTLY RETURN THE UNUSED SOFTWARE TO YOUR PLACE
OF PURCHASE FOR A REFUND OR, IF THE SOFTWARE IS ACCESSED
ELECTRONICALLY, SELECT THE “DECLINE” BUTTON AT THE END OF
THIS AGREEMENT.



For inquiries please contact: Sun Microsystems, Inc., 4150
Network Circle, Santa Clara, California 95054, U.S.A.
bash-2.05#
bash-2.05# pkginfo |grep SUNWlu
application SUNWlucfg Live Upgrade Configuration
application SUNWlur Live Upgrade (root)
application SUNWluu Live Upgrade (usr)
bash-2.05# pkginfo -l SUNWlucfg SUNWlur SUNWluu
PKGINST: SUNWlucfg
NAME: Live Upgrade Configuration
CATEGORY: application
ARCH: i386
VERSION: 11.10,REV=2007.03.09.15.05
BASEDIR: /
VENDOR: Sun Microsystems, Inc.
DESC: Live Upgrade Configuration
PSTAMP: on10-adms-patch-x20080801100945
INSTDATE: Feb 17 2010 04:44
HOTLINE: Please contact your local service provider
STATUS: completely installed
FILES: 5 installed pathnames
3 shared pathnames
3 directories
35 blocks used (approx)
PKGINST: SUNWlur
NAME: Live Upgrade (root)
CATEGORY: application
ARCH: i386
VERSION: 11.10,REV=2005.01.09.21.46
BASEDIR: /
VENDOR: Sun Microsystems, Inc.
DESC: Live Upgrade (root)
PSTAMP: on10-adms-patch-x20080801100947
INSTDATE: Feb 17 2010 04:44
HOTLINE: Please contact your local service provider
STATUS: completely installed
FILES: 39 installed pathnames
9 shared pathnames
4 linked files
15 directories
13 executables
4230 blocks used (approx)
PKGINST: SUNWluu
NAME: Live Upgrade (usr)
CATEGORY: application
ARCH: i386
VERSION: 11.10,REV=2005.01.09.21.46
BASEDIR: /
VENDOR: Sun Microsystems, Inc.
DESC: Live Upgrade (usr)
PSTAMP: on10-adms-patch-x20080801100949
INSTDATE: Feb 17 2010 04:44
HOTLINE: Please contact your local service provider
STATUS: completely installed
FILES: 165 installed pathnames
7 shared pathnames
11 directories
45 executables
4022 blocks used (approx)
bash-2.05#
#Now create the new (inactive) BE, which is just a copy of the running Solaris 9 OS
#Here the current BE is named sol9 and the new BE sol10
#Before beginning, make sure you allocate more disk space to the new environment (ABE), at least two or three times that of the current BE. Live upgrade will fail during luupgrade execution if the space is not sufficient!
bash-2.05# lucreate -c sol9 -n sol10 -m /:c0d1s0:ufs
Discovering physical storage devices
Discovering logical storage devices
Cross referencing storage devices with boot environment configurations
Determining types of file systems supported
Validating file system requests
The device name expands to device path

Preparing logical storage devices
Preparing physical storage devices
Configuring physical storage devices
Configuring logical storage devices
Checking GRUB menu…
System has findroot enabled GRUB
Analyzing system configuration.
Comparing source boot environment file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Searching /dev for possible boot environment filesystem devices
Updating system configuration files.
The device
is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment .
Source boot environment is .
Creating boot environment .
Creating file systems on boot environment .
Creating file system for </> in zone on
.
Mounting file systems for boot environment .
Calculating required sizes of file systems for boot environment .
Populating file systems on boot environment .
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Copying.
Creating shared file system mount points.
WARNING: The file
contains a list of <2>
potential problems (issues) that were encountered while populating boot
environment .
INFORMATION: You must review the issues listed in
and determine if any must be resolved. In
general, you can ignore warnings about files that were skipped because
they did not exist or could not be opened. You cannot ignore errors such
as directories or files that could not be created, or file systems running
out of disk space. You must manually resolve any such problems before you
activate boot environment .
Creating compare databases for boot environment .
Creating compare database for file system
.
Creating compare database for file system </>.
Updating compare databases on boot environment .
Making boot environment bootable.
Updating bootenv.rc on ABE .
Skipping menu entry delete: Non existent GRUB menu

Population of boot environment successful.
Creation of boot environment successful.
bash-2.05#
###Now check for any errors & the status of the OS Environments
bash-2.05# ls -lrt /tmp/lucopy.errors.2707
-rw-r--r-- 1 root other 24 Feb 17 07:10 /tmp/lucopy.errors.2707
bash-2.05#
bash-2.05# cat /tmp/lucopy.errors.2707
1155232 blocks
0 blocks
bash-2.05#
bash-2.05# lustatus -l /tmp/lucopy.errors.2707
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
sol9                       yes      yes    yes       no     -
sol10                      yes      no     no        yes    -
bash-2.05#
bash-2.05# lustatus sol10
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
sol10                      yes      no     no        yes    -
bash-2.05#
bash-2.05# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
sol9                       yes      yes    yes       no     -
sol10                      yes      no     no        yes    -
bash-2.05#
###Now we have a copy of Solaris 9 as the ABE (Alternate Boot Environment). This needs to be upgraded to Solaris 10 (from the DVD image) using luupgrade as follows:
bash-2.05# luupgrade -u -n sol10 -s /solaris10iso/
System has findroot enabled GRUB
Skipping menu entry delete: Non existent GRUB menu

Copying failsafe kernel from media.
Uncompressing miniroot
Creating miniroot device
miniroot filesystem is
Mounting miniroot at

Validating the contents of the media .
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains version <10>.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE .
Checking for GRUB menu on ABE .
Checking for x86 boot partition on ABE.
Determining packages to install or upgrade for BE .
Performing the operating system upgrade of the BE .
CAUTION: Interrupting this process may leave the boot environment unstable
or unbootable.
Upgrading Solaris: 5% completed
Upgrading Solaris: 20% completed
Upgrading Solaris: 33% completed
Upgrading Solaris: 38% completed
Upgrading Solaris: 48% completed
Upgrading Solaris: 57% completed
Upgrading Solaris: 88% completed
Upgrading Solaris: 100% completed
Installation of the packages from this media is complete.
Deleted empty GRUB menu on ABE .
Updating package information on boot environment .
Package information successfully updated on boot environment .
Adding operating system patches to the BE .
The operating system patch installation is complete.
ABE boot partition backing deleted.
PBE GRUB has no capability information.
PBE GRUB has no versioning information.
ABE GRUB is newer than PBE GRUB. Updating GRUB.
GRUB update was successfull.
Configuring failsafe for system.
Failsafe configuration is complete.
INFORMATION: The file
on boot
environment contains a log of the upgrade operation.
INFORMATION: The file
on boot
environment contains a log of cleanup operations required.
INFORMATION: Review the files listed above. Remember that all of the files
are located on boot environment . Before you activate boot
environment , determine if any additional system maintenance is
required or if additional media of the software distribution must be
installed.
The Solaris upgrade of the boot environment is complete.
Installing failsafe
Failsafe install is complete.
bash-2.05#
###The ABE upgrade has completed successfully. Now activate the ABE as the active BE and reboot (using init 6, and strictly not the reboot command) to check whether it boots properly.
bash-2.05# luactivate sol10
System has findroot enabled GRUB
A Live Upgrade Sync operation will be performed on startup of boot environment .
Generating boot-sign for ABE
Generating partition and slice information for ABE
No boot menu exists. Creating new menu file
Generating multiboot menu entries for ABE.
Disabling splashimage
Re-enabling splashimage
GRUB menu has no default setting
No more bootadm entries. Deletion of bootadm entries is complete.
GRUB menu default setting is unchanged
Done eliding bootadm entries.

**********************************************************************
The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.
**********************************************************************
In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:
1. Boot from the Solaris failsafe or boot in Single User mode from Solaris
Install CD or Network.
2. Mount the Parent boot environment root slice to some directory (like
/mnt). You can use the following command to mount:
mount -Fufs /dev/dsk/c0d0s0 /mnt
3. Run utility with out any arguments from the Parent boot
environment root slice, as shown below:

/mnt/sbin/luactivate
4. luactivate, activates the previous working boot environment and
indicates the result.
5. Exit Single User mode and reboot the machine.
**********************************************************************
Modifying boot archive service
Propagating findroot GRUB for menu conversion.
File
propagation successful
File propagation successful
File propagation successful
File propagation successful
Deleting stale GRUB loader from all BEs.
File deletion successful
File deletion successful
File deletion successful
Activation of boot environment successful.
##After the reboot, Solaris 10 booted successfully and is working perfectly
bash-3.00# uname -a
SunOS rskvmsol9 5.10 Generic_139556-08 i86pc i386 i86pc
bash-3.00#
bash-3.00# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
sol9                       yes      no     no        yes    -
sol10                      yes      yes    yes       no     -
bash-3.00# df -h
Filesystem size used avail capacity Mounted on
/dev/dsk/c0d1s0 4.4G 2.5G 1.9G 58% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 549M 756K 548M 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
sharefs 0K 0K 0K 0% /etc/dfs/sharetab
fd 0K 0K 0K 0% /dev/fd
swap 548M 48K 548M 1% /tmp
swap 548M 24K 548M 1% /var/run
/dev/dsk/c0d0s7 3.9G 4.0M 3.9G 1% /export/home
###That's all! Live Upgrade is a nice and pretty straightforward feature, quite beneficial for production environments where minimal downtime is desirable. We now have two boot environments and, if we wish, we can remove the older one once we are sure the new BE is stable and performing well without any issues (see the sketch below).
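###For example, once sol10 has proven stable, the old BE can be removed (make sure sol10 is the active BE first):
bash-3.00# lustatus
bash-3.00# ludelete sol9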
Note:
====
1. If we receive error messages starting with "This system contains only a single GRUB menu for all boot environments..." while executing luupgrade, we can use the following solution to fix it and proceed further.
bash-2.05# luupgrade -u -n sol10 -s /solaris10iso/
This system contains only a single GRUB menu for all boot environments. To
enhance reliability and improve the user experience, live upgrade requires
you to run a one time conversion script to migrate the system to multiple
redundant GRUB menus. This is a one time procedure and you will not be
required to run this script on subsequent invocations of Live Upgrade
commands. To run this script invoke:
/usr/lib/lu/lux86menu_propagate /path/to/new/Solaris/install/image OR
/path/to/LiveUpgrade/patch
where /path/to/new/Solaris/install/image is an absolute
path to the Solaris media or netinstall image from which you installed the
Live Upgrade packages and /path/to/LiveUpgrade/patch is an absolute path
to the Live Upgrade patch from which this Live Upgrade script was patched
into the system.
##Here we encountered an error related to creating multiple GRUB menus to reflect both BEs. To fix it, issue the command suggested by the error output:
bash-2.05# /usr/lib/lu/lux86menu_propagate /solaris10iso/
Validating the contents of the media
.
The media is a standard Solaris media.
The media contains a Solaris operating system image.
The media contains version <10>.
Installing latest Live Upgrade package/patch on all BEs
Updating Live Upgrade packages on all BEs
Successfully updated Live Upgrade packages on all BEs
Successfully extracted GRUB from media
System has no GRUB slice
Installing GRUB bootloader to all GRUB based BEs
System does not have an applicable x86 boot partition
install GRUB to all BEs successful
Converting root entries to findroot
Skipping elide of bootadm entries: Non-existent or zero length GRUB menu.
File
deletion successful
Successfully deleted GRUB_slice file
File deletion successful
Successfully deleted GRUB_root file
Propagating findroot GRUB for menu conversion.
File propagation successful
File propagation successful
File propagation successful
File propagation successful
Deleting stale GRUB loader from all BEs.
File deletion successful
File deletion successful
File deletion successful
Conversion was successful
##Now run luupgrade again
2. If your live upgrade fails at the end of the luupgrade run, it is most likely due to a space constraint on the disk on which the ABE was created. You can mount the disk on a temporary mount point and check the disk utilization; if it is 100% full, that is the reason for the failure. In such a case, you have to delete the ABE using ludelete, re-format the disk, and start the live upgrade process all over again from scratch. The following is an example of such a failure:
bash-2.05# luupgrade -u -n sol10 -s /solaris10iso/
System has findroot enabled GRUB
Skipping menu entry delete: Non existent GRUB menu

Copying failsafe kernel from media.
Uncompressing miniroot
Creating miniroot device
miniroot filesystem is
Mounting miniroot at

Validating the contents of the media .
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains version <10>.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE .
Checking for GRUB menu on ABE .
Checking for x86 boot partition on ABE.
Determining packages to install or upgrade for BE .
Performing the operating system upgrade of the BE .
CAUTION: Interrupting this process may leave the boot environment unstable
or unbootable.
Upgrading Solaris: 100% completed
Installation of the packages from this media is complete.
Updating package information on boot environment .
ERROR: Unable to update package instance information on boot environment .
ABE boot partition backing deleted.
ABE GRUB has no capability information. Skipping GRUB upgrade.
Configuring failsafe for system.
Failsafe configuration is complete.
INFORMATION: The file
on boot
environment contains a log of the upgrade operation.
INFORMATION: The file
on boot
environment contains a log of cleanup operations required.
WARNING: <99> packages failed to install properly on boot environment .
INFORMATION: The file
on
boot environment contains a list of packages that failed to
upgrade or install properly.
INFORMATION: Review the files listed above. Remember that all of the files
are located on boot environment . Before you activate boot
environment , determine if any additional system maintenance is
required or if additional media of the software distribution must be
installed.
The Solaris upgrade of the boot environment failed.
Installing failsafe
cp: /tmp/.luupgrade.inf.4648/boot/multiboot: No space left on device
cp: /tmp/.luupgrade.inf.4648/boot/x86.miniroot-safe: No space left on device
ERROR: Failsafe install failed.
bash-2.05#
##Here the reason is clearly given in the output. Now check and confirm whether this is due to a space constraint:
bash-2.05#mount /dev/dsk/c0d1s0 /a
bash-2.05# df -h /a
Filesystem size used avail capacity Mounted on
/dev/dsk/c0d1s0 992M 992M 0 100% /a
bash-2.05#umount /a
##Delete the failed ABE as in the following example,
bash-2.05# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
sol9                       yes      yes    yes       no     -
sol10                      yes      no     no        yes    -
bash-2.05# ludelete -f sol10
System has findroot enabled GRUB
Skipping menu entry delete: Non existent GRUB menu

Determining the devices to be marked free.
Updating boot environment configuration database.
Updating boot environment description database on all BEs.
Updating all boot environment configuration databases.
Boot environment deleted.
bash-2.05# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
sol9                       yes      yes    yes       no     -


Saturday, April 13, 2013

Change the default Maximum Transmission Unit (MTU) Size in Solaris

 Change the default Maximum Transmission Unit (MTU) Size

The maximum transmission unit (MTU) is the size (in bytes) of the largest protocol data unit that an interface can pass onwards. MTU parameters usually appear in association with a communications interface (NIC, serial port, etc.). The MTU may be fixed by standards (as is the case with Ethernet) or decided at connect time (as is usually the case with point-to-point serial links).
A higher MTU brings greater efficiency because each packet carries more user data while protocol overheads, such as headers or underlying per-packet delays remain fixed, and higher efficiency means a slight improvement in bulk protocol throughput. However, large packets can occupy a slow link for some time, causing greater delays to following packets and increasing lag and minimum latency. For example, a 1500 byte packet, the largest allowed by Ethernet at the network layer (and hence most of the Internet), would tie up a 14.4k modem for about one second.

Now here are the step by step to increase the MTU size, on Solaris of course.
By default, if you type "ifconfig -a" you will see that the MTU size is 1500:
bash-3.00# ifconfig -a
lo0: flags=2001000849 mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
e1000g0: flags=201000843 mtu 1500 index 2
inet 10.32.16.1 netmask ffffff00 broadcast 10.32.16.255
ether 8:0:27:73:25:e8
e1000g1: flags=201000843 mtu 1500 index 3
inet 10.32.16.2 netmask ffffff00 broadcast 10.32.16.255
ether 8:0:27:6a:34:ae
bash-3.00#


You can change the MTU size by running "ifconfig <interface> mtu <size>", but it doesn't work for e1000g if the driver maximum is still the default 1500. Try it and you will get an error like this:
bash-3.00# ifconfig e1000g1 mtu 8000
ifconfig: setifmtu: SIOCSLIFMTU: e1000g1: Invalid argument
I tried a bge interface on an M5000 as well, and the result was the same:
root@server # ifconfig bge2 mtu 8000
ifconfig: setifmtu: SIOCSLIFMTU: bge2: Invalid argument
Using the dladm command also failed:
root@server # dladm set-linkprop -p mtu=8000 bge2
dladm: warning: invalid link property ‘mtu’
After reading a couple of references, I finally understood that changing the MTU size is different for each interface type. Here are the conclusions:
Change MTU size for E1000g (Intel PRO/1000 Gigabit family device driver) interface:
Scenario:
I have 2 interfaces, e1000g0 and e1000g1. I need to change the MTU size to 8000 for the e1000g1 interface only.
1.  Check current config with “ifconfig -a”
2. Edit the file "/kernel/drv/e1000g.conf":
go to the "MaxFrameSize" line and change the zero value for the relevant instance, like this:
MaxFrameSize=0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0;
# 0 is for normal ethernet frames.
# 1 is for upto 4k size frames.
# 2 is for upto 8k size frames.
# 3 is for upto 16k size frames.
# These are maximum frame limits, not the actual ethernet frame
# size. Your actual ethernet frame size would be determined by
# protocol stack configuration (please refer to ndd command man pages)
# For Jumbo Frame Support (9k ethernet packet)
# use 3 (upto 16k size frames)
Note: The above configuration affects only e1000g1 (the second value corresponds to instance 1).
If you want to change the MTU size on all interfaces, simply change all the zero values to 1, 2 or 3 as you need:
Example: MaxFrameSize=2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2;
Then all your interfaces will support an 8K MTU.
3. reboot
4. check the result with “ifconfig -a | grep mtu”
bash-3.00# ifconfig -a | grep mtu
lo0: flags=2001000849 mtu 8232 index 1
e1000g0: flags=201000843 mtu 1500 index 2
e1000g1: flags=1201000843 mtu 8106 index 3
bash-3.00#

5. After the reboot, the MTU size is 8106. If you want an MTU of exactly 8000, edit the file "/etc/hostname.e1000g1":
bash-3.00# cat /etc/hostname.e1000g1
solaris10 mtu 8000
bash-3.00# ifconfig -a
lo0: flags=2001000849 mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
e1000g0: flags=201000843 mtu 1500 index 2
inet 10.32.16.1 netmask ffffff00 broadcast 10.32.16.255
ether 8:0:27:73:25:e8
e1000g1: flags=1201000843 mtu 8000 index 3
inet 10.32.16.2 netmask ffffff00 broadcast 10.32.16.255
ether 8:0:27:6a:34:ae

Change MTU size for bge (Broadcom Gigabit Ethernet device driver) interface:
bash-3.00# grep bge /etc/path_to_inst
"/pci@0,600000/pci@0/pci@8/pci@0/network@2" 0 "bge"
"/pci@0,600000/pci@0/pci@8/pci@0/network@2,1" 1 "bge"

bash-3.00# cat /etc/system
set bge:bge_jumbo_enable = 1

bash-3.00# cat /platform/sun4u/kernel/drv/bge.conf
default_mtu=9000;
name="bge" parent="/pci@0,600000" unitaddress="2" default_mtu=9000;

bash-3.00# reboot
bash-3.00# ifconfig -a
Change MTU size for ce (Cassini Gigabit-Ethernet device driver) interface:
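For ce the approach is analogous to bge: enable jumbo frames in the driver configuration, reboot, and then set the MTU on the interface. A minimal sketch, assuming the accept_jumbo property is supported by your ce driver revision (check the ce(7D) man page for your release; ce0 is a placeholder instance):
bash-3.00# cat /platform/sun4u/kernel/drv/ce.conf
accept_jumbo=1;
bash-3.00# reboot
bash-3.00# ifconfig ce0 mtu 9000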

Thursday, April 4, 2013

Solaris network dladm – Display Link status,speed,duplex,statistics,MTU



In the past we had to mess around with ndd commands and stats tools like kstat to find the network link status, speed and duplex information in Sun Solaris. With Solaris 10 this has become much easier with the dladm utility.
dladm is the administration utility for data links. It can display information such as link status (up/down), speed, duplex, MTU, VLAN tagging and, crucially, network traffic statistics for each interface, both cumulative and in real time. dladm can also configure and administer link aggregation across multiple NICs, which we will not cover here.

Show Link Status/Speed/Duplex
# dladm show-dev
nxge0 link: down speed: 0 Mbps duplex: unknown
nxge1 link: down speed: 0 Mbps duplex: unknown
nxge2 link: up speed: 1000 Mbps duplex: full
nxge3 link: up speed: 1000 Mbps duplex: full
As you can see above, the "show-dev" option lists all the network interfaces with link status (up/down), current speed in Mbps, and duplex (half/full).
Show Link Status
# dladm show-link
nxge0 type: non-vlan mtu: 1500 device: nxge0
nxge1 type: non-vlan mtu: 1500 device: nxge1
nxge2 type: non-vlan mtu: 1500 device: nxge2
nxge3 type: non-vlan mtu: 1500 device: nxge3
Here “show-link” option reveals the MTU and the VLAN tagging detail on each of the interfaces on the system.
Show Stats of all Interfaces for all time
# dladm show-dev -s
ipackets rbytes ierrors opackets obytes oerrors
nxge0 0 0 0 0 0 0
nxge1 0 0 0 0 0 0
nxge2 179625752557169463759657 581104982 3964684165410
nxge3 22240891 1834257868 0 5198483 395084708 0
The “-s” option along with “show-dev” or “show-link” displays network traffic statistics including Input/Output packets, input/output errors.
Stats in real-time
To show the stats of a particular interface in real time, use the "-i" option, which specifies the interval in seconds. The first line again shows the cumulative stats, and subsequent lines show real-time figures every n seconds (5 seconds in our example).
# dladm show-link -s -i 5 nxge2
ipackets rbytes ierrors opackets obytes oerrors
nxge2 179637824757173944575957 581119516 3964706801670
ipackets rbytes ierrors opackets obytes oerrors
nxge2 961 319105 0 150 17874 0
ipackets rbytes ierrors opackets obytes oerrors
nxge2 887 263850 0 117 16505 0
If we do not specify an interface, the command uses the default interface (the first in the list). As you can see from the example below, we get stats for nxge0, which is not connected.
# dladm show-link -s -i 5
ipackets rbytes ierrors opackets obytes oerrors
nxge0 0 0 0 0 0 0
ipackets rbytes ierrors opackets obytes oerrors
nxge0 0 0 0 0 0 0

Monday, April 1, 2013

zoneadmd is not able to start.


Solaris zones stuck in the shutting_down state, or zoneadm shows the zone in the down state, or the zone is stuck unmounting file systems.

 

 

Error: zoneadmd is not able to start.

 

Solution:

1: From the global zone, check the zone state and run ps -ef | grep "zonename"; try to kill the process IDs of the hung processes and of zoneadmd.

 

2: umount -f "zone mount point". For example, if zone1 is installed on the /zone1 mount point, run umount -f /zone1.

 

3: Again kill all processes shown by ps -ef | grep "zonename".

 

4: Run fsck on the local zone path's mount point.

 

5: Mount the zone path again and boot the zone.

 

6: Edit /etc/vfstab and comment out the faulted mount point.

 

7: Reboot the zone again and mount the file system on another mount point (a consolidated sketch of these steps follows).
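As a consolidated sketch of the steps above (zone1, /zone1 and the device path are placeholders; substitute your own zone name and backing storage):

# zoneadm list -vc                      # check the zone state from the global zone
# ps -ef | grep zone1                   # find the hung zone processes and zoneadmd
# kill -9 <pid-of-hung-process> <pid-of-zoneadmd>
# umount -f /zone1                      # force-unmount the zone path
# fsck -y /dev/rdsk/<zonepath-device>   # placeholder device backing /zone1
# mount /dev/dsk/<zonepath-device> /zone1
# zoneadm -z zone1 boot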

Wednesday, February 20, 2013

Linux new disk management

Linux new disk management:
A SCSI bus rescan can be issued by typing the following commands (a concrete example follows):
echo "- - -" > /sys/class/scsi_host/host#/scan
fdisk -l
tail -f /var/log/messages
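For example, to rescan the first HBA (host0 here is just an example; repeat for each host# entry present) and watch for the new disk:
# ls /sys/class/scsi_host/
# echo "- - -" > /sys/class/scsi_host/host0/scan
# fdisk -l | grep '^Disk /dev/sd'
# tail -f /var/log/messages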


How Do I Delete a Single Device Called /dev/sdc?

In addition to re-scanning the entire bus, a specific device can be added or an existing device deleted. To delete a device, use the following command (generic form first, then the example for /dev/sdc):
# echo 1 > /sys/block/devName/device/delete
# echo 1 > /sys/block/sdc/device/delete
How Do I Add a Single Device Called /dev/sdc?

To add a single device explicitly, use the following syntax:


# echo "scsi add-single-device " > /proc/scsi/scsi

here,

    : Host
    : Bus (Channel)
    : Target (Id)
    : LUN numbers

For example, to add /dev/sdc with host 0, bus 0, target 2, and LUN 0, enter:
# echo "scsi add-single-device 0 0 2 0">/proc/scsi/scsi
# fdisk -l
# cat /proc/scsi/scsi
Sample Outputs:

Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
  Vendor: VMware,  Model: VMware Virtual S Rev: 1.0
  Type:   Direct-Access                    ANSI SCSI revision: 02
Host: scsi0 Channel: 00 Id: 01 Lun: 00
  Vendor: VMware,  Model: VMware Virtual S Rev: 1.0
  Type:   Direct-Access                    ANSI SCSI revision: 02
Host: scsi0 Channel: 00 Id: 02 Lun: 00
  Vendor: VMware,  Model: VMware Virtual S Rev: 1.0
  Type:   Direct-Access                    ANSI SCSI revision: 02

Step #3: Format a New Disk

Now you can create a partition using fdisk and format it using the mkfs.ext3 command (here partition 3 of the new disk is used as the example):
# fdisk /dev/sdc
# mkfs.ext3 /dev/sdc3
Step #4: Create a Mount Point And Update /etc/fstab

# mkdir /disk3
Open /etc/fstab file, enter:
# vi /etc/fstab
Append as follows:

/dev/sdc3               /disk3           ext3    defaults        1 2

Save and close the file.
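To bring the new file system online right away and confirm it:
# mount /disk3
# df -h /disk3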
Optional Task: Label the partition

You can label the partition using e2label. For example, if you want to label the new partition /backupDisk, enter:
# e2label /dev/sdc3 /backupDisk

Linux System Service configuration


A typical Linux system can be configured to boot into one of 5 different runlevels. During the boot process the init process looks in the /etc/inittab file to find the default runlevel. Having identified the runlevel it proceeds to execute the appropriate startup scripts located in the /etc/rc.d sub-directory.

For example, if you have a runlevel of 5 configured then the init process will work through the list of startup scripts located in /etc/rc.d/rc5.d. These startup scripts start with either the letter "S" or "K", followed by a number and then a (hopefully) descriptive word. For example, the startup script for NFS (Networked File System) is typically S60nfs, whilst the startup script for the YUM system might be called K01yum.

Scripts prefixed with "K" (kill) are invoked first to stop services, followed by those prefixed with "S" (start). The number in the filename controls the order in which the scripts are executed within each group ("S" or "K"). You wouldn't, for example, want to start NFS before the basic networking is up and running. It is also worth noting that the files in the rc.d sub-directories are not the actual scripts themselves but rather symbolic links to the actual files located in /etc/rc.d/init.d.
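For example, you can list the symbolic links for runlevel 5 with:

    ls -l /etc/rc.d/rc5.d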

There are a number of ways to control which services get started without having to delve into the /etc/rc.d sub-directories yourself.

The command line tool chkconfig (usually located in /sbin) can be used to list and configure which services get started at boot time. To list all service settings run the following command:

    /sbin/chkconfig --list

This will display a long list of services showing whether or not they are started up at various runlevels. You may want to narrow the search down using grep. For example to list the entry for the HTTP daemon you would do the following:

    /sbin/chkconfig --list | grep httpd

which should result in something like:

    httpd           0:off   1:off   2:off   3:on    4:off   5:off    6:off

Alternatively you may just be interested to know what gets started for runlevel 3:

    /sbin/chkconfig --list | grep '3:on'

chkconfig can also be used to change the settings. If we wanted the HTTP service to start up at runlevel 5, we would issue the following command:

    /sbin/chkconfig --level 5 httpd on
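You can then confirm the change and, if required, start the service immediately:

    /sbin/chkconfig --list httpd
    /sbin/service httpd start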

A number of graphical tools are also available for administering services. On RedHat 9 you can run the following command:

    redhat-config-services

The equivalent command on RedHat Fedora Core is:

    system-config-services

The above graphical tools allow you to view which services will start for each runlevel, add or remove services for each runlevel and also manually start or stop services.

Another useful tool, if you do not have a graphical desktop running or access via a remote X server, is the ntsysv command. ntsysv resides in /sbin on most systems. Whilst convenient when you don't have an X server running, the one drawback of ntsysv is that it only allows you to change the settings for the current runlevel.