Monday, October 7, 2013

VCS 6.0 configuration with Solaris 11 zfs and zones


VCS configuration with Solaris 11 ZFS and zones:

 
1: Create the ZFS pool:
zpool create spoo c2t2d0

2: Create the ZFS file system:
zfs create spoo/mnt

3: Create a sparse zone:
zonecfg -z testzone

zonecfg:testzone> create
zonecfg:testzone> set zonepath=/spoo/mnt/testzone
zonecfg:testzone> set ip-type=shared
zonecfg:testzone> remove anet
zonecfg:testzone> add net
zonecfg:testzone:net> set address=192.168.0.200
zonecfg:testzone:net> set configure-allowed-address=false
zonecfg:testzone:net> set physical=net0
zonecfg:testzone:net> set defrouter=192.168.0.1
zonecfg:testzone:net> end
zonecfg:testzone> set pool=default_pool
zonecfg:testzone> verify
zonecfg:testzone> commit
zonecfg:testzone> exit
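
Before booting, a quick sanity check of the zone definition can be made with the standard zonecfg/zoneadm query commands (read-only; a sketch of the usual checks):

zonecfg -z testzone info
zoneadm list -cv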

4: On solaris-3, enable the resource pools facility and boot the zone:

solaris-3# pooladm -e
solaris-3# zoneadm -z testzone boot

 
zlogin -C testzone

Log in to the zone console and complete the initial zone configuration.

5: After that, configure VCS with the commands below:

haconf -makerw
hagrp -add testgrp
hagrp -modify testgrp SystemList solaris-3 0 solaris-4 1
hagrp -modify testgrp AutoStartList solaris-3
hagrp -modify testgrp Parallel 0
hares -add vcspool Zpool testgrp
hares -modify vcspool Critical 1
hares -modify vcspool ChkZFSMounts 1
hares -modify vcspool FailMode continue
hares -modify vcspool ForceOpt 1
hares -modify vcspool ForceRecoverOpt 0
hares -modify vcspool PoolName spoo
hares -modify vcspool AltRootPath /
hares -modify vcspool ZoneResName vcszone
hares -modify vcspool DeviceDir -delete -keys
hares -modify vcspool Enabled 1
hazonesetup -g testgrp -r vcszone -z testzone -p abc123 -a -s solaris-3,solaris-4
haconf -makerw
hares -link vcszone vcspool
haconf -dump -makero
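
To confirm that the group and resources came out as intended, the usual VCS status commands can be run at this point (read-only; a sketch of the usual checks):

hastatus -sum
hagrp -state testgrp
hares -state vcspool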

6: Copy the index and testzone.xml files from solaris-3:/etc/zones to solaris-4:/etc/zones.

7: On solaris-4, edit /etc/zones/index (e.g. with vi) and change the state of testzone to configured:

testzone:configured:/spoo/mnt/testzone:

8: Probe the testgrp resources on both servers.

9: Halt testzone on solaris-3 and export the spoo pool:

zoneadm -z testzone halt
zpool export spoo

10: Finally, enable the group:

hagrp -enable testgrp -sys solaris-3
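
Once the group is online on solaris-3, a switchover is a reasonable way to confirm that the pool and zone fail over cleanly. A sketch of the usual test, assuming both nodes are up and the group is online:

hagrp -switch testgrp -to solaris-4
hagrp -state testgrp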

Linux NFS troubleshooting

NFS (Network File System) is a widely used but dated protocol that allows computers to share files over a network. The main problems with NFS are that it traditionally relies on the inherently insecure UDP protocol, transactions are not encrypted, and hosts and users cannot be easily authenticated. Below we describe a number of measures you can take to mitigate these security problems.

Let us first clarify how the NFS service operates. An NFS server exports a file system (or directory), called the NFS file system (or NFS directory), to an NFS client. The NFS client must then import (mount) the exported file system (directory) before it can access it. Each measure below is annotated with on server, on client, on client & server, or misc, meaning that it is applied on the NFS server, on the NFS client, on both the client and the server, or elsewhere, respectively.
NFS file systems should be installed on a separate disk or partition (on server)
By keeping exported file systems on a separate partition of a hard disk, we ensure that malicious users cannot fill up the entire disk simply by writing large files onto it, which could otherwise crash other services relying on that disk.
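As a minimal illustration, a dedicated partition (the device name /dev/sdb1 below is only an assumed example) could be mounted at the export point through an /etc/fstab entry such as:

/dev/sdb1 /home ext3 defaults 1 2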
Prevent normal users on an NFS client from mounting an NFS file system (on server)
This can be done by adding parameter 'secure' in an item in /etc/exports, such as:
/home nfs-client(secure)
where the directory /home is the file system to be exported to the NFS client located at address nfs-client (specify the IP address or domain name of your NFS client).

Export an NFS file system in an appropriate permission mode (on server)
Let's say that you only need read-only permission on your exported NFS file system. Then the file system should be exported as read-only to prevent unintended or even intended modifications on those files. This is done by specifying parameter 'ro' in /etc/exports.
/home nfs-client(ro)

Restrict exporting an NFS file system to a certain set of NFS clients (on server)
Specify only a specific set of NFS clients that will be allowed to mount an NFS file system. If possible, use numeric IP addresses or fully qualified domain names, instead of aliases.
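For example, /etc/exports entries limited to a subnet or to fully qualified host names (the addresses below are illustrative) might look like this, followed by re-exporting:

/home 10.226.43.0/24(ro,root_squash)
/home nfsclient1.example.com(ro,root_squash)

#exportfs -ra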
Use the 'root_squash' option in /etc/exports on the NFS server if possible (on server)
When this option is used, requests arriving from user ID 'root' on the NFS client are mapped to the unprivileged user ID 'nobody' on the NFS server. This prevents the root user on the NFS client from exercising superuser privileges on the NFS server and, for example, illegitimately modifying files there. Here is an example:
/home nfs-client(root_squash)

Disable suid (superuser ID) on an NFS file system (on client)
Add the 'nosuid' option (no set-user-ID privilege) to the relevant entry in /etc/fstab (this file determines which NFS file systems are mounted automatically at startup). This prevents files on the NFS server that have the suid bit set, e.g., Trojan horse files, from being executed with elevated privileges on the NFS client, which could lead to a root compromise on the client, or to the client's root user accidentally executing such files. Here is an example of 'nosuid'; an entry in /etc/fstab on the client may contain:
nfs-server:/home /mnt/nfs nfs ro,nosuid 0 0

where nfs-server is the IP address or domain name of the NFS server and /home is the directory on the NFS server to be mounted to the client computer at the directory /mnt/nfs. Alternatively, the 'noexec' option can be used to disable any file execution at all.
nfs-server:/home /mnt/nfs nfs ro,nosuid,noexec 0 0
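
After mounting, it is worth confirming that the options actually took effect, for example:

#mount | grep /mnt/nfs

which should list the NFS mount with ro, nosuid (and noexec, if used) among its options.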

Install the most recent patches for NFS and portmapper (on client & server)
NFS has appeared in the top ten most commonly exploited vulnerabilities reported by CERT and has been widely abused. This means that the NFS server and portmapper on your system must be kept up to date with security patches.
Perform encryption over NFS traffic using SSH (on client & server)
Apart from the use of Secure Shell (SSH) for secure remote access, we can use it for tunnelling between an NFS client and server so that NFS traffic will be encrypted. The steps below will guide you how to encrypt NFS traffic using SSH.
Here is the simple diagram to show the concept of how NFS and SSH services cooperate.
  nfs-client                        nfs-server
mount --- SSH <=================> SSHD --- NFS

From this figure, when you mount an NFS directory from a client computer, you will mount through SSH. After the mounting is done, the NFS traffic in both directions will be encrypted and so secure.
In the figure the NFS server is located at address nfs-server (use either the IP address or domain name of your NFS server instead), and the NFS client is at address nfs-client. Make sure that in both systems you have SSH and NFS related services already installed so you can use them.
The configuration has two parts, one on the NFS server and one on the NFS client, described in the two sections below.

NFS server configuration

The two steps below are what we have to do on the NFS server.

Export an NFS directory to itself
For example, if the NFS server's IP address is 10.226.43.154 and the NFS directory to be exported is /home, then add the following line to /etc/exports
/home 10.226.43.154(rw,root_squash)

The reason for exporting the directory /home to itself, instead of to an NFS client's IP address in the ordinary fashion, is that, as the figure above shows, the NFS data on the server is fed to SSHD running at 10.226.43.154 rather than directly to the client computer. The NFS data is then forwarded securely to the client computer through the tunnel.
Note that the exported directory allows read and write access (rw). root_squash means that whoever initiates the mount of this directory will not obtain root privileges on the NFS server.
Restart NFS and SSH daemons
Using Red Hat 7.2, you can manually start NFS and SSHD by issuing the following commands:
#/sbin/service nfs restart
#/sbin/service sshd restart

If you want to have them started automatically at startup time, with Red Hat 7.2 add the two lines below to the startup file /etc/rc.d/rc.local.
/sbin/service nfs start
/sbin/service sshd start

The term nfs in the commands above is a shell script that will start off two services, namely, NFS and MOUNTD.

NFS client configuration

The three steps below are what we have to do on the NFS client.

Find the ports of NFS and MOUNTD on the NFS server

Let's say you are now on the NFS client computer. To find the NFS and MOUNTD ports on the NFS server, use the command:
#rpcinfo -p nfs-server
   
   program vers proto   port
   100000    2   tcp    111  portmapper
   100000    2   udp    111  portmapper
   100003    2   tcp   2049  nfs
   100003    2   udp   2049  nfs
   100021    1   udp   1136  nlockmgr
   100021    3   udp   1136  nlockmgr
   100021    4   udp   1136  nlockmgr
   100011    1   udp    789  rquotad
   100011    2   udp    789  rquotad
   100011    1   tcp    792  rquotad
   100011    2   tcp    792  rquotad
   100005    2   udp   2219  mountd
   100005    2   tcp   2219  mountd
   
Note the lines containing nfs and mountd: the port column shows their ports. Here nfs uses port 2049 and mountd uses port 2219.

Set up the tunnel using SSH

On the NFS client computer, bind an SSH port to NFS port 2049.
#ssh -f -c blowfish -L 7777:nfs-server:2049 -l tony nfs-server /bin/sleep 86400
#tony@nfs-server's password:
#
where:
-c blowfish means SSH will use the algorithm blowfish to perform encryption.

-L 7777:nfs-server:2049 means binding the SSH client at port 7777 (or any other port that you want) to communicate with the NFS server at address nfs-server on port 2049.

-l tony nfs-server means log in to the server at address nfs-server (specify either the IP address or domain name of the authentication server) as the user tony.

/bin/sleep 86400 keeps the session, and hence the tunnel, alive for one day (86,400 seconds) without spawning a shell on the client computer; you can specify a larger number if you wish.

The tony@nfs-server's password: line prompts the user tony for a password to complete the authentication.
Also on the NFS client computer, bind another SSH port with MOUNTD port 2219.
#ssh -f -c blowfish -L 8888:nfs-server:2219 -l tony nfs-server /bin/sleep 86400
#tony@nfs-server's password:
#
where:
-L 8888:nfs-server:2219 means binding this SSH client at port 8888 (or any other port that you want but not 7777 because you already used 7777) to communicate with the NFS server at address nfs-server on port 2219.
On the NFS client computer, mount the NFS directory /home through the two SSH ports 7777 and 8888 onto a local directory, say, /mnt/nfs.
#mount -t nfs -o tcp,port=7777,mountport=8888 localhost:/home /mnt/nfs

Normally the mount command names the IP address (or domain name) of the remote host when mounting the remote NFS directory (/home) onto the local directory (/mnt/nfs). Here, however, we mount from localhost instead of nfs-server because the data decrypted at the client end of the tunnel (see the figure above) is delivered on the localhost, not on the remote host.
Alternatively, if you want to mount the NFS directory automatically at startup time, add the following line to /etc/fstab
localhost:/home /mnt/nfs/ nfs tcp,rsize=8192,wsize=8192,intr,rw,bg,nosuid,port=7777,mountport=8888,noauto
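
A simple way to confirm that the tunnelled mount is working (read-only checks):

#df -h /mnt/nfs
#mount | grep /mnt/nfs

If the tunnel is up, the file system appears as mounted from localhost rather than from nfs-server.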

Allow only traffic from authorised NFS clients to the NFS server (on server)
Suppose the NFS server provides only the NFS service and nothing else, so only three ports need to be reachable on the server: RPC portmapper (port 111), NFS (port 2049), and mountd (port 2219). We can therefore filter the traffic that reaches the NFS server. Using the iptables firewall running locally on the NFS server (iptables must be installed to use the following commands), allow only traffic from authorised NFS clients to the server.
Allow traffic from an authorised subnet 10.226.43.0/24 to the ports Portmapper, NFS, and Mountd.
#iptables -A INPUT -i eth0 -s 10.226.43.0/24 -p tcp --dport 111 -j ACCEPT
#iptables -A INPUT -i eth0 -s 10.226.43.0/24 -p udp --dport 111 -j ACCEPT
#iptables -A INPUT -i eth0 -s 10.226.43.0/24 -p tcp --dport 2049 -j ACCEPT
#iptables -A INPUT -i eth0 -s 10.226.43.0/24 -p udp --dport 2049 -j ACCEPT
#iptables -A INPUT -i eth0 -s 10.226.43.0/24 -p tcp --dport 2219 -j ACCEPT
#iptables -A INPUT -i eth0 -s 10.226.43.0/24 -p udp --dport 2219 -j ACCEPT

Deny everything else.
#iptables -A INPUT -i eth0 -s 0/0 -p tcp --dport 111 -j DROP
#iptables -A INPUT -i eth0 -s 0/0 -p udp --dport 111 -j DROP
#iptables -A INPUT -i eth0 -s 0/0 -p tcp --dport 2049 -j DROP
#iptables -A INPUT -i eth0 -s 0/0 -p udp --dport 2049 -j DROP
#iptables -A INPUT -i eth0 -s 0/0 -p tcp --dport 2219 -j DROP
#iptables -A INPUT -i eth0 -s 0/0 -p udp --dport 2219 -j DROP
#iptables -A INPUT -i eth0 -s 0/0 -j DROP
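
To review the resulting rule set, and optionally persist it across reboots (assuming the standard Red Hat /etc/sysconfig/iptables location), something like the following can be used:

#iptables -L INPUT -n -v
#iptables-save > /etc/sysconfig/iptables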

The NFS service is located through the portmapper, so blocking portmapper port 111 also prevents clients from discovering and using the NFS service on port 2049 in the normal way.
Alternatively, you can use the TCP wrapper to filter access to your portmapper by adding the line:
portmapper: 10.226.43.0/24
to /etc/hosts.allow to allow access to portmapper only from subnet 10.226.43.0/24.

Also add the line below to /etc/hosts.deny to deny access to all other hosts not specified above.
portmapper:ALL

Filter out Internet traffic to the NFS service on the routers and firewalls (misc)
For organisations whose computers are visible on the Internet, if the NFS service is also exposed, block Internet traffic to ports 111 (portmapper), 2049 (NFS), and 2219 (mountd) on your routers or firewalls to prevent unauthorised access to these ports. With iptables set up as your firewall, use the following rules:
#iptables -A INPUT -i eth0 -d nfs-server -p tcp --dport 111 -j DROP
#iptables -A INPUT -i eth0 -d nfs-server -p udp --dport 111 -j DROP
#iptables -A INPUT -i eth0 -d nfs-server -p tcp --dport 2049 -j DROP
#iptables -A INPUT -i eth0 -d nfs-server -p udp --dport 2049 -j DROP
#iptables -A INPUT -i eth0 -d nfs-server -p tcp --dport 2219 -j DROP
#iptables -A INPUT -i eth0 -d nfs-server -p udp --dport 2219 -j DROP

Use the software tool NFSwatch to monitor NFS traffic (misc)
NFSwatch allows you to monitor NFS packets (traffic) flowing between the NFS client and server. It can be downloaded from ftp://ftp.cerias.purdue.edu/pub/tools/unix/netutils/nfswatch/. One good reason to monitor is that if malicious activity is under way or has already taken place, the log created by NFSwatch can be used to trace how and where it originated. To monitor NFS packets between nfs-server and nfs-client, use the command:
   #nfswatch -dst nfs-server -src nfs-client

all hosts                   Wed Aug 28 10:12:40 2002   Elapsed time:   00:03:10
Interval packets:      1098 (network)        818 (to host)          0 (dropped)
Total packets:        23069 (network)      14936 (to host)          0 (dropped)
                      Monitoring packets from interface lo
                     int   pct   total                       int   pct   total
ND Read                0    0%        0 TCP Packets          461   56%    13678
ND Write               0    0%        0 UDP Packets          353   43%     1051
NFS Read             160   20%      271 ICMP Packets           0    0%        0
NFS Write              1    0%        1 Routing Control        0    0%       36
NFS Mount              0    0%        7 Address Resolution     2    0%       76
YP/NIS/NIS+            0    0%        0 Reverse Addr Resol     0    0%        0
RPC Authorization    166   20%      323 Ethernet/FDDI Bdcst    4    0%      179
Other RPC Packets      5    1%       56 Other Packets          2    0%      131
                                 1 file system
File Sys        int   pct   total
tmp(32,17)        0    0%     15
   

Specify the IP address (or domain name) of the source (-src) and that of the destination (-dst).

Tuesday, June 4, 2013

Solaris-9 to 10 live upgrade




 root@sol1 ~ $ sudo mkdir /media/solaris10-iso
root@sol1 ~ $
root@sol1 ~ $ sudo mount /media/ACER/Users/sol1/Downloads/Solaris-10-u7-ga-x86x64-dvd.iso /media/solaris10-iso/ -t iso9660 -o loop
root@sol1 ~ $
root@sol1 ~ $ ls -lrt /media/solaris10-iso/
total 490
-r--r--r-- 1 root root 487593 2009-02-26 01:52 JDS-THIRDPARTYLICENSEREADME
-r--r--r-- 1 root root 6582 2009-02-26 01:55 Copyright
-r-xr-xr-x 1 root root 257 2009-03-31 00:20 installer
dr-xr-xr-x 2 root root 2048 2009-03-31 00:34 License
dr-xr-xr-x 7 root root 2048 2009-03-31 00:34 Solaris_10
dr-xr-xr-x 3 root root 2048 2009-03-31 00:34 boot
root@sol1 ~ $
root@sol1 ~ $ df -h |grep solaris
/dev/loop0 2.2G 2.2G 0 100% /media/solaris10-iso
root@sol1 ~ $
root@sol1 ~ $ cat /etc/exports |tail -1
/media/solaris10-iso SERVER1(ro,root_squash) SERVER2(ro,root_squash) SERVER3(ro,root_squash) SERVER4(ro,root_squash)
root@sol1 ~ $
root@sol1 ~ $ sudo exportfs -a



exportfs: /etc/exports [2]: Neither ‘subtree_check’ or ‘no_subtree_check’ specified for export “SERVER1:/media/solaris10-iso”.
Assuming default behaviour (‘no_subtree_check’).
NOTE: this default has changed since nfs-utils version 1.0.x

root@sol1 ~ $
root@sol1 ~ $ exportfs



/media/solaris10-iso
SERVER4

root@sol1 ~ $
root@sol1 ~ $ showmount -e
Export list for sol1:
/media/solaris10-iso SERVER1,SERVER2,…
/home/sol1/Virtual_Machines/Virtualbox_Share SERVER1,SERVER2,…

###Mounting the shared ISO image as NFS mountpoint in Solaris 9 Server###


bash-2.05# uname -a
SunOS rskvmsol9 5.9 Generic_112234-03 i86pc i386 i86pc
bash-2.05# /etc/init.d/nfs.client start
bash-2.05# ps -ef |grep -i nfs
root 172 1 0 03:27:24 ? 0:00 /usr/lib/nfs/lockd
daemon 170 1 0 03:27:23 ? 0:00 /usr/lib/nfs/statd
root 324 286 0 03:39:11 pts/1 0:00 grep -i nfs
bash-2.05#
bash-2.05# mkdir /solaris10-iso
bash-2.05#
bash-2.05# mount -F nfs NFS_SERVER:/media/solaris10-iso /solaris10-iso/
bash-2.05#
bash-2.05# df -h |grep solaris
NFS_SERVER:/media/solaris10-iso 2.2G 2.2G 0K 100% /solaris10-iso
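
If you want this NFS mount to persist across reboots on the Solaris 9 host (optional for a one-off upgrade), an /etc/vfstab entry along these lines could be used (NFS_SERVER is a placeholder, as above):

NFS_SERVER:/media/solaris10-iso - /solaris10-iso nfs - yes ro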

###Solaris Live Upgrade (From Solaris 9 to Solaris 10)###


bash-2.05# uname -a
SunOS rskvmsol9 5.9 Generic_112234-03 i86pc i386 i86pc
bash-2.05#
bash-2.05# cat /etc/release
Solaris 9 12/02 s9x_u2wos_10 x86
Copyright 2002 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 05 November 2002
bash-2.05#
bash-2.05# df -h
Filesystem size used avail capacity Mounted on
/dev/dsk/c0d0s0 992M 466M 466M 51% /
/proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
fd 0K 0K 0K 0% /dev/fd
swap 884M 16K 884M 1% /var/run
/dev/dsk/c0d0s5 481M 15K 433M 1% /opt
swap 884M 0K 884M 0% /tmp
/dev/dsk/c0d0s7 3.9G 9K 3.9G 1% /export/home
10.176.80.232:/media/solaris10-iso
2.2G 2.2G 0K 100% /solaris10-iso

###Added a new disk for performing live upgrade###

bash-2.05# devfsadm -c disk
bash-2.05# format
Searching for disks…done
AVAILABLE DISK SELECTIONS:
0. c0d0
/pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0
1. c0d1
/pci@0,0/pci-ide@1,1/ide@0/cmdk@1,0
Specify disk (enter its number): ^D
bash-2.05#
bash-2.05# format
Searching for disks…done
AVAILABLE DISK SELECTIONS:
0. c0d0
/pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0
1. c0d1
/pci@0,0/pci-ide@1,1/ide@0/cmdk@1,0
Specify disk (enter its number): 1
AVAILABLE DRIVE TYPES:
0. DEFAULT
1. other
Specify disk type (enter its number): 0
selecting c0d1
No current partition list
No defect list found
[disk formatted, no defect list found]
FORMAT MENU:
disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current - describe the current disk
format - format and analyze the disk
fdisk - run the fdisk program
repair - repair a defective sector
show - translate a disk address
label - write label to the disk
analyze - surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
save - save new disk/partition definitions
volname - set 8-character volume name
! - execute , then return
quit
format> p
Please run fdisk first.
format> fdisk
No fdisk table exists. The default partition for the disk is:
a 100% “SOLARIS System” partition
Type “y” to accept the default partition, otherwise type “n” to edit the
partition table.
y
format> ^D
bash-2.05#
bash-2.05# prtvtoc /dev/dsk/c0d0s0 | fmthard -s - /dev/rdsk/c0d1s0
fmthard: New volume table of contents now in place.
bash-2.05#
#Now the new disk is ready for creating the new boot environment
#Now we need to patch the Solaris 9 server before installing the Live Upgrade packages from Solaris 10 (the release to which the operating environment will be upgraded)
#If you skip patching, you can still create the new BE successfully with lucreate, but you cannot upgrade the inactive BE to Solaris 10 as desired: luupgrade will hit patching errors that prevent it from proceeding further (verified in practice). Hence it is strongly recommended to patch the running OS first (as suggested by http://sunsolve.sun.com/search/document.do?assetkey=1-61-206844-1)
bash-2.05# cd /var/tmp/solaris9_patches/
bash-2.05# ls -lrt
total 24
drwxr-xr-x 3 root other 512 Aug 30 2003 114483-04
drwxr-xr-x 3 root other 512 Sep 12 2005 115690-01
drwxr-xr-x 4 root other 512 Sep 15 2005 120465-01
drwxr-xr-x 3 root other 512 Aug 28 2006 114330-02
drwxr-xr-x 3 root other 512 Nov 13 2007 114194-11
drwxr-xr-x 4 root other 512 Apr 3 2008 137478-01
drwxr-xr-x 3 root other 512 May 29 2008 115167-08
drwxr-xr-x 6 root other 512 Jun 29 2009 114568-27
drwxr-xr-x 4 root other 512 Aug 21 21:19 114637-05
drwxr-xr-x 12 root other 512 Sep 23 21:57 114273-04
drwxr-xr-x 26 root other 1024 Jan 26 22:23 122301-48
bash-2.05# patchadd 120465-01/
Checking installed patches…
ERROR: This patch requires patch 117172-17
which has not been applied to the system.
Patchadd is terminating.
#Hence patch 117172-17 (a kernel patch) was also downloaded & installed first
#This kernel patch needs to be applied in single-user mode & the system must be restarted immediately afterwards
bash-2.05# ls -lrt
total 24
drwxr-xr-x 3 root other 512 Aug 30 2003 114483-04
drwxr-xr-x 24 root other 1024 Jan 22 2005 117172-17
drwxr-xr-x 3 root other 512 Sep 12 2005 115690-01
drwxr-xr-x 4 root other 512 Sep 15 2005 120465-01
drwxr-xr-x 3 root other 512 Aug 28 2006 114330-02
drwxr-xr-x 3 root other 512 Nov 13 2007 114194-11
drwxr-xr-x 4 root other 512 Apr 3 2008 137478-01
drwxr-xr-x 3 root other 512 May 29 2008 115167-08
drwxr-xr-x 6 root other 512 Jun 29 2009 114568-27
drwxr-xr-x 4 root other 512 Aug 21 21:19 114637-05
drwxr-xr-x 12 root other 512 Sep 23 21:57 114273-04
drwxr-xr-x 26 root other 1024 Jan 26 22:23 122301-48
#Run patchadd for all of these patches in the following order:
#init s && patchadd 117172-17 && init 6
#patchadd 120465-01 && init 6
#patchadd 114568-27 (No need of installing the Patch 115690-01 as it was obsoleted by 114568-27)
#patchadd 114194-11
#patchadd 115167-08
#patchadd 114483-04
#patchadd 137478-01
#patchadd 114330-02
#init 6
#These patches should be sufficient for proceeding further with live upgrade#
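#To confirm that a given patch actually landed, showrev can be checked, e.g. for the kernel patch above:
bash-2.05# showrev -p | grep 117172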
bash-2.05# pkginfo |grep lu
system SUNWcqhpc COMPAQ Hot Plug PCI controller driver
system SUNWctlu Print utilities for CTL locales
system SUNWdthez Desktop Power Pack Help Volumes
application SUNWj2pi Java Plug-in
system SUNWmdr Solaris Volume Manager, (Root)
system SUNWmdu Solaris Volume Manager, (Usr)
system SUNWpiclu PICL Libraries, and Plugin Modules (Usr)
system SUNWvolg Volume Management Graphical User Interface
system SUNWvolr Volume Management, (Root)
system SUNWvolu Volume Management, (Usr)
system SUNWxwhl X Window System & Graphics Header links in /usr/include
#So there is no existing installation of the Live Upgrade packages. Hence mount the Solaris 10 DVD & install the Live Upgrade packages on the system
bash-2.05#
bash-2.05# cd /solaris10-iso/Solaris_10/Tools/Installers/
bash-2.05# ./liveupgrade20 -noconsole -nodisplay
Sun Microsystems, Inc.
Binary Code License Agreement
Live Upgrade
READ THE TERMS OF THIS AGREEMENT AND ANY PROVIDED
SUPPLEMENTAL LICENSE TERMS (COLLECTIVELY “AGREEMENT”)
CAREFULLY BEFORE OPENING THE SOFTWARE MEDIA PACKAGE. BY
OPENING THE SOFTWARE MEDIA PACKAGE, YOU AGREE TO THE TERMS
OF THIS AGREEMENT. IF YOU ARE ACCESSING THE SOFTWARE
ELECTRONICALLY, INDICATE YOUR ACCEPTANCE OF THESE TERMS BY
SELECTING THE “ACCEPT” BUTTON AT THE END OF THIS
AGREEMENT. IF YOU DO NOT AGREE TO ALL THESE TERMS,
PROMPTLY RETURN THE UNUSED SOFTWARE TO YOUR PLACE
OF PURCHASE FOR A REFUND OR, IF THE SOFTWARE IS ACCESSED
ELECTRONICALLY, SELECT THE “DECLINE” BUTTON AT THE END OF
THIS AGREEMENT.



For inquiries please contact: Sun Microsystems, Inc., 4150
Network Circle, Santa Clara, California 95054, U.S.A.
bash-2.05#
bash-2.05# pkginfo |grep SUNWlu
application SUNWlucfg Live Upgrade Configuration
application SUNWlur Live Upgrade (root)
application SUNWluu Live Upgrade (usr)
bash-2.05# pkginfo -l SUNWlucfg SUNWlur SUNWluu
PKGINST: SUNWlucfg
NAME: Live Upgrade Configuration
CATEGORY: application
ARCH: i386
VERSION: 11.10,REV=2007.03.09.15.05
BASEDIR: /
VENDOR: Sun Microsystems, Inc.
DESC: Live Upgrade Configuration
PSTAMP: on10-adms-patch-x20080801100945
INSTDATE: Feb 17 2010 04:44
HOTLINE: Please contact your local service provider
STATUS: completely installed
FILES: 5 installed pathnames
3 shared pathnames
3 directories
35 blocks used (approx)
PKGINST: SUNWlur
NAME: Live Upgrade (root)
CATEGORY: application
ARCH: i386
VERSION: 11.10,REV=2005.01.09.21.46
BASEDIR: /
VENDOR: Sun Microsystems, Inc.
DESC: Live Upgrade (root)
PSTAMP: on10-adms-patch-x20080801100947
INSTDATE: Feb 17 2010 04:44
HOTLINE: Please contact your local service provider
STATUS: completely installed
FILES: 39 installed pathnames
9 shared pathnames
4 linked files
15 directories
13 executables
4230 blocks used (approx)
PKGINST: SUNWluu
NAME: Live Upgrade (usr)
CATEGORY: application
ARCH: i386
VERSION: 11.10,REV=2005.01.09.21.46
BASEDIR: /
VENDOR: Sun Microsystems, Inc.
DESC: Live Upgrade (usr)
PSTAMP: on10-adms-patch-x20080801100949
INSTDATE: Feb 17 2010 04:44
HOTLINE: Please contact your local service provider
STATUS: completely installed
FILES: 165 installed pathnames
7 shared pathnames
11 directories
45 executables
4022 blocks used (approx)
bash-2.05#
#Now create the new (inactive) BE, which is just a copy of the running Solaris 9 OS
#Here the current BE is named sol9 & the new BE sol10
#Before beginning, make sure you allocate enough disk space to the new environment (ABE), at least two to three times that of the current BE. Live Upgrade will fail during luupgrade if the space is not sufficient!
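#A quick way to compare the current root usage against the size of the target slice before running lucreate (device names as used in this example):
bash-2.05# df -k /
bash-2.05# prtvtoc /dev/rdsk/c0d1s0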
bash-2.05# lucreate -c sol9 -n sol10 -m /:c0d1s0:ufs
Discovering physical storage devices
Discovering logical storage devices
Cross referencing storage devices with boot environment configurations
Determining types of file systems supported
Validating file system requests
The device name expands to device path

Preparing logical storage devices
Preparing physical storage devices
Configuring physical storage devices
Configuring logical storage devices
Checking GRUB menu…
System has findroot enabled GRUB
Analyzing system configuration.
Comparing source boot environment file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Searching /dev for possible boot environment filesystem devices
Updating system configuration files.
The device
is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment .
Source boot environment is .
Creating boot environment .
Creating file systems on boot environment .
Creating file system for </> in zone on
.
Mounting file systems for boot environment .
Calculating required sizes of file systems for boot environment .
Populating file systems on boot environment .
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Copying.
Creating shared file system mount points.
WARNING: The file
contains a list of <2>
potential problems (issues) that were encountered while populating boot
environment .
INFORMATION: You must review the issues listed in
and determine if any must be resolved. In
general, you can ignore warnings about files that were skipped because
they did not exist or could not be opened. You cannot ignore errors such
as directories or files that could not be created, or file systems running
out of disk space. You must manually resolve any such problems before you
activate boot environment .
Creating compare databases for boot environment .
Creating compare database for file system
.
Creating compare database for file system </>.
Updating compare databases on boot environment .
Making boot environment bootable.
Updating bootenv.rc on ABE .
Skipping menu entry delete: Non existent GRUB menu

Population of boot environment successful.
Creation of boot environment successful.
bash-2.05#
###Now check for any errors & the status of the OS Environments
bash-2.05# ls -lrt /tmp/lucopy.errors.2707
-rw-r--r-- 1 root other 24 Feb 17 07:10 /tmp/lucopy.errors.2707
bash-2.05#
bash-2.05# cat /tmp/lucopy.errors.2707
1155232 blocks
0 blocks
bash-2.05#
bash-2.05# lustatus -l /tmp/lucopy.errors.2707
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
————————– ——– —— ——— —— ———-
sol9 yes yes yes no –
sol10 yes no no yes –
bash-2.05#
bash-2.05# lustatus sol10
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
————————– ——– —— ——— —— ———-
sol10 yes no no yes –
bash-2.05#
bash-2.05# lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
————————– ——– —— ——— —— ———-
sol9 yes yes yes no –
sol10 yes no no yes –
bash-2.05#
###Now we have a copy of Solaris 9 as the ABE (Alternate Boot Environment). This needs to be upgraded to Solaris 10 (from the DVD image) using luupgrade as follows:
bash-2.05# luupgrade -u -n sol10 -s /solaris10-iso/
System has findroot enabled GRUB
Skipping menu entry delete: Non existent GRUB menu

Copying failsafe kernel from media.
Uncompressing miniroot
Creating miniroot device
miniroot filesystem is
Mounting miniroot at

Validating the contents of the media .
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains version <10>.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE .
Checking for GRUB menu on ABE .
Checking for x86 boot partition on ABE.
Determining packages to install or upgrade for BE .
Performing the operating system upgrade of the BE .
CAUTION: Interrupting this process may leave the boot environment unstable
or unbootable.
Upgrading Solaris: 5% completed
Upgrading Solaris: 20% completed
Upgrading Solaris: 33% completed
Upgrading Solaris: 38% completed
Upgrading Solaris: 48% completed
Upgrading Solaris: 57% completed
Upgrading Solaris: 88% completed
Upgrading Solaris: 100% completed
Installation of the packages from this media is complete.
Deleted empty GRUB menu on ABE .
Updating package information on boot environment .
Package information successfully updated on boot environment .
Adding operating system patches to the BE .
The operating system patch installation is complete.
ABE boot partition backing deleted.
PBE GRUB has no capability information.
PBE GRUB has no versioning information.
ABE GRUB is newer than PBE GRUB. Updating GRUB.
GRUB update was successfull.
Configuring failsafe for system.
Failsafe configuration is complete.
INFORMATION: The file
on boot
environment contains a log of the upgrade operation.
INFORMATION: The file
on boot
environment contains a log of cleanup operations required.
INFORMATION: Review the files listed above. Remember that all of the files
are located on boot environment . Before you activate boot
environment , determine if any additional system maintenance is
required or if additional media of the software distribution must be
installed.
The Solaris upgrade of the boot environment is complete.
Installing failsafe
Failsafe install is complete.
bash-2.05#
###The ABE upgrade has completed successfully. Now activate the ABE as the active BE & reboot (using init 6, strictly not the reboot command) to check whether it boots properly.
bash-2.05# luactivate sol10
System has findroot enabled GRUB
A Live Upgrade Sync operation will be performed on startup of boot environment .
Generating boot-sign for ABE
Generating partition and slice information for ABE
No boot menu exists. Creating new menu file
Generating multiboot menu entries for ABE.
Disabling splashimage
Re-enabling splashimage
GRUB menu has no default setting
No more bootadm entries. Deletion of bootadm entries is complete.
GRUB menu default setting is unchanged
Done eliding bootadm entries.

**********************************************************************
The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.
**********************************************************************
In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:
1. Boot from the Solaris failsafe or boot in Single User mode from Solaris
Install CD or Network.
2. Mount the Parent boot environment root slice to some directory (like
/mnt). You can use the following command to mount:
mount -Fufs /dev/dsk/c0d0s0 /mnt
3. Run utility with out any arguments from the Parent boot
environment root slice, as shown below:

/mnt/sbin/luactivate
4. luactivate, activates the previous working boot environment and
indicates the result.
5. Exit Single User mode and reboot the machine.
**********************************************************************
Modifying boot archive service
Propagating findroot GRUB for menu conversion.
File
propagation successful
File propagation successful
File propagation successful
File propagation successful
Deleting stale GRUB loader from all BEs.
File deletion successful
File deletion successful
File deletion successful
Activation of boot environment successful.
##After reboot, Solaris 10 booted successfully and is working perfectly
bash-3.00# uname -a
SunOS rskvmsol9 5.10 Generic_139556-08 i86pc i386 i86pc
bash-3.00#
bash-3.00# lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
————————– ——– —— ——— —— ———-
sol9 yes no no yes –
sol10 yes yes yes no -
bash-3.00# df -h
Filesystem size used avail capacity Mounted on
/dev/dsk/c0d1s0 4.4G 2.5G 1.9G 58% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 549M 756K 548M 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
sharefs 0K 0K 0K 0% /etc/dfs/sharetab
fd 0K 0K 0K 0% /dev/fd
swap 548M 48K 548M 1% /tmp
swap 548M 24K 548M 1% /var/run
/dev/dsk/c0d0s7 3.9G 4.0M 3.9G 1% /export/home
###That's all! Live Upgrade is a nice & pretty straightforward feature that is quite beneficial for production environments where minimal downtime is desirable. We now have two boot environments &, if we wish, we can remove the older one once we are sure the new BE is stable & performing well without any issues.
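For example, once the sol10 BE has proven stable, the old environment could be removed with ludelete (check lustatus first); a sketch:
bash-3.00# lustatus
bash-3.00# ludelete sol9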
Note:
====
1. If we receive error messages starting with "This system contains only a single GRUB menu for all boot environments..." while executing luupgrade, we can use the following solution to fix it and proceed further.
bash-2.05# luupgrade -u -n sol10 -s /solaris10-iso/
This system contains only a single GRUB menu for all boot environments. To
enhance reliability and improve the user experience, live upgrade requires
you to run a one time conversion script to migrate the system to multiple
redundant GRUB menus. This is a one time procedure and you will not be
required to run this script on subsequent invocations of Live Upgrade
commands. To run this script invoke:
/usr/lib/lu/lux86menu_propagate /path/to/new/Solaris/install/image OR
/path/to/LiveUpgrade/patch
where /path/to/new/Solaris/install/image is an absolute
path to the Solaris media or netinstall image from which you installed the
Live Upgrade packages and /path/to/LiveUpgrade/patch is an absolute path
to the Live Upgrade patch from which this Live Upgrade script was patched
into the system.
##Here we encountered an error related to creating multiple GRUB menus to reflect both BEs. To fix this, issue the command suggested by the error output
bash-2.05# /usr/lib/lu/lux86menu_propagate /solaris10-iso/
Validating the contents of the media
.
The media is a standard Solaris media.
The media contains a Solaris operating system image.
The media contains version <10>.
Installing latest Live Upgrade package/patch on all BEs
Updating Live Upgrade packages on all BEs
Successfully updated Live Upgrade packages on all BEs
Successfully extracted GRUB from media
System has no GRUB slice
Installing GRUB bootloader to all GRUB based BEs
System does not have an applicable x86 boot partition
install GRUB to all BEs successful
Converting root entries to findroot
Skipping elide of bootadm entries: Non-existent or zero length GRUB menu.
File
deletion successful
Successfully deleted GRUB_slice file
File deletion successful
Successfully deleted GRUB_root file
Propagating findroot GRUB for menu conversion.
File propagation successful
File propagation successful
File propagation successful
File propagation successful
Deleting stale GRUB loader from all BEs.
File deletion successful
File deletion successful
File deletion successful
Conversion was successful ##Now run luupgrade again
2. If your live upgrade fails at the end of the luupgrade run, it is most likely due to a space constraint on the disk on which the ABE is created. You can mount the disk at a temporary mount point & check the disk utilisation; if it is 100% full, this is the reason for the failure. In such a case, we have to delete the ABE using ludelete, re-format the disk, and start the live upgrade process all over again from scratch. The following is an example of such a failure:
bash-2.05# luupgrade -u -n sol10 -s /solaris10-iso/
System has findroot enabled GRUB
Skipping menu entry delete: Non existent GRUB menu

Copying failsafe kernel from media.
Uncompressing miniroot
Creating miniroot device
miniroot filesystem is
Mounting miniroot at

Validating the contents of the media .
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains version <10>.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE .
Checking for GRUB menu on ABE .
Checking for x86 boot partition on ABE.
Determining packages to install or upgrade for BE .
Performing the operating system upgrade of the BE .
CAUTION: Interrupting this process may leave the boot environment unstable
or unbootable.
Upgrading Solaris: 100% completed
Installation of the packages from this media is complete.
Updating package information on boot environment .
ERROR: Unable to update package instance information on boot environment .
ABE boot partition backing deleted.
ABE GRUB has no capability information. Skipping GRUB upgrade.
Configuring failsafe for system.
Failsafe configuration is complete.
INFORMATION: The file
on boot
environment contains a log of the upgrade operation.
INFORMATION: The file
on boot
environment contains a log of cleanup operations required.
WARNING: <99> packages failed to install properly on boot environment .
INFORMATION: The file
on
boot environment contains a list of packages that failed to
upgrade or install properly.
INFORMATION: Review the files listed above. Remember that all of the files
are located on boot environment . Before you activate boot
environment , determine if any additional system maintenance is
required or if additional media of the software distribution must be
installed.
The Solaris upgrade of the boot environment failed.
Installing failsafe
cp: /tmp/.luupgrade.inf.4648/boot/multiboot: No space left on device
cp: /tmp/.luupgrade.inf.4648/boot/x86.miniroot-safe: No space left on device
ERROR: Failsafe install failed.
bash-2.05#
##Here the reason is clearly provided in the output. Now check & confirm whether it is due to a space constraint:
bash-2.05# mount /dev/dsk/c0d1s0 /a
bash-2.05# df -h /a
Filesystem size used avail capacity Mounted on
/dev/dsk/c0d1s0 992M 992M 0 100% /a
bash-2.05# umount /a
##Delete the failed ABE as in the following example,
bash-2.05# lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
————————– ——– —— ——— —— ———-
sol9 yes yes yes no –
sol10 yes no no yes -
bash-2.05# ludelete -f sol10
System has findroot enabled GRUB
Skipping menu entry delete: Non existent GRUB menu

Determining the devices to be marked free.
Updating boot environment configuration database.
Updating boot environment description database on all BEs.
Updating all boot environment configuration databases.
Boot environment deleted.
bash-2.05# lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
————————– ——– —— ——— —— ———-
sol9 yes yes yes no –