Monday, October 7, 2013

VCS 6.0 configuration with Solaris 11 zfs and zones


1: Create the zpool:
zpool create spoo c2t2d0

2: Create the ZFS file system:
zfs create spoo/mnt

3: Create a sparse zone:
zonecfg -z testzone

testzone> create
testzone> set zonepath=/spoo/mnt/testzone
testzone> set ip-type=shared
testzone> remove anet
testzone> add net
testzone:net> set ip-address=192.168.0.200
testzone:net> set configure-allowed-address=false
testzone:net> set physical=net0
testzone:net> set defrouter=192.168.0.1
testzone:net> end
testzone> set pool=default_pool
testzone> verify
testzone> commit
testzone> exit
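
To confirm the configuration before going further, a quick sanity check with the standard Solaris 11 tools:

zoneadm list -cv
zonecfg -z testzone info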

4: On solaris-3, enable the resource pools facility:

pooladm -e

solaris-3# zoneadm -z testzone boot
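
Note: a newly configured zone is in the configured state and must be installed before it will boot; if the boot fails for that reason, install it first:

zoneadm -z testzone install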

 
zlogin -C testzone

and complete the zone's initial system configuration from the console.
5: After that, configure VCS with the commands below:

haconf -makerw
hagrp -add testgrp
hagrp -modify testgrp SystemList solaris-3 0 solaris-4 1
hagrp -modify testgrp AutoStartList solaris-3
hagrp -modify testgrp Parallel 0
hares -add vcspool Zpool testgrp
hares -modify vcspool Critical 1
hares -modify vcspool ChkZFSMounts 1
hares -modify vcspool FailMode continue
hares -modify vcspool ForceOpt 1
hares -modify vcspool ForceRecoverOpt 0
hares -modify vcspool PoolName spoo
hares -modify vcspool AltRootPath /
hares -modify vcspool ZoneResName vcszone
hares -modify vcspool DeviceDir -delete -keys
hares -modify vcspool Enabled 1
hazonesetup -g testgrp -r vcszone -z testzone -p abc123 -a -s solaris-3,solaris-4
hares -link vcszone vcspool
haconf -dump -makero
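
To confirm the group and its resources registered correctly, the standard VCS status commands can be used:

hastatus -sum
hares -state vcspool
hares -state vcszone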

6: Copy the index and testzone.xml files from solaris-3:/etc/zones to solaris-4:/etc/zones.
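
For example, run the following from solaris-4 (assuming root ssh access between the nodes; if solaris-4 already has zones of its own, merge the testzone line into its existing index instead of overwriting the file):

scp root@solaris-3:/etc/zones/testzone.xml /etc/zones/
scp root@solaris-3:/etc/zones/index /etc/zones/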

7: Edit /etc/zones/index on solaris-4 and change the state of testzone to configured:

testzone:configured:/spoo/mnt/testzone:

8: Probe the resources of testgrp on both servers.
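
For example, using the standard hares probe command on each system:

hares -probe vcspool -sys solaris-3
hares -probe vcspool -sys solaris-4
hares -probe vcszone -sys solaris-3
hares -probe vcszone -sys solaris-4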

9: Halt testzone on solaris-3 and export spoo:

zoneadm -z testzone halt
zpool export spoo

10: Now run the command:

hagrp -enable testgrp -sys solaris-3
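
To verify the whole setup end to end, you can online the group and switch it between the nodes with the standard hagrp operations:

hagrp -online testgrp -sys solaris-3
hagrp -switch testgrp -to solaris-4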

Linux NFS troubleshooting

NFS (Network File System) is a widely used but dated protocol that allows computers to share files over a network. The main problems with NFS are that it relies on the inherently insecure UDP protocol, transactions are not encrypted, and hosts and users cannot be easily authenticated. Below we describe a number of measures you can apply to mitigate these security problems.

Let us clarify how the NFS service operates. An NFS server is the server holding a file system (or directory), called the NFS file system (or NFS directory), that will be exported to an NFS client. The NFS client then has to import (or mount) the exported file system (directory) before it can access it. We annotate each measure below with on server, on client, on client & server, or misc, meaning the step is performed on the NFS server, on the NFS client, on both, or elsewhere, respectively.
NFS file systems should be installed on a separate disk or partition (on server)
By keeping the exported file systems on a separate partition or disk, we ensure that malicious users cannot fill up the entire disk by writing large files onto it, which could otherwise crash other services running from the same disk.
Prevent normal users on an NFS client from mounting an NFS file system (on server)
This can be done by adding the parameter 'secure' to an entry in /etc/exports, such as:
/home nfs-client(secure)
where the directory /home is the file system to be exported to the NFS client at address nfs-client (specify the IP address or domain name of your NFS client). The 'secure' option requires mount requests to originate from a privileged port (below 1024), which only root on the client can bind.
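
After any change to /etc/exports, re-export the file systems so the change takes effect (exportfs is the standard tool; -v lists the current exports):

#exportfs -ra
#exportfs -v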

Export an NFS file system in an appropriate permission mode (on server)
Let's say you only need read-only access to your exported NFS file system. Then the file system should be exported read-only to prevent accidental or malicious modification of its files. This is done by specifying the parameter 'ro' in /etc/exports:
/home nfs-client(ro)

Restrict exporting an NFS file system to a certain set of NFS clients (on server)
Specify only a specific set of NFS clients that will be allowed to mount an NFS file system. If possible, use numeric IP addresses or fully qualified domain names, instead of aliases.
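For example, an /etc/exports entry restricted to one address and one fully qualified domain name (both hypothetical):

# 10.226.43.11 and nfs-client.example.com are hypothetical clients
/home 10.226.43.11(ro) nfs-client.example.com(ro)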
Use the 'root_squash' option in /etc/exports on the NFS server if possible (on server)
When this option is used, the user ID 'root' on the NFS client is mapped to the unprivileged user 'nobody' on the NFS server during mounting and file access. This prevents root on the NFS client from exercising superuser privilege on the NFS server and thereby, perhaps, illegally modifying files on it. Here is an example:
/home nfs-client(root_squash)

Disable suid (superuser ID) on an NFS file system (on client)
Add the 'nosuid' option (no set-user-ID privilege) to the relevant entry in /etc/fstab (this file determines which NFS file systems are mounted automatically at startup). This prevents files with suid bits set on the NFS server, e.g., Trojan horse files, from being executed with superuser privilege on the NFS client, which could lead to root compromise on the client; the root user on the client might also execute such suid files accidentally. Here is an example of 'nosuid'; an entry in /etc/fstab on the client may contain:
nfs-server:/home /mnt/nfs nfs ro,nosuid 0 0

where nfs-server is the IP address or domain name of the NFS server and /home is the directory on the NFS server to be mounted to the client computer at the directory /mnt/nfs. Alternatively, the 'noexec' option can be used to disable any file execution at all.
nfs-server:/home /mnt/nfs nfs ro,nosuid,noexec 0 0

Install the most recent patches for NFS and portmapper (on client & server)
NFS has appeared in the top-ten most common vulnerabilities reported by CERT and has been heavily exploited. This means the NFS server and portmapper on your system must be kept up to date with security patches.
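On a Red Hat system of that era you can quickly check which versions are installed before patching (rpm is the standard package tool; the package names nfs-utils and portmap may differ on other distributions):

#rpm -q nfs-utils portmap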
Perform encryption over NFS traffic using SSH (on client & server)
Apart from using Secure Shell (SSH) for secure remote access, we can use it to tunnel between an NFS client and server so that NFS traffic is encrypted. The steps below will guide you through encrypting NFS traffic using SSH.
Here is the simple diagram to show the concept of how NFS and SSH services cooperate.
nfs-client                        nfs-server
mount --- SSH <=================> SSHD --- NFS

From this figure: when you mount an NFS directory from a client computer, you mount through SSH. Once the mount is done, NFS traffic in both directions is encrypted and therefore secure.
In the figure the NFS server is located at address nfs-server (use either the IP address or domain name of your NFS server instead), and the NFS client is at address nfs-client. Make sure that in both systems you have SSH and NFS related services already installed so you can use them.
The configurations required on the NFS server and the NFS client are described in the two sections below.

NFS server configuration

The following two steps are what we have to do on the NFS server.

Export an NFS directory to itself
For example, if the NFS server's IP address is 10.226.43.154 and the NFS directory to be exported is /home, then add the following line to /etc/exports
/home 10.226.43.154(rw,root_squash)

The reason for exporting the directory /home to itself, instead of to an NFS client's IP address in the ordinary fashion, is that, per the figure above, we feed the NFS data on the server to SSHD, which is running at 10.226.43.154, instead of to the client computer as in the usual case. The NFS data is then forwarded securely to the client computer through the tunnel.
Note that the exported directory allows read and write access (rw). root_squash means that whoever initiates the mount will not obtain root privilege on this NFS server.
Restart NFS and SSH daemons
Using Red Hat 7.2, you can manually start NFS and SSHD by issuing the following commands:
#/sbin/service nfs restart
#/sbin/service sshd restart

If you want to have them started automatically at startup time, with Red Hat 7.2 add the two lines below to the startup file /etc/rc.d/rc.local.
/sbin/service nfs start
/sbin/service sshd start

The term nfs in the commands above is a shell script that will start off two services, namely, NFS and MOUNTD.
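
Alternatively, instead of editing rc.local, Red Hat's chkconfig can register the services in the default runlevels (a minor variation on the approach above):

/sbin/chkconfig nfs on
/sbin/chkconfig sshd on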

NFS client configuration

The three steps below show what we have to do on the NFS client.

Find the ports of NFS and MOUNTD on the NFS server
Let's say you are now on the NFS client computer. To find the NFS and MOUNTD ports on the NFS server, use the command:
#rpcinfo -p nfs-server
   
   program vers proto   port
   100000    2   tcp    111  portmapper
   100000    2   udp    111  portmapper
   100003    2   tcp   2049  nfs
   100003    2   udp   2049  nfs
   100021    1   udp   1136  nlockmgr
   100021    3   udp   1136  nlockmgr
   100021    4   udp   1136  nlockmgr
   100011    1   udp    789  rquotad
   100011    2   udp    789  rquotad
   100011    1   tcp    792  rquotad
   100011    2   tcp    792  rquotad
   100005    2   udp   2219  mountd
   100005    2   tcp   2219  mountd
   
Note the lines containing nfs and mountd; the port column gives their ports. Here nfs is on port 2049 and mountd is on port 2219.
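
Note that mountd normally registers on a dynamic port, so re-check it after any server restart. A quick filter using the same standard tools:

#rpcinfo -p nfs-server | egrep 'nfs|mountd'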

Setup the tunnel using SSH

On the NFS client computer, bind an SSH port to NFS port 2049.
#ssh -f -c blowfish -L 7777:nfs-server:2049 -l tony nfs-server /bin/sleep 86400
#tony@nfs-server's password:
#
where:
-c blowfish means SSH will use the algorithm blowfish to perform encryption.

-L 7777:nfs-server:2049 means binding the SSH client at port 7777 (or any other port that you want) to communicate with the NFS server at address nfs-server on port 2049.

-l tony nfs-server means log in to the server at address nfs-server (specify either the IP address or domain name of the authentication server) as the user tony.

/bin/sleep 86400 keeps the session, and thus the tunnel, alive for one day (86,400 seconds) without spawning a shell on the client computer. You can specify any larger number.

The line #tony@nfs-server's password: prompts the user tony for a password to complete authentication.
Also on the NFS client computer, bind another SSH port with MOUNTD port 2219.
#ssh -f -c blowfish -L 8888:nfs-server:2219 -l tony nfs-server /bin/sleep 86400
#tony@nfs-server's password:
#
where:
-L 8888:nfs-server:2219 means binding this SSH client at port 8888 (or any other port that you want but not 7777 because you already used 7777) to communicate with the NFS server at address nfs-server on port 2219.
On the NFS client computer, mount the NFS directory /home through the two SSH ports 7777 and 8888 at a local directory, say, /mnt/nfs.
#mount -t nfs -o tcp,port=7777,mountport=8888 localhost:/home /mnt/nfs

Normally, the command mount mounts the remote NFS directory (/home) at the remote host's IP address (or domain name) onto the local directory (/mnt/nfs). The reason we mount from localhost instead of nfs-server is that the data, after decryption at the client end of the tunnel (see the figure above), is on localhost, not the remote host.
Alternatively, to keep the mount options in /etc/fstab, add the following line:
localhost:/home /mnt/nfs/ nfs tcp,rsize=8192,wsize=8192,intr,rw,bg,nosuid,port=7777,mountport=8888,noauto
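Note that the noauto option above prevents the entry from being mounted at boot (the SSH tunnels must exist first); with it in place, the fstab entry simply shortens the manual command:

#mount /mnt/nfs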

Allow only traffic from authorised NFS clients to the NFS server (on server)
Supposing that an NFS server provides only the NFS service and nothing else, there are three ports in use on the server, i.e., RPC Portmapper (port 111), NFS (port 2049), and Mountd (port 2219). Here we can filter the traffic that reaches the NFS server. Using an iptables firewall running locally on the NFS server (iptables must be installed to use the following commands), allow only traffic from authorised NFS clients to the server.
Allow traffic from the authorised subnet 10.226.43.0/24 to the Portmapper, NFS, and Mountd ports (iptables' --dport match requires a protocol; add matching -p udp rules if you also serve NFS over UDP):
#iptables -A INPUT -i eth0 -p tcp -s 10.226.43.0/24 --dport 111 -j ACCEPT
#iptables -A INPUT -i eth0 -p tcp -s 10.226.43.0/24 --dport 2049 -j ACCEPT
#iptables -A INPUT -i eth0 -p tcp -s 10.226.43.0/24 --dport 2219 -j ACCEPT

Deny everything else.
#iptables -A INPUT -i eth0 -p tcp -s 0/0 --dport 111 -j DROP
#iptables -A INPUT -i eth0 -p tcp -s 0/0 --dport 2049 -j DROP
#iptables -A INPUT -i eth0 -p tcp -s 0/0 --dport 2219 -j DROP
#iptables -A INPUT -i eth0 -s 0/0 -j DROP
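
To verify the resulting rule set, list it with the standard options:

#iptables -L INPUT -n --line-numbers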

Basically, clients locate the NFS service through the Portmapper, so blocking Portmapper port 111 also prevents them from discovering NFS port 2049.
Alternatively, you can use the TCP wrapper to filter access to your portmapper by adding the line:
portmapper: 10.226.43.0/24
to /etc/hosts.allow to allow access to portmapper only from subnet 10.226.43.0/24.

Also add the line below to /etc/hosts.deny to deny access to all other hosts not specified above.
portmapper:ALL

Filter out Internet traffic to the NFS service on the routers and firewalls (misc)
In some cases, for organisations whose computers are visible on the Internet, if the NFS service is also visible, we may need to block Internet traffic to ports 111 (Portmapper), 2049 (NFS), and 2219 (Mountd) on routers or firewalls to prevent unauthorised access to these ports. With iptables set up as your firewall, use the following rules:
#iptables -A INPUT -i eth0 -p tcp -d nfs-server --dport 111 -j DROP
#iptables -A INPUT -i eth0 -p tcp -d nfs-server --dport 2049 -j DROP
#iptables -A INPUT -i eth0 -p tcp -d nfs-server --dport 2219 -j DROP

Use the software tool NFSwatch to monitor NFS traffic (misc)
NFSwatch allows you to monitor NFS packets (traffic) flowing between the NFS client and server. It can be downloaded from ftp://ftp.cerias.purdue.edu/pub/tools/unix/netutils/nfswatch/. One good reason to monitor is that if malicious activity is going on or has already taken place, we can use the log created by NFSwatch to trace how and where it originated. To monitor NFS packets between nfs-server and nfs-client, use the command:
   #nfswatch -dst nfs-server -src nfs-client

all hosts                   Wed Aug 28 10:12:40 2002   Elapsed time:   00:03:10
Interval packets:      1098 (network)        818 (to host)          0 (dropped)
Total packets:        23069 (network)      14936 (to host)          0 (dropped)
                      Monitoring packets from interface lo
                     int   pct   total                       int   pct   total
ND Read                0    0%        0 TCP Packets          461   56%    13678
ND Write               0    0%        0 UDP Packets          353   43%     1051
NFS Read             160   20%      271 ICMP Packets           0    0%        0
NFS Write              1    0%        1 Routing Control        0    0%       36
NFS Mount              0    0%        7 Address Resolution     2    0%       76
YP/NIS/NIS+            0    0%        0 Reverse Addr Resol     0    0%        0
RPC Authorization    166   20%      323 Ethernet/FDDI Bdcst    4    0%      179
Other RPC Packets      5    1%       56 Other Packets          2    0%      131
                                 1 file system
File Sys        int   pct   total
tmp(32,17)        0    0%     15
   

Specify the IP address (or domain name) of the source (-src) and that of the destination (-dst).