Tuesday, November 27, 2012

Re-create a ZFS Root Pool and Restore Root Pool Snapshots


All the steps are performed on the local system.
  1. Boot from an installation DVD or the network.
    • SPARC - Select one of the following boot methods:
      ok boot net -s
      ok boot cdrom -s
      If you don't use the -s option, you'll need to exit the installation program.
    • x86 – Select the option for booting from the DVD or the network. Then, exit the installation program.
  2. Mount the remote snapshot file system if you have sent the root pool snapshots as a file to a remote system. For example:
    # mount -F nfs remote-system:/rpool/snaps /mnt
    If your network services are not configured, you might need to specify the remote-system's IP address.
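    If the interface itself isn't configured in the miniroot, a minimal sketch of bringing it up by hand looks like the following (the interface name and addresses are made up for illustration):
    # ifconfig e1000g0 plumb
    # ifconfig e1000g0 192.168.1.10 netmask 255.255.255.0 up
    # mount -F nfs 192.168.1.20:/rpool/snaps /mnt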
  3. If the root pool disk is being replaced and does not contain a disk label that is usable by ZFS, you must relabel the disk first. See the relabeling procedure below, then continue with re-creating the pool.

Relabeling the Root Pool Disk

You might need to replace a disk in the root pool for the following reasons:
  • The root pool is too small and you want to replace it with a larger disk.
  • The root pool disk is failing. If the disk is failing so that the system won't boot, you'll need to boot from alternate media, such as a CD or the network, before you replace the root pool disk.
Part of recovering the root pool might be to replace or relabel the root pool disk. Follow the steps below to relabel and replace the root pool disk.
  1. Physically attach the replacement disk.
  2. If the replacement disk has an EFI label, the fdisk output looks similar to the following on an x86 system.
    # fdisk /dev/rdsk/c1t1d0p0
    selecting c1t1d0p0
      Total disk size is 8924 cylinders
                 Cylinder size is 16065 (512 byte) blocks
    
                                                   Cylinders
          Partition   Status    Type          Start   End   Length    %
          =========   ======    ============  =====   ===   ======   ===
              1                 EFI               0  8924    8925    100
    .
    .
    .
    Enter Selection: 6
    Select 6 to exit fdisk; the next step shows how to change this to a Solaris partition.
  3. Select one of the following to create a Solaris fdisk partition for a disk on an x86 system or create an SMI label for a disk on a SPARC system.
    • On an x86 system, create a Solaris fdisk partition that can be used for booting by selecting 1=SOLARIS2. You can create a Solaris partition by using the fdisk -B option that creates one Solaris partition that uses the whole disk. Beware that the following command uses the whole disk.
      # fdisk -B /dev/rdsk/c1t1d0p0
      Display the newly created Solaris partition by running fdisk again. For example:
       Total disk size is 8924 cylinders
                   Cylinder size is 16065 (512 byte) blocks
      
                                                     Cylinders
            Partition   Status    Type          Start   End   Length    %
            =========   ======    ============  =====   ===   ======   ===
                1       Active    Solaris2          1  8923    8923    100
      .
      .
      .
      Enter Selection: 6
    • On a SPARC based system, make sure you have an SMI label. Use the format -e command to determine whether the disk label is EFI or SMI and to relabel the disk, if necessary. In the output below, the partition table is reported in sectors rather than cylinders, which indicates an EFI label.
      # format -e
      Searching for disks...done
      AVAILABLE DISK SELECTIONS:
             0. c1t0d0 
                /pci@8,600000/SUNW,qlc@4/fp@0,0/ssd@w21000004cf7fac8a,0
             1. c1t1d0 
                /pci@8,600000/SUNW,qlc@4/fp@0,0/ssd@w21000004cf7fad21,0
      Specify disk (enter its number): 1
      selecting c1t1d0
      [disk formatted]
      format> p
      partition> p
      Current partition table (original):
      Total disk sectors available: 71116541 + 16384 (reserved sectors)
      
      Part      Tag    Flag     First Sector        Size        Last Sector
        0        usr    wm                34      33.91GB         71116541    
        1 unassigned    wm                 0          0              0    
        2 unassigned    wm                 0          0              0    
        3 unassigned    wm                 0          0              0    
        4 unassigned    wm                 0          0              0    
        5 unassigned    wm                 0          0              0    
        6 unassigned    wm                 0          0              0    
        7 unassigned    wm                 0          0              0    
        8   reserved    wm          71116542       8.00MB         71132925    
      partition> label
      [0] SMI Label
      [1] EFI Label
      Specify Label type[1]: 0
      Auto configuration via format.dat[no]? 
      Auto configuration via generic SCSI-2[no]? 
      partition>  
  4. Re-create the root pool. For example:
    # zpool create -f -o failmode=continue -R /a -m legacy -o cachefile=/etc/zfs/zpool.cache rpool c1t1d0s0
  5. Restore the root pool snapshots. This step might take some time. For example:
    # cat /mnt/rpool.0804 | zfs receive -Fdu rpool
    Using the -u option means that the restored archive is not mounted when the zfs receive operation completes.
    To restore the actual root pool snapshots that are stored in a pool on a remote system, use syntax similar to the following:
    # rsh remote-system zfs send -Rb tank/snaps/rpool@snap1 | zfs receive -F rpool
  6. Verify that the root pool datasets are restored. For example:
    # zfs list
  7. Set the bootfs property on the root pool BE. For example:
    # zpool set bootfs=rpool/ROOT/zfsBE rpool
  8. Install the boot blocks on the new disk.
    • SPARC:
      # installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0
    • x86:
      # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
  9. Reboot the system.
    # init 6
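After the system comes back up, a quick sanity check (not part of the original procedure) is to confirm that the pool is healthy and the datasets were restored:
# zpool status rpool
# zfs list -r rpool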

Wednesday, November 7, 2012

Check storage LUN mapping with a Solaris host


cfgadm, fcinfo and LUN mapping on Solaris

OK, so you have a Solaris 10 host with SAN-connected storage – how do you make sense of the LUNs you can see? What tools can be used to interrogate the storage and build a mental image of what you have been presented with? This article is intended as a brief introduction to some of the commands in Solaris that will help you do exactly that.
Firstly, in order to allow your storage admin to map you some LUNs, you'll need to provide him with the WWNs of the HBA ports in your server. This is so he can map the LUNs you've asked for to the WWNs of your server. These can be found using the fcinfo command. Start with 'fcinfo hba-port'. Note that the output below shows all four of my ports, only two of which are connected and online (c3 and c5).
...
        OS Device Name: /dev/cfg/c3
...
        State: online
        Supported Speeds: 1Gb 2Gb 4Gb
        Current Speed: 4Gb

...
        OS Device Name: /dev/cfg/c5
...
        State: online
        Supported Speeds: 1Gb 2Gb 4Gb
        Current Speed: 4Gb
The full output;
bash-3.00# fcinfo hba-port
HBA Port WWN: 21000024ff295a34
        OS Device Name: /dev/cfg/c2
        Manufacturer: QLogic Corp.
        Model: 375-3356-02
        Firmware Version: 05.03.02
        FCode/BIOS Version:  BIOS: 2.02; fcode: 2.01; EFI: 2.00;
        Serial Number: 0402R00-1023835637
        Driver Name: qlc
        Driver Version: 20100301-3.00
        Type: unknown
        State: offline
        Supported Speeds: 1Gb 2Gb 4Gb
        Current Speed: not established
        Node WWN: 20000024ff295a34
HBA Port WWN: 21000024ff295a35
        OS Device Name: /dev/cfg/c3
        Manufacturer: QLogic Corp.
        Model: 375-3356-02
        Firmware Version: 05.03.02
        FCode/BIOS Version:  BIOS: 2.02; fcode: 2.01; EFI: 2.00;
        Serial Number: 0402R00-1023835637
        Driver Name: qlc
        Driver Version: 20100301-3.00
        Type: N-port
        State: online
        Supported Speeds: 1Gb 2Gb 4Gb
        Current Speed: 4Gb
        Node WWN: 20000024ff295a35
HBA Port WWN: 21000024ff295a36
        OS Device Name: /dev/cfg/c4
        Manufacturer: QLogic Corp.
        Model: 375-3356-02
        Firmware Version: 05.03.02
        FCode/BIOS Version:  BIOS: 2.02; fcode: 2.01; EFI: 2.00;
        Serial Number: 0402R00-1023835638
        Driver Name: qlc
        Driver Version: 20100301-3.00
        Type: unknown
        State: offline
        Supported Speeds: 1Gb 2Gb 4Gb
        Current Speed: not established
        Node WWN: 20000024ff295a36
HBA Port WWN: 21000024ff295a37
        OS Device Name: /dev/cfg/c5
        Manufacturer: QLogic Corp.
        Model: 375-3356-02
        Firmware Version: 05.03.02
        FCode/BIOS Version:  BIOS: 2.02; fcode: 2.01; EFI: 2.00;
        Serial Number: 0402R00-1023835638
        Driver Name: qlc
        Driver Version: 20100301-3.00
        Type: N-port
        State: online
        Supported Speeds: 1Gb 2Gb 4Gb
        Current Speed: 4Gb
        Node WWN: 20000024ff295a37
bash-3.00# 
It is the 'HBA Port WWN' that you need to give to your storage admin. He may appreciate the full output, which will confirm a few other items for him, such as the link speed and your HBA manufacturer and driver version numbers.
Using the -l (minus ell) flag shows additional information such as link statistics…
bash-3.00# fcinfo hba-port -l 21000024ff295a37
HBA Port WWN: 21000024ff295a37
        OS Device Name: /dev/cfg/c5
        Manufacturer: QLogic Corp.
        Model: 375-3356-02
        Firmware Version: 05.03.02
        FCode/BIOS Version:  BIOS: 2.02; fcode: 2.01; EFI: 2.00;
        Serial Number: 0402R00-1023835638
        Driver Name: qlc
        Driver Version: 20100301-3.00
        Type: N-port
        State: online
        Supported Speeds: 1Gb 2Gb 4Gb
        Current Speed: 4Gb
        Node WWN: 20000024ff295a37
        Link Error Statistics:
                Link Failure Count: 0
                Loss of Sync Count: 0
                Loss of Signal Count: 0
                Primitive Seq Protocol Error Count: 0
                Invalid Tx Word Count: 0
                Invalid CRC Count: 0
bash-3.00# 
The details (WWNs) of the remote ports can be viewed using 'fcinfo remote-port -p <local-HBA-port-WWN>':
bash-3.00# fcinfo remote-port -p 21000024ff295a37
Remote Port WWN: 24540002ac0009e2
        Active FC4 Types: SCSI
        SCSI Target: yes
        Node WWN: 2ff70002ac0009e2
Remote Port WWN: 25540002ac0009e2
        Active FC4 Types: SCSI
        SCSI Target: yes
        Node WWN: 2ff70002ac0009e2
Remote Port WWN: 22120002ac000928
        Active FC4 Types: SCSI
        SCSI Target: yes
        Node WWN: 2ff70002ac000928
Remote Port WWN: 23120002ac000928
        Active FC4 Types: SCSI
        SCSI Target: yes
        Node WWN: 2ff70002ac000928
The -l option can still be used in conjunction with this to show link statistics, and the -s option will show you the LUNs. This is very handy, as it shows the LUN number to device name mappings.
...
        LUN: 4
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001BC0928d0s2

...
The full output is below;
bash-3.00# fcinfo remote-port -l -p 21000024ff295a37
Remote Port WWN: 24540002ac0009e2
        Active FC4 Types: SCSI
        SCSI Target: yes
        Node WWN: 2ff70002ac0009e2
        Link Error Statistics:
                Link Failure Count: 0
                Loss of Sync Count: 2
                Loss of Signal Count: 0
                Primitive Seq Protocol Error Count: 0
                Invalid Tx Word Count: 2
                Invalid CRC Count: 0
Remote Port WWN: 25540002ac0009e2
        Active FC4 Types: SCSI
        SCSI Target: yes
        Node WWN: 2ff70002ac0009e2
        Link Error Statistics:
                Link Failure Count: 0
                Loss of Sync Count: 2
                Loss of Signal Count: 0
                Primitive Seq Protocol Error Count: 0
                Invalid Tx Word Count: 2
                Invalid CRC Count: 0
Remote Port WWN: 22120002ac000928
        Active FC4 Types: SCSI
        SCSI Target: yes
        Node WWN: 2ff70002ac000928
        Link Error Statistics:
                Link Failure Count: 2
                Loss of Sync Count: 1
                Loss of Signal Count: 0
                Primitive Seq Protocol Error Count: 0
                Invalid Tx Word Count: 2
                Invalid CRC Count: 0
Remote Port WWN: 23120002ac000928
        Active FC4 Types: SCSI
        SCSI Target: yes
        Node WWN: 2ff70002ac000928
        Link Error Statistics:
                Link Failure Count: 2
                Loss of Sync Count: 1
                Loss of Signal Count: 0
                Primitive Seq Protocol Error Count: 0
                Invalid Tx Word Count: 0
                Invalid CRC Count: 0
bash-3.00# fcinfo remote-port -ls -p 21000024ff295a37
Remote Port WWN: 24540002ac0009e2
        Active FC4 Types: SCSI
        SCSI Target: yes
        Node WWN: 2ff70002ac0009e2
        Link Error Statistics:
                Link Failure Count: 0
                Loss of Sync Count: 2
                Loss of Signal Count: 0
                Primitive Seq Protocol Error Count: 0
                Invalid Tx Word Count: 2
                Invalid CRC Count: 0
        LUN: 254
          Vendor: 3PARdata
          Product: SES            
          OS Device Name: /dev/es/ses8
Remote Port WWN: 25540002ac0009e2
        Active FC4 Types: SCSI
        SCSI Target: yes
        Node WWN: 2ff70002ac0009e2
        Link Error Statistics:
                Link Failure Count: 0
                Loss of Sync Count: 2
                Loss of Signal Count: 0
                Primitive Seq Protocol Error Count: 0
                Invalid Tx Word Count: 2
                Invalid CRC Count: 0
        LUN: 254
          Vendor: 3PARdata
          Product: SES            
          OS Device Name: /dev/es/ses8
Remote Port WWN: 22120002ac000928
        Active FC4 Types: SCSI
        SCSI Target: yes
        Node WWN: 2ff70002ac000928
        Link Error Statistics:
                Link Failure Count: 2
                Loss of Sync Count: 1
                Loss of Signal Count: 0
                Primitive Seq Protocol Error Count: 0
                Invalid Tx Word Count: 2
                Invalid CRC Count: 0
        LUN: 0
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001B70928d0s2
        LUN: 1
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001B90928d0s2
        LUN: 2
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001BA0928d0s2
        LUN: 3
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001BB0928d0s2
        LUN: 4
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001BC0928d0s2
        LUN: 5
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001BD0928d0s2
        LUN: 6
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001BE0928d0s2
        LUN: 7
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001BF0928d0s2
        LUN: 8
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001C00928d0s2
        LUN: 9
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001C10928d0s2
        LUN: 10
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001C20928d0s2
        LUN: 11
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001C30928d0s2
        LUN: 12
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001C40928d0s2
        LUN: 13
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001C50928d0s2
        LUN: 14
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001B80928d0s2
        LUN: 254
          Vendor: 3PARdata
          Product: SES            
          OS Device Name: /dev/es/ses9
Remote Port WWN: 23120002ac000928
        Active FC4 Types: SCSI
        SCSI Target: yes
        Node WWN: 2ff70002ac000928
        Link Error Statistics:
                Link Failure Count: 2
                Loss of Sync Count: 1
                Loss of Signal Count: 0
                Primitive Seq Protocol Error Count: 0
                Invalid Tx Word Count: 0
                Invalid CRC Count: 0
        LUN: 0
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001B70928d0s2
        LUN: 1
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001B90928d0s2
        LUN: 2
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001BA0928d0s2
        LUN: 3
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001BB0928d0s2
        LUN: 4
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001BC0928d0s2
        LUN: 5
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001BD0928d0s2
        LUN: 6
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001BE0928d0s2
        LUN: 7
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001BF0928d0s2
        LUN: 8
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001C00928d0s2
        LUN: 9
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001C10928d0s2
        LUN: 10
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001C20928d0s2
        LUN: 11
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001C30928d0s2
        LUN: 12
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001C40928d0s2
        LUN: 13
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001C50928d0s2
        LUN: 14
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001B80928d0s2
        LUN: 254
          Vendor: 3PARdata
          Product: SES            
          OS Device Name: /dev/es/ses9
The cfgadm command can be used to view the system 'attachment points', which are broadly defined as the locations of certain hardware resources visible to Solaris. Running cfgadm with the -al option will list these attachment points. This will include SAN HBAs and LUNs, USB devices, internal disks such as SATA, etc.
The Type field indicates the type of the device. You will see some self-explanatory entries such as 'scsi-bus' (which I think of as a controller) and 'disk'. A fibre channel SAN HBA is usually seen as 'fc'; if it is connected to the fabric/SAN it will show as 'fc-fabric'. If you see an entry labelled ESI, you are seeing the enclosure through the Enclosure Services Interface.
The output below is partially truncated.
bash-3.00# cfgadm -al
Ap_Id                          Type         Receptacle   Occupant     Condition
c0                             scsi-bus     connected    configured   unknown
c0::dsk/c0t0d0                 disk         connected    configured   unknown
c0::dsk/c0t1d0                 disk         connected    configured   unknown
c0::dsk/c0t2d0                 disk         connected    configured   unknown
c0::dsk/c0t3d0                 disk         connected    configured   unknown
c0::dsk/c0t4d0                 disk         connected    configured   unknown
c2                             fc           connected    unconfigured unknown
c3                             fc-fabric    connected    configured   unknown
c3::22110002ac000928           disk         connected    configured   unknown
c3::23110002ac000928           disk         connected    configured   unknown
c3::24530002ac0009e2           ESI          connected    configured   unknown
c3::25530002ac0009e2           ESI          connected    configured   unknown
c4                             fc           connected    unconfigured unknown
c5                             fc-fabric    connected    configured   unknown
c5::22120002ac000928           disk         connected    configured   unknown
c5::23120002ac000928           disk         connected    configured   unknown
c5::24540002ac0009e2           ESI          connected    configured   unknown
c5::25540002ac0009e2           ESI          connected    configured   unknown
sata0/0                        sata-port    empty        unconfigured ok
sata0/1                        sata-port    empty        unconfigured ok
Use the ‘-o show_FCP_dev’ option to get cfgadm to show not only the controllers and the enclosures, but also any fibre channel disks that may be visible on the channel:
Ap_Id                          Type         Receptacle   Occupant     Condition
c2                             fc           connected    unconfigured unknown
c3                             fc-fabric    connected    configured   unknown
c3::22110002ac000928,0         disk         connected    configured   unknown
c3::22110002ac000928,1         disk         connected    configured   unknown
c3::22110002ac000928,2         disk         connected    configured   unknown
c3::22110002ac000928,3         disk         connected    configured   unknown
c3::22110002ac000928,4         disk         connected    configured   unknown
c3::22110002ac000928,5         disk         connected    configured   unknown
c3::22110002ac000928,6         disk         connected    configured   unknown
...
c3::22110002ac000928,12        disk         connected    configured   unknown
c3::22110002ac000928,13        disk         connected    configured   unknown
c3::22110002ac000928,14        disk         connected    configured   unknown
c3::22110002ac000928,254       ESI          connected    configured   unknown
.....
c3::23110002ac000928,13        disk         connected    configured   unknown
c3::23110002ac000928,14        disk         connected    configured   unknown
c3::23110002ac000928,254       ESI          connected    configured   unknown
c3::24530002ac0009e2,254       ESI          connected    configured   unknown
c3::25530002ac0009e2,254       ESI          connected    configured   unknown
c4                             fc           connected    unconfigured unknown
c5                             fc-fabric    connected    configured   unknown
c5::22120002ac000928,0         disk         connected    configured   unknown
c5::22120002ac000928,1         disk         connected    configured   unknown
c5::22120002ac000928,2         disk         connected    configured   unknown
c5::22120002ac000928,3         disk         connected    configured   unknown
c5::22120002ac000928,4         disk         connected    configured   unknown
c5::22120002ac000928,5         disk         connected    configured   unknown
....
c5::22120002ac000928,14        disk         connected    configured   unknown
c5::22120002ac000928,254       ESI          connected    configured   unknown
c5::23120002ac000928,0         disk         connected    configured   unknown
....
c5::23120002ac000928,12        disk         connected    configured   unknown
c5::23120002ac000928,13        disk         connected    configured   unknown
c5::23120002ac000928,14        disk         connected    configured   unknown
c5::23120002ac000928,254       ESI          connected    configured   unknown
c5::24540002ac0009e2,254       ESI          connected    configured   unknown
c5::25540002ac0009e2,254       ESI          connected    configured   unknown
bash-3.00# 
Throughout the outputs above you'll notice that these disks are multipathed and visible through two separate controllers (c3 and c5). Enable STMS (see stmsboot(1M)) to aggregate those two paths into a single controller. You will gain a pseudo controller when you do this; in this case, the controller becomes c6 (an aggregate of c3 and c5). The new disk targets created by STMS are then visible in format:
      18. c6t50002AC001C40928d0   MKCHAD13
          /scsi_vhci/disk@g50002ac001c40928
      19. c6t50002AC001C50928d0   MKCHAD14
          /scsi_vhci/disk@g50002ac001c50928
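For reference, enabling STMS/MPxIO in the first place is usually a single command followed by the reboot it prompts for; a minimal sketch (not taken from the session above):
bash-3.00# stmsboot -e
On recent Solaris 10 updates you can also restrict this to the fibre channel ports with 'stmsboot -D fp -e'.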
Once we have these multiple paths, luxadm can be used to interrogate the controller and view the subpaths (and their state). First, run a ‘luxadm probe’ which will scan the devices and present a list.
bash-3.00# luxadm probe             

Found Fibre Channel device(s):
  Node WWN:2ff70002ac0009e2  Device Type:SES device
    Logical Path:/dev/es/ses8
  Node WWN:2ff70002ac000928  Device Type:Disk device
    Logical Path:/dev/rdsk/c6t50002AC001B80928d0s2
  Node WWN:2ff70002ac000928  Device Type:Disk device
    Logical Path:/dev/rdsk/c6t50002AC001C50928d0s2
  Node WWN:2ff70002ac000928  Device Type:Disk device
    Logical Path:/dev/rdsk/c6t50002AC001C40928d0s2
  Node WWN:2ff70002ac000928  Device Type:Disk device
    Logical Path:/dev/rdsk/c6t50002AC001C30928d0s2
  Node WWN:2ff70002ac000928  Device Type:Disk device
    Logical Path:/dev/rdsk/c6t50002AC001C20928d0s2
  Node WWN:2ff70002ac000928  Device Type:Disk device
    Logical Path:/dev/rdsk/c6t50002AC001C10928d0s2
  Node WWN:2ff70002ac000928  Device Type:Disk device
    Logical Path:/dev/rdsk/c6t50002AC001C00928d0s2
  Node WWN:2ff70002ac000928  Device Type:Disk device
    Logical Path:/dev/rdsk/c6t50002AC001BF0928d0s2
  Node WWN:2ff70002ac000928  Device Type:Disk device
    Logical Path:/dev/rdsk/c6t50002AC001BE0928d0s2
  Node WWN:2ff70002ac000928  Device Type:Disk device
    Logical Path:/dev/rdsk/c6t50002AC001BD0928d0s2
  Node WWN:2ff70002ac000928  Device Type:Disk device
    Logical Path:/dev/rdsk/c6t50002AC001BC0928d0s2
  Node WWN:2ff70002ac000928  Device Type:Disk device
    Logical Path:/dev/rdsk/c6t50002AC001BB0928d0s2
  Node WWN:2ff70002ac000928  Device Type:Disk device
    Logical Path:/dev/rdsk/c6t50002AC001BA0928d0s2
  Node WWN:2ff70002ac000928  Device Type:Disk device
    Logical Path:/dev/rdsk/c6t50002AC001B90928d0s2
  Node WWN:2ff70002ac000928  Device Type:SES device
    Logical Path:/dev/es/ses9
  Node WWN:2ff70002ac000928  Device Type:Disk device
    Logical Path:/dev/rdsk/c6t50002AC001B70928d0s2
bash-3.00#
Now you can select a logical path and use it with 'luxadm display <logical-path>' to view the individual paths;
bash-3.00# luxadm display /dev/rdsk/c6t50002AC001B80928d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c6t50002AC001B80928d0s2
  Vendor:               3PARdata
  Product ID:           VV             
  Revision:             0000
  Serial Num:           01B80928
  Unformatted capacity: 40960.000 MBytes
  Write Cache:          Enabled
  Read Cache:           Enabled
    Minimum prefetch:   0x0
    Maximum prefetch:   0xffff
  Device Type:          Disk device
  Path(s):

  /dev/rdsk/c6t50002AC001B80928d0s2
  /devices/scsi_vhci/disk@g50002ac001b80928:c,raw
   Controller           /dev/cfg/c3
    Device Address              22110002ac000928,e
    Host controller port WWN    21000024ff295a35
    Class                       primary
    State                       ONLINE
   Controller           /dev/cfg/c3
    Device Address              23110002ac000928,e
    Host controller port WWN    21000024ff295a35
    Class                       primary
    State                       ONLINE
   Controller           /dev/cfg/c5
    Device Address              22120002ac000928,e
    Host controller port WWN    21000024ff295a37
    Class                       primary
    State                       ONLINE
   Controller           /dev/cfg/c5
    Device Address              23120002ac000928,e
    Host controller port WWN    21000024ff295a37
    Class                       primary
    State                       ONLINE

bash-3.00#
Things to note are that the size, write cache, read cache and path state are all shown and that the ’,e’ after the Device Address is the LUN number in hex.
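If you want to convert that hex LUN number back to decimal quickly, the bash printf builtin will do it (a trivial illustration, not part of the session above):
bash-3.00# printf '%d\n' 0xe
14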
The unique part of the LUN identifier (the LUN ID) is embedded halfway through the mpxio/STMS name. To extract it from the format output, for example;
bash-3.00# echo | format |grep c6 | cut -c22-25
01B7
01B8
01B9
01BA
01BB
01BC
01BD
01BE
01BF
01C0
01C1
01C2
01C3
01C4
01C5
This information, in conjunction with luxadm, can be used to correlate disks to LUN numbers and LUN IDs, although the fcinfo commands shown above are generally the easier way to obtain this information. Notice that the LUN number is appended to the Device Address in hex.
bash-3.00# for disk in $( ls /dev/rdsk/c6t50002AC00*s2 )
> do
> echo $disk
> echo $disk | cut -c22-26
> luxadm display $disk |grep 'Device Address'
> done
/dev/rdsk/c6t50002AC001B70928d0s2
01B70
    Device Address              22110002ac000928,0
    Device Address              23110002ac000928,0
    Device Address              22120002ac000928,0
    Device Address              23120002ac000928,0

/dev/rdsk/c6t50002AC001B90928d0s2
01B90
    Device Address              22110002ac000928,1
    Device Address              23110002ac000928,1
    Device Address              22120002ac000928,1
    Device Address              23120002ac000928,1
/dev/rdsk/c6t50002AC001BA0928d0s2
01BA0
    Device Address              22110002ac000928,2
    Device Address              23110002ac000928,2
    Device Address              22120002ac000928,2
    Device Address              23120002ac000928,2
/dev/rdsk/c6t50002AC001BB0928d0s2
01BB0
    Device Address              22110002ac000928,3
    Device Address              23110002ac000928,3
    Device Address              22120002ac000928,3
    Device Address              23120002ac000928,3
/dev/rdsk/c6t50002AC001BC0928d0s2
01BC0
    Device Address              22110002ac000928,4
    Device Address              23110002ac000928,4
    Device Address              22120002ac000928,4
    Device Address              23120002ac000928,4
/dev/rdsk/c6t50002AC001BD0928d0s2
01BD0
    Device Address              22110002ac000928,5
    Device Address              23110002ac000928,5
    Device Address              22120002ac000928,5
    Device Address              23120002ac000928,5
/dev/rdsk/c6t50002AC001BE0928d0s2
01BE0
    Device Address              22110002ac000928,6
    Device Address              23110002ac000928,6
    Device Address              22120002ac000928,6
    Device Address              23120002ac000928,6
/dev/rdsk/c6t50002AC001BF0928d0s2
01BF0
    Device Address              22110002ac000928,7
    Device Address              23110002ac000928,7
    Device Address              22120002ac000928,7
    Device Address              23120002ac000928,7
/dev/rdsk/c6t50002AC001C00928d0s2
01C00
    Device Address              22110002ac000928,8
    Device Address              23110002ac000928,8
    Device Address              22120002ac000928,8
    Device Address              23120002ac000928,8
/dev/rdsk/c6t50002AC001C10928d0s2
01C10
    Device Address              22110002ac000928,9
    Device Address              23110002ac000928,9
    Device Address              22120002ac000928,9
    Device Address              23120002ac000928,9
/dev/rdsk/c6t50002AC001C20928d0s2
01C20
    Device Address              22110002ac000928,a
    Device Address              23110002ac000928,a
    Device Address              22120002ac000928,a
    Device Address              23120002ac000928,a
/dev/rdsk/c6t50002AC001C30928d0s2
01C30
    Device Address              22110002ac000928,b
    Device Address              23110002ac000928,b
    Device Address              22120002ac000928,b
    Device Address              23120002ac000928,b
/dev/rdsk/c6t50002AC001C40928d0s2
01C40
    Device Address              22110002ac000928,c
    Device Address              23110002ac000928,c
    Device Address              22120002ac000928,c
    Device Address              23120002ac000928,c
/dev/rdsk/c6t50002AC001C50928d0s2
01C50
    Device Address              22110002ac000928,d
    Device Address              23110002ac000928,d
    Device Address              22120002ac000928,d
    Device Address              23120002ac000928,d
/dev/rdsk/c6t50002AC001B80928d0s2
01B80
    Device Address              22110002ac000928,e
    Device Address              23110002ac000928,e
    Device Address              22120002ac000928,e
    Device Address              23120002ac000928,e
Labelling the disks with a volume name is completely optional, but it is quite a useful feature. It relies on a reference to the LUN numbers to ensure the correct labels are assigned to the correct disks. I like to use the following loop, which prints the LUN number for each disk just before presenting me with a format dialog in which to assign the appropriate volname;
for disk in $( ls /dev/rdsk/c6t50002AC00*s2 )
do 
   echo $disk
   echo $disk | cut -c22-26
   luxadm display $disk |grep 'Device Address' 
   format $disk 
done
A truncated output example (without the actual format screens) will be something like;
/dev/rdsk/c6t50002AC001BD0928d0s2
01BD0
    Device Address              22110002ac000928,5
    Device Address              23110002ac000928,5
    Device Address              22120002ac000928,5
    Device Address              23120002ac000928,5
format /dev/rdsk/c6t50002AC001BD0928d0s2
/dev/rdsk/c6t50002AC001BE0928d0s2
01BE0
    Device Address              22110002ac000928,6
    Device Address              23110002ac000928,6
    Device Address              22120002ac000928,6
    Device Address              23120002ac000928,6
format /dev/rdsk/c6t50002AC001BE0928d0s2
Your format command with named and labelled LUNs will look something like:
       5. c6t50002AC001B70928d0   MKCHAD01
          /scsi_vhci/disk@g50002ac001b70928
       6. c6t50002AC001B80928d0   MKCHAD15
          /scsi_vhci/disk@g50002ac001b80928
       7. c6t50002AC001B90928d0   MKCHAD02
          /scsi_vhci/disk@g50002ac001b90928
       8. c6t50002AC001BA0928d0   MKCHAD03
          /scsi_vhci/disk@g50002ac001ba0928
       9. c6t50002AC001BB0928d0   MKCHAD04
          /scsi_vhci/disk@g50002ac001bb0928
      10. c6t50002AC001BC0928d0   MKCHAD05
          /scsi_vhci/disk@g50002ac001bc0928
      11. c6t50002AC001BD0928d0   MKCHAD06
          /scsi_vhci/disk@g50002ac001bd0928
      12. c6t50002AC001BE0928d0   MKCHAD07
          /scsi_vhci/disk@g50002ac001be0928
      13. c6t50002AC001BF0928d0   MKCHAD08
          /scsi_vhci/disk@g50002ac001bf0928
      14. c6t50002AC001C00928d0   MKCHAD09
          /scsi_vhci/disk@g50002ac001c00928
      15. c6t50002AC001C10928d0   MKCHAD10
          /scsi_vhci/disk@g50002ac001c10928
      16. c6t50002AC001C20928d0   MKCHAD11
          /scsi_vhci/disk@g50002ac001c20928
      17. c6t50002AC001C30928d0   MKCHAD12
          /scsi_vhci/disk@g50002ac001c30928
      18. c6t50002AC001C40928d0   MKCHAD13
          /scsi_vhci/disk@g50002ac001c40928
      19. c6t50002AC001C50928d0   MKCHAD14
          /scsi_vhci/disk@g50002ac001c50928
Specify disk (enter its number): 
The next steps are to configure your LUNs as required. This will depend on the intended usage: perhaps they will be used as raw or ASM volumes for a database, perhaps as ZFS zpools for filesystems.
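As one sketch of that last option (the pool name here is made up, and you should double-check you have the right LUN before doing anything destructive to it), a zpool can be created directly on one of the multipathed devices:
bash-3.00# zpool create datapool c6t50002AC001B70928d0
bash-3.00# zpool status datapool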

Tuesday, April 17, 2012

Move disks between VxVM diskgroups

This is not easily done. However, you may be able to move a volume from one diskgroup to another. Beware: some people have messed this up. Here are the basic steps.




Let's say you have two diskgroups, sourcedg and targetdg. sourcedg has a volume data1 that you want in targetdg. data1 is a simple volume with a single subdisk on disk01 (c1t0d0).





Back up the data in volume data1 in case this goes wrong.

Save the VM configuration for that particular volume (don't store the file in the volume)

vxprint -g sourcedg -hmQq data1 > /data.file

vxdisk list > /vxdisk.file (save the disk name/device mappings)

Unmount, stop and remove the volume data1. Yes, that's right, remove it! (Removing a volume does not actually destroy the data on the disks; it simply deletes the volume/plex/subdisk mappings.) See the sketch below for the commands involved.
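
A minimal sketch of that step, assuming data1 is mounted at /data1 (the mount point is made up for illustration):

umount /data1

vxvol -g sourcedg stop data1

vxedit -g sourcedg -rf rm data1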

Remove the disks that data1 resided on from sourcedg, and add them to the new diskgroup with the same DM name.

vxdg -g sourcedg rmdisk disk01

vxdg -g targetdg adddisk disk01=c1t0d0

Rebuild the volume mapping from the saved file

vxmake -g targetdg -d /data.file

Start the volume:

vxvol -g targetdg start data1



The above example is very simple, as the volume sat on only one disk. If that disk were also used by other volumes that are not to be moved to targetdg, then we have a problem: you can only move a disk out of a diskgroup when all the subdisks on it are gone.



Wednesday, January 18, 2012

DTrace: a powerful tracing and analysis tool

DTrace Introduction

DTrace is Solaris 10's new Dynamic Tracing facility. It allows us to peer into the innards of running processes and customize our view to exclude extraneous information and close in on the source of a problem.
DTrace also has capabilities that allow us to examine a crash dump or trace the boot process.
A number of freely available scripts have been made available as the DTrace Toolkit. The toolkit provides both programming examples and also extremely useful tools for different types of system monitoring.
The DTrace facility provides data to a number of consumers, including commands such as dtrace and lockstat, as well as programs calling libraries that access DTrace through the dtrace kernel driver.

Probes

DTrace is built on a foundation of objects called probes. Probes are event handlers that fire when their particular event occurs. DTrace can bind a particular action to the probe to make use of the information.
Probes report on a variety of information about their event. For example, a probe for a kernel function may report on arguments, global variables, timestamps, stack traces, currently running processes or the thread that called the function.
Kernel modules that enable probes are packaged into sets known as providers. In a DTrace context, a module is a kernel module (for kernel probes) or a library name (for applications). A function in DTrace refers to the function associated with a probe, if it belongs to a program location.
Probes may be uniquely addressed by a combination of the provider, module, function and name. These are frequently organized into a 4-tuple when invoked by the dtrace command.
Alternatively, each probe has a unique integer identifier, which can vary depending on Solaris patch level.
These numbers, as well as the provider, module, function and name, can be listed out through the dtrace -l command. The list will vary from system to system, depending on what is installed. Probes can be listed by function, module or name by specifying it with the -f, -m or -n options, respectively.
Running dtrace without -l, but with a -f, -m or -n option, enables all matching probes. All the probes in a provider can be enabled by using the -P option. An individual probe can be enabled by using its 4-tuple with the -n option.
(Note: Do not enable more probes than necessary. If too many probes are enabled, it may adversely impact performance. This is particularly true of sched probes.)
Some probes do not list a module or function. These are called "unanchored" probes. Their 4-tuple just omits the nonexistent information.

Providers

Providers are kernel modules that create related groups of probes. The most commonly referenced providers are:
  • fbt: (Function Boundary Tracing) Implements probes at the entry and return points of almost all kernel functions.
  • io: Implements probes for I/O-related events.
  • pid: Implements probes for user-level processes at entry, return and instruction.
  • proc: Implements probes for process creation and life-cycle events.
  • profile: Implements timer-driven probes.
  • sched: Implements probes for scheduling-related events.
  • sdt: (Statically Defined Tracing) Implements programmer-defined probes at arbitrary locations and names within code. Obviously, the programmer should define names whose meaning is intuitively clear.
  • syscall: Implements entry and return probes for all system calls.
  • sysinfo: Probes for updates to the sys kstat.
  • vminfo: Probes for updates to the vm kstat.

Command Components

The dtrace command has several components:
  • A 4-tuple identifier: provider:module:function:name
    Leaving any of these blank is equivalent to using a wildcard match. (If left blank, the left-most members of the 4-tuple are optional.)
  • A predicate determines whether the action should be taken. Predicates are enclosed in slashes: /predicate/. The predicate is a C-style relational expression which must evaluate to an integer or pointer. If omitted, the action is executed when the probe fires. Some predicate examples are:
    • executable name matches csh: /execname == "csh"/
    • process ID does not match 1234: /pid != 1234/
    • arg0 is 1 and arg1 is not 0: /arg0 == 1 && arg1 != 0/
  • An action (in the D scripting language) to be taken when the probe fires and the predicate is satisfied. Typically, this is listed in curly brackets: {}
Several command examples are provided at the bottom of the page.
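As a quick illustrative one-liner (my own sketch, not one of the examples referred to above), the following enables a syscall entry probe with a predicate on the executable name and prints the file each csh process opens:
# dtrace -n 'syscall::open:entry /execname == "csh"/ { printf("%s opened %s", execname, copyinstr(arg0)); }'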

D Scripting Language

In order to deal with operations that can become confusing on a single command line, a D script can be saved to a file and run as desired. A D script will have one or more probe clauses, which consist of one or more probe-descriptions, along with the associated predicates and actions:
#!/usr/sbin/dtrace -s

probe-description[, probe-description...]
/predicate/
{
        action; [action; ...]
}
The probe-description section consists of one or more 4-tuple identifiers. If the predicate line is not present, it is the same as a predicate that is always true. The action(s) specified are to be run if the probe fires and the predicate is true.
Each recording action dumps data to a trace buffer. By default, this is the principal buffer.
Several programming examples are provided at the bottom of the page.
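For instance, a minimal script in that form (again a sketch of mine, not one of the page's own examples) counts system calls per executable and prints the totals when you stop it with Ctrl-C:
#!/usr/sbin/dtrace -s

syscall:::entry
{
        @counts[execname] = count();
}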

D Variables

D specifies both associative arrays and scalar variables. Storage for these variables is not pre-allocated. It is allocated when a non-zero value is assigned and deallocated when a zero value is assigned.
D defines several built-in variables, which are frequently used in creating predicates and actions. The most commonly used built-in variables for D are the following (a small example using several of them follows the list):
  • args[]: The args[] array contains the arguments, specified from 0 to the number of arguments less one. These can also be specified by argn, where this is the n+1th argument.
  • curpsinfo: psinfo structure of current process.
  • curthread: pointer to the current thread's kthread_t
  • execname: Current executable name
  • pid: Current process ID
  • ppid: Parent process ID
  • probefunc: function name of the current probe
  • probemod: module name of the current probe
  • probename: name of the current probe
  • timestamp: Time since boot in ns
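A small sketch using several of these built-ins (assuming the proc provider's exec-success probe, which fires on every successful exec) would be:
#!/usr/sbin/dtrace -s

proc:::exec-success
{
        printf("%d %s (pid %d, ppid %d) fired %s\n", timestamp, execname, pid, ppid, probename);
}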

Listing Probes

You can list all DTrace probes by passing the -l option to the dtrace command:

# dtrace -l

To count all the probes that are available on your system, you can type the following command:

# dtrace -l | wc -l

Sunday, January 8, 2012

Solaris 11 IPS configuration

Oracle Solaris 11 has been released and is positioned as the first cloud OS, with many new features and enhancements. One of them is IPS (the Image Packaging System), the new way to manage software packages and patches; IPS makes packages and patches much easier to manage.




Why configure a local IPS repository?

1. Performance (faster downloads from the local network)

2. Security (you don't want your clients to connect to the internet)

3. Replication (you want to manage the repository yourself, so that an installation today is exactly the same as an installation next year)



Prerequisites:

1. Install Oracle Solaris 11

2. The two repository ISO images from https://edelivery.oracle.com



You need to unzip the ISOs first; the two repository ISO images look like this:






rachmat@solaris:~$ ls -l *iso
-rw-r--r-- 1 rachmat staff 3537872896 Nov 13 06:53 V28915-01.iso
-rw-r--r-- 1 rachmat staff 3403360256 Nov 13 14:52 V28916-01.iso



3. Create a zpool for the IPS repository (this is optional, but I create a zpool specifically for holding the repository):


root@solaris:~# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c3t0d0
          /pci@0,0/pci8086,2829@d/disk@0,0
       1. c3t2d0
          /pci@0,0/pci8086,2829@d/disk@2,0
Specify disk (enter its number): ^D

root@solaris:~# zpool create ipspool c3t2d0
root@solaris:~# zfs create ipspool/ips

Optional:
root@solaris:~# zfs set atime=off ipspool/ips
root@solaris:~# zfs set compression=on ipspool/ips


Setting atime to off gives slightly better performance; it's optional.



4. Mount the first repository ISO, rsync its contents, and unmount it when finished:




root@solaris:~# lofiadm -a /export/home/rachmat/V28915-01.iso /dev/lofi/1
root@solaris:~# mount -F hsfs /dev/lofi/1 /mnt
root@solaris:~# ls /mnt/
COPYRIGHT  NOTICES  README  repo

root@solaris:~# rsync -aP /mnt/repo /ipspool/ips
root@solaris:~# umount /mnt
root@solaris:~# lofiadm -d /dev/lofi/1



5. Mount the second repository ISO, rsync its contents, and unmount it when finished:



root@solaris:~# lofiadm -a /export/home/rachmat/V28916-01.iso /dev/lofi/1
root@solaris:~# mount -F hsfs /dev/lofi/1 /mnt
root@solaris:~# ls /mnt/
COPYRIGHT  NOTICES  README  repo

root@solaris:~# rsync -aP /mnt/repo /ipspool/ips
root@solaris:~# umount /mnt
root@solaris:~# lofiadm -d /dev/lofi/1



6. Point the pkg server at your repository directory, in this case /ipspool/ips/repo:



root@solaris:~# svccfg -s application/pkg/server setprop pkg/inst_root=/ipspool/ips/repo



7. Set the repository to read-only:



root@solaris:~# svccfg -s application/pkg/server setprop pkg/readonly=true



8. Refresh and enable the pkg server:




root@solaris:~# svcadm refresh application/pkg/server
root@solaris:~# svcadm enable application/pkg/server
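
You can confirm that the depot service came up with svcs (a quick sanity check, not in the original steps); it should report the instance state as online:

root@solaris:~# svcs application/pkg/server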



9. Set up the client to use only this repository:




root@solaris:~# pkg set-publisher -G '*' -g <origin> solaris



where the origin is your repository server's hostname or IP address. For example:




root@solaris:~# pkg set-publisher -G '*' -g http://10.0.7.1 solaris



10. Verify and test:



root@solaris:~# pkg publisher
PUBLISHER                             TYPE     STATUS   URI
solaris                               origin   online   http://10.0.7.1/

root@solaris:~# pkg install nmap
           Packages to install:     2
       Create boot environment:    No
Create backup boot environment:    No
            Services to change:     1

DOWNLOAD                                  PKGS       FILES    XFER (MB)
Completed                                  2/2     454/454      3.1/3.1

PHASE                                        ACTIONS
Install Phase                                538/538

PHASE                                          ITEMS
Package State Update Phase                       2/2
Image State Update Phase                         2/2



Note: See the repository's README file for the complete set of options.