Tuesday, November 27, 2012

Re-create a ZFS Root Pool and Restore Root Pool Snapshots


All the steps are performed on the local system.
  1. Boot from an installation DVD or the network.
    • SPARC - Select one of the following boot methods:
      ok boot net -s
      ok boot cdrom -s
      If you don't use the -s option, you'll need to exit the installation program.
    • x86 – Select the option for booting from the DVD or the network. Then, exit the installation program.
  2. Mount the remote snapshot file system if you have sent the root pool snapshots as a file to the remote system. For example:
    # mount -F nfs remote-system:/rpool/snaps /mnt
    If your network services are not configured, you might need to specify the remote-system's IP address.
  3. If the root pool disk is replaced and does not contain a disk label that is usable by ZFS, you must relabel the disk.
  Relabeling the Root Pool Disk

    You might need to replace a disk in the root pool for the following reasons:
    • The root pool is too small and you want to replace it with a larger disk.
    • The root pool disk is failing. If the disk is failing so that the system won't boot, you'll need to boot from alternate media, such as a CD or the network, before you replace the root pool disk.
Part of recovering the root pool might involve replacing or relabeling the root pool disk. Follow the steps below to relabel and replace the root pool disk.
  1. Physically attach the replacement disk.
  2. If the replacement disk has an EFI label, the fdisk output looks similar to the following on an x86 system.
    # fdisk /dev/rdsk/c1t1d0p0
    selecting c1t1d0p0
      Total disk size is 8924 cylinders
                 Cylinder size is 16065 (512 byte) blocks
    
                                                   Cylinders
          Partition   Status    Type          Start   End   Length    %
          =========   ======    ============  =====   ===   ======   ===
              1                 EFI               0  8924    8925    100
    .
    .
    .
    Enter Selection: 6
    Use fdisk to change this to a Solaris partition.
  3. Select one of the following to create a Solaris fdisk partition for a disk on an x86 system or create an SMI label for a disk on a SPARC system.
    • On an x86 system, create a Solaris fdisk partition that can be used for booting by selecting 1=SOLARIS2. Alternatively, the fdisk -B option creates a single Solaris partition that uses the whole disk. Beware that the following command uses the whole disk.
      # fdisk -B /dev/rdsk/c1t1d0p0
      Display the newly created Solaris partition. For example:
       Total disk size is 8924 cylinders
                   Cylinder size is 16065 (512 byte) blocks
      
                                                     Cylinders
            Partition   Status    Type          Start   End   Length    %
            =========   ======    ============  =====   ===   ======   ===
                1       Active    Solaris2          1  8923    8923    100
      .
      .
      .
      Enter Selection: 6
    • On a SPARC based system, make sure you have an SMI label. Use the format -e command to determine if the disk label is EFI or SMI and relabel the disk, if necessary. In the output below, the disk label includes sectors and not cylinders. This is an EFI label.
      # format -e
      Searching for disks...done
      AVAILABLE DISK SELECTIONS:
             0. c1t0d0 
                /pci@8,600000/SUNW,qlc@4/fp@0,0/ssd@w21000004cf7fac8a,0
             1. c1t1d0 
                /pci@8,600000/SUNW,qlc@4/fp@0,0/ssd@w21000004cf7fad21,0
      Specify disk (enter its number): 1
      selecting c1t1d0
      [disk formatted]
      format> p
      partition> p
      Current partition table (original):
      Total disk sectors available: 71116541 + 16384 (reserved sectors)
      
      Part      Tag    Flag     First Sector        Size        Last Sector
        0        usr    wm                34      33.91GB         71116541    
        1 unassigned    wm                 0          0              0    
        2 unassigned    wm                 0          0              0    
        3 unassigned    wm                 0          0              0    
        4 unassigned    wm                 0          0              0    
        5 unassigned    wm                 0          0              0    
        6 unassigned    wm                 0          0              0    
        7 unassigned    wm                 0          0              0    
        8   reserved    wm          71116542       8.00MB         71132925    
      partition> label
      [0] SMI Label
      [1] EFI Label
      Specify Label type[1]: 0
      Auto configuration via format.dat[no]? 
      Auto configuration via generic SCSI-2[no]? 
      partition>  
  1. Re-create the root pool. For example:
    # zpool create -f -o failmode=continue -R /a -m legacy \
        -o cachefile=/etc/zfs/zpool.cache rpool c1t1d0s0
  2. Restore the root pool snapshots. This step might take some time. For example:
    # cat /mnt/rpool.0804 | zfs receive -Fdu rpool
    Using the -u option means that the restored archive is not mounted when the zfs receive operation completes.
    To restore the actual root pool snapshots that are stored in a pool on a remote system, use syntax similar to the following:
    # rsh remote-system zfs send -Rb tank/snaps/rpool@snap1 | zfs receive -F rpool
  3. Verify that the root pool datasets are restored. For example:
    # zfs list
  4. Set the bootfs property on the root pool BE. For example:
    # zpool set bootfs=rpool/ROOT/zfsBE rpool
  5. Install the boot blocks on the new disk.
    • SPARC:
      # installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0
    • x86:
      # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
  6. Reboot the system.
    # init 6
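For reference, the restore sequence above can be strung together as one small script. This is only a sketch using the example names from this procedure (rpool, zfsBE, c1t1d0s0, /mnt/rpool.0804); the DRY_RUN guard and the run helper are my own additions so the commands can be reviewed before anything destructive happens.

```shell
#!/bin/sh
# Sketch of the root pool restore sequence; adapt the pool, BE and
# device names before use.  DRY_RUN=1 (the default here) only prints
# each command; set DRY_RUN=0 to execute for real.
DRY_RUN=${DRY_RUN:-1}
LOG=""
run() {
  LOG="$LOG$*
"
  if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi
}

run zpool create -f -o failmode=continue -R /a -m legacy \
    -o cachefile=/etc/zfs/zpool.cache rpool c1t1d0s0
run sh -c 'cat /mnt/rpool.0804 | zfs receive -Fdu rpool'
run zfs list
run zpool set bootfs=rpool/ROOT/zfsBE rpool
run installboot -F zfs "/usr/platform/$(uname -i)/lib/fs/zfs/bootblk" \
    /dev/rdsk/c1t1d0s0
run init 6
```

With DRY_RUN left at 1 the script just echoes each step, which is a handy way to sanity-check the device and dataset names before committing.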

Wednesday, November 7, 2012

Check Storage LUN map with Solaris host

cfgadm, fcinfo and LUN mapping on Solaris

OK, so you have a Solaris 10 host with SAN-connected storage – how do you make sense of the LUNs you can see? What tools can be used to interrogate the storage and build a mental image of what you have been presented with? This article is intended as a brief introduction to some of the commands in Solaris that will help you do exactly that.
Firstly, in order to allow your storage admin to map you some LUNs, you’ll need to provide him with the WWNs of the HBA ports in your server. This is so he can map the LUNs you’ve asked for to the WWNs of your server. These can be found using the fcinfo command. Start with ‘fcinfo hba-port’. Note that the output below shows all 4 of my ports, only 2 of which are occupied and online (c3 and c5).
...
        OS Device Name: /dev/cfg/c3
...
        State: online
        Supported Speeds: 1Gb 2Gb 4Gb
        Current Speed: 4Gb

...
        OS Device Name: /dev/cfg/c5
...
        State: online
        Supported Speeds: 1Gb 2Gb 4Gb
        Current Speed: 4Gb
The full output;
bash-3.00# fcinfo hba-port
HBA Port WWN: 21000024ff295a34
        OS Device Name: /dev/cfg/c2
        Manufacturer: QLogic Corp.
        Model: 375-3356-02
        Firmware Version: 05.03.02
        FCode/BIOS Version:  BIOS: 2.02; fcode: 2.01; EFI: 2.00;
        Serial Number: 0402R00-1023835637
        Driver Name: qlc
        Driver Version: 20100301-3.00
        Type: unknown
        State: offline
        Supported Speeds: 1Gb 2Gb 4Gb
        Current Speed: not established
        Node WWN: 20000024ff295a34
HBA Port WWN: 21000024ff295a35
        OS Device Name: /dev/cfg/c3
        Manufacturer: QLogic Corp.
        Model: 375-3356-02
        Firmware Version: 05.03.02
        FCode/BIOS Version:  BIOS: 2.02; fcode: 2.01; EFI: 2.00;
        Serial Number: 0402R00-1023835637
        Driver Name: qlc
        Driver Version: 20100301-3.00
        Type: N-port
        State: online
        Supported Speeds: 1Gb 2Gb 4Gb
        Current Speed: 4Gb
        Node WWN: 20000024ff295a35
HBA Port WWN: 21000024ff295a36
        OS Device Name: /dev/cfg/c4
        Manufacturer: QLogic Corp.
        Model: 375-3356-02
        Firmware Version: 05.03.02
        FCode/BIOS Version:  BIOS: 2.02; fcode: 2.01; EFI: 2.00;
        Serial Number: 0402R00-1023835638
        Driver Name: qlc
        Driver Version: 20100301-3.00
        Type: unknown
        State: offline
        Supported Speeds: 1Gb 2Gb 4Gb
        Current Speed: not established
        Node WWN: 20000024ff295a36
HBA Port WWN: 21000024ff295a37
        OS Device Name: /dev/cfg/c5
        Manufacturer: QLogic Corp.
        Model: 375-3356-02
        Firmware Version: 05.03.02
        FCode/BIOS Version:  BIOS: 2.02; fcode: 2.01; EFI: 2.00;
        Serial Number: 0402R00-1023835638
        Driver Name: qlc
        Driver Version: 20100301-3.00
        Type: N-port
        State: online
        Supported Speeds: 1Gb 2Gb 4Gb
        Current Speed: 4Gb
        Node WWN: 20000024ff295a37
bash-3.00# 
It is the ‘HBA Port WWN’ that you need to give to your storage admin. He may appreciate the full output, that will confirm a few other items for him such as the link speed and your HBA manufacturer and driver version numbers.
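If you want to grab just those port WWNs programmatically (for pasting into a change request, say), a short awk filter over the fcinfo output does the job. The online_wwns helper below is my own, not a Solaris tool; the here-document is an abridged copy of the output above, and on a live system you would pipe ‘fcinfo hba-port’ straight in.

```shell
# Print the HBA port WWN of every online port.  Feed it
# 'fcinfo hba-port' output on stdin.
online_wwns() {
  awk '/^HBA Port WWN:/ { wwn = $4 }
       /State: online/  { print wwn }'
}

# Abridged sample of the output shown above:
online_wwns <<'EOF'
HBA Port WWN: 21000024ff295a34
        State: offline
HBA Port WWN: 21000024ff295a35
        State: online
HBA Port WWN: 21000024ff295a37
        State: online
EOF
```

Run against that sample it prints 21000024ff295a35 and 21000024ff295a37, matching the two online ports above.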
Using the -l (minus ell) flag shows additional information such as link statistics…
bash-3.00# fcinfo hba-port -l 21000024ff295a37
HBA Port WWN: 21000024ff295a37
        OS Device Name: /dev/cfg/c5
        Manufacturer: QLogic Corp.
        Model: 375-3356-02
        Firmware Version: 05.03.02
        FCode/BIOS Version:  BIOS: 2.02; fcode: 2.01; EFI: 2.00;
        Serial Number: 0402R00-1023835638
        Driver Name: qlc
        Driver Version: 20100301-3.00
        Type: N-port
        State: online
        Supported Speeds: 1Gb 2Gb 4Gb
        Current Speed: 4Gb
        Node WWN: 20000024ff295a37
        Link Error Statistics:
                Link Failure Count: 0
                Loss of Sync Count: 0
                Loss of Signal Count: 0
                Primitive Seq Protocol Error Count: 0
                Invalid Tx Word Count: 0
                Invalid CRC Count: 0
bash-3.00# 
The details (WWNs) of the remote ports can be viewed using ‘fcinfo remote-port -p <HBA-port-WWN>’:
bash-3.00# fcinfo remote-port -p 21000024ff295a37
Remote Port WWN: 24540002ac0009e2
        Active FC4 Types: SCSI
        SCSI Target: yes
        Node WWN: 2ff70002ac0009e2
Remote Port WWN: 25540002ac0009e2
        Active FC4 Types: SCSI
        SCSI Target: yes
        Node WWN: 2ff70002ac0009e2
Remote Port WWN: 22120002ac000928
        Active FC4 Types: SCSI
        SCSI Target: yes
        Node WWN: 2ff70002ac000928
Remote Port WWN: 23120002ac000928
        Active FC4 Types: SCSI
        SCSI Target: yes
        Node WWN: 2ff70002ac000928
The -l option can still be used in conjunction with this to show link statistics, and the -s option will show you the LUNs. This is very handy as it shows the LUN number / device name mappings.
...
        LUN: 4
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001BC0928d0s2

...
The full output is below;
bash-3.00# fcinfo remote-port -l -p 21000024ff295a37
Remote Port WWN: 24540002ac0009e2
        Active FC4 Types: SCSI
        SCSI Target: yes
        Node WWN: 2ff70002ac0009e2
        Link Error Statistics:
                Link Failure Count: 0
                Loss of Sync Count: 2
                Loss of Signal Count: 0
                Primitive Seq Protocol Error Count: 0
                Invalid Tx Word Count: 2
                Invalid CRC Count: 0
Remote Port WWN: 25540002ac0009e2
        Active FC4 Types: SCSI
        SCSI Target: yes
        Node WWN: 2ff70002ac0009e2
        Link Error Statistics:
                Link Failure Count: 0
                Loss of Sync Count: 2
                Loss of Signal Count: 0
                Primitive Seq Protocol Error Count: 0
                Invalid Tx Word Count: 2
                Invalid CRC Count: 0
Remote Port WWN: 22120002ac000928
        Active FC4 Types: SCSI
        SCSI Target: yes
        Node WWN: 2ff70002ac000928
        Link Error Statistics:
                Link Failure Count: 2
                Loss of Sync Count: 1
                Loss of Signal Count: 0
                Primitive Seq Protocol Error Count: 0
                Invalid Tx Word Count: 2
                Invalid CRC Count: 0
Remote Port WWN: 23120002ac000928
        Active FC4 Types: SCSI
        SCSI Target: yes
        Node WWN: 2ff70002ac000928
        Link Error Statistics:
                Link Failure Count: 2
                Loss of Sync Count: 1
                Loss of Signal Count: 0
                Primitive Seq Protocol Error Count: 0
                Invalid Tx Word Count: 0
                Invalid CRC Count: 0
bash-3.00# fcinfo remote-port -ls -p 21000024ff295a37
Remote Port WWN: 24540002ac0009e2
        Active FC4 Types: SCSI
        SCSI Target: yes
        Node WWN: 2ff70002ac0009e2
        Link Error Statistics:
                Link Failure Count: 0
                Loss of Sync Count: 2
                Loss of Signal Count: 0
                Primitive Seq Protocol Error Count: 0
                Invalid Tx Word Count: 2
                Invalid CRC Count: 0
        LUN: 254
          Vendor: 3PARdata
          Product: SES            
          OS Device Name: /dev/es/ses8
Remote Port WWN: 25540002ac0009e2
        Active FC4 Types: SCSI
        SCSI Target: yes
        Node WWN: 2ff70002ac0009e2
        Link Error Statistics:
                Link Failure Count: 0
                Loss of Sync Count: 2
                Loss of Signal Count: 0
                Primitive Seq Protocol Error Count: 0
                Invalid Tx Word Count: 2
                Invalid CRC Count: 0
        LUN: 254
          Vendor: 3PARdata
          Product: SES            
          OS Device Name: /dev/es/ses8
Remote Port WWN: 22120002ac000928
        Active FC4 Types: SCSI
        SCSI Target: yes
        Node WWN: 2ff70002ac000928
        Link Error Statistics:
                Link Failure Count: 2
                Loss of Sync Count: 1
                Loss of Signal Count: 0
                Primitive Seq Protocol Error Count: 0
                Invalid Tx Word Count: 2
                Invalid CRC Count: 0
        LUN: 0
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001B70928d0s2
        LUN: 1
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001B90928d0s2
        LUN: 2
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001BA0928d0s2
        LUN: 3
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001BB0928d0s2
        LUN: 4
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001BC0928d0s2
        LUN: 5
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001BD0928d0s2
        LUN: 6
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001BE0928d0s2
        LUN: 7
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001BF0928d0s2
        LUN: 8
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001C00928d0s2
        LUN: 9
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001C10928d0s2
        LUN: 10
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001C20928d0s2
        LUN: 11
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001C30928d0s2
        LUN: 12
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001C40928d0s2
        LUN: 13
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001C50928d0s2
        LUN: 14
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001B80928d0s2
        LUN: 254
          Vendor: 3PARdata
          Product: SES            
          OS Device Name: /dev/es/ses9
Remote Port WWN: 23120002ac000928
        Active FC4 Types: SCSI
        SCSI Target: yes
        Node WWN: 2ff70002ac000928
        Link Error Statistics:
                Link Failure Count: 2
                Loss of Sync Count: 1
                Loss of Signal Count: 0
                Primitive Seq Protocol Error Count: 0
                Invalid Tx Word Count: 0
                Invalid CRC Count: 0
        LUN: 0
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001B70928d0s2
        LUN: 1
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001B90928d0s2
        LUN: 2
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001BA0928d0s2
        LUN: 3
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001BB0928d0s2
        LUN: 4
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001BC0928d0s2
        LUN: 5
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001BD0928d0s2
        LUN: 6
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001BE0928d0s2
        LUN: 7
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001BF0928d0s2
        LUN: 8
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001C00928d0s2
        LUN: 9
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001C10928d0s2
        LUN: 10
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001C20928d0s2
        LUN: 11
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001C30928d0s2
        LUN: 12
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001C40928d0s2
        LUN: 13
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001C50928d0s2
        LUN: 14
          Vendor: 3PARdata
          Product: VV             
          OS Device Name: /dev/rdsk/c6t50002AC001B80928d0s2
        LUN: 254
          Vendor: 3PARdata
          Product: SES            
          OS Device Name: /dev/es/ses9
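Since the -ls output is long, I find it useful to boil it down to a LUN-number-to-device table. The lun_map filter below is my own, not part of fcinfo; the sample input is an excerpt of the output above, and on a live system you would pipe ‘fcinfo remote-port -ls -p <HBA-port-WWN>’ into it.

```shell
# Reduce 'fcinfo remote-port -ls' output to "LUN <tab> device" lines.
lun_map() {
  awk '/^ *LUN:/         { lun = $2 }
       /OS Device Name:/ { print lun "\t" $4 }'
}

# Excerpt of the output above:
lun_map <<'EOF'
        LUN: 4
          Vendor: 3PARdata
          Product: VV
          OS Device Name: /dev/rdsk/c6t50002AC001BC0928d0s2
        LUN: 254
          Vendor: 3PARdata
          Product: SES
          OS Device Name: /dev/es/ses8
EOF
```

The SES enclosure devices (LUN 254 here) show up alongside the disks, which makes them easy to spot and exclude.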
The cfgadm command can be used to view the system ‘attachment points’ which are broadly defined as the location of certain hardware resources visible to Solaris. Running cfgadm with the -al option will list these attachment points. This will include SAN HBAs and LUNs, USB devices, internal disks such as SATA etc.
The Type field gives an indication of the type of the device. You will see some self-explanatory entries such as ‘scsi-bus’ (which I think of as a controller) and ‘disk’. A fibre channel SAN HBA is usually seen as fc; if it is connected to the fabric/SAN it will show as fc-fabric. If you see an entry labelled ESI, then know that you are seeing the enclosure through the Enclosure Services Interface.
The output below is partially truncated.
bash-3.00# cfgadm -al
Ap_Id                          Type         Receptacle   Occupant     Condition
c0                             scsi-bus     connected    configured   unknown
c0::dsk/c0t0d0                 disk         connected    configured   unknown
c0::dsk/c0t1d0                 disk         connected    configured   unknown
c0::dsk/c0t2d0                 disk         connected    configured   unknown
c0::dsk/c0t3d0                 disk         connected    configured   unknown
c0::dsk/c0t4d0                 disk         connected    configured   unknown
c2                             fc           connected    unconfigured unknown
c3                             fc-fabric    connected    configured   unknown
c3::22110002ac000928           disk         connected    configured   unknown
c3::23110002ac000928           disk         connected    configured   unknown
c3::24530002ac0009e2           ESI          connected    configured   unknown
c3::25530002ac0009e2           ESI          connected    configured   unknown
c4                             fc           connected    unconfigured unknown
c5                             fc-fabric    connected    configured   unknown
c5::22120002ac000928           disk         connected    configured   unknown
c5::23120002ac000928           disk         connected    configured   unknown
c5::24540002ac0009e2           ESI          connected    configured   unknown
c5::25540002ac0009e2           ESI          connected    configured   unknown
sata0/0                        sata-port    empty        unconfigured ok
sata0/1                        sata-port    empty        unconfigured ok
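To pick out just the fabric-connected HBA ports from that listing, a one-line awk filter on the Type column is enough. The fabric_ports helper below is my own (not a Solaris command); the sample input is an excerpt of the cfgadm output above, and on a live system you would pipe ‘cfgadm -al’ in.

```shell
# List the attachment points whose Type is fc-fabric, i.e. the HBA
# ports actually logged in to the fabric.  Pipe 'cfgadm -al' in.
fabric_ports() {
  awk '$2 == "fc-fabric" { print $1 }'
}

# Excerpt of the cfgadm -al output above:
fabric_ports <<'EOF'
c2                             fc           connected    unconfigured unknown
c3                             fc-fabric    connected    configured   unknown
c4                             fc           connected    unconfigured unknown
c5                             fc-fabric    connected    configured   unknown
EOF
```

Against that sample it prints c3 and c5, the two controllers carrying the SAN paths in this article.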
Use the ‘-o show_FCP_dev’ option to get cfgadm to show not only the controllers and the enclosures, but also any fibre channel disks that may be visible on the channel:
Ap_Id                          Type         Receptacle   Occupant     Condition
c2                             fc           connected    unconfigured unknown
c3                             fc-fabric    connected    configured   unknown
c3::22110002ac000928,0         disk         connected    configured   unknown
c3::22110002ac000928,1         disk         connected    configured   unknown
c3::22110002ac000928,2         disk         connected    configured   unknown
c3::22110002ac000928,3         disk         connected    configured   unknown
c3::22110002ac000928,4         disk         connected    configured   unknown
c3::22110002ac000928,5         disk         connected    configured   unknown
c3::22110002ac000928,6         disk         connected    configured   unknown
...
c3::22110002ac000928,12        disk         connected    configured   unknown
c3::22110002ac000928,13        disk         connected    configured   unknown
c3::22110002ac000928,14        disk         connected    configured   unknown
c3::22110002ac000928,254       ESI          connected    configured   unknown
.....
c3::23110002ac000928,13        disk         connected    configured   unknown
c3::23110002ac000928,14        disk         connected    configured   unknown
c3::23110002ac000928,254       ESI          connected    configured   unknown
c3::24530002ac0009e2,254       ESI          connected    configured   unknown
c3::25530002ac0009e2,254       ESI          connected    configured   unknown
c4                             fc           connected    unconfigured unknown
c5                             fc-fabric    connected    configured   unknown
c5::22120002ac000928,0         disk         connected    configured   unknown
c5::22120002ac000928,1         disk         connected    configured   unknown
c5::22120002ac000928,2         disk         connected    configured   unknown
c5::22120002ac000928,3         disk         connected    configured   unknown
c5::22120002ac000928,4         disk         connected    configured   unknown
c5::22120002ac000928,5         disk         connected    configured   unknown
....
c5::22120002ac000928,14        disk         connected    configured   unknown
c5::22120002ac000928,254       ESI          connected    configured   unknown
c5::23120002ac000928,0         disk         connected    configured   unknown
....
c5::23120002ac000928,12        disk         connected    configured   unknown
c5::23120002ac000928,13        disk         connected    configured   unknown
c5::23120002ac000928,14        disk         connected    configured   unknown
c5::23120002ac000928,254       ESI          connected    configured   unknown
c5::24540002ac0009e2,254       ESI          connected    configured   unknown
c5::25540002ac0009e2,254       ESI          connected    configured   unknown
bash-3.00# 
Throughout the outputs above you’ll notice that these disks are multipathed and visible through 2 separate controllers (c3 and c5). Enable STMS (see stmsboot(1M)) to aggregate those 2 paths to a single controller. You will gain a pseudo controller when you do this. In this case, the controller becomes c6 (an aggregate of c3 and c5). The new disk targets created by STMS are then visible in format:
      18. c6t50002AC001C40928d0   MKCHAD13
          /scsi_vhci/disk@g50002ac001c40928
      19. c6t50002AC001C50928d0   MKCHAD14
          /scsi_vhci/disk@g50002ac001c50928
Once we have these multiple paths, luxadm can be used to interrogate the controller and view the subpaths (and their state). First, run a ‘luxadm probe’ which will scan the devices and present a list.
bash-3.00# luxadm probe             

Found Fibre Channel device(s):
  Node WWN:2ff70002ac0009e2  Device Type:SES device
    Logical Path:/dev/es/ses8
  Node WWN:2ff70002ac000928  Device Type:Disk device
    Logical Path:/dev/rdsk/c6t50002AC001B80928d0s2
  Node WWN:2ff70002ac000928  Device Type:Disk device
    Logical Path:/dev/rdsk/c6t50002AC001C50928d0s2
  Node WWN:2ff70002ac000928  Device Type:Disk device
    Logical Path:/dev/rdsk/c6t50002AC001C40928d0s2
  Node WWN:2ff70002ac000928  Device Type:Disk device
    Logical Path:/dev/rdsk/c6t50002AC001C30928d0s2
  Node WWN:2ff70002ac000928  Device Type:Disk device
    Logical Path:/dev/rdsk/c6t50002AC001C20928d0s2
  Node WWN:2ff70002ac000928  Device Type:Disk device
    Logical Path:/dev/rdsk/c6t50002AC001C10928d0s2
  Node WWN:2ff70002ac000928  Device Type:Disk device
    Logical Path:/dev/rdsk/c6t50002AC001C00928d0s2
  Node WWN:2ff70002ac000928  Device Type:Disk device
    Logical Path:/dev/rdsk/c6t50002AC001BF0928d0s2
  Node WWN:2ff70002ac000928  Device Type:Disk device
    Logical Path:/dev/rdsk/c6t50002AC001BE0928d0s2
  Node WWN:2ff70002ac000928  Device Type:Disk device
    Logical Path:/dev/rdsk/c6t50002AC001BD0928d0s2
  Node WWN:2ff70002ac000928  Device Type:Disk device
    Logical Path:/dev/rdsk/c6t50002AC001BC0928d0s2
  Node WWN:2ff70002ac000928  Device Type:Disk device
    Logical Path:/dev/rdsk/c6t50002AC001BB0928d0s2
  Node WWN:2ff70002ac000928  Device Type:Disk device
    Logical Path:/dev/rdsk/c6t50002AC001BA0928d0s2
  Node WWN:2ff70002ac000928  Device Type:Disk device
    Logical Path:/dev/rdsk/c6t50002AC001B90928d0s2
  Node WWN:2ff70002ac000928  Device Type:SES device
    Logical Path:/dev/es/ses9
  Node WWN:2ff70002ac000928  Device Type:Disk device
    Logical Path:/dev/rdsk/c6t50002AC001B70928d0s2
bash-3.00#
Now you can select a logical path and use it with ‘luxadm display <logical-path>’ to view the individual paths;
bash-3.00# luxadm display /dev/rdsk/c6t50002AC001B80928d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c6t50002AC001B80928d0s2
  Vendor:               3PARdata
  Product ID:           VV             
  Revision:             0000
  Serial Num:           01B80928
  Unformatted capacity: 40960.000 MBytes
  Write Cache:          Enabled
  Read Cache:           Enabled
    Minimum prefetch:   0x0
    Maximum prefetch:   0xffff
  Device Type:          Disk device
  Path(s):

  /dev/rdsk/c6t50002AC001B80928d0s2
  /devices/scsi_vhci/disk@g50002ac001b80928:c,raw
   Controller           /dev/cfg/c3
    Device Address              22110002ac000928,e
    Host controller port WWN    21000024ff295a35
    Class                       primary
    State                       ONLINE
   Controller           /dev/cfg/c3
    Device Address              23110002ac000928,e
    Host controller port WWN    21000024ff295a35
    Class                       primary
    State                       ONLINE
   Controller           /dev/cfg/c5
    Device Address              22120002ac000928,e
    Host controller port WWN    21000024ff295a37
    Class                       primary
    State                       ONLINE
   Controller           /dev/cfg/c5
    Device Address              23120002ac000928,e
    Host controller port WWN    21000024ff295a37
    Class                       primary
    State                       ONLINE

bash-3.00#
Things to note are that the size, write cache, read cache and path state are all shown, and that the ‘,e’ after the Device Address is the LUN number in hex.
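Since that trailing field is hex, converting it to the decimal LUN number is a one-liner in plain shell, nothing Solaris-specific required:

```shell
# Strip the hex LUN number off a Device Address and print it in decimal.
addr="22120002ac000928,e"        # a Device Address from the output above
lun_hex=${addr##*,}              # strip everything up to the comma -> e
printf 'LUN %d\n' "0x$lun_hex"   # prints: LUN 14
```

So the ‘,e’ paths above all belong to LUN 14.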
The unique part of the LUN identifier (the LUN id) is embedded halfway through the mpxio / STMS name. To extract it from format, for example;
bash-3.00# echo | format |grep c6 | cut -c22-25
01B7
01B8
01B9
01BA
01BB
01BC
01BD
01BE
01BF
01C0
01C1
01C2
01C3
01C4
01C5
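The magic numbers 22-25 are just character positions: with this disk naming, both the format listing above and the raw /dev/rdsk path happen to put the four hex digits of the LUN id starting at column 22. A quick sanity check on a device path (assuming the standard /dev/rdsk/ prefix and this target-WWN layout):

```shell
# Column positions 22-25 of the multipathed device name hold the LUN id.
disk=/dev/rdsk/c6t50002AC001B80928d0s2
echo "$disk" | cut -c22-25    # prints: 01B8
```

If your device names are laid out differently (longer controller numbers, different WWN format), count the columns again before trusting the cut range.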
This information, in conjunction with luxadm, can be used to correlate disks to LUN numbers and LUN IDs, although the fcinfo commands shown above are generally the easier way to obtain it. Notice that the LUN number is appended to the Device Address in hex.
bash-3.00# for disk in $( ls /dev/rdsk/c6t50002AC00*s2 )
> do
> echo $disk
> echo $disk | cut -c22-26
> luxadm display $disk |grep 'Device Address'
> done
/dev/rdsk/c6t50002AC001B70928d0s2
01B70
    Device Address              22110002ac000928,0
    Device Address              23110002ac000928,0
    Device Address              22120002ac000928,0
    Device Address              23120002ac000928,0

/dev/rdsk/c6t50002AC001B90928d0s2
01B90
    Device Address              22110002ac000928,1
    Device Address              23110002ac000928,1
    Device Address              22120002ac000928,1
    Device Address              23120002ac000928,1
/dev/rdsk/c6t50002AC001BA0928d0s2
01BA0
    Device Address              22110002ac000928,2
    Device Address              23110002ac000928,2
    Device Address              22120002ac000928,2
    Device Address              23120002ac000928,2
/dev/rdsk/c6t50002AC001BB0928d0s2
01BB0
    Device Address              22110002ac000928,3
    Device Address              23110002ac000928,3
    Device Address              22120002ac000928,3
    Device Address              23120002ac000928,3
/dev/rdsk/c6t50002AC001BC0928d0s2
01BC0
    Device Address              22110002ac000928,4
    Device Address              23110002ac000928,4
    Device Address              22120002ac000928,4
    Device Address              23120002ac000928,4
/dev/rdsk/c6t50002AC001BD0928d0s2
01BD0
    Device Address              22110002ac000928,5
    Device Address              23110002ac000928,5
    Device Address              22120002ac000928,5
    Device Address              23120002ac000928,5
/dev/rdsk/c6t50002AC001BE0928d0s2
01BE0
    Device Address              22110002ac000928,6
    Device Address              23110002ac000928,6
    Device Address              22120002ac000928,6
    Device Address              23120002ac000928,6
/dev/rdsk/c6t50002AC001BF0928d0s2
01BF0
    Device Address              22110002ac000928,7
    Device Address              23110002ac000928,7
    Device Address              22120002ac000928,7
    Device Address              23120002ac000928,7
/dev/rdsk/c6t50002AC001C00928d0s2
01C00
    Device Address              22110002ac000928,8
    Device Address              23110002ac000928,8
    Device Address              22120002ac000928,8
    Device Address              23120002ac000928,8
/dev/rdsk/c6t50002AC001C10928d0s2
01C10
    Device Address              22110002ac000928,9
    Device Address              23110002ac000928,9
    Device Address              22120002ac000928,9
    Device Address              23120002ac000928,9
/dev/rdsk/c6t50002AC001C20928d0s2
01C20
    Device Address              22110002ac000928,a
    Device Address              23110002ac000928,a
    Device Address              22120002ac000928,a
    Device Address              23120002ac000928,a
/dev/rdsk/c6t50002AC001C30928d0s2
01C30
    Device Address              22110002ac000928,b
    Device Address              23110002ac000928,b
    Device Address              22120002ac000928,b
    Device Address              23120002ac000928,b
/dev/rdsk/c6t50002AC001C40928d0s2
01C40
    Device Address              22110002ac000928,c
    Device Address              23110002ac000928,c
    Device Address              22120002ac000928,c
    Device Address              23120002ac000928,c
/dev/rdsk/c6t50002AC001C50928d0s2
01C50
    Device Address              22110002ac000928,d
    Device Address              23110002ac000928,d
    Device Address              22120002ac000928,d
    Device Address              23120002ac000928,d
/dev/rdsk/c6t50002AC001B80928d0s2
01B80
    Device Address              22110002ac000928,e
    Device Address              23110002ac000928,e
    Device Address              22120002ac000928,e
    Device Address              23120002ac000928,e
Labelling the disks with a volume name is completely optional, but it is quite a useful feature. It relies on a reference to the LUN numbers to ensure the correct labels are assigned to the correct disks. I like to use the following, which prints the LUN number for me just before presenting a format dialog in which to assign the appropriate volname;
for disk in $( ls /dev/rdsk/c6t50002AC00*s2 )
do 
   echo $disk
   echo $disk | cut -c22-26
   luxadm display $disk |grep 'Device Address' 
   format $disk 
done
A truncated output example (without the actual format screens) will be something like;
/dev/rdsk/c6t50002AC001BD0928d0s2
01BD0
    Device Address              22110002ac000928,5
    Device Address              23110002ac000928,5
    Device Address              22120002ac000928,5
    Device Address              23120002ac000928,5
format /dev/rdsk/c6t50002AC001BD0928d0s2
/dev/rdsk/c6t50002AC001BE0928d0s2
01BE0
    Device Address              22110002ac000928,6
    Device Address              23110002ac000928,6
    Device Address              22120002ac000928,6
    Device Address              23120002ac000928,6
format /dev/rdsk/c6t50002AC001BE0928d0s2
Your format command with named and labelled LUNs will look something like:
       5. c6t50002AC001B70928d0   MKCHAD01
          /scsi_vhci/disk@g50002ac001b70928
       6. c6t50002AC001B80928d0   MKCHAD15
          /scsi_vhci/disk@g50002ac001b80928
       7. c6t50002AC001B90928d0   MKCHAD02
          /scsi_vhci/disk@g50002ac001b90928
       8. c6t50002AC001BA0928d0   MKCHAD03
          /scsi_vhci/disk@g50002ac001ba0928
       9. c6t50002AC001BB0928d0   MKCHAD04
          /scsi_vhci/disk@g50002ac001bb0928
      10. c6t50002AC001BC0928d0   MKCHAD05
          /scsi_vhci/disk@g50002ac001bc0928
      11. c6t50002AC001BD0928d0   MKCHAD06
          /scsi_vhci/disk@g50002ac001bd0928
      12. c6t50002AC001BE0928d0   MKCHAD07
          /scsi_vhci/disk@g50002ac001be0928
      13. c6t50002AC001BF0928d0   MKCHAD08
          /scsi_vhci/disk@g50002ac001bf0928
      14. c6t50002AC001C00928d0   MKCHAD09
          /scsi_vhci/disk@g50002ac001c00928
      15. c6t50002AC001C10928d0   MKCHAD10
          /scsi_vhci/disk@g50002ac001c10928
      16. c6t50002AC001C20928d0   MKCHAD11
          /scsi_vhci/disk@g50002ac001c20928
      17. c6t50002AC001C30928d0   MKCHAD12
          /scsi_vhci/disk@g50002ac001c30928
      18. c6t50002AC001C40928d0   MKCHAD13
          /scsi_vhci/disk@g50002ac001c40928
      19. c6t50002AC001C50928d0   MKCHAD14
          /scsi_vhci/disk@g50002ac001c50928
Specify disk (enter its number): 
The next steps are to configure your LUNs as required. This will depend on the intended usage: perhaps they will be used as raw or ASM volumes for database usage, perhaps as ZFS zpools for filesystem usage.