Saturday, December 9, 2017

Disable and blacklist all devices in DM multipathing on RHEL 7

If you want to use third-party multipathing software instead of DM (Linux native multipathing), disable DM by following the steps below:

1. Blacklist all devices in DM. Edit /etc/multipath.conf so that it contains only the following lines:

blacklist {
    devnode "*"
}


2. Ensure dm-multipath does not start automatically at boot.

# systemctl disable multipathd.service

# systemctl list-unit-files | grep multipath
 multipathd.service                          disabled
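
If the multipathd service is currently running, you can also stop it now and flush any unused device maps (an extra step not in the original procedure, using standard multipath tooling):

# systemctl stop multipathd.service
# multipath -F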


3. Recreate the initramfs so that dm-multipath is excluded from it.

# dracut /boot/initramfs-wo-DM-$(uname -r).img $(uname -r)

Take a backup of the existing /boot/initramfs-$(uname -r).img file and rename the newly created initramfs file to the name of the original initramfs file.
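
A minimal sketch of the backup and rename, assuming the file names from the dracut command above:

# cp /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.bak
# mv /boot/initramfs-wo-DM-$(uname -r).img /boot/initramfs-$(uname -r).img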


4. Reboot the host with the new initramfs image and ensure dm-multipath does not configure any devices.

5. To check if any devices are configured:

# multipath -ll

The above command should not return any dm devices.
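
As an additional check (not part of the original steps), verify that the dm-multipath kernel module is no longer loaded after the reboot; the command should return no output:

# lsmod | grep dm_multipath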

How to get HBA/LUN/path information on ESXi 6.x?

1. To check how many HBAs are installed and which ones have connectivity:

# esxcli storage core adapter list
HBA Name  Driver      Link State  UID                                   Capabilities                         Description                                                    
--------  ----------  ----------  ------------------------------------  -----------------------------------  ---------------------------------------------------------------------------------------
vmhba0    hpdsa       link-n/a    unknown.vmhba0                                                             (0000:01:00.7) Hewlett Packard Enterprise HPE|Dynamic Smart Array B140i RAID Controller
vmhba1    lpfc        link-n/a    fc.20001402ecec2db2:10001402ecec2db2  Second Level Lun ID                  (0000:04:00.0) Emulex Corporation Emulex LightPulse LPe16000 PCIe Fibre Channel Adapter
vmhba2    lpfc        link-n/a    fc.20001402ecec2db3:10001402ecec2db3  Second Level Lun ID                  (0000:04:00.1) Emulex Corporation Emulex LightPulse LPe16000 PCIe Fibre Channel Adapter
vmhba3    qlnativefc  link-up     fc.500143803137785d:500143803137785c  Data Integrity, Second Level Lun ID  (0000:83:00.0) QLogic Corp 2600 Series 16Gb Fibre Channel to PCI Express HBA
vmhba4    qlnativefc  link-up     fc.500143803137785f:500143803137785e  Data Integrity, Second Level Lun ID  (0000:83:00.1) QLogic Corp 2600 Series 16Gb Fibre Channel to PCI Express HBA


2. To check brief info about WWNN/WWPN, port speed, status, model, firmware, and driver version:

# esxcli storage san fc list
   Adapter: vmhba3
   Port ID: 010A00
   Node Name: 50:01:43:80:31:37:78:5d
   Port Name: 50:01:43:80:31:37:78:5c
   Speed: 16 Gbps
   Port Type: NPort
   Port State: ONLINE
   Model Description: Synergy 3830C 1
   Hardware Version: CU0410431-01  G
   OptionROM Version: 3.43
   Firmware Version: 8.05.60 (d0d5)
   Driver Name: qlnativefc
   DriverVersion: 2.1.57.0
 
 
3. To check multipath details for a device:

# esxcli storage nmp device list

naa.60002ac0000000000000004b0001cb64
   Device Display Name: 3PARdata Fibre Channel Disk (naa.60002ac0000000000000004b0001cb64)
   Storage Array Type: VMW_SATP_ALUA
   Storage Array Type Device Config: {implicit_support=on; explicit_support=off; explicit_allow=on; alua_followover=on; action_OnRetryErrors=off; {TPG_id=1,TPG_state=AO}}
   Path Selection Policy: VMW_PSP_RR
   Path Selection Policy Device Config: {policy=rr,iops=1000,bytes=10485760,useANO=0; lastPathIndex=1: NumIOsPending=0,numBytesPending=0}
   Path Selection Policy Device Custom Config:
   Working Paths: vmhba4:C0:T0:L10, vmhba4:C0:T1:L10, vmhba3:C0:T0:L10, vmhba3:C0:T1:L10
   Is USB: false
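
To narrow the output to a single device, the same command accepts a device filter with "-d | --device" (the device ID below is taken from the output above):

# esxcli storage nmp device list -d naa.60002ac0000000000000004b0001cb64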



4. To check path details for a device:

# esxcli storage core path list
fc.500143803137785f:500143803137785e-fc.2ff70002ac01cb64:21420002ac01cb64-naa.60002ac0000000000000004a0001cb64
   UID: fc.500143803137785f:500143803137785e-fc.2ff70002ac01cb64:21420002ac01cb64-naa.60002ac0000000000000004a0001cb64
   Runtime Name: vmhba4:C0:T1:L9
   Device: naa.60002ac0000000000000004a0001cb64
   Device Display Name: 3PARdata Fibre Channel Disk (naa.60002ac0000000000000004a0001cb64)
   Adapter: vmhba4
   Channel: 0
   Target: 1
   LUN: 9
   Plugin: NMP
   State: active
   Transport: fc
   Adapter Identifier: fc.500143803137785f:500143803137785e
   Target Identifier: fc.2ff70002ac01cb64:21420002ac01cb64
   Adapter Transport Details: WWNN: 50:01:43:80:31:37:78:5f WWPN: 50:01:43:80:31:37:78:5e
   Target Transport Details: WWNN: 2f:f7:00:02:ac:01:cb:64 WWPN: 21:42:00:02:ac:01:cb:64
   Maximum IO Size: 33553920
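
The path listing can likewise be limited to one device with "-d | --device" (device ID assumed from the example above):

# esxcli storage core path list -d naa.60002ac0000000000000004a0001cb64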

 
5. To get more info about each device:

# esxcli storage core device list
   
naa.60002ac000000000000000860001cb64
   Display Name: 3PARdata Fibre Channel Disk (naa.60002ac000000000000000860001cb64)
   Has Settable Display Name: true
   Size: 2609152
   Device Type: Direct-Access
   Multipath Plugin: NMP
   Devfs Path: /vmfs/devices/disks/naa.60002ac000000000000000860001cb64
   Vendor: 3PARdata
   Model: VV
   Revision: 3312
   SCSI Level: 6
   Is Pseudo: false
   Status: on
   Is RDM Capable: true
   Is Local: false
   Is Removable: false
   Is SSD: false
   Is VVOL PE: false
   Is Offline: false
   Is Perennially Reserved: false
   Queue Full Sample Size: 32
   Queue Full Threshold: 4
   Thin Provisioning Status: yes
   Attached Filters:
   VAAI Status: supported
   Other UIDs: vml.020005000060002ac000000000000000860001cb64565620202020
   Is Shared Clusterwide: true
   Is Local SAS Device: false
   Is SAS: false
   Is USB: false
   Is Boot USB Device: false
   Is Boot Device: false
   Device Max Queue Depth: 64
   No of outstanding IOs with competing worlds: 32
   Drive Type: unknown
   RAID Level: unknown
   Number of Physical Drives: unknown
   Protection Enabled: false
   PI Activated: false
   PI Type: 0
   PI Protection Mask: T1 T3 T1+DIX T3+DIX
   Supported Guard Types: IP, CRC
   DIX Enabled: false
   DIX Guard Type: NO GUARD SUPPORT
   Emulated DIX/DIF Enabled: false


6. To check active paths:

# esxcfg-mpath -L
vmhba4:C0:T0:L0 state:active naa.60002ac000000000000000050001cb64 vmhba4 0 0 0 NMP active san fc.500143803137785f:500143803137785e fc.2ff70002ac01cb64:20420002ac01cb64
vmhba4:C0:T0:L1 state:active naa.60002ac000000000000000400001cb64 vmhba4 0 0 1 NMP active san fc.500143803137785f:500143803137785e fc.2ff70002ac01cb64:20420002ac01cb64
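
For a per-device grouping of the same path information, "esxcfg-mpath -b" can also be used (a brief listing; output omitted here):

# esxcfg-mpath -b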


Friday, December 8, 2017

How to do LUN reset and target reset on HP-UX?

LUN reset - Resets the specified LUN by clearing any SCSI reservation on the LUN and making the LUN available to all servers again. The reset does not affect any of the other LUNs on the device. If another LUN on the device is reserved, it remains reserved.

Target reset - Resets the entire target. The reset clears any SCSI reservations on all the LUNs associated with that target and makes the LUNs available to all servers again.


Get the 3PAR raw disk device on which the LUN/target reset operations need to be performed:

$ scsimgr lun_map

LUN PATH INFORMATION FOR LUN : /dev/rdisk/disk449

Total number of LUN paths     = 2
World Wide Identifier(WWID)    = 0x60002ac0000000000000011f00000098

LUN path : lunpath59
Class                         = lunpath
Instance                      = 59
Hardware path                 = 0/0/4/0/0/0/0.0x21110002ac000098.0x400f000000000000
SCSI transport protocol       = fibre_channel
State                         = UNOPEN
Last Open or Close state      = ACTIVE

LUN path : lunpath74
Class                         = lunpath
Instance                      = 74
Hardware path                 = 0/0/6/0/0/0/0/4/0/0/1.0x20120002ac000098.0x400f000000000000
SCSI transport protocol       = fibre_channel
State                         = UNOPEN
Last Open or Close state      = ACTIVE
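
To cross-check the WWID of the disk before resetting it, scsimgr can query the attribute directly (a sketch, assuming the "wwid" attribute name on HP-UX 11i v3):

$ scsimgr get_attr -D /dev/rdisk/disk449 -a wwid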



LUN Reset:


$ scsimgr lun_reset -D /dev/rdisk/disk449
Do you really want to continue? (y/[n])? y
scsimgr: lun_reset operation succeeded


Target Reset:


$ scsimgr warm_bdr -D /dev/rdisk/disk449
Do you really want to continue? (y/[n])? y
scsimgr: warm_bdr operation succeeded


How to do LUN reset and target reset on RHEL 6.x and 7.x?

LUN reset - Resets the specified LUN by clearing any reservation on the LUN and making the LUN available to all servers again. The reset does not affect any of the other LUNs on the device. If another LUN on the device is reserved, it remains reserved.

Target reset - Resets the entire target. The reset clears any reservations on all the LUNs associated with that target and makes the LUNs available to all servers again.

Make sure the below package is installed on your RHEL host:

# rpm -qa | grep sg3
sg3_utils-libs-1.37-12.el7.x86_64
sg3_utils-1.37-12.el7.x86_64

"sg_reset" command comes bundled with "sg3_utils" package, so install it if not installed already. 

Get the WWID of the device that needs to be reset:

# multipath -ll

360002ac000000000000000820001cb64 dm-23 3PARdata,VV
size=2.5T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
  |- 0:0:0:5  sdj  8:144  active ready running
  |- 3:0:0:5  sdl  8:176  active ready running
  |- 0:0:1:5  sdaw 67:0   active ready running
  `- 3:0:1:5  sdax 67:16  active ready running


LUN Reset (--device | -d):

# sg_reset -d /dev/mapper/360002ac000000000000000820001cb64


# tail -f /var/log/messages

Dec  8 23:28:00 rhel74 kernel: qla2xxx [0000:83:00.0]-8009:0: DEVICE RESET ISSUED nexus=0:0:5 cmd=ffff88085cd3f640.
Dec  8 23:28:00 rhel74 kernel: qla2xxx [0000:83:00.0]-800e:0: DEVICE RESET SUCCEEDED nexus:0:0:5 cmd=ffff88085cd3f640.


Target Reset (--target | -t):


# sg_reset -t /dev/mapper/360002ac000000000000000820001cb64


# tail -f /var/log/messages

Dec  8 23:28:40 rhel74 kernel: qla2xxx [0000:83:00.0]-8009:0: TARGET RESET ISSUED nexus=0:0:5 cmd=ffff881059b4da40.
Dec  8 23:28:40 rhel74 kernel: qla2xxx [0000:83:00.0]-800e:0: TARGET RESET SUCCEEDED nexus:0:0:5 cmd=ffff881059b4da40.


If you are still seeing SCSI reservation conflicts, you can use the "--bus | -b" option.

It resets all accessible targets on the bus. The reset clears any SCSI reservations on all the LUNs accessible through the bus and makes them available to all servers again.
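
For example, using the same multipath device as above (note that a bus reset is disruptive to every device reachable through that bus):

# sg_reset -b /dev/mapper/360002ac000000000000000820001cb64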

How to update the VI web client interface for ESXi 6.5?

If you are using ESXi 6.5 and seeing the below two issues in the VI web-client interface:


  • Java exceptions during VI web-client navigation
  • Not seeing RDM option while adding disk to VM


https://communities.vmware.com/thread/542973


The above issues are resolved in the latest available esx-ui package, which can be downloaded from the below VMware URL:

https://labs.vmware.com/flings/esxi-embedded-host-client#summary

Download the offline bundle for ESXi 6.x - esxui-offline-bundle-6.x-5744014.zip


Steps:

1. To check the currently installed esx-ui package, log in to the ESXi 6.5 host via SSH and run:

# esxcli software vib list | grep esx-ui
esx-ui                         1.8.0-4516221                         VMware     VMwareCertified   2017-07-06


2. Copy the zip file to any location on the ESXi host; for example, copy it to "datastore1":

# pwd
/vmfs/volumes/datastore1
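
A minimal way to copy the bundle from a workstation, assuming SSH is enabled on the host and "esxi-host" is a placeholder hostname:

$ scp esxui-offline-bundle-6.x-5744014.zip root@esxi-host:/vmfs/volumes/datastore1/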


3. Update the esx-ui package to the latest build:

# esxcli software vib update -d "/vmfs/volumes/datastore1/esxui-offline-bundle-6.x-5744014.zip"
Installation Result
   Message: Operation finished successfully.
   Reboot Required: false
   VIBs Installed: VMware_bootbank_esx-ui_1.21.0-5744014
   VIBs Removed: VMware_bootbank_esx-ui_1.8.0-4516221
   VIBs Skipped:


4. Check the version of the installed package:

# esxcli software vib list | grep esx-ui
esx-ui                         1.21.0-5744014                        VMware     VMwareCertified   2017-08-01


Reboot the ESXi host if the Web UI does not reflect the update.

How to do LUN reset and target reset operations on ESXi 6.x?


LUN reset - Resets the specified LUN by clearing any SCSI reservation on the LUN and making the LUN available to all servers again. The reset does not affect any of the other LUNs on the device. If another LUN on the device is reserved, it remains reserved.

Target reset - Resets the entire target. The reset clears any SCSI reservations on all the LUNs associated with that target and makes the LUNs available to all servers again.

To check all the available devices:

# esxcfg-scsidevs -l
naa.60002ac000000000000000860001cb64
   Device Type: Direct-Access
   Size: 2609152 MB
   Display Name: 3PARdata Fibre Channel Disk (naa.60002ac000000000000000860001cb64)
   Multipath Plugin: NMP
   Console Device: /vmfs/devices/disks/naa.60002ac000000000000000860001cb64
   Devfs Path: /vmfs/devices/disks/naa.60002ac000000000000000860001cb64
   Vendor: 3PARdata  Model: VV                Revis: 3312
   SCSI Level: 6  Is Pseudo: false Status: on
   Is RDM Capable: true  Is Removable: false
   Is Local: false Is SSD: false
   Other Names:
      vml.020005000060002ac000000000000000860001cb64565620202020
   VAAI Status: supported


To reset LUN:

# vmkfstools -L lunreset /vmfs/devices/disks/naa.60002ac000000000000000860001cb64

Sometimes resetting the LUN using the "naa." device reference does not work; in that case, try running the same command with the "vml" device reference:

# vmkfstools -L lunreset /vmfs/devices/disks/vml.020005000060002ac000000000000000860001cb64565620202020


# tail -f /var/log/vmkernel.log

2017-12-08T16:48:49.072Z cpu25:71333)WARNING: NMP: nmpDeviceTaskMgmt:2291: Attempt to issue lun reset on device naa.60002ac000000000000000860001cb64. This will clear any SCSI-2 reservations on the device.
2017-12-08T16:48:49.072Z cpu25:71333)Resv: 633: Executed out-of-band lun reset on naa.60002ac000000000000000860001cb64


To reset Target:


# vmkfstools -L targetreset /vmfs/devices/disks/naa.60002ac000000000000000860001cb64

OR

# vmkfstools -L targetreset /vmfs/devices/disks/vml.020005000060002ac000000000000000860001cb64565620202020


# tail -f /var/log/vmkernel.log

2017-12-08T16:49:53.525Z cpu0:71347)WARNING: NMP: nmpDeviceTaskMgmt:2291: Attempt to issue target reset on device naa.60002ac000000000000000860001cb64. This will clear any SCSI-2 reservations on the device.
2017-12-08T16:49:53.525Z cpu2:71347)Resv: 633: Executed out-of-band target reset on naa.60002ac000000000000000860001cb64


If you are still seeing SCSI reservation conflicts, you can use the "busreset" option.

It resets all accessible targets on the bus. The reset clears any SCSI reservations on all the LUNs accessible through the bus and makes them available to all servers again.
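
For example, using the same device as above (a bus reset affects every device reachable through the bus):

# vmkfstools -L busreset /vmfs/devices/disks/naa.60002ac000000000000000860001cb64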