
VMware I/O performance – Guest vmdk vs Guest iscsi initiator

Today I decided to run some I/O benchmarks in a VMware environment to compare the performance of a virtual disk assigned to a virtual machine against that of a disk accessed through an iSCSI initiator.

Basically, the VM had 2 virtual disks assigned for the benchmark:

  1. Disk E is a thick virtual disk added from VMware. This vmdk resides on a SAS-based datastore;
  2. Disk F is a disk accessed by the iSCSI initiator from Windows. This disk is served from a Dell Equallogic PS6000 and MPIO is enabled;
  3. The virtual disk controller is the LSI Logic SAS adapter provided by VMware.

Results (parsed at http://vmktree.org/iometer/):

  • I/O using the iSCSI initiator from Windows

Test name                | Avg latency (ms) | Avg IOPS | Avg MBps | CPU load
Max Throughput-100%Read  | 0.00             | 6380     | 199      | 33%
RealLife-60%Rand-65%Read | 12.16            | 4244     | 33       | 9%
Max Throughput-50%Read   | 78.83            | 4812     | 150      | 45%
Random-8k-70%Read        | 10.16            | 4142     | 32       | 9%
  • I/O using the assigned virtual disk

Test name                | Avg latency (ms) | Avg IOPS | Avg MBps | CPU load
Max Throughput-100%Read  | 0.00             | 7183     | 224      | 26%
RealLife-60%Rand-65%Read | 12.05            | 4208     | 32       | 10%
Max Throughput-50%Read   | 111.94           | 6833     | 213      | 26%
Random-8k-70%Read        | 9.38             | 3817     | 29       | 9%
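As a sanity check, the MBps columns above line up with IOPS × block size. The block sizes are my assumption based on the access specifications commonly used in OpenPerformanceTest.icf: 32KB for the Max Throughput tests and 8KB for the RealLife and Random tests.

```shell
#!/bin/sh
# MBps = IOPS * block size in KB / 1024 (integer division for a rough check)
# Max Throughput-100%Read over iSCSI: 6380 IOPS at an assumed 32KB block size
echo $((6380 * 32 / 1024))   # -> 199, matching the table
# RealLife-60%Rand-65%Read over iSCSI: 4244 IOPS at an assumed 8KB block size
echo $((4244 * 8 / 1024))    # -> 33, matching the table
```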

Conclusion:

The virtual disk outperforms the iSCSI disk for read loads and performs slightly worse for random workloads. Interestingly, when the virtual disk performance is lower we also see higher CPU usage on the VM. Something to be investigated…

Unless your environment needs an in-guest iSCSI connection, for instance for hosting MS Clusters, the performance of a virtual disk is quite good as well, with all the advantages of snapshots, Storage vMotion, etc.

Environment configuration details:

VMware Infrastructure configuration:

  • VMware ESXi 4.1 Update 1
  • Dell R515 with 2 AMD Opteron 4180 sockets (2.6GHz) and 64GB of memory
  • Equallogic PS6000 with 15 × 600GB 15K RPM SAS disks, with one volume assigned directly to the VM and one provisioned as a datastore within VMware
  • iSCSI switches: 2 stacked Dell PowerConnect 6224
  • 2 networks configured within VMware to access the Equallogic; each network has 2 Intel 82576 gigabit cards allocated
  • Jumbo frames enabled on the Equallogic, the switches, and the vSwitches
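For reference, a minimal sketch of how jumbo frames would be enabled on an ESXi 4.1 host from the service console or vCLI. The vSwitch name, portgroup name, and IP addresses below are placeholders, not the actual values from this setup.

```shell
# Set MTU 9000 on the iSCSI vSwitch (vSwitch1 is an example name):
esxcfg-vswitch -m 9000 vSwitch1
# On ESXi 4.x a jumbo-frame vmkernel port had to be created with the MTU flag
# (IP, netmask, and portgroup name are placeholders):
esxcfg-vmknic -a -i 10.0.0.11 -n 255.255.255.0 -m 9000 iSCSI1
# Verify the MTU on vSwitches and vmkernel NICs:
esxcfg-vswitch -l
esxcfg-vmknic -l
```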

Client configuration:

  • Windows 2008 R2 (64-bit) Enterprise Edition
  • 2 vCPUs
  • 2GB memory
  • 2 virtual network cards assigned for storage traffic, based on VMXNET3, with jumbo frames enabled in the OS
  • Dell Equallogic Host Integration Tools installed and MPIO enabled
  • I/O benchmark done with IOMeter (1.1.0-rc1-win64.x86_64) using the OpenPerformanceTest.icf file
  • For extra results check out the VMware unofficial storage performance thread
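To confirm jumbo frames actually work end-to-end from the guest, something like the following can be run from the Windows command prompt. The Equallogic group IP below is a placeholder.

```shell
:: Show the MTU of each interface (the storage NICs should report ~9000):
netsh interface ipv4 show subinterfaces
:: Ping the Equallogic group IP with a non-fragmentable jumbo payload
:: (8972 bytes of data + 28 bytes of IP/ICMP headers = 9000):
ping -f -l 8972 10.0.0.50
```

If the ping fails with "Packet needs to be fragmented but DF set", jumbo frames are not enabled somewhere along the path.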
Categories: Benchmark, VMware
Comments:

  1. Anton — July 13, 2011 at 9:42 pm

     I'd suggest experimenting with other iSCSI target software and hardware. EQL has custom MPIO, so most IP SANs would not get comparable numbers with local vs. iSCSI I/O.

