Archive for the ‘Benchmark’ Category

VMware 5.0 Disk I/O performance – Thick Provision Lazy Zeroed vs Thick Provision Eager Zeroed vs Thin Provision

August 25, 2011

Hello dear reader,

As you probably know, VMware announced vSphere 5 some time ago, and it can finally be downloaded by anyone 🙂

Based on this I decided to test the performance of the 3 types of virtual disks supported by VMware (a quick sketch of how each type maps to vmdk backing flags follows the list):

  • Thick Provision Lazy Zeroed
  • Thick Provision Eager Zeroed
  • Thin Provision
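
For reference, here is a minimal sketch of how these three types map to the flags on a vmdk's flat backing, assuming the pyVmomi Python library (the helper name is just illustrative, not part of the original test):

    # Sketch only: shows which backing flags distinguish the three disk types
    # when building a virtual disk spec with pyVmomi.
    from pyVmomi import vim

    def make_backing(provisioning, datastore_path):
        """Return a flat vmdk backing configured for the given provisioning type."""
        backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
        backing.fileName = datastore_path      # e.g. '[datastore1] testvm/testvm_1.vmdk' (illustrative path)
        backing.diskMode = 'persistent'
        if provisioning == 'thin':
            backing.thinProvisioned = True     # Thin Provision
        elif provisioning == 'eager':
            backing.thinProvisioned = False
            backing.eagerlyScrub = True        # Thick Provision Eager Zeroed
        else:
            backing.thinProvisioned = False
            backing.eagerlyScrub = False       # Thick Provision Lazy Zeroed
        return backing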

Test setup :

One VM was configured with 2 vCPUs, 2 GB of RAM and 4 disks, all using the LSI SAS SCSI controller :

  • Disk 01 is the Windows “disk” (c:\ drive)
  • Disk 02 is a Thick Provision Lazy Zeroed disk
  • Disk 03 is a Thick Provision Eager Zeroed disk
  • Disk 04 is a Thin Provision disk

The VM had a single Gigabit link to the iSCSI storage, with no redundancy, no link aggregation, no MPIO and no jumbo frames. The purpose of this exercise is to compare the 3 disk types against each other, not to measure the performance of the iSCSI storage itself.

Anyway… on to the results.

Results : (results parsed at http://vmktree.org/iometer/)

  • I/O of a Thick Provision Lazy Zeroed disk

    Test name                  Latency (ms)  Avg IOPS  Avg MBps  CPU load
    Max Throughput-100%Read    0.00          3491      109       3%
    RealLife-60%Rand-65%Read   12.87         4490      25        11%
    Max Throughput-50%Read     101.44        6190      193       15%
    Random-8k-70%Read          13.96         5681      44        17%

  • I/O of a Thick Provision Eager Zeroed disk

    Test name                  Latency (ms)  Avg IOPS  Avg MBps  CPU load
    Max Throughput-100%Read    0.00          3511      109       1%
    RealLife-60%Rand-65%Read   12.78         4460      34        30%
    Max Throughput-50%Read     102.88        6261      195       2%
    Random-8k-70%Read          14.19         5770      45        34%

  • I/O of a Thin Provision disk

    Test name                  Latency (ms)  Avg IOPS  Avg MBps  CPU load
    Max Throughput-100%Read    0.00          3530      110       0%
    RealLife-60%Rand-65%Read   13.06         4566      35        30%
    Max Throughput-50%Read     102.36        6243      195       2%
    Random-8k-70%Read          14.17         5767      45        36%
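
To quantify how close the three profiles really are, here is a quick plain-Python sketch that computes the spread in Avg IOPS between the three disk types, using the numbers from the tables above:

    # Relative spread in Avg IOPS across the three disk types,
    # using the IOMeter results from the tables above.
    results = {
        "Max Throughput-100%Read":  {"lazy": 3491, "eager": 3511, "thin": 3530},
        "RealLife-60%Rand-65%Read": {"lazy": 4490, "eager": 4460, "thin": 4566},
        "Max Throughput-50%Read":   {"lazy": 6190, "eager": 6261, "thin": 6243},
        "Random-8k-70%Read":        {"lazy": 5681, "eager": 5770, "thin": 5767},
    }

    for test, iops in results.items():
        spread = (max(iops.values()) - min(iops.values())) / min(iops.values()) * 100
        print(f"{test:<26} spread: {spread:.1f}%")

The spread stays below roughly 2.5% for every test profile, which is what drives the conclusion below.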

Conclusion :

It seems like VMware delivers quite similar performance across the different disk types (at least with the benchmark profile used for this test), so for me the Thin Provision disk would probably be the one to choose, simply because it is… thin.

Over the next days/weeks I will try to run some more tests against different storage devices, using both iSCSI and NFS.

VMware I/O performance – LSI SAS SCSI controller vs Paravirtual SCSI controller

July 7, 2011

VMware LSI SAS SCSI controller vs VMware Paravirtual SCSI controller I/O benchmarks

Basically the VM had 2 virtual disks assigned for the benchmark :

  1. Disk E is a thick virtual disk added in VMware. This vmdk resides on a SAS-based datastore and uses the VMware LSI SAS SCSI controller;
  2. Disk G is a thick virtual disk added in VMware. This vmdk resides on a SAS-based datastore and uses the VMware Paravirtual SCSI controller.

Results : (results parsed at http://vmktree.org/iometer/)

  • I/O using the VMware LSI SAS controller

    Test name                  Latency (ms)  Avg IOPS  Avg MBps  CPU load
    Max Throughput-100%Read    0.00          7183      224       26%
    RealLife-60%Rand-65%Read   12.05         4208      32        10%
    Max Throughput-50%Read     111.94        6833      213       26%
    Random-8k-70%Read          9.38          3817      29        9%

  • I/O using the VMware Paravirtual SCSI controller

    Test name                  Latency (ms)  Avg IOPS  Avg MBps  CPU load
    Max Throughput-100%Read    0.00          7245      226       24%
    RealLife-60%Rand-65%Read   14.79         5156      40        14%
    Max Throughput-50%Read     113.94        6947      217       24%
    Random-8k-70%Read          12.48         5079      39        10%

Conclusion :

There’s a slight performance boost from the Paravirtual controller, and for some environments that can be quite a good thing.

However, the “generic” emulated LSI SAS controller can also deliver good performance!
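
For anyone who wants to switch a VM to the Paravirtual controller programmatically, here is a minimal sketch of the device spec that adds a PVSCSI controller, assuming pyVmomi (the function name is just illustrative, and the surrounding ReconfigVM_Task call is omitted):

    # Sketch only: device spec for adding a Paravirtual SCSI controller with pyVmomi.
    from pyVmomi import vim

    def pvscsi_controller_spec(bus_number=1):
        """Build the add-device spec for a new Paravirtual SCSI controller."""
        controller = vim.vm.device.ParaVirtualSCSIController()
        controller.busNumber = bus_number
        controller.sharedBus = vim.vm.device.VirtualSCSIController.Sharing.noSharing

        spec = vim.vm.device.VirtualDeviceSpec()
        spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
        spec.device = controller
        return spec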

Environment configuration details :

VMware Infrastructure configuration :

  • VMware ESXi 4.1 Update 1
  • Dell R515 with 2 AMD Opteron 4180 (2.6 GHz) sockets and 64 GB of RAM
  • EqualLogic PS6000 with 15 x 600 GB 15K rpm SAS disks, with 1 volume assigned directly to the VM and one provisioned as a datastore within VMware
  • iSCSI switches: 2 stacked Dell PowerConnect 6224
  • 2 networks configured within VMware to access the EqualLogic. Each network has 2 Intel 82576 gigabit cards allocated.
  • Jumbo frames enabled on the EqualLogic, the switches and the vSwitches

Client configuration :

  • Windows Server 2008 R2 (64-bit) Enterprise Edition
  • 2 vCPUs
  • 2 GB memory
  • 2 virtual network cards assigned for storage traffic, based on VMXNET3, with jumbo frames enabled in the OS
  • Dell EqualLogic Host Integration Tools installed and MPIO enabled
  • I/O benchmark done with IOMeter (1.1.0-rc1-win64.x86_64) using the OpenPerformanceTest.icf file (a quick IOPS-to-MBps cross-check follows this list)
  • For extra results check out the VMware unofficial storage performance thread
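
As a side note on reading these tables: the Avg MBps column follows directly from Avg IOPS multiplied by the block size of each access specification. A small plain-Python cross-check, assuming the 32 KB and 8 KB block sizes of the commonly circulated OpenPerformanceTest.icf profile (the block sizes are not stated in the post itself):

    # Rough cross-check: MBps ~= IOPS * block size. Block sizes below are assumed
    # from the widely used OpenPerformanceTest.icf profile, not taken from the post.
    BLOCK_KB = {
        "Max Throughput-100%Read": 32,
        "RealLife-60%Rand-65%Read": 8,
        "Max Throughput-50%Read": 32,
        "Random-8k-70%Read": 8,
    }

    def expected_mbps(test_name, avg_iops):
        """Approximate throughput in MB/s for a given test and IOPS figure."""
        return avg_iops * BLOCK_KB[test_name] / 1024.0

    # Example: the LSI SAS 'Max Throughput-100%Read' row above reports 7183 IOPS.
    print(round(expected_mbps("Max Throughput-100%Read", 7183)))  # ~224, matching the 224 MBps in the table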

VMware I/O performance – Guest vmdk vs Guest iscsi initiator

July 7, 2011

Today I decided to run some I/O benchmarks within a VMware environment to compare the performance of a virtual disk assigned to a virtual machine vs a disk accessed through the iSCSI initiator inside the guest.

Basically the VM had 2 disks assigned for the benchmark :

  1. Disk E is a thick virtual disk added in VMware. This vmdk resides on a SAS-based datastore;
  2. Disk F is a disk accessed through the Windows iSCSI initiator. This disk is served from a Dell EqualLogic PS6000 with MPIO enabled;
  3. The virtual disk controller is the LSI Logic SAS controller provided by VMware.

Results : (results parsed at http://vmktree.org/iometer/)

  • I/O using the iSCSI initiator from Windows

    Test name                  Latency (ms)  Avg IOPS  Avg MBps  CPU load
    Max Throughput-100%Read    0.00          6380      199       33%
    RealLife-60%Rand-65%Read   12.16         4244      33        9%
    Max Throughput-50%Read     78.83         4812      150       45%
    Random-8k-70%Read          10.16         4142      32        9%

  • I/O using the assigned virtual disk

    Test name                  Latency (ms)  Avg IOPS  Avg MBps  CPU load
    Max Throughput-100%Read    0.00          7183      224       26%
    RealLife-60%Rand-65%Read   12.05         4208      32        10%
    Max Throughput-50%Read     111.94        6833      213       26%
    Random-8k-70%Read          9.38          3817      29        9%
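
To put the gap in numbers, here is a quick plain-Python sketch comparing the two access methods per test, using the Avg MBps values from the two tables above:

    # Percent difference in Avg MBps: assigned virtual disk (vmdk) vs the
    # Windows guest iSCSI initiator, values taken from the tables above.
    mbps = {
        "Max Throughput-100%Read":  {"initiator": 199, "vmdk": 224},
        "RealLife-60%Rand-65%Read": {"initiator": 33,  "vmdk": 32},
        "Max Throughput-50%Read":   {"initiator": 150, "vmdk": 213},
        "Random-8k-70%Read":        {"initiator": 32,  "vmdk": 29},
    }

    for test, v in mbps.items():
        delta = (v["vmdk"] - v["initiator"]) / v["initiator"] * 100
        print(f"{test:<26} vmdk vs initiator: {delta:+.1f}%")

The vmdk comes out roughly 13% and 42% ahead on the two throughput tests and a few percent behind on the random/real-life profiles, which is the pattern discussed in the conclusion below.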

Conclusion :

The virtual disk outperforms the iSCSI disk on the read-throughput loads and is slightly slower on the random workloads. Interestingly, when the virtual disk performance is lower we also see higher CPU usage on the VM. Something to be investigated…

Unless your environment needs an in-guest iSCSI connection, for instance for hosting MS Clusters, the performance of a virtual disk is quite good as well, with all the advantages of snapshots, svmotion, etc.

Environment configuration details :

VMware Infrastructure configuration :

  • VMware ESXi 4.1 Update 1
  • Dell R515 with 2 AMD Opteron 4180 (2.6 GHz) sockets and 64 GB of RAM
  • EqualLogic PS6000 with 15 x 600 GB 15K rpm SAS disks, with 1 volume assigned directly to the VM and one provisioned as a datastore within VMware
  • iSCSI switches: 2 stacked Dell PowerConnect 6224
  • 2 networks configured within VMware to access the EqualLogic. Each network has 2 Intel 82576 gigabit cards allocated.
  • Jumbo frames enabled on the EqualLogic, the switches and the vSwitches

Client configuration :

  • Windows Server 2008 R2 (64-bit) Enterprise Edition
  • 2 vCPUs
  • 2 GB memory
  • 2 virtual network cards assigned for storage traffic, based on VMXNET3, with jumbo frames enabled in the OS
  • Dell EqualLogic Host Integration Tools installed and MPIO enabled
  • I/O benchmark done with IOMeter (1.1.0-rc1-win64.x86_64) using the OpenPerformanceTest.icf file
  • For extra results check out the VMware unofficial storage performance thread