VMware 5.0 Disk I/O performance – Thick Provision Lazy Zeroed vs Thick Provision Eager Zeroed vs Thin Provision

Hello dear reader,

As you know, VMware announced vSphere 5 some time ago, and it can finally be downloaded by anyone 🙂

Based on this, I decided to test the performance of the 3 types of disks supported by VMware.

The VM had three virtual disks assigned for the benchmark, one of each type:

  • Thick Provision Lazy Zeroed
  • Thick Provision Eager Zeroed
  • Thin Provision

Test setup:

One VM was configured with 2 vCPUs, 2 GB of RAM and 4 disks, all attached to the LSI SAS SCSI controller:

  • Disk 01 is the Windows system disk (C:\ drive)
  • Disk 02 is a Thick Provision Lazy Zeroed disk
  • Disk 03 is a Thick Provision Eager Zeroed disk
  • Disk 04 is a Thin Provision disk

The VM had a single 1 Gigabit link to an iSCSI storage array, with no redundancy, link aggregation, MPIO or jumbo frames. The purpose of this exercise is to compare the performance of the 3 disk types, not to measure the performance of the iSCSI storage itself.
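If you prefer to script the disk creation instead of clicking through the vSphere Client, below is a rough sketch of how the three formats map to the VirtualDisk backing flags when adding disks with pyVmomi: thinProvisioned selects Thin, eagerlyScrub selects Eager Zeroed, and leaving both off gives Lazy Zeroed. Treat it as a sketch only; the controller key, sizes and unit numbers are placeholders, not the exact values used in this test.

from pyVmomi import vim

def make_disk_spec(controller_key, unit_number, size_gb, provisioning):
    """Build a device spec for one new virtual disk.

    provisioning: 'thin', 'lazy' (zeroedthick) or 'eager' (eagerzeroedthick).
    """
    backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
    backing.diskMode = 'persistent'
    backing.thinProvisioned = (provisioning == 'thin')
    # eagerlyScrub=True pre-zeroes the whole VMDK (Eager Zeroed);
    # leaving both flags off gives the default Lazy Zeroed thick disk.
    backing.eagerlyScrub = (provisioning == 'eager')

    disk = vim.vm.device.VirtualDisk()
    disk.backing = backing
    disk.capacityInKB = size_gb * 1024 * 1024
    disk.controllerKey = controller_key
    disk.unitNumber = unit_number

    spec = vim.vm.device.VirtualDeviceSpec()
    spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    spec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
    spec.device = disk
    return spec

# Usage sketch: 'vm' is a vim.VirtualMachine already looked up through
# pyVmomi, 1000 is the usual device key of the first SCSI controller,
# and unit numbers 1-3 stand in for Disk 02-04 above.
# specs = [make_disk_spec(1000, 1, 20, 'lazy'),
#          make_disk_spec(1000, 2, 20, 'eager'),
#          make_disk_spec(1000, 3, 20, 'thin')]
# vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=specs))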

Anyway… on to the results.

Results (parsed at http://vmktree.org/iometer/):

  • I/O of a Thick Provision Lazy Zeroed disk

    Test name                   Latency (ms)   Avg IOPS   Avg MB/s   CPU load
    Max Throughput-100%Read          0.00        3491        109        3%
    RealLife-60%Rand-65%Read        12.87        4490         25       11%
    Max Throughput-50%Read         101.44        6190        193       15%
    Random-8k-70%Read               13.96        5681         44       17%
  • I/O of a Thick Provision Eager Zeroed disk

    Test name                   Latency (ms)   Avg IOPS   Avg MB/s   CPU load
    Max Throughput-100%Read          0.00        3511        109        1%
    RealLife-60%Rand-65%Read        12.78        4460         34       30%
    Max Throughput-50%Read         102.88        6261        195        2%
    Random-8k-70%Read               14.19        5770         45       34%
  • I/O of a Thin Provision disk

    Test name                   Latency (ms)   Avg IOPS   Avg MB/s   CPU load
    Max Throughput-100%Read          0.00        3530        110        0%
    RealLife-60%Rand-65%Read        13.06        4566         35       30%
    Max Throughput-50%Read         102.36        6243        195        2%
    Random-8k-70%Read               14.17        5767         45       36%
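To make the "quite similar" conclusion below easier to see at a glance, here is a small Python snippet (a quick helper of my own, not part of the benchmark run) that takes the average IOPS from the tables above and prints how far Eager Zeroed and Thin deviate from the Lazy Zeroed baseline:

results = {
    "Lazy Zeroed":  {"Max Throughput-100%Read": 3491, "RealLife-60%Rand-65%Read": 4490,
                     "Max Throughput-50%Read": 6190, "Random-8k-70%Read": 5681},
    "Eager Zeroed": {"Max Throughput-100%Read": 3511, "RealLife-60%Rand-65%Read": 4460,
                     "Max Throughput-50%Read": 6261, "Random-8k-70%Read": 5770},
    "Thin":         {"Max Throughput-100%Read": 3530, "RealLife-60%Rand-65%Read": 4566,
                     "Max Throughput-50%Read": 6243, "Random-8k-70%Read": 5767},
}

baseline = results["Lazy Zeroed"]
for disk_format, tests in results.items():
    for test, iops in tests.items():
        delta = 100.0 * (iops - baseline[test]) / baseline[test]
        print(f"{disk_format:12s} {test:26s} {iops:5d} IOPS ({delta:+.1f}% vs Lazy Zeroed)")

No test profile differs from the Lazy Zeroed baseline by more than about 2% in average IOPS.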

Conclusion:

It seems that performance is quite similar across the different disk types (at least with the benchmark profile used for this test), so for me the Thin Provision disk would probably be the one to choose, simply because it is… thin.

Over the next days/weeks I will try to run some more tests against different storage devices, using both iSCSI and NFS.

Comments:
  1. rodrigocamaraotester
    August 31, 2011 at 11:53 pm

    Thanks, I await your new discoveries.

  2. Andrew B
    September 23, 2011 at 5:54 pm

    It would be good to know the brand of SAN / local storage you are using, as that really is a major factor for performance.

    Speed (10k / 15K / SSD )
    Type (SCSI / Fibre Channel / SAS / SATA)
    SAN / Local
    If SAN – What brand – controller type – backplane

    • blurrnin@gmail.com
      October 1, 2011 at 3:27 pm

      Would the brand matter in a benchmark test like this? If all tests are run on the same hardware, the benchmarks would indicate which disk provisioning has better performance. While latency, IOPS, CPU load and MB/s would improve on better hardware, the result should still reveal which provisioning is the best, correct? Unless a particular provisioning technology has built-in love for a particular SAN or other storage device or something. Unless my logic is whack?

  3. November 2, 2011 at 8:40 pm

    It might matter if you are running at the max speed of the disks.

  4. DC
    February 19, 2012 at 9:42 am

    The difference between these choices is not really apparent in Read I/O, only Write I/O, so those benchmarks may well be a waste of time.

    “Thin” requires the VM to be allocated space dynamically from the storage pool whenever it uses up its existing written storage, so that will add overhead in that circumstance.

    “Thick – Lazy Zeroed” requires any yet-to-be-used space to be zeroed on first use, which adds that overhead.

    “Thick – Eager Zeroed” has all the space filled with zeros when you create the disk so there is no overhead when data is first written to it.

    “Thin” disks on a system with a lot of VMs can well have massive fragmentation due to each VM grabbing resources as it needs them and eventually the disk performance of these VMs could be woeful.

  5. March 5, 2012 at 3:56 pm

    Thanks, I will test that on a new Compellent system with Storage Center 6.

  6. Erick Noleto
    March 15, 2012 at 5:07 pm

    In my usage scenario, lazy is the better option… CPU usage in a real-life scenario matters more to me, as performance remains the same across the board.

  7. Abu
    April 26, 2012 at 2:27 am

    Very interesting. I wonder what results you got for NFS? I could not get more than 40 MB/s, while the host clocked 110 MB/s.

    I wonder why all VMs (including VirtualBox) are slow compared to the host.

    Please share the rest of the data.
