VMware: Thick vs. Thin provisioning benchmarks
Update: Here’s a valuable thread on HardForum, along with info on using VAAI and issuing SCSI UNMAP to reclaim thin-provisioned space. You may want to disable the automatic UNMAP functionality so you can control when the reclaim occurs.
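For reference, on a VAAI-capable ESXi host the manual reclaim and the automatic-delete toggle look roughly like this. These are a sketch only: the exact commands and option names vary by ESXi version, and `datastore1` is a placeholder for your datastore label.

```shell
# Kick off a manual space reclaim (SCSI UNMAP) on a VMFS datastore
# at a time of your choosing (ESXi 5.5 and later):
esxcli storage vmfs unmap -l datastore1

# Disable the automatic block-delete behavior so UNMAP only happens
# when you issue it yourself (option name is version-dependent):
esxcli system settings advanced set --int-value 0 --option /VMFS3/EnableBlockDelete
```

Run the reclaim during off hours: UNMAP generates a burst of I/O against the array while it works through the free blocks.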
There are three options for allocating space out of your storage for your VM disks:
Thick Provisioning Lazy Zeroed: Allocates the disk space statically (no other volumes can take the space), but doesn’t zero a block until the first write to that block at runtime (a full disk format by the guest counts as such writes).
Thick Provisioning Eager Zeroed: Allocates the disk space statically (no other volumes can take the space), and writes zeros to all the blocks.
Thin Provisioning: Allocates the disk space only when a write occurs to a block, though the full volume size is still reported by VMFS to the guest OS. Other volumes can take the remaining space, which lets you float space between your servers and expand your storage when your size monitoring indicates there’s a problem. Note that once a Thin Provisioned block is allocated, it remains allocated on the volume even if you’ve since deleted the data inside the guest.
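That “once allocated, stays allocated” behavior is easy to see with an ordinary sparse file on Linux, which behaves much like a thin-provisioned disk. This is a filesystem-level analogy, not VMFS itself, and the paths are placeholders:

```shell
set -e
# "Thin" disk analogy: a sparse file claims its full logical size up front,
# but real blocks are only allocated as regions are written.
truncate -s 100M /tmp/thin.img
stat -c 'size=%s blocks=%b' /tmp/thin.img   # 100 MiB logical size, ~0 blocks allocated

# The first write to a region allocates real blocks there.
dd if=/dev/urandom of=/tmp/thin.img bs=1M count=10 conv=notrunc,fsync status=none
stat -c 'size=%s blocks=%b' /tmp/thin.img   # ~10 MiB of blocks now allocated

# Overwriting the same region with zeros does NOT release the blocks --
# just as a thin-provisioned VMDK never shrinks when the guest deletes data.
dd if=/dev/zero of=/tmp/thin.img bs=1M count=10 conv=notrunc,fsync status=none
stat -c 'size=%s blocks=%b' /tmp/thin.img   # blocks stay allocated
```

This is exactly why UNMAP (or a hole-punching equivalent) is needed to hand freed space back: nothing below the guest filesystem knows the data is gone.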
Intuitively, you would expect Thick Provisioning Eager Zeroed to be the fastest at runtime, since neither allocation nor zeroing has to happen on a block’s first write.
So, of course, the whitepaper shows that Thick Provisioning Eager Zeroed is going to get you the most throughput from a SQL or ESE database. Even so, it might be wise to use Thin Provisioning and then force the DB to grow, scheduling the growth for off hours if possible. Exchange will do this “padding” automatically, but it is definitely wise to try to control this in some manner.
The same goes for thin provisioning on a SAN, but that’s a topic for another day.