If you’re an old-timer, you know how difficult managing thin-on-thin provisioning can be (historically a “no-no”). That’s when a vSphere sysadmin uses thin provisioning for VMDKs and also has it enabled on the storage array itself. Traditionally, this led to “unpredictable results” with capacity: you had to do a bunch of math to figure out how far you were actually oversubscribed, and if you got it wrong you could run out of space without really knowing it was coming (and everything halts).
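To make that concrete, here’s a back-of-the-napkin example. The numbers are entirely made up, but they show the kind of math you used to have to juggle by hand:

```
# Hypothetical numbers, just to illustrate the thin-on-thin math:
#
#   Guest view:   100 VMs x 200 GB thin VMDKs = 20 TB promised to guests
#   vSphere view: thin LUNs presented as 10 TB of VMFS datastores
#   Array view:   4 TB of physical capacity actually behind those LUNs
#
#   Effective oversubscription: 20 TB / 4 TB = 5:1
#
# Guests can legitimately write more data than the physical layer can
# hold, and no single layer shows you the whole picture on its own.
```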
So, why might you want to take a fresh look at this (**cough, cough, WTG Dell PowerStore customers**)? A while back, VMware made a few updates to vSphere to pass certain commands directly from the guest OS through to the storage array, in this case specifically SCSI UNMAP. This allows a supported guest OS to start thin and “stay thin” over time, both in vSphere and on the storage array. In the past, you had to play some games with esxcli commands and sdelete to make this happen (a sketch of both approaches is below). While I imagine this capability is technically possible for eager-zeroed disk formats, it is ONLY implemented with thin-provisioned VMDKs. The VMware documentation is clear on the requirements for this.
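To illustrate, here’s a rough sketch of the old manual dance next to a few checks for the modern automatic path. The datastore name is hypothetical, and exact behavior varies by ESXi and VMFS version, so treat this as a starting point rather than gospel:

```
# --- The old way (manual reclamation; "Datastore01" is a made-up name) ---

# Inside a Windows guest: zero out free space as the first step of the
# old reclaim dance (SDelete is a Sysinternals tool; -z zeroes free space)
sdelete.exe -z C:

# Then, on the ESXi host: manually reclaim dead space on the VMFS datastore
esxcli storage vmfs unmap -l Datastore01

# --- The modern way (automatic guest OS UNMAP) ---

# Inside a Linux guest: confirm TRIM/UNMAP actually reaches the virtual disk
sudo fstrim -v /

# Inside a Windows guest: confirm delete notifications (TRIM) are enabled
# (a result of 0 means enabled)
fsutil behavior query DisableDeleteNotify

# On the ESXi host: check the automatic space reclamation settings
# (granularity and priority) for a VMFS6 datastore
esxcli storage vmfs reclaim config get -l Datastore01
```

On VMFS6, that reclaim config output is what tells you the host is handling dead space automatically instead of you babysitting it.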
As a sysadmin, you must ask yourself: can I comfortably accept the “management” risk of thin-on-thin to maximize storage efficiency on my array? The answer is probably yes, especially considering the expense of all-flash systems (which generally all depend on data-efficiency strategies like deduplication and compression). I would also add that both VMware and modern storage arrays are MUCH better than previous generations at reporting space consumption in a meaningful way, which further mitigates the risk.
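For example, on the vSphere side a quick host-level check shows capacity and free space for every mounted datastore, while the array-side numbers live in your vendor’s tooling (PowerStore Manager, for the crowd mentioned above):

```
# On the ESXi host: list mounted filesystems/datastores with size and
# free space, as vSphere sees them
esxcli storage filesystem list
```

Between that and the array’s own dashboard, you can watch both layers of the thin-on-thin stack instead of guessing.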