Many of you may recall the article I wrote titled The Case for Larger Than 2TB Virtual Disks and The Gotcha with VMFS. In that article I put forward the pros and cons of larger than 2TB virtual disks, some solutions suitable for large-storage VMs in vSphere, and an issue caused by VMFS Heap Size. The VMFS Heap Size issue itself was somewhat addressed with updates, which I wrote about in Latest ESXi 5.0 Patch Improves VMFS Heap Size Limits. When architecting a large amount of storage for a virtual machine there are a lot of things to consider, and it's not all about capacity. I gave a considerable overview of storage sizing for business critical applications in my article Storage Sizing Considerations when Virtualizing Business Critical Applications. While the content and considerations in these articles are still quite valid in a lot of respects, many things changed a couple of weeks ago.

At VMworld US 2013 in San Francisco VMware announced the launch of vSphere 5.5. There were many significant changes and enhancements announced in the release, and I mentioned some of the more significant ones briefly in my article VMworld USA 2013 By The Numbers. This article will focus squarely on the new 62TB VMDK and VMFS enhancements, which I'm calling Jumbo VMDK. These changes address the gotchas with VMFS and the size limitations that I have highlighted previously. I predict this will be one of the very welcome enhancements for customers considering running business critical applications on vSphere 5.5, especially for those Monster VMs with large storage requirements.

The first thing you might notice from the opening paragraphs of this article is that I mentioned a 62TB VMDK, not a 64TB VMDK, which you may have heard elsewhere. The reason is quite simple and straightforward. In order to allow the snapshots and the additional metadata and log files for a VM to fit within a 64TB VMFS volume, you can't consume all the space with a single VMDK. So as the VMFS maximum volume size is 64TB, the maximum supported VMDK size is 62TB.

With the previous 2TB maximum VMDK size it was theoretically possible to have a VM with 120TB of storage assigned, which is quite a lot. With the new Jumbo VMDK size it is theoretically possible to have a single VM with either 3720TB assigned, assuming 4 vSCSI controllers with 15 devices each (60 VMDKs in total), or 7440TB assigned using 4 of the new AHCI SATA controllers, which allow up to 30 devices each (120 VMDKs in total, requires vHW v10). This would be a truly massive VM, and I would say it's not very practical right now for many reasons, one of which being that you'd need more than one fully populated enterprise storage array for this one VM, but it is theoretically possible. Significantly, this is all without resorting to the use of raw device maps (RDMs). If you were to try this configuration you would consume 120 datastores per host for this one VM. This is because each host in the cluster needs to see each datastore of each VM so that vMotion and VMware DRS will work properly. So you would have used up almost half of the maximum number of datastores (255) currently supported on an ESXi host.
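If you want to sanity check those numbers, the back-of-the-envelope sketch below simply reproduces the arithmetic using the figures quoted above (62TB per VMDK, 4 x 15 vSCSI devices, 4 x 30 AHCI SATA devices, 255 datastores per host). Python is used here purely for illustration; it's not a VMware tool of any kind.

```python
# Back-of-the-envelope arithmetic for the theoretical per-VM maximums quoted above.
# 62TB = 64TB VMFS volume minus headroom for snapshots, metadata and log files.

MAX_VMDK_TB = 62            # maximum supported VMDK size in vSphere 5.5
VSCSI_DISKS = 4 * 15        # 4 vSCSI controllers, 15 devices each = 60 VMDKs
AHCI_SATA_DISKS = 4 * 30    # 4 AHCI SATA controllers, 30 devices each = 120 VMDKs (requires vHW v10)

print(f"vSCSI only: {VSCSI_DISKS} VMDKs x {MAX_VMDK_TB}TB = {VSCSI_DISKS * MAX_VMDK_TB}TB")          # 3720TB
print(f"AHCI SATA:  {AHCI_SATA_DISKS} VMDKs x {MAX_VMDK_TB}TB = {AHCI_SATA_DISKS * MAX_VMDK_TB}TB")  # 7440TB

# If each 62TB VMDK sits on its own VMFS5 datastore, the AHCI SATA case consumes
# 120 datastores per host -- nearly half of the 255-datastore limit per ESXi host.
print(f"Datastores consumed: {AHCI_SATA_DISKS} of 255 per host")
```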
In any case, I think that's enough theory. Let's get into some of the practical considerations and specifics of the 62TB virtual disk feature of vSphere 5.5.

Highlights of the Jumbo VMDK and VMFS Heap Enhancements

- VMware has made the use of the VMFS Heap much more efficient in vSphere 5.5:
  - Reduced memory consumption (256MB vs 640MB in previous releases).
  - VMFS Heap is no longer a limiting factor for the maximum number of open VMDKs.
  - Pointer Block Cache Eviction (described in more detail later) swaps out unused VMDK pointer blocks from memory.
  - New MaxAddressableSpaceTB advanced parameter controlling the size of the Pointer Block Cache kept in memory.
- Supported on VMFS5 or NFS (NFS depends on the array's supported maximum file size).
- No specific virtual hardware requirement (except if you want to use the AHCI SATA controller, which requires vHW v10).
- 62TB virtual mode RDMs (vRDMs) also supported (a rough sketch pulling these limits together follows below).

Supported and Unsupported Cases in Detail

I have given you some of the highlights of the 62TB VMDKs and vRDMs, but now let's look at what's supported and what's not supported in a bit more detail.
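Before walking through the individual cases, here is a small sketch that pulls the headline limits from the highlights above into one checklist: the 62TB ceiling, VMFS5/NFS support (with the NFS caveat about the array's maximum file size), and the vHW v10 requirement for the AHCI SATA controller. The function name and structure are made up for illustration; this is not an official VMware check.

```python
# Illustrative only: a tiny checklist encoding the headline limits discussed above.

MAX_VMDK_TB = 62  # vSphere 5.5 maximum VMDK / virtual mode RDM size

def check_disk_config(size_tb, datastore_type, controller="vscsi", vhw_version=10):
    """Return a list of issues with a proposed virtual disk, based on the
    highlights above (62TB max, VMFS5 or NFS, AHCI SATA needs vHW v10)."""
    issues = []
    if size_tb > MAX_VMDK_TB:
        issues.append(f"{size_tb}TB exceeds the {MAX_VMDK_TB}TB maximum VMDK size")
    if datastore_type not in ("VMFS5", "NFS"):
        issues.append(f"{datastore_type} is not listed as supported for 62TB VMDKs (VMFS5 or NFS only)")
    if datastore_type == "NFS":
        issues.append("NFS: confirm the array's supported maximum file size")
    if controller == "ahci_sata" and vhw_version < 10:
        issues.append("AHCI SATA controller requires virtual hardware version 10")
    return issues

# Example: a 62TB disk on VMFS5 behind an AHCI SATA controller at vHW v9
for issue in check_disk_config(62, "VMFS5", controller="ahci_sata", vhw_version=9):
    print(issue)
```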