NFS and VMware: Perfect for Small Business? Part 1 – Introduction

August 22, 2012

Nexenta Systems’ “open storage” software made significant inroads into the VMware community over the last year with NFS storage. Even though Nexenta has been a VMware partner for much longer, the storage vendor really made its debut at last year’s VMworld 2011 Hands-on Labs by showcasing its NFS-for-VMware solution running on commodity hardware:

And, here’s the kicker, NexentaStor was running on industry standard hardware from Supermicro with STEC drives for write and read cache and 7200 rpm SAS drives for capacity. Monday some DRAM on one of the four servers (two HA pairs) failed. And no end users noticed, because our HA cluster performed correctly and failed over. Meanwhile our load increased from a designed 33% to over 60% of the total load of the Hands-on Lab due to unspecified issues with either NetApp or EMC.

Evan Powell, CEO – Nexenta Systems, VMworld Reviewed

While this was indeed an important inflection point in the VMware/Nexenta relationship, in broader terms Nexenta’s success at VMworld was probably the moment when commodity NFS stepped out of the shadow of block storage. To be fair, there are many enterprise alternatives to Nexenta for NFS storage, like NetApp and EMC, but there are few that can be deployed on commodity hardware, fewer that offer both hardware and virtual storage appliances, and fewer still that have commercially licensed and community licensed distributions of the same platform.

If you’ve ever asked the question, “what’s the best storage solution for my vSphere stack?” I’d be willing to bet that NFS was not high on the list of recommendations. If you’ve looked at the related product marketing materials, as I have, or engaged front-line VMware personnel in a discussion of primary storage solutions between 2009 and 2011, as I have, you’d be hard pressed to leave either conversation with a recommendation to use NFS. If Nexenta’s appearance can “prove” that open storage solutions based on NFS (and commodity hardware) are “ready” for big cloud infrastructures, can it be true that NFS is a perfect fit for a small business’ private cloud? I’d say a resounding YES, but…

Introduction: NFS versus Block Storage

Before you say it: “thanks for the tip, Collin, but who needs commercial stuff when NFS services are included in practically every Linux distribution, and ‘no cost’ solutions like FreeNAS make NFS cheap and easy?” It is true that solutions like these have been very popular with lab and bare-bones users, but most enterprises (even small ones) require a “bet the business” level of support and stability that isn’t often found in community-supported distributions and do-it-yourself implementations. Even though any NFSv3 server, properly sized and configured, should work with VMware according to its abilities, it’s up to you to decide whether the basket fits your eggs. The commercial NFS vendors really know their stuff, so you’re buying expertise, experience and a well-refined playbook: something you’ll be giving up when you go it alone.
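
For the do-it-yourself route, the server-side half of the equation really is small. As a minimal sketch, assuming a generic Linux NFS server (the export path and subnet below are hypothetical), an export usable by ESXi looks something like this:

    # /etc/exports -- example entry for a hypothetical /exports/vmstore share.
    # ESXi mounts NFSv3 as root, so no_root_squash is required; 'sync' honors
    # the synchronous writes ESXi issues for virtual machine data.
    /exports/vmstore  192.168.10.0/24(rw,sync,no_root_squash,no_subtree_check)

    # Re-read /etc/exports and confirm the export is visible
    exportfs -ra
    showmount -e localhost

The hard part is everything around that one line: sizing, caching, support and, as the next post covers, network redundancy.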

Despite being “block storage’s whipping boy,” to say NFS is “not ready for prime time” in today’s VMware product matrix would be the height of FUD-peddling. On the contrary, a well-known 2009 multi-vendor post from EMC’s Chad Sakac and NetApp’s Vaughn Stewart made a great case for NFS in the enterprise. Since then, improvements in NFS offerings and vSphere capabilities have increased NFS’ appeal in that space, not diminished it. To quote the Virtual Geek:

“NFS is an absolutely legitimate storage model for VMware – with many advantages.”

– Chad Sakac, aka Virtual Geek, EMC VP VMware Technology Alliance

Certainly there is a lot to like in pairing NFS with vSphere 5.x no matter the scale of the enterprise. Here are some of the high-points:

  • NFS works seamlessly with Storage I/O Control and Network I/O Control to support converged network architectures;
  • NFS exposes VMDKs to 3rd party tools and scripts without VMFS proxies, enabling:
    • Simple backup/recovery of a VM or VMDK from NAS is a file copy operation (see the copy/rsync sketch after this list)
    • Linux, Windows 7, etc. support NFS clients out of the box
    • Replication of a VM or VMDK from NAS can be achieved simply with rsync
    • Use of snapshotted NFS volumes does not require ESX/VMFS
  • Reclamation of unused storage is not array dependent (file deletes return to storage immediately without SCSI Unmap support or equivalent)
  • Not subject to LUN locking and related performance issues in block/VMFS
  • It’s simpler to use: in the link above, VMware dedicates 24 pages to block/VMFS and only 3 to NFS
  • Presentation and management of NAS storage is very familiar (it’s a filer)
  • NFS is very forgiving of “imperfect” network configurations – compared to iSCSI, especially where network time-outs and latency are concerned
  • NFS storage does not need to be available at ESXi boot time, enabling VMs to exist on a VSA running on top of the host ESXi server (enabling recursive storage possibilities and reduced/shared hardware costs)
  • Mounting an NFS snapshot to vSphere does not include a signature operation (or risk possible collision)
  • NFS does not require VAAI to resolve SCSI file locking and VM loading limitations consistent with SCSI-based block storage
  • vSphere 5 currently supports 256 NFS mounts per host (see the esxcli sketch after this list)
    • NFS.MaxVolumes (per host) – default 8, max 256
  • Single file size is not limited by NFS itself, however:
    • Without 3rd party NAS VAAI, all VMDKs on NAS are always thin provisioned
    • Single file size is limited by the NAS vendor’s file system constraints
    • VMDKs use 512-byte sectors, so they carry the same 2TB-minus-512-byte limit as physical disks (VMware has no 4KB-sector VMDK, so there is no way to support 2TB+ VMDKs on NFS until one exists)
  • NFS volumes are not limited in size
    • For NetApp WAFL, the limit is up to 100TB (with restrictions)
    • For NexentaStor, the limit is determined by the zpool size
  • On-line expansion of an NFS file system is a one-step operation: expand the file system on the filer
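
To make the file-copy and rsync points above concrete, here is a minimal sketch from a generic Linux admin host. The filer name, export path, VM folder and backup targets are all hypothetical, and a running VM should be snapshotted or quiesced before it is copied:

    # Mount the NFS datastore read-only on a Linux admin host
    mount -t nfs -o ro,vers=3 filer01:/exports/vmstore /mnt/vmstore

    # Backing up a VM is a plain file copy of its folder (VMDK included)
    cp -a /mnt/vmstore/web01 /backups/web01-$(date +%F)

    # Replication of the same folder to another NAS is just rsync
    rsync -av /mnt/vmstore/web01/ backupnas:/exports/replica/web01/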

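On the vSphere side, mounting an NFS export and raising the NFS.MaxVolumes default can both be done from the ESXi 5.x shell. The server, share and datastore names below are examples, not recommendations:

    # Mount an NFS export as a datastore
    esxcli storage nfs add --host filer01 --share /exports/vmstore --volume-name vmstore01
    esxcli storage nfs list

    # Raise the per-host NFS mount limit from its default of 8; the TCP/IP heap
    # settings (Net.TcpipHeapSize, Net.TcpipHeapMax) generally need to grow with it
    esxcli system settings advanced set -o /NFS/MaxVolumes -i 256
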
That said, NFS still cannot replace block storage on Tier 1 applications that were designed for block storage. Even iSCSI – arguably the least common denominator in shared block storage for VMware – still has some built-in advantages (and unique disadvantages) as compared to NFS. Likewise, when we’re talking about block storage in VMware we’re usually talking about VMFS too:

  • Writes are almost always asynchronous, making even low-end iSCSI “appear” to be faster than low-end NFS
  • Interface redundancy is straightforward and deterministic, with many good options available
  • Storage latency in iSCSI/block is “more predictable” across common use cases
  • vSphere 5 currently supports 256 LUNs per host (similar to NFS mount limit)
    • Disk.MaxLUN (per target) – default 256, max 256
    • Total VMFS LUNs per host cannot exceed Disk.MaxLUN, regardless of type (FC, SAS, iSCSI, etc.)
  • vSphere VMFS3/5 limits single file size (VMDK and virtual RDM) to 2TB (minus 512 bytes)
  • VMFS3 limits single volume size to 50-64TB depending on block size chosen when formatted
  • VMFS5 limits single volume size to 64TB (always uses a 1MB block size)
  • vSphere’s storage telemetry is still geared towards block rather than filer storage, making trouble-shooting of “performance issues” easier
  • Pairing storage to interface is much easier to do, even on-the-fly
  • Exchange 2010 expressly forbids the use of NAS storage as VMDK datastores
  • Virtual RDM and Clustering (shared block) require block storage (in some cases, not even iSCSI qualifies for support)
  • Tier 1 application support on block-based storage is generally better (familiarity and testing)
  • VMware VAAI for block storage ships with vSphere; similar acceleration features for NAS must come from the vendor (creating a much less robust out-of-the-box experience for SMB)
  • On-line VMFS expansion usually requires two steps, with some caveats (see the shell sketch after this list):
    • For single-LUN expansions under 2TB: (1) expand the underlying LUN on the SAN, (2) expand VMFS into the new space on the LUN
    • Single-LUN expansions over 2TB require VMFS5
    • VMFS3 volume expansion beyond 2TB requires multiple extents, each of which may not exceed 2TB-512B; loss of a single extent in a multi-extent volume could mean loss of the entire volume
    • VMFS5 supports single LUNs (extents) as large as 64TB
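
For comparison with the one-step NFS case, the two-step VMFS flow roughly follows VMware’s documented partedUtil/vmkfstools procedure once the LUN has been grown on the array. The device name and sector numbers below are placeholders, so treat this as a sketch rather than a runbook:

    # Step 1 happens on the array: grow the LUN backing the datastore.
    # Step 2 happens on the ESXi host: grow the partition, then grow VMFS into it.
    partedUtil getptbl /vmfs/devices/disks/naa.600605b0012345ff
    partedUtil resize  /vmfs/devices/disks/naa.600605b0012345ff 1 2048 838860766
    vmkfstools --growfs /vmfs/devices/disks/naa.600605b0012345ff:1 \
                        /vmfs/devices/disks/naa.600605b0012345ff:1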

Sparse VAAI issues aside, NFS is a great go-to storage protocol for most virtual workloads that do not strictly require block or shared-block storage back-ends (clustering, et al). Where NFS struggles today, in terms of VMware implementations in the SMB space, is in network resiliency. It is not that you cannot make NFS resilient to network failures; it is that redundancy is not neatly baked into the service or protocol the way it is for iSCSI, SAS and Fibre Channel: these block-based services have mature, multi-session and multi-path capabilities at the service level (multi-path targets and initiators).

Note about 2TB VMDK limitations: given that most modern OSes running as supported virtual machines offer some form of disk concatenation (extents) to bypass 2TB physical disk limitations, the very same facilities can be leveraged to bypass the 2TB VMDK limit for those OSes. While this is not an optimal solution, it is a supported one. Today’s physical disks that exceed 2TB in size do so by using 4KB sectors instead of 512-byte sectors; currently, there is no 4KB-sector VMDK analog.
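
As an illustration of that concatenation approach, a Linux guest can stitch two sub-2TB VMDKs into one larger logical volume with LVM (Windows guests can do much the same with spanned dynamic disks). The device names, volume group and mount point below are hypothetical:

    # Two additional sub-2TB VMDKs appear in the guest as /dev/sdb and /dev/sdc
    pvcreate /dev/sdb /dev/sdc
    vgcreate datavg /dev/sdb /dev/sdc
    lvcreate -l 100%FREE -n datalv datavg    # one logical volume spanning both disks
    mkfs.ext4 /dev/datavg/datalv
    mount /dev/datavg/datalv /data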

Next Up, NFS and Path Redundancy

Hopefully by now there’s a compelling argument to look deeper into the NFS/VMware question, but, as with most shared network storage, the rubber meets the road at the network layer. To me, the secret to making NFS more robust is in the network architecture that underpins it: depending on the complexity of the environment, the network layer will make or break an NFS implementation. In some ways there’s a lot more to making NFS “redundant” (due to its lack of multipath capabilities): it’s not impossible; it’s not difficult; it’s just full of options and caveats.

Unlike block storage, you can’t just “throw up two network interfaces, two target ports and two initiator ports” and easily have path redundancy and multipath data. With NFS, the network, not the storage service, does most of the “heavy lifting” and, as you’ll see in the next post, NFS itself has no concept of multipath. Therefore, I’m going to spend the next entry reviewing the main network and NFS service dependencies, to make NFS network resiliency a bit more accessible.

2 comments

  1. Excellent article! You’ve hit the nail on the head about multi-pathing but NFS has so much going for it, especially when it comes to trying to simplify infrastructure. I’m looking forward to the next installment 🙂



  2. Is part 2 out yet?



