VMware/ESX can’t write to a Celerra Read/Write NFS mounted datastore

I had just created several new Celerra NFS mounted datastores for our ESX administrator.  When he tried to create new VMs on the new datastores, he would get this error:   Call “FileManager.MakeDirectory” for object “FileManager” on vCenter Server “servername.company.com” failed.

Searching for that error message on Powerlink, the VMware forums, and Google didn’t turn up any easy answers or solutions.  It looked like ESX was unable to write to the NFS mount for some reason, even though it was mounted as Read/Write.  I also had the ESX hosts added to the R/W access permissions for the NFS export.

After much digging and experimentation, I did resolve the problem.  Here’s what you have to check:

1. The VMkernel IP must be in the root hosts permissions on the NFS export.   I put in the IP of the ESX server along with the VMkernel IP.
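
For reference, here’s roughly what that export looks like if you set it from the Control Station CLI instead of the GUI. The IP addresses and share name below are placeholders (ESX server IP first, VMkernel IP second), so verify the exact syntax against your DART release:

command:  server_export server_2 -Protocol nfs -option rw=10.0.0.10:10.0.0.11,root=10.0.0.10:10.0.0.11 /<sharename>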

2. The NFS export must be mounted with the no_root_squash option.  By default, the root user (UID 0) is not given access to an NFS volume; mounting the export with no_root_squash allows root access.  The VMkernel must be able to access the NFS volume with UID 0.

I first set up the export and permission settings in the GUI, then went to the CLI to add the mount options.
command:  server_mount server_2 -option rw,uncached,sync,no_root_squash <sharename> /<sharename>
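
To confirm the option actually took effect, run server_mount against the Data Mover with no arguments; it should list each mounted file system along with its mount options, and no_root_squash should now appear for the share:

command:  server_mount server_2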

3. From within the ESX console/VirtualCenter, the firewall settings should be updated to add the NFS Client.   Go to ‘Configuration’ | ‘Security Profile’ | ‘Properties’ and check the NFS Client checkbox.
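
If you’d rather do this from the ESX service console, something like the command below should enable the NFS client through the firewall. The nfsClient service name is what I recall from classic ESX, so check your release’s list of known services (esxcfg-firewall -s) before relying on it:

command:  esxcfg-firewall -e nfsClient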

4. One other important item to note when adding NFS mounted datastores is the default limit of 8 NFS datastores per ESX host.  You can increase the limit by going to ‘Configuration’ | ‘Advanced Settings’ | select ‘NFS’ in the left column | scroll to ‘NFS.MaxVolumes’ and increase the number (up to 64).  If you try to add a new datastore above the NFS.MaxVolumes limit, you will get the same error quoted at the top of this post.
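
The same setting can also be checked and changed from the service console with esxcfg-advcfg. The option path below is from memory, so query it first to confirm it exists on your build:

command:  esxcfg-advcfg -g /NFS/MaxVolumes
command:  esxcfg-advcfg -s 64 /NFS/MaxVolumes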

That’s it.  Adding the VMkernel IP to the root permissions, mounting with no_root_squash, and adding the NFS Client to the ESX firewall resolved the problem.


12 thoughts on “VMware/ESX can’t write to a Celerra Read/Write NFS mounted datastore”

  1. Thank you, thank you. I’d been pulling my hair out; I could browse the datastore but not create any files/folders via ESXi.

    no_root_squash in /etc/exports config file fixed it
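
    For a generic Linux NFS server, the corresponding /etc/exports entry looks roughly like this, with the export path and client network as placeholders:

    /export/vmware  10.0.0.0/24(rw,sync,no_root_squash)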

  2. I’m having a similar issue to this, but not exactly the same.

    I have a NAS box with an NFS share set up.
    The VMware ESX server connected to the NFS share successfully, but when I try to clone a VM to this datastore it creates the folder and the VMDK files on the NAS, yet all of the files are 0 bytes in size and it fails saying “Failed to connect to Host”. Opening the datastore from within VMware and attempting to manually copy files has the same result. However, if I SSH onto the VMware server and log in as root, I can connect to the NFS mounted volume and create/copy files without a problem.

    Any idea what the issue could be here?

    1. I’m not 100% sure what your problem is, but I can make a few suggestions. I’d check the permissions on your NFS export and make sure you have all of your ESX servers added for Read/Write access. I’d also add them to the root host permissions on the export. Also check and make sure that you don’t have the file system exported as read-only. How many NFS mounted datastores do you have? There is a default limit of 8 on a single ESX server, but it can be increased to 64 in the server’s settings. If you haven’t done it already, make sure you mount them with no_root_squash as described in the blog post. Good luck!

  3. I’ve checked everything that you mentioned, but I still can’t get it to work correctly.

    There is only one ESX server that I’m trying to set up with access to this NFS share, and it is added for R/W access to the share. All other datastores are SAN connected; this is the only NFS datastore.

    The exports file on the NAS has rw and no_root_squash for the NFS share.

  4. I’m having a devil of a time adding the option to the NFS export I set up in the GUI. I’ve got the IP addresses of the vmkernel and mgmt interfaces in both the RW and root areas. I can’t figure out the syntax to add the options. This is what it looks like thus far:

    export “/vmware_nfsvolume” rw=10.10.100.10:10.10.100.11 root=10.10.100.10:10.10.100.11

    I’ve tried #server_mount server_2 -option rw,uncached,sync,no_root_squash vmware_nfsvolume /vmware_volume and it reports that the command is done. However, the option is not set in the server2_export report.

    Argggh….

    1. Sorry to hear you’re having trouble. I can take a look again today and review the procedure I used, but honestly it looks like you’re doing it correctly. Opening a service request with EMC may be what you need to do. The fact that you’re successfully running the command and it’s not showing up on the export command is a definite red flag. My first thought would be to try and schedule an off hours reboot of your control station & data mover and try again. You could also try adding no_root_squash in /etc/exports config file, which was mentioned in a previous comment.

      1. I worked with EMC support engineers for several hours before I gave up. They were at a loss too. I was just about to open a ticket with VMware when I happened to hover the mouse over the RW, root, and access areas and read the fine print in the export properties in Unisphere. I needed to put the IP address in CIDR format…10.10.100.10/24.

        Typically, you’d only need the VMK IP address, but I also put the host’s mgmt IP in there too…so 10.10.100.10/24:10.10.100.11/24. I slapped that value in each of the three areas and lo and behold…I was then able to write to the NFS datastore. The EMC support engineers told me you only need the IP address in there…so I took them at their word. That cost me about 8 more hours of work over the weekend.
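
        For anyone else who hits this, the working export ended up looking roughly like the line below. It’s reconstructed from the values above, so treat it as an illustration rather than exact Unisphere output:

        export “/vmware_nfsvolume” rw=10.10.100.10/24:10.10.100.11/24 root=10.10.100.10/24:10.10.100.11/24 access=10.10.100.10/24:10.10.100.11/24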

        Thanks for this blog entry…I had forwarded it to the EMC engineers I worked with and they’d never heard of that no_root_squash option before. Yikes.
