XenServer Org / XSO-19

NFSv4 share will not mount for some shares



    • Type: Bug
    • Status: Done
    • Priority: Minor
    • Resolution: Done
    • Affects Version/s: Creedence alpha, beta, RC
    • Fix Version/s: None
    • Component/s: None
    • Labels:
    • Environment:


      An NFS share advertised as NFSv4-capable by a NexentaStor appliance mounts only as NFSv3 on Creedence. I have been told this is not an issue for some appliances, such as the NetApp FAS-2240. There does not appear to be a way to override the default NFS mount version using any available "Advanced Options" token when the share name is requested from XenCenter.

      The workaround is to allow the connection to be established using NFSv3, and then manually "umount" the share and mount it using the "-t nfs4" option. This is, of course, not permanent and is lost on a subsequent reboot. After redoing the mount, running "nfsstat -v4" shows there is indeed NFSv4 traffic on the XenServer, and the monitor on the NexentaStor utility likewise indicates v4 packets are present.
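      The workaround described above can be scripted as follows. The server name, export path, and mount point are hypothetical values for illustration, not taken from the report; substitute your own:

      ```shell
      #!/bin/sh
      # Non-persistent workaround: remount an NFSv3-mounted share as NFSv4.
      # SERVER, EXPORT, and MNT are hypothetical -- substitute your own values.
      SERVER=nexenta.example.com
      EXPORT=/volumes/vol1/xen-sr
      MNT=/mnt/xen-sr

      # Drop the NFSv3 mount that XenServer established...
      umount "$MNT"

      # ...and remount the same export explicitly as NFSv4.
      mount -t nfs4 "$SERVER:$EXPORT" "$MNT"

      # Verify: client-side NFSv4 operation counters should now increase.
      nfsstat -v4
      ```

      Because the change is not recorded anywhere XenServer consults, it must be repeated after every reboot.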

      There does not appear to be any override option readily available in /opt/xensource/sm/nfs.py to specify NFSv4. One posted suggestion, at http://likerabbits.blogspot.com/2009/09/xenserver-performance-tweaks.html, is to modify the nfs.py file. However, other NFS shares may still need to be connected to using NFSv3, so forcing a single protocol for all NFS connections does not seem like the best approach.

      One option would be to have the NFS client attempt to connect at the maximum NFS version and work its way down. Failing that, at least allowing the user to specify manually which NFS version to try would be highly desirable for storage devices that, for whatever reason, cannot auto-negotiate NFSv4.
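      A highest-first fallback of the kind proposed could be sketched in shell using the standard Linux NFS client's "vers=" mount option; the server and paths are again hypothetical:

      ```shell
      #!/bin/sh
      # Sketch: try NFS versions from highest to lowest until one mounts.
      # SERVER, EXPORT, and MNT are hypothetical -- substitute your own values.
      SERVER=nexenta.example.com
      EXPORT=/volumes/vol1/xen-sr
      MNT=/mnt/xen-sr

      for vers in 4 3; do
          # "-o vers=N" pins the protocol version of a single mount,
          # leaving other mounts free to use a different version.
          if mount -t nfs -o vers=$vers "$SERVER:$EXPORT" "$MNT"; then
              echo "Mounted $SERVER:$EXPORT as NFSv$vers"
              break
          fi
      done
      ```

      Equally, exposing a per-share "vers=" setting would let a user pin one share to NFSv3 and another to NFSv4 without modifying nfs.py for all connections.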




            Reported by: Tobias Kreidl (tjkreidl)