Briefly, NFS (Network File System) provides access to remote
filesystems, which appear similar to local resources on client hosts.
The following focuses on simple NFS server and client configuration in
Linux (see note 1). Our host details are:
HOST (server):  tux (10.0.23.171)
HOST (client):  cobblepot (10.0.23.31)
PROMPT (root):  HOST [0]
PROMPT (user):  troy@cobblepot [0]
USER UID:GID:   1000:1000 (on both server and client)
OS:             CentOS 5.4 Linux
NOTE:           The following should apply equally well to previous
                versions of CentOS (or Red Hat-based distros).

Starting off with our server side, NFS requires at least 3 services
running (for sane usage), though possibly up to 5, depending on NFS
version and features (see note 2):
nfs         (/etc/init.d/nfs)         (required)
nfslock     (/etc/init.d/nfslock)     (required)
portmap     (/etc/init.d/portmap)     (required)
rpcsvcgssd  (/etc/init.d/rpcsvcgssd)  (NFSv4, optional)
rpcidmapd   (/etc/init.d/rpcidmapd)   (NFSv4, required)

Below, we check which services are running and enable / start those
that aren't:
tux [0] for i in nfs nfslock portmap ; do /sbin/service ${i} status ; done
rpc.mountd is stopped
nfsd is stopped
rpc.rquotad is stopped
rpc.statd (pid 1834) is running...
portmap (pid 1798) is running...
tux [0] /usr/sbin/rpcinfo -p
   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp    741  status
    100024    1   tcp    744  status
tux [0] /sbin/chkconfig --list | /bin/awk '/(nfs|portmap)/ {print $1"\t"$5}'
nfs     3:off
nfslock 3:on
portmap 3:on
tux [0] chkconfig --level 3 nfs on
tux [0] /sbin/chkconfig --list | /bin/awk '/(nfs|portmap)/ {print $1"\t"$5}'
nfs     3:on
nfslock 3:on
portmap 3:on
tux [0] for i in nfs nfslock portmap ; do /sbin/service ${i} status ; done
rpc.mountd is stopped
nfsd is stopped
rpc.rquotad is stopped
rpc.statd (pid 1834) is running...
portmap (pid 1798) is running...
tux [0] service nfs start
Starting NFS services:    [  OK  ]
Starting NFS quotas:      [  OK  ]
Starting NFS daemon:      [  OK  ]
Starting NFS mountd:      [  OK  ]
Starting RPC idmapd:      [  OK  ]
tux [0] for i in nfs nfslock portmap ; do /sbin/service ${i} status ; done
rpc.mountd (pid 13154) is running...
nfsd (pid 13151 13150 13149 13148 13147 13146 13145 13144) is running...
rpc.rquotad (pid 13139) is running...
rpc.statd (pid 1834) is running...
portmap (pid 1798) is running...

That's quite a bit in the above. As a quick run through, we used
'service' to get the status of each of our services, then verified that
the local RPC server is functioning via 'rpcinfo'. Following that, we
saw that 'nfs' was disabled for runlevel 3, enabled it with another
'chkconfig', and verified the change. Since 'service' still showed the
'nfs' daemons (rpc.mountd, nfsd, rpc.rquotad) as stopped, we started
them via 'service nfs start' and verified. Now that our services are
up, we can update '/etc/exports'
with the filesystems (FS) that we want to export (share). Below, we've
updated 'exports', removing the entry for '/mnt' and adding one for
'/home' and another for '/usr/sfw' (see note 3):
tux [0] /bin/egrep -v '^#|^$' /etc/exports
/mnt localhost(ro) 127.0.0.1(ro)
tux [0] /bin/egrep -v '^#|^$' /etc/exports
/home     *(anonuid=65534) 10.0.22.0/23(rw)
/usr/sfw  (ro,subtree_check) beastie(rw) 10.0.23.191(rw,no_root_squash)

With 'exports' updated, rather than restarting 'nfs', we can run
'exportfs' to share (export) the newly configured FS:
tux [0] /usr/sbin/exportfs -av
exportfs: No host name given with /usr/sfw (ro,subtree_check), suggest *(ro,subtree_check) to avoid warning
exporting beastie:/usr/sfw
exporting 10.0.23.191:/usr/sfw
exporting 10.0.22.0/23:/home
exporting *:/usr/sfw
exporting *:/home

In the above, '-a' tells 'exportfs' to share all FS configured in
'exports' and '-v' tells 'exportfs' to be verbose about its actions.
Without '-v', 'exportfs' still exports all of the configured
filesystems, but does so silently, sending only warning messages to
STDERR:
tux [0] /usr/sbin/exportfs -a
exportfs: No host name given with /usr/sfw (ro,subtree_check), suggest *(ro,subtree_check) to avoid warning

As an aside, we can use 'exportfs' to share FS on the fly from the
command line and also remove exported FS individually as seen below:
tux [0] /usr/sbin/exportfs -io ro,subtree_check :/usr/bin
tux [0] /usr/sbin/exportfs -u *:/usr/bin

To verify our exported filesystems, we can use 'exportfs', 'showmount',
or review the contents of '/var/lib/nfs/etab':
tux [0] /usr/sbin/exportfs
/usr/sfw        beastie
/usr/sfw        10.0.23.191
/home           10.0.22.0/23
/usr/sfw        <world>
/home           <world>
tux [0] /usr/sbin/exportfs -v
/usr/sfw        beastie(rw,wdelay,root_squash,no_subtree_check,anonuid=65534,anongid=65534)
/usr/sfw        10.0.23.191(rw,wdelay,no_root_squash,no_subtree_check,anonuid=65534,anongid=65534)
/home           10.0.22.0/23(rw,wdelay,root_squash,no_subtree_check,anonuid=65534,anongid=65534)
/usr/sfw        <world>(ro,wdelay,root_squash,anonuid=65534,anongid=65534)
/home           <world>(ro,wdelay,root_squash,no_subtree_check,anonuid=65534,anongid=65534)
tux [0] /usr/sbin/showmount -e
Export list for tux:
/home    (everyone)
/usr/sfw (everyone)
tux [0] /bin/cat /var/lib/nfs/etab
/usr/sfw  beastie(rw,sync,wdelay,hide,nocrossmnt,secure,root_squash,no_all_squash,\
  no_subtree_check,secure_locks,acl,mapping=identity,anonuid=65534,anongid=65534)
/usr/sfw  10.0.23.191(rw,sync,wdelay,hide,nocrossmnt,secure,no_root_squash,no_all_squash,\
  no_subtree_check,secure_locks,acl,mapping=identity,anonuid=65534,anongid=65534)
/home     10.0.22.0/23(rw,sync,wdelay,hide,nocrossmnt,secure,root_squash,no_all_squash,\
  no_subtree_check,secure_locks,acl,mapping=identity,anonuid=65534,anongid=65534)
/usr/sfw  *(ro,sync,wdelay,hide,nocrossmnt,secure,root_squash,no_all_squash,subtree_check,\
  secure_locks,acl,mapping=identity,anonuid=65534,anongid=65534)
/home     *(ro,sync,wdelay,hide,nocrossmnt,secure,root_squash,no_all_squash,no_subtree_check,\
  secure_locks,acl,mapping=identity,anonuid=65534,anongid=65534)

Note, I've broken the lines up from 'etab' above for the sake of clarity.
The lines without a leading directory are continuations of the line
above them, as indicated by the trailing '\' on the preceding line.
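If '/etc/exports' is edited again later, the changes can be applied and
verified without restarting 'nfs', again via 'exportfs'. A minimal
sketch (commands only, run as root on the server):

tux [0] vi /etc/exports              (adjust / add entries)
tux [0] /usr/sbin/exportfs -ra       (re-export, syncing etab with exports)
tux [0] /usr/sbin/exportfs -v        (verify the active exports and options)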
With the server configured, we can work on the client host. A Linux
NFS client requires at least 3 services running (for sane usage), though
possibly 5 depending on NFS version (see note 2):
netfs      (/etc/init.d/netfs)      (required)
nfslock    (/etc/init.d/nfslock)    (required)
portmap    (/etc/init.d/portmap)    (required)
rpcgssd    (/etc/init.d/rpcgssd)    (NFSv4, optional)
rpcidmapd  (/etc/init.d/rpcidmapd)  (NFSv4, required)

Below, we check the status of the services, enabling / starting them as
needed, and use 'showmount' to review what shares are available from the
NFS server:
cobblepot [0] for i in nfslock portmap ; do /sbin/service ${i} status ; done
rpc.statd (pid 1628) is running...
portmap (pid 1592) is running...
cobblepot [0] /sbin/chkconfig --list | /bin/awk '/(nfslock|netfs|portmap)/ {print $1"\t"$5}'
netfs   3:off
nfslock 3:on
portmap 3:on
cobblepot [0] /sbin/chkconfig --level 3 netfs on
cobblepot [0] /sbin/chkconfig --list | /bin/awk '/(nfslock|netfs|portmap)/ {print $1"\t"$5}'
netfs   3:on
nfslock 3:on
portmap 3:on
cobblepot [0] /usr/sbin/showmount -e 10.0.23.171
Export list for tux:
/home    (everyone)
/usr/sfw (everyone)

Since we already have a '/home' on 'cobblepot', we'll create '/home2'
to mount '/home' from our NFS server (10.0.23.171), mount it, and verify:
cobblepot [0] /bin/mkdir /home2
cobblepot [0] /bin/ls -ld /home2
drwxr-xr-x 2 root root 4096 Feb 27 23:48 /home2
cobblepot [0] /bin/mount -t nfs -o rw,bg,intr 10.0.23.171:/home /home2
cobblepot [0] /bin/ls -ld /home2
drwxr-xr-x 4 root root 4096 Oct  6 22:33 /home2
cobblepot [0] /bin/df -h /home2
Filesystem            Size  Used Avail Use% Mounted on
10.0.23.171:/home     7.2G  2.1G  4.8G  31% /home2
cobblepot [0] /bin/mount | /bin/grep /home2
10.0.23.171:/home on /home2 type nfs (rw,bg,intr,addr=10.0.23.171)

It's notable that the timestamp on '/home2' changes from its original
modification time to the last modification time of '/home' on the NFS
server after we mount the share. On 'cobblepot', as user 'troy', we
switch to '/home2/troy' (10.0.23.171:/home/troy) and test out our access:
troy@cobblepot [0] cd /home2/troy
troy@cobblepot [0] echo "this is my file" >> myfile
troy@cobblepot [0] /bin/cat myfile
this is my file
troy@cobblepot [0] /bin/ls -l myfile
-rw-r--r-- 1 troy sysvuser 16 Feb 27 23:53 myfile
troy@cobblepot [0] /bin/rm myfile
troy@cobblepot [0] /bin/ls -l myfile
/bin/ls: myfile: No such file or directory
troy@cobblepot [2]

This is good; we can access and write to the shared filesystem, like
we'd expect. Now, let's create '/opt/sfw' on 'cobblepot' so that we
can mount the exported '/usr/sfw' FS:
cobblepot [0] /bin/mkdir -p /opt/sfw
cobblepot [0] /bin/mount -t nfs -o rw,intr 10.0.23.171:/usr/sfw /opt/sfw
cobblepot [0] /bin/mount | /bin/grep /opt/sfw
10.0.23.171:/usr/sfw on /opt/sfw type nfs (rw,intr,addr=10.0.23.171)
cobblepot [0] /bin/df -h /opt/sfw
Filesystem            Size  Used Avail Use% Mounted on
10.0.23.171:/usr/sfw  7.2G  2.1G  4.8G  31% /opt/sfw
cobblepot [0] /bin/ls /opt/sfw
bin  troy

With our share mounted, again as user 'troy', we try to create another
file ('also-mine') on 'cobblepot'. This time the write will go to the
read-only exported FS 10.0.23.171:/usr/sfw (mounted at '/opt/sfw'):
troy@cobblepot [0] /bin/ls -ld /opt/sfw/troy
drwxr-xr-x 2 troy sysvuser 4096 Feb 28 00:15 /opt/sfw/troy
troy@cobblepot [0] cd /opt/sfw/troy
troy@cobblepot [0] echo "this is also my file" >> also-mine
-ksh: also-mine: cannot create [Read-only file system]
cobblepot [0] /bin/umount /opt/sfw
cobblepot [0] /bin/umount /home2

The above is to illustrate that export options (ro) from the NFS
server take precedence over the 'mount' options (rw) used by the client.
After the "Read-only" error, we've unmounted both '/opt/sfw' and '/home2'.
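Relatedly (not captured above, and it would require '/home2' still being
mounted), the default 'root_squash' behavior on the '/home' export could
be seen from the client: root's requests are mapped to the anonymous UID
(65534 here), which has no write access to the root-owned '/home' on the
server. A rough sketch of what that check might look like:

cobblepot [0] /bin/touch /home2/root-test
    (expect "Permission denied": root is squashed to UID 65534, which
     cannot write to the mode 755, root-owned '/home' on the server)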
Rather than manually mounting an NFS share each time the host reboots,
I've added an entry to '/etc/fstab' on the last line below for '/home2':
cobblepot [0] /bin/cat /etc/fstab
LABEL=/1            /          ext3    defaults        1 1
LABEL=/var1         /var       ext3    defaults        1 2
tmpfs               /dev/shm   tmpfs   defaults        0 0
devpts              /dev/pts   devpts  gid=5,mode=620  0 0
sysfs               /sys       sysfs   defaults        0 0
proc                /proc      proc    defaults        0 0
LABEL=SWAP-sda3     swap       swap    defaults        0 0
10.0.23.171:/home   /home2     nfs     rw,bg,intr      0 0

Assuming we no longer need any of our configured shares, after our clients
have unmounted them, we can stop sharing all exported filesystems at
once with the first 'exportfs' command below. The second invocation
verifies that no exported FS remain:
tux [0] /usr/sbin/exportfs -ua
tux [0] /usr/sbin/exportfs
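If the server is coming out of NFS duty entirely, we could also stop the
'nfs' service and disable it for runlevel 3, mirroring how it was enabled
earlier. A minimal sketch:

tux [0] /sbin/service nfs stop
tux [0] /sbin/chkconfig --level 3 nfs off
tux [0] /sbin/chkconfig --list | /bin/awk '/(nfs|portmap)/ {print $1"\t"$5}'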
NOTES
Note 1: The details provided herein do not take into account any potential
security issues and assume access via a local LAN segment.
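One small mitigation that stays within the syntax used above would be to
drop the wildcard ('*') entries in favor of specific hosts or networks;
a hypothetical tightened 'exports' might read:

/home     10.0.22.0/23(rw,anonuid=65534)
/usr/sfw  beastie(rw) 10.0.23.191(rw,no_root_squash) 10.0.22.0/23(ro,subtree_check)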
Note 2: Server / Client services:
server
  nfs         NFS server functionality, based on /etc/exports containing
              data (see /etc/sysconfig/nfs)
              starts: /usr/sbin/rpc.svcgssd /usr/sbin/rpc.rquotad
                      /usr/sbin/rpc.nfsd /usr/sbin/rpc.mountd
  nfslock     provides NFS file locking functionality
              starts: /sbin/rpc.lockd (as needed) /sbin/rpc.statd
  portmap     portmapper, manages RPC connections
              starts: /sbin/portmap
  rpcsvcgssd  server-side rpcsec_gss daemon (NFSv4)
              starts: /usr/sbin/rpc.svcgssd (if not already started by nfs)
  rpcidmapd   NFSv4 [u|g]ID <-> name mapping daemon (client and server)
              starts: /usr/sbin/rpc.idmapd
client
  netfs       mounts network filesystems at boot
              runs: mount -a -t nfs,nfs4
  nfslock     provides NFS file locking functionality
              starts: /sbin/rpc.lockd (as needed) /sbin/rpc.statd
  portmap     portmapper, manages RPC connections
              starts: /sbin/portmap
  rpcgssd     manages RPCSEC GSS contexts for the NFSv4 client
              starts: /usr/sbin/rpc.gssd
  rpcidmapd   NFSv4 [u|g]ID <-> name mapping daemon (client and server)
              starts: /usr/sbin/rpc.idmapd

Note 3: The breakdown of 'exports' entries reads:
/home *(anonuid=65534) 10.0.22.0/23(rw)
/usr/sfw (ro,subtree_check) beastie(rw) 10.0.23.191(rw,no_root_squash)
format is 'directory <[host](option,option,option)> [[host](options)] ...'

(/home|/usr/sfw)  directory to be shared (exported)
*                 wildcard, any / all hosts
anonuid=65534     unknown users will have an effective UID of 65534
10.0.22.0/23      CIDR notation of the network to configure access for
beastie           specifying a host via a resolvable hostname
rw                read-write access for the accompanying host / network
ro                read-only access for the accompanying host / network
no_root_squash    normally the root user on the client host accesses a
                  share with the permissions / UID set by 'anonuid';
                  'no_root_squash' for the accompanying host / network
                  specifies that root on that host retains root
                  privileges / UID
subtree_check     validate file location in the exported tree
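To tie the format together, a hypothetical entry (directory and hosts
made up for illustration) granting one host read-write access and a
network read-only access could read:

/export/data  beastie(rw,no_root_squash)  10.0.22.0/23(ro,subtree_check)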
see also:
Configuring NFS in Solaris
Configuring NFS in FreeBSD
Configuring NFS in SmartOS