Export NFS in Linux

Network File System (NFS) is a distributed file system protocol originally developed by Sun Microsystems in 1984, allowing a user on a client computer to access files over a network in a manner similar to how local storage is accessed.


Installation

Both client and server only require the installation of the nfs-utils package.
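For example, on an Arch-based system (other distributions ship the same tools in a package such as nfs-utils or nfs-common, installed with that distribution's package manager):

    # pacman -S nfs-utils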

It is highly recommended to use a time synchronization daemon to keep client/server clocks in sync. Without accurate clocks on all nodes, NFS can introduce unwanted delays.

Configuration

Server

Global configuration options are set in /etc/nfs.conf. Users of simple configurations should not need to edit this file.

The NFS server needs a list of exports (see exports(5) for details), which are defined in /etc/exports or /etc/exports.d/*.exports. These shares are relative to the so-called NFS root. A good security practice is to define an NFS root in a discrete directory tree, which keeps users limited to that mount point. Bind mounts are used to link the share mount point to the actual directory elsewhere on the filesystem.

Consider the following example wherein:

  1. The NFS root is /srv/nfs.
  2. The export is /srv/nfs/music via a bind mount to the actual target /mnt/music.
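A sketch of the corresponding commands:

    # mkdir -p /srv/nfs/music /mnt/music
    # mount --bind /mnt/music /srv/nfs/music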

To make the bind mount persistent across reboots, add it to fstab:
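    /mnt/music /srv/nfs/music  none  bind  0 0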

Add the directories to be shared to /etc/exports, limiting them to a range of addresses via a CIDR or to the hostname(s) of the client machines that will be allowed to mount them, e.g.:
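    # /etc/exports -- a sketch; the subnet is a placeholder for your network
    /srv/nfs        192.168.1.0/24(rw,sync,crossmnt,fsid=0)
    /srv/nfs/music  192.168.1.0/24(rw,sync)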

It should be noted that modifying /etc/exports while the server is running will require a re-export for changes to take effect:
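    # exportfs -arv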

To view the current loaded exports state in more detail, use:
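    # exportfs -v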

For more information about all available options see exports(5).

Starting the server
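Start and enable nfs-server.service to launch the server, e.g.:

    # systemctl enable --now nfs-server.service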

Restricting NFS to interfaces/IPs

By default, nfs-server.service listens for connections on all network interfaces, regardless of /etc/exports. This can be changed by defining which IPs and/or hostnames to listen on.
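For example, in /etc/nfs.conf (the address is a placeholder for one of the server's own addresses):

    [nfsd]
    host=192.168.1.123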

Restart nfs-server.service to apply the changes immediately.

Firewall configuration

To enable access through a firewall, TCP and UDP ports 111, 2049, and 20048 may need to be opened when using the default configuration; use rpcinfo -p to examine the exact ports in use on the server:

When using NFSv4, make sure TCP port 2049 is open. No other port opening should be required:
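For example, with iptables:

    # iptables -A INPUT -p tcp -m tcp --dport 2049 -j ACCEPT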

When using an older NFS version, make sure other ports are open:
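A sketch for the default ports listed above (verify the actual ports with rpcinfo -p first):

    # iptables -A INPUT -p tcp -m tcp --dport 111 -j ACCEPT
    # iptables -A INPUT -p tcp -m tcp --dport 2049 -j ACCEPT
    # iptables -A INPUT -p tcp -m tcp --dport 20048 -j ACCEPT
    # iptables -A INPUT -p udp -m udp --dport 111 -j ACCEPT
    # iptables -A INPUT -p udp -m udp --dport 20048 -j ACCEPT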

To have this configuration load on every system start, edit /etc/iptables/iptables.rules to include the following lines:
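    # a sketch in iptables-save syntax; adjust to the ports on your system
    -A INPUT -p tcp -m tcp --dport 111 -j ACCEPT
    -A INPUT -p tcp -m tcp --dport 2049 -j ACCEPT
    -A INPUT -p tcp -m tcp --dport 20048 -j ACCEPT
    -A INPUT -p udp -m udp --dport 111 -j ACCEPT
    -A INPUT -p udp -m udp --dport 20048 -j ACCEPT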

The previous commands can be saved by executing:
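    # iptables-save > /etc/iptables/iptables.rules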

If using NFSv3 with static ports configured for rpc.statd and lockd, the following ports may also need to be added to the configuration:
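A sketch, assuming statd and lockd have been pinned to the commonly used static ports 32765 and 32803 (these port numbers are an assumption; use whatever you configured):

    # iptables -A INPUT -p tcp -m tcp --dport 32765 -j ACCEPT
    # iptables -A INPUT -p tcp -m tcp --dport 32803 -j ACCEPT
    # iptables -A INPUT -p udp -m udp --dport 32765 -j ACCEPT
    # iptables -A INPUT -p udp -m udp --dport 32803 -j ACCEPT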

To apply the changes, restart iptables.service.

Enabling NFSv4 idmapping


The NFSv4 protocol represents the local system’s UID and GID values on the wire as strings of the form user@domain . The process of translating from UID to string and string to UID is referred to as ID mapping. See nfsidmap(8) for details.

Even though idmapd may be running, it may not be fully enabled. If /sys/module/nfs/parameters/nfs4_disable_idmapping or /sys/module/nfsd/parameters/nfs4_disable_idmapping returns Y on a client/server, enable it by:
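For example, on the client and on the server respectively:

    # echo N > /sys/module/nfs/parameters/nfs4_disable_idmapping
    # echo N > /sys/module/nfsd/parameters/nfs4_disable_idmapping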

Set it as a module option to make the change permanent, i.e.:
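A sketch of /etc/modprobe.d/nfsd.conf (the file name is arbitrary; 0 corresponds to N):

    options nfs nfs4_disable_idmapping=0
    options nfsd nfs4_disable_idmapping=0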

To fully use idmapping, make sure the domain is configured in /etc/idmapd.conf on both the server and the client:
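A minimal sketch of /etc/idmapd.conf (example.com is a placeholder for your NFSv4 domain):

    [General]
    Domain = example.com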

See [2] for details.

Client

Users intending to use NFSv4 with Kerberos need to start and enable nfs-client.target.

Manual mounting

For NFSv3 use this command to show the server’s exported file systems:
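For example (servername is a placeholder):

    $ showmount -e servername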

For NFSv4 mount the root NFS directory and look around for available mounts:
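For example:

    # mount servername:/ /mountpoint/on/client
    # ls /mountpoint/on/client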

Then mount omitting the server’s NFS export root:
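A sketch, using the music export from the server example:

    # mount -t nfs -o vers=4 servername:/music /mountpoint/on/client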

If the mount fails, try including the server's export root (required for Debian/RHEL/SLES; some distributions need -t nfs4 instead of -t nfs):
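    # mount -t nfs -o vers=4 servername:/srv/nfs/music /mountpoint/on/client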

Mount using /etc/fstab

Using fstab is useful for a server which is always on, so that the NFS shares are available whenever the client boots up. Edit the /etc/fstab file and add an appropriate line reflecting the setup. Again, the server's NFS export root is omitted.
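A sketch of such an /etc/fstab line (names are placeholders):

    servername:/music   /mountpoint/on/client   nfs   defaults,timeo=900,retrans=5,_netdev  0 0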

Some additional mount options to consider:

rsize and wsize
The rsize value is the number of bytes used when reading from the server. The wsize value is the number of bytes used when writing to the server. By default, if these options are not specified, the client and server negotiate the largest values they can both support (see nfs(5) for details). After changing these values, it is recommended to test the performance (see #Performance tuning).

soft or hard
Determines the recovery behaviour of the NFS client after an NFS request times out. If neither option is specified (or if the hard option is specified), NFS requests are retried indefinitely. If the soft option is specified, then the NFS client fails an NFS request after retrans retransmissions have been sent, causing the NFS client to return an error to the calling application.


Mount using /etc/fstab with systemd

Another method is using the x-systemd.automount option which mounts the filesystem upon access:
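    # a sketch; names and timeout values are illustrative
    servername:/music  /mountpoint/on/client  nfs  _netdev,noauto,x-systemd.automount,x-systemd.mount-timeout=10,timeo=14,x-systemd.idle-timeout=1min  0 0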

To make systemd aware of the changes to fstab, reload systemd and restart remote-fs.target [3].
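    # systemctl daemon-reload
    # systemctl restart remote-fs.target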


As systemd unit

Create a new .mount file inside /etc/systemd/system, e.g. mnt-myshare.mount. See systemd.mount(5) for details.

What= path to share

Where= path to mount the share

Options= share mounting options
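A minimal sketch of /etc/systemd/system/mnt-myshare.mount (server name and paths are placeholders; Where= must correspond to the unit file name):

    [Unit]
    Description=Mount NFS share from servername

    [Mount]
    What=servername:/music
    Where=/mnt/myshare
    Type=nfs
    Options=defaults

    [Install]
    WantedBy=multi-user.target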

To use mnt-myshare.mount, start the unit and enable it to run on system boot.

automount

To automatically mount a share, one may use the following automount unit:
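A matching sketch of /etc/systemd/system/mnt-myshare.automount:

    [Unit]
    Description=Automount NFS share

    [Automount]
    Where=/mnt/myshare

    [Install]
    WantedBy=multi-user.target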

Disable/stop the mnt-myshare.mount unit, and enable/start mnt-myshare.automount to automount the share when the mount path is being accessed.

Mount using autofs

Using autofs is useful when multiple machines want to connect via NFS; each could be both a client and a server. The reason this method is preferable over the earlier one is that if the server is switched off, the client will not throw errors about being unable to find NFS shares. See autofs#NFS network mounts for details.

Tips and tricks

Performance tuning

When using NFS on a network with a significant number of clients, one may increase the default number of NFS threads from 8 to 16 or even higher, depending on the server/network requirements.
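For example, in /etc/nfs.conf (the thread count is illustrative):

    [nfsd]
    threads=16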

It may be necessary to tune the rsize and wsize mount options to meet the requirements of the network configuration.

In recent Linux kernels (>2.6.18) the size of I/O operations allowed by the NFS server (the default maximum block size) varies depending on RAM size, with a maximum of 1M (1048576 bytes). The server's maximum block size is used even if NFS clients request bigger rsize and wsize values; see https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/5/html/5.8_Technical_Notes/Known_Issues-kernel.html for details. It is possible to change the default maximum block size allowed by the server by writing to /proc/fs/nfsd/max_block_size before starting nfsd. For example, the following command restores the previous default iosize of 32k:
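    # echo 32768 > /proc/fs/nfsd/max_block_size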

To make the change permanent, create a systemd-tmpfile:
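A sketch of /etc/tmpfiles.d/nfsd-block-size.conf (the file name is illustrative):

    w /proc/fs/nfsd/max_block_size - - - - 32768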

To mount with the increased rsize and wsize mount options:
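    # mount -t nfs -o rsize=32768,wsize=32768 servername:/music /mountpoint/on/client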

Furthermore, although it violates the NFS protocol, setting async instead of sync or sync,no_wdelay may achieve a significant performance gain, especially on spinning disks. Configure exports with this option and then execute exportfs -arv to apply.

Automatic mount handling

This trick is useful for NFS-shares on a wireless network and/or on a network that may be unreliable. If the NFS host becomes unreachable, the NFS share will be unmounted to hopefully prevent system hangs when using the hard mount option [5].

Make sure that the NFS mount points are correctly indicated in fstab:
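A sketch (the server name and path are placeholders):

    servername:/mnt/data  /mnt/data  nfs  noauto,noatime  0 0

The noauto option keeps the share from being mounted at boot; the auto_share script below does the mounting.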

Create the auto_share script that will be used by cron or systemd/Timers to use ICMP ping to check if the NFS host is reachable:
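    #!/bin/bash
    # /usr/local/bin/auto_share -- a minimal sketch: mount the NFS shares
    # when the server answers ping, lazily unmount them when it does not.
    # SERVER and MOUNTPOINTS are placeholders for your setup; the mount
    # points must have matching noauto entries in /etc/fstab.

    SERVER=servername
    MOUNTPOINTS="/mnt/data"

    if ping -c 1 -W 1 "$SERVER" > /dev/null 2>&1; then
        for mp in $MOUNTPOINTS; do
            mountpoint -q "$mp" || mount "$mp"
        done
    else
        for mp in $MOUNTPOINTS; do
            mountpoint -q "$mp" && umount -l "$mp"
        done
    fi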

Adjust the SERVER and MOUNTPOINTS values in the auto_share script above to match your setup.

Make sure the script is executable.
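For example (assuming it was saved as /usr/local/bin/auto_share):

    # chmod +x /usr/local/bin/auto_share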

Next, configure the script to run at a regular interval; in the example below it runs every minute.

systemd/Timers
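A sketch of a matching timer and service pair (unit names are illustrative):

/etc/systemd/system/auto_share.timer

    [Unit]
    Description=Check NFS host reachability every minute

    [Timer]
    OnCalendar=*-*-* *:*:00

    [Install]
    WantedBy=timers.target

/etc/systemd/system/auto_share.service

    [Unit]
    Description=Mount or unmount NFS shares based on host reachability

    [Service]
    Type=oneshot
    ExecStart=/usr/local/bin/auto_share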

Finally, enable and start auto_share.timer.

Using a NetworkManager dispatcher

NetworkManager can also be configured to run a script on network status change.

The easiest method for mounting shares on network status change is to symlink the auto_share script, e.g.:
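    # ln -s /usr/local/bin/auto_share /etc/NetworkManager/dispatcher.d/30-nfs.sh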

However, in that particular case unmounting will happen only after the network connection has already been disabled, which is unclean and may result in effects like freezing of KDE Plasma applets.

The following script safely unmounts the NFS shares before the relevant network connection is disabled, by listening for the pre-down and vpn-pre-down events:
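A sketch of /etc/NetworkManager/dispatcher.d/30-nfs.sh (the file name is illustrative; NetworkManager passes the event name as the second argument):

    #!/bin/sh
    # Lazily force-unmount all NFS shares before the connection goes down.
    case "$2" in
        pre-down|vpn-pre-down)
            umount -l -a -t nfs,nfs4 -f > /dev/null 2>&1
            ;;
    esac

Make the script executable, e.g.:

    # chmod +x /etc/NetworkManager/dispatcher.d/30-nfs.sh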

Create a symlink inside /etc/NetworkManager/dispatcher.d/pre-down to catch the pre-down events:
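    # ln -s /etc/NetworkManager/dispatcher.d/30-nfs.sh /etc/NetworkManager/dispatcher.d/pre-down.d/30-nfs.sh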

Troubleshooting

There is a dedicated article NFS/Troubleshooting.


NFS mount options | NFS exports options | Beginners Guide


In this article we will learn about the most used NFS mount options and NFS exports options, with examples. I have tried to keep the examples as simple as possible so that even a beginner to Linux can understand them and then decide which NFS mount and export options to use in his/her setup.

There are two types of permissions which can be implemented between an NFS server and client:

  1. NFS Server Side (NFS Exports Options)
  2. NFS Client side (NFS Mount Options)

Let us jump into the details of each type of permission. I have already configured an NFS server and client to demonstrate NFS mount options and NFS exports options, as this is a prerequisite for this article.

NFS Exports Options

NFS exports options are the permissions we apply on the NFS server when we create an NFS share in /etc/exports.

Below are the most used NFS exports options in Linux: secure/insecure, rw/ro, root_squash/no_root_squash, all_squash/no_all_squash, and sync/async.

NFS exports options example with secure vs insecure

  • With secure the port number from which the client requests a mount must be lower than 1024.
  • The secure permission is on by default.
  • To turn it off, specify insecure instead.

Below I have shared the /nfs_shares folder on the NFS server.
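A sketch of the corresponding /etc/exports entry (the * wildcard, allowing any client, is illustrative):

    /nfs_shares *(rw,sync)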

If you check the export (e.g. with exportfs -v), you will see that NFS applies secure by default.

In this case the client is forced to use a source port lower than 1024 to access the NFS shares. Here, as you can see, the client is using port 867 to access the share.

To allow the client to use any available free port, use insecure in the NFS share:
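    /nfs_shares *(rw,sync,insecure)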

Next, re-export your shares:
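    # exportfs -r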

Verify the NFS share permissions:
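    # exportfs -v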

So now a client is free to use any port. Using insecure does not mean that you are forcing a client to use a port higher than 1024; a client can still use a port lower than 1024. It is just that the client will now also be allowed to connect to the NFS server from higher port numbers, which are considered insecure.


NFS exports options example with ro vs rw

I believe the naming syntax explains the definition here.

  • ro means read-only access to the NFS Share
  • rw means read-write access to the NFS Share

But what if you share a directory as read-only but mount the NFS share as read-write?

In the example below I have shared /nfs_shares with read-only permission.
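A sketch of the export entry:

    /nfs_shares *(ro,sync)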

List the available shares

But on the NFS client, I will mount the NFS share with read-write permission.
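A sketch (nfs-server is a placeholder for the server's hostname or IP):

    # mount -o rw nfs-server:/nfs_shares /mnt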

Verify that the mount was successful. As you can see, the NFS share is mounted read-write:
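    $ mount | grep /mnt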

Let us try to create a file in our NFS mount point on the client
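This should fail with an error like the following:

    # touch /mnt/testfile
    touch: cannot touch '/mnt/testfile': Read-only file system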

So I hope this is clear: if a directory is shared as read-only, then you will not be allowed to perform any write operation on it, even if you mount the share with read-write permission.

root_squash vs no_root_squash

  • If you read the text carefully, the text itself explains the meaning of the parameter.
  • Here, squash literally means to squash (destroy) the power of the remote root user or don’t squash the power of the remote root user
  • root_squash prevents remote root users from having superuser (root) privileges on remote NFS-mounted volumes.
  • no_root_squash allows root user on the NFS client host to access the NFS-mounted directory with the same rights and privileges that the superuser would normally have.

NFS exports options root_squash example

Let us understand root_squash with some examples:

I have a directory /nfs_shares with 700 permissions on my NFS server, so only the owning user is allowed to read, write and execute in this directory.

Now this directory is shared via the NFS server using /etc/exports. I have given read-write permission and left all other options at their defaults.
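A sketch of the export entry:

    /nfs_shares *(rw)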

Re-export the shares

List the shared directories

On the Client I will mount the NFS Share to /mnt

Next let me try to navigate to the NFS mount point

Here, since we have used the default NFS exports options (which include root_squash), requests from the client's root user are mapped to the nobody user.
Also, we had given 700 permissions to /nfs_shares, which means no permissions for "others", so the nobody user is not allowed to do any activity in /nfs_shares.

Next I will give read and execute permission to others for /nfs_shares on the NFS Server
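For example:

    # chmod o+rx /nfs_shares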

Now I will be allowed to navigate inside the mount point

but since there is no write permission, even the root user will not be allowed to write inside /mnt.

Next I will also give write access to /nfs_shares (so now others have full access to /nfs_shares):
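    # chmod o+w /nfs_shares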

Now I should be allowed to write inside /mnt (where /nfs_shares is mounted)
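A sketch (the file name is illustrative; the listing should show nobody:nobody ownership):

    # touch /mnt/file_from_root
    # ls -l /mnt/file_from_root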

As expected, we were able to create a file, and the file is owned by the nobody user and group because we are using root_squash on the NFS share.

NFS exports options no_root_squash example

Next let's see the behaviour of no_root_squash.

I will update the NFS exports options on the NFS server to use no_root_squash.
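For example:

    /nfs_shares *(rw,no_root_squash)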

Re-export the shares

List the properties of the NFS Shares on the NFS Server

Now, if I create a new file on the NFS client:
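    # touch /mnt/another_file
    # ls -l /mnt/another_file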

So the new file is created with root permission.

This proves that with no_root_squash the NFS share is accessed as the root user.

Understanding all_squash vs no_all_squash

  • all_squash will map all user IDs (UIDs) and group IDs (GIDs) to the anonymous user.
  • all_squash is useful for NFS-exported public FTP directories, news spool directories, etc.
  • By default, no_all_squash is applied to NFS shares.

Understanding sync vs async

  • With sync, replies to requests are sent only after the changes have been committed to stable storage.
  • async allows the NFS server to violate the NFS protocol and reply to requests before any changes made by that request have been committed to stable storage.
  • Using the async option usually improves performance, but at the cost that an unclean server restart (i.e. a crash) can cause data to be lost or corrupted.

NFS Mount Options with mount

NFS mount options are the ones we use to mount an NFS share on the NFS client.

Below are the most used NFS mount options we are going to understand in this article with different examples.

Hard Mount vs Soft Mount

  • By default all NFS shares are mounted as hard mounts.
  • With a hard mount, if an NFS operation has a major timeout, a "server not responding" message is reported and the client continues to retry indefinitely.
  • With a hard mount there is a chance that a client performing operations on NFS shares gets stuck indefinitely if the NFS server becomes unreachable.
  • A soft mount allows the client to time out the connection after a number of retries specified by retrans=n.

NFS mount options hard mount example

In this NFS mount options example, I will mount my NFS share as a hard mount.
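A sketch (nfs-server is a placeholder; note that hard is also the default):

    # mount -o hard nfs-server:/nfs_shares /mnt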

Check the share properties to make sure hard mount is implemented.

Next I will create a small script that writes to the NFS share and also prints to the screen, so we know the progress of the script:
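    #!/bin/bash
    # Minimal sketch: write a counter to the NFS share and echo it
    # to the screen (the path and loop count are illustrative).
    for i in $(seq 1 10); do
        echo "$i" | tee -a /mnt/testfile
        sleep 5
    done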

Next I executed the script on client node

During the execution, after "4" was printed, I stopped the nfs-server service on the server:
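    # systemctl stop nfs-server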

On the client node I started getting "server not responding" messages in /var/log/messages.

Then I started the NFS server service again, after which the client was able to re-establish the connection with the NFS server.

And our script on the client node again started writing to the NFS share.

So we see there was no data loss with hard mount

Advantage and Disadvantage of NFS Hard Mount

  • The demerit of a hard mount is that it consumes more resources on your system, as the client holds the write process until the NFS server is back up.
  • Hard mounts can be used in mission-critical systems where data matters most, to make sure data is not lost while writing to NFS shares.

NFS mount options Soft Mount example

Let us also examine the behaviour of an NFS soft mount in our NFS mount options example.

First I will unmount the NFS share. I could also do a remount, but let's keep it simple:
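    # umount /mnt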

Then I will do a soft mount along with some additional values, such as retrans=2 and timeo=60.
Note that timeo is specified in tenths of a second, so timeo=60 is a 6-second timeout: the client will retransmit the request twice, 6 seconds apart, before declaring the NFS server unreachable.
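A sketch (nfs-server is again a placeholder):

    # mount -o soft,retrans=2,timeo=60 nfs-server:/nfs_shares /mnt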

Verify the NFS Mount Options on the client

Next we will again execute our script

Here I have stopped the nfs-server service to make my server unreachable.

In a couple of seconds we start getting the same alarms in /var/log/messages as with the hard mount.

But the script continues to execute even though it fails to write to the NFS share.

Advantage and Disadvantage of NFS Soft Mount

  • So a soft mount can lead to data loss in a real-world environment.
  • Although in this example, if I start the nfs-server the server becomes reachable again and the client starts writing to the NFS share once more, any data from the period when the NFS server was unreachable is lost.
  • So in a production environment where data is important, it is recommended to use hard mount as the preferred NFS mount option.

Define NFS version while mounting NFS Share

  • You can explicitly define the NFS version you wish to use to mount the NFS Share.
  • RHEL/CentOS 7/8 by default support NFSv3 and NFSv4 (unless you have explicitly disabled either of them).
  • So the client has an option to define the NFS version it wants to use to connect to the NFS Server
  • You can use nfsvers=n to define the NFS version

For example:
To mount NFS Share using NFSv4
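    # mount -o nfsvers=4 nfs-server:/nfs_shares /mnt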

Similarly to mount NFS Share using NFSv3
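    # mount -o nfsvers=3 nfs-server:/nfs_shares /mnt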

Use wsize and rsize mount option

  • There is no 'default' value for rsize and wsize; the 'default' is to use the largest value that both the client and server support.
  • If rsize/wsize is not specified in the mount options, the client will query the server and use the largest size that both support.
  • If rsize/wsize is specified in the mount options and exceeds the maximum value that either the client or server supports, the client will use the largest size that both support.
  • However, based on your system resources and requirements, you can choose to define your own rsize and wsize values, as shown below.

You can define your own wsize and rsize values when mounting:
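    # mount -o rsize=32768,wsize=32768 nfs-server:/nfs_shares /mnt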

Verify the new properties

For more details on the supported maximum read and write size with different Red Hat kernels check
What are the default and maximum values for rsize and wsize with NFS mounts?

Use intr mount option

  • When a process makes a system call, the kernel takes over the action.
  • During the time that the kernel is handling the system call, the process may not have control over itself.
  • When there’s an error, however, it can be quite a nuisance.
  • Because of this, NFS has an option to mount file systems with the interruptible flag (the intr option), which allows a process that is waiting on an NFS request to give up and move on.
  • In general, unless you have a reason not to, it is usually a good idea to use the intr option. Note, however, that Linux kernels 2.6.25 and later deprecate and ignore the intr/nointr mount options (see nfs(5)).

Using bg and fg NFS mount options

  • I wouldn’t blindly recommend this and it mostly depends on your use case.
  • These options can be used to select the retry behavior if a mount fails.
  • The bg option causes the mount attempts to be run in the background.
  • The fg option causes the mount attempt to be run in the foreground.
  • The default is fg, which is the best selection for file systems that must be available. This option prevents further processing until the mount is complete.
  • bg is a good selection for noncritical file systems because the client can do other processing while waiting for the mount request to be completed.

NFS Mount Options with Fstab

If you mount a share using the mount command, the mount lasts only for the current session; after a reboot you will have to mount the NFS share again.

To make the mount persistent you must create a new entry in /etc/fstab with the NFS share details. In /etc/fstab you can also define any additional NFS mount options for the share path.

For example, in this NFS mount options entry I will mount the /nfs_shares path as a soft mount, with NFSv3, a timeout value of 600 and a retrans value of 5:
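    nfs-server:/nfs_shares  /mnt  nfs  soft,nfsvers=3,timeo=600,retrans=5  0 0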

Save and exit the /etc/fstab file

Next execute mount -a to mount all the paths from /etc/fstab

Lastly, I hope the steps from this article to understand NFS exports options and NFS mount options on Linux were helpful. So, let me know your suggestions and feedback using the comment section.


