NFS on Synology
Objectives/Overview
We want to configure our Synology NAS DS923+ to share NFS and allow Proxmox to use that share as disk storage for its virtual machine images. We need to create the share on the NAS, create the mount point on the Proxmox node, and then map the two together. Afterwards, we make the mount persist across reboots and tighten security.
Devices:
host | ip | notes |
---|---|---|
pve128 | 192.168.7.30 | /mnt/nas3-proxmox |
nas3 | 192.168.7.3 | /volume1/proxmox |
in other words:
pve128:/mnt/nas3-proxmox > nas3:/volume1/proxmox
Setup NAS WebUI
You cannot mount a remote share that doesn’t exist, so create the share first:
- Control Panel > File Services > NFS
- Control Panel > Shared Folder > Create
- Control Panel > Shared Folder > Edit > NFS Permissions > Add an NFS rule: set the client’s hostname or IP, its privilege (Read/Write), and the squash option (see the export check below)
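If SSH is enabled on the NAS, you can sanity-check the resulting export from the NAS shell. The output below is illustrative; the exact options depend on the rule you created:
admin@nas3:~$ sudo cat /etc/exports
/volume1/proxmox 192.168.7.30(rw,sync,sec=sys,root_squash)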
Setup Proxmox CLI
Install NFS Client
root@pve128:~# apt install nfs-common
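You can confirm the package is installed before going further:
root@pve128:~# dpkg -s nfs-common | grep Status
Status: install ok installed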
Verify Mounts
Is it already mounted?
root@pve128:~# mount -av
/ : ignored
/boot/efi : already mounted
none : ignored
/proc : already mounted
If it were mounted it might look like this:
root@pve128:~# mount -av
/ : ignored
/boot/efi : already mounted
none : ignored
/proc : already mounted
/mnt/nas3-proxmox : already mounted
Verify State Permission
I haven’t confirmed whether this should be 644 or 755 yet. If you change the permissions, restart the services. Note that this gives all users whatever permissions you set (644 or 755).
root@pve128:~# ls -ld /var/lib/nfs/state
-rw-r--r-- 1 root root 4 Jun 19 17:37 /var/lib/nfs/state
root@pve128:~# chmod 755 /var/lib/nfs/state
root@pve128:~# systemctl restart rpc-statd
root@pve128:~# systemctl restart nfs-client.target
root@pve128:~# ls -ld /var/lib/nfs/state
-rwxr-xr-x 1 root root 4 Jun 19 17:37 /var/lib/nfs/state
Make Mount Point
First, we need to make a mount point on our Proxmox server. Each member of a cluster needs to go through these steps to use the mount point.
root@pve128:~# mkdir /mnt/nas3-proxmox
Mount it
I didn’t realize that my Proxmox host couldn’t resolve nas3 by name. That was an easy fix: the NAS has a static IP, so I just mounted by IP instead.
root@pve128:~# mount -t nfs nas3:/proxmox /mnt/nas3-proxmox
^C
root@pve128:~# ping nas3
ping: nas3: Temporary failure in name resolution
root@pve128:~# ping 192.168.7.3
PING 192.168.7.3 (192.168.7.3) 56(84) bytes of data.
64 bytes from 192.168.7.3: icmp_seq=1 ttl=64 time=0.011 ms
64 bytes from 192.168.7.3: icmp_seq=2 ttl=64 time=0.043 ms
^C
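Alternatively, since the NAS has a static IP, you could make the name resolve with a hosts entry (the entry below matches this setup):
root@pve128:~# echo '192.168.7.3 nas3' | tee -a /etc/hosts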
Mount Command
root@pve128:~# mount -t nfs 192.168.7.3:/volume1/proxmox /mnt/nas3-proxmox
Secure Mount Command
root@pve128:~# mount -t nfs -o rw,sync,noexec,nosuid,nodev,secure 192.168.7.3:/volume1/proxmox /mnt/nas3-proxmox
- rw: Mounts the NFS share with read-write access. If you need read-only access, use ro instead.
- sync: Ensures that changes are written to the server immediately, which can help prevent data corruption.
- noexec: Prevents the execution of binaries on the mounted file system. This is useful for directories that do not contain executable files.
- nosuid: Prevents the operation of set-user-ID and set-group-ID bits. This enhances security by not allowing privilege escalation via files on the mounted file system.
- nodev: Disallows access to device files on the mounted file system. This prevents potential device file exploitation.
- secure: Requires the client to connect from a privileged source port (below 1024). Since only root can bind those ports, this keeps unprivileged processes from originating NFS traffic.
Verify
root@pve128:~# mount | grep nfs
192.168.7.3:/volume1/proxmox on /mnt/nas3-proxmox type nfs (rw,relatime,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.7.3,mountvers=3,mountport=892,mountproto=udp,local_lock=none,addr=192.168.7.3)
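nfsstat (installed with nfs-common) gives a similar per-mount view of the negotiated options:
root@pve128:~# nfsstat -m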
Survive Reboot
Add this line to your /etc/fstab:
echo '192.168.7.3:/volume1/proxmox /mnt/nas3-proxmox nfs defaults 0 0' | tee -a /etc/fstab
or securely:
echo '192.168.7.3:/volume1/proxmox /mnt/nas3-proxmox nfs defaults,nosuid,nodev,secure 0 0' | tee -a /etc/fstab
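After editing /etc/fstab, test it without rebooting. mount -a mounts everything in fstab that is not already mounted, so an error here means the new line is bad:
root@pve128:~# mount -av
root@pve128:~# findmnt /mnt/nas3-proxmox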
Setup Proxmox WebUI
Add/Configure NFS Storage
Navigate to Datacenter > Storage > Add > NFS. Configure with:
- ID: nas3-proxmox
- Server: 192.168.7.3
- Export: /volume1/proxmox
- Content: Select the content types this storage should hold (e.g., Disk image, ISO image, VZDump backup file).
- Nodes: Select all nodes on which you have run the mount commands (see the CLI alternative below)
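Alternatively, the same storage can be defined once from any node’s shell with pvesm. The content types here are an example; match them to what you selected above:
root@pve128:~# pvesm add nfs nas3-proxmox --server 192.168.7.3 --export /volume1/proxmox --content images,iso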
Troubleshooting
List NFS Shares
root@pve128:~# showmount -e 192.168.7.3
Export list for 192.168.7.3:
/volume1/proxmox 192.168.7.30
What does Mount Show?
root@pve128:~# mount | grep nas3
Check Journalctl
root@pve128:~# journalctl -xe | grep nfs
Jun 19 16:09:57 pve128 systemd[1]: Reached target nfs-client.target - NFS client services.
░░ Subject: A start job for unit nfs-client.target has finished successfully
░░ A start job for unit nfs-client.target has finished successfully.
Jun 19 17:37:29 pve128 rpc.statd[16148]: Failed to read /var/lib/nfs/state: Success
Verify Services are Running
root@pve128:~# systemctl status nfs-client.target
● nfs-client.target - NFS client services
Loaded: loaded (/lib/systemd/system/nfs-client.target; enabled; preset: enabled)
Active: active since Wed 2024-06-19 17:42:26 CDT; 12h ago
Jun 19 17:42:26 pve128 systemd[1]: Stopping nfs-client.target - NFS client services...
Jun 19 17:42:26 pve128 systemd[1]: Reached target nfs-client.target - NFS client services.
root@pve128:~# systemctl status rpc-statd
● rpc-statd.service - NFS status monitor for NFSv2/3 locking.
Loaded: loaded (/lib/systemd/system/rpc-statd.service; enabled-runtime; preset: enabled)
Active: active (running) since Wed 2024-06-19 17:42:11 CDT; 12h ago
Process: 17529 ExecStart=/sbin/rpc.statd (code=exited, status=0/SUCCESS)
Main PID: 17530 (rpc.statd)
Tasks: 1 (limit: 154284)
Memory: 504.0K
CPU: 1ms
CGroup: /system.slice/rpc-statd.service
└─17530 /sbin/rpc.statd
Jun 19 17:42:11 pve128 systemd[1]: Starting rpc-statd.service - NFS status monitor for NFSv2/3 locking....
Jun 19 17:42:11 pve128 rpc.statd[17530]: Version 2.6.2 starting
Jun 19 17:42:11 pve128 rpc.statd[17530]: Flags: TI-RPC
Jun 19 17:42:11 pve128 systemd[1]: Started rpc-statd.service - NFS status monitor for NFSv2/3 locking..
root@pve128:~# systemctl status rpcbind
● rpcbind.service - RPC bind portmap service
Loaded: loaded (/lib/systemd/system/rpcbind.service; enabled; preset: enabled)
Active: active (running) since Wed 2024-06-19 16:09:57 CDT; 14h ago
TriggeredBy: ● rpcbind.socket
Docs: man:rpcbind(8)
Main PID: 1674 (rpcbind)
Tasks: 1 (limit: 154284)
Memory: 884.0K
CPU: 61ms
CGroup: /system.slice/rpcbind.service
└─1674 /sbin/rpcbind -f -w
Jun 19 16:09:57 pve128 systemd[1]: Starting rpcbind.service - RPC bind portmap service...
Jun 19 16:09:57 pve128 systemd[1]: Started rpcbind.service - RPC bind portmap service.
Verify Jumbo Frame
Check the MTU in case things aren’t working. This is unlikely to be an issue on a flat home network, but it can be if the path crosses several routers, a VPN, etc.
root@pve128:~# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP mode DEFAULT group default qlen 1000
link/ether 28:87:ba:2f:76:c2 brd ff:ff:ff:ff:ff:ff
3: enp6s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether d8:5e:d3:e0:71:a4 brd ff:ff:ff:ff:ff:ff
4: wlp7s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether bc:09:1b:f4:b5:fa brd ff:ff:ff:ff:ff:ff
5: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 28:87:ba:2f:76:c2 brd ff:ff:ff:ff:ff:ff
Ping with a packet that is not allowed to fragment; 1472 bytes of ICMP payload plus 28 bytes of IP and ICMP headers equals the standard 1500-byte MTU:
ping -M do -s 1472 192.168.7.3
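If the payload is too large for the path, ping refuses to send rather than fragmenting. The exact wording varies by version, but it looks roughly like:
ping: local error: message too long, mtu=1500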
Script it if you do not know your max MTU
#!/bin/bash
# Define the NFS server IP and the starting MTU size
NFS_SERVER="192.168.7.3"
MAX_MTU=1550

# Initialize the lower and upper bounds for the search
LOWER_BOUND=0
UPPER_BOUND=$((MAX_MTU - 28)) # Subtracting 28 bytes for IP and ICMP headers

# Function to check if a given packet size works without fragmentation
check_mtu() {
    local size=$1
    if ping -M do -s "$size" -c 1 "$NFS_SERVER" > /dev/null 2>&1; then
        echo 1
    else
        echo 0
    fi
}

# Binary search for the largest packet size that does not fragment
while [ $((LOWER_BOUND + 1)) -lt $UPPER_BOUND ]; do
    MID=$(( (LOWER_BOUND + UPPER_BOUND) / 2 ))
    if [ "$(check_mtu $MID)" -eq 1 ]; then
        LOWER_BOUND=$MID
    else
        UPPER_BOUND=$MID
    fi
done

# The maximum MTU size is the largest size that worked, plus 28 bytes for headers
MAX_PACKET_SIZE=$LOWER_BOUND
MAX_MTU=$((MAX_PACKET_SIZE + 28))
echo "The maximum MTU size to the NFS server $NFS_SERVER is $MAX_MTU bytes."

# Clean exit
exit 0
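Save the script (the filename below is arbitrary) and run it; on this flat network with a 1500-byte interface MTU it should report:
root@pve128:~# bash find-max-mtu.sh
The maximum MTU size to the NFS server 192.168.7.3 is 1500 bytes.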
Firewalls?
Check host-based firewalls on both the NFS server and the client:
root@pve128:~# iptables -L -n -v
Chain INPUT (policy ACCEPT 1238K packets, 4544M bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 976K packets, 4533M bytes)
pkts bytes target prot opt in out source destination
Ports Open?
If the ports test open, a firewall is not blocking the connection.
root@pve128:~# nc -zv 192.168.7.3 2049
192.168.7.3: inverse host lookup failed: Unknown host
(UNKNOWN) [192.168.7.3] 2049 (nfs) open
root@pve128:~# nc -zv 192.168.7.3 111
192.168.7.3: inverse host lookup failed: Unknown host
(UNKNOWN) [192.168.7.3] 111 (sunrpc) open
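Since this mount negotiated NFSv3, mountd matters too; the rpcinfo output below shows the Synology pinning it to port 892, which you can probe the same way:
root@pve128:~# nc -zv 192.168.7.3 892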
Verify RPC Services
root@pve128:~# rpcinfo -p 192.168.7.3
program vers proto port service
100000 4 tcp 111 portmapper
100000 3 tcp 111 portmapper
100000 2 tcp 111 portmapper
100000 4 udp 111 portmapper
100000 3 udp 111 portmapper
100000 2 udp 111 portmapper
100005 1 udp 892 mountd
100005 1 tcp 892 mountd
100005 2 udp 892 mountd
100005 2 tcp 892 mountd
100005 3 udp 892 mountd
100005 3 tcp 892 mountd
100024 1 udp 662 status
100024 1 tcp 662 status
100003 2 tcp 2049 nfs
100003 3 tcp 2049 nfs
100003 2 udp 2049 nfs
100003 3 udp 2049 nfs
100021 1 udp 4045 nlockmgr
100021 3 udp 4045 nlockmgr
100021 4 udp 4045 nlockmgr
100021 1 tcp 4045 nlockmgr
100021 3 tcp 4045 nlockmgr
100021 4 tcp 4045 nlockmgr
Key RPC Services for NFS
- portmapper (program 100000): The port mapper service maps RPC program numbers to network ports; the NFS client needs it to locate the other RPC services on the server. It runs on port 111 for both TCP and UDP.
- mountd (program 100005): The mount daemon handles mount requests from NFS clients. It typically runs on port 892.
- nfs (program 100003): The main NFS service, which handles file operations. It usually runs on port 2049 for both TCP and UDP.
- status (program 100024): The NFS status monitor handles lock recovery and other state-related services. Usually a dynamic port, though this Synology pins it to 662.
- nlockmgr (program 100021): The NFS lock manager handles file locking to ensure data integrity during concurrent access. Usually a dynamic port, here 4045.
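rpcinfo can also make a test (NULL) call against a single service, confirming it answers rather than just being registered:
root@pve128:~# rpcinfo -t 192.168.7.3 nfs 3
program 100003 version 3 ready and waiting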
Security Tips
- Use NFSv4, which supports Kerberos authentication and encryption
- Restrict access to specific IPs/subnets through NFS rules and/or firewalls
- Grant only the necessary read or write permissions
- Use root_squash to prevent remote root users from having root access on the NFS server (it maps the remote root user to an unprivileged account; on the Synology this is the squash setting in the NFS rule)
- Open the firewall only to the required ports: 2049 for NFS and 111 for RPC (plus mountd for NFSv3)
- Use a VPN across any public links, or keep NFS traffic on private/isolated networks
- Use NFS export rules to limit privileges (see the illustrative export below)
- Use consistent UID/GID mapping between the NFS server and clients for proper access control
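For reference, on a plain Linux NFS server these tips translate to an /etc/exports line like the following (illustrative; on the DS923+ you express the same settings through the NFS Permissions dialog):
/volume1/proxmox 192.168.7.30(rw,sync,root_squash,secure)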
Summary of Commands
Setup NAS
- Control Panel > File Services > NFS
- Control Panel > Shared Folder > Create
Proxmox CLI (Each Node)
You will need to run the mount steps on each node that should be able to use the NFS share.
mkdir -p /mnt/nas3-proxmox
mount -t nfs 192.168.7.3:/volume1/proxmox /mnt/nas3-proxmox
echo '192.168.7.3:/volume1/proxmox /mnt/nas3-proxmox nfs defaults 0 0' | tee -a /etc/fstab
Proxmox WebUI (once per cluster)
Navigate to Datacenter > Storage > Add > NFS. Configure with:
Section | Setting |
---|---|
ID | nas3-proxmox |
Server | 192.168.7.3 |
Export | /volume1/proxmox |
Content | Select the content types you need (e.g., Disk image, ISO image) |
Nodes | Select all nodes you have run mount commands on |