I had a little bit of a learning experience this week regarding NFS exports and Mac OS X that I thought would be interesting to share with my readers. It’s part “simple tip” and part “facepalm.” Read on!
TL;DR: NFS works (insecurely) if you use all_squash and set anonuid and anongid to match your UNIX user and group IDs.
The Setup: What I Wanted to Do
I have an old Iomega ix4-200d with about 2 TB of data on it and wanted to transfer that to the Drobo attached to my Mac. I also have a new Raspberry Pi 2 connected to a Drobo of its own and wanted to access its data over the network from the Mac.
There are essentially four protocols I could choose from:
- CIFS – The Iomega runs an old version of Samba (v3.0.32 if you’re curious) that only supports CIFS, the ancient and crusty NAS protocol still sadly supported by just about everything. Pro: It works. Con: Not well.
- NFS – The other ubiquitous protocol, NFSv3 is well-supported by most systems including the Iomega, Raspbian on the Raspberry Pi, and Mac OS X. But it’s weird (see below).
- SMB2 – Apple gave up on AFP and CIFS and embraced SMB2, the way-better sequel to CIFS. And Samba supports it too. But not on the old Iomega.
- AFP – Apple’s NAS protocol is rapidly falling out of favor and the open source servers are flaky to say the least. Do not want.
Frankly, SMB2 (or the newer SMB3) is the best NAS protocol for modern devices thanks to solid support from Microsoft, Apple, and Samba. But I needed to make either CIFS or NFS work in my special circumstance thanks to that old Iomega.
First I tried CIFS since it was already configured, but it was too slow. I tried tuning it but gave up and turned to NFS. This led me back to my 1990s nightmares, setting up exports and trying to match IDs. But I got it all working! Here’s the story.
Samba 3.0 and CIFS
The easiest option was just sharing the ix4 volume using CIFS (yes, CIFS).
Yes, that works, except that the Iomega is so underpowered and slow. I was getting less than 10 MB/s, so the transfer would take more than two days! Luckily, I rooted the Iomega years ago and was able to get inside and take a look. Sure enough, the CPU was pegged at 100% the whole time my transfer was in progress.
I tried tuning Samba to use less CPU. An old trick for that old dog was to lower the log level to 0 in /etc/samba/smb.conf and this did help. But transfers were still running under 12 MB/s even without the CPU pegged.
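For reference, the log-level change is a one-line edit in the global section of smb.conf (the section layout below is typical for Samba 3.0; your NAS firmware may arrange it differently):

```ini
# /etc/samba/smb.conf -- stop Samba from burning CPU on log formatting
[global]
    log level = 0
```

Restart or HUP the smbd process afterward so the setting takes effect.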
So Samba and CIFS struck out. On to NFS!
NFS Wasn’t Designed For Home Networks
NFS is a very old protocol created for a very different circumstance. Sun Microsystems, creator of NFS, expected it would be used in well-administered networks as one component of a whole fleet of network protocols.
One protocol they particularly liked was NIS/yp, which shared user accounts and managed security between systems. If you were user 1234 on one system, you were that on all systems. And your login credentials were checked by a central server no matter where you logged in. NIS managed authentication and NFS trusted it for authorization. This is not how most home networks are configured.
Mac OS X is a UNIX system, but it’s not a managed one like Sun imagined. The same is true for all of the other UNIX systems commonly found in home networks (routers, NAS arrays, etc). Each maintains its own set of user and group id numbers and passwords and each manages authentication and authorization in a vacuum. Two Macs might share the same user accounts, passwords, and id numbers, but this would be purely coincidental.
For this reason, the NAS protocols developed by Apple and Microsoft (AFP and SMB/CIFS, respectively) use local credentials rather than relying on a consistent network. When you connect to a share on a Mac or a Windows box, you must log into that computer unless a network name service like Active Directory or LDAP is active on the network.
How UNIX Permissions Work
On traditional UNIX systems, each file and directory has three sets of permissions: One user, one group, and everyone else. Each file must be owned by one (and only one) user and one (and only one) group. And each is identified by a simple integer, not some kind of secure hash. NFS operates the same way. Some systems (including HFS+ in Mac OS X) have optional, more advanced file permissions but let’s just ignore that for the moment.
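You can see those plain-integer ids and mode bits for yourself. A quick sketch (GNU coreutils `stat -c` shown; on Mac OS X the equivalent is `stat -f`):

```shell
# Create a file, set the classic user/group/other permission bits,
# and read back the numeric mode, owner uid, and group gid.
tmp=$(mktemp)
chmod 640 "$tmp"            # rw- for the owner, r-- for the group, --- for everyone else
stat -c '%a %u %g' "$tmp"   # mode, uid, and gid as simple integers, e.g. "640 1000 100"
rm -f "$tmp"
```

Note there is nothing cryptographic here: the owner is just whatever integer the filesystem recorded, which is exactly what NFS passes around.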
Unlike AFP and SMB, NFS only manages security at the share level and blindly relies on local user id numbers beyond that. If a client can connect to an NFS server and mount its share, the server relies on the client to decide who has access to what files and directories on that share. This seems bizarre, but it’s how NFS was designed and makes sense when you consider the context in which it was designed.
Most UNIX filesystems work really well in their own context but become a security nightmare in other contexts. Imagine I took the hard drive out of a Sun server (or any Linux system or router) and connected it to a different one. File access permissions are encoded in the directory structure on disk and would remain as they were, but the server (not the filesystem) decides how to map those to access requests.
A file accessible only by user 0 (root) can be accessed by any “user zero” on any system, not just the one on which it was created. This makes it incredibly easy to hack into any UNIX system’s storage: All you have to do is set up a new system where you control root and you can access any file on any filesystem attached to it.
Using NFS At Home
NFS works exactly the same way as most local filesystems. When you mount an NFS share, file and directory access are arbitrated by the local system based on its own user and group databases. This causes all sorts of issues in situations where the user and group ids aren’t identical from one system to another (i.e., every home network).
What we would like to do is intelligently map the user ids on the NFS client to those on the NFS server, but NFS shockingly doesn’t have any way to do this. Instead it has something called “squashing”, which is a wonderfully evocative term for a remarkably insecure and undesirable but probably necessary thing.
Since it’s important to protect shared filesystems from rampant root access, NFS shares can be exported with “root_squash” to translate remote root access into some other user id, typically “nobody”. This is like taking a sledgehammer to a nail, but effectively prevents users with root access from crushing remote filesystems.
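In Linux /etc/exports syntax, root squashing looks like this (path and subnet invented for illustration; root_squash is actually the default on most Linux NFS servers, so you rarely have to spell it out):

```ini
# /etc/exports -- remote root (uid 0) is remapped to the anonymous user
/mnt/share  192.168.1.0/24(rw,root_squash)
```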
But what about desirable access from user accounts that just don’t happen to have the same numerical id from system to system? There’s no good answer. You can change user ids so they match, but this is a tricky process and requires that you also chown all the local files to match the new id. Not fun.
all_squash and anonuid
An easier thing to do is use the “all_squash” option on the NFS server to map every access onto an appropriate account. This has the benefit of being functional without making a ton of changes but has the drawback of being incredibly insecure and risky. Every access to every directory and file will appear to be made by a single user id and group id on the NAS server. This could be acceptable if just one or two people are using the NAS (as in my case) but is a horrible, awful idea for any truly-shared service.
If you have root access to your home NAS, Raspberry Pi, or similar UNIX system and you want to share its files using NFS to your Mac, here’s what to do:
- Figure out an appropriate universal user id on the NAS system: cat /etc/passwd and look at the accounts. I used my own user id; your system might also have some kind of generic account. These typically start around 1000. You probably don’t want to use root, which is typically id 0. The id number is the third field, after the username and the password field, which is usually “x” or “*” since we use shadow passwords these days.
- Pick a good universal group id on the NAS: cat /etc/group and look at the group list. You could just use the user-specific group listed after your user id from step 1, but it’s better to use the “users” group or something similar. This is typically group 100 or 1000. You can also add all your NAS accounts to that group (after the last colon) to make sure they’ll be able to read the files created over your NFS share.
- Manually edit /etc/exports to share filesystems using the all_squash option and set the appropriate anonuid and anongid values determined above. You’ll do this for each client IP address that might access the share. You do have static IP addresses, right?
- Now tell NFS to share this directory by typing exportfs -a
- On the Mac client, in Finder, type Command-K and enter the appropriate NFS URL to mount.
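Putting those steps together, here is a minimal sketch. Every username, path, IP address, and id number below is illustrative; substitute the values you found in steps 1 and 2:

```shell
# Step 1: find your uid in /etc/passwd (third field). A line like
#   stephen:x:1000:1000:Stephen:/home/stephen:/bin/bash
# means uid 1000.
grep '^stephen:' /etc/passwd

# Step 2: find a shared gid in /etc/group. A line like
#   users:x:100:stephen,pi
# means gid 100, with member accounts listed after the last colon.
grep '^users:' /etc/group

# Step 3: add a line to /etc/exports squashing all access from the
# client at 192.168.1.10 onto uid 1000 / gid 100:
#   /mnt/share 192.168.1.10(rw,all_squash,anonuid=1000,anongid=100)

# Step 4: tell the NFS server to re-read /etc/exports:
sudo exportfs -a

# Step 5: on the Mac, Finder > Go > Connect to Server (Command-K):
#   nfs://192.168.1.20/mnt/share
```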
If all went as planned, you should see the directory shared from your NAS as a regular drive on the Mac. All reads and writes will be done as the user and group you specified. Try copying a file over and see what it looks like on the other side.
If you want to get crafty, you can create another share for a different client IP address specifying their NAS user id instead of yours. For example, you could map your kids’ MacBook Air to their account and group “kids” and map your MacBook Pro to your account and group “adults”. But this only works on a per-client-IP basis, and all access from each client is still squashed onto just one account.
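In /etc/exports terms, that per-client mapping might look like this (IP addresses, path, and id numbers invented for illustration):

```ini
# Kids' MacBook Air squashed onto uid 1001, group "kids" (gid 101);
# your MacBook Pro squashed onto uid 1000, group "adults" (gid 100).
/mnt/share  192.168.1.30(rw,all_squash,anonuid=1001,anongid=101)
/mnt/share  192.168.1.31(rw,all_squash,anonuid=1000,anongid=100)
```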
This approach works, but it’s really insecure. Anyone with that IP address could mount the NFS share and read and write files as whatever user and group you specified, with no other controls. And there’s no way to have one user’s reads and writes go to one account and another’s go to a different account unless they use different machines.
It works, it’s faster than CIFS, and it’s easier than configuring Samba, but it’s not a great approach. Maybe you should just go buy a home NAS instead.
Or, pay the $20 for OS X Server (if you’re an Apple shop), and use its LDAP and Mobile Accounts to get the shared UID/GID numbers.
However, I have to admit that I continue to have problems with OS X Server when I want to transition an older installation with existing UID/GID values.
Interesting post, Stephen. The cat photo is funny. However, what it doesn’t say speaks volumes (pun intended) – the over-simplification of the term “NFS”. What you are speaking of is NFSv3, which was written to authenticate machines, not users. If you want strong user authentication/access (and strong ACL support) you should use NFSv4. Or, as the poster below indicates, use NIS or LDAP and coordinate your user identities between machines.
The blog seems to take the tone that “NFS is bad and insecure.” Well, you used NFSv3 without centralized identity control; be careful what you ask for. NFSv3 did exactly what it was designed to do in your experiment. It was designed to let a user logged into machine A with local credentials access a filesystem on machine B reachable over a network. Period. NFSv3 is a layer 5 protocol; it makes no assumptions about the user’s credentials on machine B.
Lastly, and this is just a nit, you continue to conflate CIFS and SMB. CIFS is not a protocol, it’s a specification. The CIFS spec described an implementation of SMB, SMB version 1 to be precise. Now, while it’s not a cardinal sin to say “CIFS protocol” or “it’s faster than CIFS” instead of “SMB 1”, the conflation between the two just isn’t helpful.
Sure, the old Iomega runs crusty old SMB 1. Saying “it’s too slow” is like saying “I have an old car with a tiny engine, and it only goes 40 MPH, it’s too slow.” Well, of course it’s “too slow” relative to what we expect these days – it’s an old piece of kit. And yes, NFSv3 is “faster” – but remember, it’s a stateless protocol whereas SMB is stateful. In most instances, stateless protocols are indeed “faster” than stateful protocols, by definition.
Anyway, having said all that, I enjoyed reading the blog and pictured you going through the gyrations you describe. That’s life in the home lab 🙂
For those of you who completely misunderstood what I was writing and have no context regarding my long-standing support of NFS as a datacenter protocol, let me clarify: I was talking about using NFSv3 in a home environment. It’s icky there.
Search this very blog and you will see that I have a different assessment of all of these protocols in a datacenter setting.
What performance did you get with NFS ?
Joseph Bloe says
Interesting, but too many opinions stated (like NFS being very old – implies it was never updated beyond V3; OSes and NOT filesystems dictate access – to do it any other way is illogical and a huge security issue) and simple falsehoods (most home networks simply do not use NIS… no one has, for years, unless they just don’t know any better). Too bad AFP is bloated much in the same vein as SMB/CIFS and it’s faster when tuned properly 🙂
Joseph Bloe says
and you still think you need the insecure switch with NFS… :/
Jared Brees says
Thank you for this article! 1. It was helpful to explain how NFS works to a co-worker that’s been asking. 2. It helped me resolve some issues I was having with my home NFS setup.
I think this is the best entry-level NFS article I’ve seen. Will definitely be sharing with more as needed.
Michael Hoffmann says
Hi Stephen – have you ever tried the other way: mounting a Mac NFS share from Linux? (all command line)
On the Mac OS side it’s easy to share (sudo nfsd enable; vi /etc/exports), but how do you mount it? I always get a “wrong fs type, bad option, bad superblock” error when I try to mount it on Linux.
Both Apple and Microsoft should just give up on that bloated mess, <- Samba*/CIFS/AFP/… .
NFS is King, and yes, I’ve worked in many datacenters myself.
Or, as some else already said here: have you heard of "NFSv4."?
btw, NFS(Client) has finally arrived in latest Win10 Pro, so it also looks like "NFS for Home" is here to stay.
negative opinions about NFS and/or TCP/IP are old and boring now.