NFS vs. CIFS questions
Posted: Tue Sep 30, 2008 11:31 am
I recently bought a TS-109 Pro II to act as a fileserver for our home network. I chose the Pro to get NFS out of the box, thinking that would be the best option for using it with Linux. (I know enough about Linux that I figured I could probably install NFS on a non-Pro box, but I didn't think the difference in price was worth the time to figure it out, and I was hoping for a plug-and-play solution to save me the time of setting up my own server.)
Unfortunately, I didn't fully understand how NFS worked until I started playing with the box. I was expecting something more like scp or AppleShare, where local and remote users are completely independent. I think I have a better handle on things now, but I would like to pick the brains of those with more experience to see if I'm on the right track. I apologize for the length, but if you can trudge through and throw in your two cents, I'd appreciate it.
It seems that the main problem (for me) with using NFS is that user and group IDs must match across all the systems and the NAS for things to work as expected (or at all, really, in my experience). If I understand it correctly now, NFS really seems intended for the case where a single sysadmin controls all the computers and servers and can set up the IDs as required. Yes, I could do that too, and yes, it's not that much work, but I don't think I can prevent any of the users in the house (assume some of them are a bit Linux-savvy) from creating an account with the same UID as mine and reading all my files. And it might even be someone from outside, if they crack into my wireless network.
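(A quick way to see the numbers-not-names thing in action: NFS stores only the numeric UID on the server, and each client translates that number to a name using its own passwd database, so the same file can "belong" to different people on different machines. You can check what your box maps a given UID to like this; I'm using UID 0 only because it resolves to root everywhere:)

```shell
# NFS identifies owners by number; each machine resolves the number
# to a name locally. Ask this box which account a given UID maps to:
getent passwd 0 | cut -d: -f1    # prints "root" on any normal system
```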
I know there are further security measures I could use, like allowing access to shares only from certain IP addresses, but that's just more work (giving static addresses to all the computers), and it still doesn't prevent someone from using my IP while my computer is off. To me, NFS just doesn't seem like a good fit for a network where a bunch of single-user computers, whose owners are also root on their own machines, come and go. Is that a fair assessment?
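For the record, the IP-based restriction I mean would look something like this in /etc/exports on the server (the path and addresses here are made up, and the QNAP generates its exports file from the web UI anyway, so this is only to show the idea):

```
# Export /share/myvol read-write to two specific clients only.
# root_squash (the default) maps remote root to nobody, but it does
# nothing about a non-root user with a matching UID.
/share/myvol  192.168.1.10(rw,sync,root_squash)  192.168.1.11(rw,sync,root_squash)
```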
So I turned to CIFS/Samba next. At first, this looked pretty messy too: when I mounted the share locally, files I copied to it would get my local UID (say 1000 -- this is a standard Ubuntu box), but directories would get my remote UID (500), and then I couldn't write to them! And I just didn't like seeing the UID mismatches. I looked into this some more, and some suggestions made it look like turning off Unix Extensions ("unix extensions = no" in the smb.conf file on the QNAP) might be the answer. But no, don't do that! All the IDs come up as 0 and you can't set file permissions properly. It was ugly!
However, it seems that setting the uid and gid mount options does just what I wanted, despite the documentation saying they are ignored when Unix Extensions are enabled on the server. Locally, all the files on the mounted volume appear to be owned by my local user, but on the QNAP's drive they get the UID/GID of the remote user. This is my mount command:
Code: Select all
# /mnt/qnap is just where I mount it; any empty directory will do
sudo mount -t cifs //qnap/myvol /mnt/qnap -o credentials=~/.qnap-credentials,uid=1000,gid=1000

This works!
Now there may still be something lurking to catch me up, as I haven't played with this setup much yet. I would like to make this a little better integrated with the system, for starters. But I thought I'd try to get a little feedback before I find myself too far down the wrong path. Is what I'm doing making sense? And should it have been this hard?
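For the "better integrated" part, my next step will probably be an /etc/fstab entry so the share mounts at boot. Something like this, I think (the /mnt/qnap mount point and the /home/me home directory are just my setup; note that the credentials path has to be absolute, since fstab doesn't do tilde expansion):

```
# /etc/fstab entry -- mount point and paths are from my setup
//qnap/myvol  /mnt/qnap  cifs  credentials=/home/me/.qnap-credentials,uid=1000,gid=1000  0  0
```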