Note (2016-07-03): This article still might be interesting if you want to learn about autofs on Linux, but if you want to share files with a local VM I recommend reading this updated article instead.
There is this really cool feature on Unix systems called autofs. It is really crazy, really magical, and really convenient. In a previous life I had to maintain what is likely the world's most complicated hierarchical autofs setup (distributed in a hierarchical manner across thousands of NetApp filers), so I learned a lot about autofs and how it works. In this post though, I'm just going to explain what it is, why you'd want to use it, and how to set it up.
I'm explicitly going to explain the workflow for using autofs with a local VM, not how to use it in production.
Let's say you're in the habit of doing dev work on a VM. A lot of people do this because they are on OS X or Windows and need to do dev work on Linux. I do it because I'm on Fedora and need to do dev work on Debian, so I have a VM that runs Debian. You can set up autofs on OS X too. However, I don't use OS X so I can't explain how to set it up. I believe it works exactly the same way, and I think it's even installed by default, so I think you can follow this guide on OS X and it might be even easier. But no guarantees.
Now, there are a lot of ways to set it up so that you can access files on your VM. A lot of people use sshfs because it's super easy to set up. There's nothing wrong per se with sshfs, but it works via FUSE and therefore is kind of weird and slow. The slowness is what really bothers me personally.
There's this amazing thing called NFS that has been available on Unix systems since the 1980s and is specifically designed to be an efficient POSIX network filesystem. It has a ton of features, it gives you fine-grained control over your NFS mounts, and the Linux kernel has native support for being both an NFS client and an NFS server. That's right, there's an NFS server in the Linux kernel (there's also a userspace one). So if you're trying to access remote files on a Linux system for regular work, I highly recommend using NFS instead of sshfs. It has way higher performance and has been specifically designed from day one for Unix systems remotely accessing filesystems on other Unix systems.
Setting up the NFS Server on Debian
There's one package you need to install, nfs-kernel-server. To install it:
sudo apt-get install nfs-kernel-server
Now you're going to want to set this up to actually run and be enabled by default. On a Debian Jessie installation you'll do:
sudo systemctl enable nfs-kernel-server
sudo systemctl start nfs-kernel-server
If you're using an older version of Debian (or Ubuntu) you'll have to futz with the various legacy systems for managing startup services, e.g. by using Upstart or update-rc.d.
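On a sysvinit-style setup, that might look something like this (a rough sketch; the exact mechanism depends on your init system):
sudo update-rc.d nfs-kernel-server defaults
sudo service nfs-kernel-server start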
Great, now you have an NFS server running. You need to set it up so that other people can actually access things using it. You do this by modifying the file /etc/exports. I have a single line in mine:
/home/evan 192.168.124.0/24(rw,sync,no_subtree_check)
This makes it possible for any remote system on the 192.168.124.0/24 subnet to mount /home/evan with read/write permissions on my server without any authentication. Normally this would be incredibly insecure, but the default way that Linux virtualization works with libvirt is that 192.168.124.0/24 is reserved for local virtual machines, so I'm OK with it. In my case I know that only localhost can access this machine, so it's only insecure insofar as if I give someone else my laptop they can access my VM.
Please check your network settings to verify that remote hosts on your network can't mount your NFS export; if they can, you've exposed your NFS mount to your local network. Your firewall may already block this by default, and if not you can just tighten the host mask. There are also ways to set up NFS authentication, but you shouldn't need that just to use a VM, and the topic is outside the scope of this blog post.
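For example, if you want to be extra careful, you can restrict the export to just your VM's IP instead of the whole subnet (using my VM's address here; substitute your own):
/home/evan 192.168.124.252(rw,sync,no_subtree_check)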
Now reload nfs-kernel-server so it knows about the new export:
sudo systemctl restart nfs-kernel-server
Update: It was pointed out to me that if you're accessing the VM from OS X you have to use high ports for automounted NFS, meaning that in the /etc/exports file on the guest VM you'll need to add "insecure" as an export option.
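In that case the export line would look something like:
/home/evan 192.168.124.0/24(rw,sync,no_subtree_check,insecure)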
Setting up the NFS Client on Debian/Fedora
On the client you'll need the autofs package. On Fedora that will be:
sudo dnf install autofs
and on Debian/Ubuntu it will be:
sudo apt-get install autofs
I think it's auto-installed on OS X?
Again, I'm going to assume you're on a recent Linux distro (Fedora, or a modern Debian/Ubuntu) that has systemd, and then you'll do:
sudo systemctl enable autofs
sudo systemctl start autofs
Before we proceed, I recommend that you record the IP of your VM and put it in /etc/hosts. There's some way to set up static networking with libvirt hosts, but I haven't bothered to figure it out yet, so I just logged into the machine and recorded its IP (which is the same every time it boots). On my system, in my /etc/hosts file I added a line like:
192.168.124.252 d8
So now I can use the hostname d8 as an alias for 192.168.124.252. You can verify this using ssh or ping.
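For example, either of these should reach the VM by its new alias:
ping -c 1 d8
ssh d8 hostname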
Due to some magic that I don't understand, on modern NFS systems there's a way to query a remote server and ask what NFS exports it knows about. You can query these servers using the showmount command. So in my case I now see:
$ showmount -e d8
Export list for d8:
/home/evan 192.168.124.0/24
This confirms that the host behind my d8 alias is actually exporting NFS mounts in the way I expect. If you don't have a showmount command you may need to install a package called nfs-utils to get it.
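On Fedora that's:
sudo dnf install nfs-utils
(On Debian/Ubuntu the equivalent package is nfs-common.)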
Actually Using Autofs
Now here's where it gets magical.
When you do any filesystem operation (e.g. change directory, open a file, list a directory, etc.) that would normally fail with ENOENT, the kernel checks if that file should have been available via autofs. If so, it mounts the filesystem using autofs and then proceeds with the operation as usual.
This is really crazy if you think about it. Literally every filesystem-related system call has logic in it that understands this autofs thing and can transparently mount remote media (which doesn't just have to be NFS; this used to be how Linux distros auto-mounted CD-ROMs) and then proceed with the operation. And this is all invisible to the user.
There's a ton of ways to configure autofs, but here's the easiest way. Your /etc/auto.master file will likely contain a line like this already (if not, add it):
/net -hosts
This means that there is a magical autofs directory called /net, which isn't a real directory. And if you go to /net/<host-or-ip>/mnt/point then it will automatically NFS mount /mnt/point from <host-or-ip> on your behalf.
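For example, with the d8 alias from earlier, just listing a path under /net is enough to trigger the mount, and afterwards the NFS mount should show up in the output of mount:
ls /net/d8/home/evan
mount | grep /net/d8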
If you want to be really fancy you can set /etc/auto.master to use a timeout, e.g.:
/net -hosts --timeout=60
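After editing /etc/auto.master, restart autofs so it picks up the change:
sudo systemctl restart autofs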
So here's what I do. I added a symlink like this:
ln -s /net/d8/home/evan ~/d8
So now I have an "empty" directory called ~/d8. When I start up my computer, I also start up my local VM. Once the VM boots up, if I enter the ~/d8 directory or access data in it, it's automatically mounted! And if I don't use the directory, or I'm not running the VM, then it's just a broken symlink.
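You can watch this happen with something like the following, which should report the VM's exported filesystem (d8:/home/evan in my case) once the automount kicks in:
cd ~/d8
df -h .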
This solves a major problem with the conventional way you would use NFS with a VM. Normally what you would do is you'd have a line in /etc/fstab that has the details of the NFS mount. However, if you set it up to be automatically mounted, you have a problem where your machine will try to mount the VM before the VM is finished booting. You can use the user option in your /etc/fstab options line, which lets you subsequently mount the VM NFS server without being root, but then you have to manually invoke the mount command once you know the VM is started.
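For reference, the kind of /etc/fstab line I'm describing would look something like this (hypothetical mount point, using my d8 alias):
d8:/home/evan /mnt/d8 nfs noauto,user 0 0
The noauto and user options avoid the boot-time race, but you still have to run mount /mnt/d8 yourself once the VM is up.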
By using autofs, you don't ever need to type the mount command, and you don't even need to change /etc/fstab. I recommend editing /etc/hosts because it's convenient, but you don't need to do that either. I could have just as easily used /net/192.168.124.252/home/evan and not created a hosts entry.