I had the following idea which I posted to the openvpn-devel mailing list.
I had an idea I wanted to run by people and see if it's feasible… here goes.
I've been hearing a lot about “virtualized” networking for VMs and that got me thinking. It seems like OpenVPN would be a good tool that could join a group of VMs into their own private LAN, basically segregating them from the internet even though they're just machines hosted by amazon, rackspace, or in my own server room. This could all be done now by setting all the VMs up with the openvpn client and getting them to connect, etc. The downside is that this is a lot of configuration, and the machines would still be exposed to the larger network.
The idea I had, and wanted to run by, was if it would be possible to integrate an openvpn client into the hypervisor's virtual network card. This would make it so that from the moment the VM boots up, it is only connected to the private LAN served by the OpenVPN server. The VM would see just another NIC, but instead of routing the data directly to the Hypervisor's NIC (tap) or NATing it or whatever, it would go to an OpenVPN client library (that wouldn't need a tun/tap device on the hypervisor) which sends the data to the server over the udp connection.
Is this something that would be technically feasible? Practically feasible? I've only used the binaries before, is the client in a state (is there a libopenvpn) where it could be plugged into another program like QEMU/KVM?
Thanks for any input, Tom
You can see the follow-up thread here.
Based on this thread, I decided to put together a system using what is currently available in OpenVPN, by manually creating the interfaces on the various hypervisors. To be clear, this is NOT running OpenVPN inside a virtual machine. This would be trivial to accomplish, but would also require every virtual machine to be OpenVPN aware, and configured as such. Instead, this method keeps the VMs oblivious to the fact that they are using OpenVPN. To them it is just a normal ethernet network, just like the host-local segregated networks already provided with all the different virtualization solutions, except this one spans across multiple hypervisors.
To be clear, this isn't a setup that is good for all users. All the network traffic goes through the single OpenVPN server machine. Because of this, it will not scale well, and throughput and latency will suffer. However, if network throughput and latency aren't important to you, this is a fine way to securely segregate your machines from the rest of the network.
Here's a picture of what I have:
I'm running with four hypervisor machines connected to a physical ethernet (and wireless) network, using IPs in the 192.168.1.* range. One of the hypervisors will contain a special virtual machine, running the openvpn server. This VM will have access to both the 192.168.1.* network and our virtual (segregated) 10.35.50.* network which will be private to all the virtual machines.
The VMs are all based on the same image. It was created from an Ubuntu 12.04 i386 alternate CD using the option to create a bare-bones command line instance. I then added openssh-server, vim and byobu (a fancier version of screen) to it, for convenience.
The hypervisors are all running Ubuntu 11.10. They run KVM/QEMU for the virtualization software. I control the virtualization with virt-manager which works through libvirt.
The server machine was named openvpn-server (creative, I know) and on it, I installed openvpn and dnsmasq.
sudo apt-get install openvpn dnsmasq
To get openvpn running, I first had to generate keys.
cp -a /usr/share/doc/openvpn/examples/easy-rsa/2.0 ~/easy-rsa
Now edit the “vars” file so that the “export KEY_CONFIG=” line reads something like this: “export KEY_CONFIG=/home/user/easy-rsa/openssl-1.0.0.cnf”. You can also edit the KEY_COUNTRY…KEY_EMAIL variable defaults at the bottom of the file. Then on to creating the keys.
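The generation commands themselves aren't shown in this copy; with easy-rsa 2.0 the sequence would look roughly like the following. The “openvpn-server” and “hypervisorX” names are just the ones used later in this setup:

```shell
cd ~/easy-rsa
source ./vars          # load the KEY_* variables edited above
./clean-all            # wipe the keys/ directory (first run only!)
./build-ca             # create the CA certificate and key
./build-key-server openvpn-server   # server cert/key
./build-dh             # Diffie-Hellman parameters (dh1024.pem)
./build-key hypervisor1             # one client cert/key per hypervisor
./build-key hypervisor2
```

Everything ends up under ~/easy-rsa/keys/, which is where the cp commands below pull from.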
Once you have the keys, we need to install everything in the /etc/openvpn directory. We'll need the server's key/certificate as well as the CA's certificate, DH params, and the server.conf file.
sudo cp server.conf /etc/openvpn/
sudo cp easy-rsa/openvpn-server.crt /etc/openvpn/
sudo cp easy-rsa/openvpn-server.key /etc/openvpn/
sudo cp easy-rsa/dh1024.pem /etc/openvpn/
sudo cp easy-rsa/ca.crt /etc/openvpn/
sudo service openvpn restart
Here are the important lines of server.conf (the ones that make it different from the default).
server 10.35.50.0 255.255.255.0
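Only the server line survives in this copy. For a bridged setup like this one, where the hypervisors attach VMs to a tap interface, the non-default lines would plausibly look something like the following sketch — the dev tap0 and client-to-client lines are my assumptions, needed so the server operates on ethernet frames and relays traffic between the connected hypervisors:

```
dev tap0                          # ethernet (bridged) mode, not routed tun
server 10.35.50.0 255.255.255.0   # hand out 10.35.50.* addresses
client-to-client                  # relay traffic between connected clients
ca ca.crt
cert openvpn-server.crt
key openvpn-server.key
dh dh1024.pem
```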
You should now be able to connect to the server as if it were a normal OpenVPN server, just to test it out.
OpenVPN hands out IPs to each machine that connects to it, but not to any other machines on the network. This means that the hypervisors will automatically get 10.35.50.2, .3, and .4; however, the VMs won't be given anything. To solve this we're going to use dnsmasq, which also happens to be a DHCP server; a nice bonus is that we get DNS too. You could skip this section if you wanted to use static addressing and/or hosts files on each machine (not a terrible idea).
Edit /etc/dnsmasq.conf and change the following:
# If you don't want dnsmasq to read /etc/resolv.conf or any other
# file, getting its servers from this file instead (see below), then
# uncomment this.
no-resolv ### We only resolve names on the segregated network; no need to forward queries for anything else on the internet.
# If you want dnsmasq to listen for DHCP and DNS requests only on
# specified interfaces (and the loopback) give the name of the
# interface (eg eth0) here.
# Repeat the line for more than one interface.
interface=tap0 ### This keeps it on the segregated network...doesn't talk to the physical ethernet.
# Uncomment this to enable the integrated DHCP server, you need
# to supply the range of addresses available for lease and optionally
# a lease time. If you have more than one network, you will need to
# repeat this for each network on which you want to supply DHCP
dhcp-range=10.35.50.50,10.35.50.99,12h # We're not actually going to use this range, but it's nice to have in case something unexpected shows up on the network (it will still be able to talk)
# Give the machine which says its name is "bert" IP address
# 192.168.0.70 and an infinite lease
### This is the money line. It sets up dnsmasq so that if a machine sends a DHCP request and has one of these hostnames, it will get the correct address.
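The dhcp-host lines themselves were lost from this copy; based on the VM names and addresses that appear later (openvpn-vm2, openvpn-vm3 at 10.35.50.12/.13), they would have been of this form — treat the exact names and IPs as illustrative:

```
dhcp-host=openvpn-vm1,10.35.50.11,infinite
dhcp-host=openvpn-vm2,10.35.50.12,infinite
dhcp-host=openvpn-vm3,10.35.50.13,infinite
```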
You would think that since dnsmasq knows the hostnames and IPs of the VMs, it would be able to turn those into DNS entries. However, when I tried that, it didn't work. To fix it, I just added the names and IPs to the /etc/hosts file.
If you don't want to pollute your existing hosts file with this segregated network, you can play with the following entries in /etc/dnsmasq.conf
# If you don't want dnsmasq to read /etc/hosts, uncomment the
# following line.
# or if you want it to read another file, as well as /etc/hosts, use
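Those two commented options are no-hosts and addn-hosts; a sketch of how they could be used (the extra file's path and entries are my choices, not from the original post):

```
# in /etc/dnsmasq.conf
no-hosts                        # ignore /etc/hosts entirely, or...
addn-hosts=/etc/hosts.openvpn   # ...read an extra hosts file as well

# /etc/hosts.openvpn (illustrative entries)
10.35.50.12 openvpn-vm2
10.35.50.13 openvpn-vm3
```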
Finally, get it all running:
sudo service dnsmasq restart
The VMs are just the base image that I'm using everywhere. The only change you need to make is the machine name, which needs to match what dnsmasq expects. On Ubuntu, changing the hostname means editing two files: /etc/hostname (just change the name) and /etc/hosts (change the 127.0.1.1 line to the new hostname).
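In command form that would be roughly the following (openvpn-vm2 is just an example name):

```shell
# Write the new name to /etc/hostname
echo openvpn-vm2 | sudo tee /etc/hostname
# Point the 127.0.1.1 line at the new name
sudo sed -i 's/^127\.0\.1\.1.*/127.0.1.1\topenvpn-vm2/' /etc/hosts
# Apply it without rebooting
sudo hostname openvpn-vm2
```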
This is the most important part of the whole setup; unfortunately, I also did it in the dirtiest manner possible. Read through to the end for tips on making it better.
I first copied the client.conf, hypervisorX.crt/.key, and ca.crt files onto each hypervisor, then renamed hypervisorX.crt to client.crt and hypervisorX.key to client.key. The other option would be to edit client.conf so the cert/key lines point at hypervisorX instead of client.
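The client.conf itself isn't shown in this copy; for this bridged setup it would look roughly like the stock client example with tap instead of tun — the server address here is a placeholder, and the whole fragment is my sketch:

```
client
dev tap                  # must match the server's tap mode
proto udp
remote 192.168.1.X 1194  # address of the openvpn-server VM (placeholder)
nobind
ca ca.crt
cert client.crt
key client.key
```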
Now (in byobu, to keep it running if I should disconnect from the hypervisor) I ran
sudo openvpn --config client.conf
I watched to make sure it connected fine, then moved to a new byobu terminal.
Here I needed to make a bridge to the tap0 interface that openvpn created when it started up. Instead of just br0, I'm calling it ovpnbr0, to distinguish it from a bridge to eth0 (which this isn't) and from the VM bridges that already existed (virbr0 and virbr1).
sudo brctl addbr ovpnbr0
sudo brctl addif ovpnbr0 tap0
sudo ip link set ovpnbr0 up
At this point everything should be working on the hypervisor. The only thing to note is that if you shut down the openvpn client and restart it, you'll need to re-attach the tap0 interface to ovpnbr0.
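That reattachment step is easy to forget, so it could be wrapped in a small helper script along these lines — the script is my sketch, not something from the original post, though the paths and bridge name match this setup:

```shell
#!/bin/sh
# Start the OpenVPN client and re-bridge its tap0 interface.
set -e
sudo openvpn --config client.conf --daemon   # run the client in the background
until ip link show tap0 >/dev/null 2>&1; do  # wait for tap0 to appear
    sleep 1
done
sudo brctl addif ovpnbr0 tap0                # re-attach to the bridge
sudo ip link set ovpnbr0 up
```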
Now you just need to attach the VMs and start up. In virt-manager the network settings for each VM have the option to bridge to a “Shared device name”. In this box put ovpnbr0.
This was the easiest part. I went and started all the VMs, logged in, and everything was working. I was able to ping/ssh/etc. by IP address or hostname.

user@openvpn-vm2$ ping openvpn-vm3
64 bytes from openvpn-vm3 (10.35.50.13): icmp_req=1 ttl=64 time=0.431 ms

Making it better
This method of creating a segregated network worked great for the low-throughput use case. The one thing that would make it a much simpler task would be if the virtualization software could take over the steps that currently have to be done manually on the hypervisor. I don't imagine it would be too difficult for libvirt to offer a new type of segregated network (alongside those it already supports) that works with openvpn. You would pass in the server name (some basic conf stuff) and the certs/key, and it would take care of starting the client, creating the bridges, etc. If anyone with libvirt experience has any hints on where to get started with this, let me know.
The other obvious item that would be nice in the future is to fix this setup so that traffic goes directly between the correct hypervisors instead of through the central server. To do this, the server would have to become an arbiter. It would keep track of which VMs (IPs/MACs) started up on which hypervisors. Then when one VM sends data to another VM across the segregated network, the hypervisor would query the arbiter and get back an IP address and a symmetric key for the hypervisor hosting the desired VM. All of this would happen in tens of milliseconds (the first time only), and the traffic would be on its way to the other VM. Live migration would also need to be supported: when a VM moves from one hypervisor to another, it would notify the arbiter, as well as all the other hypervisors holding an open connection to it; they would then update their routes to point at the new hypervisor, and the VMs would go on communicating without interruption. It doesn't seem overly complicated to create something like this, but the devil is always in the details.
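To make the arbiter idea concrete, here is a minimal in-memory sketch in Python. The class, method names, and data shapes are all my invention, purely to illustrate the register/lookup/migrate bookkeeping described above; key distribution, transport, and authentication are all omitted:

```python
import os

class Arbiter:
    """Tracks which hypervisor currently hosts each VM address."""

    def __init__(self):
        self._placement = {}  # VM IP -> hypervisor endpoint
        self._keys = {}       # VM IP -> symmetric key for tunneling to it

    def register(self, vm_ip, hypervisor):
        """A hypervisor reports that a VM has started on it."""
        self._placement[vm_ip] = hypervisor
        self._keys[vm_ip] = os.urandom(16)  # toy key generation

    def lookup(self, vm_ip):
        """Return (hypervisor, key) so a peer can tunnel directly."""
        return self._placement[vm_ip], self._keys[vm_ip]

    def migrate(self, vm_ip, new_hypervisor):
        """Live migration: repoint the VM at its new hypervisor."""
        self._placement[vm_ip] = new_hypervisor
```

The first lookup is the only round trip to the arbiter; after that, the querying hypervisor can cache the result until a migration notification invalidates it.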