
I’ve been working a lot with CloudStack Advanced networking and find it very flexible. Recently, I had another opportunity to test its flexibility when a customer called: “We want VMs in your CloudStack cloud, but these VMs are only allowed to be reachable from our office, not from the public internet”. Firewalling? No, they required us to use their VPN solution.

Is CloudStack flexible enough for this to work? Yes, it is. In this blog I’ll tell you how we did it. And it doesn’t even matter what VPN device you use: this will work regardless of brand and features, as long as the device supports a public IP address to connect over the internet to another VPN device, and has a private network behind it. All VPN devices I know of support these basic features.

VPN (Virtual Private Networking)
The client’s office is connected to the internet and has a VPN device. We received another device to host in our data center, and the two talk to each other over the public internet in a secure way, probably speaking IPsec or something similar, but that is beyond the scope of this blog.

The VPN device in the data center has a public IP address on its WAN port, but also has some ports for the internal network. We configured it to use the same network CIDR as the CloudStack network we created for this customer; let’s use 10.10.16.0/24 as an example in this blog. And now the problem: this cloud network is a tagged VLAN network, and the VPN device we received is not VLAN-capable.

VLANs in the Cloud
CloudStack Advanced networking relies on VLANs. Every VLAN has its own unique ID, and switches use this VLAN ID to keep the VLAN networks apart and make sure they’re isolated from each other. Most managed switches support VLANs as well, and that’s where we’ll find the solution to this problem.

Configuring the switch

We connected the VPN device to our switch and set its port to UNTAGGED for the VLAN ID the CloudStack network uses. In other words, devices connected to this port do not need to know about the VLAN; the switch adds and strips the tag as traffic flows. This means the VPN device can simply use an IP address in the 10.10.16.0/24 range and is able to communicate with the VMs in the same network. The CloudStack compute nodes have their switch ports set to TAGGED, and the switch makes communication between them possible.
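
To make this more concrete: if the switch were a Linux box with a VLAN-aware bridge, the equivalent configuration would look roughly like the sketch below. The port names swp1 and swp2 and the VLAN ID 1234 are assumptions for illustration only; the syntax on a real hardware switch differs per vendor.

# create a VLAN-aware bridge; swp1 goes to the VPN device, swp2 to a compute node
ip link add br0 type bridge vlan_filtering 1
ip link set swp1 master br0
ip link set swp2 master br0
ip link set br0 up
# UNTAGGED port for the VPN device: untagged traffic is mapped to VLAN 1234
bridge vlan add dev swp1 vid 1234 pvid untagged
# TAGGED port for the compute node: frames carry the VLAN 1234 tag on the wire
bridge vlan add dev swp2 vid 1234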

Overview of IP addresses:

  • 10.10.16.1 – the internal IP address of the VPN device in the data center
  • 10.10.16.11 – the first VM’s IP address
  • 10.10.16.12 – the second VM’s IP address

The VMs have their default gateway set to the VPN device’s 10.10.16.1 address. The office also needs to be configured so it knows the 10.10.16.0/24 network is handled by the VPN device located there. Users in the office will then be able to access the VMs on the 10.10.16.0/24 network.
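
For example, on a Linux VM the default gateway would be set like this. The office-side command is only a sketch: how you add that route depends on your office router, and the 192.168.0.254 address for the office VPN device is an assumption.

# on each VM: send all traffic via the VPN device in the data center
route add default gw 10.10.16.1
# on the office router (sketch): reach 10.10.16.0/24 via the local VPN device
route add -net 10.10.16.0 netmask 255.255.255.0 gw 192.168.0.254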

Conclusion
While the VMs are hosted in our CloudStack cloud on the internet, they do not have public IP addresses and thus are not reachable from the public internet. The only public IP address for this customer is the one configured on the VPN device in the data center. This provides the same level of security as you’d have with physical servers, but adds the power of a cloud solution.

Thanks to the flexibility of CloudStack Advanced Networking, this could be done!

Back in March I wrote a blog on how to create a network without a Virtual Router. I received a lot of questions about it, and it’s also a question that pops up now and then on the CloudStack forums. In the meantime I’ve worked hard to implement this setup at work. In this blog I’ll describe the concept of working with a CloudStack setup that has no Virtual Router.

First some background. In Advanced Networking, VLANs are used for isolation. This way, multiple separated networks can exist over the same wire. You can read more about VLAN technology in general on Wikipedia. For VLANs to work, you need to configure your switch so it knows about the VLANs you use. Each VLAN has a unique ID between 1 and 4094. CloudStack configures all of this automatically, except for the switch. Communication between Virtual Machines in the same CloudStack network (aka VLAN) is done using the corresponding VLAN ID. This all works out of the box.

It took me some time to realize how powerful this actually is. You can now combine VMs and physical servers in the same network, by using the same VLAN for both. Think about it for a moment: you’re now able to replace the Virtual Router with a Linux router, simply by having it join the same VLAN(s) and using the Linux routing tools.

Time for an example. Say we have a CloudStack network using VLAN ID 1234, and this network was created without a Virtual Router (see the instructions in that earlier blog). Make sure you have at least two VMs deployed and that they’re able to talk to each other over this network. Don’t forget to configure your switch: if both VMs are on the same compute node, networking between the VMs works anyway, but you won’t be able to reach the Linux router later on if the switch doesn’t know the VLAN ID.
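
To verify the switch part, you can watch for tagged frames on a compute node while the VMs ping each other. A quick sketch; the interface name eth1 is an assumption:

# show ethernet headers, filtering on frames tagged with VLAN 1234
tcpdump -e -n -i eth1 vlan 1234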

Have a separate physical server running Linux available and connect it to the same physical network your compute nodes are connected to. Make sure the IPs used here are private addresses. In this example I use:

compute1: 10.0.0.1
compute2: 10.0.0.2
router1: 10.0.0.10
vm1: 10.1.1.1
vm2: 10.1.1.2

The Linux router needs two network interfaces: one to the public internet (eth0, for example) and one to the internal network where it connects to the compute nodes (say eth1). The eth1 interface on the router has IP address 10.0.0.10 and should be able to ping the compute node(s). When this works, create a VLAN interface on the router called eth1.1234 (where 1234 is the VLAN ID CloudStack uses) and configure it, like this:

vconfig add eth1 1234
ifconfig eth1.1234 10.1.1.10/24 up

Make sure you use the correct IP address range and netmask; they should match the ones CloudStack uses for the network. Also, note the ‘.’ between eth1 and the VLAN ID. Don’t confuse this with ‘:’, which just adds an alias IP.
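
On modern systems you can achieve the same with iproute2 instead of the vlan package’s vconfig; a roughly equivalent sketch:

ip link add link eth1 name eth1.1234 type vlan id 1234
ip addr add 10.1.1.10/24 dev eth1.1234
ip link set eth1.1234 up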

To check if the VLAN was added, run:

cat /proc/net/vlan/eth1.1234

It should return something like this:

eth1.1234 VID: 1234 REORDER_HDR: 1 dev->priv_flags: 1
 total frames received 14517733268
 total bytes received 8891809451162
 Broadcast/Multicast Rcvd 264737
 total frames transmitted 6922695522
 total bytes transmitted 1927515823138
 total headroom inc 0
 total encap on xmit 0
Device: eth1
INGRESS priority mappings: 0:0 1:0 2:0 3:0 4:0 5:0 6:0 7:0
 EGRESS priority mappings:

Tip: if this command does not work, make sure the VLAN software is installed. On Debian you’d simply run:

apt-get install vlan
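
On other distributions, loading the 802.1Q kernel module directly may be all that’s needed:

modprobe 8021q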

Another check:

ifconfig eth1.1234

It should return something like this:

eth1.1234 Link encap:Ethernet HWaddr 00:15:16:66:36:ee 
 inet addr:10.1.1.10 Bcast:0.0.0.0 Mask:255.255.255.0
 inet6 addr: fe80::215:17ff:fe69:b63e/64 Scope:Link
 UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
 RX packets:14518848183 errors:0 dropped:0 overruns:0 frame:0
 TX packets:6925460628 errors:0 dropped:15 overruns:0 carrier:0
 collisions:0 txqueuelen:0 
 RX bytes:8892566186128 (8.0 TiB) TX bytes:1927937684747 (1.7 TiB)

Now for the most interesting tests: ping vm1 and vm2 from the Linux router, and vice versa. It should work, because they are all using the same VLAN ID. Isn’t this cool? You just connected a physical server to a virtual one! 🙂
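
For example, from router1:

ping -c 3 10.1.1.1
ping -c 3 10.1.1.2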

You now have two options to go from here:

1. Use a load balancer (like Keepalived) and keep the IPs on the VLAN private, using Keepalived’s NAT routing. The configuration is exactly the same as if you had all physical servers or all virtual servers; a minimal sketch follows below.
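
Here is what that could look like in keepalived.conf, assuming a public virtual IP on the router and the two VMs serving HTTP behind NAT (the 203.0.113.10 address and port 80 are made up for illustration):

virtual_server 203.0.113.10 80 {
    lb_algo rr
    lb_kind NAT
    protocol TCP

    real_server 10.1.1.1 80 {
        TCP_CHECK {
            connect_timeout 3
        }
    }
    real_server 10.1.1.2 80 {
        TCP_CHECK {
            connect_timeout 3
        }
    }
}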

2. Directly route public IPs to the VMs. This is the most interesting one to explain a bit further. In the example above we used private IPs for the VLAN. Imagine you’d use public IP addresses instead. For example:

vm1: 8.254.123.1
vm2: 8.254.123.2
router1: 8.254.123.10 (eth1.1234; eth1 itself remains private)

This also works: vm1, vm2 and router1 are now able to ping each other. A few more things need to be done on the Linux router to allow it to route the traffic:

echo 1 > /proc/sys/net/ipv4/ip_forward
echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp

Finally, on vm1 and vm2, set the default gateway to router1: 8.254.123.10 in this example.
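
On each VM that would look something like this:

route add default gw 8.254.123.10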

How does this work? The Linux router now also answers ARP requests for the IPs in the VLAN (this is what proxy ARP does). Whenever traffic comes in for vm1, router1 answers the ARP request and routes the traffic over the VLAN to vm1. When you run a traceroute, you’ll see the Linux router appear as a hop as well. Of course, you need to have a subnet of routable public IPs assigned by your provider for this to work properly.

To me this setup has two major advantages:

1. No wasted resources on Virtual Routers (one for each network)
2. Public IPs can be assigned directly to VMs; you can even assign multiple if you like.

The drawbacks? Well, this is not officially supported nor documented. And since you are not using the Virtual Router, you’ll have to implement a lot of the services normally provided by the Virtual Router yourself. Also, deploying VMs in a network like this only works using the API. To me these are all challenges that make my job more interesting 😉
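
For reference, a hedged sketch of such an API call using CloudMonkey; all UUIDs are placeholders, and the key point is passing an explicit ipaddress, since there is no Virtual Router handing out DHCP leases:

cloudmonkey deploy virtualmachine zoneid=<zone-uuid> \
  templateid=<template-uuid> serviceofferingid=<offering-uuid> \
  networkids=<network-uuid> ipaddress=10.1.1.1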

I’ve implemented this in production at work, and we successfully run over 25 networks like this, with about 100-125 VMs. It was a lot of work to configure it all properly and to come up with a working solution. Now that it is live, I’m really happy with it!

I realize this is not a complete step-by-step howto, but I do hope this blog will serve as inspiration for others to come up with great solutions built on top of the awesome CloudStack software. Please let me know in the comments what you’ve come up with! Also, feel free to ask questions: I’ll do my best to give you some directions.

Enjoy!