
Recently I was looking for a way to SSH from a network that blocked my outgoing SSH connection. It would be nice to have a way around such firewalls and still be able to access your private Linux terminal. To be able to debug a problem from a remote location, for example.

A colleague suggested a tool called ‘Shell In A Box’. Shell In A Box implements a web server that can export arbitrary command line tools to a web based terminal emulator, using just JavaScript and CSS and without any additional browser plugins. This means you can point your browser via HTTPS at your own hosted Shell In A Box site and access a Linux terminal from there.

How cool is that? In this blog I’ll show you how to set it up in a secure way.

Building and installing Shell In A Box
I want to set up Shell In A Box on my Raspberry Pi. It’s a great device running Linux with a very small energy consumption footprint. Ideal for an always-on device, I’d say!

Since there is no package available, we have to compile our own. It’s best to get the sources from GitHub (original here), since the GitHub repository contains some patches and fixes for issues on Firefox.

These commands install the required dependencies, clone the Git repository and start building:

apt-get install git dpkg-dev debhelper autotools-dev libssl-dev libpam0g-dev zlib1g-dev libssl1.0.0 libpam0g openssl
git clone https://github.com/pythonanywhere/shellinabox_fork
cd shellinabox_fork
dpkg-buildpackage

During my first attempt, I ran into this problem:

dpkg-source -b shellinabox-2.14
dpkg-source: error: can't build with source format '3.0 (quilt)': no upstream tarball found at ../shellinabox_2.14.orig.tar.{bz2,gz,lzma,xz}
dpkg-buildpackage: error: dpkg-source -b shellinabox-2.14 gave error exit status 255

When grepping for ‘quilt’ I found a file called ‘./debian/source/format’. From what I can tell it does not do anything important here, so I ended up deleting the file. Guess what: the build now works.

rm ./debian/source/format

Build the package again; it should now succeed.

dpkg-buildpackage

This process will take some time (especially on the Raspberry Pi). Afterwards you’ll find the .deb file ready to be installed.

dpkg -i ../shellinabox_2.14-1_armhf.deb

I changed the configuration to disable the built-in SSL and to bind to localhost only. I did this because another web server will serve our terminal; I’ll explain why in a minute.

vim /etc/default/shellinabox

And edit this line:

SHELLINABOX_ARGS="--no-beep -s /terminal:LOGIN --disable-ssl --localhost-only"

Finally, restart the daemon:

/etc/init.d/shellinabox restart

And check if all went well:

/etc/init.d/shellinabox status

You should see:

Shell In A Box Daemon is running

Another way to verify is to check the open ports:

netstat -ntl

You should see:

Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State 
tcp 0 0 127.0.0.1:4200 0.0.0.0:* LISTEN


Setting up Lighttpd as a proxy

Shell In A Box runs on port 4200 by default. Although this can be changed to the more common 80 or even 443, that is not what I want. I decided to integrate it with another web server, to be able to combine other services and use just one URL (and one SSL certificate). Since the Raspberry Pi isn’t that powerful, I chose Lighttpd.

apt-get install lighttpd
cd /etc/lighttpd/conf-enabled
ln -s ../conf-available/10-proxy.conf

This installs Lighttpd and enables Proxy support. Now add the Proxy config:

vim /etc/lighttpd/lighttpd.conf

And add:

proxy.server = (
 "/terminal" =>
  ( (
    "host" => "127.0.0.1",
    "port" => 4200
  ) )
)

Save and restart Lighttpd:

/etc/init.d/lighttpd restart

Connect to http://pi.example.org/terminal and your Shell In A Box terminal should appear.

Although this is cool already, we’re not quite there. No one will SSH over an unencrypted web page, right? So, we’ll configure an SSL certificate to enable encryption. For double safety, we’ll also set a username/password on the web page itself. One then needs to know this password to reach the login prompt, and needs a valid local username/password to actually use the terminal.
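To see why the encryption part is essential: HTTP basic authentication, like the password prompt we’ll put in front of /terminal, only base64-encodes the credentials. A small Python sketch (the username and password are made up):

```python
import base64

# HTTP basic auth merely base64-encodes "user:password" into the
# Authorization header; without HTTPS anyone on the wire can decode it.
credentials = "remibergsma:s3cret"  # made-up example credentials
header = "Basic " + base64.b64encode(credentials.encode()).decode()

print(header)
# Decoding it back takes one line -- no secret key involved:
print(base64.b64decode(header.split(" ", 1)[1]).decode())
```

In other words, basic auth only keeps secrets if the transport is encrypted, which is exactly what the SSL setup below provides.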

Adding encryption with SSL
By using an HTTPS URL, our traffic is encrypted. Let’s generate a private key (and remove the passphrase):

openssl genrsa -des3 -out pi.example.org.key 2048
cp -pr pi.example.org.key pi.example.org.key.passwd
openssl rsa -in pi.example.org.key.passwd -out pi.example.org.key

If you do not remove the passphrase, you will need to type it every time you start the web server. To request an SSL certificate, you need to supply a CSR (Certificate Signing Request) to an SSL provider such as Thawte or VeriSign.

openssl req -new -key pi.example.org.key -out pi.example.org.csr

To be able to continue right away, let’s self-sign the certificate:

openssl x509 -in pi.example.org.csr -out pi.example.org.pem -req -signkey pi.example.org.key -days 365
cat pi.example.org.key >> pi.example.org.pem

A self-signed certificate will display a warning in our browser, but that’s ok for now. Once the real certificate comes back from our SSL provider, it’s easy to replace it. The warning will then disappear.

Time to tell Lighttpd about our certificate:

vim /etc/lighttpd/lighttpd.conf

Add these lines:

$SERVER["socket"] == "10.0.0.10:443" {
  ssl.engine = "enable"
  ssl.pemfile = "/etc/lighttpd/ssl/pi.example.org/pi.example.org.pem"
  server.name = "pi.example.org"
  server.document-root = "/home/lighttpd/pi.example.org/https"
  server.errorlog = "/var/log/lighttpd/pi.example.org_serror.log"
  accesslog.filename = "/var/log/lighttpd/pi.example.org_saccess.log"
}

And restart Lighttpd:

/etc/init.d/lighttpd restart

Now Shell In A Box should be available on: https://pi.example.org/terminal

Enhancing security by adding HTTP-auth
Since the /terminal page now makes an actual terminal available to web users, I added an extra password for security. You can use the ‘HTTP Auth’ method for this. It will pop up a dialog box that requires a valid username/password before the /terminal page is shown.

First enable the module:

cd /etc/lighttpd/conf-enabled
ln -s ../conf-available/05-auth.conf

Then extend the virtual host config you created above. The final result should be:

$SERVER["socket"] == "10.0.0.10:443" {
  ssl.engine = "enable"
  ssl.pemfile = "/etc/lighttpd/ssl/pi.example.org/pi.example.org.pem"

  server.name = "pi.example.org"
  server.document-root = "/home/lighttpd/pi.example.org/https"
  server.errorlog = "/var/log/lighttpd/pi.example.org_serror.log"
  accesslog.filename = "/var/log/lighttpd/pi.example.org_saccess.log"

  auth.debug = 2
  auth.backend = "htpasswd"
  auth.backend.htpasswd.userfile = "/etc/lighttpd/shellinabox-htpasswd"

  auth.require = ( "/terminal/" =>
    (
      "method" => "basic",
      "realm" => "Password protected area",
      "require" => "user=remibergsma"
    )
  )
}

Reload Lighttpd to make the changes active:

/etc/init.d/lighttpd reload

To set a password:

apt-get install apache2-utils
htpasswd -c -m /etc/lighttpd/shellinabox-htpasswd remibergsma

You can add multiple users; just remember to remove the ‘-c’ flag when adding more users, as it overwrites the current file.
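If you’d rather not pull in apache2-utils, you can also generate the entry yourself. To my knowledge, Lighttpd’s htpasswd backend accepts the ‘{SHA}’ format: the base64 of the raw SHA-1 digest of the password. A sketch, with made-up credentials:

```python
import base64
import hashlib

def htpasswd_sha_line(user: str, password: str) -> str:
    """Build a 'user:{SHA}...' htpasswd line: base64 of the raw SHA-1 digest."""
    digest = hashlib.sha1(password.encode()).digest()
    return "%s:{SHA}%s" % (user, base64.b64encode(digest).decode())

print(htpasswd_sha_line("remibergsma", "s3cret"))

# Append (rather than overwrite, like htpasswd without '-c') to keep users:
# with open("/etc/lighttpd/shellinabox-htpasswd", "a") as f:
#     f.write(htpasswd_sha_line("remibergsma", "s3cret") + "\n")
```

Note that SHA-1 here is a file-format convention, not a security recommendation; the htpasswd tool remains the straightforward route.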

When you visit https://pi.example.org/terminal you will need to enter a valid username and password, before the page loads.

The final result: SSH in a browser window!
You should now be able to use a terminal via your own protected webpage. It’s much like a real terminal/SSH session, but from a browser. Wow 🙂

Shell In A Box in action


I always use GNU Screen, so I know for sure my commands keep running whatever happens.

Using GNU Screen in a browser


Changes to production systems should be tested on a development system and then be deployed using configuration management (such as Puppet), if you ask me. Sometimes I first run commands by hand (on a test system) to find out the right ones, the right order, etc. In those cases I find it useful to automatically document what I type and be able to ‘replay’ it later on.

Why? Because when I write the Puppet configuration needed to deploy the change, I want to be sure all manual commands I ran make it to the Puppet manifest. Being able to replay what I did allows me to do so with ease.

Here’s how to record your key strokes using the ‘script’ Linux utility:

script change1234.script -t 2> change1234.timing

The ‘script’ utility logs everything that appears in your terminal to the file ‘change1234.script’. Furthermore, it saves the timing data to a file called ‘change1234.timing’. Beware that everything is saved, including errors and typos.

The timing data allows you to replay the script using the same timing as when the session was recorded. It gives a good representation of what happened. To replay, simply run:

scriptreplay change1234.timing change1234.script

You can replay it as many times as you like.
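The timing file is easy to work with programmatically, too: each line holds a delay in seconds followed by the number of typescript bytes output after that delay. As an example, this sketch sums the delays to get the total duration of a recorded session (the sample data is fabricated):

```python
import io

def session_duration(timing_file) -> float:
    """Sum the delays in a 'script -t' timing file.

    Each line has two fields: a delay in seconds, then the number of
    bytes of typescript output emitted after that delay."""
    total = 0.0
    for line in timing_file:
        delay, _nbytes = line.split()
        total += float(delay)
    return total

# Fabricated three-chunk timing data for illustration:
sample = io.StringIO("0.5 12\n1.25 3\n0.25 40\n")
print(session_duration(sample))  # 2.0 (seconds)
```

With a real recording you would pass `open("change1234.timing")` instead of the fabricated sample.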

The .script and .timing files are just plain text files. This means you can ‘grep’ them to quickly find commands. For example, to display the ‘sed’ commands you used:

grep sed change1234.script

As you can see, saving (and documenting) your commands is easy and offers some nice features. It even allows you to forget what exactly you did… 😉

Back in March I wrote a blog on how to create a network without a Virtual Router. I received a lot of questions about it. It’s also a question that pops up now and then on the CloudStack forums. In the meantime I’ve worked hard to implement this setup at work. In this blog I’ll describe the concept of working with a CloudStack setup that has no Virtual Router.

First some background. In Advanced Networking, VLANs are used for isolation. This way, multiple separated networks can exist over the same wire. More about VLAN technology in general on this Wikipedia page. For VLANs to work, you need to configure your switch so it knows about the VLANs you use. VLANs have a unique id between 1 and 4094 (the id is a 12-bit field). CloudStack configures all of this automatically, except for the switch. Communication between Virtual Machines in the same CloudStack network (aka VLAN) is done using the corresponding VLAN-id. This all works out-of-the-box.
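The isolation comes from the 802.1Q tag on each Ethernet frame: the VLAN id lives in a 12-bit field of the Tag Control Information word, which is why the id range is limited. A small sketch of how that field is packed (purely illustrative, not CloudStack code):

```python
def pack_tci(pcp: int, dei: int, vid: int) -> int:
    """Pack an 802.1Q Tag Control Information word:
    3-bit priority (PCP), 1-bit drop-eligible (DEI), 12-bit VLAN id."""
    if not 0 <= vid <= 0xFFF:
        raise ValueError("VLAN id must fit in 12 bits")
    return (pcp << 13) | (dei << 12) | vid

def unpack_vid(tci: int) -> int:
    """Extract the VLAN id: the low 12 bits of the TCI."""
    return tci & 0xFFF

tci = pack_tci(pcp=0, dei=0, vid=1234)
print(hex(tci), unpack_vid(tci))  # 0x4d2 1234
```

Every frame tagged with VLAN id 1234 is only delivered to ports (and VMs) that are members of that VLAN, which is exactly the property the rest of this post builds on.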

It took me some time to realize how powerful this actually is. One can now combine both VMs and physical servers in the same network, by using the same VLAN for both. Think about it for a moment. You’re now able to replace the Virtual Router with a Linux router, simply by having it join the same VLAN(s) and using the Linux routing tools.

Time for an example. Say we have a CloudStack network using VLAN-id 1234, and this network is created without a Virtual Router (see instructions here). Make sure you have at least 2 VMs deployed and make sure they’re able to talk to each other over this network. Don’t forget to configure your switch: if both VMs are on the same compute node, networking between the VMs works, but you won’t be able to reach the Linux router later on if the switch doesn’t know the VLAN-id.

Have a separate physical server available running Linux and connect it to the same physical network your compute nodes are connected to. Make sure the IPs used here are private addresses. In this example I use:

compute1: 10.0.0.1
compute2: 10.0.0.2
router1: 10.0.0.10
vm1: 10.1.1.1
vm2: 10.1.1.2

The Linux router needs two network interfaces: one to the public internet (eth0 for example) and one to the internal network, where it connects to the compute nodes (say eth1). The eth1 interface on the router has IP address 10.0.0.10 and it should be able to ping the compute node(s). When this works, add a VLAN interface on the router called eth1.1234 (where 1234 is the VLAN-id CloudStack uses). Like this:

ifconfig eth1.1234 10.1.1.10/24 up

Make sure you use the correct IP address range and netmask. They should match the ones CloudStack uses for the network. Also, note the ‘.’ between eth1 and the VLAN-id. Don’t confuse this with ‘:’, which just adds an alias IP.
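Since the dot-versus-colon distinction is easy to get wrong, here is a small helper that tells the two naming styles apart (my own illustrative sketch, not something the vlan tools provide):

```python
def classify_interface(name: str) -> str:
    """Tell a VLAN sub-interface (eth1.1234) apart from an IP alias (eth1:0).
    Illustrative only; the kernel does the real validation."""
    if "." in name:
        base, vid = name.split(".", 1)
        if vid.isdigit() and 1 <= int(vid) <= 4094:
            return "VLAN %s on %s" % (int(vid), base)
        raise ValueError("invalid VLAN id in %r" % name)
    if ":" in name:
        base, _alias = name.split(":", 1)
        return "alias IP on %s" % base
    return "plain interface %s" % name

print(classify_interface("eth1.1234"))  # VLAN 1234 on eth1
print(classify_interface("eth1:0"))    # alias IP on eth1
```

The key point: eth1.1234 creates a tagged interface in a separate broadcast domain, while eth1:0 is just another address on the untagged eth1.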

To check if the VLAN was added, run:

cat /proc/net/vlan/eth1.1234

It should return something like this:

eth1.1234 VID: 1234 REORDER_HDR: 1 dev->priv_flags: 1
 total frames received 14517733268
 total bytes received 8891809451162
 Broadcast/Multicast Rcvd 264737
 total frames transmitted 6922695522
 total bytes transmitted 1927515823138
 total headroom inc 0
 total encap on xmit 0
Device: eth1
INGRESS priority mappings: 0:0 1:0 2:0 3:0 4:0 5:0 6:0 7:0
 EGRESS priority mappings:

Tip: if this command does not work, make sure the VLAN software is installed. In Debian you’d simply run:

apt-get install vlan

Another check:

ifconfig eth1.1234

It should return something like this:

eth1.1234 Link encap:Ethernet HWaddr 00:15:16:66:36:ee 
 inet addr:10.1.1.10 Bcast:0.0.0.0 Mask:255.255.255.0
 inet6 addr: fe80::215:17ff:fe69:b63e/64 Scope:Link
 UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
 RX packets:14518848183 errors:0 dropped:0 overruns:0 frame:0
 TX packets:6925460628 errors:0 dropped:15 overruns:0 carrier:0
 collisions:0 txqueuelen:0 
 RX bytes:8892566186128 (8.0 TiB) TX bytes:1927937684747 (1.7 TiB)

Now, the most interesting tests: ping vm1 and vm2 from the Linux router, and vice versa. It should work, because they are all using the same VLAN-id. Isn’t this cool? You just connected a physical server to a virtual one! 🙂

You now have two options to go from here:

1. Use a load balancer (like Keepalived) and keep the IPs on the VLAN private using Keepalived’s NAT routing. The configuration is exactly the same as if you had all physical servers or all virtual servers.

2. Directly route public IPs to the VMs. This is the most interesting option to explain a bit further. In the example above we used private IPs for the VLAN. Imagine you’d use public IP addresses instead. For example:

vm1: 8.254.123.1
vm2: 8.254.123.2
router1: 8.254.123.10 (eth1.1234; eth1 itself remains private)

This also works: vm1, vm2 and router1 are now able to ping each other. A few more things need to be done on the Linux router to allow it to route the traffic:

echo 1 > /proc/sys/net/ipv4/ip_forward
echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp

Finally, on vm1 and vm2, set the default gateway to router1: 8.254.123.10 in this example.

How does this work? The Linux router also answers ARP requests for the IPs in the VLAN. Whenever traffic comes in for vm1, router1 answers the ARP request and routes the traffic over the VLAN to vm1. If you run a traceroute, you’ll see the Linux router appear as well. Of course, you need a subnet of routable public IPs assigned by your provider for this to work properly.
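You can sanity-check such an addressing plan up front with Python’s ipaddress module: proxy ARP plus a default gateway of router1 only makes sense if every host really sits inside the subnet the router answers ARP for (8.254.123.0/24 is the made-up example range used above):

```python
import ipaddress

# The made-up public range from the example above:
subnet = ipaddress.ip_network("8.254.123.0/24")
hosts = {
    "vm1": ipaddress.ip_address("8.254.123.1"),
    "vm2": ipaddress.ip_address("8.254.123.2"),
    "router1": ipaddress.ip_address("8.254.123.10"),
}

# Every host must fall inside the proxied subnet, or the ARP replies
# from router1 will never cover its address.
for name, ip in hosts.items():
    print(name, ip in subnet)  # prints True for all three
```

The same check is worth repeating whenever you assign an extra public IP to a VM later on.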

To me this setup has two major advantages:

1. No wasted resources for Virtual Routers (one for each network).
2. Public IPs can be assigned directly to VMs; you can even assign multiple if you like.

The drawbacks? Well, this is not officially supported nor documented. And since you are not using the Virtual Router, you’ll have to implement a lot of services yourself that are normally provided by the Virtual Router. Also, deploying VMs in a network like this only works using the API. To me these are all challenges that make my job more interesting 😉

I’ve implemented this in production at work and we successfully run over 25 networks like this, with about 100-125 VMs. It was a lot of work to configure it all properly and to come up with a working solution. Now that it is live, I’m really happy with it!

I realize this is not a complete step-by-step howto. But I do hope this blog will serve as inspiration for others to come up with great solutions built on top of the awesome CloudStack software. Please let me know in the comments what you’ve come up with! Also, feel free to ask questions: I’ll do my best to give you some directions.

Enjoy!