Programming

Improving dev environments: All the HTTP things

Improve your development environment with the help of Docker, Nginx, and other tools
May 16, 2016
Adrian Perez
@blackxored

This post belongs to the series "Improving dev environments"; you can read the previous post, on cross-compiling Go on Docker, here.

I still remember the early days when Pow came out, a zero-configuration Rack server for Ruby apps. It felt like magic at the time: the ease of use, the convenience of working on multiple projects at once, the beloved .dev suffix. In all honesty, it was one of the reasons I craved a Mac and a switch to OS X.

Perhaps the most significant part of Pow for me was, as mentioned, the ability to have a custom domain name for my app and to run multiple applications without having to deal with port collisions on the host (Rails developers will feel me here, having had to fiddle with rails s -p ???? after an unexpected port-in-use error). Besides, there was a cool factor to that .dev suffix. It also wasn't technically specific to Ruby/Rack apps, since it could proxy to any port, in a similar fashion to reverse proxying with a web server. And of course, there was its ease of use: symlinks all the way and you were done.

Despite all the love I've given it so far, it's been a while since I've used it. The main reason is: I don't need it anymore; I have Docker.

I've been writing a lot about Docker recently, so I'll skip any introductions. While I might not have replicated technically everything that Pow provides, I've reached a point where I have the most important features, it fits my workflow better, and it's potentially more powerful. I'm going to show you how to improve all of those little HTTP things we deal with on a daily basis, with the help of a few Docker tricks, projects, and images.

We're going to cover:

  • Accessing our apps using a .dev TLD.
  • Avoiding port collisions.
  • Serving static files.
  • Refreshing the browser on file changes.

Most of this post assumes an OS X install and the Docker for Mac beta, but it can easily be translated to other operating systems. For Docker Machine users, one of the few differences is the address: you'd use the VM's IP instead of localhost.
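
If you're on Docker Machine and need that IP, it's one command away (this assumes the default machine name of default; adjust to yours):

docker-machine ip default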

The .dev suffix

Let's start by getting our beloved .dev suffix back. In case you're not sure what that means: it allows you to access your apps under the .dev TLD from your local machine. For example, if your app is called pinataparty, you can access it by browsing to http://pinataparty.dev.

While this might seem cosmetic, as I described before, it frees us from having to remember which port an app is running on and, if you have several running at the same time, from port collisions. Plus, a proper domain name helps with cookies and authentication (who hasn't had a cookie on http://localhost:3000 that belonged to a different app?).

Setting up DNS

As you might have guessed, one of the things we need to tackle is DNS. Since our apps are going to change over time, we can't use static mappings in /etc/hosts.

We need to be able to resolve anything that ends in .dev to a specific address: the address of a reverse proxy/forwarder (more on that later) running on our local computer.

We're going to run a small DNS server for that purpose, using dnsmasq's wildcard address feature. And we're going to run it through Docker, so we don't mess with our host.

docker run -d \
  --name dnsmasq \
  --restart always \
  -p 53535:53/tcp \
  -p 53535:53/udp \
  --cap-add NET_ADMIN \
  andyshinn/dnsmasq \
  --address=/dev/127.0.0.1

The most important part of this is the --address switch, which is passed straight to dnsmasq. For the sake of this discussion we'll keep it simple: it essentially tells dnsmasq that anything ending in .dev should resolve to 127.0.0.1. (Docker Machine/Boot2Docker users will need to change this address to the IP of the VM. Docker for Mac users: we're the lucky ones, since recent betas let us use localhost, aren't we?)

It's also worth noting the ports: we're mapping port 53535 on the host to port 53 in the container, for both TCP and UDP. We used a higher port and avoided the default because of the chance of conflicts. For example, Docker for Mac runs its own DNS resolver, which used to resolve <container-name>.docker.local (though it's not clear to me how relevant this remains in the latest betas, which use VPN/host mode by default). If you're not running Docker for Mac or a DNS server on your computer, you're free to use the default port if you so desire. We're also setting a restart policy so this container comes back up when we reboot.
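
Before wiring it into the system resolver, we can sanity-check that the container answers queries by pointing dig straight at the mapped port (the hostname is arbitrary; per our --address rule, anything under .dev should come back as 127.0.0.1):

$ dig +short -p 53535 @127.0.0.1 anything.dev
127.0.0.1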

Now we have a dumb DNS server that resolves anything ending in .dev to 127.0.0.1, but we're not using it yet. On OS X, we're going to create a configuration that tells the system to use a specific nameserver when a particular condition occurs, in this case, the dev TLD. The process should be quite similar with resolv.conf on Linux.

We're going to create a file under /etc/resolver/ called dev (notice it unsurprisingly matches our TLD), containing the following:

nameserver 127.0.0.1
port 53535

Again, Docker Machine users should specify the VM's IP here instead.
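
As a small sketch, the Docker Machine variant of that file can be generated like this (assuming your machine is named default; Docker for Mac users just hardcode 127.0.0.1 as above):

sudo mkdir -p /etc/resolver
printf 'nameserver %s\nport 53535\n' "$(docker-machine ip default)" | sudo tee /etc/resolver/dev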

With those changes in place, we need to test it; it never hurts. OS X is a bit weird about how the resolver system works, so don't be fooled if you try:

$ nslookup anything.dev
Server:         192.168.0.1
Address:        192.168.0.1#53

Name:   anything.dev
Address: 127.0.53.53

Instead, we'll ping it:

$ ping -c1 anything.dev
PING anything.dev (127.0.0.1): 56 data bytes
64 bytes from 127.0.0.1: icmp_seq=0 ttl=64 time=0.043 ms

And lo and behold, 127.0.0.1 is our resolved address.
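
If you want to double-check that OS X actually picked up the new resolver, scutil lists the full resolver configuration, including entries from /etc/resolver; look for a block with domain dev, our nameserver, and port 53535:

scutil --dns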

The proxy service

We managed to get our DNS setup working. Now we need to dive deeper into what our forwarder or proxy might look like.

We need a service, but not just any standard service. We need it to be dynamic, since our list of running apps can change all the time. In DevOps terms, we're in need of a dynamic load balancer. And since we're running (or at least should be running) most of our apps in Docker itself, it would be nice if it had tight integration with Docker.

Fortunately, such a thing exists. Not only that, you have plenty of options within the Docker ecosystem. In this article, we'll leverage nginx-proxy, which provides even more than what we need.

nginx-proxy uses Docker events to reconfigure and reload Nginx, giving us a fully dynamic reverse proxy that we'll use as the backend for the DNS magic we just set up.

There are different ways to run it, but let's just try the easiest:

$ docker run -d \
  -p 80:80 \
  -p 443:443 \
  --name nginx-proxy \
  --restart always \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  -v "$PWD/certs":/etc/nginx/certs \
  jwilder/nginx-proxy

Let's recap the command. As in our previous DNS example, we're ensuring our container persists across restarts, we're exposing ports 80 and 443 (the latter optional, required for SSL), and we're mounting a couple of volumes. The first is the Docker socket, which nginx-proxy needs in order to listen for events; the second (optional) one holds our SSL certificates. SSL support, however, is outside the scope of this post; please refer to the project's documentation for details.

As you would expect, after running this command and navigating to http://localhost (again, Docker Machine users replace it with the VM's IP), we have an Nginx server running.

However, it shows a 503 page. This is expected, as we haven't told the proxy which app we want to access. Which brings us to...

Virtual hosts

If you remember from above, one of Pow's niceties was the ability to set up symlinks to our apps. With nginx-proxy, which is even more powerful when it comes to routing, we instead specify our app's name (or names, since it supports multiple) as an environment variable on our containers.

The magic happens with VIRTUAL_HOST. Any container we run with this environment variable set becomes accessible under that name, which, coupled with our DNS setup, effectively achieves our goal of bringing Pow's multi-app, no-port-collision setup to Docker. Thanks to introspection, we don't need to publish any ports on the host, and often don't even need to expose them explicitly in our containers.

We're going to start by testing it with a default Nginx website, which we're all familiar with.

$ docker run -it -e VIRTUAL_HOST=example.dev nginx

If we navigate to http://example.dev we should be able to see Nginx's default welcome page.
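
It's also instructive to watch nginx-proxy react as containers come and go. The container name below is the one we used earlier, and the config path is where the image writes its generated configuration (at least at the time of writing):

# Follow the proxy's logs while you start and stop containers
docker logs -f nginx-proxy

# Peek at the Nginx configuration generated from the running containers
docker exec nginx-proxy cat /etc/nginx/conf.d/default.conf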

This also applies to any container we run as described, that is, any app with exposed ports, so you can leverage it within Docker Compose to further improve your development workflow.
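
As a sketch of what that looks like for one of your own apps (the image name here is hypothetical, and VIRTUAL_PORT tells nginx-proxy which exposed port to route to when a container exposes more than one):

docker run -d \
  --name pinataparty \
  -e VIRTUAL_HOST=pinataparty.dev \
  -e VIRTUAL_PORT=3000 \
  pinataparty-image  # hypothetical image for our example app, listening on port 3000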

I can't stress the productivity gains of this enough: you run an app server, you get a DNS name, you don't have to remember the port, and when you kill the container, it's gone. It's magical.

Serving static files

How many times have you needed to serve static files over HTTP? Your shell history can probably tell you. It's pretty common: sometimes we need to test a static site, run an example, read offline docs, or load something that uses a JS framework without running into file:/// restrictions. Perhaps we're even developing a static site, although the next section will be more useful for that.

In my case, I can't recall the number of times I've typed:

python -m SimpleHTTPServer 8080

Or even serve.

Now we have Docker, we have DNS, and we have a proxy, so serving any directory over HTTP shouldn't be that hard, right?

I'm going to skip the simplest example and show you a slightly more complicated version.

We'll start with a shell function:

serve() {
  vhost="$(basename "$PWD").dev"
  echo "Serving current directory as $vhost... Press Ctrl-C to stop serving"
  # Mount the current directory as Nginx's document root; nginx-proxy routes $vhost to it
  docker run -it --rm -v "$PWD":/usr/share/nginx/html/ -e VIRTUAL_HOST="$vhost" nginx
}

This function takes the name of the current directory to construct the VIRTUAL_HOST environment variable, then runs an Nginx container with the current directory mounted as a volume. The results shouldn't surprise you:

$ cd pinataparty
$ ls
index.html
$ serve
Serving current directory as pinataparty.dev... Press Ctrl-C to stop serving

And now we have that directory served to us over HTTP and accessible under our dev address. But we can take it even further.

Reloading browser on changes

Being able to serve a directory as described in the previous section is definitely an improvement over our previous workflow, but we still have to reload the browser after every change to our files.

That's not very appealing to people developing static websites and client-side applications. It would be great if we could combine our serve function with a tool designed to provide this functionality, such as BrowserSync.

Let's create another function for this purpose that combines the two:

browsersync() {
  # Serve the current directory with BrowserSync behind nginx-proxy.
  # The first argument, if given, overrides the virtual host name.
  docker run \
    --rm \
    -it \
    -P \
    -v "$(pwd)":/source \
    -e VIRTUAL_HOST="${1:-$(basename "$PWD").dev}" \
    -e VIRTUAL_PORT=3000 \
    blackxored/browsersync \
    start --server --files '**/*.css' --files '**/*.html'
}

Assuming we're still in our pinataparty directory, when we run this command and open a browser window we get the same result as before; the difference is that if we edit or create CSS or HTML files, the changes show up instantly, with no need to refresh.

In addition to what the previous example did, this one lets us specify a custom virtual host as an argument. You can see how easy it would be to extend this and integrate other tools, like JSPM Server and the like.
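
For example, to serve the same directory under a different name (the hostname here is just an illustration), you'd run:

$ cd pinataparty
$ browsersync pinata.dev

and the site becomes available at http://pinata.dev instead.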

Docker Machine users: since Docker for Mac supports inotify, file watching works out of the box there, but unfortunately that's not the case for Docker Machine/Boot2Docker; you might need to instruct BrowserSync to use polling, or use one of the several alternatives that exist to enable file-change detection inside Docker.

Exercise for the reader

This pretty much wraps up this article. The one thing missing is port proxying to external apps, which is insanely easy, and I encourage you to try it yourself.

~ EOF ~
