
Laying the groundwork, part 3: Ghost

I have always been amazed by the popularity of some technologies, in spite of their obvious flaws. I can certainly understand the void they fill, but I struggle to come to terms with the mass of people who seem to turn a blind eye. It must just have to do with our primal instinct of seeking immediate gratification. But enough about why this blog isn't running on WordPress: why did I choose Ghost? It's definitely not ideal and I can't even say it's great – but it's decent enough that I decided to give it a try.

Back when it was pristine

Let's pick up where we left off in the setup procedure: LXD is all set up to create containers stored on ZFS datasets, whose only access to the outside world is via an HTTP proxy reachable at an IPv6 link-local address over a virtual bridge. So the obvious first step is to build the new "system" and set some limits:

# lxc launch ubuntu:xenial blog
# lxc config set blog limits.memory '1GB'
# zfs set refquota=3g lxd/containers/blog

With that out of the way, we can enter the new realm as a non-privileged user by using something like lxc exec blog -- sudo -iu ubuntu[1]. If you remember from the previous post that the standard http_proxy and https_proxy environment variables are supposed to be set automatically and try to print them, you'll discover they're missing. That's due to sudo's default configuration in Ubuntu, which is nice enough to clear them – for security reasons. There are lots of ways around this: get a shell as root using bash -l or su -, or get sudo to keep the two variables using Defaults env_keep += "http_proxy https_proxy". Whatever floats your boat. The great thing about these environment variables is that most applications will honour them: your favourite APT frontend, curl, Wget etc.
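
For instance, the sudo route amounts to a one-line drop-in (the file name below is my choice; visudo -cf will validate it):

# echo 'Defaults env_keep += "http_proxy https_proxy"' > /etc/sudoers.d/keep-proxy
# visudo -cf /etc/sudoers.d/keep-proxy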

I'm a very big fan of keeping environments clean, so my first steps were to add the hostname to /etc/hosts, uninstall the ssh server (and the ssh client while I was at it) and disable the getty on the nonexistent console: systemctl disable console-getty.
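
In shell terms that boils down to something like this (the 127.0.1.1 entry follows the usual Debian/Ubuntu convention; adjust the hostname to taste):

# echo "127.0.1.1 blog" >> /etc/hosts
# apt purge openssh-server openssh-client
# systemctl disable console-getty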

The struggle with Node

As I might've previously mentioned, Ghost is hipster-tech, which these days means it runs on Node.js – and it uses Yarn. As is the case with anything under very active development, an Ubuntu LTS release is going to contain a fairly old version of Node.js, in the universe component[2]. At the time of this writing, there's no sign of Yarn in any Ubuntu release. All of this leaves us with the only "decent" option of using the developers' repositories. I set them up with the following lines under /etc/apt/sources.list.d/:

deb http://deb.nodesource.com/node_6.x xenial main
deb http://dl.yarnpkg.com/debian/ stable main

And installed their GPG keys with:

# curl -s https://deb.nodesource.com/gpgkey/nodesource.gpg.key | apt-key add -
# curl -s https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -

I briefly considered limiting the packages that can be installed from these repositories, but then I thought: what's the point? Their maintainer scripts run as root anyway. I officially no longer trust my container; but that's why I had it in the first place. Time to install nodejs, yarn and make using APT:
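
# apt update
# apt install nodejs yarn make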

As with any Node.js-based package, the next step is to fire up npm and install the bootstrapping package, ghost-cli, globally. Of course, npm install -g ghost-cli fails, as it ignores my proxy environment variables. No surprises there. I find out npm uses its own config settings, so I run:

# npm config set proxy "$http_proxy"
# npm config set https-proxy "$https_proxy"

Except that the second command complains about the content of the proxy setting previously set by the first command – which had succeeded. Turns out npm doesn't like my link-local IPv6 address. I'm still not surprised, albeit slightly irritated. I decide to press on rather than switch to a site-local address on the bridge, if only to find out whether I run into any more issues. I configure a local Squid proxy to forward all requests via my access proxy with the following:

http_port 3128
cache_peer fe80::XXXX:XXXX:XXXX:XXXX%eth0 parent 3128 0 proxy-only no-query no-digest default
never_direct allow all

But even that's not enough, as Squid doesn't currently support forwarding a CONNECT request using another CONNECT request to an upstream peer, and npm's default is to access its registry via HTTPS. I am now forced to do this[3]:

# npm config set registry 'http://registry.npmjs.org/'

…and it works!

The main mission: getting Ghost

With npm over-the-moon to be able to communicate with the outside world, I finally have ghost-cli installed. The next step in the oh-so-easy procedure[4] to get Ghost up and running is to issue ghost install --db sqlite3 in the directory of my choosing: I settled on /var/www/ghost, with the setup done as the ubuntu user – after all, running this is the only purpose of the container. ghost install actually uses npm to install its dependencies in the target directory, so I should be forgiven for assuming that the npm proxy settings would do the trick. Instead, I remain baffled as tcpdump shows queries for A records of hosts, including registry.npmjs.org, being sent to 127.0.0.1. The destination of the queries wasn't unusual, as no DNS resolver is configured in the container (it has no capability to forward DNS queries, after all) and the Unix convention dictates that 127.0.0.1 is to be used as a fallback.

I weigh my options and briefly linger on the idea of choosing a different blogging platform; however, I decide to trick whatever resolver ghost is using by installing dnsmasq and configuring it to reply to all A queries with 127.0.0.1 and all AAAA queries with ::1 – something along these lines, using dnsmasq's catch-all wildcard:
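
# /etc/dnsmasq.d/catch-all.conf – the file name is my choice; "#" in the
# domain position is dnsmasq's match-anything wildcard
address=/#/127.0.0.1
address=/#/::1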


Only one extra setting transforms the local Squid into a transparent proxy capable of forwarding any HTTP over TCP traffic to my gateway:

http_port 80 intercept

I have to admit I felt a combination of relief and satisfaction seeing the ghost install command finally succeed. All that was left was to change the server.host setting in config.production.json[5] to :: so that we can access it from the host, and restart the ghost service.
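
For reference, the relevant fragment of config.production.json ends up looking roughly like this (2368 being Ghost's default port; the rest of the file is left as generated):

{
    "server": {
        "host": "::",
        "port": 2368
    }
}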

Having succeeded in achieving the desired goal, it's time to re-evaluate the decisions made and consider whether I need to backtrack. The choice to use only link-local IPv6 addresses over the communication bridge initially forced me to install Squid in the container due to npm's limitations. While using site-local addresses would've prevented this first issue, in the end it turned out the local proxy would still have been needed to overcome Ghost's insistence on bypassing any proxy settings I had set. Therefore, for the time being, I decide to stick with my original pick.

The last (artificially-inflated) hurdle

Back on the host, a quick test using curl confirms the service is accessible, although it needs a reverse proxy to allow the rest of the world to read my wonderful, yet-to-be-written stories. With Apache[6] running, I enable the proxy and proxy_http modules and set up a new basic site file based on Ghost's requirements for the headers a reverse proxy should set:

RequestHeader unset X-Forwarded-For
RequestHeader unset X-Forwarded-Host
RequestHeader unset X-Forwarded-Server
RequestHeader set X-Forwarded-Proto https
RequestHeader set X-Real-IP %{REMOTE_ADDR}s
ProxyPreserveHost on
ProxyPass / "http://[fe80::XXXX:XXXX:XXXX:XXXX%lxdbr0]:2368/"

I admit I was shocked to discover that the ProxyPass directive does not support a link-local IPv6 address. It turns out it's one of those things that often get overlooked: Nginx isn't any better and also lacks the capability; this makes Squid's support for it seem impressive rather than simply expected.

The nature of the problem is quite frustrating, not because of the lack of any options, but rather that of any elegant ones. To summarise: we have a system capable of establishing TCP sessions to an IPv6 link-local address; we want to use Apache or Nginx as a reverse proxy to said TCP endpoint, but they don't support link-local addresses. I could think of numerous ways to bridge the two, none of which I found terribly appealing (and some, as it turns out, aren't feasible either):

  • Using a netfilter DNAT rule: simple and efficient, but I was unable to find a way to redirect packets to a link-local IPv6 address on a specific interface. I can only assume I am missing something obvious, as I refuse to believe this isn't supported by the Linux kernel.
  • Using systemd-socket-proxyd: does not support IPv6 link-local addresses; at this stage I can barely feign any bewilderment.
  • Using socat: forks a new process for each active connection. And given its flexibility, I do wonder just how efficient the copying is. The great news is it does support link-local IPv6 addresses – see the sketch after this list!
  • Using systemd sockets with netcat: similar enough to the socat option above that I don't have anything to add.
  • Write a simple TCP proxy using edge-triggered epoll to allow good concurrency. We can even avoid copying the data to user-space by using splice: an optimisation which might be interesting to benchmark[7].
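
To illustrate, the socat option is essentially a one-liner – a sketch, assuming a socat recent enough to accept a zone suffix in its TCP6 addresses:

# socat "TCP6-LISTEN:2368,bind=[::1],fork,reuseaddr" \
      "TCP6:[fe80::XXXX:XXXX:XXXX:XXXX%lxdbr0]:2368"

Apache would then point its ProxyPass at http://[::1]:2368/ instead.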

The single-process-per-connection model is probably good enough – the frontend web server is going to limit the number of connections it'll make, potentially making the last option using edge-triggered epoll overkill. Alternatively, going back and relinquishing my original desire of using IPv6 link-local addresses would quickly and painlessly make the problem go away.

Nevertheless, I get easily distracted and can hardly resist the temptation, so I spawn my favourite editor – vim – and waste a couple of hours writing the above-mentioned proxy. It certainly feels great pondering all the edge conditions and stress-testing it to ensure everything works as expected. I take solace in thinking that had this been a work environment and not my pet project, I would've made the right choice.
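
For the curious, the heart of such a proxy is the splice-through-a-pipe idiom. Here's a minimal sketch of a single relay step – my own illustration, not the actual code, with the epoll loop, EAGAIN bookkeeping and proper error reporting left out:

#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

/* Move one chunk from socket `from` to socket `to` through the pipe
 * `pipefd`, without ever copying the data into user space. */
static ssize_t relay_chunk(int from, int to, int pipefd[2])
{
    /* Socket -> pipe: the kernel moves page references, not bytes. */
    ssize_t in = splice(from, NULL, pipefd[1], NULL, 65536,
                        SPLICE_F_MOVE | SPLICE_F_NONBLOCK);
    if (in <= 0)
        return in;      /* 0 = peer closed, -1 = error (incl. EAGAIN) */

    /* Pipe -> socket: drain everything we just staged. */
    ssize_t out = 0;
    while (out < in) {
        ssize_t n = splice(pipefd[0], NULL, to, NULL, in - out,
                           SPLICE_F_MOVE | SPLICE_F_NONBLOCK);
        if (n <= 0)
            return n;   /* a real proxy must park and retry on EAGAIN */
        out += n;
    }
    return in;
}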

What's next?

And so it ends – the first series of articles. I already have quite a few subjects lined up and I'm eager to bring the ideas to life. But I simply couldn't have missed the opportunity to start my long-overdue blog with a meta-story!

  1. Which, incidentally, spits out the funny error message mesg: ttyname failed: Success. Known issue. If you really hate it, you can use script /dev/null, which allocates a pseudo-terminal, and TTY programs will no longer complain. ↩︎

  2. Which effectively means security updates are not guaranteed – it's community maintained. ↩︎

  3. Does npm use digital signatures to verify the integrity of packages it installs? Of course not. ↩︎

  4. It is composed of a few simple steps; I just don't appreciate the assumptions it makes. ↩︎

  5. When did JSON become a suitable user-friendly format for configuration files? ↩︎

  6. I can understand the frowning in the context of this blog, but I am experimenting with mod_wsgi. ↩︎

  7. If you're thinking about premature optimisation, I should mention that I prefer thinking in shades of grey: I believe some optimisations should be made early. Of course, at the moment we're talking about a personal blog, so it would hardly be warranted. ↩︎



