Ever since FreeNAS 10 came out and burned me, I switched from old-fashioned jails to the new docker experience. While the docker experience was great, FreeNAS 10 left a bad taste in my mouth. Begrudgingly, I switched to FreeNAS 11, which has no docker support. I had just spent most of my time moving my home stack away from jails, only to have the docker option taken away from me. So I decided to install RancherOS under bhyve to adapt to my new needs. Here is the setup process and some of the configuration:
1. RancherOS installation
This part is fairly straightforward. Go to https://github.com/rancher/os to learn a little more about RancherOS and to download the latest ISO. Use `iohyve fetch` to pull the ISO into the data store, then `iohyve install RancherOS rancheros.iso` to boot the RancherOS live image. From there, run `ros install -d /dev/sda` to install to the local disk.
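The steps above can be sketched roughly as the following iohyve session on the FreeNAS host. The pool name, release URL, and VM sizing are assumptions for illustration; adjust them to your system:

```shell
# One-time: initialize iohyve on the pool (pool name is an assumption)
iohyve setup pool=Volume1

# Fetch the RancherOS ISO into iohyve's ISO store (release URL is an example)
iohyve fetch https://github.com/rancher/os/releases/download/v1.1.0/rancheros.iso

# Create the guest and give it resources (disk/CPU/RAM sizes are assumptions)
iohyve create RancherOS 8G
iohyve set RancherOS cpu=2 ram=2G

# Boot the live ISO in the new guest
iohyve install RancherOS rancheros.iso

# Then, inside the RancherOS guest, install to the virtual disk:
sudo ros install -d /dev/sda
```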
However, you should check out a guide here that does a great job of explaining the entire process, as I am omitting many small details for the sake of brevity.
2. The Setup
Arguably the longest part of the process is the setup. Since I’m running a ZFS pool on FreeNAS, all of the data that the containers need access to lives there. What I did was create a new user/group and add new NFS shares for that user. We can then create a system container in docker using d3fk/nfs-client to mount the NFS shares into the docker host so that we can share them with the containers. Here is part of the configuration for this:
```yaml
# cloud-config.yml
rancher:
  services:
    nfs:
      environment:
        MOUNTPOINT: /mnt/config
        SERVER: 192.168.10.100
        SHARE: /mnt/Volume1/configurations
      image: d3fk/nfs-client
      labels:
        io.rancher.os.after: console, preload-user-images
        io.rancher.os.scope: system
      net: host
      privileged: true
      restart: always
      volumes:
        - /usr/bin/iptables:/sbin/iptables:ro
        - /mnt/config:/mnt/config:shared
        - /mnt/data2:/mnt/data2:shared
        - /mnt/data1:/mnt/data1:shared
  mounts:
    - ["192.168.10.100:/mnt/Volume1/data1", "/mnt/data1", "nfs", ""]
    - ["192.168.10.100:/mnt/Volume1/data2", "/mnt/data2", "nfs", ""]
write_files:
  - path: /etc/rc.local
    permissions: "0755"
    content: |
      #!/bin/bash
      [ ! -e /usr/bin/docker ] && ln -s /usr/bin/docker.dist /usr/bin/docker
```
As you can see, you can add any additional mount points in the rancher.mounts section, which cuts down on the number of d3fk/nfs-client instances. NOTE: you will also have to list any mount points you intend to mount under rancher.services.nfs.volumes as “mountPoint:mountPoint:shared”.
The other standard containers that I have in my stack are portainer, cadvisor, prometheus (and exporters), and traefik. To install these, we can simply add them into the cloud config above.
```yaml
rancher:
  services:
    portainer:
      container_name: portainer
      environment:
        PGID: "1001"
        PUID: "1001"
        TZ: America/Los_Angeles
      image: portainer/portainer
      labels:
        - traefik.enable=true
        - traefik.backend=portainer
        - traefik.frontend.rule=Host:portainer.fqdn.org
        - traefik.port=9000
      ports:
        - 9000:9000
      privileged: true
      restart: always
      volumes:
        - /var/run/docker.sock:/var/run/docker.sock
        - /mnt/config/portainer:/data
    prometheus:
      container_name: prometheus
      image: prom/prometheus
      ports:
        - 9001:9090
      volumes:
        - /mnt/config/prometheus.yml:/etc/prometheus/prometheus.yml
      labels:
        - traefik.enable=true
        - traefik.backend=prometheus
        - traefik.frontend.rule=Host:prometheus.fqdn.org
        - traefik.port=9090
        - traefik.frontend.auth.basic=user:hash
    traefik:
      container_name: traefik
      environment:
        PGID: "1001"
        PUID: "1001"
        TZ: America/Los_Angeles
      image: traefik
      labels:
        - traefik.enable=true
        - traefik.backend=traefik
        - traefik.frontend.rule=Host:traefik.fqdn.org
        - traefik.port=5001
      ports:
        - 80:80
        - 443:443
        - 5001:5001
      restart: always
      volumes:
        - /var/run/docker.sock:/var/run/docker.sock
        - /mnt/config/traefik:/etc/traefik
```
```toml
debug = false
traefikLogsFile = "/etc/traefik/log/traefik.log"
accessLogsFile = "/etc/traefik/log/access.log"
defaultEntryPoints = ["http", "https"]

[acme]
email = "your email address here"
storage = "/etc/traefik/acme/acme.json"
entryPoint = "https"
acmeLogging = true
OnHostRule = true

[entryPoints]
  [entryPoints.http]
  address = ":80"
    [entryPoints.http.redirect]
    entryPoint = "https"
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]

[web]
address = ":5001"
  [web.auth.basic]
  users = ["user:hash"]

[docker]
endpoint = "unix:///var/run/docker.sock"
watch = true
exposedbydefault = false
```
```shell
$ sudo ros service up prometheus portainer traefik
```
Traefik is an amazing reverse proxy written in Go. By adding labels to a container’s compose definition, Traefik discovers them through docker.sock, so we don’t have to continually edit the Traefik configuration. It also automatically fetches SSL certificates from https://letsencrypt.org/, securing our connections, and gives us basic HTTP auth for services that don’t offer any authentication, such as Prometheus. Amazing, right? And we’re not even taking advantage of its full potential.
3. The Monitoring
I would also recommend setting up grafana for dashboards from prometheus. Simply add other docker containers as exporters (prom/container-exporter and prom/node-exporter most notably) and add grafana. From there, add prometheus as a source and import a dashboard. Go to https://grafana.com/dashboards to take a look at some of the dashboards that they have. I personally recommend using dashboard 179 as a base and adding to your configuration from there.
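To wire the exporters into Prometheus, the prometheus.yml mounted in the compose section above needs scrape targets for them. A minimal sketch, assuming the exporters run on the docker host at their default ports (the host IP and job names here are examples, not from my actual config):

```yaml
# prometheus.yml (sketch) -- host IP and job names are assumptions
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ['localhost:9090']
  - job_name: node            # prom/node-exporter, default port 9100
    static_configs:
      - targets: ['192.168.10.101:9100']
  - job_name: cadvisor        # cadvisor, default port 8080
    static_configs:
      - targets: ['192.168.10.101:8080']
```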
4. The Alerting
Since this is a home stack, you probably aren’t going to want to do much alerting. However, you may want to alert on certain things (for instance, detecting a process that gets OOM’ed, downloads 1 GiB of data every time before dying, and blows past your consumer Comcast connection cap of 1 TB, but hey, what would I know about that?). In this case, alerting is very easy to set up with either Prometheus or Grafana. You have a lot of options here and they’re all equally great. The hardest part is probably learning the Prometheus query syntax, but it’s pretty easy to understand.
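As a sketch of what a Prometheus alerting rule for that bandwidth scenario might look like (the threshold, metric name, and labels here are assumptions; the metric comes from node-exporter, whose exact name varies by exporter version):

```yaml
# alert.rules.yml (sketch) -- threshold and labels are assumptions
groups:
  - name: bandwidth
    rules:
      - alert: HighNetworkEgress
        # sustained transmit rate over 10 minutes, from node-exporter
        expr: rate(node_network_transmit_bytes_total[10m]) > 10000000
        for: 15m
        labels:
          severity: warning
        annotations:
          summary: "Sustained egress above ~10 MB/s; check for a crash-looping downloader"
```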
5. The ToDo’s
This has been my weekend funtime project for some time now and it’s worked out fairly well. One of the things that I haven’t quite implemented yet is healthchecks. Ideally we would have insight into every container, but let’s face it, we’re not going to have that much insight unless we integrate checks into the program upstream. Health checks let you run an arbitrary command or shell script, and the exit code determines the health of the container. Not many containers include healthchecks; you can add one by rebuilding the existing image with a new Dockerfile, but that gets messy with updates and turns into more of a hassle. Hence the todo: I haven’t figured out a good way to do this yet.
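For reference, the rebuild approach looks something like this. It is a minimal sketch, assuming the base image actually ships a probe binary like wget or curl (many slim images don’t, which is part of why this gets messy), and the port and endpoint are examples rather than anything upstream provides:

```dockerfile
# Sketch: wrap an upstream image with a healthcheck.
# Base image, port, and endpoint are assumptions for illustration.
FROM portainer/portainer

# Docker runs the CMD every --interval; exit 0 = healthy, non-zero = unhealthy.
# The probe binary (wget here) must exist in the image for this to work.
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
  CMD wget -q --spider http://localhost:9000/ || exit 1
```

The downside, as noted above, is that every upstream image update now requires rebuilding your wrapper image, which is exactly the hassle I haven’t solved yet.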