{ Josh Rendek }

<3 Go & Kubernetes

Go Buffalo: Adding a 2nd database

Sep 4, 2020 - 1 minute

If you need to connect to multiple databases in your buffalo app open up your models/models.go file:

Up at the top add a new var like:

var YourDB *pop.Connection

Then in the init() func you can connect to it - the important part is to make sure you call .Open:

YourDB, err = pop.NewConnection(&pop.ConnectionDetails{
	Dialect: "postgres",
	URL:     envy.Get("EXTRA_DB_URL", "default_url_here"),
})
if err != nil {
	log.Fatal(err)
}
if err := YourDB.Open(); err != nil {
	log.Fatal(err)
}

That’s it! You can now connect to a 2nd database from within your app.
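For example, you could point the app at the extra database before booting it - the URL below is a hypothetical placeholder; envy reads EXTRA_DB_URL from the environment:

```
# .env, or exported in your shell
EXTRA_DB_URL=postgres://user:pass@127.0.0.1:5432/other_db?sslmode=disable
```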

We’ll go over everything needed to get a small remote development environment up and running using code-server, buffalo and postgres.

First let’s install Go and Buffalo with gofish:

curl -fsSL https://raw.githubusercontent.com/fishworks/gofish/master/scripts/install.sh | bash
gofish init
gofish install go
gofish install buffalo
buffalo version # should say 0.16.12 or whatever latest is

Install Docker:

curl -fsSL get.docker.com | bash

Install NodeJS & Yarn:

curl -sL https://deb.nodesource.com/setup_14.x | sudo -E bash -
curl -sL https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add -
echo "deb https://dl.yarnpkg.com/debian/ stable main" | sudo tee /etc/apt/sources.list.d/yarn.list
sudo apt-get update && sudo apt-get install yarn

Install code-server:

curl -fsSL https://code-server.dev/install.sh | sh

Add your user to the docker group:

sudo usermod -aG docker YOURUSER
# you may need to reboot after this step to get your user to talk to the docker daemon properly

Start code-server, as your regular user (ie: not sudo):

systemctl --user enable --now code-server

Edit your config (~/.config/code-server/config.yaml). Only do this if it’s on a private, trusted network - don’t do this on an internet-exposed server:


auth: none
password: xxxx
cert: false

Restart the server:

systemctl --user restart code-server

And finally let’s set up postgres:

# for the server, this is quickest/easiest
docker run --name postgres -e POSTGRES_PASSWORD=postgres -p 5432:5432 --restart=always -d postgres
# for the client
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
sudo apt-get update
sudo apt-get install postgresql-client-12

You can now psql to your app from the terminal:

psql "postgres://postgres:postgres@127.0.0.1:5432/demo_test?sslmode=disable"

Let’s create our demo buffalo app:

buffalo new demo
cd demo

Edit your database.yml to look like this for development:

development:
  url: {{envOr "DEV_DATABASE_URL" "postgres://postgres:postgres@127.0.0.1:5432/demo_test?sslmode=disable"}}

Then you can create and migrate your database:

buffalo db create && buffalo db migrate

Now just run buffalo dev in your VS Code terminal and you can browse your app and start working on it.

If you want other things like the clipboard to work properly, I’d suggest setting up a proxy with auth and a real SSL certificate, for example using Traefik or Caddy.

When you set up your Cloud9 IDE for the first time, it comes pre-installed with go1.9 - if you’d like to update to the latest (as of this writing), just run the following commands:

wget https://golang.org/dl/go1.14.6.linux-amd64.tar.gz
sudo tar -C /usr/local -xzf ./go1.14.6.linux-amd64.tar.gz
sudo mv /usr/bin/go /usr/bin/go-old # move the old binary

Edit your .bashrc file so your $PATH has the new location:

export PATH=$PATH:$HOME/.local/bin:$HOME/bin:/usr/local/go/bin

Source the file again so your settings are reloaded:

source ~/.bashrc

And now go version should show 1.14.6

Now let’s install Buffalo with gofish:

curl -fsSL https://raw.githubusercontent.com/fishworks/gofish/master/scripts/install.sh | bash
gofish init
gofish install buffalo
buffalo version # should say 0.16.12 or whatever latest is

And finally let’s set up postgres:

sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
sudo apt-get update
sudo apt-get install postgresql-12
sudo su - postgres
psql
postgres=# create role ubuntu with superuser createdb login password 'ubuntu';
postgres=# create database ubuntu;
postgres=# exit

You should now be able to execute psql from your regular user.

Let’s create our demo buffalo app:

buffalo new demo
cd demo

Edit your database.yml to look like this for development:

development:
  url: {{envOr "DEV_DATABASE_URL" "postgres://ubuntu:ubuntu@127.0.0.1/demo_development"}}

Then you can create and migrate your database:

buffalo db create && buffalo db migrate

Now just run buffalo dev in your Cloud9 terminal and you can preview your application! (Cloud9 already sets the PORT env var).

And then you realize that auto-complete support doesn’t work, find this small disclaimer

This feature is in an experimental state for this language. It is not fully implemented and is not documented or supported.

on the AWS product page, and go back to IntelliJ.

15 Years of Remote Work

Mar 22, 2020 - 6 minutes

In one form or another I’ve worked remotely since 2005 (with a brief mix of remote/onsite for about 2 years).

Below is a checklist of the things I’ve found that are required to have a successful remote work life and ensure you join a great remote team.

Why I like working remote

If it’s going to be your first time working remote, you should figure out why you want to do it.

I get to have my own space

I prefer having my own area to work. I have surround sound, an entire L desk spanning a good chunk of a wall, and plenty of desk space. Headphones bother my ears so this lets me listen to music as loud as I want, have the temperature at what I want, and set up my work area with personal knick-knacks (without having to worry about them vanishing).

One of the important things is to have a dedicated working area - whether that’s a room in your house or even renting a co-working space (I’ve had team-mates do that).

Creating your own routine

Sign-in to chat, say hello, then go make some coffee. Figure out a pattern that works for you - you probably don’t have far to walk to get to your desk. There’s no rush to beat traffic or hit up your coffee shop on the way in. One of my favorite things about being at home is being able to make my lunch fresh whenever I want (usually I do meal-prep for a few days) - some offices will have a kitchen, but it’s not the same as having your own pots, pans and knives.

Figure out how you like to work

Since you have a dedicated work space you can work how you like. Depending on my mood, I’ll have anything from Spotify going, to listening to cooking shows in the background, to having re-runs on Netflix for background noise. I’ve had friends also do things like have CSPAN and other news networks on for background noise.

Don’t waste time traveling

I worked on a hybrid team recently, and some people in NYC would spend hours traveling. That’s hours of your life gone sitting in a car or train.

Not having to sit in traffic (even if only a short way from home) is still an extra stress you don’t have to worry about at the end of the day.


When my previous dog Spunky was going through chemo and cancer surgeries, it was such a relief to be able to be at home with him. This isn’t something I would’ve been able to do at a regular office job.

Currently I have two golden retrievers and a lhasa - being able to reach down and pet your dogs or step outside to play with them is amazing. When the weather is nice I’ll even grab a lawn chair and sit outside to work and play fetch until the golden decides to bury the ball in the yard.

No open office plans

Open office plans are awful, and you get to avoid them working from home! Unfortunately one company I worked at with a hybrid setup had an open office plan and a ping pong table, making it almost impossible to hear co-workers when pairing or meeting unless they went into a separate conference room.

You have a strong local network of friends

You’ll be working remotely - you need to make sure you still get out and interact with people. Having a good network of friends locally helps give you that social fix you may need by not having officemates.

Things to look for in remote companies

Now that you know why you want to work remotely, you need to know what to look for in companies.

Make sure a good portion of the company is actually remote

If you’re joining a smaller team within the company, it’s okay to compromise if the team is fully remote as well. The important aspect here is that there are expectations around communicating with people in the open (ie: no water cooler or office hallway chats).

How do they communicate?

You want to make sure there’s a mix of online/offline and face-to-face communication. Online chats like Slack/Mattermost/Hipchat solve the immediate/quick conversations. Email is generally what I’ve used for offline communication.

Video chat is important and can help make explaining things and clarifying designs a lot easier than text. I’ve used Zoom and Google Meet/Hangouts in the past (Zoom being my preference).

Do they meetup?

I hate flying but it’s one of the things I’ve compromised on. Ensure your company (or team, if fully remote) is meeting up at least once a year. It’s very important to build work friendships and rapport with your team-mates. My preference is twice a year but that will vary person-to-person.


Find a company that is transparent about roadmap, customers, where you’re taking the product, etc. Having open communication with the product management team is super important and knowing where goals are being tracked and how far along they are is very useful when working remote.

What type of work / planning system are they using?

It’s a good idea to figure out if they’re using Kanban/Scrum or some other variation (turns out no one does scrum or agile exactly the same). The important thing is you understand how they operate, what pieces of scrum (or whatever they’re using) are being utilized, and expectations for things like demo days.

Do they pay market rates?

I’ve seen some companies try and negotiate salary down based on cost of living for where you live. Don’t ever accept this - your talent is worth the same remotely as it is in person in a large city, even more so since you’ll be more productive.

Do they talk about things other than work?

It’s nice to chat about things outside of work to get to know your co-workers, examples from my past have included channels centered around food/recipes, gardening, woodworking, etc. Some place where people can share outside interests without polluting main work channels.

Working, tools, people

Ensure you have a good internet connection

It’s very important to have a solid internet connection. It’s very hard to hear or pair with people when the video looks like pixelated garbage. I used to even have a backup internet line as well (unfortunately my new area doesn’t have multiple providers).

Don’t be afraid to hop on a hangout/video chat

Sometimes it’s easier to go through a code review or work through a problem over video and screen share than it is over chat.

Be respectful to your co-workers when video/hang-outing

I don’t mean the obvious things here like being courteous and respectful.

It’s really important to be aware of your surroundings. It may work amazing for you to be in a coffee shop and around people with some music in the background.

When it’s time to meet with people over video chat though, ensure you can relocate to a quiet space for meetings/pairing. Trying to talk and listen with background music and chatter in a coffee / sandwich shop is a very unpleasant experience for the people on the other end.

Don’t mis-interpret things

Text can sometimes seem aggressive if not phrased properly - if you’ve met or know your team mates, remember 99.9999% of the time they probably aren’t trying to be hostile. Going over code reviews or other in depth architecture discussions over hangouts and video chat can help alleviate this. The same way text messages can be misinterpreted spills over into work chats as well since everything is text - keep an open mind and be flexible.

Here’s a sample application to show how to stitch together Buffalo, gqlgen and graphql subscriptions. Github Repo

I’ll go over the important parts here. After generating your buffalo application you’ll need a graphql schema file and a gqlgen config file:

# schema.graphql
type Example {
	message: String
}

type Subscription {
	exampleAdded: Example!
}

and your config file:

# gqlgen.yml
struct_tag: json
schema:
- schema.graphql
exec:
  filename: exampleql/exec.go
  package: exampleql
model:
  filename: exampleql/models.go
  package: exampleql
resolver:
  filename: exampleql/resolver.go
  type: Resolver

Next let’s generate our graphql files:

go run github.com/99designs/gqlgen --verbose

Now we can open up our resolver.go file and add a New method to make creating the handler easier:

func New() Config {
	return Config{
		Resolvers: &Resolver{},
	}
}

Let’s also add our resolver implementation:

func (r *subscriptionResolver) ExampleAdded(ctx context.Context) (<-chan *Example, error) {
	msgs := make(chan *Example, 1)

	go func() {
		for {
			msgs <- &Example{Message: randString(50)}
			time.Sleep(1 * time.Second)
		}
	}()

	return msgs, nil
}

var letterRunes = []rune("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ")

func randString(n int) *string {
	b := make([]rune, n)
	for i := range b {
		b[i] = letterRunes[rand.Intn(len(letterRunes))]
	}
	s := string(b)
	return &s
}

Inside your app.go file you’ll need to add a few handlers and wrap them in buffalo’s handler as well:

c := cors.New(cors.Options{
	AllowedOrigins:   []string{"http://localhost:3000"},
	AllowCredentials: true,
})

srv := handler.New(exampleql.NewExecutableSchema(exampleql.New()))
srv.AddTransport(transport.Websocket{
	KeepAlivePingInterval: 10 * time.Second,
	Upgrader: websocket.Upgrader{
		CheckOrigin: func(r *http.Request) bool {
			return true
		},
	},
})

app.ANY("/query", buffalo.WrapHandler(c.Handler(srv)))

app.GET("/play", buffalo.WrapHandler(playground.Handler("Example", "/query")))

Now if you head over to the playground and run this query:

subscription {
  exampleAdded {
    message
  }
}

You should see something like this scrolling by:

Things to note:

  • This is not production ready code
  • If you were doing something with multiple load balanced nodes you should be using something like Redis or NATS pub/sub to handle messaging
  • This isn’t cleaning up channels or doing anything that you should be doing for live code

If you’ve been using helm you’ve inevitably run into a case where a

helm upgrade --install

has failed and helm is stuck in a FAILED state when you list your deployments.

Try and make sure any old pods are cleared up (ie: if they’re OutOfEphemeralStorage or in some other error condition).

Next to get around this without doing a helm delete NAME --purge:

helm rollback NAME REVISION

Where REVISION is the failed revision deploy. You can then re-run your upgrade.

This should hopefully go away in Helm 3.

This is useful if you’re building a generic library/package and want to let people pass in types and convert to them/return them.

package main

import (
	"encoding/json"
	"fmt"
	"reflect"
)

type Monkey struct {
	Bananas int
}

func main() {
	deliveryChan := make(chan interface{}, 1)
	someWorker(&Monkey{}, deliveryChan)
	monkey := <-deliveryChan
	fmt.Printf("Monkey: %#v\n", monkey.(*Monkey))
}

func someWorker(inputType interface{}, deliveryChan chan interface{}) {
	local := reflect.New(reflect.TypeOf(inputType).Elem()).Interface()
	json.Unmarshal([]byte(`{"Bananas":20}`), local)
	deliveryChan <- local
}

The reflect.New line is taking the type passed in and creating a new pointer to that struct type, equivalent to &Monkey{}.

The json.Unmarshal line should be using whatever byte array you’re popping off a MQ or stream or something else to send back.
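If you’re on Go 1.18 or newer, generics can express the same pattern without reflection - a minimal sketch, not from the original post:

```go
package main

import (
	"encoding/json"
	"fmt"
)

type Monkey struct {
	Bananas int
}

// someWorker unmarshals the payload into a freshly allocated T and
// sends it on the channel - new(T) replaces the reflect.New dance.
func someWorker[T any](payload []byte, deliveryChan chan *T) {
	local := new(T)
	json.Unmarshal(payload, local)
	deliveryChan <- local
}

func main() {
	deliveryChan := make(chan *Monkey, 1)
	someWorker([]byte(`{"Bananas":20}`), deliveryChan)
	monkey := <-deliveryChan
	fmt.Printf("Monkey: %#v\n", monkey) // Monkey: &main.Monkey{Bananas:20}
}
```

The trade-off is that the channel is now typed per struct, which is also what makes the consumer side type-safe.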

Helm not updating image, how to fix

Jul 16, 2018 - 1 minutes

If you have your imagePullPolicy: Always and deploys aren’t going out (for example if you’re using a static tag, like stable) - then you may be running into a helm templating bug/feature.

If your helm template diff doesn’t change when being applied the update won’t go out, even if you’ve pushed a new image to your docker registry.

A quick way to fix this is to set a commit sha in your CI/CD pipeline; in GitLab, for example, this is $CI_COMMIT_SHA.

If you template this out into a values.yaml file and add it as a label on your Deployment - when you push out updates your template will be different from the remote, and tiller and helm will trigger an update provided you’ve set it properly, for example:

    - helm upgrade -i APP_NAME --set commitHash=$CI_COMMIT_SHA
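For example, the template side of that might look like this (the file layout and `./chart` path are hypothetical - the only requirement is that the label value changes per commit so the diff is never empty):

```yaml
# .gitlab-ci.yml passes the sha through to helm:
#   helm upgrade -i APP_NAME ./chart --set commitHash=$CI_COMMIT_SHA
# templates/deployment.yaml then surfaces it as a pod label:
spec:
  template:
    metadata:
      labels:
        commitHash: "{{ .Values.commitHash }}"
```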

Faster Local Dev With Minikube

Jun 5, 2018 - 1 minutes

If you’re developing against kubernetes services, or want to run your changes inside kubernetes without pushing to a remote registry:

First create a registry running in minikube:

kubectl create -f https://gist.githubusercontent.com/joshrendek/e2ec8bac06706ec139c78249472fe34b/raw/6efc11eb8c2dce167ba0a5e557833cc4ff38fa7c/kube-registry.yaml

Forward your localhost:5000 to 5000 on minikube:

kubectl port-forward --namespace kube-system $(kubectl get po -n kube-system | grep kube-registry-v0 | awk '{print $1;}') 5000:5000

Use minikube’s docker daemon and then push to localhost:5000

eval $(minikube docker-env)
docker push localhost:5000/test-image:latest

And then you can develop your helm charts and deploy quicker using localhost. No need to configure default service account creds or getting temporary creds.

Using localhost eliminates the need to use insecure registry settings removing a lot of docker daemon configuration steps.

Kubernetes On Bare Metal

Apr 1, 2018 - 13 minutes

If you’ve been following kubernetes, you’ll understand there’s a myriad of options available… I’ll cover a few of them briefly and why I didn’t choose them. Don’t know what Kubernetes is? Minikube is the best way to get going locally.

This guide will take you from nothing to a 2 node cluster, automatic SSL for deployed apps, a custom PVC/PV storage class using NFS, and a private docker registry. Helpful tips and bugs I ran into are sprinkled throughout their respective sections.

But first the goals for this cluster:
  • First-class SSL support with LetsEncrypt so we can easily deploy new apps with SSL using just annotations.
  • Bare metal for this conversation means a regular VM/VPS provider or a regular private provider like Proxmox with no special services - or actual hardware.
  • Not require anything fancy (like BIOS control)
  • Be reasonably priced (<$50/month)
  • Be reasonably production-y (this is for side projects, not a huge business critical app). Production-y for this case means a single master with backups being taken of the node.
  • Works with Ubuntu 16.04
  • Works on Vultr (and others like Digital Ocean - providers that are (mostly) generic VM hosts and don’t have specialized APIs and services like AWS/GCE)
  • I also recommend making sure your VM provider supports a software defined firewall and a private network - however this is not a hard requirement.

Overview of Options
  • OpenShift: Owned by RedHat - uses its own special tooling around oc. Minimum requirements were too high for a small cluster. Pretty high vendor lock-in.
  • KubeSpray: unstable. It used to work pretty consistently around 1.6 but when trying to spin up a 1.9 cluster and 1.10 cluster it was unable to finish. I am a fan of Ansible, and if you are as well, this is the project to follow I think.
  • Google Kubernetes Engine: Attempting to stay away from cloud-y providers so outside of the scope of this. If you want a managed offering and are okay with GKE pricing, this is the way to go.
  • AWS: Staying away from cloud-y providers. Cost is also a big factor here since this is a side-project cluster.
  • Tectonic: Requirements are too much for a small cloud provider/installation (PXE boot setup, Matchbox, F5 LB).
  • Kops: Only supports AWS and GCE.
  • Canonical Juju: Requires MAAS, attempted to use but kept getting errors around lxc. Seems to favor cloud provider deploys (AWS/GCE/Azure).
  • Kubicorn: No bare metal support, needs cloud provider APIs to work.
  • Rancher: Rancher is pretty awesome, unfortunately it’s incredibly easy to break the cluster and break things inside Rancher that make the cluster unstable. It does provide a very simple way to play with kubernetes on whatever platform you want.

… And the winner is… Kubeadm. It’s not in any incubator stages and is documented as one of the official ways to get a cluster setup.

Servers we’ll need:
  • $5 (+$5 for 50G block storage) - NFS Pod storage server ( 1 CPU / 1GB RAM / block storage )
  • $5 - 1 Master node ( 1 CPU / 1G RAM )
  • $20 - 1 Worker node ( 2 CPU / 4G RAM - you can choose what you want for this )
  • $5 - (optional) DB server - due to bugs I’ve run into in production environments with docker, various smart people saying not to do it, and issues you can run into with file system consistency, I run a separate DB server for my apps to connect to if they need it.

Total cost: $40.00

Base Worker + Master init-script
 1  #!/bin/bash
 2  apt-get update
 3  apt-get upgrade -y
 4  apt-get -y install python
 5  IP_ADDR=$(echo 10.99.0.$(ip route get 8.8.8.8 | awk '{print $NF; exit}' | cut -d. -f4))
 6  cat <<- EOF >> /etc/network/interfaces
 7  auto ens7
 8  iface ens7 inet static
 9      address $IP_ADDR
10      netmask 255.255.255.0
11      mtu 1450
12  EOF
13  ifup ens7
14
15  apt-get install -y apt-transport-https
16  apt -y install docker.io
17  systemctl start docker
18  systemctl enable docker
19  curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
20  echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" >/etc/apt/sources.list.d/kubernetes.list
21  apt-get update
22  apt-get install -y kubelet kubeadm kubectl kubernetes-cni nfs-common

Lines 2-13 will run on server boot up, install python (used so Ansible can connect and do things later), update and upgrade everything, and then add the private network address. Since Vultr gives you a true private network I’m cheating a bit and just using the last octet of the public IP to define my internal LAN IP.

Line 16 we’re installing the Ubuntu packaged version of docker – this is important. There are a lot of tools that don’t bundle the proper docker version to go along with their k8s installation and that can cause all kinds of issues, including everything not working due to version mismatches.

Lines 15-22 we’re installing the kubernetes repo tools for kubeadm and kubernetes itself.

Setting up the NFS Server

I’m not going to go in depth on setting up an NFS server, there’s a million guides. I will however mention the exports section, which I’ve cobbled together after a few experiments and reading OpenShift docs. There’s also a good amount of documentation if you want to go the CEPH storage route as well, however NFS was the simplest solution to get set up.

Remember to lock down your server with a firewall so everything is locked down except internal network traffic to your VMs.



Export options:

  • no_root_squash - this shouldn’t be used for shared services, but if it’s for your own use and not accessed anywhere else this is fine. This lets the docker containers work with whatever user they’re booting as without conflicting with permissions on the NFS server.
  • no_subtree_check - prevents issues with files being open and renamed at the same time
  • no_wdelay - generally prevents NFS from trying to be smart about when to write, and forces it to write to the disk ASAP.
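Putting those options together, an /etc/exports entry might look like this - the path and network are hypothetical, so use your block storage mount point and your private LAN range:

```
/srv/nfs/k8s 10.99.0.0/24(rw,no_root_squash,no_subtree_check,no_wdelay)
```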

Setting up the master node

On the master node run kubeadm to init the cluster and start kubernetes services:

kubeadm init --allocate-node-cidrs=true --cluster-cidr=10.244.0.0/16

This will start the cluster and set up a pod network on 10.244.0.0/16 (Flannel’s default) for internal pods to use.

Next you’ll notice that the node is in a NotReady state when you do a kubectl get nodes. We need to setup our worker node next.

You can either continue using kubectl on the master node or copy the config to your workstation (depending on how your network permissions are setup):

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

Setting up the worker node

You’ll get a token command to run on workers from the previous step. However if you need to generate new tokens later on when you’re expanding your cluster, you can use kubeadm token list and kubeadm token create to get a new token.

Important Note: Your worker nodes must have a unique hostname, otherwise they will join the cluster and overwrite each other (the 1st node will disappear and things will get rebalanced to the node you just joined). If this happens to you and you want to reset a node, you can run kubeadm reset to wipe that worker node.

Setting up pod networking (Flannel)

Back on the master node we can add our Flannel network overlay. This will let the pods reside on different worker nodes and communicate with each other over internal DNS and IPs.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml

After a few seconds you should see output from kubectl get nodes similar to this (depending on hostnames):

root@k8s-master:~# kubectl get nodes
NAME           STATUS    ROLES     AGE       VERSION
k8s-master     Ready     master    4d        v1.10.0
k8s-worker     Ready     <none>    4d        v1.10.0

Deploying the Kubernetes Dashboard

If you need more thorough documentation, head on over to the dashboard repo. We’re going to follow a vanilla installation:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

Once that is installed you need to setup a ServiceAccount that can request tokens and use the dashboard, so save this to dashboard-user.yaml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

and then apply it

kubectl apply -f dashboard-user.yaml

Next you’ll need to grab the service token for the dashboard authentication and fire up kubectl proxy:

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | cut -f1 -d ' ') | grep -E '^token' | cut -f2 -d':' | tr -d '\t'
kubectl proxy

Now you can access the dashboard at http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login.

Setting up our NFS storage class

When using a cloud provider you normally get a default storage class provided for you (like on GKE). With our bare metal installation if we want PersistentVolumes (PVs) and PersistentVolumeClaims (PVCs) to work, we need to set up our own private storage class.

We’ll be using nfs-client from the incubator for this.

The best way to do this is to clone the repo and go to the nfs-client directory and edit the following files:

  • deploy/class.yaml: This is what your storage class will be called when setting up storage and what shows up in kubectl get sc:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: joshrendek.com/nfs # or choose another name, must match deployment's env PROVISIONER_NAME
  • deploy/deployment.yaml: you must make sure your provisioner name matches here and that you have your NFS server IP set properly and the mount you’re exporting set properly.

Create a file called nfs-test.yaml:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
---
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: gcr.io/google_containers/busybox:1.24
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim

Next just follow the repository instructions:

kubectl apply -f deploy/deployment.yaml
kubectl apply -f deploy/class.yaml
kubectl create -f deploy/auth/serviceaccount.yaml
kubectl create -f deploy/auth/clusterrole.yaml
kubectl create -f deploy/auth/clusterrolebinding.yaml
kubectl patch deployment nfs-client-provisioner -p '{"spec":{"template":{"spec":{"serviceAccount":"nfs-client-provisioner"}}}}'
kubectl patch storageclass managed-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

This creates all the RBAC permissions, adds them to the deployment, and then sets the default storage class provider in your cluster. You should see something similar when running kubectl get sc now:

NAME                            PROVISIONER           AGE
managed-nfs-storage (default)   joshrendek.com/nfs    4d

Now let’s test our deployment and check the NFS share for the SUCCESS file:

kubectl apply -f nfs-test.yaml

If everything is working, move on to the next sections, you’ve gotten NFS working! The only problem I ran into at this point was mis-typing my NFS Server IP. You can figure this out by doing a kubectl get events -w and watching the mount command output and trying to replicate it on the command line from a worker node.

Installing Helm

Up until this point we’ve just been using kubectl apply and kubectl create to install apps. We’ll be using helm to manage our applications and install things going forward for the most part.

If you don’t already have helm installed (and are on OSX): brew install kubernetes-helm, otherwise hop on over to the helm website for installation instructions.

First we’re going to create a helm-rbac.yaml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller-clusterrolebinding
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: ""

Now we can apply everything:

kubectl create -f helm-rbac.yaml
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --upgrade
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'

First we install the RBAC permissions, service accounts, and role bindings. Next we install helm and initialize tiller on the server. Tiller keeps track of which apps are deployed where and when they need updates. Finally we tell the tiller deployment about its new ServiceAccount.

You can verify things are working with a helm ls. Next we can install our first application, Heapster.

Important Helm Note: Helm is great, but sometimes it breaks. If your deployments/upgrades/deletes are hanging, try bouncing the tiller pod:

kubectl delete po -n kube-system -l name=tiller

Installing Heapster

Heapster provides in-cluster metrics and health information:

helm install stable/heapster --name heapster --set rbac.create=true

You should see it installed with a helm ls.

Installing Traefik (LoadBalancer)

First let's create a traefik.yaml values file:

 1serviceType: NodePort
 2externalTrafficPolicy: Cluster
 3replicas: 2
 4cpuRequest: 10m
 5memoryRequest: 20Mi
 6cpuLimit: 100m
 7memoryLimit: 30Mi
 8debug:
 9  enabled: false
10ssl:
11  enabled: true
12acme:
13  enabled: true
14  email: # YOUR EMAIL HERE
15  staging: false
16  logging: true
17  challengeType: http-01
18  persistence:
19    enabled: true
20    annotations: {}
21    accessMode: ReadWriteOnce
22    size: 1Gi
23dashboard:
24  enabled: true
25  domain: # YOUR DOMAIN HERE
26  service:
27    annotations:
28      kubernetes.io/ingress.class: traefik
29  auth:
30    basic:
31      # user: <htpasswd-generated hash>
32gzip:
33  enabled: true
34accessLogs:
35  enabled: false
36  ## Path to the access logs file. If not provided, Traefik defaults it to stdout.
37  # filePath: ""
38  format: common  # choices are: common, json
39rbac:
40  enabled: true
41## Enable the /metrics endpoint, for now only supports prometheus
42## set to true to enable metric collection by prometheus
43# metrics:
44deployment:
45  hostPort:
46    httpEnabled: true
47    httpsEnabled: true

Important things to note here are the hostPort settings - binding to host ports 80 and 443 means the workers can receive HTTP and HTTPS traffic directly, and with multiple worker nodes this lets us specify multiple A records for some level of redundancy. The other important setting is serviceType: NodePort, so we expose ourselves on the worker node's IP (in something like GKE or AWS we would normally be registering with an ELB, and that ELB would talk to our k8s cluster).
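The multiple-A-records idea looks something like this in a DNS zone file (the IPs below are placeholders from the documentation range; use your worker nodes' real addresses):

```
*.sub.yourdomain.com.    300  IN  A  203.0.113.10  ; worker node 1
*.sub.yourdomain.com.    300  IN  A  203.0.113.11  ; worker node 2
```

Clients will round-robin between the records, and if one worker goes down, retries land on the other.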

Now let's install traefik and the dashboard:

1helm install stable/traefik --name traefik -f traefik.yaml --namespace kube-system

You can check the progress of this with kubectl get po -n kube-system -w. Once everything is registered, you should be able to go to https://traefik.sub.yourdomain.com and log in to the dashboard with the basic auth you configured.
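If you'd rather sanity-check from a shell first: basic auth is just a base64-encoded user:password pair in a header, and curl builds it for you with -u (the hostname and credentials below are placeholders):

```shell
# What curl sends in the Authorization header for -u admin:hunter2:
printf 'admin:hunter2' | base64   # prints YWRtaW46aHVudGVyMg==

# Hit the dashboard through traefik (placeholder hostname):
# curl -u admin:hunter2 https://traefik.sub.yourdomain.com
```

A 401 here means traefik is routing fine and only the credentials are wrong; a connection error means the ingress or DNS isn't set up yet.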

Private Docker Registry

Provided you got everything working in the previous step (HTTPS works and LetsEncrypt was set up automatically for your traefik dashboard), you can continue on.

First we'll make a registry.yaml file with our custom values, enabling traefik for our ingress:

 1replicaCount: 1
 2ingress:
 3  enabled: true
 4  # Used to create an Ingress record.
 5  hosts:
 6    - registry.svc.bluescripts.net
 7  annotations:
 8    kubernetes.io/ingress.class: traefik
 9persistence:
10  accessMode: 'ReadWriteOnce'
11  enabled: true
12  size: 10Gi
13  # storageClass: '-'
14# set the type of filesystem to use: filesystem, s3
15storage: filesystem
16secrets:
17  haSharedSecret: ""

And putting it all together:

1helm install -f registry.yaml --name registry stable/docker-registry

Provided all that worked, you should now be able to push and pull images and log in to your registry at registry.sub.yourdomain.com
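A quick smoke test of the new registry might look like this (the image name and tag are hypothetical; substitute your own project):

```shell
# Hypothetical image reference - substitute your registry host and project:
REGISTRY=registry.sub.yourdomain.com
IMAGE="$REGISTRY/myapp:v1"
echo "$IMAGE"

# The full round trip, once docker login has succeeded:
# docker login "$REGISTRY"
# docker build -t "$IMAGE" .
# docker push "$IMAGE"
# docker pull "$IMAGE"
```

If the push succeeds and the pull works from a second machine, the registry, ingress, and TLS are all wired up correctly.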

Configuring docker credentials (per namespace)

There are several ways you can set up docker auth - for example ServiceAccounts or ImagePullSecrets - I'm going to show the latter.

Take your docker config that should look something like this:

1{
2        "auths": {
3                "registry.sub.yourdomain.com": {
4                        "auth": "BASE64 ENCODED user:pass"
5                }
6        }
7}

and base64-encode that whole file/string. Make it all one line, then create a registry-creds.yaml file:
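On Linux, base64 -w0 (GNU coreutils) produces the required single line. A self-contained sketch using a throwaway file (point it at ~/.docker/config.json for real use):

```shell
# Demo with a throwaway file; use ~/.docker/config.json for real.
CONFIG=$(mktemp)
printf '{"auths":{"registry.sub.yourdomain.com":{"auth":"dXNlcjpwYXNz"}}}' > "$CONFIG"

# -w0 disables line wrapping so the output is a single line:
ENCODED=$(base64 -w0 "$CONFIG")
echo "$ENCODED"

# Sanity check: decoding gives back the original file:
printf '%s' "$ENCODED" | base64 -d | diff - "$CONFIG" && echo "round-trips OK"
rm -f "$CONFIG"
```

On macOS, plain base64 already emits one line and has no -w flag.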

1apiVersion: v1
2kind: Secret
3metadata:
4  name: regcred
5  namespace: your_app_namespace
6data:
7  .dockerconfigjson: BASE64_ENCODED_CREDENTIALS
8type: kubernetes.io/dockerconfigjson

Create your app namespace with kubectl create namespace your_app_namespace, then apply the secret:

1kubectl apply -f registry-creds.yaml

You can now delete this file (or encrypt it with GPG, etc) - just don’t commit it anywhere. Base64 encoding a string won’t protect your credentials.

You would then specify it in your helm deployment.yaml like:

1spec:
2  replicas: {{ .Values.replicaCount }}
3  template:
4    metadata:
5      labels:
6        app: {{ template "fullname" . }}
7    spec:
8      imagePullSecrets:
9        - name: regcred

Deploying your own applications

I generally make a deployments folder then do a helm create app_name in there. You’ll want to edit the values.yaml file to match your docker image names and vars.

You’ll need to edit the templates/ingress.yaml file and make sure you have a traefik annotation:

1  annotations:
2    kubernetes.io/ingress.class: traefik

And finally here is an example deployment.yaml that has a few extra things from the default:

 1apiVersion: extensions/v1beta1
 2kind: Deployment
 3metadata:
 4  name: {{ template "fullname" . }}
 5  labels:
 6    chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
 7spec:
 8  replicas: {{ .Values.replicaCount }}
 9  template:
10    metadata:
11      labels:
12        app: {{ template "fullname" . }}
13    spec:
14      imagePullSecrets:
15        - name: regcred
16      affinity:
17        podAntiAffinity:
18          requiredDuringSchedulingIgnoredDuringExecution:
19          - topologyKey: "kubernetes.io/hostname"
20            labelSelector:
21              matchLabels:
22                app:  {{ template "fullname" . }}
23      containers:
24      - name: {{ .Chart.Name }}
25        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
26        imagePullPolicy: {{ .Values.image.pullPolicy }}
27        ports:
28        - containerPort: {{ .Values.service.internalPort }}
29        livenessProbe:
30          httpGet:
31            path: /
32            port: {{ .Values.service.internalPort }}
33          initialDelaySeconds: 5
34          periodSeconds: 30
35          timeoutSeconds: 5
36        readinessProbe:
37          httpGet:
38            path: /
39            port: {{ .Values.service.internalPort }}
40          initialDelaySeconds: 5
41          timeoutSeconds: 5
42        resources:
43{{ toYaml .Values.resources | indent 10 }}

On lines 14-15 we're specifying the registry credentials we created in the previous step.

Assuming a replica count >= 2, lines 16-22 tell kubernetes to schedule the pods on different worker nodes. This prevents both web servers (for instance) from being put on the same node, in case one of the nodes crashes.
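One caveat: if you ever want more replicas than you have worker nodes, the required rule above will leave the extra pods unschedulable. A softer variant (a sketch, using the same labels as the chart above) swaps in preferredDuringSchedulingIgnoredDuringExecution so the scheduler spreads pods when it can but still places them when it can't:

```yaml
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        topologyKey: "kubernetes.io/hostname"
        labelSelector:
          matchLabels:
            app: {{ template "fullname" . }}
```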

Lines 29-41 are going to depend on your app - if your server is slow to start up, these values may not make sense and can cause your app to constantly flip between Running and Error states as its containers get reaped by the liveness checks.

And provided you just have configuration changes to try out (the container is already built and in a registry), you can iterate locally:

1helm upgrade your_app_name . -i --namespace your_app_name --wait --debug

Integrating with GitLab / CICD Pipelines

Here is a sample .gitlab-ci.yml that you can use for deploying - this one builds a small go binary for ifcfg.net:

 1stages:
 2    - build
 3    - deploy
 4
 5variables:
 6  DOCKER_DRIVER: overlay2
 7  IMAGE: registry.sub.yourdomain.com/project/web
 8
 9services:
10- docker:dind
11
12build:
13  image: docker:latest
14  stage: build
15  script:
16    - docker build -t $IMAGE:$CI_COMMIT_SHA .
17    - docker tag $IMAGE:$CI_COMMIT_SHA $IMAGE:latest
18    - docker push $IMAGE:$CI_COMMIT_SHA
19    - docker push $IMAGE:latest
20
21deploy:
22  image: ubuntu
23  stage: deploy
24  before_script:
25    - apt-get update && apt-get install -y curl
26    - curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
27  script:
28    - cd deployment/ifcfg
29    - KUBECONFIG=<(echo "$KUBECONFIG") helm upgrade ifcfg . -i --namespace ifcfg --wait --debug --set "image.tag=$CI_COMMIT_SHA"

Lines 16-19 are where we tag both latest and our committed SHA, and push both to the registry we made in the previous steps.

Line 26 is installing helm.

Line 29 is doing a few things. First thing to note is we have our ~/.kube/config file set as an environment variable in gitlab. The <(echo ...) bit is a little shell trick that makes the variable's contents look like a file on the file system (that way we don't have to write it out in a separate step). upgrade -i says to upgrade our app, and if it doesn't exist yet, to install it. The last important bit is image.tag=$CI_COMMIT_SHA - this sets you up for deploying tagged releases instead of always deploying the latest from your repository.
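The process substitution is plain bash, so you can see what it does locally:

```shell
# <(command) expands to a path like /dev/fd/63; reading that path yields
# the command's stdout, so programs that expect a file are happy:
KUBECONFIG_CONTENTS='fake: config'
cat <(echo "$KUBECONFIG_CONTENTS")   # prints: fake: config
echo <(echo hi)                      # prints the /dev/fd/NN path itself
```

Note this is a bash feature - it won't work under plain sh, so make sure your CI job runs bash.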

That's it - you should now have an automated build pipeline set up for a project on your working kubernetes cluster.