{ Josh Rendek }

<3 Go & Kubernetes

15 Years of Remote Work

Mar 22, 2020 - 6 minutes

In one form or another I’ve worked remotely since 2005 (with a brief mix of remote/onsite for about 2 years).

Below is a checklist of the things I’ve found are required to have a successful remote work life and ensure you join a great remote team.

Why I like working remote

If it’s going to be your first time working remote, you should figure out why you want to do it.

I get to have my own space

I prefer having my own area to work in. I have surround sound, an L-shaped desk spanning a good chunk of a wall, and plenty of desk space. Headphones bother my ears, so this lets me listen to music as loud as I want, keep the temperature where I want it, and set up my work area with personal knick knacks (without having to worry about them vanishing).

One of the important things is to have a dedicated working area - whether that’s a room in your house or even renting a co-working space (I’ve had team-mates do that).

Creating your own routine

Sign in to chat, say hello, then go make some coffee. Figure out a pattern that works for you - you probably don’t have far to walk to get to your desk. There’s no rush to beat traffic or hit up your coffee shop on the way in. One of my favorite things about being at home is being able to make my lunch fresh whenever I want (usually I do meal-prep for a few days) - some offices will have a kitchen, but it’s not the same as having your own pots, pans and knives.

Figure out how you like to work

Since you have a dedicated work space you can work how you like. Depending on my mood, I’ll have anything from Spotify going, to listening to cooking shows in the background, to having re-runs on Netflix for background noise. I’ve had friends also do things like have CSPAN and other news networks on for background noise.

Don’t waste time traveling

I worked on a hybrid team recently, and some people in NYC would spend hours traveling. That’s hours of your life gone sitting in a car or train.

Not having to sit in traffic (even if you live only a short way from the office) is one less stress to worry about at the end of the day.

Pets!

When my previous dog Spunky was going through chemo and cancer surgeries, it was such a relief to be able to be at home with him. This isn’t something I would’ve been able to do at a regular office job.

Currently I have two golden retrievers and a lhasa - being able to reach down and pet your dogs or step outside to play with them is amazing. When the weather is nice I’ll even grab a lawn chair and sit outside to work and play fetch until the golden decides to bury the ball in the yard.

No open office plans

Open office plans are awful, and you get to avoid them by working from home! Unfortunately one company I worked at with a hybrid setup had an open office plan and a ping pong table, making it almost impossible to hear co-workers when pairing or meeting unless they went into a separate conference room.

You have a strong local network of friends

You’ll be working remotely - you need to make sure you still get out and interact with people. Having a good network of friends locally helps give you that social fix you may need by not having officemates.

Things to look for in remote companies

Now that you know why you want to work remotely, you need to know what to look for in companies.

Make sure a good portion of the company is actually remote

If you’re joining a smaller team within the company, it’s okay to compromise if the team is fully remote as well. The important aspect here is that there are expectations around communicating with people in the open (ie: no water cooler or office hallway chats).

How do they communicate?

You want to make sure there’s a mix of online/offline and face-to-face communication. Online chats like Slack/Mattermost/Hipchat cover the immediate/quick conversations. Email is generally what I’ve used for offline communication.

Video chat is important and can help make explaining things and clarifying designs a lot easier than text. I’ve used Zoom and Google Meet/Hangouts in the past (Zoom being my preference).

Do they meetup?

I hate flying, but it’s one of the things I’ve compromised on. Ensure your company (or team, if it’s fully remote) is meeting up at least once a year. It’s very important to build work friendships and rapport with your team-mates. My preference is twice a year but that will vary person-to-person.

Transparency

Find a company that is transparent about roadmap, customers, where you’re taking the product, etc. Having open communication with the product management team is super important and knowing where goals are being tracked and how far along they are is very useful when working remote.

What type of work / planning system are they using?

It’s a good idea to figure out if they’re using Kanban/Scrum or some other variation (turns out no one does scrum or agile exactly the same). The important thing is that you understand how they operate, what pieces of scrum (or whatever they’re using) are being utilized, and expectations for things like demo days.

Do they pay market rates?

I’ve seen some companies try and negotiate salary down based on cost of living for where you live. Don’t ever accept this - your talent is worth the same remotely as it is in person in a large city, even more so since you’ll be more productive.

Do they talk about things other than work?

It’s nice to chat about things outside of work to get to know your co-workers - examples from my past have included channels centered around food/recipes, gardening, woodworking, etc. Some place where people can share outside interests without polluting main work channels.

Working, tools, people

Ensure you have a good internet connection

It’s very important to have a solid internet connection. It’s very hard to hear or pair with people when the video looks like pixelated garbage. I used to have a backup internet line as well (unfortunately my new area doesn’t have multiple providers).

Don’t be afraid to hop on a hangout/video chat

Sometimes it’s easier to go through a code review or work through a problem over video and screen share than it is over chat.

Be respectful to your co-workers on video chats

I don’t mean the obvious things here like being courteous and respectful.

It’s really important to be aware of your surroundings. It may work great for you to be in a coffee shop, around people, with some music in the background.

When it’s time to meet with people over video chat though, ensure you can relocate to a quiet space for meetings/pairing. Trying to talk and listen with background music and chatter in a coffee / sandwich shop is a very unpleasant experience for the people on the other end.

Don’t mis-interpret things

Text can sometimes seem aggressive if not phrased properly - if you’ve met or know your teammates, remember that 99.9999% of the time they aren’t trying to be hostile. Going over code reviews or other in-depth architecture discussions over a video chat can help alleviate this. The same way text messages can be misinterpreted, work chats can be too, since everything is text - keep an open mind and be flexible.

Here’s a sample application to show how to stitch together Buffalo, gqlgen and graphql subscriptions. Github Repo

I’ll go over the important parts here. After generating your buffalo application you’ll need a graphql schema file and a gqlgen config file:

# schema.graphql
type Example {
	message: String
}

type Subscription {
    exampleAdded: Example!
}

and your config file:

# gqlgen.yml
struct_tag: json
schema:
- schema.graphql
exec:
  filename: exampleql/exec.go
  package: exampleql
model:
  filename: exampleql/models.go
  package: exampleql
resolver:
  filename: exampleql/resolver.go
  type: Resolver

Next let’s generate our GraphQL files:

go run github.com/99designs/gqlgen --verbose

Now we can open up our resolver.go file and add a New method to make creating the handler easier:

func New() Config {
	return Config{
		Resolvers: &Resolver{},
	}
}

Let’s also add our resolver implementation:

func (r *subscriptionResolver) ExampleAdded(ctx context.Context) (<-chan *Example, error) {
	msgs := make(chan *Example, 1)

	go func() {
		for {
			msgs <- &Example{Message: randString(50)}
			time.Sleep(1 * time.Second)
		}
	}()
	return msgs, nil
}

var letterRunes = []rune("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ")

func randString(n int) *string {
	b := make([]rune, n)
	for i := range b {
		b[i] = letterRunes[rand.Intn(len(letterRunes))]
	}
	s := string(b)
	return &s
}

Inside your app.go file you’ll need to add a few handlers and wrap them in buffalo’s handler as well:

c := cors.New(cors.Options{
	AllowedOrigins:   []string{"http://localhost:3000"},
	AllowCredentials: true,
})

srv := handler.New(exampleql.NewExecutableSchema(exampleql.New()))
srv.AddTransport(transport.POST{})
srv.AddTransport(transport.Websocket{
	KeepAlivePingInterval: 10 * time.Second,
	Upgrader: websocket.Upgrader{
		CheckOrigin: func(r *http.Request) bool {
			return true
		},
	},
})

app.ANY("/query", buffalo.WrapHandler(c.Handler(srv)))

app.GET("/play", buffalo.WrapHandler(playground.Handler("Example", "/query")))

Now if you head over to the playground and run this query:

subscription {
  exampleAdded {
    message
  }
}

You should see something like this scrolling by:


Things to note:

  • This is not production ready code
  • If you were doing something with multiple load balanced nodes you should be using something like Redis or NATS pub/sub to handle messaging (see the sketch below)
  • This isn’t cleaning up channels or doing anything that you should be doing for live code
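As a rough illustration of that pub/sub point, a NATS-backed version of the resolver might look something like this - the subject name and payload shape are made up for the example, and this is a sketch rather than production code:

// Rough sketch only - assumes "encoding/json" and "github.com/nats-io/nats.go"
// are imported, and that something else publishes JSON events to "examples.added".
func (r *subscriptionResolver) ExampleAdded(ctx context.Context) (<-chan *Example, error) {
	msgs := make(chan *Example, 1)

	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		return nil, err
	}

	// Forward every published event onto the subscription channel.
	sub, err := nc.Subscribe("examples.added", func(m *nats.Msg) {
		var ex Example
		if err := json.Unmarshal(m.Data, &ex); err == nil {
			msgs <- &ex
		}
	})
	if err != nil {
		nc.Close()
		return nil, err
	}

	// Tear everything down when the client disconnects.
	go func() {
		<-ctx.Done()
		sub.Unsubscribe()
		nc.Close()
	}()

	return msgs, nil
}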

If you’ve been using helm you’ve inevitably run into a case where a

helm upgrade --install

has failed and helm is stuck in a FAILED state when you list your deployments.

Try and make sure any old pods are cleared up (ie: if they’re in OutOfEphemeralStorage or some other error condition).

Next to get around this without doing a helm delete NAME --purge:

helm rollback NAME REVISION


Where REVISION is the revision of the failed deploy. You can then re-run your upgrade.
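If you’re not sure which revision that is, helm history will list each revision along with its status (NAME and CHART_PATH here are placeholders):

# find the FAILED (or last DEPLOYED) revision
helm history NAME

# after rolling back, re-run your upgrade
helm upgrade --install NAME CHART_PATH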

This should hopefully go away in Helm 3.

This is useful if you’re building a generic library/package and want to let people pass in types and convert to them/return them.

 1package main
 2
 3import (
 4	"encoding/json"
 5	"fmt"
 6	"reflect"
 7)
 8
 9type Monkey struct {
10	Bananas int
11}
12
13func main() {
14	deliveryChan := make(chan interface{}, 1)
15	someWorker(&Monkey{}, deliveryChan)
16	monkey := <- deliveryChan
17	fmt.Printf("Monkey: %#v\n", monkey.(*Monkey))
18}
19
20func someWorker(inputType interface{}, deliveryChan chan interface{}) {
21	local := reflect.New(reflect.TypeOf(inputType).Elem()).Interface()
22	json.Unmarshal([]byte(`{"Bananas":20}`), local)
23	deliveryChan <- local
24}


Line 21 is getting the type passed in and creating a new pointer of that struct type, equivalent to &Monkey{}

Line 22 should use whatever byte array you’re popping off a MQ or stream or something else to send back.

Helm not updating image, how to fix

Jul 16, 2018 - 1 minutes

If you have your imagePullPolicy: Always and deploys aren’t going out (for example if you’re using a static tag, like stable) - then you may be running into a helm templating bug/feature.

If your helm template diff doesn’t change when being applied the update won’t go out, even if you’ve pushed a new image to your docker registry.

A quick way to fix this is to set a commit SHA in your CICD pipeline; in GitLab, for example, this is $CI_COMMIT_SHA.

If you template this out into a values.yaml file and add it as a label on your Deployment, then when you push out updates your rendered template will differ from the remote one, and tiller and helm will trigger an update - provided you’ve set it properly, for example:

script:
    - helm upgrade -i APP_NAME CHART_PATH --set commitHash=$CI_COMMIT_SHA
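The values/template side of that could look something like this (the names here are just an example) - the point is only that the rendered manifest changes on every commit:

# values.yaml
commitHash: ""

# templates/deployment.yaml (snippet)
spec:
  template:
    metadata:
      labels:
        commitHash: "{{ .Values.commitHash }}"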

Faster Local Dev With Minikube

Jun 5, 2018 - 1 minutes

If you’re developing against Kubernetes services, or want to run your changes inside Kubernetes without pushing to a remote registry:

First create a registry running in minikube:

kubectl create -f https://gist.githubusercontent.com/joshrendek/e2ec8bac06706ec139c78249472fe34b/raw/6efc11eb8c2dce167ba0a5e557833cc4ff38fa7c/kube-registry.yaml

Forward your localhost:5000 to 5000 on minikube:

kubectl port-forward --namespace kube-system $(kubectl get po -n kube-system | grep kube-registry-v0 | awk '{print $1;}') 5000:5000

Use minikube’s docker daemon and then push to localhost:5000

eval $(minikube docker-env)
docker push localhost:5000/test-image:latest
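The image has to be built (or tagged) with the localhost:5000 prefix first, for example:

docker build -t localhost:5000/test-image:latest .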

And then you can develop your helm charts and deploy quicker using localhost. No need to configure default service account creds or get temporary creds.

Using localhost eliminates the need to use insecure registry settings removing a lot of docker daemon configuration steps.

Kubernetes On Bare Metal

Apr 1, 2018 - 13 minutes

If you’ve been following kubernetes, you’ll understand there’s a myriad of options available… I’ll cover a few of them briefly and why I didn’t choose them. Don’t know what Kubernetes is? Minikube is the best way to get going locally.

This guide will take you from nothing to a 2 node cluster, automatic SSL for deployed apps, a custom PVC/PV storage class using NFS, and a private docker registry. Helpful tips and bugs I ran into are sprinkled throughout their respective sections.

But first the goals for this cluster:
  • First-class SSL support with LetsEncrypt so we can easily deploy new apps with SSL using just annotations.
  • Bare metal for this conversation means a regular VM/VPS provider or a regular private provider like Proxmox with no special services - or actual hardware.
  • Not require anything fancy (like BIOS control)
  • Be reasonably priced (<$50/month)
  • Be reasonably production-y (this is for side projects, not a huge business critical app). Production-y for this case means a single master with backups being taken of the node.
  • Works with Ubuntu 16.04
  • Works on Vultr (and others like Digital Ocean - providers that are (mostly) generic VM hosts and don’t have specialized APIs and services like AWS/GCE)
  • I also recommend making sure your VM provider supports a software defined firewall and a private network - however this is not a hard requirement.

Overview of Options
  • OpenShift: Owned by RedHat - uses its own special tooling around oc. Minimum requirements were too high for a small cluster. Pretty high vendor lock-in.
  • KubeSpray: unstable. It used to work pretty consistently around 1.6 but when trying to spin up a 1.9 cluster and 1.10 cluster it was unable to finish. I am a fan of Ansible, and if you are as well, this is the project to follow I think.
  • Google Kubernetes Engine: Attempting to stay away from cloud-y providers so outside of the scope of this. If you want a managed offering and are okay with GKE pricing, this is the way to go.
  • AWS: Staying away from cloud-y providers. Cost is also a big factor here since this is a side-project cluster.
  • Tectonic: Requirements are too much for a small cloud provider/installation ( PXE boot setup, Matchbox, F5 LB ).
  • Kops: Only supports AWS and GCE.
  • Canonical Juju: Requires MAAS, attempted to use but kept getting errors around lxc. Seems to favor cloud provider deploys (AWS/GCE/Azure).
  • Kubicorn: No bare metal support, needs cloud provider APIs to work.
  • Rancher: Rancher is pretty awesome, unfortunately it’s incredibly easy to break the cluster and break things inside Rancher that make the cluster unstable. It does provide a very simple way to play with kubernetes on whatever platform you want.

… And the winner is… Kubeadm. It’s not in any incubator stages and is documented as one of the official ways to get a cluster setup.

Servers we’ll need:
  • $5 (+$5 for 50G block storage) - NFS Pod storage server ( 1 CPU / 1GB RAM / block storage )
  • $5 - 1 Master node ( 1 CPU / 1G RAM )
  • $20 - 1 Worker node ( 2 CPU / 4G RAM - you can choose what you want for this )
  • $5 - (optional) DB server - due to bugs I’ve run into in production environments with docker, various smart people saying not to do it, and issues you can run into with file system consistency, I run a separate DB server for my apps to connect to if they need it.

Total cost: $40.00

Base Worker + Master init-script
 1#!/bin/sh
 2apt-get update
 3apt-get upgrade -y
 4apt-get -y install python
 5IP_ADDR=$(echo 10.99.0.$(ip route get 8.8.8.8 | awk '{print $NF; exit}' | cut -d. -f4))
 6cat <<- EOF >> /etc/network/interfaces
 7auto ens7
 8iface ens7 inet static
 9    address $IP_ADDR
10    netmask 255.255.0.0
11    mtu 1450
12EOF
13ifup ens7
14
15apt-get install -y apt-transport-https
16apt -y install docker.io
17systemctl start docker
18systemctl enable docker
19curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
20echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" >/etc/apt/sources.list.d/kubernetes.list
21apt-get update
22apt-get install -y kubelet kubeadm kubectl kubernetes-cni nfs-common
23
24reboot

Lines 2-13 will run on server boot up, install python (used so Ansible can connect and do things later), update and upgrade everything, and then add the private network address. Since Vultr gives you a true private network I’m cheating a bit and just using the last octet of the public IP to define my internal LAN IP.

On line 16 we’re installing the Ubuntu packaged version of docker – this is important. There are a lot of tools that don’t bundle the proper docker version to go along with their k8s installation, and that can cause all kinds of issues, including everything not working due to version mismatches.

On lines 15-22 we’re installing the kubernetes repo tools for kubeadm and kubernetes itself.

Setting up the NFS Server

I’m not going to go in depth on setting up an NFS server, there’s a million guides. I will however mention the exports section, which I’ve cobbled together after a few experiments and reading the OpenShift docs. There’s also a good amount of documentation if you want to go the CEPH storage route as well, however NFS was the simplest solution to get set up.

Remember to lock down your server with a firewall so nothing is reachable except internal network traffic from your VMs.

/etc/exports

/srv/kubestorage 10.99.0.0/255.255.255.0(rw,no_root_squash,no_wdelay,no_subtree_check)

Export options:

  • no_root_squash - this shouldn’t be used for shared services, but if it’s for your own use and not accessed anywhere else this is fine. This lets the docker containers work with whatever user they’re booting as without conflicting with permissions on the NFS server.
  • no_subtree_check - prevents issues with files being open and renamed at the same time
  • no_wdelay - generally prevents NFS from trying to be smart about when to write, and forces it to write to the disk ASAP.

Setting up the master node

On the master node run kubeadm to init the cluster and start kubernetes services:

kubeadm init --pod-network-cidr=10.244.0.0/16

This will start the cluster and setup a pod network on 10.244.0.0/16 for internal pods to use.

Next you’ll notice that the node is in a NotReady state when you do a kubectl get nodes. We need to setup our worker node next.

You can either continue using kubectl on the master node or copy the config to your workstation (depending on how your network permissions are setup):

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

Setting up the worker node

You’ll get a token command to run on workers from the previous step. However if you need to generate new tokens later on when you’re expanding your cluster, you can use kubeadm token list and kubeadm token create to get a new token.
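The join command it prints looks roughly like this (the IP, token, and CA hash below are placeholders - yours will differ):

kubeadm join 10.99.0.1:6443 --token TOKEN_FROM_KUBEADM --discovery-token-ca-cert-hash sha256:CA_CERT_HASH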

Important Note: Your worker nodes MUST have unique hostnames, otherwise they will join the cluster and overwrite each other (the 1st node will disappear and things will get rebalanced to the node you just joined). If this happens to you and you want to reset a node, you can run kubeadm reset to wipe that worker node.

Setting up pod networking (Flannel)

Back on the master node we can add our Flannel network overlay. This will let the pods reside on different worker nodes and communicate with each other over internal DNS and IPs.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml

After a few seconds you should see output from kubectl get nodes similar to this (depending on hostnames):

root@k8s-master:~# kubectl get nodes
NAME           STATUS    ROLES     AGE       VERSION
k8s-master     Ready     master    4d        v1.10.0
k8s-worker     Ready     <none>    4d        v1.10.0

Deploying the Kubernetes Dashboard

If you need more thorough documentation, head on over to the dashboard repo. We’re going to follow a vanilla installation:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

Once that is installed you need to setup a ServiceAccount that can request tokens and use the dashboard, so save this to dashboard-user.yaml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

and then apply it

kubectl apply -f dashboard-user.yaml

Next you’ll need to grab the service token for the dashboard authentication and fire up kubectl proxy:

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | cut -f1 -d ' ') | grep -E '^token' | cut -f2 -d':' | tr -d '\t'
kubectl proxy

Now you can access the dashboard at http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login.

Setting up our NFS storage class

When using a cloud provider you normally get a default storage class provided for you (like on GKE). With our bare metal installation if we want PersistentVolumes (PVs) and PersistentVolumeClaims (PVCs) to work, we need to set up our own private storage class.

We’ll be using nfs-client from the incubator for this.

The best way to do this is to clone the repo and go to the nfs-client directory and edit the following files:

  • deploy/class.yaml: This is what your storage class will be called when setting up storage and in kubectl get sc output:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: joshrendek.com/nfs # or choose another name, must match deployment's env PROVISIONER_NAME'
  • deploy/deployment.yaml: you must make sure your provisioner name matches here, and that you have your NFS server IP and the mount you’re exporting set properly.

Create a file called nfs-test.yaml:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
---
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: gcr.io/google_containers/busybox:1.24
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim

Next just follow the repository instructions:

kubectl apply -f deploy/deployment.yaml
kubectl apply -f deploy/class.yaml
kubectl create -f deploy/auth/serviceaccount.yaml
kubectl create -f deploy/auth/clusterrole.yaml
kubectl create -f deploy/auth/clusterrolebinding.yaml
kubectl patch deployment nfs-client-provisioner -p '{"spec":{"template":{"spec":{"serviceAccount":"nfs-client-provisioner"}}}}'
kubectl patch storageclass managed-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

This creates all the RBAC permissions, adds them to the deployment, and then sets the default storage class provider in your cluster. You should see something similar when running kubectl get sc now:

NAME                            PROVISIONER           AGE
managed-nfs-storage (default)   joshrendek.com/nfs    4d

Now let’s test our deployment and check the NFS share for the SUCCESS file:

kubectl apply -f nfs-test.yaml

If everything is working, move on to the next sections - you’ve gotten NFS working! The only problem I ran into at this point was mistyping my NFS server IP. You can figure this out by doing a kubectl get events -w, watching the mount command output, and trying to replicate it on the command line from a worker node.
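A quick sanity check is to try the same mount by hand from a worker node (replace NFS_SERVER_IP with your storage server’s private address; the export path is the one from /etc/exports above):

mount -t nfs NFS_SERVER_IP:/srv/kubestorage /mnt
ls /mnt && umount /mnt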

Installing Helm

Up until this point we’ve just been using kubectl apply and kubectl create to install apps. We’ll be using helm to manage our applications and install things going forward for the most part.

If you don’t already have helm installed (and are on OSX): brew install kubernetes-helm, otherwise hop on over to the helm website for installation instructions.

First we’re going to create a helm-rbac.yaml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller-clusterrolebinding
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: ""

Now we can apply everything:

kubectl create -f helm-rbac.yaml
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --upgrade
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'

First we install the RBAC permissions, service accounts, and role bindings. Next we install helm and initialize tiller on the server. Tiller keeps track of which apps are deployed where and when they need updates. Finally we tell the tiller deployment about its new ServiceAccount.

You can verify things are working with a helm ls. Next we can install our first application, Heapster.

Important Helm Note: Helm is great, but sometimes it breaks. If your deployments/upgrades/deletes are hanging, try bouncing the tiller pod:

kubectl delete po -n kube-system -l name=tiller

Installing Heapster

Heapster provides in cluster metrics and health information:

helm install stable/heapster --name heapster --set rbac.create=true

You should see it installed with a helm ls.

Installing Traefik (LoadBalancer)

First let’s create a traefik.yaml values file:

serviceType: NodePort
externalTrafficPolicy: Cluster
replicas: 2
cpuRequest: 10m
memoryRequest: 20Mi
cpuLimit: 100m
memoryLimit: 30Mi
debug:
  enabled: false
ssl:
  enabled: true
acme:
  enabled: true
  email: [email protected]
  staging: false
  logging: true
  challengeType: http-01
  persistence:
    enabled: true
    annotations: {}
    accessMode: ReadWriteOnce
    size: 1Gi
dashboard:
  enabled: true
  domain: # YOUR DOMAIN HERE
  service:
    annotations:
      kubernetes.io/ingress.class: traefik
  auth:
    basic:
      admin: # FILL THIS IN WITH A HTPASSWD VALUE
gzip:
  enabled: true
accessLogs:
  enabled: false
  ## Path to the access logs file. If not provided, Traefik defaults it to stdout.
  # filePath: ""
  format: common  # choices are: common, json
rbac:
  enabled: true
## Enable the /metrics endpoint, for now only supports prometheus
## set to true to enable metric collection by prometheus

deployment:
  hostPort:
    httpEnabled: true
    httpsEnabled: true

Important things to note here are the hostPort settings - with multiple worker nodes this lets us specify multiple A records for some level of redundancy, and binds the pods to host ports 80 and 443 so they can receive HTTP and HTTPS traffic. The other important setting is to use NodePort so we use the worker nodes’ IPs to expose ourselves (normally in something like GKE or AWS we would be registering with an ELB, and that ELB would talk to our k8s cluster).

Now let’s install traefik and the dashboard:

helm install stable/traefik --name traefik -f traefik.yaml --namespace kube-system

You can check the progress of this with kubectl get po -n kube-system -w. Once everything is registered you should be able to go to https://traefik.sub.yourdomain.com and log in to the dashboard with the basic auth you configured.

Private Docker Registry

Provided you got everything working in the previous step (HTTPS works and LetsEncrypt got automatically setup for your traefik dashboard) you can continue on.

First we’ll be making a registry.yaml file with our custom values and enable traefik for our ingress:

replicaCount: 1
ingress:
  enabled: true
  # Used to create an Ingress record.
  hosts:
    - registry.svc.bluescripts.net
  annotations:
    kubernetes.io/ingress.class: traefik
persistence:
  accessMode: 'ReadWriteOnce'
  enabled: true
  size: 10Gi
  # storageClass: '-'
# set the type of filesystem to use: filesystem, s3
storage: filesystem
secrets:
  haSharedSecret: ""
  htpasswd: "YOUR_DOCKER_USERNAME:GENERATE_YOUR_OWN_HTPASSWD_FOR_HERE"

And putting it all together:

helm install -f registry.yaml --name registry stable/docker-registry

Provided all that worked, you should now be able to push and pull images and login to your registry at registry.sub.yourdomain.com
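A quick smoke test from your workstation looks something like this (the image name is just an example):

docker login registry.sub.yourdomain.com
docker tag alpine:latest registry.sub.yourdomain.com/test/alpine:latest
docker push registry.sub.yourdomain.com/test/alpine:latest
docker pull registry.sub.yourdomain.com/test/alpine:latest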

Configuring docker credentials (per namespace)

There are several ways you can set up docker auth (like ServiceAccounts) or ImagePullSecrets - I’m going to show the latter.

Take your docker config that should look something like this:

{
        "auths": {
                "registry.sub.yourdomain.com": {
                        "auth": "BASE64 ENCODED user:pass"
                }
        }
}

and base64 encode that whole file/string. Make it all one line and then create a registry-creds.yaml file:

apiVersion: v1
kind: Secret
metadata:
  name: regcred
  namespace: your_app_namespace
data:
  .dockerconfigjson: BASE64_ENCODED_CREDENTIALS
type: kubernetes.io/dockerconfigjson

Create your app namespace: kubectl create namespace your_app_namespace and apply it.

kubectl apply -f registry-creds.yaml

You can now delete this file (or encrypt it with GPG, etc) - just don’t commit it anywhere. Base64 encoding a string won’t protect your credentials.
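For reference, the encoding step above can be done with something like this (assuming your config lives at ~/.docker/config.json):

base64 ~/.docker/config.json | tr -d '\n'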

You would then specify it in your helm deployment.yaml like:

spec:
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      labels:
        app: {{ template "fullname" . }}
    spec:
      imagePullSecrets:
        - name: regcred

Deploying your own applications

I generally make a deployments folder then do a helm create app_name in there. You’ll want to edit the values.yaml file to match your docker image names and vars.

You’ll need to edit the templates/ingress.yaml file and make sure you have a traefik annotation:

  annotations:
    kubernetes.io/ingress.class: traefik

And finally here is an example deployment.yaml that has a few extra things from the default:

 1apiVersion: extensions/v1beta1
 2kind: Deployment
 3metadata:
 4  name: {{ template "fullname" . }}
 5  labels:
 6    chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
 7spec:
 8  replicas: {{ .Values.replicaCount }}
 9  template:
10    metadata:
11      labels:
12        app: {{ template "fullname" . }}
13    spec:
14      imagePullSecrets:
15        - name: regcred
16      affinity:
17        podAntiAffinity:
18          requiredDuringSchedulingIgnoredDuringExecution:
19          - topologyKey: "kubernetes.io/hostname"
20            labelSelector:
21              matchLabels:
22                app:  {{ template "fullname" . }}
23      containers:
24      - name: {{ .Chart.Name }}
25        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
26        imagePullPolicy: {{ .Values.image.pullPolicy }}
27        ports:
28        - containerPort: {{ .Values.service.internalPort }}
29        livenessProbe:
30          httpGet:
31            path: /
32            port: {{ .Values.service.internalPort }}
33          initialDelaySeconds: 5
34          periodSeconds: 30
35          timeoutSeconds: 5
36        readinessProbe:
37          httpGet:
38            path: /
39            port: {{ .Values.service.internalPort }}
40          initialDelaySeconds: 5
41          timeoutSeconds: 5
42        resources:
43{{ toYaml .Values.resources | indent 10 }}

On lines 14-15 we’re specifying the registry credentials we created in the previous step.

Assuming a replica count >= 2, lines 16-22 are telling kubernetes to schedule the pods on different worker nodes. This will prevent both web servers (for instance) from being put on the same node in case one of them crashes.

Lines 29-41 are going to depend on your app - if your server is slow to start up, these values may not make sense and can cause your app to constantly go into a Running/Error state and get its containers reaped by the liveness checks.

And provided you just have configuration changes to try out (container is already built and in a registry), you can iterate locally:

helm upgrade your_app_name . -i --namespace your_app_name --wait --debug

Integrating with GitLab / CICD Pipelines

Here is a sample .gitlab-ci.yml that you can use for deploying - this is building a small go binary for ifcfg.net:

 1stages:
 2    - build
 3    - deploy
 4
 5variables:
 6  DOCKER_DRIVER: overlay2
 7  IMAGE: registry.sub.yourdomain.com/project/web
 8
 9services:
10- docker:dind
11
12build:
13  image: docker:latest
14  stage: build
15  script:
16    - docker build -t $IMAGE:$CI_COMMIT_SHA .
17    - docker tag $IMAGE:$CI_COMMIT_SHA $IMAGE:latest
18    - docker push $IMAGE:$CI_COMMIT_SHA
19    - docker push $IMAGE:latest
20
21deploy:
22  image: ubuntu
23  stage: deploy
24  before_script:
25    - apt-get update && apt-get install -y curl
26    - curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
27  script:
28    - cd deployment/ifcfg
29    - KUBECONFIG=<(echo "$KUBECONFIG") helm upgrade ifcfg . -i --namespace ifcfg --wait --debug --set "image.tag=$CI_COMMIT_SHA"

Lines 16-19 are where we tag latest and our committed SHA and push both to the registry we made in the previous steps.

Line 26 is installing helm.

Line 29 is doing a few things. First thing to note is we have our ~/.kube/config file set as an environment variable in GitLab. The <(echo)... stuff is a little shell trick that makes it look like a file on the file system (that way we don’t have to write it out in a separate step). upgrade -i says to upgrade our app, and if it doesn’t exist yet, to install it. The last important bit is image.tag=$CI_COMMIT_SHA - this helps you set up for deploying tagged releases instead of always deploying the latest from your repository.

That’s it - you should now have an automated build pipeline set up for a project on your working kubernetes cluster.

The goal of this project was to start with a base directory (in this case The Hidden Wiki) and start spidering out to discover all reachable Tor servers. Some restrictions were placed on this after a few trial runs:

  • Only HTML/JSON was parsed/spidered for more links to follow (no jpegs/xml, etc)
  • There were a few skipped websites, notably: Facebook, Reddit, and a few Blockchain websites due to the amount of spidering/time that would be required
  • Limited to 10k visits per host so we wouldn’t keep spidering infinitely and could finish in a reasonable time frame
  • Non 200 OK status responses were skipped

Stack & Tools

I used a few different tools to build this out:

  • HAProxy to load balance between Tor SOCKS proxies so multiple could be run at the same time to saturate a network link
  • Redis to store state information about visits
  • Golang for the spidering
  • Postgres for data storage

This was all run on a single dedicated server over the period of about 1 week, multiple prototypes ran before that to flush out bugs.

Crawl Stats

Metric | Count
Total Hosts | 107,067
Total Scanned Pages | 14,177,383
Total Visited (non-200+) | 17,038,091

Security Headers

Technology | % using
Content Security Policy (CSP) | 0.15%
Secure Cookie | 0.01%
– httpOnly | 0%
Cross-origin Resource Sharing (CORS) | 0.07%
– Subresource Integrity (SRI) | 0%
Public Key Pinning (HPKP) | 0.01%
Strict Transport Security (HSTS) | 0.11%
X-Content-Type-Options (XCTO) | 0.52%
X-Frame-Options (XFO) | 0.58%
X-XSS-Protection | 0%

Some of these headers are interesting when viewed through a Tor lens. HSTS and HPKP, for example, can be used for super cookies and tracking (although Tor does protect against this across new identities) (source).

Services implementing CORS also help protect users by preventing cookie fingerprinting via scripts and other malicious fingerprinting methods.

Software Stats

We can fingerprint and figure out exposed software by taking a look at a few different signatures, like cookies and headers. There are other methods to fingerprint using the response body but due to server restrictions and time I couldn’t save every single page source, so the results based on headers/titles are below:

Source code hosting

Software | Type | Identifier
Gitea | Cookie | i_like_gitea [src]
GitLab | Cookie | gitlab_session [src]
Gogs | Header | Forked version has header X-Clacks-Overhead: GNU Terry Pratchett from NotABug.org

Build Servers

I’m going to focus on build servers because I think this is the easiest front to breach. Not only has Jenkins had some serious RCEs in the past, it is very helpful in identifying itself with headers and debug information, as seen below. People also generally store sensitive information in build servers, such as SSH keys and cloud provider credentials.

| X-Jenkins-Session: 8965d09b
| X-Instance-Identity: MIIBIjANBgkqhkiG9w0BAQEFAA.....
| Server: Jetty(9.2.z-SNAPSHOT)
| X-Xss-Protection: 1
| X-Jenkins: 2.60.1
| X-Jenkins-Cli-Port: 46689
| X-Content-Type-Options: nosniff nosniff
| X-Frame-Options: sameorigin sameorigin
| X-Hudson-Theme: default
| X-Jenkins-Cli2-Port: 46689
| Referrer-Policy: same-origin
| Content-Type: text/html;charset=UTF-8
| X-Hudson: 1.395
| X-Hudson-Cli-Port: 46689
| Set-Cookie: JSESSIONID.112b5e69=16uts5qfqz6j....Path=/;Secure;HttpOnly

We can get Jenkins version, CLI ports, and Jetty versions all from just visiting the host.

Software | Type | Identifier
Jenkins | Headers | X-Jenkins- and X-Hudson- style headers
GitLab | Cookie | gitlab_session
Gocd | Cookie Path / Title | Generally sets a cookie path at /go and uses - Go in <title> tags
Drone | Title | Sets a drone title

Unfortunately I was unable to find any exposed Gocd or Drone servers.

Software Tracking

Software | Type | Identifier
Trac | Cookie | trac_session
Redmine | Cookie | redmine_session

I was not able to find any running BugZilla, Mantis or OTRS instances.

Total with Server Header: 15,630

Total without header: 91,437

Top 10 (full list of 282 available for download)

nginx | 9619
Apache/2.4.6 (CentOS) OpenSSL/1.0.1e-fips PHP/5.6.30 | 2659
Apache | 1056
nginx/1.6.2 | 249
nginx/1.13.1 | 210
Apache/2.4.10 (Debian) | 161
Apache/2.4.18 (Ubuntu) | 100
Apache/2.2.22 (Debian) | 90
Apache/2.4.7 (Ubuntu) | 82
lighttpd/1.4.31 | 80
FobbaWeb/0.1 | 78
Full list available here

Just from the Server header we can gather a bunch of useful information:

  • 2,659 servers are running a potentially vulnerable OpenSSL version (1.0.1e) [vulns] and vulnerable Apache version [vulns]
  • Many servers are leaving the OS tag on, revealing a mix of operating systems. I think it’s also a safe assumption that the same people who leave fingerprinting on are also using the OS packages of these servers, making it easy to chain OS vulnerabilities and web server vulnerabilities into a combined attack vector:
    • CentOS
    • Debian
    • Ubuntu
    • Windows
    • Raspbian
    • Amazon Linux
    • Fedora
    • Red Hat
    • Trisquel
    • YellowDog
    • FreeBSD
    • Scientific Linux
    • Vine
  • Some people are exposing application servers directly:
    • thin
    • node-static
    • gunicorn
    • Mojolicious
    • WSGI
    • Jetty
    • GlassFish
  • Very old versions of IIS (5.0/6.0), Apache (1.3), and Nginx
  • Nginx appears to dominate the server share on Tor - just taking the top 2 into account, nginx is at least 3.5x as popular as Apache

Summary

This was a fun project to work on and I learned quite a bit about scaling up the tor binary in order to scan the network faster. I’m hoping to make this process a bit less manual and start publishing these results regularly over at my security data website, https://hnypots.com

Have any suggestions for other software to look for? Leave a comment and let me know!

Performance before and after Optimizations

When working with billions of documents in your Elasticsearch cluster, there are a few important things to keep in mind:

  • Look at what the big players do (Elasticsearch/Kibana) for organization and planning
  • Experiment with index sizes that make sense for your business; don’t just assume 1 index for a billion documents is a good idea (even if you have N shards)
  • Understand which metrics to monitor when you are performance testing your cluster
  • Monitor all points of ingestion: Elasticsearch, Load balancers (ELB, HAProxy, etc), and your application code that is inserting

What do the big players do?

Split by date ranges. Based on your data, decide whether daily, weekly, or even monthly splits are best for your dataset. Elasticsearch recommends not going over 30-32G per shard based on current JVM memory recommendations. The reason they recommend staying below 32G of RAM per shard is that above that, the JVM will use uncompressed pointers, which means internal pointers go from 4 bytes to 8 bytes, which (depending on your memory size) can lead to less available heap and also increased GC times from the JVM.

Don’t allocate more than 50% of your system memory for the JVM. Your kernel will cache files and help keep performance up. Over-allocating the JVM can lead to poor performance from the underlying engine, Lucene, which relies on the OS cache as well as the JVM to do searches.

Understand your users: other devs, other systems, etc. Don’t do deep pagination; instead, use scan and scroll. Turn on slow logging to find any queries doing this or returning too many points of data per query.

Index Sizing and Memory

Keeping in mind the 30-32G per shard recommendation, this will determine the number of shards per dataset. Remember shards are not modifiable but replicas are. Shards will increase indexing performance, while replicas will increase search performance.

Overwhelmed and can’t figure out what to do? Just start with an index and see how things go. Using aliases you can create another index later on and map both of them together for searching (and eventually delete the old one if the data expires). If you start out with aliases from the beginning, transitions can be seamless (no need to redeploy to point to the new alias/index name).
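As a rough sketch (the index and alias names here are made up), mapping two date-based indexes under one alias looks like this:

curl -XPOST 'localhost:9200/_aliases' -d '
{
  "actions": [
    { "add": { "index": "logs-2015-01", "alias": "logs" } },
    { "add": { "index": "logs-2015-02", "alias": "logs" } }
  ]
}'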

Metrics to monitor

Use the plugin community to monitor your cluster: ElasticHQ, BigDesk, Head and Paramedic.

Watch for refresh/merge/flush time (ElasticHQ makes this available under Node Diagnostics). For example, with a large index (1TB) that has frequent updates or deletions, in order for the data to actually be freed from the disk and cluster fully, a merge must be performed. When the number of segments in a cluster gets too large, this can cause issues for refreshing and merging.

The basic idea is the larger your index, the more segments, and the more optimization steps that need to be performed. Automatic flushes happen every few seconds so more segments get created - as you can imagine this gets compounded the larger your index is. You can see a full rundown of how deleting and updating works in the documentation.

By separating our indexes into smaller datasets (by day, week, or month) we can eliminate some of the issues that pop up. For example, a large number of segments can cause search performance issues until an optimize command is run (which in itself can cause high IO and make your search unavailable). By reducing the data we reduce the time these operations can take. We also end up at a point where no new data is inserted into the old indexes, so no further optimizations need to be done on them, only on new indexes. Any activity on the old indexes then should only be from searching, which reduces the IO requirements from the cluster for those shards/indexes.

This also greatly simplifies purging old data. Instead of having to have the cluster do merges and optimizations when we remove old documents, we can just delete the old indexes and remove them from the aliases. This will also reduce the IO overhead on your cluster.

Monitoring Ingestion

Watch your ELB response time - is it spiking? Check flush, merge, and indexing times.

Add logging to your posts to understand how long each bulk insert is taking. Play with bulk sizes to see what works best for your document/datasize.

When moving from a single large index to aliased indexes, insertion times went from 500ms-1.5s+ to 50ms on average. Our daily processes that were taking half a day to complete now finish in less than 15 minutes.

Processing 5k log lines per minute? Now we’re processing over 6 million.

Taking the time to understand your database and how each part of it works can be worth the effort especially if you’re looking for performance gains.

If you’ve done any concurrency work in Go you’ve used WaitGroups. They’re awesome!

Now let’s say you have a bunch of workers that do some stuff, but at some point they all need to hit a single API that you’re rate limited against.

You could move to just using a single process and limiting it that way, but that doesn’t scale out very well.

While there are quite a few distributed lock libraries in Go, I didn’t find any that worked similarly to WaitGroups, so I set out to write one.

( If you just want the library, head on over to Github https://github.com/joshrendek/redis-rate-limiter )

Design goals:

  • Prevent deadlocks
  • Hard limit on concurrency (don’t accidentally creep over)
  • Keep it simple to use
  • Use redis
  • Keep the design similar to sync.WaitGroup by using Add() and Done()

Initially I started off using INCR/DECR with WATCH. This somewhat worked, but it was causing the bucket to overflow and go above the limit I defined.

Eventually I found the SETNX command and decided using a global lock with that around adding was the way to go.

So the final design goes through this flow for Add():

  1. Use SETNX to check if a key exists; loop until it doesn’t error (aka the lock is available for acquiring)
  2. Immediately add an expiration to the lock key once acquired so we don’t deadlock
  3. Check the current number of workers running; wait until it is below the max rate
  4. Generate a uuid for the worker lock, use this to SET a key and also add to a worker set
  5. Set an expiration on the worker lock key based on uuid so the worker doesn’t deadlock
  6. Unlock the global lock from SETNX by deleting the key
  7. Clean old, potentially locked workers

Removing is much simpler with Done():

  1. Delete the worker lock key
  2. Remove the worker lock from the worker set

For (1) we want to make sure we don’t hammer Redis or the CPU - so we make sure we can pass an option for a sleep duration while busy-waiting.

(2) Prevents the global lock from stalling out if a worker is cancelled in the middle of a lock acquisition.

Waiting for workers in (3) is done by making sure the cardinality ( SCARD ) of the worker set is less than the worker limit. We loop and wait until this count goes down so we don’t exceed our limit.

(4) and (5) uses a UUID library to generate a unique id for the worker lock name/value. This gets added via SADD to the wait group worker set and also set as a key as well. We set a key with a TTL based on the UUID so we can remove it from the set via another method if it no longer exists.

(6) frees the global lock allowing other processes to acquire it while they wait in (1).

To clear old locks in (7) we need to take the members in the worker set and then query with EXISTS to see if the key still exists. If it doesn’t exist but it is still in the set, we know something bad happened. At this point we need to remove it from the worker set so that the slot frees up. This will prevent worker deadlocks from happening if it fails to reach the Done() function.
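To make that concrete, here is a minimal, untested sketch of the Add() flow (steps 1-6; the stale-worker cleanup from step 7 is omitted). It is not the library’s actual code - it just follows the steps above using the go-redis and google/uuid packages:

import (
	"context"
	"time"

	"github.com/google/uuid"
	"github.com/redis/go-redis/v9"
)

func Add(ctx context.Context, rdb *redis.Client, name string, limit int64) (string, error) {
	lockKey, workerSet := name+":lock", name+":workers"

	// (1) Busy-wait on the global lock via SETNX; (2) the TTL prevents a dead
	// process from holding the lock forever.
	for {
		ok, err := rdb.SetNX(ctx, lockKey, "1", 10*time.Second).Result()
		if err != nil {
			return "", err
		}
		if ok {
			break
		}
		time.Sleep(50 * time.Millisecond)
	}
	// (6) Release the global lock once we're done registering.
	defer rdb.Del(ctx, lockKey)

	// (3) Wait until the number of registered workers is below the limit.
	for {
		n, err := rdb.SCard(ctx, workerSet).Result()
		if err != nil {
			return "", err
		}
		if n < limit {
			break
		}
		time.Sleep(50 * time.Millisecond)
	}

	// (4) + (5) Register this worker under a UUID, with its own TTL so a
	// crashed worker can't deadlock the group.
	id := uuid.NewString()
	if err := rdb.Set(ctx, name+":"+id, "1", time.Minute).Err(); err != nil {
		return "", err
	}
	if err := rdb.SAdd(ctx, workerSet, id).Err(); err != nil {
		return "", err
	}
	return id, nil
}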

The Add() function returns a UUID string that you then pass to Done(uuid) to remove the worker locks. I think this was the simplest approach for doing this however if you have other ideas let me know!

That’s it! We now have a distributed wait group written in go as a library. You can see the source and how to use it over at https://github.com/joshrendek/redis-rate-limiter.