Planet Redpill Linpro

22 January 2020

Pixelpiloten

I wrote an e-book.

The reason I have not blogged much here since autumn 2019 is that I have been out on assignment. But I have also been writing a small e-book for the company I work for (Redpill Linpro).

The book is called “DevOps Guide 2020” and it is about the DevOps experience, the methodology, the agile process, and an index of tools that can help you in your process.

You can download the e-book by clicking the link below, but here is a preview from the section about the correlation between agile software development and the DevOps methodology.

You cannot talk about DevOps without talking about the agile methodology. The DevOps mentality is all about being agile: being able to adapt, to iterate, to have a faster turnaround, to self-organize and to have cross-functional teams. DevOps is a product of the agile software-development movement. So, in many ways it is not a new concept; it is a logical next step in doing software development, following from what agile methodologies teach us. If your company is already working with agile methodologies, it will be easier to adopt DevOps processes and technologies.

Wed 22 Jan 2020, 15:00

17 January 2020

Redpill Linpro Techblog

A look at our new routers

This year we intend to upgrade all the routers in our network backbone to a brand new platform based on open networking devices from Edge-Core running Cumulus Linux. In this post - replete with pictures - we will take a close look at the new routers and the topology of our new network backbone.

Why upgrade?

Our network backbone is today based on the Juniper MX 240 routing platform. Each of them occupies 5 ...

Fri 17 Jan 2020, 00:00

10 January 2020

Redpill Linpro Techblog

Rapidly removing a Cumulus Linux switch from production

Sometimes I need to quickly remove one of our data centre switches from production. Typically this is done in preparation for scheduled maintenance, but it could also be necessary if I suspect that it is misbehaving in some way. Recently I stumbled across an undocumented feature in Cumulus Linux that significantly simplified this procedure.

The key is the file /cumulus/switchd/ctrl/shutdown_linkdown. This file does not normally exist, but if it is created with the contents 1, it changes ...
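
Based purely on the description above, creating the trigger file is a one-liner (a sketch; the resulting behaviour is described in the full post):

  $ echo 1 | sudo tee /cumulus/switchd/ctrl/shutdown_linkdown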

Fri 10 Jan 2020, 00:00

02 January 2020

Ingvar Hagelund

Packages of varnish-6.0.5 with matching vmods for el6 and el7, and a fedora modularity stream

Some time back in 2019, Varnish Software and the Varnish Cache project released a new LTS upstream version 6.0.5 of Varnish Cache. I updated the fedora 29 package, and added a modularity stream varnish:6.0 for fedora 31. I have also built el6 and el7 packages for the varnish60 copr repo, based on the fedora package. A snapshot of matching varnish-modules, and a selection of other misc vmods are also available.

Packages may be fetched from https://copr.fedorainfracloud.org/coprs/ingvar/varnish60/.
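
On a recent Fedora with the dnf copr plugin, enabling the repo and installing should look something like this (a sketch; on el6/el7 the yum copr plugin, or a manually downloaded .repo file from the URL above, works similarly):

  $ dnf copr enable ingvar/varnish60
  $ dnf install varnish varnish-modules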

vmods included in varnish-modules:
vmod-bodyaccess
vmod-cookie
vmod-header
vmod-saintmode
vmod-tcp
vmod-var
vmod-vsthrottle
vmod-xkey

vmods packaged separately:
vmod-blobsynth
vmod-rfc6052
vmod-querystring
vmod-blobdigest
vmod-memcached
vmod-digest
vmod-geoip
vmod-basicauth
vmod-curl
vmod-uuid

by ingvar at Thu 02 Jan 2020, 16:03

25 December 2019

Ingvar Hagelund

Creation Day (J.R.R. Tolkien: The Silmarillion)

A version of this text was presented as the lecture for Creation Day, Holmlia Church, 2019-06-19.

[Introduction: Excerpts from The Ainulindalë accompanied by folk music improvisation on organ and violin]

Some of you may know that I’m a Tolkien enthusiast. I give away Tolkien books on my own birthday. Sometimes I feel like going door-to-door with The Lord of the Rings and its gospel: *Ding* *dong* Goood Morning! Did you know that Tolkien’s books may change your life? (What is that? Yes, Good Morning in all meanings of that expression, thank you). Now, as I can present this before you here in church, I probably won’t have to.

For many, the language professor John Ronald Reuel Tolkien means only his books The Hobbit and The Lord of the Rings. Some have not even read any of his books, but may have seen films with strange wizards, orcs, elves, and a good deal of fighting. But this is Creation Day, so in this small lecture there will be less of orcs, Gandalf, Bilbo, Frodo, and the Ring. Instead I will talk a bit about Tolkien’s thoughts on God as the Creator, his Creation, and Men as God’s sub-creators.

In the introduction, we heard lines from Tolkien’s creation myth, the Ainulindalë, that is, The Music of the Ainur: God gives the Ainur, that is, his angels, a theme to improvise over. Then he lets the song unfold, and when the song is finished, he shows them what they have sung. He says: Ëa! Let this world be! And the song is the World. When the song is sung, its life is the history of the World unfolding. Isn’t that just incredibly beautiful?

The Ainur enjoy the high mountains and the deep valleys, and the sea, and the elves, and the trees, and the flowers, and the animals they have sung about. But in the middle of the harmonies, Melkor’s dissonance is heard. The mightiest of the angels sets his own thoughts above God’s thoughts, and wants to rule, and in pride fill the void with subjects under his dominion. But what first sounds like destroying God’s theme is itself taken up in the song, and makes it even more fulfilled.

In the motion of the sea, the song is most clearly heard. Further on in the Ainulindalë, we hear how Melkor in his rebellion makes extreme cold, freezing the water, and uncontrolled heat, boiling it to steam. But in the midst of the freezing cold, we get beautiful snowflakes, and from the heat and steam, there are clouds and life-giving rain. Tolkien shows us that even when the Creation is challenged by evil, God can always turn the evil to something good in the end. God does not want evil to happen, but when it happens, hope is always there. And when Time comes to its end, and the final chord is sung, we may see that hope and faith in the middle of evil gave the most beautiful music, played in God’s honor.

Those reading Tolkien’s books will soon observe his joy in nature. The books are swarming with life. There are bushes and flowers and trees of all kinds, and everything has value, from pipe-weed to oak trees. There are insects and foxes, eagles and ravens, bears and elephants, and even the simplest flower may be important and save lives. Tolkien loved the landscape where he grew up, with meadows, woods, small rivers, hills, and the odd crossroads with an inn with good beer. But he also loved the snow in the high mountains, the mighty large rivers, the deep cloven valleys, the sun in the sky, the stars of Elbereth, thunder claps and storm over mountains, and the wind of the sea. There is a lot of God’s creation within Tolkien’s Middle-earth.

Tolkien criticizes those who say that fairy-tales and fantastic stories are just escapism, and have nothing to do with reality. In one of his best-known lectures, he turns this upside-down: In a World of evil, somebody wants to tell us that there is Light in the darkness, and makes stories of Hope. What is wrong with that? And Escape is getting from prison to freedom. That is a Good Thing!

Tolkien says that one of the most important features of a fairy-tale is to experience anew the small and large wonders of the World. In The Lord of the Rings we read about Frodo coming to the elven wood Lothlórien; for the first time in his life, he realizes what a Tree really is. He feels the bark, the trunk, the branches, and the leaves. They are full of color and smell and sound and Life. The Ents, the shepherds of the Trees, who watch over the trees in Fangorn Forest, sing and talk to their trees, and mourn them when they die. Trees are so much more than something that’s just there. Go and watch and smell and enjoy the life of the trees in the grove you pass on the way to work every day.

Aragorn and his rangers have watched over Hobbiton and Bree, and held evil forces away, without the people living there knowing about it. When you get to live in freedom and peace, remember in thankfulness who built the peace, and who is watching over it. After reading about the faithful friendship between Sam and Frodo, find again the joy in the relationships with your friends. When the story of Aragorn and Arwen’s long-awaited marriage is told, or Faramir’s spontaneous proposal to Eowyn, or Rose and Sam’s happy wedding, renew the joy of your partner, and delight in your choice. Fantasy and fairy-stories give us the opportunity of recovery, to find again the fantastic in the domestic.

Man is special in God’s creation. Tolkien held that God has put a spark of his creating power within us, making us more than animals. Telling myths and stories, we make new things that were not there before. We are sub-creators.

When we make new stories, or tell or retell myths, they are of course not the Truth. But as light is spread through a prism making a spectrum of colors, our stories are created from the True Light. Thus, myth and stories may show us a glimpse of the Truth. This is good, and not only because they come from God’s true Light. When light is broken into colors, they are no longer perfect white: some become red, some blue, some yellow, some violet. But in this spectrum of colors, something new has been created that was not there before. And it has value in itself.

Unfortunately, we cannot all write like Tolkien. There are those that try, and you get … things … like Game of Thrones and other garbage. But when we use our talents, we are sub-creators too. Whether that is being a priest, or taking pictures, or making music, or doing accounting, or sports, or learning, or baking, or programming, or carpentry: that is fulfilment of the potential of God’s light through us. With all our strange shapes and colors, we bring forth a richness that would not exist without us. And though our sub-creation is not perfect, it still has its source in God’s unbroken bright light.

by ingvar at Wed 25 Dec 2019, 15:07

24 December 2019

Ingvar Hagelund

The Rivendell Resort for the Resting (J.R.R. Tolkien: The Lord of the Rings)

I read Tolkien’s “Canon”, that is, The Hobbit, The Lord of the Rings, and The Silmarillion, every year about Christmas. So also this year.

What was Bilbo up to after he left Hobbiton, and until Frodo met him again in Rivendell? While there are few explicit mentions, there are some clues that we may explore.

First, when Bilbo was packing and leaving Bag End after his long-expected party, he was again going with dwarves. They are not named, but it seems likely that they are the same who delivered goods from Dale to the party, and they have probably stayed in the guest rooms of Bag End since. No dwarves were mentioned at the party, and I guess they would have been, had they been present. So Bilbo goes with the dwarves, and as he tells Frodo later, he goes on his last journey all the way to The Mountain, that is, Erebor, and to Dale. He comes too late to visit his old friend Balin – he had left for Moria. Then Bilbo returned to Rivendell. No more is told about his travels back, though it is easy to speculate. When he left the Mountain, returning homewards the previous time, he was invited to the halls of his friend the Elven King, that is, Thranduil of Mirkwood/Greenwood the Great, but gently rejected the offer. It would be natural to pay him a visit on his second return westwards. The elves would give him safe journey through the forest. By legend, he was probably well known to the Beornings too, and I would guess he got a safe and well-escorted journey back over the Misty Mountains.

Back in Rivendell, Frodo got acquainted with Aragorn the Ranger. If Bilbo used one year on his journey to Erebor and back to Rivendell, he is 112, and Aragorn would be at the frisky age of 71. While Aragorn is often away, helping in the watch of the Shire, or on errantry for Gandalf, like going hunting for Gollum, he is probably often back in Rivendell. Bilbo speaks of him as his good friend, the Dúnadan, and when they sneak away in the Hall of Fire, it does not sound like it is the first time they withdraw to look over his verses.

So what has Bilbo done over the next 16 years? Like Asbjørnsen and Moe, or the Brothers Grimm, he has literally collected fairy tales. The Red Book of Westmarch, which goes from Bilbo and Frodo to Sam at the end of the story, contains several long stories and verses translated from elvish by Bilbo. Within this frame, this is what we may call the Silmarillion Traditions. And based on this, he may have written quite a few verses of his own. When he recites for Erestor and other elves in the Hall of Fire, it is clear that this is not the first time he does this, though he does not often get asked for a second hearing.

Finally, in Rivendell, Bilbo got his own parlour. After Frodo’s reception dinner, and all the singing and reciting of verse in the Hall of Fire, we are told that Frodo and Bilbo retreat to Bilbo’s room, where they can exit to a veranda that looks out over a garden and the river. We know Bilbo was always fond of his garden, and it is nice to know that the elves of Rivendell provided him with one just outside his room.

If I had to grow old in solitude, I’d like a room at the Rivendell Resort for the Resting, please.

by ingvar at Tue 24 Dec 2019, 18:00

23 December 2019

Ingvar Hagelund

J.R.R. Tolkien: The Hobbit

I read Tolkien’s “Canon”, that is, The Hobbit, The Lord of the Rings, and The Silmarillion, every year about Christmas. So also this year.

So much has been said about this book already, so instead of adding more uninteresting chatter to the World, I’d rather again this year show off my latest acquisition for my Hobbit collection: The Annotated Hobbit.

This is a true treasure for Hobbit fans. In addition to the actual text, it contains tons of information, like the contemporary context for the book, different versions and updates among the many editions, possible inspirations and related texts, fun facts, illustrations from Hobbit editions around the world, notes on the meaning of names and places, and so much more.


It even contains the full text of The Quest of Erebor, that was meant as an appendix for The Lord of the Rings, but was cut before its release.

This is the revised and expanded version of The Annotated Hobbit. We owe great thanks to Douglas A. Anderson, who must have gone to extremes while researching this edition.

This book is greatly recommended for those who enjoy being immersed in footnotes, distractions, and fun facts while reading. Ah, that would be the typical Tolkien fan, I guess.


It is another great addition to my ever growing list of Hobbits.

by ingvar at Mon 23 Dec 2019, 18:00

10 November 2019

Pixelpiloten

RBAC - Controlling user access in Kubernetes - Part 2

This is the second part in my series of articles on “RBAC - Controlling user access in Kubernetes”, you can find Part 1 here: https://www.pixelpiloten.se/blog/rbac-in-kubernetes-part-1/.

In this article I will go through how you can create a ClusterRole and a ServiceAccount, and bind the ServiceAccount to the ClusterRole, but tied to a Namespace. And to top it off, I will show how you can generate a Kubeconfig file that you can use or give to a developer who should have access to your cluster, but limited to that specific Namespace and the permissions that the ClusterRole grants.

Step 1 - Create the ClusterRole

  1. Create a file called clusterrole--developer.yml with the content below; this role can basically create, delete and update all the most common things that a Developer would use.
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: developer-role
  rules:
    # Ingresses
    - apiGroups: ["networking.k8s.io", "extensions"]
      resources: ["ingresses"]
      verbs: ["get", "watch", "list", "create", "update", "delete"]
    # Services
    - apiGroups: [""]
      resources: ["services"]
      verbs: ["get", "watch", "list", "create", "update", "delete"]
    # Deployments
    - apiGroups: ["extensions", "apps"]
      resources: ["deployments", "replicasets"]
      verbs: ["get", "watch", "list", "create", "update", "delete"]
    # Pods & Logs
    - apiGroups: [""]
      resources: ["pods", "pods/log", "pods/portforward", "pods/exec"]
      verbs: ["get", "watch", "list", "create", "update", "delete"]
    # Jobs & Cronjobs
    - apiGroups: ["batch"]
      resources: ["cronjobs", "jobs"]
      verbs: ["get", "watch", "list", "create", "update", "delete"]
    # Daemonsets
    - apiGroups: ["apps"]
      resources: ["daemonsets"]
      verbs: ["get", "watch", "list", "create", "update", "delete"]
    # Replication controllers
    - apiGroups: [""]
      resources: ["replicationcontrollers"]
      verbs: ["get", "watch", "list"]
    # Stateful sets
    - apiGroups: ["apps"]
      resources: ["statefulsets"]
      verbs: ["get", "watch", "list"]
    # Configmaps
    - apiGroups: [""]
      resources: ["configmaps"]
      verbs: ["get", "watch", "list", "create", "update", "delete"]
    # Secrets
    - apiGroups: [""]
      resources: ["secrets"]
      verbs: ["get", "watch", "list", "create", "update", "delete"]
  2. Create that ClusterRole with Kubectl.
  $ kubectl apply -f clusterrole--developer.yml

Step 2 - Create Namespace, ServiceAccount and RoleBinding.

  1. Create a file called serviceaccount--developer.yaml with this content. It defines a Namespace, a ServiceAccount and a RoleBinding that ties the ServiceAccount to the ClusterRole we created before, but only within our Namespace. So this ServiceAccount only has those permissions within this Namespace, not cluster-wide.
  apiVersion: v1
  kind: Namespace
  metadata:
    name: developer-namespace
  ---
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: developer
    namespace: developer-namespace
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: developer-rolebinding
    namespace: developer-namespace
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: developer-role
  subjects:
  - kind: ServiceAccount
    name: developer
    namespace: developer-namespace
  2. Create the Namespace, ServiceAccount and RoleBinding with Kubectl.
  $ kubectl apply -f serviceaccount--developer.yaml

Step 3 - Create bash script that can generate a kubeconfig file.

  1. Create a file called kubeconfig.sh and paste this code.
  #!/bin/sh

  SERVICE_ACCOUNT=$1
  NAMESPACE=$2
  SERVER=$3

  SERVICE_ACCOUNT_TOKEN_NAME=$(kubectl -n ${NAMESPACE} get serviceaccount ${SERVICE_ACCOUNT} -o jsonpath='{.secrets[].name}')
  SERVICE_ACCOUNT_TOKEN=$(kubectl -n ${NAMESPACE} get secret ${SERVICE_ACCOUNT_TOKEN_NAME} -o "jsonpath={.data.token}" | base64 --decode)
  SERVICE_ACCOUNT_CERTIFICATE=$(kubectl -n ${NAMESPACE} get secret ${SERVICE_ACCOUNT_TOKEN_NAME} -o "jsonpath={.data['ca\.crt']}")

  cat <<END
  apiVersion: v1
  kind: Config
  clusters:
  - name: default-cluster
    cluster:
      certificate-authority-data: ${SERVICE_ACCOUNT_CERTIFICATE}
      server: ${SERVER}
  contexts:
  - name: default-context
    context:
      cluster: default-cluster
      namespace: ${NAMESPACE}
      user: ${SERVICE_ACCOUNT}
  current-context: default-context
  users:
  - name: ${SERVICE_ACCOUNT}
    user:
      token: ${SERVICE_ACCOUNT_TOKEN}
  END
  2. Make kubeconfig.sh executable.
  $ chmod +x kubeconfig.sh

Step 4 - Generate a kubeconfig file

  1. Run the script, redirecting the output to a file where you want to keep it (usually in the ~/.kube folder).
  $ ./kubeconfig.sh developer developer-namespace <KUBERNETES-API-URL> > ~/.kube/mycluster.conf
  2. Point the KUBECONFIG environment variable at the generated kubeconfig file (left unquoted so that ~ expands).
  $ export KUBECONFIG=~/.kube/mycluster.conf

Tadaa!

You have now created a ServiceAccount with all the permissions your developer needs, but limited to a specific Namespace so they cannot deploy or modify anything outside it.
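
A quick way to sanity-check the generated kubeconfig is to ask the API server what the ServiceAccount is allowed to do. This is just a sketch; namespaced resources covered by the ClusterRole should work, while cluster-scoped ones should be denied:

  $ kubectl auth can-i create deployments   # should answer "yes" (in developer-namespace)
  $ kubectl get pods                        # allowed: namespaced and covered by the ClusterRole
  $ kubectl get nodes                       # should be denied: nodes are cluster-scoped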

Sun 10 Nov 2019, 20:30

03 October 2019

Pixelpiloten

Updates to K3S Ansible

So last week I released an Ansible playbook for getting K3S up and working within 5 minutes (https://github.com/pixelpiloten/k3s-ansible), and both the blog post and my Github account spiked in traffic. For example, the blog post I made about it (https://www.pixelpiloten.se/blog/kubernetes-running-in-5min/) attracted about 1000% more visitors according to my Google Analytics account, and I got a bunch of feedback on Reddit, LinkedIn and Disqus. Thank you for your interest, it encourages me to continue my work on the project.

Thank you everyone :)

So I made some updates…

  1. In the playbook I created functionality to choose which K3S version you want to use, BUT unfortunately the specific code I made for that did not work. It is now fixed and should download and install the version you provide in the inventory file.
  2. I had hard-coded the folder where the K3S installer is downloaded; I have now made this available as a variable.

Future

  • Provide a virtual machine to test this Ansible playbook, both for developing the playbook and for testing K3S. This work has been started in a separate branch (https://github.com/pixelpiloten/k3s-ansible/tree/vagrant) but is not ready yet.
  • Ability to upgrade K3S and in the end Kubernetes to a new version by changing the version number in the inventory file.
  • Support more operating systems than Ubuntu. CentOS will probably be the first one.

To the future and beyond!

Thu 03 Oct 2019, 14:15

28 September 2019

Redpill Linpro Techblog

Running PostgreSQL in Google Kubernetes Engine

(Update: This post has been updated to reflect changing the backup tool from WAL-E to WAL-G. WAL-G is a more modern and faster implementation of cloud backups for PostgreSQL.)

Several Redpill Linpro customers are now in the kubernetes way of delivery. Kubernetes has changed the way they work, and is acting as an effective catalyst empowering their developers. For these customers, the old-school way of running PostgreSQL is becoming a bit cumbersome:

The typical PostgreSQL installation has been based on bare ...

Sat 28 Sep 2019, 00:00

25 September 2019

Pixelpiloten

Kubernetes cluster (K3S) running in 5 min

Setting up Kubernetes can be a procedure that takes some time, but with the Kubernetes distribution K3S and an Ansible playbook we can get a Kubernetes cluster up and running within 5 minutes.

So I created just that: an Ansible playbook where you just add your nodes to an inventory file, and the playbook will install all the dependencies we need for K3S, the workers will automatically join the master, AND I have also added some firewall rules (with help from UFW) so you have some basic protection for your servers. Let’s go through how to use it.

Requirements

  • Git
  • Virtualenv (Python) on the computer you will deploy this from.
  • At least 2 servers with SSH access.
    • Must be based on Ubuntu.
    • User with sudo permissions.
    • Must have docker installed.

Preparations

  • Point a hostname to the server you want to be your kubernetes_master_server.

Steps

Step 1

Prepare your local environment (or wherever you deploy this from) with the dependencies we need. This will set up a virtual Python environment and make Python, Pip and Ansible available in your $PATH so you can execute them.

  1. Clone the K3S Ansible repository from Github.

     $ git clone git@github.com:pixelpiloten/k3s-ansible.git
    
  2. Go to the ansible directory and create your Virtual Python environment.

     $ virtualenv venv
    
  3. Activate the Virtual Python environment.

     $ source venv/bin/activate
    

Step 2

Configure your inventory file with the servers you have. You can add as many workers as you want, but there can only be one master in K3S (though multi-master might be supported in the future).

  1. Rename the inventory.example file to inventory.

     $ mv inventory.example inventory
    
  2. Change the kubernetes_master_server to the server you want to be your Kubernetes master, and add IP(s) to the allowed_kubernetes_access list for the IPs that should have access to the cluster via Kubectl. Also change ansible_ssh_host, ansible_ssh_user, ansible_ssh_port, ansible_ssh_private_key_file and hostname_alias to your server login details. And of course add as many workers as you have.

     all:
       vars:
         ansible_python_interpreter: /usr/bin/python3
         kubernetes_master_server: https://node01.example.com:6443 # Change to your master server
         allowed_kubernetes_access: # Change these to a list of ips outside your cluster that should have access to the api server.
           - 1.2.3.4
           - 1.2.3.5
       children:
         k3scluster:
           hosts:
             node01: # We can only have one master.
               ansible_ssh_host: node01.example.com
               ansible_ssh_user: ubuntu
               ansible_ssh_port: 22
               ansible_ssh_private_key_file: ~/.ssh/id_rsa
               hostname_alias: node01
               kubernetes_role: master # Needs to be as it is.
               node_ip: 10.0.0.1
             node02: # Copy this and add how many workers you want and have.
               ansible_ssh_host: node02.example.com
               ansible_ssh_user: ubuntu
               ansible_ssh_port: 22
               ansible_ssh_private_key_file: ~/.ssh/id_rsa
               hostname_alias: node02
               kubernetes_role: worker # Needs to be as it is.
               node_ip: 10.0.0.2
    

Step 3

You are now ready to install K3S on your servers. The Ansible playbook will go through the inventory file and install the dependencies on your servers, then install K3S on your master and your workers; the workers will automatically join your K3S master. At the end, the playbook will write a Kubeconfig file to /tmp/kubeconfig.yaml on the machine you are running the playbook on.

  1. Run the Ansible playbook and supply the sudo password when Ansible asks for it.

     $ ansible-playbook -i inventory playbook.yml --ask-become-pass
    
  2. When the playbook is done you can copy the Kubeconfig file /tmp/kubeconfig.yaml to ~/.kube/config or wherever you want to keep it, BUT you need to modify the server hostname to whatever your kubernetes_master_server is; a one-liner for this is sketched after the example below. (PS. Do not use the content below, use the /tmp/kubeconfig.yaml that got generated locally.)

     apiVersion: v1
     clusters:
     - cluster:
         certificate-authority-data: <REDACTED-CERTIFICATE>
         server: https://0.0.0.0:6443
       name: default
     contexts:
     - context:
         cluster: default
         user: default
       name: default
     current-context: default
     kind: Config
     preferences: {}
     users:
     - name: default
       user:
         password: <REDACTED-PASSWORD>
         username: admin
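
Since the generated file points at https://0.0.0.0:6443, a quick way to fix the server address is a sed one-liner. A sketch, assuming your master is node01.example.com as in the inventory example above:

     $ sed -i 's|https://0.0.0.0:6443|https://node01.example.com:6443|' /tmp/kubeconfig.yaml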
    

Tadaa!

So you have now created a K3S Kubernetes cluster, gotten the Kubeconfig and added magic firewall rules so only you have access. And if you need to add more workers after the initial setup, you can just add them to the inventory file and run the playbook from Step 3.1 again.

So walk like a baws down the office corridor with your elbows high and call it a day!

You done did it Baws!

Wed 25 Sep 2019, 09:30

20 September 2019

Pixelpiloten

Build your own PaaS?

Ok, that headline was a bit baity, but let me explain why I’m not completely trolling for site views.

Kubernetes has an extensive API from which you can get pretty much anything, and there are many client libraries for all kinds of languages.

A list of official and community-driven libraries can be found here: https://kubernetes.io/docs/reference/using-api/client-libraries/

So with any of these libraries you have the ability to list resources as well as create new ones, like Ingresses, Pods, Services and Deployments, and of course change them. Now that’s powerful: all that power inside your language of choice.
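
As a taste of what that looks like, here is a minimal sketch (not from the original post) using the official Python client with a local kubeconfig, just listing all pods in the cluster:

# pip install kubernetes
from kubernetes import client, config

config.load_kube_config()  # reads $KUBECONFIG / ~/.kube/config
v1 = client.CoreV1Api()
for pod in v1.list_pod_for_all_namespaces().items:
    print(pod.metadata.namespace, pod.metadata.name)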

I got the power!

With great power comes great responsibility

As the old Spider-Man quote says, “With great power comes great responsibility”: if you are going to use any of these tools, you need to make sure that the developer who is going to programmatically access your Kubernetes cluster has an account with only as many permissions as that person should have, and preferably only has access to a test server or a limited namespace, for example. You don’t want someone to develop against live production and lose your database because of a missing quotation mark somewhere; trust me, I’ve been there.

A handful of tools

With that disclaimer out of the way, let’s play around and get up to some shenanigans. I have chosen the Python library https://www.github.com/kubernetes-client/python since I have coded a lot in Python. But wait, there is more: I wanted to make as much of a real-world example as I could of what a PaaS application could look like, and that means I would have a backend that does the API calls via the Python library and a frontend that actually shows the result. So here is my setup:

  • Backend
    • The Python Kubernetes client library - https://www.github.com/kubernetes-client/python
    • A number of Python scripts that call the Kubernetes API, using the Kubeconfig file on the host where the scripts are executed.
    • Outputs its result in JSON.
  • Frontend
    • Laravel
      • Using the Symfony component symfony/process I execute one of the Python scripts and convert that JSON to a PHP object in a Controller and send that to a Blade template where I list the output.
      • Bootstrap 4

Coding coding coding

An example

So let’s go through an example page where I list the details about a deployment and its rollout history.

The backend (Python script)

This script gets details about a deployment, like name, namespace, what container image it runs, and its rollout history. You execute it like this: python3 /app/backend/deployment-details.py --namespace=mynamespace --deployment=mydeployment

# -*- coding: UTF-8 -*-
from kubernetes import client, config
import json
import argparse
from pprint import pprint

class Application:
    def __init__(self):
        # Parse arguments.
        argumentParser = argparse.ArgumentParser()
        argumentParser.add_argument('-n', '--namespace', required=True)
        argumentParser.add_argument('-d', '--deployment', required=True)
        self.arguments = argumentParser.parse_args()
        # Load kubeconfig, from env $KUBECONFIG.
        config.load_kube_config()
        self.kubeclient = client.AppsV1Api()
        # Load the beta2 api for the deployment details.
        self.kubeclientbeta = client.AppsV1beta2Api()

    # This is just because I used a user with full access to my cluster, DON'T do this kids!
    def forbiddenNamespaces(self):
        return ["kube-system", "cert-manager", "monitoring", "kube-node-lease", "kube-public"]

    # Here we get some well-chosen details about our deployment.
    def getDeploymentDetails(self):
        # Calls the API.
        deployments_data = self.kubeclient.list_namespaced_deployment(self.arguments.namespace)
        deployment_details = {}
        # Add the deployment details to our object if the deployment was found.
        for deployment in deployments_data.items:
            if deployment.metadata.namespace not in self.forbiddenNamespaces():
                if self.arguments.deployment == deployment.metadata.name:
                    deployment_details = deployment
                    break

        # Here we set our response.
        if not deployment_details:
            response = {
                "success": False,
                "message": "No deployment with that name was found."
            }
        else:
            containers = {}
            container_ports = {}
            for container in deployment_details.spec.template.spec.containers:
                for port in container.ports:
                    container_ports[port.container_port] = {
                        "container_port": port.container_port,
                        "host_port": port.host_port
                    }
                containers[container.name] = {
                        "name": container.name,
                        "image": container.image,
                        "ports": container_ports
                    }
            response = {
                "success": True,
                "message": {
                    "name": deployment_details.metadata.name,
                    "namespace": deployment_details.metadata.namespace,
                    "uid": deployment_details.metadata.uid,
                    "spec": {
                        "replicas": deployment_details.spec.replicas,
                        "containers": containers
                    },
                    "history": self.getDeploymentHistory(deployment_details.metadata.namespace, deployment_details.metadata.name)
                }
            }
        return response

    # To get the rollout history of a deployment we need to call another method and use the beta2 API.
    def getDeploymentHistory(self, namespace, deployment):
        deployment_history = {}
        deployment_revisions = self.kubeclientbeta.list_namespaced_replica_set(namespace)
        for deployment_revision in deployment_revisions.items:
            if deployment_revision.metadata.name.startswith(deployment):
                deployment_history[deployment_revision.metadata.annotations['deployment.kubernetes.io/revision']] = {
                    "revision": deployment_revision.metadata.annotations['deployment.kubernetes.io/revision']
                }
        return deployment_history

    # The method we actually call to run everything and outputs the result in JSON.
    def run(self):
        response = json.dumps(self.getDeploymentDetails())
        print(response)

app = Application()
app.run()

Frontend (Laravel - Route)

In Laravel you create a route for the URL where you want to display your page. In this case I use the URL /deployments/{namespace}/{deployment}, where namespace and deployment are dynamic values you provide in the URL, and I send that request to my Controller, a PHP class called DeploymentsController (classic MVC programming).

<?php
Route::get('/deployments/{namespace}/{deployment}', 'DeploymentsController@show');

Frontend (Laravel - Controller)

In my controller I execute the Python script with the help of the symfony/process component, convert the JSON output to a PHP object and send that to the Blade template.

<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use Symfony\Component\Process\Exception\ProcessFailedException;
use Symfony\Component\Process\Process;

class DeploymentsController extends Controller
{
    public function show($namespace, $deployment) {
        $process = new Process([
            "python3",
            "/app/backend/deployment-details.py",
            "--namespace=". $namespace,
            "--deployment=". $deployment
        ]);
        $process->run();

        if (!$process->isSuccessful()) {
            throw new ProcessFailedException($process);
        }
        $deployment_details = $process->getOutput();
        $deployment_details_array = json_decode($deployment_details);
        $deployment_details_history_array = collect($deployment_details_array->message->history);
        $deployment_details_array->message->history = $deployment_details_history_array->sortByDesc("revision");
        return view("deployment-details", ["deployment_details" => $deployment_details_array->message]);
    }
}

Frontend (Laravel - Template)

…and in our Blade template we render the deployment details we got from our controller. This is not the full HTML page, since we are extending another template (read more here: https://laravel.com/docs/5.8/blade#extending-a-layout), but these are the important parts. Blade templates allow you to use some PHP functions, but 99% are stripped away so you don’t write a bunch of PHP inside the templates; you can still do things like loops, render variables and do some formatting on them, etc.

@extends('layouts.app')
@section('title', $deployment_details->name)
@section('content')
<p>
    <a href="/deployments">&laquo; Go back to deployments</a>
</p>
<div class="jumbotron">
    <ul>
        <li><span class="font-weight-bold">Name: </span>{{ $deployment_details->name }}</li>
        <li><span class="font-weight-bold">Namespace: </span>{{ $deployment_details->namespace }}</li>
        <li><span class="font-weight-bold">UID: </span>{{ $deployment_details->uid }}</li>
        <li>
            <span class="font-weight-bold">Spec:</span>
            <ul>
                <li><span class="font-weight-bold">Replicas: </span>{{ $deployment_details->spec->replicas }}</li>
                <li>
                    <span class="font-weight-bold">Containers:</span>
                    <ul>
                        @foreach ($deployment_details->spec->containers as $container)
                            <li><span class="font-weight-bold">Name: </span>{{ $container->name }}</li>
                            <li><span class="font-weight-bold">Image: </span>{{ $container->image }}</li>
                            <li>
                                <span class="font-weight-bold">Ports:</span>
                                <ul>
                                    @foreach ($container->ports as $port)
                                        <li><span class="font-weight-bold">Container port: </span>{{ $port->container_port }}</li>
                                        <li><span class="font-weight-bold">Host port: </span>{{ $port->host_port }}</li>
                                    @endforeach
                                </ul>
                            </li>
                        @endforeach
                    </ul>
                </li>
            </ul>
        </li>
    </ul>
</div>
<table class="table">
    <thead class="thead-dark">
        <tr>
            <th scope="col">Revision</th>
            <th scope="col">Cause</th>
        </tr>
    </thead>
    <tbody>
        @foreach($deployment_details->history as $deployment_revision)
            <tr>
                <td>{{ $deployment_revision->revision }}</td>
                <td>-</td>
            </tr>
        @endforeach
    </tbody>
</table>
@endsection

Screenshots, give me Screenshots!

Here is an example of what this application looks like in the flesh. I’ve built more parts than this, but this is the result of all the code above.

DIY PaaS Screenshot

Tadaa!!

And there you have it, a working example of displaying real live data from your Kubernetes cluster. Now, this is not a super secure way of doing it, but it’s meant to show you the power of the tools that exist out there and to spark imagination. Think of the possibilities in integrating this into your existing infrastructure, creating statistics or actually creating your own PaaS; the sky’s the limit. Go forth and play :)

Fri 20 Sep 2019, 11:15

17 September 2019

Pixelpiloten

What is DevOps?

I often get asked what it is that I actually do, both by people within the tech industry and outside it, and I don’t always have a simple explanation for that. So I thought I should try and write this down: what is it that I do, and what is this DevOps thing really?

A Wikipedia article about the subject (https://en.wikipedia.org/wiki/DevOps) starts out by defining it as:

DevOps is a set of software development practices that combine software development (Dev) and information-technology operations (Ops) to shorten the systems-development life cycle while delivering features, fixes, and updates frequently in close alignment with business objectives.

And that describes it pretty well, but let’s try to boil it down and talk about what kind of tasks you would be doing and what kind of technologies and software you would use. Now, this is going to be from my personal experience; yours might differ, but the basic principles should be there.

Tasks for a DevOps engineer

Local development

  • Create a local development platform.
    • Set up a local development environment in a Virtual machine (like Virtualbox with Vagrant) or Docker for Developers to copy and run and develop their applications in.
    • Set up a skeleton of one of the applications/frameworks that the Developers work with to develop in.
    • Document the above setup and make it easy to get started, preferably with one, or a few commands.

CI

  • Set up CI servers and deploy jobs to make it super easy to deploy, with as little hands-on work for each deploy as possible.
  • Set up test servers that run tests when the CI deploy job runs, at every commit or before a deployment for example.
  • Set up security checks in CI deploy jobs (see the sketch after this list).
    • Something like sensiolabs/security-checker to scan for vulnerabilities in Symfony components in Symfony-based applications.
    • Something like Clair to scan Docker containers for vulnerabilities.
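
As a sketch of what the security-checker step can look like in a CI job (assuming the checker is installed via Composer in the project):

  $ php vendor/bin/security-checker security:check composer.lock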

Infrastructure

Logging and Monitoring

  • Set up servers for aggregated logging.
  • Set up monitoring for your infrastructure.
  • Set up monitoring for your applications.

Other

  • Set up backups for files and databases via cron jobs.
  • Set up mail servers.
  • Do regular security maintenance on all the infrastructure.

Communication

Communication is key!

One of the things you definitely need as a DevOps engineer is communication skills. You are going to have to talk to a lot of people in your daily job: everyone from developers to customers, your IT partners, your customers’ IT partners, project leaders and managers. Since what you do touches everything from local development up to the production infrastructure, you affect them all, and you need to work together and be open and transparent about all of this.

In conclusion

Is this a complete list of what you do as a DevOps engineer? No, but it’s a start, and it’s very much colored by my personal experience. I would go back to the Wikipedia quote: it’s all about making delivery of your applications, from 0 to 100, as automatic and easy as possible.

All you need is me!

And try to have that mindset throughout all the software and infrastructure you use: always think “can we automate this process?” and “can we make this part of the repository and make it more transparent?”, so that whenever you quit your job for new adventures, or get hit by a bus, your company is not screwed because you were the only one who knew how “..that odd server was set up”.

So I hope that was helpful in understanding what a DevOps engineer does. Do you agree, or have anything to add? Please be vocal and make your voice heard in the comments below.

Tue 17 Sep 2019, 09:00

14 September 2019

Pixelpiloten

Pixelpiloten is out and about!

So, I recently got hired by Redpill Linpro and am currently at a conference on Mallorca. So I thought I should show some initiative at my new gig and hold a presentation, and on Saturday morning at 9 o’clock no less.

I did not know what I wanted to talk about at first, but after some thinking I decided I should do something impressive, you know, really wow them.

So how do you best impress your new colleagues with how damn skilled you are? You do a live demo of deploying an application to Kubernetes via Git… hopefully this will work, you know how haunted live demos are.

Cross your fingers

Sat 14 Sep 2019, 06:45

11 September 2019

Pixelpiloten

Persistent volumes in Kubernetes

When you run Kubernetes on a cloud provider like Amazon AWS, Google Cloud, Azure or OpenStack, creating volumes on the fly for your persistent storage needs is easy peasy. So come along and I’ll show you how you can use this functionality and get your persistent storage goodness.

Steps

In this example our cluster is in Amazon AWS so we can use EBS volumes (look here for other Cloud storage provisioners: https://kubernetes.io/docs/concepts/storage/storage-classes/#provisioner).

  1. Create a Storage class definition in a file called storageclass.yaml.

     apiVersion: storage.k8s.io/v1
     kind: StorageClass
     metadata:
       name: generalssd
     provisioner: kubernetes.io/aws-ebs
     parameters:
       type: gp2 # This is different for each Cloud provider and what disk types they have and what they name them.
       fsType: ext4
    
  2. Create the StorageClass with Kubectl.

     $ kubectl apply -f storageclass.yaml
    
  3. Create a PersistentVolumeClaim that uses the StorageClass we created above and define how much storage you need, by creating a file called pvc.yaml and pasting this into it.

     apiVersion: v1
     kind: PersistentVolumeClaim
     metadata:
       name: mycoolvolumeclaim
     spec:
       accessModes:
         - ReadWriteOnce
       resources:
         requests:
           storage: 1Gi # Specify the size of the volume you want.
       storageClassName: generalssd # This is the name of the Storage class we created above.
    
  4. Create the PersistentVolumeClaim with Kubectl.

     $ kubectl apply -f pvc.yaml
    
  5. You can now use that volume and mount it inside your container. In this example we use a database container and mount the database folder inside the container. Create a file called deployment.yaml and paste this:

     apiVersion: apps/v1
     kind: Deployment
     metadata:
       name: mysql
       labels:
         app: mysql
     spec:
       replicas: 1
       selector:
         matchLabels:
           app: mysql
       template:
         metadata:
           labels:
             app: mysql
         spec:
           containers:
           - name: mysql
             image: mysql
             env:
               - name: MYSQL_ROOT_PASSWORD # The mysql image will not start without a root password (or MYSQL_ALLOW_EMPTY_PASSWORD).
                 value: changeme # Placeholder, pick your own.
             volumeMounts:
               - name: mycoolvolume # Name of the volume you define below.
                 mountPath: /var/lib/mysql # Path inside the container you want to mount the volume on.
             ports:
             - containerPort: 3306 # MySQL listens on 3306.
           volumes:
             - name: mycoolvolume # This pods definition of the volume.
               persistentVolumeClaim:
                 claimName: mycoolvolumeclaim # The PersistentVolumeClaim we created above.
    
  6. Create the Deployment with your database Pod with Kubectl.

     $ kubectl apply -f deployment.yaml
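
A quick way to check that the dynamic provisioning actually worked (a sketch; the exact output depends on your cluster):

     $ kubectl get storageclass,pvc,pv   # the claim should be Bound, with an EBS-backed volume created for it
     $ kubectl get pods -l app=mysql     # the mysql pod should start with the volume mounted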
    

So to reiterate the steps:

  • Create a StorageClass where you define a name and what type of disk you want (depends on the Cloud provider)
  • Create a PersistentVolumeClaim where you define a name and how much disk space you want.
  • Define what volumes you want to use in your Pod definition and reference what PersistentVolumeClaim you want to use and mount it on a path inside your container.

Tadaa!

You have now deployed MySQL with a volume attached to the container, so whenever you redeploy it, the database(s) will actually persist thanks to our volume :)

Wed 11 Sep 2019, 12:00

06 September 2019

Pixelpiloten

Prometheus & Grafana - Monitoring in Kubernetes

Monitoring how many resources your server(s) and your apps are using is easier than ever with Prometheus and Grafana. In this tutorial I will show you how to do this on a Kubernetes cluster.

To make this as easy as possible I created a Helm chart that deploys Prometheus and Grafana together. It uses local storage instead of a volume from a cloud vendor, so the only preparation you need to do is to create the directories on the Kubernetes node you plan to deploy this on.

I created this because I use K3S for some of my Kubernetes setups and run them on non-cloud vendors, so this is perfect for that: “on-prem” servers or vendors that don’t have cloud volumes.

Requirements

  • Kubernetes cluster
  • Helm (Tiller installed on Kubernetes cluster)

Step 1 - Preparations.

  1. SSH into the server you plan to deploy Prometheus & Grafana to.

  2. Create the directories needed; the paths can be changed to whatever you supply in values.yaml.

     $ mkdir -p /myvolumes/prometheus/alertmanager
    
     $ mkdir -p /myvolumes/prometheus/pushgateway
    
     $ mkdir -p /myvolumes/prometheus/server
    

Step 2 - Deploy the helm chart.

  1. Clone this repo to your computer (not the server): https://github.com/pixelpiloten/prometheus

  2. Deploy the helm chart.

     $ helm install --name pixelpiloten_prometheus . -f values.yaml
    
  3. Check that all the pods for Prometheus and Grafana are deployed and up and running.

     $ kubectl -n monitoring get pods
    
  4. Create a port-forward to the Grafana Service, port 8080 can be whatever port you want.

     $ kubectl -n monitoring port-forward svc/grafana 8080:80
    
  5. Now you can access the Grafana web GUI in your browser.

     http://127.0.0.1:8080
    

Tadaa!

You have now installed Prometheus and Grafana and can get started creating dashboards, or import existing ones from https://grafana.com/grafana/dashboards, to monitor everything from your Kubernetes nodes to the applications running in the cluster.

My Grafana dashboard monitoring Kubernetes node resources

Fri 06 Sep 2019, 13:45

04 September 2019

Pixelpiloten

Probably a stupid IDE(A)

I thought I would take a small break from the Kubernetes articles and focus a bit on local development with Docker, specifically how you can work completely inside Docker containers, even using an IDE inside a Docker container. GASP! :O

Now, this is just a proof of concept, and not even I am convinced that it is such a good idea, but I want to demonstrate the power of containers and this is an example of what you can do.

The setup

In my previous life I used to do a lot of Drupal-based development, so I thought I could use Drupal as the software we want to work on. This is possible using 3 Docker containers and actually using Docker from the host machine inside the coder container (so Docker inside Docker). The setup looks like this:

Docker containers

  • Apache container
    • PHP 7.2
    • Composer
    • Drush
  • MariaDB container
    • Minor custom settings for Drupal optimization
  • Coder container

Docker volume

…and a Docker volume; this is to have the same speed on Linux, MacOS and Windows. On MacOS and Windows, Docker runs in a virtual machine and you need to mount a folder inside the Docker containers INSIDE the virtual machine, which causes a reaaaal slowdown in performance, specifically when using Drupal.

WARNING! If you remove the volume you remove your code!

Requirements

  • Git
  • Docker
  • Docker compose

Step 1 - Start the environment.

  1. Clone this repository somewhere to your hard drive.

     $ git clone git@github.com:pixelpiloten/coder_drupal.git myfolder
    
  2. Start the environment.

     $ docker-compose up --build --force-recreate -d
    

Step 2 - Install drupal.

  1. Open Microsoft Visual Studio Code in the browser using this URL.

    http://127.0.0.1:8443

  2. Click on View in the menu and choose Terminal to open the terminal.

  3. Download Drupal using Composer to the current directory (this will actually exec composer in the apache container).

     $ composer create-project drupal-composer/drupal-project:8.x-dev . --no-interaction
    
  4. OPTIONAL: If you get an error about ...requires behat/mink-selenium2-driver 1.3.x-dev -> no matching package found then add this to the composer.json file under require.

    "behat/mink-selenium2-driver": "dev-master as 1.3.x-dev"

  5. OPTIONAL: Run composer to update your packages if you got the above error.

     $ composer update
    
  6. Install Drupal with Drush.

     $ drush si --db-url=mysql://root:password@mariadb/drupal --site-name=MySite
    
  7. Create a one time login link to login to Drupal admin.

     $ drush uli
    

Tadaa!

So, a development environment including an IDE in the browser; who could have thought this was possible? This is possible because Microsoft Visual Studio Code is actually built on web technologies, and when you use it on your desktop it’s basically running inside a web browser environment… and a bit of Docker-in-Docker magic ;)

The commands

All the commands we ran in the terminal in this tutorial were actually running docker exec commands from the coder container, but the execution of those commands happened in the apache container.

The composer and drush commands are actually just bash aliases that you can find in the .docker/coder/config/zshrc_aliases file. Now, this is very basic and not the most intuitive way, but this is a proof of concept, so just add what you want and rebuild the containers.

alias php="docker exec -u 1000:1000 -t drupal_apache php"
alias composer="docker exec -u 1000:1000 -t drupal_apache composer"
alias drush="docker exec -u 1000:1000 -it drupal_apache drush --root /home/coder/project/code-server/web --uri 127.0.0.1:8080"

Wed 04 Sep 2019, 13:45

29 August 2019

Pixelpiloten

Kubernetes in Google cloud - Tutorial

Another week, another tutorial and another cloud provider with Kubernetes as a service. This time I will look at installing a Kubernetes cluster in Google Cloud. To do this I will use one of my favorite cloud native tools, Terraform, from one of my favorite companies in the DevOps landscape, HashiCorp.

Requirements

Step 1 - Create a folder for your terraform files.

  1. Create a folder called googlekube somewhere on your computer.

Step 2 - Create your Google cloud platform API credentials

  1. Login to your Google cloud account and go to the Console.

  2. Hover with the mouse over the APIs & Services in the menu on the left hand side and click on Credentials.

  3. When the page has loaded click on Create credentials and choose Create service account key.

  4. Choose Compute engine default service account in the Service account field and JSON in the Key type field and click Create.

  5. Copy the json-file you downloaded to the googlekube folder you created in Step 1.1 and rename it cloud-credentials.json

Step 3 - Create Terraform files.

  1. Create a file called variables.tf in your googlekube folder with this content. Replace the placeholder project id and region with the project id and a region where you want to deploy Kubernetes (the project id you can find in your cloud-credentials.json, and available regions you can find here: https://cloud.google.com/about/locations/#region). Add -a to your region name, like europe-north1 should be europe-north1-a; otherwise you will deploy a worker node in each available zone in that region, and that is not necessary in this example.
     variable "goovars" {
         type = "map"
         default = {
             "project" = "<YOUR-PROJECT-ID>"
             "region" = "<REGION-CLOSE-TO-YOU>"
             "node_machine_type" = "n1-standard-1" # The machine type you want your worker nodes to use.
             "node_count" = "1" # How many worker nodes do you want?
             "version" = "1.13.7-gke.19" # Kubernetes version you want to install.
         }
     }
    
  2. Create a file called main.tf in your googlekube folder with this content.
     provider "google" {
         credentials = "${file("cloud-credentials.json")}"
         project     = "${var.goovars["project"]}"
         region      = "${var.goovars["region"]}"
     }
    
     resource "google_container_cluster" "gookube" {
         name     = "gookube"
         location = "${var.goovars["region"]}"
         min_master_version = "${var.goovars["version"]}"
    
         remove_default_node_pool = true
         initial_node_count = 1
    
         master_auth {
             client_certificate_config {
                 issue_client_certificate = false
             }
         }
     }
    
     resource "google_container_node_pool" "gookubenodepool" {
         name       = "gookubenodepool"
         location   = "${var.goovars["region"]}"
         cluster    = "${google_container_cluster.gookube.name}"
         node_count = "${var.goovars["node_count"]}"
    
         node_config {
             preemptible  = true
             machine_type = "${var.goovars["node_machine_type"]}"
    
             metadata = {
                 disable-legacy-endpoints = "true"
             }
    
             oauth_scopes = [
                 "https://www.googleapis.com/auth/logging.write",
                 "https://www.googleapis.com/auth/monitoring",
             ]
         }
     }
    

Step 4 - Create your kubernetes cluster.

  1. Init Terraform to download the Google cloud platform provider (run this command in the googlekube folder).
     $ terraform init
    
  2. Create your Kubernetes cluster and answer Y when Terraform asks for confirmation. This process should take about 10-15 minutes.
     $ terraform apply
    
  3. Get your Kubeconfig to access the Kubernetes cluster with kubectl using the Google cloud platform CLI tool.
     $ gcloud beta container clusters get-credentials gookube --region <THE-REGION-YOU-CHOSE> --project <YOUR-PROJECT-ID>
    
  4. The above command saved your credentials to your Kubeconfig, normally in ~/.kube/config

  5. Check that you can reach your nodes (the master nodes are completely handled by Google so you will only see your worker nodes here)
     $ kubectl get nodes
    

Tadaa!

So that's how you can create a Kubernetes cluster on the Google cloud platform using their Kubernetes as a service with Terraform. Overall I would say that setting up Kubernetes with Terraform can be a bit of a hassle with a cloud provider, but so far Google cloud platform has been the easiest to work with when using Terraform in this manner.
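
If you created this cluster just to test things out, Terraform can also tear it all down again so you are not paying for idle nodes; run this in the same googlekube folder and confirm when asked:

$ terraform destroy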

Thu 29 Aug 2019, 12:25

27 August 2019

Redpill Linpro Techblog

Evaluating Local DNSSEC Validators

Domain Name System Security Extensions (DNSSEC) is a technology that uses cryptographic signatures to make the Domain Name System (DNS) tamper-proof, safeguarding against DNS hijacking. If your ISP or network operator cares about your online security, their DNS servers will validate DNSSEC signatures for you. DNSSEC is widely deployed: here in Scandinavia, about 80% of all DNS lookups are subject to DNSSEC validation (source). Wondering whether or not your DNS server validates DNSSEC signatures? www.dnssec-or-not.com ...

Tue 27 Aug 2019, 00:00

26 August 2019

Pixelpiloten

Kubernetes in AWS with eksctl - Tutorial

So I’m on a quest, a mission, an exploratory journey to find different ways of installing Kubernetes. I’m on this journey because different hosting providers (be it a cloud provider, a simple VPS provider or on-prem) will have different tools that work best with their hosting solution, either built by the provider themselves or by the community, and aimed either at a Kubernetes as a service offering or at an installation based on standard virtual private servers.

Some of these tools include:

  • Kubespray - Based on Ansible and can be used on everything from Cloud to On prem.
  • Rancher - Support for many different Cloud providers as well as On prem.
  • Eksctl - Official CLI tool for creating AWS EKS clusters using CloudFormation.
  • Terraform - Mainly made for creating infrastructure, with support for many different cloud providers, but it also has support for AWS EKS.

Official CLI tool for EKS

Today I’m going to focus on the official CLI tool for AWS EKS called eksctl, built in Go by Weaveworks. They have made it open source, and you can find their Github repository here: https://github.com/weaveworks/eksctl and contribute if you want to.

Requirements

Step 1 - Download your AWS API credentials.

  1. Create a user in AWS for programmatic access that has at least the access policies defined in the Requirements section above, and take note of the API credentials.

  2. Create a file at ~/.aws/credentials with this content and replace the placeholders with your API credentials.

     [default]
     aws_access_key_id=<YOUR-ACCESS-KEY-ID>
     aws_secret_access_key=<YOUR-SECRET-ACCESS-KEY>
    

Step 2 - Install eksctl

  1. Download eksctl for your OS here: https://github.com/weaveworks/eksctl/releases/tag/latest_release.

  2. Extract and put the binary file eksctl somewhere in your $PATH; I put mine in the ~/.local/bin folder on my Linux based laptop.

Step 3 - Create file that declares your cluster setup.

  1. Create a file called cluster.yaml and copy the content from below; it contains a very basic setup but should be enough to test things out.
     apiVersion: eksctl.io/v1alpha5
     kind: ClusterConfig
    
     metadata:
         name: myCluster # Name of your cluster.
         region: eu-north-1 # Region you want to create your cluster and worker nodes in.
         version: "1.13" # Version of Kubernetes you want to install.
    
      nodeGroups:
      - name: myClusterNodeGroup # Name of your node group (basically a template) for your worker nodes in CloudFormation.
        labels: { role: workers } # Any kind of labels you want to assign them.
        instanceType: m5.large # The size of your worker nodes.
        desiredCapacity: 1 # How many worker nodes do you want?
        ssh:
          publicKeyPath: ~/.ssh/mySshKey.pub # Path to the local public SSH key you want to copy to the worker nodes.
    

Step 4 - Create your cluster and access it.

  1. Create the EKS cluster with eksctl using the file you just created.
     $ eksctl create cluster -f cluster.yaml
    
  2. This process should take about 15 minutes, and eksctl will add the Kubeconfig information for accessing your cluster to your ~/.kube/config file. You can also use the flag --kubeconfig ~/.kube/myfolder/config in the above command if you want to write the Kubeconfig to another file instead.

  3. Check that you can access your cluster by listing all your nodes; the master nodes will be excluded here since they are completely managed by AWS.
     $ kubectl get nodes
    

Update your cluster with more nodes.

Let's say you see that your cluster starts to run out of resources. You can very easily adjust the desiredCapacity from 1 to 2 in the cluster.yaml file, and then update the cluster with another node by running:

$ eksctl update cluster -f cluster.yaml
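
If I remember correctly, eksctl also has a dedicated scaling command that does the same thing without touching the config file; the cluster and nodegroup names below are the ones from the cluster.yaml example:

$ eksctl scale nodegroup --cluster=myCluster --name=myClusterNodeGroup --nodes=2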

Tadaa!

So that's it, a very easy way of installing Kubernetes on AWS EKS without much hassle. The biggest hurdle is basically getting your IAM configuration set; the rest is pretty straightforward.

PS. You can customize the cluster.yaml with many more settings, look at more examples here: https://github.com/weaveworks/eksctl/tree/master/examples

Mon 26 Aug 2019, 13:00

21 August 2019

Pixelpiloten

Kubernetes in Azure - Tutorial

Many of the cloud providers today provide Kubernetes as a service, where they will maintain the Kubernetes nodes much like a managed hosting service.

Some of the cloud providers selling Kubernetes as a service are Google (GKE), Amazon (EKS) and Microsoft Azure (AKS).

This can be a great introduction to start using Kubernetes, since you don't have to be an expert in setting up or maintaining a Kubernetes cluster. In fact, the majority of companies using Kubernetes ARE using it this way, since Kubernetes has deep integration with cloud providers for things like load balancing and persistent storage.

I have installed Kubernetes in many different ways, all in search of the easiest and most cloud-agnostic way of installing it without taking any shortcuts, but that does not mean that I don't use cloud providers or the Kubernetes as a service that they provide, quite the opposite.

Recently I have been playing around with Azure, and they are one of the cloud providers that offer this service, so let's create a Kubernetes cluster in Azure in this tutorial :)

Requirements

Step 1

Get Authorization information for Terraform to use.

  1. Login to your Microsoft Azure account with the Azure CLI tool; this will open up a browser window where you log in.
     $ az login
    
  2. Go back to your terminal. The command above should have produced output looking something like this; copy it somewhere since we will use it later (<REDACTED-STRING> is of course your user's unique authentication details).
     [
         {
             "cloudName": "AzureCloud",
             "id": "<REDACTED-SUBSCRIPTION-ID>",
             "isDefault": true,
             "name": "Free Trial",
             "state": "Enabled",
             "tenantId": "<REDACTED-TENNANT-ID>",
             "user": {
                 "name": "<REDACTED-USERNAME>",
                 "type": "user"
             }
         }
     ]
    
  3. Create a Service principal in your AD with the Azure CLI tool.
     $ az ad sp create-for-rbac --skip-assignment
    
  4. Copy the appId and password string from the output somewhere (<REDACTED-STRING> is of course your unique authentication details for AD).
     {
         "appId": "<REDACTED-APP-ID",
         "displayName": "<REDACTED-DISPLAY-NAME>",
         "name": "<REDACTED-NAME>",
         "password": "<REDACTED-PASSWORD>",
         "tenant": "<REDACTED-TENANT>"
     }
    

Step 2

Create the Terraform files.

  1. Create a folder on your computer and navigate to this folder.

  2. Create a file called variables.tf and paste the content below. Replace <YOUR-STRING> with the corresponding value you got from the az login and az ad commands in Step 1.1 and Step 1.3 above.
     variable "account_subscription_id" {
         type = "string"
         default = "<YOUR-ACCOUNT-ID>"
     }
    
     variable "account_tennant_id" {
         type = "string"
         default = "<YOUR-TENNANT-ID>"
     }
    
     variable "service_principal_appid" {
         type = "string"
         default = "<YOUR-SERVICE-PRINCIPAL-APP-ID>"
     }
    
     variable "service_principal_password" {
         type = "string"
         default = "<YOUR-SERVICE-PRINCIPAL-PASSWORD>"
     }
    
     variable "node_count" {
         type = "string"
         default = "1" # This is how many worker nodes you will create.
     }
    
  3. Create a file called main.tf and paste the content below
     provider "azurerm" {
         version           = "=1.28.0"
         subscription_id   = "${var.account_subscription_id}"
         tenant_id         = "${var.account_tennant_id}"
     }
    
     resource "azurerm_resource_group" "myresourcegroup" {
         name     = "myresourcegroup"
         location = "North Europe" # Replace with the region that makes sence to you.
     }
    
     resource "azurerm_kubernetes_cluster" "myk8scluster" {
         name                = "myk8scluster"
         location            = "${azurerm_resource_group.myresourcegroup.location}"
         resource_group_name = "${azurerm_resource_group.myresourcegroup.name}"
         dns_prefix          = "myk8scluster"
    
         agent_pool_profile {
             name            = "default"
             count           = "${var.node_count}"
             vm_size         = "Standard_D1_v2" # A 1 vCPU / 3.5gb Memory VM.
             os_type         = "Linux"
             os_disk_size_gb = 30
         }
    
         service_principal {
             client_id     = "${var.service_principal_appid}"
             client_secret = "${var.service_principal_password}"
         }
    
         tags = {
             Environment = "myk8scluster"
         }
     }
    
     output "kube_config" {
         value = "${azurerm_kubernetes_cluster.myk8scluster.kube_config_raw}"
     }
    

Step 3

Create your cluster :)

  1. Init Terraform so it can download the cloud provider plugin for Microsoft Azure. Run this in the root of the folder where you created your Terraform files.
     $ terraform init
    
  2. Tell Terraform to start creating your cluster, and confirm by writing yes when Terraform asks you for confirmation.
     $ terraform apply
    

Terraform is now going to create a Kubernetes cluster in your Microsoft Azure account. For this one worker node setup it will take about 10-15 minutes, and when it is done it will output a Kubeconfig file you can use to authenticate to this cluster (a sketch of how to save and use it follows below).
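
One way to save that output to a kubeconfig file and point kubectl at it (just a sketch; the file path is up to you):

$ terraform output kube_config > ~/.kube/azurek8s
$ export KUBECONFIG=~/.kube/azurek8s
$ kubectl get nodes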

Access the Kubernetes Dashboard

When you install Kubernetes with Azure's Kubernetes as a service you get the Kubernetes Dashboard installed automatically, and the Azure CLI tool makes accessing it a breeze.

Use this command to access the Kubernetes Dashboard.

$ az aks browse --resource-group <NAME-OF-YOUR-RESOURCE-GROUP> --name <NAME-OF-YOUR-CLUSTER>

This command will do a port forward to your Kubernetes Dashboard service and open up a browser window to that URL. With the Terraform files above, the resource group is myresourcegroup and the cluster name is myk8scluster.

Update your cluster?

Let's say you want to increase the number of worker nodes for your cluster. To do this you can just change the node_count in your variables.tf file, like this:

variable "node_count" {
    type = "string"
    default = "2" # Increase to 2 worker nodes.
}

And then just run Terraform again and it will create that second worker node for you.

$ terraform apply

Delete your cluster?

Just run this command and Terraform will delete the cluster you created.

$ terraform destroy

Wed 21 Aug 2019, 11:45

06 August 2019

Redpill Linpro Techblog

A rack switch removal ordeal

I recently needed to remove a couple of decommissioned switches from one of our data centres. This turned out to be quite an ordeal. The reason? The ill-conceived way the rack mount brackets used by most data centre switches are designed. In this post, I will use plenty of pictures to explain why that is, and propose a simple solution on how the switch manufacturers can improve this in future.

Rack switch mounting 101

Tue 06 Aug 2019, 00:00

03 August 2019

Pixelpiloten

K3S - Tutorial

In this tutorial I will go through the steps I took to set up K3S to be able to host this blog on it. The server we will be using is a bare Ubuntu 18.04 Linux server with at least 1024 MB of memory.

What will we do in this Tutorial?

  • Install docker on our server.
  • Install a 1 node Kubernetes cluster.
  • Fetch the Kubeconfig file content to be able to use Kubectl from our local Machine.
  • Install Tiller so we can deploy Helm charts to our cluster.
  • Install Cert manager so we can use that in combination with Traefik for automatic SSL certificate generation for our Kubernetes ingress resources.

Step 1 - Install Docker

SSH into the server you plan to install K3S on.

  1. Update your apt index.
    $ sudo apt-get update
    
  2. Install the packages needed so apt can fetch packages over HTTPS
    $ sudo apt-get install \
     apt-transport-https \
     ca-certificates \
     curl \
     gnupg-agent \
     software-properties-common
    
  3. Add the Docker GPG key.
    $ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
    
  4. Add the apt repository for Docker.
    $ sudo add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) \
    stable"
    
  5. Update your apt index again to be able to install docker from the repository we just added.
    $ sudo apt-get update
    
  6. Install docker
    $ sudo apt-get install docker-ce docker-ce-cli containerd.io
    

Step 2 - Install K3S

  1. Download and run the K3S install bash script.
    $ curl -sfL https://get.k3s.io | sh -
    
  2. Wait until script is done (about 30 seconds) and run this command to check if your 1 node cluster is up.
    $ k3s kubectl get node
    
  3. Copy the file contents for the Kubeconfig from /etc/rancher/k3s/k3s.yaml and paste that into the ~/.kube/config file on your local machine (example of the contents below; the strings are unique to your cluster).
    apiVersion: v1
    clusters:
    - cluster:
         certificate-authority-data: <REDACTED>
         server: https://localhost:6443 # This needs to be changed.
      name: default
    contexts:
    - context:
         cluster: default
         user: default
      name: default
    current-context: default
    kind: Config
    preferences: {}
    users:
    - name: default
      user:
         password: <REDACTED>
         username: <REDACTED>
    
  4. On your local machine: Change the server value in ~/.kube/config to a public-facing IP or hostname for your server (see the sketch after this list).
  5. On your local machine: Set the KUBECONFIG variable so you can talk to your Kubernetes cluster with Kubectl
    $ export KUBECONFIG=~/.kube/config
    
  6. On your local machine: Check that you can reach your Kubernetes cluster with Kubectl
    $ kubectl get nodes
    NAME       STATUS   ROLES    AGE   VERSION
    pixkube1   Ready    master   10d   v1.14.4-k3s.1
    
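For step 4 above, one way to do the substitution (just a sketch; 203.0.113.10 is a placeholder for your server's public IP) is:

$ sed -i 's|https://localhost:6443|https://203.0.113.10:6443|' ~/.kube/config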

Step 3 - Install Helm

  1. First we need to make sure Tiller (the server part of Helm) has a ServiceAccount it can use, and give it enough permissions; in this example I give it cluster-admin permissions. Copy and paste this into a file on your local machine and call it something like serviceaccount-tiller.yaml.
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: tiller
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: tiller
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
      - kind: ServiceAccount
         name: tiller
         namespace: kube-system
    
  2. Create the Tiller ServiceAccount.
    $ kubectl apply -f serviceaccount-tiller.yaml
    
  3. Download the latest release of Helm from https://github.com/helm/helm/releases for your OS and put it in your $PATH so you can execute it from anywhere, like /usr/local/bin/helm if you are on linux.
  4. Init Helm using the Tiller ServiceAccount (a quick verification is sketched after this step).
    $ helm init --service-account tiller
    
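To verify that Tiller actually started (this assumes Helm 2, which is what the steps above use), helm version should report both a client and a server version once the tiller-deploy pod is running:

$ helm version
$ kubectl get pods --namespace kube-system | grep tiller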

Step 4 - Install Cert manager

  1. Download the CustomResourceDefinition yaml file from https://raw.githubusercontent.com/jetstack/cert-manager/release-0.9/deploy/manifests/00-crds.yaml
  2. Apply the CustomResourceDefinition.
    $ kubectl apply -f 00-crds.yaml
    
  3. Add the Jetstack Helm chart repository (the gang behind Cert manager)
    $ helm repo add jetstack https://charts.jetstack.io
    
  4. Install the Cert manager Helm chart.
    $ helm install --name cert-manager --namespace cert-manager jetstack/cert-manager
    
  5. Add the Cert manager TLS Issuer, basically some config that will identify you at Let's Encrypt and a reference to a secret your Ingress will use to get the cert (applying the file is shown right after the example).
     apiVersion: certmanager.k8s.io/v1alpha1
     kind: Issuer
     metadata:
       name: letsencrypt-prod
       namespace: pixelpiloten-blog
     spec:
       acme:
         # The ACME server URL
         server: https://acme-v02.api.letsencrypt.org/directory
         # Email address used for ACME registration, update to your own.
         email: <REDACTED>
         # Name of a secret used to store the ACME account private key
         privateKeySecretRef:
           name: letsencrypt-prod
         # Enable the HTTP-01 challenge provider
         http01: {}
    
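Assuming you saved the Issuer above to a file called issuer-letsencrypt-prod.yaml (the filename is just an example), apply it like any other manifest:

$ kubectl apply -f issuer-letsencrypt-prod.yaml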

Final step - Your Ingress

Add the annotations for Traefik in your ingress so Traefik can see them and add TLS to your domain/subdomain. In this example I also redirect all plain HTTP requests to HTTPS.

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: pixelpiloten-blog-ingress
  namespace: pixelpiloten-blog
  annotations:
      kubernetes.io/ingress.class: "traefik"
      certmanager.k8s.io/issuer: "letsencrypt-prod"
      certmanager.k8s.io/acme-challenge-type: http01
      traefik.ingress.kubernetes.io/redirect-entry-point: https
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: pixelpiloten-blog-service
          servicePort: 80
  tls:
    - hosts:
      - www.pixelpiloten.se
      secretName: pixelpiloten-se-cert
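
Once cert-manager has solved the HTTP-01 challenge for www.pixelpiloten.se, the certificate should end up in the secret referenced above. A quick (and slightly simplified) way to check on it:

$ kubectl get secret pixelpiloten-se-cert --namespace pixelpiloten-blog
$ kubectl describe certificate --namespace pixelpiloten-blog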

Tadaa!

You have now set up K3S on a server, with Helm and Cert manager on it for automatic TLS certificates with Let's Encrypt and Traefik. Now go build your application :) Fuck yeah

Sat 03 Aug 2019, 08:00

01 August 2019

Pixelpiloten

K3S - Kubernetes on the cheap

Kubernetes - Will cost you

One of the caveats you run into when you want to run Kubernetes is that you need a lot of computing power (mostly memory) to run it, and you’ll need to “rent” more than one VPS from your Cloud provider (Amazon AWS, Google Cloud, Azure etc.) or wherever you get your servers.

A minimum Kubernetes cluster would look something like this, although I would recommend running the master and etcd on separate nodes:

  • 1 Master & Etcd node (needs at least 8gb of memory)
  • 1 Worker node (depends on the workload you deploy to it but at least 4gb memory)

(A node is the same thing as a server in Kubernetes.)

So the cost for this would be around 100-150 US dollars per month, and if you are taking your first steps in deploying applications to Kubernetes that can be a bit much.

K3S - Will cost you (much) less

K3S is often described as a lightweight Kubernetes distribution. It is built by the people behind Rancher and is just that: a slimmed-down version of Kubernetes where the Rancher team have removed things like:

  • Legacy, alpha, non-default features.
  • Most in-tree plugins (cloud providers and storage plugins).
  • etcd3, in favor of sqlite3 as the default storage mechanism.

These changes and more make the footprint of K3S much, much smaller, and they have also made K3S available as a single binary so you don't have to be an expert in installing Kubernetes to get started.

Easy to install. A binary of less than 40 MB. Only 512 MB of RAM required to run.

They have also built support for IOT devices running ARM CPUs, and you can for instance run K3S on a Raspberry Pi. Another great thing is that they have included Traefik as the default Ingress controller, which has tons of annotations you can use in your ingress definitions. Epic win!

And in fact, this very blog you are reading is running on K3S on a cheap VPS from Hetzner, and the next article will be a tutorial on how I set this up together with Cert manager for automatic SSL certificate generation.

Thu 01 Aug 2019, 00:48

28 July 2019

Tore Anderson

Validating SSH host keys with DNSSEC

(Note: this is a repost of an article from the Redpill Linpro techblog.)

We have all done it. When SSH asks us this familiar question:

$ ssh redpilllinpro01.ring.nlnog.net
The authenticity of host 'redpilllinpro01.ring.nlnog.net (2a02:c0:200:104::1)' can't be established.
ECDSA key fingerprint is SHA256:IM/o2Qakw4q7vo9dBMLKuKAMioA7UeJSoVhfc5CYsCs.
Are you sure you want to continue connecting (yes/no/[fingerprint])?

…we just answer yes - without bothering to verify the fingerprint shown.

Many of us will even automate answering yes to this question by adding StrictHostKeyChecking accept-new to our ~/.ssh/config file.

Sometimes, SSH will be more ominous:

$ ssh redpilllinpro01.ring.nlnog.net
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
SHA256:IM/o2Qakw4q7vo9dBMLKuKAMioA7UeJSoVhfc5CYsCs.
Please contact your system administrator.
Add correct host key in /home/tore/.ssh/known_hosts to get rid of this message.
Offending ECDSA key in /home/tore/.ssh/known_hosts:448
ECDSA host key for redpilllinpro01.ring.nlnog.net has changed and you have requested strict checking.
Host key verification failed.

This might make us stop a bit and ask ourselves: «Has a colleague re-provisioned this node since the last time I logged in to it?»

Most of the time, the answer will be: «Yeah, probably», followed by something like sed -i 448d ~/.ssh/known_hosts to get rid of the old offending key. Problem solved!

These are all very understandable and human ways of dealing with these kinds of repeated questions and warnings. SSH certainly does «cry wolf» a lot! Let us not think too much about what happens that one time someone actually is «DOING SOMETHING NASTY», though…

Another challenge occurs when maintaining a large number of servers using automation software like Ansible. Manually answering questions about host keys might be impossible, as the automation software likely needs to run entirely without human interaction. The cop out way of ensuring it can do so is to disable host key checking altogether, e.g., by adding StrictHostKeyChecking no to the ~/.ssh/config file.

DNSSEC-validated SSH host key fingerprints in DNS

Fortunately a better way of securely verifying SSH host keys exists - one which does not require lazy and error-prone humans to do all the work.

This is accomplished by combining DNS Security Extensions (DNSSEC) with SSHFP resource records.

To make use of this approach, you will need the following:

  1. The SSH host keys published in DNS using SSHFP resource records
  2. Valid DNSSEC signatures on the SSHFP resource records
  3. A DNS recursive resolver which supports DNSSEC
  4. A stub resolver that is configured to request DNSSEC validation
  5. An SSH client that is configured to look for SSH host keys in DNS

I will elaborate on how to implement each of these requirements in the sections below.

1. Publishing SSHFP host keys in DNS

The ssh-keygen utility provides an easy way to generate the correct SSHFP resource records based on contents of the /etc/ssh/ssh_host_*_key.pub files. Run it on the server like so:

$ ssh-keygen -r $(hostname --fqdn).
redpilllinpro01.ring.nlnog.net. IN SSHFP 1 1 5fca087a7c3ebebbc89b229a05afd450d08cf9b3
redpilllinpro01.ring.nlnog.net. IN SSHFP 1 2 cdb4cdaf7734df343fd567e0cab92fd6ac5f2754bfef797826dfd4bcf90f0baf
redpilllinpro01.ring.nlnog.net. IN SSHFP 2 1 613f389a36cf33b67d9bd69e381785b275e101cd
redpilllinpro01.ring.nlnog.net. IN SSHFP 2 2 8a07b97b96d826a7d4d403424b97a8ccdb77105b527be7d7be835d02fdb9cd58
redpilllinpro01.ring.nlnog.net. IN SSHFP 3 1 3e46cecd986042e50626575231a4a155cb0ee5ca
redpilllinpro01.ring.nlnog.net. IN SSHFP 3 2 20cfe8d906a4c38abbbe8f5d04c2cab8a00c8a803b51e252a1585f739098b02b

These entries can be copied and pasted directly into the zone file in question so that they are visible in DNS:

$ dig +short redpilllinpro01.ring.nlnog.net. IN SSHFP | sort
1 1 5FCA087A7C3EBEBBC89B229A05AFD450D08CF9B3
1 2 CDB4CDAF7734DF343FD567E0CAB92FD6AC5F2754BFEF797826DFD4BC F90F0BAF
2 1 613F389A36CF33B67D9BD69E381785B275E101CD
2 2 8A07B97B96D826A7D4D403424B97A8CCDB77105B527BE7D7BE835D02 FDB9CD58
3 1 3E46CECD986042E50626575231A4A155CB0EE5CA
3 2 20CFE8D906A4C38ABBBE8F5D04C2CAB8A00C8A803B51E252A1585F73 9098B02B

How to automatically update the SSHFP records in DNS when a node is being provisioned is left as an exercise for the reader, but one nifty little trick is to run something like ssh-keygen -r "update add $(hostname --fqdn). 3600". This produces output that can be piped directly into nsupdate(1).
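
A minimal sketch of that trick, assuming a TSIG key file named Kexample.+157+12345.key (the name is made up) with update rights to the zone:

$ { ssh-keygen -r "update add $(hostname --fqdn). 3600"; echo send; } | \
    nsupdate -k Kexample.+157+12345.key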

If you for some reason can not run ssh-keygen on the server, you can also use a tool called sshfp. This tool will take the entries from ~/.ssh/known_hosts (i.e., those you have manually accepted earlier) and convert them to SSHFP syntax.

2. Ensuring the DNS records are signed with DNSSEC

DNSSEC signing of the data in a DNS zone is a task that is usually performed by the DNS hosting provider, so normally you would not need to do this yourself.

There are several web sites that will verify that DNSSEC signatures exist and validate for any given host name. The two best known are DNSViz and the DNSSEC Debugger.

If DNSViz shows that everything is «secure» in the left column (example) and the DNSSEC Debugger only shows green ticks (example), your DNS records are correctly signed and the SSH client should consider them secure for the purposes of SSHFP validation.

If DNSViz and the DNSSEC Debugger give you a different result, you will most likely have to contact your DNS hosting provider and ask them to sign your zones with DNSSEC.

3. A recursive resolver that supports DNSSEC

The recursive resolver used by your system must be capable of validating DNSSEC signatures. This can be verified like so:

$ dig redpilllinpro01.ring.nlnog.net. IN SSHFP +dnssec
[...]
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 7, AUTHORITY: 0, ADDITIONAL: 1
[...]

Look for the ad flag («Authenticated Data») in the answer. If present, it means that the DNS server confirms that the supplied answer has a valid DNSSEC signature and is secure.

If the ad flag is missing when querying a hostname known to have valid DNSSEC signatures (e.g., redpilllinpro01.ring.nlnog.net), your DNS server is probably not DNSSEC capable. You can either ask your ISP or IT department to fix that, or change your system to use a public DNS server known to be DNSSEC capable.

Cloudflare’s 1.1.1.1 is one well-known example of a public recursive resolver that supports DNSSEC. To change to it, replace any pre-existing nameserver lines in /etc/resolv.conf with the following:

nameserver 1.1.1.1
nameserver 2606:4700:4700::1111
nameserver 1.0.0.1
nameserver 2606:4700:4700::1001

4. Configuring the system stub resolver to request DNSSEC validation

By default, the system stub resolver (part of the C library) does not set the DO («DNSSEC OK») bit in outgoing queries. This prevents DNSSEC validation.

DNSSEC is enabled in the stub resolver by enabling EDNS0. This is done by adding the following line to /etc/resolv.conf:

options edns0

5. Configuring the SSH client to look for host keys in DNS

Easy peasy: either you can add the line VerifyHostKeyDNS yes to your ~/.ssh/config file, or you can supply it on the command line using ssh -o VerifyHostKeyDNS=yes.
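
As a tiny example, the ~/.ssh/config variant could look like this (the Host pattern is just an illustration; use whatever matches the hosts you care about):

Host *.ring.nlnog.net
    VerifyHostKeyDNS yes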

Verifying that it works

If you have successfully implemented steps 1-5 above, we are ready for a test!

If you have only done steps 3-5, you can still test using redpilllinpro01.ring.nlnog.net (or any other node in the NLNOG RING for that matter). The NLNOG RING nodes will respond to SSH connection attempts from everywhere, and they all have DNSSEC-signed SSHFP records registered.

$ ssh -o UserKnownHostsFile=/dev/null -o VerifyHostKeyDNS=yes no-such-user@redpilllinpro01.ring.nlnog.net
no-such-user@redpilllinpro01.ring.nlnog.net: Permission denied (publickey).

Ignore the fact that the login attempt failed with «permission denied» - this test was a complete success, as the SSH client did not ask to manually verify the SSH host key.

UserKnownHostsFile=/dev/null was used to ensure that any host keys manually added to ~/.ssh/known_hosts at an earlier point in time would be ignored and not skew the test.

It is worth noting that SSH does not add host keys verified using SSHFP records to the ~/.ssh/known_hosts file - it will validate the SSHFP records every time you connect. This ensures that even if the host keys change, e.g., due to the server being re-provisioned, the ominous «IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY» warning will not appear - provided the SSHFP records in DNS have been updated, of course.

Trusting the recursive resolver

The setup discussed in this post places implicit trust in the recursive resolver used by the system. That is, you will be trusting it to diligently validate any DNSSEC signatures on the responses it gives you, and to only set the «Authenticated Data» flag if those signatures are truly valid.

You are also placing trust in the network path between the host and the recursive resolver. If the network is under control by a malicious party, the DNS queries sent from your host to the recursive resolver could potentially be hijacked and redirected to a rogue recursive resolver.

This means that an attacker with the capability to hijack or otherwise interfere with both your SSH and DNS traffic could potentially set up a fraudulent SSH server for you to connect to, and make your recursive resolver lie about the SSH host keys being correct and valid according to DNSSEC. The SSH client will not be able to detect this situation on its own.

In order to detect such attacks, it is necessary for your host to double-check the validity of answers received from the recursive resolver by performing local DNSSEC validation. How to set this up will be the subject of a future post here on the Redpill Linpro techblog. Stay tuned!

Sun 28 Jul 2019, 00:00

06 May 2019

Redpill Linpro Techblog

Validating SSH host keys with DNSSEC

We have all done it. When SSH asks us this familiar question:

$ ssh redpilllinpro01.ring.nlnog.net
The authenticity of host 'redpilllinpro01.ring.nlnog.net (2a02:c0:200:104::1)' can't be established.
ECDSA key fingerprint is SHA256:IM/o2Qakw4q7vo9dBMLKuKAMioA7UeJSoVhfc5CYsCs.
Are you sure you want to continue connecting (yes/no/[fingerprint])?

…we just answer yes - without bothering to verify the fingerprint shown.

Many of us will even automate answering yes to this question by adding StrictHostKeyChecking accept-new to our ~/.ssh/config ...

Mon 06 May 2019, 00:00

04 April 2019

Redpill Linpro Techblog

Single node Kubernetes setup

These are essentially my notes on setting up a single-node Kubernetes cluster at home. Every time I set up an instance I have to dig through lots of posts, articles and documentation, much of it contradictory or out-of-date. Hopefully this distilled and much-abridged version will be helpful to someone else.

...

Thu 04 Apr 2019, 00:00

02 April 2019

Magnus Hagander

When a vulnerability is not a vulnerability

Recently, references to a "new PostgreSQL vulnerability" have been circulating on social media (and maybe elsewhere). It's even got its own CVE entry. The origin appears to be a blogpost from Trustwave.

So is this actually a vulnerability? (Hint: it's not) Let's see:

by nospam@hagander.net (Magnus Hagander) at Tue 02 Apr 2019, 19:39

25 March 2019

Redpill Linpro Techblog

Configure Alfresco 5.2.x with SAML 2.0

In our project, we have successfully implemented SAML (Security Assertion Markup Language) 2.0 with our Alfresco Content Service v5.2.0. We use AD (Active Directory) to sync users and groups into the Alfresco system.

...

Mon 25 Mar 2019, 00:00

21 March 2019

Ingvar Hagelund

Packages of varnish-6.2.0 with matching vmods, for el6 and el7

The Varnish Cache project recently released a new upstream version 6.2 of Varnish Cache. I updated the fedora rawhide package yesterday. I have also built a copr repo with varnish packages for el6 and el7 based on the fedora package. A snapshot of matching varnish-modules (based on Nils Goroll’s branch) is also available.

Packages are available at https://copr.fedorainfracloud.org/coprs/ingvar/varnish62/.

vmods included in varnish-modules:
vmod-bodyaccess
vmod-cookie
vmod-header
vmod-saintmode
vmod-tcp
vmod-var
vmod-vsthrottle
vmod-xkey

by ingvar at Thu 21 Mar 2019, 08:29