Planet Redpill Linpro

03 October 2019

Pixelpiloten

Updates to K3S Ansible

So last week I released an Ansible playbook for getting K3S up and working within 5 minutes (https://github.com/pixelpiloten/k3s-ansible), and both the blog post and my GitHub account spiked in traffic. For example, the blog post I made about it (https://www.pixelpiloten.se/blog/kubernetes-running-in-5min/) attracted about 1000% more visitors according to my Google Analytics account, and I got a bunch of feedback on Reddit, LinkedIn and Disqus. Thank you for your interest, it encourages me to continue my work on the project.

Thank you everyone :)

So I made some updates…

  1. In the playbook I created functionality to choose which K3S version you want to use, BUT unfortunately the specific code I wrote for that did not work. It is now fixed and should download and install the version you provide in the inventory file (see the sketch below).
  2. I had hard-coded the folder the K3S installer is downloaded to; this is now available as a variable.
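
As a rough sketch of what pinning a version in the inventory could look like (the variable name and version here are hypothetical; check inventory.example in the repository for the real name):

    all:
      vars:
        k3s_version: v0.9.1 # Hypothetical variable name and example version.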

Future

  • Provide a virtual machine to test this Ansible playbook, both for developing the playbook and for testing K3S. This work has been started in a separate branch (https://github.com/pixelpiloten/k3s-ansible/tree/vagrant) but is not ready yet.
  • Ability to upgrade K3S, and with it Kubernetes, to a new version by changing the version number in the inventory file.
  • Support more operating systems than Ubuntu. CentOS will probably be the first one.

To the future and beyond!

Thu 03 Oct 2019, 14:15

27 September 2019

Redpill Linpro Techblog

Running PostgreSQL in Google Kubernetes Engine

Several Redpill Linpro customers are now in the kubernetes way of delivery. Kubernetes has changed the way they work, and is acting as an effective catalyst empowering their developers. For these customers, the old-school way of running PostgreSQL is becoming a bit cumbersome:

The typical PostgreSQL installation has been based on bare metal, or, the past few years, virtual machines. They are often set up as streaming replication clusters, with a primary r/w instance, and one or more replicating r/o ...

Fri 27 Sep 2019, 22:00

25 September 2019

Pixelpiloten

Kubernetes cluster (K3S) running in 5 min

Setting up Kubernetes can be a procedure that takes some time, but with the Kubernetes distribution K3S and an Ansible playbook we can get a Kubernetes cluster up and running within 5 minutes.

So I created just that: an Ansible playbook where you just add your nodes to an inventory file, and the playbook will install all the dependencies we need for K3S. The workers will automatically join the master, AND I have also added some firewall rules (with help from UFW) so you have some basic protection for your servers. Let’s go through how to use it.

Requirements

  • Git
  • Virtualenv (Python) on the computer you will deploy this from.
  • At least 2 servers with SSH access.
    • Must be based on Ubuntu.
    • User with sudo permissions.
    • Must have docker installed.

Preparations

  • Point a hostname to the server you want to be your kubernetes_master_server.

Steps

Step 1

Prepare your local environment (or wherever you deploy this from) with the dependencies we need. This will set up a virtual Python environment and make Python, pip and Ansible available in your $PATH so you can execute them.

  1. Clone the K3S Ansible repository from Github.

     $ git clone git@github.com:pixelpiloten/k3s-ansible.git
    
  2. Go to the ansible directory and create your Virtual Python environment.

     $ virtualenv venv
    
  3. Activate the Virtual Python environment.

     $ source venv/bin/activate
    

Step 2

Configure your inventory file with the servers you have. You can add as many workers as you want, but there can only be one master in K3S (though multiple masters might be supported in the future).

  1. Rename the inventory.example file to inventory.

     $ mv inventory.example inventory
    
  2. Change the kubernetes_master_server to the server you want to be your Kubernetes master, and add the IPs that should have access to the cluster via kubectl to the allowed_kubernetes_access list. Also change ansible_ssh_host, ansible_ssh_user, ansible_ssh_port, ansible_ssh_private_key_file and hostname_alias to your server login details. And of course add as many workers as you have.

     all:
       vars:
         ansible_python_interpreter: /usr/bin/python3
         kubernetes_master_server: https://node01.example.com:6443 # Change to your master server
         allowed_kubernetes_access: # Change these to a list of ips outside your cluster that should have access to the api server.
           - 1.2.3.4
           - 1.2.3.5
       children:
         k3scluster:
           hosts:
             node01: # We can only have one master.
               ansible_ssh_host: node01.example.com
               ansible_ssh_user: ubuntu
               ansible_ssh_port: 22
               ansible_ssh_private_key_file: ~/.ssh/id_rsa
               hostname_alias: node01
               kubernetes_role: master # Needs to be as it is.
               node_ip: 10.0.0.1
             node02: # Copy this and add how many workers you want and have.
               ansible_ssh_host: node02.example.com
               ansible_ssh_user: ubuntu
               ansible_ssh_port: 22
               ansible_ssh_private_key_file: ~/.ssh/id_rsa
               hostname_alias: node02
               kubernetes_role: worker # Needs to be as it is.
               node_ip: 10.0.0.2
    

Step 3

You are now ready to install K3S on your servers. The Ansible playbook will go through the inventory file, install the dependencies on your servers and then install K3S on your master and your workers; the workers will automatically join your K3S master. At the end the playbook will write a Kubeconfig file to /tmp/kubeconfig.yaml on the machine you are running the playbook on.

  1. Run the Ansible playbook and supply the sudo password when Ansible asks for it.

     $ ansible-playbook -i inventory playbook.yml --ask-become-pass
    
  2. When the playbook is done you can copy the Kubeconfig file /tmp/kubeconfig.yaml to your ~/.kube/config or wherever you want to keep it, BUT you need to modify the server hostname to whatever your kubernetes_master_server is. (PS. Do not use the content below, use the /tmp/kubeconfig.yaml that got generated locally.)

     apiVersion: v1
     clusters:
     - cluster:
         certificate-authority-data: <REDACTED-CERTIFICATE>
         server: https://0.0.0.0:6443
       name: default
     contexts:
     - context:
         cluster: default
         user: default
       name: default
     current-context: default
     kind: Config
     preferences: {}
     users:
     - name: default
       user:
         password: <REDACTED-PASSWORD>
         username: admin
    

Tadaa!

So you have now created a K3S Kubernetes cluster, gotten the Kubeconfig and added magic firewall rules so only you have access. And if you need to add more workers after the initial setup, you can just add them to the inventory file and run the playbook from Step 3.1 again.

So walk like a baws down the office corridor with your elbows high and call it a day!

You done did it Baws!

Wed 25 Sep 2019, 09:30

20 September 2019

Pixelpiloten

Build your own PaaS?

Ok, that headline was a bit baity, but let me explain why I’m not completely trolling for site views.

Kubernetes has an extensive API from which you can get pretty much anything, and there are client libraries for all kinds of languages.

A list of official and community-driven libraries can be found here: https://kubernetes.io/docs/reference/using-api/client-libraries/

So with any of these libraries you have the ability to list resources as well as create new ones, like Ingresses, Pods, Services and Deployments, and of course change them. Now that’s powerful, all that power inside your language of choice.
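
As a minimal sketch of what that looks like with the official Python client (assuming a valid kubeconfig on the machine running the script), listing the deployments in a namespace is just a few lines:

    # Minimal sketch using the official Python client (the kubernetes package).
    from kubernetes import client, config

    config.load_kube_config()  # Load credentials from $KUBECONFIG / ~/.kube/config.
    apps = client.AppsV1Api()

    # Print the name and replica count of every deployment in the "default" namespace.
    for deployment in apps.list_namespaced_deployment("default").items:
        print(deployment.metadata.name, deployment.spec.replicas)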

I got the power!

With great power comes great responsibility

As the old Spider-Man quote goes, “With great power comes great responsibility”. If you are going to use any of these tools, you need to make sure that the developer who is going to programmatically access your Kubernetes cluster has an account with only as many permissions as that person should have, and preferably only has access to a test server or a limited namespace, for example. You don’t want someone to develop against live production and lose your database because of a missing quotation mark somewhere; trust me, I’ve been there.
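
As a sketch of what such limited permissions could look like (all names here are hypothetical), a Role and RoleBinding that give a developer read-only access in a single namespace:

    # Hypothetical example: read-only access to pods and deployments in one namespace.
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: developer-read-only
      namespace: dev-playground
    rules:
      - apiGroups: ["", "apps"]
        resources: ["pods", "deployments", "replicasets"]
        verbs: ["get", "list", "watch"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: developer-read-only
      namespace: dev-playground
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: developer-read-only
    subjects:
      - kind: User
        name: jane.developer # Hypothetical user.
        apiGroup: rbac.authorization.k8s.io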

A handful of tools

With that disclaimer out of the way, let’s play around and get up to some shenanigans. I have chosen the Python library https://www.github.com/kubernetes-client/python since I have coded a lot in Python, but wait, there is more. I wanted to make as close to a real-world example of how a PaaS application could look as I could, and that means a backend that does the API calls via the Python library and a frontend that actually shows the result. So here is my setup:

  • Backend
    • The Python Kubernetes client library - https://www.github.com/kubernetes-client/python
    • A number of Python scripts that call the Kubernetes API using the Kubeconfig file on the host the scripts are executed from.
    • Outputs the results as JSON.
  • Frontend
    • Laravel
      • Using the Symfony component symfony/process I execute one of the Python scripts and convert that JSON to a PHP object in a Controller and send that to a Blade template where I list the output.
      • Bootstrap 4

Coding coding coding

An example

So let’s go through an example page where I list the details about a deployment and its rollout history.

The backend (Python script)

This script gets details about a deployment like name, namespace, what container image it runs and the rollout history of it and you execute it like this: python3 /app/backend/deployment-details.py --namespace=mynamespace --deployment=mydeployment

# -*- coding: UTF-8 -*-
from kubernetes import client, config
import json
import argparse
from pprint import pprint

class Application:
    def __init__(self):
        # Parse arguments.
        argumentParser = argparse.ArgumentParser()
        argumentParser.add_argument('-n', '--namespace', required=True)
        argumentParser.add_argument('-d', '--deployment', required=True)
        self.arguments = argumentParser.parse_args()
        # Load kubeconfig, from env $KUBECONFIG.
        config.load_kube_config()
        self.kubeclient = client.AppsV1Api()
        # Load the beta2 api for the deployment details.
        self.kubeclientbeta = client.AppsV1beta2Api()

    # This is just because I used a user with full access to my cluster, DON'T do this kids!
    def forbiddenNamespaces(self):
        return ["kube-system", "cert-manager", "monitoring", "kube-node-lease", "kube-public"]

    # Here we get some well choosen details about our deployment.
    def getDeploymentDetails(self):
        # Calls the API.
        deployments_data = self.kubeclient.list_namespaced_deployment(self.arguments.namespace)
        deployment_details = {}
        # Add the deployment details to our object if the deployment was found.
        for deployment in deployments_data.items:
            if deployment.metadata.namespace not in self.forbiddenNamespaces():
                if self.arguments.deployment == deployment.metadata.name:
                    deployment_details = deployment
                    break

        # Here we set our response.
        if not deployment_details:
            response = {
                "success": False,
                "message": "No deployment with that name was found."
            }
        else:
            containers = {}
            for container in deployment_details.spec.template.spec.containers:
                # Reset the ports for each container so they don't accumulate across containers.
                container_ports = {}
                for port in container.ports:
                    container_ports[port.container_port] = {
                        "container_port": port.container_port,
                        "host_port": port.host_port
                    }
                containers[container.name] = {
                        "name": container.name,
                        "image": container.image,
                        "ports": container_ports
                    }
            response = {
                "success": True,
                "message": {
                    "name": deployment_details.metadata.name,
                    "namespace": deployment_details.metadata.namespace,
                    "uid": deployment_details.metadata.uid,
                    "spec": {
                        "replicas": deployment_details.spec.replicas,
                        "containers": containers
                    },
                    "history": self.getDeploymentHistory(deployment_details.metadata.namespace, deployment_details.metadata.name)
                }
            }
        return response

    # To get the rollout history of a deployment we need to call another method and use the beta2 API.
    def getDeploymentHistory(self, namespace, deployment):
        deployment_history = {}
        deployment_revisions = self.kubeclientbeta.list_namespaced_replica_set(namespace)
        for deployment_revision in deployment_revisions.items:
            if deployment_revision.metadata.name.startswith(deployment):
                deployment_history[deployment_revision.metadata.annotations['deployment.kubernetes.io/revision']] = {
                    "revision": deployment_revision.metadata.annotations['deployment.kubernetes.io/revision']
                }
        return deployment_history

    # The method we actually call to run everything and outputs the result in JSON.
    def run(self):
        response = json.dumps(self.getDeploymentDetails())
        print(response)

app = Application()
app.run()

Frontend (Laravel - Route)

In Laravel you create a route for the URL where you want to display your page. In this case I use the URL /deployments/{namespace}/{deployment}, where namespace and deployment are dynamic values provided in the URL, and I send that request to my controller, a PHP class called DeploymentsController (classic MVC programming).

<?php
Route::get('/deployments/{namespace}/{deployment}', 'DeploymentsController@show');

Frontend (Laravel - Controller)

In my controller I execute the Python script with the help of the symfony/process component and convert the JSON I get from that output to a PHP object and send that to the Blade template.

<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use Symfony\Component\Process\Exception\ProcessFailedException;
use Symfony\Component\Process\Process;

class DeploymentsController extends Controller
{
    public function show($namespace, $deployment) {
        $process = new Process([
            "python3",
            "/app/backend/deployment-details.py",
            "--namespace=". $namespace,
            "--deployment=". $deployment
        ]);
        $process->run();

        if (!$process->isSuccessful()) {
            throw new ProcessFailedException($process);
        }
        $deployment_details = $process->getOutput();
        $deployment_details_array = json_decode($deployment_details);
        $deployment_details_history_array = collect($deployment_details_array->message->history);
        $deployment_details_array->message->history = $deployment_details_history_array->sortByDesc("revision");
        return view("deployment-details", ["deployment_details" => $deployment_details_array->message]);
    }
}

Frontend (Laravel - Template)

…and in our Blade template we render the deployment details we got from our controller. This is not the full HTML page, since we are extending another template (read more here: https://laravel.com/docs/5.8/blade#extending-a-layout), but these are the important parts. Blade templates allow you to use some PHP functions, but 99% are stripped away so you don’t write a bunch of PHP inside the templates; you can still do things like loops, render variables and do some formatting on them, etc.

@extends('layouts.app')
@section('title', $deployment_details->name)
@section('content')
<p>
    <a href="/deployments">&laquo; Go back to deployments</a>
</p>
<div class="jumbotron">
    <ul>
        <li><span class="font-weight-bold">Name: </span>{{ $deployment_details->name }}</li>
        <li><span class="font-weight-bold">Namespace: </span>{{ $deployment_details->namespace }}</li>
        <li><span class="font-weight-bold">UID: </span>{{ $deployment_details->uid }}</li>
        <li>
            <span class="font-weight-bold">Spec:</span>
            <ul>
                <li><span class="font-weight-bold">Replicas: </span>{{ $deployment_details->spec->replicas }}</li>
                <li>
                    <span class="font-weight-bold">Containers:</span>
                    <ul>
                        @foreach ($deployment_details->spec->containers as $container)
                            <li><span class="font-weight-bold">Name: </span>{{ $container->name }}</li>
                            <li><span class="font-weight-bold">Image: </span>{{ $container->image }}</li>
                            <li>
                                <span class="font-weight-bold">Ports:</span>
                                <ul>
                                    @foreach ($container->ports as $port)
                                        <li><span class="font-weight-bold">Container port: </span>{{ $port->container_port }}</li>
                                        <li><span class="font-weight-bold">Host port: </span>{{ $port->host_port }}</li>
                                    @endforeach
                                </ul>
                            </li>
                        @endforeach
                    </ul>
                </li>
            </ul>
        </li>
    </ul>
</div>
<table class="table">
    <thead class="thead-dark">
        <tr>
            <th scope="col">Revision</th>
            <th scope="col">Cause</th>
        </tr>
    </thead>
    <tbody>
        @foreach($deployment_details->history as $deployment_revision)
            <tr>
                <td>{{ $deployment_revision->revision }}</td>
                <td>-</td>
            </tr>
        @endforeach
    </tbody>
</table>
@endsection

Screenshots, give me Screenshots!

Here is an example of what this application looks like in the flesh. I’ve built more parts than this, but this is the result of all the code above.

DIY PaaS Screenshot

Tadaa!!

And there you have it, a working example of displaying real live data from your Kubernetes cluster. Now this is not a super secure way of doing it, but it’s meant to show you the power of the tools that exist out there and spark your imagination. Think of the possibilities in integrating this into your existing infrastructure, creating statistics or actually creating your own PaaS; the sky’s the limit. Go forth and play :)

Fri 20 Sep 2019, 11:15

17 September 2019

Pixelpiloten

What is DevOps?

I often get asked what it is that I actually do, both by people within the tech industry and outside it, and I don’t always have a simple explanation for that. So I thought I should try and write this down: what is it that I do, and what is this DevOps thing really?

A Wikipedia article about the subject (https://en.wikipedia.org/wiki/DevOps) starts out by defining it as:

DevOps is a set of software development practices that combine software development (Dev) and information-technology operations (Ops) to shorten the systems-development life cycle while delivering features, fixes, and updates frequently in close alignment with business objectives.

And that describes it pretty well, but let’s try to boil it down and talk about what kind of tasks you would be doing and what kind of technologies and software you would use. Now this is going to be from my personal experience; yours might differ, but the basic principles should be there.

Tasks for a DevOps engineer

Local development

  • Create a local development platform.
    • Set up a local development environment in a virtual machine (like VirtualBox with Vagrant) or in Docker that developers can copy, run and develop their applications in (a minimal Docker-based sketch follows after this list).
    • Set up a skeleton of one of the applications/frameworks the developers work with.
    • Document the above setup and make it easy to get started, preferably with one or a few commands.
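
A minimal sketch of the Docker variant of such an environment (images, ports and paths are just examples, not a specific project):

    # docker-compose.yml - minimal local development sketch.
    version: "3.7"
    services:
      web:
        image: php:7.2-apache      # Example application runtime.
        ports:
          - "8080:80"
        volumes:
          - ./src:/var/www/html    # Mount the project code into the container.
      db:
        image: mariadb:10.3
        environment:
          MYSQL_ROOT_PASSWORD: example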

CI

  • Set up CI servers and deploy jobs to make deployment as easy and as hands-off as possible.
  • Set up test servers that run tests when the CI job runs, for example on every commit or before a deployment.
  • Set up security checks in the CI jobs (see the sketch after this list), for example:
    • Something like sensiolabs/security-checker to scan for vulnerabilities in Symfony components in Symfony-based applications.
    • Something like Clair to scan Docker containers for vulnerabilities.
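
As a rough sketch of such a check in a CI job (the exact invocation depends on how the checker is installed, so treat this as an assumption rather than a recipe):

    # Scan composer.lock for packages with known vulnerabilities
    # (assumes the sensiolabs/security-checker CLI is installed via Composer).
    $ vendor/bin/security-checker security:check composer.lock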

Infrastructure

Logging and Monitoring

  • Set up servers for aggregated logging.
  • Set up monitoring for your infrastructure.
  • Set up monitoring for your applications.

Other

  • Set up backups for files and databases with cron jobs (an example crontab entry follows after this list).
  • Set up mail servers.
  • Do regular security maintenance on all the infrastructure.
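
As an example of the backup part (paths, database name and user are placeholders), a couple of crontab entries that dump a PostgreSQL database and sync a files directory every night:

    # Dump the database at 03:00 and sync the files directory at 03:30.
    0 3 * * * pg_dump -U myuser mydatabase | gzip > /backups/mydatabase-$(date +\%F).sql.gz
    30 3 * * * rsync -a /var/www/files/ /backups/files/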

Communication

Communication is key!

One of the things you definitely need as a DevOps engineer is communication skills. You are going to have to talk to a lot of people in your daily job, everyone from developers to customers, your IT partners, your customers’ IT partners, project leaders and managers. Since what you do touches everything from local development up to the production infrastructure, you affect them all, and you need to work together and be open and transparent about all of this.

In conclusion

Is this a complete list of what you do as a DevOps engineer? No, but it’s a start, and it’s very much colored by my personal experience. I would go back to the Wikipedia quote: it’s all about making delivery of your applications, from 0 to 100, as automatic and easy as possible.

All you need is me!

And try to have that mindset throughout all the software and infrastructure you use. Always think “can we automate this process?” and “can we make this part of the repository and make it more transparent?”, so that whenever you quit your job for new adventures or get hit by a bus, your company is not screwed because you were the only one who knew how “..that odd server was set up”.

So I hope that was helpful in understanding what a DevOps engineer does. If you agree or have anything to add, please be vocal and make your voice heard in the comments below.

Tue 17 Sep 2019, 09:00

14 September 2019

Pixelpiloten

Pixelpiloten is out and about!

So, I recently got hired by Redpill Linpro and am currently at a conference on Mallorca. I thought I should show some initiative at my new gig and hold a presentation, and on Saturday morning at 9 o’clock no less.

I did not know what I wanted to talk about at first, but after some thinking I decided I should do something impressive, you know, really wow them.

So how do you best impress your new colleagues with how damn skilled you are? You do a live demo of deploying an application to Kubernetes via Git… hopefully this will work, you know how haunted live demos are.

Cross your fingers

Sat 14 Sep 2019, 06:45

11 September 2019

Pixelpiloten

Persistent volumes in Kubernetes

When you run Kubernetes on a cloud provider like Amazon AWS, Google Cloud, Azure or OpenStack, creating volumes on the fly for your persistent storage needs is easy peasy. So come along and I’ll show you how you can use this functionality and get your persistent storage goodness.

Steps

In this example our cluster is in Amazon AWS so we can use EBS volumes (look here for other Cloud storage provisioners: https://kubernetes.io/docs/concepts/storage/storage-classes/#provisioner).

  1. Create a Storage class definition in a file called storageclass.yaml.

     apiVersion: storage.k8s.io/v1
     kind: StorageClass
     metadata:
       name: generalssd
     provisioner: kubernetes.io/aws-ebs
     parameters:
       type: gp2 # This is different for each Cloud provider and what disk types they have and what they name them.
       fsType: ext4
    
  2. Create the StorageClass with Kubectl.

     $ kubectl apply -f storageclass.yaml
    
  3. Create a Persistent volume claim that uses the Storage class we created above and define how much storage you need by creating a file called pvc.yaml and paste this into it.

     apiVersion: v1
     kind: PersistentVolumeClaim
     metadata:
       name: mycoolvolumeclaim
     spec:
       accessModes:
         - ReadWriteOnce
       resources:
         requests:
           storage: 1Gi # Specify the size of the volume you want.
       storageClassName: generalssd # This is the name of the Storage class we created above.
    
  4. Create the PersistentVolumeClaim with Kubectl.

     $ kubectl apply -f pvc.yaml
    
  5. You can now use that volume and mount it inside your container. In this example we use a database container and mount the database folder inside the container. Create a file called deployment.yaml and paste this:

     apiVersion: apps/v1
     kind: Deployment
     metadata:
       name: mysql
       labels:
         app: mysql
     spec:
       replicas: 1
       selector:
         matchLabels:
           app: mysql
       template:
         metadata:
           labels:
             app: mysql
         spec:
           containers:
           - name: mysql
             image: mysql
             volumeMounts:
               - name: mycoolvolume # Name of the volume you define below.
                 mountPath: /var/lib/mysql # Path inside the container you want to mount the volume on.
              ports:
              - containerPort: 3306 # MySQL's default port.
           volumes:
             - name: mycoolvolume # This pods definition of the volume.
               persistentVolumeClaim:
                 claimName: mycoolvolumeclaim # The PersistentVolumeClaim we created above.
    
  6. Create the Deployment with your database Pod with Kubectl.

     $ kubectl apply -f deployment.yaml
    

So to reiterate the steps:

  • Create a StorageClass where you define a name and what type of disk you want (depends on the cloud provider).
  • Create a PersistentVolumeClaim where you define a name and how much disk space you want.
  • Define what volumes you want to use in your Pod definition, reference the PersistentVolumeClaim you want to use, and mount it on a path inside your container.

Tadaa!

You have now deployed MySQL with a volume attached to the container, so whenever you redeploy it the database(s) will actually persist because of our volume :)

Wed 11 Sep 2019, 12:00

06 September 2019

Pixelpiloten

Prometheus & Grafana - Monitoring in Kubernetes

Monitoring how many resources your server(s) and your apps are using is easier than ever with Prometheus and Grafana. In this tutorial I will show you how to do this on a Kubernetes cluster.

To make this as easy as possible I created a Helm chart that deploys Prometheus and Grafana together. It uses local storage instead of a volume from a cloud vendor, so the only preparation you need to do is create the directories on the Kubernetes node you plan to deploy this on.

I created this because I use K3S for some of my Kubernetes setups and run them on non-cloud vendors, so this is perfect for that: “on-prem” servers or vendors that don’t have cloud volumes.

Requirements

  • Kubernetes cluster
  • Helm (Tiller installed on Kubernetes cluster)

Step 1 - Preparations.

  1. SSH into the server you plan to deploy Prometheus & Grafana to.

  2. Create the directories needed, paths for this can be changed to whatever you supply in values.yaml.

     $ mkdir -p /myvolumes/prometheus/alertmanager
    
     $ mkdir -p /myvolumes/prometheus/pushgateway
    
     $ mkdir -p /myvolumes/prometheus/server
    

Step 2 - Deploy the helm chart.

  1. Clone this repo to your computer (not the server): https://github.com/pixelpiloten/prometheus

  2. Deploy the helm chart.

     $ helm install --name pixelpiloten_prometheus . -f values.yaml
    
  3. Check that all the pods for Prometheus and Grafana are deployed and up and running.

     $ kubectl -n monitoring get pods
    
  4. Create a port-forward to the Grafana Service, port 8080 can be whatever port you want.

     $ kubectl -n monitoring port-forward svc/grafana 8080:80
    
  5. Now you can access the Grafana web GUI in your browser.

     http://127.0.0.1:8080
    

Tadaa!

You have now installed Prometheus and Grafana and can get started creating dashboards, or import existing ones from https://grafana.com/grafana/dashboards, to monitor everything from your Kubernetes nodes to the applications running in the cluster.

My Grafana dashboard monitoring Kubernetes node resources

Fri 06 Sep 2019, 13:45

04 September 2019

Pixelpiloten

Probably a stupid IDE(A)

I thought I would take a small break from the Kubernetes articles and focus a bit on local development with Docker, specifically how you can work completely inside Docker containers, even using an IDE inside a Docker container. GASP! :O

Now this is just a proof of concept, and not even I am convinced that it is such a good idea, but I want to demonstrate the power of containers, and this is an example of what you can do.

The setup

In my previous life I used to do a lot of Drupal-based development, so I thought I could use Drupal as the software we want to work on. This is possible using 3 Docker containers and actually using Docker from the host machine inside the coder container (so Docker inside Docker). The setup looks like this:

Docker containers

  • Apache container
    • PHP 7.2
    • Composer
    • Drush
  • MariaDB container
    • Minor custom settings for Drupal optimization
  • Coder container

Docker volume

…and a Docker volume; this is to get the same speed on Linux, macOS and Windows. On macOS and Windows, Docker runs in a virtual machine and you need to mount a folder inside the Docker containers INSIDE the virtual machine, which causes a reaaaal slowdown in performance, specifically when using Drupal.
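
A rough sketch of what such a named volume looks like in a docker-compose.yml (images and paths are illustrative, not the actual files from the repository):

    version: "3.7"
    services:
      apache:
        image: php:7.2-apache    # Illustrative image.
        volumes:
          - code:/var/www/html   # The code lives in the named volume, not in a bind mount.

    volumes:
      code:                      # Named Docker volume that holds the project code.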

WARNING! If you remove the volume you remove your code!

Requirements

  • Git
  • Docker
  • Docker compose

Step 1 - Start the environment.

  1. Clone this repository somewhere on your hard drive.

     $ git clone git@github.com:pixelpiloten/coder_drupal.git myfolder
    
  2. Start the environment.

     $ docker-compose up --build --force-recreate -d
    

Step 2 - Install drupal.

  1. Open Microsoft Visual Studio Code in the browser using this URL.

    http://127.0.0.1:8443

  2. Click on View in the menu and choose Terminal to open the terminal.

  3. Download Drupal using Composer to the current directory (this will actually exec composer in the apache container).

     $ composer create-project drupal-composer/drupal-project:8.x-dev . --no-interaction
    
  4. OPTIONAL: If you get an error about ...requires behat/mink-selenium2-driver 1.3.x-dev -> no matching package found then add this to the composer.json file under require.

    "behat/mink-selenium2-driver": "dev-master as 1.3.x-dev"

  5. OPTIONAL: Run composer to update your packages if you got the above error.

      $ composer update
    
  6. Install Drupal with Drush.

     $ drush si --db-url=mysql://root:password@mariadb/drupal --site-name=MySite
    
  7. Create a one time login link to login to Drupal admin.

     $ drush uli
    

Tadaa!

So, a development environment including an IDE in the browser, who could have thought this was possible? This is possible because Microsoft Visual Studio Code is actually built on web technologies, and when you use it on your desktop it’s basically running inside a web browser environment… and a bit of Docker-in-Docker magic ;)

The commands

All the commands we ran in the terminal in this tutorial were actually docker exec commands run from the coder container, but executed in the apache container.

The composer and drush commands are actually just bash aliases that you can find in the .docker/coder/config/zshrc_aliases file, now this is very basic and not the most intuitive way but this is a proof of concept so just add what you want and rebuild the containers.

alias php="docker exec -u 1000:1000 -t drupal_apache php"
alias composer="docker exec -u 1000:1000 -t drupal_apache composer"
alias drush="docker exec -u 1000:1000 -it drupal_apache drush --root /home/coder/project/code-server/web --uri 127.0.0.1:8080"

Wed 04 Sep 2019, 13:45

29 August 2019

Pixelpiloten

Kubernetes in Google cloud - Tutorial

Another week, another tutorial and another cloud provider with Kubernetes as a service. This time I will look at installing a Kubernetes cluster in Google Cloud. To do this I will use one of my favorite cloud-native tools, Terraform, from one of my favorite companies in the DevOps landscape, HashiCorp.

Requirements

Step 1 - Create a folder for your terraform files.

  1. Create a folder called googlekube somewhere on your computer.

Step 2 - Create your Google cloud platform API credentials

  1. Login to your Google cloud account and go to the Console.

  2. Hover with the mouse over the APIs & Services in the menu on the left hand side and click on Credentials.

  3. When the page has loaded click on Create credentials and choose Create service account key.

  4. Choose Compute engine default service account in the Service account field and JSON in the Key type field and click Create.

  5. Copy the json-file you downloaded to the googlekube folder you created in Step 1.1 and rename it cloud-credentials.json

Step 3 - Create Terraform files.

  1. Create a file called variables.tf in your googlekube folder with this content, and replace the placeholder project id and region with your project id and the region you want to deploy Kubernetes to (the project id you can find in your cloud-credentials.json, and available regions you can find here: https://cloud.google.com/about/locations/#region). Add -a to your region name, e.g. europe-north1 becomes europe-north1-a; otherwise you will deploy a worker node in each available zone in that region, and that is not necessary in this example.
     variable "goovars" {
         type = "map"
         default = {
             "project" = "<YOUR-PROJECT-ID>"
             "region" = "<REGION-CLOSE-TO-YOU>"
             "node_machine_type" = "n1-standard-1" # The machine type you want your worker nodes to use.
             "node_count" = "1" # How many worker nodes do you want?
             "version" = "1.13.7-gke.19" # Kubernetes version you want to install.
         }
     }
    
  2. Create a file called main.tf in your googlekube folder with this content.
     provider "google" {
         credentials = "${file("cloud-credentials.json")}"
         project     = "${var.goovars["project"]}"
         region      = "${var.goovars["region"]}"
     }
    
     resource "google_container_cluster" "gookube" {
         name     = "gookube"
         location = "${var.goovars["region"]}"
         min_master_version = "${var.goovars["version"]}"
    
         remove_default_node_pool = true
         initial_node_count = 1
    
         master_auth {
             client_certificate_config {
                 issue_client_certificate = false
             }
         }
     }
    
     resource "google_container_node_pool" "gookubenodepool" {
         name       = "gookubenodepool"
         location   = "${var.goovars["region"]}"
         cluster    = "${google_container_cluster.gookube.name}"
         node_count = "${var.goovars["node_count"]}"
    
         node_config {
             preemptible  = true
             machine_type = "${var.goovars["node_machine_type"]}"
    
             metadata = {
                 disable-legacy-endpoints = "true"
             }
    
             oauth_scopes = [
                 "https://www.googleapis.com/auth/logging.write",
                 "https://www.googleapis.com/auth/monitoring",
             ]
         }
     }
    

Step 4 - Create your kubernetes cluster.

  1. Init Terraform to download the Google cloud platform provider (run this command in the googlekube folder).
     $ terraform init
    
  2. Create your Kubernetes cluster and answer Y when Terraform asks for confirmation. This process should take about 10-15 minutes.
     $ terraform apply
    
  3. Get your Kubeconfig to access the Kubernetes cluster with kubectl using the Google cloud platform CLI tool.
     $ gcloud beta container clusters get-credentials gookube --region <THE-REGION-YOU-CHOOSED> --project <YOUR-PROJECT-ID>
    
  4. The above command saved your credentials to your Kubeconfig, normally in ~/.kube/config

  5. Check that you can reach your nodes (the master nodes are completely handled by Google so you will only see your worker nodes here)
     $ kubectl get nodes
    

Tadaa!

So that’s how you can create a Kubernetes cluster on the Google Cloud Platform using their Kubernetes as a service with Terraform. Overall I would say that setting up Kubernetes with Terraform can be a bit of a hassle with a cloud provider, but so far Google Cloud Platform has been the easiest to work with when using Terraform in this manner.

Thu 29 Aug 2019, 12:25

26 August 2019

Redpill Linpro Techblog

Evaluating Local DNSSEC Validators

Domain Name System Security Extensions (DNSSEC) is a technology that uses cryptographic signatures to make the Domain Name System (DNS) tamper-proof, safeguarding against DNS hijacking. If your ISP or network operator cares about your online security, their DNS servers will validate DNSSEC signatures for you. DNSSEC is widely deployed: here in Scandinavia, about 80% of all DNS lookups are subject to DNSSEC validation (source). Wondering whether or not your DNS server validates DNSSEC signatures? www.dnssec-or-not.com ...

Mon 26 Aug 2019, 22:00

Pixelpiloten

Kubernetes in AWS with eksctl - Tutorial

So I’m on a quest, a mission, an exploratory journey to find different ways of installing Kubernetes. I’m on this journey because different hosting providers (be it a cloud provider, a simple VPS provider or on-prem) have different tools that work best with their hosting solution, either built by the provider themselves or by the community, aimed at either a Kubernetes-as-a-service offering or an installation based on standard virtual private servers.

Some of these tools include:

  • Kubespray - Based on Ansible and can be used on everything from Cloud to On prem.
  • Rancher - Support for many different Cloud providers as well as On prem.
  • Eksctl - Official CLI tool for creating AWS EKS cluster using Cloud formation.
  • Terraform - Mainly made for creating infrastructure, with support for many different cloud providers, but it also has support for AWS EKS.

Official CLI tool for EKS

Today I’m gonna focus on the official CLI tool for AWS EKS called eksctl, built in Go by Weaveworks. They have made it open source and you can find their GitHub repository here: https://github.com/weaveworks/eksctl and contribute if you want to.

Requirements

Step 1 - Download your AWS API credentials.

  1. Create a user in AWS for programmatic access that has at least the access policies defined in the Requirements section above, and take note of the API credentials.

  2. Create a file in ~/.aws/credentials with this content and replace the placeholders with Your API credentials.

     [default]
     aws_access_key_id=<YOUR-ACCESS-KEY-ID>
     aws_secret_access_key=<YOUR-SECRET-ACCESS-KEY>
    

Step 2 - Install eksctl

  1. Download eksctl for your OS here: https://github.com/weaveworks/eksctl/releases/tag/latest_release.

  2. Extract and put the eksctl binary somewhere in your $PATH; I put mine in the ~/.local/bin folder on my Linux-based laptop.

Step 3 - Create file that declares your cluster setup.

  1. Create a file called cluster.yaml and copy the content from below; this is a very basic setup but should be enough to test things out.
     apiVersion: eksctl.io/v1alpha5
     kind: ClusterConfig
    
     metadata:
         name: myCluster # Name of your cluster.
         region: eu-north-1 # Region you want to create your cluster and worker nodes in.
         version: "1.13" # Version of Kubernetes you want to install.
    
     nodeGroups:
      - name: myClusterNodeGroup # Name of your node group (basically a template for your worker nodes in CloudFormation).
        labels: { role: workers } # Any kind of labels you want to assign them.
        instanceType: m5.large # The size of your worker nodes.
        desiredCapacity: 1 # How many worker nodes do you want?
        ssh:
          publicKeyPath: ~/.ssh/mySshKey.pub # Location of the local public ssh-key you want to copy to the worker nodes.
    

Step 4 - Create your cluster and access it.

  1. Create the EKS cluster with eksctl using the file you just created.
     $ eksctl create cluster -f cluster.yaml
    
  2. This process should take about 15min and eksctl will add the Kubeconfig information to access your cluster to your ~/.kube/config file, you can also use the flag --kubeconfig ~/.kube/myfolder/config in the above command if you want to write the Kubeconfig to another file instead.

  3. Check that you can access your cluster by listing all your nodes, the master nodes will be excluded here since they are completely managed by AWS.
     $ kubectl get nodes
    

Update your cluster with more nodes.

Let’s say you see that your cluster is starting to run out of resources. You can very easily adjust the desiredCapacity from 1 to 2 in the cluster.yaml file, for example, and then update the cluster with another node by running:

$ eksctl update cluster -f cluster.yaml

Tadaa!

So that’s it, a very easy way of installing Kubernetes on AWS EKS without much hassle. The biggest hurdle is basically getting your IAM configuration right; the rest is pretty straightforward.

PS. You can customize the cluster.yaml with many more settings, look at more examples here: https://github.com/weaveworks/eksctl/tree/master/examples

Mon 26 Aug 2019, 13:00

21 August 2019

Pixelpiloten

Kubernetes in Azure - Tutorial

Many of the cloud providers today provide Kubernetes as a service, where they maintain the Kubernetes nodes much like a managed hosting service.

Some of the Cloud providers selling Kubernetes as a service:

This can be a great introduction to start using Kubernetes, since you don’t have to be an expert in setting up or maintaining a Kubernetes cluster. In fact, the majority of companies using Kubernetes ARE using it this way, since Kubernetes has deep integration with cloud providers for things like load balancing and persistent storage.

I have installed Kubernetes in many different ways, all in the search of the easiest and most cloud-agnostic way of installing it without taking any shortcuts, but that does not mean that I don’t use cloud providers or the Kubernetes as a service that they provide, quite the opposite.

Recently I have been playing around with Azure, and they are one of the cloud providers that offer this service, so let’s create a Kubernetes cluster in Azure in this tutorial :)

Requirements

Step 1

Get Authorization information for Terraform to use.

  1. Login to your Microsoft Azure account with the Azure CLI tool, this will open up a browser window where you login.
     $ az login
    
  2. Go back to your terminal and you should have got an output from the command above looking something like this, copy this somewhere since we will use this later (<REDACTED-STRING> is of course your user’s unique authentication details).
     [
         {
             "cloudName": "AzureCloud",
             "id": "<REDACTED-SUBSCRIPTION-ID>",
             "isDefault": true,
             "name": "Free Trial",
             "state": "Enabled",
             "tenantId": "<REDACTED-TENNANT-ID>",
             "user": {
                 "name": "<REDACTED-USERNAME>",
                 "type": "user"
             }
         }
     ]
    
  3. Create a Service principal in your AD with the Azure CLI tool.
     $ az ad sp create-for-rbac --skip-assignment
    
  4. Copy the appId and password string from the output somewhere (<REDACTED-STRING> is of course your unique authentication details for AD).
      {
          "appId": "<REDACTED-APP-ID>",
         "displayName": "<REDACTED-DISPLAY-NAME>",
         "name": "<REDACTED-NAME>",
         "password": "<REDACTED-PASSWORD>",
         "tenant": "<REDACTED-TENANT>"
     }
    

Step 2

Create the Terraform files.

  1. Create a folder on your computer and navigate to this folder.

  2. Create a file called variables.tf and paste the content below, replace <YOUR-STRING> with the corresponding value you got from the az login and az ad commands in Step 1.1 and Step 1.3 above.
     variable "account_subscription_id" {
         type = "string"
         default = "<YOUR-ACCOUNT-ID>"
     }
    
     variable "account_tennant_id" {
         type = "string"
         default = "<YOUR-TENNANT-ID>"
     }
    
     variable "service_principal_appid" {
         type = "string"
         default = "<YOUR-SERVICE-PRINCIPAL-APP-ID>"
     }
    
     variable "service_principal_password" {
         type = "string"
         default = "<YOUR-SERVICE-PRINCIPAL-PASSWORD>"
     }
    
     variable "node_count" {
         type = "string"
         default = "1" # This is how many worker nodes you will create.
     }
    
  3. Create a file called main.tf and paste the content below
     provider "azurerm" {
         version           = "=1.28.0"
         subscription_id   = "${var.account_subscription_id}"
         tenant_id         = "${var.account_tennant_id}"
     }
    
     resource "azurerm_resource_group" "myresourcegroup" {
         name     = "myresourcegroup"
          location = "North Europe" # Replace with the region that makes sense to you.
     }
    
     resource "azurerm_kubernetes_cluster" "myk8scluster" {
         name                = "myk8scluster"
         location            = "${azurerm_resource_group.myresourcegroup.location}"
         resource_group_name = "${azurerm_resource_group.myresourcegroup.name}"
         dns_prefix          = "myk8scluster"
    
         agent_pool_profile {
             name            = "default"
             count           = "${var.node_count}"
             vm_size         = "Standard_D1_v2" # A 1 vCPU / 3.5gb Memory VM.
             os_type         = "Linux"
             os_disk_size_gb = 30
         }
    
         service_principal {
             client_id     = "${var.service_principal_appid}"
             client_secret = "${var.service_principal_password}"
         }
    
         tags = {
             Environment = "myk8scluster"
         }
     }
    
     output "kube_config" {
         value = "${azurerm_kubernetes_cluster.myk8scluster.kube_config_raw}"
     }
    

Step 3

Create your cluster :)

  1. Init terraform so it can download the cloud provider plugin for Microsoft Azure, run this in the root of the folder you created your Terraform files in.
     $ terraform init
    
  2. Tell Terraform to start creating your cluster, and confirm by writing yes when Terraform asks you for confirmation.
     $ terraform apply
    

Terraform is now going to create a Kubernetes cluster in your Microsoft Azure account. For this one-worker-node setup it will take about 10-15 minutes, and when it is done it will output a Kubeconfig file you can use to authenticate to the cluster.
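
Since the kube_config output defined in main.tf contains the raw Kubeconfig, one way (a sketch; the file name is arbitrary) to save it and point kubectl at it is:

    $ terraform output kube_config > azurek8s-kubeconfig.yaml

    $ export KUBECONFIG=$(pwd)/azurek8s-kubeconfig.yaml

    $ kubectl get nodes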

Access the Kubernetes Dashboard

When you install Kubernetes with Azure’s Kubernetes as a service you get the Kubernetes Dashboard installed automatically, and the Azure CLI tool makes accessing it a breeze.

Use this command to access the Kubernetes Dashboard.

$ az aks browse --resource-group <NAME-OF-YOUR-RESOURCE-GROUP> --name <NAME-OF-YOUR-CLUSTER>

This command will do a port forward to your Kubernetes dashboard service and open up a Browser window to that url.

Update your cluster?

Let’s say you want to increase the number of worker nodes in your cluster. To do this you can just change the node_count in your variables.tf file, like this:

variable "node_count" {
    type = "string"
    default = "2" # Increase to 2 worker nodes.
}

And then just run Terraform again and it will create that second worker node for you.

$ terraform apply

Delete your cluster?

Just run this command and Terraform will delete the cluster you created.

$ terraform destroy

Wed 21 Aug 2019, 11:45

05 August 2019

Redpill Linpro Techblog

A rack switch removal ordeal

I recently needed to remove a couple of decommissioned switches from one of our data centres. This turned out to be quite an ordeal. The reason? The ill-conceived way the rack mount brackets used by most data centre switches are designed. In this post, I will use plenty of pictures to explain why that is, and propose a simple solution on how the switch manufacturers can improve this in future.

Rack switch mounting 101

Mon 05 Aug 2019, 22:00

03 August 2019

Pixelpiloten

K3S - Tutorial

In this tutorial I will go through the steps I took to set up K3S to host this blog on it. The server we will be using is a bare Ubuntu 18.04 Linux server with at least 1024 MB of memory.

What will we do in this Tutorial?

  • Install docker on our server.
  • Install a 1 node Kubernetes cluster.
  • Fetch the Kubeconfig file content to be able to use Kubectl from our local Machine.
  • Install Tiller so we can deploy Helm charts to our cluster.
  • Install Cert manager so we can use that in combination with Traefik for automatic SSL certificate generation for our Kubernetes ingress resources.

Step 1 - Install Docker

SSH into the server you plan to install K3S on.

  1. Update your apt index.
    $ sudo apt-get update
    
  2. Install the packages needed so apt can fetch packages over HTTPS.
    $ sudo apt-get install \
     apt-transport-https \
     ca-certificates \
     curl \
     gnupg-agent \
     software-properties-common
    
  3. Add the Docker GPG key.
    $ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
    
  4. Add the apt repository for Docker.
    $ sudo add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) \
    stable"
    
  5. Update your apt index again to be able to install docker from the repository we just added.
    $ sudo apt-get update
    
  6. Install docker
    $ sudo apt-get install docker-ce docker-ce-cli containerd.io
    

Step 2 - Install K3S

  1. Download and run the K3S install bash script.
    $ curl -sfL https://get.k3s.io | sh -
    
  2. Wait until script is done (about 30 seconds) and run this command to check if your 1 node cluster is up.
    $ k3s kubectl get node
    
  3. Copy the file contents of the Kubeconfig from /etc/rancher/k3s/k3s.yaml and paste them into the ~/.kube/config file on your local machine (example of the contents below; the <REDACTED> values are unique to your cluster).
    apiVersion: v1
    clusters:
    - cluster:
         certificate-authority-data: <REDACTED>
         server: https://localhost:6443 # This needs to be changed.
      name: default
    contexts:
    - context:
         cluster: default
         user: default
      name: default
    current-context: default
    kind: Config
    preferences: {}
    users:
    - name: default
      user:
         password: <REDACTED>
         username: <REDACTED>
    
  4. On your local machine: Change the server value to a public-facing IP or hostname for your server in the ~/.kube/config file.
  5. On your local machine: Set the KUBECONFIG variable so you can talk to your Kubernetes cluster with Kubectl
    $ export KUBECONFIG=~/.kube/config
    
  6. On your local machine: Check that you can reach your Kubernetes cluster with Kubectl
    $ kubectl get nodes
    NAME       STATUS   ROLES    AGE   VERSION
    pixkube1   Ready    master   10d   v1.14.4-k3s.1
    

Step 3 - Install Helm

  1. First we need to make sure Tiller (the server part of Helm) has a ServiceAccount it can use, and give it enough permissions; in this example I give it cluster-admin permissions. Copy and paste this into a file on your local machine and call it something like serviceaccount-tiller.yaml.
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: tiller
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: tiller
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
     subjects:
       - kind: ServiceAccount
         name: tiller
         namespace: kube-system
    
  2. Create the Tiller ServiceAccount.
    $ kubectl apply -f serviceaccount-tiller.yaml
    
  3. Download the latest release of Helm from https://github.com/helm/helm/releases for your OS and put it in your $PATH so you can execute it from anywhere, like /usr/local/bin/helm if you are on linux.
  4. Init helm using the Tiller ServiceAccount.
    $ helm init --service-account tiller
    

Step 4 - Install Cert manager

  1. Download the CustomResourceDefinition yaml file from https://raw.githubusercontent.com/jetstack/cert-manager/release-0.9/deploy/manifests/00-crds.yaml
  2. Apply the CustomResourceDefinition.
    $ kubectl apply -f 00-crds.yaml
    
  3. Add the Jetstack Helm chart repository (the gang behind Cert manager)
    $ helm repo add jetstack https://charts.jetstack.io
    
  4. Install the Cert manager Helm chart.
    $ helm install --name cert-manager --namespace cert-manager jetstack/cert-manager
    
  5. Add the Cert manager TLS Issuer, basically some config that will identify you at Let's Encrypt and a reference to a secret your Ingress will use to get the cert.
     apiVersion: certmanager.k8s.io/v1alpha1
     kind: Issuer
     metadata:
       name: letsencrypt-prod
       namespace: pixelpiloten-blog
     spec:
       acme:
         # The ACME server URL
         server: https://acme-v02.api.letsencrypt.org/directory
         # Email address used for ACME registration, update to your own.
         email: <REDACTED>
         # Name of a secret used to store the ACME account private key
         privateKeySecretRef:
           name: letsencrypt-prod
         # Enable the HTTP-01 challenge provider
         http01: {}
    

Final step - Your Ingress

Add the annotations for Traefik in your ingress so Traefik can see them and add TLS to your domain/subdomain. In this example I also redirect all HTTP requests to HTTPS.

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: pixelpiloten-blog-ingress
  namespace: pixelpiloten-blog
  annotations:
      kubernetes.io/ingress.class: "traefik"
      certmanager.k8s.io/issuer: "letsencrypt-prod"
      certmanager.k8s.io/acme-challenge-type: http01
      traefik.ingress.kubernetes.io/redirect-entry-point: https
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: pixelpiloten-blog-service
          servicePort: 80
  tls:
    - hosts:
      - www.pixelpiloten.se
      secretName: pixelpiloten-se-cert

Tadaa!

You have now set up K3S on a server with Helm and Cert manager on it for automatic TLS certificates with Lets encrypt and Traefik. Now go build your application :) Fuck yeah

Sat 03 Aug 2019, 08:00

01 August 2019

Pixelpiloten

K3S - Kubernetes on the cheap

Kubernetes - Will cost you

One of the caveats you run into when you want to run Kubernetes is that you need a lot of computing power (mostly memory) to run it, and you’ll need to “rent” more than one VPS from your Cloud provider (Amazon AWS, Google Cloud, Azure etc.) or wherever you get your servers.

A minimum Kubernetes cluster would look something like this, although I would recommend running the master and etcd on separate nodes:

  • 1 Master & Etcd node (needs at least 8gb of memory)
  • 1 Worker node (depends on the workload you deploy to it but at least 4gb memory)

(A node is the same thing as a server in Kubernetes.)

So the cost for this would be around 100-150 US dollars per month, and if you are in the first steps in deploying applications to run on Kubernetes that can be a bit much.

K3S - Will cost you (much) less

K3S is often described as a lightweight Kubernetes distribution. It is built by the people behind Rancher and is just that: a slimmed-down Kubernetes where the Rancher team has removed things like:

  • Legacy, alpha, non-default features.
  • Most in-tree plugins (cloud providers and storage plugins).
  • etcd3, in favor of sqlite3 as the default storage mechanism.

These changes and more make the footprint of K3S much, much smaller, and they have also made K3S available as a single binary, so you don’t have to be an expert in installing Kubernetes to get started.

Easy to install. A binary of less than 40 MB. Only 512 MB of RAM required to run.

They have also built in support for IoT devices running ARM CPUs, so you can for instance run K3S on a Raspberry Pi. Another great thing is that they have included Traefik as the default Ingress controller, which has tons of annotations you can use in your Ingress definitions. Epic win!

And in fact, this very blog you are reading is running on K3S on a cheap VPS from Hetzner, and the next article will be a tutorial on how I set this up together with Cert manager for automatic SSL certificate generation.

Thu 01 Aug 2019, 00:48

28 July 2019

Tore Anderson

Validating SSH host keys with DNSSEC

(Note: this is a repost of an article from the Redpill Linpro techblog.)

We have all done it. When SSH asks us this familiar question:

$ ssh redpilllinpro01.ring.nlnog.net
The authenticity of host 'redpilllinpro01.ring.nlnog.net (2a02:c0:200:104::1)' can't be established.
ECDSA key fingerprint is SHA256:IM/o2Qakw4q7vo9dBMLKuKAMioA7UeJSoVhfc5CYsCs.
Are you sure you want to continue connecting (yes/no/[fingerprint])?

…we just answer yes - without bothering to verify the fingerprint shown.

Many of us will even automate answering yes to this question by adding StrictHostKeyChecking accept-new to our ~/.ssh/config file.

Sometimes, SSH will be more ominous:

$ ssh redpilllinpro01.ring.nlnog.net
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
SHA256:IM/o2Qakw4q7vo9dBMLKuKAMioA7UeJSoVhfc5CYsCs.
Please contact your system administrator.
Add correct host key in /home/tore/.ssh/known_hosts to get rid of this message.
Offending ECDSA key in /home/tore/.ssh/known_hosts:448
ECDSA host key for redpilllinpro01.ring.nlnog.net has changed and you have requested strict checking.
Host key verification failed.

This might make us stop a bit and ask ourselves: «Has a colleague re-provisioned this node since the last time I logged in to it?»

Most of the time, the answer will be: «Yeah, probably», followed by something like sed -i 448d ~/.ssh/known_hosts to get rid of the old offending key. Problem solved!

These are all very understandable and human ways of dealing with these kinds of repeated questions and warnings. SSH certainly does «cry wolf» a lot! Let us not think too much about what happens that one time someone actually is «DOING SOMETHING NASTY», though…

Another challenge occurs when maintaining a large number of servers using automation software like Ansible. Manually answering questions about host keys might be impossible, as the automation software likely needs to run entirely without human interaction. The cop-out way of ensuring it can do so is to disable host key checking altogether, e.g., by adding StrictHostKeyChecking no to the ~/.ssh/config file.

DNSSEC-validated SSH host key fingerprints in DNS

Fortunately a better way of securely verifying SSH host keys exists - one which does not require lazy and error-prone humans to do all the work.

This is accomplished by combining DNS Security Extensions (DNSSEC) with SSHFP resource records.

To make use of this approach, you will need the following:

  1. The SSH host keys published in DNS using SSHFP resource records
  2. Valid DNSSEC signatures on the SSHFP resource records
  3. A DNS recursive resolver which supports DNSSEC
  4. A stub resolver that is configured to request DNSSEC validation
  5. An SSH client that is configured to look for SSH host keys in DNS

I will elaborate on how to implement each of these requirements in the sections below.

1. Publishing SSHFP host keys in DNS

The ssh-keygen utility provides an easy way to generate the correct SSHFP resource records based on contents of the /etc/ssh/ssh_host_*_key.pub files. Run it on the server like so:

$ ssh-keygen -r $(hostname --fqdn).
redpilllinpro01.ring.nlnog.net. IN SSHFP 1 1 5fca087a7c3ebebbc89b229a05afd450d08cf9b3
redpilllinpro01.ring.nlnog.net. IN SSHFP 1 2 cdb4cdaf7734df343fd567e0cab92fd6ac5f2754bfef797826dfd4bcf90f0baf
redpilllinpro01.ring.nlnog.net. IN SSHFP 2 1 613f389a36cf33b67d9bd69e381785b275e101cd
redpilllinpro01.ring.nlnog.net. IN SSHFP 2 2 8a07b97b96d826a7d4d403424b97a8ccdb77105b527be7d7be835d02fdb9cd58
redpilllinpro01.ring.nlnog.net. IN SSHFP 3 1 3e46cecd986042e50626575231a4a155cb0ee5ca
redpilllinpro01.ring.nlnog.net. IN SSHFP 3 2 20cfe8d906a4c38abbbe8f5d04c2cab8a00c8a803b51e252a1585f739098b02b

These entries can be copied and pasted directly into the zone file in question so that they are visible in DNS:

$ dig +short redpilllinpro01.ring.nlnog.net. IN SSHFP | sort
1 1 5FCA087A7C3EBEBBC89B229A05AFD450D08CF9B3
1 2 CDB4CDAF7734DF343FD567E0CAB92FD6AC5F2754BFEF797826DFD4BC F90F0BAF
2 1 613F389A36CF33B67D9BD69E381785B275E101CD
2 2 8A07B97B96D826A7D4D403424B97A8CCDB77105B527BE7D7BE835D02 FDB9CD58
3 1 3E46CECD986042E50626575231A4A155CB0EE5CA
3 2 20CFE8D906A4C38ABBBE8F5D04C2CAB8A00C8A803B51E252A1585F73 9098B02B

How to automatically update the SSHFP records in DNS when a node is being provisioned is left as an exercise for the reader, but one nifty little trick is to run something like ssh-keygen -r "update add $(hostname --fqdn). 3600". This produces output that can be piped directly into nsupdate(1).
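
As a hedged sketch of that trick (the TSIG key file path is hypothetical, and your zone must of course allow dynamic updates signed with that key):

# The key file path is an assumption - point it at your own TSIG key.
$ { ssh-keygen -r "update add $(hostname --fqdn). 3600"; echo send; } | \
      nsupdate -k /etc/dns/sshfp-update.key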

If you for some reason can not run ssh-keygen on the server, you can also use a tool called sshfp. This tool will take the entries from ~/.ssh/known_hosts (i.e., those you have manually accepted earlier) and convert them to SSHFP syntax.

2. Ensuring the DNS records are signed with DNSSEC

DNSSEC signing of the data in a DNS zone is a task that is usually performed by the DNS hosting provider, so normally you would not need to do this yourself.

There are several web sites that will verify that DNSSEC signatures exist and validate for any given host name. The two best known are:

  • DNSViz
  • DNSSEC Debugger

If DNSViz shows that everything is «secure» in the left column (example) and the DNSSEC Debugger only shows green ticks (example), your DNS records are correctly signed and the SSH client should consider them secure for the purposes of SSHFP validation.

If DNSViz and the DNSSEC Debugger give you a different result, you will most likely have to contact your DNS hosting provider and ask them to sign your zones with DNSSEC.

3. A recursive resolver that supports DNSSEC

The recursive resolver used by your system must be capable of validating DNSSEC signatures. This can be verified like so:

$ dig redpilllinpro01.ring.nlnog.net. IN SSHFP +dnssec
[...]
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 7, AUTHORITY: 0, ADDITIONAL: 1
[...]

Look for the ad flag («Authenticated Data») in the answer. If present, it means that the DNS server confirms that the supplied answer has a valid DNSSEC signature and is secure.

If the ad flag is missing when querying a hostname known to have valid DNSSEC signatures (e.g., redpilllinpro01.ring.nlnog.net), your DNS server is probably not DNSSEC capable. You can either ask your ISP or IT department to fix that, or change your system to use a public DNS server known to be DNSSEC capable.

Cloudflare’s 1.1.1.1 is one well-known example of a public recursive resolver that supports DNSSEC. To change to it, replace any pre-existing nameserver lines in /etc/resolv.conf with the following:

nameserver 1.1.1.1
nameserver 2606:4700:4700::1111
nameserver 1.0.0.1
nameserver 2606:4700:4700::1001

4. Configuring the system stub resolver to request DNSSEC validation

By default, the system stub resolver (part of the C library) does not set the DO («DNSSEC OK») bit in outgoing queries. This prevents DNSSEC validation.

DNSSEC is enabled in the stub resolver by enabling EDNS0. This is done by adding the following line to /etc/resolv.conf:

options edns0
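
Putting steps 3 and 4 together, a complete /etc/resolv.conf using the Cloudflare resolvers from above would look like this:

nameserver 1.1.1.1
nameserver 2606:4700:4700::1111
nameserver 1.0.0.1
nameserver 2606:4700:4700::1001
options edns0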

5. Configuring the SSH client to look for host keys in DNS

Easy peasy: either you can add the line VerifyHostKeyDNS yes to your ~/.ssh/config file, or you can supply it on the command line using ssh -o VerifyHostKeyDNS=yes.
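
For reference, the corresponding ~/.ssh/config stanza could look like this (the Host * scope is just an example - you can limit it to the hosts you care about):

# ~/.ssh/config
Host *
    VerifyHostKeyDNS yes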

Verifying that it works

If you have successfully implemented steps 1-5 above, we are ready for a test!

If you have only done steps 3-5, you can still test using redpilllinpro01.ring.nlnog.net (or any other node in the NLNOG RING for that matter). The NLNOG RING nodes will respond to SSH connection attempts from everywhere, and they all have DNSSEC-signed SSHFP records registered.

$ ssh -o UserKnownHostsFile=/dev/null -o VerifyHostKeyDNS=yes no-such-user@redpilllinpro01.ring.nlnog.net
no-such-user@redpilllinpro01.ring.nlnog.net: Permission denied (publickey).

Ignore the fact that the login attempt failed with «permission denied» - this test was a complete success, as the SSH client did not ask to manually verify the SSH host key.

UserKnownHostsFile=/dev/null was used to ensure that any host keys manually added to ~/.ssh/known_hosts at an earlier point in time would be ignored and not skew the test.

It is worth noting that SSH does not add host keys verified using SSHFP records to the ~/.ssh/known_hosts file - it will validate the SSHFP records every time you connect. This ensures that even if the host keys change, e.g., due to the server being re-provisioned, the ominous «IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY» warning will not appear - provided the SSHFP records in DNS have been updated, of course.

Trusting the recursive resolver

The setup discussed in this post places implicit trust in the recursive resolver used by the system. That is, you will be trusting it to diligently validate any DNSSEC signatures on the responses it gives you, and to only set the «Authenticated Data» flag if those signatures are truly valid.

You are also placing trust in the network path between the host and the recursive resolver. If the network is under control by a malicious party, the DNS queries sent from your host to the recursive resolver could potentially be hijacked and redirected to a rogue recursive resolver.

This means that an attacker with the capability to hijack or otherwise interfere with both your SSH and DNS traffic could potentially set up a fraudulent SSH server for you to connect to, and make your recursive resolver lie about the SSH host keys being correct and valid according to DNSSEC. The SSH client will not be able to detect this situation on its own.

In order to detect such attacks, it is necessary for your host to double-check the validity of answers received from the recursive resolver by performing local DNSSEC validation. How to set this up will be the subject of a future post here on the Redpill Linpro techblog. Stay tuned!

Sun 28 Jul 2019, 00:00

05 May 2019

Redpill Linpro Techblog

Validating SSH host keys with DNSSEC

We have all done it. When SSH asks us this familiar question:

$ ssh redpilllinpro01.ring.nlnog.net
The authenticity of host 'redpilllinpro01.ring.nlnog.net (2a02:c0:200:104::1)' can't be established.
ECDSA key fingerprint is SHA256:IM/o2Qakw4q7vo9dBMLKuKAMioA7UeJSoVhfc5CYsCs.
Are you sure you want to continue connecting (yes/no/[fingerprint])?

…we just answer yes - without bothering to verify the fingerprint shown.

Many of us will even automate answering yes to this question by adding StrictHostKeyChecking accept-new to our ~/.ssh/config ...

Sun 05 May 2019, 22:00

03 April 2019

Redpill Linpro Techblog

Single node Kubernetes setup

These are essentially my notes on setting up a single-node Kubernetes cluster at home. Every time I set up an instance I have to dig through lots of posts, articles and documentation, much of it contradictory or out-of-date. Hopefully this distilled and much-abridged version will be helpful to someone else.

...

Wed 03 Apr 2019, 22:00

02 April 2019

Magnus Hagander

When a vulnerability is not a vulnerability

Recently, references to a "new PostgreSQL vulnerability" have been circulating on social media (and maybe elsewhere). It's even got its own CVE entry. The origin appears to be a blog post from Trustwave.

So is this actually a vulnerability? (Hint: it's not) Let's see:

by nospam@hagander.net (Magnus Hagander) at Tue 02 Apr 2019, 19:39

24 March 2019

Redpill Linpro Techblog

Configure Alfresco 5.2.x with SAML 2.0

In our project, we have successfully implemented SAML (Security Assertion Markup Language) 2.0 with our Alfresco Content Service v5.2.0. We use AD (Active Directory) to sync users and groups into the Alfresco system.

...

Sun 24 Mar 2019, 23:00

21 March 2019

Ingvar Hagelund

Packages of varnish-6.2.0 with matching vmods, for el6 and el7

The Varnish Cache project recently released a new upstream version 6.2 of Varnish Cache. I updated the fedora rawhide package yesterday. I have also built a copr repo with varnish packages for el6 and el7 based on the fedora package. A snapshot of matching varnish-modules (based on Nils Goroll’s branch) is also available.

Packages are available at https://copr.fedorainfracloud.org/coprs/ingvar/varnish62/.

vmods included in varnish-modules:
vmod-bodyaccess
vmod-cookie
vmod-header
vmod-saintmode
vmod-tcp
vmod-var
vmod-vsthrottle
vmod-xkey

by ingvar at Thu 21 Mar 2019, 08:29

08 March 2019

Bjørn Ruberg

Perfectly synchronized dual portscanning

The other day while reviewing my fireplot graphs, I noticed (yet) another portscan. They’re not unusual. This one took around four and a half hours to complete, and covered a lot of TCP ports on one IPv4 address. That’s not unusual either. The curved graph shown below is caused by the plot’s logarithmic Y axis, […]

by bjorn at Fri 08 Mar 2019, 19:08

05 March 2019

Bjørn Ruberg

Honeypot intruders’ HTTP activity

One of my Cowrie honeypots has been configured to intercept various outbound connections, redirecting them into an INetSim honeypot offering corresponding services. When intruders think they’re making an outbound HTTPS connection, they only reach the INetSim server, where their attempts are registered and logged. When someone successfully logs in to the Cowrie honeypot, be it […]

by bjorn at Tue 05 Mar 2019, 08:35

02 March 2019

Bjørn Ruberg

Nagios or Icinga plugin for Mikrotik software and firmware version

When upgrading the software (RouterOS) on Mikrotik devices, you should usually also make sure the firmware (RouterBoot) is upgraded to the same level. In the devices’ various management interfaces including command line, the OS will tell you that there are outstanding firmware patches if you ask it, like this: /system routerboard print routerboard: yes current-firmware: […]

by bjorn at Sat 02 Mar 2019, 16:34

01 March 2019

Ingvar Hagelund

Updated packages of varnish-4.1.11 with matching vmods, for el6 and el7

Recently, the Varnish Cache project released an updated upstream version 4.1.11 of Varnish Cache. This is a maintenance and stability release of varnish 4.1, which you may consider as the former “LTS” branch of varnish. I have updated my varnish 4.1 copr repo with packages for el6 and el7. A selection of matching vmods is also included in the copr repo.

Packages are available at https://copr.fedorainfracloud.org/coprs/ingvar/varnish41/

The following vmods are available:

Included in varnish-modules:
vmod_bodyaccess
vmod_cookie
vmod_header
vmod_saintmode
vmod_softpurge
vmod_tcp
vmod_var
vmod_vsthrottle
vmod_xkey

Packaged separately:
vmod-curl
vmod-digest
vmod-geoip
vmod-memcached
vmod-rfc6052
vmod-rtstatus
vmod-uuid
vmod-vslp

And varnish-agent is also thrown in.

Please test and report bugs. If there is enough interest, I may consider pushing these to fedora as well.

Varnish Cache is a powerful and feature rich front side web cache. It is also very fast, and that is, fast as in powered by The Dark Side of the Force. On steroids. And it is Free Software.

Redpill Linpro is the market leader for professional Open Source and Free Software solutions in the Nordics, though we have customers from all over. For professional managed services, all the way from small web apps, to massive IPv4/IPv6 multi data center media hosting, and everything through container solutions, in-house, data center, and cloud, contact us at www.redpill-linpro.com.

by ingvar at Fri 01 Mar 2019, 08:17

24 December 2018

Ingvar Hagelund

Tolkien’s fan service (J.R.R. Tolkien: The Lord of the Rings)

I read Tolkien’s “Canon”, that is, The Hobbit, The Lord of the Rings, and The Silmarillion, every year about Christmas. So also this year.

When I read through the first chapter of The Fellowship of the Ring again, I stumbled over all those small things that remind us of The Hobbit. Going through them more systematically, it is clear that Tolkien started out wanting to create a sequel, and he uses a lot of small details to bind the first chapters of the new book closely to the previous one.

It starts with the title, A long expected party, of course closely mimicking The Hobbit’s first chapter, An unexpected party. During Bilbo’s feast, Gandalf shows off his firework display, as he did at the Old Took’s parties a long time ago, according to The Hobbit. The firework elements themselves echo parts of the story of The Hobbit: the trees of Greenwood the Great (or Mirkwood if you like), complete with butterflies. Then there are the eagles, a thunderstorm, an embattled army of elves with silver spears, and of course, the mountain and the dragon as the grand finale. Then Bilbo holds his speech, reminding the bored guests about his coming to Esgaroth on his 50th birthday, before he makes his special exit.

After Bilbo has disappeared in a flash and a bang, leaving 144 flabbergasted guests back in the pavilion, we follow him and Gandalf back into Bag End. Here we see him pulling out his old treasures from The Hobbit: his sword Sting, the green cloak and hood that he borrowed from Dwalin (rather too large for him), and of course his journey’s diary, the actual Hobbit book itself, nicely written into the story, and, as he tells Gandalf, he has written an end for it: “And he lived happily ever after, to the end of his days”, which is how the book actually ends. Gandalf reminds Bilbo about the will – the contract with Frodo if you like – which should be put in the same place as Bilbo found his own contract 77 years earlier: by the clock on the mantelpiece. He then sets out with dwarves, again.

At Crickhollow, the evening before the hobbits set out together, Merry and Pippin have made a song mimicking the one the Dwarves sang before Thorin and company set out. Out on the road, Frodo and his merry followers visit a tavern, like Thorin’s travelling party is said to have done too. They enter the wilder region, and Frodo and company see the hills with old ruins on them, just like Bilbo did. After crossing the same stone bridge, they even discover the trolls that Gandalf tricked into staying out until the dawn turned them to stone. Finally, the second book of the Fellowship starts with a rest in Elrond’s house, just as Bilbo’s journey did.

Tolkien’s eye for details gives the fans of The Hobbit great value for their money, and a world full of small well-known nuggets to get comfortable before the quest takes off into the parts of Middle-Earth where they have not travelled before.

Are there more hints of the Hobbit in The Fellowship of the Ring than those listed here? I probably missed a lot of them.

by ingvar at Mon 24 Dec 2018, 10:54

01 December 2018

Tore Anderson

Enabling IPv6 on the Huawei ME906s-158 / HP lt4132

Last year I wrote a post about my difficulties getting IPv6 to work with the Huawei ME906s-158 WWAN module. I eventually gave up and had it replaced with a module from another manufacturer.

Not long ago, though, I received an e-mail from another ME906s-158 owner who told me that for him, IPv6 worked just fine. That motivated me to brush the dust off my module and try again. This time, I figured it out! Read on for the details.

The Carrier PLMN List

The ME906s-158 comes with a built-in list of nine different PLMN profiles. This list can be managed with the proprietary AT command AT^PLMNLIST, which is documented on page 209 of the module’s AT Command Interface Specification.

To interact with the AT command interface, use the option driver. More details on that here.
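
For example, once the option driver has claimed the module, the AT command interface typically shows up as one of several /dev/ttyUSB* nodes, and any serial terminal program will do. A sketch - the exact device node depends on your system:

# /dev/ttyUSB2 is an assumption - check dmesg for the AT port on your system.
$ picocom -b 115200 /dev/ttyUSB2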

This is the complete factory default list:

AT^PLMNLIST?

^PLMNLIST: "00000",00000,23106,26207,23802,23806
^PLMNLIST: "20205",26801,20205,26202,26209,27201,27402,50503,54201,53001,40401,40405,40411,40413,40415,40420,40427,40430,40443,40446,40460,40484,40486,40488,40566,40567,405750,405751,405752,405753,405754,405755,405756,20404,20601,20810,21401,21670,22210,22601,23003,23415,24405,24802,27602,27801,28001,28602,28802,29340,42702,60202,62002,63001,63902,64004,64304,65101,65501,90128,23201,28401,64710,46601,42602,22005,41302,29403,50213,50219,21910,25001,27077,52505,23801,40004,42403,46692,52503,73001,24602,24705
^PLMNLIST: "26201",26201,23001,20416,23203,23207,21901,21630,23102,29702,29401,26002,20201,23431,23432
^PLMNLIST: "21403",20610,20801,20802,21403,22610,23101,23430,23433,26803,26003
^PLMNLIST: "50501",50501,50571,50572
^PLMNLIST: "22801",22801,29501
^PLMNLIST: "21407",21405,21407,23402
^PLMNLIST: "99999",24491,24001,23820
^PLMNLIST: "50502",50502 

OK

Each ^PLMNLIST: line represents a single pre-defined PLMN profile, identified by the first MCCMNC number (in double quotes). The "26201" profile is for Deutsche Telekom, the "50501" profile is for Telstra Mobile, and so on.

The rest of the numbers on each line is a list of MCCMNCs that will use that particular profile. For example, if you have a SIM card issued by T-Mobile Netherlands (MCCMNC 20416), then the ME906s-158 will apply the Deutsche Telekom profile ("26201").

Unfortunately, the documentation offers no information on how the PLMN profiles differ and how they change the way the module works.

My provider (Telenor Norway, MCCMNC 24201) is not present in the factory default list. In that case, the module appears to use the "00000" PLMN profile («Generic») as the default, and that one disables IPv6! Clearly, Huawei haven’t read RFC 6540.

In any case, this explains why I failed to make IPv6 work last year, and why it worked fine for the guy who mailed me - his provider was among those that used the Deutsche Telekom PLMN profile by default.

Modifying the Carrier PLMN List

The solution seems clear: I need to add my provider’s MCCMNC to an IPv6-capable PLMN profile. The Deutsche Telekom one would probably work, but "99999" («Generic(IPV4V6)») seems like an even more appropriate choice.

Adding MCCMNCs is done with AT^PLMNLIST=1,"$PLMNProfile","$MCCMNC", like so:

AT^PLMNLIST=1,"99999","24201"

OK

(To remove an MCCMNC, use AT^PLMNLIST=0,"$PLMNProfile","$MCCMNC" instead.)

I can now double check that the "99999" PLMN profile includes 24201 for Telenor Norway:

AT^PLMNLIST="99999"

^PLMNLIST: "99999",24491,24001,23820,24201

OK

To make the new configuration take effect, it appears to be necessary to reset the module. This can be done with the AT^RESET command.

Confirming that IPv6 works

It is possible to query the module directly about IPv6 support in at least two different ways:

AT^IPV6CAP?

^IPV6CAP: 7

OK

AT+CGDCONT=?

+CGDCONT: (0-11),"IP",,,(0-2),(0-3),(0,1),(0,1),(0-2),(0,1)
+CGDCONT: (0-11),"IPV6",,,(0-2),(0-3),(0,1),(0,1),(0-2),(0,1)
+CGDCONT: (0-11),"IPV4V6",,,(0-2),(0-3),(0,1),(0,1),(0-2),(0,1)

OK

The ^IPV6CAP: 7 response means «IPv4 only, IPv6 only and IPv4v6» (cf. page 336 of the AT Command Interface Specification), and the +CGDCONT: responses reveal that the modem is ready to configure PDP contexts using the IPv6-capable IP types. Looking good!

Of course, the only test that really matters is to connect it:

$ mmcli --modem 0 --simple-connect=apn=telenor.smart,ip-type=ipv4v6
successfully connected the modem
$ mmcli --bearer 0                                                                                            
Bearer '/org/freedesktop/ModemManager1/Bearer/0'
  -------------------------
  Status             |   connected: 'yes'
                     |   suspended: 'no'
                     |   interface: 'wwp0s20f0u3c3'
                     |  IP timeout: '20'
  -------------------------
  Properties         |         apn: 'telenor.smart'
                     |     roaming: 'allowed'
                     |     IP type: 'ipv4v6'
                     |        user: 'none'
                     |    password: 'none'
                     |      number: 'none'
                     | Rm protocol: 'unknown'
  -------------------------
  IPv4 configuration |   method: 'static'
                     |  address: '10.169.198.77'
                     |   prefix: '30'
                     |  gateway: '10.169.198.78'
                     |      DNS: '193.213.112.4', '130.67.15.198'
                     |      MTU: '1500'
  -------------------------
  IPv6 configuration |   method: 'static'
                     |  address: '2a02:2121:2c4:7105:5a2c:80ff:fe13:9208'
                     |   prefix: '64'
                     |  gateway: '::'
                     |      DNS: '2001:4600:4:fff::52', '2001:4600:4:1fff::52'
                     |      MTU: '1500'
  -------------------------
  Stats              |          Duration: '59'
                     |    Bytes received: 'N/A'
                     | Bytes transmitted: 'N/A'

Success!

Sat 01 Dec 2018, 00:00

28 November 2018

Ingvar Hagelund

Updated packages of varnish-6.0.2 matching vmods, for el6 and el7

Recently, the Varnish Cache project released an updated upstream version 6.0.2 of Varnish Cache. This is a maintenance and stability release of varnish 6.0, which you may consider as the current “LTS” branch of varnish. I have updated the fedora rawhide package, and also updated the varnish 6.0 copr repo with packages for el6 and el7 based on the fedora package. A selection of matching vmods is also included in the copr repo.

Packages are available at https://copr.fedorainfracloud.org/coprs/ingvar/varnish60/

The following vmods are available:

Included in varnish-modules:
vmod-bodyaccess
vmod-cookie
vmod-header
vmod-saintmode
vmod-tcp
vmod-var
vmod-vsthrottle
vmod-xkey

Packaged separately:
vmod-curl
vmod-digest
vmod-geoip
vmod-memcached
vmod-querystring
vmod-uuid

Please test and report bugs. If there is enough interest, I may consider pushing these to fedora as well.

Varnish Cache is a powerful and feature rich front side web cache. It is also very fast, and that is, fast as in powered by The Dark Side of the Force. On steroids. And it is Free Software.

Redpill Linpro is the market leader for professional Open Source and Free Software solutions in the Nordics, though we have customers from all over. For professional managed services, all the way from small web apps, to massive IPv4/IPv6 multi data center media hosting, and everything through container solutions, in-house, data center, and cloud, contact us at www.redpill-linpro.com.

by ingvar at Wed 28 Nov 2018, 10:02

19 November 2018

Magnus Hagander

PGConf.EU 2018 - the biggest one yet!

It's now almost a month since PGConf.EU 2018 in Lisbon. PGConf.EU 2018 was the biggest PGConf.EU ever, and as far as I know the biggest PostgreSQL community conference in the world! So it's time to share some of the statistics and feedback.

I'll start with some attendee statistics:

  • 451 registered attendees
  • 2 no-shows
  • 449 attendees actually present

Of these 451 registrations, 47 were sponsor tickets, some of which were used by the sponsors themselves, and some were given away to their customers and partners. Another 4 sponsor tickets went unused.

Another 52 were speakers.

This year we had more cancellations than usual, but thanks to having a waitlist for the conference we managed to re-fill all those spaces before the event started.

by nospam@hagander.net (Magnus Hagander) at Mon 19 Nov 2018, 20:01