Planet Redpill Linpro

02 September 2016

Ingvar Hagelund

IPv6: clatd, a component of 464XLAT, for Fedora and EPEL

The world is running out of IPv4 addresses, but luckily IPv6 is here now, and running a whole data center on IPv6 only is not just possible, it’s becoming the standard. But what if you have an app, a daemon, or a container that actually needs IPv4 connectivity? Then you may use 464XLAT to provide an IPv4 tunnel through your IPv6-only infrastructure. clatd is one component of 464XLAT.

clatd is a CLAT / SIIT-DC Edge Relay implementation for Linux. From the project’s own description on GitHub:

clatd implements the CLAT component of the 464XLAT network architecture specified in RFC 6877. It allows an IPv6-only host to have IPv4 connectivity that is translated to IPv6 before being routed to an upstream PLAT (which is typically a Stateful NAT64 operated by the ISP) and there translated back to IPv4 before being routed to the IPv4 internet. This is especially useful when local applications on the host requires actual IPv4 connectivity or cannot make use of DNS64 (…) clatd may also be used to implement an SIIT-DC Edge Relay as described in RFC 7756.

Note that clatd relies on Tayga for the actual translation of packets between IPv4 and IPv6.

Yesterday, I pushed clatd to fedora testing and epel testing. Please test and report feedback through Bugzilla.
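
If you want to help out, testing is straightforward. A sketch for Fedora (on EPEL the repo to enable would be epel-testing, and the systemd unit name assumes one is shipped in the package):

# Pull the package from updates-testing
~# dnf --enablerepo=updates-testing install clatd
# clatd can discover the PLAT prefix through DNS64 (RFC 7050) if none is configured
~# systemctl start clatd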

For more information on clatd, see the documentation included in the package, or the clatd github home. For more info on Tayga, visit the Tayga home page.

For general information about the process of transitioning to the bright future of IPv6, there is plenty of good introductory material available online.

Redpill Linpro is the market leader for professional Open Source and Free Software solutions in the Nordics, though we have customers from all over. For professional managed services, all the way from small web apps to massive IPv4/IPv6 multi data center media hosting, and everything in between, through container solutions, in-house, cloud, and data center, contact us.

by ingvar at Fri 02 Sep 2016, 11:37

01 September 2016

Redpill Linpro Techblog

IPv6: clatd, a component of 464XLAT, packages for Fedora and EPEL

The world is running out of IPv4 addresses, but luckily IPv6 is here now, and running a whole data center on IPv6 only is not just possible, it’s becoming the standard. But what if you have an app, a daemon, or a container that actually needs IPv4 connectivity? Then you may use 464XLAT to provide an IPv4 tunnel through your IPv6-only infrastructure. clatd is one component of 464XLAT.


Thu 01 Sep 2016, 22:00

16 August 2016

Redpill Linpro Techblog

Using systemd-networkd to work your net

On a laptop, per-distribution network tools like ifupdown, network-scripts and netcfg are a bit limiting. NetworkManager is a reasonable solution to roaming and using multiple networks, but for those of us who don’t run environments like GNOME, it’s a little opaque, even now that it has nmcli.

Systemd includes a ...

Tue 16 Aug 2016, 22:00

15 August 2016

Redpill Linpro Techblog

LDAP and password encryption strength

Given the focus on security breaches leaking account information the last few years, we have taken a fresh look at how secure our LDAP passwords really are, and if we can let OpenLDAP use a modern hash algorithm.


Mon 15 Aug 2016, 22:00

11 August 2016

Redpill Linpro Techblog

Encrypted Btrfs for Lazy Road Warriors' laptops

Why Btrfs?

Btrfs is full of new features to take advantage of, such as copy-on-write, storage pools, checksums, support for 16 exabyte filesystems, online grow and shrink, and space-efficient live snapshots. So, if you are used to managing storage with LVM and RAID, Btrfs can replace ...

Thu 11 Aug 2016, 22:00

10 August 2016

Redpill Linpro Techblog

varnish-4.1.3 and varnish-modules-0.9.1 for fedora and epel

The Varnish Cache project recently released varnish-4.1.3 and varnish-modules-0.9.1. Of course, we want updated rpms for Fedora and EPEL.


Wed 10 Aug 2016, 22:00

Knut Ingvald Dietzel

Encrypted Btrfs for Lazy Road Warriors' laptops

Why Btrfs?

Btrfs is full of new features to take advantage of, such as copy-on-write, storage pools, checksums, support for 16 exabyte filesystems, online grow and shrink, and space-efficient live snapshots. So, if you are used to managing storage with LVM and RAID, Btrfs can replace these technologies.

The best way to get familiar with something is to start using it. This post will detail some experiences from installing a laptop with Debian Jessie with Btrfs and swap on encrypted volumes.

The old way

Before switching to Btrfs, one would typically put /boot on the first primary partition and designate the next partition as an encrypted volume, which in turn was used for LVM that everything else was chucked into. For a road warrior with potentially sensitive data on disk, full disk encryption is a good thing, and as the LUKS encryption is at the partition level, one only has to punch in the passphrase once during boot.

The Btrfs way

When implementing Btrfs, one would like to avoid LVM and having to enter the passphrase multiple times. Achieving this with separate encrypted partitions designated for /boot, swap and Btrfs triggers subtle changes in the partitioning and the tools involved during boot.

One way is to partition with /boot on the first primary, then two encrypted volumes – one for swap and one for / with Btrfs – and to use the same passphrase for both volumes when initializing them.

After booting into your newly installed system:

~# apt-get install keyutils

and add the keyscript=decrypt_keyctl option to both of the encrypted volumes listed in /etc/crypttab. Then issue:

 ~# update-initramfs -u -k all

to update your initramfs to include keyutils. Then reboot and check that the entered passphrase is cached and used to unlock both of the encrypted volumes.
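
For reference, the relevant part of /etc/crypttab might end up looking something like this (device names and UUIDs are illustrative; with decrypt_keyctl, the third field names the cached key, so giving both volumes the same name is what lets them share a single passphrase prompt):

# <target name> <source device>   <key file>  <options>
sda5_crypt      UUID=<swap-uuid>  crypt_pw    luks,keyscript=decrypt_keyctl
sda6_crypt      UUID=<root-uuid>  crypt_pw    luks,keyscript=decrypt_keyctl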

Then what?

Many Linux distributions will install to the default subvolume. This may be undesirable as snapshots and subvolumes will be created inside the root filesystem. A possibly better layout would be to have a snapshots directory and a rootfs subvolume for the root filesystem.

So, we'll create the layout for the new default subvolume:

~# btrfs subvolume snapshot / /rootfs
~# mkdir /snapshots

As the contents under /rootfs will become the new root filesystem, do not make any changes to the current root filesystem until you have rebooted.

Edit /rootfs/etc/fstab so that the new rootfs subvolume will be used on subsequent reboots. I.e. you will need to include subvol=rootfs under options, à la:

# <file system>        <mount point>  <type>  <options>               <dump>  <pass>
/dev/mapper/sdXX_crypt /              btrfs   defaults,subvol=rootfs  0       1

In order to boot into the right subvolume, one needs to set the default subvolume to rootfs. First, find the subvolume's ID with:

~# btrfs subvolume list /
ID 262 gen 704 top level 5 path rootfs

and set it as default with:

~# btrfs subvolume set-default 262 /
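
Before rebooting, the change can be verified with get-default, which should now point at rootfs:

~# btrfs subvolume get-default /
ID 262 gen 704 top level 5 path rootfs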

Then restart to boot into your rootfs subvolume. A good sign of success is that the /snapshots directory is now missing, since it was created after the rootfs snapshot was taken and thus exists only in the old default subvolume. Now, delete the contents of the old root in the default subvolume.

To facilitate creation of new subvolumes/snapshots, make a mountpoint for the default subvolume:

~# mkdir -p /mnt/btrfs/root/

and add it to /etc/fstab:

# <file system>        <mount point>     <type>  <options>                     <dump>  <pass>
/dev/mapper/sda6_crypt /mnt/btrfs/root/  btrfs   defaults,noauto,subvolid=5    0       1

Then one can easily mount /mnt/btrfs/root/ and create snapshots/subvolumes. Yay!
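
For example, creating a dated read-write snapshot of the running root then looks like this (the snapshot name is arbitrary):

~# mount /mnt/btrfs/root/
~# btrfs subvolume snapshot /mnt/btrfs/root/rootfs /mnt/btrfs/root/snapshots/rootfs-20160811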

Suggestions for further reading

"Stuff" that helped me in getting acquainted with Btrfs:

  • The Btrfs Sysadmin Guide and the articles, presentations and podcasts it links to.
  • A two-part article series on Btrfs Storage Pools, Subvolumes And Snapshots.

by Knut Ingvald Dietzel at Wed 10 Aug 2016, 22:00

Ingvar Hagelund

varnish-4.1.3 and varnish-modules-0.9.1 for fedora and epel

The Varnish Cache project recently released varnish-4.1.3 and varnish-modules-0.9.1. Of course, we want updated rpms for Fedora and EPEL.

While there are official packages for el6 and el7, I prefer to use my Fedora downstream package for EPEL as well. So I have pushed updates for Fedora, and updated copr builds for epel5, epel6, and epel7.

An update of the officially supported bundle of varnish modules, varnish-modules-0.9.1, was also released a few weeks ago. I recently wrapped it for Fedora, and it is waiting for review in BZ #1324863. Packages for epel5, epel6, and epel7 are in copr as well.

Fedora updates for varnish-4.1.3 are available through the Fedora updates system, and the Copr repos for epel are available as well.

Testing and reports are very welcome.
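
If you want to try the copr builds, the dnf copr plugin makes it a one-liner on Fedora (the repo name below is a placeholder; use the actual name from copr):

~# dnf copr enable ingvar/<repo-name>
~# dnf install varnish varnish-modules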

Varnish Cache is a powerful and feature-rich front side web cache. It is also very fast, that is, fast as in powered by The Dark Side of the Force. On steroids. And it is Free Software.

Redpill Linpro is the market leader for professional Open Source and Free Software solutions in the Nordics, though we have customers from all over. For professional managed services, all the way from small web apps to massive IPv4/IPv6 multi data center media hosting, and everything in between, through container solutions, in-house, cloud, and data center, contact us.

by ingvar at Wed 10 Aug 2016, 12:45

07 August 2016

Redpill Linpro Techblog

Setting up Jekyll

So, management wants a microsite for blog entries ASAP, while the techs want to use tools they are used to: markdown and git. On top of that, we have limited spare time for implementing a new solution.

In the intersection of that lies Jekyll!


Sun 07 Aug 2016, 22:00

02 August 2016

Redpill Linpro Techblog


Welcome to our new techblog. This microsite will contain tech-related entries that interest the techies (and other employees) at Redpill Linpro.

We hope you enjoy the articles!

Tue 02 Aug 2016, 22:00

13 July 2016

Bjørn Ruberg

Beneficial side effects of running a honeypot

I’ve been running a honeypot for quite a while now, it started out as a pure SSH honeypot – first with Kippo and then I migrated to Cowrie. Some time later I added more honeypot services to the unit in the form of InetSim. The InetSim software provides multiple plaintext services like HTTP, FTP, and […]

by admin at Wed 13 Jul 2016, 21:42

11 July 2016

Magnus Hagander

Locating the recovery point just before a dropped table

A common example when talking about why it's a good thing to be able to do PITR (Point In Time Recovery) is the scenario where somebody or some thing (operator or buggy application) dropped a table, and we want to do a recover to right before the table was dropped, to keep as much valid data as possible.

PostgreSQL comes with nice functionality for deciding exactly what point to perform a recovery to, which can be specified at millisecond granularity, or as an individual transaction. But what if we don't know exactly when the table was dropped? (Exactly meaning down to the specific transaction, or at least the millisecond.)

One way to handle that is to "step forward" through the log one transaction at a time until the table is gone. This is obviously very time-consuming.

Assuming that DROP TABLE is not something we do very frequently in our system, we can also use the pg_xlogdump tool to help us find the exact spot to perform the recovery to, in much less time. Unfortunately, the dropping of temporary tables (implicit or explicit) is included in this, so if your application uses a lot of temporary tables this approach will not work out of the box. But for applications without them, it can save a lot of time.

Let's show an example. This assumes you have already set up the system for log archiving, you have a base backup that you have restored, and you have a log archive.

The first thing we do is try to determine the point where a DROP TABLE happened. We can do this by scanning for entries where rows have been deleted from the pg_class table, as this will always happen as part of the drop.
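
As a sketch of what that scan can look like (the archive path and segment names here are made up; pg_class has OID 1259, which is normally also its relfilenode, so deletions from it show up as Heap DELETE records against a relation ending in /1259):

$ pg_xlogdump -r Heap /archive/000000010000000000000010 /archive/00000001000000000000001F \
    | grep DELETE | grep '/1259'

The tx and lsn fields of the matching record then tell us which transaction the recovery should stop just short of.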

by Magnus Hagander at Mon 11 Jul 2016, 13:36

22 June 2016

Jean-Marc Reymond

ActiveMQ: Message ordering

ActiveMQ being a messaging system based on queues (aka FIFOs), one would take for granted that if there is only one producer and one consumer for a given queue (and they are both single threaded), the order of the messages is preserved.
Well, not always!

Let's say I have the following configuration for my ActiveMQ client running Camel:

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

  <bean id="amqConnectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
    <property name="brokerURL" value="${}"/>
    <property name="userName" value="${}"/>
    <property name="password" value="${}"/>
  </bean>

  <bean id="pooledConnectionFactory" class="org.apache.activemq.pool.PooledConnectionFactory" init-method="start" destroy-method="stop">
    <property name="maxConnections" value="12"/>
    <property name="maximumActiveSessionPerConnection" value="200"/>
    <property name="connectionFactory" ref="amqConnectionFactory"/>
  </bean>

  <bean id="jmsConfig" class="org.apache.camel.component.jms.JmsConfiguration">
    <property name="connectionFactory" ref="pooledConnectionFactory"/>
    <property name="transacted" value="true"/>
    <property name="cacheLevelName" value="CACHE_NONE"/>
  </bean>

  <bean id="activemq" class="org.apache.activemq.camel.component.ActiveMQComponent">
    <property name="configuration" ref="jmsConfig"/>
  </bean>

</blueprint>
With this configuration, even with a prefetch of 1 and only one consumer, you risk having messages consumed out of order even if they were produced in the right order.
The culprit is CACHE_NONE, which you want to use if you are using XA transactions.
But in normal circumstances, with a local transaction manager or the one built into the JmsConfiguration bean, it is recommended to use CACHE_CONSUMER, not only to improve performance but also to ensure proper message ordering, as shown below.
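
That is, the jmsConfig bean from the configuration above becomes (same bean, only the cache level changed):

<bean id="jmsConfig" class="org.apache.camel.component.jms.JmsConfiguration">
  <property name="connectionFactory" ref="pooledConnectionFactory"/>
  <property name="transacted" value="true"/>
  <property name="cacheLevelName" value="CACHE_CONSUMER"/>
</bean>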

Side note regarding prefetch=1:
Even though one might expect only one message to be sent to the consumer until it gets ack-ed (which, with transacted=true, happens when you are done processing it), it is still possible for a second message to be assigned to that consumer in the dispatch queue, where it is in effect blocked until the first message is fully processed (which can be a problem for slow consumers).
The solution (if this is really a problem) would be to use prefetch=0 for that given consumer, but this is costly, since the consumer is then polling the broker!
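
For what it's worth, with the Camel ActiveMQ component such a per-consumer prefetch can be set as a destination option directly on the endpoint URI, along these lines (the queue name is illustrative):

<from uri="activemq:queue:q_fake?destination.consumer.prefetchSize=0"/>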

More info here

If message ordering is a big requirement for you, you might want to look at the Camel resequencer.

Update 20160624: And now there is a Jira for this!
[ENTMQ-1783] Combination of CACHE_NONE and Transacted Affects Message Ordering - JBoss Issue Tracker

by Bambitroll at Wed 22 Jun 2016, 16:25

15 June 2016

Tore Anderson

IPv6 support in the PlayStation 4

The other day, I noticed with great interest that my PlayStation 4 was using IPv6 to communicate with the Internet. I’m fairly certain that this behaviour is new, so I decided to investigate.

This is what appeared on the wire when it connected to the network:

  1 0.000000000           :: -> ff02::16     ICMPv6 110 Multicast Listener Report Message v2
  2 0.072956000           :: -> ff02::1:ffe2:19c7 ICMPv6 78 Neighbor Solicitation for fe80::2d9:d1ff:fee2:19c7
  3 0.799982000           :: -> ff02::16     ICMPv6 90 Multicast Listener Report Message v2
  4 1.600965000 fe80::2d9:d1ff:fee2:19c7 -> ff02::16     ICMPv6 90 Multicast Listener Report Message v2
  5 2.957012000 fe80::2d9:d1ff:fee2:19c7 -> ff02::2      ICMPv6 70 Router Solicitation from 00:d9:d1:e2:19:c7
  6 2.970763000 fe80::385a:20ff:fe70:f441 -> fe80::2d9:d1ff:fee2:19c7 ICMPv6 270 Router Advertisement from 3a:5a:20:70:f4:41
  7 2.971328000 fe80::2d9:d1ff:fee2:19c7 -> ff02::1:2    DHCPv6 110 Solicit XID: 0xe0e8c5 CID: 0003000100d9d1e219c7
  8 2.973796000 fe80::385a:20ff:fe70:f441 -> fe80::2d9:d1ff:fee2:19c7 DHCPv6 191 Advertise XID: 0xe0e8c5 CID: 0003000100d9d1e219c7 IAA: 2a02:fe0:c071:f00a::f1e
  9 2.974148000 fe80::2d9:d1ff:fee2:19c7 -> ff02::1:2    DHCPv6 152 Request XID: 0xe0e8c5 IAA: 2a02:fe0:c071:f00a::f1e CID: 0003000100d9d1e219c7
 10 2.977070000 fe80::385a:20ff:fe70:f441 -> fe80::2d9:d1ff:fee2:19c7 DHCPv6 223 Reply XID: 0xe0e8c5 CID: 0003000100d9d1e219c7 IAA: 2a02:fe0:c071:f00a::f1e
 11 2.977472000           :: -> ff02::1:ff00:f1e ICMPv6 78 Neighbor Solicitation for 2a02:fe0:c071:f00a::f1e
 12 3.000971000 fe80::2d9:d1ff:fee2:19c7 -> ff02::16     ICMPv6 90 Multicast Listener Report Message v2
 13 3.400970000 fe80::2d9:d1ff:fee2:19c7 -> ff02::16     ICMPv6 90 Multicast Listener Report Message v2
 14 3.977343000 fe80::2d9:d1ff:fee2:19c7 -> ff02::1:ff70:f441 ICMPv6 86 Neighbor Solicitation for fe80::385a:20ff:fe70:f441 from 00:d9:d1:e2:19:c7
 15 3.977615000 fe80::385a:20ff:fe70:f441 -> fe80::2d9:d1ff:fee2:19c7 ICMPv6 86 Neighbor Advertisement fe80::385a:20ff:fe70:f441 (rtr, sol, ovr) is at 3a:5a:20:70:f4:41
 16 3.977874000 2a02:fe0:c071:f00a::f1e -> 2a02:fe0:1:2:1:0:1:110 DNS 103 Standard query 0xc4e3  AAAA
 17 3.987868000 2a02:fe0:1:2:1:0:1:110 -> 2a02:fe0:c071:f00a::f1e DNS 241 Standard query response 0xc4e3  CNAME CNAME AAAA 2a02:26f0:ac:181::1363 AAAA 2a02:26f0:ac:197::1363
 18 3.988383000 2a02:fe0:c071:f00a::f1e -> 2a02:26f0:ac:181::1363 TCP 94 62420→80 [SYN] Seq=0 Win=65535 Len=0 MSS=1440 WS=64 SACK_PERM=1 TSval=415148157 TSecr=0
 19 4.005888000 2a02:26f0:ac:181::1363 -> 2a02:fe0:c071:f00a::f1e TCP 94 80→62420 [SYN, ACK] Seq=0 Ack=1 Win=28560 Len=0 MSS=1440 SACK_PERM=1 TSval=3194590031 TSecr=415148157 WS=32
 20 4.006231000 2a02:fe0:c071:f00a::f1e -> 2a02:26f0:ac:181::1363 TCP 86 62420→80 [ACK] Seq=1 Ack=1 Win=65664 Len=0 TSval=415148175 TSecr=3194590031
 21 4.006361000 2a02:fe0:c071:f00a::f1e -> 2a02:26f0:ac:181::1363 HTTP 166 GET /netstart/ps4 HTTP/1.1
 22 4.021963000 2a02:26f0:ac:181::1363 -> 2a02:fe0:c071:f00a::f1e TCP 86 80→62420 [ACK] Seq=1 Ack=81 Win=28576 Len=0 TSval=3194590047 TSecr=415148175
 23 4.022418000 2a02:26f0:ac:181::1363 -> 2a02:fe0:c071:f00a::f1e HTTP 587 HTTP/1.1 403 Forbidden  (text/html)
 24 4.022479000 2a02:26f0:ac:181::1363 -> 2a02:fe0:c071:f00a::f1e TCP 86 80→62420 [FIN, ACK] Seq=502 Ack=81 Win=28576 Len=0 TSval=3194590048 TSecr=415148175
 25 4.022780000 2a02:fe0:c071:f00a::f1e -> 2a02:26f0:ac:181::1363 TCP 86 62420→80 [ACK] Seq=81 Ack=503 Win=65152 Len=0 TSval=415148191 TSecr=3194590048
 26 4.022849000 2a02:fe0:c071:f00a::f1e -> 2a02:26f0:ac:181::1363 TCP 86 62420→80 [FIN, ACK] Seq=81 Ack=503 Win=65664 Len=0 TSval=415148191 TSecr=3194590048
 27 4.037492000 2a02:26f0:ac:181::1363 -> 2a02:fe0:c071:f00a::f1e TCP 86 80→62420 [ACK] Seq=503 Ack=82 Win=28576 Len=0 TSval=3194590063 TSecr=415148191
 28 4.045960000 2a02:26f0:ac:181::1363 -> 2a02:fe0:c071:f00a::f1e TCP 86 [TCP Dup ACK 27#1] 80→62420 [ACK] Seq=503 Ack=82 Win=28576 Len=0 TSval=3194590071 TSecr=415148191
 29 4.046281000 2a02:fe0:c071:f00a::f1e -> 2a02:26f0:ac:181::1363 TCP 74 62420→80 [RST] Seq=82 Win=0 Len=0

There are several things I find noteworthy here:

  1. It supports DHCPv6. Since the DHCPv6 client runs in user space, this strongly indicates that it’s a deliberate move by Sony.
  2. It performs DNS requests over IPv6. A stub resolver also runs in user space, so it’s another indication that this is not accidental.
  3. It uses IPv6 to call home to a dual-stacked URL.
  4. The call home URL returns a 403 Forbidden error. However, it does so when accessed using IPv4 as well, so this might not mean much.

For the record, the call home request does not include any personal information beyond the source IP address and a URL indicating it’s a PS4. That said, the request itself is more than enough for Sony to generate useful statistics on how many PS4s with IPv6 Internet access there are out there. The following is the complete call home request made:

GET /netstart/ps4 HTTP/1.1
Connection: close

So far I’ve not seen it use IPv6 for anything other than what I’ve described above. An application like Netflix, which ought to use IPv6 whenever possible, does not. It would appear, therefore, that these are just small beginnings, perhaps done primarily to gather statistics. Nevertheless, I am very excited to see that Sony has begun work on implementing IPv6 support for the PS4.

Technical details

I first noticed the IPv6 capability after upgrading to system software version 3.50. I can’t rule out that it showed up in an earlier update, though, since I haven’t actively looked for it after installing earlier updates.

I tested various network environments to figure out exactly what the PS4 supports. It would appear that Sony has done a thorough job:

  • It supports assignment of global IPv6 addresses using both SLAAC and DHCPv6 IA_NA. When using SLAAC, the Interface Identifier appears to be randomly generated. That is, the IID does not embed the PS4’s MAC address, and it changes every time the PS4 reconnects to the network.
  • It will learn IPv6 DNS servers from both the Recursive DNS Server RA Option and DHCPv6.
  • Addresses and/or DNS servers learned from DHCPv6 are preferred over those learned from ICMPv6 Router Advertisements (if any).
  • It will start a DHCPv6 client only if either the Managed or OtherConfig RA flag is set. If Managed=1, it will solicit both IA_NA and DNS configuration; otherwise, if OtherConfig=1, it will send a DHCPv6 Information-request message to obtain DNS configuration only.
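
For anyone who wants to reproduce the DHCPv6 behaviour described in the last point, a minimal radvd.conf sketch with the Managed flag set might look like this (interface name and prefix are illustrative):

interface eth0 {
    AdvSendAdvert on;
    AdvManagedFlag on;          # M=1: the PS4 should solicit IA_NA and DNS over DHCPv6
    prefix 2001:db8:1::/64 {
        AdvOnLink on;
        AdvAutonomous off;      # no SLAAC; addressing is left to DHCPv6
    };
};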

I did find a couple of bugs too:

  • It would sometimes attempt to use its link-local address to communicate with the DNS server or the HTTP call-home web server, which doesn’t work. This suggests that there is a bug in the PS4’s default address selection logic, or that it failed to activate its SLAAC- or DHCPv6-assigned address. Simply re-connecting to the network would usually resolve this issue.
  • If address assignment is SLAAC-only, and the advertised prefix is off-link, no IPv6 Internet traffic is seen. In this case, the PS4 does not even start the DHCPv6 client even though OtherConfig=1. This is clearly a bug; there’s no reason why SLAAC can’t work perfectly well with off-link prefixes.

The next time I get a system software update, I’ll make sure to re-do all these tests and report any changes in a new post.

Wed 15 Jun 2016, 00:00

07 June 2016

Bjørn Ruberg

Near-realtime blacklist warnings with NetFlow, Perl and OTX

Installing IDS sensors in your network for monitoring traffic is not always feasible, for several possible reasons. Perhaps the network infrastructure is too complex, leading to blind spots. Maybe the affected network links have higher capacity than your ad hoc IDS sensor, causing packet loss on the sensor. Or your company may be organized in […]

by bjorn at Tue 07 Jun 2016, 17:43

24 May 2016

Magnus Hagander

www.postgresql.org is now https only

We've just flipped the switch on www.postgresql.org to be served over https only. This has been done for a number of reasons:

  • In response to popular request
  • Google, and possibly other search engines, have started to give higher scores to sites on https, and we were previously redirecting accesses to cleartext
  • Simplification of the site code, which now doesn't have to keep track of which pages need to be secure and which do not
  • Prevention of evil things like WiFi hotspot providers injecting ads or javascript into the pages

We have not yet enabled HTTP Strict Transport Security, but will do so in a couple of days once we have verified all functionality. We have also not enabled HTTP/2 yet; this will probably come at a future date.

Please help us out with testing this, and let us know if you find something that's not working, by emailing the pgsql-www mailing list.

There are still some other websites that are not available over https, and we will be working on those as well over the coming weeks or months.

by (Magnus Hagander) at Tue 24 May 2016, 20:09

23 May 2016

Tech Area ECM blogging about Alfresco

Tidying the alfresco workflow database

During an upgrade of Alfresco to 4.2.6 for a client of ours, we identified a problematic patch script which does some refactoring of data in the Activiti tables in the database. The customer's system has been running for many years, and the Activiti historic tables have grown large, since Alfresco never cleans them automatically. All workflow and task data is stored indefinitely in the act_hi_* tables.

For this particular system the act_hi_detail table contained 2.1 million records (not that many for a database). However, the nasty SQL used to refactor data in the patch does not work well for a system of this size with that many workflows. The total number of Activiti processes (active and completed) was about 21,000, and the patch script ran in our QA environment for many hours. Too many hours for us to have scheduled downtime, so we decided to cancel the upgrade script and find an alternate solution.

After some investigation and some trial and error, we found that deleting a workflow through the workflow service also clears up the Activiti history tables. This can be done from the UI as well, per workflow: after a workflow has completed, you have the option to delete it from the workflow details page. This is nothing you do manually for 21k workflows, so here comes the JavaScript console to the rescue!

// Grab the Spring application context; the original assignment was lost from
// the post, this is the standard JavaScript console idiom for it
var ctx = Packages.org.springframework.web.context.ContextLoader.getCurrentWebApplicationContext();
var workflowService = ctx.getBean('WorkflowService');
logger.log("Starting cancel workflow script");
log.error("Starting cancel workflow script");

var completedWorkflows = workflowService.getCompletedWorkflows();
var limit = 5000;
logger.log("Limit is: "+limit);
log.error("Limit is: "+limit);
logger.log("Number of completed workflows: "+completedWorkflows.size());
log.error("Number of completed workflows: "+completedWorkflows.size());
if (completedWorkflows) {
	for (var i=0;i<completedWorkflows.size();i++) {
		var wf = completedWorkflows.get(i);
		if (wf.isActive()) {
			logger.log("Workflow is still active: "+wf.getId());
			log.error("Workflow is still active: "+wf.getId());
		} else {
			logger.log("Deleting workflow: "+wf.getId());
			log.error("Deleting workflow: "+wf.getId());
			// The delete call itself was lost from the post; deleteWorkflow
			// is the standard WorkflowService API for this
			workflowService.deleteWorkflow(wf.getId());
		}
		if (i>=limit) {
			break;
		}
	}
}
logger.log("Finished cancel workflow script");
log.error("Finished cancel workflow script");

This script will delete all completed workflows, both old jBPM workflows and Activiti workflows. The script will most likely time out when you run it, so we added some log4j logging to it as well, to get a log trail in our alfresco.log.

Since we are doing some workflow maintenance here, we might as well delete all old active jBPM workflows as well (in this case we knew for a fact that they would never be completed).

// Same Spring context idiom as above
var ctx = Packages.org.springframework.web.context.ContextLoader.getCurrentWebApplicationContext();
var workflowService = ctx.getBean('WorkflowService');

var activeWorkflows = workflowService.getActiveWorkflows();
var limit = 2000;
logger.log("Limit is: "+limit);
log.error("Limit is: "+limit);
logger.log("Number of active workflows: "+activeWorkflows.size());
log.error("Number of active workflows: "+activeWorkflows.size());
if (activeWorkflows) {
	for (var i=0;i<activeWorkflows.size();i++) {
		var wf = activeWorkflows.get(i);
		if (!wf.isActive()) {
			logger.log("Workflow is not active: "+wf.getId());
			log.error("Workflow is not active: "+wf.getId());
		} else if (wf.getId().indexOf("activiti")===0) {
			// Leave active Activiti workflows alone
			logger.log("Activiti workflow: "+wf.getId());
			log.error("Activiti workflow: "+wf.getId());
		} else if (wf.getId().indexOf("jbpm")===0) {
			logger.log("Canceling jBPM workflow: "+wf.getId());
			log.error("Canceling jBPM workflow: "+wf.getId());
			// The cancel call was lost from the post; cancelWorkflow is
			// the standard WorkflowService API for active workflows
			workflowService.cancelWorkflow(wf.getId());
		} else {
			logger.log("Unknown workflow type: "+wf.getId());
			log.error("Unknown workflow type: "+wf.getId());
		}
		if (i>=limit) {
			break;
		}
	}
}

As a result of this maintenance job, we were left with about 1,700 active Activiti workflows and an act_hi_detail table with about 210,000 rows (about 10% of the original count), and the patch went through in seconds.

by Marcus Svartmark at Mon 23 May 2016, 09:00

Confused JVM? Kill it!

We have a customer where we’ve started to get memory problems in Alfresco recently. Those kinds of problems can be very hard to pinpoint, but for this particular client we’re almost sure what’s causing them. Unfortunately for us, this knowledge doesn’t prevent the memory problem. The bad thing with the JVM (in this case) is that even though a memory problem has occurred, the JVM is left in a running state, although not a good running state… This particular Alfresco solution is clustered, and the memory problems eject the server from Alfresco’s Hazelcast cluster, but the LoadBalancer cluster still thinks the server is in the cluster, which leads to a lot of problems down the road :(

Our customer has a very good organization around Alfresco, and if the JVM in which Alfresco lives dies when a memory problem occurs, it can be restarted in a breeze. To achieve this, there is a JVM parameter (-XX:OnOutOfMemoryError) which can be used to execute a script when such an error occurs.

Below is how we solved this for our customer, with step-by-step instructions on how to achieve it. The server OS is Ubuntu 12.04.

  1. Install the package mailutils if not already installed:

     sudo apt-get install mailutils

  2. Create a shell script somewhere in your installation path:

     nano -w /opt/alfresco/current_version/bin/

  3. Paste this content into the script:

     #!/bin/bash
     # First argument  : process id
     # Second argument : server port
     # Third argument  : module (repo, solr or share)
     PROCESSID=$1
     PORT=$2
     MODULE=$3
     # The notification file and mail addresses were lost from the original
     # post; the values below are illustrative placeholders
     FILENAME=/tmp/oom-notification.txt
     FROM="alfresco@$HOSTNAME"
     TO="ops@example.com"
     SUBJECT="Tomcat shut down on $HOSTNAME:$PORT ($MODULE)"
     rm -f $FILENAME
     echo "Server  : $HOSTNAME" >> $FILENAME
     echo "Port    : $PORT" >> $FILENAME
     echo "Module  : $MODULE" >> $FILENAME
     echo "Message : Server got a java.lang.OutOfMemoryError and the java process is killed" >> $FILENAME
     mail -a "From: $FROM" -s "$SUBJECT" $TO < $FILENAME
     kill -9 $PROCESSID

     The script takes three parameters: the process id (of the jvm process), the HTTP port of Tomcat, and the module which caused the problem (repo, solr or share).

     In order for this to work, a local mail server has to be installed. For our client we’ve installed postfix, which acts as a mail relay server.

  4. Add the following to the Tomcat startup parameters (for example in the Tomcat start script):

     JAVA_OPTS="$JAVA_OPTS -XX:OnOutOfMemoryError='/opt/alfresco/current_version/bin/ %p 8080 repo'"

  5. Restart Alfresco, force some nasty code to kill it :) and watch how you get a mail and the JVM is killed.

by Niklas Ekman at Mon 23 May 2016, 07:36

20 May 2016

Jorge Enrique Barrera

Split a file into a number of equal parts

As an example, we have a file named primary_data_file.txt that contains 616 lines of data. We want to split this into 4 files, with an equal amount of lines in each.

$ wc -l primary_data_file.txt 
616 primary_data_file.txt

The following command should do the trick:

split -da 1 -l $((`wc -l < primary_data_file.txt`/4)) primary_data_file.txt split_file --additional-suffix=".txt"

Here, -d selects numeric suffixes instead of alphabetic ones, -a 1 sets the suffix length to 1, and -l sets the number of lines per output file, in this case the input's line count divided by 4.

The results after running the command are the following files:

$ wc -l split_file*
  154 split_file0.txt
  154 split_file1.txt
  154 split_file2.txt
  154 split_file3.txt
  616 total

by Jorge Enrique Barrera at Fri 20 May 2016, 10:28

04 May 2016

Jean-Marc Reymond

ActiveMQ Command-line utility

I have been looking for a nice little ActiveMQ CLI utility in order to push/consume messages to/from ActiveMQ via the standard OpenWire protocol for quite a while now, and I finally found it!

Thanks to the glorious developer behind this github project :)

ActiveMQ Command-line utility

Make sure you have a look at all the parameters on the main github page of the project, and use the jar with dependencies if you don't have maven installed.

Here is a little bash script you can use to make things simpler:

#!/bin/bash
# blog:
# github:
# Parameters examples:
# -U admin -P admin -p "toto" q_fake
# -U admin -P admin -c 9 -p "toto" q_fake
# -U admin -P admin -o /tmp/msgs/fakemsg -c 9 -g q_fake
# -U admin -P admin -p @/tmp/msgs/fakemsg-no4 q_fake
java -jar ~/bin/a-1.3.0-jar-with-dependencies.jar "$@"

by Bambitroll at Wed 04 May 2016, 16:44

27 April 2016

Ingvar Hagelund

hitch-1.2.0 for fedora and epel

Hitch is a libev-based high performance SSL/TLS proxy. It is developed by Varnish Software, and may be used for adding https to Varnish cache.

hitch-1.2.0 was recently released. Among the new features in 1.2.0 is more granular per-site configuration. Packages for Fedora and EPEL6/7 were submitted for testing today. Please test and report feedback.

Redpill Linpro is the market leader for professional Open Source and Free Software solutions in the Nordics, though we have customers from all over. For professional managed services, all the way from small web apps to massive IPv4/IPv6 multi data center media hosting, and everything in between, through container solutions, in-house, cloud, and data center, contact us.

by ingvar at Wed 27 Apr 2016, 23:31

Bjørn Ruberg

The inherent risks of visualizing firewall probes

For some time now, I’ve been graphing all unsolicited network traffic destined for my network. For instance, it’s quite useful for detecting slow scans, which will show up as the diagonally aligned green scatter points in this plot (click to enlarge). Other scans and probes often happen faster, when the attacker isn’t much concerned about […]

by bjorn at Wed 27 Apr 2016, 06:18

22 March 2016

Bjørn Ruberg

SSH outbound connections – what are they trying?

Still fascinated by the outbound connection attempts from my Cowrie honeypot, I’ve been looking into what the intruders are trying to obtain with the outbound connections. As previously mentioned, there are bots actively attempting outbound connections towards a lot of remote services. Most are simply TCP socket connection attempts, but now and again the connection […]

by bjorn at Tue 22 Mar 2016, 21:38

09 March 2016

Magnus Hagander

JSON field constraints

After giving my presentation at ConFoo this year, I had some discussions with a few people about the ability to put constraints on JSON data, and whether any of the advanced PostgreSQL constraints work for that. Or in short, can we get the benefits from both SQL and NoSQL at the same time?

My general response to questions like this when it comes to PostgreSQL is "if you think there's a chance it works, it probably does", and it turns out that applies in this case as well.

For things like UNIQUE keys and CHECK constraints it's fairly trivial, but there are also things like EXCLUSION constraints where there are some special constructs that need to be considered.

Other than the technical side of things, it's of course also a question of "should we do this". The more constraints that are added to the JSON data, the less "schemaless" it is. On the other hand, there are other databases that have schemaless/dynamic schemas as their main selling point, yet still require per-key indexes and constraints (unlike PostgreSQL, where JSONB is actually schemaless even when indexed).

Anyway, back on topic. Keys and constraints on JSON data.

In PostgreSQL, keys and constraints can be defined on both regular columns and directly on any expression, as long as that expression is immutable (meaning that the output is only ever dependent on the input, and not on any outside state). And this functionality works very well with JSONB as well.

So let's start with a standard JSONB table:

postgres=# CREATE TABLE jsontable (j jsonb NOT NULL);
postgres=# CREATE INDEX j_idx ON jsontable USING gin(j jsonb_path_ops);

Of course, declaring a table like this is very seldom a good idea in reality - a single table with just a JSONB field. You probably know more about your data than that, so there will be other fields in the table than just the JSONB field. But this table will suffice for our example.

A standard gin index using jsonb_path_ops is how we get fully schemaless indexing in jsonb with maximum performance. We're not actually going to use this index in the examples below this time, but in real deployments it's likely one of the main reasons to use JSONB in the first place.

To illustrate the constraints, let's add some data representing some sort of bookings. Yes, this would be much better represented as relational, but for the sake of example we'll use JSON with a semi-fixed schema. We'll also use a uuid in the JSON data as some sort of key, as this is fairly common in these scenarios.

postgres=# INSERT INTO jsontable (j) VALUES ($${
  "uuid": "4e9cf085-09a5-4b4f-bc99-bde2d2d51f41",
  "start": "2015-03-08 10:00",
  "end": "2015-03-08 11:00",
  "title": "test"
}$$);
INSERT 0 1
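
As a taste of the trivial case, a UNIQUE key on the embedded uuid can be declared directly on an expression (a sketch; the cast to uuid normalises formatting so equivalent values collide):

postgres=# CREATE UNIQUE INDEX j_uuid_key ON jsontable (((j->>'uuid')::uuid));
CREATE INDEX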

by Magnus Hagander at Wed 09 Mar 2016, 12:59

Bjørn Ruberg

Threat intelligence: OTX, Bro, SiLK, BIND RPZ, OSSEC

Building a toolbox around threat intelligence can be done with freely available tools. Shared information about malicious behaviour allows you to detect and sometimes prevent activity from – and to – Internet resources that could compromise your systems’ security. I’ve already described how to use lists of malicious domain names in a BIND RPZ (Response […]

by bjorn at Wed 09 Mar 2016, 07:15

05 March 2016

Bjørn Ruberg

ClamAV client/server setup

Note: This may very well be well-known information, but I found it difficult to get exact answers from the official ClamAV documentation, available man pages, and other kinds of documentation. The most useful hint originated from a mailing list thread considering ClamAV version 0.70, which is getting rather outdated. My original issue was getting antivirus […]

by bjorn at Sat 05 Mar 2016, 19:37

26 February 2016

Bjørn Ruberg

Visualizing honeypot activity

Certain honeypot intruders are quite persistently trying to open outbound SSH tunnels, as described in an earlier article. So far I’ve seen a lot of attempts to open tunnels towards mail server TCP ports 25 (SMTP), 465 (SMTPS) and 587 (submission); web servers on TCP ports 80 (HTTP) and 443 (HTTPS); but also several other […]

by bjorn at Fri 26 Feb 2016, 16:02

22 February 2016

Bjørn Ruberg

Honeynet outbound probes

My Cowrie honeypot is now seeing a surge of outbound SSH tunnel probes, both towards different mail servers but also towards a specific web server, probably with the purpose of informing about a successful intrusion. The honeypot has seen outbound attempts before, but not as persistent as with this bot from .ru. Cowrie fakes successful […]

by bjorn at Mon 22 Feb 2016, 07:42

Tore Anderson

IPv6-only data centre RFCs published

I’m very pleased to report that my SIIT-DC RFCs were published by the IETF last week. If you’re interested in learning how to operate an IPv6-only data centre while ensuring that IPv4-only Internet users will remain able to access the services hosted in it, you should really check them out.

Start out with Stateless IP/ICMP Translation for IPv6 Data Center Environments (RFC 7755). This document describes the core functionality of SIIT-DC and the reasons why it was conceived.

If you think that you can’t possibly make your data centre IPv6-only yet because you still need to support a few legacy IPv4-only applications or devices, continue with RFC 7756. This document describes how the basic SIIT-DC architecture can be extended to support IPv4-only applications and devices, allowing them to live happily in an otherwise IPv6-only network.

The third and final document is Explicit Address Mappings for Stateless IP/ICMP Translation (RFC 7757). This extends the previously existing SIIT protocol, making it flexible enough to support SIIT-DC. This extension is not specific to SIIT-DC; other IPv6 transition technologies such as 464XLAT and IVI also make use of it. Unless you’re implementing an IPv4/IPv6 translation device, you can safely skip RFC 7757. That said, if you want a deeper understanding on how SIIT-DC works, I recommend you take the time to read RFC 7757 too.

So what is SIIT-DC, exactly?

SIIT-DC is a novel approach to the IPv6 transition that we’ve developed here at Redpill Linpro. It facilitates the use of IPv6-only data centre environments in the transition period where a significant portion of the Internet remains IPv4-only. One could quite accurately say that SIIT-DC delivers «IPv4-as-a-Service» for data centre operators.

In a nutshell, SIIT-DC works like this: when an IPv4 packet is sent to a service hosted in a data centre (such as a web site), that packet is intercepted by a device called an SIIT-DC Border Relay (BR) as soon as it reaches the data centre. The BR translates the IPv4 packet to IPv6, after which it is forwarded to the IPv6 web server just like any other IPv6 packet. The server’s reply gets routed back to a BR, where it is translated from IPv6 to IPv4, and forwarded through the IPv4 Internet back to the client. Neither the client nor the server need to know that translation between IPv4 and IPv6 is taking place; the IPv4 client thinks it’s talking to a regular IPv4 server, while the IPv6 server thinks it’s talking to a regular IPv6 client.

There are several reasons why an operator might find SIIT-DC an appealing approach. In no particular order:

  • It facilitates IPv6 deployment without accumulation of IPv4 technical debt. The operator can simply switch from IPv4 to IPv6, rather than committing to operate IPv6 in parallel with IPv4 for the unforeseeable future (i.e., dual stack). This greatly reduces complexity and operational overhead.
  • It doesn’t require the native IPv6 infrastructure to be built in a certain way. Any IPv6 network is compatible with SIIT-DC. It does not touch native IPv6 traffic from IPv6-enabled users. This means that when the IPv4 protocol eventually falls into disuse, no migration project will be necessary - SIIT-DC can be safely removed without any impact to the IPv6 infrastructure.
  • It maximises the utilisation of the operator’s public IPv4 addresses. If all the operator has available is a /24, every single one of those 256 addresses can be used to provide Internet-facing services and applications. No addresses go to waste due to them being assigned to routers or backend servers (which do not need to communicate with the public Internet). It is no longer necessary to waste addresses by rounding up IPv4 LAN prefix sizes to the nearest power of two. Never again will it be necessary to expand a server LAN prefix, as it will be IPv6-only and thus practically infinitely large.
  • Unlike IPv4 NAT, it is completely stateless. Therefore, it scales in the same way as a standard IP router: the only metrics that matter are packets-per-second and bits-per-second. Its stateless nature makes it trivial to deploy; the BRs can be located anywhere in the IPv6 network. It is possible to spread the load between multiple BRs using standard techniques such as anycast or ECMP. High availability and redundancy are easily accomplished with the use of standard IP routing protocols.
  • Unlike some kinds of IPv4 NAT, it doesn’t hide the source address of IPv4 users. Thus, the IPv6-only application servers remain able to perform tasks which depend on the client’s source address, such as geo-location or abuse logging.
  • It allows for IPv4-only applications or devices to be hosted in an otherwise IPv6-only data centre. This is accomplished through an optional component called a SIIT-DC Edge Relay. This is what is being described in RFC 7756.

The history of SIIT-DC

I think it was around the year 2008 that it dawned on me that Redpill Linpro’s IPv4 resources would not last forever. At some point in the future we would inevitably be prevented from expanding our infrastructure based on IPv4. It was clear that we needed to come up with a plan on how to deal with that situation well ahead of time. IPv6 obviously needed to be part of that plan, but exactly how wasn’t clear at all.

Conventional wisdom at the time told us that dual stack, i.e., running IPv4 in parallel with IPv6, was the solution. We did some pilot projects, but the results were discouraging. In particular, these problems quickly became apparent:

  1. It would not prevent us from running out of IPv4. After all, dual stack requires just as many IPv4 addresses as single-stack IPv4.
  2. IPv4 would continue to become an ever more entrenched part of our infrastructure. Every new IPv4-using service or application would inevitably make a future IPv4 sunsetting project even more difficult to pull off.
  3. Server and application operators simply didn’t like running two networking protocols in parallel. Dual stack greatly increased complexity: it became necessary to duplicate service configuration, firewall rules, monitoring targets, and so on, just in order to support both protocols equally well. This duplication in turn created lots of new possibilities for things to go wrong, reducing reliability and uptime. And when something did go wrong, troubleshooting the issue required more time. Single stack was therefore seen as superior to dual stack.

It was clear that we needed a better approach based on single-stack IPv6, but we were unable to find an already existing one which solved all of our problems.

One of the things that we evaluated, though, was Stateless IP/ICMP Translation (RFC 6145). SIIT looked promising, but it had some significant shortcomings (which RFC 7757’s Problem Statement section elaborates on). In its then-current state, SIIT simply wasn’t flexible enough to be up to the task we had in mind for it. However, we did identify a way SIIT could be improved in order to facilitate our IPv6-only data centre use case. This improvement is what RFC 7757 ended up describing.

I believe the first time I presented the idea of SIIT-DC (under the working name «RFC 6145 ++») in public was at a World IPv6 Day seminar back in June 2011. In case you’re interested in a little bit of «history in the making», the slides (starting at page 34) and video (starting at 34:15) from that event are still available.

A few months later we had a working proof of concept (based on TAYGA) running. By January 2012 I had enough confidence in it to move our corporate home page to it, where it has remained since. I didn’t ask for permission…but fortunately I didn’t have to ask for forgiveness either - to this day there have been zero complaints!

The solution turned out to work remarkably well, so in keeping with our open source philosophy we decided to document exactly how it worked so that the entire Internet community could benefit from it. To that end, my very first Internet-Draft, draft-anderson-siit-dc-00, was submitted to the IETF in November 2012. I must admit I greatly underestimated the amount of work that would be necessary from that point on…

The document was eventually adopted by the IPv6 Operations working group (v6ops) and split into three different documents, each covering relatively independent areas of functionality. Then began multiple cycles of peer review and feedback by the working group followed by updates and refinements. I’d especially like to thank Fred Baker, chair of the v6ops working group, for helping out a lot during the process. For a newcomer like me, the IETF procedures can certainly appear rather daunting, but thanks to Fred’s guidance it went very smoothly.

One particularly significant event happened in early 2015, when Alberto Leiva Popper from NIC México joined in the effort as a co-author of RFC 7757-to-be (which describes the specifics of the updated SIIT algorithm). Alberto is the lead developer of Jool, an open-source IPv4/IPv6 translator for the Linux kernel. Thanks to his efforts, RFC 7757-to-be (and, by extension, SIIT-DC) was quickly implemented in Jool, which really helped move things along. The IETF considers the availability of running code to be of utmost importance when considering a proposed new Internet standard, and Jool fit the bill perfectly.

For the record, we decommissioned our old TAYGA-based SIIT-DC BRs in favour of new ones based on Jool as soon as we could. This was a great success - our Jool BRs are currently handling IPv4 connectivity for hundreds of IPv6-only services and applications, and the number is rapidly growing. We’re very grateful to Alberto and NIC México for all the great work they’ve done with Jool - it’s an absolutely fantastic piece of software. I encourage anyone interested in IPv6 transition to download it and try it out.
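
As a taste of what that looks like in practice, here is a minimal sketch of a BR configured with Jool 3.x; the prefixes and addresses are examples only, and the syntax is from that generation of Jool, so double-check against its documentation:

# Load the SIIT module with a translation prefix for reaching the IPv4 internet
~# modprobe jool_siit pool6=64:ff9b::/96
# Explicit Address Mapping (RFC 7757): the service's public IPv4 address
# maps statelessly to the IPv6-only server's address
~# jool_siit --eamt --add 192.0.2.1 2001:db8:12:34::1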

In late 2015 the documents reached IETF consensus, after which they were sent to the RFC Editor. They did a great job with helping improve the language, fixing inconsistencies, pointing out unclear or ambiguous sentences, and so on. When that was done, the only remaining thing was to publish the documents - which, as I mentioned before, happened last week.

It feels great to have crossed the finish line with these documents, and writing them has certainly been a very interesting exercise. It is also nice to prove that it is possible for regular operators to provide meaningful contributions to the IETF - you don’t have to be an academic or work for one of the big network equipment vendors. That said, it has taken considerable effort, so I certainly look forward to being able to focus fully on my work as a network engineer again. I promise that’s going to result in more good IPv6 news in 2016…watch this space!

Mon 22 Feb 2016, 00:00

10 February 2016

Magnus Hagander

A new version of Planet PostgreSQL

I have just pushed code for a new version of the codebase for Planet PostgreSQL.

For those of you who are just reading Planet, hopefully nothing at all should change. There will probably be some bugs early on, but there are no general changes in functionality. If you notice something that is wrong (give it a couple of hours from this post at least), please send an email to planet(at)postgresql.org and we'll look into it!

For those who have your blogs aggregated at Planet PostgreSQL, there are some larger changes. In particular, you will notice the whole registration interface has been re-written. Hopefully that will make it easier to register blogs, and also to manage the ones you have (such as removing a post that needs to be hidden). The other major change is that Planet PostgreSQL will now email you whenever something has been fetched from your blog - to help you catch configuration mistakes quicker.

The by far largest changes are in the moderation and administration backend. This will hopefully lead to faster processing of blog submissions, and less work for the moderators.

by Magnus Hagander at Wed 10 Feb 2016, 19:57