Planet Redpill Linpro

02 December 2016

Redpill Linpro Sysadvent

Publishing Jekyll updates with gitlab-ci

Our company has extensively embraced our local GitLab installation. At its core, GitLab provides a repository management system based on the Git versioning system. A very practical extension to GitLab is the GitLab CI feature.

In short, the GitLab CI is a set of commands that can be run ...

Fri 02 Dec 2016, 23:00

01 December 2016

Redpill Linpro Techblog

Welcome to a new season of our SysAdvent Blog!

This December, the staff at Redpill Linpro is running an advent calendar with sysadmin-related content!

Season two of our SysAdvent Calendar kicked off, as expected, on December 1st.

...

Thu 01 Dec 2016, 23:00

Redpill Linpro Sysadvent

Liberating the network

The network is a very proprietary place. When you buy an IP router or an Ethernet switch, what you’re really buying is a tightly integrated bundle of hardware and software.

Mixing and matching software and hardware components in order to design a network infrastructure tailored to your precise set of ...

Thu 01 Dec 2016, 23:00

30 November 2016

Redpill Linpro Sysadvent

Grooming your SSL/TLS setup with cipherscan

If you rely on SSL/TLS certificates and you have a slew of services to maintain online, things can quickly get out of hand. If you don’t have the time or the resources to keep up to speed with what ciphers to disable or what techniques to employ server-side, you might ...

Wed 30 Nov 2016, 23:00

27 November 2016

Bjørn Ruberg

TCP/7547 on the rise

Since yesterday I’ve registered a significant increase in probes for TCP port 7547. Over the last 12 hours, more than 1000 different IP addresses have tried to contact one of my networks. 1000 probes is of course no big deal, but the port that’s suddenly become of interest can be. The image below shows the […]

by bjorn at Sun 27 Nov 2016, 07:05

24 November 2016

Redpill Linpro Sysadvent

Welcome to a new season of our SysAdvent Blog!

This December, the staff at Redpill Linpro will again run an advent calendar with sysadmin-related content!

Season two of our SysAdvent Calendar will kick off, as expected, on December 1st.

As with the original sysadvent blog, the article contents this year will be a bit longer compared to ...

Thu 24 Nov 2016, 23:00

22 November 2016

Magnus Hagander

A more secure Planet PostgreSQL

Today, Planet PostgreSQL was switched over from http to https. Previously, https was only used for the logged-in portions for blog owners, but now the whole site uses it. If you access the page with the http protocol, you will automatically be redirected to https.

As part of this, the RSS feeds have also changed address from http to https (the path part of the URLs remains unchanged). If your feed reader does not automatically follow redirects, this will unfortunately make it stop updating until you have changed the URL.

In a couple of days we will enable HTTP Strict Transport Security on the site as well.
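If you want to verify the redirect from the command line (and, once enabled, the HSTS header), a quick check along these lines will do; the exact header values may differ:

$ curl -sI http://planet.postgresql.org/ | grep -i '^location'
$ curl -sI https://planet.postgresql.org/ | grep -i 'strict-transport-security'

The first command should show a Location header pointing at the https URL; the second will stay empty until HSTS is switched on.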

We apologize for the inconvenience for those of you who have to reconfigure your feeds, but we hope you agree the change is for the better.

by nospam@hagander.net (Magnus Hagander) at Tue 22 Nov 2016, 20:29

21 November 2016

Tore Anderson

IPv6 roaming in Sweden

There has been much talk about IPv6 potentially causing problems for subscribers roaming between mobile networks. There’s an entire RFC dedicated to the possible failure cases, and it has even been claimed that «until every carrier has activated IPv6 there is no way to activate IPv6 for international roaming».

My own personal experience with IPv6 roaming hasn’t been quite that bleak, so I’ve therefore decided to thoroughly test IPv6 roaming whenever I have the chance and chronicle the results here, one post per country I visit.

As I am currently attending Internetdagarna 2016 in Stockholm, I now have the opportunity to perform this testing for Sweden.

There are four mobile network operators in Sweden. However, the only one I could roam in while using my Telenor Norway SIM card appeared to be Telenor Sweden. (No big surprise, there.) Thus I was only able to test the Telenor Sweden PLMN.

The tests were performed by separately attempting to establish single-stack IPV6 and dual-stack IPV4V6 data bearers and then visiting ds.test-ipv6.com. This procedure was repeated for each access technology I was able to use. The results were as follows:

Visited PLMN    MCCMNC  Tech  IPV6 bearer        IPV4V6 bearer
Telenor Sweden  24024   2G    10/10 (IPv6-only)  10/10 (dual stack)
Telenor Sweden  24008   3G    10/10 (IPv6-only)  10/10 (dual stack)
Telenor Sweden  24008   4G    10/10 (IPv6-only)  10/10 (dual stack)

Thus I can conclude that IPv6 roaming in the Telenor Sweden PLMN works 100% perfectly. Kudos to Telenor Norway and Telenor Sweden for making that happen!

Mon 21 Nov 2016, 00:00

12 November 2016

Magnus Hagander

PGConf.EU 2016 attendee statistics

It is now about a week since PGConf.EU 2016, and things are slowly returning to normal :) You'll have to wait a while longer for the traditional summary of the feedback post that I make every year, but there's another piece of statistics I'd like to share.

As always, Dave put the attendees-per-country statistics into the closing session slides, and we shared some of the top countries. Unsurprisingly, countries like Estonia (the host country), Germany (one of Europe's largest countries), and Sweden and Russia (nearby countries) were at the top.

For those looking for more details, here are the actual statistics for all countries, not just the top ones.

by nospam@hagander.net (Magnus Hagander) at Sat 12 Nov 2016, 16:12

20 October 2016

Ingvar Hagelund

varnish-5.0, varnish-modules-0.9.2 and hitch-1.4.1, packages for Fedora and EPEL

The Varnish Cache project recently released varnish-5.0, and Varnish Software released hitch-1.4.1. I have wrapped packages for Fedora and EPEL.

varnish-5.0 has configuration changes, so the updated package has been pushed to rawhide, but will not replace the ones currently in EPEL nor in Fedora stable. Those who need varnish-5.0 for EPEL may use my COPR repos at https://copr.fedorainfracloud.org/coprs/ingvar/varnish50/. They include the varnish-5.0 and matching varnish-modules packages, and are compatible with EPEL 5, 6, and 7.
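On Fedora, enabling the COPR repo and installing from it is a two-liner, assuming dnf with the copr plugin from dnf-plugins-core (on EPEL, drop the repo file from the COPR page into /etc/yum.repos.d/ instead):

~# dnf copr enable ingvar/varnish50
~# dnf install varnish varnish-modules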

hitch-1.4.1 is configuration-file compatible with earlier releases, so packages for Fedora and EPEL are available in their respective repos, or will be once they trickle down to stable.

As always, feedback is warmly welcome. Please report via Red Hat’s Bugzilla or, while the packages are cooking in testing, Fedora’s Package Update System.

Varnish Cache is a powerful and feature rich front side web cache. It is also very fast, and that is, fast as in powered by The Dark Side of the Force. On steroids. And it is Free Software.

Redpill Linpro is the market leader for professional Open Source and Free Software solutions in the Nordics, though we have customers from all over. For professional managed services, all the way from small web apps, to massive IPv4/IPv6 multi data center media hosting, and everything through container solutions, in-house, cloud, and data center, contact us at www.redpill-linpro.com.

by ingvar at Thu 20 Oct 2016, 10:34

16 October 2016

Tech Area ECM blogging about Alfresco

Alfresco System Messages

In one of our custom Alfresco implementations, we are making releases quite often due to somewhat heavy development. The customer used the site notice dashlet to inform users of upcoming downtimes, but since there are a lot of sites with a lot of different users, this was a quite cumbersome job to do before each release.

To aid the administrators, we developed the alfresco-system-messages addon. It's implemented as a datalist, using regular Share forms when setting up an upcoming message. However, since this is something used system-wide, we decided to implement it as a Share Admin Console component, placing a datalistcontainer in the regular data dictionary of the Alfresco installation. Messages are time-based and have different colours depending on priority.

(Screenshots: the system message as displayed across all pages, and the Admin Console configuration component.)

This addon can be found at https://github.com/Redpill-Linpro/alfresco-systemmessages

by billerby at Sun 16 Oct 2016, 13:20

02 September 2016

Ingvar Hagelund

IPV6: clatd, a component of 464XLAT, for Fedora and EPEL

The World is running out of IPv4 addresses, but luckily, we have IPv6 here now, and running the whole data center on IPv6 only is not just happening, it’s becoming the standard. But what if you have an app, a daemon, or a container that actually needs IPv4 connectivity? Then you may use 464XLAT to provide an IPv4 tunnel through your IPv6 only infrastructure. clatd is one component in 464XLAT.

clatd is a CLAT / SIIT-DC Edge Relay implementation for Linux. From the github wash label:

clatd implements the CLAT component of the 464XLAT network architecture specified in RFC 6877. It allows an IPv6-only host to have IPv4 connectivity that is translated to IPv6 before being routed to an upstream PLAT (which is typically a Stateful NAT64 operated by the ISP) and there translated back to IPv4 before being routed to the IPv4 internet. This is especially useful when local applications on the host requires actual IPv4 connectivity or cannot make use of DNS64 (…) clatd may also be used to implement an SIIT-DC Edge Relay as described in RFC 7756.

Note that clatd relies on Tayga for the actual translation of packets between IPv4 and IPv6.

Yesterday, I pushed clatd to fedora testing and epel testing. Please test and report feedback via Bugzilla.
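To pull it in while it is still cooking in the testing repos, something like this should do (these are the standard testing repo names, nothing clatd-specific):

# Fedora
~# dnf --enablerepo=updates-testing install clatd
# EPEL 7
~# yum --enablerepo=epel-testing install clatd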

For more information on clatd, see the documentation included in the package, or the clatd github home. For more info on Tayga, visit http://www.litech.org/tayga/.

For general information about the process of transitioning to the bright future of IPv6, see https://en.wikipedia.org/wiki/IPv6_transition_mechanism

Redpill Linpro is the market leader for professional Open Source and Free Software solutions in the Nordics, though we have customers from all over. For professional managed services, all the way from small web apps, to massive IPv4/IPv6 multi data center media hosting, and everything through container solutions, in-house, cloud, and data center, contact us at www.redpill-linpro.com.

by ingvar at Fri 02 Sep 2016, 11:37

01 September 2016

Redpill Linpro Techblog

IPV6: clatd, a component of 464XLAT, packages for Fedora and EPEL

The World is running out of IPv4 addresses, but luckily, we have IPv6 here now, and running the whole data center on IPv6 only is not just happening, it’s becoming the standard. But what if you have an app, a daemon, or a container that actually needs IPv4 connectivity? Then you may use 464XLAT to provide an IPv4 tunnel through your IPv6 only infrastructure. clatd is one component in 464XLAT.

...

Thu 01 Sep 2016, 22:00

16 August 2016

Redpill Linpro Techblog

Using systemd-networkd to work your net

On a laptop, per-distribution network tools like ifupdown, network-scripts and netcfg are a bit limiting. NetworkManager is a reasonable solution to roaming and using multiple networks, but for those of us who don’t run environments like GNOME, it’s a little opaque, even now that it has nmcli.

Systemd includes a ...

Tue 16 Aug 2016, 22:00

15 August 2016

Redpill Linpro Techblog

LDAP and password encryption strength

Given the focus on security breaches leaking account information the last few years, we have taken a fresh look at how secure our LDAP passwords really are, and if we can let OpenLDAP use a modern hash algorithm.

...

Mon 15 Aug 2016, 22:00

11 August 2016

Redpill Linpro Techblog

Encrypted Btrfs for Lazy Road Warriors' laptops

Why Btrfs?

Btrfs is full of new features to take advantage of, such as copy-on-write, storage pools, checksums, support for 16 exabyte filesystems, online grow and shrink, and space-efficient live snapshots. So, if you are used to managing storage with LVM and RAID, Btrfs can replace ...

Thu 11 Aug 2016, 22:00

10 August 2016

Redpill Linpro Techblog

varnish-4.1.3 and varnish-modules-0.9.1 for fedora and epel

The Varnish Cache project recently released varnish-4.1.3 and varnish-modules-0.9.1. Of course, we want updated rpms for Fedora and EPEL.

...

Wed 10 Aug 2016, 22:00

Knut Ingvald Dietzel

Encrypted Btrfs for Lazy Road Warriors' laptops

Why Btrfs?

Btrfs is full of new features to take advantage of, such as copy-on-write, storage pools, checksums, support for 16 exabyte filesystems, online grow and shrink, and space-efficient live snapshots. So, if you are used to managing storage with LVM and RAID, Btrfs can replace these technologies.

The best way to get familiar with something is to start using it. This post will detail some experiences from installing a laptop with Debian Jessie with Btrfs and swap on encrypted volumes.

The old way

Before switching to Btrfs, one would typically put /boot on the first primary partition and designate the next partition to an encrypted volume, which in turn was used for LVM that we would chuck everything else into. For a road warrior with potentially sensitive data on disk, full disk encryption is a good thing, and as the LUKS encryption is at the partition level, one only has to punch in the passphrase once during boot.

The Btrfs way

When implementing Btrfs, one would like to avoid LVM and having to enter the passphrase multiple times. Achieving this with an unencrypted /boot and separate encrypted partitions for swap and Btrfs triggers subtle changes in the partitioning and the tools involved during boot.

One way is to partition with /boot on the first primary, then two encrypted volumes – one for swap and one for / with Btrfs – and during initialization of the encrypted volumes, use the same passphrase for both.

After booting into your newly installed system, install keyutils:

~# apt-get install keyutils

and add the keyscript=decrypt_keyctl option to both of the encrypted volumes listed in /etc/crypttab. Then issue:

 ~# update-initramfs -u -k all

to update your initramfs to include keyutils. Then reboot and check that the entered passphrase is cached and used to unlock both of the encrypted volumes.
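For reference, a hypothetical /etc/crypttab with this in place could look something like the following. The UUIDs here are made up; the important details are the keyscript option on both volumes, and the identical key name in the third field, which decrypt_keyctl uses to cache the passphrase and reuse it for the second volume:

# <target name> <source device>          <key file>  <options>
sdX5_crypt      UUID=1111-swap-example   crypt_key   luks,keyscript=decrypt_keyctl
sdX6_crypt      UUID=2222-btrfs-example  crypt_key   luks,keyscript=decrypt_keyctl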

Then what?

Many Linux distributions will install to the default subvolume. This may be undesirable as snapshots and subvolumes will be created inside the root filesystem. A possibly better layout would be to have a snapshots directory and a rootfs subvolume for the root filesystem.

So, we'll create the layout for the new default subvolume:

~# btrfs subvolume snapshot / /rootfs
~# mkdir /snapshots

As the contents under /rootfs will become the new root filesystem, do not make any changes to the current root filesystem until you have rebooted.

Edit /rootfs/etc/fstab so that the new rootfs subvolume will be used on subsequent reboots. I.e. you will need to include subvol=rootfs under options, à la:

# <file system>        <mount point>  <type>  <options>               <dump>  <pass>
/dev/mapper/sdXX_crypt /              btrfs   defaults,subvol=rootfs  0       1

In order to boot into the right subvolume one needs to set the default subvolume to be rootfs. E.g. find the subvolume's ID with:

~# btrfs subvolume list /
ID 262 gen 704 top level 5 path rootfs

and set it as default with:

~# btrfs subvolume set-default 262 /

Then restart to boot into your rootfs subvolume. Note that a measure of success is that the /snapshots folder should be missing. Now, delete the contents of the old root in the default subvolume.
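One way to do that last part, sketched here with example paths (double-check before hitting enter, as this removes the pre-snapshot root), is to mount the top-level subvolume temporarily and remove everything except rootfs and snapshots:

~# mount -o subvolid=5 /dev/mapper/sdXX_crypt /mnt
~# ls /mnt                                      # expect rootfs, snapshots, and the old root directories
~# rm -rf /mnt/bin /mnt/etc /mnt/usr /mnt/var   # and so on; keep rootfs and snapshots
~# umount /mnt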

To facilitate creation of new subvolumes/snapshots, make a mountpoint for the default subvolume:

~# mkdir -p /mnt/btrfs/root/

and add it to /etc/fstab:

# <file system>        <mount point>     <type>  <options>                     <dump>  <pass>
/dev/mapper/sda6_crypt /mnt/btrfs/root/  btrfs   defaults,noauto,subvolid=5    0       1

Then one can easily mount /mnt/btrfs/root/ and create snapshots/subvolumes. Yay!

Suggestions for further reading

"Stuff" that helped me in getting acquainted with Btrfs:

  • Kernel.org's Btrfs Sysadmin Guide and the articles, presentations and podcasts they have linked in.
  • Linux.com's articles, part one and two, on Btrfs Storage Pools, Subvolumes And Snapshots.

by Knut Ingvald Dietzel at Wed 10 Aug 2016, 22:00

Ingvar Hagelund

varnish-4.1.3 and varnish-modules-0.9.1 for fedora and epel

The Varnish Cache project recently released varnish-4.1.3 and varnish-modules-0.9.1. Of course, we want updated rpms for Fedora and EPEL.

While there are official packages for el6 and el7, I prefer to use my Fedora downstream package, also for EPEL. So I have pushed updates for Fedora, and updated copr builds for epel5, epel6, and epel7.

An update of the official supported bundle of varnish modules, varnish-modules-0.9.1, was also released a few weeks ago. I did recently wrap it for Fedora, and am waiting for its review in BZ #1324863. Packages for epel5, epel6, and epel7 are in copr as well.

Fedora updates for varnish-4.1.3 may be found at https://bodhi.fedoraproject.org/updates/?packages=varnish

The Copr repos for epel are here: https://copr.fedorainfracloud.org/coprs/ingvar/varnish41/

Tests and reports are very welcome.

Varnish Cache is a powerful and feature rich front side web cache. It is also very fast, and that is, fast as in powered by The Dark Side of the Force. On steroids. And it is Free Software.

Redpill Linpro is the market leader for professional Open Source and Free Software solutions in the Nordics, though we have customers from all over. For professional managed services, all the way from small web apps, to massive IPv4/IPv6 multi data center media hosting, and everything through container solutions, in-house, cloud, and data center, contact us at www.redpill-linpro.com.

by ingvar at Wed 10 Aug 2016, 12:45

07 August 2016

Redpill Linpro Techblog

Setting up Jekyll

So, management wants a microsite for blog entries ASAP, while the techs want to use tools they are used to - markdown and git. On top of that, we have limited spare time for implementing a new solution.

In the intersection of that lies Jekyll!

...

Sun 07 Aug 2016, 22:00

02 August 2016

Redpill Linpro Techblog

Welcome!

Welcome to our new techblog. This microsite will contain tech-related entries that interest the techies (and other employees) at Redpill Linpro.

We hope you enjoy the articles!

Tue 02 Aug 2016, 22:00

13 July 2016

Bjørn Ruberg

Beneficial side effects of running a honeypot

I’ve been running a honeypot for quite a while now. It started out as a pure SSH honeypot – first with Kippo, and then I migrated to Cowrie. Some time later I added more honeypot services to the unit in the form of InetSim. The InetSim software provides multiple plaintext services like HTTP, FTP, and […]

by admin at Wed 13 Jul 2016, 21:42

11 July 2016

Magnus Hagander

Locating the recovery point just before a dropped table

A common example when talking about why it's a good thing to be able to do PITR (Point In Time Recovery) is the scenario where somebody or something (an operator or a buggy application) dropped a table, and we want to recover to right before the table was dropped, to keep as much valid data as possible.

PostgreSQL comes with nice functionality to decide exactly what point to perform a recovery to, which can be specified at millisecond granularity, or at that of an individual transaction. But what if we don't know exactly when the table was dropped? (Exactly meaning down to the specific transaction, or at least the millisecond.)

One way to handle that is to "step forward" through the log one transaction at a time until the table is gone. This is obviously very time-consuming.

Assuming that DROP TABLE is not something we do very frequently in our system, we can also use the pg_xlogdump tool to help us find the exact spot to perform the recovery to, in much less time. Unfortunately, the dropping of temporary tables (implicit or explicit) is included in this, so if your application uses a lot of temporary tables this approach will not work out of the box. But for applications without them, it can save a lot of time.

Let's show an example. This assumes you have already set up the system for log archiving, you have a base backup that you have restored, and you have a log archive.

The first thing we do is try to determine the point where a DROP TABLE happened. We can do this by scanning for entries where rows have been deleted from the pg_class table, as this will always happen as part of the drop.
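As a sketch of what that scan can look like: the WAL segment names, the tablespace OID (1663, pg_default) and the database OID (16384 here) are examples, while 1259 is pg_class's OID, which in a freshly initialized cluster is also its relfilenode:

$ pg_xlogdump 000000010000000000000001 000000010000000000000004 \
    | grep -w DELETE | grep '1663/16384/1259'

The tx and lsn fields of the matching records then give us candidate transaction ids and positions to recover to just before.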

by nospam@hagander.net (Magnus Hagander) at Mon 11 Jul 2016, 13:36

22 June 2016

Jean-Marc Reymond

ActiveMQ: Message ordering


ActiveMQ being a messaging system based on queues (aka FIFOs), one would take for granted that if there is only one producer and one consumer for a given queue (and they are both single threaded), the order of the messages is preserved.
Well, not always!

Let's say I have the following configuration for my ActiveMQ client running Camel:

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

<bean id="amqConnectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
<property name="brokerURL" value="${env.activemq.broker.url}"/>
<property name="userName" value="${env.activemq.broker.username}"/>
<property name="password" value="${env.activemq.broker.password}"/>
</bean>

<bean id="pooledConnectionFactory" class="org.apache.activemq.pool.PooledConnectionFactory" init-method="start" destroy-method="stop">
<property name="maxConnections" value="12"/>
<property name="maximumActiveSessionPerConnection" value="200"/>
<property name="connectionFactory" ref="amqConnectionFactory"/>
</bean>

<bean id="jmsConfig" class="org.apache.camel.component.jms.JmsConfiguration">
<property name="connectionFactory" ref="pooledConnectionFactory"/>
<property name="transacted" value="true"/>
<property name="cacheLevelName" value="CACHE_NONE"/>
</bean>

<bean id="activemq" class="org.apache.activemq.camel.component.ActiveMQComponent">
<property name="configuration" ref="jmsConfig"/>
</bean>

</blueprint>

With this configuration, even with a prefetch of 1 and only one consumer, you risk messages being consumed out of order even if they were produced in the right order.
The culprit is CACHE_NONE, which you want to use if you are using XA transactions.
But in normal circumstances, with a local transaction manager or with the one built into the JmsConfiguration bean, it is recommended to use CACHE_CONSUMER, not only to improve performance but also to ensure proper message ordering.
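In other words, for the non-XA case the jmsConfig bean above would simply become something like this:

<bean id="jmsConfig" class="org.apache.camel.component.jms.JmsConfiguration">
<property name="connectionFactory" ref="pooledConnectionFactory"/>
<property name="transacted" value="true"/>
<!-- CACHE_CONSUMER keeps the same consumer across transactions,
     preserving message ordering (and improving performance) -->
<property name="cacheLevelName" value="CACHE_CONSUMER"/>
</bean>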

Side note regarding prefetch=1:
Even though one could expect only one message to be sent to the consumer until it gets ack-ed (which, with transacted=true, is when you are done processing it), it is still possible to have a second message assigned to that consumer in the dispatch queue; it is in effect blocked until the first message is fully processed (which can be a problem for slow consumers).
The solution (if this is really a problem) would be to use prefetch=0 for that given consumer, but this is costly since the consumer is then polling the broker!

More info here


If message ordering is a big requirement for you, you might want to look at the Camel resequencer.


Update 20160624: And now there is a Jira for this!
[ENTMQ-1783] Combination of CACHE_NONE and Transacted Affects Message Ordering - JBoss Issue Tracker


by Bambitroll (noreply@blogger.com) at Wed 22 Jun 2016, 16:25

15 June 2016

Tore Anderson

IPv6 support in the PlayStation 4

The other day, I noticed with great interest that my PlayStation 4 was using IPv6 to communicate with the Internet. I’m fairly certain that this behaviour is new, so I decided to investigate.

This is what appeared on the wire when it connected to the network:

  1 0.000000000           :: -> ff02::16     ICMPv6 110 Multicast Listener Report Message v2
  2 0.072956000           :: -> ff02::1:ffe2:19c7 ICMPv6 78 Neighbor Solicitation for fe80::2d9:d1ff:fee2:19c7
  3 0.799982000           :: -> ff02::16     ICMPv6 90 Multicast Listener Report Message v2
  4 1.600965000 fe80::2d9:d1ff:fee2:19c7 -> ff02::16     ICMPv6 90 Multicast Listener Report Message v2
  5 2.957012000 fe80::2d9:d1ff:fee2:19c7 -> ff02::2      ICMPv6 70 Router Solicitation from 00:d9:d1:e2:19:c7
  6 2.970763000 fe80::385a:20ff:fe70:f441 -> fe80::2d9:d1ff:fee2:19c7 ICMPv6 270 Router Advertisement from 3a:5a:20:70:f4:41
  7 2.971328000 fe80::2d9:d1ff:fee2:19c7 -> ff02::1:2    DHCPv6 110 Solicit XID: 0xe0e8c5 CID: 0003000100d9d1e219c7
  8 2.973796000 fe80::385a:20ff:fe70:f441 -> fe80::2d9:d1ff:fee2:19c7 DHCPv6 191 Advertise XID: 0xe0e8c5 CID: 0003000100d9d1e219c7 IAA: 2a02:fe0:c071:f00a::f1e
  9 2.974148000 fe80::2d9:d1ff:fee2:19c7 -> ff02::1:2    DHCPv6 152 Request XID: 0xe0e8c5 IAA: 2a02:fe0:c071:f00a::f1e CID: 0003000100d9d1e219c7
 10 2.977070000 fe80::385a:20ff:fe70:f441 -> fe80::2d9:d1ff:fee2:19c7 DHCPv6 223 Reply XID: 0xe0e8c5 CID: 0003000100d9d1e219c7 IAA: 2a02:fe0:c071:f00a::f1e
 11 2.977472000           :: -> ff02::1:ff00:f1e ICMPv6 78 Neighbor Solicitation for 2a02:fe0:c071:f00a::f1e
 12 3.000971000 fe80::2d9:d1ff:fee2:19c7 -> ff02::16     ICMPv6 90 Multicast Listener Report Message v2
 13 3.400970000 fe80::2d9:d1ff:fee2:19c7 -> ff02::16     ICMPv6 90 Multicast Listener Report Message v2
 14 3.977343000 fe80::2d9:d1ff:fee2:19c7 -> ff02::1:ff70:f441 ICMPv6 86 Neighbor Solicitation for fe80::385a:20ff:fe70:f441 from 00:d9:d1:e2:19:c7
 15 3.977615000 fe80::385a:20ff:fe70:f441 -> fe80::2d9:d1ff:fee2:19c7 ICMPv6 86 Neighbor Advertisement fe80::385a:20ff:fe70:f441 (rtr, sol, ovr) is at 3a:5a:20:70:f4:41
 16 3.977874000 2a02:fe0:c071:f00a::f1e -> 2a02:fe0:1:2:1:0:1:110 DNS 103 Standard query 0xc4e3  AAAA ena.net.playstation.net
 17 3.987868000 2a02:fe0:1:2:1:0:1:110 -> 2a02:fe0:c071:f00a::f1e DNS 241 Standard query response 0xc4e3  CNAME ena.net.playstation.net.edgekey.net CNAME e4963.dscg.akamaiedge.net AAAA 2a02:26f0:ac:181::1363 AAAA 2a02:26f0:ac:197::1363
 18 3.988383000 2a02:fe0:c071:f00a::f1e -> 2a02:26f0:ac:181::1363 TCP 94 62420→80 [SYN] Seq=0 Win=65535 Len=0 MSS=1440 WS=64 SACK_PERM=1 TSval=415148157 TSecr=0
 19 4.005888000 2a02:26f0:ac:181::1363 -> 2a02:fe0:c071:f00a::f1e TCP 94 80→62420 [SYN, ACK] Seq=0 Ack=1 Win=28560 Len=0 MSS=1440 SACK_PERM=1 TSval=3194590031 TSecr=415148157 WS=32
 20 4.006231000 2a02:fe0:c071:f00a::f1e -> 2a02:26f0:ac:181::1363 TCP 86 62420→80 [ACK] Seq=1 Ack=1 Win=65664 Len=0 TSval=415148175 TSecr=3194590031
 21 4.006361000 2a02:fe0:c071:f00a::f1e -> 2a02:26f0:ac:181::1363 HTTP 166 GET /netstart/ps4 HTTP/1.1
 22 4.021963000 2a02:26f0:ac:181::1363 -> 2a02:fe0:c071:f00a::f1e TCP 86 80→62420 [ACK] Seq=1 Ack=81 Win=28576 Len=0 TSval=3194590047 TSecr=415148175
 23 4.022418000 2a02:26f0:ac:181::1363 -> 2a02:fe0:c071:f00a::f1e HTTP 587 HTTP/1.1 403 Forbidden  (text/html)
 24 4.022479000 2a02:26f0:ac:181::1363 -> 2a02:fe0:c071:f00a::f1e TCP 86 80→62420 [FIN, ACK] Seq=502 Ack=81 Win=28576 Len=0 TSval=3194590048 TSecr=415148175
 25 4.022780000 2a02:fe0:c071:f00a::f1e -> 2a02:26f0:ac:181::1363 TCP 86 62420→80 [ACK] Seq=81 Ack=503 Win=65152 Len=0 TSval=415148191 TSecr=3194590048
 26 4.022849000 2a02:fe0:c071:f00a::f1e -> 2a02:26f0:ac:181::1363 TCP 86 62420→80 [FIN, ACK] Seq=81 Ack=503 Win=65664 Len=0 TSval=415148191 TSecr=3194590048
 27 4.037492000 2a02:26f0:ac:181::1363 -> 2a02:fe0:c071:f00a::f1e TCP 86 80→62420 [ACK] Seq=503 Ack=82 Win=28576 Len=0 TSval=3194590063 TSecr=415148191
 28 4.045960000 2a02:26f0:ac:181::1363 -> 2a02:fe0:c071:f00a::f1e TCP 86 [TCP Dup ACK 27#1] 80→62420 [ACK] Seq=503 Ack=82 Win=28576 Len=0 TSval=3194590071 TSecr=415148191
 29 4.046281000 2a02:fe0:c071:f00a::f1e -> 2a02:26f0:ac:181::1363 TCP 74 62420→80 [RST] Seq=82 Win=0 Len=0

There are several things I find noteworthy here:

  1. It supports DHCPv6. Since the DHCPv6 client runs in user space, this strongly indicates that it’s a deliberate move by Sony.
  2. It performs DNS requests over IPv6. A stub resolver also runs in user space, so it’s another indication that this is not accidental.
  3. It uses IPv6 to call home to the dual-stacked URL http://ena.net.playstation.net/netstart/ps4.
  4. The call home URL returns a 403 Forbidden error. However, it does so when accessed using IPv4 as well, so this might not mean much.

For the record, the call home request does not include any personal information beyond the source IP address and a URL indicating it’s a PS4. That said, the request itself is more than enough for Sony to generate useful statistics on how many PS4s with IPv6 Internet access there are out there. The following is the complete call home request made:

GET /netstart/ps4 HTTP/1.1
Connection: close
Host: ena.net.playstation.net

So far I’ve not seen it use IPv6 for anything other than what I’ve described above. An application like Netflix, which ought to use IPv6 whenever possible, does not. It would appear, therefore, that this is just small beginnings, perhaps done primarily to gather statistics. Nevertheless, I am very excited to see that Sony has begun work on implementing IPv6 support for the PS4.

Technical details

I first noticed the IPv6 capability after upgrading to system software version 3.50. I can’t rule out that it showed up in an earlier update, though, since I haven’t actively looked for it after installing earlier updates.

I tested various different network environments to figure out what exactly the PS4 supports. It would appear that Sony has done a thorough job:

  • It supports assignment of global IPv6 addresses using both SLAAC and DHCPv6 IA_NA. When using SLAAC, the Interface Identifier appears to be randomly generated. That is, the IID does not embed the PS4’s MAC address, and it changes every time the PS4 reconnects to the network.
  • It will learn IPv6 DNS servers from both the Recursive DNS Server RA Option and DHCPv6.
  • Addresses and/or DNS servers learned from DHCPv6 are preferred over those learned from ICMPv6 Router Advertisements (if any).
  • It will start a DHCPv6 client only if either the Managed or OtherConfig RA flag is set. If Managed=1, it will solicit both IA_NA and DNS configuration; otherwise, if OtherConfig=1, it will send a DHCPv6 Information-request message to obtain DNS configuration only.
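For anyone who wants to reproduce those RA-flag test cases, a minimal radvd.conf along the following lines should exercise them. This is just a sketch of the lab side, not Sony's setup; the interface name and prefixes are placeholders from the documentation ranges:

interface eth0
{
    AdvSendAdvert on;
    AdvManagedFlag on;       # M flag: hosts solicit addresses and DNS via DHCPv6
    AdvOtherConfigFlag off;  # O flag (with M=0): DHCPv6 Information-request for DNS only
    prefix 2001:db8:1::/64
    {
        AdvOnLink on;
        AdvAutonomous on;    # set to off to test the DHCPv6-only address cases
    };
    RDNSS 2001:db8::53
    {
    };
};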

I did find a couple of bugs too:

  • It would sometimes attempt to use its link-local address to communicate with the DNS server or the HTTP call-home web server, which doesn’t work. This suggests that there is a bug in the PS4’s default address selection logic, or that it failed to activate its SLAAC- or DHCPv6-assigned address. Simply re-connecting to the network would usually resolve this issue.
  • If address assignment is SLAAC-only, and the advertised prefix is off-link, no IPv6 Internet traffic is seen. In this case, the PS4 does not even start the DHCPv6 client even though OtherConfig=1. This is clearly a bug; there’s no reason why SLAAC can’t work perfectly well with off-link prefixes.

The next time I get a system software update, I’ll make sure to re-do all these tests and report any changes in a new post.

Wed 15 Jun 2016, 00:00

07 June 2016

Bjørn Ruberg

Near-realtime blacklist warnings with NetFlow, Perl and OTX

Installing IDS sensors in your network for monitoring traffic is not always feasible, for several possible reasons. Perhaps the network infrastructure is too complex, leading to blind spots. Maybe the affected network links have higher capacity than your ad hoc IDS sensor, causing packet loss on the sensor. Or your company may be organized in […]

by bjorn at Tue 07 Jun 2016, 17:43

24 May 2016

Magnus Hagander

www.postgresql.org is now https only

We've just flipped the switch on www.postgresql.org to be served on https only. This has been done for a number of reasons:

  • In response to popular request
  • Google, and possibly other search engines, have started to give higher scores to sites on https, and we were previously redirecting accesses to cleartext
  • Simplification of the site code, which now doesn't have to keep track of which pages need to be secure and which do not
  • Prevention of evil things like WiFi hotspot providers injecting ads or javascript into the pages

We have not yet enabled HTTP Strict Transport Security, but will do so in a couple of days once we have verified all functionality. We have also not enabled HTTP/2 yet; this will probably come at a future date.
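For readers wanting the same setup on their own sites, the typical nginx incantation is something along these lines (a generic sketch, not necessarily what postgresql.org runs):

server {
    listen 80;
    server_name www.example.org;
    # send all cleartext traffic to https
    return 301 https://$host$request_uri;
}

and, inside the https server block once everything is verified:

add_header Strict-Transport-Security "max-age=31536000" always;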

Please help us out with testing this, and let us know if you find something that's not working, by emailing the pgsql-www mailing list.

There are still some other postgresql.org websites that are not available over https, and we will be working on those as well over the coming weeks or months.

by nospam@hagander.net (Magnus Hagander) at Tue 24 May 2016, 20:09

23 May 2016

Tech Area ECM blogging about Alfresco

Tidying the alfresco workflow database

During an upgrade from Alfresco 4.2.5.1 to 4.2.6 for a client of ours, we identified a problematic patch script which does some refactoring of data in the Activiti tables in the database. The customer system has been running for many years, and the Activiti historic tables have grown large because Alfresco never cleans these automatically. All workflow and task data is stored indefinitely in the database, in the act_hi_* tables.

For this particular system, the act_hi_detail table contained 2.1 million records (not that many for a database); however, the nasty SQL used to refactor data in the patch does not work well for a system of this size with that many workflows. The total number of Activiti processes (active and completed) was about 21,000, and the patch script ran in our QA environment for many hours. Too many hours for our scheduled downtime, so we decided to cancel the upgrade script and find an alternative solution.

After some investigation and some trial and error, we found that deleting a workflow using the workflow service also clears up the Activiti history tables. This can be done from the UI per workflow: after a workflow has completed, you have the option to delete it from the workflow details page. This is nothing you do manually for 21k workflows, so here comes the JavaScript console to the rescue!

var ctx = Packages.org.springframework.web.context.ContextLoader.getCurrentWebApplicationContext();
var log = Packages.org.apache.log4j.Logger.getLogger("RL_CANCEL_DELETED_WORKFLOWS");
var workflowService = ctx.getBean('WorkflowService');
logger.log("Starting cancel workflow script");
log.error("Starting cancel workflow script");

var completedWorkflows = workflowService.getCompletedWorkflows();
var limit = 5000;
logger.log("Limit is: "+limit);
log.error("Limit is: "+limit);
logger.log("Number of completed workflows: "+completedWorkflows.size());
log.error("Number of completed workflows: "+completedWorkflows.size());
if (completedWorkflows) {
	for (var i=0;i<completedWorkflows.size();i++) {
		var wf = completedWorkflows.get(i);
		if (wf.isActive()) {
			logger.log("Workflow is still active: "+wf.getId());
			log.error("Workflow is still active: "+wf.getId());
		} else {
			logger.log("Deleting workflow: "+wf.getId());
			log.error("Deleting workflow: "+wf.getId());
			workflowService.deleteWorkflow(wf.getId());
		}
		if (i>=limit) {
			break;
		}
	}
}
logger.log("Finished cancel workflow script");
log.error("Finished cancel workflow script");

This script will delete all completed workflows, both old jBPM workflows and Activiti workflows. The script will most likely time out when you run it, so we added some log4j logging to it as well, to get a log trail in our alfresco.log.

Since we are doing some workflow maintenance here, we might as well delete all active old jBPM workflows too (in this case, we knew for a fact that they would never be completed).

var ctx = Packages.org.springframework.web.context.ContextLoader.getCurrentWebApplicationContext();
var log = Packages.org.apache.log4j.Logger.getLogger("RL_DELETE_JBPM_WORKFLOWS");
var workflowService = ctx.getBean('WorkflowService');

var activeWorkflows = workflowService.getActiveWorkflows();
var limit = 2000;
logger.log("Limit is: "+limit);
log.error("Limit is: "+limit);
logger.log("Number of active workflows: "+activeWorkflows.size());
log.error("Number of active workflows: "+activeWorkflows.size());
if (activeWorkflows) {
	for (var i=0;i<activeWorkflows.size();i++) {
		var wf = activeWorkflows.get(i);
		if (!wf.isActive()) {
			logger.log("Workflow is not active: "+wf.getId());
			log.error("Workflow is not active: "+wf.getId());
		} else if (wf.getId().indexOf("activiti")===0) {
			logger.log("Activiti workflow: "+wf.getId());
			log.error("Activiti workflow: "+wf.getId());
		} else if (wf.getId().indexOf("jbpm")===0) {
			logger.log("Canceling jBPM workflow: "+wf.getId());
			log.error("Canceling jBPM workflow: "+wf.getId());
			workflowService.cancelWorkflow(wf.getId());
		} else {
			logger.log("Unknown workflow type"+wf.getId());
			log.error("Unknown workflow type"+wf.getId());
		}
		if (i>=limit) {
			break;
		}
	}
}

As a result of this maintenance job, we have about 1,700 active Activiti workflows and an act_hi_detail table with about 210,000 rows (about 10% of the original count), and the patch went through in seconds.

by Marcus Svartmark at Mon 23 May 2016, 09:00

Confused JVM? Kill it!

We have a customer where we’ve recently started to get memory problems in Alfresco. Those kinds of problems can be very hard to pinpoint, but for this particular client we’re almost sure what’s causing them. Unfortunately for us, this knowledge doesn’t prevent the memory problem. The bad thing with the JVM (in this case) is that even though a memory problem has occurred, the JVM is left in a running state, although not a good running state… This particular Alfresco solution is clustered, and the memory problems eject the server from Alfresco’s Hazelcast cluster, but the load balancer still thinks the server is in the cluster, which leads to a lot of problems down the road :(

Our customer has a very good organization around Alfresco, and if the JVM in which Alfresco lives should die when a memory problem occurs, it can be restarted in no time. To achieve this, there is a JVM parameter (-XX:OnOutOfMemoryError) which can be used to execute a script when such an error occurs.

Below is how we solved this for our customer, with step-by-step instructions on how to achieve it. The server OS is Ubuntu 12.04.

  1. Install the package mailutils if not already installed:

     sudo apt-get install mailutils

  2. Create a shell script somewhere in your installation path, and make it executable (chmod +x):

     nano -w /opt/alfresco/current_version/bin/mail-and-kill.sh

  3. Paste this content into the script:

     #!/bin/bash

     # First argument  : process id
     # Second argument : server port
     # Third argument  : module (repo, solr or share)

     PROCESSID="$1"
     PORT="$2"
     MODULE="$3"
     FROM="noreply@example.com"
     TO="it-operations@example.com"
     SUBJECT="Tomcat shut down on $HOSTNAME:$PORT ($MODULE)"
     FILENAME="/tmp/mail-and-kill.txt"

     rm -f $FILENAME

     echo "Server  : $HOSTNAME" >> $FILENAME
     echo "Port    : $PORT" >> $FILENAME
     echo "Module  : $MODULE" >> $FILENAME
     echo "Message : Server got a java.lang.OutOfMemoryError and the java process was killed" >> $FILENAME

     mail -a "From: $FROM" -s "$SUBJECT" $TO < $FILENAME

     kill -9 $PROCESSID

     The script takes three parameters: the process id (the JVM process), the HTTP port of Tomcat, and the module which caused the problem (repo, solr or share).

     In order for this to work, a local mail server has to be installed. For our client we have installed postfix, which acts as a mail relay server.

  4. Add the following to the Tomcat startup parameters (for example in setenv.sh):

     JAVA_OPTS="$JAVA_OPTS -XX:OnOutOfMemoryError='/opt/alfresco/current_version/bin/mail-and-kill.sh %p 8080 repo'"

  5. Restart Alfresco, force some nasty code to kill it :) and watch how you get a mail and the JVM is killed.

by Niklas Ekman at Mon 23 May 2016, 07:36