leif's blog

QNAP and Torrent

I noticed a significant amount of Torrent traffic both to and from my QNAP NAS server. It turns out that if you enable the Download Station application, it starts doing all sorts of Torrent discovery on its own. I have no idea why, so for now I have simply removed the application entirely. I noticed this by running

$ tcpdump port 6889



autorun.sh for my QNAP server

I needed something like rc.local on my QNAP NAS server, since I set up various things at boot time, and changes to these setups would otherwise not persist across a reboot. Via QNAP support, I got a link to a page showing how to set up an autorun.sh script as part of the boot process. This was all splendid, except I couldn't find my device in the list. So I found this command to probe the system for which boot drive I'm on, and from there it was easy to find which partition to mount. What I ran was basically this:

[admin@Freya /]# /sbin/hal_app --get_boot_pd port_id=0
[admin@Freya /]# mount -t ext2 /dev/sdg6 /tmp/config
[admin@Freya /]# cat /tmp/config/autorun.sh
/share/CACHEDEV1_DATA/homes/admin/bin/autorun.sh &

I made it so that it simply runs an autorun.sh from my regular data volume; this way, I don't have to jump through these hoops and modify the flash image every time I want to change something. Right now my script is pretty limited, but for example, it lets me add a "search" domain to my /etc/resolv.conf. Yeah, for some reason the QNAP OS does not let you add a search domain for the resolvers if you use static IPs (with DHCP, it gets it from the DHCP server, of course).
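As a sketch, the on-disk script can stay tiny. Everything below is a hypothetical example (the domain "example.lan" in particular is made up, and my real script does a bit more):

```shell
#!/bin/sh
# Hypothetical sketch of the on-disk autorun.sh; "example.lan" is a made-up
# example domain. The QNAP static-IP network setup offers no "search" field,
# so append one at boot unless a search line is already present.
grep -q '^search ' /etc/resolv.conf || echo 'search example.lan' >> /etc/resolv.conf
```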


systemd and disk storage

Well, my battles with systemd continue... I had a box with limited disk space, and it was using over 4GB just for systemd journals. You can see the current journal usage with

$ sudo journalctl --disk-usage
Journals take up 3.9G on disk.

I've tweaked this now, with a setting in /etc/systemd/journald.conf:


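I didn't keep the exact line here, but a typical way to cap the journals (the 1G value is an assumption, chosen to match the vacuum size I used) looks like this:

```ini
# /etc/systemd/journald.conf -- hypothetical example; SystemMaxUse caps the
# total disk space the archived journals may consume.
[Journal]
SystemMaxUse=1G
```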
In addition, I ran a couple of commands from the command line:

$ sudo journalctl --verify
$ sudo journalctl --vacuum-size=1G

Jenkins slaves, Java Web Start and proxies

So we (Apache Traffic Server) have our Jenkins CI system behind a proxy, naturally. This works very well. We have a few remote slaves that use the "local" Java Web Start processes, and when they fetch the .jnlp file, the destination host and port for talking to Jenkins itself are wrong. It (of course) tries to talk to the proxy host, which doesn't work! This was fairly easy to fix: in the Node configuration for the slaves, click the advanced option and add a host:port value for "Tunnel connection through".


Optimizing Drupal7 CSS and JS

Even though Drupal has (since long ago) supported merging CSS and JS into one file each, after I upgraded from v6 to v7 I still ended up with more than one of each. It turns out Drupal has a notion of groups, and it will only merge the CSS/JS elements within each group. I did some web searches, and came up with the following:

function pixture_reloaded_js_alter(&$js) {
  // Leave admin pages and the Google search page alone.
  if (arg(0) === 'admin' || strpos($_GET['q'], 'search/google') === 0) {
    return;
  }

  uasort($js, 'drupal_sort_css_js');
  $weight = 0;

  foreach ($js as $name => $javascript) {
    $js[$name]['group'] = -100;
    $js[$name]['weight'] = ++$weight;
    $js[$name]['every_page'] = 1;
    $js[$name]['scope'] = 'footer';
  }
}

function pixture_reloaded_css_alter(&$css) {
  uasort($css, 'drupal_sort_css_js');

  $print = array();
  $weight = 0;
  foreach ($css as $name => $style) {
    $css[$name]['group'] = 0;
    $css[$name]['weight'] = ++$weight;
    $css[$name]['every_page'] = TRUE;

    // Pull print stylesheets out so they end up in their own aggregate.
    if ($css[$name]['media'] == 'print') {
      $print[$name] = $css[$name];
      unset($css[$name]);
    }
  }

  $css = array_merge($css, $print);
}

This goes into the theme's template.php file; in my case that is the Pixture Reloaded theme. I don't know much about Drupal or PHP, so I don't know what this might break. But it accomplishes three things:

  1. Merge all CSS into one single CSS file.
  2. Merge all JS into one single JS file.
  3. Move the JS to the "footer" of the page (this is important for improved page rendering, but could potentially break some sites, I'd imagine).


Forward proxy over HTTPS

Most clients support what we call Forward Proxying: you explicitly tell the client which server (and port) to use as a proxy. This has traditionally been done over HTTP, with the addition of the CONNECT method for HTTPS requests. We are now starting to see some clients supporting Forward Proxying over HTTPS, and you might wonder why. Well, a few reasons include:

  • Even with CONNECT there can be some leakage of information. The CONNECT request includes the destination server and port, in clear text.
  • Authentication to the proxy.
  • Overall, we're transitioning away from HTTP.
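To illustrate the first point: with a plain-HTTP proxy, the CONNECT request itself travels unencrypted, so any on-path observer can read the destination (the hostname here is just an example):

```
CONNECT www.example.com:443 HTTP/1.1
Host: www.example.com:443
```

Wrapping the client-to-proxy leg in TLS hides even this from the network.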

I saw this tweet from Daniel Stenberg, looking for volunteers to implement support for this in curl. I don't know if he's got any takers yet :). Firefox and Chrome are both working on this feature, with Chrome already having the basics available. Since I work on a proxy server (Apache Traffic Server), I took the opportunity to test it with the latest Chrome. Lo and behold, it simply worked right out of the box! I started Chrome with this (OSX) command:

% /Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome --proxy-server=https://localhost:443


