Thursday, September 17th, 2015
  • Part 1 – Introduction – Setting up Simple Queues (This post)
  • Part 2 – Reliably Identifying traffic – Setting up Mangle Rules (Coming Soon TM)
  • Part 3 – Priorities and Limits – Setting up Queue Trees (Coming Soon TM)
  • Part 4 – Monitoring Usage – Redefining Queues – Limiting Abusive Devices (Coming Soon TM)
  • Part 5 – ??? Profit ???


The first problem one usually comes across after being tasked with improving an Internet connection is that the connection is overutilised. Typically nobody knows why, who, or what is causing the problem – except of course everyone blames the ISP. Sometimes it is the ISP – but typically you can’t prove that without having an alternative connection immediately available. I currently manage or help manage four “sites/premises” that use QoS to manage their Internet connectivity. One is my workplace, two are home connections, and the last one varies – usually just a home connection, but for a weekend every few months it becomes a 140-man (and growing) LAN. Fun. :)

MikroTik and RouterOS

MikroTik’s RouterOS is very powerful in the right hands. Many other routers support QoS, but not with the fine-grained control MikroTik provides. Alternatively you could use other Linux-based router OSes, such as DD-WRT, Smoothwall, Untangle, and so forth. Most of these typically require a spare server lying about or a compatible hardware router. MikroTik sells RouterBOARDs with RouterOS built in – and they are relatively inexpensive.

My experience with routers is primarily with Cisco and MikroTik – and my experience with QoS is primarily with Allot’s NetEnforcer/NetXplorer systems and MikroTik. The most popular MikroTik devices in my experience (other than their dedicated long-range wireless devices) have been their RB750 (the newer version is named “hEX”) and RB950-based boards. They have many other models available, all relatively inexpensive. In historical comparison with Cisco’s premium devices, I’ve tended to describe MikroTik’s devices as “90% the features at 10% the cost”. As this guide is aimed primarily at SME/home use, inexpensive makes more sense. If you’re looking at getting a MikroTik device, note that MikroTik routers do not typically include a DSL modem, so your existing equipment is usually still necessary. Note also that this is not a tutorial on setting up a MikroTik device from scratch – there are plenty of guides available online for that already.

Theory into practice – first steps

To set up QoS correctly, you need to have an idea of a policy that takes into account the following:

  • The overall connection speed
  • How many users/devices will be using the connection
  • The users/devices/services/protocols that should be prioritised for latency and/or throughput

To achieve the above in my examples, I will assume the following:

  • The MikroTik is set up with the default network configuration, where the local network is 192.168.88.0/24 and the Internet connection is provided via PPPoE.
  • The connection speed is 10/2Mbps (10 Mbps download speed; 2 Mbps upload speed)
  • There will be 5 users with as many as 15 devices (multiple computers/tablets/mobile phones/WiFi etc)
  • Typical downloads require high priority for throughput but low priority for latency
  • Gaming/Skype/administrative protocols require high priority for both latency and throughput
  • No users are to be prioritised over others

The first and probably quickest step is to set up what RouterOS refers to as a Simple Queue.

I’ve made a short script that I have saved on my MikroTik devices to set up the simple queues. It is as follows:

# create one simple queue per possible host in the default 192.168.88.0/24 subnet
:for x from=1 to=254 do={
 /queue simple add name="internet-usage-$x" dst="pppoe" max-limit=1900k/9500k target="192.168.88.$x"
}

What the above does is limit the maximum speed any individual device can use to “1900k” (1.9 Mbps) upload and “9500k” (9.5 Mbps) download.


  • The reason the max limits are set at 95% of the line’s maximum speed is that this guarantees no single device can fully starve the connection, negatively affecting the other users. With a larger userbase I would tighten this limit further. For example, with 100 users on a 20 Mbps service I might set this limit to 15 Mbps or even as little as 1 Mbps (see the tightened example after this list). This is entirely dependent on how “abusive” the users are and, as you figure out where and how much abuse occurs, you can adjust it appropriately.
  • The prefix “internet-usage” in the name parameter can be customised. Typically I set these to refer to the premises name. For example, with premises named “alpha” and “beta”, I will typically put “internet-alpha” and “internet-beta”. This helps with instinctively differentiating between sites.
  • The dst parameter has “pppoe” in the example. This should be substituted with the name of the interface that provides the Internet connection.
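
Putting those customisations together, a tightened variant of the same loop might look like the following. This is just a sketch – it assumes a premises named “alpha”, a PPPoE interface named “pppoe-out1”, and a 20/2 Mbps line where each device is capped at 15 Mbps download; substitute your own names, interface, and limits.

:for x from=1 to=254 do={
 /queue simple add name="internet-alpha-$x" dst="pppoe-out1" max-limit=1900k/15M target="192.168.88.$x"
}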

Ensure you customise the script to be appropriate to your configuration. Save the script to the MikroTik and run it – or paste it directly into the MikroTik’s terminal to execute it.
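
Once the script has run, you can confirm the queues were created from the MikroTik terminal – for example, assuming the “internet-usage” prefix from the script above:

/queue simple print where name~"internet-usage"

This should list all 254 queues along with their targets and max limits.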

In my next post I will go over setting up what RouterOS refers to as Mangle rules. These rules serve to identify/classify the network traffic in order to make finer-grained QoS possible.

Thursday, September 17th, 2015

Privacy, Time, Money

I don’t like debit orders. I’ve never liked the idea that another entity can, at will, take almost any amount of my money (well … whatever’s available). A colleague pointed out that the issue with MTN would have been avoided had I been using a debit order. Maybe the “convenience” factor isn’t such a bad thing.

I suppose the ultimate question here is whether you want the convenience and can trust institutions (in this case with your money) – or whether you can’t trust them and are willing to forgo that convenience. In my case, even though I still question the convenience, I learned the hard way with MTN that it can be doubly inconvenient to have your connected world reduced to “remote island” status. Almost everyone today goes with the convenience factor.


On the other hand, quite a long time ago now, I had a dispute with Planet Fitness where convenience was a double-edged sword. I reported their business practice to the Consumer Complaints Commission (since re-organised as the National Consumer Commission) and never got feedback from them. The gist of the issue is that Planet Fitness’s sales agent lied to me and a friend in order to get more commission/money out of my pocket.

I’m a Discovery Vitality member, which gives many benefits, including reduced rates on premium brands – mostly health-related of course, as Discovery is a Medical Aid/Health Insurance provider. To put it simply, Discovery is awesome. Vitality’s benefits cover gym memberships, which include Planet Fitness. You still have to pay something to Discovery for the gym membership – a small token of sorts. But, after all, they WANT me to be healthy, so they don’t mind footing the bulk of the bill. Apparently, though, this means Planet Fitness’s sales agents don’t get their commission!

So what does this result in? The result is that PF’s sales agent gave me an inflated figure for a “Vitality-based” membership. He lied. He then had me sign on the dotted line at an inflated price for a “regular” membership (yes, it was actually more than even a regular membership would have cost), which ended up costing between four and five times as much as the Vitality-based membership.


Some time in 2011 I finally wised up to the costs I was supposed to be paying. Discovery, I was sure, wouldn’t be too happy about this fiasco. I spoke to the Manager at the gym and was assured that the entire contract would be scrapped. I’m not one for violence … unless it’s for sport … in an Octagon … but after my 5th visit to the Manager to ask why the debit orders were still happening, he told me he was surprised I hadn’t brought weapons with me for the visit. A few visits later, the Manager had actually left Planet Fitness, and he explained to me that the “contract” was between myself and Head Office and that the local gym, apparently a franchise-style operation, had little to no say about whether or not it could be cancelled. If Head Office said no, tough luck.

By this point I’d lost it. I had my bank put a stop to the debit orders. It was a huge schlep: I had to contact the bank every month because the debit order descriptions would change ever so slightly, and it also cost me a little every couple of months to “reinstate” the blocking service. I can’t help but think the banking system supports regular expressions but the staff don’t necessarily know how to use them.

Technically I’m still waiting on the CCC to get back to me (it never happened – and of course they were re-organised as mentioned above, so the case probably fell through the cracks). By that point PF also wanted to blacklist me for not paying!

The Unexpected Hero

An offhand mention of the issue to Discovery (I think I called them about a dentist visit) resulted in a callback from one of Discovery’s agents. They asked me to describe the problem, in detail and in writing, to better explain from my perspective what had really happened. I obliged. It turns out I was right about them not being “too happy” about it. In fact they really didn’t like it. About three weeks later, Planet Fitness refunded me in FULL for all monies that had ever been paid to them.

Discovery is Awesome. :)

Sunday, August 4th, 2013

I had a power outage affect my server’s large md RAID array. Rather than let the server as a whole be down while waiting for it to complete an fsck, I had it boot without the large array so I could run the fsck manually.

However, when running it manually I realised I had no way of knowing how far along it was or how long it would take to complete. This is especially problematic with such a large array. With a little searching I found the tip of adding the -C parameter when calling fsck. I couldn’t find this in the documentation, however: fsck --help showed no such option.

The option turns out to be ext4-specific, and it shows a perfectly functional progress bar with a percentage indicator. To find it documented, instead of “fsck --help” or “man fsck”, you have to look at “fsck.ext4 --help” or “man fsck.ext4”. :)
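
For reference, the manual check looked roughly like this – the device name is just an example, so substitute your own array:

fsck.ext4 -f -C 0 /dev/md0

The -C 0 part tells fsck.ext4 (e2fsck) to draw a progress bar on the terminal, while -f forces a full check even if the filesystem is marked clean.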

Sunday, August 4th, 2013


Much has changed since I last mentioned my personal server – it has grown by leaps and bounds (it now has a 7TB md RAID6) and it has recently been rebuilt with Ubuntu Server.

Arch was never a mistake. Arch Linux had already taught me so much about Linux (and will continue to do so on my other desktop). But Arch definitely requires more time and attention than I would like to spend on a server. Ideally I’d prefer to be able to forget about the server for a while until a reminder email says “um … there’s a couple updates you should look at, buddy.”

Space isn’t free – and neither is space

The opportunity to migrate to Ubuntu arose because I had run out of SATA ports, the ports required to connect hard drives to the rest of the computer – that 7TB RAID array uses a lot of ports! I had even given away my very old 200GB hard disk as it took up one of those ports, warning the recipient that the disk’s SMART monitoring indicated it was unreliable. As a temporary workaround to the lack of SATA ports, I had migrated the server’s OS to a set of four USB sticks in an md RAID1. Crazy. I know. I wasn’t too happy about the speed. I decided to go out and buy a new, reliable hard drive and a SATA expansion card to go with it.

The server’s primary Arch partition was using about 7GB of disk. A big chunk of that was a swap file, cached data, and other miscellaneous or unnecessary files. Overall the actual size of the OS, including the /home folder, was only about 2GB. This prompted me to look into a super-fast SSD, thinking perhaps a smaller one might not be so expensive. It turned out that the cheapest non-SSD drive I could find actually cost more than one of these relatively small SSDs. Yay for me. :)

Choice? Woah?!

In choosing the OS, I’d already decided it wouldn’t be Arch. Out of all the other popular distributions, I’m most familiar with Ubuntu and CentOS. Fedora was also a possibility – but I hadn’t yet seriously considered it for a server. Ubuntu won the round.

The next decision I had to make didn’t occur to me until Ubiquity (Ubuntu’s installation wizard) asked it of me: How to set up the partitions.

I was new to using SSDs in Linux, though I’m well aware of the pitfalls of not using them correctly – mostly the risk of poor longevity if they’re misused.

I didn’t want to use a dedicated swap partition. I plan on upgrading the server’s motherboard/CPU/memory in the not-too-distant future, so I decided to put swap into a swap file on the existing md RAID. The swap won’t be particularly fast, but its only purpose is to cover that rare occasion when something’s gone wrong and memory isn’t available.
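
Setting that up is only a few commands – a rough sketch, assuming the RAID array is mounted at /mnt/raid (your mount point and swap size will differ):

dd if=/dev/zero of=/mnt/raid/swapfile bs=1M count=4096
chmod 600 /mnt/raid/swapfile
mkswap /mnt/raid/swapfile
swapon /mnt/raid/swapfile

Adding a matching line to /etc/fstab makes the swap file persistent across reboots.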

This left me to give the root path the full 60GB of an Intel 330 SSD. I considered separating /home, but it just seemed a little pointless given how little space was used in the past. I first set up the partition with LVM – something I’ve recently been doing whenever I set up a Linux box (really, there’s no excuse not to use LVM). When it got to the part where I would configure the filesystem, I clicked the drop-down and instinctively selected ext4. Then I noticed btrfs in the same list. Hang on!!

But a what?

Btrfs (“butter-eff-ess”, “better-eff-ess”, “bee-tree-eff-ess”, or whatever you fancy on the day) is a relatively new filesystem developed to bring Linux’s filesystem capabilities back on track with current filesystem tech. The existing king-of-the-hill filesystem, ext (currently at version ext4), is pretty good – but it is limited, stuck in an old paradigm (think of a brand-new F-22 Raptor vs. an F-4 Phantom with a half-jested attempt at an equivalency upgrade), and is unlikely to be able to compete for very long with newer enterprise filesystems such as Oracle’s ZFS. Btrfs still has a long way to go and is still considered experimental (depending on who you ask and what features you need). Many consider it to be stable for basic use – but nobody is going to make any guarantees. And, of course, everyone is saying to make and test backups!


The most fundamental difference between ext and btrfs is that btrfs is a “CoW” or “Copy on Write” filesystem. This means that data is never actually deliberately overwritten by the filesystem’s internals. If you write a change to a file, btrfs will write your changes to a new location on physical media and will update the internal pointers to refer to the new location. Btrfs goes a step further in that those internal pointers (referred to as metadata) are also CoW. Older versions of ext would have simply overwritten the data. Ext4 would use a Journal to ensure that corruption won’t occur should the AC plug be yanked out at the most inopportune moment. The journal results in a similar number of steps required to update data. With an SSD, the underlying hardware operates a similar CoW process no matter what filesystem you’re using. This is because SSD drives cannot actually overwrite data – they have to copy the data (with your changes) to a new location and then erase the old block entirely. An optimisation in this area is that an SSD might not even erase the old block but rather simply make a note to erase the block at a later time when things aren’t so busy. The end result is that SSD drives fit very well with a CoW filesystem and don’t perform as well with non-CoW filesystems.

To make matters interesting, CoW in the filesystem easily goes hand in hand with a feature called deduplication. This allows two (or more) identical blocks of data to be stored using only a single copy, saving space. With CoW, if a deduplicated file is modified, the separate twin won’t be affected as the modified file’s data will have been written to a different physical block.
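
You can see this block-sharing behaviour directly with reflink copies, which btrfs supports – for example (file names are illustrative):

cp --reflink=always bigfile.iso bigfile-copy.iso

The copy appears instantly and takes up no extra space until one of the two files is modified, at which point only the changed blocks diverge.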

CoW in turn makes snapshotting relatively easy to implement. When a snapshot is made the system merely records the new snapshot as being a duplication of all data and metadata within the volume. With CoW, when changes are made, the snapshot’s data stays intact, and a consistent view of the filesystem’s status at the time the snapshot was made can be maintained.
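
In practice a snapshot is a one-liner – a quick sketch, assuming the root filesystem is btrfs and a /snapshots directory exists on it:

btrfs subvolume snapshot / /snapshots/root-before-upgrade

The snapshot appears almost instantly because, thanks to CoW, no data is copied – only new metadata pointing at the existing blocks.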

A new friend

With the above in mind, especially as Ubuntu has made btrfs available as an install-time option, I figured it would be a good time to dive into btrfs and explore a little. :)
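
A couple of commands that are handy for poking around once a btrfs volume is up (shown here against the root filesystem):

btrfs filesystem show
btrfs filesystem df /

The first lists btrfs volumes and their devices; the second shows how space is allocated between data and metadata.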

Part 2 coming soon …

Monday, October 29th, 2012

It appears that, in their infinite wisdom, Google has a security feature that can block an application from accessing or using your Google account. I can see how this might be a problem for Google’s users, in particular their GTalk and Gmail users. In my case it was Pidgin having an issue with the Jabber service (which is technically part of GTalk). I found the solution after a little digging. I was surprised at how old the issue was and how long this feature has existed!

To unlock the account and get your application online, use Google’s Captcha page here.