Bandwidth limiting with tc

In Linux, the tc binary is used to manipulate and control traffic. Its syntax can be quite confusing, so I suggest reading a manual on tc which clearly explains its modes and possibilities:

http://tldp.org/HOWTO/Traffic-Control-HOWTO/intro.html 

After reading it you should be able to see the difference between pfifo_fast, SFQ, HTB and TBF, which are some of the queueing concepts used when limiting bandwidth.

To see which queueing discipline (qdisc) your interface is using, type:

# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
    link/ether 08:00:27:b5:a9:9c brd ff:ff:ff:ff:ff:ff

So most of the time you will see that your interface works in pfifo_fast mode. Now imagine you have a process which is sending a huge load of packets and the receiving server is not able to process them, so it simply drops the packets once its receive buffer is full. To see the status of the buffer, use netstat:
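
For example, a quick look at the UDP statistics (the exact counter wording varies a bit between netstat versions; look for the receive buffer errors line in the Udp section):

# netstat -su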

If the "receive buffer errors" value is greater than 0 you probably have a problem with the buffer. You can enlarge the receive buffer in the kernel with the net.core.rmem_default and net.core.rmem_max values. But if that is still not enough, the best option is to simply limit the traffic.
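
A minimal sketch of raising those values (26214400 bytes, i.e. 25 MB, is just an illustrative figure, not a recommendation):

# sysctl -w net.core.rmem_default=26214400
# sysctl -w net.core.rmem_max=26214400

To make the change survive a reboot, put the same keys into /etc/sysctl.conf.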

To limit the traffic on an interface as a whole you simply use:

tc qdisc add dev enp1s0 root tbf rate 160Mbit burst 40Mbit latency 900s

root – apply it in the egress direction (for ingress you would simply use ingress)
tbf – use the Token Bucket Filter (TBF) qdisc
rate – the allowed bandwidth
burst – the size of the bucket; this much traffic can be sent at full speed before the rate limit kicks in
latency – the amount of time packets are held in the bucket before being dropped if they could not be sent out
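
If you later want to remove the limit, simply delete the root qdisc again:

tc qdisc del dev enp1s0 root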

Now when you start generating UDP packets and run iptraf-ng on the other side, you will see that after applying this rule your bandwidth is successfully limited to this value.
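
One possible way to generate such a UDP load (assuming iperf3 is available on both machines; the address and bandwidth are placeholders):

iperf3 -s                            # on the receiving server
iperf3 -u -b 500M -c <server-ip>     # on the sender: UDP, 500 Mbit/s target rate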

But sometimes you want to limit only certain packets and not the whole interface:

tc qdisc add dev enp0s8 root handle 1:0 htb
tc class add dev enp0s8 parent 1: classid 1:10 htb rate 1Mbit ceil 1Mbit prio 0
tc filter add dev enp0s8 parent 1: prio 0 protocol ip handle 11 fw flowid 1:10

The first command defines a new qdisc (tc qdisc add dev enp0s8 root handle 1:0 htb); when defining a new qdisc the minor number (the one after the colon) must be zero. The second command (tc class add dev enp0s8 parent 1: classid 1:10 htb rate 1Mbit ceil 1Mbit prio 0) defines a new class for interface enp0s8, attaches it to the parent qdisc defined before, assigns it a new classid (the minor number must now be different from 0, so in our case I have chosen 10) and sets the minimum (rate) and maximum (ceil) bandwidth for the class. The last command (tc filter add dev enp0s8 parent 1: prio 0 protocol ip handle 11 fw flowid 1:10) creates a filter rule that sends packets carrying the iptables mark 11 into class 1:10.
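
To verify the class and filter you have just created:

tc class show dev enp0s8
tc filter show dev enp0s8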

To list the qdisc statistics:

tc -s qdisc ls dev enp0s8

Now you have to mark the flow with iptables so that it matches the filter defined before:

iptables -t mangle -A OUTPUT -o enp0s8 -p udp --dport 333 -j MARK --set-mark 11

Check that the rule is matching and you will see that the bandwidth is limited to 1Mbit.
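
One way to check whether the rule is matching is to watch its packet counters:

iptables -t mangle -L OUTPUT -v -n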

tc has a lot of possibilities (you can add latency, simulate packet loss, build a round-robin scheduler, etc.); this is only a brief overview of how it works and what you can achieve with it.
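
For instance, the latency and loss simulation mentioned above is handled by the netem qdisc; a minimal sketch (the 100ms delay and 1% loss are arbitrary figures):

tc qdisc add dev enp0s8 root netem delay 100ms loss 1%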