Simulating Lag with Docker Containers
I was building a Grafana dashboard with Docker Compose to monitor a local service. I couldn't tell whether my calculations for the graph were correct because, with everything running locally in Docker Compose, the latency was nearly zero. I needed a way to proxy traffic to the monitored service while introducing some artificial lag. After doing some research I found a Linux traffic-shaping facility called Traffic Control (tc) and decided to pair it with Socat in a Docker container.
GitHub
Be sure to check out the GitHub repository, because this blog post describes what I did there. If you don't, some of this will not make much sense.
Overall Design of Solution
I created a custom .NET application that monitors Kafka Connect by querying its APIs and exposing that data through Prometheus. A Grafana dashboard then visualizes the data and alerts me if something fails. To make sure I'm getting reliable metrics under realistic latency, I use an intermediary container ("Simulated Lag") and re-route traffic to the Kafka Connect API (running on port 8083) through it. The custom .NET application no longer queries the Kafka Connect API directly; instead it queries the "Simulated Lag" container, which forwards traffic on to the Kafka Connect API, creating artificial lag along the way.
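To make the wiring concrete, here is a rough sketch using plain docker commands (the network, image, and container names are my own placeholders rather than what the repository uses; in Docker Compose the same effect comes from putting the services on a shared network). The one detail that really matters is that the lag container needs the NET_ADMIN capability so tc is allowed to change the qdisc:

# Shared network so containers can resolve each other by name.
docker network create monitoring-net

# "Simulated Lag" container sitting in front of Kafka Connect's port 8083.
# --cap-add NET_ADMIN is required for tc to add the netem delay.
docker run -d --name simulated-lag --network monitoring-net \
  --cap-add NET_ADMIN \
  -p 8083:8083 \
  simulated-lag:latest

# The .NET monitor is then pointed at simulated-lag:8083 instead of connect:8083.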
Why not use Iptables?
At first, I thought the ideal solution would be to use iptables, since that's a great way to route traffic to another destination. After wrestling with it for quite some time, I eventually realized this would not work easily. iptables operates at the kernel level and is used heavily by Docker itself to manage container traffic, so my rules kept colliding with the ones Docker manages.
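For what it's worth, the kind of rule I was experimenting with looked roughly like this (a hypothetical sketch with a made-up container IP, not something from the repository); it has to coexist with the NAT chains Docker installs itself, which is what made the approach fragile for me:

# Hypothetical DNAT rule: redirect locally generated traffic aimed at
# port 8083 to another container's IP. Docker maintains its own NAT
# chains, so hand-written rules like this are easy to get wrong.
iptables -t nat -A OUTPUT -p tcp --dport 8083 -j DNAT --to-destination 172.18.0.5:8083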
Traffic Control (TC)
Traffic Control (tc) is a Linux utility for manipulating how traffic flows through a network interface. It's great for this use case because it can add delay, drop packets, and so on, which is exactly what I needed to create artificial latency.
Commands
# add 100ms delay
tc qdisc add dev eth0 root netem delay 100ms

# add 250ms delay
tc qdisc add dev eth0 root netem delay 250ms

# add 100ms/10ms random delay
tc qdisc add dev eth0 root netem delay 100ms 10ms

# change to 100ms delay with 10ms random
tc qdisc change dev eth0 root netem delay 100ms 10ms

# delete
tc qdisc del dev eth0 root netem
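To confirm the delay actually took effect, two quick checks are handy (this assumes the qdisc was added inside the lag container and that another container on the same Docker network can reach it by name; simulated-lag is a placeholder name):

# Show the qdisc currently attached to eth0 (should list the netem delay).
tc qdisc show dev eth0

# From another container on the same network, the delay shows up
# in the round-trip time.
ping -c 5 simulated-lag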
Socat
For this particular case, I created a custom application that queries Kafka Connect's APIs. The intermediary container runs Traffic Control (tc) to create lag and also re-routes traffic using Socat, a multipurpose relay. The re-routing was easily done with a single command.
Using dig, I was able to get the IP address of the Docker container running Kafka Connect. Since everything runs in Docker Compose, "connect" is the service name that resolves to the container running Kafka Connect.

DESTINATION_IP=$(dig connect +short)
The socat command re-routes the traffic.

socat tcp-listen:8083,reuseaddr,fork tcp:$DESTINATION_IP:8083
This listens on port 8083 and forwards traffic to the Kafka Connect Docker container's IP address on port 8083 (fork handles each incoming connection in its own child process, and reuseaddr lets socat rebind the port quickly after a restart).
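Putting the pieces together, the whole "Simulated Lag" container boils down to a small startup script along these lines (a minimal sketch assuming tc, dig, and socat are installed in the image and the Kafka Connect service is reachable by the hostname connect; the exact script in the repository may differ slightly):

#!/bin/sh
set -e

# Add a 100ms egress delay on the container's interface.
tc qdisc add dev eth0 root netem delay 100ms

# Resolve the Kafka Connect container's IP from its service name.
DESTINATION_IP=$(dig connect +short)

# Relay everything arriving on 8083 to Kafka Connect on 8083.
exec socat tcp-listen:8083,reuseaddr,fork tcp:$DESTINATION_IP:8083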
Grafana
As you can see, there is a spike and the graph hovers around 200ms. While I set the delay to 100ms, the netem delay applies to all traffic leaving the lag container, so both the request forwarded to Kafka Connect and the response relayed back pick it up, which adds up to roughly 200ms per query.
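A quick way to sanity-check what the dashboard reports is to time a request through the proxy yourself (the /connectors path is a standard Kafka Connect endpoint; exposing the lag container on localhost:8083 is an assumption about the setup):

# With a 100ms egress delay applied in both directions, the total
# time should land somewhere around 200ms.
curl -o /dev/null -s -w 'total: %{time_total}s\n' http://localhost:8083/connectors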
Conclusion
I was able to create artificial latency using this approach, which made the visualizations on my Grafana dashboard far more meaningful when running locally. That allowed me to confidently build and test everything on my own machine.