
profile_qdisc_latency

The profile_qdisc_latency gadget measures how long packets spend queued in qdiscs, i.e. the latency added by the network scheduler before packets are dequeued. It records latencies while it runs and, when stopped, prints a histogram of their distribution.

The histogram shows the number of packets enqueued to qdiscs (the count column) whose latency falls in the range interval-start -> interval-end (the µs column). By default, latency is measured in microseconds; if the --ms flag is passed, it is shown in milliseconds.
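The bucket boundaries are powers of two, so each row covers twice the range of the previous one. As an illustration (not part of the gadget), a small shell function can compute which row a given latency value falls into:

```shell
# Illustrative helper (not part of the gadget): find the power-of-two
# histogram bucket that a latency value, in microseconds, falls into.
bucket() {
  us=$1
  lo=0
  hi=1
  # Double the upper bound until it exceeds the latency value.
  while [ "$us" -ge "$hi" ]; do
    lo=$hi
    hi=$((hi * 2))
  done
  echo "$lo -> $hi"
}

bucket 100    # a 100µs latency lands in the "64 -> 128" row
```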

This guide uses the netem qdisc to emulate delay when sending packets. It is configured with the tc program from the iproute2 package.

Getting started

Running the gadget:

$ kubectl gadget run ghcr.io/inspektor-gadget/gadget/profile_qdisc_latency:latest [flags]

Guide

Run the gadget in a terminal:

$ kubectl gadget run profile_qdisc_latency:latest --node minikube-docker

It will start to display the qdisc latency distribution as follows:

latency
     µs                 : count    distribution
         0 -> 1         : 2        |****************************************|
         1 -> 2         : 0        |                                        |
         2 -> 4         : 0        |                                        |
         4 -> 8         : 0        |                                        |
         8 -> 16        : 0        |                                        |
        16 -> 32        : 0        |                                        |
        32 -> 64        : 0        |                                        |
        64 -> 128       : 0        |                                        |
       128 -> 256       : 0        |                                        |
       256 -> 512       : 0        |                                        |
       512 -> 1024      : 0        |                                        |
      1024 -> 2048      : 0        |                                        |
      2048 -> 4096      : 0        |                                        |
      4096 -> 8192      : 0        |                                        |
      8192 -> 16384     : 0        |                                        |
     16384 -> 32768     : 0        |                                        |
     32768 -> 65536     : 0        |                                        |
     65536 -> 131072    : 0        |                                        |
    131072 -> 262144    : 0        |                                        |
    262144 -> 524288    : 0        |                                        |
    524288 -> 1048576   : 0        |                                        |
   1048576 -> 2097152   : 0        |                                        |
   2097152 -> 4194304   : 0        |                                        |
   4194304 -> 8388608   : 0        |                                        |
   8388608 -> 16777216  : 0        |                                        |
  16777216 -> 33554432  : 0        |                                        |
  33554432 -> 67108864  : 0        |                                        |

Now, to introduce additional latency, let's add a netem qdisc with a fixed delay and some jitter.

# Start by creating our testing namespace
$ kubectl create ns qdisc-latency-test

# Run a pod in which we will emulate network latency
$ kubectl run -n qdisc-latency-test --rm -it netem-test \
--image=alpine --restart=Never \
--privileged

# Inside the container, run the following commands
$ apk update
$ apk add iproute2
# This will introduce a latency of 100ms with 100ms jitter
$ tc qdisc add dev eth0 root netem delay 100ms 100ms

# Now let's ping some remote host
$ ping google.com
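In the netem command above, the first value is the base delay and the second is the jitter: each packet is delayed by the base value plus a random offset within ±jitter. A quick sketch of that arithmetic, using the values from the command (an illustration only, not netem's actual algorithm):

```shell
# netem `delay <base> <jitter>` delays each packet by base +/- a random
# jitter offset. With the `delay 100ms 100ms` used above, the possible
# per-packet delay range is:
base_ms=100
jitter_ms=100
echo "delay range: $((base_ms - jitter_ms))ms -> $((base_ms + jitter_ms))ms"
```

So with these values, per-packet delays land roughly in the 0–200 ms range, which is what produces the wider spread in the next histogram.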

Using the profile_qdisc_latency gadget, we can generate another histogram to analyze the latency of scheduled network packets:

# Run the gadget again
$ kubectl gadget run profile_qdisc_latency:latest --node minikube-docker
latency
     µs                 : count    distribution
         0 -> 1         : 2        |****                                    |
         1 -> 2         : 0        |                                        |
         2 -> 4         : 0        |                                        |
         4 -> 8         : 1        |**                                      |
         8 -> 16        : 0        |                                        |
        16 -> 32        : 5        |***********                             |
        32 -> 64        : 14       |********************************        |
        64 -> 128       : 17       |****************************************|
       128 -> 256       : 0        |                                        |
       256 -> 512       : 0        |                                        |
       512 -> 1024      : 0        |                                        |
      1024 -> 2048      : 0        |                                        |
      2048 -> 4096      : 0        |                                        |
      4096 -> 8192      : 0        |                                        |
      8192 -> 16384     : 0        |                                        |
     16384 -> 32768     : 0        |                                        |
     32768 -> 65536     : 0        |                                        |
     65536 -> 131072    : 0        |                                        |
    131072 -> 262144    : 0        |                                        |
    262144 -> 524288    : 0        |                                        |
    524288 -> 1048576   : 0        |                                        |
   1048576 -> 2097152   : 0        |                                        |
   2097152 -> 4194304   : 0        |                                        |
   4194304 -> 8388608   : 0        |                                        |
   8388608 -> 16777216  : 0        |                                        |
  16777216 -> 33554432  : 0        |                                        |
  33554432 -> 67108864  : 0        |                                        |

The new histogram shows how the measured latencies increased.

You can clean up the resources created during this guide by running the following commands:

$ kubectl delete ns qdisc-latency-test

Exporting metrics

The profile_qdisc_latency gadget can expose the histograms it generates on a Prometheus endpoint. To do so, you need to enable both the metrics listener and the gadget collector. To enable the metrics listener, check the Exporting Metrics documentation. To enable the collector for the profile_qdisc_latency gadget with the metrics name qdisc-latency-metrics, run the following command:

WIP: Headless mode for kubectl gadget is under development

Finally, stop metrics collection:

WIP: Headless mode for kubectl gadget is under development