profile_qdisc_latency
The profile_qdisc_latency gadget measures the latency added by the network scheduler, i.e. the time packets spend queued in a qdisc between being enqueued and being dequeued for transmission. When the gadget is stopped, it prints a histogram of this latency.
Each row of the histogram shows how many packets (count column) experienced a qdisc latency in the range interval-start -> interval-end (µs column). By default the latency is reported in microseconds; if the --ms flag is passed, it is reported in milliseconds instead.
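For instance, to report the histogram in milliseconds when running with ig:
$ sudo ig run ghcr.io/inspektor-gadget/gadget/profile_qdisc_latency:latest --ms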
This guide uses the netem qdisc to emulate delay when sending packets. It is configured with the tc program from the iproute2 package.
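tc is also handy to inspect which qdiscs are currently attached to an interface, before and after the changes made in this guide (a standard iproute2 command, independent of the gadget):
$ tc qdisc show dev eth0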
Getting started
Running the gadget:
- kubectl gadget
- ig
$ kubectl gadget run ghcr.io/inspektor-gadget/gadget/profile_qdisc_latency:latest [flags]
$ sudo ig run ghcr.io/inspektor-gadget/gadget/profile_qdisc_latency:latest [flags]
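Both commands collect data until the gadget is stopped (for example with Ctrl+C), at which point the histogram is printed. If your version supports the --timeout flag (duration in seconds), you can also collect for a fixed window instead; for example:
$ sudo ig run ghcr.io/inspektor-gadget/gadget/profile_qdisc_latency:latest --timeout 30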
Guide
Run the gadget in a terminal:
- kubectl gadget
- ig
$ kubectl gadget run profile_qdisc_latency:latest --node minikube-docker
When the gadget is stopped (for example with Ctrl+C), it displays the qdisc latency distribution as follows:
latency
µs : count distribution
0 -> 1 : 2 |****************************************|
1 -> 2 : 0 | |
2 -> 4 : 0 | |
4 -> 8 : 0 | |
8 -> 16 : 0 | |
16 -> 32 : 0 | |
32 -> 64 : 0 | |
64 -> 128 : 0 | |
128 -> 256 : 0 | |
256 -> 512 : 0 | |
512 -> 1024 : 0 | |
1024 -> 2048 : 0 | |
2048 -> 4096 : 0 | |
4096 -> 8192 : 0 | |
8192 -> 16384 : 0 | |
16384 -> 32768 : 0 | |
32768 -> 65536 : 0 | |
65536 -> 131072 : 0 | |
131072 -> 262144 : 0 | |
262144 -> 524288 : 0 | |
524288 -> 1048576 : 0 | |
1048576 -> 2097152 : 0 | |
2097152 -> 4194304 : 0 | |
4194304 -> 8388608 : 0 | |
8388608 -> 16777216 : 0 | |
16777216 -> 33554432 : 0 | |
33554432 -> 67108864 : 0 | |
Now, to introduce more latency, let's add a netem qdisc with delay and jitter.
# Start by creating our testing namespace
$ kubectl create ns qdisc-latency-test
# Run a privileged pod in which we will emulate network latency
$ kubectl run -n qdisc-latency-test --rm -it netem-test \
--image=alpine --restart=Never \
--privileged
# Inside the container run the following commands
$ apk update
$ apk add iproute2
# This will introduce a latency of 100ms with 100ms jitter
$ tc qdisc add dev eth0 root netem delay 100ms 100ms
# Now let's ping some remote host
$ ping google.com
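Still inside the container, you can verify that the emulated delay is in place: the ping round-trip times should increase by roughly the configured delay, and the qdisc configuration should list the netem parameters:
# Inside the container: confirm the netem qdisc and its parameters
$ tc qdisc show dev eth0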
Using the profile_qdisc_latency gadget again, we can generate another histogram to analyse the latency of packets going through the qdisc:
# Run the gadget again
$ kubectl gadget run profile_qdisc_latency:latest --node minikube-docker
latency
µs : count distribution
0 -> 1 : 2 |**** |
1 -> 2 : 0 | |
2 -> 4 : 0 | |
4 -> 8 : 1 |** |
8 -> 16 : 0 | |
16 -> 32 : 5 |*********** |
32 -> 64 : 14 |******************************** |
64 -> 128 : 17 |****************************************|
128 -> 256 : 0 | |
256 -> 512 : 0 | |
512 -> 1024 : 0 | |
1024 -> 2048 : 0 | |
2048 -> 4096 : 0 | |
4096 -> 8192 : 0 | |
8192 -> 16384 : 0 | |
16384 -> 32768 : 0 | |
32768 -> 65536 : 0 | |
65536 -> 131072 : 0 | |
131072 -> 262144 : 0 | |
262144 -> 524288 : 0 | |
524288 -> 1048576 : 0 | |
1048576 -> 2097152 : 0 | |
2097152 -> 4194304 : 0 | |
4194304 -> 8388608 : 0 | |
8388608 -> 16777216 : 0 | |
16777216 -> 33554432 : 0 | |
33554432 -> 67108864 : 0 | |
The new histogram shows how the latency increased: most packets now fall in the 16µs to 128µs buckets.
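If you want to return to the baseline without deleting the pod, remove the netem qdisc from inside the container and run the gadget once more:
# Inside the container: remove the emulated delay
$ tc qdisc del dev eth0 root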
$ sudo ig run profile_qdisc_latency:latest
When the gadget is stopped (for example with Ctrl+C), it displays the qdisc latency distribution as follows:
latency
µs : count distribution
0 -> 1 : 2 |****************************************|
1 -> 2 : 0 | |
2 -> 4 : 0 | |
4 -> 8 : 0 | |
8 -> 16 : 0 | |
16 -> 32 : 0 | |
32 -> 64 : 0 | |
64 -> 128 : 0 | |
128 -> 256 : 0 | |
256 -> 512 : 0 | |
512 -> 1024 : 0 | |
1024 -> 2048 : 0 | |
2048 -> 4096 : 0 | |
4096 -> 8192 : 0 | |
8192 -> 16384 : 0 | |
16384 -> 32768 : 0 | |
32768 -> 65536 : 0 | |
65536 -> 131072 : 0 | |
131072 -> 262144 : 0 | |
262144 -> 524288 : 0 | |
524288 -> 1048576 : 0 | |
1048576 -> 2097152 : 0 | |
2097152 -> 4194304 : 0 | |
4194304 -> 8388608 : 0 | |
8388608 -> 16777216 : 0 | |
16777216 -> 33554432 : 0 | |
33554432 -> 67108864 : 0 | |
Now, to introduce more latency, let's add a netem qdisc with delay and jitter.
# Spawn a privileged container in which we will add network latency
$ docker run --rm --privileged --name qdisc-latency-test -it alpine
# Inside the container run the following commands
apk update
apk add iproute2
# This will introduce a latency of 100ms with 100ms jitter
tc qdisc add dev eth0 root netem delay 100ms 100ms
# Now let's ping some remote host
ping google.com
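As in the Kubernetes guide, you can confirm from inside the container that the delay is in place:
# Inside the container: confirm the netem qdisc and its parameters
tc qdisc show dev eth0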
Using the profile_qdisc_latency gadget again, we can generate another histogram to analyse the latency of packets going through the qdisc:
$ sudo ig run profile_qdisc_latency:latest
latency
µs : count distribution
0 -> 1 : 2 |**** |
1 -> 2 : 0 | |
2 -> 4 : 0 | |
4 -> 8 : 1 |** |
8 -> 16 : 0 | |
16 -> 32 : 5 |*********** |
32 -> 64 : 14 |******************************** |
64 -> 128 : 17 |****************************************|
128 -> 256 : 0 | |
256 -> 512 : 0 | |
512 -> 1024 : 0 | |
1024 -> 2048 : 0 | |
2048 -> 4096 : 0 | |
4096 -> 8192 : 0 | |
8192 -> 16384 : 0 | |
16384 -> 32768 : 0 | |
32768 -> 65536 : 0 | |
65536 -> 131072 : 0 | |
131072 -> 262144 : 0 | |
262144 -> 524288 : 0 | |
524288 -> 1048576 : 0 | |
1048576 -> 2097152 : 0 | |
2097152 -> 4194304 : 0 | |
4194304 -> 8388608 : 0 | |
8388608 -> 16777216 : 0 | |
16777216 -> 33554432 : 0 | |
33554432 -> 67108864 : 0 | |
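Again, the histogram shows the increased latency. To drop the emulated delay without removing the container, delete the netem qdisc from inside it:
# Inside the container: remove the emulated delay
tc qdisc del dev eth0 root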
You can clean up the resources created during this guide by running the following commands:
- kubectl gadget
- ig
$ kubectl delete ns qdisc-latency-test
$ docker rm -f qdisc-latency-test
Exporting metrics
The profile_qdisc_latency gadget can expose the histograms it generates on a Prometheus endpoint. To do so, you need to enable both the metrics listener and the gadget collector. To enable the metrics listener, check the Exporting Metrics documentation. To enable the collector for the profile_qdisc_latency gadget with the metrics name qdisc-latency-metrics, run the following command:
- kubectl gadget
- ig
WIP: Headless mode for kubectl gadget is under development
gadgetctl run ghcr.io/inspektor-gadget/gadget/profile_qdisc_latency:latest \
--annotate=qdisc_latency:metrics.collect=true \
--otel-metrics-name=qdisc:qdisc-latency-metrics \
--detach
Unless you configured the metrics listener differently, the metrics will be available at http://localhost:2224/metrics on the server side. For the profile_qdisc_latency gadget we ran above, you can find the metrics under the qdisc-latency-metrics scope:
$ curl http://localhost:2224/metrics -s | grep qdisc-latency
latency_bucket{otel_scope_name="qdisc-latency-metrics",otel_scope_version="",le="1"} 0
latency_bucket{otel_scope_name="qdisc-latency-metrics",otel_scope_version="",le="2"} 0
latency_bucket{otel_scope_name="qdisc-latency-metrics",otel_scope_version="",le="4"} 0
latency_bucket{otel_scope_name="qdisc-latency-metrics",otel_scope_version="",le="8"} 0
latency_bucket{otel_scope_name="qdisc-latency-metrics",otel_scope_version="",le="16"} 0
latency_bucket{otel_scope_name="qdisc-latency-metrics",otel_scope_version="",le="32"} 0
latency_bucket{otel_scope_name="qdisc-latency-metrics",otel_scope_version="",le="64"} 3
latency_bucket{otel_scope_name="qdisc-latency-metrics",otel_scope_version="",le="128"} 3
latency_bucket{otel_scope_name="qdisc-latency-metrics",otel_scope_version="",le="256"} 3
latency_bucket{otel_scope_name="qdisc-latency-metrics",otel_scope_version="",le="512"} 3
latency_bucket{otel_scope_name="qdisc-latency-metrics",otel_scope_version="",le="1024"} 3
latency_bucket{otel_scope_name="qdisc-latency-metrics",otel_scope_version="",le="2048"} 3
latency_bucket{otel_scope_name="qdisc-latency-metrics",otel_scope_version="",le="4096"} 3
latency_bucket{otel_scope_name="qdisc-latency-metrics",otel_scope_version="",le="8192"} 3
latency_bucket{otel_scope_name="qdisc-latency-metrics",otel_scope_version="",le="16384"} 3
latency_bucket{otel_scope_name="qdisc-latency-metrics",otel_scope_version="",le="32768"} 3
latency_bucket{otel_scope_name="qdisc-latency-metrics",otel_scope_version="",le="65536"} 3
latency_bucket{otel_scope_name="qdisc-latency-metrics",otel_scope_version="",le="131072"} 3
latency_bucket{otel_scope_name="qdisc-latency-metrics",otel_scope_version="",le="262144"} 3
latency_bucket{otel_scope_name="qdisc-latency-metrics",otel_scope_version="",le="524288"} 3
latency_bucket{otel_scope_name="qdisc-latency-metrics",otel_scope_version="",le="1.048576e+06"} 3
latency_bucket{otel_scope_name="qdisc-latency-metrics",otel_scope_version="",le="2.097152e+06"} 3
latency_bucket{otel_scope_name="qdisc-latency-metrics",otel_scope_version="",le="4.194304e+06"} 3
latency_bucket{otel_scope_name="qdisc-latency-metrics",otel_scope_version="",le="8.388608e+06"} 3
latency_bucket{otel_scope_name="qdisc-latency-metrics",otel_scope_version="",le="1.6777216e+07"} 3
latency_bucket{otel_scope_name="qdisc-latency-metrics",otel_scope_version="",le="3.3554432e+07"} 3
latency_bucket{otel_scope_name="qdisc-latency-metrics",otel_scope_version="",le="6.7108864e+07"} 3
latency_bucket{otel_scope_name="qdisc-latency-metrics",otel_scope_version="",le="+Inf"} 3
latency_sum{otel_scope_name="qdisc-latency-metrics",otel_scope_version=""} 192
latency_count{otel_scope_name="qdisc-latency-metrics",otel_scope_version=""} 3
otel_scope_info{otel_scope_name="qdisc-latency-metrics",otel_scope_version=""} 1
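From these values you can already derive summary statistics: latency_sum / latency_count = 192 / 3, i.e. an average qdisc latency of 64µs for the packets recorded so far. Once a Prometheus server scrapes this endpoint, you can also compute percentiles from the buckets. As a sketch (the Prometheus address below is a placeholder for your own setup), a 95th-percentile query against the Prometheus HTTP API could look like:
# Hypothetical Prometheus server address; replace with your own
$ curl -sG 'http://prometheus.example:9090/api/v1/query' \
    --data-urlencode 'query=histogram_quantile(0.95, rate(latency_bucket{otel_scope_name="qdisc-latency-metrics"}[5m]))'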
Finally, stop metrics collection:
- kubectl gadget
- ig
WIP: Headless mode for kubectl gadget is under development
$ gadgetctl list
ID NAME TAGS GADGET
3e68634c4c28 amazing_payne ghcr.io/inspektor-gadget/gadget/profile_qdisc_latency:latest
$ gadgetctl delete 3e68634c4c28
3e68634c4c28a981a60fe96a29d24a99