NGINX Prometheus exporter, but it's awk 01 June 2021 on Krystian's Keep

NGINX exposes a few interesting metrics with its stub_status module. However, they come in a non-standard format that looks like this:

Active connections: 4 
server accepts handled requests
 6756 6756 16204 
Reading: 0 Writing: 1 Waiting: 3 

If you want Prometheus to scrape these, you are out of luck out of the box, but there are two common solutions to this problem: knyar/nginx-lua-prometheus and nginxinc/nginx-prometheus-exporter. Both are probably fine, but I don't really like the idea of running an additional daemon whose sole purpose is to serve metrics from another daemon.

If you happen to already have fcgiwrap running, you can just use awk. If you don't have fcgiwrap, install and run it; a simple setup like this can be used to export many more metrics. Introducing: "NGINX exporter, but it's awk"!
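If fcgiwrap is not running yet, here is a minimal sketch of installing and starting it on a Debian-like system with systemd (package and unit names vary between distributions, and the default socket path may differ from the /run/fcgiwrap/fcgiwrap.sock used later in this post):

# Debian/Ubuntu example; adjust for your distribution.
apt install fcgiwrap
systemctl enable --now fcgiwrap.socket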

Inspired by this method of transforming the output of unbound-control stats into Prometheus-style metrics, I decided to do the same for NGINX.

Here is the result:

# Read nginx stub_status
# and output Prometheus metrics style output.
# Use it like this: curl -s "localhost:8080/nginx_status" | awk -f "metrics.awk"

BEGIN {
	FS=" ";
}

/^Active connections: [0-9].*$/ {
	m["active"]=$3
}

/^ [0-9].* [0-9].* [0-9].*$/ {
	m["accepted"]=$1
	m["handled"]=$2
	m["requests"]=$3
}

/^Reading: [0-9].* Writing: [0-9].* Waiting: [0-9].*$/ {
	m["reading"]=$2
	m["writing"]=$4
	m["waiting"]=$6
}

END {
	print "# HELP nginx_connections_accepted Accepted client connections"
	print "# TYPE nginx_connections_accepted counter"
	print "nginx_connections_accepted " m["accepted"]
	print ""

	print "# HELP nginx_connections_active Active client connections"
	print "# TYPE nginx_connections_active gauge"
	print "nginx_connections_active " m["active"]
	print ""

	print "# HELP nginx_connections_handled Active client connections"
	print "# TYPE nginx_connections_handled counter"
	print "nginx_connections_handled " m["handled"]
	print ""

	print "# HELP nginx_connections_reading Connections where NGINX is reading the request header"
	print "# TYPE nginx_connections_reading gauge"
	print "nginx_connections_reading " m["reading"]
	print ""

	print "# HELP nginx_connections_waiting Idle client connections"
	print "# TYPE nginx_connections_waiting gauge"
	print "nginx_connections_waiting " m["waiting"]
	print ""

	print "# HELP nginx_connections_writing Connections where NGINX is writing the response back to the client"
	print "# TYPE nginx_connections_writing gauge"
	print "nginx_connections_writing " m["writing"]
	print ""

	print "# HELP nginx_http_requests_total Total http requests"
	print "# TYPE nginx_http_requests_total counter"
	print "nginx_http_requests_total " m["requests"]
	print ""
}

Short and straight to the point. Now, this is just the awk program. We need to run it against the output of NGINX's stub_status module, but first let's activate the module itself. Add the following inside the http block in nginx.conf.

server {
	listen 8080 default_server;
	listen [::]:8080 default_server;

	location /nginx_status {
		stub_status;
	}
}

Reload NGINX configuration. You will most likely need to use sudo or doas for this.

nginx -s reload

And check if you can access the status page.

curl "localhost:8080/nginx_status"

You should see something resembling what you saw at the beginning of this post.

Next, save the awk program presented above somewhere convenient. I'll assume it is placed in /usr/local/share/nginx/metrics.awk.

Now let us test the program. Run the following command. (The -s flag prevents curl from printing the progress bar to stderr.)

curl -s "localhost:8080/nginx_status" | awk -f "/usr/local/share/nginx/metrics.awk"

You should see something that starts like this:

# HELP nginx_connections_accepted Accepted client connections
# TYPE nginx_connections_accepted counter
nginx_connections_accepted 7135

If there is no number after nginx_connections_accepted something did not work properly.

Next, add a shell script that runs this whole shebang (pun intended) and save it somewhere convenient. I recommend /usr/local/bin/nginx-metrics.

#!/bin/sh

curl -s "localhost:8080/nginx_status" | awk -f "/usr/local/share/nginx/metrics.awk"

Do not forget to make it executable. (You’ll need sudo or doas.)

chmod +x /usr/local/bin/nginx-metrics

Last step. Add the following inside your server block in nginx.conf, adjusting the path to fcgiwrap.sock if needed.

	location /nginx/metrics {
		gzip off;
		fastcgi_pass unix:/run/fcgiwrap/fcgiwrap.sock;
		include /etc/nginx/fastcgi_params;
		fastcgi_param SCRIPT_FILENAME /usr/local/bin/nginx-metrics;
	}
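
Reload NGINX once more so it picks up the new location, then check that the metrics endpoint responds (same commands as before, just a different path):

nginx -s reload
curl "localhost:8080/nginx/metrics"

The output should match what you got when piping curl into awk by hand.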

That's it. Now you have NGINX metrics, Prometheus-style, at localhost:8080/nginx/metrics. Add a suitable scrape config to prometheus.yml, paying extra attention to set metrics_path: '/nginx/metrics' if you used the same path as I did.
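A minimal scrape config sketch could look like this (the job name is arbitrary and the target assumes Prometheus runs on the same host as NGINX; adjust as needed):

scrape_configs:
  - job_name: 'nginx'
    metrics_path: '/nginx/metrics'
    static_configs:
      - targets: ['localhost:8080']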

Done: a minimalistic NGINX exporter using only awk and an existing NGINX + fcgiwrap setup.

