Securing a Prometheus Pushgateway deployment on OpenShift

Michael Kotelnikov
Apr 19, 2022

In this article I describe how to harden a Prometheus Pushgateway instance on top of Red Hat OpenShift Container Platform. Throughout the blog I go through a real-life Pushgateway deployment scenario and explain why simply deploying a Pushgateway without taking security into consideration can undermine your monitoring system.

Pushgateway Basics

Prometheus Pushgateway is a common tool that receives metrics pushed by external agents and exposes them in a “scrapable” format for a Prometheus instance to collect.

Prometheus & Pushgateway architecture — Pushgateway receives data from an application, while Prometheus scrapes the Pushgateway

More information about Pushgateway can be found in the official Prometheus documentation — https://github.com/prometheus/pushgateway
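As a quick illustration, pushing a metric is a plain HTTP POST of Prometheus text-format samples to /metrics/job/<job_name>. The sketch below only writes a sample payload to a file; the curl line is commented out because it assumes a Pushgateway reachable at localhost:9091 —

```shell
# Prometheus text exposition format: optional TYPE line, then "name value".
cat > metrics.txt <<'EOF'
# TYPE some_metric gauge
some_metric 3.14
EOF

# Push it to a Pushgateway (assumes one is listening on localhost:9091):
# curl --data-binary @metrics.txt http://localhost:9091/metrics/job/some_job

cat metrics.txt
```

The job name in the URL path becomes a grouping label on the pushed metrics.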

Pushgateway on OpenShift

Like many other technologies and tools, a Pushgateway can also be containerized and deployed on OpenShift. In the next scenario I go through such a Pushgateway implementation. An overview of the architecture I’m going to work with can be seen in the next diagram —

The Pushgateway component receives events from both internal OpenShift applications and external organizational components

The above architecture does the job, but it might be prone to security vulnerabilities. Since we are pushing data to the Pushgateway without verifying the “pushing client”, a malicious entity may spoof data and send it to the Pushgateway, making the monitoring system unreliable, as shown in the next diagram —

A malicious entity may time its “pushes” to run right after the real data is sent from a legitimate application, thereby exposing faulty data for the Prometheus instance to scrape

Client Verification

In order to restrict potential malicious entities, a client verification mechanism must be put in place. For the sake of this demo, I will protect the Pushgateway instance with a username and a password. All clients that try to push data to the Pushgateway will have to provide a set of credentials to authenticate.

To set the credentials, I add an nginx sidecar container to the Pushgateway pod. The nginx container acts as an authentication proxy that verifies whether traffic is coming from a legitimate client or not.

The diagram shows how an application provides a username & password in order to authenticate against the Pushgateway’s authentication proxy

As shown in the above diagram, the credentials are verified by an authentication proxy. If they are valid, the data is forwarded to the Pushgateway. If they are invalid, a “401: Authorization Required” message is returned to the client.
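The sidecar pattern described above can be sketched as a Deployment with two containers in one pod. The container images, resource names, and mount paths below are illustrative assumptions, not the exact manifest from the repository —

```yaml
# Sketch only — images, names, and paths are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pushgateway
  namespace: pushgateway
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pushgateway
  template:
    metadata:
      labels:
        app: pushgateway
    spec:
      containers:
      - name: pushgateway            # listens on 9091, reached only via nginx
        image: prom/pushgateway
      - name: nginx                  # authentication proxy, exposed on 8080
        image: bitnami/nginx
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: nginx-conf
          mountPath: /opt/bitnami/nginx/conf/server_blocks
        - name: htpasswd
          mountPath: /opt/bitnami/nginx/conf/htpasswd
          subPath: htpasswd
      volumes:
      - name: nginx-conf
        configMap:
          name: nginx-configmap
      - name: htpasswd
        secret:
          secretName: htpasswd-secret
```

Since both containers share the pod's network namespace, nginx can reach the Pushgateway over localhost while only nginx's port is exposed to the outside.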

Kubernetes NetworkPolicies

That said, you must not forget NetworkPolicies. Even if you set up a client verification mechanism in the pod, a malicious entity may still access the Pushgateway container directly and “skip” the authentication proxy, as shown in the next diagram —

In order to restrict such behavior, you can set a NetworkPolicy in the Pushgateway namespace that blocks access to every port the nginx container does not use. You may set a policy that accepts packets destined for port 8080/tcp on the pod and drops all other network flows, as shown in the next diagram —
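Such a policy can be sketched as follows; the pod selector label is an assumption and should match the labels your Pushgateway deployment actually uses —

```yaml
# Sketch only — the app label is an assumption.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-nginx-only
  namespace: pushgateway
spec:
  podSelector:
    matchLabels:
      app: pushgateway
  policyTypes:
  - Ingress
  ingress:
  - ports:
    - protocol: TCP
      port: 8080        # only the nginx proxy port is reachable
```

Because the policy lists Ingress in policyTypes and allows only port 8080/tcp, any connection attempt straight to the Pushgateway port (9091) is dropped.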

Setting Everything Up

After going through all aspects of the architecture in theory, let’s deploy it in practice.

NOTE: All resources deployed during this section can be found in the next GitHub repository — https://github.com/michaelkotelnikov/k8s-secured-pushgateway.git

Step 1 — Create a Namespace for the Pushgateway resources

First, you need to create a namespace in your OpenShift cluster. The namespace in this scenario is named pushgateway. Run the next command to create it —

$ oc apply -f https://raw.githubusercontent.com/michaelkotelnikov/k8s-secured-pushgateway/master/k8s-resources/pushgateway/namespace.yaml

Step 2 — Create a secret for Nginx credentials

Create a set of credentials for the nginx authentication proxy. I created my credentials using the htpasswd command. The credentials I created are — user: pushgateway ; password: redhat. To deploy the secret containing the credentials, run the next command —
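If you want to generate your own credentials instead of reusing mine, the entry can be produced with the htpasswd command from httpd-tools. The sketch below uses openssl as a stand-in so it runs without Apache installed; the salt value is arbitrary —

```shell
# Generate an htpasswd-style entry for user "pushgateway", password "redhat".
# Roughly equivalent to: htpasswd -nb pushgateway redhat
USER="pushgateway"
HASH=$(openssl passwd -apr1 -salt abcdefgh redhat)
echo "${USER}:${HASH}" > htpasswd
cat htpasswd
```

The resulting file is what gets stored in the Kubernetes secret and mounted into the nginx container.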

$ oc apply -f https://raw.githubusercontent.com/michaelkotelnikov/k8s-secured-pushgateway/master/k8s-resources/pushgateway/htpasswd-secret.yaml

Step 3 — Create a configmap for Nginx configurations

Create the next configmap to map an nginx.conf file into the nginx authentication proxy container. The nginx configuration makes nginx verify incoming traffic against the htpasswd credentials and then forward it to the adjacent Pushgateway container. The configuration snippet —

location / {
    auth_basic "Pushgateway server authentication";
    auth_basic_user_file /opt/bitnami/nginx/conf/htpasswd;
    proxy_pass http://localhost:9091;
}

To deploy the configmap resource that contains the configuration file, run the next command —

$ oc apply -f https://raw.githubusercontent.com/michaelkotelnikov/k8s-secured-pushgateway/master/k8s-resources/pushgateway/nginx-configmap.yaml
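For context, the location block above sits inside an nginx server block. A minimal sketch follows; the listen port (8080, matching the NetworkPolicy discussed earlier) is an assumption based on this scenario's layout —

```nginx
server {
    # nginx listens on 8080; the Pushgateway itself stays on localhost:9091
    listen 8080;

    location / {
        auth_basic "Pushgateway server authentication";
        auth_basic_user_file /opt/bitnami/nginx/conf/htpasswd;
        proxy_pass http://localhost:9091;
    }
}
```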

Step 4 — Create the Pushgateway deployment

Now that you have created all credentials and configurations, you may deploy the Pushgateway & authentication proxy instance. To deploy Pushgateway, run the next command —

$ oc apply -f https://raw.githubusercontent.com/michaelkotelnikov/k8s-secured-pushgateway/master/k8s-resources/pushgateway/deployment.yaml

Note that after running the command, a pod with two containers is created —

$ oc get pods -n pushgateway
NAME                           READY   STATUS    RESTARTS   AGE
pushgateway-7bc8f587db-ppvcr   2/2     Running   0          72s

Step 5 — Create a Service & Route

To enable networking towards the Pushgateway pod, create the service and route resources. To create the resources run the next commands —

$ oc apply -f https://raw.githubusercontent.com/michaelkotelnikov/k8s-secured-pushgateway/master/k8s-resources/pushgateway/service.yaml
$ oc apply -f https://raw.githubusercontent.com/michaelkotelnikov/k8s-secured-pushgateway/master/k8s-resources/pushgateway/route.yaml
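The two resources can be sketched as follows. Note that the Service targets the nginx port (8080) rather than the Pushgateway port, so external traffic always passes through the authentication proxy; names, labels, and the TLS termination mode are assumptions —

```yaml
# Sketch only — names, labels, and TLS settings are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: pushgateway
  namespace: pushgateway
spec:
  selector:
    app: pushgateway
  ports:
  - port: 8080
    targetPort: 8080     # nginx proxy port, not the Pushgateway's 9091
    protocol: TCP
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: pushgateway
  namespace: pushgateway
spec:
  to:
    kind: Service
    name: pushgateway
  port:
    targetPort: 8080
  tls:
    termination: edge
```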

Step 6 — Verify authentication

Now that we have created network access towards the Pushgateway instance, let’s try pushing some data into it! First, let’s send a curl request with no authentication parameters —

$ echo "some_metric 3.17" | curl -k --data-binary @- https://pushgateway-pushgateway.apps.restore-test.cloudlet-dev.com/metrics/job/some_job

<html>
<head><title>401 Authorization Required</title></head>
<body>
<center><h1>401 Authorization Required</h1></center>
<hr><center>nginx</center>
</body>
</html>

Running the same command with authentication parameters provides a successful result —

$ echo "some_metric 3.19" | curl -k --data-binary @- https://pushgateway-pushgateway.apps.restore-test.cloudlet-dev.com/metrics/job/some_job --user pushgateway:redhat

We can see the metric in the Pushgateway instance —

A metric with a job name of some_job and a value of 3.19

Step 7 — Hardening network connections

As I mentioned earlier, even though we have enabled an authentication mechanism using an nginx authentication proxy, a malicious entity with access to a pod on the same OpenShift cluster may still send packets to the Pushgateway container directly, without going through the nginx proxy container. For example, an attacker may execute the next command from a pod in a different namespace on the same OpenShift cluster —

$ curl -k http://10-129-3-170.pushgateway.pod.cluster.local:9091
<!DOCTYPE html>
<html lang="en">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<meta name="robots" content="noindex,nofollow">
<title>Prometheus Pushgateway</title>
...

The command accesses the pod by its IP and Pushgateway port (9091). In order to restrict such actions, a NetworkPolicy resource needs to be applied. You may run the next command to apply a NetworkPolicy resource that fits this scenario —

$ oc apply -f https://raw.githubusercontent.com/michaelkotelnikov/k8s-secured-pushgateway/master/k8s-resources/pushgateway/networkpolicy.yaml

After applying the network policy, performing the same curl command yields the next result —

$ curl -k http://10-129-3-170.pushgateway.pod.cluster.local:9091
curl: (7) Failed to connect to 10-129-3-170.pushgateway.pod.cluster.local port 9091: Connection timed out

Conclusion

Security in Prometheus is important. You must always take an extra step to protect your monitoring system from malicious actions that may come from inside or outside of your organization.

During the course of the article I showed how you can leverage OpenShift and Kubernetes mechanisms to harden a Pushgateway instance. I hope this blog was helpful, thank you for reading!
