Managing Network Security Lifecycles in Multi-Cluster OpenShift Environments with OpenShift Platform Plus

Golan Heights, August 2021

In this blog, I discuss how the tools in the OpenShift Platform Plus bundle help an organization maintain and secure network traffic flows in multi-cluster OpenShift environments. Proper network management can be challenging in Kubernetes-based environments. This article describes several common scenarios that highlight these complexities and shows how OpenShift Platform Plus addresses them to improve network maintenance and security.

Native Network Management in OpenShift

Out of the box, OpenShift provides mechanisms to control network traffic, such as NetworkPolicy and EgressNetworkPolicy objects. An administrator can maintain NetworkPolicy resources in each namespace to define which pods are allowed to communicate with one another in an OpenShift cluster. For example, an ingress NetworkPolicy can be configured in the application-2 namespace to allow traffic from application-1 to application-2, but disallow traffic from application-3 to application-2.

The above scenario can be recreated by deploying the following NetworkPolicy resource in the application-2 namespace -

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-application-1-webserver
  namespace: application-2
spec:
  podSelector:
    matchLabels:
      app: webserver
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: application-1
      podSelector:
        matchLabels:
          app: webserver

NetworkPolicy resources provide the fundamental mechanism for network traffic management in OpenShift and Kubernetes environments. However, relying on NetworkPolicies alone is not enough for a robust network management posture -

  • NetworkPolicy objects are namespace scoped. Therefore, a NetworkPolicy needs to be deployed in each application namespace in an OpenShift cluster. In practice, administrators or developers sometimes miss namespaces and never configure a NetworkPolicy object in them, leaving the applications in those namespaces exposed.
  • When maintaining multiple OpenShift clusters, it is important to keep the clusters aligned. The clusters must be associated with proper NetworkPolicy objects per namespace. Network configurations in all clusters must be monitored and governed via a single console; if one of the clusters drifts from the desired configuration, an alert should be initiated.
  • Anomalous traffic must be monitored, even in namespaces where NetworkPolicies are configured. A NetworkPolicy may be overly permissive and allow malicious actors to circumvent the intended restrictions. Therefore, it’s important to maintain a mechanism that monitors network flows at runtime and flags unauthorized traffic within OpenShift clusters.
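A common mitigation for the first gap is to deploy a default-deny policy in every application namespace, so that a forgotten namespace fails closed rather than open. The following manifest is an illustrative sketch rather than part of this scenario; the empty podSelector matches all pods in the namespace it is deployed to -

```yaml
# Illustrative baseline policy: denies all ingress traffic to every pod
# in its namespace. Allowed flows must then be opened explicitly with
# additional NetworkPolicy objects.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress
```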

To overcome these challenges, a centralized platform for governance, monitoring and compliance needs to be established. In this blog, you will create this target state using the tools included with OpenShift Platform Plus -

Each of these tools provides capabilities that help maintain and secure network traffic in multi-cluster OpenShift environments. The next scenario demonstrates how they can be implemented together in an organization.

Scenario

This scenario defines a multi-cluster OpenShift environment that simulates a typical organization deploying multiple applications. The environment is managed by Red Hat Advanced Cluster Management for Kubernetes and secured by Red Hat Advanced Cluster Security for Kubernetes -

  • A hub cluster is created for Red Hat Advanced Cluster Management for Kubernetes and Red Hat Advanced Cluster Security for Kubernetes. The cluster is used for management purposes only.
  • Two managed clusters host the organizational applications. These clusters are used for deploying applications only.

Red Hat Advanced Cluster Management for Kubernetes is used to deploy two applications on the managed clusters. Both applications are simple Apache web servers. These applications will be used to illustrate how network traffic is managed between applications.

To deploy the applications in your environment, run the following commands on your hub cluster -

<hub-cluster> $ oc apply -f https://raw.githubusercontent.com/michaelkotelnikov/opp-network-blog/master/demo-application/rhacm-resources/application-1.yaml
<hub-cluster> $ oc apply -f https://raw.githubusercontent.com/michaelkotelnikov/opp-network-blog/master/demo-application/rhacm-resources/application-2.yaml

After applying each set of manifests, both applications are now deployed and managed by Red Hat Advanced Cluster Management for Kubernetes -

Now that the scenario is defined, let’s deploy additional tools from OpenShift Platform Plus to properly manage network traffic between the applications. The first tool you will deploy is the Compliance Operator.

Compliance Operator

The Compliance Operator lets OpenShift Container Platform administrators define the set of technical controls that a cluster should comply with, and provides them with an overview of gaps and paths to remediation. More information about the Compliance Operator and its components can be found in the official OpenShift documentation.

Since the environment you are using is comprised of multiple OpenShift clusters, you will use Red Hat Advanced Cluster Management to distribute the Compliance Operator and monitor scan results across the managed clusters.

Alongside its application management capabilities, Red Hat Advanced Cluster Management uses its policy framework to ensure your Kubernetes clusters are configured in the desired state to meet internal enterprise and external technical regulatory compliance standards. If a managed cluster violates a defined policy, an alert is sent to the RHACM console. In this scenario, you will use a governance policy to maintain an instance of the Compliance Operator across all clusters in the environment. More information about the integration between Red Hat Advanced Cluster Management and the Compliance Operator can be found at — Managing NIST 800–53 controls in a multicluster OpenShift environment — Part 3.

You will use the Compliance Operator to identify whether the managed clusters in the environment are compliant with the ocp4-moderate-configure-network-policies-namespaces compliance rule. The compliance rule makes sure that all non-system namespaces have a NetworkPolicy object configured.
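To get a rough feel for what this rule checks, you can list the non-system namespaces that have no NetworkPolicy objects. This is an illustrative sketch, not part of the compliance scan itself; adjust the system-namespace filter to match your environment -

```shell
# Print every namespace (excluding common system namespaces) that has
# no NetworkPolicy objects configured
for ns in $(oc get namespaces -o jsonpath='{.items[*].metadata.name}'); do
  case "$ns" in
    openshift*|kube-*|default) continue ;;
  esac
  if [ "$(oc get networkpolicy -n "$ns" --no-headers 2>/dev/null | wc -l)" -eq 0 ]; then
    echo "$ns"
  fi
done
```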

To deploy the Compliance Operator using Advanced Cluster Management, run the following commands on the hub cluster -

<hub-cluster> $ oc apply -f https://raw.githubusercontent.com/michaelkotelnikov/opp-network-blog/master/demo-policies/namespace.yaml
<hub-cluster> $ oc apply -f https://raw.githubusercontent.com/michaelkotelnikov/opp-network-blog/master/demo-policies/compliance-operator-policy.yaml
<hub-cluster> $ oc apply -f https://raw.githubusercontent.com/michaelkotelnikov/opp-network-blog/master/demo-policies/compliance-operator-moderate-scan-policy.yaml

After the policies are configured, the Governance dashboard in the Red Hat Advanced Cluster Management interface indicates that both clusters are compliant with the policy-compliance-operator policy, but non-compliant with the policy-moderate-scan policy.

To understand which compliance rules are violated by each cluster, navigate to the Governance dashboard, and click on policy-moderate-scan. Afterward, navigate to the Status tab and click on View details next to the violating templates to see all violating compliance rules -

Search for the ocp4-moderate-configure-network-policies-namespaces compliance rule in the scan results and click on View yaml next to its name. As you can see, the clusters are non-compliant with the rule because there are non-system namespaces that do not have a NetworkPolicy resource configured.

By using the Compliance Operator you are able to discover which clusters have namespaces with no network policies. However, just identifying these namespaces is not enough. Additional steps are needed to design network rules based on the application’s logic. In complex applications with many components, this could be quite a challenging task. To solve this problem, you will use the network management tools and capabilities that come with Red Hat Advanced Cluster Security for Kubernetes as part of OpenShift Platform Plus.

Red Hat Advanced Cluster Security for Kubernetes

Red Hat Advanced Cluster Security for Kubernetes provides advanced security for Kubernetes, enabling organizations to build, deploy, run, and manage intelligent applications securely at scale. With multiple layers of security analysis, the platform enables DevSecOps adoption and provides a holistic, ranked risk assessment across managed clusters.

In this scenario, Red Hat Advanced Cluster Security is used to identify, control and restrict network traffic flows between applications on the managed clusters. In order to understand the existing network traffic flows in the clusters use the Network Graph feature. Using the Network Graph, you are able to identify all active connections between applications and external entities. Furthermore, you can identify all allowed connections based on the rules defined in NetworkPolicy objects within namespaces.

The application logic that we are looking to realize in this scenario allows application-1 to communicate with application-2, while application-2 should not be allowed to communicate with application-1. The flow is explained in the next diagram -

Navigate to Network Graph in the Red Hat Advanced Cluster Security dashboard to visualize the above diagram at runtime. For example, if I execute a curl command in the application-1 pod, I will see an arrow that describes the network flow from application-1 to application-2.

As you can see from the above Network Graph snippet, Red Hat Advanced Cluster Security was able to identify the traffic that originated from application-1 and record it into the graph. Furthermore, Red Hat Advanced Cluster Security added the network flow into the baseline settings of application-1. The baseline settings define which network flows have been recorded by Red Hat Advanced Cluster Security for a certain deployment. By using the baseline settings, you are able to generate NetworkPolicy objects that correspond to the recorded network traffic, and you can initiate a violation if anomalous traffic does not correspond to the baseline settings. The recorded baseline settings in this scenario are described in the next table -

By observing the baseline settings, you can see that application-1 initiated traffic towards the webserver deployment in application-2. Note that you may select certain flows in the baseline settings and mark them as anomalous. Anomalous flows can be visualized in the Network Graph or even initiate an alert whenever they occur.

Now that the baseline settings are set for application-1 and application-2, let’s use Red Hat Advanced Cluster Security to automatically generate NetworkPolicy objects based on the recorded network flows. To create the NetworkPolicies, navigate to Network Graph in the Red Hat Advanced Cluster Security dashboard, click on Network Policy Simulator, and click on Generate and simulate network policies. As a result, a long YAML file is provided which contains NetworkPolicy objects that correspond to the recorded rules in the baseline settings for all namespaces. In the long list, you can find the NetworkPolicy objects for the application-1 and application-2 namespaces -

...
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  creationTimestamp: "2021-10-25T12:05:04Z"
  labels:
    network-policy-generator.stackrox.io/generated: "true"
  name: stackrox-generated-webserver
  namespace: application-2
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: application-1
      podSelector:
        matchLabels:
          app: webserver
    ports:
    - port: 8080
      protocol: TCP
  podSelector:
    matchLabels:
      app: webserver
  policyTypes:
  - Ingress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  creationTimestamp: "2021-10-25T12:05:04Z"
  labels:
    network-policy-generator.stackrox.io/generated: "true"
  name: stackrox-generated-webserver
  namespace: application-1
spec:
  podSelector:
    matchLabels:
      app: webserver
  policyTypes:
  - Ingress
...

Red Hat Advanced Cluster Security generated the NetworkPolicy objects, but applying them manually in their current state to all managed clusters would not be an efficient way of managing these configurations in a multi-cluster production environment. To keep the clusters aligned and up to organizational governance standards, we use Red Hat Advanced Cluster Management policies.

Red Hat Advanced Cluster Management for Kubernetes

Red Hat Advanced Cluster Management for Kubernetes is a platform designed to help developers and administrators manage cloud-native applications that run on multiple OpenShift clusters. RHACM supports the creation of OpenShift clusters across multiple cloud providers. It provides a monitoring system that visualizes resources and metrics from all clusters in the environment, and deployment tools that provision applications across multiple clusters. Most importantly, it provides policy-based governance that keeps clusters at a desired configuration state based on best practices, and easily detects clusters that have drifted from that state.

In this scenario, Red Hat Advanced Cluster Management is used to distribute the RHACS-generated NetworkPolicy resources uniformly across all managed clusters in the environment. Red Hat Advanced Cluster Management uses Policy objects to enforce the creation of the NetworkPolicies on the managed clusters.
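A Policy that wraps one of the generated NetworkPolicy objects could look roughly like the following sketch. The names and namespaces here are assumptions for illustration; the actual policies for this scenario are kept in the blog’s Git repository -

```yaml
# Illustrative RHACM Policy wrapping a generated NetworkPolicy.
# A PlacementRule and PlacementBinding (not shown) are still needed
# to target the managed clusters.
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: policy-networkpolicy-webserver
  namespace: rhacm-policies
spec:
  remediationAction: enforce
  disabled: false
  policy-templates:
  - objectDefinition:
      apiVersion: policy.open-cluster-management.io/v1
      kind: ConfigurationPolicy
      metadata:
        name: networkpolicy-webserver
      spec:
        remediationAction: enforce
        severity: medium
        object-templates:
        - complianceType: musthave
          objectDefinition:
            apiVersion: networking.k8s.io/v1
            kind: NetworkPolicy
            metadata:
              name: stackrox-generated-webserver
              namespace: application-1
            spec:
              podSelector:
                matchLabels:
                  app: webserver
              policyTypes:
              - Ingress
```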

Let’s use the GitOps approach to deploy the policies, as described in the Contributing and deploying community policies with Red Hat Advanced Cluster Management and GitOps blog post. The GitOps approach allows users to treat policies like source code: when you update policies in your Git repository, the policies are updated on your RHACM hub cluster as well. To use the GitOps functionality, follow these steps -

  • Create a repository in GitHub for your Policy objects.
  • Place the NetworkPolicy objects created by RHACS into a Policy custom resource (e.g — Policy that wraps a NetworkPolicy).
  • Clone the policy-collection repository. Afterwards, use the deploy/deploy.sh script from the repository to deploy the Policy. Make sure to point the script to your repository. For example -
<hub-cluster> $ git clone https://github.com/open-cluster-management/policy-collection.git
<hub-cluster> $ cd policy-collection/deploy/
<hub-cluster> $ ./deploy.sh --url https://github.com/michaelkotelnikov/opp-network-blog.git --path networkpolicy-policy --branch master --name network-policy --namespace rhacm-policies

After you create the resources, view the RHACM Governance dashboard. All of the clusters should be in a compliant state, with no violations -

The NetworkPolicy objects were created successfully on both cluster-a and cluster-b. Verify the NetworkPolicy objects by running the following command -

<cluster-a> $ oc get NetworkPolicy -A | grep application
application-1 stackrox-generated-webserver app=webserver 33m
application-2 stackrox-generated-webserver app=webserver 33m
<cluster-b> $ oc get NetworkPolicy -A | grep application
application-1 stackrox-generated-webserver app=webserver 33m
application-2 stackrox-generated-webserver app=webserver 33m
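Since the generated policy in the application-1 namespace allows no ingress at all, you can optionally verify the restriction in the opposite direction as well. The pod reference and service URL below are assumptions; substitute the names from your environment -

```shell
<cluster-a> $ oc rsh -n application-2 deployment/webserver
<webserver-pod (cluster-a)> $ curl --max-time 5 http://webserver.application-1.svc:8080
# The request should time out instead of returning the Apache welcome page
```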

Red Hat Advanced Cluster Management now manages the NetworkPolicy objects across the clusters. Any changes to the Policies are done via Git and Pull Requests only. Any direct change to the NetworkPolicy objects on the managed clusters will be mitigated using RHACM’s configuration policy controller.

Anomalies

NetworkPolicies are now set between application-1 and application-2. They limit traffic between the namespaces and deployments, but they are not bulletproof. A malicious actor may still find a way to overcome certain restrictions and perform unauthorized requests that work around the NetworkPolicy rules. Therefore, it’s important to monitor suspicious network flows between pods in the cluster and make sure that a violation is initiated when such traffic occurs.

Let’s say a malicious actor compromised cluster-a’s RBAC mechanism and gained the rights to deploy pods in the application-1 namespace. The actor creates a pod named anomalous and gives it the app: webserver label (the same label the main application in the namespace has). Because the NetworkPolicy in the application-2 namespace allows ingress from any pod in application-1 that carries this label, the anomalous pod can now reach the webserver pod in application-2. The network traffic diagram after the malicious actor deploys the pod -
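A minimal sketch of such an anomalous deployment is shown below. The manifest actually used in this scenario lives in the blog’s repository and may differ; the image and command here are placeholder assumptions -

```yaml
# Hypothetical deployment that spoofs the application label to satisfy
# the podSelector of the NetworkPolicy in the application-2 namespace
apiVersion: apps/v1
kind: Deployment
metadata:
  name: anomalous
  namespace: application-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webserver
  template:
    metadata:
      labels:
        app: webserver   # spoofed label grants network access
    spec:
      containers:
      - name: anomalous
        image: registry.access.redhat.com/ubi8/ubi-minimal
        command: ["sleep", "infinity"]
```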

In such scenarios, the goal is to identify the malicious actor and the anomaly as soon as possible. To identify the anomaly, you will use Red Hat Advanced Cluster Security. As discussed earlier in this blog, RHACS records a network baseline per deployment on the managed clusters. As an administrator, you can set RHACS to initiate a violation whenever the baseline settings are violated, by enabling Alert on baseline violations in a deployment’s definition in the Network Graph dashboard.

In this scenario, you need to enable the Alert on baseline violations option for the webserver deployment in the application-2 namespace. After this configuration is set, traffic from anomalous pods will be flagged and an alert will be initiated. To enable alerting, toggle the switch next to Alert on baseline violations -

To create an anomalous pod in the application-1 namespace, apply the following deployment resource to cluster-a -

<cluster-a> $ oc apply -f https://raw.githubusercontent.com/michaelkotelnikov/opp-network-blog/master/anomalous-app/deployment.yaml

After the anomalous application is deployed, try to initiate traffic towards the web server instance in the application-2 namespace -

<cluster-a> $ oc rsh anomalous-5dfbf568cc-g6pkc
<anomalous-pod (cluster-a)> $ curl http://webserver.application-2.svc:8080
<html><body><h1>It works!</h1></body></html>

Note that as soon as the traffic is initiated from the anomalous pod, the Network Graph in Red Hat Advanced Cluster Security updates with a new flow. The new network flow is marked as anomalous -

The alert can also be seen at the Violation dashboard in Red Hat Advanced Cluster Security or any other platform that integrates with RHACS violations -

Conclusion

Maintaining network compliance in OpenShift environments is important, and it gets hard as applications and clusters scale: environments drift from compliance, network policies grow complex, and anomalous traffic goes unmonitored at runtime.

Red Hat OpenShift Platform Plus helps compliance officers, SecOps teams, and SREs address these complexities: it provides distributed compliance scans using the Compliance Operator, generates NetworkPolicy resources using the Advanced Cluster Security baselining feature, spreads NetworkPolicy resources across the OpenShift cluster fleet using Advanced Cluster Management, and monitors network traffic at runtime to maintain the integrity of the organizational network baseline.

Michael Kotelnikov, Cloud Consultant, Red Hat