Deploying Single Node Bare Metal OpenShift Using Advanced Cluster Management
In this article I describe the procedure for deploying a single node, bare metal OpenShift cluster using Red Hat Advanced Cluster Management for Kubernetes 2.4 CIM (Central Infrastructure Management). The purpose of this article is to provide a complete walkthrough of the process, from the moment you install RHACM (Red Hat Advanced Cluster Management for Kubernetes) until you have a running SNO (Single Node OpenShift) on a bare metal server in your environment.
Environment Setup
The environment setup I’m using for this blog consists of -
- Red Hat OpenShift Container Platform 4.9.0 (Used for installing RHACM)
- Red Hat Advanced Cluster Management for Kubernetes 2.4.0
- A fake bare metal server, i.e. a virtual machine acting as a physical host (32 GB RAM, 12 CPUs, 160 GB HDD)
To set up and configure Advanced Cluster Management in your environment, follow the product’s documentation or blogs that cover the topic.
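For quick reference, the sketch below shows one way to install the RHACM 2.4 operator through OLM and then create the MultiClusterHub. The namespace, operator names, and channel follow the documented defaults, and the MultiClusterHub should only be created once the operator has finished installing -
cat <<EOF | oc create -f -
apiVersion: v1
kind: Namespace
metadata:
  name: open-cluster-management
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: open-cluster-management
  namespace: open-cluster-management
spec:
  targetNamespaces:
    - open-cluster-management
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: advanced-cluster-management
  namespace: open-cluster-management
spec:
  channel: release-2.4
  name: advanced-cluster-management
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF

# Wait for the operator installation to complete, then create the hub:
cat <<EOF | oc create -f -
apiVersion: operator.open-cluster-management.io/v1
kind: MultiClusterHub
metadata:
  name: multiclusterhub
  namespace: open-cluster-management
spec: {}
EOF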
DNS & DHCP Setup
In this installation, I'm using a BIND DNS server and a dhcpd DHCP server to set up networking for the SNO deployment.
The DNS zone I’m using for the deployment —
$TTL 86400
@ 1D IN SOA ns.sno.ocp.lab. hostmaster.sno.ocp.lab. (
                2002022419 ; serial
                3H         ; refresh
                15         ; retry
                1w         ; expire
                3h         ; nxdomain ttl
)
        IN NS ns.sno.ocp.lab.

$ORIGIN sno.ocp.lab.
; server host definitions
ns       IN A 10.0.0.222
api      IN A 10.0.0.45
master-0 IN A 10.0.0.45

$ORIGIN apps.sno.ocp.lab.
* IN A 10.0.0.45
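A quick way to confirm the zone is serving the expected records before starting the deployment is to query the DNS server directly; the answers shown follow from the zone definition above -
$ dig +short api.sno.ocp.lab @10.0.0.222
10.0.0.45
$ dig +short master-0.sno.ocp.lab @10.0.0.222
10.0.0.45
$ dig +short console-openshift-console.apps.sno.ocp.lab @10.0.0.222
10.0.0.45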
The DHCP configuration I’m using for the deployment -
default-lease-time 600;
max-lease-time 7200;
authoritative;
log-facility local7;

subnet 10.0.0.0 netmask 255.255.255.0 {
option subnet-mask 255.255.255.0;
option broadcast-address 10.0.0.255;
option domain-name-servers 10.0.0.222;
option domain-name "sno.ocp.lab";
option ntp-servers 10.0.0.222;
option routers 10.0.0.254;
}

host sno {
option host-name "master-0.sno.ocp.lab";
hardware ethernet 56:6f:80:a6:00:25;
fixed-address 10.0.0.45;
}
Central Infrastructure Management
Central Infrastructure Management (CIM) is an implementation of the Assisted Installer service in RHACM. CIM is not configured out of the box because it requires persistent storage. The CIM instance can be configured manually by running the following command on the hub cluster -
cat <<EOF | oc create -f -
apiVersion: agent-install.openshift.io/v1beta1
kind: AgentServiceConfig
metadata:
  name: agent
  namespace: open-cluster-management
spec:
  databaseStorage:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi
  filesystemStorage:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 20Gi
EOF
To validate that CIM is configured correctly, run the following commands on the hub cluster -
$ oc get pods -n open-cluster-management | grep assisted
assisted-image-service-77b65c6b55-xxfz9 1/1 Running 0 82s
assisted-service-dcfd5d884-9g7cz 2/2 Running 1 82s

$ oc get pvc -n open-cluster-management
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
assisted-service Bound pvc-ad0e84f3-b43b-4811-a932-f1f8fe7718e7 20Gi RWO gp2 32s
postgres Bound pvc-100a1dad-8f83-449c-9bd8-ea3b09f19e28 10Gi RWO gp2 32s
All pods should be running and all PVCs should be bound.
Infrastructure Environment
Now that you have configured CIM, you can configure the infrastructure environment. The infrastructure environment lets you manage and aggregate physical hosts in your organization. To create an infrastructure environment, log into RHACM's dashboard and navigate to Infrastructure -> Infrastructure environments -> Create infrastructure environment. Fill in the fields in the form; the result may look like this -
The pull secret can be obtained at https://console.redhat.com/openshift/install/
Click Create to initialize the infrastructure environment.
After the infrastructure environment is created, navigate to Infrastructure -> Infrastructure environments -> <infrastructure environment name> -> Add host. A window should pop up with a discovery ISO URL. The discovery ISO is used by CIM to inject the discovery agent into physical hosts. The agent collects data about the physical host and reports back to CIM.
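Behind the scenes, the infrastructure environment is backed by an InfraEnv custom resource. A minimal sketch of an equivalent manifest is shown below; the name, namespace, pull secret name, and SSH key are placeholders I chose for illustration -
cat <<EOF | oc create -f -
apiVersion: agent-install.openshift.io/v1beta1
kind: InfraEnv
metadata:
  name: sno
  namespace: sno
spec:
  # Secret of type kubernetes.io/dockerconfigjson holding the console.redhat.com pull secret
  pullSecretRef:
    name: pull-secret
  # Public key used to SSH into discovered hosts for troubleshooting
  sshAuthorizedKey: "ssh-rsa AAAA... user@example"
EOF
Once the resource is processed, the discovery ISO URL should also appear in its status and can be fetched with oc get infraenv sno -n sno -o jsonpath='{.status.isoDownloadURL}'.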
Download the ISO and mount it as virtual media on your physical server. Boot the server from the discovery ISO. The server boots and fetches the Red Hat Enterprise Linux CoreOS live rootfs image for the discovery agent to run on -
The physical server must have networking configured (DHCP) for this stage to be successful.
After a couple of minutes, a reference to the physical server will appear in the RHACM dashboard under Infrastructure -> Infrastructure environments -> <infrastructure environment name> -> Hosts -
Click Approve host to continue. The host status should now be Ready.
The Approve host option is used as a security measure in CIM.
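If you prefer the CLI, every discovered host is represented by an Agent resource in the infrastructure environment's namespace, and the same approval can be done with a patch (the namespace and resource name shown are placeholders) -
$ oc get agents -n sno
$ oc patch agent <agent-name> -n sno --type merge -p '{"spec":{"approved":true}}'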
Deploying SNO
After setting up the host, you're ready to proceed with the cluster creation. Before you begin, make sure to run the following command on your hub cluster to allow SNO creation -
$ oc patch hiveconfig hive --type merge -p '{"spec":{"targetNamespace":"hive","logLevel":"debug","featureGates":{"custom":{"enabled":["AlphaAgentInstallStrategy"]},"featureSet":"Custom"}}}'
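This patch enables Hive's AlphaAgentInstallStrategy feature gate, which agent-based (CIM) installations rely on. You can read the configuration back to confirm it was applied -
$ oc get hiveconfig hive -o jsonpath='{.spec.featureGates}{"\n"}'
{"custom":{"enabled":["AlphaAgentInstallStrategy"]},"featureSet":"Custom"}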
Create a credential for your cluster deployment to use during the provisioning process. To create the credential, navigate to Credentials -> Add credential -> On Premise. Follow the setup process -
Click Add to create the credential.
After creating the credential, navigate to Infrastructure -> Clusters -> Create cluster. Choose the On-premises provider and select the credential you created in the previous step -
Click Next.
On the next page of the setup, provide the cluster name and make sure that the Install single node OpenShift (SNO) checkbox is checked. Validate the pull secret and base domain references -
Make sure that <cluster-name>.<base-domain> matches the DNS zone you have created for the SNO deployment.
Click Next.
On the next page of the setup, associate any Ansible automation you'd like to run at different stages of the cluster's lifecycle -
Click Next.
On the next page of the setup, validate that the information you have provided so far is correct -
Click Save.
On the next page of the setup, associate your physical server with the SNO deployment -
Click Next.
On the next page of the setup, wait until the physical server's status changes from Binding to Bound. The server is now ready to become a Single Node OpenShift instance. Make sure to select the subnet you'd like to deploy SNO on. Then paste the SSH public key that will be used for future troubleshooting of the physical server -
Click Next.
On the next page of the setup, validate that the information you have provided so far is correct -
Click Save and install.
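Under the hood, the wizard generates the Hive resources that drive the agent-based installation, most notably a ClusterDeployment and an AgentClusterInstall. Assuming the cluster name and namespace are both sno, they can be inspected from the hub -
$ oc get clusterdeployment,agentclusterinstall -n sno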
The installation process of SNO begins. Click See cluster details to monitor the deployment's progress -
Note that the physical server will reboot several times. Make sure to change the boot order from the discovery ISO to a physical disk now.
You may click View Cluster Events to view the installation progress -
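The same progress information is exposed on the AgentClusterInstall status conditions, so the installation can also be followed from the CLI (again assuming the sno name and namespace) -
$ oc get agentclusterinstall sno -n sno \
    -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.message}{"\n"}{end}'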
When the deployment process is finished, the next screen appears -
Note that the kubeadmin user password and OpenShift console URL are now available at the installation dashboard.
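If you prefer the CLI, the ClusterDeployment on the hub references the secrets that hold the admin credentials once the installation completes; a sketch of retrieving the kubeadmin password, assuming the sno name and namespace -
$ oc extract -n sno secret/$(oc get clusterdeployment sno -n sno \
    -o jsonpath='{.spec.clusterMetadata.adminPasswordSecretRef.name}') --to=-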
SNO is now being imported into RHACM's management stack. RHACM installs the Klusterlet agent on the SNO. When the Klusterlet finishes its installation process, the SNO appears with the Ready status in RHACM's cluster management dashboard.
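On the hub side, the import can also be verified on the ManagedCluster resource; once the Klusterlet is running, the JOINED and AVAILABLE columns should both report True -
$ oc get managedcluster sno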
You may now log in to the cluster and perform final validations that confirm a successful installation -
$ oc login -u kubeadmin -p <kubeadmin-password> https://api.sno.ocp.lab:6443

$ oc get nodes
NAME STATUS ROLES AGE VERSION
master-0.sno.ocp.lab Ready master,worker 53m v1.22.0-rc.0+a44d0f0

$ oc get clusteroperators
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE
...
authentication 4.9.5 True False False 12s
console 4.9.5 True False False 25m
csi-snapshot-controller 4.9.5 True False False 43m
dns 4.9.5 True False False 42m
etcd 4.9.5 True False False 43m
image-registry 4.9.5 True False False 33m
ingress 4.9.5 True False False 33m
insights 4.9.5 True False False 40m
kube-apiserver 4.9.5 True False False 33m
kube-controller-manager 4.9.5 True False False 36m
kube-scheduler 4.9.5 True False False 34m
kube-storage-version-migrator 4.9.5 True False False 48m
machine-api 4.9.5 True False False 39m
machine-approver 4.9.5 True False False 42m
machine-config 4.9.5 True False False 42m
marketplace 4.9.5 True False False 45m
...

$ oc get pods -n open-cluster-management-agent-addon
NAME READY STATUS RESTARTS AGE
klusterlet-addon-appmgr-c4b5b4f89-pwks9 1/1 Running 1 (3m11s ago) 5m36s
klusterlet-addon-certpolicyctrl-575b46c45b-t7f2s 1/1 Running 1 (3m11s ago) 5m34s
klusterlet-addon-iampolicyctrl-864fbbdf55-tcn8c 1/1 Running 0 5m39s
klusterlet-addon-operator-645c68cb-mbfk5 1/1 Running 0 6m18s
klusterlet-addon-policyctrl-config-policy-76579d9498-bdbww 1/1 Running 0 5m37s
klusterlet-addon-policyctrl-framework-74d48966b6-qsjt4 3/3 Running 0 5m37s
klusterlet-addon-search-54cc74968b-8pn4c 1/1 Running 0 5m38s
klusterlet-addon-workmgr-5655fdf4b-9rfxm 1/1 Running 1 (3m11s ago) 5m38s
Conclusion
Red Hat Advanced Cluster Management for Kubernetes 2.4 makes Bare Metal Single Node OpenShift super easy to work with. Try it yourself!