Deploy Bare Metal OpenShift in a Disconnected Environment Using Advanced Cluster Management

Almog Elfassy
6 min read · Apr 11, 2022

Introduction

Cloud architecture is changing: many organizations are realizing that their cloud footprint will be distributed, with smaller clouds spread across several locations that must carry workloads reliably to the edge. To gain the benefits of a distributed cloud, those clouds need to be deployed many times over and then managed thoroughly. Advanced Cluster Management is the answer.

The purpose of this article is to describe how to deploy a three-node bare-metal OpenShift cluster using Red Hat Advanced Cluster Management (RHACM).

Rather than relying on the screenshots alone, I suggest that you read the written instructions. The screenshots are primarily a visual aid, there to confirm you are on the right page at each step.

For other deployment references:

https://docs.openshift.com/container-platform/4.9/scalability_and_performance/ztp-deploying-disconnected.html#ztp-acm-preparing-to-install-disconnected-acm_ztp-deploying-disconnected

Prerequisites

  • OpenShift cluster 4.9.x with the ACM operator installed (the hub cluster)
  • Network connectivity to the Baseboard Management Controller (BMC) of the edge bare-metal nodes
  • External DHCP server for the network addresses, including MAC reservations, because the DNS records must correlate with the assigned IPs
  • DNS records, including PTR records
  • RHCOS live ISO and rootfs images
  • Local registry with the relevant installation images
  • External httpd server hosting the RHCOS live ISO and rootfs files
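
As an illustration of the DNS prerequisite, a forward and reverse zone for a three-node "edge-a" cluster might look like the fragment below. Every hostname and address here is a hypothetical placeholder (I reuse the aelfassy.local domain from the registry examples); adapt them to your own domain and subnet:

```
; Hypothetical BIND zone fragment for an "edge-a" three-node cluster
api.edge-a.aelfassy.local.      IN A   192.168.10.5
api-int.edge-a.aelfassy.local.  IN A   192.168.10.5
*.apps.edge-a.aelfassy.local.   IN A   192.168.10.6
master-0.edge-a.aelfassy.local. IN A   192.168.10.11
master-1.edge-a.aelfassy.local. IN A   192.168.10.12
master-2.edge-a.aelfassy.local. IN A   192.168.10.13

; Matching PTR records in the reverse zone
11.10.168.192.in-addr.arpa.     IN PTR master-0.edge-a.aelfassy.local.
12.10.168.192.in-addr.arpa.     IN PTR master-1.edge-a.aelfassy.local.
13.10.168.192.in-addr.arpa.     IN PTR master-2.edge-a.aelfassy.local.
```

The node IPs must match the DHCP MAC reservations mentioned above, which is why the DHCP and DNS prerequisites go together.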

Central Resources

Create a namespace for the edge cluster

oc create -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: "edge-a"
  labels:
    name: "edge-a"
EOF

Provide OpenShift container platform images for the desired cluster version

oc create -f - <<EOF
apiVersion: hive.openshift.io/v1
kind: ClusterImageSet
metadata:
  name: openshift-4.9.19
  namespace: multicluster-engine
spec:
  releaseImage: registry.aelfassy.local:443/ocp4/openshift4:4.9.19-x86_64
EOF

Create a ConfigMap containing the mirror registry config

oc create -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: assisted-installer-mirror-config
  namespace: multicluster-engine
  labels:
    app: assisted-service
data:
  ca-bundle.crt: |
    -----BEGIN CERTIFICATE-----
    -----END CERTIFICATE-----
  registries.conf: |
    unqualified-search-registries = ["registry.access.redhat.com", "docker.io"]

    [[registry]]
      prefix = ""
      location = "registry.redhat.io/rhacm2"
      mirror-by-digest-only = true

      [[registry.mirror]]
        location = "registry.aelfassy.local:5000/rhacm2"

    [[registry]]
      prefix = ""
      location = "registry.redhat.io/multicluster-engine"
      mirror-by-digest-only = true

      [[registry.mirror]]
        location = "registry.aelfassy.local:5000/multicluster-engine"

    [[registry]]
      prefix = ""
      location = "quay.io/openshift-release-dev/ocp-release"
      mirror-by-digest-only = true

      [[registry.mirror]]
        location = "registry.aelfassy.local:5000/openshift/release"

    [[registry]]
      prefix = ""
      location = "quay.io/openshift-release-dev/ocp-v4.0-art-dev"
      mirror-by-digest-only = true

      [[registry.mirror]]
        location = "registry.aelfassy.local:5000/openshift/release"
EOF
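
To make the effect of those [[registry]] entries concrete, here is a small shell sketch (not part of the deployment) that mimics the rewrite the container runtime performs when it pulls a release image by digest. The digest value is a made-up placeholder:

```shell
# Mimic the mirror rewrite from registries.conf: pulls from the upstream
# location are redirected to the local mirror (digest is a fake placeholder).
src="quay.io/openshift-release-dev/ocp-release@sha256:0123abcd"
mirrored=$(printf '%s' "$src" | sed 's|^quay.io/openshift-release-dev/ocp-release|registry.aelfassy.local:5000/openshift/release|')
echo "$mirrored"
```

Because mirror-by-digest-only is set to true, only digest-based references (@sha256:...) are redirected to the mirror; tag-based pulls would still go to the upstream location, which fails in a disconnected environment.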

Configure persistent storage for Central Infrastructure Management (CIM) and the URLs for the live ISO and rootfs.

oc create -f - <<EOF
apiVersion: agent-install.openshift.io/v1beta1
kind: AgentServiceConfig
metadata:
  name: agent
  namespace: multicluster-engine
spec:
  databaseStorage:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi
  filesystemStorage:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 20Gi
  mirrorRegistryRef:
    name: 'assisted-installer-mirror-config'
  osImages:
    - openshiftVersion: "4.9"
      version: "49.84.202201262103-0"
      url: "http://registry.aelfassy.local/rhcos-4.9.0-x86_64-live.x86_64.iso"
      rootFSUrl: "http://registry.aelfassy.local/rhcos-live-rootfs.x86_64.img"
      cpuArchitecture: "x86_64"
EOF
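
The osImages URLs above only work if the httpd server actually serves those files. Below is a hedged sketch of staging them; the mirror path and the /var/www/html web root are assumptions, and the download commands are commented out so you can verify both before running them:

```shell
# Hypothetical staging of the RHCOS 4.9 live ISO and rootfs on the httpd server.
# The mirror path and /var/www/html web root are assumptions; verify both.
RHCOS_VER="4.9.0"
ISO="rhcos-${RHCOS_VER}-x86_64-live.x86_64.iso"
ROOTFS="rhcos-live-rootfs.x86_64.img"
MIRROR="https://mirror.openshift.com/pub/openshift-v4/x86_64/dependencies/rhcos/4.9/latest"
# curl -L -o "/var/www/html/${ISO}"    "${MIRROR}/${ISO}"
# curl -L -o "/var/www/html/${ROOTFS}" "${MIRROR}/${ROOTFS}"
echo "${ISO} ${ROOTFS}"
```

The file names must match the url and rootFSUrl values in the AgentServiceConfig exactly, or CIM will fail to generate the discovery ISO.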

Check the assisted-service pods in the multicluster-engine namespace to validate that CIM is configured correctly

Create the Image Pull Secret custom resource

oc create -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: assisted-deployment-pull-secret
  namespace: edge-a
stringData:
  .dockerconfigjson: '{"auths":{..."registry.aelfassy.local:443":{"auth":"...,"email":"..."}}}'
EOF
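
The auth value inside .dockerconfigjson is simply base64 of "user:password". A small illustrative sketch follows; the username, password, and email are placeholders, not real credentials:

```shell
# The "auth" field is base64("user:password"); all values here are placeholders.
REG_USER="myuser"
REG_PASS="mypassword"
auth=$(printf '%s:%s' "$REG_USER" "$REG_PASS" | base64)
cat <<JSON
{"auths":{"registry.aelfassy.local:443":{"auth":"${auth}","email":"admin@aelfassy.local"}}}
JSON
```

This is the same JSON shape the Secret above expects in its .dockerconfigjson key.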

Create the InfraEnv custom resource

oc create -f - <<EOF
apiVersion: agent-install.openshift.io/v1beta1
kind: InfraEnv
metadata:
  name: edge-a
  namespace: edge-a
spec:
  sshAuthorizedKey: 'ssh-rsa…'
  pullSecretRef:
    name: assisted-deployment-pull-secret
EOF

NOTE: All of the central resources on the hub cluster, and the Infrastructure Environment for the edge cluster, are now created.

Discovery Hosts

Log in to RHACM’s dashboard and navigate to Infrastructure -> Host inventory; there you will see the infrastructure environment you just created. Click on its name.

Click on the ‘Add host’ button

Download the ISO for the edge nodes. CIM uses the discovery ISO to inject the discovery agent into the physical hosts. The agent aggregates data about each physical host and reports back to CIM.

NOTE: Once a node boots from the discovery ISO, a reference to the physical server will appear on the RHACM dashboard.

Navigate to Infrastructure -> Infrastructure environments -> your infrastructure environment -> Hosts tab, and click ‘Approve host’ to continue. (The host status should now be Ready.)

Create the Cluster

In RHACM’s dashboard, navigate to Credentials and click the ‘Add credential’ button

Click on the On-Premise option

Fill in the fields in the two-step form; the result may look like this:

Review and validate that the information you have provided is correct

In RHACM’s dashboard navigate to: Infrastructure -> Clusters -> Create cluster

Click on the On-Premise option and use the credential that we created in the previous step to fill in the Infrastructure provider credential fields

Fill in the fields in the form; the result may look like this:

Optional: Add Ansible Automations that you would like to run at any stage of the cluster’s runtime

Review and validate that the information you have provided is correct

Here you’ll see the approved nodes added automatically

In the cluster network step, the subnet is discovered automatically, and after that we can configure static IPs for the API and Ingress.

NOTE: In my example the cluster has 3 nodes whose role is ‘automatic’ (that is, chosen by the system), but in a bigger cluster we can choose the nodes’ roles ourselves (master or worker).

Review and validate that the information you have provided is correct

Once the cluster is ready click on the ‘Go to cluster list’ button

The installation status is shown, and after approximately one hour the cluster will be ready!

When the deployment process is complete, you’ll see the above screen with the important details: the kubeadmin user and password, with which you log in to your cluster, and your OpenShift console URL.
All of this remains available on the installation dashboard.

Now you can use your brand new OpenShift cluster and rule the world!

Almog Elfassy

Cloud Architect at Red Hat | DevOps | Lecturer | CCIE #63990