Upgrade OpenShift Cluster in a Disconnected Environment Using Advanced Cluster Management
Introduction
Cloud architecture is changing: many organizations are realizing that their cloud will be distributed, with smaller clusters spread across several locations that must carry workloads reliably to the edge. Along with the benefits of a distributed cloud comes the need to manage it thoroughly; in particular, ensuring that every cluster is upgraded to new versions is crucial for proper management.
The purpose of this article is to describe how to upgrade OpenShift clusters using Red Hat Advanced Cluster Management.
Prerequisites
- OpenShift cluster with ACM and UpdateService operators installed (Hub cluster)
- OpenShift cluster managed by a Hub cluster
- Local registry with the relevant installation images
The hub cluster is the cluster that serves the update graph (similar to the public Red Hat OpenShift Update Graph), and the managed clusters use that graph to update themselves.
The information was obtained from the Cincinnati git repo.
Steps on the Hub Cluster
Create Graph Init Container
Log into the mirror registry
podman login registry.aelfassy.local:5000 -u [registry-user] -p [registry-pass]
Clone Cincinnati repo and enter the dir
git clone https://github.com/openshift/cincinnati-operator.git
cd cincinnati-operator
Build and push graph data init
podman build -f ./dev/Dockerfile -t registry.aelfassy.local:5000/cincinnati-graph-data-container:latest
podman push registry.aelfassy.local:5000/cincinnati-graph-data-container:latest
Certificate
A self-signed mirror registry certificate must be injected into a ConfigMap on the hub cluster; this ConfigMap will be attached to the UpdateService instance so that it can communicate with the mirror registry.
Create the trusted-ca ConfigMap in the openshift-config namespace:
apiVersion: v1
kind: ConfigMap
metadata:
  name: trusted-ca
  namespace: openshift-config
data:
  updateservice-registry: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
Add the additionalTrustedCA to image.config/cluster — MCP might restart with this
$ oc edit image.config.openshift.io cluster
spec:
  additionalTrustedCA:
    name: trusted-ca
Deploy instance from UpdateService operator
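As a sketch, an UpdateService custom resource for this environment might look like the following; the instance name, replica count, and the releases repository path are assumptions based on the registry used earlier in this article:

```yaml
apiVersion: updateservice.operator.openshift.io/v1
kind: UpdateService
metadata:
  name: aelfassy
  namespace: openshift-update-service
spec:
  replicas: 1
  # Repository holding the mirrored release images (assumed path)
  releases: registry.aelfassy.local:5000/ocp4/release
  # Graph-data init container built and pushed earlier
  graphDataImage: registry.aelfassy.local:5000/cincinnati-graph-data-container:latest
```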
Now a new update service instance will start
$ oc get pods -n openshift-update-service
NAME                        READY   STATUS    RESTARTS   AGE
aelfassy-84b7c5d4d9-xvlsw   2/2     Running   0          32s

$ oc get routes -n openshift-update-service
NAME                                HOST/PORT                                                                    PATH   SERVICES                      PORT            TERMINATION   WILDCARD
updateservice-policy-engine-route   updateservice-policy-engine-route-openshift-cincinnati.apps.aelfassy.local          updateservice-policy-engine   policy-engine   edge/None     None
Now we should be able to curl the update service route API to get back upgrade options
curl --silent --header 'Accept:application/json' "https://updateservice-policy-engine-route-openshift-cincinnati.apps.aelfassy.local/api/upgrades_info/v1/graph?channel=stable-4.10" -k | jq '.'
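The response is a Cincinnati graph: nodes are releases and edges are valid upgrade paths. As an offline sketch (the sample JSON below is fabricated to mimic that structure, not real API output), the versions present in a graph can be listed with jq:

```shell
# Create a sample graph response mimicking the policy-engine API structure
cat > sample-graph.json <<'EOF'
{"nodes":[{"version":"4.10.14","payload":"..."},{"version":"4.10.15","payload":"..."}],"edges":[[0,1]]}
EOF
# List the versions present in the graph
jq -r '.nodes[].version' sample-graph.json
```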
Update the Cluster Version
POLICY_ENGINE_GRAPH_URI="$(oc -n openshift-update-service get updateservice updateservice -o jsonpath='{.status.policyEngineURI}/api/upgrades_info/v1/graph')"
PATCH="{\"spec\":{\"upstream\":\"${POLICY_ENGINE_GRAPH_URI}\"}}"
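To sanity-check the quoting, the same construction can be run offline with a placeholder URI (example.local is an assumption, not the real route):

```shell
# Build the patch with a placeholder URI to verify the JSON escaping
POLICY_ENGINE_GRAPH_URI="https://example.local/api/upgrades_info/v1/graph"
PATCH="{\"spec\":{\"upstream\":\"${POLICY_ENGINE_GRAPH_URI}\"}}"
echo "$PATCH"
```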
Get the Ingress Router Certificate
If the edge (managed) cluster is different from the cluster where UpdateService was installed, you can add the CA certificate either on the node or in the proxy configuration
Run the following command
openssl s_client -connect console-openshift-console.apps.aelfassy.local:443
Save the certificate from the output (the block between -----BEGIN CERTIFICATE----- and -----END CERTIFICATE-----) to a file named ingress.crt; the following steps reference this file.
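A sketch of filtering that output down to just the PEM block; the printf below simulates s_client output so the filter can be verified offline, while in practice you would pipe the openssl command above through the same sed expression:

```shell
# Simulated `openssl s_client` output (the real output also includes handshake details)
printf 'CONNECTED(00000003)\n-----BEGIN CERTIFICATE-----\nMIIB...sample...\n-----END CERTIFICATE-----\nDONE\n' \
  | sed -n '/-----BEGIN CERTIFICATE-----/,/-----END CERTIFICATE-----/p' > ingress.crt
cat ingress.crt
```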
Steps on the Managed Cluster
Add the CA cert from the previous step — MCP might restart with this
oc create configmap trusted-updateservice-ca -n openshift-config --from-file=ca-bundle.crt=ingress.crt
oc patch proxy/cluster -p '{"spec":{"trustedCA":{"name":"trusted-updateservice-ca"}}}' --type merge
Apply release signature verification config
To allow release images to be verified, add a release signature ConfigMap
export release=<version>
export digest=$(oc adm release info ${DISCONNECTED_REGISTRY}/ocp4/release:${release}-x86_64 --registry-config=${XDG_RUNTIME_DIR}/containers/auth.json -o json | jq -r .digest)
export signature=$(curl -s "https://mirror.openshift.com/pub/openshift-v4/signatures/openshift/release/${digest//:/=}/signature-1" | base64 -w0 && echo)
cat <<EOF | oc create -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: release-image-${release}
  namespace: openshift-config-managed
  labels:
    release.openshift.io/verification-signatures: ""
binaryData:
  ${digest//:/-}: ${signature}
EOF
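The signature URL and the binaryData key above rely on two bash pattern substitutions applied to the image digest. An offline demonstration with a sample digest (the value is hypothetical):

```shell
digest="sha256:abc123"
# ':' replaced by '=' for the mirror.openshift.com signature path
echo "${digest//:/=}"
# ':' replaced by '-' for the ConfigMap binaryData key
echo "${digest//:/-}"
```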
Patch the ClusterVersion, with $PATCH set to the value that was generated on the hub cluster in the previous step
$ oc patch clusterversion version -p $PATCH --type merge
$ oc get clusterversion version -o yaml
apiVersion: config.openshift.io/v1
kind: ClusterVersion
metadata:
  creationTimestamp: "2022-06-06T14:29:07Z"
  generation: 4
  name: version
  resourceVersion: "466031"
  uid: 734e421b-f3e0-11fa-8b67-2crd5c6057b6
spec:
  channel: stable-4.10
  clusterID: crd5c6-11fa-4875-f3e0-734e421badd3
  upstream: https://updateservice-policy-engine-route-openshift-cincinnati.apps.aelfassy.local/api/upgrades_info/v1/graph
Let’s upgrade our cluster
Once we have finished configuring the hub cluster and the managed cluster, we can finally press the magic buttons!
Select the channel for the cluster
Select the cluster and click on the ‘Upgrade clusters’ button.
Select the new version and click on the ‘Upgrade’ button.
To check the upgrade status, click on the relevant cluster from the managed clusters list -> Upgrading to 4.10.15 -> View upgrade details
From the edge cluster console, we can see the progress
CLI commands can also be used to check the progress
After approximately one hour, the cluster will be ready!
Conclusion
Now you don’t need to update every cluster individually: with one central hub cluster, all of your managed clusters can be updated from a single graph.
Administrators can easily manage OpenShift versions and updates from one cluster to many others using the OpenShift Update Service operator.