Hosted Control Plane on OpenShift Virtualization (KubeVirt)
Introduction
Hosted Control Planes (HCP) run OpenShift control planes at scale as workloads on a single central management cluster, giving each customer administrative permissions and full control over their own cluster while keeping customers separated from one another. This approach addresses both cost efficiency and time-to-provision, and maintains a clear separation between management and workloads. The resulting clusters are fully compatible with standard Kubernetes toolchains and compliant with OpenShift Container Platform.
In this article, I will explore Cluster-as-a-Service with virtual worker nodes, as provided by Red Hat OpenShift Virtualization.
Prerequisites
- OCP version 4.14+
- RBD Storage solution (CSI)
- OpenShift Hub cluster
- Bare Metal nodes
- MetalLB operator configured
- Pull secret file
- Base domain
- OCP managed cluster must be configured with OVNKubernetes as the default pod network CNI.
- OCP managed cluster must have wildcard DNS routes enabled.
Let’s start :)
But first, what is OpenShift Virtualization?
OpenShift Virtualization (KubeVirt) lets developers and administrators bring VMs, as they are, into Kubernetes and the containerized world. Once there, they can run multi-tier workloads in a single, declarative environment. This allows developers to focus on the containerized parts of the workload first, without slowing the modernization process while trying to figure out what to do with applications that are still running in VMs. With OpenShift Virtualization, containers and VMs can be managed using the same cloud-native tools and processes.
OpenShift Virtualization allows organizations to work with a single modern platform, simplifying management and reducing costs. For more information, see the OpenShift Virtualization documentation.
MetalLB Operator
To provide ingress traffic capabilities to our Hosted Cluster, we will use LoadBalancer services backed by the MetalLB operator. The MetalLB address pool will be used by the hosted cluster for the API endpoint and for TCP/UDP services.
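As a reference, a minimal MetalLB configuration on the Hub cluster might look like the sketch below. The pool name and the address range are assumptions; replace the range with free addresses from your own network, and make sure the namespace matches your MetalLB installation.
oc apply -f - <<EOF
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: hcp-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.240-192.168.1.250   # assumption: replace with a free range in your network
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: hcp-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - hcp-pool
EOF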
OpenShift Virtualization Operator
Go to the OperatorHub to install the OpenShift Virtualization operator.
Advanced Cluster Management and Multicluster Engine Operators
Go to the OperatorHub to install the Advanced Cluster Management (ACM) and Multicluster Engine operators.
Creating a Hosted Cluster with OpenShift Virtualization
Ingress controllers should be able to admit wildcard routes:
oc patch ingresscontroller -n openshift-ingress-operator default --type=json -p '[{"op": "add", "path": "/spec/routeAdmission", "value": {"wildcardPolicy": "WildcardsAllowed"}}]'
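To verify that the policy was applied:
oc get ingresscontroller default -n openshift-ingress-operator -o jsonpath='{.spec.routeAdmission.wildcardPolicy}'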
HCP CLI
Once ACM is installed, the hcp command-line tool can be downloaded:
curl -O https://hcp-cli-download-multicluster-engine.apps.aelfassy.com/linux/amd64/hcp.tar.gz
tar xvfz hcp.tar.gz -C /usr/local/bin/
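A quick sanity check that the binary is extracted and on your PATH:
hcp --help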
Create a Hosted Cluster
Using the hcp cli:
hcp create cluster kubevirt \
--name aelfassy-demo \
--release-image quay.io/openshift-release-dev/ocp-release:4.14.9-x86_64 \
--node-pool-replicas 3 \
--pull-secret ~/pull-secret.json \
--memory 6Gi \
--cores 2
Optionally, add --etcd-storage-class <storage-class-name> to place etcd on a dedicated storage class (for example, one backed by NVMe disks).
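Once the command returns, the control plane components start rolling out; you can track progress with the following commands: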
oc get pod -n clusters-aelfassy-demo
oc get --namespace clusters hostedclusters
A hosted cluster view, including control plane components, NodePool, and more, will appear in the Hub cluster → clusters view
The hosted cluster’s workers will be VMs
The hosted cluster’s *.apps record is created automatically and will be part of the Hub cluster’s *.apps domain
The PVCs for the hosted cluster’s etcd pods and worker VMs are created automatically
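You can see these PVCs on the Hub cluster; the namespace follows the clusters-<cluster-name> pattern:
oc get pvc -n clusters-aelfassy-demo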
Login to the Hosted Cluster
In the cluster installation progress section, click Details to obtain the Kubeconfig, the kubeadmin password, and the console URL.
Check the nodes in the hosted cluster’s console
The operating system for the worker nodes will be CoreOS; the CoreOS disk images are imported automatically.
The storage class in the hosted cluster is created automatically
The hosted cluster’s API record is created automatically by the MetalLB operator
oc get service -n clusters-aelfassy-demo
Login to the Hosted Cluster with hcp cli
hcp create kubeconfig --name aelfassy-demo > aelfassy-demo-kubeconfig
oc get node --kubeconfig=aelfassy-demo-kubeconfig
Check the hosted cluster’s nodepool in the Hub cluster
oc -n clusters get nodepool
Add a new worker to the hosted cluster
oc -n clusters scale nodepool aelfassy-demo --replicas=4
oc get vm -n clusters-aelfassy-demo
Auto Scaling for our Hosted Cluster
The option described in this section enables automatic and dynamic resource management, allowing the pool of servers to be utilized optimally across multiple clusters as needed.
oc -n clusters patch nodepool aelfassy-demo --type=json -p '[{"op": "remove", "path": "/spec/replicas"},{"op":"add", "path": "/spec/autoScaling", "value": { "max": 6, "min": 3 }}]'
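To confirm the change, check that replicas is gone and autoScaling is set on the NodePool:
oc -n clusters get nodepool aelfassy-demo -o jsonpath='{.spec.autoScaling}'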
NOTE: You can create a new nodepool whose nodes have different resources from the default one, as shown in the sketch below.
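A hedged sketch of creating an additional pool with larger VMs; the pool name and sizes are illustrative, and you should verify the flag names against hcp create nodepool kubevirt --help for your version:
hcp create nodepool kubevirt \
--cluster-name aelfassy-demo \
--name aelfassy-demo-large \
--node-count 2 \
--memory 16Gi \
--cores 4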
Hosted cluster workloads with TCP/UDP service
To expose workloads on TCP/UDP ports, create a Service of type LoadBalancer in the hosted cluster; it automatically draws an address from the pool defined in the Hub cluster, just like the automatic assignment of the Hosted cluster’s API address.
Create a service of type LoadBalancer (hosted cluster)
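A minimal example, assuming a deployment labeled app: demo-app that listens on port 8080 already exists in the hosted cluster (the name and port are illustrative):
oc --kubeconfig=aelfassy-demo-kubeconfig apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: demo-app-lb
spec:
  type: LoadBalancer
  selector:
    app: demo-app
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
EOF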
Check the services in the Hub cluster
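On the Hub cluster, a corresponding LoadBalancer service is created in the hosted cluster's namespace, with an external IP taken from the MetalLB pool:
oc get service -n clusters-aelfassy-demo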
NOTE: The MetalLB operator should not be installed on a hosted cluster
Command to destroy HCP cluster :)
hcp destroy cluster kubevirt --name <cluster-name>
NOTE: Although GitOps is not within the scope of this article, it is important to note that the cluster creation process can be managed automatically using a GitOps approach.
Now, the hosted cluster is ready! You can use your hosted cluster and rule the world!