Initially, our Kubernetes platform was built around a single, large cluster. In this model, each tenant's workloads ran on dedicated nodes, but all tenants shared a common control plane. Network segmentation was achieved by assigning nodes to specific tenants, with pod scheduling restricted to designated namespaces through node selectors. Access to these namespaces was controlled by RBAC, and only cluster administrators could modify node selector labels. For example, tenant-a had exclusive access to namespace-a, backed by node-a-1 and node-a-2, while tenant-b used namespace-b, backed by node-b-1 and node-b-2.
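To make the model concrete, the sketch below shows one way such per-tenant segmentation can be expressed, using a namespace-level node selector and an RBAC binding. It assumes the PodNodeSelector admission plugin and uses illustrative names; it is not our exact configuration.

```yaml
# Sketch of per-tenant segmentation in the shared cluster
# (assumes the PodNodeSelector admission plugin; names are illustrative).
apiVersion: v1
kind: Namespace
metadata:
  name: namespace-a
  annotations:
    # Pods in this namespace may only be scheduled onto tenant-a's nodes.
    scheduler.alpha.kubernetes.io/node-selector: tenant=tenant-a
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tenant-a-edit
  namespace: namespace-a
subjects:
  - kind: Group
    name: tenant-a-developers    # illustrative tenant group
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                     # built-in ClusterRole, scoped to this namespace
  apiGroup: rbac.authorization.k8s.io
```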
This configuration offered several advantages that were particularly appealing during our early adoption phase of Kubernetes.
However, as the scale and complexity of our workloads have grown, the limitations of a single shared cluster have become increasingly apparent.
Additionally, our shared cluster platform was deployed on our pre-existing virtualization stack. While generally effective, this setup made it difficult to run clusters without deep integration with the underlying vendor technology.
In response to these challenges, we have developed the Intility Developer Platform 2.0, or Platform 2.0 for short, which represents a significant evolution in our approach to Kubernetes infrastructure.
Platform 2.0 is built on a robust bare-metal Kubernetes cluster, referred to as the "hub cluster". This foundational cluster hosts multiple tenant clusters using two key technologies: Hosted Control Planes and KubeVirt.
Hosted Control Planes (HCP) allows us to run the Kubernetes control plane components, such as the API server, etcd, and controller manager, as managed services. This model significantly reduces the overhead associated with maintaining separate control planes for each tenant cluster, enhancing our ability to scale while reducing resource consumption and provisioning time. Hosting clusters in this manner also simplifies management at scale, as they all exist as native Kubernetes resources within the hub cluster.
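To make this concrete, a hosted cluster is declared as a HostedCluster resource on the hub cluster. The trimmed sketch below assumes HyperShift's v1beta1 API; exact fields vary by version, and the names and release image are illustrative.

```yaml
apiVersion: hypershift.openshift.io/v1beta1
kind: HostedCluster
metadata:
  name: tenant-a
  namespace: clusters
spec:
  release:
    image: quay.io/openshift-release-dev/ocp-release:4.16.0-x86_64  # example release
  pullSecret:
    name: tenant-a-pull-secret   # references a pre-created Secret
  platform:
    type: KubeVirt               # worker nodes run as KubeVirt VMs on the hub cluster
  etcd:
    managementType: Managed      # etcd is run and managed alongside the control plane
```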
KubeVirt is a technology that enables running virtual machines in Kubernetes and managing them like any other resource. This declarative approach to handling VMs makes it straightforward to provision and scale nodes for each cluster, and it integrates seamlessly with Hosted Control Planes. Below is a simple example of a KubeVirt VM YAML manifest.
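The manifest is a minimal, illustrative sketch; the machine name, sizing, and disk image are placeholders rather than an actual node definition from our platform.

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
spec:
  running: true                  # start the VM as soon as it is created
  template:
    metadata:
      labels:
        kubevirt.io/domain: example-vm
    spec:
      domain:
        cpu:
          cores: 2
        resources:
          requests:
            memory: 4Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest  # illustrative disk image
```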
KubeVirt enables the use of Kubernetes as a hypervisor for virtual machines; combined with HCP, this transforms Kubernetes into a hypervisor for Kubernetes clusters.
If you had asked a Kubernetes admin five years ago whether running a cluster on bare metal was a good idea, the answer would likely have been no. However, thanks to projects like Metal3, the Assisted Installer, MetalLB, and many others, it is now a completely viable strategy. Since we are going to run hosted cluster nodes as VMs in the hub cluster, bare metal was the obvious choice.
As mentioned, a bare-metal Kubernetes cluster, which we call the hub cluster, serves as the foundation of the platform. It is spread across three sites in Intility's existing data center infrastructure. Almost all automation and integration on the platform will pass through this layer in some way.
The hub cluster will run operators such as Multi-Cluster Engine, Rook, KubeVirt, HyperShift, and Crossplane to fulfill its main purpose: hosting other Kubernetes clusters, along with their nodes, in the form of KubeVirt VMs.
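The nodes of a hosted cluster are declared the same way, through a NodePool whose platform is KubeVirt. As with the HostedCluster sketch above, this is illustrative, and field names may differ slightly between HyperShift versions.

```yaml
apiVersion: hypershift.openshift.io/v1beta1
kind: NodePool
metadata:
  name: tenant-a-workers
  namespace: clusters
spec:
  clusterName: tenant-a          # the HostedCluster these nodes belong to
  replicas: 3                    # three KubeVirt VMs are created on the hub cluster
  release:
    image: quay.io/openshift-release-dev/ocp-release:4.16.0-x86_64
  platform:
    type: KubeVirt
    kubevirt:
      compute:
        cores: 4                 # illustrative sizing for each worker VM
        memory: 16Gi
```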
Inside the hosted clusters, we have standardized on Cilium as the default CNI. Cilium is built on eBPF and gives us detailed, real-time insight into all traffic in a cluster with minimal performance overhead. This visibility is crucial for both troubleshooting and optimizing network performance.
Cilium also supports advanced network policies, which are essential for ensuring the security and compliance of our multi-tenant environment. With Cilium, we can define fine-grained network policies that control the flow of traffic between different parts of the cluster, ensuring that each tenant’s data remains isolated and secure.
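As an illustration of what such a policy can look like, the CiliumNetworkPolicy below limits ingress to a tenant's API pods. The namespace, labels, and port are made-up examples, not a policy from our platform.

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: tenant-a            # illustrative namespace
spec:
  endpointSelector:
    matchLabels:
      app: api                   # the policy applies to the tenant's API pods
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend        # only the frontend may connect...
      toPorts:
        - ports:
            - port: "8080"       # ...and only on this port
              protocol: TCP
```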
Our new platform leverages Kubernetes to enhance automation and scalability, with Crossplane playing a pivotal role. By integrating Crossplane, we can automate complex tasks and centralize management, significantly simplifying the deployment and maintenance processes.
Crossplane allows us to define a single resource that encapsulates all the necessary components for a deployment, abstracting away the complexity. For instance, deploying a cluster can be as simple as defining a single YAML resource.
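The example below is hypothetical; the API group, kind, and fields are illustrative stand-ins for our actual definition:

```yaml
# Hypothetical claim -- the API group, kind, and fields are illustrative.
apiVersion: platform.intility.example/v1alpha1
kind: Cluster
metadata:
  name: my-team-cluster
  namespace: my-team
spec:
  version: "4.16"     # desired cluster release
  nodeCount: 3        # number of worker VMs
  nodeSize: medium    # abstracted sizing instead of raw CPU/memory figures
```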
This single definition triggers the deployment of all underlying resources required for a cluster.
A complete Intility Managed Cluster requires close to 20 different resources, both inside and outside the hub cluster, to meet our standard, so Crossplane lets us simplify this significantly.
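Under the hood, a Crossplane Composition maps that single claim onto the full set of composed resources. The fragment below sketches just two of them to show the idea; the composite type, patch paths, and names are hypothetical, not our production Composition.

```yaml
apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
  name: intility-managed-cluster                     # hypothetical name
spec:
  compositeTypeRef:
    apiVersion: platform.intility.example/v1alpha1   # hypothetical composite type
    kind: XCluster
  resources:
    # The hosted control plane (one of the ~20 composed resources).
    - name: hosted-control-plane
      base:
        apiVersion: hypershift.openshift.io/v1beta1
        kind: HostedCluster
        spec:
          platform:
            type: KubeVirt
      patches:
        - fromFieldPath: spec.version
          toFieldPath: spec.release.image
          transforms:
            - type: string
              string:
                fmt: "quay.io/openshift-release-dev/ocp-release:%s-x86_64"
    # The worker pool backed by KubeVirt VMs.
    - name: worker-pool
      base:
        apiVersion: hypershift.openshift.io/v1beta1
        kind: NodePool
        spec:
          platform:
            type: KubeVirt
      patches:
        - fromFieldPath: spec.nodeCount
          toFieldPath: spec.replicas
```

In practice, this means the standard can evolve in one place, in the Composition, rather than being adjusted per cluster by hand.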
Hosted clusters will be equipped with a suite of security tools, integrated by Intility into what we call the Threat Detection Agent. This agent monitors the Linux kernel and audit logs from the nodes for abnormal and suspicious behavior, generating alarms that are processed by AI. Examples of such behavior include unexpected spikes in network traffic, unauthorized access attempts, and unusual resource consumption.
The Threat Detection Agent gathers more information about the affected container and, when a high likelihood of a threat is detected, implements mitigation measures. For instance, it can automatically isolate or delete affected containers to prevent further damage. In other cases, the agent suggests possible mitigation measures, such as applying patches or adjusting network policies. The Developer Portal will display the alarms alongside detailed reports from the agent.
Even though all the new tech backing the platform is very exciting for an administrator, from the user's perspective it might all be irrelevant. Why would you want a cluster managed by Intility in a solution like this when you could get one with a few clicks in DigitalOcean or GCP?
As an end user, you might not care that your control plane is hosted or that the nodes are actually KubeVirt VMs running in an underlying Kubernetes cluster. You just want your cluster, and you want it to work, with the ability to create and destroy clusters with ease if need be.
To truly add value on top of all this, we have made a management interface with unique features that simplify and streamline cluster operations. Our interface offers intuitive controls, real-time monitoring, and automated maintenance tasks, ensuring that your clusters are not only easy to deploy but also effortless to manage. With built-in scalability, security enhancements, and seamless integration with your existing workflows, we aim to provide a robust, user-friendly solution that meets your needs and exceeds your expectations.