In 2014, Google released Kubernetes, an open source platform that leverages containerization to run services and applications. Google later donated the project to the Cloud Native Computing Foundation (CNCF), which now maintains it. Kubernetes provisions and manages containers: it controls their scheduling and execution while automating operational tasks, and it lets administrators declare the desired state of a deployment. This declarative model enables quick interchangeability of containers, with new containers replacing old ones during migration.
Anatomy of a Kubernetes Cluster
Kubernetes pods, each a group of one or more containers, are the smallest deployable units in Kubernetes orchestration. Each Kubernetes cluster contains a control plane (historically called the “master”) that controls and manages cluster events. A core component of the control plane is the kube-controller-manager, which runs the control loops that watch and reconcile cluster state. Under the control plane are nodes, worker machines that supply IT resources; containers run on top of the nodes. The kube-controller-manager administers control over activities including:
- Replication of pods
- Connection of services to pods
- Service accounts and API access tokens
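To make pod replication concrete, the sketch below is a minimal, hypothetical Deployment manifest (all names and the image tag are illustrative). The deployment controller, run by the kube-controller-manager, continuously reconciles the cluster toward the declared replica count:

```yaml
# Hypothetical example: a Deployment declaring three pod replicas.
# The deployment controller creates a ReplicaSet, which keeps three
# matching pods running at all times, replacing any that fail.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-web        # hypothetical name
spec:
  replicas: 3              # desired state: three identical pods
  selector:
    matchLabels:
      app: example-web
  template:                # pod template used to stamp out replicas
    metadata:
      labels:
        app: example-web
    spec:
      containers:
      - name: web
        image: nginx:1.25  # any container image works here
```

If a node fails and a pod disappears, the replica count drops below the desired state and the controller schedules a replacement automatically.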
The kube-scheduler determines which node will run a given pod's containers. Kubernetes users set up the scheduling parameters: policy rules that define requirements such as data locality, network traffic load, affinity and anti-affinity rules, and restrictions on hardware and software.
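A hypothetical pod spec shows how those constraints are expressed; the kube-scheduler evaluates the affinity rules and resource requests below when filtering candidate nodes (the `disktype` label and all names are assumptions for illustration):

```yaml
# Hypothetical pod: scheduling constraints the kube-scheduler evaluates
# when choosing a node for this pod's containers.
apiVersion: v1
kind: Pod
metadata:
  name: affinity-demo            # hypothetical name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype        # hardware restriction: only nodes
            operator: In         # labeled disktype=ssd qualify
            values: ["ssd"]
  containers:
  - name: app
    image: busybox:1.36
    resources:
      requests:
        cpu: "500m"              # the scheduler only considers nodes
        memory: "256Mi"          # with this much spare capacity
```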
A number of components are needed to make the Kubernetes control-plane-and-node system work. These components provide pod management and maintain the runtime environment. The parts that must run for Kubernetes to operate include:
kubelet component: on each cluster node, the kubelet tracks the pod specs delivered through the Kubernetes API and ensures that the containers they describe are running in the correct pods. The kubelet also monitors container health and keeps each container in good operating condition.
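Health monitoring is typically configured with probes in the pod spec. The fragment below is a hedged sketch (image name, endpoint, and port are hypothetical) of a liveness probe the kubelet runs periodically; on repeated failure, the kubelet restarts the container:

```yaml
# Hypothetical container spec fragment: a liveness probe checked by
# the kubelet. If /healthz stops answering, the container is restarted.
containers:
- name: api
  image: example/api:1.0     # hypothetical image
  livenessProbe:
    httpGet:
      path: /healthz         # endpoint assumed to exist in the app
      port: 8080
    initialDelaySeconds: 10  # grace period before the first check
    periodSeconds: 5         # check every five seconds thereafter
```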
kube-proxy: a network proxy that also runs on each cluster node, maintaining network rules. Through packet-filtering rules, kube-proxy manages communication between the pods and the network. Kubernetes also requires a container runtime to actually run the containers; the runtime integrates with Kubernetes through the Container Runtime Interface (CRI). Two CRI-compatible runtimes that are often used are:
- containerd
- CRI-O
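The network rules kube-proxy maintains are driven by Service objects. As a hypothetical sketch (names and ports are illustrative), the Service below gets a stable cluster IP, and kube-proxy programs forwarding rules on every node so traffic to that IP is spread across the matching pods:

```yaml
# Hypothetical Service: kube-proxy installs packet-forwarding rules
# (iptables or IPVS) on each node so connections to this Service's
# cluster IP are distributed across pods labeled app=example-web.
apiVersion: v1
kind: Service
metadata:
  name: example-web
spec:
  selector:
    app: example-web     # pods carrying this label receive traffic
  ports:
  - port: 80             # port clients connect to on the Service
    targetPort: 8080     # container port traffic is forwarded to
```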
As a free, open source program, Kubernetes features APIs with documented conventions. The structure of the conventions lends itself to simplified, no-nonsense development, while ensuring consistent configuration mechanisms that can be used across a range of use cases. The kube-apiserver component serves the Kubernetes API; users and cluster components alike interact with the cluster through it.
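Those conventions are visible in practice: nearly every Kubernetes object follows the same shape — `apiVersion`, `kind`, `metadata`, and a type-specific payload. A small illustrative object (the name and data are hypothetical; ConfigMap uses `data` where most kinds use `spec`):

```yaml
# The API conventions in practice: the same four-part object shape
# is what keeps the Kubernetes API consistent across resource types.
apiVersion: v1             # API group and version
kind: ConfigMap            # object type
metadata:                  # identity: name, namespace, labels
  name: example-config     # hypothetical name
data:                      # type-specific payload
  LOG_LEVEL: debug
```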
What is Containerization?
It’s a good idea at this point to gain an understanding of containers. Containerization takes a computing system’s resources and provisions them into isolated instances that, albeit similar to virtual machines, are much lighter-weight. Containers isolate software into units, which makes them useful for temporary workloads within pods, such as application migration from one environment to another. Containers are treated as ephemeral: individual instances are expected to be short-lived, disposable, and replaceable.
Due to their temporary existence, containers are impractical to manage manually — creating, starting, monitoring, and destroying them by hand does not scale. Because containers isolate software, it takes many more containers to run an application in a cluster than it takes servers to run it directly, and as the number of disparate containers grows, container behavior becomes difficult to monitor and the metrics containers produce become difficult to measure. There are, however, certain levels at which you can measure:
- Hosts (CPU, memory, and disk)
- Orchestrator (services and volumes)
- Container (CPU, memory, disk, and network)
- Container internals (applications, database and caching)
- Service response times
Kubernetes Helps to Address Challenges of Using Containerization
Kubernetes is useful for administrative creation and monitoring of container clusters. Through automation, Kubernetes can restart containers that are not operating properly, delete containers that aren’t responsive, release application updates, and assess container health.
Kubernetes’ container management capabilities include load balancing across containers (to handle traffic peaks). Kubernetes also manages how and where data is stored – including storage formats, and local vs. cloud locations. Kubernetes administrators can also determine how containers are made available to users, e.g. through Domain Name System (DNS) names and IP addresses.
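Storage management follows the same declarative pattern. In the hedged sketch below (the claim name and storage class are assumptions), a workload requests storage abstractly and Kubernetes binds the claim to a matching volume, whether local or cloud-provisioned:

```yaml
# Hypothetical PersistentVolumeClaim: a workload asks for 10 GiB of
# storage without naming a disk; Kubernetes binds the claim to a
# suitable volume, local or cloud-backed, per the storage class.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-data           # hypothetical name
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
  storageClassName: standard   # assumed storage class name
```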
Kubernetes Provides Container Deployment Insights to Optimize CDLC and Operations
Through automation, Kubernetes can be used to streamline IT project rollouts and rollbacks. With the right commands, Kubernetes will designate available nodes for container placement based on resource needs for optimal performance. Kubernetes will also manage sensitive information such as passwords, tokens, and other authentication data (e.g. SSH keys).
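Sensitive values are held in Secret objects rather than baked into container images. A minimal hypothetical example (the name and credential are illustrative; the value is base64-encoded, not encrypted, so access controls still matter):

```yaml
# Hypothetical Secret holding a credential. The value is base64-encoded
# ("cGFzc3dvcmQ=" decodes to "password") and can be mounted into pods
# as a file or exposed as an environment variable.
apiVersion: v1
kind: Secret
metadata:
  name: example-credentials    # hypothetical name
type: Opaque
data:
  db-password: cGFzc3dvcmQ=
```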
Some Things to Know Before Implementing Kubernetes
Kubernetes should be considered a separate application from the CI/CD workflow. Kubernetes orchestration manages only virtual instances and should not be considered part of the software developed in CI/CD. Therefore, Kubernetes is operable only during and after the deployment phase of the cloud development life cycle (CDLC).
Kubernetes lacks the ability to deliver middleware, and it likewise does not provide application services. Applications deployed, run, and managed on Kubernetes are not native to Kubernetes, no matter how many resources they share with other applications running in the same cluster.
Kubernetes Users Use ThreatModeler to Build Secure IT Environments
To learn how ThreatModeler will enable you to visualize your organization’s CDLC attack surface, and implement security requirements to mitigate threats, we recommend scheduling a live demo with a threat modeling expert. You can also contact our team. You may also visit booth #2068 at RSA Conference in San Francisco to learn more about our platform! We will be onsite at RSA Conference February 24-27.
ThreatModeler to Participate in a Webinar Hosted by the Cyber Tech Accord
On February 26, 2020, the Cyber Tech Accord hosts an informative webinar, How to Achieve Secure CDLC Through Threat Modeling. We recommend registering for this practical overview of the Cloud Development Life Cycle (CDLC) and the importance of securing the CDLC as early as the design and planning phases. ThreatModeler’s very own Senior Director of Threat Intelligence, Alex Bauert, will present the CDLC overview and discuss how threat modeling creates a visualization of the CDLC attack surface, shining a light on security threats and the requirements to mitigate them. We will also highlight architecture pattern standards and offer advice on vulnerability management assessments, plus prioritization of remediation, resolution, and mitigation.