Kubernetes (K8s) is an open-source platform used to deploy, scale, and manage containerized applications automatically. This service simplifies the orchestration of Docker containers, extends their functionality, and helps our clients make the entire infrastructure more stable and scalable.
In this article, we will discuss what Kubernetes is and how your project can benefit from it. We will also talk about the opportunities offered by our service.
Containerization is a method used to isolate applications from each other.
The application is put into a single container together with all its dependencies, creating an isolated environment that consumes a strictly limited amount of resources.
It’s similar to how different virtual machines are placed together in a cloud, yet there are some significant differences.
Here is what distinguishes containers from virtual machines:
Containers’ main advantages:
Containers have a lot of advantages. But once there are many of them, a problem appears: they become quite difficult to manage.
In this case, you’ll need to create new containers, delete unnecessary ones, distribute resources, move containers to other hosts, and so on. If resources run short or one of the machines breaks down, you will need to monitor container status, update containers, and perform many other tasks. If you opt for doing all this manually, your entire team’s resources will have to be involved in the process.
Kubernetes was created to solve this problem by automating container orchestration.
Kubernetes automates container management. It allows you to fulfill the following tasks:
At the same time, Kubernetes implements a declarative approach. This means that you won’t need to give the system any specific commands. All you need to do is specify the desired end result: Kubernetes will automatically choose the best ways to achieve the result specified and bring the infrastructure to the desired form using the API.
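As a minimal sketch of this declarative approach, a Deployment manifest only states the desired end state (here, three replicas of a hypothetical `web` application; the name and image are placeholders), and Kubernetes works out how to reach and maintain it:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                  # hypothetical application name
spec:
  replicas: 3                # desired state: three running copies
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example.com/web:1.0   # placeholder image
```

You apply it with `kubectl apply -f deployment.yaml`; from then on, Kubernetes continuously reconciles the actual state of the cluster with this description.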
1. Process automation. As mentioned above, the main task of K8s is automating container management. The service simplifies your work and reduces the workload on your IT team.
2. Use of multi-cloud. Kubernetes makes moving containers from one host to another much easier. You can even use multiple clouds within one infrastructure.
You can distribute the load in the cloud efficiently without being tied to a single vendor, thus increasing your IT ROI.
3. Cost reduction. K8s automatically distributes your infrastructure resources, giving each container only as many resources as it needs. This helps avoid overuse of computing power and eliminates unnecessary expenditure.
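Per-container resource allocation is declared through requests and limits in the container spec; a sketch with illustrative numbers:

```yaml
# Fragment of a container spec; the numbers are illustrative
resources:
  requests:            # guaranteed minimum, used for scheduling decisions
    cpu: "250m"
    memory: "128Mi"
  limits:              # hard cap the container cannot exceed
    cpu: "500m"
    memory: "256Mi"
```

The scheduler places pods onto nodes based on the requests, which is what lets Kubernetes pack workloads efficiently instead of over-provisioning.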
Moreover, since Kubernetes reduces the workload on the IT team, your employees can focus on more important issues and solve them faster instead of being tied up with administration tasks. You will simplify testing, develop and bring new products to market much faster, and generate more revenue.
4. Instant scaling. Kubernetes can automatically reduce or increase the computing power depending on your needs.
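One standard way to get this behavior at the pod level is a HorizontalPodAutoscaler; the target name and thresholds below are illustrative:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa          # hypothetical name
spec:
  scaleTargetRef:        # which workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU exceeds 70%
```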
5. Increased fault tolerance. If a container stops working or becomes unresponsive, K8s quickly restarts it. Checking container status and restarting containers is also automatic, which means your team won’t need to spend any time on this.
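This health checking is typically configured with a liveness probe on the container: Kubernetes restarts the container when the probe keeps failing. A sketch, where the endpoint, port, and timings are assumptions:

```yaml
# Fragment of a container spec; the /healthz endpoint is an assumption
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 10   # wait before the first check
  periodSeconds: 5          # probe every 5 seconds
  failureThreshold: 3       # restart after 3 consecutive failures
```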
6. Safe canary testing. Before releasing an update, you often need to test it on real users first. To do this, you can launch the updated service in test mode and send a small share of the traffic to it. If everything works fine, you can gradually direct the main traffic to it.
Kubernetes makes this process very easy. You can create a copy of the container that your application is running on, launch an update in the copy, and gradually redirect the traffic from the main container to the copy.
If you happen to find out that something went wrong during the tests, you won’t need to roll back any changes. You can simply switch off the duplicate container and redirect all your traffic back to the main container.
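In plain Kubernetes, a simple way to sketch this is two Deployments behind one Service: traffic is split roughly in proportion to the replica counts, so shifting replicas toward the canary gradually moves traffic, and deleting the canary Deployment instantly sends everything back. All names and images here are hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web            # matches both stable and canary pods
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-canary      # hypothetical canary copy
spec:
  replicas: 1           # only a small share of traffic lands here
  selector:
    matchLabels:
      app: web
      track: canary
  template:
    metadata:
      labels:
        app: web
        track: canary
    spec:
      containers:
        - name: web
          image: example.com/web:1.1   # updated version under test
```

This assumes a stable Deployment with the same `app: web` label is already running; for finer-grained, percentage-based splits, people usually add an ingress controller or service mesh on top.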
7. Safe data storage. Kubernetes can store and manage confidential information such as passwords, OAuth tokens, and SSH keys.
You can deploy and update confidential information and application configurations without changing container images or exposing data.
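A sketch of how this looks in practice: a Secret object holds the sensitive values, and containers reference them without baking them into the image (the names and values below are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-credentials     # placeholder name
type: Opaque
stringData:
  DB_PASSWORD: change-me    # placeholder value
```

A container can then pull the value in via an `env` entry with `valueFrom.secretKeyRef`, or mount the Secret as a volume; updating the Secret doesn’t require rebuilding the container image.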
To understand how Kubernetes works, let’s consider some basic notions related to it.
A pod is the basic K8s unit. It is a combination of one or more containers meant for joint deployment, together with additional resources connected with these containers.
Additional resources help containers function correctly within the system. These can be restart policies, container execution information (for example, port numbers or container version), shared storage, and other similar items.
In most cases, a pod includes one container, but there may be several. It is necessary to combine several containers into one pod if they are closely connected with each other—for example, if they run the same application microservices that perform related tasks.
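A minimal pod manifest with two closely related containers sharing a volume, a common sidecar pattern; all names and images are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-logger     # illustrative pod name
spec:
  volumes:
    - name: logs
      emptyDir: {}          # shared storage that lives as long as the pod
  containers:
    - name: web
      image: example.com/web:1.0        # placeholder image
      volumeMounts:
        - name: logs
          mountPath: /var/log/app       # application writes logs here
    - name: log-shipper                 # sidecar reading the same logs
      image: example.com/shipper:1.0    # placeholder image
      volumeMounts:
        - name: logs
          mountPath: /logs
```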
A node is a virtual machine or a physical server where containers are launched.
Several nodes connected to each other form a cluster.
A pool is a group of cluster nodes with the same technical characteristics, i.e., a set of identical machines that your infrastructure runs on.
Kubernetes implements a master-worker architecture.
All nodes are divided into two types:
The master node is the main element that manages the worker nodes.
Its main tasks are:
A master node can be compared to a boss who gives orders to subordinates and monitors their work.
Worker nodes act as the boss’s subordinates. Pods are placed and launched on the worker nodes.
The mechanisms that check the pod status, distribute the traffic between them, and fulfill different commands from the master node are also located on the worker nodes.
We have listed the main Kubernetes components. Of course, this is far from everything, but it is enough to give you an overall idea of how K8s works.
Managed Kubernetes is a new G-Core Labs cloud service that will allow you to use K8s within our Cloud infrastructure and facilitate your work with clusters immensely.
The service makes it possible to create clusters, manage the nodes through an all-in-one G-Core Labs panel, and automate processes even more efficiently.
Thus, you get all the capabilities of Kubernetes, including a flexible infrastructure, while we take care of routine tasks such as deploying clusters and managing master nodes.
For now, you can deploy your cluster within one data center only, but in the future, we are going to make it possible to connect the nodes located in different data centers.
Managed Kubernetes offers an autoscaling option, meaning that the system will automatically increase and decrease the number of nodes in the pool. If the resources are insufficient, the service will add more virtual machines, and if some nodes aren’t used for over 20 minutes, the service removes them.
You can define the minimum and the maximum number of nodes in the pool on your own. Autoscaling can be turned off if necessary.
We also support an autohealing function: the system constantly monitors node status and replaces non-working nodes. This feature increases the fault tolerance of our clients’ infrastructure. It can also be turned off if necessary.
You can manage this service via the control panel or API. You can:
If you are already connected to the G-Core Labs Cloud, Managed Kubernetes is already available in your control panel. There is no need to enable any additional features.
The service is currently in beta testing, which is why it’s free of charge.
Open the cloud control panel, head to the Kubernetes section, and click on Create Cluster.
Select the region where the data center is located. The cluster will be deployed using the resources of this data center.
Create pools within the cluster.
Enter the pool name (any name of your choice) and specify the initial number of nodes. This is exactly how many nodes the pool will contain after the cluster has been launched.
Next, specify the minimum and the maximum number of nodes in order to configure autoscaling correctly. The system won’t allow the number of nodes to reach a value that is below the minimum or to exceed the maximum.
If you don’t want to use the autoscaling function, just set the maximum number of nodes to be the same as the minimum one. This value must match the initial number of nodes in the pool.
Next, select the type of virtual machines that will be launched in the pool. Since a pool is a group of nodes with the same technical characteristics, you can choose only one virtual machine type.
You can choose any of the five types of virtual machines available:
Next, select the size and type of the disk where the pool data will be stored.
There are four disk types available. They differ in drive type (SSD or HDD), acceptable IOPS, and maximum bandwidth.
As soon as you’ve specified all the settings mentioned, the pool will be created.
You can create as many pools as you need. To add one more pool to the cluster, just click on Add pool and configure all the settings as described above.
Then you can enable or disable the autohealing function.
Next, add the cluster nodes to the private network and the subnet. You can either select an existing network or create a new one by clicking on Add a new network.
Next, you need to add an SSH key to connect to the cluster nodes. You can either choose one of the keys that have already been added to your account, or generate a new one.
Finally, specify the cluster name (it can be any name of your choice) and double-check all the cluster settings on the right side of the screen.
Click on Create Cluster. Ready! The cluster will be launched in a few minutes.
Now that the cluster has been created, it appears in the Kubernetes section of the control panel.
You can edit it by clicking on the cluster name.
You will be taken to the section with the overall information about the cluster, where its current state and status as well as the number of pools and nodes are indicated. The Pools tab displays a list of all pools with the main information. You can edit any of them, e.g.:
You can also add one more pool to the cluster. At the end of the list on the Pools tab, there will be an Add pool button. Click on it. A new pool is created in the same way as a new cluster.
You can check the load on every node on your own.
To do this, select the necessary pool in the Pools tab and click on the arrow opposite to it. A nodes list will expand. Click on the node that you need.
Head to the Monitoring tab.
You will see charts with two buttons above them. The left button sets the period of the data displayed, and the right one sets how often the information on screen is updated.
The statistics are displayed for 10 metrics:
Read more about the work with Managed Kubernetes in the Kubernetes section of our knowledge base.
We are constantly improving our cloud services to help our clients grow their business even faster and cheaper. Our convenient and technologically advanced cloud will allow you to achieve your business goals without extra costs or effort.