Managed Kubernetes—a brand-new cloud service

Kubernetes (K8s) is an open-source platform used to deploy, scale, and manage containerized applications automatically. It simplifies the orchestration of Docker containers, extends their functionality, and helps our clients make their entire infrastructure more stable and scalable.

We’ve recently launched a new G-Core Labs Cloud service called Managed Kubernetes. It allows you to use K8s within our Cloud and manage containers effortlessly.

In this article, we will discuss what Kubernetes is and how your project can benefit from it. We will also talk about the opportunities offered by our service.

What are containers and why do they need to be managed?

Containerization is a method used to isolate applications from each other.

An application is put into a single container together with all its dependencies, and an individual environment is created for it. The container consumes a strictly limited amount of resources.

It’s similar to how different virtual machines are placed together in a cloud, yet there are some significant differences.

Here is what distinguishes containers from virtual machines:

  • A virtual machine is an analog of a full-fledged server. It has its own hardware resources, a virtual processor, and other components. A container is only used to isolate the application and all the elements associated with it.
  • A virtual machine has its own operating system. A container uses the host’s OS and shares the host’s kernel.
  • Containers are much more lightweight. You can launch many more containers than virtual machines on the same server. Plus, containers can be deployed on virtual machines in the cloud.

Differences between containers and virtual machines

Containers’ main advantages:

  • Containers are ideal for microservice architecture. This approach implies splitting an application into several relatively independent components, i.e., microservices. It speeds up the development process and increases your service’s overall fault tolerance. If you need to introduce changes to one of the components, you won’t have to stop the entire application.
  • Containers simplify moving an application to another server. You can deploy a ready-made container on any machine with Docker installed in literally just a few clicks, and you can spread your entire service infrastructure across several hosts.
  • They are safe for the main operating system. Since the application is isolated in the container, its bugs and failures won’t affect other programs or the server’s operation.

Containers have a lot of advantages. But once they become numerous, a problem arises: they get quite difficult to manage.

You need to create new containers, delete unnecessary ones, distribute resources, and move containers to other hosts when resources run short or one of the machines breaks down. You also need to monitor container status, update containers, and perform many other tasks. If you opt for doing all this manually, it will tie up all of your team’s resources.

Kubernetes has been created to solve this problem by automating container orchestration.

What is Kubernetes?


Kubernetes automates container management. It allows you to perform the following tasks:

  • easily scale infrastructure, create new containers, and remove unnecessary resources;
  • restart and update containers;
  • monitor container status;
  • distribute resources among the containers and balance your traffic.

At the same time, Kubernetes implements a declarative approach. This means that you don’t give the system specific commands. All you need to do is specify the desired end result: Kubernetes will automatically choose the best way to achieve it and bring the infrastructure to the desired state using the API.
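
For example, here is a minimal sketch of a declarative manifest for a Deployment (all names and the image are illustrative). Instead of ordering the system to start containers one by one, you declare that three replicas of the application should exist, and Kubernetes continuously works to keep the infrastructure in that state:

    # deployment.yaml: a minimal declarative manifest (illustrative names)
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app                  # hypothetical application name
    spec:
      replicas: 3                   # desired state: three identical pods
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
            - name: my-app
              image: registry.example.com/my-app:1.0   # hypothetical image

After you apply it with kubectl apply -f deployment.yaml, Kubernetes keeps comparing the actual state with the declared one: if a pod dies, a replacement is started automatically.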

Kubernetes—key benefits


1. Process automation. As mentioned above, the main task of K8s is automating container management. The service simplifies your work and reduces the workload on your IT team.

2. Multi-cloud use. Kubernetes makes moving containers from one host to another much easier. You can even use multiple clouds within one infrastructure.

You can distribute the load in the cloud efficiently without being tied to a single vendor, thus increasing your IT ROI.

3. Cost reduction. K8s automatically distributes your infrastructure resources and gives each container only as many resources as it needs. This helps avoid overusing computing power and eliminates unnecessary expenditure (see the sketch after this point).

Moreover, since Kubernetes reduces the workload on the IT team, your employees can focus on more important issues and solve them faster instead of being busy with routine administration tasks. You will simplify testing, develop and bring new products to market much faster, and generate more revenue.
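
Resource distribution is driven by the requests and limits you declare for each container. Here is a minimal sketch of such a fragment (the values are illustrative):

    # Fragment of a container spec: resource requests and limits
    containers:
      - name: my-app
        image: registry.example.com/my-app:1.0   # hypothetical image
        resources:
          requests:              # guaranteed minimum, used for scheduling
            cpu: "250m"          # a quarter of one vCPU
            memory: "256Mi"
          limits:                # hard cap the container cannot exceed
            cpu: "500m"
            memory: "512Mi"

The scheduler only places a pod on a node that has enough unreserved capacity for its requests, which is how K8s avoids both overloading and overprovisioning machines.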

4. Instant scaling. Kubernetes can automatically reduce or increase the computing power depending on your needs.
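
At the pod level, this kind of scaling can be declared with a HorizontalPodAutoscaler. Here is a minimal sketch (the names are illustrative; on the Kubernetes version mentioned later in this article, 1.20.x, the resource lives in the autoscaling/v2beta2 API):

    # hpa.yaml: keep my-app between 2 and 10 replicas based on CPU load
    apiVersion: autoscaling/v2beta2
    kind: HorizontalPodAutoscaler
    metadata:
      name: my-app-hpa
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: my-app             # the Deployment to scale
      minReplicas: 2
      maxReplicas: 10
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 70   # add pods when average CPU exceeds 70%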

5. Increased fault tolerance. If a container stops working or becomes unresponsive, K8s can quickly restart it. Checking container status and restarting containers happens automatically, which means your team won’t need to spend any time on this.
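
This self-healing relies on the health checks you declare. A minimal liveness probe sketch (the endpoint and port are illustrative):

    # Fragment of a container spec: restart the container if it stops responding
    livenessProbe:
      httpGet:
        path: /healthz           # hypothetical health endpoint of the app
        port: 8080
      initialDelaySeconds: 10    # give the application time to start
      periodSeconds: 5           # check every 5 seconds
      failureThreshold: 3        # restart after 3 consecutive failures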

6. Safe canary testing. Before fully releasing an update, you often need to test it on real users first. To achieve this, you can launch the updated service in test mode and send a small part of the traffic to it. If everything works fine, you can gradually direct the main traffic to it.

Kubernetes makes this process very easy. You can create a copy of the container that your application is running in, launch the update in the copy, and gradually redirect traffic from the main container to the copy.

If you find that something went wrong during the tests, you won’t need to roll back any changes. Simply switch off the duplicate container and redirect all your traffic back to the main one.
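
One common way to express this in Kubernetes is to run a stable and a canary Deployment behind the same Service and control the traffic split with replica counts. A rough sketch, assuming label-based routing (all names are illustrative):

    # A Service that selects pods of both the stable and the canary Deployments
    apiVersion: v1
    kind: Service
    metadata:
      name: my-app
    spec:
      selector:
        app: my-app              # matches pods of both Deployments
      ports:
        - port: 80
          targetPort: 8080
    ---
    # The canary Deployment: same app label, new image, few replicas.
    # The stable Deployment is identical but uses track: stable and the old image.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app-canary
    spec:
      replicas: 1                # against, say, 9 stable replicas: roughly 10% of traffic
      selector:
        matchLabels:
          app: my-app
          track: canary          # keeps the two Deployments from adopting each other's pods
      template:
        metadata:
          labels:
            app: my-app
            track: canary
        spec:
          containers:
            - name: my-app
              image: registry.example.com/my-app:1.1   # the updated version

Rolling back is just scaling the canary Deployment to zero; the Service immediately sends all traffic to the stable pods again.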

7. Safe data storage. Kubernetes can store and manage confidential information such as passwords, OAuth tokens, and SSH keys.

You can deploy and update confidential information and application configurations without changing container images or exposing data.
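
For example, here is a minimal Secret sketch (the names and the value are illustrative; on creation, Kubernetes converts the stringData field into base64-encoded data):

    # secret.yaml: keep a credential out of the container image
    apiVersion: v1
    kind: Secret
    metadata:
      name: my-app-credentials   # hypothetical name
    type: Opaque
    stringData:
      DB_PASSWORD: "change-me"   # illustrative value

And the fragment of a container spec that injects it as an environment variable:

    env:
      - name: DB_PASSWORD
        valueFrom:
          secretKeyRef:
            name: my-app-credentials
            key: DB_PASSWORD

Rotating the password then means updating the Secret, not rebuilding the image.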

How Kubernetes works

To understand how Kubernetes works, let’s go over some basic concepts.

A pod is the basic K8s unit. It is a group of one or more containers meant for joint deployment, along with additional resources connected with these containers.

Additional resources help containers function correctly within the system. These can be restart policies, container execution information (for example, port numbers or container version), shared storage, and other similar items.


In most cases, a pod includes one container, but there may be several. It makes sense to combine several containers into one pod when they are closely connected with each other—for example, when they run microservices of the same application that perform related tasks.
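
For illustration, here is a minimal sketch of a two-container pod: a web server plus a log-shipping sidecar that share a volume (all names are illustrative):

    # pod.yaml: two tightly coupled containers in one pod
    apiVersion: v1
    kind: Pod
    metadata:
      name: web-with-sidecar
    spec:
      volumes:
        - name: logs
          emptyDir: {}           # shared storage that lives as long as the pod
      containers:
        - name: web
          image: nginx:1.21
          volumeMounts:
            - name: logs
              mountPath: /var/log/nginx
        - name: log-shipper
          image: registry.example.com/log-shipper:1.0   # hypothetical sidecar image
          volumeMounts:
            - name: logs
              mountPath: /logs
      restartPolicy: Always      # one of the pod-level policies mentioned above

Both containers are scheduled to the same node, share the pod’s network namespace, and read and write the same logs volume.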

A node is a virtual machine or a physical server where containers are launched.

Several nodes connected to each other form a cluster.


A pool is a group of cluster nodes with the same technical characteristics, i.e., a set of identical machines that your infrastructure runs on.

Kubernetes implements a master–worker concept.

All nodes are divided into two types:

  • master node;
  • worker node.

The master node is the main element; it manages the worker nodes.

Its main tasks are:

  • Distributing pods across nodes so that each pod gets enough resources.
  • Monitoring the overall cluster status.
  • Ensuring the interaction with the cluster and sending commands to various cluster elements.

A master node can be compared to a boss who gives orders to his subordinates and monitors their work.

Worker nodes act as the boss’s subordinates. Pods are placed and launched on the worker nodes.

The mechanisms that check the pod status, distribute the traffic between them, and fulfill different commands from the master node are also located on the worker nodes.


These are the main Kubernetes components. Of course, this is far from an exhaustive list, yet it is enough to give you an overall idea of how K8s works.

What is Managed Kubernetes?

Managed Kubernetes is a new G-Core Labs cloud service that will allow you to use K8s within our Cloud infrastructure and facilitate your work with clusters immensely.

The service makes it possible to create clusters, manage the nodes through an all-in-one G-Core Labs panel, and automate processes even more efficiently.

Thus, you get all the capabilities of Kubernetes, including a flexible infrastructure, while we take care of such routine tasks as deploying clusters and managing master nodes.

Service specifics:

  • You have access only to the worker nodes, while the master node is controlled by our administrators. You don’t have to waste your time on routine tasks and can focus on development.
  • You can create and configure a cluster for your tasks in the control panel. You can set the number of worker nodes as well as configure functions such as autoscaling and autohealing.
  • Our virtual machines are currently used as worker nodes. In the future, we are going to make it possible to add bare metal servers to clusters as well.
  • We currently use Kubernetes version 1.20.6. When a new version is released, you will be able to update without losing data in just a few clicks in the control panel.

Managed Kubernetes architecture in G-Core Labs Cloud

For now, you can deploy your cluster within one data center only, but in the future, we are going to make it possible to connect the nodes located in different data centers.

Managed Kubernetes offers an autoscaling option, meaning that the system automatically increases or decreases the number of nodes in the pool. If resources are insufficient, the service adds more virtual machines; if a node isn’t used for over 20 minutes, it gets removed.

You can define the minimum and the maximum number of nodes in the pool on your own. Autoscaling can be turned off if necessary.

We also support the autohealing function: the system constantly monitors node status and replaces non-working nodes. This feature increases the fault tolerance of our clients’ infrastructure. It can also be turned off if necessary.

You can manage this service via the control panel or API. You can:

  • create clusters;
  • create pools and nodes within them and change the number of nodes in the pool;
  • scale the cluster;
  • set up autoscaling and autohealing within the pool;
  • assign a floating IP and connect to the nodes via SSH;
  • track the node load.

How to enable the new service

If you are connected to the G-Core Labs Cloud, Managed Kubernetes is already available in your control panel. There is no need to enable any additional features.

The service is currently in beta testing, which is why it’s free of charge.

How to use Managed Kubernetes

1. Create a cluster

Open the cloud control panel, head to the Kubernetes section, and click on Create Cluster.

How to create a cluster with Managed Kubernetes

Select a region. The cluster will be deployed using the resources of the data center located in this region.

Choosing a region when creating a cluster with Managed Kubernetes

Create pools within the cluster.

Adding a pool when creating a cluster with Managed Kubernetes

Enter the pool name (it can be any name of your choice) and specify the initial number of nodes. This is exactly how many nodes will be in this pool once the cluster has been launched.

Next, specify the minimum and maximum number of nodes to configure autoscaling correctly. The system won’t allow the number of nodes to fall below the minimum or exceed the maximum.

Setting the initial number of nodes and configuring autoscaling when creating a cluster with Managed Kubernetes

If you don’t want to use the autoscaling function, just set the maximum number of nodes to be the same as the minimum one. This value must match the initial number of nodes in the pool.

Next, select the type of virtual machines that will be launched in the pool. Since a pool is a group of nodes with the same technical characteristics, you can choose only one virtual machine type.

Choosing the type of virtual machine in the pool when creating a cluster with Managed Kubernetes

You can choose any of the five types of virtual machines available:

  • Standard virtual machines—with 2–4 GB of RAM per vCPU.
  • CPU virtual machines—with 1 GB of RAM per vCPU.
  • Memory virtual machines—machines with a lot of memory: 8 GB of RAM per vCPU.
  • High-frequency virtual machines—with a processor clock speed of 3.37 GHz in the basic configuration.
  • SGX virtual machines—with Intel SGX technology support.

Next, select the size and type of the disk where the pool data will be stored.

Volume settings in a pool when creating a cluster with Managed Kubernetes

There are four disk types available. They differ in drive type (SSD or HDD), maximum IOPS, and maximum bandwidth.

As soon as you’ve specified all the settings mentioned, the pool will be created.

You can create as many pools as you need. To add one more pool to the cluster, just click on Add pool and configure all the settings as described above.

Adding a pool when creating a cluster with Managed Kubernetes

Then you can enable or disable the autohealing function.

Configuring the autohealing function when creating a cluster with Managed Kubernetes

Next, add the cluster nodes to the private network and the subnet. You can either select an existing network or create a new one by clicking on Add a new network.

Network settings when creating a cluster with Managed Kubernetes

Next, you need to add an SSH key to connect to the cluster nodes. You can either choose one of the keys that have already been added to your account, or generate a new one.

Adding an SSH key when creating a cluster with Managed Kubernetes

Finally, you will need to specify the cluster name (it can be any name of your choice)…

How to specify the cluster name in Managed Kubernetes

…and double-check all the cluster settings on the right side of the screen.

Cluster settings in Managed Kubernetes

Click on Create Cluster. Done! The cluster will be launched in a few minutes.

2. Edit pools

Now that the cluster has been created, it appears in the Kubernetes section of the control panel.

Launched clusters in Managed Kubernetes of G-Core Labs Cloud

You can edit it by clicking on the cluster name.

You will be taken to the section with overall information about the cluster, where its current state and status as well as the number of pools and nodes are indicated. The Pools tab displays a list of all pools with their main parameters. You can edit any of them, e.g.:

  • rename them;
  • change the current number of nodes (as long as the autoscaling function allows it);
  • edit autoscaling limits;
  • delete the pool.

Editing pools in Managed Kubernetes

You can also add one more pool to the cluster. At the end of the list on the Pools tab, there will be an Add pool button. Click on it. A new pool is created in the same way as a new cluster.

Adding pools in Managed Kubernetes

3. Check node load

You can check the load on every node on your own.

To do this, select the pool you need on the Pools tab and click on the arrow next to it. A list of nodes will expand. Click on the node you need.

How to check the load on nodes in a cluster with Managed Kubernetes—Step 1

Head to the Monitoring tab.

How to check the load on nodes in a cluster in Managed Kubernetes—Step 2

You will see charts with two buttons above them. The left button sets the time period of the data displayed, and the right one sets how often the information on your screen is updated.

Setting up the display of node load data in Managed Kubernetes

The statistics are displayed for 10 metrics:

  • CPU Utilization—processor load, %.
  • RAM Utilization—the percentage of RAM used by the node.
  • Network BPS ingress—how fast incoming traffic is received (bytes per second).
  • Network BPS egress—how fast outgoing traffic is sent (bytes per second).
  • Network PPS ingress—how fast incoming traffic is received (packets per second).
  • Network PPS egress—how fast outgoing traffic is sent (packets per second).
  • sda/Disk IOPS read—how fast data is read from the disk (operations per second).
  • sda/Disk IOPS write—how fast data is written to the disk (operations per second).
  • sda/Disk BPS read and sda/Disk BPS write—the same as the two previous metrics but measured in bytes transferred per second.

Example chart:

Example node load data chart in Managed Kubernetes

Read more about the work with Managed Kubernetes in the Kubernetes section of our knowledge base.

Let’s sum it up

  1. Application containerization has a lot of advantages and is ideal for microservice architecture. But when there are many containers in your infrastructure, managing them manually becomes difficult.
  2. Kubernetes was created for automating container management processes.
  3. K8s reduces the IT team workload, simplifies infrastructure deployment in a multi-cloud, helps reduce costs, and makes scaling and application testing much easier.
  4. The main Kubernetes unit is called “pod” (a group of one or more containers). Pods are located on nodes, which are virtual or physical machines. The interconnected nodes form a cluster. The cluster has a master node that controls worker nodes.
  5. Managed Kubernetes is a new G-Core Labs Cloud service that allows you to use Kubernetes within our Cloud infrastructure and simplifies your work with it.
  6. Our new service allows you to focus on development. We take care of all the routine tasks connected with master nodes and cluster deployment for you.
  7. You can create a cluster, customize it for your tasks, and manage it using a simple and convenient control panel.
  8. For now, the service is in beta testing, which is why you can use it for free.

We are constantly improving our cloud services to help our clients grow their businesses faster and at a lower cost. Our convenient and technologically advanced cloud will help you achieve your business goals without extra costs or effort.
