Deploying and Managing a Cluster on Linode Kubernetes Engine (LKE)
Note: This guide uses Linode Kubernetes Engine (LKE) to deploy a managed Kubernetes cluster. For more information on Kubernetes key concepts, see our Beginner’s Guide to Kubernetes.
The Linode Kubernetes Engine (LKE) is a fully-managed container orchestration engine for deploying and managing containerized applications and workloads. LKE combines Linode’s ease of use and simple pricing with the infrastructure efficiency of Kubernetes. When you deploy an LKE cluster, you receive a Kubernetes Master at no additional cost; you only pay for the Linodes (worker nodes), NodeBalancers (load balancers), and Block Storage Volumes. Your LKE cluster’s Master node runs the Kubernetes control plane processes – including the API, scheduler, and resource controllers.
Additional LKE features:
- etcd Backups: A snapshot of your cluster’s metadata is backed up continuously, so your cluster is automatically restored in the event of a failure.
- High Availability: All of your control plane components are monitored and automatically recover if they fail.
- Kubernetes Dashboard: All LKE installations include access to a Kubernetes Dashboard installation.
In this Guide
In this guide you will learn how to:
- Create an LKE cluster and connect to it with kubectl
- Modify a cluster’s node pools and configure cluster autoscaling
- Upgrade a cluster and recycle its nodes
- Reset a cluster’s kubeconfig, enable high availability, and delete a cluster
Caution: This guide’s example instructions create several billable resources on your Linode account. If you do not want to keep using the example cluster that you create, be sure to remove it when you have finished the guide.
If you remove the resources afterward, you will only be billed for the hour(s) that the resources were present on your account.
Before You Begin
Install kubectl
You need to install the kubectl client on your computer before proceeding. Follow the steps corresponding to your computer’s operating system.
macOS:
Install via Homebrew:
brew install kubectl
If you don’t have Homebrew installed, visit the Homebrew home page for instructions. Alternatively, you can manually install the binary; visit the Kubernetes documentation for instructions.
Linux:
Download the latest kubectl release:
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
Make the downloaded file executable:
chmod +x ./kubectl
Move the command into your PATH:
sudo mv ./kubectl /usr/local/bin/kubectl
Note: You can also install kubectl via your package manager; visit the Kubernetes documentation for instructions.
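The download URL above assumes an amd64 (x86_64) machine. On other Linux architectures, substitute the matching architecture string; a minimal sketch (the pinned version below is a placeholder for the value returned by stable.txt):

```shell
# Map this machine's architecture to the arch string used in the
# kubectl download URL (the command above assumes amd64).
ARCH="$(uname -m)"
case "$ARCH" in
  x86_64)        KUBE_ARCH="amd64" ;;
  aarch64|arm64) KUBE_ARCH="arm64" ;;
  *) echo "No prebuilt kubectl for architecture: $ARCH" >&2; exit 1 ;;
esac

# v1.29.0 is a placeholder; substitute the version from stable.txt.
KUBE_VERSION="v1.29.0"
echo "https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/${KUBE_ARCH}/kubectl"
```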
Windows:
Visit the Kubernetes documentation for a link to the most recent Windows release.
Create an LKE Cluster
Log into your Linode Cloud Manager account.
From the Linode dashboard, click the Create button at the top of the page and select Kubernetes from the dropdown menu.
The Create a Kubernetes Cluster page will appear. At the top of the page, you’ll be required to select the following options:
In the Cluster Label field, provide a name for your cluster. The name must be unique among all of the clusters on your account. This name is how you identify your cluster in the Cloud Manager’s Dashboard.
From the Region dropdown menu, select the Region where you would like your cluster to reside.
From the Version dropdown menu, select a Kubernetes version to deploy to your cluster.
In the Add Node Pools section, select the hardware resources for the Linode worker node(s) that make up your LKE cluster. To the right of each plan, select the plus (+) and minus (-) buttons to add or remove Linodes from a node pool one at a time. Once you’re satisfied with the number of nodes in a node pool, select Add to include it in your configuration. If you decide that you need more or fewer hardware resources after you deploy your cluster, you can always edit your Node Pool.
Note: Currently, the available plan types for LKE worker nodes are Shared, Dedicated CPU, and High Memory plans. 1 GB Shared Nanodes do not meet the minimum system requirements for LKE nodes and are not an option for worker nodes.
Once a pool has been added to your configuration, it is listed in the Cluster Summary on the right-hand side of the Cloud Manager, detailing your cluster’s hardware resources and monthly cost. Additional pools can be added before finalizing the cluster creation process by repeating the previous step for each additional pool.
When you are satisfied with the configuration of your cluster, click the Create Cluster button on the right-hand side of the screen. Your cluster’s details page appears next, where you will see your Node Pools listed. From this page, you can edit your existing Node Pools, access your Kubeconfig file, and view an overview of your cluster’s resource details.
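Cluster creation can also be scripted against the Linode API (POST /v4/lke/clusters). The hedged sketch below builds a request body and prints the curl command for review rather than executing it; the label, region, Kubernetes version, and plan type are illustrative placeholders, and $LINODE_TOKEN stands in for your personal access token:

```shell
# Hypothetical sketch: build a request body for POST /v4/lke/clusters and
# print the curl invocation for review instead of executing it. The label,
# region, k8s_version, and node pool type are placeholders.
PAYLOAD='{"label":"example-cluster","region":"us-central","k8s_version":"1.29","node_pools":[{"type":"g6-standard-2","count":3}]}'

echo "curl -H 'Authorization: Bearer \$LINODE_TOKEN' \\"
echo "     -H 'Content-Type: application/json' \\"
echo "     -X POST -d '$PAYLOAD' \\"
echo "     https://api.linode.com/v4/lke/clusters"
```

Review the printed command, substitute your own values, and run it yourself only once you are satisfied with the payload.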
Connect to your LKE Cluster with kubectl
After you’ve created your LKE cluster using the Cloud Manager, you can begin interacting with and managing your cluster. You connect to it using the kubectl client on your computer. To configure kubectl, download your cluster’s kubeconfig file.
Access and Download your kubeconfig
Any time after your cluster is created, you can download its kubeconfig. The kubeconfig is a YAML file that allows you to use kubectl to communicate with your cluster. Here is an example kubeconfig file:
- File: example-cluster-kubeconfig.yaml
apiVersion: v1
kind: Config
preferences: {}
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUd...
    server: https://example.us-central.linodelke.net:443
  name: lke1234
users:
- name: lke1234-admin
  user:
    as-user-extra: {}
    token: LS0tLS1CRUd...
contexts:
- context:
    cluster: lke1234
    namespace: default
    user: lke1234-admin
  name: lke1234-ctx
current-context: lke1234-ctx
This configuration file defines your cluster, users, and contexts.
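To see how these fields fit together, the sketch below writes a trimmed-down copy of the example kubeconfig and extracts its current-context field with awk, without needing kubectl installed (the file path and context names are taken from the example above):

```shell
# Minimal sketch: write a trimmed-down kubeconfig like the example above,
# then pull out the current-context field without kubectl installed.
cat > /tmp/example-kubeconfig.yaml <<'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://example.us-central.linodelke.net:443
  name: lke1234
contexts:
- context:
    cluster: lke1234
    namespace: default
    user: lke1234-admin
  name: lke1234-ctx
current-context: lke1234-ctx
EOF

# Grab the value after "current-context:".
CURRENT_CTX="$(awk '/^current-context:/ {print $2}' /tmp/example-kubeconfig.yaml)"
echo "Current context: $CURRENT_CTX"
```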
To access your cluster’s kubeconfig, log into your Cloud Manager account and navigate to the Kubernetes section.
From the Kubernetes listing page, click on your cluster’s more options ellipsis and select Download kubeconfig. The file is saved to your computer’s Downloads folder.
You can also download the kubeconfig from the Kubernetes cluster’s details page:
When viewing the Kubernetes listing page, click on the cluster for which you’d like to download a kubeconfig file.
On the cluster’s details page, under the kubeconfig section, click the Download icon. The file is saved to your Downloads folder.
To view the contents of your kubeconfig file, click the View icon. A pane appears with the contents of your cluster’s kubeconfig file.
To improve security, change the kubeconfig.yaml file permissions so that the file is only accessible by the current user:
chmod go-r ~/Downloads/kubeconfig.yaml
Open a terminal shell and save your kubeconfig file’s path to the $KUBECONFIG environment variable. In the example command, the kubeconfig file is located in the Downloads folder, but you should alter this line with this folder’s location on your computer:
export KUBECONFIG=~/Downloads/kubeconfig.yaml
Note: It is common practice to store your kubeconfig files in the ~/.kube directory. By default, kubectl searches for a kubeconfig file named config located in the ~/.kube directory. You can specify other kubeconfig files by setting the $KUBECONFIG environment variable, as done in the step above.
View your cluster’s nodes using kubectl:
kubectl get nodes
Note: If your kubectl commands are not returning the resources and information you expect, your client may be assigned to the wrong cluster context. Visit our Troubleshooting Kubernetes guide to learn how to switch cluster contexts.
You are now ready to manage your cluster using kubectl. For more information about using kubectl, see Kubernetes’ Overview of kubectl guide.
Persist the Kubeconfig Context
If you create a new terminal window, it does not have access to the context that you specified using the previous instructions. This context information can be made persistent between new terminals by setting the KUBECONFIG environment variable in your shell’s configuration file.
Note: If you are using Windows, review the official Kubernetes documentation for how to persist your context.
These instructions persist the context for users of the Bash terminal. They are similar for users of other terminals:
Navigate to the $HOME/.kube directory:
cd $HOME/.kube
Create a directory called configs within $HOME/.kube. You can use this directory to store your kubeconfig files.
mkdir configs
Copy your kubeconfig.yaml file to the $HOME/.kube/configs directory:
cp ~/Downloads/kubeconfig.yaml $HOME/.kube/configs/kubeconfig.yaml
Note: Alter the above line with the location of the Downloads folder on your computer.
Optionally, you can give the copied file a different name to help distinguish it from other files in the configs directory.
Open your Bash profile (e.g. ~/.bash_profile) in the text editor of your choice and add your configuration file to the $KUBECONFIG environment variable. If an export KUBECONFIG line is already present in the file, append the new path to the end of this line; if it is not present, add this line to the end of your file:
export KUBECONFIG=$KUBECONFIG:$HOME/.kube/config:$HOME/.kube/configs/kubeconfig.yaml
Close your terminal window and open a new window to receive the changes to the $KUBECONFIG variable.
Use kubectl’s config get-contexts command to view the available cluster contexts:
kubectl config get-contexts
You should see output similar to the following:
CURRENT   NAME          CLUSTER   AUTHINFO        NAMESPACE
*         lke1234-ctx   lke1234   lke1234-admin   default
If your context is not already selected (denoted by an asterisk in the CURRENT column), switch to this context using the config use-context command, supplying the name of the context:
kubectl config use-context lke1234-ctx
You should see output like the following:
Switched to context "lke1234-ctx".
You are now ready to interact with your cluster using kubectl. You can test the ability to interact with the cluster by retrieving a list of Pods. Use the get pods command with the -A flag to see all pods running across all namespaces:
kubectl get pods -A
You should see output like the following:
NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-dc6cb64cb-4gqf4   1/1     Running   0          11d
kube-system   calico-node-bx2bj                         1/1     Running   0          11d
kube-system   calico-node-fg29m                         1/1     Running   0          11d
kube-system   calico-node-qvvxj                         1/1     Running   0          11d
kube-system   calico-node-xzvpr                         1/1     Running   0          11d
kube-system   coredns-6955765f44-r8b79                  1/1     Running   0          11d
kube-system   coredns-6955765f44-xr5wb                  1/1     Running   0          11d
kube-system   csi-linode-controller-0                   3/3     Running   0          11d
kube-system   csi-linode-node-75lts                     2/2     Running   0          11d
kube-system   csi-linode-node-9qbbh                     2/2     Running   0          11d
kube-system   csi-linode-node-d7bvc                     2/2     Running   0          11d
kube-system   csi-linode-node-h4r6b                     2/2     Running   0          11d
kube-system   kube-proxy-7nk8t                          1/1     Running   0          11d
kube-system   kube-proxy-cq6jk                          1/1     Running   0          11d
kube-system   kube-proxy-gz4dc                          1/1     Running   0          11d
kube-system   kube-proxy-qcjg9                          1/1     Running   0          11d
Modify a Cluster’s Node Pools
You can use the Linode Cloud Manager to modify a cluster’s existing node pools by adding or removing nodes. You can also recycle your node pools to replace all of their nodes with new ones that are upgraded to the most recent patch of your cluster’s Kubernetes version, or remove entire node pools from your cluster. For an automated approach, you can also enable cluster autoscaling to automatically create and remove nodes as needed. This section covers completing those tasks. For any other changes to your LKE cluster, you should use kubectl.
Access your Cluster’s Details Page
Click the Kubernetes link in the sidebar. The Kubernetes listing page appears and you see all of your clusters listed.
Click the cluster that you wish to modify. The Kubernetes cluster’s details page appears.
Adding a Node Pool
To add a new Node Pool to your cluster, navigate to the cluster’s details page and select the Add a Node Pool option to the right of the Node Pools section.
In the new window that appears, select the hardware resources that you’d like to add to your new Node Pool. To the right of each plan, select the plus (+) and minus (-) buttons to add or remove Linodes from the node pool one at a time. Once you’re satisfied with the number of nodes in the node pool, select Add Pool to include it in your configuration. If you decide that you need more or fewer hardware resources after you deploy your cluster, you can always edit your Node Pool.
Edit or Remove Existing Node Pools
On your cluster’s details page, click the Resize Pool option at the top-right of each entry in the Node Pools section.
Using the sidebar that appears to the right of the page, you can remove (-) or add (+) Linodes to the pool; the total cost of your new resources is displayed. To accept these changes, select the Save Changes button.
Caution: Shrinking a node pool results in the deletion of Linodes. Any local storage on deleted Linodes (such as hostPath and emptyDir volumes, or local PersistentVolumes) will be erased.
To remove a node pool from the cluster’s details page, click the Delete Pool option at the top-right of each entry in the Node Pools section. A pop-up message appears asking you to confirm the deletion. Select the Delete option, and your Node Pool is deleted.
Note: Your cluster must always have at least one active node pool.
Configure Cluster Autoscaling
In Kubernetes, cluster autoscaling refers to a method by which a cluster automatically scales the number of nodes in a node pool up and down as the hardware needs of the pool increase or decrease. While this feature can be configured manually using resources like the Cluster Autoscaler provided by Kubernetes, LKE can manage this automatically through the Cloud Manager and the Linode API.
The LKE autoscaler will only apply changes when the following conditions are met:
If Pods are unschedulable due to an insufficient number of Nodes in the Node Pool, the autoscaler increases the number of nodes in the pool to the number required.
If Pods can be scheduled on fewer Nodes than are currently available in the Node Pool, Nodes are drained and removed automatically. Pods on drained nodes are immediately rescheduled on pre-existing nodes, and the Node Pool is decreased to match only the needs of the current workload.
LKE Autoscaling is configured for individual Node Pools directly through the Linode Cloud Manager.
To enable cluster autoscaling, access the cluster’s details page.
Click the Autoscale Pool option at the top-left of each entry in the Node Pools section. The Autoscaling menu will appear.
If the Autoscaler is currently disabled, select the autoscaler switch toggle to turn the feature on.
Once the Autoscaler is enabled, the Minimum (Min) and Maximum (Max) fields can be set. Both fields can be any number between 1 and 99, and each number represents a count of nodes in the node pool. A minimum of 10, for example, allows no fewer than ten nodes in the node pool, while a maximum of 10 allows no more than ten nodes in the node pool.
Select the Save Changes button to complete the process and activate the autoscaling feature.
Note: The LKE Autoscaler does not automatically increase or decrease the size of the node pool if the current node pool is either below the minimum of the autoscaler or above the maximum. This behavior is illustrated by the following examples:
If the Node Pool has 3 nodes and a minimum of 5, the autoscaler will not automatically scale the pool up to meet the minimum. It only scales up if pods are otherwise unschedulable.
If the Node Pool has 10 nodes and a maximum of 7, the autoscaler will not automatically scale the pool down to meet the maximum. It can only scale down when the maximum is at or above the current number of nodes in the pool. This is an intentional design choice to prevent the disruption of existing workloads.
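The rules above can be sketched as a small decision helper. This is a hypothetical illustration of the documented behavior, not LKE’s actual implementation:

```shell
# Hypothetical sketch of the autoscaler rules described above: given the
# current node count and the configured min/max, report which automatic
# actions are possible. Illustrative only; not LKE's implementation.
autoscaler_actions() {
  current=$1; min=$2; max=$3
  # Scale-up happens only when pods are unschedulable, and never past max.
  if [ "$current" -lt "$max" ]; then
    echo "scale-up possible (if pods are unschedulable)"
  fi
  # Scale-down only happens when max is at or above the current count,
  # so a pool already above max is never shrunk just to meet it.
  if [ "$max" -ge "$current" ] && [ "$current" -gt "$min" ]; then
    echo "scale-down possible (if nodes are underutilized)"
  fi
}

autoscaler_actions 10 2 7   # pool above max: prints nothing (no auto-shrink)
autoscaler_actions 3 5 9    # pool below min: scales up only for unschedulable pods
```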
Upgrade a Cluster
To upgrade a cluster, access the cluster’s details page.
If an upgrade is available, a banner will appear that will display the next available Kubernetes version. Select the Upgrade Version button at the end of the banner to upgrade to the next available Kubernetes version.
Upgrading a cluster is a two-step process: first, the cluster is set to use the next version when nodes are recycled; then, all of the nodes within the cluster are recycled.
For step 1, click the Upgrade Version button to begin the upgrade process.
Note: If step one of the upgrade process is completed without the completion of step two, the nodes in the cluster will need to be recycled using the Recycle All Nodes button.
For step 2, click the Recycle All Nodes button to complete the upgrade process. Nodes are recycled on a rolling basis so that only one node is down at a time throughout the recycling process.
Recycle Nodes
Nodes can be recycled by selecting the recycle option for an individual node, for a node pool, or for all nodes in the cluster. All recycle options are found on the cluster’s details page.
To recycle all Nodes on all Node Pools in a cluster, select the Recycle All Nodes option to the right of the Node Pools section.
To recycle a node pool from the cluster’s details page, click the Recycle Nodes option at the top-right of each entry in the Node Pools section.
To recycle an individual Node, find the Node Pools section on the cluster’s details page, find the individual node that will be recycled, and click on the Recycle button to the right of the respective entry.
When selecting any recycle option, a pop-up message appears asking you to confirm that the node or nodes will be recycled. Select the Recycle option to proceed. If the Recycle All Nodes or Recycle Nodes option is selected, nodes are upgraded on a rolling basis so that only one node is down at a time throughout the recycling process.
Reset Cluster Kubeconfig
In cases where access to a cluster using a current kubeconfig must be revoked, LKE provides the ability to Reset a cluster kubeconfig. This will effectively remove the current kubeconfig, and create a new one for cluster administrators to use.
To reset the cluster kubeconfig, access the cluster’s details page.
Select the Reset button under the kubeconfig sub-category.
A confirmation message appears confirming the kubeconfig reset. Select the Reset kubeconfig button to proceed.
A new kubeconfig will now be created. Once this process is completed, the new kubeconfig can be Accessed and Downloaded as usual.
Enable High Availability
In LKE, enabling HA (High Availability) creates additional replicas of your control plane components, adding an additional layer of redundancy to your Kubernetes cluster and ensuring 99.99% uptime for both the control plane and worker nodes. HA is an optional feature recommended for production workloads. It must be manually enabled, either when creating a new cluster or when editing an existing cluster.
In more detail, upgrading to High Availability on LKE results in the following changes:
- etcd and kube-apiserver increase from one to three replicas.
- All other components, the Cloud Controller Manager, kube-scheduler, and kube-controller-manager, increase from one to two replicas, with leader election put in place.
When multiple replicas are created as part of LKE HA, they are always placed on separate infrastructure to better support uptime and redundancy.
Unlike other LKE configuration options, High Availability is an optional billable service that increases the overall operating cost of your cluster. For more information, see our pricing page.
CautionWhile upgrading to an HA cluster is always possible, downgrading your cluster is not currently supported. Enabling HA is an irreversible change for your cluster.
Enabling HA During Cluster Creation
High Availability can be enabled during cluster creation from the Create a Kubernetes Cluster page at any time.
From the Create a Kubernetes Cluster page, navigate to the Cluster Summary section.
Check the box next to the Enable HA Control Plane option.
Configure additional options as desired for your cluster. When you are satisfied with the configuration of your cluster, click the Create Cluster button in the Cluster Summary section.
Your cluster’s detail page will appear on the following page where you will see your Node Pools listed. From this page, you can edit your existing Node Pools, access your Kubeconfig file, and view an overview of your cluster’s resource details.
Enabling HA on Existing Clusters
High Availability can be added to pre-existing clusters at any given time through the cluster’s Summary Page.
CautionEnabling HA on a pre-existing cluster will result in the following changes:
- All nodes will be deleted and new nodes will be created to replace them.
- Any local storage (such as hostPath volumes) will be erased.
- The upgrade process may take several minutes to complete, as nodes are replaced on a rolling basis.
To reach the summary page for the cluster, navigate first to the Kubernetes section of the Cloud Manager.
Select the Cluster by label that you would like to enable HA for. The summary page for the cluster appears.
To enable HA, select the Upgrade to HA button at the top of the page.
A new window appears, asking you to confirm all of the changes that come with High Availability. Read through the message and select the Enable HA Control Plane checkbox to confirm that you agree to the changes. Then click the Upgrade to HA button.
All clusters that have HA enabled will have an HA Cluster watermark on their summary page.
Delete a Cluster
You can delete an entire cluster using the Linode Cloud Manager. These changes cannot be reverted once completed.
Click the Kubernetes link in the sidebar. The Kubernetes listing page will appear and you will see all your clusters listed.
Select the More Options ellipsis to the right of the cluster you’d like to delete, and select the Delete option.
A confirmation pop-up appears. Enter your cluster’s name and click the Delete button to confirm.
The Kubernetes listing page will appear and you will no longer see your deleted cluster.
General Network and Firewall Information
In an LKE cluster, some entities and services are only accessible from within that cluster while others are publicly accessible (reachable from the internet).
Private (accessible only within the cluster)
- Pod IPs, which use a per-cluster virtual network in the range 10.2.0.0/16
- ClusterIP Services, which use a per-cluster virtual network in the range 10.128.0.0/16
Public (accessible over the internet)
- NodePort Services, which listen on all Nodes with ports in the range 30000-32767.
- LoadBalancer Services, which automatically deploy and configure a NodeBalancer.
- Any manifest which uses hostNetwork: true and specifies a port.
- Most manifests which use hostPort and specify a port.
Exposing workloads to the public internet through the above methods can be convenient, but this can also carry a security risk. You may wish to manually install firewall rules on your cluster nodes. The following policies are needed to allow communication between the node pools and the control plane and block unwanted traffic:
- Allow kubelet health checks: TCP port 10250 from 192.168.128.0/17, Accept
- Allow Wireguard tunneling for kubectl proxy: UDP port 51820 from 192.168.128.0/17, Accept
- Allow Calico BGP traffic: TCP port 179 from 192.168.128.0/17, Accept
- Allow NodePorts for workload services: TCP/UDP ports 30000-32767 from 192.168.128.0/17, Accept
- Allow IPENCAP traffic from 192.168.128.0/17 for internal communication between node pools and the control plane, Accept
- Block all other TCP traffic: TCP, all ports, all IPv4/IPv6, Drop
- Block all other UDP traffic: UDP, all ports, all IPv4/IPv6, Drop
- Block all ICMP traffic: ICMP, all ports, all IPv4/IPv6, Drop
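The policies above can be expressed as iptables rules. The sketch below only prints the rules for review rather than applying them; applying them requires root privileges, and you should first confirm the current LKE source ranges and adapt the rules to your own setup:

```shell
# Sketch: print iptables rules matching the policies above for review.
# The rules are echoed, not applied; adapt and run them yourself as root
# only after verifying the ranges against current LKE documentation.
CP="192.168.128.0/17"   # control plane / node pool source range

emit() { echo "iptables $*"; }

emit -A INPUT -p tcp --dport 10250 -s "$CP" -j ACCEPT          # kubelet health checks
emit -A INPUT -p udp --dport 51820 -s "$CP" -j ACCEPT          # Wireguard (kubectl proxy)
emit -A INPUT -p tcp --dport 179   -s "$CP" -j ACCEPT          # Calico BGP
emit -A INPUT -p tcp --dport 30000:32767 -s "$CP" -j ACCEPT    # NodePort services (TCP)
emit -A INPUT -p udp --dport 30000:32767 -s "$CP" -j ACCEPT    # NodePort services (UDP)
emit -A INPUT -p 4 -s "$CP" -j ACCEPT                          # IPENCAP (IP-in-IP, protocol 4)
emit -A INPUT -p tcp -j DROP                                   # drop all other TCP
emit -A INPUT -p udp -j DROP                                   # drop all other UDP
emit -A INPUT -p icmp -j DROP                                  # drop ICMP
```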
For additional information, please see this community post. Future LKE releases may allow greater flexibility for the network endpoints of these types of workloads.
Please note that, at this time, nodes should be removed from the Cloud Firewall configuration before removing or recycling node pools within the Kubernetes configuration. Also, when adding node pools to the Kubernetes cluster, Cloud Firewall must be updated with the new node pool(s). Failure to add the new nodes creates a security risk.
Note: All new LKE clusters create a service named kubernetes in the default namespace, designed to ease interactions with the control plane. This is a standard service for LKE clusters.
Next Steps
Now that you have a running LKE cluster, you can start deploying workloads to it. Refer to our other guides to learn more:
- How to Deploy a Static Site on Linode Kubernetes Engine
- Create and Deploy a Docker Container Image to a Kubernetes Cluster
- Troubleshooting Kubernetes Guide
- See all our Kubernetes guides
More Information
You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.