Getting started with MicroK8s on Ubuntu Core

1. Introduction

What is Ubuntu Core

Ubuntu Core is a version of the Ubuntu operating system designed and engineered for IoT and embedded systems. It is built entirely from snap packages to create a secure, robust, confined and transaction-based OS that’s easy to install, deploy and upgrade.

What is Kubernetes

Kubernetes is an orchestration platform for containerised applications. Kubernetes abstracts compute, networking and storage resources and manages container lifecycle in a reliable and scalable way. Built with DevOps principles, Kubernetes automates operational tasks, such as workload redeployments and upgrades and provides APIs for granular resource control.

What is MicroK8s

MicroK8s is a lightweight CNCF-certified Kubernetes distribution for clouds, workstations, edges and IoT devices. Being a snap, it runs all Kubernetes services natively (i.e. no virtual machines), it includes all dependencies in a single package and gets transparent mission-critical security updates. MicroK8s is optimised for simplicity and robustness, as installation, set up and operations, such as enabling of monitoring services and high-availability clustering are either automated or done through a single command.

Why MicroK8s on Ubuntu Core

MicroK8s and Ubuntu Core share benefits such as reliability and security, with features such as self-healing, high availability and automatic OTA updates. Ubuntu is the operating system of choice for Kubernetes in the cloud. Combining Ubuntu Core and MicroK8s creates a streamlined, embedded Kubernetes experience, optimised for size and performance in IoT and edge applications.

What you’ll learn

  • How to install Ubuntu Core on your preferred IoT device such as an Intel NUC or a Raspberry Pi
  • How to install MicroK8s on Ubuntu Core
  • How to check the status of the installation
  • How to enable MicroK8s add-ons
  • How to deploy containers on Kubernetes
  • How to check the deployment status

2. Install Ubuntu Core

Now that you have your IoT device handy, let’s start by installing Ubuntu Core.

There are currently two guides for this, one for the Intel NUC and one for the Raspberry Pi.

Note that in order to run MicroK8s you should use a 64-bit Ubuntu Core version.
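
If you are unsure whether the image you installed is 64-bit, a quick way to check from the Ubuntu Core terminal is:

uname -m

The output should be x86_64 (Intel/AMD) or aarch64 (64-bit Arm); armv7l indicates a 32-bit image, which cannot run MicroK8s.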

Once you have completed the steps from either guide, you should have access to the Ubuntu Core terminal on your device.


3. Install MicroK8s on Ubuntu Core

Installation

To install the latest version of MicroK8s on Ubuntu Core, run:

snap install microk8s --channel=latest/edge/strict

Below is the expected output:

ubuntu@ubuntu:~$ snap install microk8s --channel=latest/edge/strict
microk8s (edge/strict) v1.22.3 from Canonical✓ installed
ubuntu@ubuntu:~$

The installation can take a few minutes, depending on your hardware resources and network connection.
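
If you want to follow the installation progress, snapd tracks it as a change. You can list recent changes and then watch a specific one using the ID shown in the first column:

snap changes
snap watch <change-id>

Here <change-id> is a placeholder for the number printed by snap changes.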

What Kubernetes version is this installing?

MicroK8s is packaged in a snap and as such it will be automatically updated to newer point releases.

The strictly confined MicroK8s version is currently on a dedicated snap channel that is aligned with the latest upstream Kubernetes release.

Channels are made up of a track (or series) and an expected level of stability (stable, candidate, beta, edge), based on MicroK8s releases. For more information about the available channels, run:

snap info microk8s
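
If you prefer to track a specific release rather than latest/edge/strict, you can switch channels with snap refresh. The channel name below is only an example; pick one of the channels listed by snap info microk8s:

sudo snap refresh microk8s --channel=1.28-strict/stable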

4. Start MicroK8s and check the status

MicroK8s is not started by default after installation. To start it, run:

sudo microk8s start

This command starts all Kubernetes services, for both the control plane and the worker. To check the status of your MicroK8s node once it has started, run:

sudo microk8s status --wait-ready

The status output should indicate that MicroK8s is running:

ubuntu@ubuntu:~$ sudo microk8s status
microk8s is running
high-availability: no
  datastore master nodes: 127.0.0.1:19001
  datastore standby nodes: none
addons:
  enabled:
    ha-cluster           # Configure high availability on the current node
  disabled:
    ambassador           # Ambassador API Gateway and Ingress
    ...
    storage              # Storage class; allocates storage from host directory
    traefik              # traefik Ingress controller for external access
ubuntu@ubuntu:~$
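
As an additional check, not shown in the output above, you can confirm that the node has registered with Kubernetes and is ready:

sudo microk8s kubectl get nodes

The STATUS column should read Ready once all services are up.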

Enable the necessary MicroK8s addons

Now that you have the Kubernetes services up and running, you should set up additional services, such as the Kubernetes dashboard, CoreDNS or local storage, to make full use of your Kubernetes cluster. Many of these services are available as MicroK8s addons and can be easily enabled by running the microk8s enable command:

sudo microk8s enable dns
sudo microk8s enable dashboard
sudo microk8s enable storage

These addons can be disabled at any time by running the microk8s disable command:

sudo microk8s disable dns 
...

You can list the available addons, and see which ones are currently enabled, with the microk8s status command.
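
After enabling an addon, you can also verify that its pods have started. For example, the CoreDNS pod deployed by the dns addon runs in the kube-system namespace:

sudo microk8s kubectl get pods -n kube-system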

List of the most important addons

  • dns: Deploy DNS. This addon may be required by others, so we recommend you always enable it.
  • dashboard: Deploy the Kubernetes dashboard (see the access example after this list).
  • storage: Create a default storage class. This storage class makes use of the hostpath-provisioner pointing to a directory on the host.
  • ingress: Create an ingress controller.
  • gpu: Expose GPU(s) to MicroK8s by enabling the nvidia-docker runtime and nvidia-device-plugin-daemonset. Requires NVIDIA drivers to be already installed on the host system.
  • istio: Deploy the core Istio services. You can use the microk8s istioctl command to manage your deployments.
  • registry: Deploy a private Docker registry and expose it on localhost:32000. The storage addon will be enabled as part of this addon.
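
Once the dashboard addon is enabled, one convenient way to reach it is the dashboard-proxy helper, which forwards the dashboard to a local port and prints an access token; the exact output depends on your MicroK8s version:

sudo microk8s dashboard-proxy

Keep the command running while you use the dashboard in your browser.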

5. Deploy a sample container workload

You can now use microk8s kubectl to deploy your containers. In this example, we deploy Node-RED, a programming tool for wiring together hardware devices:

sudo microk8s kubectl create deployment nodered --image=nodered/node-red

Use kubectl to check the pods:

ubuntu@ubuntu:~$ sudo microk8s kubectl get pods
NAME                        READY     STATUS            RESTARTS  AGE
nodered-7555b955f9-68cl9    0/1       ContainerCreating 0         3s
ubuntu@ubuntu:~$ sudo microk8s kubectl get pods
NAME                        READY     STATUS            RESTARTS  AGE
nodered-7555b955f9-68cl9    1/1       Running           0         16s
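
Instead of polling get pods, you can also ask kubectl to block until the deployment reports that it is available:

sudo microk8s kubectl rollout status deployment/nodered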

Next, expose the deployment with kubectl to make it accessible from the network:

sudo microk8s kubectl expose deployment nodered --type=NodePort --port=1880 --name=nodered-service

6. Check the deployment status and access your application

You can check the deployment status using the following command:

sudo microk8s kubectl get services
ubuntu@ubuntu:~$ sudo microk8s kubectl get services
NAME               TYPE         CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes         ClusterIP    10.152.183.1    <none>        443/TCP        81m
nodered-service    NodePort     10.152.183.46   <none>        1880:30663/TCP 5s

The exposed port is assigned automatically from the NodePort range (30000-32767). In the example above, the port is 30663.

To access the application’s graphical interface, open your browser and navigate to a URL of the form http://<DEVICE_IP>:<EXPOSED_PORT>

Example: http://192.168.1.222:30663/
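
If you want to retrieve the assigned NodePort without reading the table, a small kubectl query extracts the nodePort field of the service created earlier:

sudo microk8s kubectl get service nodered-service -o jsonpath='{.spec.ports[0].nodePort}'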


7. Where to next?