From Laptop to Edge: A Complete Guide to MicroK8s, a lightweight Kubernetes


We often need robust orchestration without the massive overhead of a full-blown Kubernetes cluster, especially when working with edge devices or resource-constrained laptops.

In this guide, I’ll walk you through setting up MicroK8s, a lightweight, production-grade Kubernetes distribution. We will start with a basic deployment on WSL/Linux, upgrade to a declarative YAML setup, implement security best practices, and finally, scale to a multi-node High Availability (HA) cluster with shared storage.

This guide is perfect for setting up a “forever-running” server on a spare laptop or deploying edge analytics on Raspberry Pis.

Part 1: Installation & Single-Node Setup

MicroK8s is ideal for small devices because it packages all Kubernetes services into a single snap package.

1. Prerequisites (for WSL/Linux)

If you are running on WSL (Windows Subsystem for Linux), you must ensure `systemd` is enabled, as MicroK8s relies on it for service management.

Check for systemd:

ps -p 1 -o comm=

If the output is `init`, you must enable systemd in your WSL settings before proceeding.
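On current WSL versions, systemd is enabled via a small config file inside your distribution. As a sketch, add the following to `/etc/wsl.conf`, then restart WSL by running `wsl --shutdown` from Windows:

```ini
# /etc/wsl.conf — enable systemd as PID 1 (requires WSL 0.67.6 or newer)
[boot]
systemd=true
```

After restarting, `ps -p 1 -o comm=` should print `systemd`.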

2. Install MicroK8s

We install MicroK8s using `snap` and configure user permissions to avoid typing `sudo` for every command.

# Install the package
sudo snap install microk8s --classic

# Add your user to the MicroK8s group
sudo usermod -a -G microk8s $USER

# Apply the group change (or restart terminal)
su - $USER

# Wait for the cluster to be ready
microk8s status --wait-ready

Part 2: Deploying Your First App (Imperative Method)

Let’s test the cluster by deploying `podinfo`, a lightweight web application. We will first enable standard addons.

Enable DNS and Dashboard:

microk8s enable dns dashboard

Deploy and Expose:

# Create the deployment
microk8s kubectl create deployment podinfo --image=ghcr.io/stefanprodan/podinfo

# Expose it internally via NodePort
microk8s kubectl expose deployment podinfo --type=NodePort --port=9898

# Check the service to find the assigned port
microk8s kubectl get service podinfo

You can now test the endpoint with `curl <NODE-IP>:<NODE-PORT>` using the NodePort shown in the service output (or `curl <CLUSTER-IP>:9898` from the node itself).

Testing Self-Healing & Scaling

One of Kubernetes’ main features is resilience. If you delete a pod, the ReplicaSet will immediately spin up a new one.

# Delete a pod to test auto-recovery
microk8s kubectl delete pod <your-pod-name>

# Scale up for high availability (HA)
microk8s kubectl scale deployment podinfo --replicas=2

Part 3: The “GitOps” Way (Declarative YAML)

Running ad-hoc commands is fine for testing, but for a “forever-running” server, we need Infrastructure as Code. We will deploy a persistent Jupyter Notebook environment using a single YAML file.

1. Enable Storage

To save data (notebooks) when a container restarts, we need to enable the hostpath storage addon.

microk8s enable hostpath-storage

2. Create the Configuration (`jupyter-stack.yaml`)

We will improve security by using Kubernetes Secrets for the password, rather than hardcoding it or checking logs for a token.

Create a file named `jupyter-stack.yaml`:

# 1. Secret (Stores the password securely)
apiVersion: v1
kind: Secret
metadata:
  name: jupyter-secret
type: Opaque
stringData:
  token: "my-secure-password" # Change this to your preferred password
---
# 2. Persistent Volume Claim (Requesting Storage)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jupyter-pvc
spec:
  storageClassName: microk8s-hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
# 3. Deployment (The Application)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jupyter-notebook
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jupyter
  template:
    metadata:
      labels:
        app: jupyter
    spec:
      containers:
        - name: jupyter
          image: jupyter/base-notebook:latest
          ports:
            - containerPort: 8888
          env:
            - name: JUPYTER_TOKEN
              valueFrom:
                secretKeyRef:
                  name: jupyter-secret
                  key: token
          volumeMounts:
            - mountPath: /home/jovyan/work
              name: notebook-storage
      volumes:
        - name: notebook-storage
          persistentVolumeClaim:
            claimName: jupyter-pvc
---
# 4. Service (Networking)
apiVersion: v1
kind: Service
metadata:
  name: jupyter-service
spec:
  selector:
    app: jupyter
  ports:
    - protocol: TCP
      port: 80         # Service port
      targetPort: 8888 # Container port
  type: NodePort
 

3. Deploy and Access

Apply the configuration:

microk8s kubectl apply -f jupyter-stack.yaml

To access this securely from your local machine, use Port Forwarding:

microk8s kubectl port-forward service/jupyter-service 8888:80

Open `http://localhost:8888` in your browser and log in with the password defined in your Secret (`my-secure-password`).

Part 4: Role-Based Access Control (RBAC)

If you are sharing this cluster or automating cleanup, you shouldn’t use the root admin user. Let’s create a restricted user, `data-scientist`, who can manage deployments but cannot view system secrets.

1. Create Credentials:

openssl genrsa -out data-scientist.key 2048

openssl req -new -key data-scientist.key -out data-scientist.csr -subj "/CN=data-scientist"

2. Approve Certificate via Kubernetes:

Wrap the CSR in a YAML object and apply it to the cluster, then approve it:
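As a sketch, the wrapper object looks like this; the `request` field holds the base64-encoded contents of `data-scientist.csr` (for example from `cat data-scientist.csr | base64 | tr -d '\n'`), and the placeholder below is not a real value:

```yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: data-scientist
spec:
  # Paste the base64-encoded CSR here (single line, no wrapping)
  request: <BASE64_ENCODED_CSR>
  signerName: kubernetes.io/kube-apiserver-client
  usages:
    - client auth
```

Apply it with `microk8s kubectl apply -f csr.yaml`, then approve: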

microk8s kubectl certificate approve data-scientist

3. Create RoleBinding:

Grant the user admin rights only within the `default` namespace:

microk8s kubectl create rolebinding data-scientist-admin \
  --clusterrole=admin \
  --user=data-scientist \
  --namespace=default
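Before switching, the signed certificate has to be retrieved from the cluster and the user and context registered in your kubeconfig. A minimal sketch (the cluster name `microk8s-cluster` is the MicroK8s default; adjust it if yours differs):

```shell
# Retrieve the signed client certificate issued by the cluster
microk8s kubectl get csr data-scientist -o jsonpath='{.status.certificate}' \
  | base64 -d > data-scientist.crt

# Register the user and a context in the kubeconfig
microk8s kubectl config set-credentials data-scientist \
  --client-certificate=data-scientist.crt \
  --client-key=data-scientist.key \
  --embed-certs=true
microk8s kubectl config set-context data-scientist-context \
  --cluster=microk8s-cluster \
  --user=data-scientist
```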

4. Switch Context:

You can now switch to this user to safely perform operations like deleting deployments:

microk8s kubectl config use-context data-scientist-context

microk8s kubectl delete -f jupyter-stack.yaml

Part 5: High Availability (Multi-Node Cluster)

Running on a single node has a flaw: if the device fails, the app stops. If we have two devices (e.g., a laptop and a desktop, or two Raspberry Pis), we can cluster them.

However, standard local storage (`hostpath`) locks data to one specific machine. To fix this, we need NFS (Network File System) so the data follows the application regardless of which node it runs on.

1. Join the Nodes

On the Main Node:

microk8s add-node
# Output: microk8s join 10.8.0.4:25000/…

On the Worker Node:
Run the join command provided by the main node:

microk8s join 10.8.0.4:25000/YOUR_TOKEN_HERE

2. Set Up Shared NFS Storage

MicroK8s includes a community addon for NFS.
On the Main Node:

sudo apt-get install nfs-common -y

microk8s enable community

microk8s enable nfs

Note: Ensure `nfs-common` is installed on all worker nodes as well.

3. Deploy HA Configuration

Update your YAML to use storageClassName: nfs. This ensures that if Node A dies, Kubernetes can reschedule the pod to Node B, and it will still be able to mount your notebooks.
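For example, the PVC from Part 3 becomes the following (this assumes the addon registered a storage class named `nfs`; check the exact name with `microk8s kubectl get storageclass`):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jupyter-pvc
spec:
  storageClassName: nfs
  accessModes:
    - ReadWriteOnce   # ReadWriteMany is also possible on NFS
  resources:
    requests:
      storage: 2Gi
```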

Critical Note on Replicas:
Keep `replicas: 1` for Jupyter. Jupyter holds file locks on its notebooks and internal databases, so running two simultaneous copies against the same volume can corrupt your data. Rely on Kubernetes failover, not active-active replication.

Conclusion

You now have a production-ready environment capable of running on minimal hardware. We’ve moved from a simple `snap install` to a secure, multi-node cluster with persistent shared storage, perfect for hosting your personal data engineering projects or edge analytics pipelines.
