
Kubernetes Minikube #1 – Configmaps, Storage, Ingress

Let’s mess around with Kubernetes’ Minikube and learn how to use it to launch an application with an ingress point, external configuration and volume claims. We will do this by reproducing a number of OpenShift MiniLabs, hacking around with kubectl commands and Kubernetes YAML resource files in the process. Now for a picture of Adelaide.


Objective

This tutorial will cover the following:

  • Install Minikube and associated dependencies
  • Launch Minikube using VM-less technique
  • Create an application from an existing Docker image
  • Set up a Kubernetes ingress to access the application
  • Demonstrate use of Kubernetes configmaps
  • Use Kubernetes to create/claim a permanent volume


Setup

1. Install Software

The setup used here is for a Linux environment and requires VirtualBox, kubectl and Minikube. Installation instructions for these, and for other environments, are documented at https://kubernetes.io/docs/tasks/tools/install-minikube/ .

2. Start Minikube

Once the necessary software has been installed we are ready to go. We are going to launch a VM-less Minikube instance using the instructions below. This approach requires a Docker daemon and has the advantage that your image assets are all colocated in your local Docker image registry. The OpenShift equivalent to this is “oc cluster up” as shown here.

# Check installed software
$ kubectl version 
$ minikube version

# Stop/start minikube
$ minikube stop
$ sudo minikube addons enable kube-dns 
$ sudo minikube start \ 
  --vm-driver=none \ 
  --feature-gates=Accelerators=true 
$ sudo minikube addons list
$ kubectl get svc --namespace=kube-system

# Follow instructions as per Minikube stdout
$ export CHANGE_MINIKUBE_NONE_USER=true 
$ sudo mv /root/.kube $HOME/.kube 
# this will write over any previous configuration
$ sudo chown -R $USER $HOME/.kube
$ sudo chgrp -R $USER $HOME/.kube

$ sudo mv /root/.minikube $HOME/.minikube 
# this will write over any previous configuration
$ sudo chown -R $USER $HOME/.minikube
$ sudo chgrp -R $USER $HOME/.minikube

# Check all OK then launch the dashboard
$ minikube addons enable ingress
$ minikube status
$ docker images | grep gcr.io
$ minikube dashboard

Verify

1. Create Namespace

We will use a namespace to organise our application assets. Do this as follows, and then make that the default namespace. The OpenShift equivalent to this is “oc new-project” as shown here.

# Create a new namespace
$ kubectl create -f - << EOF!
apiVersion: v1
kind: Namespace
metadata:
  name: cotd
EOF!

# Check your new namespace then set as default
$ kubectl get namespaces
$ kubectl config set-context $(kubectl config current-context) --namespace=cotd

2. Create Application

We are going to create a pod by bootstrapping it straight off an existing Docker image (stefanopicozzi/pets). Do this as follows. The OpenShift equivalent to this is “oc new-app” as shown here.

# Pull down a Docker image
$ docker pull stefanopicozzi/pets

# Create a deployment and expose directly from the image
$ kubectl run pets --image=stefanopicozzi/pets --port=8080 -n=cotd
$ kubectl expose deployment pets --type=NodePort --port=8080 --target-port=8080 --name=pets -n=cotd

# Check pod status
$ kubectl get deployments -n=cotd
$ kubectl get pods -n=cotd

$ curl $(minikube service pets --url -n=cotd)
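As an aside, the imperative kubectl run/expose commands above can also be captured declaratively. Here is a sketch of equivalent manifests; the label run: pets mirrors what kubectl run generates, but treat the exact names and labels as assumptions rather than a verified transcript:

```yaml
# Deployment for the pets image (sketch)
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: pets
  namespace: cotd
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: pets
    spec:
      containers:
      - name: pets
        image: stefanopicozzi/pets
        ports:
        - containerPort: 8080
---
# NodePort Service equivalent to the kubectl expose command
apiVersion: v1
kind: Service
metadata:
  name: pets
  namespace: cotd
spec:
  type: NodePort
  selector:
    run: pets
  ports:
  - port: 8080
    targetPort: 8080
```

Saving both documents in one file and feeding it to kubectl create -f gives the same result as the two commands, with the advantage that the resources can be version controlled.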

3. Create Ingress

Create an ingress point into your application as follows. Once successful you will be able to access the endpoint using something like http://pets. The OpenShift equivalent to this is “oc expose service” as shown here.

# Set up an Ingress
$ kubectl create -n=cotd -f - << EOF!
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: pets-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  backend:
    serviceName: default-http-backend
    servicePort: 80
  rules:
  - host: pets
    http:
      paths:
      - path: /
        backend:
          serviceName: pets
          servicePort: 8080
EOF!
$ kubectl get ingress -n=cotd 
$ kubectl describe ingress pets-ingress -n=cotd 

# Set up host file entry for convenience and verify
$ sudo echo "$(minikube ip) pets" | sudo tee -a /etc/hosts
$ while true; do curl -s http://pets/item.php | grep "data/images" | awk '{print $5}'; sleep 2; done
data/images/pets/riki.jpg
data/images/pets/tipsy.jpg
data/images/pets/milo.jpg
...

4. Configure Configmaps

The COTD sample application has an environment variable (SELECTOR) that changes its theme, e.g. cats, cities, pets. To demonstrate externalising configuration data, we can set this variable using a Kubernetes configmap. The OpenShift equivalent to this is “oc create configmap” as shown here.

One of the configuration steps requires editing the deployment yaml. You can inspect a complete and known-good deployment file at https://bitbucket.org/emergile/cotd/src/master/etc/kubernetes/deployment.yaml .
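If you prefer declarative resources over --from-literal, the same configmap can be expressed as a manifest. A sketch carrying the same key/value as the command-line version:

```yaml
# Equivalent to: kubectl create configmap pets-config --from-literal=selector=cities
apiVersion: v1
kind: ConfigMap
metadata:
  name: pets-config
  namespace: cotd
data:
  selector: cities
```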

# Create a configmap that changes application to show cities
$ kubectl create configmap pets-config -n=cotd --from-literal=selector=cities 
$ kubectl edit deployment pets -n=cotd
 ...
 spec:
   containers:
...
     env:
     - name: SELECTOR
       valueFrom:
         configMapKeyRef:
           key: selector
           name: pets-config
$ while true; do curl -s http://pets/item.php | grep "data/images" | awk '{print $5}'; sleep 2; done
data/images/cities/christchurch.jpg
data/images/cities/canberra.jpg
data/images/cities/christchurch.jpg
...

# Now change the env variable value to cats and verify
$ kubectl delete configmap pets-config -n=cotd
$ kubectl create configmap pets-config -n=cotd --from-literal=selector=cats
$ export POD=$(sudo kubectl get pods -n=cotd | grep pets | awk '{print $1}')
$ kubectl delete pod $POD -n=cotd
$ while true; do curl -s http://pets/item.php | grep "data/images" | awk '{print $5}'; sleep 2; done
data/images/cats/christchurch.jpg
data/images/cats/canberra.jpg
data/images/cats/christchurch.jpg
...
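Incidentally, rather than editing the deployment interactively, the same env stanza can be applied non-interactively with kubectl patch. A sketch of a strategic merge patch, assuming the container created by kubectl run is named pets:

```yaml
# patch.yaml - strategic merge patch adding the SELECTOR env var
spec:
  template:
    spec:
      containers:
      - name: pets
        env:
        - name: SELECTOR
          valueFrom:
            configMapKeyRef:
              name: pets-config
              key: selector
```

Apply it with something like: kubectl patch deployment pets -n=cotd --patch "$(cat patch.yaml)". This is handier than kubectl edit when scripting.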

5. Working with Storage

Let’s now demonstrate how to attach storage to our pod. We will do this by mapping the images directory to a local HostPath file system. You can verify success by noting that the pet images (http://pets) in your browser do not appear until the image content is copied across to the local HostPath. The OpenShift equivalent to this is shown here.

There are three steps to this procedure: 1) create a persistent volume, 2) create a claim and 3) map the storage path to your pod. Again, one of the configuration steps requires editing the deployment YAML. You can inspect a complete and known-good deployment file at https://bitbucket.org/emergile/cotd/src/master/etc/kubernetes/deployment.yaml .

# 1. Create a persistent volume 
$ kubectl create -n=cotd -f - << EOF!
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pets-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp/pets"
EOF!
$ kubectl get pv

# 2. Create a persistent volume claim
$ kubectl create -n=cotd  -f - << EOF!
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pets-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Mi
EOF!
$ kubectl get pvc -n=cotd

# 3. Map claim to pod
$ kubectl edit deployment pets -n=cotd
...
    spec:

        volumeMounts:
        - mountPath: "/opt/app-root/src/data/images"
          name: pets-images-storage
...
      volumes:
      - name: pets-images-storage
        persistentVolumeClaim:
          claimName: pets-pvc
status:
...

# Point a browser at http://pets, then copy the image content across to
# the storage claim and watch the pet images appear
$ cd /tmp
$ git clone https://bitbucket.org/emergile/cotd.git
$ cd pets
$ sudo cp -r ../cotd/data/images/* .
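As with the configmap step, the volume mapping from step 3 can be expressed as a strategic merge patch instead of an interactive edit. A sketch; the container name pets is an assumption matching the earlier kubectl run command:

```yaml
# volume-patch.yaml - mount the claimed volume over the images directory
spec:
  template:
    spec:
      containers:
      - name: pets
        volumeMounts:
        - mountPath: "/opt/app-root/src/data/images"
          name: pets-images-storage
      volumes:
      - name: pets-images-storage
        persistentVolumeClaim:
          claimName: pets-pvc
```

Apply with something like: kubectl patch deployment pets -n=cotd --patch "$(cat volume-patch.yaml)".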

Trivia

Reset

Get used to rebuilding your Minikube environment now and then. Do the following, then reboot for good measure.

# Reset minikube environment to ignore previous vm-driver 
$ minikube stop 
$ minikube delete 
$ sudo rm -rf $HOME/.kube $HOME/.minikube

Internet Not Accessible

If you cannot curl or ping an Internet resource from inside your pod, check your /etc/resolv.conf for a valid DNS entry such as 8.8.8.8 . Hunt around for techniques to prevent this file being overwritten, e.g. https://askubuntu.com/questions/157154/how-do-i-include-lines-in-resolv-conf-that-wont-get-lost-on-reboot . If you need to change /etc/resolv.conf, restart as follows:

$ sudo /etc/init.d/network-manager restart
$ minikube start ...

ImagePullBackOff

If you experience some weirdness, such as an ImagePullBackOff status, with Kubernetes attempting to create a container, refer to the “Internet Not Accessible” tip.

Canary/Blue/Green/A/B Deployments

Not covered in this blog #1 are the various cloud deployment patterns (canary, blue/green, A/B). OpenShift implementation patterns for these are described here. If you need to achieve this without OpenShift, I suggest you check out Istio, e.g. as at https://istio.io/blog/canary-deployments-using-istio.html . Now read blog #2.
