This is the fifth article in a series detailing the build of a Raspberry Pi Kubernetes development cluster. In the previous parts we covered getting the cluster up and running. In this post we will cover how to solve some of the networking challenges.
Many of the applications that I deploy to my cluster have a web-based UI. To make sure I can route requests to the cluster there is some final preparation to do.
DNS – Part 1
Firstly, we have already configured a domain on the router. In my case this is “radicalgeek.local”. This domain name will be appended to any hostname that is managed by the router. We can also use this domain name in our Kubernetes pods if we want to expose a service on the local network only.
You can see this in action now, as you have already created a hostname on the router named “cluster”. Try pinging the fully qualified domain name:
$ ping cluster.radicalgeek.local
You should get a response from one of your manager nodes. To point any hostname at our cluster in a load-balanced way, all we need to do is create a host entry for it that points to the IP address of the router.
On the router navigate to Network > Hostnames. Click “Add” and then create a record for the hostname “traefik” with the IP address of the router. In my case this is 192.168.1.1.
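Once the record is saved you can verify it resolves, assuming your client uses the router for DNS:

$ ping traefik.radicalgeek.local

The replies should come from the router's IP address. Once the reverse proxy below is in place, requests to this name will be forwarded into the cluster.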
Reverse Proxy
Next we are going to switch the OpenWrt LuCI UI to port 81 and install Nginx to act as a load-balancing reverse proxy. First use ssh to connect to your router.
$ ssh root@192.168.1.1
Now we need to free up the ports we want to route to our cluster. To do so, open the file /etc/config/uhttpd. Find the listen_http and listen_https lines and change the ports to 81 and 444 respectively.
$ nano /etc/config/uhttpd
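For reference, after the change the listen directives in my file look something like this (your file may also contain IPv6 entries, which should be updated in the same way):

list listen_http '0.0.0.0:81'
list listen_https '0.0.0.0:444'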
Restart the uhttpd service
$ service uhttpd restart
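Before going any further it is worth confirming that uhttpd has moved to the new ports:

$ netstat -ltn

You should see the router listening on 81 and 444, with nothing bound to 80 or 443 any more.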
With the ports freed up we can now install Nginx. Reload the router’s UI, this time including the new port number:
http://192.168.1.1:81
Log in and navigate to System > Software and install the packages “nginx-all-module” and “nginx-ssl”.
Back in the terminal we can now set up Nginx so that it routes all traffic to the cluster in a load-balanced way.
Edit the file /etc/nginx/nginx.conf
If you see any server {} sections inside the http {} section, just remove them.
Now create a new stream {} section after the closing curly bracket of the http {} section. In here create an upstream {} and a server {} section for each protocol you would like to forward to your cluster. The upstream block lists the load-balanced servers to forward requests to. Here is my complete nginx.conf file. Port 32222 is used for GitLab ssh, which we will cover in a later article.
user nobody nogroup;
worker_processes 1;

#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;

#pid logs/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    #default_type application/octet-stream;
    server_names_hash_bucket_size 64;

    #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    #                '$status $body_bytes_sent "$http_referer" '
    #                '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log logs/access.log main;

    sendfile on;
    #tcp_nopush on;

    #keepalive_timeout 0;
    keepalive_timeout 65;

    gzip on;
}

stream {
    upstream clusterhttp {
        server 10.1.0.21:80 max_fails=3 fail_timeout=10s;
        server 10.1.0.22:80 max_fails=3 fail_timeout=10s;
        server 10.1.0.23:80 max_fails=3 fail_timeout=10s;
        server 10.1.0.24:80 max_fails=3 fail_timeout=10s;
        server 10.1.0.25:80 max_fails=3 fail_timeout=10s;
        server 10.1.0.26:80 max_fails=3 fail_timeout=10s;
        server 10.1.0.27:80 max_fails=3 fail_timeout=10s;
        server 10.1.0.28:80 max_fails=3 fail_timeout=10s;
        server 10.1.0.29:80 max_fails=3 fail_timeout=10s;
        server 10.1.0.30:80 max_fails=3 fail_timeout=10s;
    }

    server {
        listen 80;
        proxy_pass clusterhttp;
        proxy_next_upstream on;
    }

    upstream clustertls {
        server 10.1.0.21:443 max_fails=3 fail_timeout=10s;
        server 10.1.0.22:443 max_fails=3 fail_timeout=10s;
        server 10.1.0.23:443 max_fails=3 fail_timeout=10s;
        server 10.1.0.24:443 max_fails=3 fail_timeout=10s;
        server 10.1.0.25:443 max_fails=3 fail_timeout=10s;
        server 10.1.0.26:443 max_fails=3 fail_timeout=10s;
        server 10.1.0.27:443 max_fails=3 fail_timeout=10s;
        server 10.1.0.28:443 max_fails=3 fail_timeout=10s;
        server 10.1.0.29:443 max_fails=3 fail_timeout=10s;
        server 10.1.0.30:443 max_fails=3 fail_timeout=10s;
    }

    server {
        listen 443;
        proxy_pass clustertls;
        proxy_next_upstream on;
    }

    upstream clusterssh {
        server 10.1.0.21:32222 max_fails=3 fail_timeout=10s;
        server 10.1.0.22:32222 max_fails=3 fail_timeout=10s;
        server 10.1.0.23:32222 max_fails=3 fail_timeout=10s;
        server 10.1.0.24:32222 max_fails=3 fail_timeout=10s;
        server 10.1.0.25:32222 max_fails=3 fail_timeout=10s;
        server 10.1.0.26:32222 max_fails=3 fail_timeout=10s;
        server 10.1.0.27:32222 max_fails=3 fail_timeout=10s;
        server 10.1.0.28:32222 max_fails=3 fail_timeout=10s;
        server 10.1.0.29:32222 max_fails=3 fail_timeout=10s;
        server 10.1.0.30:32222 max_fails=3 fail_timeout=10s;
    }

    server {
        listen 32222;
        proxy_pass clusterssh;
        proxy_next_upstream on;
    }
}
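Note that all three protocols are proxied from the stream {} section rather than the http {} section. Nginx is therefore load balancing at the TCP level and passing the traffic through untouched, so TLS is not terminated at the router; the cluster's ingress controller (Traefik in this setup) still receives the original TLS handshake and can serve its own certificates.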
Restart the Nginx service
$ service nginx restart
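A quick smoke test is worthwhile at this point. Assuming Traefik is already running in the cluster, a request to the hostname we created earlier should now be answered by the cluster rather than the router:

$ curl -I http://traefik.radicalgeek.local

A 404 response is fine here; it proves the request made it through Nginx to the cluster, it just does not match an ingress rule yet.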
DNS – Part 2
So far we only support DNS for the internal network. If we want to expose services on the public internet we will also need to set up external DNS. To do this you will need an external DNS host and a Dynamic DNS service provider. For DNS, my radicalgeek.co.uk zone file is hosted by http://24host.co.uk, but any hosting provider will do. For Dynamic DNS I use NoIP because it is cheap and easily supported on the router.
Start by signing up to NoIP and registering a domain name. Mine is radicalgeek.ddns.net. Once you have your domain, log in to your router and navigate to System > Software. Install the packages ddns-scripts, ddns-scripts_no-ip_com and luci-app-ddns.
Once the software is installed navigate to Services > Dynamic DNS and select “Edit” on the IPv4 entry.
Enter your new domain name in the “Lookup Hostname” and “Domain” fields, select “no-ip.com” from the “DDNS Service provider” field, then enter your NoIP username and password and click Save & Apply. Your router will now update this DNS record any time your ISP-provided external IP changes.
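You can check the update worked by resolving the dynamic DNS name from any machine:

$ nslookup radicalgeek.ddns.net

It should return your current external IP address.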
That is all well and good, but I do not want to use that DNS name for my services. Next head over to your domain hosting provider and edit your DNS zone file. The interface will be different for each provider, but ultimately you want to create a wildcard CNAME record pointing at the dynamic DNS name.
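In BIND-style zone file syntax the record looks something like this (adapt it to whatever form your provider's interface takes):

*.dev.radicalgeek.co.uk.    IN    CNAME    radicalgeek.ddns.net.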
This wildcard record will route any subdomain of dev.radicalgeek.co.uk to the dynamic DNS name, which in turn resolves to the router. The router will simply forward any traffic it receives to the cluster in a load-balanced way. If the cluster is running an ingress for the DNS name in the request, then it will be forwarded on to the correct pod.
So we can now spin up a pod using any subdomain of dev.radicalgeek.co.uk and it will be accessible from the internet with no further router or DNS configuration, with the traffic load balanced too.
Cert Manager
Now that we have the ability to route external DNS to our cluster, we should also make sure we can encrypt that traffic. With this configuration in place we can use cert-manager to request certificates from Let’s Encrypt.
To install cert-manager on the cluster, first create a certManager-namespace.yml file with the following content:
apiVersion: v1
kind: Namespace
metadata:
  name: cert-manager
  labels:
    name: cert-manager
Apply the manifest to your cluster
$ kubectl apply -f certManager-namespace.yml
Now we can deploy cert-manager itself by applying the release manifest directly from GitHub:
$ kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v1.5.4/cert-manager.yaml
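Give the deployment a minute to start and then check the pods are healthy:

$ kubectl get pods -n cert-manager

You should see the cert-manager, cainjector and webhook pods all in the Running state.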
All that is left to do now is to create a ClusterIssuer to issue our certificates. Create a cluster-issuer.yml file:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: mark@radicalgeek.co.uk
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          ingressTemplate:
            metadata:
              annotations:
                kubernetes.io/ingress.class: traefik
Apply the manifest
$ kubectl apply -f cluster-issuer.yml
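To see the issuer in action, here is a sketch of an ingress for a hypothetical service; the hostname, service name, port and secret name are placeholders for whatever you deploy. The cert-manager.io/cluster-issuer annotation tells cert-manager to request a certificate for the host and store it in the named secret, which Traefik then uses to serve TLS:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  annotations:
    kubernetes.io/ingress.class: traefik
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - myapp.dev.radicalgeek.co.uk
    secretName: myapp-tls
  rules:
  - host: myapp.dev.radicalgeek.co.uk
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp
            port:
              number: 80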
Finished
Your cluster should now be ready to run pods that are accessible from the internet in a load-balanced and secure way. In the next article we will use what we have done here to expose some internal dashboards on the *.local domain.