If you have been following along with these articles so far, you should now have a working PoE-powered Raspberry Pi cluster, PXE booting with iSCSI root file systems. In this post we are going to do some initial deployments to the cluster to set up iSCSI persistent volumes and automated volume snapshots.
In order for our pods to persist data in an acceptable way, we need to ensure Kubernetes can dynamically provision iSCSI LUNs on the Synology NAS. To do this we will need to deploy the official Synology iSCSI container storage interface (CSI) driver.
https://github.com/SynologyOpenSource/synology-csi
Unfortunately, at the time of writing this project doesn’t support arm64; however, support will hopefully be coming soon via this pull request:
https://github.com/SynologyOpenSource/synology-csi/pull/9
Luckily for you I have already made these updates available via my own fork of the driver, and included some of the other parts you need to get this working.
https://gitlab.radicalgeek.co.uk/hoofer/infrastructure/synology-iscsi-csi
Feel free to clone this repo locally to deploy it to your cluster.
```shell
$ git clone https://gitlab.radicalgeek.co.uk/hoofer/infrastructure/synology-iscsi-csi
$ cd synology-iscsi-csi
```
Preparing the NAS
If you followed the recommendations in Part 2 then Volume 4 on your NAS is already set up, ready to hold your persistent volumes. You should also have a k3s user on your NAS and have noted down the password. If you have not, go back and do that now.
Installing the CSI Driver with the Snapshot feature
As described in the Synology readme, you first need to ensure that the Volume Snapshot Custom Resource Definitions (CRDs) and the Common Snapshot Controller are installed. The Common Snapshot Controller has been updated to use an image that supports arm64.
```shell
$ kubectl apply -f manifests/setup/
```
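As a quick sanity check, you can confirm the CRDs registered and the snapshot controller is running before moving on. (The namespace shown here is an assumption — check where the manifests in the repo deploy the controller on your cluster.)

```shell
# List the VolumeSnapshot CRDs that should now exist
$ kubectl get crd | grep snapshot.storage.k8s.io

# Check the snapshot controller pod is up
# (namespace may differ depending on the manifests)
$ kubectl get pods -n kube-system | grep snapshot-controller
```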
Next we need to provide the configuration for the driver to connect to the NAS. Open config/client-info.yml and update the values for the IP address of the SAN, and the username and password the cluster will use to connect.
```shell
$ nano config/client-info.yml
```

```yaml
---
clients:
  - host: 10.1.0.20
    port: 5000
    https: false
    username: k8s
    password: <yourPassword>
```
You are now ready to install the driver. Run the Synology-provided script. (Again, the manifests have been updated with images that support arm64.)
```shell
$ ./scripts/deploy.sh install --all
```
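Before creating any volumes, it is worth verifying the install succeeded. The official deploy script places the driver in the synology-csi namespace; a sketch of the checks:

```shell
# Driver pods should all reach Running
$ kubectl get pods -n synology-csi

# Confirm the storage class and volume snapshot class were created
$ kubectl get storageclass
$ kubectl get volumesnapshotclass
```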
This will create both a storage class and the volume snapshot class. Here is an example of creating a persistent volume claim that will use this storage class, and automatically provision a LUN on the NAS for the volume.
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-pv-claim
  namespace: my-app
  labels:
    app: my-app
spec:
  storageClassName: synology-iscsi-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 200Gi
```
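To see the claim in action, a pod just needs to mount it like any other PVC. Here is a minimal sketch using the claim above — the pod name and busybox image are placeholders of my own, not part of the driver:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pvc-test            # hypothetical test pod
  namespace: my-app
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "echo hello > /data/test.txt && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data   # backed by an iSCSI LUN on the NAS
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-pv-claim
```

When this pod is scheduled, the CSI driver provisions the LUN and k3s attaches it to whichever node runs the pod.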
Let’s set up the scheduled snapshotter to automate volume snapshots, which will allow us to roll back in the event of a problem, or provision new environments from point-in-time volumes.
Installing the Scheduled Volume Snapshotter
The scheduled volume snapshotter is available on GitHub, but once again the images used do not support arm64.
https://github.com/ryaneorth/k8s-scheduled-volume-snapshotter
Unfortunately I could not find an image that supported arm64 elsewhere, so I have had to build this image myself. This is as simple as cloning the official repository and building the Docker image on your Raspberry Pis. If you do not want to do this, feel free to use the image I have built. You can specify the image as a value when you deploy the Helm chart.
```shell
$ helm upgrade --install \
    --set image.repository=reg.gitlab.radicalgeek.co.uk/hoofer/infrastructure/k8s-scheduled-volume-snapshotter \
    --set image.tag=latest \
    scheduled-volume-snapshotter \
    https://github.com/ryaneorth/k8s-scheduled-volume-snapshotter/releases/download/v0.7.0/helm-chart.tgz
```
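A quick check that the release deployed cleanly (installed into the default namespace here, since no namespace was passed to Helm):

```shell
# Confirm the Helm release status
$ helm status scheduled-volume-snapshotter

# The snapshotter's workload should appear in the namespace it was installed into
$ kubectl get pods,cronjobs
```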
You should now be able to set up scheduled volume snapshots. Here is an example of how to create a daily snapshot of the volume we created above.
```yaml
apiVersion: k8s.ryanorth.io/v1beta1
kind: ScheduledVolumeSnapshot
metadata:
  name: my-pv-scheduled-snapshot
  namespace: my-app
spec:
  snapshotClassName: synology-snapshotclass
  persistentVolumeClaimName: my-pv-claim
  snapshotFrequency: 1d
  snapshotRetention: 3d
  snapshotLabels:
    firstLabel: mySnapshot
```
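The snapshots taken on this schedule show up as ordinary VolumeSnapshot objects, which you can list with `kubectl get volumesnapshots -n my-app`. If you ever need to roll back or clone an environment, the standard Kubernetes pattern is to create a new PVC with a `dataSource` pointing at a snapshot. A sketch, where the snapshot name is a placeholder you would copy from the list above:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pv-claim-restored   # hypothetical name for the restored claim
  namespace: my-app
spec:
  storageClassName: synology-iscsi-storage
  dataSource:
    name: <snapshotName>       # an existing VolumeSnapshot in this namespace
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 200Gi           # must be at least the size of the snapshot source
```

Point a pod at the restored claim and you have a point-in-time copy of the original volume.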
Your cluster should now be set up and ready for our first workloads. In the next post we will deploy some important prerequisites for our development infrastructure.