Backup of Kubernetes Cluster using Velero

What is Velero

Velero is the official open-source project from VMware: https://velero.io. It can back up all Kubernetes API objects and persistent volumes from the cluster on which it is installed. Backed-up objects can be restored on the same cluster or on a new one. Using a tool like Velero is essential for any serious work on a Kubernetes cluster.

What We Are Going To Cover

  • Getting EC2 Client Credentials

  • Adjusting “values.yaml”, the configuration file

  • Creating a namespace called velero in the Kubernetes cluster

  • Installing Velero with a Helm chart

  • Creating and deleting backups using Velero

  • Example 1 Basics of Restoring an Application

  • Example 2 Restoring an Application with Persistent Volumes

Prerequisites

No. 1 Hosting

You need a CREODIAS hosting account with Horizon interface https://horizon.cloudferro.com.

The resources you require and use will be charged against your account wallet. Check your account statistics at https://new.cloudferro.com/.

No. 2 Authorize to OpenStack

To be able to connect to the cloud, the openstack command must be installed. If it is not installed already, see the article How to install OpenStackClient for Linux on CREODIAS.

Then you have to authenticate your account to the cloud. See article How to activate OpenStack CLI access to CREODIAS cloud using one- or two-factor authentication.

No. 3 How to Access Kubernetes cluster post-deployment

To use Velero, you will need a working kubectl command. See the article How To Access Kubernetes Cluster Post Deployment Using Kubectl On CREODIAS OpenStack Magnum.

To verify that kubectl has access to your cluster, run this command:

$ kubectl get nodes -o wide

If the output of the command lists the nodes of your cluster, kubectl has proper access to it.

No. 4 Handling Helm

Helm is a package manager for Kubernetes. First install Helm, then use it to install Velero. See Deploying Helm Charts on Magnum Kubernetes Clusters on CREODIAS Cloud.

Check whether Helm has been properly installed using the following command:

$ helm version
version.BuildInfo{Version:"v3.9.2", GitCommit:"1addefbfe665c350f4daf868a9adc5600cc064fd", GitTreeState:"clean", GoVersion:"go1.17.12"}

No. 5 Using Swift

Swift is an OpenStack Object Store module, allowing you to store and retrieve data with a simple API. It’s built for scale and is optimized for durability, availability, and concurrency across the entire data set. Swift is ideal for storing unstructured data that can grow without bound.

Assuming that the openstack command is fully operational, use the following commands to install the Swift client and create a container (bucket) named “backup”:

$ sudo apt-get install python3-pip # install pip
$ pip install python-swiftclient # install the OpenStack Swift client
$ swift post backup # where "backup" is the name of your container (bucket)
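
To confirm that the container was created (an optional check), list your containers with either of these commands; the name backup should appear in the output:

$ swift list
$ openstack container list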

Attention

A container name cannot contain spaces or special characters. Using a name like “backup velero” or “backup-velero” will result in an error message in the next steps, while a name such as “backupvelero” will not.

Before Installing Velero

It is highly recommended to update & upgrade your environment (example shown here is for Ubuntu):

$ sudo apt update && sudo apt upgrade

It will be necessary to have access to a Kubernetes cluster, v1.16 or later, with DNS and container networking enabled. For more information on supported Kubernetes versions, see the Velero compatibility matrix.

To sum up: the openstack, helm, kubectl and swift commands must be available, connected to the cloud and ready to run. A container to which the cluster will be backed up must also be available.

Installation step 1 Getting EC2 Client Credentials

First, fetch EC2 credentials from OpenStack. They are necessary to access the private bucket (container). Generate them by executing the following commands:

$ openstack ec2 credentials create
$ openstack ec2 credentials list

Save the Access Key and the Secret Key somewhere safe. They will be needed in the next step, in which you set up the Velero configuration file.
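
If you prefer to print only the two values, the OpenStack client can limit the output to selected columns. This is an optional sketch and assumes the default column names Access and Secret used by the ec2 credentials listing:

$ openstack ec2 credentials list -c Access -c Secret -f value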

Installation step 2 Adjust the Configuration file - “values.yaml”

Now create or adjust a configuration file for Velero. Use a text editor of your choice to create the file. On macOS or Linux, for example, you can use nano:

$ nano values.yaml

Use the configuration file provided below. Fill in the required fields, which are marked with ## comments:

Attention

The name and bucket fields cannot contain spaces or special characters. Using a name like “backup velero” or “backup-velero” will result in an error message in the next steps, while a name such as “backupvelero” will not.

Values.yaml

initContainers:
- name: velero-plugin-for-aws
  image: velero/velero-plugin-for-aws:v1.4.0
  imagePullPolicy: IfNotPresent
  volumeMounts:
    - mountPath: /target
      name: plugins

configuration:
  provider: aws
  backupStorageLocation:
    provider: aws
    name: ## enter name of backup storage location (could be anything)
    bucket: ## enter name of bucket created in openstack
    default: true
    config:
      region: waw3-1
      s3ForcePathStyle: true
      s3Url: ## enter URL of object storage (for example "https://s3.waw3-1.cloudferro.com")
credentials:
  secretContents: ## enter the access and secret key to the EC2 bucket. This configuration will create a Kubernetes secret.
    cloud: |
      [default]
      aws_access_key_id=
      aws_secret_access_key=
  ##existingSecret: ## If you want to use an existing secret (for example one created from a sealed secret), use this variable and omit credentials.secretContents.
snapshotsEnabled: false
deployRestic: true
restic:
  podVolumePath: /var/lib/kubelet/pods
  privileged: true
schedules:
  mybackup:
    disabled: false
    schedule: "0 6,18 * * *" ## choose time, when scheduled backups will be make.
    template:
      ttl: "240h" ## choose ttl, after which the backups will be removed.
      snapshotVolumes: false

Paste the content to the configuration file “values.yaml” and save.

Example of an already configured file:

initContainers:
- name: velero-plugin-for-aws
  image: velero/velero-plugin-for-aws:v1.4.0
  imagePullPolicy: IfNotPresent
  volumeMounts:
    - mountPath: /target
      name: plugins

configuration:
  provider: aws
  backupStorageLocation:
    provider: aws
    name: velerobackuptesting
    bucket: backup
    default: true
    config:
      region: waw3-1
      s3ForcePathStyle: true
      s3Url: https://s3.waw3-1.cloudferro.com
credentials:
  secretContents: ## enter the access and secret key to the EC2 bucket. This configuration will create a Kubernetes secret.
    cloud: |
      [default]
      aws_access_key_id= c4b4ee62a18f4e0ba23f71629d2038e1x
      aws_secret_access_key= dee1581dac214d3dsa34037e826f9148
  ##existingSecret: ## If you want to use an existing secret (for example one created from a sealed secret), use this variable and omit credentials.secretContents.
snapshotsEnabled: false
deployRestic: true
restic:
  podVolumePath: /var/lib/kubelet/pods
  privileged: true
schedules:
  mybackup:
    disabled: false
    schedule: "0 * * *"
    template:
      ttl: "168h"
      snapshotVolumes: false

Installation step 3 Creating Namespace

Velero must be installed in its eponymous namespace, velero. This is the command to create it:

$ kubectl create namespace velero
namespace/velero created
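
As a quick optional check, you can verify that the namespace exists and is Active:

$ kubectl get namespace velero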

Installation step 4 Installing Velero with a Helm chart

Here are the commands to install Velero by means of a Helm chart:

$ helm repo add vmware-tanzu https://vmware-tanzu.github.io/helm-charts

The output is:

"vmware-tanzu" has been added to your repositories
$ helm install vmware-tanzu/velero --namespace velero --version 2.28 -f values.yaml --generate-name

The output is:

NAME: velero-1658856389
LAST DEPLOYED: Tue Jul 26 19:26:39 2022
NAMESPACE: velero
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
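
Because --generate-name was used, Helm created a release name with a timestamp suffix (here velero-1658856389); the commands below use that name. If you did not note it, you can list the releases in the velero namespace:

$ helm list -n velero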

Check that Velero is up and running:

$ kubectl get deployment/velero-1658856389 -n velero

The output will be similar to this:

NAME                READY   UP-TO-DATE   AVAILABLE   AGE
velero-1658856389   1/1     1            1           45s

Check that the secret has been created:

$ kubectl get secret/velero-1658856389 -n velero

The result is:

NAME                TYPE     DATA   AGE
velero-1658856389   Opaque   1      50s
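
As an additional check, you can list the pods in the velero namespace. With deployRestic: true in values.yaml, you should see the Velero server pod and one restic pod per worker node:

$ kubectl get pods -n velero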

Installation step 5 Installing Velero CLI

The final step is to install the Velero CLI, a command-line interface for working with Velero from the terminal window on your operating system.

Download the client for your operating system from https://github.com/vmware-tanzu/velero/releases, using wget. Here we download version velero-v1.9.1-linux-amd64.tar.gz, but it is recommended to download the latest version; in that case, change the name of the tar.gz file accordingly.

$ wget https://github.com/vmware-tanzu/velero/releases/download/v1.9.1/velero-v1.9.1-linux-amd64.tar.gz
--2022-08-27 12:32:58-- https://github.com/vmware-tanzu/velero/releases/download/v1.9.1/velero-v1.9.1-linux-amd64.tar.gz
Saving to: ‘velero-v1.9.1-linux-amd64.tar.gz’
2022-08-27 12:36:29 (156 KB/s) - ‘velero-v1.9.1-linux-amd64.tar.gz’ saved [27924025/27924025]

Extract the tarball:

$ tar -xvf <RELEASE-TARBALL-NAME>.tar.gz
velero-v1.9.1-linux-amd64/LICENSE
velero-v1.9.1-linux-amd64/examples/README.md
velero-v1.9.1-linux-amd64/examples/minio
velero-v1.9.1-linux-amd64/examples/minio/00-minio-deployment.yaml
velero-v1.9.1-linux-amd64/examples/nginx-app
velero-v1.9.1-linux-amd64/examples/nginx-app/README.md
velero-v1.9.1-linux-amd64/examples/nginx-app/base.yaml
velero-v1.9.1-linux-amd64/examples/nginx-app/with-pv.yaml
velero-v1.9.1-linux-amd64/velero

Move the extracted velero binary to somewhere in your $PATH (/usr/local/bin for most users):

$ cd velero-v1.9.1-linux-amd64
# sudo may be required
$ sudo mv <velero-binary-name> <$PATH-location>
# check if velero is working
$ velero version
Client:
   Version: v1.9.1
   Git commit: 6021f148c4d7721285e815a3e1af761262bff029

After these operations, you should be able to use velero commands. For help on how to use them, execute:

$ velero help
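
A useful first check with the CLI is to confirm that Velero can reach the bucket configured in values.yaml. The backup storage location should report the phase Available:

$ velero backup-location get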

Working with Velero

If you want to restore an object or a persistent volume in the same cluster, you must first delete the existing one. You can also restore backups from one cluster to another: install Velero on the new cluster and connect it to the same S3 bucket in which the backups are stored. Example commands:

Back up all API objects:

$ velero backup create <name of backup>
$ velero backup create test
Backup request "test" submitted successfully.

Back up all API objects in a given namespace:

$ velero backup create <name of backup> --include-namespaces <name of namespace>
$ velero backup create test --include-namespaces default
Backup request "test" submitted successfully.

Show backups:

$ velero backup get
NAME     STATUS   ERRORS   WARNINGS   CREATED   EXPIRES   STORAGE LOCATION   SELECTOR
backup   New      0        0          <nil>     n/a                          <none>
test     New      0        0          <nil>     n/a                          <none>
test-1   New      0        0          <nil>     n/a                          <none>
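
To inspect a single backup in more detail, or to delete one that is no longer needed (deletion also removes the backup data from object storage), use:

$ velero backup describe test
$ velero backup logs test
$ velero backup delete test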

Restore from backup:

Important

If you want to restore an object, first delete it from the cluster.

$ velero restore create <name of restore> --from-backup <name of backup>
$ velero restore create restore --from-backup test
Restore request "restore" submitted successfully.
Run `velero restore describe <name of restore>` or `velero restore logs <name of restore>` for more details.
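
You can follow the progress of a restore in the same way as a backup:

$ velero restore get
$ velero restore describe restore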

Example 1 Basics of Restoring an Application

Here is an example of how to clone the Velero examples from GitHub and check that Velero is working properly.

You can clone the examples by executing:

$ git clone https://github.com/vmware-tanzu/velero.git
Cloning into 'velero'...
Resolving deltas: 100% (27049/27049), done.
$ cd velero

Start the sample nginx app:

$ kubectl apply -f examples/nginx-app/base.yaml
namespace/nginx-example unchanged
deployment.apps/nginx-deployment unchanged
service/my-nginx unchanged

Create a backup:

$ velero backup create nginx-backup --include-namespaces nginx-example
Backup request "nginx-backup" submitted successfully.

Simulate a disaster:

$ kubectl delete namespaces nginx-example
# Wait for the namespace to be deleted
namespace "nginx-example" deleted

Restore your lost resources:

$ velero restore create --from-backup nginx-backup
Restore request "nginx-backup-20220728013338" submitted successfully.
Run `velero restore describe nginx-backup-20220728013338` or `velero restore logs nginx-backup-20220728013338` for more details.

$ velero backup get
NAME           STATUS   ERRORS   WARNINGS   CREATED   EXPIRES   STORAGE LOCATION   SELECTOR
backup         New      0        0          <nil>     n/a                          <none>
nginx-backup   New      0        0          <nil>     n/a                          <none>
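
To confirm that the application came back, check the restored namespace; the deployment and service from the backup should be recreated:

$ kubectl get all -n nginx-example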

Example 2 Restoring an Application with Persistent Volumes

Start the sample nginx app:

$ kubectl apply -f examples/nginx-app/with-pv.yaml
namespace/nginx-example created
persistentvolumeclaim/nginx-logs created
deployment.apps/nginx-deployment created
service/my-nginx created

Create a backup with PV snapshotting:

$ velero backup create nginx-backup --include-namespaces nginx-example
Backup request "nginx-backup" submitted successfully.
Run `velero backup describe nginx-backup` or `velero backup logs nginx-backup` for more details.

Simulate a disaster:

$ kubectl delete namespaces nginx-example
namespace "nginx-example" deleted

Important

Because the default reclaim policy for dynamically provisioned PVs is “Delete”, these commands should trigger your cloud provider to delete the disk that backs the PV. Deletion is asynchronous, so this may take some time.

Restore your lost resources:

$ velero restore create --from-backup nginx-backup
Restore request "nginx-backup-20220728015234" submitted successfully.
Run `velero restore describe nginx-backup-20220728015234` or `velero restore logs nginx-backup-20220728015234` for more details.
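
When the restore has finished, verify that the persistent volume claim was recreated together with the application (the claim name nginx-logs comes from the with-pv.yaml manifest applied above):

$ kubectl get pvc -n nginx-example
$ kubectl get all -n nginx-example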