What is Velero

Velero is an official open-source project from VMware: https://velero.io. It can back up all Kubernetes API objects and persistent volumes from the cluster on which it is installed. Backed-up objects can be restored to the same cluster or to a new one.

 

How to install Velero

 

Before You Begin

First, it is highly recommended to update and upgrade your system packages (the example below is for Ubuntu):

$ sudo apt update && sudo apt upgrade

You will need access to a Kubernetes cluster, v1.16 or later, with DNS and container networking enabled. For more information on supported Kubernetes versions, see the Velero compatibility matrix.

To use Velero, you will need kubectl. Its installation is described on the Install Tools page of the official Kubernetes site.

 

Swift

The OpenStack Object Store module, known as Swift, allows you to store and retrieve data with a simple API. It is built for scale and is optimized for durability, availability, and concurrency across the entire data set. Swift is ideal for storing unstructured data that can grow without bound.

Before the initial deployment, EC2 credentials should be fetched from OpenStack.

See the details: How to install OpenStackClient (Linux)?

Once you have sourced your RC file and authenticated, use the following commands to install the Swift client and create a bucket named "backup":

Container names cannot contain spaces or special characters. A name like "backup velero" or "backup-velero" will result in an error message in the next steps, while a name such as "backupvelero" will not.

$ sudo apt-get install python3-pip # install pip
$ pip install python-swiftclient # install the Python Swift client
$ swift post backup # where "backup" is the name of your container (bucket)
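As a quick sanity check, the naming rule above can be expressed in a few lines of shell. This is a hypothetical helper for illustration only, not part of Swift or Velero; it assumes "safe" means lowercase letters and digits only:

```shell
# Hypothetical helper: accept only lowercase letters and digits,
# per the container-naming note above.
check_name() {
  case "$1" in
    *[!a-z0-9]*) echo "invalid: $1" ;;
    *)           echo "valid: $1" ;;
  esac
}

check_name "backupvelero"    # prints: valid: backupvelero
check_name "backup velero"   # prints: invalid: backup velero
check_name "backup-velero"   # prints: invalid: backup-velero
```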

 

Getting EC2 client credentials

EC2 credentials are necessary to access a private bucket (container).

You can generate EC2 credentials yourself with the following commands:

$ openstack ec2 credentials create
$ openstack ec2 credentials list

Save the Access Key and Secret Key somewhere safe; they will be needed in the configuration file.
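The two keys can be dropped straight into the INI format that the configuration file below expects. A minimal sketch, assuming your keys are in the ACCESS_KEY and SECRET_KEY variables; the file name credentials-velero is just an example:

```shell
# Placeholders: replace with the values shown by "openstack ec2 credentials list".
ACCESS_KEY="<your-access-key>"
SECRET_KEY="<your-secret-key>"

# Write the keys in the [default] profile format used by the
# credentials.secretContents.cloud field later in this guide.
cat > credentials-velero <<EOF
[default]
aws_access_key_id=${ACCESS_KEY}
aws_secret_access_key=${SECRET_KEY}
EOF
```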

 

Installing Helm

Helm has an installer script that will automatically grab the latest version and install it locally.

You can fetch that script, and then execute it locally:

$ curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
$ chmod 700 get_helm.sh
$ ./get_helm.sh
Downloading https://get.helm.sh/helm-v3.9.2-linux-amd64.tar.gz
Verifying checksum... Done.
Preparing to install helm into /usr/local/bin
[sudo] password for User:
helm installed into /usr/local/bin/helm

You can check whether Helm is installed properly using:

$ helm version
version.BuildInfo{Version:"v3.9.2", GitCommit:"1addefbfe665c350f4daf868a9adc5600cc064fd", GitTreeState:"clean", GoVersion:"go1.17.12"}

 

Configuration file - "values.yaml"

The next step will help you to adjust the configuration file to your needs.

Use a text editor of your choice to create that file; on macOS or Linux you can use nano, like this:

$ nano values.yaml

We provide a configuration file below. Please fill in the required fields (we have marked them with ##):

The name and bucket fields cannot contain spaces or special characters. A name like "backup velero" or "backup-velero" will result in an error message in the next steps, while a name such as "backupvelero" will not.

Values.yaml

initContainers:
  - name: velero-plugin-for-aws
    image: velero/velero-plugin-for-aws:v1.4.0
    imagePullPolicy: IfNotPresent
    volumeMounts:
      - mountPath: /target
        name: plugins

configuration:
  provider: aws
  backupStorageLocation:
    name: ## enter name of backup storage location (could be anything)
    bucket: ## enter name of bucket created in openstack
    default: true
    config:
      region: RegionOne
      s3ForcePathStyle: true
      s3Url: ## enter URL of object storage (for example "https://s3.waw3-1.cloudferro.com")
credentials:
  secretContents: ## enter access and secret key to ec2 bucket. This configuration will create kubernetes secret.
    cloud: |
      [default]
      aws_access_key_id=
      aws_secret_access_key=
  ##existingSecret: ## If you want to use existing secret, created from sealed secret, then use this variable and omit credentials.secretContents.
snapshotsEnabled: false
deployRestic: true
restic:
  podVolumePath: /var/lib/kubelet/pods
  privileged: true
schedules:
  mybackup:
    disabled: false
    schedule: "0 6,18 * * *" ## choose the time at which scheduled backups will run.
    template:
      ttl: "240h" ## choose the TTL after which backups will be removed.
      snapshotVolumes: false

Paste the content into "values.yaml" and save the file.
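The schedule field uses standard five-field cron syntax (minute, hour, day of month, month, day of week), so "0 6,18 * * *" runs at 06:00 and 18:00 every day. The sketch below is a hypothetical illustration of how a comma-separated hour field is matched; it is not Velero's actual parser:

```shell
# Hypothetical illustration: does the given hour appear in a
# comma-separated cron hour field such as "6,18"?
hour_matches() {
  field=$1 hour=$2
  old_ifs=$IFS; IFS=,
  for h in $field; do
    if [ "$h" = "$hour" ]; then IFS=$old_ifs; return 0; fi
  done
  IFS=$old_ifs
  return 1
}

hour_matches "6,18" 6  && echo "a backup runs at 06:00"
hour_matches "6,18" 12 || echo "no backup at 12:00"
```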

Example of already configured file:

initContainers:
  - name: velero-plugin-for-aws
    image: velero/velero-plugin-for-aws:v1.4.0
    imagePullPolicy: IfNotPresent
    volumeMounts:
      - mountPath: /target
        name: plugins

configuration:
  provider: aws
  backupStorageLocation:
    name: velerobackuptesting
    bucket: bucket
    default: true
    config:
      region: RegionOne
      s3ForcePathStyle: true
      s3Url: https://s3.waw3-1.cloudferro.com
credentials:
  secretContents: ## enter access and secret key to ec2 bucket. This configuration will create kubernetes secret.
    cloud: |
      [default]
      aws_access_key_id=c4b4ee62a18f4e0ba23f71629d2038e1x
      aws_secret_access_key=dee1581dac214d3dsa34037e826f9148
  ##existingSecret: ## If you want to use existing secret, created from sealed secret, then use this variable and omit credentials.secretContents.
snapshotsEnabled: false
deployRestic: true
restic:
  podVolumePath: /var/lib/kubelet/pods
  privileged: true
schedules:
  mybackup:
    disabled: false
    schedule: "0 * * * *"
    template:
      ttl: "168h"
      snapshotVolumes: false
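The ttl value is a duration in hours: "168h" above means each backup expires after 168 hours. A throwaway shell sketch of the conversion to days (assuming a plain hour value):

```shell
# Convert a TTL like "168h" to days (assumes the value is a whole
# number of hours, as in the examples in this guide).
ttl="168h"
hours=${ttl%h}                              # strip the trailing "h"
echo "backups kept for $((hours / 24)) days"   # prints: backups kept for 7 days
```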

 

Creating namespace

Namespaces are a way to organize clusters into virtual sub-clusters — they can be helpful when different teams or projects share a Kubernetes cluster. Any number of namespaces are supported within a cluster, each logically separated from others but with the ability to communicate with each other.

Before creating a namespace, verify that kubectl has access to your cluster. Display basic information about the cluster with the following command:

$ kubectl get nodes -o wide

If the command lists your cluster's nodes, kubectl has proper access to the cloud.

You can then create a namespace with this command:

$ kubectl create namespace <name of your namespace>
$ kubectl create namespace veleronamespace
namespace/veleronamespace created

Installing the Helm chart

$ helm repo add vmware-tanzu https://vmware-tanzu.github.io/helm-charts
"vmware-tanzu" has been added to your repositories
$ helm install vmware-tanzu/velero --namespace <name of your namespace> --version 2.28 -f values.yaml --generate-name
$ helm install vmware-tanzu/velero --namespace veleronamespace --version 2.28 -f values.yaml --generate-name
NAME: velero-1658856389
LAST DEPLOYED: Tue Jul 26 19:26:39 2022
NAMESPACE: veleronamespace
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:

Check that Velero is up and running:

$ kubectl get deployment/<NAME> -n <namespace>
$ kubectl get deployment/velero-1658856389 -n veleronamespace
NAME                READY   UP-TO-DATE   AVAILABLE   AGE
velero-1658856389   1/1     1            1           45s

Check that the secret has been created:

$ kubectl get secret/<NAME> -n <namespace>
$ kubectl get secret/velero-1658856389 -n veleronamespace
NAME                TYPE     DATA   AGE
velero-1658856389   Opaque   1      50s

 

Installing Velero CLI

The final step is to install the Velero CLI.

Download the client for your operating system from https://github.com/vmware-tanzu/velero/releases using wget (it is recommended to download the latest version):

$ wget https://github.com/vmware-tanzu/velero/releases/download/v1.9.0/velero-v1.9.0-linux-amd64.tar.gz
--2022-07-27 12:32:58-- https://github.com/vmware-tanzu/velero/releases/download/v1.9.0/velero-v1.9.0-linux-amd64.tar.gz
Saving to: ‘velero-v1.9.0-linux-amd64.tar.gz’
2022-07-27 12:36:29 (156 KB/s) - ‘velero-v1.9.0-linux-amd64.tar.gz’ saved [27924025/27924025]

Extract the tarball:

$ tar -xvf <RELEASE-TARBALL-NAME>.tar.gz
velero-v1.9.0-linux-amd64/LICENSE
velero-v1.9.0-linux-amd64/examples/README.md
velero-v1.9.0-linux-amd64/examples/minio
velero-v1.9.0-linux-amd64/examples/minio/00-minio-deployment.yaml
velero-v1.9.0-linux-amd64/examples/nginx-app
velero-v1.9.0-linux-amd64/examples/nginx-app/README.md
velero-v1.9.0-linux-amd64/examples/nginx-app/base.yaml
velero-v1.9.0-linux-amd64/examples/nginx-app/with-pv.yaml
velero-v1.9.0-linux-amd64/velero

Move the extracted velero binary to somewhere in your $PATH (/usr/local/bin for most users):

$ mv <RELEASE-TARBALL-NAME>/velero /usr/local/bin/
# The system might require sudo
$ sudo mv <RELEASE-TARBALL-NAME>/velero /usr/local/bin/
# check that velero is working
$ velero version
Client:
   Version: v1.9.0
   Git commit: 6021f148c4d7721285e815a3e1af761262bff029

After these operations, you should be able to use velero commands. To learn how to use them, execute this command:

$ velero help

 

How to operate Velero

If you want to restore an object or persistent volume into the same cluster, you must first delete the existing one. You can also restore backups from one cluster to another: install Velero on the new cluster and connect it to the same S3 bucket where the backups are stored.

Example commands:

Back up all API objects:

$ velero backup create <name of backup>
$ velero backup create test
Backup request "test" submitted successfully.

Back up all API objects in the default namespace:

$ velero backup create <name of backup> --include-namespaces <name of namespace>
$ velero backup create test --include-namespaces default
Backup request "test" submitted successfully.

Show backups:

$ velero backup get
NAME     STATUS   ERRORS   WARNINGS   CREATED   EXPIRES   STORAGE LOCATION   SELECTOR
backup   New      0        0          <nil>     n/a                          <none>
test     New      0        0          <nil>     n/a                          <none>
test-1   New      0        0          <nil>     n/a                          <none>

Restore from backup:

If you want to restore an object, first delete it from the cluster.

$ velero restore create <name of restore> --from-backup <name of backup>
$ velero restore create restore --from-backup test
Restore request "restore" submitted successfully.
Run `velero restore describe <name of restore>` or `velero restore logs <name of restore>` for more details.

 

Basic example of restoring an application

You can also use the examples shipped with the Velero repository to verify that Velero is working properly.

You can clone the examples by executing:

$ git clone https://github.com/vmware-tanzu/velero.git
Cloning into 'velero'...
Resolving deltas: 100% (27049/27049), done.
$ cd velero

Start the sample nginx app:

$ kubectl apply -f examples/nginx-app/base.yaml
namespace/nginx-example unchanged
deployment.apps/nginx-deployment unchanged
service/my-nginx unchanged

Create a backup:

$ velero backup create nginx-backup --include-namespaces nginx-example
Backup request "nginx-backup" submitted successfully.

Simulate a disaster:

$ kubectl delete namespaces nginx-example
# Wait for the namespace to be deleted
namespace "nginx-example" deleted

Restore your lost resources:

$ velero restore create --from-backup nginx-backup
Restore request "nginx-backup-20220728013338" submitted successfully.
Run `velero restore describe nginx-backup-20220728013338` or `velero restore logs nginx-backup-20220728013338` for more details.

$ velero backup get
NAME           STATUS   ERRORS   WARNINGS   CREATED   EXPIRES   STORAGE LOCATION   SELECTOR
backup         New      0        0          <nil>     n/a                          <none>
nginx-backup   New      0        0          <nil>     n/a                          <none>

 

Snapshot example of restoring an application

Start the sample nginx app:

$ kubectl apply -f examples/nginx-app/with-pv.yaml
namespace/nginx-example created
persistentvolumeclaim/nginx-logs created
deployment.apps/nginx-deployment created
service/my-nginx created

Create a backup with PV snapshotting:

$ velero backup create nginx-backup --include-namespaces nginx-example
Backup request "nginx-backup" submitted successfully.
Run `velero backup describe nginx-backup` or `velero backup logs nginx-backup` for more details.

Simulate a disaster:

$ kubectl delete namespaces nginx-example
namespace "nginx-example" deleted

Because the default reclaim policy for dynamically provisioned PVs is "Delete", these commands should trigger your cloud provider to delete the disk that backs the PV. Deletion is asynchronous, so this may take some time.

Restore your lost resources:

$ velero restore create --from-backup nginx-backup
Restore request "nginx-backup-20220728015234" submitted successfully.
Run `velero restore describe nginx-backup-20220728015234` or `velero restore logs nginx-backup-20220728015234` for more details.

 

 

You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.

https://kubernetes.io/docs/

https://helm.sh/docs/

https://velero.io/docs/