What is Velero
Velero is an official open-source project from VMware: https://velero.io. It can back up all Kubernetes API objects and persistent volumes from the cluster on which it is installed. Backed-up objects can be restored on the same cluster or on a new one.
What We Are Going to Cover
- Preparing for the installation of Velero (update and upgrade your environment, secure access to Kubernetes cluster etc.)
- Authorizing to OpenStack and installing the Swift module from OpenStack suite of modules
- Getting EC2 Client Credentials
- Installing Helm to help us automate the installation of Velero
- Adjusting “values.yaml”, the configuration file
- Creating namespaces to gain precise access to the Kubernetes cluster
- Installing Velero with a Helm chart
- Installing and deleting backups using Velero
- Example 1 Basics of Restoring an Application
- Example 2 Snapshot of Restoring an Application
Before Installing Velero
Before You Begin
First, it is highly recommended to update & upgrade your environment (example is shown for Ubuntu):
$ sudo apt update && sudo apt upgrade
It will be necessary to have access to a Kubernetes cluster, v1.16 or later, with DNS and container networking enabled. For more information on supported Kubernetes versions, see the Velero compatibility matrix.
To use Velero, you will need kubectl. Its installation is described on the Install Tools page of the official Kubernetes site.
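The v1.16 minimum can be checked mechanically. A minimal sketch, with the minor version hard-coded for illustration; on a live cluster you would read it from `kubectl version`, which requires cluster access:

```shell
#!/bin/sh
# Minor version of the target cluster; hard-coded here so the sketch
# runs anywhere. On a real cluster you would extract it from:
#   kubectl version -o json
MINOR=21

# Velero requires Kubernetes v1.16 or later.
if [ "$MINOR" -ge 16 ]; then
  RESULT="supported"
else
  RESULT="not supported"
fi
echo "Kubernetes v1.${MINOR} is ${RESULT} by Velero"
```
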
Swift
The OpenStack Object Store module, known as Swift, allows you to store and retrieve data with a simple API. It is built for scale and optimized for durability, availability, and concurrency across the entire data set. Swift is ideal for storing unstructured data that can grow without bound.
Step 1 Authorize to OpenStack and Install the Swift Module
Before the initial deployment, EC2 credentials should be fetched from OpenStack.
See the details: How to install OpenStackClient (Linux)?
Once you have authorized to OpenStack using your RC file, use the following commands to install the Swift module and create a bucket named "backup":
$ sudo apt-get install python3-pip   # install pip
$ pip install python-swiftclient     # install the OpenStack Swift client
$ swift post backup                  # where "backup" is the name of your container (bucket)
Step 2 Getting EC2 client credentials
EC2 credentials are necessary to access a private bucket (container). You can generate them yourself by invoking these commands:
$ openstack ec2 credentials create
$ openstack ec2 credentials list
Save the Access Key and Secret Key somewhere safe; they will be needed in the configuration file.
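The two keys end up in the `[default]` INI profile that Velero's AWS plugin reads. As a minimal sketch (the key values below are placeholders, not real credentials, and the file path is illustrative), this is the shape of the block you will later paste into values.yaml:

```shell
#!/bin/sh
# Placeholder credentials; substitute the values printed by
# "openstack ec2 credentials create".
ACCESS_KEY="REPLACE_WITH_ACCESS_KEY"
SECRET_KEY="REPLACE_WITH_SECRET_KEY"

# Write the INI profile in the format velero-plugin-for-aws expects.
cat > /tmp/cloud-credentials <<EOF
[default]
aws_access_key_id=${ACCESS_KEY}
aws_secret_access_key=${SECRET_KEY}
EOF

cat /tmp/cloud-credentials
```
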
Step 3 Installing Helm
Helm is a package manager for Kubernetes. First you install Helm, then use it to install Velero.
Helm has an installer script that will automatically grab the latest version and install it locally. The commands are:
$ curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
$ chmod 700 get_helm.sh
$ ./get_helm.sh
Downloading https://get.helm.sh/helm-v3.9.2-linux-amd64.tar.gz
Verifying checksum... Done.
Preparing to install helm into /usr/local/bin
[sudo] password for User:
helm installed into /usr/local/bin/helm
Check whether Helm has been properly installed using the following command:
$ helm version
version.BuildInfo{Version:"v3.9.2", GitCommit:"1addefbfe665c350f4daf868a9adc5600cc064fd", GitTreeState:"clean", GoVersion:"go1.17.12"}
Step 4 Adjust the Configuration file - "values.yaml"
The next step will help you to adjust the configuration file to your needs.
Use a text editor of your choice to create that file; on macOS or Linux you can use nano, like this:
$ nano values.yaml
Use the configuration file provided below. Please fill in the required fields, which are marked with ## comments:
Values.yaml
initContainers:
  - name: velero-plugin-for-aws
    image: velero/velero-plugin-for-aws:v1.4.0
    imagePullPolicy: IfNotPresent
    volumeMounts:
      - mountPath: /target
        name: plugins
configuration:
  provider: aws
  backupStorageLocation:
    name: ## enter name of backup storage location (could be anything)
    bucket: ## enter name of bucket created in OpenStack
    default: true
    config:
      region: RegionOne
      s3ForcePathStyle: true
      s3Url: ## enter URL of object storage (for example "https://s3.waw3-1.cloudferro.com")
credentials:
  secretContents: ## enter access and secret key to ec2 bucket. This configuration will create a Kubernetes secret.
    cloud: |
      [default]
      aws_access_key_id=
      aws_secret_access_key=
  ## existingSecret: ## If you want to use an existing secret, created from a sealed secret, use this variable and omit credentials.secretContents.
snapshotsEnabled: false
deployRestic: true
restic:
  podVolumePath: /var/lib/kubelet/pods
  privileged: true
schedules:
  mybackup:
    disabled: false
    schedule: "0 6,18 * * *" ## choose the times when scheduled backups will be made
    template:
      ttl: "240h" ## choose the TTL after which backups will be removed
      snapshotVolumes: false
Paste the content into the configuration file "values.yaml" and save.
Example of an already configured file:
initContainers:
  - name: velero-plugin-for-aws
    image: velero/velero-plugin-for-aws:v1.4.0
    imagePullPolicy: IfNotPresent
    volumeMounts:
      - mountPath: /target
        name: plugins
configuration:
  provider: aws
  backupStorageLocation:
    name: velerobackuptesting
    bucket: bucket
    default: true
    config:
      region: RegionOne
      s3ForcePathStyle: true
      s3Url: https://s3.waw3-1.cloudferro.com
credentials:
  secretContents:
    cloud: |
      [default]
      aws_access_key_id=c4b4ee62a18f4e0ba23f71629d2038e1x
      aws_secret_access_key=dee1581dac214d3dsa34037e826f9148
snapshotsEnabled: false
deployRestic: true
restic:
  podVolumePath: /var/lib/kubelet/pods
  privileged: true
schedules:
  mybackup:
    disabled: false
    schedule: "0 * * * *"
    template:
      ttl: "168h"
      snapshotVolumes: false
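Before running the installation, it is easy to overlook a field that was left empty. A minimal sanity check, sketched here against a two-line sample fragment (the fragment and the /tmp path are illustrative; point the grep at your real values.yaml):

```shell
#!/bin/sh
# Sample fragment standing in for your real values.yaml;
# s3Url is deliberately left empty here.
cat > /tmp/values-check.yaml <<'EOF'
bucket: backup
s3Url:
EOF

# List keys whose value is still empty; fill these in before "helm install".
grep -E '^[[:space:]]*[[:alnum:]]+:[[:space:]]*$' /tmp/values-check.yaml
```
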
Step 5 Creating a namespace
Namespaces are a way to organize clusters into virtual sub-clusters — they can be helpful when different teams or projects share a Kubernetes cluster. Any number of namespaces are supported within a cluster, each logically separated from others but with the ability to communicate with each other.
Before creating a namespace, verify that kubectl has access to your cloud. See basic data about the cluster with the following command:
$ kubectl get nodes -o wide
If the output shows the nodes of your cluster, kubectl has proper access to the cloud. You can then use this command to create a namespace:
$ kubectl create namespace <name of your namespace>
$ kubectl create namespace veleronamespace
namespace/veleronamespace created
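Alternatively, the same namespace can be declared as a manifest and created with kubectl apply; the file name below is illustrative:

```yaml
# namespace.yaml - declarative equivalent of "kubectl create namespace veleronamespace"
apiVersion: v1
kind: Namespace
metadata:
  name: veleronamespace
```

Apply it with `kubectl apply -f namespace.yaml`; keeping the namespace in a manifest makes it easy to recreate on a fresh cluster.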
Step 6 Installing the Helm chart
$ helm repo add vmware-tanzu https://vmware-tanzu.github.io/helm-charts
"vmware-tanzu" has been added to your repositories
$ helm install vmware-tanzu/velero --namespace <name of your namespace> --version 2.28 -f values.yaml --generate-name
$ helm install vmware-tanzu/velero --namespace veleronamespace --version 2.28 -f values.yaml --generate-name
NAME: velero-1658856389
LAST DEPLOYED: Tue Jul 26 19:26:39 2022
NAMESPACE: veleronamespace
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Check that Velero is up and running:
$ kubectl get deployment/<NAME> -n <namespace>
$ kubectl get deployment/velero-1658856389 -n veleronamespace
NAME                READY   UP-TO-DATE   AVAILABLE   AGE
velero-1658856389   1/1     1            1           45s
Check that the secret has been created:
$ kubectl get secret/<NAME> -n <namespace>
$ kubectl get secret/velero-1658856389 -n veleronamespace
NAME                TYPE     DATA   AGE
velero-1658856389   Opaque   1      50s
Step 7 Installing Velero CLI
The final step is to install the Velero CLI.
Download the client for your operating system from https://github.com/vmware-tanzu/velero/releases using wget (it is recommended to download the latest version):
$ wget https://github.com/vmware-tanzu/velero/releases/download/v1.9.1/velero-v1.9.1-linux-amd64.tar.gz
--2022-08-27 12:32:58--  https://github.com/vmware-tanzu/velero/releases/download/v1.9.1/velero-v1.9.1-linux-amd64.tar.gz
Saving to: ‘velero-v1.9.1-linux-amd64.tar.gz’
2022-08-27 12:36:29 (156 KB/s) - ‘velero-v1.9.1-linux-amd64.tar.gz’ saved [27924025/27924025]
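If the release page publishes checksums for the tarball, you can verify the download before extracting it. A self-contained sketch, using a dummy file in place of the real tarball so it runs anywhere (paths and file names are illustrative):

```shell
#!/bin/sh
# Stand-in for the downloaded release tarball.
echo "dummy tarball contents" > /tmp/velero-example.tar.gz

# In practice the .sha256 line would come from the release's checksum
# file; here we generate it ourselves for the sake of the sketch.
sha256sum /tmp/velero-example.tar.gz > /tmp/velero-example.tar.gz.sha256

# Verification fails loudly if the download was corrupted or tampered with.
sha256sum -c /tmp/velero-example.tar.gz.sha256
```
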
Extract the tarball:
$ tar -xvf <RELEASE-TARBALL-NAME>.tar.gz
velero-v1.9.1-linux-amd64/LICENSE
velero-v1.9.1-linux-amd64/examples/README.md
velero-v1.9.1-linux-amd64/examples/minio
velero-v1.9.1-linux-amd64/examples/minio/00-minio-deployment.yaml
velero-v1.9.1-linux-amd64/examples/nginx-app
velero-v1.9.1-linux-amd64/examples/nginx-app/README.md
velero-v1.9.1-linux-amd64/examples/nginx-app/base.yaml
velero-v1.9.1-linux-amd64/examples/nginx-app/with-pv.yaml
velero-v1.9.1-linux-amd64/velero
Move the extracted velero binary to somewhere in your $PATH (/usr/local/bin for most users):
$ mv velero-v1.9.1-linux-amd64/velero /usr/local/bin/
# The system might require sudo:
$ sudo mv velero-v1.9.1-linux-amd64/velero /usr/local/bin/
# Check that velero is working:
$ velero version
Client:
        Version: v1.9.1
        Git commit: 6021f148c4d7721285e815a3e1af761262bff029
After these operations, you should be able to use velero commands. To see how to use them, execute:
$ velero help
Step 8 Operating Velero
If you want to restore an object or persistent volume into the same cluster, you must first delete the existing one. You can also restore backups from one cluster to another: install Velero on the new cluster and connect it to the same S3 bucket where the backups are stored.
Example commands:
Back up all API objects:
$ velero backup create <name of backup>
$ velero backup create test
Backup request "test" submitted successfully.
Back up all API objects in the default namespace:
$ velero backup create <name of backup> --include-namespaces <name of namespace>
$ velero backup create test --include-namespaces default
Backup request "test" submitted successfully.
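Ad-hoc backup names must be unique, and a timestamped name keeps them sortable (Velero's scheduler does something similar for scheduled backups). A small sketch; the `manual-` prefix and the namespace are illustrative, and the command is printed rather than executed so the sketch needs no cluster:

```shell
#!/bin/sh
# Build a unique, sortable backup name, e.g. manual-20220728-013338.
NAME="manual-$(date +%Y%m%d-%H%M%S)"

# Printed rather than executed, so the sketch runs without a cluster.
echo "velero backup create ${NAME} --include-namespaces default"
```
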
Show backups:
$ velero backup get
NAME     STATUS   ERRORS   WARNINGS   CREATED   EXPIRES   STORAGE LOCATION   SELECTOR
backup   New      0        0          <nil>     n/a                          <none>
test     New      0        0          <nil>     n/a                          <none>
test-1   New      0        0          <nil>     n/a                          <none>
Restore from backup:
$ velero restore create <name of restore> --from-backup <name of backup>
$ velero restore create restore --from-backup test
Restore request "restore" submitted successfully.
Run `velero restore describe <name of restore>` or `velero restore logs <name of restore>` for more details.
Example 1 Basics of Restoring an Application
We have also prepared some examples which you can clone to check that Velero is working properly.
You can clone the examples by executing:
$ git clone https://github.com/vmware-tanzu/velero.git
Cloning into 'velero'...
Resolving deltas: 100% (27049/27049), done.
$ cd velero
Start the sample nginx app:
$ kubectl apply -f examples/nginx-app/base.yaml
namespace/nginx-example unchanged
deployment.apps/nginx-deployment unchanged
service/my-nginx unchanged
Create a backup:
$ velero backup create nginx-backup --include-namespaces nginx-example
Backup request "nginx-backup" submitted successfully.
Simulate a disaster:
$ kubectl delete namespaces nginx-example
# Wait for the namespace to be deleted
namespace "nginx-example" deleted
Restore your lost resources:
$ velero restore create --from-backup nginx-backup
Restore request "nginx-backup-20220728013338" submitted successfully.
Run `velero restore describe nginx-backup-20220728013338` or `velero restore logs nginx-backup-20220728013338` for more details.
$ velero backup get
NAME           STATUS   ERRORS   WARNINGS   CREATED   EXPIRES   STORAGE LOCATION   SELECTOR
backup         New      0        0          <nil>     n/a                          <none>
nginx-backup   New      0        0          <nil>     n/a                          <none>
Example 2 Snapshot of Restoring an Application
Start the sample nginx app:
$ kubectl apply -f examples/nginx-app/with-pv.yaml
namespace/nginx-example created
persistentvolumeclaim/nginx-logs created
deployment.apps/nginx-deployment created
service/my-nginx created
Create a backup with PV snapshotting:
$ velero backup create nginx-backup --include-namespaces nginx-example
Backup request "nginx-backup" submitted successfully.
Run `velero backup describe nginx-backup` or `velero backup logs nginx-backup` for more details.
Simulate a disaster:
$ kubectl delete namespaces nginx-example
namespace "nginx-example" deleted
Restore your lost resources:
$ velero restore create --from-backup nginx-backup
Restore request "nginx-backup-20220728015234" submitted successfully.
Run `velero restore describe nginx-backup-20220728015234` or `velero restore logs nginx-backup-20220728015234` for more details.
You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.