How To Access Kubernetes Cluster Post Deployment Using Kubectl On Creodias OpenStack Magnum
This article assumes that you have access to the CloudFerro WAW3-1 infrastructure, which has Kubernetes support built in (the OpenStack Magnum module).
If your CREODIAS account has access only to the CF2 infrastructure, please contact support to get access to WAW3-1.
In this tutorial, you start with a freshly installed Kubernetes cluster on a CloudFerro OpenStack server and connect the main Kubernetes tool, kubectl, to the cloud.
What We Are Going To Cover
- How to connect kubectl to the OpenStack Magnum server
- How to access clusters with kubectl
Prerequisites
No. 1 Hosting
To follow this article, use your CREODIAS hosting account with access to the WAW3-1 server and the Horizon interface.
No. 2 Installation of kubectl
Standard ways of installing kubectl are described on the Install Tools page of the official Kubernetes site.
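For example, on a 64-bit Linux machine the installation can follow the sketch below. This is a minimal sketch of the standard procedure from the Install Tools page, assuming the linux/amd64 platform; adjust it for your own operating system.
# Download the latest stable kubectl binary for Linux amd64
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
# Install the binary into /usr/local/bin and make it executable
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
# Verify that the client works
kubectl version --client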
No. 3 A cluster already created on Magnum
- With Horizon interface: How to Create a Kubernetes Cluster Using Creodias OpenStack Magnum.
- With command line interface: How To Use Command Line Interface for Kubernetes Clusters On Creodias OpenStack Magnum.
- Or, you may want to create a new cluster called k8s-cluster just for this occasion, by using the following CLI command:
openstack coe cluster create --cluster-template k8s-stable-1.21.5 --labels eodata_access_enabled=false,floating-ip-enabled=true,master-lb-enabled=true --merge-labels --keypair sshkey --master-count 3 --node-count 2 --master-flavor eo1.large --flavor eo1.large k8s-cluster
It takes some 10-20 minutes for the new cluster to form; you can check its progress as shown in the sketch below.
In the rest of this text we shall use the cluster name k8s-cluster; be sure to use the name of your existing cluster instead.
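Cluster creation is asynchronous, so you can periodically check its status with the openstack client. The commands below are only a sketch and assume the cluster name k8s-cluster used above:
# List all clusters and their status; wait until the cluster reaches CREATE_COMPLETE
openstack coe cluster list
# Show the details of the new cluster
openstack coe cluster show k8s-cluster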
No. 4 Connect openstack client to the cloud
Prepare the openstack and magnum clients by executing Step 2 Connect OpenStack and Magnum Clients to Horizon Cloud from the article How To Install OpenStack and Magnum Clients for Command Line Interface to Creodias Horizon.
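Once the clients are connected, you can quickly confirm that authentication and access to the Magnum API work. The two commands below are a sketch of such a check; their output depends on your project:
# Confirm that authentication against the cloud works
openstack token issue
# Confirm that the Magnum (coe) API is reachable
openstack coe cluster template list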
The Plan
Follow the steps listed in Prerequisite No. 2 and install kubectl on the platform of your choice.
Use the existing Kubernetes cluster on CloudFerro or install a new one using the methods outlined in Prerequisite No. 3.
Use Step 2 in Prerequisite No. 4 to enable the connection of the openstack and magnum clients to the cloud.
You are then going to connect kubectl to the Cloud.
Step 1 Download Certificates From the Server
The openstack command to download the corresponding configuration file from Magnum has these input parameters:
openstack coe cluster config --help
usage: openstack coe cluster config [-h]
                                    [--dir <dir>] [--force] [--output-certs]
                                    [--use-certificate] [--use-keystone]
                                    <cluster>

Get Configuration for a Cluster

positional arguments:
  <cluster>          The name or UUID of cluster to update

optional arguments:
  -h, --help         show this help message and exit
  --dir <dir>        Directory to save the certificate and config files.
  --force            Overwrite files if existing.
  --output-certs     Output certificates in separate files.
  --use-certificate  Use certificate in config files.
  --use-keystone     Use Keystone token in config files.
You will use the command
openstack coe cluster config
to download the files that kubectl needs for authentication with the server. Create a new directory called k8sdir into which the files will be downloaded:
mkdir k8sdir
Then download the certificates into that folder:
openstack coe cluster config \
  --dir k8sdir \
  --force \
  --output-certs \
  k8s-cluster
There will be four files:
ls k8sdir
ca.pem cert.pem config key.pem
Parameter --output-certs produces the .pem files, which are X.509 certificates in the PEM format (originally designed so that they could be sent via email). The config file combines the .pem files and contains all the information that kubectl needs to access the cloud. Using --force overwrites the existing files (if any), so you are guaranteed to work with only the latest versions of the files from the server.
The result of running this command is a line like the one shown below:
export KUBECONFIG=/Users/duskosavic/CloudferroDocs/k8sdir/config
Copy this command, paste it into the terminal and press Enter to execute it. The environment variable KUBECONFIG will thus be set and the kubectl command will have access to the config file at all times.
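If you want the variable to be set automatically in future terminal sessions, you can append the same export line to your shell startup file. This is optional, and the path below is only an illustration; replace it with the path printed by the command above:
# Optional: persist KUBECONFIG across shell sessions (replace the path with your own)
echo 'export KUBECONFIG=$HOME/k8sdir/config' >> ~/.bashrc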
Step 2 Verify That kubectl Has Access to the Cloud
See basic data about the cluster with the following command:
kubectl get nodes -o wide
The output lists the nodes of the cluster, together with their status, roles, Kubernetes version and IP addresses.
That verifies that kubectl has proper access to the cloud.
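A couple of further checks can confirm that the cluster is healthy; the exact output will differ from cluster to cluster:
# Show the addresses of the Kubernetes control plane and cluster services
kubectl cluster-info
# List the system pods that Kubernetes itself is running
kubectl get pods --namespace kube-system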
To see the commands available in kubectl, use:
kubectl --help
The listing is too long to reproduce here; it starts with basic commands such as create, expose, run and set, and continues with more specialized groups of commands.
kubectl also has a long list of options, which are parameters that can be applied to any command. See them with:
kubectl options
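For instance, the --kubeconfig and --namespace options can be combined with any command; the sketch below assumes the k8sdir/config file downloaded in Step 1:
# Use an explicit kubeconfig file and a specific namespace with any kubectl command
kubectl --kubeconfig k8sdir/config get pods --namespace kube-system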
What To Do Next
With kubectl operational, you can:
- deploy apps on the cluster (see the short sketch after this list),
- access multiple clusters,
- create load balancers,
- access applications in the cluster using port forwarding,
- use a Service to access an application in a cluster,
- list container images in the cluster,
- use Services, Deployments and all other resources in a Kubernetes cluster.
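As a first exercise, a minimal sketch of deploying an application and exposing it through a load balancer could look like this; nginx is used here only as an example image, and the LoadBalancer Service assumes that your cluster can create cloud load balancers:
# Create a deployment running the public nginx image (example only)
kubectl create deployment nginx --image=nginx
# Expose it through a LoadBalancer Service on port 80
kubectl expose deployment nginx --port=80 --type=LoadBalancer
# Watch the Service until the cloud assigns an external IP
kubectl get services --watch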
Kubernetes dashboard is a visual alternative to kubectl. To install it, see Using Dashboard To Access Kubernetes Cluster Post Deployment On Creodias OpenStack Magnum.