This article assumes that you have access to CloudFerro WAW3-1 infrastructure, which has Kubernetes support built-in (OpenStack Magnum module).
If your CREODIAS account has access only to CF2 infrastructure, please contact support to get access to WAW3-1.
In this tutorial, you will start with an empty Horizon screen and end up running a full Kubernetes cluster.
What We Are Going To Cover
Creating a new Kubernetes cluster using the default cluster template
Visual interpretation of created networks and Kubernetes cluster nodes
Prerequisites
No. 1 Hosting
To follow this article, you need a Creodias hosting account with access to the WAW3-1 server and the Horizon interface.
No. 2 Private and public keys
An SSH key pair created in the OpenStack dashboard. To create one, follow the article How to create key-pair in OpenStack dashboard?
The key pair created in that article is called “sshkey”. You will use it as one of the parameters for creating the Kubernetes cluster.
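If you prefer the command line, the same key pair can be registered with the OpenStack CLI. A minimal sketch, assuming the CLI is installed, your RC file is sourced, and you already have a public key at the illustrative path ~/.ssh/id_rsa.pub:

    # Upload an existing public key under the name "sshkey"
    openstack keypair create --public-key ~/.ssh/id_rsa.pub sshkey

    # Verify that the key pair is now visible in the project
    openstack keypair list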
No. 3 Project quotas and flavors limits
Article Dashboard Overview - Project quotas and flavors limits will introduce you to quotas and flavors in OpenStack.
No. 4 Account Management
Please bear in mind that all the resources you require and use will be reflected in the state of your account wallet, so please frequently check your account statistics here.
No. 5 Autohealing of Kubernetes Clusters
To learn more about autohealing of Kubernetes clusters, follow this official article What is Magnum Autohealer?.
No. 6 Accessing Kubernetes cluster with no load balancer attached to it
How to access Kubernetes cluster if there is no load balancer attached during the creation of the cluster: How To Create API Server LoadBalancer for Kubernetes Cluster on Creodias OpenStack Magnum.
Select Amongst the Three Basic Kubernetes Cluster Templates
There are three basic Kubernetes cluster templates supplied with each Creodias OpenStack system:
k8s-stable-1.21.5 for Kubernetes release 1.21
k8s-stable-1.22.5 for Kubernetes release 1.22
k8s-stable-1.23.5 for Kubernetes release 1.23
From a technical standpoint they are all equivalent, as each will produce a working Kubernetes cluster on Creodias OpenStack Magnum hosting.
To avoid confusion, we use k8s-stable-1.21.5 throughout the text and in CLI commands. Feel free to replace it with either of the other two templates to suit your needs, goals and environment.
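The same templates can also be inspected from the command line. A short sketch, assuming the OpenStack CLI with the Magnum (coe) plugin is installed and your credentials are loaded:

    # List the cluster templates supplied with the system
    openstack coe cluster template list

    # Inspect the template used throughout this article
    openstack coe cluster template show k8s-stable-1.21.5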
Step 1 Create New Cluster Screen
Click on Container Infra and then on Clusters.

There are no clusters yet, so click the + Create Cluster button on the right side of the screen.

On the left side, in blue, are the main options – screens into which you will enter data for the cluster. The three marked with asterisks, Details, Size, and Network, are mandatory; you must visit them and either enter new values or confirm the offered defaults within each screen. When all the values are entered, the Submit button in the lower right corner will become active.
Cluster Name
This is your first cluster, so name it just Kubernetes.
A cluster name cannot contain spaces: a name such as XYZ k8s Production will result in an error message, while a name such as XYZ-k8s-Production won't.
Cluster Template
In this tutorial, there are three templates to choose from. The only difference between them is the Kubernetes release they provide: 1.21, 1.22 or 1.23. Select k8s-stable-1.21.5, the exemplar template used in this article.
You immediately see how the cluster template is formed:
Availability Zone
nova is the name of the related module in OpenStack and is the only option offered here.
Keypair
See Prerequisite No. 2 to learn more about private and public keys.
For uniformity with other articles on Kubernetes here, choose sshkey from the Prerequisites section. In practice, you will choose your own key pairs, probably generated for each task or OpenStack module you have access to.
Addon Software - Enable Access to EO Data
This field is specific to OpenStack systems developed by the CloudFerro hosting company. EODATA here means Earth Observation Data and refers to data gathered by scientific satellites monitoring the Earth.
Checking this field will install a network with access to the downloaded satellite data.
If you are just learning about Kubernetes on OpenStack, leave this option unchecked. And vice versa: if you want to go into production and use satellite data, turn it on.
There is also a cluster template label called eodata-access which, if turned on, has the same effect of creating a network for connecting to EODATA, as shown in the sketch below.
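For reference, here is a hedged sketch of setting that label when creating a cluster from the CLI; the label name eodata-access is taken from the paragraph above, so verify it against your template before relying on it:

    # Create a cluster with EODATA access requested via a label
    openstack coe cluster create \
      --cluster-template k8s-stable-1.21.5 \
      --keypair sshkey \
      --labels eodata-access=true \
      --merge-labels \
      Kubernetes

The --merge-labels flag keeps the template's default labels and merges yours on top; without it, the --labels value replaces them.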
This is what the screen looks like when all the data have been entered:

In this step, you have defined the main node details, where node denotes a Kubernetes node, a part of the cluster you are forming.
Click the Next button in the lower right corner, or the Size option in the main menu on the left, to proceed to the next step of defining the Kubernetes cluster.
Step 2 Define Master and Worker Nodes
This is the critical part of defining the cluster. If the parameters you enter in this screen are wrong or claim too many resources, the creation of the cluster will fail.
This is what the window looks like before entering the data:
If any fields come with default values, such as Flavor of Master Nodes and Flavor of Worker Nodes, those values were predefined in the cluster template.
Number of Master Nodes
A Kubernetes cluster has master and worker nodes. In real applications you would want at least two master nodes and, in general, as many as your resources allow. Here, you are creating your first cluster in a new environment, so settle for just 1 master node.
Flavor of Master Nodes
Select eo1.large for master node flavor.
Number of Worker Nodes

Enter 3. This is for introductory purposes only; in real life this number can vary from fewer than ten up to a few thousand or more.
Flavor of Worker Nodes
Again, choose eo1.large.
Auto Scaling

When there is a lot of demand for the workers’ services, the Kubernetes system can scale up to use more worker nodes. In this case, you require 3 worker nodes and the options are to subtract one and add one; hence, the minimum number of worker nodes is 2 and the maximum is 4.
Here is what the screen Size looks like when all the data are entered:

In this step, you have defined the size of your cluster, selecting how many master and worker nodes there will be, as well as their respective flavors.
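For readers who prefer the command line, the same sizing can be expressed with the OpenStack CLI. A minimal sketch; the autoscaling label names (auto_scaling_enabled, min_node_count, max_node_count) are standard Magnum labels, but confirm that your template supports them:

    # 1 master and 3 workers, eo1.large flavors, autoscaling between 2 and 4
    openstack coe cluster create \
      --cluster-template k8s-stable-1.21.5 \
      --keypair sshkey \
      --master-count 1 --master-flavor eo1.large \
      --node-count 3 --flavor eo1.large \
      --labels auto_scaling_enabled=true,min_node_count=2,max_node_count=4 \
      --merge-labels \
      Kubernetes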
To proceed, click the Next button in the lower right corner or the Network option in the main menu on the left.
Step 3 Defining Network
This is the last of mandatory screens and the blue Submit button in the lower right corner is now active. (If it is not, use screen button Back to fix values in previous screens.)
Technically speaking, the blue Submit button becomes active as soon as this window appears on the screen. However, there is no guarantee that the cluster will be formed if you click it right away: it may well have wrong parameters, or some of the needed resources may not be available, and the process may stop on its own after 10 or 20 minutes of waiting.
Resources tied up by one attempt at creating a cluster are not automatically reclaimed when you attempt to create a new one. Several failed attempts in a row can therefore lead to a stalemate in which no cluster will be formed until all of the tied-up resources are freed.
Enable Load Balancer for Master Nodes
The check box to enable a load balancer for master nodes has two completely different meanings depending on whether it is checked.
Non-checked state
If you accept the default unchecked state, no load balancer will be created. Use that value if in earlier screens you specified exactly one master node – using a load balancer on only one node does not make much sense anyway.
For example, you will specify only one master node when you are learning the ropes and experimenting with cluster creation, just to see how it works.
Danger
If this field is turned off and there is only one master node, the cluster will be accessible ONLY from its internal network. In that case, you will have to access the cluster using SSH. Please see Prerequisite No. 6 to learn how to authenticate and use kubectl in this particular case.
Checked state
If checked, the load balancer for master nodes will be created. If you specified two or more master nodes in previous screens, then this field must be checked.
Warning
Be sure to check this field, as that will yield higher chances of successfully creating the Kubernetes cluster.
Two or more master nodes enable high availability of the system – if one of the master nodes stops working, the cluster as a whole won’t fail.
In a production situation you will want several master nodes, and this field must then be checked.
Create New Network
This box comes turned on, meaning that the system will create a network just for this cluster. Since Kubernetes clusters need subnets for internal communication, a related subnet will be created first and then used further down the road.
Turn this field off if you already have one or more networks that you expressly want to use for the cluster. After turning it off, two new fields appear at your disposal:
Both fields have an asterisk behind them, meaning you must specify a concrete value in each of the two fields.
Note
It is strongly recommended to let the system create the network automatically when creating a new cluster; this works in almost all circumstances.
Option Use an Existing Network
Clicking on the field shows the existing networks. Their number will vary depending on your previous history of cluster and network creation:
Cluster API
There are two options for network access control:
If you choose the network to be accessible on the public internet, you will get a warning:
You can still select it, but the warning prompts you to be more cautious. Kubernetes clusters are typically meant to run on the server behind the main web interface, not to serve as the web interface itself.
Ingress Controller
In Kubernetes parlance, ingress is the incoming traffic, i.e. traffic from the web for the most part. The opposite, traffic from the cluster out to the internet, is called egress.
Ingress controller distributes traffic from the outside to a particular node in the cluster.
You can select to create a system without ingress controller or you can use the one controller that is built into the system, the NGINX ingress controller.
The main difference between a load balancer and an ingress controller is this:
A LoadBalancer type of Service maps a single app onto an IP address.
An Ingress resource, such as the offered NGINX Ingress Controller, maps several apps onto one IP address (see the sketch below).
The NGINX Ingress Controller is not a load balancer, but it can help with load balancing the cluster.
NGINX ingress will run as 3 replicas on 3 separate nodes. This will override the minimum number of nodes in the Magnum autoscaler.
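To make the difference concrete, here is a minimal sketch of an Ingress resource that maps two hypothetical apps onto one IP address through the NGINX ingress controller; the host and service names (app-one, app-two) are illustrative only:

    # Route two apps through a single NGINX ingress IP (illustrative names)
    kubectl apply -f - <<EOF
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: two-apps
    spec:
      ingressClassName: nginx
      rules:
      - host: one.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-one
                port:
                  number: 80
      - host: two.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-two
                port:
                  number: 80
    EOF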
Proxy Protocol Support For Load Balancer Services With Ingress Hostname
If you decide to use NGINX as your ingress controller, you must activate this support by setting the label octavia_enable_ingress_hostname to true:
octavia_enable_ingress_hostname=true
You can set this value either through the Advanced window of the Horizon Create New Cluster command or through CLI commands; see the example below.
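A hedged CLI equivalent, using the same create command as in the earlier sketches:

    # Enable ingress-hostname support for Octavia load balancer services
    openstack coe cluster create \
      --cluster-template k8s-stable-1.21.5 \
      --keypair sshkey \
      --labels octavia_enable_ingress_hostname=true \
      --merge-labels \
      Kubernetes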
Options Management and Advanced are not mandatory; if you just want to create a Kubernetes cluster, you can click Submit right away after entering the data in the Size window.
Option Management
Please see Prerequisite No. 5 for an introduction to autohealing in OpenStack Magnum.
There is just one option in this window, Auto Healing, with its field Automatically Repair Unhealthy Nodes.
A node is the basic unit of a Kubernetes cluster, and the Kubernetes system software will automatically poll the state of each node; if a node is not ready or not available, the system will replace the unhealthy node with a healthy one – provided, of course, that this field is checked.
If this is your first time forming Kubernetes clusters, auto healing may not be of interest to you. In production, however, auto healing should always be on; otherwise, you will be missing a big part of standard Kubernetes functionality.
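You can request auto healing from the CLI with the standard Magnum label auto_healing_enabled=true (passed via --labels as in the earlier sketches) and later check the health that the auto healer acts on. A short sketch, assuming a Magnum release recent enough to expose the health fields:

    # Inspect the cluster health fields that drive auto healing
    openstack coe cluster show Kubernetes -f value -c health_status
    openstack coe cluster show Kubernetes -f value -c health_status_reason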
Option Advanced

Option Advanced allows you to enter so-called labels, which are named parameters for the Kubernetes system.
Normally, you don’t have to enter anything here. But, if you have previously decided to use NGINX as the ingress controller, then you must add the following code into the Advanced window:
octavia_enable_ingress_hostname=true
Warning
Labels can change how cluster creation is performed. There is a set of labels, called the Template and Workflow Labels, that the system sets up by default.
If this check box is left as is, that is, unchecked, the default labels will be used unchanged. That guarantees that the cluster will be formed with all of the essential parameters in order. Even if you add your own labels, as shown in the image above, everything will still function.
If you turn on the field I do want to override Template and Workflow Labels and use any of the Template and Workflow Labels by name, they will be set up the way you specified. Use this option very rarely, if at all, and only if you are sure of what you are doing.
Step 4 Forming of the Cluster
OpenStack will start creating the Kubernetes cluster for you. A message with a green background will appear in the upper right corner of the window, stating that creation of the cluster has started.
Cluster generation usually takes 10 to 15 minutes. It will be automatically abandoned if it takes longer than 60 minutes.
If there is any problem with creation of the cluster, the system will signal it in various ways. You may see a message in the upper right corner, with a red background, like this:
Just repeat the process; in most cases you will then proceed to the following screen:

Click on the name of the cluster, Kubernetes, and see what it will look like if everything went well.

There is another way of monitoring the process. Click Network in the main menu and then Network Topology. You will see a real-time graphical representation of the network. As soon as one of the cluster elements is added, it will appear on screen.
Another way to watch is to show labels:

The orange area represents the newly formed Kubernetes cluster. There are four nodes: the one whose name ends with 0 is the master, the other three are worker nodes. In this particular image, the cluster has an external router, which is connected to the external network.
If you were going to production, you would most likely remove the networks that do not support the cluster.
Here is the state of the instances after the cluster is created:

Node names start with kubernetes because that is the name of the cluster in lower case.
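You can also follow the creation from the command line:

    # CREATE_IN_PROGRESS should eventually change to CREATE_COMPLETE
    openstack coe cluster list
    openstack coe cluster show Kubernetes -f value -c status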
Step 5 Accessing the Results
Here is what OpenStack Magnum created for you as the result of filling in data in those three screens:
A new network called Kubernetes, complete with subnet, ready to connect further.
New instances – virtual machines that serve as nodes.
A new external router.
New security groups, and of course
A fully functioning Kubernetes cluster on top of all these other elements.

You selected the number of worker nodes to be 3 but, after a while, the cluster auto-scaled itself down to 2 – there is no traffic to the cluster at all.
What To Do Next
You now have a fully operational Kubernetes cluster. You can install an app into it, say, using ready-made Docker images, then send some traffic to it and so on. You can also use the Kubernetes dashboard and watch the state of the cluster online.
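As a concrete first step, here is a hedged sketch of fetching the kubeconfig through Magnum and deploying a ready-made image; the directory and the deployment name hello are illustrative:

    # Fetch credentials for kubectl; the command writes a config file
    mkdir -p ~/kubernetes
    openstack coe cluster config Kubernetes --dir ~/kubernetes
    export KUBECONFIG=~/kubernetes/config

    # Confirm the nodes are up, then run a stock nginx image
    kubectl get nodes
    kubectl create deployment hello --image=nginx
    kubectl expose deployment hello --type=LoadBalancer --port=80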
Another route is to continue using OpenStack Magnum the way it is intended to be used, that is, to create your own cluster template and then use it at will. Cluster templates provide additional flexibility, such as
automatic creation of related load balancers,
automatic creation of floating IP addresses,
and more.
Article How to Create a Kubernetes Cluster Template Using Creodias OpenStack Magnum will provide more information on creating Kubernetes clusters with ready made templates.
Article How To Use Command Line Interface for Kubernetes Clusters On Creodias OpenStack Magnum shows how to use command line interface to create Kubernetes clusters.
Article Using Dashboard To Access Kubernetes Cluster Post Deployment On Creodias OpenStack Magnum shows, in a visual way, what transpires within a Kubernetes cluster.