How to Create a Kubernetes Cluster Template Using Creodias OpenStack Magnum
This article assumes that you have access to CloudFerro WAW3-1 infrastructure, which has Kubernetes support built-in (OpenStack Magnum module).
If your CREODIAS account has access only to CF2 infrastructure, please contact support to get access to WAW3-1.
OpenStack Magnum on CloudFerro hosting comes with three basic cluster templates, and using one of them is the preferred way of creating new clusters. In this article you will learn how to create your own cluster template; however, restrict this to situations which cannot be covered by any of the three basic cluster templates.
Pros and Cons Of Creating a New Custom Cluster Template
The suggested route is to
first create a cluster with one of the default templates,
fine-tune it to your liking, then
capture the parameters in the form of a user-created cluster template, after which you still have to
test the newly created cluster template by creating a new cluster and verifying that it is working properly.
The pros might be:
conserve what is working for you and avoid future testing,
capture hardware details that are not in your usual setup, or
when onboarding a new member of the team, present them with a custom cluster template, thus eliminating possibilities of errors and avoiding additional testing.
The cons might be:
additional testing and verification of the clusters generated with a custom template,
the combination of parameters you froze when generating a cluster template might not work in the future,
the tradeoff between the time required to produce a quality custom template and the number of times you will use it – don’t create a custom template to use it only once or twice; you would be better served by using a default template directly.
You should document why you created the new template, in which situations to use it, who is going to use it and so on.
What We Are Going To Cover
Creating a new Kubernetes cluster template
Using labels to change the behaviour of Magnum
Creating a new Kubernetes cluster using that template
How to create a cluster template using the command line interface (CLI)
Prerequisites
No. 1 Hosting
To follow this article, use your Creodias hosting account with WAW3-1 server and Horizon interface.
No. 2 Private and public keys
An SSH key pair created in the OpenStack dashboard. To create one, follow the article How to create key-pair in OpenStack dashboard?. You will then have a key pair called “sshkey”, which you can use for this tutorial as well.
No. 3 Project quotas and flavors limits
The article Dashboard Overview - Project quotas and flavors limits gives basic definitions of quotas and flavors in OpenStack. (Briefly, a quota is how many of something, say instances, you can have in total, while a flavor is how large one instance will be.)
No. 4 Networks for Kubernetes Cluster
If you want to use an existing network and base a Kubernetes cluster on it, the article How to create a network with router in Horizon Dashboard? will show you how to do it.
No. 5 Create New Kubernetes Cluster From a Cluster Template
Once you define a new custom template in this tutorial, you can generate a new Kubernetes cluster in the same way as is described in the article How to Create a Kubernetes Cluster Using Creodias OpenStack Magnum.
No. 6 Command Line Interface to OpenStack Server
See the article How To Install OpenStack and Magnum Clients for Command Line Interface to Creodias New Horizon to learn how to gain access to the command line interface to OpenStack and Magnum.
No. 7 Account Management
Please bear in mind that all the resources you require and use will reflect on the state of your account wallet, so please check your account statistics often at https://portal.creodias.eu/clientarea.php#sso.
No. 8 Autohealing of Kubernetes Clusters
To learn more about autohealing of Kubernetes clusters, follow this official article What is Magnum Autohealer?.
Select Amongst the Three Basic Kubernetes Cluster Templates
There are three basic Kubernetes cluster templates supplied with each Creodias OpenStack system:
- k8s-stable-1.21.5 for Kubernetes release 1.21
- k8s-stable-1.22.5 for Kubernetes release 1.22
- k8s-stable-1.23.5 for Kubernetes release 1.23
From a technical standpoint, they are all equivalent, as each will produce a working Kubernetes cluster on Creodias OpenStack Magnum hosting.
To avoid confusion, we use k8s-stable-1.21.5 throughout the text and in CLI commands. Feel free to replace it with either of the other two templates to suit your needs, goals and environment.
Cluster Templates
A cluster template is a set of parameters which governs the creation of Kubernetes clusters in an OpenStack environment.
The main menu command Container Infra has two subcommands, Clusters and Cluster Templates. Clicking on the latter shows all cluster templates in the system:

The abbreviation COE stands for Container Orchestration Engine – the software that organizes and controls the running of containers.
Template clouduser is a user-created template, built by tweaking or adding new elements to one of the default templates. For instance, template clouduser contains the keypair sshkey, which is not initially present in any of the default templates.
To see parameters for each cluster template, click on its name in blue, in the Name column. Here is what the default template looks like:

It is a public fedora-coreos image, using calico for its network driver, and so on. You can change most of these parameters while creating your own cluster template.
Step 1 Create Cluster Template – option Info
Clicking on Container Infra and then on Cluster Templates shows the existing templates:

k8s-stable-1.21.5 is one of the default templates, while clouduser is a user-made template. Once you finish your template, it will show up on this screen as well.
Click on + Create Cluster Template to start the process:

An asterisk near the name of the option denotes that it is mandatory to visit that screen or field.
Cluster Template Name
Make it reflect the goal of the setup. For this occasion, let us pretend we are making a cluster for selling books online and call it Bookshelf.
Container Orchestration Engine

Select Kubernetes because it is by far the most popular container platform today.
Public
Whether the template will be accessible to other users under Magnum. The default is not public, which means that only the admin, the owner, or users in the same tenancy will have access to it.
Hidden
If checked, the template will not appear in the list of templates, but it can still be referenced.
Enable Registry
This is about the Docker registry. The default is to use the public Docker registry; if this field is turned on, Magnum will set up its own local registry in the cluster instead.
Disable TLS
TLS stands for Transport Layer Security. It is enabled by default, which raises the level of security in the system (a key and a signed certificate are needed to access the Kubernetes cluster). During the development phase, it can be turned off.
Here is what the Info window looks like with all the data entered:

Click on the lower right button Next, or on the option Node Spec in the left menu, to proceed to the next step of defining a Kubernetes cluster template.
Step 2 Node Specification
In this step you will set up the number of nodes as well as their flavors.

Click on the question mark with the black background to see the explanations and definitions of the fields:

If you choose Kubernetes in the Info window, the option to specify the Docker image size will not be present in the Node Spec window. It may be available if you choose a COE other than Kubernetes.
Image
The operating system that the nodes in the cluster will run on.

There are three options – versions and subversions of the fedora-coreos operating system. Choose version 35.
Keypair
The SSH key pair you created in Prerequisite No. 2. The name is sshkey.
Flavor
The size of worker nodes (previously known as minions). There are 23 options to choose from. In the menu, the options are:

With an OpenStack CLI command
openstack flavor list
you get the definition of the resources that will be available with each flavor:

Please see Prerequisite No. 6 for using command line interface with your cloud server.
Select eo1.large for the worker node flavor. That is a comfortable size of 4 virtual CPUs, 8 GB of RAM and 32 GB of disk storage.
Not every combination of parameters will result in a working Kubernetes cluster. Fedora images require at least 10 GB of disk, while a “small” flavor such as eo1.xsmall has only 8 GB of storage and will therefore result in an error message.
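The sizing rule above can be captured in a small pre-flight check. Here is a minimal sketch in Python, assuming the disk sizes quoted in this article (eo1.xsmall: 8 GB, eo1.large: 32 GB); consult your own cloud's `openstack flavor list` for the actual figures:

```python
# Minimum root disk required by Fedora-based node images, per this article.
MIN_DISK_GB = 10

# Disk sizes as quoted in the text; a hypothetical subset of `openstack flavor list`.
FLAVOR_DISK_GB = {
    "eo1.xsmall": 8,
    "eo1.large": 32,
}

def flavor_ok_for_nodes(flavor: str) -> bool:
    """Return True if the flavor's disk can hold a Fedora node image."""
    return FLAVOR_DISK_GB[flavor] >= MIN_DISK_GB
```

With these figures, eo1.xsmall fails the check while eo1.large passes, matching the error described above.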
Master Flavor
The same applies to master flavor – the size of Master node(s). To be on the safe side, choose eo1.large again.
Flavors and the number of VCPUs are not the only factors that decide whether the cluster will be created successfully. Each cluster claims its own resources, which are subtracted from the total quota of resources. In particular, Magnum will reserve 11 security group rules for the worker nodes and 18 rules for each master node. Here is what the consumption of security rules and groups looks like after the Kubernetes cluster from Prerequisite No. 3 was formed:

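The reservation figures quoted above translate into simple arithmetic you can run against your security-group-rule quota before creating a cluster. This sketch reads the article's numbers as 11 rules for the worker node group plus 18 per master node – an interpretation worth verifying on your own project:

```python
WORKER_GROUP_RULES = 11   # reserved for worker nodes (figure from this article)
MASTER_RULES_EACH = 18    # reserved per master node (figure from this article)

def security_rules_needed(master_count: int) -> int:
    """Estimate how many security group rules a new cluster will claim."""
    return WORKER_GROUP_RULES + MASTER_RULES_EACH * master_count
```

A single-master cluster would thus claim about 29 rules; compare that estimate with the quota shown in the Horizon dashboard.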
Volume Driver
The value of this field depends on the Container Orchestration Engine selected in step Info. If you selected Kubernetes as your COE, the volume driver offered will be Cinder. In all other cases, the driver offered will be Rexray.
With Cinder for Kubernetes, the end user may use storage without knowing where the actual hardware for storage is, or on what type of device.
Insecure Registry
The URL of an alternative Docker registry from which you intend to download Docker images. If you do not have such a registry, leave this field empty.
This is what the Node Spec screen looks like with all the values entered in:

In this step, you have defined flavors and Docker parameters for nodes in a Kubernetes cluster. The next step is to define the network to connect the nodes into one whole.
Step 3 Defining Network
In the Network window, you define properties of the underlying network for the cluster. Here is what it looks like in the beginning:

Clicking on the question mark in the upper right corner shows default values for certain fields in this window:

Network Driver

For Kubernetes, choose between Flannel and Calico. The default cluster template uses Calico, so that is the recommendation.
The actual choice of drivers shown depends on the selected Container Orchestration Engine. For Kubernetes, the choice is between Flannel and Calico; for Docker, between Docker and Flannel; and for Mesos and DC/OS, the only choice is the Docker network driver.
HTTP Proxy, HTTPS Proxy, No Proxy
Leave these fields empty unless you already have such proxies set up in the system.
External Network ID
A mandatory field. The option external is always present, but there may be others, as in the following image:
The Manila network will be present if your version of CloudFerro OpenStack supports shared file systems.
Fixed Network
If you select a network here, it will be used to create the cluster later on. If not, you will be able to define the network later, in the process of creating a cluster from the cluster template.

The default value of this field is 10.0.0.0/24.
Fixed Subnet
The network selected in the Fixed Network field above must have a subnet defined. This field is empty unless you define a concrete network in the Fixed Network field, like this:

DNS
The nameserver to use for the cluster template. The default is 8.8.8.8; Google’s secondary nameserver, 8.8.4.4, is often used as well.
Master LB
Whether to attach a load balancer to the master node(s). The default is False.

Floating IP
Whether to create a floating IP for the cluster. The default is to create it automatically.
Here is the Network screen filled in for this example:

Step 4 Define Labels
Labels are variables that can define or redefine the actions of Magnum. Here is a typical list:

OpenStack Magnum will automatically use a set of predefined labels; however, it does not set their default values in any way.
Here is a list of labels used by template k8s-stable-1.21.5:

If you want your new template to have the same labels as the default template, enter the following into the Labels field:
auto_healing_controller=magnum-auto-healer,auto_scaling_enabled=true,autoscaler_tag=v1.22.0,cinder_csi_plugin_tag=v1.21.1-1.0.0,cloud_provider_enabled=true,cloud_provider_tag=v1.21.0,container_infra_prefix=registry-public.cloudferro.com/magnum/,eodata_access_enabled=false,etcd_volume_size=8,etcd_volume_type=ssd,hyperkube_prefix=registry-public.cloudferro.com/magnum/,k8s_keystone_auth_tag=v1.21.0,kube_tag=v1.21.5-rancher1,magnum_auto_healer_tag=v1.21.0,master_lb_floating_ip_enabled=true
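Since the Labels field takes one long comma-separated string, it can be convenient to assemble it programmatically. Here is a sketch using a subset of the k8s-stable-1.21.5 label values listed above; extend the dictionary with the remaining entries as needed:

```python
# A subset of the k8s-stable-1.21.5 labels shown above.
labels = {
    "auto_healing_controller": "magnum-auto-healer",
    "auto_scaling_enabled": "true",
    "kube_tag": "v1.21.5-rancher1",
    "master_lb_floating_ip_enabled": "true",
}

# Magnum expects KEY1=VALUE1,KEY2=VALUE2,... with no spaces.
labels_field = ",".join(f"{key}={value}" for key, value in labels.items())
```

The resulting string can be pasted into the Labels field, or passed to the CLI via the --labels option described later in this article.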
Your new cluster template is going to have the following parameters:

Click on button Submit in the lower right corner to create the cluster template.
Additional Labels
There are several labels that can be used in certain circumstances:
eodata-access
Label eodata-access has a boolean value and enables or disables the eodata network search.
With this label turned on, the tags used for the search are “eodata-access” and “eodata_access”. This guarantees that the eodata network, if available, will be found in the system.
octavia_enable_ingress_hostname
Proxies and load balancers can use the proxy protocol to pass the client’s information (the IP address and the port) to the next proxy or load balancer. If you activated the NGINX ingress controller, set the label
octavia_enable_ingress_hostname=true
to enable its proxy protocol.
The default value of this boolean label is “false”.
Labels to properly activate auto healing
Prerequisite No. 8 will give you a formal introduction to the notion of Kubernetes autohealing, as implemented in OpenStack Magnum.
The proper way to activate auto healing is to use the following label:
auto_healing_enabled=True
Step 5 Creating the Cluster Template
In most cases, you will instantly see a new template, called Bookshelf, in the list:

How to Create a Cluster Template Using the CLI
Please see Prerequisite No. 6 for an introduction to enabling and using the command line interface to OpenStack Magnum cloud.
Use the help option to see the parameters of the cluster template creation command:
openstack coe cluster template create -h
The output is:
openstack coe cluster template create [-h]
[-f {json,shell,table,value,yaml}]
[-c COLUMN]
[--noindent]
[--prefix PREFIX]
[--max-width <integer>]
[--fit-width]
[--print-empty]
--coe <coe>
--image <image>
--external-network <external-network>
[--keypair <keypair>]
[--fixed-network <fixed-network>]
[--fixed-subnet <fixed-subnet>]
[--network-driver <network-driver>]
[--volume-driver <volume-driver>]
[--dns-nameserver <dns-nameserver>]
[--flavor <flavor>]
[--master-flavor <master-flavor>]
[--docker-volume-size <docker-volume-size>]
[--docker-storage-driver <docker-storage-driver>]
[--http-proxy <http-proxy>]
[--https-proxy <https-proxy>]
[--no-proxy <no-proxy>]
[--labels <KEY1=VALUE1,KEY2=VALUE2;KEY3=VALUE3...>]
[--tls-disabled]
[--public]
[--registry-enabled]
[--server-type <server-type>]
[--master-lb-enabled]
[--floating-ip-enabled]
[--floating-ip-disabled]
<name>
Here is an example command to create a new cluster template:
openstack coe cluster template create kubecluster \
  --image "fedora-coreos-34.20210904.3.0" \
  --external-network external \
  --master-flavor eo1.large \
  --flavor eo1.large \
  --docker-volume-size 50 \
  --network-driver calico \
  --docker-storage-driver overlay2 \
  --master-lb-enabled \
  --volume-driver cinder \
  --labels boot_volume_type=,boot_volume_size=50,kube_tag=v1.18.2,availability_zone=nova \
  --coe kubernetes -f value -c uuid
In terminal window it looks like this:

You have successfully created a new cluster template called kubecluster.
Output Parameters for the Cluster Template Command
Parameters -f and -c define what the output of the command will look like in the terminal. In the image above, -f value requests that bare values be shown instead of a table, and -c uuid limits the output to the uuid column.
To format the output, the options for -f are:
json – JSON format,
shell – as in a Linux shell,
table – as a table, which can be very unwieldy,
value – bare values,
yaml – YAML format.
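The json formatter is the most convenient one for scripting. Here is a sketch of consuming it; sample_output is a hypothetical, truncated fragment of what `openstack coe cluster template show <name> -f json` might print, as the real output contains many more fields:

```python
import json

# Hypothetical, truncated example of `-f json` output.
sample_output = '{"name": "kubecluster", "coe": "kubernetes", "network_driver": "calico"}'

# Parse the JSON text into a dictionary for programmatic use.
template = json.loads(sample_output)
```

template["coe"] is then directly usable in a script, with no need to scrape the table-formatted output.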
What To Do Next
Once a new template is finished, you can follow the article How to Create a Kubernetes Cluster Using Creodias OpenStack Magnum and use it to create a new Kubernetes cluster.