Creating Additional Nodegroups in a Kubernetes Cluster on CREODIAS OpenStack Magnum
This article assumes that you have access to CloudFerro WAW3-1 infrastructure, which has Kubernetes support built-in (OpenStack Magnum module).
If your CREODIAS account has access only to CF2 infrastructure, please contact support to get access to WAW3-1.
The Benefits of Using Nodegroups
A nodegroup is a group of nodes within a Kubernetes cluster that share the same configuration and run the user's containers. A single cluster can contain several nodegroups, so instead of creating several independent clusters, you can create just one and separate the workloads into nodegroups.
A nodegroup separates the roles within the cluster and can
* limit the scope of damage if a given group is compromised,
* regulate the number of API requests originating from a certain group, and
* scope privileges to specific node types and their related workloads.
Other uses of nodegroups include:
* Testing and staging purposes.
* Growing a small cluster: if your Kubernetes environment is short on resources, you can create a minimal cluster and later add nodegroups, thus increasing the number of nodes.
* Individual lifecycle management: nodes in a group can be created, upgraded and deleted individually, without affecting the rest of the cluster.
What We Are Going To Cover
The structure of command openstack coe nodegroup
How to produce manageable output from the nodegroup set of commands
How to list what nodegroups are available in a cluster
How to show the contents of one particular nodegroup in a cluster
How to create a new nodegroup
How to delete an existing nodegroup
How to update nodegroups
How to resize a nodegroup
The benefits of using nodegroups in Kubernetes clusters
No. 1 Hosting
To follow this article, use your Creodias hosting account with WAW3-1 server and Horizon interface.
No. 2 Creating clusters with CLI
The article How To Use Command Line Interface for Kubernetes Clusters On Creodias OpenStack Magnum will introduce you to creating clusters using a command line interface.
No. 3 Connect openstack client to the cloud
Prepare openstack and magnum clients by executing Step 2 Connect OpenStack and Magnum Clients to Horizon Cloud from article How To Install OpenStack and Magnum Clients for Command Line Interface to Creodias Horizon.
No. 4 Check available quotas
Before creating additional node groups, check the state of the resources with Horizon commands Compute => Overview.
Nodegroups Subcommands
Once you create a Kubernetes cluster on OpenStack Magnum, you can use the CLI group of commands openstack coe nodegroup to add or delete nodes. With this, you can repurpose the cluster to include various images, change volume access, set up max and min values for the number of nodes and so on. In this article you get hands-on experience combining the CLI commands with the visual aspects of cluster creation in Horizon.
There are five subcommands for the nodegroup command:
coe nodegroup create
coe nodegroup delete
coe nodegroup list
coe nodegroup show
coe nodegroup update
Step 1 Access the Current State of Clusters and Their Nodegroups
Here is how to see which clusters are available in the system:
openstack coe cluster list --max-width 120
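The raw table can be wide; besides --max-width, the openstack client also accepts -c <column> and -f value to select columns and strip the table borders. As a hedged illustration (the table below is invented sample output, not from a real cloud), the same trimming can be done with awk:

```shell
# Invented sample of 'openstack coe cluster list' output; in real use you
# would pipe the live command into the awk filter below.
sample='+------+-------------+-----------------+
| uuid | name        | status          |
+------+-------------+-----------------+
| aaa1 | kubelbtrue  | CREATE_COMPLETE |
| aaa2 | k8s-cluster | CREATE_COMPLETE |
+------+-------------+-----------------+'

# Keep only the name and status columns of the data rows
result=$(printf '%s\n' "$sample" | awk -F'|' '/COMPLETE|PROGRESS|FAILED/ {
  gsub(/ /, "", $3); gsub(/ /, "", $4); print $3, $4
}')
echo "$result"
```

In practice, openstack coe cluster list -c name -c status -f value produces the same two columns directly.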
The default process of creating Kubernetes clusters on OpenStack Magnum produces two nodegroups, default-master and default-worker. Use commands
openstack coe nodegroup list kubelbtrue
openstack coe nodegroup list k8s-cluster
to list the default nodegroups for those two clusters, kubelbtrue and k8s-cluster.
The default-worker nodegroup cannot be removed or reconfigured, so plan ahead when creating the base cluster.
Step 2 How to Create a New Nodegroup
In this step you learn about the parameters available for the nodegroup create command. This is the general structure:
openstack coe nodegroup create [-h] [--docker-volume-size <docker-volume-size>] [--labels <KEY1=VALUE1,KEY2=VALUE2;KEY3=VALUE3...>] [--node-count <node-count>] [--min-nodes <min-nodes>] [--max-nodes <max-nodes>] [--role <role>] [--image <image>] [--flavor <flavor>] [--merge-labels] <cluster> <name>
You will now create a nodegroup called testing, with two members and role test, and add it to the cluster k8s-cluster:
openstack coe nodegroup create --node-count 2 --role test k8s-cluster testing
Then use the command
openstack coe nodegroup list k8s-cluster
to list the nodegroups. Run it twice: the first time, the new nodegroup will still be in creating status; the second time, after a few seconds, it will already have been created.
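In a script, you can poll until the status settles instead of listing manually. Below is a minimal sketch: get_status is a hypothetical stand-in for the real show command (noted in the comment), returning canned values here so the sketch runs without cloud access.

```shell
# get_status stands in for:
#   openstack coe nodegroup show k8s-cluster testing -c status -f value
# It returns canned values so this sketch runs anywhere.
attempt=0
get_status() {
  attempt=$((attempt + 1))
  if [ "$attempt" -lt 3 ]; then
    status=CREATE_IN_PROGRESS
  else
    status=CREATE_COMPLETE
  fi
}

get_status
while [ "$status" = "CREATE_IN_PROGRESS" ]; do
  # in real use, wait between polls: sleep 10
  get_status
done
echo "$status"
```

The same loop works for DELETE_IN_PROGRESS or UPDATE_IN_PROGRESS by adjusting the condition.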
In Horizon, use command Orchestration => Stacks to list the mechanisms that create new instances. In this case, the stack looks like this:
Still in Horizon, click on commands Container Infra => Clusters => k8s-cluster and see that there are now five nodes in total:
Step 3 Using role to Filter Nodegroups in the Cluster
It is possible to filter node groups according to the role. Here is the command to show only the test nodegroup:
openstack coe nodegroup list k8s-cluster --role test
Several node groups can share the same role name.
The roles can be used to schedule the nodes when using the kubectl command directly on the cluster.
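Magnum is expected to label each node with its nodegroup role; the label key magnum.openstack.org/role used below is an assumption to verify on your own cluster. The sketch filters an invented sample of kubectl get nodes --show-labels output by that label:

```shell
# Invented sample of 'kubectl get nodes --show-labels' output; the
# magnum.openstack.org/role label key is an assumption to verify.
sample='k8s-cluster-default-worker-0  Ready  magnum.openstack.org/role=worker
k8s-cluster-testing-node-0    Ready  magnum.openstack.org/role=test
k8s-cluster-testing-node-1    Ready  magnum.openstack.org/role=test'

# Print the names of nodes carrying the test role
result=$(printf '%s\n' "$sample" | awk '/role=test$/ {print $1}')
echo "$result"
```

Live, the equivalent would be kubectl get nodes -l magnum.openstack.org/role=test, and a pod could target those nodes with a matching nodeSelector.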
Step 4 Show Details of the Nodegroup Created
Command show presents the details of a nodegroup in various formats – json, table, shell, value or yaml. The default is table; use parameter --max-width to limit its width:
openstack coe nodegroup show --max-width 80 k8s-cluster testing
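For scripting, the shell formatter is handy because it prints key="value" lines. The sample below mimics a few fields of openstack coe nodegroup show k8s-cluster testing -f shell (values invented) and extracts one field with sed:

```shell
# Invented sample of 'openstack coe nodegroup show k8s-cluster testing -f shell'
sample='name="testing"
node_count="2"
role="test"
status="CREATE_COMPLETE"'

# Pull out a single field without evaluating the whole output
node_count=$(printf '%s\n' "$sample" | sed -n 's/^node_count="\(.*\)"$/\1/p')
echo "$node_count"
```

Alternatively, -c node_count -f value returns just that one value directly.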
Step 5 Delete the Existing Nodegroup
In this step you will try to create a nodegroup with a small footprint:
openstack coe nodegroup create --node-count 2 --role test --image cirros-0.4.0-x86_64-2 --flavor eo1.xsmall k8s-cluster cirros
After one hour, the command is cancelled and the creation fails. The resources will, however, stay frozen in the system, so here is how to delete them.
One way is to use the CLI delete subcommand, like this:
openstack coe nodegroup delete k8s-cluster cirros
The status will be changed to DELETE_IN_PROGRESS.
Another way is to find the instances of those created nodes and delete them through the Horizon interface. Find the existing instances with commands Compute => Instances and filter by Instance Name, with text k8s-cluster-cirros-. It may look like this:
and then delete them by clicking on the red button Delete Instances.
You will get a confirmation message in the upper right corner of the screen.
Either way, the instances will not be deleted immediately, but rather scheduled for deletion in the near future.
The default master and worker nodegroups cannot be deleted, but all the others can.
Step 6 Update the Existing Nodegroup
In this step you will update an existing nodegroup directly, rather than deleting it and creating a new one. The example command is:
openstack coe nodegroup update k8s-cluster testing replace min_node_count=1
Instead of replace, it is also possible to use verbs add and delete.
In the above example, you are setting the minimum number of nodes to 1. (Previously it was 0, as parameter min_node_count was not specified and its default value is 0.)
Step 7 Resize the Nodegroup
Resizing a nodegroup is similar to resizing the cluster, with the addition of parameter --nodegroup. Currently, the number of nodes in group testing is 2. Make it 1:
openstack coe cluster resize k8s-cluster --nodegroup testing 1
To see the result, apply the command
openstack coe nodegroup list --max-width 120 k8s-cluster
A nodegroup cannot be scaled outside of the min-nodes/max-nodes range set when it was created.
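The bounds check that rejects such a resize can be pictured with this minimal sketch (values illustrative; the real validation happens server-side in Magnum):

```shell
# Illustrative bounds, as if set with --min-nodes 1 --max-nodes 5 at creation
min_nodes=1
max_nodes=5

# Mimics the server-side check: a resize outside [min, max] is rejected
check_resize() {
  if [ "$1" -ge "$min_nodes" ] && [ "$1" -le "$max_nodes" ]; then
    echo ok
  else
    echo rejected
  fi
}

check_resize 3
check_resize 0
```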
Here is what the state of the networks looks like after all these changes (commands Network => Network Topology => Small in Horizon interface):