How to configure Kubernetes on CF2 cloud?
Introduction
This guide describes how to install Kubernetes on CREODIAS OpenStack cloud, with support for adding/removing nodes, persistent volumes and load balancing.
This deployment method uses Terraform and Ansible playbooks from the upstream kubespray project, together with a prepared Ansible configuration that enables the required features.
Pre-installation setup
Prerequisites
- domain: cloud_02722
- project: cloud_02722 project_with_eo
- Flavor: eo1.xsmall
- Image: Ubuntu 18.04 LTS
- Python: 3.6.9
Create a new instance (vm)
See: How to create new VM in OpenStack dashboard (Horizon)?
This VM, "cloudferro-kubernetes", will be used to install the software necessary to create the Kubernetes cluster.
You can also install the software directly on your desktop instead of on this VM.
Getting code
Log in to your VM from your desktop:
ssh -i .ssh/john-doe-02 eouser@45.130.28.79
eouser@cloudferro-kubernetes:~$ sudo apt update   # it is recommended to update the system
eouser@cloudferro-kubernetes:~$ sudo apt upgrade
eouser@cloudferro-kubernetes:~$ mkdir code
eouser@cloudferro-kubernetes:~$ cd code
Clone 2 repositories:
- upstream kubespray repository (for 2.12.0 release)
- CloudFerro repository with ansible inventory pre-configured for CF2 cloud.
The inventory is then symlinked into its expected location:
eouser@cloudferro-kubernetes:~/code$ git clone --branch release-2.12 https://github.com/kubernetes-sigs/kubespray
eouser@cloudferro-kubernetes:~/code$ git clone --branch v0.1 https://gitlab.cloudferro.com/devops/kubernetes/cf2-kubespray kubespray/inventory/cf2-kube
Ansible
The Ansible inventory also allows for additional configuration of the kubespray deployment. The provided inventory ensures that the deployed cluster can use the load balancers and persistent storage provided by OpenStack, and kubespray itself offers a vast number of configuration options to tune the cluster for a specific workload.
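For example, cluster-wide options can be adjusted in the inventory's group_vars before deployment. A minimal sketch of where such tweaks live, assuming the defaults are otherwise acceptable; the variable names below follow kubespray's documented defaults:

eouser@cloudferro-kubernetes:~/code$ nano kubespray/inventory/cf2-kube/group_vars/k8s-cluster/k8s-cluster.yml
# e.g. choose the network plugin:  kube_network_plugin: calico
eouser@cloudferro-kubernetes:~/code$ nano kubespray/inventory/cf2-kube/group_vars/k8s-cluster/addons.yml
# e.g. enable the metrics server add-on:  metrics_server_enabled: true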
Before using Ansible, activate a Python virtual environment to avoid installing packages system-wide:
eouser@cloudferro-kubernetes:~/code$ python3 -m venv venv
If you get the following message:
The virtual environment was not created successfully because ensurepip is not
available. On Debian/Ubuntu systems, you need to install the python3-venv
package using the following command.

    apt-get install python3-venv

You may need to use sudo with that command. After installing the python3-venv
package, recreate your virtual environment.

Failing command: ['/home/eouser/code/venv/bin/python3', '-Im', 'ensurepip', '--upgrade', '--default-pip']
Invoke:
eouser@cloudferro-kubernetes:~/code$ sudo apt-get install python3-venv
eouser@cloudferro-kubernetes:~/code$ python3 -m venv venv
eouser@cloudferro-kubernetes:~/code$ source venv/bin/activate
(venv) eouser@cloudferro-kubernetes:~/code$ pip install -r kubespray/requirements.txt
OpenStack credentials
Kubespray-based OpenStack deployments require that the deployment host has access to OpenStack credentials - these credentials are used to configure Kubernetes' integration with OpenStack services (LBaaS and Block Storage).
Before the initial deployment, those credentials should be fetched from OpenStack.
See the details: How to install OpenStackClient (Linux)?
In our example the file is named "cloud_02722 project_with_eo-openrc.sh" and has the following content:
#!/usr/bin/env bash
# To use an OpenStack cloud you need to authenticate against the Identity
# service named keystone, which returns a **Token** and **Service Catalog**.
# The catalog contains the endpoints for all services the user/tenant has
# access to - such as Compute, Image Service, Identity, Object Storage, Block
# Storage, and Networking (code-named nova, glance, keystone, swift,
# cinder, and neutron).
#
# *NOTE*: Using the 3 *Identity API* does not necessarily mean any other
# OpenStack API is version 3. For example, your cloud provider may implement
# Image API v1.1, Block Storage API v2, and Compute API v2.0. OS_AUTH_URL is
# only for the Identity API served through keystone.
export OS_AUTH_URL=https://cf2.cloudferro.com:5000/v3
# With the addition of Keystone we have standardized on the term **project**
# as the entity that owns the resources.
export OS_PROJECT_ID=db39778a89b242f0a8ba818eaf4f3329
export OS_PROJECT_NAME="cloud_02722 project_with_eo"
export OS_USER_DOMAIN_NAME="cloud_02722"
if [ -z "$OS_USER_DOMAIN_NAME" ]; then unset OS_USER_DOMAIN_NAME; fi
export OS_PROJECT_DOMAIN_ID="56b82cc9648e4712bf3080b4cbb2816e"
if [ -z "$OS_PROJECT_DOMAIN_ID" ]; then unset OS_PROJECT_DOMAIN_ID; fi
# unset v2.0 items in case set
unset OS_TENANT_ID
unset OS_TENANT_NAME
# In addition to the owning entity (tenant), OpenStack stores the entity
# performing the action as the **user**.
export OS_USERNAME="john.doe@cloudferro.com"
# With Keystone you pass the keystone password.
echo "Please enter your OpenStack Password for project $OS_PROJECT_NAME as user $OS_USERNAME: "
read -sr OS_PASSWORD_INPUT
export OS_PASSWORD=$OS_PASSWORD_INPUT
# If your configuration has multiple regions, we set that information here.
# OS_REGION_NAME is optional and only valid in certain environments.
export OS_REGION_NAME="RegionOne"
# Don't leave a blank variable, unset it if it was empty
if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi
export OS_INTERFACE=public
export OS_IDENTITY_API_VERSION=3
Create a file: "openrc.sh" in eouser@cloudferro-kubernetes:~/code$ directory and copy the content of the "cloud_02722 project_with_eo-openrc.sh" to it.
Now install openstack client library
(venv) eouser@cloudferro-kubernetes:~/code$ pip install python-openstackclient
Then source the openrc.sh file and enter your CREODIAS password:
(venv) eouser@cloudferro-kubernetes:~/code$ source openrc.sh
Please enter your OpenStack Password for project cloud_02722 project_with_eo as user john.doe@cloudferro.com:
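You can optionally confirm that the credentials work before moving on; any read-only OpenStack command will do, for example:

(venv) eouser@cloudferro-kubernetes:~/code$ openstack token issue    # should print a token without errors
(venv) eouser@cloudferro-kubernetes:~/code$ openstack flavor list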
Initial deployment
Once pre-installation setup is finished, the installation can begin. The installation process is divided into two phases:
- terraform is used to manage the underlying infrastructure (networks, VMs)
- kubespray's ansible playbooks are used to deploy the cluster.
Install Terraform
(venv) eouser@cloudferro-kubernetes:~/code$ wget -q "https://releases.hashicorp.com/terraform/0.12.20/terraform_0.12.20_linux_amd64.zip"
(venv) eouser@cloudferro-kubernetes:~/code$ sudo apt install unzip
(venv) eouser@cloudferro-kubernetes:~/code$ unzip terraform_0.12.20_linux_amd64.zip
(venv) eouser@cloudferro-kubernetes:~/code$ sudo mv terraform /usr/bin
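A quick check that the binary is on the PATH and in the expected version:

(venv) eouser@cloudferro-kubernetes:~/code$ terraform version
Terraform v0.12.20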
Kubespray deployment assumes that Terraform will be executed from a specific path, so change to the inventory directory first and then initialize Terraform - this step fetches all Terraform dependencies required by kubespray:
(venv) eouser@cloudferro-kubernetes:~/code$ cd kubespray/inventory/cf2-kube
(venv) eouser@cloudferro-kubernetes:~/code/kubespray/inventory/cf2-kube$ terraform init contrib/terraform/openstack
You also need to generate an SSH key pair (see: Generating a SSH keypair in Linux):
(venv) eouser@cloudferro-kubernetes:~$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/eouser/.ssh/id_rsa):[ENTER]
Enter passphrase (empty for no passphrase):[ENTER]
Enter same passphrase again:[ENTER]
Your identification has been saved in /home/eouser/.ssh/id_rsa.
Your public key has been saved in /home/eouser/.ssh/id_rsa.pub.
...
Once Terraform has been initialized, it can then be used to perform the initial deployment:
(venv) eouser@cloudferro-kubernetes:~/code/kubespray/inventory/cf2-kube$ terraform apply -var-file=cluster.tfvars contrib/terraform/openstack
...
Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes[ENTER]
...
module.compute.openstack_compute_keypair_v2.k8s: Creating...
module.compute.openstack_compute_keypair_v2.k8s: Creation complete after 0s [id=kubernetes-cf2-k8s]
module.compute.openstack_compute_instance_v2.k8s_master_no_floating_ip[0]: Creating...
module.compute.openstack_compute_instance_v2.k8s_node_no_floating_ip[1]: Creating...
module.compute.openstack_compute_instance_v2.k8s_node_no_floating_ip[0]: Creating...
module.compute.openstack_compute_instance_v2.bastion[0]: Creating...
module.compute.openstack_compute_instance_v2.k8s_master_no_floating_ip[0]: Still creating... [10s elapsed]
module.compute.openstack_compute_instance_v2.k8s_node_no_floating_ip[1]: Still creating... [10s elapsed]
module.compute.openstack_compute_instance_v2.bastion[0]: Still creating... [10s elapsed]
module.compute.openstack_compute_instance_v2.k8s_node_no_floating_ip[0]: Still creating... [10s elapsed]
module.compute.openstack_compute_instance_v2.k8s_node_no_floating_ip[0]: Creation complete after 12s [id=d2a81b10-25c1-4dc9-bd71-5ab7444b1d65]
module.compute.openstack_compute_instance_v2.bastion[0]: Provisioning with 'local-exec'...
module.compute.openstack_compute_instance_v2.bastion[0] (local-exec): Executing: ["/bin/sh" "-c" "sed s/USER/eouser/ ../../contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/45.130.28.241/ > group_vars/no-floating.yml"]
module.compute.openstack_compute_instance_v2.bastion[0]: Creation complete after 13s [id=39d095d8-df53-4012-b262-00735eafc8b0]
module.compute.openstack_compute_floatingip_associate_v2.bastion[0]: Creating...
module.compute.openstack_compute_instance_v2.k8s_node_no_floating_ip[1]: Creation complete after 16s [id=972a67c3-9610-4ee0-bca9-18957e29d1cd]
module.compute.openstack_compute_instance_v2.k8s_master_no_floating_ip[0]: Still creating... [20s elapsed]
module.compute.openstack_compute_instance_v2.k8s_master_no_floating_ip[0]: Creation complete after 22s [id=7377506b-b9a3-4fbe-a6e2-ddfdf5b25bc4]
module.compute.openstack_compute_floatingip_associate_v2.bastion[0]: Still creating... [10s elapsed]
module.compute.openstack_compute_floatingip_associate_v2.bastion[0]: Creation complete after 10s [id=45.130.28.241/39d095d8-df53-4012-b262-00735eafc8b0/]

Apply complete! Resources: 6 added, 0 changed, 0 destroyed.

Outputs:

bastion_fips = [
  "45.130.28.241",
]
floating_network_id = 5a0a9ccb-69e0-4ddc-9563-b8d6ae9ef06c
k8s_master_fips = []
k8s_node_fips = []
private_subnet_id = c8146f47-742a-4eff-b616-a646a212c893
router_id = 8df39733-1b62-48a3-b02d-2c494a0e664b
Please note down the value of private_subnet_id (here: c8146f47-742a-4eff-b616-a646a212c893) - it will be needed in the Ansible configuration below.
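If you lose this value, it can be read again from the Terraform state at any time (run from the same inventory directory that holds terraform.tfstate):

(venv) eouser@cloudferro-kubernetes:~/code/kubespray/inventory/cf2-kube$ terraform output private_subnet_id
c8146f47-742a-4eff-b616-a646a212c893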
WARNING: if you get the following error:
Error: Unable to create openstack_compute_keypair_v2 kubernetes-cf2-k8s: Expected HTTP response code [200 201] when accessing [POST https://cf2.cloudferro.com:8774/v2.1/db39778a89b242f0a8ba818eaf4f3329/os-keypairs], but got 409 instead
{"conflictingRequest": {"message": "Key pair 'kubernetes-cf2-k8s' already exists.", "code": 409}}
you should delete the key pair kubernetes-cf2-k8s from your domain at https://cf2.cloudferro.com/project/key_pairs/
Such an error can happen if you have already created a Kubernetes cluster in this domain before and did not delete the previously created key pair.
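Alternatively, the key pair can be deleted with the OpenStack client installed earlier (assuming openrc.sh is still sourced in the current shell):

(venv) eouser@cloudferro-kubernetes:~/code$ openstack keypair delete kubernetes-cf2-k8s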
Now you can see the master, node and bastion VMs created.
Ansible
After the initial Terraform deployment is done, the subnet ID must first be copied into the Ansible configuration.
Edit the file with an editor of your choice (e.g. nano):
nano group_vars/all/openstack.yml
uncomment the line:
# openstack_lbaas_subnet_id: "Neutron subnet ID (not network ID) to create LBaaS VIP"
and set it to the value of private_subnet_id
In the case of the example above it would be:
openstack_lbaas_subnet_id: "c8146f47-742a-4eff-b616-a646a212c893"
You can execute kubespray's playbook to deploy the cluster:
(venv) eouser@cloudferro-kubernetes:~/code/kubespray/inventory/cf2-kube$ cd ../../
(venv) eouser@cloudferro-kubernetes:~/code/kubespray$ ansible-playbook --become -i inventory/cf2-kube/hosts cluster.yml
After a dozen or so minutes you should see:
...
PLAY RECAP ***************************************************************************************************************
cf2-k8s-bastion-1          : ok=4    changed=1    unreachable=0    failed=0
cf2-k8s-k8s-master-nf-1    : ok=656  changed=145  unreachable=0    failed=0
cf2-k8s-k8s-node-nf-1      : ok=405  changed=87   unreachable=0    failed=0
cf2-k8s-k8s-node-nf-2      : ok=404  changed=87   unreachable=0    failed=0
localhost                  : ok=1    changed=0    unreachable=0    failed=0

Thursday 20 February 2020  14:02:58 +0000 (0:00:00.201)       0:14:35.790 *****
===============================================================================
kubernetes/preinstall : Install packages requirements ------------------------------------------------------------ 42.55s
container-engine/docker : ensure docker packages are installed --------------------------------------------------- 31.44s
kubernetes/master : kubeadm | Initialize first master ------------------------------------------------------------ 24.62s
download : download_file | Download item ------------------------------------------------------------------------- 19.47s
kubernetes/kubeadm : Join to cluster ----------------------------------------------------------------------------- 18.69s
download : download_container | Download image if required -------------------------------------------------------- 9.79s
etcd : wait for etcd up ------------------------------------------------------------------------------------------- 9.17s
download : download_file | Download item -------------------------------------------------------------------------- 8.84s
download : download_container | Download image if required -------------------------------------------------------- 8.07s
download : download_container | Download image if required -------------------------------------------------------- 7.12s
etcd : Configure | Check if etcd cluster is healthy --------------------------------------------------------------- 6.73s
container-engine/docker : ensure docker-ce repository is enabled -------------------------------------------------- 6.45s
download : download | Download files / images --------------------------------------------------------------------- 6.44s
download : download_file | Download item -------------------------------------------------------------------------- 6.44s
kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS Template --------------------------------------------- 6.41s
kubernetes-apps/ansible : Kubernetes Apps | Start Resources ------------------------------------------------------- 6.31s
download : download_container | Download image if required -------------------------------------------------------- 6.06s
bootstrap-os : Fetch /etc/os-release ------------------------------------------------------------------------------ 5.89s
download : download_container | Download image if required -------------------------------------------------------- 5.70s
download : download_container | Download image if required -------------------------------------------------------- 5.54s
Once finished, two new folders, artifacts/ and credentials/, should be created in the inventory directory (~/code/kubespray/inventory/cf2-kube):
.
├── artifacts
│   └── admin.conf
├── cluster.tfvars
├── contrib -> ../../contrib/
├── credentials
│   ├── kube_user.creds
│   └── kubeadm_certificate_key.creds
├── docs
│   └── deployment.md
├── group_vars
│   ├── all
│   │   ├── all.yml
│   │   ├── azure.yml
│   │   ├── coreos.yml
│   │   ├── docker.yml
│   │   ├── oci.yml
│   │   └── openstack.yml
│   ├── etcd.yml
│   ├── k8s-cluster
│   │   ├── addons.yml
│   │   ├── k8s-cluster.yml
│   │   ├── k8s-net-calico.yml
│   │   ├── k8s-net-canal.yml
│   │   ├── k8s-net-cilium.yml
│   │   ├── k8s-net-contiv.yml
│   │   ├── k8s-net-flannel.yml
│   │   ├── k8s-net-kube-router.yml
│   │   ├── k8s-net-macvlan.yml
│   │   └── k8s-net-weave.yml
│   └── no-floating.yml
├── hosts -> ../../contrib/terraform/openstack/hosts
└── terraform.tfstate
credentials/ stores the password for the initial Kubernetes user (kube), which has to be used to authenticate to the cluster.
artifacts/ contains the kube config for the cluster.
IMPORTANT: with the default deployment there is no external access to the Kubernetes API server - access is only available via the bastion host, so kubectl should be used on the bastion host, not on the host used for the deployment.
artifacts/admin.conf must be copied to .kube/config on the bastion host:
(venv) eouser@cloudferro-kubernetes:~/code/kubespray$ ssh eouser@[bastion-ip] mkdir .kube/
(venv) eouser@cloudferro-kubernetes:~/code/kubespray$ scp inventory/cf2-kube/artifacts/admin.conf eouser@[bastion-ip]:.kube/config
Then download kubectl on the bastion host:
(venv) eouser@cloudferro-kubernetes:~/code/kubespray$ ssh eouser@[bastion-ip]
eouser@cf2-k8s-bastion-1:~$ curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
eouser@cf2-k8s-bastion-1:~$ chmod +x kubectl
eouser@cf2-k8s-bastion-1:~$ sudo mv kubectl /usr/local/bin/
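A quick sanity check of the downloaded binary (the reported version depends on the current stable release):

eouser@cf2-k8s-bastion-1:~$ kubectl version --client --short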
Finally, kubectl can be used to interact with the cluster:
eouser@cf2-k8s-bastion-1:~$ kubectl get nodes
NAME                      STATUS   ROLES    AGE   VERSION
cf2-k8s-k8s-master-nf-1   Ready    master   39m   v1.16.3
cf2-k8s-k8s-node-nf-1     Ready    <none>   38m   v1.16.3
cf2-k8s-k8s-node-nf-2     Ready    <none>   38m   v1.16.3
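To confirm the OpenStack integration mentioned in the introduction, you can expose a test deployment through a LoadBalancer service and check which storage classes were configured. A minimal sketch, run on the bastion; the nginx deployment below is only an illustrative example, not part of the deployment itself:

eouser@cf2-k8s-bastion-1:~$ kubectl create deployment nginx --image=nginx
eouser@cf2-k8s-bastion-1:~$ kubectl expose deployment nginx --port=80 --type=LoadBalancer
eouser@cf2-k8s-bastion-1:~$ kubectl get service nginx      # EXTERNAL-IP should move from <pending> to an address allocated by LBaaS
eouser@cf2-k8s-bastion-1:~$ kubectl get storageclass       # storage classes available for persistent volumes
eouser@cf2-k8s-bastion-1:~$ kubectl delete service/nginx deployment/nginx   # clean up the test resources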
Terraform
The Ansible inventory allows for limited modification of the environment, such as setting the number of masters and workers, the OpenStack flavors for specific instances, and whether to allocate a floating IP for every instance or to use a bastion host instead.
Those settings live in cluster.tfvars in the inventory directory (~/code/kubespray/inventory/cf2-kube) and should be self-explanatory:
# your Kubernetes cluster name here
cluster_name = "cf2-k8s"

# SSH key to use for access to nodes
public_key_path = "~/.ssh/id_rsa.pub"

# image to use for bastion, masters, standalone etcd instances, and nodes
image = "Ubuntu 18.04 LTS"

# user on the node (ex. core on Container Linux, ubuntu on Ubuntu, etc.)
ssh_user = "eouser"

# 0|1 bastion nodes
number_of_bastions = 1
flavor_bastion = "14" # eo1.xsmall

# standalone etcds
number_of_etcd = 0

# masters
number_of_k8s_masters = 0
number_of_k8s_masters_no_etcd = 0
number_of_k8s_masters_no_floating_ip = 1
number_of_k8s_masters_no_floating_ip_no_etcd = 0
flavor_k8s_master = "18" # eo1.large

# nodes
number_of_k8s_nodes = 0
number_of_k8s_nodes_no_floating_ip = 2
flavor_k8s_node = "18" # eo1.large

# GlusterFS
# either 0 or more than one
#number_of_gfs_nodes_no_floating_ip = 0
#gfs_volume_size_in_gb = 150

# Container Linux does not support GlusterFS
image_gfs = "Ubuntu 18.04 LTS"
# May be different from other nodes
#ssh_user_gfs = "ubuntu"
#flavor_gfs_node = "18"

# networking
network_name = "cf2-k8s-network"
external_net = "5a0a9ccb-69e0-4ddc-9563-b8d6ae9ef06c"
subnet_cidr = "172.16.0.0/24"
floatingip_pool = "external2"
bastion_allowed_remote_ips = ["0.0.0.0/0"]
dns_nameservers = ["185.48.234.234", "185.48.234.238"]
Adding nodes
In order to add new worker nodes to the cluster, modify cluster.tfvars.
For example, to increase the number of workers without a floating IP, change the variable number_of_k8s_nodes_no_floating_ip:
eouser@cloudferro-kubernetes:~/code/kubespray/inventory/cf2-kube$ nano cluster.tfvars
number_of_k8s_nodes_no_floating_ip = 5
Afterwards, re-run terraform apply:
(venv) eouser@cloudferro-kubernetes:~/code/kubespray/inventory/cf2-kube$ terraform apply -var-file=cluster.tfvars contrib/terraform/openstack
You can see the new configuration at https://cf2.cloudferro.com/project/instances/
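The same can be checked from the deployment VM with the OpenStack client (assuming openrc.sh is still sourced):

(venv) eouser@cloudferro-kubernetes:~/code/kubespray/inventory/cf2-kube$ openstack server list -c Name -c Status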
Now go to ~/code/kubespray and run kubespray's scaling playbook:
(venv) eouser@cloudferro-kubernetes:~/code/kubespray$ ansible-playbook --become -i inventory/cf2-kube/hosts scale.yml
Now SSH to the bastion:
(venv) eouser@cloudferro-kubernetes:~/code/kubespray$ ssh eouser@[bastion-ip]
and verify the nodes:
eouser@cf2-k8s-bastion-1:~$ kubectl get nodes
NAME                      STATUS   ROLES    AGE     VERSION
cf2-k8s-k8s-master-nf-1   Ready    master   3d20h   v1.16.3
cf2-k8s-k8s-node-nf-1     Ready    <none>   3d20h   v1.16.3
cf2-k8s-k8s-node-nf-2     Ready    <none>   3d20h   v1.16.3
cf2-k8s-k8s-node-nf-3     Ready    <none>   9m46s   v1.16.3
cf2-k8s-k8s-node-nf-4     Ready    <none>   9m46s   v1.16.3
cf2-k8s-k8s-node-nf-5     Ready    <none>   9m46s   v1.16.3
Removing nodes
When removing nodes from the cluster, remember that you can only remove nodes from the end of the list, not arbitrary nodes in the cluster.
First, nodes must be drained - this process reschedules all pods from the affected nodes onto other nodes in the cluster, ensuring that the nodes to be deleted are no longer running any workloads:
eouser@cf2-k8s-bastion-1:~$ kubectl drain cf2-k8s-k8s-node-nf-5
node/cf2-k8s-k8s-node-nf-5 cordoned
node/cf2-k8s-k8s-node-nf-5 drained
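If the drain stops on DaemonSet-managed pods or pods using local storage, it may need additional flags (flag names as in kubectl v1.16; newer versions renamed --delete-local-data to --delete-emptydir-data):

eouser@cf2-k8s-bastion-1:~$ kubectl drain cf2-k8s-k8s-node-nf-5 --ignore-daemonsets --delete-local-data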
Next, go back to ~/code/kubespray on the deployment VM and run kubespray's playbook for node removal:
(venv) eouser@cloudferro-kubernetes:~/code/kubespray$ ansible-playbook --become -i inventory/cf2-kube/hosts remove-node.yml --extra-vars="node=cf2-k8s-k8s-node-nf-5"
...
Are you sure you want to delete nodes state? Type 'yes' to delete nodes. [no]: yes
Finally, decrease the Terraform variable number_of_k8s_nodes_no_floating_ip:
eouser@cloudferro-kubernetes:~/code/kubespray/inventory/cf2-kube$ nano cluster.tfvars
number_of_k8s_nodes_no_floating_ip = 4
and rerun terraform:
(venv) eouser@cloudferro-kubernetes:~/code/kubespray/inventory/cf2-kube$ terraform apply -var-file=cluster.tfvars contrib/terraform/openstack
...
Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: [yes]
...
Apply complete! Resources: 0 added, 0 changed, 1 destroyed.

Outputs:

bastion_fips = [
  "45.130.28.241",
]
floating_network_id = 5a0a9ccb-69e0-4ddc-9563-b8d6ae9ef06c
k8s_master_fips = []
k8s_node_fips = []
private_subnet_id = c8146f47-742a-4eff-b616-a646a212c893
router_id = 8df39733-1b62-48a3-b02d-2c494a0e664b
Again, kubectl can be used to verify that the node has been removed from the cluster:
(venv) eouser@cloudferro-kubernetes:~/code/kubespray/inventory/cf2-kube$ ssh eouser@[bastion-ip]
eouser@cf2-k8s-bastion-1:~$ kubectl get nodes
NAME                      STATUS   ROLES    AGE     VERSION
cf2-k8s-k8s-master-nf-1   Ready    master   3d21h   v1.16.3
cf2-k8s-k8s-node-nf-1     Ready    <none>   3d21h   v1.16.3
cf2-k8s-k8s-node-nf-2     Ready    <none>   3d21h   v1.16.3
cf2-k8s-k8s-node-nf-3     Ready    <none>   39m     v1.16.3
cf2-k8s-k8s-node-nf-4     Ready    <none>   39m     v1.16.3
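When the cluster is no longer needed, the same Terraform configuration can remove all resources it created. Note that this destroys the cluster VMs, networks and the bastion host, so use it only when you are sure:

(venv) eouser@cloudferro-kubernetes:~/code/kubespray/inventory/cf2-kube$ terraform destroy -var-file=cluster.tfvars contrib/terraform/openstack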