Load Balancer as a Service - user documentation
Load Balancer is a service that listens for requests and forwards them to servers within a pool. There are many reasons to take advantage of this functionality, but the most common are building a high-availability architecture and needing more performance than a single server can provide.
The following figure presents the general concept of the LBaaS service and its main components.

The load balancer occupies an OpenStack Neutron network port and has an IP address assigned from a subnet. A load balancer can listen for requests on multiple ports; each of those ports is specified by a listener. A pool holds a list of members that serve content through the load balancer. Members are the servers that serve traffic behind the load balancer; each member is specified by the IP address and port it uses to serve traffic. Members may go offline from time to time, and health monitors divert traffic away from members that are not responding properly. Health monitors are associated with pools.
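These components map directly onto resources in the Octavia CLI. For example, once a load balancer exists (lb1 is a hypothetical name here), the whole tree of listeners, pools, members and health monitors attached to it can be inspected with:
openstack loadbalancer status show lb1
which returns the component tree as JSON, including the current operating status of every member.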
Basic Load Balancer configuration
You can create a new Load Balancer using one of two available methods:
- via the OpenStack Horizon dashboard
- using the Octavia CLI extensions to the OpenStack Client
Creating Load Balancer using OpenStack Horizon dashboard
To configure a new Load Balancer using the OpenStack Horizon dashboard, log in to OpenStack, choose the right project if it differs from the default, and go to:
Project -> Network -> Load Balancers. Click Create Load Balancer, fill in the required fields in the Load Balancer Details tab, then go to the next tab.
Now provide the details for the listener (the listening endpoint of a load-balanced service). Choose the protocol from the list, as well as the other protocol-specific details that are attributes of the listener.
Provide details for the pool as well as the pool members (the pool is the object representing the grouping of members to which the listener forwards client requests).
Next, define the details for the health monitor, an object that defines a check method for each member of the pool.
Now you can click the “Create Load Balancer” button to create a load balancer according to the previously entered configuration.
Finally, you can associate a Floating IP with the Load Balancer if such an association is needed in your use case.
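Whichever creation method you use, the load balancer only starts serving traffic once its provisioning status becomes ACTIVE. A quick check from the CLI (a sketch; my-lb stands for the name you entered in the Load Balancer Details tab):
openstack loadbalancer show my-lb -c provisioning_status -c operating_status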
Creating a basic HTTP Load Balancer using the CLI
The following example illustrates how to prepare a basic HTTP Load Balancer configuration using the OpenStack CLI.
Description of the example scenario:
- To ensure a high level of reliability and performance, we need three back-end servers with the IP addresses 192.168.5.9, 192.168.5.10 and 192.168.5.11 on the subnet private-subnet, each configured with an HTTP application on TCP port 80.
- The back-end servers have been configured with a health check at the URL path “/hch1”.
- The Neutron network public is a shared external network created by the cloud operator, reachable from the internet.
- We want to configure a basic load balancer that is accessible from the internet via a Floating IP, distributes web requests to the back-end servers, and checks the “/hch1” path to ensure back-end member health.
The following steps achieve the goal described in the above scenario; a few verification commands are sketched after the list:
- Create load balancer lb1 on subnet private-subnet
openstack loadbalancer create --name lb1 --vip-subnet-id private-subnet
- Create listener lis1
openstack loadbalancer listener create --name lis1 --protocol HTTP --protocol-port 80 lb1
- Create pool pool1 as lis1 default pool
openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener lis1 --protocol HTTP --session-persistence type=APP_COOKIE,cookie_name=PHPSESSIONID
- Create a health monitor on pool1 which tests the “/hch1” path
openstack loadbalancer healthmonitor create --delay 5 --max-retries 4 --timeout 10 --type HTTP --url-path /hch1 pool1
- Add members 192.168.5.9, 192.168.5.10 and 192.168.5.11 on private-subnet to pool1
openstack loadbalancer member create --subnet-id private-subnet --address 192.168.5.9 --protocol-port 80 pool1
openstack loadbalancer member create --subnet-id private-subnet --address 192.168.5.10 --protocol-port 80 pool1
openstack loadbalancer member create --subnet-id private-subnet --address 192.168.5.11 --protocol-port 80 pool1
- Create a floating IP address on the external network public.
openstack floating ip create public
- Associate this floating IP with lb1’s VIP port; the required IDs are visible in the output of the previous commands
openstack floating ip set --port <load_balancer_vip_port_id> <floating_ip_id>
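The two IDs do not have to be copied by hand; they can be queried directly, and once the floating IP is associated the setup can be verified end to end. A minimal sketch, assuming the names used above (replace <floating_ip> with the allocated address):
# VIP port ID of lb1, and the ID/address of the floating IP
openstack loadbalancer show lb1 -c vip_port_id -f value
openstack floating ip list -c ID -c "Floating IP Address"
# Confirm that all three members are registered
openstack loadbalancer member list pool1
# Send a test request through the load balancer
curl http://<floating_ip>/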
Glossary
Amphora
Virtual machine, container, dedicated hardware, appliance or device that actually performs the task of load balancing in the Octavia system. More specifically, an amphora takes requests from clients on the front-end and distributes these to back-end systems. Amphorae communicate with their controllers over the LB Network through a driver interface on the controller.
Apolocation
Term used to describe when two or more amphorae are not colocated on the same physical hardware (which is often essential in HA topologies). May also be used to describe two or more load balancers which are not colocated on the same amphora.
Controller
Daemon with access to both the LB Network and OpenStack components which coordinates and manages the overall activity of the Octavia load balancing system. Controllers will usually use an abstracted driver interface (usually a base class) for communicating with various other components in the OpenStack environment in order to facilitate loose coupling with these other components. These are the “brains” of the Octavia system.
HAProxy
Load balancing software used in the reference implementation for Octavia. (See http://www.haproxy.org/ ). HAProxy processes run on amphorae and actually accomplish the task of delivering the load balancing service.
Health Monitor
An object that defines a check method for each member of the pool. The health monitor itself is a pure-db object which describes the method the load balancing software on the amphora should use to monitor the health of back-end members of the pool with which the health monitor is associated.
LB Network
Load Balancer Network. The network over which the controller(s) and amphorae communicate. The LB network itself will usually be a nova or neutron network to which both the controller and amphorae have access, but is not associated with any one tenant. The LB Network is generally also not part of the undercloud and should not be directly exposed to any OpenStack core components other than the Octavia Controller.
Listener
Object representing the listening endpoint of a load balanced service. TCP / UDP port, as well as protocol information and other protocol-specific details, are attributes of the listener. Notably, though, the IP address is not.
Load Balancer
Object describing a logical grouping of listeners on one or more VIPs and associated with one or more amphorae. (Our “Loadbalancer” most closely resembles a Virtual IP address in other load balancing implementations.) Whether the load balancer exists on more than one amphora depends on the topology used. The load balancer is also often the root object used in various Octavia APIs.
Load Balancing
The process of taking client requests on a front-end interface and distributing these to a number of back-end servers according to various rules. Load balancing allows many servers to participate in delivering some kind of TCP or UDP service to clients in an effectively transparent and often highly-available and scalable way (from the client’s perspective).
Member
Object representing a single back-end server or system that is a part of a pool. A member is associated with only one pool.
Octavia
Octavia is an operator-grade open source load balancing solution. Also known as the Octavia system or the Octavia OpenStack project. The term by itself should be used to refer to the system as a whole and not any individual component within the Octavia load balancing system.
Pool
Object representing the grouping of members to which the listener forwards client requests. Note that a pool is associated with only one listener, but a listener might refer to several pools (and switch between them using layer 7 policies).
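Pool switching via layer 7 policies can be sketched with the CLI, assuming the lb1 and lis1 objects from the walkthrough above (static-pool and the /static path rule are hypothetical examples):
# A second, non-default pool attached to the same load balancer
openstack loadbalancer pool create --name static-pool --lb-algorithm ROUND_ROBIN --loadbalancer lb1 --protocol HTTP
# Redirect to it any request whose path starts with /static
openstack loadbalancer l7policy create --action REDIRECT_TO_POOL --redirect-pool static-pool --name policy1 lis1
openstack loadbalancer l7rule create --compare-type STARTS_WITH --type PATH --value /static policy1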
TLS Termination
Transport Layer Security Termination - type of load balancing protocol where HTTPS sessions are terminated (decrypted) on the amphora as opposed to encrypted packets being forwarded on to back-end servers without being decrypted on the amphora. Also known as SSL termination. The main advantages to this type of load balancing are that the payload can be read and / or manipulated by the amphora, and that the expensive tasks of handling the encryption are off-loaded from the back-end servers. This is particularly useful if layer 7 switching is employed in the same listener configuration.
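In practice, TLS termination is set up by creating a listener with the TERMINATED_HTTPS protocol that references a certificate stored in the OpenStack key manager (Barbican). A minimal sketch, assuming the key manager service is available in your project, the lb1 load balancer from the walkthrough above, and a hypothetical PKCS12 bundle server.p12 (recent Octavia releases accept PKCS12; the secret name tls_secret1 is also hypothetical):
# Store the PKCS12 certificate bundle in the key manager
openstack secret store --name='tls_secret1' -t 'application/octet-stream' -e 'base64' --payload="$(base64 < server.p12)"
# Create an HTTPS listener that terminates TLS on the amphora
openstack loadbalancer listener create --name lis-https --protocol TERMINATED_HTTPS --protocol-port 443 --default-tls-container-ref=$(openstack secret list | awk '/ tls_secret1 / {print $2}') lb1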
VIP
Virtual IP Address - single service IP address which is associated with a load balancer. In a highly available load balancing topology in Octavia, the VIP might be assigned to several amphorae, and a layer-2 protocol like CARP, VRRP, or HSRP (or something unique to the networking infrastructure) might be used to maintain its availability. In layer-3 (routed) topologies, the VIP address might be assigned to an upstream networking device which routes packets to amphorae, which then load balance requests to back-end members.
Additional documentation and useful links
- OpenStack Octavia documentation: https://docs.openstack.org/octavia/queens/user/
- OpenStack LBaaS Octavia Command Line Interface Reference: https://docs.openstack.org/python-octaviaclient/latest/cli/index.html
- OpenStack Octavia v2 RESTful HTTP API: https://developer.openstack.org/api-ref/load-balancer/v2/index.html
- OpenStack LBaaS Octavia HAProxy Amphora API: https://docs.openstack.org/octavia/queens/contributor/api/haproxy-amphora-api.html
- OpenStack Octavia L7 Load Balancing: https://docs.openstack.org/octavia/queens/user/guides/l7.html