Load Balancer as a Service - user documentation
Load Balancer is a service that listens for requests and then forwards those requests to servers within a pool. There are many reasons to take advantage of this functionality, but the most common situations are when you are trying to achieve a high-availability architecture, and when you need more performance than a single server can provide.
The following figure presents the general concept of the LBaaS service and its main components.
The load balancer occupies an OpenStack Neutron network port and has an IP address assigned from a subnet. Load balancers can listen for requests on multiple ports; each of those ports is specified by a listener. A pool holds a list of members that serve content through the load balancer. Members are the servers that serve traffic behind the load balancer; each member is specified by the IP address and port that it uses to serve traffic. Members may go offline from time to time, and health monitors divert traffic away from members that are not responding properly. Health monitors are associated with pools.
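Once a load balancer exists, you can inspect how these components fit together from the CLI. A minimal sketch, assuming a load balancer named lb1 already exists in your project (the name is illustrative):
openstack loadbalancer status show lb1
This command prints the full tree of listeners, pools, members and health monitors attached to lb1, together with their provisioning and operating statuses.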
Basic Load Balancer configuration
You can create a new Load Balancer using one of two available methods:
- using the OpenStack Horizon dashboard
- using the Octavia CLI extensions to the OpenStack Client
Creating a Load Balancer using the OpenStack Horizon dashboard
To configure a new Load Balancer using the OpenStack dashboard, log in to OpenStack, choose the right project if different from the default, and go to:
Project -> Network -> Load Balancers. Click Create Load Balancer, fill in the required fields in the Load Balancer Details tab, then go to the next tab.
Now provide the details for the listener (the listening endpoint of a load-balanced service). Choose the protocol from the list and fill in the other protocol-specific details that are attributes of the listener.
Provide details for the pool as well as for the pool members (a pool is the object representing the grouping of members to which the listener forwards client requests).
Next, define the details of the health monitor, an object that defines a check method for each member of the pool.
Now you can click the “Create Load Balancer” button to create a load balancer in accordance with the previously entered configuration.
Finally, you can associate a Floating IP with the Load Balancer if such an association is needed in your use case.
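Whichever method you use, you can verify from the CLI that the new load balancer has been provisioned correctly. A short sketch, assuming the load balancer was named lb1 in the wizard (substitute the name you entered):
openstack loadbalancer list
openstack loadbalancer show lb1 -c provisioning_status -c operating_status -c vip_address
The load balancer is ready to serve traffic once provisioning_status reports ACTIVE and operating_status reports ONLINE.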
Creating a basic HTTP Load Balancer using the CLI
The following example illustrates how to prepare a basic HTTP Load Balancer configuration using the OpenStack CLI.
Description of the example scenario:
- To ensure a high level of reliability and performance, we need three back-end servers with the IP addresses 192.168.5.9, 192.168.5.10 and 192.168.5.11 on the subnet private-subnet, each configured with an HTTP application on TCP port 80.
- The back-end servers have been configured with a health check at the URL path “/hch1”.
- The Neutron network public is a shared external network created by the cloud operator which is reachable from the internet.
- We want to configure a basic load balancer that is accessible from the internet via a Floating IP, distributes web requests to the back-end servers, and checks the “/hch1” path to ensure back-end member health.
The following steps should be taken to achieve the goal described in the above scenario:
- Create load balancer lb1 on subnet private-subnet
openstack loadbalancer create --name lb1 --vip-subnet-id private-subnet
- Create listener lis1
openstack loadbalancer listener create --name lis1 --protocol HTTP --protocol-port 80 lb1
- Create pool pool1 as lis1’s default pool
openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener lis1 --protocol HTTP --session-persistence type=APP_COOKIE,cookie_name=PHPSESSIONID
- Create a health monitor on pool1 which tests the “/hch1” path
openstack loadbalancer healthmonitor create --delay 5 --max-retries 4 --timeout 10 --type HTTP --url-path /hch1 pool1
- Add members 192.168.5.9, 192.168.5.10 and 192.168.5.11 on subnet private-subnet to pool1
openstack loadbalancer member create --subnet-id private-subnet --address 192.168.5.9 --protocol-port 80 pool1
openstack loadbalancer member create --subnet-id private-subnet --address 192.168.5.10 --protocol-port 80 pool1
openstack loadbalancer member create --subnet-id private-subnet --address 192.168.5.11 --protocol-port 80 pool1
- Create a floating IP address on the external network public
openstack floating ip create public
- Associate this floating IP with lb1’s VIP port. The IDs used below are visible in the output of the previous commands
openstack floating ip set --port <load_balancer_vip_port_id> <floating_ip_id>
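After completing these steps, you can verify the configuration and test the service. A minimal sketch; the address 203.0.113.10 is a placeholder for the floating IP returned by the openstack floating ip create command:
openstack loadbalancer show lb1 -c provisioning_status -c operating_status
openstack loadbalancer member list pool1
curl http://203.0.113.10/
The curl request should return the page served by one of the back-end members; repeated requests are distributed according to the ROUND_ROBIN algorithm, subject to the APP_COOKIE session persistence configured on pool1. If the load balancer is no longer needed, openstack loadbalancer delete --cascade lb1 removes it together with its listeners, pools and health monitors.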
Glossary of Octavia terms
Amphora
Virtual machine, container, dedicated hardware, appliance or device that actually performs the task of load balancing in the Octavia system. More specifically, an amphora takes requests from clients on the front-end and distributes these to back-end systems. Amphorae communicate with their controllers over the LB Network through a driver interface on the controller.
Apolocation
Term used to describe when two or more amphorae are not colocated on the same physical hardware (which is often essential in HA topologies). May also be used to describe two or more load balancers which are not colocated on the same amphora.
Controller
Daemon with access to both the LB Network and OpenStack components which coordinates and manages the overall activity of the Octavia load balancing system. Controllers will usually use an abstracted driver interface (usually a base class) for communicating with various other components in the OpenStack environment in order to facilitate loose coupling with these other components. These are the “brains” of the Octavia system.
HAProxy
Load balancing software used in the reference implementation for Octavia (see http://www.haproxy.org/). HAProxy processes run on amphorae and actually accomplish the task of delivering the load balancing service.
Health Monitor
An object that defines a check method for each member of the pool. The health monitor itself is a pure-db object which describes the method the load balancing software on the amphora should use to monitor the health of back-end members of the pool with which the health monitor is associated.
LB Network
Load Balancer Network. The network over which the controller(s) and amphorae communicate. The LB network itself will usually be a nova or neutron network to which both the controller and amphorae have access, but is not associated with any one tenant. The LB Network is generally also not part of the undercloud and should not be directly exposed to any OpenStack core components other than the Octavia Controller.
Listener
Object representing the listening endpoint of a load balanced service. TCP / UDP port, as well as protocol information and other protocol-specific details are attributes of the listener. Notably, though, the IP address is not.
Load Balancer
Object describing a logical grouping of listeners on one or more VIPs and associated with one or more amphorae. (Our “Loadbalancer” most closely resembles a Virtual IP address in other load balancing implementations.) Whether the load balancer exists on more than one amphora depends on the topology used. The load balancer is also often the root object used in various Octavia APIs.
Load Balancing
The process of taking client requests on a front-end interface and distributing these to a number of back-end servers according to various rules. Load balancing allows many servers to participate in delivering some kind of TCP or UDP service to clients in an effectively transparent and often highly available and scalable way (from the client’s perspective).
Member
Object representing a single back-end server or system that is a part of a pool. A member is associated with only one pool.
Octavia
Octavia is an operator-grade open source load balancing solution. Also known as the Octavia system or the OpenStack Octavia project. The term by itself should be used to refer to the system as a whole and not any individual component within the Octavia load balancing system.
Pool
Object representing the grouping of members to which the listener forwards client requests. Note that a pool is associated with only one listener, but a listener might refer to several pools (and switch between them using layer 7 policies).
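As an illustration of switching between pools with layer 7 policies, the following sketch creates a second pool on lb1 and redirects requests whose URL path starts with /api to it (the names pool2 and policy1 are illustrative, and lis1 is the listener from the earlier example):
openstack loadbalancer pool create --name pool2 --lb-algorithm ROUND_ROBIN --loadbalancer lb1 --protocol HTTP
openstack loadbalancer l7policy create --action REDIRECT_TO_POOL --redirect-pool pool2 --name policy1 lis1
openstack loadbalancer l7rule create --compare-type STARTS_WITH --type PATH --value /api policy1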
TLS Termination
Transport Layer Security termination - type of load balancing protocol where HTTPS sessions are terminated (decrypted) on the amphora as opposed to encrypted packets being forwarded on to back-end servers without being decrypted on the amphora. Also known as SSL termination. The main advantages of this type of load balancing are that the payload can be read and / or manipulated by the amphora, and that the expensive tasks of handling the encryption are off-loaded from the back-end servers. This is particularly useful if layer 7 switching is employed in the same listener configuration.
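A minimal sketch of a TLS-terminated listener, assuming a certificate bundle has already been stored in Barbican and its secret reference is at hand (the reference and the listener name lis-tls below are placeholders):
openstack loadbalancer listener create --name lis-tls --protocol TERMINATED_HTTPS --protocol-port 443 --default-tls-container-ref <barbican_secret_ref> --default-pool pool1 lb1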
VIP
Virtual IP Address - single service IP address which is associated with a load balancer. In a highly available load balancing topology in Octavia, the VIP might be assigned to several amphorae, and a layer-2 protocol like CARP, VRRP, or HSRP (or something unique to the networking infrastructure) might be used to maintain its availability. In layer-3 (routed) topologies, the VIP address might be assigned to an upstream networking device which routes packets to amphorae, which then load balance requests to back-end members.
Additional documentation and useful links
- OpenStack Octavia documentation: https://docs.openstack.org/octavia/queens/user/
- OpenStack LBaaS Octavia Command Line Interface Reference: https://docs.openstack.org/python-octaviaclient/latest/cli/index.html
- OpenStack Octavia v2 ReSTful HTTP API: https://developer.openstack.org/api-ref/load-balancer/v2/index.html
- OpenStack LBaaS Octavia HAProxy Amphora API: https://docs.openstack.org/octavia/queens/contributor/api/haproxy-amphora-api.html
- OpenStack Octavia L7 Load Balancing: https://docs.openstack.org/octavia/queens/user/guides/l7.html