
HAProxy and Load Balancing Concepts

Introduction

HAProxy, which stands for High Availability Proxy, is a popular open source software TCP/HTTP load balancer and proxying solution which can be run on Linux, Solaris, and FreeBSD. Its most common use is to improve the performance and reliability of a server environment by distributing the workload across multiple servers (e.g. web, application, database). It is used in many high-profile environments, including: GitHub, Imgur, Instagram, and Twitter.

In this guide, we will give a general overview of what HAProxy is, basic load-balancing terminology, and examples of how it might be used to improve the performance and reliability of your own server environment.

HAProxy Terminology

There are many terms and concepts that are important when discussing load balancing and proxying. We will go over commonly used terms in the following sub-sections.

Before we get into the basic types of load balancing, we will talk about ACLs, backends, and frontends.

Access Control List (ACL)

In relation to load balancing, ACLs are used to test some condition and perform an action (e.g. select a server, or block a request) based on the test result. Use of ACLs allows flexible network traffic forwarding based on a variety of factors like pattern-matching and the number of connections to a backend, for example.

Here is an example of an ACL:

acl url_blog path_beg /blog

This ACL is matched if the path of a user's request begins with /blog. This would match a request of http://yourdomain.com/blog/blog-entry-1, for example.

For a detailed guide on ACL usage, check out the HAProxy Configuration Manual.

Backend

A backend is a set of servers that receives forwarded requests. Backends are defined in the backend section of the HAProxy configuration. In its most basic form, a backend can be defined by:

  • which load balance algorithm to use
  • a list of servers and ports

A backend can contain one or many servers in it--generally speaking, adding more servers to your backend will increase your potential load capacity by spreading the load over multiple servers. Increased reliability is also achieved through this manner, in case some of your backend servers become unavailable.

Here is an example of a two backend configuration, web-backend and blog-backend with two web servers in each, listening on port 80:

backend web-backend
   balance roundrobin
   server web1 web1.yourdomain.com:80 check
   server web2 web2.yourdomain.com:80 check

backend blog-backend
   balance roundrobin
   mode http
   server blog1 blog1.yourdomain.com:80 check
   server blog2 blog2.yourdomain.com:80 check

The balance roundrobin line specifies the load balancing algorithm, which is detailed in the Load Balancing Algorithms section.

mode http specifies that layer 7 proxying will be used, which is explained in the Types of Load Balancing section.

The check option at the end of the server directives specifies that health checks should be performed on those backend servers.
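By default, check performs a plain TCP connection test. If you want a backend's health checks to verify an HTTP response instead, HAProxy's option httpchk directive can be added; the following sketch reuses the web-backend example above, and the request path is only an illustrative choice:

```
backend web-backend
   balance roundrobin
   # perform an HTTP GET on / instead of a plain TCP connect;
   # a server passing this check is considered healthy
   option httpchk GET /
   server web1 web1.yourdomain.com:80 check
   server web2 web2.yourdomain.com:80 check
```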

Frontend

A frontend defines how requests should be forwarded to backends. Frontends are defined in the frontend section of the HAProxy configuration. Their definitions are composed of the following components:

  • a set of IP addresses and a port (e.g. 10.1.1.7:80, *:443, etc.)
  • ACLs
  • use_backend rules, which define which backends to use depending on which ACL conditions are matched, and/or a default_backend rule that handles every other case

A frontend can be configured for various types of network traffic, as explained in the next section.
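Putting those three components together, a minimal frontend might look like the following sketch (the frontend name, address, and backend names are illustrative):

```
frontend www
   # the IP address and port to listen on
   bind 10.1.1.7:80
   # an ACL condition
   acl url_blog path_beg /blog
   # forwarding rules: one conditional, one default
   use_backend blog-backend if url_blog
   default_backend web-backend
```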

Types of Load Balancing

Now that we have an understanding of the basic components that are used in load balancing, let's get into the basic types of load balancing.

No Load Balancing

A simple web application environment with no load balancing might look like the following:

No Load Balancing

In this example, the user connects directly to your web server, at yourdomain.com, and there is no load balancing. If your sole web server goes down, the user will no longer be able to access your web server. Additionally, if many users are trying to access your server simultaneously and it is unable to handle the load, they may have a slow experience or they may not be able to connect at all.

Layer 4 Load Balancing

The simplest way to load balance network traffic to multiple servers is to use layer 4 (transport layer) load balancing. Load balancing this way will forward user traffic based on IP range and port (i.e. if a request comes in for http://yourdomain.com/anything, the traffic will be forwarded to the backend that handles all the requests for yourdomain.com on port 80). For more details on layer 4, check out the TCP subsection of our Introduction to Networking.
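In HAProxy, layer 4 balancing corresponds to mode tcp, in which the proxy forwards raw connections without inspecting their content. A minimal sketch, with illustrative names:

```
frontend www
   bind *:80
   # layer 4: forward TCP connections without inspecting HTTP content
   mode tcp
   default_backend web-backend

backend web-backend
   mode tcp
   balance roundrobin
   server web1 web1.yourdomain.com:80 check
   server web2 web2.yourdomain.com:80 check
```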

Here is a diagram of a simple example of layer 4 load balancing:

Layer 4 Load Balancing

The user accesses the load balancer, which forwards the user's request to the web-backend group of backend servers. Whichever backend server is selected will respond directly to the user's request. Generally, all of the servers in the web-backend should be serving identical content--otherwise the user might receive inconsistent content. Note that both web servers connect to the same database server.

Layer 7 Load Balancing

Another, more complex way to load balance network traffic is to use layer 7 (application layer) load balancing. Using layer 7 allows the load balancer to forward requests to different backend servers based on the content of the user's request. This mode of load balancing allows you to run multiple web application servers under the same domain and port. For more details on layer 7, check out the HTTP subsection of our Introduction to Networking.

Here is a diagram of a simple example of layer 7 load balancing:

Layer 7 Load Balancing

In this example, if a user requests yourdomain.com/blog, they are forwarded to the blog backend, which is a set of servers that run a blog application. Other requests are forwarded to web-backend, which might be running another application. Both backends use the same database server, in this example.

A snippet of the example frontend configuration would look like this:

frontend http
  bind *:80
  mode http

  acl url_blog path_beg /blog
  use_backend blog-backend if url_blog

  default_backend web-backend

This configures a frontend named http, which handles all incoming traffic on port 80.

acl url_blog path_beg /blog matches a request if the path of the user's request begins with /blog.

use_backend blog-backend if url_blog uses the ACL to proxy the traffic to blog-backend.

default_backend web-backend specifies that all other traffic will be forwarded to web-backend.

Load Balancing Algorithms

The load balancing algorithm that is used determines which server, in a backend, will be selected when load balancing. HAProxy offers several options for algorithms. In addition to the load balancing algorithm, servers can be assigned a weight parameter to manipulate how frequently the server is selected, compared to other servers.
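For example, the weight parameter can be set per server directive. Weights are relative, and in this sketch web2 would be selected roughly twice as often as web1:

```
backend web-backend
   balance roundrobin
   # web2 receives about twice as many requests as web1
   server web1 web1.yourdomain.com:80 weight 1 check
   server web2 web2.yourdomain.com:80 weight 2 check
```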

Because HAProxy provides so many load balancing algorithms, we will only describe a few of them here. See the HAProxy Configuration Manual for a complete list of algorithms.

A few of the commonly used algorithms are as follows:

roundrobin

Round Robin selects servers in turns. This is the default algorithm.

leastconn

Selects the server with the least number of connections--it is recommended for longer sessions. Servers in the same backend are also rotated in a round-robin fashion.

source

This selects which server to use based on a hash of the source IP, i.e. your user's IP address. This is one method to ensure that a user will connect to the same server.
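These algorithms are all selected with the balance directive in a backend. For instance, a sketch using source (server names are illustrative):

```
backend web-backend
   # a hash of the client IP determines the server, so a given
   # user keeps being routed to the same backend server
   balance source
   server web1 web1.yourdomain.com:80 check
   server web2 web2.yourdomain.com:80 check
```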

Sticky Sessions

Some applications require that a user continues to connect to the same backend server. This persistence is achieved through sticky sessions, using the appsession parameter in the backend that requires it.
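Here is a sketch of appsession usage, assuming the application issues a session cookie named JSESSIONID (note that newer HAProxy releases replace appsession with cookie-based persistence and stick tables):

```
backend app-backend
   balance roundrobin
   # remember up to 52 characters of the JSESSIONID cookie for 3 hours,
   # and route requests carrying a known value back to the same server
   appsession JSESSIONID len 52 timeout 3h
   server app1 app1.yourdomain.com:80 check
   server app2 app2.yourdomain.com:80 check
```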

Health Check

HAProxy uses health checks to determine if a backend server is available to process requests. This avoids having to manually remove a server from the backend if it becomes unavailable. The default health check is to try to establish a TCP connection to the server, i.e. it checks if the backend server is listening on the configured IP address and port.

If a server fails a health check, and therefore is unable to serve requests, it is automatically disabled in the backend, i.e. traffic will not be forwarded to it until it becomes healthy again. If all servers in a backend fail, the service will become unavailable until at least one of those backend servers becomes healthy again.

For certain types of backends, like database servers in certain situations, the default health check is insufficient to determine whether a server is still healthy.
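For MySQL backends, for example, HAProxy provides a protocol-aware check that actually logs in to the database instead of just opening a connection. A sketch, assuming a MySQL user named haproxy_check has been created for this purpose:

```
backend mysql-backend
   mode tcp
   balance leastconn
   # log in to MySQL as the given user rather than
   # only testing that the port accepts TCP connections
   option mysql-check user haproxy_check
   server mysql1 mysql1.yourdomain.com:3306 check
   server mysql2 mysql2.yourdomain.com:3306 check
```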

Other Solutions

If you feel like HAProxy might be too complex for your needs, the following solutions may be a good fit:

  • Linux Virtual Servers (LVS) - a simple, fast layer 4 load balancer included in many Linux distributions

  • Nginx - a fast and reliable web server that can also be used for proxy and load-balancing purposes. Nginx is often used in conjunction with HAProxy for its caching and compression capabilities

High Availability

The layer 4 and layer 7 load balancing setups described before both use a load balancer to direct traffic to one of many backend servers. However, your load balancer is a single point of failure in these setups; if it goes down or gets overwhelmed with requests, it can cause high latency or downtime for your service.

A high availability (HA) setup is an infrastructure without a single point of failure. It prevents a single server failure from being a downtime event by adding redundancy to every layer of your architecture. A load balancer facilitates redundancy for the backend layer (web/app servers), but for a true high availability setup, you need to have redundant load balancers as well.

Here is a diagram of a basic high availability setup:

HA Setup

In this example, you have multiple load balancers (one active and one or more passive) behind a static IP address that can be remapped from one server to another. When a user accesses your website, the request goes through the external IP address to the active load balancer. If that load balancer fails, your failover mechanism will detect it and automatically reassign the IP address to one of the passive servers. There are a number of different ways to implement an active/passive HA setup. To learn more, read this section of How To Use Floating IPs.
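One common failover mechanism (not the only one) is keepalived, which moves a virtual IP between load balancers using the VRRP protocol. As a rough sketch, the active node's configuration might look like the following; the interface name, password, and addresses are illustrative assumptions:

```
# /etc/keepalived/keepalived.conf on the active (MASTER) load balancer
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 101          # passive nodes use a lower priority
    authentication {
        auth_type PASS
        auth_pass examplepass
    }
    virtual_ipaddress {
        10.1.1.100        # floating IP shared by the load balancers
    }
}
```

A passive node would carry the same configuration with state BACKUP and a lower priority; when the master stops sending VRRP advertisements, the backup takes over the floating IP.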

Conclusion

Now that you have a basic understanding of load balancing and know of a few ways that HAProxy can facilitate your load balancing needs, you have a solid foundation to get started on improving the performance and reliability of your own server environment.

The following tutorials provide detailed examples of HAProxy setups:

How To Use HAProxy As A Layer 4 Load Balancer for WordPress Application Servers on Ubuntu 14.04

How To Use HAProxy to Set Up MySQL Load Balancing

By Mitchell Anicas
Reference: digitalocean