Load Balancing

Load Balancing is the practice of distributing network or application traffic across multiple servers to improve reliability, resource utilization, and availability. By spreading the workload, load balancing prevents any single server from becoming a bottleneck that slows down or crashes under demand. The technique is essential for keeping applications responsive and continuously available, especially under heavy traffic.

Key Concepts of Load Balancing

1. Traffic Distribution: The primary purpose of load balancing is to divide incoming traffic, whether user requests or data streams, among several servers. Because no single server has to absorb all requests, users get a consistently fast and seamless experience.

2. High Availability: Load balancing is central to high availability: if one server goes down, traffic is redirected to the remaining servers in the pool. Downtime is reduced and the application or service stays reachable for users.

3. Scalability: Load balancing makes horizontal scaling straightforward. As demand grows, more servers can be added to the pool, and the load balancer automatically starts distributing traffic to them, allowing the application to handle larger volumes of traffic.

4. Fault Tolerance: By directing traffic across a number of servers at once, load balancing provides built-in fault tolerance. If one server fails, the load balancer detects the failure and redirects traffic to the remaining healthy servers, avoiding service disruption.

5. Session Persistence: Also known as session affinity, this is a load balancing technique in which all requests from a specific client are routed to the same server for the duration of a session. It is especially useful for applications that rely on session state, such as shopping carts on e-commerce websites.

6. Health Monitoring: Load balancers usually include a health check mechanism that continuously probes the availability of each server in the pool. When a server fails a health check, the load balancer stops sending traffic to it until it passes the checks again (see the sketch after this list).
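
The sketch below illustrates the health-checking idea in a minimal way. It assumes hypothetical backend addresses and a plain HTTP health endpoint; a real load balancer would run such probes continuously and in parallel rather than on demand.

```python
import urllib.request

# Hypothetical pool of backend servers (placeholder addresses).
BACKENDS = ["http://10.0.0.1:8080", "http://10.0.0.2:8080", "http://10.0.0.3:8080"]

def is_healthy(base_url: str, timeout: float = 2.0) -> bool:
    """Probe a backend's /health endpoint; any error marks it unhealthy."""
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def healthy_backends() -> list[str]:
    """Return only the backends that currently pass the health check."""
    return [b for b in BACKENDS if is_healthy(b)]

# Traffic would be routed only to servers returned by healthy_backends();
# a failed server is skipped until it passes the probe again.
```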

Types of Load Balancing

1. Hardware Load Balancers: These are physical appliances built specifically for load balancing. They sit between clients and servers and direct traffic to the appropriate server according to configured criteria. Hardware load balancers are used primarily by large organizations that need high availability and high performance.

2. Software Load Balancers: These are applications or services that perform load balancing in software. Unlike hardware appliances, they can be installed on standard servers and are generally easier to deploy. Common examples include HAProxy, NGINX, and the Apache HTTP Server with mod_proxy.

3. Cloud-Based Load Balancers: The major cloud providers offer load balancing as a service (LBaaS). These load balancers are managed by the cloud provider and distribute traffic across servers running in the cloud. Examples include AWS Elastic Load Balancing (ELB), Google Cloud Load Balancing, and Azure Load Balancer.

4. DNS Load Balancing: DNS load balancing uses the Domain Name System (DNS) to spread network traffic. When a DNS query is made, the DNS server can return different IP addresses for the same domain name, distributing traffic among several servers. DNS load balancing is less precise than an application-level load balancer, but it is useful for distributing traffic across geographic regions (see the sketch after this list).
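
As a small illustration of the DNS-based approach, the sketch below resolves a hostname and rotates through whatever addresses the DNS answer contains; a service published behind several A records would have its clients spread across those servers. The domain name here is only a placeholder.

```python
import itertools
import socket

def resolve_all(hostname: str, port: int = 80) -> list[str]:
    """Return every IPv4 address the DNS answer contains for the name."""
    infos = socket.getaddrinfo(hostname, port, family=socket.AF_INET, type=socket.SOCK_STREAM)
    # Each entry is (family, type, proto, canonname, sockaddr); sockaddr[0] is the IP.
    return sorted({info[4][0] for info in infos})

# Placeholder domain: a real multi-server service would publish several A records.
addresses = resolve_all("example.com")

# Simple client-side rotation over the returned addresses.
rotation = itertools.cycle(addresses)
next_server = next(rotation)
```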

Load Balancing Algorithms

1. Round Robin: One of the simplest load balancing techniques: each incoming request is forwarded to the next server in the pool, and after the last server the rotation starts again from the first. This method works well when all servers have the same specification (this and several of the following strategies are illustrated in the sketch after this list).

2. Least Connections: This algorithm sends traffic to the server with the fewest active connections. It is beneficial when request handling times vary widely, as it helps prevent individual servers from becoming congested.

3. IP Hashing: The load balancer applies a hash function to the client’s IP address to decide which server should process the request. All requests from a given client are therefore handled by the same server, which is useful for maintaining server-side sessions.

4. Weighted Round Robin: Similar to round robin, except that each server is assigned a weight based on its capacity or performance. Servers with higher weights receive more traffic, which makes this approach well suited to pools with servers of different capacities.

5. Random: Requests are dispatched at random to any server in the pool. This method is simple but does not necessarily balance load as well as the more elaborate methods.

6. Least Response Time: This algorithm forwards traffic to the server that is currently responding fastest, so every request is handled by the most responsive server. It can be very effective for improving overall system performance.
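
The sketch below shows how a few of these selection strategies can be expressed, assuming an in-memory server list, hypothetical server names and weights, and per-server connection counts supplied by the caller. Note that weighted round robin is simplified here to a weighted random choice; it is illustrative only, not a production scheduler.

```python
import hashlib
import itertools
import random

SERVERS = ["srv-a", "srv-b", "srv-c"]            # placeholder server names
WEIGHTS = {"srv-a": 5, "srv-b": 3, "srv-c": 1}   # hypothetical capacities

# Round robin: hand out servers in a fixed rotation.
_rotation = itertools.cycle(SERVERS)
def round_robin() -> str:
    return next(_rotation)

# Least connections: pick the server with the fewest active connections.
def least_connections(active: dict[str, int]) -> str:
    return min(SERVERS, key=lambda s: active.get(s, 0))

# IP hashing: the same client IP always maps to the same server.
def ip_hash(client_ip: str) -> str:
    digest = hashlib.sha256(client_ip.encode()).hexdigest()
    return SERVERS[int(digest, 16) % len(SERVERS)]

# Weighted selection (a simplification of weighted round robin):
# heavier servers are chosen proportionally more often.
def weighted_choice() -> str:
    return random.choices(SERVERS, weights=[WEIGHTS[s] for s in SERVERS], k=1)[0]
```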

Common Use Cases for Load Balancing

1. Web Applications: Load balancing is essential for managing high-traffic web applications. It ensures that user requests are processed and that the application remains available during traffic spikes and even when some servers go down.

2. Database Servers: Load balancing can route query traffic across multiple database servers, improving application performance and preventing any single server from becoming overloaded.

3. E-commerce Platforms: Load balancing is crucial for e-commerce platforms that handle many orders simultaneously, particularly during peak shopping periods such as Black Friday or the holidays.

4. Content Delivery Networks (CDNs): CDNs use load balancing to deliver content from multiple servers located in different regions of the world.

5. Email Servers: Load balancing splits traffic across multiple email servers to improve delivery and avoid overloading any single server.

Advantages of Load Balancing

1. Improved Performance: By spreading traffic across many servers, load balancing avoids congestion and keeps server response times low.

2. High Availability: Load balancing keeps applications running even if one or more servers go down. This redundancy is important for ensuring that services remain continuously available to clients.

3. Scalability: Load balancing distributes the load among multiple machines, allowing applications to scale out. When demand grows, new servers can simply be added to the pool, and the load balancer automatically starts sending load to them.

4. Efficient Resource Utilization: Load balancing spreads the workload across the servers so that no server is overloaded while others sit underutilized.

5. Security: Load balancers can improve security by concealing the internal structure of the server pool from users, making it harder for attackers to target specific servers. Some load balancers also provide additional functions such as SSL termination, which relieves the backend servers of the work of encrypting and decrypting data.

Disadvantages and Considerations

1. Complexity: Load balancing is not always easy to implement and manage, especially in a large organization with many hosts and applications. It requires careful planning and constant monitoring.

2. Cost: Hardware load balancers can be expensive to acquire and maintain. Software and cloud-based load balancers can also add costs as the environment grows.

3. Single Point of Failure: If the load balancer itself fails, the entire system behind it is affected. This risk can be mitigated by deploying multiple load balancers, but doing so increases overall cost and complexity.

4. Latency: Although load balancing generally improves application performance, routing traffic through a load balancer adds a small amount of latency. This latency is usually negligible, but it may matter for latency-critical applications.

Conclusion

In essence, Load Balancing is the technique of dividing network or application load across many servers to improve reliability, efficiency, and availability. It keeps any single server from being overloaded while others sit idle, preventing the slow or unresponsive behavior that overloading causes. Different categories of load balancers and a variety of algorithms can be used depending on an organization’s requirements, whether for a web application, a database, or content delivery. Despite its complexity and cost, load balancing is an important part of modern IT infrastructure because of its ability to improve performance, scalability, and fault tolerance.