Load balancing

Load balancing is a technique used in networking for distributing workloads evenly across two or more computers, network links, CPUs or other resources. Its goal is optimal resource utilization: it minimizes response time and prevents any single resource from being overloaded, thus making an application scalable.

There are several methods for implementing load balancing:

1. Link aggregation
This method works on Layer 2 of the OSI model and consists of using multiple network cables and ports in parallel to increase throughput beyond the limit of a single cable or port. It is an inexpensive way of setting up high-speed backbone networks that have to transport large amounts of data.
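On Linux, link aggregation can be sketched with iproute2 bonding commands. This is a minimal, illustrative sketch: the interface names (eth0, eth1) are assumptions, and the switch ports on the other end must also be configured for LACP.

```shell
# Aggregate two NICs into one logical link using 802.3ad (LACP) bonding.
ip link add bond0 type bond mode 802.3ad
# Slaves must be down before they can be enslaved to the bond.
ip link set eth0 down
ip link set eth0 master bond0
ip link set eth1 down
ip link set eth1 master bond0
# Bring the aggregated link up; it now carries the combined bandwidth.
ip link set bond0 up
```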

2. Server farm
This is the most commonly used method. The load balancer is usually a piece of software listening on the socket where external clients connect; it forwards each request to one of the backend servers. The server then usually replies to the load balancer, which forwards the response to the client. This method also hides the backend servers from the clients, offering extra security.
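The core of the server-farm method can be sketched as a round-robin backend picker. This is a minimal illustration, not a complete proxy; the backend addresses are hypothetical.

```python
import itertools

class RoundRobinBalancer:
    """Assigns each new client connection to the next backend in turn."""

    def __init__(self, backends):
        self._backends = itertools.cycle(backends)

    def pick_backend(self):
        # Rotate through the backend pool, one backend per connection.
        return next(self._backends)

balancer = RoundRobinBalancer([("10.0.0.1", 8080), ("10.0.0.2", 8080)])
```

In a real balancer, the software would accept() on the public socket, open a connection to `balancer.pick_backend()` and relay bytes in both directions.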

3. DNS Round Robin
For this method, multiple IP addresses are assigned to a single domain name. When a client makes a DNS request, the DNS server returns a response with one of the IP addresses, chosen by criteria that can range from the client's geographical location to scheduling strategies or even random selection. This is also one of the methods Google uses for load balancing.
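A toy sketch of the rotation behaviour: each query for the same name gets the address list rotated by one, so successive clients tend to connect to different servers. The domain name and addresses below are made up.

```python
from collections import deque

class RoundRobinDNS:
    """Toy DNS table that rotates its answer list on every query."""

    def __init__(self):
        self._records = {}

    def add_records(self, name, addresses):
        self._records[name] = deque(addresses)

    def resolve(self, name):
        addrs = self._records[name]
        answer = list(addrs)
        addrs.rotate(-1)  # the next query sees a different first address
        return answer

dns = RoundRobinDNS()
dns.add_records("www.example.com", ["192.0.2.1", "192.0.2.2", "192.0.2.3"])
```

Clients that simply take the first address in the answer are thereby spread across the three servers.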

4. Layer 7 Load Balancing
As the name suggests, this method works on the application layer of the OSI model. It involves parsing requests at the application layer and distributing them to specialized servers based on the content of each request. The overhead of parsing requests at layer 7 is high, so its scalability is limited compared to the other methods. Software such as Apache, Lighttpd and the Linux kernel provides modules implementing this method.
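Content-based routing can be sketched by parsing the HTTP request line and picking a backend pool by URL prefix. The pool names and prefixes here are hypothetical.

```python
def route(request_line, pools):
    """Pick a backend pool by inspecting the request path (layer 7)."""
    # request_line looks like: "GET /images/logo.png HTTP/1.1"
    method, path, _ = request_line.split(" ", 2)
    for prefix, pool in pools.items():
        if path.startswith(prefix):
            return pool
    return pools["/"]

pools = {
    "/images/": ["img1.internal", "img2.internal"],  # static-file servers
    "/api/":    ["app1.internal"],                   # application servers
    "/":        ["web1.internal"],                   # default pool
}
```

Requests for static images can thus be sent to servers tuned for serving files, while API calls go to the application servers.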

Other uses:
Because a load balancer can switch between different resources, the technique is also used to implement failover (the continuation of a service after the failure of one or more components). The components of a system are monitored continually, and when one no longer responds, the load balancer stops sending traffic to it. When the component becomes responsive again, the load balancer resumes routing traffic to it.
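The monitoring step can be sketched as a simple health check: probe each backend with a TCP connect and route only to those whose last probe succeeded. The probe method and backend hosts are assumptions for illustration.

```python
import socket

def is_healthy(host, port, timeout=1.0):
    """Return True if a TCP connection to the backend succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def healthy_backends(backends):
    # Backends that fail the probe are skipped; they rejoin the pool
    # automatically once a later probe succeeds.
    return [(h, p) for (h, p) in backends if is_healthy(h, p)]
```

A real monitor would run these probes periodically in the background rather than on every request.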



About Stefan Fodor

This entry was posted in Week 5.
