Application Continuity Using F5 | Part 1 | Load Balancer

  • September 21, 2019
  • F5 Load Balancer

Application Continuity Using F5

This is the first post in a two-part series on the F5 Load Balancer. It covers why load balancers matter and the two main types – local load balancing and global load balancing – while the second part will focus on the F5 Load Balancer itself.

Now, let’s first figure out: “What is a load balancer?” and “How does it work?”

A local server load balancer distributes load across a group of application servers, such as web servers. The load balancer has an IP address that represents this pool of backend servers. Clients, such as web browsers, send requests to this load balancer IP address to access the service.

For Free, Demo classes Call: 7798058777
Registration Link: Click Here!

Load Balancer

The load balancer supports different policies for how to balance the load, i.e. for deciding which session should be sent to which server. For instance, the load balancer can track how many parallel sessions each server has and send the next new session to the server with the fewest connections; this policy is called least connections.
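The least-connections policy can be sketched in a few lines. This is an illustrative model only, not F5’s implementation; the server names and connection counts are made up.

```python
# Minimal sketch of the "least connections" policy: pick the backend
# that is currently handling the fewest sessions.

def pick_least_connections(servers):
    """Return the name of the server with the fewest active sessions."""
    return min(servers, key=lambda name: servers[name])

pool = {"web1": 12, "web2": 4, "web3": 9}   # active sessions per server
target = pick_least_connections(pool)        # "web2" has the fewest
pool[target] += 1                            # the new session lands there
```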

Another consideration is that some applications must “stick” a client to the same server for the duration of its communication. The load balancer guarantees this with persistence, also called “sticky” sessions. The load balancer also health-checks each server: if one stops responding, the load balancer stops sending traffic to that dead server and distributes its load across the other servers in the pool.
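A common way to combine persistence with health checks is to hash the client’s source IP onto the list of currently healthy servers. This is a hedged sketch of that idea; real load balancers track far more state, and all names here are hypothetical.

```python
import hashlib

# Sketch of source-IP persistence ("sticky" sessions) with a
# health-check fallback: the same client keeps landing on the same
# backend for as long as that backend stays healthy.

def pick_server(client_ip, servers, healthy):
    """Hash the client IP onto the list of healthy servers."""
    alive = [s for s in servers if s in healthy]
    if not alive:
        raise RuntimeError("no healthy backend servers")
    digest = int(hashlib.sha256(client_ip.encode()).hexdigest(), 16)
    return alive[digest % len(alive)]

servers = ["web1", "web2", "web3"]
healthy = {"web1", "web3"}                 # web2 failed its health check
server = pick_server("203.0.113.7", servers, healthy)
# Repeated calls with the same client IP return the same server.
```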

Now that you know what a load balancer is and how it works, let’s see why load balancing is important.

We saw that load balancing is a technique to distribute load across a server cluster according to configured policies. Here we are going to look at the challenges organizations face in load management and how a load balancer solves them.


Challenges for Organisations

There are three main challenges, or problem domains, that organizations face: availability, performance, and economy.

To avoid a single point of failure, we need replicas of our application servers. Then a hardware failure is not a complete failure of the application, and customers face the lowest possible downtime. This is the availability challenge: to avoid outages, we need to run multiple servers and be able to reroute the traffic of a failed server to the other servers as fast as possible.

There are physical limits on how much work any server system can do in a given time. These limits can be raised from time to time, but our demand for fast, complex software is constantly increasing as we pile hundreds to millions of users onto our servers. This is the performance challenge.

To keep up with this growing demand, we could buy the latest and fastest systems every year, and buy a second one of each for redundancy to protect against failure. But this gets far too expensive. This is the economic challenge. Upgrading hardware is a good choice in some cases, but in most cases, like web services or applications, it is not, because the load is not constant.

In short, your clients want your service to be fast and reliable, and you want to deliver quality service with the highest return on investment. A load balancer helps by solving the availability, performance, and economic problems above.

It is now clear that load balancing refers to the distribution of traffic across servers. Depending on where these servers are located, there are two types of load balancing: local load balancing and global load balancing.


Local load balancing is basic infrastructure in which traffic is distributed across the servers in one data center. Two very common questions about it are: “Do we need it?” and “When do we need it?”

And the answers are: yes, and always, for services such as web applications.


There are two main reasons why local load balancing is requisite:

  • Reason 1: High availability that remains maintainable as we grow. For high availability, we need at least two backend servers, and the local load balancer ensures that if one backend server stops working, its traffic is directed to the other backend server. This works for HTTP servers, mail servers, or any other servers that answer traffic coming in on TCP or that pull items off a backend work queue.
  • Reason 2: To place a control point in front of your backend services. This does not really have anything to do with balancing load or distributing traffic. In fact, even with a service that has a single backend server, we would still want a load balancer, because it provides a control point that lets you change backend servers during deployments, manage how your traffic flows, and add filtering rules. It gives us the ability to change how our service is implemented on the backend servers without exposing those changes to the clients who consume our service on the frontend.
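The control-point idea from Reason 2 can be illustrated with a tiny model: clients only ever see the frontend address, so the backend pool can be swapped during a deployment without clients noticing. The class, names, and addresses below are illustrative assumptions, not any real F5 API.

```python
# Sketch of a "control point": one stable frontend address (the VIP)
# in front of a backend pool that can change at any time.

class ControlPoint:
    def __init__(self, vip, pool):
        self.vip = vip           # the one address clients connect to
        self.pool = list(pool)   # backends, free to change at any time

    def swap_pool(self, new_pool):
        """Roll a deployment by replacing backends behind the same VIP."""
        self.pool = list(new_pool)

lb = ControlPoint("192.0.2.10", ["app-v1-a", "app-v1-b"])
lb.swap_pool(["app-v2-a", "app-v2-b"])   # clients still use 192.0.2.10
```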

Global Load Balancing

Global load balancing is concerned with balancing traffic between multiple data centers. It is naturally more complex than local load balancing.


Global load balancing, also known as global server load balancing (GSLB), is the act of load balancing across globally dispersed servers. It allows traffic to be distributed efficiently across application servers that are spread geographically.


What is Global Server Load Balancing (GSLB)?

Global server load balancing (GSLB) refers to web traffic management and application delivery across public or private clouds and/or multiple data centers in several geographic areas. Client requests are generally sent to the closest application servers to ensure minimal latency and maximum performance, while the application load at each location is typically managed by “local” load balancers.


How Does Global Server Load Balancing Work?

Let’s look at what happens when a client HTTPS request is sent to a website that uses global server load balancing. First, a main server obtains the client’s IP address and examines information about the client’s location. At the same time, it performs health checks to assess the real-time performance and responsiveness of the application servers. Finally, the main server directs the request to the site that is closest geographically or has the shortest response time and lowest latency. All of this happens in the background within a fraction of a second.
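The decision step above can be sketched as: from the data centers that pass their health checks, send the client to the one with the shortest response time. This is a simplified model under stated assumptions; the data center names and latency figures are invented for illustration.

```python
# Sketch of the GSLB routing decision: among healthy data centers,
# choose the one with the lowest measured latency for this client.

def choose_datacenter(latency_ms, healthy):
    """Return the healthy data center with the shortest response time."""
    candidates = {dc: ms for dc, ms in latency_ms.items() if dc in healthy}
    if not candidates:
        raise RuntimeError("no healthy data center available")
    return min(candidates, key=candidates.get)

latency_ms = {"bangalore": 18, "frankfurt": 95, "virginia": 140}
healthy = {"frankfurt", "virginia"}     # bangalore failed its health check
site = choose_datacenter(latency_ms, healthy)   # "frankfurt"
```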


What Are the Advantages of Global Server Load Balancing?

Global Server Load Balancing is usually put into practice to achieve one or more of the following goals for an application:

  • Performance: GSLB ensures the best website or service performance for clients in geographically distributed areas. Directing user requests to the nearest servers minimizes network issues and latency.
  • Customized Content: GSLB allows enterprises to host content on local servers that is customized for relevance to that geographic location and language.
  • Disaster Recovery: Application high availability during disasters minimizes the impact of data center or network failures. For example, if a power outage affects the Bangalore data center, the load balancer redirects client requests to servers hosted at other, geographically distant sites.
  • “Cloud Bursting”: If applications are hosted in hybrid clouds, the GSLB system can “burst”, i.e. scale out, to a public cloud to absorb unusually high load.
  • Maintenance: Data center upgrades and migrations can be performed non-disruptively, as GSLB can simply redirect client requests to servers elsewhere.


The global load balancing logic to apply has to do with the origin of the traffic, not with an equal distribution of that traffic. Let’s look at several ways to provide global load balancing:

  1. Configuring a Content Delivery Network (CDN)

Content Delivery Networks specialize in managing multiple active data centers globally. A major CDN has multiple points of presence around the world and a cutting-edge IP network that relies on Anycast to ensure that the closest data center consistently answers queries. Traffic reaches the CDN based on both performance and geography, and the CDN can then route it to multiple backend data centers based on the preferred load level in each data center or other characteristics.

  2. Configuring DNS

This way of balancing traffic between multiple data centers is the oldest, dating back to the early days of the internet. For example, suppose you have one hostname that returns one of four different IP addresses, one for each of your data centers. DNS load balancing is fairly inexpensive to implement and maintain, but on the other hand it is slow and sometimes unreliable, as many DNS clients hold on to their replies and continue going to the same old target even after the record has been updated.
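The classic DNS approach is round robin: the authoritative server hands out the data-center addresses in rotation. A minimal sketch, with illustrative addresses:

```python
import itertools

# Sketch of round-robin DNS: each lookup for the hostname gets the
# next address in the rotation, one address per data center.

records = ["192.0.2.1", "192.0.2.2", "192.0.2.3", "192.0.2.4"]
rotation = itertools.cycle(records)

def resolve():
    """Return the next address in the rotation."""
    return next(rotation)

answers = [resolve() for _ in range(5)]
# The first four answers cover all four data centers; the fifth
# wraps back around to the first address.
```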


  3. Using Anycast and Border Gateway Protocol (BGP)

This solution has become more common recently. It is similar to the approach a CDN takes, just done in-house. Each data center advertises itself as a possible route for a set of virtual IPs, and incoming traffic takes the shortest path to those IPs, arriving at one of the data centers. That data center serves the traffic directly by answering for those IPs, instead of passing the data through to another site on the backend.

Failover is much faster than with DNS, and we have more control over things like how we spread the traffic among geographic locations. Because its topology matches the physical structure of the internet, this strategy delivers better performance by picking a path close to both a data center and the traffic’s origin. However, it requires solid networking expertise to set up and manage.

Stay tuned: in part two of this series we’ll take a close look at the F5 Load Balancer and how it works, and discuss the different features and services it provides.

So be sure to also read my next post on the F5 Load Balancer.


© Copyright 2019 | Sevenmentor Pvt Ltd.

Author Name: Sumaiyya Suhail Bagwan
Department Name: Networking
Designation: Technical Trainer
