Evolution of the Load Balancer
The Journey from DNS Round Robin to IP Clustering to Server Load Balancer to Application Delivery Controller
The concept of load balancing was born during the dot-com boom of the last millennium. In the early days of the commercial internet, would-be dot-com millionaires faced a serious cost and technology problem in their business plans. A mainframe was generally out of budget for most start-ups; all they could afford was a PC-based, off-the-shelf server. Such a server could not handle large amounts of traffic, and even if it could, it posed a great risk to profitability: if the server went down, the business went offline and could even face the danger of going out of business altogether. It was indeed a serious issue, but as history has proved time and again, necessity is the mother of invention, and this gave birth to the concept called load balancing.
The concept of load balancing has witnessed an evolution over the last ten years no less remarkable than the evolution of modern man from his early ancestors. Starting with DNS Round Robin in the 90s, it evolved into IP Clustering and the Server Load Balancer, and then into the Application Delivery Controller (ADC), the latest technology in use by organizations worldwide today.
DNS Round Robin:
The Domain Name System (DNS) translates human-readable names (www.viaedge.com) into machine-recognized IP addresses. DNS also provided a way for each name-resolution request to be answered with multiple IP addresses, returned in a different order each time. This solution was simple and provided the basic characteristics customers were looking for, distributing users sequentially across multiple physical machines using the name as the virtualization point. As the service needed to grow, all the business owner had to do was add a new server, include its IP address in the DNS records, and voila, increased capacity!
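As a rough illustration, here is a minimal Python sketch of the answer-rotation idea; the domain and addresses are purely illustrative, and a real DNS server rotates its answers in much more configurable ways:

```python
from itertools import count

class RoundRobinDNS:
    """Toy model of DNS round robin: every query for a name returns the
    same address list, rotated by one position, so sequential clients
    (which naively take the first entry) land on successive servers."""

    def __init__(self, records):
        # records: {name: [ip, ip, ...]} -- illustrative data only.
        self._records = dict(records)
        self._counter = count()

    def resolve(self, name):
        ips = self._records[name]
        shift = next(self._counter) % len(ips)
        # Rotate the answer list; a naive client picks ips[0].
        return ips[shift:] + ips[:shift]

dns = RoundRobinDNS({"www.viaedge.com": ["10.0.0.1", "10.0.0.2", "10.0.0.3"]})
print(dns.resolve("www.viaedge.com")[0])  # 10.0.0.1
print(dns.resolve("www.viaedge.com")[0])  # 10.0.0.2
```

Note what is missing: nothing checks whether 10.0.0.2 is actually up, and nothing keeps a returning client on the server it used before. Those are exactly the limitations listed below.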
However, this solution could not address the issue of high availability. Its serious limitations were as follows:
· Clients cached name resolutions. Instead of asking for the server's IP again, they went back to the server they had used earlier, regardless of whether that server was still working. Some mechanism was needed to ensure that clients did not bypass the load balancing because of caching.
· DNS had no way of knowing whether the servers listed were working. The system needed the capability to automatically detect malfunctioning servers and remove them.
· DNS provided poor load balancing with uncontrolled distribution, highlighting the striking difference between "load distribution" and "load balancing".
· Persistence was a major issue that DNS Round Robin had no capability to solve.
IP Clustering:
The clustering solution evolved in response to these limitations of DNS technology. All the servers in a cluster listened on a "cluster IP" in addition to their own physical IP addresses. The scalability of this solution was readily apparent: all you had to do was build a new server and add it to the cluster, and the capacity of your application was enhanced.
Availability was dramatically increased with this technology. Because the clustered members were in constant communication with each other, and because the application developers could use their extensive application knowledge to tell when a server was running correctly, this virtually eliminated the chance that users would ever reach a server unable to service their request. Predictability was also enhanced by these solutions. Since the application designers knew when and why users needed to return to the same server instead of being load balanced, they could embed logic that ensured users stayed persistent for as long as needed.
This solution, too, was not free of constraints and had some serious limitations:
· It had potential limitations on true scalability.
· It was reliant on the application vendors to develop and maintain.
Server Load Balancer, or Network-Based Load Balancing Hardware:
Further research into load balancing technology gave birth to network based load balancing which was the foundation stone for the application delivery controller, the latest technology in this domain. These load balancing appliances were not only application neutral but also resided outside of the application server. The load balancer presented virtual servers to the outside world. Each virtual server pointed to a cluster of services that resided on one or more physical hosts.
The Server Load Balancer (SLB), now regarded as a simple early-generation load balancer, is basically a Layer 4 balancing technology: it directs traffic based on MAC/IP address and TCP port (i.e., L2-L4 information) and is now a prerequisite for all load-balancing solutions. It can be treated as the building block of the entire balancing ecosystem. Functionality such as health monitoring, session persistence, and network integration are the minimum requirements of Layer 4 load balancing.
The advent of the network-based load balancer ushered in a whole new era in application architecture. The load balancer could now control the connections to each server. It could provide at least basic load balancing services to nearly every application in a uniform, consistent manner, finally creating a truly virtualized service entry point unique to the application it served. Scalability with this solution was limited only by the throughput of the load balancing equipment and the networks attached to it. Network-based load balancing hardware enabled business owners to provide high availability for all their applications, not merely the select few with built-in load balancing. The added intelligence to create controlled load distribution (as opposed to the uncontrolled distribution of dynamic DNS) allowed business owners to finally use load distribution in a positive way, sending more connections to the bigger servers and fewer to the smaller ones.
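The Layer 4 behavior described above can be sketched in a few lines; the backend addresses and weights below are hypothetical, and weighted rendezvous hashing stands in for whatever selection algorithm a real appliance uses:

```python
import hashlib
import math

class Layer4Balancer:
    """Toy virtual server fronting a pool of backends, illustrating the
    three minimum Layer 4 features named above: health monitoring,
    controlled weighted distribution, and source-IP persistence."""

    def __init__(self, pool):
        # pool: {backend_address: weight}; bigger servers get bigger weights.
        self.pool = dict(pool)
        self.healthy = set(pool)

    def mark_down(self, backend):
        # A health monitor would call this when a backend fails its check.
        self.healthy.discard(backend)

    def mark_up(self, backend):
        if backend in self.pool:
            self.healthy.add(backend)

    def pick(self, client_ip):
        """Same client IP -> same backend (persistence) for as long as
        that backend stays healthy; heavier weights win more clients."""
        if not self.healthy:
            raise RuntimeError("no healthy backends")

        def score(backend):
            digest = hashlib.md5(f"{client_ip}|{backend}".encode()).hexdigest()
            frac = (int(digest, 16) + 1) / (2**128 + 2)  # uniform in (0, 1)
            return -self.pool[backend] / math.log(frac)

        return max(self.healthy, key=score)

lb = Layer4Balancer({"10.0.0.1": 4, "10.0.0.2": 1})  # hypothetical pool
first = lb.pick("198.51.100.7")
assert first == lb.pick("198.51.100.7")  # persistent across requests
lb.mark_down(first)                      # health check fails the backend
assert lb.pick("198.51.100.7") != first  # client is re-homed to a healthy one
```

Because the hash is deterministic, persistence falls out of the selection itself, and a failed backend only re-homes the clients that were mapped to it.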
Application Delivery Controllers:
The Application Delivery Controller (ADC) can be imagined as load balancer technology at its zenith, the maximum limit to which the load balancer can be stretched. Although we may give more weight to security, performance, and availability, load balancing technology is critical to delivering all of these attributes. The intent of the ADC is to have a single device that incorporates not just a core set of load-balancing capabilities but a comprehensive set of application performance and security services as well.
Be it SSL offload, centralized authentication, or an application-fluent firewall, one thing remains constant: the load balancer is the aggregation point of virtualization across all applications. The ADC is built on the concept of Layer 7 load balancing, combining application-aware Layer 7 switching with the Layer 4 load balancing (popularly known as server load balancing) described above. Layer 7 switching takes its name from the OSI model, indicating that the device switches requests based on Layer 7 (application) data. Layer 7 switching is also known as "request switching", "application switching", and "content-based routing".
Unlike server load balancing, Layer 7 switching does not require that all servers in the pool (farm/cluster) have the same content. In fact, Layer 7 switching expects that servers will have different content, recognizing the need to inspect requests more deeply before determining where they should be directed. Layer 7 switches can direct requests based on the URI, host, HTTP headers, and anything else in the application message. The salient features of Layer 7 load balancing are:
· This allows the architect to design an application delivery network that is highly optimized to serve specific types of content but is also highly available.
· This allows the additional features offered by application delivery controllers to be applied based on content type, which further improves performance by executing only those policies applicable to the content.
· This also allows for increased efficiency of the application infrastructure.
· This allows you to get better efficiency out of your servers by grouping them so that some handle transactions while others act as massive storage systems serving up static pages, or are optimized for downloading streaming video. For instance, URL parsing involves looking at the URL that appears just after the HTTP GET header and sending the request to one of a group of servers based on that address. You could also use the extension (.asp, .gif, etc.) to point traffic at servers that have been optimized for serving that type of traffic.
· This allows you to make sure that certain users are directed to higher-powered servers, for example if they are premium customers or are on your site to place an order rather than just browse. A cookie value (which can be more or less any string) might indicate that a visitor has used your site before, so that you can welcome them by name on the next visit, or, if it is a service they have previously subscribed to, send them to the servers providing the services they have paid for.
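The content-based routing described in these points can be made concrete with a small sketch; the pool names, file extensions, and "tier" cookie below are illustrative choices, not any real product's configuration:

```python
class Layer7Switch:
    """Toy request switch: inspects the URL path and cookies (Layer 7
    data) and returns the name of the server pool that should serve
    the request, echoing the URL-parsing and cookie examples above."""

    STATIC_EXT = (".gif", ".jpg", ".css", ".html")
    VIDEO_EXT = (".mp4", ".m3u8")

    def route(self, path, cookies=None):
        cookies = cookies or {}
        if cookies.get("tier") == "premium":
            return "premium"    # higher-powered servers for paying users
        if path.endswith(self.STATIC_EXT):
            return "static"     # storage-heavy servers for static pages
        if path.endswith(self.VIDEO_EXT):
            return "video"      # servers optimized for streaming downloads
        return "app"            # .asp and other dynamic transactions

sw = Layer7Switch()
print(sw.route("/images/logo.gif"))                 # static
print(sw.route("/order.asp"))                       # app
print(sw.route("/order.asp", {"tier": "premium"}))  # premium
```

Note that each pool can hold entirely different content, which is exactly what distinguishes Layer 7 switching from the uniform pools of server load balancing.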
Scalability, high availability, and persistence are attributes the ADC shares with the load balancer. Performance enhancement is another obvious extension of the load balancing concept: ADCs often include caching, compression, and even rate-shaping technology to further increase the overall performance and delivery of applications.
So what lies in the future?
Candidly speaking, it is hard for anyone to say where we go from here, but an educated guess can be made at this point. Traffic demand gave birth to the very concept of load balancing, which grew and evolved to the level of the ADC. Changing user needs, and the technology required to meet them, will surely take us to an environment where the technology is more capable of the following:
· Integration of Network Access Control.
· Some innovative way of application caching/compression.
· Application of business rules to the management and control of application delivery.
· Reduction in number of devices.
Let us wait and watch for new surprises and revolutionary breakthroughs in technology. Until some innovative, ground-breaking technology emerges, the best we can do is concentrate on creating robust ADCs for the betterment of the existing ecosystem.