
Randomized load balancing and hashing

In
computing, load balancing improves the distribution of workloads across
multiple computing resources, such as computers, a computer cluster, network
links, central processing units, or disk drives. Load balancing aims to
optimize resource use, maximize throughput, minimize response time, and avoid
overload of any single resource. Using multiple components with load balancing
instead of a single component may increase reliability and availability through
redundancy. Load balancing usually involves dedicated software or hardware,
such as a multilayer switch or a Domain Name System server process.


A server program that is installed on a target system has limited resources. These resources include system memory, hard disk space, and processor speed. Because its capacity is limited, a server can handle only a certain number of clients. With more clients, a server becomes overloaded, which may lead to degraded performance, hangs, and crashes. It is therefore crucial to balance the load on the server, and this can be achieved by keeping copies of the server and distributing the load among them.

Load balancing is the process of making a group of servers participate in the same service and do the same work. The general purposes of load balancing are to increase availability, improve throughput and reliability, maintain stability, optimize resource utilization, and provide fault tolerance. As the number of servers grows, the risk of a failure somewhere increases, and such failures must be handled carefully. The ability to maintain unaffected service during any number of simultaneous failures is termed high availability.

Load balancing is also provided by a few operating systems. Microsoft’s Network Load Balancing (NLB) is a software-based solution that allows you to effortlessly cluster multiple machines [3]. There are a variety of open-source load balancers and load-balancing software packages available for Linux, such as Linux Virtual Server, Ultra Monkey, Red Hat Cluster Suite, and High Availability Linux (Linux-HA), which can be used efficiently with most network services, including FTP, HTTP, DNS, SMTP, POP/IMAP, VoIP, etc.

LOAD BALANCING CONCEPT

“Load balancing” means an even distribution of the total load amongst all serving entities. Load balancing is essential in distributed computing systems to improve the quality of service by managing customer loads that change over time. The request demands of incoming requests are optimally distributed among available system resources to avoid resource bottlenecks as well as to fully utilize available resources [5]. Load balancing also provides horizontal scaling, i.e., adding computing resources to address increased loads.

 

In order to balance requests across resources, it is important to recognize a few major goals of load balancing algorithms:

a) Cost effectiveness: the primary aim is to achieve an overall improvement in system performance at a reasonable cost.

b) Scalability and flexibility: the distributed system in which the algorithm is implemented may change in size or topology, so the algorithm must be scalable and flexible enough to allow such changes to be handled easily.

c) Priority: resources or jobs must be prioritized beforehand by the algorithm itself, so that important, high-priority jobs receive better service, rather than giving equal service to all jobs regardless of their origin.

 

Types of load
balancing

There are two major types of load balancing:

1.    Static load
balancing

2.    Dynamic load
balancing

 

Static load
balancing

In static load balancing techniques, the performance of the server nodes is determined at the initial stage of the process. Based on their performance, the workload is then distributed by the master node. The slave nodes execute their allocated work and send their results to the master node. A job is always executed on the server node to which it is assigned; that is, static load balancing techniques are non-preemptive. The main intention of static load balancing is to reduce the overall execution time of a concurrent program while minimizing communication delays. A usual disadvantage of all static methods is that the final selection of a server node for process allocation is made when the process is initially created, and it cannot be changed during process execution to reflect variations in the system load.

Dynamic load
balancing

Dynamic load balancing techniques depend on recent system load information and determine the job assignments to server nodes at run time. In the dynamic approach, load balancing decisions are based on the current state of the system, so workloads are allowed to shift dynamically from an overloaded node to an under-loaded node to get a faster response from the server nodes. This ability to respond to variations in the system is the main advantage of dynamic load balancing. However, since the load balancer has to continuously monitor the current load on all the nodes, monitoring becomes an extra overhead, as it consumes CPU cycles. A proper decision must therefore be made about when the load balancer invokes the monitoring operation.

Applications

1.     
Load Balancing for JDBC Connections

·        Load balancing of JDBC connections requires the use of a multi data source that is configured for load balancing. Load balancing support is an option you can choose when configuring a multi data source.

·        A load-balancing multi data source balances the load among the data sources it contains. A multi data source holds an ordered list of data sources. If you do not configure the multi data source for load balancing, it always attempts to obtain a connection from the first data source in the list. In a load-balancing multi data source, the contained data sources are accessed using a round-robin scheme: on each successive client request for a multi data source connection, the list is rotated, so the first pool tapped cycles around the list.
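The rotation described above can be sketched in a few lines of Python. This is an illustrative model only; the class name and method are hypothetical, not any vendor's actual JDBC API:

```python
import itertools

class MultiDataSource:
    """Hypothetical model of a multi data source (not a real JDBC API).

    With load balancing on, connections rotate round-robin through the
    ordered list of data sources; with it off, the first data source in
    the list is always tried first."""

    def __init__(self, data_sources, load_balancing=True):
        self.data_sources = list(data_sources)
        self.load_balancing = load_balancing
        self._rotation = itertools.cycle(self.data_sources)

    def get_connection(self):
        if not self.load_balancing:
            return self.data_sources[0]  # always try the first pool
        return next(self._rotation)      # rotate the list per request
```

With data sources ["ds1", "ds2", "ds3"] and load balancing enabled, successive calls return ds1, ds2, ds3, ds1, and so on.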

2.     
Network 
load balancing

Network Load
Balancing has several potential use cases and advantages. By distributing
network traffic across multiple servers or virtual machines, traffic can be
processed faster than in a scenario in which all traffic flowed through a
single server. The feature can also enable an organization to quickly scale up
a server application (such as a Web server) by adding hosts and then
distributing the application’s traffic among the new hosts. Similarly, if
demand decreases, servers can be taken offline and the feature will balance traffic
among the remaining hosts. Network Load Balancing can also ensure network
traffic is re-routed to remaining hosts if one or more hosts within the cluster
fail unexpectedly. A Network Load Balancing cluster can scale up to 32 servers.

3.      Load balancing on a router

Network load
balancing (commonly referred to as dual-WAN routing or multihoming) is the
ability to balance traffic across two WAN links without using complex routing
protocols like BGP.

This capability
balances network sessions like Web, email, etc. over multiple connections in
order to spread out the amount of bandwidth used by each LAN user, thus
increasing the total amount of bandwidth available. For example, a user has a
single WAN connection to the Internet operating at 1.5Mbit/s. They wish to add
a second broadband (cable, DSL, wireless, etc.) connection operating at
2.5Mbit/s. This would provide them with a total of 4Mbit/s of bandwidth when
balancing sessions.

Session
balancing does just that: it balances sessions across each WAN link. When Web
browsers connect to the Internet, they commonly open multiple sessions, one for
the text, another for an image, another for some other image, etc. Each of
these sessions can be balanced across the available connections. An FTP
application only uses a single session so it is not balanced; however if a
secondary FTP connection is made, then it may be balanced so that on the whole,
traffic is evenly distributed across the various connections and thus provides
an overall increase in throughput.
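As a concrete sketch of session balancing, the following hypothetical Python function uses the 1.5 and 2.5 Mbit/s links from the example above and assigns each new session to the link with the fewest sessions per Mbit/s of capacity, so the faster link carries proportionally more sessions:

```python
def pick_wan_link(links, active):
    """Choose a WAN link for a new session.

    links:  {link_name: capacity in Mbit/s}, e.g. {"dsl": 1.5, "cable": 2.5}
    active: {link_name: number of sessions currently on that link}
    Picks the link with the lowest sessions-per-Mbit ratio."""
    return min(links, key=lambda name: active.get(name, 0) / links[name])
```

Over many sessions, the 2.5 Mbit/s link ends up with roughly 5/3 as many sessions as the 1.5 Mbit/s link, approximating the combined 4 Mbit/s of usable bandwidth.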

Additionally,
network load balancing is commonly used to provide network redundancy so that
in the event of a WAN link outage, access to network resources is still
available via the secondary link(s). Redundancy is a key requirement for business continuity plans and is generally used in conjunction with critical applications like VPNs and VoIP.

Finally, most
network load balancing systems also incorporate the ability to balance both
outbound and inbound traffic. Inbound load balancing is generally performed via
dynamic DNS which can either be built into the system, or provided by an
external service or system. Having the dynamic DNS service within the system is
generally thought to be better from a cost savings and overall control point of
view.

4.      Load balancing in cloud computing

Cloud load
balancing is the process of distributing workloads and computing resources in a
cloud computing environment. Load balancing allows enterprises to manage
application or workload demands by allocating resources among multiple
computers, networks or servers. Cloud load balancing involves hosting the
distribution of workload traffic and demands that reside over the Internet.

Cloud load
balancing helps enterprises achieve high performance levels for potentially
lower costs than traditional on-premises load balancing technology. Cloud load
balancing takes advantage of the cloud’s scalability and agility to meet
rerouted workload demands and to improve overall availability. In addition to
workload and traffic distribution, cloud load balancing technology can provide
health checks for cloud applications.

To avoid noisy
neighbors and poor application performance in a public cloud environment, cloud
load balancing uses virtual local area networks (VLANs), which group network
nodes in various geographic locations to communicate as if they were in the
same physical location.

Application chosen: Cloud computing

Best solution: Randomised load balancing

Randomized load balancing is a type of static load balancing. In a random allocation method, client requests, for example HTTP requests, are assigned to any server picked randomly from the group of available servers. In such a case, one of the servers may be assigned more requests to process while the other servers sit idle. However, on average, each server gets its share of the client load due to the random selection.

The random method of load balancing applies only to EJB and RMI object clustering. In random load balancing, requests are routed to servers at random. Random load balancing is recommended only for homogeneous cluster deployments, where each server instance runs on a similarly configured machine. A random allocation of requests does not account for differences in processing power among the machines on which the server instances run. If a machine hosting servers in a cluster has significantly less processing power than the other machines in the cluster, random load balancing will give the less powerful machine as many requests as it gives the more powerful machines.

Random load balancing distributes requests
evenly across server instances in the cluster, increasingly so as the
cumulative number of requests increases. Over a small number of requests the
load may not be balanced exactly evenly.

The randomized algorithm is static in nature. In this algorithm [25], a process can be handled by a particular node n with a probability p. The process allocation order is maintained for each processor independently of allocation from remote processors. This algorithm works well when the processes are equally loaded; however, problems arise when loads have different computational complexities. The randomized algorithm does not maintain a deterministic approach. It works well when the Round Robin algorithm generates overhead for the process queue.

 

Disadvantages of random load balancing

Disadvantages of random load balancing include
the slight processing overhead incurred by generating a random number for each
request, and the possibility that the load may not be evenly balanced over a
small number of requests.

The algorithm

As its name implies, this algorithm matches clients and servers at random, i.e., using an underlying random number generator. In cases where the load balancer receives a large number of requests, a random algorithm is able to distribute the requests evenly across the nodes. So, like Round Robin, the random algorithm is sufficient for clusters consisting of nodes with similar configurations (CPU, RAM, etc.). Using a random number generator, the load balancer directs connections randomly to the web servers behind it. This type of algorithm may be used if the web servers have similar or identical hardware specifications. If connection monitoring is not enabled, the load balancer will continue sending traffic to failed web servers. Load balancers use a number of algorithms to direct traffic; some of the most common are listed below.
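A minimal Python sketch of the random method follows. The optional `healthy` set models the connection monitoring mentioned above (without it, failed servers keep receiving traffic); server names are illustrative:

```python
import random

def random_balance(servers, healthy=None, rng=random):
    """Pick a backend at random, as the Random algorithm does.

    `healthy` is an optional set of servers known to be up; when it is
    None (no connection monitoring), every server stays in the pool,
    including failed ones."""
    pool = [s for s in servers if healthy is None or s in healthy]
    if not pool:
        raise RuntimeError("no available servers")
    return rng.choice(pool)
```

Over a large number of requests the counts per server come out roughly even, matching the observation above that random balancing evens out only as the cumulative request count grows.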

Other Load Balancing Algorithms and their comparisons

Each algorithm below is described by how it works, an example, and its main limitation.

Round robin

Working: Using a circular queue, the load balancer walks through it, sending one request per server. Like the random method, this works best when the web servers have similar or identical hardware specifications; the Round Robin algorithm is best for clusters consisting of servers with identical specs.

Example: Say you have 2 servers waiting for requests behind your load balancer. When the first request arrives, the load balancer forwards it to the 1st server. When the 2nd request arrives (presumably from a different client), it is forwarded to the 2nd server. Because the 2nd server is the last in this cluster, the next request (i.e., the 3rd) is forwarded back to the 1st server, the 4th back to the 2nd server, and so on, in a cyclical fashion.

Limitation: The method is very simple, but it won't do well in certain scenarios. For example, what if Server 1 has more CPU, RAM, and other resources than Server 2? Server 1 should be able to handle a higher workload, but a load balancer running the round robin algorithm cannot treat the two servers accordingly: in spite of their disproportionate capacities, it will still distribute requests equally. As a result, Server 2 can get overloaded faster and may even go down.

Weighted Round Robin

Working: Web servers (or groups of web servers) are assigned a static weight. For instance, new web servers that can handle more load are assigned a higher weight, and older web servers a lower weight. The load balancer sends more traffic to the servers with the higher weight. Weighted Round Robin is similar to Round Robin in that requests are still assigned to the nodes cyclically, albeit with a twist: the node with the higher specs is apportioned a greater number of requests.

Example: How does the load balancer know which node has the higher capacity? You tell it beforehand: when you set up the load balancer, you assign a weight to each node, and the node with the higher specs should of course be given the higher weight. You usually specify weights in proportion to actual capacities. For example, if Server 1's capacity is 5x that of Server 2, you can assign Server 1 a weight of 5 and Server 2 a weight of 1.

Limitation: Even if two servers in a cluster have exactly the same specs, one server can still get overloaded considerably faster than the other. One possible reason is that clients connecting to Server 2 stay connected much longer than those connecting to Server 1. This can cause the total current connections on Server 2 to pile up, while those of Server 1 (with clients connecting and disconnecting over shorter times) remain virtually the same, so Server 2's resources can run out faster; picture clients 1 and 3 already disconnected while clients 2, 4, 5, and 6 are still connected.
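One naive way to realize the weighting in code is to repeat each server `weight` times per cycle, as in this Python sketch (real balancers usually interleave the schedule more smoothly, so this is just one reading):

```python
def weighted_round_robin(servers):
    """Yield servers cyclically, each repeated `weight` times per cycle.

    servers: list of (name, weight) pairs, e.g. [("server1", 5), ("server2", 1)],
    mirroring the 5x-capacity example above."""
    schedule = [name for name, weight in servers for _ in range(weight)]
    while True:
        for name in schedule:
            yield name
```

With weights 5 and 1, every six consecutive requests send five to server1 and one to server2.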

Least Connections

Working: Keeps track of the connections to the web servers and prefers to send new connections to the server with the least number of connections. When a client attempts to connect, the load balancer determines which server currently has the fewest connections and assigns the new connection to that server.

Example: Continuing the last example, if client 6 attempts to connect after clients 1 and 3 have disconnected but 2 and 4 are still connected, the load balancer will assign client 6 to Server 1 instead of Server 2.

Limitation: Again, the problem of weights: the plain algorithm does not account for differing server capacities.

Weighted Least Connections

Working: The Weighted Least Connections algorithm does to Least Connections what Weighted Round Robin does to Round Robin: it introduces a “weight” component based on the respective capacity of each server. As with Weighted Round Robin, you specify each server's weight beforehand.

Example: A load balancer that implements the Weighted Least Connections algorithm takes two things into consideration: the weight/capacity of each server, and the number of clients currently connected to each server.
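The selection rule can be sketched as a one-line ratio test in Python; both dictionaries are hypothetical inputs, with the weights set beforehand as described:

```python
def weighted_least_connections(weights, connections):
    """Pick the server with the lowest connections-to-weight ratio.

    weights:     {server: static capacity weight, assigned beforehand}
    connections: {server: current number of connected clients}
    With all weights equal, this reduces to plain Least Connections."""
    return min(weights, key=lambda s: connections.get(s, 0) / weights[s])
```

For example, if server 1 has weight 5 and four connections while server 2 has weight 1 and one connection, the ratios are 0.8 versus 1.0, so the new client still goes to the more capable server 1.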
 

 

 

Plain Programmer
Description:

The system builds an array of the servers being load balanced, and uses the random number generator to determine who gets the next connection… Far from an elegant solution, and most often found in large software packages that have thrown load balancing in as a feature.

 

The idea of the random algorithm is to randomly assign the selected jobs to the available virtual machines (VMs). The algorithm does not take into consideration the status of each VM, which may be under heavy or low load. Hence, it may select a VM under heavy load, and the job then requires a long waiting time before service is obtained. The complexity of this algorithm is quite low, as it does not need any overhead or preprocessing.

 

Figure 1: Process of random algorithm

Steps for RANDOM ALGORITHM

The algorithm:

index = random() * (NoVM - 1)    (1)

where:

index = the index of the selected VM
random() = a function that returns a random value between 0 and 1
NoVM = the total number of available VMs
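Equation (1) can be read as the following Python sketch. The formula leaves the conversion to an integer index implicit, so rounding to the nearest index is an assumption here:

```python
import random

def pick_vm_index(no_vm, rng=random.random):
    """index = random() * (NoVM - 1), per equation (1).

    rng() returns a float in [0, 1); rounding yields an integer
    index in the range 0 .. NoVM - 1."""
    return round(rng() * (no_vm - 1))
```

Note that the selection ignores VM load entirely, which is exactly the weakness discussed above.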


Problems

This attempt to load-balance can fail in two ways. First, the typical “random” partition of the address space among nodes is not completely balanced: some nodes end up with a larger portion of the addresses and thus receive a larger portion of the randomly distributed items. Second, some applications may preclude the randomization of data items’ addresses. For example, to support range searching in a database application, the items may need to be placed in a specific order, or even at specific addresses, on the ring. In such cases, we may find the items unevenly distributed in address space, meaning that balancing the address space among nodes is not adequate to balance the distribution of items among nodes.
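The first failure mode is easy to demonstrate in simulation: place n nodes at random points on a unit ring and measure the arc (share of address space) each node owns. A minimal Python sketch:

```python
import random

def arc_sizes(n, rng=random.random):
    """Partition a unit ring among n randomly placed nodes and
    return the fraction of address space each node owns."""
    points = sorted(rng() for _ in range(n))
    # each node owns the arc from its point to the next point,
    # wrapping around the ring at 1.0
    return [(points[(i + 1) % n] - points[i]) % 1.0 for i in range(n)]
```

With 64 nodes, the largest arc typically comes out several times the even 1/64 share, so the node owning it receives a correspondingly larger portion of randomly distributed items.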

Tight bounds of
randomized load balancing

As
an application, we consider the following problem. Given a fully connected
graph of n nodes, where each node needs to send and receive up to n messages,
and in each round each node may send one message over each link, deliver all
messages as quickly as possible to their destinations. We give a simple and
robust algorithm of time complexity O(log* n) for this task and provide a generalization
to the case where all nodes initially hold arbitrary sets of messages. A less
practical algorithm terminates within asymptotically optimal O(1) rounds. All
these bounds hold with high probability.

Solution proposed

Hybrid
algorithm.

 

The hybrid algorithm, as given in Figure 3.3, is a load balancing algorithm used by the datacenter to distribute received tasks efficiently over the virtual machines under a normal workload, by finding the best VM among the group of VMs to assign the load in a cloud computing environment with heterogeneous processor power. The hybrid algorithm combines the random and greedy algorithms: the random algorithm randomly selects a VM to process the received task and needs no complex computation to make a decision, but it does not select the best VM; the greedy algorithm, on the other hand, selects the best VM to handle the received task, but the selection process needs some complex computation to find that VM. The hybrid algorithm considers the current resource information and the CPU capacity factor. It selects k nodes (VMs) randomly and obtains the current load of each selected VM. It then chooses the VM with the least current load and returns that VM's ID to the datacenter. The datacenter assigns the load to the selected VM and updates that VM's entry in the table of current loads. Finally, when the VM finishes processing the request, it informs the datacenter, which updates the VM's current load value.

The hybrid load balancing algorithm thus uses randomization and greediness to distribute the load over VMs and achieve efficient performance in a heterogeneous cloud computing environment. The algorithm depends on the current resource allocation count.
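The hybrid scheme described above (sample k VMs at random, then greedily take the least loaded of the sample) is straightforward to model in Python; the VM names and the in-memory dictionary stand in for the datacenter's table of current loads:

```python
import random

def hybrid_select(vm_loads, k=2, rng=random):
    """Hybrid selection: sample k VMs at random, then greedily
    pick the least-loaded VM of the sample.

    vm_loads: {vm_id: current load count kept by the datacenter}.
    k trades off randomness (cheap) against greediness (better VM)."""
    candidates = rng.sample(list(vm_loads), k)
    return min(candidates, key=lambda vm: vm_loads[vm])

def assign_task(vm_loads, k=2, rng=random):
    """Datacenter side: pick a VM and update its current-load entry."""
    vm = hybrid_select(vm_loads, k, rng)
    vm_loads[vm] += 1
    return vm
```

Even k = 2 keeps the loads far better balanced than pure random selection, while only ever probing two VMs per request instead of all of them.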


Load balancing and
hashing

Load balancing uses a number of algorithms, called load balancing methods, to determine how to distribute the load among the servers. When a load balancer is configured to use the hash method, it computes a hash value and then sends the request to a server based on that value. Hash load balancing is similar to persistence-based load balancing: it ensures that connections within existing user sessions are consistently routed to the same back-end servers, even when the list of available servers is modified during the user’s session.

The load balancer computes two hash values using:

·        The back-end server IP address and port (X).

·        One of the following request attributes: the incoming URL, domain name, destination IP, source IP, source & destination IP, source IP & source port, call ID, or token (Y).

The load balancer then computes a new hash value (Z) based on (X) and (Y), and (Z) is stored in cache. The load balancer forwards the request to the server with the highest hash value, using the computed value (Z). Subsequent requests with the same (cached) hash value are sent to the same server.

The following example shows how a load balancer works using the hash method. The load balancer delivers the request based on the value of the hash (Z) as follows:

·        Server-1 receives the first request.

·        If Server-1 is down, the hash value is calculated again.

·        The load balancer selects the server with the highest hash value and forwards the request.

Note: If the load balancer fails to select a service by using a hash value, it uses the least connections method to select the server.
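The highest-hash selection described above can be sketched as follows. SHA-1 over a pipe-joined string is an illustrative choice of hash, not a particular vendor's implementation:

```python
import hashlib

def hash_value(*parts):
    """Stable combined hash of the given string parts (SHA-1 for the sketch)."""
    digest = hashlib.sha1("|".join(parts).encode()).hexdigest()
    return int(digest, 16)

def hash_method(servers, request_key, down=()):
    """Pick the live server whose combined hash (Z) is highest.

    servers:     back-end 'ip:port' strings (the X input)
    request_key: e.g. the source IP or URL (the Y input)
    down:        servers currently failed, excluded before recomputing"""
    live = [s for s in servers if s not in down]
    return max(live, key=lambda s: hash_value(s, request_key))
```

Because the winning server changes only for keys whose previous winner went down, most cached key-to-server mappings survive a server failure.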

Whether it’s load balancing XenApp Web Interface, iPhone/iPad resources, websites, Linux servers, Windows servers, e-commerce sites, or enterprise applications, NetScaler is the perfect choice. NetScaler, available as a network device or as a virtualized appliance, is a web application delivery appliance that accelerates internal and externally-facing web applications up to 5x, optimizes application availability through advanced L4-7 traffic management, increases security with an integrated application firewall, and substantially lowers costs by increasing web server efficiency.

 

Citrix
NetScaler is a comprehensive system deployed in front of web servers that
combines high-speed load balancing and content switching with application
acceleration, highly-efficient data compression, static and dynamic content
caching, SSL acceleration, network optimization, application performance
monitoring, and robust application security. Available as a virtual machine,
the NetScaler is perfect for load balancing virtual servers in the datacenter or
in the cloud.

 
