UNIT 6 LOAD BALANCING

Structure

6.0 Introduction
6.1 Objectives
6.2 Load Balancing and its Importance
6.2.1 Importance of Load Balancing
6.2.2 Goals of Load Balancing in Cloud Computing
6.2.3 How a Load Balancer Works
6.3 Types of Load Balancers
6.3.1 Types of Load Balancers based on the Functionality
6.3.2 Types of Load Balancers based on the Configuration
6.4 Load Balancing Algorithms – Static and Dynamic
6.4.1 Static Load Balancing Algorithms
6.4.2 Dynamic Load Balancing Algorithms
6.5 Load Balancing as a Service (LBaaS)
6.5.1 OpenStack LBaaS
6.6 Summary
6.7 Solutions/Answers
6.8 Further Readings

6.0 INTRODUCTION

In the earlier unit, we studied Cloud Elasticity and Scaling, which are very
important characteristics of a cloud. In this unit, we will focus on another
important aspect of cloud computing, namely load balancing.

Load balancing is the strategic distribution of incoming network traffic across
multiple servers or resources in cloud computing. It acts as the traffic cop,
managing the flow of requests among various servers to optimize performance,
prevent overloading, and ensure high availability. This critical function is the
backbone of a responsive and robust cloud infrastructure. By evenly
distributing workloads, load balancing minimizes the risk of any single server
becoming overwhelmed, reducing latency and preventing potential downtime.
In the cloud, where scalability and reliability are paramount, load balancing
allows for dynamic resource allocation, ensuring that computing resources are
used efficiently while maintaining consistent performance levels. It plays a
pivotal role in maintaining seamless operations, maximizing resource
utilization, and providing users with uninterrupted access to applications and
services hosted in the cloud.

In this unit, you will study the importance of load balancing, goals of load
balancing, types of load balancers, load balancing algorithms and load
balancing as a service.

Resource Provisioning, Load Balancing and Security

6.1 OBJECTIVES

After going through this unit, you shall be able to:

• understand the load balancing concept and its importance;
• describe how a load balancer works;
• list and explain types of load balancers based on functionality and
configuration;
• discuss various types of static load balancing algorithms;
• discuss various types of dynamic load balancing algorithms; and
• explain Load Balancing-as-a-Service

6.2 LOAD BALANCING AND ITS IMPORTANCE


Load balancing in cloud computing distributes traffic and workloads to ensure
that no single server or machine is under-loaded, overloaded, or idle. Load
balancing optimizes various constrained parameters such as execution
time, response time, and system stability to improve overall cloud
performance. Load balancing architecture in cloud computing consists of a
load balancer that sits between servers and client devices to manage traffic.

As shown in Fig 1, load balancing in cloud computing distributes
traffic, workloads and computing resources evenly throughout a cloud
environment to deliver greater efficiency and reliability for cloud applications.
Cloud load balancing enables enterprises to manage client requests and host
resource distribution among multiple computers, application servers,
or computer networks.

Figure 1: Load Balancing in Cloud Computing

6.2.1 Importance of Load Balancing

Load balancing holds immense importance in cloud computing for several
reasons:

• Optimized Performance: It ensures that resources are efficiently
utilized, preventing any single server from becoming overloaded. By
distributing workloads evenly, load balancing minimizes response
times and enhances overall system performance, providing a smooth
and consistent user experience.

• High Availability and Reliability: Load balancing improves system
reliability by spreading traffic across multiple servers or regions. If one
server fails or experiences issues, the traffic can be redirected to
healthy servers, ensuring continuous availability of applications and
services.

• Scalability and Flexibility: In a cloud environment, load balancing
facilitates dynamic resource allocation. It allows for easy scaling,
enabling the addition or removal of resources based on demand. This
scalability ensures that the infrastructure can handle varying workloads
efficiently.

• Cost Efficiency: Efficient load balancing contributes to cost savings by
optimizing resource usage. It prevents over-provisioning of resources,
reducing unnecessary expenses associated with idle or underutilized
servers.

• Fault Tolerance and Resilience: Load balancing enhances fault
tolerance by distributing traffic across redundant servers or data
centers. This redundancy minimizes the impact of failures or
disruptions, improving the system's resilience against potential issues.

• Support for Modern Architectures: Load balancing is crucial for
modern architectures like microservices and containers. It intelligently
routes traffic among various microservices or containers, ensuring that
each component receives an appropriate share of the workload.

Overall, load balancing is fundamental in cloud computing as it not only
optimizes resource utilization and performance but also ensures high
availability, scalability, and resilience, making it a cornerstone for robust and
reliable cloud-based services.

6.2.2 Goals of Load Balancing in Cloud Computing

The overall goals of load balancing in cloud computing are to minimize
response time for application users and maximize organizational resources.
Other than that, optimal resource utilization, high availability, scalability,
improved performance and enhanced security are the other goals of load
balancing in cloud computing.

6.2.3 How a Load Balancer Works

A load balancer is a crucial component that helps distribute incoming network
traffic across multiple servers or resources to ensure efficient utilization,
maximize performance, and maintain high availability of applications or
services. Here's a breakdown of how it typically works:

• Traffic Distribution: When a user sends a request to access a website,
application, or service hosted on the cloud, it first reaches the load
balancer. The load balancer acts as a traffic cop, receiving incoming
requests.

• Load Balancing Algorithms: The load balancer employs various
algorithms to decide how to distribute the incoming requests. Common
algorithms include Round Robin (where each server is sequentially
assigned a request), Least Connections (sending requests to the server
with the fewest active connections), or Weighted Round Robin
(assigning servers based on predefined weights).

• Health Monitoring: Load balancers continuously monitor the health
and performance of the servers in the pool. They perform health
checks, probing servers to ensure they are operational and capable of
handling requests. If a server is found to be unhealthy, the load
balancer can route traffic away from it until it recovers.

• Session Persistence: In cases where maintaining session data is
essential (like in e-commerce or banking apps), the load balancer can
employ techniques like cookie-based or IP-based session persistence.
This ensures that a user's requests are consistently directed to the same
server in a session to maintain continuity.

• Scalability and Elasticity: Load balancers play a vital role in scaling
resources. In cloud environments, they facilitate horizontal scaling by
adding or removing servers dynamically based on demand. When
traffic increases, new servers can be added to the pool, and the load
balancer will distribute traffic accordingly.

• High Availability: Load balancers themselves are often designed for
high availability. They might have redundancy built-in, employing
techniques like clustering or active-passive configurations, ensuring
that if one load balancer fails, another takes over seamlessly to prevent
disruptions.

• Content Delivery and Security: Advanced load balancers can also
handle tasks like SSL termination (decrypting incoming SSL requests
before distributing them to backend servers) and content caching,
which can improve performance by serving frequently accessed content
directly from memory.
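The health-monitoring and failover behaviour described above can be sketched in a few lines of Python. This is a toy illustration, not a production load balancer: the server names are hypothetical, and the probe is an arbitrary callable standing in for a real check such as an HTTP GET against a health endpoint.

```python
import random

class HealthAwareBalancer:
    """Toy balancer: requests go only to servers that passed the
    most recent health probe."""

    def __init__(self, servers):
        self.servers = list(servers)
        self.healthy = set(servers)

    def run_health_checks(self, probe):
        # A real balancer probes periodically (TCP connect, HTTP GET, ...).
        self.healthy = {s for s in self.servers if probe(s)}

    def route(self):
        # Route only to servers that are currently marked healthy.
        if not self.healthy:
            raise RuntimeError("no healthy servers available")
        return random.choice(sorted(self.healthy))

lb = HealthAwareBalancer(["app-1", "app-2", "app-3"])
lb.run_health_checks(lambda s: s != "app-2")   # pretend app-2 failed its probe
assert "app-2" not in {lb.route() for _ in range(20)}
```

Until the next health check marks "app-2" healthy again, no request reaches it, which is exactly the failover behaviour the bullet list describes.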

Overall, load balancers are fundamental components in cloud computing
architecture, ensuring efficient resource utilization, optimal performance, and
high availability for applications and services.

Check Your Progress 1


1) What is load balancing in Cloud Computing?

…………………………………………………………………………………………
…………………………………………………………………………………………
…………………………………………………………………………………………
2) What are the goals of load balancing?

…………………………………………………………………………………………
…………………………………………………………………………………………
3) Why is it imperative in Cloud Computing to balance the cloud load?
…………………………………………………………………………………………
…………………………………………………………………………………………

6.3 TYPES OF LOAD BALANCERS

There are broadly two types of load balancers:

(i) based on the functionality and
(ii) based on the configuration

6.3.1 Types of Load Balancers based on the Functionality


There are several load balancing techniques for addressing specific network
issues:
a. Network Load Balancer / Layer 4 (L4) Load Balancer:
Network load balancing distributes traffic at the transport level through
routing decisions based on network variables such as the IP address and
destination ports. Such load balancing operates at the TCP level, i.e. Layer 4,
and does not consider any parameter at the application level such as the type
of content, cookie data, headers, locations, application behavior etc.
Performing network address translation without inspecting the content of
discrete packets, network load balancing cares only about the network-layer
information and directs the traffic on this basis only.

b. Application Load Balancer / Layer 7 (L7) Load Balancer:

Ranking highest in the OSI model, a Layer 7 load balancer distributes requests
based on multiple parameters at the application level. A much wider range of
data is evaluated by the L7 load balancer, including HTTP headers and SSL
sessions, and it distributes the server load based on a decision arising from a
combination of several variables. This way, application load balancers control
the server traffic based on individual usage and behavior.

c. Global Server Load Balancer / Multi-site Load Balancer:

With the increasing number of applications being hosted in cloud data centers
located at varied geographies, the GSLB extends the capabilities of general L4
and L7 load balancers across various data centers, facilitating efficient global
load distribution without degrading the experience for end users. In addition
to efficient traffic balancing, multi-site load balancers also help in quick
recovery and seamless business operations in case of a disaster at any server
or data center, as data centers in other parts of the world can be used for
business continuity.

6.3.2 Types of Load Balancers based on the Configuration

Software load balancers, hardware load balancers and virtual load balancers
are the three types of load balancers used in cloud computing to manage and
distribute incoming network traffic among multiple servers or resources to
optimize performance and reliability.

a. Software Load Balancers

Software load balancers are load balancing solutions implemented as software
applications or services within the cloud infrastructure.

Characteristics of Software Load Balancers

• They operate as software instances that can be deployed on virtual
machines or containers.
• They offer flexibility and scalability, allowing for easy configuration
changes and adjustments to accommodate changing traffic patterns.
• These load balancers can be dynamically scaled up or down based on
demand without relying on specific hardware devices.

Examples: Nginx, HAProxy, and load balancing solutions provided by cloud
service providers are common examples of software load balancers.

b. Hardware Load Balancers:

Hardware load balancers are dedicated physical devices designed specifically
to perform load balancing tasks.

Characteristics of Hardware Load Balancers

• They are standalone appliances that sit between the incoming traffic
and the servers, managing the distribution of requests.
• Hardware load balancers are known for their high performance,
specialized hardware optimizations, and ability to handle high volumes
of traffic efficiently.
• These devices often offer robust reliability features and specialized
hardware for load balancing tasks.
Examples: F5 Networks' BIG-IP, Citrix ADC (formerly known as Netscaler),
and Barracuda Load Balancer are examples of hardware load balancers.

c. Virtual Load Balancers:

This load balancer is different from both software and hardware load
balancers, as it is the software program of a hardware load balancer running
on a virtual machine.

Through virtualization, this kind of load balancer imitates the software-driven
infrastructure. The program of the hardware appliance is executed on a virtual
machine to redirect the traffic accordingly. However, such load balancers face
challenges similar to those of physical on-premise balancers, viz. lack of
central management, lesser scalability and much more limited automation.

6.4 LOAD BALANCING ALGORITHMS – STATIC AND DYNAMIC

A load balancing algorithm is the logic, a set of predefined rules, which a load
balancer uses to route traffic among servers.

There are two primary approaches to load balancing. Static load balancing
distributes traffic without taking the current state of each server into
consideration; some static algorithms route an equal amount of traffic, either
in a specified order or at random, to each server in a group. Dynamic load
balancing uses algorithms that distribute traffic based on the current state of
each server.

6.4.1 Static Load Balancing Algorithms

Algorithms in this class are also known as off-line algorithms, in which the
VM information is required to be known in advance. Thus, static algorithms
generally obtain better overall performance than dynamic algorithms.
However, in real clouds demands change over time, so static resource
allocation algorithms can easily violate the requirements of dynamic VM
allocation. Some of the static load balancing algorithms are as follows:

a. Round Robin: Round-robin network load balancing rotates user
requests across servers in a cyclical manner. As a simplified example,
let's assume that an enterprise has a group of three servers: Server A,
Server B, and Server C. In the order that Round Robin directs requests,
the first request is sent to Server A, the second request goes to
Server B, and the third request is sent to Server C. The load balancer
continues to route incoming traffic in this order. This ensures that the
traffic load is distributed evenly across servers.
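The Server A/B/C walkthrough above can be sketched directly with a cycling iterator (a minimal illustration; the server names are taken from the example):

```python
import itertools

servers = ["Server A", "Server B", "Server C"]
rr = itertools.cycle(servers)

# The first six requests are dispatched strictly in rotation:
assignments = [next(rr) for _ in range(6)]
print(assignments)
# → ['Server A', 'Server B', 'Server C', 'Server A', 'Server B', 'Server C']
```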

b. Weighted Round Robin: Weighted Round Robin builds upon the
Round Robin load balancing method. In Weighted Round Robin, each
server in the farm is assigned a fixed numerical weighting by the
network administrator. Servers deemed able to handle more traffic
receive a higher weight. Weighting can be configured within DNS
records.
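One simple way to realise the weighting is to repeat each server in the rotation in proportion to its weight. This is a sketch, not how any particular product implements it; production balancers such as nginx use smoother interleavings, and the weights below are illustrative:

```python
import itertools

def weighted_round_robin(weights):
    """Yield servers in proportion to their fixed weights.

    `weights` is a list of (server, weight) pairs. Each server is
    repeated `weight` times per cycle of the schedule.
    """
    schedule = [server for server, w in weights for _ in range(w)]
    return itertools.cycle(schedule)

# Hypothetical weights: server A is rated for twice the traffic of B.
rr = weighted_round_robin([("A", 2), ("B", 1)])
print([next(rr) for _ in range(6)])  # → ['A', 'A', 'B', 'A', 'A', 'B']
```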

c. IP Hash: IP hash load balancing combines the source and destination
IP addresses of incoming traffic and uses a mathematical function to
convert them into hashes. Connections are assigned to specific servers
based on their corresponding hashes. This algorithm is particularly
useful when a dropped connection needs to be returned to the same
server that originally handled it.
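A hedged sketch of the idea: any deterministic hash of the address pair, reduced modulo the pool size, gives the "stickiness" described above. SHA-256 is used here purely for illustration; real load balancers use their own hash functions, and the addresses below are documentation-range examples:

```python
import hashlib

def ip_hash_server(src_ip, dst_ip, servers):
    """Deterministically map a (source, destination) IP pair to a server.

    The same address pair always hashes to the same index, so a dropped
    connection returns to its original backend (as long as the server
    pool itself is unchanged).
    """
    digest = hashlib.sha256(f"{src_ip}-{dst_ip}".encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

pool = ["s1", "s2", "s3"]
first = ip_hash_server("203.0.113.7", "198.51.100.1", pool)
again = ip_hash_server("203.0.113.7", "198.51.100.1", pool)
assert first == again and first in pool  # sticky: same pair, same server
```

Note the caveat in the comment: if servers are added or removed, the modulo changes and most pairs re-map, which is why some systems prefer consistent hashing.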

d. Opportunistic Algorithm: This static load balancing algorithm does
not consider the current workload of each system. It keeps each node
busy by randomly distributing all uncompleted tasks to the available
nodes. This makes the algorithm provide poor results on load
balancing. It fails to calculate the node's execution time, which then
lowers the performance of the processing task. Also, when there are
nodes in the idle state, there will be bottlenecks in the cloud system.

e. Min-Min Algorithm: This algorithm is easy to use and works at a
faster pace. It operates on a set of tasks: the completion time of each
task is computed, and tasks are allocated to Virtual Machines (VMs)
on the basis of the smallest completion time among the existing tasks.
The process continues until every task has been allocated to a VM.
When there is a greater number of smaller tasks, this algorithm
performs better than when there are bigger tasks. However, it can lead
to starvation, because priority is given to smaller tasks while bigger
tasks are deferred.

f. Max-Min Algorithm: This algorithm is quite similar to Min-Min load
balancing and is based on the same completion-time calculation. In
this algorithm, all existing tasks are sent to the system, after which the
calculation is carried out to determine the minimum completion time
for each of the given tasks. The task with the maximum of these
completion times is then selected and allocated to the relevant
machine. A comparison with the Min-Min algorithm shows that
Max-Min is better when there is just one large task in the set, since
Max-Min will carry out the shorter tasks alongside the larger task.
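The opposite selection rules of Min-Min and Max-Min can be contrasted in a short sketch. This is a simplification under stated assumptions: it takes a single completion time per task (a real scheduler computes a task-by-VM matrix and tracks each VM's ready time), and the task names and times are made up:

```python
def schedule(tasks, strategy):
    """Return the order in which tasks would be allocated.

    tasks: {task_name: completion_time}.
    Min-Min repeatedly picks the task with the smallest completion
    time; Max-Min picks the task whose completion time is largest.
    """
    remaining = dict(tasks)
    order = []
    while remaining:
        pick_fn = min if strategy == "min-min" else max
        pick = pick_fn(remaining, key=remaining.get)
        order.append(pick)
        del remaining[pick]
    return order

times = {"t1": 2, "t2": 5, "t3": 40}           # one large task among small ones
print(schedule(times, "min-min"))  # → ['t1', 't2', 't3'] (large task deferred)
print(schedule(times, "max-min"))  # → ['t3', 't2', 't1'] (large task first)
```

The two orderings show exactly the trade-off in the text: Min-Min starves the large task t3, while Max-Min starts it immediately so the short tasks run alongside it.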

g. Throttled Load Balancer (TLB): In this algorithm a table is
maintained that includes the virtual machines as well as their current
state (available/busy). If a specific task is to be allocated to a virtual
machine, a request is made to the control unit within the data center,
which looks for the VM best suited, with respect to its abilities, to
achieve the required task. The load balancer sends -1 back to the data
center if an appropriate VM is not available. Since the search for a
suitable virtual machine always starts from the beginning of the table
each time, certain VMs may never be employed. Fig 2 presents a
demonstration of the Throttled Load Balancer.


Figure 2: TLB Algorithm
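The table scan at the heart of TLB can be sketched as follows. This is a toy version under simplifying assumptions: the state table maps a VM index to 'available' or 'busy', and VM capabilities are ignored:

```python
def throttled_allocate(vm_table):
    """Scan the VM state table from the start and return the index of
    the first available VM, or -1 if none is available (as the
    algorithm specifies)."""
    for vm_id in sorted(vm_table):
        if vm_table[vm_id] == "available":
            return vm_id
    return -1

table = {0: "busy", 1: "available", 2: "available"}
assert throttled_allocate(table) == 1   # first available VM from the top
table[1] = "busy"; table[2] = "busy"
assert throttled_allocate(table) == -1  # no suitable VM: signal -1
```

Because the scan always restarts at index 0, low-index VMs are reused first, which is the bias the text notes (VMs late in the table may never be employed).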

h. Active Monitoring Load Balancer (AMLB): It is a type of dynamic
load balancing technology. This technology obtains information about
each VM and the number of requests presently allocated to each of
them. The Data Center Controller (DCC) scans the VM index table
after receiving a new request to determine the VM that is least loaded
or idle. If two or more VMs qualify, a first-come-first-serve approach
is employed and the load is allocated to the VM with the smallest
index number. The VM ID is sent back by the AMLB algorithm to
the DCC, which then sends the request to the VM represented by that
ID. The DCC informs the AMLB about the new allocation and sends
it the cloudlet. Once the task is completed, the information is sent to
the DCC and the count in the VM index table is decreased. When a
new request is received, the load balancer goes over the table again
and the allocation process repeats. This is shown in Fig 3.

Figure 3: AMLB Algorithm

6.4.2 Dynamic Load Balancing Algorithms

Algorithms in this class are also noted as online algorithms, in which VMs are
dynamically allocated according to the loads at each time interval. The load
information of VM is not obtained until it comes into the scheduling stage.
These algorithms could dynamically configure the VM placement combining
with VM migration technique. In comparison to static algorithms, dynamic

9
Resource Provisioning,
Load Balancing and algorithms have higher competitive ratio. Some of the dynamic load balancing
Security algorithms are as follows:

a. Least Connection: The least connection algorithm identifies which
servers currently have the fewest number of requests being served and
directs traffic to those servers. This is based on an assumption that all
connections require roughly equal processing power.

b. Weighted Least Connection: This one gives administrators the option
to assign different weights to each server, under the assumption that
some servers can handle more requests than others.
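Both selection rules reduce to a single `min` over the server pool. A minimal sketch, assuming the balancer tracks live connection counts (the counts and weights below are illustrative):

```python
def least_connection(conns):
    """conns: {server: active_connection_count}.
    Pick the server with the fewest active connections."""
    return min(conns, key=conns.get)

def weighted_least_connection(conns, weights):
    """Pick the server with the lowest connections-per-weight ratio,
    so a server with weight 2 may carry twice the connections of a
    weight-1 server before being considered equally loaded."""
    return min(conns, key=lambda s: conns[s] / weights[s])

conns = {"a": 10, "b": 4, "c": 7}
assert least_connection(conns) == "b"           # fewest active connections

weights = {"a": 5, "b": 1, "c": 1}
assert weighted_least_connection(conns, weights) == "a"  # 10/5 = 2 wins
```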

c. Weighted Response Time: This algorithm averages out the response
time of each server and combines that with the number of requests
each server is serving to determine where to send traffic. It can ensure
faster service for users by sending traffic to the servers with the
quickest response time.

d. Resource-Based (Adaptive) load balancing algorithm: Resource-based
(or adaptive) load balancing makes decisions based on status
indicators retrieved by LoadMaster from the back-end servers. The
status indicator is determined by a custom program (an "agent")
running on each server. LoadMaster queries each server regularly for
this status information and then sets the dynamic weight of the real
server appropriately.

In this way, the load balancing method is essentially performing a detailed
"health check" on the real server. This method is appropriate in any situation
where detailed health check information from each server is required to make
load balancing decisions. For example, this method would be useful for any
application where the workload is varied and detailed application
performance and status information is required to assess server health. It can
also be used to provide application-aware health checking for Layer 4 (UDP)
services.

6.5 LOAD BALANCING AS A SERVICE (LBaaS)


Load Balancing as a Service (LBaaS) in cloud computing refers to a
managed service offered by cloud providers to dynamically distribute
incoming network traffic across multiple servers, applications, or resources to
optimize performance, enhance availability, and improve reliability. LBaaS
simplifies the management of load balancing operations by abstracting the
complexities of configuring and maintaining load balancers, offering a scalable
and efficient solution for handling traffic distribution in the cloud. Key
aspects of Load Balancing as a Service include management of load
balancing, traffic optimization, integration with cloud services, monitoring
and analytics, scalability and flexibility, and high availability and reliability.

LBaaS is available as part of the services provided by major cloud platforms
like AWS, Azure, Google Cloud, and others. Users can configure and manage
load balancers through web-based interfaces or APIs provided by the cloud
service providers. This service abstraction allows businesses to focus on their
applications' functionality and scalability while relying on the cloud provider's
infrastructure for efficient load balancing.

In the next section, let us study how OpenStack LBaaS works.

6.5.1 OpenStack LBaaS

OpenStack LBaaS allows users to create a load balancer to balance the traffic
load between instances; it resides in front of a group of instances and
manages traffic balancing. LBaaS v2 allows you to configure multiple
listener ports on a single load balancer IP address.

The LBaaS service consists of a load balancer, a pool, pool members, a
listener and a health monitor. High Availability Proxy (HAProxy) is used to
implement the load balancing. Fig 4 given below will help you to understand
the various components of OpenStack LBaaS.

Figure 4: OpenStack LBaaS

Load Balancer: The load balancer collects the data from listeners and routes
the traffic to the appropriate instance. It is assigned one IP from the same
subnet on which the instances are running. The traffic from the outside
network is redirected to the LB IP, and the LB routes the traffic to the
instances as per the configured load balancer policies.

Listener: Load balancers can listen for requests on multiple ports. Each one
of those ports is specified by a listener.

Pool: A pool holds a list of members that serve content through the load
balancer.

Health Monitor: The health monitor keeps track of the status of the pool
members. If a member is not in a healthy state, the health monitor redirects
the traffic to another healthy instance. Health monitors are associated with
pools.

Member: Members are the servers that serve traffic behind a load balancer.
Each member is specified by the IP address and port that it uses to serve
traffic.
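As a concrete illustration, the components above map onto CLI objects one-to-one. The following sketch is an ops fragment, not a runnable script: it assumes a deployed OpenStack cloud with the Octavia (LBaaS v2) client plugin and an authenticated session, and all names, the subnet, and member addresses are illustrative:

```shell
# 1. Load balancer with a VIP on the instances' subnet
openstack loadbalancer create --name lb1 --vip-subnet-id private-subnet

# 2. Listener on port 80
openstack loadbalancer listener create --name listener1 \
    --protocol HTTP --protocol-port 80 lb1

# 3. Pool behind the listener, using round robin
openstack loadbalancer pool create --name pool1 --listener listener1 \
    --protocol HTTP --lb-algorithm ROUND_ROBIN

# 4. Members: the instances that actually serve the traffic
openstack loadbalancer member create --subnet-id private-subnet \
    --address 192.0.2.11 --protocol-port 80 pool1
openstack loadbalancer member create --subnet-id private-subnet \
    --address 192.0.2.12 --protocol-port 80 pool1

# 5. Health monitor attached to the pool
openstack loadbalancer healthmonitor create --delay 5 --timeout 4 \
    --max-retries 3 --type HTTP pool1
```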

Check Your Progress 2


1) Briefly explain the static and dynamic approaches of load balancing.

…………………………………………………………………………………
…………………………………………………………………………………
…………………………………………………………………………………
2) Briefly explain the Round Robin and Weighted Round Robin Algorithms.

…………………………………………………………………………………
…………………………………………………………………………………
…………………………………………………………………………………

6.6 SUMMARY

In this unit we have studied load balancing and its associated algorithms. Load
balancing in cloud computing is a critical mechanism that optimizes the
distribution of incoming network traffic across multiple servers or resources.
Acting as a traffic manager, the load balancer ensures that no single server is
overwhelmed, thereby enhancing performance, maximizing resource
utilization, and maintaining high availability.

Employing algorithms to determine how to distribute requests, load balancers
also monitor the health of servers and dynamically adjust traffic routing based
on their operational status. This dynamic scalability and efficient resource
allocation contribute to the overall resilience and responsiveness of cloud-
based applications, supporting the seamless handling of varying workloads and
improving the user experience.


6.7 SOLUTIONS / ANSWERS

Check Your Progress 1

1. In a public cloud computing environment, a load balancer distributes
application and network traffic efficiently and methodically across
various servers. This prevents excessive traffic and requests from
collecting in one place and enhances application responsiveness by
spreading the workload categorically between existing servers.

Load balancers sit between backend servers and client devices, receive
server requests, and distribute them to available, capable servers. Cloud
load balancing is the process of distributing traffic such as UDP,
TCP/SSL, HTTP(s), HTTPS/2 with gRPC, and QUIC to multiple
backends to increase security, avoid congestion, and reduce costs and
latency.

2. The goals of load balancing in cloud computing are:

(i) To minimize response time for application users; and

(ii) Maximize organizational resources.

(Also refer section 6.2.2 for details)

3. In the cloud, load balancing is critical for the following reasons. Load
balancing technology is less costly and easier to use than other options;
firms can deliver better outcomes at a lower cost by using it. The
scalability of cloud load balancing helps manage website traffic:
high-end network and server traffic can be effectively managed using
effective load balancers. E-commerce businesses rely on cloud load
balancing to manage and disperse workloads in the face of numerous
visitors every second. Load balancers can also deal with abrupt spikes
in traffic. For example, if there are too many requests for university
results, the website may go down; with a load balancer there is no
need to be concerned about the flow of traffic. Whatever the scale of
the traffic, load balancers will evenly distribute the website's load over
several servers, resulting in the best outcomes in the shortest amount
of time.
The primary benefit of utilizing a load balancer is to ensure that the
website does not go down unexpectedly. This means that if a single
node fails, the load is automatically shifted to another node on the
network. It allows for more adaptability, scalability, and traffic
handling.

Check Your Progress 2


1. Static Algorithm Approach
This type of method is used when the load on the system is relatively
predictable and hence static. With the static method, all of the traffic
is split equally amongst all of the servers. Implementing this algorithm
effectively calls for extensive knowledge of server resources, which is
only known at implementation time.
However, the decision to shift loads does not take into account the
current state of the system. One of the main limitations of a static load
balancing method is that load balancing assignments only take effect
once they have been established and cannot be moved to other devices
during execution.
Dynamic Algorithm
The dynamic process begins by locating the lightest-loaded server in
the network and assigns priority load balancing to it. This may
increase traffic within the system, since it relies on real-time
communication over the network. Here, it is all about the present
status of the system.
Decisions are made in the context of the present system state, which is
a key feature of dynamic algorithms. Processes can be transferred in
real time from heavily loaded machines to lightly loaded machines.
2. Round Robin Algorithm
For this algorithm, as its name implies, jobs are assigned in a round-
robin fashion. The initial node is chosen at random, and the remaining
nodes are assigned work in a round-robin order. This is one of the
simplest strategies for distributing the load on a network.
Processes are assigned with no regard for priority. The algorithm
responds quickly when the workload is evenly distributed among the
processes. However, the execution time of each process varies, so
some nodes may be underutilized while others are overburdened.
Weighted Round Robin Load Balancing Algorithm
The Weighted Round Robin algorithm was created to address the most
problematic aspects of Round Robin. Work is distributed according to the
weight values assigned in this algorithm.
Higher-capacity processors are given higher weights. Consequently, the
servers with the highest weights will be given the most work. Once the
servers are fully loaded, they will see a steady stream of traffic.

6.8 FURTHER READINGS

1. Cloud Computing: Principles and Paradigms, Rajkumar Buyya, James
Broberg and Andrzej M. Goscinski, Wiley, 2011.
2. Mastering Cloud Computing, Rajkumar Buyya, Christian Vecchiola,
and Thamarai Selvi, Tata McGraw Hill, 2013.
3. Essentials of Cloud Computing, K. Chandrasekhran, CRC Press, 2014.
4. Cloud Computing, Sandeep Bhowmik, Cambridge University Press,
2017.
