Ubiquitous Cloud Computing Trends

The document discusses ubiquitous computing, which integrates computing capabilities into everyday objects, creating a context-aware network. It explores trends in cloud computing that support this paradigm, including large-scale private clouds, cloud mashups, and mobile cloud computing through cloudlets. Additionally, it covers enabling technologies for the Internet of Things (IoT), such as RFID and sensor networks, which facilitate the seamless integration of technology into daily life.


Unit - V

Trends in Cloud
Ubiquitous computing:
 Ubiquitous computing, also known as pervasive computing, is a concept where computing capabilities
are embedded into everyday objects and environments, creating a seamlessly connected and context-
aware network that makes technology an invisible, integral part of life.
 Ubiquitous cloud computing is a new computing paradigm that merges ubiquitous computing and cloud
computing to create an interconnected network of devices and services available anytime, anywhere.
 This hybrid model leverages the power of the cloud to manage vast amounts of data from embedded
devices and deliver context-aware services across diverse platforms, moving beyond centralized cloud
models to a more distributed and dynamic computing landscape.
 Definition: A concept in software engineering and computer science where computing is made to
appear anytime and everywhere. In contrast to desktop computing, ubiquitous computing can occur
using any device, in any location, and in any format.
 Ubiquitous computing integrates microprocessors into everyday objects like lights, cameras, TVs, wearables, voice assistants, and sensors. These interconnected devices can communicate and respond intelligently to provide services for various applications.

Cloud trends in supporting Ubiquitous Computing


Following are the cloud trends:
 Use of Clouds for HPC/HTC and Ubiquitous Computing.
 Large-Scale Private Clouds at NASA and CERN.
 Cloud Mashups for Agility and Scalability.
 Cloudlets for Mobile Cloud Computing.

1)Use of Clouds for HPC/HTC and Ubiquitous Computing


 Ubiquitous cloud computing refers to the use of Internet resources at any place and any time for any objective. Today, people can access the Internet via a fixed wire or a mobile wireless connection.
 Science in the cloud provides a new research environment that leverages software and services delivered on user demand.
 New advances in research discovery are expected in high-energy physics, observatory astronomy, atmospheric modeling, biomedicine, big data, and many-core pervasive clouds, and in their impact on scientific research in the information age.

2)Large-Scale Private Clouds at NASA and CERN



Prof:Gavali R.B.
 In 2010, two large-scale private clouds were under construction in the United States and European
Union.
 The U.S. cloud, called Nebula, was developed by NASA and is designed for NASA scientists to run climate models on remote systems provided by NASA. This saves thousands of NASA users from acquiring supercomputers at their local sites.
 The EU cloud is built by CERN in Geneva. This is a large private cloud for distributing data, applications, and computing resources to thousands of scientists around the world. CERN deals with large data sets and high throughput with a global workforce and a finite budget.

3)Cloud Mashups for Agility and Scalability


 In web application development, a mashup is a web page or application that combines data, presentations, or functionality from two or more sources to create a new service.
 The main characteristics of the mashup are a combination of virtualization and aggregation. In
cloud computing, the cloud has captured the computing market with dynamic resource allocation
from a pool of VM resources.
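The aggregation idea can be sketched in a few lines. The two stand-in services below (their names, fields, and values) are invented for illustration, not a real mashup API; the point is only that a new service is formed by combining the outputs of existing ones.

```python
# A minimal sketch of a cloud mashup: data from two hypothetical
# source services is aggregated into a single new service response.

def weather_service(city):
    # Stand-in for a weather data source.
    return {"city": city, "temp_c": 21}

def map_service(city):
    # Stand-in for a mapping/geocoding data source.
    return {"city": city, "lat": 18.52, "lon": 73.86}

def mashup(city):
    # Combine both sources into one aggregated view (the "new service").
    combined = {}
    combined.update(weather_service(city))
    combined.update(map_service(city))
    return combined

result = mashup("Pune")
```

The mashup function itself stores nothing; like a cloud mashup, it only aggregates what the underlying services expose.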

4)Cloudlets for Mobile Cloud Computing


 Recently, researchers proposed a low-cost infrastructure to enable cloud computing using mobile
devices.
 The idea is called a cloudlet, and it offers a resource-rich portal for upgrading mobile devices with cognitive abilities to access distant clouds. This portal should be designed to be trustworthy and use VMs to explore location-aware cloud applications.
 The idea can be applied to opportunity discovery, fast information processing, and intelligent
decision making on the road.
ENABLING TECHNOLOGY IN THE INTERNET OF THINGS
In ubiquitous computing, the Internet of Things (IoT) provides a network of sensor or radio connected
devices that can be uniquely identified and located in the cyber-physical space.
This IoT is mostly wirelessly connected as a self-configuring network of radio-frequency tags, low-cost
sensors, or e-labels.
The IoT enabling technologies are as follows:
1) The Internet of Things for Ubiquitous Computing.
2) Radio-Frequency Identification (RFID).
3) Sensor Networks and ZigBee Technology.
4) Global Positioning System (GPS).
The Internet of Things for Ubiquitous Computing
 The IoT is a natural extension of the Internet. The foundation of the IoT is radio-frequency identification (RFID).
 This enables the discovery of tagged objects and mobile devices by browsing an IP address or
searching for a database entry.
 The Internet of Things (IoT) serves as a practical implementation and driving force for the concept of Ubiquitous Computing (UbiComp), which envisions technology seamlessly integrated into daily life.

 While UbiComp is the broader goal of embedding computing into our environment to provide
context-aware services, IoT provides the infrastructure of connected sensors, software, and
devices that make this goal achievable.
 Ubiquitous computing is the overarching vision of a world filled with invisible technology, and the
Internet of Things is a major technological enabler that makes this vision a reality.
 IoT's interconnected network of "things" forms the foundation for the pervasive, context-aware
computing environments that define ubiquitous computing.

Development of the Internet of Things


 In 2005, the concept of the IoT entered the limelight. The IoT should be designed to connect the
world's objects in a sensory manner.
 The approach is to tag things through RFID, feel things through sensors and wireless networks,
and think things by building embedded systems that interact with human activities.

Enabling and Synergistic Technologies


Many technologies can be applied to build the IoT infrastructure and specific IoT systems for special application domains.
Supportive technologies are divided into two categories.
1. Enabling technologies: These build up the foundations of the IoT. Among the enabling technologies, tracking (RFID), sensor networks, and GPS are critical.
2. Synergistic technologies: These play supporting roles. For example, biometrics could be widely applied to personalize the interactions among humans, machines, and objects.
Artificial intelligence, computer vision, robotics, and telepresence can make our lives more automated in the
future.

Architecture of Internet of Things

IoT development is shown with a three-layer architecture:


1) The top layer is formed by driven applications.
2) The bottom layer represents various types of sensing devices, namely RFID tags, ZigBee or other types of sensors, and road-mapping GPS navigators.
3) The sensing devices are locally or wide-area connected in the form of RFID networks, sensor networks, and GPS networks.
 Signals or information collected at these sensing devices are linked to the applications through the
cloud computing platforms at the middle layer.
 The signal processing clouds are built over the mobile networks, the Internet backbone, and various
information networks at the middle layer.
 In the IoT, the meaning of a sensing event does not follow a deterministic or syntactic model. In fact,
the SOA model is adoptable here.
 A large number of sensors and filters are used to collect the raw data. Various compute and storage
clouds and grids are used to process the data and transform it into information and knowledge formats.
 The sensed information is used to put together a decision-making system for intelligence applications.
The middle layer is also considered a semantic web or grid. Some actors (services, components, avatars) are self-referenced.
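The flow through the three layers can be sketched as a toy pipeline: raw sensing data enters at the bottom, the middle cloud layer turns it into information, and the top layer makes a decision. All device names, readings, and the alert threshold below are invented for illustration.

```python
# A toy sketch of the three-layer IoT architecture:
# bottom sensing layer -> middle cloud/processing layer -> top application layer.

def sensing_layer():
    # Bottom layer: raw readings from RFID/sensor devices (illustrative values).
    return [("temp", 38.5), ("temp", 39.1), ("rfid", "TAG-042")]

def cloud_layer(raw):
    # Middle layer: compute/storage clouds transform raw data into information.
    temps = [v for k, v in raw if k == "temp"]
    return {"avg_temp": sum(temps) / len(temps),
            "tags_seen": [v for k, v in raw if k == "rfid"]}

def application_layer(info):
    # Top layer: decision-making for an intelligence application
    # (threshold chosen arbitrarily for the sketch).
    return "alert" if info["avg_temp"] > 39.0 else "normal"

info = cloud_layer(sensing_layer())
decision = application_layer(info)
```

The sketch mirrors the text: the meaning of a sensing event is not fixed at the device; it is only in the middle layer that raw values become information a decision can be based on.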
Radio-Frequency Identification (RFID)
 RFID systems consist of tags, readers, and a backend database.

 Tags contain unique identifiers and can be passive (relying on reader for power) or active (with their
own power source).

 Readers scan and retrieve data from the tags.

 RFID is used for asset tracking, inventory management, and access control in IoT.

 RFID is applied with electronic labels or RFID tags on any objects being monitored or tracked. The
tagging may be applied to any objects, such as merchandise, tools, smartphones, computers,
animals, or people.
 The purpose is to identify and track the objects using radio waves or sensing signals. Some tags
can be read from tens or hundreds of meters away via a wireless reader.
 Most RFID tags contain at least two major parts. One is an integrated circuit for storing and processing information, modulating and demodulating a radio-frequency (RF) signal, and other special functions. The other is an antenna for receiving and transmitting the signal.

RFID Tags and Device Components


There are generally three types of RFID tags:
1. Active RFID tags, which contain a battery and transmit signals autonomously.
2. Passive RFID tags, which have no battery and require an external source to provoke signal transmission.
3. Battery-assisted passive RFID tags, which require an external source to wake up but have significantly higher forward-link capability.
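The behavioral difference between the three tag types can be modeled in a small sketch. The field-strength values and thresholds are invented stand-ins for real read ranges, not RFID physics.

```python
# A toy model of the three RFID tag types: whether a tag responds
# depends on its power source. Thresholds are illustrative only.

class RFIDTag:
    def __init__(self, tag_id, kind):
        assert kind in ("active", "passive", "battery_assisted")
        self.tag_id, self.kind = tag_id, kind

    def respond(self, reader_field):
        # Active tags transmit autonomously using their own battery.
        if self.kind == "active":
            return self.tag_id
        # Passive and battery-assisted tags need the reader's field;
        # the battery-assisted tag answers a weaker field (longer range).
        threshold = 0.2 if self.kind == "battery_assisted" else 0.5
        return self.tag_id if reader_field >= threshold else None

tags = [RFIDTag("A1", "active"),
        RFIDTag("P1", "passive"),
        RFIDTag("B1", "battery_assisted")]

# With a weak reader field (0.3), only the plain passive tag stays silent.
seen = [t.respond(0.3) for t in tags]
```

With a stronger field (say 0.6) all three tags would answer, matching the idea that passive tags work only at short range while active tags can be read from much farther away.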

How RFID Works?


 The tag's components are enclosed within plastic, silicon, or sometimes glass. Data stored in the
microchip waits to be read.
 The tag's antenna receives electromagnetic energy from an RFID reader's antenna. Using power from
its internal battery or power harvested from the reader's electromagnetic field, the tag returns radio
signals back to the reader.
 Fig. 5.3 shows the RFID operations between the RFID tags (e-labels), reading/writing devices, and backend computers. RFID tags are still not in widespread use by ordinary people, because they are expensive and bulky.

 However, these tags are not that expensive to produce in large quantities, and they can be made small enough to fit on almost any product or object.
 RFID has many industrial applications including merchandise tracking and supply chain
management. Its use can improve the efficiency of inventory control and accuracy of assembly line
management.

 Active and semi-passive tags are reserved for reading over 30 to 100 meters, if repeater batteries
are used to boost a tag's range.
Sensor Networks and ZigBee Technology
 Today's sensor networks are mostly wireless, and are known as wireless sensor networks (WSNs).
 A typical WSN consists of spatially distributed autonomous sensors to cooperatively monitor
physical or environmental conditions, such as temperature, sound, vibration, pressure, motion, or
pollutants.
 The development of wireless sensor networks was motivated by military applications such as
battlefield surveillance.
 WSN technology is now used in many industrial and civilian application areas, including process
monitoring and control, machine health monitoring, environment and habitat monitoring, health
care and home automation and intelligent traffic control.
Wireless Sensor Networks:
 A WSN is a group of specialized transducers with a communications infrastructure intended to
monitor and record conditions at diverse locations.
 Commonly monitored parameters are temperature, humidity, pressure, wind direction and speed, illumination intensity, vibration intensity, sound intensity, power-line voltage, chemical concentrations, pollutant levels, and vital body functions.
 A sensor network consists of multiple detection stations called sensor nodes, each of which is
small, lightweight, and portable.
 Every sensor node is equipped with a transducer, microcomputer, transceiver, and power source.
The transducer generates electrical signals based on sensed data.
 They are connected to the base station, which acts as a processing unit in the WSN system. The base station in a WSN system is connected through the Internet to share data. A WSN can be used for processing, analysis, storage, and mining of the data.
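The node/base-station arrangement above can be sketched as follows; node IDs and readings are invented for illustration, and the base station here simply collects and averages rather than connecting to the Internet.

```python
# A toy WSN: sensor nodes hold transducer readings and report to a
# base station, which aggregates them for further processing.

class SensorNode:
    def __init__(self, node_id, readings):
        self.node_id = node_id
        self.readings = readings  # e.g. temperatures sensed over time

    def report(self):
        return (self.node_id, self.readings)

class BaseStation:
    def __init__(self):
        self.data = {}

    def collect(self, node):
        # Receive a node's report and store it by node ID.
        node_id, readings = node.report()
        self.data[node_id] = readings

    def average(self, node_id):
        # Simple aggregation, standing in for processing/analysis/mining.
        r = self.data[node_id]
        return sum(r) / len(r)

bs = BaseStation()
bs.collect(SensorNode("n1", [21.0, 23.0]))
bs.collect(SensorNode("n2", [25.0]))
```

In a real deployment each node would also carry the transceiver and power source described above; the sketch keeps only the data path from transducer readings to base-station aggregation.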

Wireless Network Support for Ubiquitous Computing


 Four classes of wireless networks support ubiquitous computing: ZigBee, GSM/GPRS, WiFi 802.11g, and Bluetooth 802.15.1.
 The ZigBee networks are mainly used in low-cost and low-speed monitoring and control
applications, such as those used in a Wireless Home Area Network (WHAN).
 The GSM/GPRS or CDMA cellular mobile networks can cover a wide area for voice and data telecommunications.
 The WiFi networks are specified in the IEEE 802.11 standards. WiFi is used for wirelessly accessing the Internet, reading e-mails, or conducting web page searches. Bluetooth is mainly used for short-distance wireless connection of computer peripherals (e.g., keyboard, mouse, printers). In terms of data rate, WiFi is the fastest (54 Mbps for an 802.11g network).

 Bluetooth follows with 720 Kbps, and the 2.5G GPRS mobile network with 115 Kbps. ZigBee has the slowest rate at 20-250 Kbps. However, ZigBee has high reliability and low power/cost advantages.

Three Generations of Sensor Networks


Three generations of WSN are First generation (1990), Second generation (2000) and Third Generation
(2010).

First Generation:
 The sensors used in the first generation were mainly vehicle-placed or air-dropped single sensors.
They were quite bulky, like a shoebox, and weighed in the kilograms.
 Networks assumed only star or point-to-point topologies and were powered by large batteries that
could last hours or days.

Second Generation:
 In the second generation, the sensors became smaller, like a pack of playing cards, and weighed
in the grams, and worked with AA batteries that lasted for days or weeks.
 They appear in client/server or P2P configurations.

Third Generation:
 The current generation are as large as dust particles, weigh only a negligible amount, and are used
in P2P networks for embedded and remote applications.

ZigBee Network
 The term "ZigBee" originates from the behavior of honeybees when they return to the beehive. This is a high-level communication protocol using small, low-power, radio-based sensors as specified by the IEEE 802.15.4 standard.
 For example, ZigBee devices are applied in WHANs in wireless light switches and consumer
electronics controlled by short-range radio.
 The ZigBee technology is simpler to use and less expensive than Bluetooth or WiFi. The RF applied in
ZigBee results in low data rates, long battery life, and secure networking.
 The low cost allows the technology to be widely deployed in wireless control and monitoring
applications.
 The low power usage allows a longer life with smaller batteries. The mesh networking provides higher
availability and an extended operating range.

Features of Zigbee:
1. Support for multiple network topologies such as point-to-point, point-to-multipoint and mesh
networks.
2. Low duty cycle - provides long battery life.
3. Low latency.

4. Direct Sequence Spread Spectrum (DSSS).
5. Up to 65,000 nodes per network.
6. 128-bit AES encryption for secure data connections.
7. Collision avoidance, retries and acknowledgements.

Devices supported by Zigbee:


1. ZigBee coordinator (ZC): This is the most capable ZigBee device serving as the coordinator or the
root of a ZigBee network. There is exactly one coordinator in each network since it is the device that started
the network. It is able to store information about the network, including acting as the trust center and
repository of security keys.

2. ZigBee Router (ZR): This can act as an intermediate router, passing on data from end device to end
device.

3. ZigBee End Device (ZED): This contains just enough functionality to talk to the parent node (either
the coordinator or the router). The end device cannot relay data from other devices.
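The three roles can be sketched as a toy tree: an end device hands data only to its parent, routers relay hop by hop, and all paths end at the single coordinator. Node names and the topology are invented for illustration.

```python
# A toy ZigBee tree: one coordinator (ZC), routers (ZR) that relay,
# and end devices (ZED) that only talk to their parent.

class Node:
    def __init__(self, name, role, parent=None):
        assert role in ("ZC", "ZR", "ZED")
        self.name, self.role, self.parent = name, role, parent

def route_to_coordinator(node):
    # A ZED passes data to its parent; each ZR relays one hop closer
    # to the ZC, which is the root of the network.
    path = [node.name]
    while node.role != "ZC":
        node = node.parent
        path.append(node.name)
    return path

zc = Node("coordinator", "ZC")                    # exactly one per network
zr = Node("router1", "ZR", parent=zc)
zed = Node("lightswitch", "ZED", parent=zr)       # cannot relay for others

path = route_to_coordinator(zed)
```

The sketch reflects the constraint stated above: the end device contributes data but never appears in the middle of another device's path, whereas routers may.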

ZigBee Protocol Architecture


The ZigBee protocol architecture consists of a stack of layers: the physical and MAC layers are defined by IEEE 802.15.4, and the protocol is completed by adding ZigBee's own network and application layers, as shown in Fig. 5.5.

1. Physical Layer: This layer performs modulation and demodulation operations upon transmitting and receiving signals, respectively. It supports the 2.4 GHz, 868 MHz, and 915 MHz frequency bands.

2. MAC Layer: This layer is responsible for reliable transmission of data by accessing different networks with carrier sense multiple access with collision avoidance (CSMA/CA). It also transmits the beacon frames for synchronizing communication.

3. Network Layer: This layer takes care of all network-related operations such as network setup, end-device connection and disconnection, routing, device configuration, etc.

4. Application Support Sub-Layer: This layer enables the services necessary for the ZigBee device object and application objects to interface with the network layer for data-managing services. This layer is responsible for matching two devices according to their services and needs.

5. Application Framework: It provides two types of data services: key-value pair and generic message services. A generic message is a developer-defined structure, whereas the key-value pair is used for getting attributes within the application objects. The ZDO provides an interface between application objects and the APS layer in ZigBee devices. It is responsible for detecting, initiating, and binding other devices to the network.

Fig. 5.5: ZigBee Protocol Architecture

Applications of ZigBee
1. Industrial Automation: In manufacturing and production industries, a communication link continually monitors various parameters and critical equipment. ZigBee considerably reduces this communication cost and optimizes the control process for greater reliability.

2. Home Automation: ZigBee is perfectly suited for controlling home appliances remotely, such as lighting system control, appliance control, heating and cooling system control, safety equipment operation and control, surveillance, and so on.

3. Smart Metering: ZigBee remote operations in smart metering include energy consumption response,
pricing support, security over power theft, etc.

4. Smart Grid Monitoring: ZigBee operations in this smart grid involve remote temperature
monitoring, fault locating, reactive power management, and so on.

Global Positioning System (GPS)


 GPS uses satellite signals to determine the location of a receiver on Earth.
 It provides accurate location, speed, and time information.
 GPS is widely used for asset tracking, vehicle navigation, and location-based services in IoT.
 Location-based services (LBS) help people and machines find things and determine their physical whereabouts.
 Sensors play a role in dead reckoning, but that approach does not satisfy practical needs for geolocation.

 This has triggered the creation of global positioning and navigation systems. The GPS was developed in 1973 by the U.S. Air Force.
 The global positioning system is a satellite-based navigation system consisting of a network of 24 orbiting satellites that are eleven thousand nautical miles in space, in six different orbital paths.
 For anyone with a GPS receiver, the system provides accurate location and time information for an unlimited number of users, in all weather conditions, day and night, anywhere in the world.
 Each GPS satellite broadcasts a message that includes the satellite's current position, orbit, and
exact time.
 A GPS receiver combines the broadcasts from multiple satellites to calculate its exact position
using a process called triangulation.
 Three satellites are required in order to determine a receiver's location, though a connection to four
satellites is ideal since it provides greater accuracy.
 Most GPS devices also use some type of location caching to speed up GPS detection. By
memorizing its previous location, a GPS device can quickly determine what satellites will be
available the next time it scans for a GPS signal.

How GPS Works?


 GPS consists of three parts namely, Space segment, Control segment and User segment.
 There are 24 satellites deployed around the Earth in fixed orbits. The satellites orbit at an altitude
of approximately 20,200 KM.
 GPS satellites broadcast signals from space, by which each GPS receiver calculates its 3D
location (latitude, longitude, and altitude) plus the current time.

 GPS satellites circle the Earth twice a day in very precise orbits and transmit signals to Earth.
 GPS devices on the ground receive these signals and use a triangulation method to calculate the user's exact location. At different times, different satellites will become visible to the receiver.
 As shown in Fig. 5.6, the control segment is composed of a master control station and a host of
dedicated and shared ground antennas and monitor stations.
 The user segment is composed of hundreds of thousands of allied users of the secure GPS precise positioning service.
 Tens of millions of civil, commercial, and scientific users are only allowed to use a degraded functionality of the so-called standard positioning service, which cannot be used for hostile attack purposes.
 The GPS tracking device makes it possible to track people, vehicles, and other assets anywhere on Earth.

Types of GPS
There are two types of GPS tracking systems namely, Passive GPS and Active GPS.

1. In passive GPS tracking, the GPS is just a receiver, not a transmitter. Passive GPS tracking devices lack a transmission capability to send the GPS data from the vehicle. Therefore, passive GPS devices are also known as data loggers, used primarily as recording devices.

2. Active GPS tracking units incorporate a method to transmit user information from a vehicle.
Although satellite uplink of data is available, cellular data communication is the most common and cost-
effective. Automatic incremental updating provides a continuous source of tracking throughout a recording
period. This provides current as well as historical logging positions.
 Passive GPS tracking devices store GPS location data in their internal memory which can then be
downloaded to a computer or mobile for viewing at a later time, while active GPS tracking systems
send the data at regular intervals to be viewed in real time.
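The logger/transmitter contrast can be sketched as two small classes: both record fixes internally, but only the active unit pushes each fix to a channel in real time. The coordinates and the "uplink" list are invented stand-ins for a cellular data link.

```python
# A minimal sketch of passive (data logger) vs. active GPS trackers.

class PassiveTracker:
    def __init__(self):
        self.log = []          # internal memory, downloaded later for viewing

    def fix(self, position):
        self.log.append(position)

class ActiveTracker(PassiveTracker):
    def __init__(self, uplink):
        super().__init__()
        self.uplink = uplink   # hypothetical real-time transmission channel

    def fix(self, position):
        super().fix(position)          # historical logging (as in passive)
        self.uplink.append(position)   # current position, sent immediately

uplink = []
active = ActiveTracker(uplink)
passive = PassiveTracker()
for p in [(18.52, 73.86), (18.53, 73.87)]:
    active.fix(p)
    passive.fix(p)
```

Inheritance captures the point made above: an active unit does everything a passive one does (historical logging) plus incremental transmission for real-time viewing.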

Operating Principles of GPS


 Knowing the distance from the receiver to a fixed-position satellite implies that the receiver is on
the surface of a sphere centered at the satellite. With four satellites, the receiver location is
detected at the intersection of four sphere surfaces.
 To locate the point accurately, the receiver has to use a fourth satellite to home in almost precisely. The fourth satellite's sphere will come very close to the final two intersecting points of the three satellite spheres.
 The final receiver location is decided by noting which of the two final points is closest to the sphere surface of the fourth satellite; in the case of no error, the precise position is located.
 A GPS receiver calculates its position by precisely timing the signals sent by GPS satellites high
above Earth.
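The intersecting-spheres idea can be worked through in two dimensions, with three fixed "satellites" at known positions and known distances to the receiver. Subtracting the circle equations pairwise leaves a linear system, solved below with Cramer's rule. All positions are invented; real GPS works in three dimensions and also solves for the receiver's clock error, which is why the fourth satellite is needed in practice.

```python
import math

# 2-D sketch of position fixing from known distances to fixed anchors.
def trilaterate(p1, p2, p3, d1, d2, d3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting circle equations gives: 2(xi-x1)x + 2(yi-y1)y = ci
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    # Solve the 2x2 linear system by Cramer's rule.
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

sats = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]   # illustrative anchor positions
true_pos = (3.0, 4.0)
dists = [math.dist(true_pos, s) for s in sats]  # distances a receiver would time
est = trilaterate(*sats, *dists)
```

With error-free distances the estimate recovers the true position exactly, mirroring the "no error" case described above; noisy distances would leave a small residual instead.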

Innovative Applications with the Internet of Things


1)Healthcare Applications:
 Cloud-based ECG Analysis:
Wearable ECG devices collect heart activity data and transmit it to the cloud for analysis using cloud-
based algorithms. This allows for remote patient monitoring, early detection of cardiac abnormalities,
and timely interventions, reducing the need for frequent hospital visits.

 Remote Patient Monitoring:


IoT sensors and wearable devices track vital signs like heart rate, blood pressure, and glucose levels,
transmitting the data to healthcare providers for remote monitoring and personalized care.

 Medication Adherence:

IoT devices can track medication intake, sending reminders to patients and alerts to caregivers if
medication is missed, improving treatment adherence and patient outcomes.

 Smart Hospitals:
IoT devices like RFID tags and sensors help track patients, equipment, and resources within hospitals,
optimizing hospital operations and improving patient care.

Advantages:
1. Elasticity: Cloud infrastructure can grow and shrink according to the requests served. As a result, doctors and hospitals do not have to invest in large computing infrastructures designed after capacity planning, thus making more effective use of budgets.

2. Ubiquity: Cloud computing technologies are easily accessible and promise to deliver systems with
minimum or no downtime.

3. Cost savings: Cloud services are priced on a pay-per-use basis and with volume prices for large
numbers of service requests.
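The elasticity and pay-per-use advantages can be made concrete with back-of-envelope arithmetic. Every price and workload figure below is made up purely for illustration, not a real tariff.

```python
# Compare pay-per-use cloud billing against fixed in-house infrastructure.

def cloud_cost(hours_used, rate_per_hour):
    # Pay only for the capacity actually consumed.
    return hours_used * rate_per_hour

def in_house_cost(capital, monthly_upkeep, months):
    # Pay for peak capacity up front, whether it is used or not.
    return capital + monthly_upkeep * months

# A hypothetical hospital workload: 200 hours/month for a year at $0.50/hour,
# versus a $20,000 server purchase with $300/month upkeep.
cloud = cloud_cost(hours_used=200 * 12, rate_per_hour=0.50)
local = in_house_cost(capital=20000, monthly_upkeep=300, months=12)
```

For a bursty workload like this, the pay-per-use total stays far below the fixed cost; the comparison flips only when utilization is high and sustained, which is exactly the capacity-planning trade-off the text describes.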

Business Applications:
 CRM Integration:
IoT devices can collect customer data, such as preferences and usage patterns, which can be
integrated with CRM systems to personalize marketing efforts, improve customer service, and enhance
customer satisfaction.

 ERP Integration:
IoT data can be integrated with ERP systems to optimize supply chain management, inventory control,
and production processes, leading to increased efficiency and reduced costs.

 Smart Retail:
IoT sensors can track customer behavior in retail stores, providing valuable insights for store layout
optimization, product placement, and targeted promotions.

 Smart Manufacturing:
IoT sensors on factory floors can monitor equipment performance, predict maintenance needs, and
optimize production processes, leading to increased productivity and reduced downtime.

Benefits of Cloud Platforms :


 Cloud platforms like Amazon EC2, S3, CloudStack, Intercloud, Google App Engine, OpenStack, and
OpenNebula offer various benefits in cloud computing.
 These include cost savings, scalability, flexibility, enhanced security, and improved collaboration.
 Each platform has its unique strengths, catering to different needs and use cases, from Infrastructure
as a Service (IaaS) to Platform as a Service (PaaS) and even hybrid cloud solutions.

 Amazon EC2 and S3:


 Amazon EC2 (Elastic Compute Cloud):
 Provides scalable computing capacity, allowing users to rent virtual machines on demand, reducing the
need for upfront investment in hardware.
 It offers flexibility in choosing operating systems and software packages and allows for scaling capacity
based on computing requirements.

 Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the Amazon Web Services (AWS) cloud.
 Using Amazon EC2 eliminates our need to invest in hardware up front, so we can develop and deploy
applications faster.
 We can use Amazon EC2 to launch as many or as few virtual servers as we need, configure security
and networking, and manage storage.
 Amazon EC2's simple web service interface allows us to obtain and configure capacity with minimal friction. It provides us with complete control of our computing resources and lets us run on Amazon's proven computing environment.
 Amazon EC2 changes the economics of computing by allowing us to pay only for capacity that we actually use.
 Amazon EC2 provides developers the tools to build failure-resilient applications and isolate them from common failure scenarios.

Features of Amazon EC2


1. Virtual computing environments, known as instances.
2. Preconfigured templates for the instances, known as Amazon Machine Images (AMIs), that package the bits we need for our server (including the operating system and additional software).
3. Various configurations of CPU, memory, storage, and networking capacity for our instances, known as instance types.
4. Secure login information for our instances using key pairs (AWS stores the public key, and we store the private key in a secure place).
5. Storage volumes for temporary data that are deleted when we stop or terminate our instance, known as instance store volumes.
6. Persistent storage volumes for our data using Amazon Elastic Block Store (Amazon EBS), known as Amazon EBS volumes.
7. A firewall that enables us to specify the protocols, ports, and source IP ranges that can reach our instances, known as security groups.
8. Static IPv4 addresses for dynamic cloud computing, known as Elastic IP addresses.

There are two important terms that we should understand regarding Amazon EC2:
EC2 Instance: An EC2 instance is a virtual machine running on Amazon EC2. EC2 instances represent virtual machines. They are created using an AMI as a template, which is specialized by selecting the number of cores, their computing power, and the installed memory.
AMI: An Amazon Machine Image (AMI, sometimes pronounced "ah-mee" or "Amy") determines the
files, settings and software that are applied when we create a new EC2 instance.
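The AMI/instance relationship above can be modeled in a few lines of plain Python: the AMI is an immutable template; launching specializes it with a chosen capacity. The class names, OS, software list, and sizes are invented for illustration and are not real AWS instance types or the real EC2 API.

```python
# A toy model of launching EC2 instances from an AMI template.

class AMI:
    def __init__(self, name, os, software):
        # The template: files, settings, and software applied at launch.
        self.name, self.os, self.software = name, os, software

class EC2Instance:
    def __init__(self, ami, cores, memory_gb):
        # The instance inherits everything the AMI defines...
        self.os = ami.os
        self.software = list(ami.software)
        # ...and is specialized by its selected capacity (the "instance type").
        self.cores, self.memory_gb = cores, memory_gb

ami = AMI("web-server-ami", "Linux", ["nginx", "python"])
small = EC2Instance(ami, cores=1, memory_gb=2)     # launch a small instance
large = EC2Instance(ami, cores=8, memory_gb=32)    # same AMI, bigger capacity
```

One AMI can stamp out many instances of different sizes, which is the elasticity the preceding benefits list describes: capacity varies per launch while the software image stays fixed.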

Benefits of AWS EC2:


1. Elastic Web-Scale Computing: Amazon EC2 enables us to increase or decrease capacity within
minutes, not hours or days.
2. Completely Controlled: We have complete control of our instances including root access and the
ability to interact with them as we would any machine.
3. Flexible Cloud Hosting Services: We have the choice of multiple instance types, operating systems, and software packages. Amazon EC2 allows us to select a configuration of memory, CPU, instance storage, and the boot partition size that is optimal for our choice of operating system and application.
4. Integrated: Amazon EC2 is integrated with most AWS services, such as Amazon Simple Storage Service (Amazon S3).
5. Reliable: Amazon EC2 offers a highly reliable environment where replacement instances can be rapidly and predictably commissioned.
6. Secure: Cloud security at AWS is the highest priority. As an AWS customer, we benefit from a data center and network architecture built to meet the requirements of the most security-sensitive organizations.
7. Inexpensive: Amazon EC2 passes on to us the financial benefits of Amazon's scale. We pay a very low rate for the compute capacity we actually consume.
8. Easy to Start: There are several ways to get started with Amazon EC2. We can use the AWS
Management Console, the AWS Command Line Tools (CLI), or AWS SDKs. AWS is free to get started.

Amazon Simple Storage Service (S3)


 Offers scalable and durable object storage, making it ideal for storing and retrieving large amounts
of data, including unstructured data like images, videos, and log files.
 It provides high availability and security features, with cost-effectiveness as a key advantage.
 Amazon S3 (Simple Storage Service) is a scalable, high-speed, low-cost web-based service designed for online backup and archiving of data and application programs. It allows us to upload, store, and download any type of file up to 5 TB in size.
 This service allows subscribers to access the same systems that Amazon uses to run its own web sites. The subscriber has control over the accessibility of data, i.e., privately or publicly accessible.
 Amazon Simple Storage Service is storage for the Internet. It is designed to make web-scale computing easier for developers.
 Amazon S3 has a simple web services interface that you can use to store and retrieve any amount of data, at any time, from anywhere on the Web.
 Amazon S3 is designed for 99.999999999% durability, and stores data for millions of applications for companies all around the world.

Features of Amazon S3:


1. Low Cost and Easy to Use: Using Amazon S3, the user can store a large amount of data at very
low charges.
2. Secure: Amazon S3 supports data transfer over SSL, and the data gets encrypted automatically once
it is uploaded. The user has complete control over their data by configuring bucket policies using AWS
IAM.
3. Scalable: Using Amazon S3, there need not be any worry about storage concerns. We can store as
much data as we have and access it anytime.
4. Higher Performance: Amazon S3 is integrated with Amazon CloudFront, which distributes content to
end users with low latency and provides high data transfer speeds without any minimum usage
commitments.
5. Integrated with AWS Services: Amazon S3 is integrated with AWS services including Amazon
CloudFront, Amazon CloudWatch, Amazon Kinesis, Amazon RDS, Amazon Route 53, Amazon VPC,
AWS Lambda, Amazon EBS, Amazon DynamoDB, etc.

Amazon S3 Concepts:

The concepts of Amazon S3 are Buckets, Objects, Keys, Regions, Amazon S3 Data Consistency
Model.

Buckets:
 A bucket is a container for objects stored in Amazon S3. Every object is contained in a bucket.
 For example, if the object named photos/[Link] is stored in the amarsawant bucket, then it is
addressable using the URL [Link]
Objects:
 Objects are the fundamental entities stored in Amazon S3. Objects consist of object data and
metadata. The data portion is opaque to Amazon S3.
 The metadata is a set of name-value pairs that describe the object. These include some default
metadata, such as the date last modified, and standard HTTP metadata, such as Content-Type.
Keys:
 A key is the unique identifier for an object within a bucket. Every object in a bucket has exactly one key.
 The combination of a bucket, key, and version ID uniquely identifies each object. So we can think of
Amazon S3 as a basic data map between "bucket + key + version" and the object itself.
Regions:
 We can choose the geographical AWS Region where Amazon S3 will store the buckets that we create.
We might choose a Region to optimize latency, minimize costs, or address regulatory requirements.
 Objects stored in a Region never leave the Region unless we explicitly transfer them to another
Region. For example, objects stored in the EU (Ireland) Region never leave it.
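The bucket/key addressing scheme can be illustrated with a small helper that builds a virtual-hosted-style object URL, where the bucket becomes part of the hostname and the key becomes the path. The bucket name, key, and region below are illustrative examples:

```python
from urllib.parse import quote

def object_url(bucket, key, region="eu-west-1"):
    # Virtual-hosted-style addressing: bucket in the hostname, key as
    # the path. Key characters outside the unreserved set must be
    # percent-encoded (the '/' separators in a key are kept as-is).
    return f"https://{bucket}.s3.{region}.amazonaws.com/{quote(key)}"

# Illustrative bucket and key names, not real resources:
print(object_url("amarsawant", "photos/sample.jpg"))
```

The same (bucket, key) pair always maps to the same URL, which is what makes S3 behave like a simple, global key-value map.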

Amazon S3 Data Consistency Model:


 Amazon S3 provides read-after-write consistency for PUTs of new objects in our S3 bucket in all
Regions, with one caveat: if we make a HEAD or GET request to the key name (to check whether the
object exists) before creating the object, Amazon S3 provides eventual consistency for read-after-write.
 Amazon S3 offers eventual consistency for overwrite PUTs and DELETEs in all Regions. Updates to a
single key are atomic. For example, if you PUT to an existing key, a subsequent read might return the
old data or the updated data, but it never returns corrupted or partial data.
 Amazon S3 achieves high availability by replicating data across multiple servers within AWS data
centers.
 If a PUT request is successful, our data is safely stored. However, information about the change must
replicate across Amazon S3, which can take some time, and so we might observe the following
behaviors:
 A process writes a new object to Amazon S3 and immediately lists keys within its bucket. Until the
change is fully propagated, the object might not appear in the list.
 A process replaces an existing object and immediately tries to read it. Until the change is fully
propagated, Amazon S3 might return the previous data.

Advantages of using Amazon S3:


1. Creating Buckets: Create and name a bucket that stores data. Buckets are the fundamental
containers in Amazon S3 for data storage.
2. Storing Data: Store an infinite amount of data in a bucket. Upload as many objects as we like into
an Amazon S3 bucket. Each object can contain up to 5 TB of data. Each object is stored and retrieved
using a unique developer-assigned key.
3. Downloading Data: Download our data or enable others to do so. Download our data anytime we
like, or allow others to do the same.
4. Permissions: Grant or deny access to others who want to upload or download data into our
Amazon S3 bucket. Grant upload and download permissions to three types of users. Authentication
mechanisms can help keep data secure from unauthorized access.
5. Standard Interfaces: Use standards-based REST and SOAP interfaces designed to work with any
Internet-development toolkit.

5.4.2 CloudStack:
 CloudStack: An open-source Infrastructure as a Service (IaaS) platform that enables users to
deploy and manage private, public, and hybrid clouds.
 It focuses on providing a user-friendly interface for managing virtual machines, storage, and
networking resources.
 CloudStack facilitates hybrid cloud deployment, allowing organizations to connect private clouds
with public cloud services.
 Apache CloudStack is open source software designed to deploy and manage large networks of
virtual machines as a highly available, highly scalable Infrastructure as a Service (IaaS) cloud
computing platform.
 CloudStack is used by a number of service providers to offer public cloud services, and by many
companies to provide an on-premises (private) cloud offering, or as part of a hybrid cloud solution.
 CloudStack currently supports the most popular hypervisors: VMware, KVM, Citrix XenServer, Xen
Cloud Platform (XCP), Oracle VM server, and Microsoft Hyper-V.
 CloudStack is cloud software that helps users shape the cloud to their requirements; it controls and
supports the network, storage, and compute nodes in a cloud infrastructure.
 CloudStack allows users to coordinate virtualized servers, networking, and network storage to
provide Infrastructure as a Service (IaaS) just like the big hosting providers, but on their own hardware.

CloudStack can be used for following multiple purposes:


1. Service providers can offer virtualized hosting in an elastic cloud computing configuration.
2. Enterprises can stage development, testing, and production in a consistent way, easing the
development and publishing processes for applications.
3. Web content providers can deploy scalable, elastic web infrastructure that can adapt to meet the
demand of their readership.
4. Software-as-a-Service providers can offer true multi-tenant software hosting while securing each
user's environment.

CloudStack is capable of the following:
1. Establishing an on-demand, flexible cloud computing service: service providers may offer virtual
machines, networking arrangements, and storage capacity as a service over the Internet.
2. CloudStack may be used to construct an on-demand cloud computing service with flexibility; the
service provider delivers storage, virtual machines, and much more over the Internet.
3. An organization could establish an on-premises private cloud with CloudStack.

Functionality of CloudStack:
 Works with hosts running XenServer, XCP, KVM, Hyper-V, and/or VMware ESX with vSphere.
 Provides a friendly Web-based UI for managing the cloud.
 Provides a native API.
 May provide an Amazon S3/EC2-compatible API (optional).
 Manages templates, snapshots, and ISO images (secondary storage).
 Accounting of network, compute, and storage resources.
 Multi-tenant account separation and user management.
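CloudStack's native API authenticates each call with an HMAC-SHA1 signature over the sorted, lowercased query string. The sketch below follows the documented scheme; encoding corner cases may differ between CloudStack versions, and the API key and secret below are made-up placeholders:

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def sign(params, secret_key):
    # Sort the parameters alphabetically, build the query string,
    # lowercase it, HMAC-SHA1 it with the account's secret key, and
    # base64-encode the digest to produce the signature.
    qs = "&".join(f"{k}={quote(str(v), safe='*')}"
                  for k, v in sorted(params.items()))
    digest = hmac.new(secret_key.encode(),
                      qs.lower().encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# Placeholder credentials for illustration only:
params = {"command": "listVirtualMachines",
          "apikey": "demo-api-key",
          "response": "json"}
print(sign(params, "demo-secret-key"))
```

The resulting value is URL-encoded and appended to the request as the `signature` parameter; the management server recomputes it with the stored secret key to verify the caller.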

Benefits of CloudStack:


1. Cost-effectiveness: Cloud computing services can be costly; as free, open-source software,
CloudStack fits businesses with a tight budget.
2. Quality Services: Apache CloudStack comes with tools that enable users to deliver cloud services.
3. Powerful API: This cloud management solution features a RESTful API that allows users to connect
with an array of third-party services.
4. An Array of Administrative Capabilities: With Apache CloudStack, businesses can manage their
users, assign management tasks, segregate cloud resources, etc. Hence, it gives them complete control
over the operation.
5. Hypervisor Agnostic: The software can support any hypervisor from any manufacturer. This feature
makes the platform a flexible solution.

InterCloud:
 Intercloud: A concept that involves connecting multiple cloud providers to offer greater flexibility,
redundancy, and potentially lower costs.
 It enables users to move applications and data across different cloud providers, avoiding vendor lock-in
and potentially accessing a wider range of geographical locations.
 InterCloud is the concept of connected cloud networks, including public, private, and hybrid clouds. It
incorporates a number of technology efforts put together to improve interoperability and portability
among cloud networks.
 The InterCloud started as a research project in Cisco in 2008 and was soon taken up by the Institute of
Electrical and Electronics Engineers (IEEE).
 The goal was to improve interoperability of cloud networks. The terms "InterCloud Computing" and
"InterCloud" were used to address this concept.
 The term "InterCloud" is originally attributed to the journalist and technology author Kevin Kelly,
according to several sources.

Several key tenets of the Intercloud exist to accomplish this:
 Software-Defined Infrastructure: The software-defined infrastructure is fluid and ever-evolving to
support the ever-changing and growing demands of the business. Software-defined services support all
aspects of the business application needs.
 Application: parameters and configuration details of the application, including integration, performance
optimizations, metrics, and service-level objectives (SLOs).
 Integrated Application Platform: any underlying dependent or loosely coupled services the
application requires (message queue, Domain Name System (DNS), memcache).
 Network and Security: network and security policies and the enforcement of those policy objectives.
 Data: being able to abstract data aspects of the application and provide analytics and business
outcome learnings
 BSS (Back-Office Support Systems): being able to abstract the BSS aspects to enable complete
abstraction to any existing BSS systems in the enterprise or Cisco Powered partner clouds.
 OSS (Operational Support Systems): being able to abstract the OSS aspects to enable complete
abstraction to any existing OSS systems in the enterprise or Cisco Powered partner clouds.
 Enterprise is a special case of private cloud where the deployment model is the enterprise data center.
Additionally, the enterprise model may want to burst to other deployment models, so managing the
policies and processes of the enterprise becomes critical.

 As shown in Fig. 5.9, the broker offers three facilities to the cloud users: SaaS, PaaS, and
IaaS. IaaS services are offered by cloud provider A, SaaS services by cloud provider B, and PaaS
services by cloud provider C.
 Cloud providers have been deploying datacenters in multiple locations throughout the globe. For
example, Amazon EC2 Cloud services are available via Amazon datacenters located in the United
States, Europe, and Singapore.
 This disbursement is leading to the emergence of a notion, called the InterCloud, supporting scalable
delivery of application services by harnessing multiple datacenters from one or more providers.
 In addition to enhancing performance and reliability, these InterClouds provide a powerful means of
reducing energy-related costs.
 One reason is that the local demand for electricity varies with time of day and weather. This causes
time-varying differences in the price of electricity at each location.
 Moreover, each site has a different source of energy (such as coal, hydroelectric, or wind), with
different environmental costs. This gives scope to adjust the load sent to each location, and the number
of servers powered on at each location, to improve efficiency.
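The load-placement idea above can be sketched as picking, for each unit of work, the location with the lowest current electricity price. The sites and prices below are made-up illustrative numbers, not real tariffs:

```python
def cheapest_site(prices):
    """Pick the datacenter with the lowest current electricity price.

    prices: mapping of location name -> current price (e.g. $/kWh).
    """
    return min(prices, key=prices.get)

# Illustrative prices; real schedulers would also weigh latency,
# capacity, and the environmental cost of each site's energy source.
prices = {"us-east": 0.09, "eu-west": 0.12, "ap-south": 0.07}
print(cheapest_site(prices))
```

Because prices vary with time of day and weather at each site, re-evaluating this choice periodically lets an InterCloud shift load (and power servers down elsewhere) to cut energy costs.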

InterCloud architecture:

InterCloud architecture consists of the following layers:
1. Cloud Platform and Underlying Infrastructure:
 The OpenStack platform and underlying infrastructure consist of physical, virtual, and automation
components. The beginning point for any service is the physical data center, controls, and hardware.
 Whatever services or virtualization you build on top of this infrastructure will only be as good as the
underlying physical design and the security controls you integrate.
2. Cloud Services and Value-added Products:
 In the layer above the OpenStack platform, services and capabilities are added by the InterCloud to
enable advanced services around networking, security, NFV, data, database, load balancing, and
application policy.
 As we move up the stack, it is critical to focus on the API interfaces and capabilities. This layer is also
where the InterCloud Fabric (ICF) product resides to enable point-to-point secure Intercloud
connectivity.
3. Application Enablement Platform as a Service:
 The InterCloud's primary use case is to enable application portability and interoperability across a world
of clouds. By virtue of its broad definition, one can throw various solutions under the PaaS umbrella.
 For our purposes we will define what PaaS encompasses by looking at what customer problems need
to be solved and working backward from there.
4. Cloud OSS:
 The OSS consists of the following management aspects:
 Change management means managing break/fix and new features to the system, following the change
process and change windows.
 SLA management means managing the SLA thresholds set by configuration and performance
management.
 Incident management means managing incidents as they occur, documenting results, and creating
tickets.

5. Cloud BSS:
 The InterCloud provides the following to BSS for several management functions:
 Accounting provides data for managing the user accounts and departments.
 Contracts and Agreements manages the contracts and support agreements.
 Service Offering Catalog selects what offerings are allowed in the catalog.
 Pricing sets the price for the service.
6. Marketplace:
 The Marketplace is the curated set of products and services from Cisco and its InterCloud partners that
enables enterprise customers to consume products and services in the enterprise catalog and across
the global InterCloud product catalog from a simple, easy-to-use marketplace.
 The Marketplace performs the following operations:
 Manage online user ID setup and user directory.

Google AppEngine:
 Google App Engine: A platform-as-a-service (PaaS) that simplifies the development and deployment
of web applications.
 It provides a managed environment, handling infrastructure concerns like scaling and maintenance,
allowing developers to focus on building applications.
 Google App Engine offers easy-to-use APIs, robust security features, and a pay-as-you-go pricing
model.
 Google AppEngine is a PaaS implementation that provides services for developing and hosting
scalable Web applications.
 AppEngine is essentially a distributed and scalable runtime environment that leverages Google's
distributed infrastructure to scale out applications facing a large number of requests by allocating more
computing resources to them and balancing the load among them.
 Application usage of Google resources and services is metered by AppEngine, which bills users when
their applications exceed their free quotas.
 The features supported by Google App Engine are support for popular languages, elasticity, and
complete control. Google App Engine supports apps written in a variety of programming languages,
such as Java, Python, PHP, and Go.

Google App Engine comprises the following characteristics:


1. Automatic scaling and load balancing.
2. Scheduled jobs for triggering events at particular times or regular intervals.
3. Integration with other APIs and Google cloud services.
4. With Google App Engine, users have a cloud-based platform that enables them to create,
maintain and scale apps without the need to have and/or maintain servers.
Why Google AppEngine?
1. Fully managed service with zero ops (no need to work at infrastructure level at all).
2. Deploy at Google scale.
3. Support for all popular languages.

4. Lots of built-in features such as scaling up/down, logging, monitoring, diagnostics, etc.
5. Building quickly; time to market is short.
6. Pay only for what you use.
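A minimal App Engine deployment is driven by an `app.yaml` descriptor, which is how the zero-ops and automatic-scaling properties above are expressed in practice. The sketch below assumes the Python standard environment; the runtime name and scaling values are illustrative, not recommendations:

```yaml
# app.yaml -- minimal App Engine service descriptor (illustrative values)
runtime: python39        # managed runtime; no servers to administer
service: default
automatic_scaling:       # the platform scales instances up and down
  min_instances: 0       # scale to zero when idle: pay for what we use
  max_instances: 5
handlers:
- url: /.*
  script: auto           # route all requests to the app entry point
```

Deploying the descriptor and application code (for example with the `gcloud` CLI) is all that is needed; scaling, load balancing, logging, and monitoring come from the platform.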

Architecture and Core Concepts in AppEngine:

The platform is logically divided into four major components, namely: infrastructure, runtime
environment, underlying storage, and a set of scalable services.
Infrastructure:
 AppEngine hosts Web applications, and its primary function is to serve user requests efficiently. To do
so, AppEngine's infrastructure takes advantage of many servers available within Google datacenters.
 For each HTTP request, AppEngine locates the servers hosting the application that processes the
request, evaluates their load and, if necessary, allocates additional resources (i.e., servers) or redirects
the request to an existing server.
 The infrastructure is also responsible for monitoring application performance and collecting statistics on
which the billing is calculated.
Runtime Environment:
 The runtime environment represents the execution context of applications hosted on AppEngine.
 AppEngine supports applications that are developed only with managed or interpreted languages,
which by design require a runtime for translating their code into executable instructions.
 Therefore, sandboxing is achieved by means of modified runtimes for applications that disable some of
the common features normally available with their default implementations.
Storage:
 AppEngine provides various types of storage, which operate differently depending on the volatility of
the data.
 There are three different levels of storage:
o In-memory cache
o Storage for semi-structured data

o Long-term storage for static data.
 Google AppEngine mainly uses two types of storage: Datastore and static file servers.
 Datastore: Datastore is a service that allows developers to store semi-structured data. The service is
designed to scale and optimized for quick data access.
 Static File Servers: Web applications are composed of dynamic and static data. Dynamic data are a
result of the logic of the application and the interaction with the user.

Application Services:
 Applications hosted on AppEngine get the most from the services made available through the
runtime environment.
 These services simplify most of the common operations that are performed in Web applications:
o access to data, account management, integration of external resources, messaging and
communication, image manipulation, and asynchronous computation.

Pros of AppEngine:
1. No need to buy servers or server space (no maintenance).
2. Makes solving the problem of scaling easier.
3. Free up to a certain level of consumed resources.
4. Access to Google infrastructure.
5. Less likely that Google (as a company) is going to close shop than Heroku or Appspot.
6. Scalable.
7. Easy and cheap.

Cons of AppEngine:
1. Lock-in to Google AppEngine.
2. Developers have read-only access to the filesystem on AppEngine.
3. AppEngine can only execute code called from an HTTP request (except for scheduled background
tasks).
4. Users may upload arbitrary Python modules, but only if they are pure-Python: C and Pyrex modules are
not supported.
5. AppEngine limits the maximum rows returned from an entity get to 1000 rows per Datastore call.
6. Java applications may only use a subset (The JRE Class White List) of the classes from the JRE
standard edition.
7. Java applications cannot create new threads.

OpenStack
 OpenStack: An open-source cloud operating system that provides a flexible and modular platform
for building and managing private, public, and hybrid clouds.

 It offers a wide range of services for managing virtual machines, storage, networking, and other
resources, giving users control over their cloud infrastructure.
 OpenStack is a project originally started by NASA and Rackspace for delivering a cloud computing
and storage platform.
 Today, OpenStack is a global collaboration of developers and technologists producing an open
source cloud computing platform for public and private clouds.
 OpenStack is an open-source cloud computing platform that enables users to create and manage
public and private clouds.
 OpenStack software delivers a massively scalable cloud operating system consisting of three
major components:
o Compute: Open source software designed to provision and manage large networks of
virtual machines, creating a redundant and scalable cloud computing platform.
o Object Storage: Open source software for creating redundant, scalable object storage
using clusters of standardized servers to store petabytes of accessible data (code-named
"Swift").
o Image Service: Provides discovery, registration, and delivery services for virtual disk
images (code-named "Glance").

Why OpenStack?
 Control and Flexibility: Open source platform means you're never locked to a proprietary vendor, and
modular design can integrate with legacy or third-party technologies to meet your business needs.
 Industry Standard: More than 60 leading companies from over a dozen countries are participating in
OpenStack, including Cisco, Citrix, Dell, Intel, and Microsoft, and new OpenStack clouds are coming
online across the globe.

Architecture and Components of OpenStack:


 OpenStack is made up of many different moving parts. Because of its open nature, anyone can add
additional components to OpenStack to help it to meet their needs.
 But the OpenStack community has collaboratively identified nine key components that are a part of the
"core" of OpenStack shown in Fig. 5.12.
 Which are distributed as a part of any OpenStack system and officially maintained by the OpenStack
community.
o Nova is the primary computing engine behind OpenStack. It is used for deploying and
managing large numbers of virtual machines and other instances to handle computing tasks.
o Swift is a storage system for objects and files. Rather than the traditional idea of referring to
files by their location on a disk drive, developers can instead refer to a unique identifier
referring to the file or piece of information and let OpenStack decide where to store this
information.
o Cinder is a block storage component, which is more analogous to the traditional notion of a
computer being able to access specific locations on a disk drive.

o Neutron provides the networking capability for OpenStack. It helps to ensure that each of the
components of an OpenStack deployment can communicate with one another quickly and
efficiently.
o Horizon is the dashboard behind OpenStack. It is the only graphical interface to OpenStack,
so for users wanting to give OpenStack a try, this may be the first component they actually
"see."
o Keystone provides identity services for OpenStack. It is essentially a central list of all of the
users of the OpenStack cloud, mapped against all of the services provided by the cloud, which
they have permission to use.
o Glance provides image services to OpenStack. In this case, "images" refers to images (or
virtual copies) of hard disks.
o Ceilometer provides telemetry services, which allow the cloud to provide billing services to
individual users of the cloud.
o Heat is the orchestration component of OpenStack, which allows developers to store the
requirements of a cloud application in a file that defines what resources are necessary for that
application.

Main Components of OpenStack:


 Currently, OpenStack focuses on the development of two aspects of cloud computing to address
compute and storage aspects with the OpenStack Compute and OpenStack Storage solutions.
 The "OpenStack Compute is the internal fabric of the cloud creating and managing large groups of
virtual private servers" and "OpenStack Object Storage is software for creating redundant, scalable
object storage using clusters of commodity servers to store terabytes or even petabytes of data."
OpenStack Compute:
 The architecture for Nova is built on the concepts of shared-nothing and messaging-based information
exchange. Hence, most communication in Nova is facilitated by message queues.
 Fig. 5.13 shows the main architecture of OpenStack Compute.
 In this architecture, the API Server receives HTTP requests from boto, converts the commands to and
from the API format, and forwards the requests to the cloud controller.

 The cloud controller maintains the global state of the system, ensures authorization while interacting
with the User Manager via Lightweight Directory Access Protocol (LDAP), interacts with the S3 service,
and manages nodes as well as storage workers through a queue.
It includes the following types of nodes:

 NetworkController manages address and virtual LAN (VLAN) allocations.


 Routing Node governs the NAT (network address translation) conversion of public IPs to private IPs,
and enforces firewall rules.
 Addressing Node runs Dynamic Host Configuration Protocol (DHCP) services for private networks.
 Tunneling Node provides VPN connectivity.
 The network state managed in the distributed object store consists of the following:

OpenStack Storage:
 The OpenStack storage solution is built around a number of interacting components and concepts,
including a proxy server, a ring, an object server, a container server, an account server, replication,
updaters, and auditors.
 The role of the proxy server is to enable lookups of the accounts, containers, or objects in the
OpenStack storage rings and to route the requests.
 A ring represents a mapping between the names of entities stored on disk and their physical locations.
Separate rings for accounts, containers, and objects exist.
 A ring includes the concept of using zones, devices, partitions, and replicas. Hence it allows the system
to deal with failures and isolation of zones, where a zone can represent a drive, a server, a cabinet, a
switch, or even a data center.
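The ring's name-to-location mapping can be approximated in a few lines: Swift hashes an object's path with MD5 and keeps the top bits as a partition number, and the ring assigns each partition to devices. The partition power, device list, and placement rule below are simplified assumptions; real rings use a ring builder that balances zones, weights, and replicas:

```python
import hashlib

PART_POWER = 8                      # 2**8 = 256 partitions (illustrative)
DEVICES = ["sdb1@server1", "sdb1@server2", "sdb1@server3"]

def partition(account, container, obj):
    # Hash the object path with MD5 and keep the top PART_POWER bits
    # as the partition number (real rings also mix in a hash prefix
    # and suffix to keep mappings private to the cluster).
    path = f"/{account}/{container}/{obj}".encode()
    h = int(hashlib.md5(path).hexdigest(), 16)
    return h >> (128 - PART_POWER)

def device_for(part, replica=0):
    # Toy placement: round-robin over devices; a real ring places each
    # replica of a partition in a different zone for failure isolation.
    return DEVICES[(part + replica) % len(DEVICES)]

p = partition("AUTH_demo", "photos", "cat.jpg")
print(p, device_for(p))
```

Because the mapping is deterministic, any proxy server can locate an object's devices from its name alone, without a central lookup table.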

OpenNebula
 OpenNebula: A lightweight, open-source cloud management platform that allows organizations to build
and manage both private and hybrid clouds.
 It focuses on simplicity and flexibility, enabling users to integrate with various virtualization technologies
and cloud providers.
 OpenNebula is an open source cloud computing platform designed to build and manage enterprise
clouds and virtualized data centers.

 It combines existing virtualization with features like multi-tenancy, automatic provisioning, and elasticity
to offer on-demand applications and services.
 OpenNebula is an open source cloud computing platform used for managing and orchestrating data
center virtualization and cloud infrastructures.
 It enables users to build private, hybrid, and edge clouds by managing virtualized data centers
efficiently.
 OpenNebula is a cloud computing tool for running various distributed data center infrastructures.

Features of OpenNebula:

1. Unified Management: It provides a single control panel to manage hybrid and multi-cloud environments.
2. Simplicity: OpenNebula is designed to be easy to use, install, and maintain.
3. Flexibility: It is infrastructure-agnostic, allowing the creation of enterprise clouds that meet specific needs
on-premises or in the cloud.
4. Stability: OpenNebula is proven in large-scale production deployments with thousands of distributed
nodes.
5. Cost-Effectiveness: It can reduce the total cost of ownership (TCO) compared to other solutions.
6. Vendor Neutrality: OpenNebula is an open cloud architecture to orchestrate compute, storage, and
networking, including:
 Linux-based controller;
 CLI, XML-RPC, EC2-compatible Query and OCA interfaces.
 Xen, KVM, and VMware backend;
 Interface to public clouds (Amazon EC2, ElasticHosts);
 virtual networks,
 dynamic resource allocation,
 advance reservation of capacity
 provisioning of dynamic resources
 support for changing environments via live migration and VM snapshots.

OpenNebula Architecture and its Main Components:


 OpenNebula is an open source toolkit which allows users to transform existing infrastructure into an
IaaS cloud with cloud-like interfaces.
 Fig. 5.14 shows the OpenNebula architecture and its main components.

 The architecture of OpenNebula has been designed to be flexible and modular to allow integration with
different storage and network infrastructure configurations, and hypervisor technologies.
 The last main components are the access drivers. They provide an abstraction of the underlying
infrastructure to expose the basic functionality of the monitoring, storage, and virtualization services
available in the cluster.
 Therefore, OpenNebula is not tied to any specific environment and can provide a uniform management
layer regardless of the virtualization platform.
 Additionally, OpenNebula offers management interfaces to integrate the core's functionality within other
data-center management tools, such as accounting or monitoring frameworks.
 A subset of this functionality is exposed to external users through a cloud interface. OpenNebula is
able to adapt to organizations with changing resource needs, including addition or failure of physical
resources.
 OpenNebula currently includes an EC2 driver, which can submit requests to Amazon EC2 and
Eucalyptus, as well as an ElasticHosts driver.
 Regarding storage, an Image Repository allows users to easily specify disk images from a catalog
without worrying about low-level disk configuration attributes or block device mapping.

Benefits of OpenNebula:
1. Centralized Management: Simplifies IT infrastructure management with a single interface.
2. Scalability: Easily scale resources by adding or removing nodes.
3. Hybrid Cloud Support: Can deploy nodes in public clouds like AWS and Azure.
4. Resource Optimization: The built-in scheduler efficiently allocates resources based on defined
policies.
