Ubiquitous Cloud Computing Trends
Ubiquitous computing:
Ubiquitous computing, also known as pervasive computing, is a concept where computing capabilities
are embedded into everyday objects and environments, creating a seamlessly connected and context-
aware network that makes technology an invisible, integral part of life.
Ubiquitous cloud computing is a new computing paradigm that merges ubiquitous computing and cloud
computing to create an interconnected network of devices and services available anytime, anywhere.
This hybrid model leverages the power of the cloud to manage vast amounts of data from embedded
devices and deliver context-aware services across diverse platforms, moving beyond centralized cloud
models to a more distributed and dynamic computing landscape.
Definition: A concept in software engineering and computer science where computing is made to
appear anytime and everywhere. In contrast to desktop computing, ubiquitous computing can occur
using any device, in any location, and in any format.
Ubiquitous computing integrates microprocessors into everyday objects like lights, cameras, TVs,
wearables, voice assistants, and sensors. These interconnected devices can communicate and
respond intelligently to provide services for various applications.
Prof:Gavali R.B.
In 2010, two large-scale private clouds were under construction in the United States and European
Union.
The U.S. cloud, called Nebula, was developed by NASA and is designed to let NASA scientists run
climate models on remote systems provided by NASA. This can save thousands of NASA users
from acquiring supercomputers at their local sites.
The EU cloud is built by CERN in Geneva. This is a large private cloud for distributing data,
applications, and computing resources to thousands of scientists around the world. CERN deals
with large data sets and high throughput with a global workforce and a finite budget.
While UbiComp is the broader goal of embedding computing into our environment to provide
context-aware services, IoT provides the infrastructure of connected sensors, software, and
devices that make this goal achievable.
Ubiquitous computing is the overarching vision of a world filled with invisible technology, and the
Internet of Things is a major technological enabler that makes this vision a reality.
IoT's interconnected network of "things" forms the foundation for the pervasive, context-aware
computing environments that define ubiquitous computing.
Architecture of Internet of Things
Tags contain unique identifiers and can be passive (relying on reader for power) or active (with their
own power source).
RFID is used for asset tracking, inventory management, and access control in IoT.
RFID is applied with electronic labels or RFID tags on any objects being monitored or tracked. The
tagging may be applied to any objects, such as merchandise, tools, smartphones, computers,
animals, or people.
The purpose is to identify and track the objects using radio waves or sensing signals. Some tags
can be read from tens or hundreds of meters away via a wireless reader.
Most RFID tags contain at least two major parts. One is an integrated circuit for storing and
processing information, modulating and demodulating a radio-frequency (RF) signal, and
performing other special functions; the other is an antenna for receiving and transmitting the signal.
These tags are not that expensive to produce in large quantities, and they can be made small
enough to fit on almost any product or object.
RFID has many industrial applications including merchandise tracking and supply chain
management. Its use can improve the efficiency of inventory control and accuracy of assembly line
management.
Active and semi-passive tags are used for reading at ranges of 30 to 100 meters, as their
on-board batteries boost the tag's range.
Sensor Networks and ZigBee Technology
Today's sensor networks are mostly wireless, and are known as wireless sensor networks (WSNs).
A typical WSN consists of spatially distributed autonomous sensors to cooperatively monitor
physical or environmental conditions, such as temperature, sound, vibration, pressure, motion, or
pollutants.
The development of wireless sensor networks was motivated by military applications such as
battlefield surveillance.
WSN technology is now used in many industrial and civilian application areas, including process
monitoring and control, machine health monitoring, environment and habitat monitoring, health
care and home automation and intelligent traffic control.
Wireless Sensor Networks:
A WSN is a group of specialized transducers with a communications infrastructure intended to
monitor and record conditions at diverse locations.
Commonly monitored parameters are temperature, humidity, pressure, wind direction and speed,
illumination Intensity, vibration intensity, sound intensity, power-line voltage, chemical
concentrations, pollutant levels, and vital body functions.
A sensor network consists of multiple detection stations called sensor nodes, each of which is
small, lightweight, and portable.
Every sensor node is equipped with a transducer, microcomputer, transceiver, and power source.
The transducer generates electrical signals based on sensed data.
They are connected to the Base Station which acts as a processing unit in the WSN System. The
base Station in a WSN System is connected through the Internet to share data. WSN can be used
for processing, analysis, storage, and mining of the data.
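The node anatomy above (transducer, microcomputer, transceiver, power source, plus a base station) can be sketched as a toy simulation. All class and method names here are illustrative, not part of any real WSN stack:

```python
import random

class SensorNode:
    """Simulated WSN node: transducer + microcomputer + transceiver."""
    def __init__(self, node_id, base_station):
        self.node_id = node_id
        self.base_station = base_station

    def sense(self):
        # Transducer: turns a physical quantity (here, temperature) into a value.
        return round(random.uniform(18.0, 26.0), 1)

    def report(self):
        # Transceiver: send the reading to the base station.
        self.base_station.receive(self.node_id, self.sense())

class BaseStation:
    """Processing unit of the WSN; a real one would relay data to the Internet."""
    def __init__(self):
        self.readings = {}

    def receive(self, node_id, value):
        self.readings.setdefault(node_id, []).append(value)

base = BaseStation()
nodes = [SensorNode(i, base) for i in range(3)]
for node in nodes:
    node.report()
```

In this sketch every node pushes its reading directly to the base station; a real deployment would add multi-hop routing and power management.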
By comparison, Bluetooth runs at 720 Kbps and the 2.5G GPRS mobile network at 115 Kbps. ZigBee has
the slowest rate, at 20-250 Kbps. However, ZigBee has high reliability and low power/cost advantages.
First Generation:
The sensors used in the first generation were mainly vehicle-placed or air-dropped single sensors.
They were quite bulky, like a shoebox, and weighed in the kilograms.
Networks assumed only star or point-to-point topologies and were powered by large batteries that
could last hours or days.
Second Generation:
In the second generation, the sensors became smaller, like a pack of playing cards, and weighed
in the grams, and worked with AA batteries that lasted for days or weeks.
They appear in client/server or P2P configurations.
Third Generation:
The current generation of sensors is as small as dust particles; they weigh only a negligible amount
and are used in P2P networks for embedded and remote applications.
ZigBee Network
The term "ZigBee" originates from the behavior of honeybees when they return to the beehive. It is
a high-level communication protocol using small, low-power, radio-based sensors as specified by the
IEEE 802.15.4 standard.
For example, ZigBee devices are applied in WHANs in wireless light switches and consumer
electronics controlled by short-range radio.
The ZigBee technology is simpler to use and less expensive than Bluetooth or WiFi. The RF applied in
ZigBee results in low data rates, long battery life, and secure networking.
The low cost allows the technology to be widely deployed in wireless control and monitoring
applications.
The low power usage allows a longer life with smaller batteries. The mesh networking provides higher
availability and an extended operating range.
Features of Zigbee:
1. Support for multiple network topologies such as point-to-point, point-to-multipoint and mesh
networks.
2. Low duty cycle - provides long battery life.
3. Low latency.
4. Direct Sequence Spread Spectrum (DSSS).
5. Up to 65,000 nodes per network.
6. 128-bit AES encryption for secure data connections.
7. Collision avoidance, retries and acknowledgements.
ZigBee Device Types:
1. ZigBee Coordinator (ZC): The root of the network; there is exactly one coordinator per network,
responsible for forming the network and storing its configuration.
2. ZigBee Router (ZR): This can act as an intermediate router, passing on data between other devices.
3. ZigBee End Device (ZED): This contains just enough functionality to talk to its parent node (either
the coordinator or a router). The end device cannot relay data from other devices.
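The relay rule above (coordinators and routers forward frames; end devices cannot) can be captured in a few lines. This is a hypothetical model, not ZigBee stack code:

```python
# Which ZigBee roles may relay frames for other devices (per the text above):
CAN_RELAY = {"coordinator": True, "router": True, "end_device": False}

def path_is_valid(path):
    """A multi-hop path is valid only if every intermediate node can relay."""
    return all(CAN_RELAY[role] for role in path[1:-1])

# An end device may originate traffic that routers forward to the coordinator,
# but it can never sit in the middle of someone else's path.
ok = path_is_valid(["end_device", "router", "coordinator"])    # valid path
bad = path_is_valid(["router", "end_device", "coordinator"])   # invalid path
```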
ZigBee Protocol Architecture:
1. Physical Layer: This layer performs modulation and demodulation upon transmitting and
receiving signals, respectively. It supports the 2.4 GHz, 868 MHz, and 915 MHz frequency bands.
2. MAC Layer: This layer is responsible for reliable transmission of data, accessing the shared
channel with carrier sense multiple access with collision avoidance (CSMA/CA). It also transmits the
beacon frames for synchronizing communication.
3. Network Layer: This layer takes care of all network-related operations such as network setup,
end-device connection and disconnection, routing, and device configuration.
4. Application Support Sub-Layer (APS): This layer enables the services necessary for the ZigBee
Device Object (ZDO) and application objects to interface with the network layer for data management.
It is also responsible for matching two devices according to their services and needs.
5. Application Framework: It provides two types of data services: key-value pair and generic
message services. A generic message is a developer-defined structure, whereas the key-value pair
service is used for getting attributes within the application objects. The ZDO provides an interface
between application objects and the APS layer in ZigBee devices; it is responsible for detecting,
initiating, and binding other devices to the network.
Fig:ZigBee Protocol Architecture
Applications of ZigBee
1. Industrial Automation: In manufacturing and production industries, a communication link
continually monitors various parameters and critical equipment. ZigBee considerably reduces this
communication cost as well as optimizing the control process for greater reliability.
2. Home Automation: ZigBee is perfectly suited for controlling home appliances remotely, such as
lighting system control, appliance control, heating and cooling system control, safety equipment
operation and control, surveillance, and so on.
3. Smart Metering: ZigBee remote operations in smart metering include energy consumption response,
pricing support, security over power theft, etc.
4. Smart Grid Monitoring: ZigBee operations in the smart grid involve remote temperature
monitoring, fault locating, reactive power management, and so on.
This has triggered the creation of global positioning and navigation systems. The GPS was
developed in 1973 by the U.S. Air Force.
The global positioning system is a satellite-based navigation system consisting of a network of 24
orbiting satellites that are eleven thousand nautical miles in space, in six different orbital paths.
For anyone with a GPS receiver, the system will provide accurate location and time information for
an unlimited number of users in all weather conditions, day and night, anywhere in the world.
Each GPS satellite broadcasts a message that includes the satellite's current position, orbit, and
exact time.
A GPS receiver combines the broadcasts from multiple satellites to calculate its exact position
using a process called triangulation.
Three satellites are required in order to determine a receiver's location, though a connection to four
satellites is ideal since it provides greater accuracy.
Most GPS devices also use some type of location caching to speed up GPS detection. By
memorizing its previous location, a GPS device can quickly determine what satellites will be
available the next time it scans for a GPS signal.
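The position fix described above is, strictly speaking, trilateration: it uses distances to the satellites, not angles. In a simplified two-dimensional version with three known reference positions and measured ranges, subtracting the circle equations pairwise yields a small linear system; a minimal sketch (not real GPS code, which also solves for the receiver's clock error):

```python
def trilaterate(p1, r1, p2, r2, p3, r3):
    """2-D position fix from three known points and measured distances.
    Subtracting the circle equations pairwise gives a 2x2 linear system."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a, b = 2 * (x2 - x1), 2 * (y2 - y1)
    c = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    d, e = 2 * (x3 - x2), 2 * (y3 - y2)
    f = r2**2 - r3**2 - x2**2 + x3**2 - y2**2 + y3**2
    denom = a * e - b * d          # zero if the three points are collinear
    return (c * e - b * f) / denom, (a * f - c * d) / denom

# Receiver actually at (3, 4); ranges measured to three known reference points:
x, y = trilaterate((0, 0), 5.0, (10, 0), 65**0.5, (0, 10), 45**0.5)
```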
GPS satellites circle the Earth twice a day in very precise orbits and transmit signals to Earth.
GPS devices on the ground receive these signals and use a triangulation method to calculate the
user's exact location. At different times, different satellites will become visible to the receiver.
As shown in Fig. 5.6, the control segment is composed of a master control station and a host of
dedicated and shared ground antennas and monitor stations.
The user segment is composed of hundreds of thousands of allied users of the secure GPS
precise positioning services.
Tens of millions of civil, commercial, and scientific users are only allowed to use a degraded
functionality, the so-called standard positioning service, which cannot be used for hostile
purposes.
The GPS tracking device makes it possible to track people, vehicles, and other assets anywhere
on Earth.
Types of GPS
There are two types of GPS tracking systems namely, Passive GPS and Active GPS.
1. In passive GPS tracking, the GPS unit is just a receiver, not a transmitter. Passive GPS tracking
devices lack the transmission capability to send GPS data from the vehicle. Therefore, passive GPS
trackers are also known as data loggers, used primarily as recording devices.
2. Active GPS tracking units incorporate a method to transmit user information from a vehicle.
Although satellite uplink of data is available, cellular data communication is the most common and cost-
effective. Automatic incremental updating provides a continuous source of tracking throughout a recording
period. This provides current as well as historical logging positions.
Passive GPS tracking devices store GPS location data in their internal memory which can then be
downloaded to a computer or mobile for viewing at a later time, while active GPS tracking systems
send the data at regular intervals to be viewed in real time.
Medication Adherence:
IoT devices can track medication intake, sending reminders to patients and alerts to caregivers if
medication is missed, improving treatment adherence and patient outcomes.
Smart Hospitals:
IoT devices like RFID tags and sensors help track patients, equipment, and resources within hospitals,
optimizing hospital operations and improving patient care.
Advantages:
1. Elasticity: Cloud infrastructure can grow and shrink elastically according to the requests
served. As a result, doctors and hospitals do not have to invest in large computing infrastructures designed
after capacity planning, thus making more effective use of budgets.
2. Ubiquity: Cloud computing technologies are easily accessible and promise to deliver systems with
minimum or no downtime.
3. Cost savings: Cloud services are priced on a pay-per-use basis and with volume prices for large
numbers of service requests.
Business Applications:
CRM Integration:
IoT devices can collect customer data, such as preferences and usage patterns, which can be
integrated with CRM systems to personalize marketing efforts, improve customer service, and enhance
customer satisfaction.
ERP Integration:
IoT data can be integrated with ERP systems to optimize supply chain management, inventory control,
and production processes, leading to increased efficiency and reduced costs.
Smart Retail:
IoT sensors can track customer behavior in retail stores, providing valuable insights for store layout
optimization, product placement, and targeted promotions.
Smart Manufacturing:
IoT sensors on factory floors can monitor equipment performance, predict maintenance needs, and
optimize production processes, leading to increased productivity and reduced downtime.
Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the Amazon
Web Services (AWS) cloud.
Using Amazon EC2 eliminates our need to invest in hardware up front, so we can develop and deploy
applications faster.
We can use Amazon EC2 to launch as many or as few virtual servers as we need, configure security
and networking, and manage storage.
Amazon EC2's simple web service interface allows us to obtain and configure capacity with minimal
friction. It provides us with complete control of our computing resources and lets us run on
Amazon's proven computing environment.
Amazon EC2 changes the economics of computing by allowing us to pay only for capacity that we
actually use.
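The pay-per-use model means the bill is simply rate × consumed instance-hours; a toy calculation with made-up rates (not real AWS pricing):

```python
# Hypothetical pay-per-use billing: charges accrue only while instances run.
hourly_rate = 0.10           # made-up $/instance-hour, not a real AWS price
usage = [(3, 8), (1, 4)]     # (instances, hours) for two batches of work
instance_hours = sum(n * h for n, h in usage)   # 3*8 + 1*4 = 28
bill = hourly_rate * instance_hours             # pay for 28 hours, not idle capacity
```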
Amazon EC2 provides developers the tools to build failure-resilient applications and isolate them from
common failure scenarios.
Amazon S3 Concepts:
The concepts of Amazon S3 are Buckets, Objects, Keys, Regions, Amazon S3 Data Consistency
Model.
Buckets:
A bucket is a container for objects stored in Amazon S3. Every object is contained in a bucket.
For example, if the object named photos/[Link] is stored in the amarsawant bucket, then it is
addressable using the URL [Link]
Objects:
Objects are the fundamental entities stored in Amazon S3. Objects consist of object data and
metadata. The data portion is opaque to Amazon S3.
The metadata is a set of name-value pairs that describe the object. These include some default
metadata, such as the date last modified, and standard HTTP metadata, such as Content-Type.
Keys:
A key is the unique identifier for an object within a bucket. Every object in a bucket has exactly one key.
The combination of a bucket, key, and version ID uniquely identifies each object. So we can think of
Amazon S3 as a basic data map between "bucket + key + version" and the object itself.
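The bucket + key + version data map can be modelled with an ordinary dictionary. This toy class only illustrates the addressing scheme; it is not the S3 API, and the bucket and key names are made up:

```python
class MiniObjectStore:
    """Toy model of S3's data map: (bucket, key, version) -> object bytes."""
    def __init__(self):
        self._data = {}       # (bucket, key, version) -> body
        self._versions = {}   # (bucket, key) -> latest version number

    def put(self, bucket, key, body):
        version = self._versions.get((bucket, key), 0) + 1
        self._versions[(bucket, key)] = version
        self._data[(bucket, key, version)] = body
        return version

    def get(self, bucket, key, version=None):
        if version is None:                      # default: latest version
            version = self._versions[(bucket, key)]
        return self._data[(bucket, key, version)]

store = MiniObjectStore()
v1 = store.put("photos-bucket", "photos/cat.jpg", b"old")
v2 = store.put("photos-bucket", "photos/cat.jpg", b"new")
latest = store.get("photos-bucket", "photos/cat.jpg")       # newest body
older = store.get("photos-bucket", "photos/cat.jpg", v1)    # first body
```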
Regions:
We can choose the geographical AWS Region where Amazon S3 will store the buckets that we create.
We might choose a Region to optimize latency, minimize costs, or address regulatory requirements.
Objects stored in a Region never leave the Region unless we explicitly transfer them to another
Region. For example, objects stored in the EU (Ireland) Region never leave it.
Amazon S3 Data Consistency Model:
If a PUT request is successful, our data is safely stored. However, information about the changes must
replicate across Amazon S3, which can take some time and so we might observe the following
behaviors:
A process writes a new object to Amazon S3 and immediately lists keys within its bucket. Until the
change is fully propagated, the object might not appear in the list.
A process replaces an existing object and immediately tries to read it. Until the change is fully
propagated, Amazon S3 might return the previous data.
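The read-after-overwrite behavior can be reproduced with a toy two-replica store in which writes propagate only when replication runs. Purely illustrative; real S3 replication is internal and automatic:

```python
class EventuallyConsistentStore:
    """Toy model: writes land on a primary and propagate to a read replica
    later, so a read right after an overwrite may return the previous data."""
    def __init__(self):
        self.primary = {}
        self.replica = {}

    def put(self, key, value):
        self.primary[key] = value     # write accepted; replica not yet updated

    def replicate(self):
        self.replica.update(self.primary)   # propagation happens "eventually"

    def get(self, key):
        return self.replica.get(key)        # reads are served from the replica

s = EventuallyConsistentStore()
s.put("obj", "v1")
s.replicate()
s.put("obj", "v2")        # overwrite
stale = s.get("obj")      # change not yet propagated: previous data
s.replicate()
fresh = s.get("obj")      # after propagation: new data
```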
5.4.2 CloudStack:
CloudStack: An open-source Infrastructure as a Service (IaaS) platform that enables users to
deploy and manage private, public, and hybrid clouds.
It focuses on providing a user-friendly interface for managing virtual machines, storage, and
networking resources.
CloudStack facilitates hybrid cloud deployment, allowing organizations to connect private clouds
with public cloud services.
Apache CloudStack is open source software designed to deploy and manage large networks of
virtual machines as a highly available, highly scalable Infrastructure as a Service (IaaS) cloud
computing platform.
CloudStack is used by a number of service providers to offer public cloud services, and by many
companies to provide an on-premises (private) cloud offering, or as part of a hybrid cloud solution.
CloudStack currently supports the most popular hypervisors: VMware, KVM, Citrix XenServer, Xen
Cloud Platform (XCP), Oracle VM Server, and Microsoft Hyper-V.
CloudStack is cloud software that helps users tailor the cloud to their requirements; it controls and
supports the network, storage, and compute nodes in a cloud infrastructure.
CloudStack allows users to coordinate virtualized servers, networking, and network storage to
provide infrastructure as a service (IaaS), just like the big hosting providers, but on their own hardware.
CloudStack is capable of the following:
1. Establishing an on-demand, flexible cloud computing service. Service providers may offer
virtual machines, networking arrangements, and storage volumes over the Internet.
2. CloudStack may be used to construct an on-demand cloud computing service with flexibility;
the service provider offers storage, virtual machines, and much more over the Internet.
3. An organization or worker could establish an on-premises private cloud with CloudStack.
Functionality of CloudStack:
Works with hosts running XenServer, XCP, KVM, Hyper-V, and VMware ESX/vSphere.
Provides a friendly Web-based UI for managing the cloud.
Provides a native API.
May provide an Amazon S3/EC2-compatible API (optional).
Manages storage for virtual machines (primary storage) as well as templates, snapshots, and ISO
images (secondary storage).
Accounting of network, compute, and storage resources.
Multi-tenancy/account separation and user management.
InterCloud:
Intercloud: A concept that involves connecting multiple cloud providers to offer greater flexibility,
redundancy, and potentially lower costs.
It enables users to move applications and data across different cloud providers, avoiding vendor lock-in
and potentially accessing a wider range of geographical locations.
InterCloud is the concept of connected cloud networks, including public, private, and hybrid clouds. It
incorporates a number of technology efforts put together to improve interoperability and portability
among cloud networks.
The InterCloud started as a research project in Cisco in 2008 and was soon taken up by the Institute of
Electrical and Electronics Engineers (IEEE).
The goal was to improve interoperability of cloud networks. The terms "InterCloud Computing" and
"InterCloud" were used to address this concept.
The term "InterCloud" is originally attributed to the journalist and technology author Kevin Kelly,
according to several sources.
Several key tenets of the Intercloud exist to accomplish this:
Software-Defined Infrastructure: The software-defined infrastructure is fluid and ever evolving to
support the ever-changing and growing demands of the business. Software-defined services support all
aspects of the business application's needs.
Application: parameters and configuration details of the application, including integration,
performance optimizations, metrics, and service-level objectives (SLOs).
Integrated Application Platform: any underlying dependent or loosely coupled services the
application requires (message queue, Domain Name System (DNS), memcache).
Network and Security: network and security policies and the enforcement of those policy objectives.
Data: being able to abstract the data aspects of the application and provide analytics and
business-outcome insights.
BSS (Back-Office Support Systems): being able to abstract the BSS aspects to enable complete
abstraction to any existing BSS systems in the enterprise or Cisco Powered partner clouds.
OSS (Operational Support Systems): being able to abstract the OSS aspects to enable complete
abstraction to any existing OSS systems in the enterprise or Cisco Powered partner clouds.
Enterprise is a special case of private cloud where the deployment model is the enterprise data center.
Additionally, the enterprise model may want to burst to other deployment models, so managing the
policies and processes of the enterprise becomes critical.
As shown in Fig. 5.9, the broker offers three facilities to the cloud users: SaaS, PaaS, and
IaaS. IaaS services are offered by cloud provider A, SaaS services by cloud provider B, and PaaS
services by cloud provider C.
Cloud providers have been deploying datacenters in multiple locations throughout the globe. For
example, Amazon EC2 Cloud services are available via Amazon datacenters located in the United
States, Europe, and Singapore.
This disbursement is leading to the emergence of a notion, called the InterCloud, supporting scalable
delivery of application services by harnessing multiple datacenters from one or more providers.
In addition to enhancing performance and reliability, these InterClouds provide a powerful means of
reducing energy-related costs.
One reason is that the local demand for electricity varies with time of day and weather. This causes
time-varying differences in the price of electricity at each location.
Moreover, each site has a different source of energy (such as coal, hydroelectric, or wind), with
different environmental costs. This gives scope to adjust the load sent to each location, and the number
of servers powered on at each location, to improve efficiency.
InterCloud architecture:
InterCloud architecture consists of the following layers:
1. Cloud Platform and Underlying Infrastructure:
The OpenStack platform and underlying infrastructure consist of physical, virtual, and automation
components. The beginning point for any service is the physical data center, controls, and hardware.
Whatever services or virtualization you build on top of this infrastructure will only be as good as the
underlying physical design and the security controls you integrate.
2. Cloud Services and Value-added Products:
In the layer above the OpenStack platform, services and capabilities are added by the InterCloud to
enable advanced services around networking, security, NFV, data, databases, load balancing, and
application policy.
As we move up the stack, it is critical to focus on the API interfaces and capabilities. This layer is also
where the InterCloud Fabric (ICF) product resides to enable point-to-point secure Intercloud
connectivity.
3. Application Enablement Platform as a Service:
The Intercloud's primary use case is to enable application portability and interoperability across a world
of clouds. By virtue of its broad definition, one can place various solutions under the PaaS umbrella.
For our purposes, we will define what PaaS encompasses by looking at what customer problems need
to be solved and working backward from there.
4. Cloud OSS:
The OSS consists of the following management aspects:
Change management means managing break/fix and new features to the system following the change
process and change windows.
SLA management means managing the SLA thresholds set by configuration and performance
management.
Incident management means managing incidents as they occur, documenting results, and creating
tickets.
5. Cloud BSS:
The InterCloud provides the following to the BSS for several management functions:
Accounting provides data for managing the user accounts and departments.
Contracts and Agreements manages the contracts and support agreements.
Service Offering Catalog selects what offerings are allowed in the catalog.
Pricing sets the price for the service.
6. Marketplace:
The Marketplace is the curated set of products and services from Cisco and its Intercloud partners that
enables enterprise customers to consume products and services in the enterprise catalog and across
the global Intercloud product catalog from a simple, easy-to-use marketplace.
The Marketplace performs the following operations:
Manage online user ID setup and user directory.
Google AppEngine:
Google App Engine: A platform-as-a-service (PaaS) that simplifies the development and deployment
of web applications.
It provides a managed environment, handling infrastructure concerns like scaling and maintenance,
allowing developers to focus on building applications.
Google App Engine offers easy-to-use APIs, robust security features, and a pay-as-you-go pricing
model.
Google AppEngine is a PaaS implementation that provides services for developing and hosting
scalable Web applications.
AppEngine is essentially a distributed and scalable runtime environment that leverages Google's
distributed infrastructure to scale out applications facing a large number of requests by allocating more
computing resources to them and balancing the load among them.
Application usage of Google resources and services is metered by AppEngine, which bills users when
their applications exceed their free quotas.
The features supported by Google App Engine include support for popular languages, elasticity, and
complete control. Google App Engine supports apps written in a variety of programming languages
such as Java, Python, PHP, and Go.
4. Lots of built-in features such as scaling up/down, logging, monitoring, diagnostics, etc.
5. Building quickly and time to market is short.
6. Pay only for what you use.
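On a PaaS like App Engine the developer supplies only a per-request handler; the platform owns the servers and scaling. A minimal, self-contained WSGI sketch of such a handler, exercised directly with a synthetic request (the handler name and response text are made up, and this is not App Engine's own API):

```python
from wsgiref.util import setup_testing_defaults

def app(environ, start_response):
    """A minimal WSGI request handler of the kind a PaaS runtime hosts:
    the platform, not the developer, worries about servers and scaling."""
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from the managed runtime"]

# Exercise the handler directly with a synthetic request (no server needed):
environ = {}
setup_testing_defaults(environ)
captured = {}
def start_response(status, headers):
    captured["status"] = status
body = b"".join(app(environ, start_response))
```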
The platform is logically divided into four major components, namely: infrastructure, runtime
environment, underlying storage, and a set of scalable services.
Infrastructure:
AppEngine hosts Web applications, and its primary function is to serve user requests efficiently. To do
so, AppEngine's infrastructure takes advantage of many servers available within Google datacenters.
For each HTTP request, AppEngine locates the servers hosting the application that processes the
request, evaluates their load and, if necessary, allocates additional resources (i.e., servers) or redirects
the request to an existing server.
The infrastructure is also responsible for monitoring application performance and collecting statistics on
which the billing is calculated.
Runtime Environment:
The runtime environment represents the execution context of applications hosted on AppEngine.
AppEngine supports applications that are developed only with managed or interpreted languages,
which by design require a runtime for translating their code into executable instructions.
Therefore, sandboxing is achieved by means of modified runtimes for applications that disable some of
the common features normally available with their default implementations.
Storage:
AppEngine provides various types of storage, which operate differently depending on the volatility of
the data.
There are three different levels of storage:
o In-memory cache
o Storage for semi-structured data
o Long-term storage for static data.
Google AppEngine mainly uses two storage services: Data-Store and static file servers.
Data-Store: Data-Store is a service that allows developers to store semi-structured data. The service is
designed to scale and is optimized for quick access to data.
Static File Servers: Web applications are composed of dynamic and static data. Dynamic data are a
result of the logic of the application and the interaction with the user.
Application Services:
Applications hosted on AppEngine benefit the most from the services made available through the
runtime environment.
These services simplify most of the common operations that are performed in Web applications:
access to data, account management, integration of external resources, messaging and
communication, image manipulation, and asynchronous computation.
Pros of AppEngine:
1. No need to buy servers or server space (no maintenance).
2. Makes solving the problem of scaling easier.
3. Free up to a certain level of consumed resources.
4. Access to Google infrastructure.
5. Less likely that Google (as a company) will close shop than Heroku or Appspot.
6. Scalable.
7. Easy and cheaper.
Cons of AppEngine:
1. Lock-in to Google AppEngine.
2. Developers have read-only access to the filesystem on AppEngine.
3. AppEngine can only execute code called from an HTTP request (except for scheduled background
tasks).
4. Users may upload arbitrary Python modules, but only if they are pure-Python: C and Pyrex modules are
not supported.
5. AppEngine limits the maximum rows returned from an entity get to 1000 rows per Datastore call.
6. Java applications may only use a subset (The JRE Class White List) of the classes from the JRE
standard edition.
7. Java applications cannot create new threads.
OpenStack
OpenStack: An open-source cloud operating system that provides a flexible and modular platform
for building and managing private, public, and hybrid clouds.
It offers a wide range of services for managing virtual machines, storage, networking, and other
resources, giving users control over their cloud infrastructure.
OpenStack is a project originally started by NASA and Rackspace for delivering a cloud computing
and storage platform.
Today, OpenStack is a global collaboration of developers and technologists producing an open
source cloud computing platform for public and private clouds.
OpenStack is an open-source cloud computing platform that enables users to create and manage
public and private clouds.
OpenStack software delivers a massively scalable cloud operating system consisting of three
major components:
o Compute: Open source software designed to provision and manage large networks of
virtual machines, creating a redundant and scalable cloud computing platform.
o Object Storage: Open source software for creating redundant, scalable object storage
using clusters of standardized servers to store petabytes of accessible data (code-named
"Swift").
o Image Service: Provides discovery, registration, and delivery services for virtual disk
images (code-named "Glance").
Why OpenStack?
Control and Flexibility: Open source platform means you're never locked to a proprietary vendor, and
modular design can integrate with legacy or third-party technologies to meet your business needs.
Industry Standard: More than 60 leading companies from over a dozen countries are participating in
OpenStack, including Cisco, Citrix, Dell, Intel, and Microsoft, and new OpenStack clouds are coming
online across the globe.
o Neutron provides the networking capability for OpenStack. It helps to ensure that each of the
components of an OpenStack deployment can communicate with one another quickly and
efficiently.
o Horizon is the dashboard behind OpenStack. It is the only graphical interface to OpenStack,
so for users wanting to give OpenStack a try, this may be the first component they actually
"see."
o Keystone provides identity services for OpenStack. It is essentially a central list of all of the
users of the OpenStack cloud, mapped against all of the services provided by the cloud, which
they have permission to use.
o Glance provides image services to OpenStack. In this case, "images" refers to images (or
virtual copies) of hard disks.
o Ceilometer provides telemetry services, which allow the cloud to provide billing services to
individual users of the cloud.
o Heat is the orchestration component of OpenStack, which allows developers to store the
requirements of a cloud application in a file that defines what resources are necessary for that
application.
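For example, Heat reads a declarative Heat Orchestration Template (HOT) that names each resource the application needs. The fragment below is a minimal illustrative sketch; the image, flavor, and network names are assumptions and would differ on any real cloud.

```yaml
heat_template_version: 2018-08-31

description: Minimal example - declares a single server

resources:
  my_server:
    type: OS::Nova::Server
    properties:
      name: demo-instance
      image: ubuntu-22.04   # assumed image name
      flavor: m1.small      # assumed flavor name
      networks:
        - network: private  # assumed network name
```

Heat parses such a file and calls the other OpenStack services (Nova, Neutron, and so on) to create the declared resources in the right order.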
The cloud controller maintains the global state of the system, ensures authorization while interacting
with the User Manager via Lightweight Directory Access Protocol (LDAP), interacts with the S3 service,
and manages nodes as well as storage workers through a queue.
It includes the following storage types:
OpenStack Storage:
The OpenStack storage solution is built around a number of interacting components and concepts,
including a proxy server, a ring, an object server, a container server, an account server, replication,
updaters, and auditors.
The role of the proxy server is to enable lookups to the accounts, containers, or objects in OpenStack
storage rings and to route the requests.
A ring represents a mapping between the names of entities stored on disk and their physical locations.
Separate rings exist for accounts, containers, and objects.
A ring includes the concept of using zones, devices, partitions, and replicas. Hence, it allows the system
to deal with failures and to isolate zones representing a drive, a server, a cabinet, a switch, or even a
data center.
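The mapping a ring performs can be illustrated with a small, self-contained Python sketch. The MD5 hash and partition-shift scheme mirror the general idea used by Swift, but the device list and replica placement here are invented purely for illustration.

```python
import hashlib

PART_POWER = 8                            # 2**8 = 256 partitions (illustrative)
DEVICES = [f"dev{i}" for i in range(4)]   # pretend cluster of 4 disks

def partition_for(account, container=None, obj=None):
    """Hash the storage path and keep the top PART_POWER bits as the partition."""
    path = "/".join(p for p in (account, container, obj) if p)
    digest = hashlib.md5(path.encode()).digest()
    return int.from_bytes(digest[:4], "big") >> (32 - PART_POWER)

def devices_for(part, replicas=3):
    """Naive replica placement: walk the device list starting at the partition."""
    return [DEVICES[(part + r) % len(DEVICES)] for r in range(replicas)]
```

A production ring additionally spreads the replicas of each partition across different zones, so that losing a drive, server, cabinet, or data center never destroys every copy of an object.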
OpenNebula
OpenNebula: A lightweight, open-source cloud management platform that allows organizations to build
and manage both private and hybrid clouds.
It focuses on simplicity and flexibility, enabling users to integrate with various virtualization technologies
and cloud providers.
OpenNebula is an open source cloud computing platform designed to build and manage enterprise
clouds and virtualized data centers.
It combines existing virtualization with features like multi-tenancy, automatic provisioning, and elasticity
to offer on-demand applications and services.
OpenNebula is an open source cloud computing platform used for managing and orchestrating data
center virtualization and cloud infrastructures
It enables users to build private, hybrid, and edge clouds by managing virtualized data centers
efficiently.
OpenNebula is a cloud computing tool for running various distributed data center infrastructures
Features of OpenNebula:
1. Unified Management: It provides a single control panel to manage hybrid and multi-cloud environments.
2. Simplicity: OpenNebula is designed to be easy to use, install, and maintain.
3. Flexibility: It is infrastructure-agnostic, allowing the creation of enterprise clouds that meet specific needs
on-premises or in the cloud.
4. Scalability: OpenNebula is proven in large-scale production deployments with thousands of distributed
nodes.
5. Cost-Effectiveness: It can reduce the total cost of ownership (TCO) compared to other solutions.
6. Vendor Neutrality: OpenNebula is an open cloud architecture that orchestrates compute, storage, and
networking resources.
Key components and capabilities include:
o Linux-based controller;
o CLI, XML-RPC, EC2-compatible Query, and OCA interfaces;
o Xen, KVM, and VMware backends;
o interfaces to public clouds (Amazon EC2, ElasticHosts);
o virtual networks;
o dynamic resource allocation;
o advance reservation of capacity;
o provisioning of dynamic resources;
o support for changing environments through live migration and VM snapshots.
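In practice, a virtual machine is described in a simple OpenNebula template and then instantiated through the CLI or API. The fragment below is an illustrative sketch only; the image and network names are assumptions, not values from a real deployment.

```
NAME   = "web-server"
CPU    = 1
MEMORY = 1024                         # MB
DISK   = [ IMAGE = "ubuntu-22.04" ]   # assumed image in the Image Repository
NIC    = [ NETWORK = "private" ]      # assumed virtual network
```

OpenNebula's scheduler then picks a suitable host for the VM based on the declared requirements and the configured placement policies.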
The architecture of OpenNebula has been designed to be flexible and modular to allow integration with
different storage and network infrastructure configurations, and hypervisor technologies.
The last main components are the access drivers. They provide an abstraction of the underlying
infrastructure to expose the basic functionality of the monitoring, storage, and virtualization services
available in the cluster.
Therefore, OpenNebula is not tied to any specific environment and can provide a uniform management
layer regardless of the virtualization platform
Additionally, OpenNebula offers management interfaces to integrate the core's functionality within other
data-center management tools, such as accounting or monitoring frameworks.
A subset of this functionality is exposed to external users through a cloud interface. OpenNebula is
able to adapt to organizations with changing resource needs, including addition or failure of physical
resources.
OpenNebula currently includes an EC2 driver, which can submit requests to Amazon EC2 and
Eucalyptus, as well as an ElasticHosts driver.
Regarding storage, an Image Repository allows users to easily specify disk images from a catalog
without worrying about low-level disk configuration attributes or block device mapping
Benefits of OpenNebula:
1. Centralized Management: Simplifies IT infrastructure management with a single interface.
2. Scalability: Easily scale resources by adding or removing nodes.
3. Hybrid Cloud Support: Can deploy nodes in public clouds like AWS and Azure.
4. Resource Optimization: The built-in scheduler efficiently allocates resources based on defined
policies.