Understanding Service-Oriented Architecture

Service-Oriented Architecture (SOA) is an architectural style that enables applications to be built as a collection of loosely coupled, reusable services, enhancing scalability, flexibility, and efficiency in cloud computing. Key characteristics include modularity, loose coupling, and security, while major roles involve service providers and consumers for efficient service delivery. Virtualization, a fundamental technology in cloud computing, allows for the creation of virtual resources, improving resource management, scalability, and cost-effectiveness, but also introduces challenges such as performance overhead and security vulnerabilities.


Unit 2

# Service Oriented Architecture (SOA)


Service-Oriented Architecture (SOA) is an architectural style
in which applications are designed as a collection of loosely
coupled, reusable, and interoperable services made available
over a network. In cloud computing, SOA plays a crucial role in
building scalable, flexible, and efficient applications by
leveraging cloud resources: users can combine facilities from a
large number of existing services to form new applications.
In a service-oriented architecture, services communicate with
each other in one of two ways: by passing data, or by two or
more services coordinating an activity.
# Characteristics of Service Oriented Architecture
1. Modular and Reusable Services
SOA emphasizes breaking down applications into smaller,
independent services. Each service performs a specific
function and can be reused in multiple applications, reducing
development time and effort.
2. Loose Coupling
Services in SOA are loosely coupled, meaning they operate
independently of each other. Changes in one service (e.g.,
updating functionality) do not impact other services,
enhancing flexibility and maintainability.
3. Scalability
SOA services are designed to scale individually. Depending on
demand, services can be scaled up or down without affecting
the overall system, aligning with the elastic nature of cloud
computing.
4. Flexibility and Agility
SOA allows businesses to adapt quickly to changing
requirements by adding, updating, or replacing services
without disrupting the entire application.
5. Security
SOA incorporates security mechanisms such as encryption,
authentication, and access control at the service level,
ensuring secure communication between services.
# Major Roles of Service Oriented Architecture
1. Service Provider:
The Service Provider is responsible for creating, maintaining,
and offering one or more services for use by others. These
providers publish their services in a registry, making them
discoverable to potential users.
Along with the service, they include a detailed contract
specifying how the service works, how to use it, any
prerequisites, and any associated costs. The provider ensures
that the service is always functional and meets the needs of
consumers effectively.
2. Service Consumer:
The Service Consumer is the entity that locates, connects to,
and uses the services provided. Consumers use the registry to
find services and retrieve the necessary details to build client
components for accessing them.
In some cases, consumers utilize services that aggregate data
or combine workflows from multiple other services to fulfill
their specific requirements.
This interaction between the Service Provider and Service
Consumer ensures efficient service delivery, reusability, and
seamless integration in an SOA framework.
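The provider/consumer interaction above can be sketched with a minimal in-memory registry; the `ServiceRegistry` class and the "billing" service below are illustrative, not part of any real SOA product:

```python
# Toy sketch of publish/discover in SOA: the provider registers a
# service with its contract, the consumer looks it up by name.
# All names, endpoints, and contract fields are invented for the example.

class ServiceRegistry:
    """Central registry where providers publish and consumers discover."""
    def __init__(self):
        self._services = {}

    def publish(self, name, endpoint, contract):
        # The provider registers the service along with its contract.
        self._services[name] = {"endpoint": endpoint, "contract": contract}

    def discover(self, name):
        # The consumer retrieves the details needed to build a client.
        return self._services.get(name)

registry = ServiceRegistry()
registry.publish("billing", "https://example.com/billing",
                 contract={"operation": "charge", "input": "order_id"})

service = registry.discover("billing")  # the consumer now knows where to connect
```

In a real SOA deployment the registry would be a standalone component (e.g., a UDDI registry) rather than a dictionary, but the publish/discover contract is the same.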
# Benefits of SOA in Cloud Computing
1. Modularity:
o Applications can be easily updated or modified by
updating individual services.
2. Interoperability:
o SOA enables seamless communication between
services across diverse systems.
3. Scalability:
o Services can be scaled independently, improving
application performance during high demand.
4. Reduced Costs:
o Reusable services lower development and
maintenance costs.
5. Enhanced Flexibility:
o Applications can adapt to changing business needs
by replacing or adding services without rewriting
the entire application.
# Elements of Service Oriented Architecture (SOA):

1. Application Frontends:
Application frontends are the user-facing components of SOA,
responsible for delivering the benefits of SOA to the end-
users. These frontends initiate and manage all activities
within the enterprise system, ensuring users interact
seamlessly with the services provided.
2. Service:
A service is a self-contained software component that
encapsulates a high-level business concept or function. Each
service performs specific business tasks, such as handling
customer orders or processing payments, and is designed to
be reusable across different applications.
3. Contract:
The service contract specifies the purpose, functionality,
constraints, and usage of a service. It outlines how the
service can be used, its limitations, and the requirements for
interaction, ensuring clarity and consistency for both service
providers and consumers.
4. Interface:
The service interface defines the functionality exposed by the
service to clients that connect to it. It includes details about
the service’s operations, such as input parameters, expected
output, and the protocols used for communication. The
interface provides a standardized way for consumers to
interact with the service.
5. Implementation:
The service implementation contains the business logic and
the data necessary to perform its functions. It includes the
actual code, configuration files, and any data or databases
needed to execute the service’s operations. The
implementation is responsible for carrying out the service's
tasks as defined in the service contract.
6. Business Logic:
The business logic represents the core processes and rules
that define how a service operates within a business context.
It includes the algorithms, decision-making rules, and
processes that are essential for fulfilling the service's
purpose, such as processing an order or calculating a
discount.
7. Data:
Data refers to the information that is represented and used
by the service to perform its tasks. This can include customer
information, transaction records, product details, or any
other data required for the service to function properly.
8. Service Repository:
The service repository is a central registry where all services
are registered along with their attributes. It facilitates service
discovery and provides details such as service operations,
access rights, ownership, quality metrics, and other relevant
information. The repository ensures that consumers can find
and connect to the appropriate services.
9. Service Bus:
The service bus is a flexible infrastructure used to integrate
applications and services within an SOA. It facilitates message
routing, protocol transformation, and communication
between different services and clients. It acts as the
backbone of SOA, allowing services to interact and
communicate with each other in a standardized manner.
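As a rough sketch of the service bus's routing role, the toy `ServiceBus` below delivers each message to every service registered for a topic; the topic and handler names are hypothetical:

```python
# Minimal sketch of message routing on a service bus: services subscribe
# to a topic, and the bus forwards each message to all subscribers.

class ServiceBus:
    def __init__(self):
        self._routes = {}

    def register(self, topic, handler):
        # A service subscribes a handler to a topic.
        self._routes.setdefault(topic, []).append(handler)

    def send(self, topic, message):
        # Route the message to every service subscribed to the topic.
        return [handler(message) for handler in self._routes.get(topic, [])]

bus = ServiceBus()
bus.register("order.created", lambda msg: f"invoice for {msg['id']}")
bus.register("order.created", lambda msg: f"shipment for {msg['id']}")

results = bus.send("order.created", {"id": 42})
```

A production service bus would also handle protocol transformation and reliable delivery; this sketch shows only the routing idea.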
# REST (Representational State Transfer)
REST is an architectural style used for designing networked
applications. It is widely used in cloud computing to enable
communication between different services and clients,
especially for web services. RESTful services use HTTP as the
communication protocol, which makes them simple,
lightweight, and easy to implement.
REST focuses on resources, which are the key abstractions in
the system. Each resource is identified by a unique URL, and
clients interact with resources using standard HTTP methods:
GET, POST, PUT, DELETE, etc. These methods correspond to
actions like retrieving data, creating new data, updating
existing data, and deleting data, respectively.
REST offers the following benefits:
1. Better response times and reduced server load, thanks to its
support for caching representations of resources.
2. Better long-term compatibility and evolvability.
3. Any application and any resource can be accessed from a
single browser, so less client-side software needs to be written.
# REST architecture
1. Uniform Interface
The uniform interface is one of the main principles of REST. It
defines a single, consistent interface between client and
server, which simplifies and decouples the architecture so that
each part can be developed independently.
2. Resources
In REST, everything is treated as a resource. A resource can be
any piece of data or functionality, such as a user, a file, or a
service. Each resource is identified by a unique URL (Uniform
Resource Locator), which allows clients to access it. For
example, a cloud-based service might have resources like
/users/123 to access user data.
3. Statelessness
In REST, every request from a client to a server must contain
all the information necessary to understand and process the
request. The server does not store any session information
about the client between requests. This statelessness
ensures that the server does not need to retain any state,
leading to simpler, more scalable systems.
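A minimal sketch of statelessness, assuming a hypothetical token-based handler: every request must carry its own credentials, and the server stores nothing about the client between calls:

```python
# Sketch of the statelessness principle. The server holds no session
# state; each request is self-contained. The token value is invented
# for the example.

def handle_request(request):
    # Reject requests that do not carry their own credentials;
    # nothing about the client is remembered from earlier requests.
    if request.get("token") != "secret-token":
        return {"status": 401}
    return {"status": 200, "user": request["user"]}

ok = handle_request({"token": "secret-token", "user": "alice"})
denied = handle_request({"user": "alice"})  # no token, no stored session to fall back on
```

Because no server-side session exists, any server replica can handle any request, which is what makes stateless services easy to scale horizontally.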
4. HTTP Methods
RESTful services use standard HTTP methods to perform
operations on resources. These methods include:
- GET: Retrieve a resource or data (e.g., getting user
information).
- POST: Create a new resource (e.g., adding a new user).
- PUT: Update an existing resource (e.g., modifying user
information).
- DELETE: Remove a resource (e.g., deleting a user).
Each of these methods corresponds to a CRUD (Create, Read,
Update, Delete) operation, making REST intuitive and easy to
understand.
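The method-to-CRUD mapping can be illustrated with a small in-memory "users" resource; no real HTTP server is involved, and the `rest_call` helper is purely illustrative:

```python
# Toy dispatcher mapping the four HTTP methods onto CRUD operations
# over an in-memory dictionary of users.

users = {}

def rest_call(method, user_id=None, body=None):
    if method == "POST":                  # Create
        users[body["id"]] = body
        return 201, body
    if method == "GET":                   # Read
        return (200, users[user_id]) if user_id in users else (404, None)
    if method == "PUT":                   # Update
        users[user_id].update(body)
        return 200, users[user_id]
    if method == "DELETE":                # Delete
        users.pop(user_id, None)
        return 204, None

rest_call("POST", body={"id": 123, "name": "Alice"})
status, user = rest_call("GET", user_id=123)
```

The status codes (201 Created, 404 Not Found, 204 No Content) follow the conventional HTTP responses for each operation.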
5. Cacheability
Responses from a server in REST can be marked as cacheable
or non-cacheable. If a response is cacheable, the client can
store it and use it for subsequent requests, reducing the need
for repeated processing and improving performance. In cloud
environments, caching is especially important for reducing
latency and optimizing resource usage.
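As an illustration of cacheability, the sketch below has a client reuse a stored response until its max-age expires; the fetch function and the 60-second lifetime are assumptions for the example:

```python
import time

# Sketch of client-side caching of cacheable responses: the client
# stores a representation and reuses it until its max-age expires,
# avoiding repeated requests to the server.

class CachingClient:
    def __init__(self, fetch):
        self._fetch = fetch          # function that performs the real request
        self._cache = {}             # url -> (response, expiry timestamp)

    def get(self, url):
        entry = self._cache.get(url)
        if entry and entry[1] > time.time():
            return entry[0]          # served from cache: no server hit
        response, max_age = self._fetch(url)
        self._cache[url] = (response, time.time() + max_age)
        return response

calls = []
def fetch(url):
    calls.append(url)                 # count how often the server is hit
    return {"body": "user data"}, 60  # response cacheable for 60 seconds

client = CachingClient(fetch)
client.get("/users/123")
client.get("/users/123")             # second call is answered from cache
```

Real HTTP clients get the lifetime from the `Cache-Control: max-age` response header rather than from the fetch function, but the reuse logic is the same.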
6. Layered System
The layered system principle means that a REST architecture
can consist of multiple layers (e.g., client, server, intermediate
layers such as proxy or gateway). Each layer operates
independently, allowing for scalability and flexibility in the
cloud environment. For example, a load balancer or caching
layer can be introduced without affecting the client or server.
# RESTful API
A RESTful API is an application programming interface (API)
that uses HTTP requests to GET, PUT, POST and DELETE data.
A RESTful API is also referred to as a RESTful web service: a
web service based on Representational State Transfer (REST),
an architectural style and approach to communications often
used in web services development.
REST is generally preferred over the more robust Simple Object
Access Protocol (SOAP) because REST uses less bandwidth,
making it more suitable for internet usage.
A web API is code that allows two software programs to
communicate with each other.
REST is a logical choice for building APIs that allow users to
connect and interact with cloud services. RESTful APIs are
used by sites such as Amazon, Google, LinkedIn and Twitter.
A RESTful API breaks down a transaction to create a series of
small modules. Each module addresses a particular
underlying part of the transaction.
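The idea of breaking a transaction into small modules can be sketched as a chain of independent functions, each addressing one part of placing an order; all function names and prices here are hypothetical:

```python
# Toy decomposition of one transaction into small modules, each
# handling one underlying part; a real RESTful API would expose these
# as separate resources/endpoints.

def validate_order(order):
    return "id" in order and order.get("qty", 0) > 0

def reserve_stock(order):
    return {"reserved": order["qty"]}

def charge_payment(order, price_per_unit=10):
    return {"charged": order["qty"] * price_per_unit}

def place_order(order):
    # The overall transaction is a series of small module calls.
    if not validate_order(order):
        return {"status": "rejected"}
    stock = reserve_stock(order)
    payment = charge_payment(order)
    return {"status": "placed", **stock, **payment}

result = place_order({"id": 1, "qty": 3})
```

Because each module is independent, one can be changed or redeployed (e.g., a new payment provider) without touching the others.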
# Virtualization
Virtualization is a fundamental technology in cloud
computing that allows the creation of virtual (rather than
physical) versions of resources, such as servers, storage
devices, and networks. By abstracting and isolating the
physical hardware, virtualization enables more efficient use of
resources and enhances the flexibility, scalability, and cost-
effectiveness of cloud computing environments.
# Benefits
1. Money saving:
With virtualization technology, the number of physical servers
can be reduced, which in turn reduces ongoing maintenance and
operational costs.
2. Dramatic increase in control:
Virtualization provides a flexible foundation for supplying
capacity according to an organization's demand. New servers can
be deployed quickly, so services can be provided within minutes.
3. Simplified disaster recovery:
More efficient and cost effective disaster recovery solutions
can be realized with virtualization technologies. Servers and
online business can be transferred to an alternate site within
minutes with the help of virtualization.
4. Reduced infrastructure cost:
Virtualization reduces the cost of the existing infrastructure
by lowering operational and systems-management costs while
maintaining the needed capacity.
5. Scalability:
Virtualized environments can be scaled up or down quickly by
adding or removing virtual machines and resources, allowing
cloud providers to respond to changing demands with
flexibility.
# Disadvantages of Virtualization:
1. Performance Overhead: Virtualization introduces some
performance overhead due to the layer of abstraction
between the hardware and virtual machines.
2. Complexity in Management: Although virtualization
simplifies some aspects of management, it can also introduce
complexity in large-scale environments with many virtual
machines.
3. Security Vulnerabilities: Although virtualization provides
isolation, vulnerabilities in the hypervisor or improper
configuration can allow malicious code to escape from one
VM and affect others.
4. Resource Contention: Virtual machines share physical
resources, such as CPU and memory, which can lead to
resource contention if not managed properly.
# Types of Virtualization in Cloud Computing
Virtualization in cloud computing is categorized into various
types based on the resources being virtualized. Each type of
virtualization serves a specific purpose and helps optimize the
use of physical hardware, improve scalability, and enhance
resource management. Below are the primary types of
virtualization:
1. Server Virtualization
Definition: Server virtualization is the process of dividing a
physical server into multiple virtual machines (VMs). Each VM
operates independently with its own operating system (OS),
applications, and resources. Server virtualization allows for
better utilization of server hardware and makes it easier to
manage workloads.
By isolating multiple VMs on a single physical machine, server
virtualization provides greater flexibility and scalability,
enabling organizations to easily scale their infrastructure. It
also makes system maintenance more efficient, as virtual
machines can be quickly backed up, migrated, or restored.
2. Storage Virtualization
Definition: Storage virtualization involves pooling together
multiple physical storage devices (such as hard drives or solid-
state drives) and presenting them as a single unified storage
resource. This helps to simplify storage management and
increases flexibility.
With storage virtualization, organizations can efficiently
manage large amounts of data across different physical
locations. It enables features like dynamic provisioning, which
allows storage resources to be allocated based on demand.
Additionally, it supports data redundancy, ensuring that
critical data is always available, even in the event of hardware
failure.
3. Network Virtualization
Definition: Network virtualization involves combining
hardware and software network resources to create a
virtualized network. It abstracts the physical network
infrastructure, enabling the creation of multiple virtual
networks that can be managed independently, often used to
create isolated environments within the same physical
network.
Network virtualization allows businesses to manage traffic,
configure network policies, and deploy services without
relying on traditional physical network boundaries.
4. Application Virtualization
Definition: Application virtualization involves running
applications in a virtual environment, separate from the
underlying operating system. The application is encapsulated
in a virtualized container, allowing it to run on any device
without installation or modification of the host OS.
Application virtualization allows organizations to deploy
software applications more efficiently by isolating them from
the underlying OS. This reduces the risk of software conflicts
and provides a consistent experience across different user
environments.
5. Hardware Virtualization
Definition: Hardware virtualization (or full virtualization)
enables the creation of a virtual machine that operates as if it
were running on dedicated physical hardware. The hardware
is abstracted through a hypervisor that manages multiple
operating systems simultaneously.
Hardware virtualization allows the consolidation of physical
resources into multiple virtual environments, providing
flexibility in deploying different operating systems and
applications on the same machine.
# Various Implementation Levels of Virtualization:
Virtualization is implemented at various levels of the system
architecture to achieve resource abstraction, isolation, and
efficiency. Each level uses different techniques and
technologies to create virtual environments.
1. Instruction Set Architecture (ISA) Level:
At the ISA level, virtualization is achieved by emulating the
instruction set architecture (ISA) of a guest machine using the
ISA of the host machine, making one machine behave like another
even though they use different instruction sets (the language
the CPU understands).
An interpreter reads each instruction from the guest system and
translates it into instructions the host system can execute,
but this process can be slow.
2. Hardware Abstraction Level:
At the hardware abstraction level, we create a virtual version
of the physical hardware so that multiple virtual machines
(VMs) can share the same physical resources (like CPU,
memory, and storage).
This is done by using a hypervisor (software that manages
virtual machines), which helps allocate resources to each VM.
Each VM behaves like it’s using its own hardware, even
though they’re all sharing the same physical machine.
It allows you to run different operating systems and
applications on the same physical server, making better use
of resources.
3. Operating System Level
Operating system-level virtualization involves virtualizing an
operating system’s resources to allow multiple isolated
environments (containers) to run on a single physical
machine. These containers share the same host operating
system but operate as independent systems.
The OS itself is responsible for creating isolated containers,
each with its own environment, but all containers share the
same kernel. This approach is commonly used for lightweight
virtualization, as it doesn’t require the overhead of running
multiple full operating systems.
4. Library Support Level
At the library support level, we virtualize the connection
between applications and the system they run on. This is
done by modifying how applications interact with the system
using API hooks.
This method involves intercepting and redirecting the system
requests made by an application so that it works in a
virtualized environment.
Software like WINE lets you run Windows applications on a
Linux system by translating Windows commands into Linux
commands without changing the application.
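The API-hook idea can be illustrated in miniature: a wrapper intercepts an application's file-open call and rewrites a Windows-style path into a host-style path, loosely in the spirit of what WINE does (the functions and paths below are invented for the example):

```python
# Sketch of library-level virtualization via API hooks: the call from
# the application into the system library is intercepted and redirected
# without changing the application itself.

def host_open_file(path):
    # Stand-in for the real host-system call.
    return f"host opened {path}"

def app_read_config(open_func):
    # The application only knows it calls some "open" function,
    # and asks for a guest-style (Windows) path.
    return open_func("C:\\config.ini")

def hook(open_func):
    # The hook rewrites the guest-style path into a host-style path
    # before forwarding the call to the real implementation.
    def wrapper(path):
        translated = path.replace("C:\\", "/home/user/").replace("\\", "/")
        return open_func(translated)
    return wrapper

result = app_read_config(hook(host_open_file))
```

The application code is untouched; only the binding between the application and the library call is virtualized, which is exactly the point of this level.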
5. User-Application Level
At the user-application level, we virtualize individual
applications instead of the entire system. This means the
application can run in an isolated environment as if it were a
separate machine.
The application is treated like a process running in a special
environment, even though it’s running on a regular computer.
This approach allows applications to run without interfering
with other applications on the system.
# Virtualization of CPU, Memory, And I/O Devices
1. CPU Virtualization
CPU virtualization involves creating multiple virtual CPUs
(vCPUs) that share the physical CPU of the host machine.
Each virtual machine (VM) gets its own virtual CPU to run
tasks, but all vCPUs are managed by the host CPU.
The hypervisor (virtualization software) schedules time for
each VM to use the physical CPU. It makes sure that each VM
gets its fair share of CPU time, and that they don’t interfere
with each other.
This allows multiple VMs to run different operating systems
and applications on the same physical machine without
affecting performance.
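The hypervisor's time-slicing can be sketched as a simple round-robin scheduler over vCPUs; real hypervisors use far more sophisticated schedulers, so this is only a toy model:

```python
from collections import deque

# Toy round-robin scheduler: each vCPU gets one slice of physical CPU
# time in turn, so every VM receives a fair share.

def schedule(vcpus, time_slices):
    queue = deque(vcpus)
    timeline = []
    for _ in range(time_slices):
        vcpu = queue.popleft()
        timeline.append(vcpu)   # this vCPU runs on the physical CPU now
        queue.append(vcpu)      # then rejoins the back of the queue
    return timeline

timeline = schedule(["vm1-vcpu", "vm2-vcpu", "vm3-vcpu"], 6)
```

Over six slices each of the three vCPUs runs exactly twice, illustrating the fair-share guarantee described above.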
2. Memory Virtualization
Memory virtualization involves creating virtual memory
spaces for each virtual machine, so that each VM thinks it has
its own memory (RAM), even though all VMs share the
physical memory of the host machine.
The hypervisor manages the physical memory and allocates a
portion to each VM. When a VM accesses memory, it
accesses its own virtual memory space, which is mapped to
the physical memory by the hypervisor.
This helps in running multiple VMs simultaneously, as each
VM gets a dedicated memory space without interfering with
others. It also allows for efficient memory management, even
when the physical memory is limited.
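The virtual-to-physical mapping can be sketched with a toy page table: both VMs write to "their" virtual page 0, and the hypervisor maps each onto a distinct physical page (the page numbers and data are illustrative):

```python
# Toy model of memory virtualization: each (VM, virtual page) pair is
# mapped by the "hypervisor" onto its own physical page, so VMs that
# use the same virtual address never collide.

physical_memory = {}     # physical page number -> contents
page_tables = {}         # (vm, virtual page) -> physical page number
next_free_page = [0]     # next unallocated physical page

def write(vm, virtual_page, value):
    key = (vm, virtual_page)
    if key not in page_tables:
        # The hypervisor allocates a fresh physical page for this VM.
        page_tables[key] = next_free_page[0]
        next_free_page[0] += 1
    physical_memory[page_tables[key]] = value

def read(vm, virtual_page):
    return physical_memory[page_tables[(vm, virtual_page)]]

# Both VMs use virtual page 0, yet their data stays isolated.
write("vm1", 0, "vm1 data")
write("vm2", 0, "vm2 data")
```

Real hypervisors do this translation in hardware-assisted page tables, but the mapping idea is the same.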
3. I/O Devices Virtualization
I/O device virtualization involves creating virtual versions of
physical input/output devices, such as hard drives, network
interfaces, and USB devices, to be used by VMs.
The hypervisor abstracts the physical I/O devices and makes
them available to the virtual machines as virtual devices. For
example, each VM might think it has its own network card,
but they are actually sharing the physical network card of the
host machine.
This allows each VM to interact with its own virtual devices
independently, while still using the physical devices of the
host machine. It also enables better resource management
and isolation between VMs.
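The sharing of one physical device can be sketched as a multiplexer: each VM sends through its own virtual NIC, while the hypervisor tags and forwards all frames over the single physical NIC (all names below are illustrative):

```python
# Toy model of I/O virtualization: each VM believes it owns a network
# card, but every frame actually passes through the one physical NIC.

physical_nic_log = []

def physical_send(frame):
    # Stand-in for the single real network card on the host.
    physical_nic_log.append(frame)

class VirtualNIC:
    def __init__(self, vm_name):
        self.vm_name = vm_name

    def send(self, data):
        # The hypervisor tags each frame with its source VM so replies
        # can be demultiplexed back to the right virtual device.
        physical_send({"vm": self.vm_name, "data": data})

nic1 = VirtualNIC("vm1")
nic2 = VirtualNIC("vm2")
nic1.send("hello")
nic2.send("world")
```

Each VM interacts only with its own virtual device, yet both streams share the same physical hardware, which is the resource-sharing-plus-isolation property described above.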
