Overview of Database Management Systems

Introduction to database

Database

A database is a collection of interrelated data, organized in the form of tables, views, schemas, reports, etc., that supports efficient retrieval, insertion, and deletion of data. For example, a university database organizes data about students, faculty, admin staff, etc., which helps in the efficient retrieval, insertion, and deletion of that data.

Database Management System (DBMS)

A Database Management System (DBMS) is a software system designed to manage and organize data in a structured manner. It allows users to create, modify, and query a database, as well as manage security and access controls for that database. A DBMS provides an environment to store and retrieve data in a convenient and efficient manner.

Types of DBMS

Relational Database Management System (RDBMS):

Data is organized into tables (relations) with rows and columns, and the relationships between the data are managed through primary and foreign keys. SQL (Structured Query Language) is used to query and manipulate the data. Examples are MySQL, Oracle, Microsoft SQL Server, and PostgreSQL.
NoSQL DBMS:

Designed for high-performance scenarios and large-scale data, NoSQL databases store data in various non-relational formats such as key-value pairs, documents, graphs, or columns. Examples of NoSQL DBMSs are MongoDB, Cassandra, DynamoDB, and Redis.

Object-Oriented DBMS (OODBMS):

Stores data as objects, similar to those used in object-oriented programming, allowing for complex data representations and relationships.

File System Oriented Approach

The file system is basically a way of arranging files on a storage medium like a hard disk; refer to fig (a). The file system organizes the files and helps in their retrieval when they are required. File systems consist of different files which are grouped into directories; the directories further contain other folders and files. The file system performs basic operations like management, file naming, setting access rules, etc.

Example: NTFS (New Technology File System), ext (Extended File System).


fig(a) File system

Database Approach
A Database Management System is basically software that manages a collection of related data; refer to fig (b). It is used for storing data and retrieving it effectively when needed. It also provides proper security measures for protecting the data from unauthorized access. In a DBMS, data can be fetched via SQL queries and relational algebra. It also provides mechanisms for data recovery and data backup.
Example: Oracle, MySQL, MS SQL Server.
fig (b) DBMS

Difference Between File System and DBMS

File System: The file system is a way of arranging files on a storage medium within a computer.
DBMS: A DBMS is software for managing the database.

File System: Redundant data can be present in a file system.
DBMS: In a DBMS there is no redundant data.

File System: It does not provide an inbuilt mechanism for backup and recovery of data if it is lost.
DBMS: It provides built-in tools for backup and recovery of data even if it is lost.

File System: There is no efficient query processing in the file system.
DBMS: Efficient query processing is available in a DBMS.

File System: There is less data consistency in the file system.
DBMS: There is more data consistency because of the process of normalization.

File System: It is less complex as compared to a DBMS.
DBMS: It has more complexity in handling as compared to the file system.

File System: File systems provide less security in comparison to a DBMS.
DBMS: A DBMS has more security mechanisms as compared to file systems.

File System: It is less expensive than a DBMS.
DBMS: It has a comparatively higher cost than a file system.

File System: There is no data independence.
DBMS: Data independence exists, mainly of two types: 1) logical data independence and 2) physical data independence.

File System: Only one user can access data at a time.
DBMS: Multiple users can access data at a time.

File System: The user has to write procedures for managing databases.
DBMS: Users are not required to write procedures.

File System: Data is distributed in many files, so it is not easy to share data.
DBMS: Due to its centralized nature, data sharing is easy.

File System: It gives details of the storage and representation of data.
DBMS: It hides the internal details of the database.

File System: Integrity constraints are difficult to implement.
DBMS: Integrity constraints are easy to implement.

File System: To access data in a file, the user requires attributes such as file name and file location.
DBMS: No such attributes are required.

File System: Examples: COBOL, C++ (file-based data handling).
DBMS: Examples: Oracle, SQL Server.

Data Models in DBMS

A data model in a Database Management System (DBMS) is a set of concepts and tools used to describe the structure of a database. Data models give us a clear picture of the data, which helps us in creating an actual database. They take us from the design of the data to its proper implementation.

Types of Data Models

1. Conceptual Data Model
2. Representational Data Model
3. Physical Data Model

1. Conceptual Data Model

The conceptual data model describes the database at a very high level and is
useful to understand the needs or requirements of the database. It is this model,
that is used in the requirement-gathering process i.e. before the Database
Designers start making a particular database. One such popular model is the
entity/relationship model (ER model). The E/R model specializes in entities,
relationships, and even attributes that are used by database designers. In terms of
this concept, a discussion can be made even with non-computer
science(non-technical) users and stakeholders, and their requirements can be
understood.
Entity-Relationship Model (ER Model): It is a high-level data model used to define the data and the relationships between them. It is basically a conceptual design of a database that makes it easy to design a view of the data.
Components of ER Model:

1. Entity: An entity is a real-world object. It can be a name, place, object, class, etc. Entities are represented by a rectangle in an ER diagram.
2. Attributes: An attribute can be defined as a description of the entity. Attributes are represented by an ellipse in an ER diagram. For a Student, attributes can be Age, Roll Number, or Marks.
3. Relationship: Relationships are used to define relations among different entities. A diamond (rhombus) is used to show a relationship.
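As a sketch of how an ER design maps onto relational tables, the following example uses Python's built-in sqlite3 module. The Student and Course entities, their attributes, and the many-to-many "Enrolls" relationship are hypothetical names chosen for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")

# Entities become tables; attributes become columns.
conn.execute("""CREATE TABLE Student (
    roll_number INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    age INTEGER)""")
conn.execute("""CREATE TABLE Course (
    course_id INTEGER PRIMARY KEY,
    title TEXT NOT NULL)""")

# The Enrolls relationship becomes a table whose foreign keys
# reference the participating entities.
conn.execute("""CREATE TABLE Enrolls (
    roll_number INTEGER REFERENCES Student(roll_number),
    course_id INTEGER REFERENCES Course(course_id),
    PRIMARY KEY (roll_number, course_id))""")

conn.execute("INSERT INTO Student VALUES (1, 'Asha', 20)")
conn.execute("INSERT INTO Course VALUES (101, 'Databases')")
conn.execute("INSERT INTO Enrolls VALUES (1, 101)")
print(conn.execute("SELECT name, title FROM Student "
                   "JOIN Enrolls USING (roll_number) "
                   "JOIN Course USING (course_id)").fetchall())
```

The relationship table is what lets one student enroll in many courses and one course hold many students, which a single foreign key column on either entity could not express.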

2. Representational Data Model

This type of data model is used to represent only the logical part of the database and does not represent its physical structure. The representational data model allows us to focus primarily on the design part of the database. A popular representational model is the relational model, which consists of relational algebra and relational calculus. In the relational model, we basically use tables to represent our data and the relationships between them. It is a theoretical concept whose practical implementation is done in the physical data model.

The advantage of using a representational data model is that it provides the foundation on which the physical model is built.

3. Physical Data Model

The physical data model is used to practically implement the relational data model. Ultimately, all data in a database is stored physically on secondary storage devices such as disks and tapes, in the form of files, records, and other data structures. The physical model has all the information on the format in which the files are present, the structure of the databases, the presence of external data structures, and their relation to each other. Here, we basically decide how tables are laid out in storage so they can be accessed efficiently. To come up with a good physical model, we have to work carefully from the relational model. Structured Query Language (SQL) is used to practically implement relational algebra.

DBMS Architecture 1-Level, 2-Level, 3-Level

A database stores a lot of critical information that must be accessed quickly and securely, so it is important to select the correct architecture for efficient data management. DBMS architecture helps users get their requests served while connecting to the database. We choose a database architecture depending on several factors such as the size of the database, the number of users, and the relationships between the users. There are two types of database models that we generally use: the logical model and the physical model. The types of architecture used in databases are dealt with in the next section.

Types of DBMS Architecture

There are several types of DBMS Architecture that we use according to the usage
requirements. Types of DBMS Architecture are discussed here.
· 1-Tier Architecture

· 2-Tier Architecture

· 3-Tier Architecture

1-Tier Architecture
In 1-tier architecture the database is directly available to the user: the client, server, and database are all present on the same machine. For example, to learn SQL we set up an SQL server and the database on the local system, which enables us to directly interact with the relational database and execute operations. Industry rarely uses this architecture; it generally goes for 2-tier and 3-tier architecture.
Advantages of 1-Tier Architecture

Below mentioned are the advantages of 1-Tier Architecture.

● Simple Architecture: 1-tier architecture is the simplest architecture to set up, as only a single machine is required to maintain it.
● Cost-Effective: No additional hardware is required for implementing 1-tier architecture, which makes it cost-effective.
● Easy to Implement: 1-tier architecture can be easily deployed, and hence it is mostly used in small projects.

2-Tier Architecture
The 2-tier architecture is similar to a basic client-server model. The application at the client end communicates directly with the database on the server side, using APIs such as ODBC and JDBC. The server side is responsible for query processing and transaction management. On the client side, the user interfaces and application programs are run; the client application establishes a connection with the server side to communicate with the DBMS.
An advantage of this type is that maintenance and understanding are easier, and it is compatible with existing systems. However, this model gives poor performance when there are a large number of users.
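The client side of a 2-tier setup can be sketched with Python's DB-API, which plays the same role as ODBC/JDBC: open a connection, send queries, receive results. In a real deployment the connect() call would target a remote database server; the built-in sqlite3 module stands in here so the sketch is self-contained, and the `accounts` table is illustrative:

```python
import sqlite3

# Client-side application code: with a network DBMS this connect()
# would carry a host, port, and credentials instead of ":memory:".
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
cur.execute("INSERT INTO accounts VALUES (1, 500.0)")
conn.commit()  # transaction management is the server side's job

# The client issues a query; query processing happens on the "server".
cur.execute("SELECT balance FROM accounts WHERE id = ?", (1,))
print(cur.fetchone()[0])
```

Note the parameterized `?` placeholder: letting the driver substitute values, rather than building SQL strings by hand, is the standard way a client program avoids injection problems in this model.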

Advantages of 2-Tier Architecture


● Easy to Access: 2-tier architecture provides easy access to the database, which makes data retrieval fast.
● Scalable: We can scale the database easily by adding clients or upgrading hardware.
● Low Cost: 2-tier architecture is cheaper than 3-tier and multi-tier architecture.
● Easy Deployment: 2-tier architecture is easier to deploy than 3-tier architecture.
● Simple: 2-tier architecture is easily understandable as well as simple because it has only two components.

3-Tier Architecture
In 3-Tier Architecture , there is another layer between the client and the server. The client does
not directly communicate with the server. Instead, it interacts with an application server which
further communicates with the database system and then the query processing and transaction
management takes place. This intermediate layer acts as a medium for the exchange of partially
processed data between the server and the client. This type of architecture is used in the case
of large web applications.
Advantages of 3-Tier Architecture

● Enhanced Scalability: Scalability is enhanced due to the distributed deployment of application servers; individual connections need not be made between the client and server.
● Data Integrity: 3-tier architecture maintains data integrity. Since there is a middle layer between the client and the server, data corruption can be avoided.
● Security: 3-tier architecture improves security. This type of model prevents direct interaction of the client with the server, thereby reducing access to unauthorized data.

Conclusion
When it comes to choosing a DBMS architecture, it all comes down to how complex and scalable the system needs to be. The 3-tier architecture has the best features and is well suited to modern, large database systems.

Data Independence
What is Data Independence in DBMS?
In the context of a database management system, data independence is the feature that allows the schema of one layer of the database system to be changed without any impact on the schema of the next higher level. Through data independence, we can build an environment in which data is independent of all programs; the three-schema architecture makes data independence easier to understand.
Types of Data Independence
There are two types of data independence:
● Logical data independence
● Physical data independence

Logical Data Independence


● Changing the logical schema (conceptual level) without changing the
external schema (view level) is called logical data independence.
● It is used to keep the external schema separate from the logical schema.
● If we make any changes at the conceptual level of data, it does not affect
the view level.
● This happens at the user interface level.
● For example, it is possible to add or delete entities and attributes in the conceptual schema without making any changes to the external schema.

Physical Data Independence


● Making changes to the physical schema without changing the logical
schema is called physical data independence.
● If we change the storage size of the database system server, it will not
affect the conceptual structure of the database.
● It is used to keep the conceptual level separate from the internal level.
● This happens at the logical interface level.
● Example – Changing the location of the database from C drive to D drive.

Data Dictionary
What is a Data Dictionary?
The term data dictionary consists of two words: data, which represents the data collected from several sources, and dictionary, which represents where this data is described. The data dictionary is an important part of a relational database because it provides additional information about the relationships between the tables in the database. A data dictionary in a DBMS helps users manage data in an orderly manner, thereby preventing data redundancy.

Types of Data Dictionary in DBMS


There are basically two types of data dictionaries in a database management
system:
● Integrated Data Dictionary
● Stand Alone Data Dictionary

Integrated Data Dictionary


Every relational database has an integrated data dictionary available in the DBMS. This integrated data dictionary acts as a system catalog that is accessed and updated by the relational database. Older databases do not have an integrated data dictionary, so the database administrator must use a stand-alone data dictionary. An integrated data dictionary in a DBMS can link metadata.

The integrated data dictionary can be further divided into two types:

Active: When any changes are made to the database, the active data dictionary is
automatically updated by the DBMS. It is also known as a self-updating dictionary
because it continuously updates its data.

Passive: Unlike active dictionaries, passive dictionaries must be updated manually when there are changes in the database. This type of data dictionary is difficult to manage because it requires dedicated effort; otherwise, the database and the data dictionary will fall out of sync.

Stand-Alone Data Dictionary

This type of data dictionary is very adaptable because it grants the administrator in charge of the confidential information complete autonomy to define and manage all crucial data, regardless of whether the information is printed. A stand-alone data dictionary gives database designers the flexibility to communicate with end users regardless of the format of their data dictionaries.

Database Administrator (DBA)

A Database Administrator (DBA) is the person responsible for controlling, maintaining, coordinating, and operating a database management system. Managing, securing, and taking care of the database systems is their prime responsibility. The role also spans configuration, database design, migration, security, troubleshooting, backup, and data recovery. Database administration is a key function in any firm or organization that relies on one or more databases.

Types of Database Administrator (DBA)


● Administrative DBA: Their job is to maintain the server and keep it
functional. They are concerned with data backups, security,
troubleshooting, replication, migration, etc.
● Data Warehouse DBA: Has the same baseline duties but is accountable for merging data from various sources into the data warehouse. They also design the warehouse and clean and scrub the data prior to loading.
● Cloud DBA: Nowadays companies prefer to keep their data on cloud storage, as it reduces the chance of data loss and provides an extra layer of security and integrity; the cloud DBA administers these cloud-hosted databases.
● Development DBA: They build and develop queries, stored procedures, etc. that meet firm or organization needs.
● Application DBA: They particularly manage all requirements of
application components that interact with the database and accomplish
activities such as application installation and coordination, application
upgrades, database cloning, data load process management, etc.
● Architect: They are held responsible for designing schemas like building
tables. They work to build a structure that meets organizational needs.
The design is further used by developers and development DBAs to
design and implement real applications.
● OLAP DBA: They design and build multi-dimensional cubes for decision support or OLAP systems.
● Data Modeler: In general, a data modeler is in charge of a portion of a data architect's duties. A data modeler is typically not regarded as a DBA, but this is not a hard and fast rule.
● Task-Oriented DBA: To concentrate on a specific DBA task, large
businesses may hire highly specialised DBAs. They are quite uncommon
outside of big corporations. Recovery and backup DBA, whose
responsibility it is to guarantee that the databases of businesses can be
recovered, is an example of a task-oriented DBA.
● Database Analyst: This position doesn’t actually have a set definition.
Junior DBAs may occasionally be referred to as database analysts. A
database analyst occasionally performs functions that are comparable to
those of a database architect. The term “Data Administrator” is also
used to describe database analysts and data analysts. Additionally, some
businesses occasionally refer to database administrators as data
analysts.

Introduction to SQL
What are SQL Commands?
Structured Query Language (SQL) commands are standardized instructions
used by developers to interact with data stored in relational databases.
These commands allow for the creation, manipulation, retrieval, and
control of data, as well as database structures. SQL commands are
categorized based on their specific functionalities:

RDBMS
RDBMS stands for Relational Database Management System.

RDBMS is the basis for SQL, and for all modern database systems such as MS SQL Server, IBM DB2,
Oracle, MySQL, and Microsoft Access.

The data in RDBMS is stored in database objects called tables. A table is a collection of related data
entries and it consists of columns and rows.

Components of a SQL System


A SQL system consists of several key components that work together to enable
efficient data storage, retrieval, and manipulation. Understanding these
components is crucial for mastering SQL and its role in relational database
systems. Some of the Key components of a SQL System are:
● Databases: Databases are structured collections of data organized into
tables, rows, and columns. Databases serve as repositories for storing
information efficiently and provide a way to manage and access data.
● Tables: Tables are the fundamental building blocks of a database,
consisting of rows (records) and columns (attributes or fields). Tables
ensure data integrity and consistency by defining the structure and
relationships of the stored information.
● Queries: Queries are SQL commands used to interact with databases.
They enable users to retrieve, update, insert, or delete data from tables,
allowing for efficient data manipulation and retrieval.
● Constraints: Constraints are rules applied to tables to maintain data
integrity. Constraints define conditions that data must meet to be stored
in the database, ensuring accuracy and consistency.
● Stored Procedures: Stored procedures are pre-compiled SQL statements
stored in the database. Stored procedures can accept parameters,
execute complex operations, and return results, enhancing efficiency,
reusability, and security in database management.
● Transactions: Transactions are groups of SQL statements that are
executed as a single unit of work. Transactions ensure data consistency
and integrity by allowing for the rollback of changes if any part of the
transaction fails.
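Two of the components above, constraints and queries, can be seen in action with sqlite3. The `marks` table and its CHECK rule are hypothetical; the point is that the DBMS itself rejects data that violates a constraint, so the application cannot accidentally store an invalid row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A constraint encodes a rule the stored data must satisfy.
conn.execute("""CREATE TABLE marks (
    roll_number INTEGER PRIMARY KEY,
    score INTEGER CHECK (score BETWEEN 0 AND 100))""")

conn.execute("INSERT INTO marks VALUES (1, 88)")       # satisfies the rule
try:
    conn.execute("INSERT INTO marks VALUES (2, 150)")  # violates the CHECK
except sqlite3.IntegrityError as e:
    print("rejected:", e)

# Only the valid row made it into the table.
print(conn.execute("SELECT COUNT(*) FROM marks").fetchone()[0])
```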

SQL Commands are mainly categorized into five categories:

● DDL – Data Definition Language


● DQL – Data Query Language
● DML – Data Manipulation Language
● DCL – Data Control Language
● TCL – Transaction Control Language

1. Data Definition Language (DDL) in SQL


DDL or Data Definition Language consists of the SQL commands that are used to define, alter, and delete database structures such as tables, indexes, and schemas. It deals with descriptions of the database schema and is used to create and modify the structure of database objects in the database.

Common DDL Commands

CREATE: Create the database or its objects (table, index, function, view, stored procedure, trigger).
Syntax: CREATE TABLE table_name (column1 data_type, column2 data_type, ...);

DROP: Delete objects from the database.
Syntax: DROP TABLE table_name;

ALTER: Alter the structure of the database.
Syntax: ALTER TABLE table_name ADD COLUMN column_name data_type;

TRUNCATE: Remove all records from a table, including all space allocated for the records.
Syntax: TRUNCATE TABLE table_name;

COMMENT: Add comments to the data dictionary.
Syntax: COMMENT ON TABLE table_name IS 'comment_text';

RENAME: Rename an object existing in the database.
Syntax: RENAME TABLE old_table_name TO new_table_name;
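A few of these DDL commands can be tried with sqlite3 (SQLite has no TRUNCATE or COMMENT, and spells RENAME as ALTER TABLE ... RENAME TO, so this sketch covers CREATE, ALTER, RENAME, and DROP; the `employees`/`staff` names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# CREATE: define a new table
conn.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT)")

# ALTER: add a column to the existing structure
conn.execute("ALTER TABLE employees ADD COLUMN salary REAL")

# RENAME (SQLite dialect: ALTER TABLE ... RENAME TO)
conn.execute("ALTER TABLE employees RENAME TO staff")

# DROP: remove the object entirely
conn.execute("DROP TABLE staff")

# The schema catalog confirms the table is gone.
print(conn.execute(
    "SELECT COUNT(*) FROM sqlite_master WHERE name = 'staff'").fetchone()[0])
```

Querying `sqlite_master` at the end illustrates the earlier point about data dictionaries: DDL changes are reflected in a catalog the DBMS maintains automatically.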

2. Data Query Language (DQL) in SQL


DQL statements are used for performing queries on the data within schema objects. The purpose of a DQL command is to get some schema relation based on the query passed to it; it allows getting data out of the database to perform operations on it. When a SELECT is fired against a table or tables, the result is compiled into a temporary table, which is displayed or received by the program.

DQL Command

SELECT: Used to retrieve data from the database.
Syntax: SELECT column1, column2, ... FROM table_name WHERE condition;
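The behavior described above, a SELECT producing a temporary result set handed back to the program, can be seen directly with sqlite3 (the `students` table and its rows are made up for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (roll INTEGER, name TEXT, marks INTEGER)")
conn.executemany("INSERT INTO students VALUES (?, ?, ?)",
                 [(1, "Ravi", 72), (2, "Meena", 91), (3, "Arun", 64)])

# SELECT with a WHERE filter and ORDER BY; the result is a temporary
# relation received by the program as a list of row tuples.
rows = conn.execute(
    "SELECT name, marks FROM students WHERE marks >= 70 "
    "ORDER BY marks DESC").fetchall()
print(rows)  # → [('Meena', 91), ('Ravi', 72)]
```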

3. Data Manipulation Language (DML) in SQL

The SQL commands that deal with the manipulation of data present in the database belong to DML or Data Manipulation Language, and this includes most SQL statements. DML statements operate on the data stored in the database, rather than on its structure.

Common DML Commands

INSERT: Insert data into a table.
Syntax: INSERT INTO table_name (column1, column2, ...) VALUES (value1, value2, ...);

UPDATE: Update existing data within a table.
Syntax: UPDATE table_name SET column1 = value1, column2 = value2 WHERE condition;

DELETE: Delete records from a database table.
Syntax: DELETE FROM table_name WHERE condition;

LOCK: Control table concurrency.
Syntax: LOCK TABLE table_name IN lock_mode;

CALL: Call a PL/SQL or Java subprogram.
Syntax: CALL procedure_name(arguments);

EXPLAIN PLAN: Describe the access path to data.
Syntax: EXPLAIN PLAN FOR SELECT * FROM table_name;
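The three core DML commands can be exercised together with sqlite3 (the `inventory` table is a hypothetical example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inventory (item TEXT PRIMARY KEY, qty INTEGER)")

# INSERT: add new rows
conn.execute("INSERT INTO inventory VALUES ('pen', 10), ('book', 4)")

# UPDATE: modify existing rows matching a condition
conn.execute("UPDATE inventory SET qty = qty + 6 WHERE item = 'book'")

# DELETE: remove rows matching a condition
conn.execute("DELETE FROM inventory WHERE item = 'pen'")

print(conn.execute("SELECT item, qty FROM inventory").fetchall())
# → [('book', 10)]
```

Note that UPDATE and DELETE without a WHERE clause apply to every row in the table, which is why the condition is the part worth double-checking before running them.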

4. Data Control Language (DCL) in SQL


DCL (Data Control Language) includes commands such as GRANT and REVOKE
which mainly deal with the rights, permissions, and other controls of the
database system. These commands are used to control access to data in the
database by granting or revoking permissions.

Common DCL Commands

GRANT: Assigns new privileges to a user account, allowing access to specific database objects, actions, or functions.
Syntax: GRANT privilege_type [(column_list)] ON [object_type] object_name TO user [WITH GRANT OPTION];

REVOKE: Removes previously granted privileges from a user account, taking away their access to certain database objects or actions.
Syntax: REVOKE [GRANT OPTION FOR] privilege_type [(column_list)] ON [object_type] object_name FROM user [CASCADE];

5. Transaction Control Language (TCL) in SQL

Transactions group a set of tasks into a single execution unit. Each transaction begins with a specific task and ends when all the tasks in the group are successfully completed. If any of the tasks fail, the transaction fails. Therefore, a transaction has only two results: success or failure.

Common TCL Commands

BEGIN TRANSACTION: Starts a new transaction.
Syntax: BEGIN TRANSACTION [transaction_name];

COMMIT: Saves all changes made during the transaction.
Syntax: COMMIT;

ROLLBACK: Undoes all changes made during the transaction.
Syntax: ROLLBACK;

SAVEPOINT: Creates a savepoint within the current transaction.
Syntax: SAVEPOINT savepoint_name;
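COMMIT and ROLLBACK can be demonstrated with a classic transfer-between-accounts example in sqlite3, where both updates must succeed or neither does. The `accounts` table and the insufficient-funds rule are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100), (2, 100)")
conn.commit()

# A transfer is one unit of work: both updates succeed or neither does.
try:
    conn.execute("UPDATE accounts SET balance = balance - 150 WHERE id = 1")
    (bal,) = conn.execute(
        "SELECT balance FROM accounts WHERE id = 1").fetchone()
    if bal < 0:                       # business rule: no overdrafts
        raise ValueError("insufficient funds")
    conn.execute("UPDATE accounts SET balance = balance + 150 WHERE id = 2")
    conn.commit()                     # COMMIT: make all changes permanent
except ValueError:
    conn.rollback()                   # ROLLBACK: undo everything uncommitted

print(conn.execute("SELECT balance FROM accounts ORDER BY id").fetchall())
# → [(100, ), (100, )] — the failed transfer left no trace
```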

Important SQL Commands

1. SELECT: Used to retrieve data from a database.
2. INSERT: Used to add new data to a database.
3. UPDATE: Used to modify existing data in a database.
4. DELETE: Used to remove data from a database.
5. CREATE TABLE: Used to create a new table in a database.
6. ALTER TABLE: Used to modify the structure of an existing table.
7. DROP TABLE: Used to delete an entire table from a database.
8. WHERE: Used to filter rows based on a specified condition.
9. ORDER BY: Used to sort the result set in ascending or descending order.
10. JOIN: Used to combine rows from two or more tables based on a related column between them.
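JOIN, the last command in the list, is worth a small worked example in sqlite3; the `dept` and `emp` tables are illustrative, related through the shared `dept_id` column:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dept (dept_id INTEGER PRIMARY KEY, dname TEXT)")
conn.execute("CREATE TABLE emp (ename TEXT, dept_id INTEGER)")
conn.execute("INSERT INTO dept VALUES (1, 'Sales'), (2, 'HR')")
conn.execute("INSERT INTO emp VALUES ('Kiran', 1), ('Divya', 2)")

# JOIN combines rows from the two tables wherever the related
# column (dept_id) matches.
print(conn.execute(
    "SELECT ename, dname FROM emp "
    "JOIN dept ON emp.dept_id = dept.dept_id "
    "ORDER BY ename").fetchall())
# → [('Divya', 'HR'), ('Kiran', 'Sales')]
```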

Cloud Computing

Cloud computing means storing and accessing data and programs on remote servers hosted on the internet instead of the computer's hard drive or a local server.

The following are some of the operations that can be performed with cloud computing:

● Storage, backup, and recovery of data
● Delivery of software on demand
● Development of new applications and services
● Streaming video and audio

Cloud Computing Infrastructure

Cloud computing refers to providing on-demand services to customers anywhere and anytime; the cloud infrastructure is what powers the complete cloud computing system. Cloud infrastructure is capable of providing the same services to customers as physical infrastructure. It is available for private cloud, public cloud, and hybrid cloud systems with low cost, greater flexibility, and scalability.

Cloud infrastructure is categorized into three parts in general:

1. Computing
2. Networking
3. Storage
fig (a) Components of Cloud Infrastructure

1. Hypervisor:

A hypervisor is firmware or a low-level program which is key to enabling virtualization. It is used to divide and allocate cloud resources between several customers. As it monitors and manages cloud services and resources, the hypervisor is also called a VMM (Virtual Machine Monitor or Virtual Machine Manager).

2. Management Software:

Management software helps in maintaining and configuring the infrastructure. Cloud management software monitors and optimizes resources, data, applications, and services.

3. Deployment Software:

Deployment software helps in deploying and integrating the application on the cloud. So, typically, it helps in building a virtual computing environment.

4. Network:

The network is one of the key components of cloud infrastructure; it is responsible for connecting cloud services over the internet. A network is required for the transmission of data and resources, both externally and internally.

5. Server:

The server represents the computing portion of the cloud infrastructure. It is responsible for managing and delivering cloud services to various clients and partners, maintaining security, etc.

6. Storage:

Storage represents the storage facility provided to different organizations for storing and managing data. It provides the ability to fall back on another resource if one resource fails, as it keeps many copies of the stored data.

What is Cloud Segmentation?

Cloud segmentation is a cybersecurity strategy for optimizing performance, enhancing security, and ensuring regulatory compliance within cloud computing environments. Just as physical barriers in traditional architecture help prevent the spread of fire, cloud segmentation creates virtual boundaries within cloud environments to protect sensitive data and applications from unauthorized access and cyber threats.

The Importance of Cloud Segmentation

Cyber threats are becoming increasingly sophisticated, and a cloud workload platform that leverages segmentation offers multiple layers of protection. Here are a few reasons why cloud segmentation is becoming an indispensable part of cloud security:

1. Enhanced Security: By isolating different workloads and data, cloud segmentation reduces the attack surface, limiting the potential impact of a breach to a confined area within the cloud environment.
2. Regulatory Compliance: Many industries are subject to stringent regulatory requirements for data protection. Workload segmentation helps in meeting these requirements by providing a framework for segregating sensitive data.
3. Performance Optimization: Segmentation can improve system
performance by limiting the scope of data and resource sharing, thus
reducing latency and enhancing user experience.
4. Cost Management: Effective segmentation (especially granular
segmentation) can lead to more efficient use of resources, potentially
lowering costs by aligning resource allocation with actual usage patterns.
Pros and Cons of Cloud Computing

Many organizations still see cost as a significant benefit when they weigh the pros and cons of cloud computing.

1. Lower operational costs. The cloud vendor assumes many equipment and software management tasks, from servers and networking gear to cloud storage. That includes applying software updates and security patches.

2. Increased IT resources. Enterprises can access more resources for internal service development and digital transformation projects that directly support business units, enabling easier business experimentation and innovation.

3. Convenient, rapid access to technology. Enterprises can work with the latest hardware and software, such as new CPUs and GPUs, machine learning and AI applications, and network interfaces, often before it is available or affordable to enterprise buyers.
4. Faster connectivity. Cloud providers invest in the latest network
interface cards and switches, along with multi-Gbps circuits to internet
exchange points. This provides the fastest access to data and
applications both within the data center and to customers.

5. Greater scale. The public cloud is engineered for massive scale. Providers can easily expand resource capacity for individual services to meet customers' workload demands.

6. Greater expertise. Few organizations possess the internal expertise in


secure infrastructure and security engineering offered by cloud
providers. This expertise allows for highly specialized services, such as
powerful analytics and AI, which might be impossible to implement
with local data center staff.

7. More reliable infrastructure. The resilience and redundancy found in cloud providers' physical infrastructure far outstrips what most companies can afford to build or operate. Cloud customers can also access multiple cloud locations, which simplifies redundant deployments. Some cloud services offer built-in multisite redundancy.

Cons of Cloud Computing

Although the cloud has been a boon for IT organizations, cloud services are not a panacea for all IT operational problems. An organization must balance the many benefits with the following downsides.

1. A complicated shared security model. Security policies and management are split between the provider and the user. Understanding the division in this shared responsibility is crucial, as mistakes or neglect can expose vast amounts of sensitive data.

2. Vendor lock-in. Cloud providers share many common service types, but access techniques (such as APIs), service levels, and pricing can vary dramatically. It might not be possible to migrate a workload from one cloud provider to another without some amount of re-architecting for the new cloud environment.

3. Complex pricing structures. Some services, such as compute instances, have multiple subscription tiers and pricing schemes. These variables make pricing and total-cost-of-ownership analysis tedious and time-consuming; doing so typically requires software assistance from built-in or third-party tools. The addition of free service levels and discount availability only adds complexity to pricing considerations.
4. Outbound data transfer costs. It's expensive to egress large data
sets from a cloud provider to the local data center or another cloud --
this also creates a disincentive for an organization to move from one
cloud provider to another.

5. Less flexibility than DIY environments. Many configuration choices are made by the provider, so customers have limited control.

6. Sketchy, inconsistent customer support. Cloud service providers can be difficult to reach or slow to respond to technical issues or cost concerns. As a result, many organizations contract with a third-party cloud management and support partner.

7. Dependence on fast, redundant connectivity. Cloud computing requires either reliable connections to networks and the internet or a direct private link to the provider. This is especially important for remote locations such as edge facilities.

8. Cloud-specific skills. Most internal IT organizations don't possess the cloud design and operations expertise found on a cloud provider's payroll. Such cloud-skilled staff can be hard to recruit and retain, as workers with those advanced skills are attractive to other organizations as well as to the cloud providers themselves.

9. Country- or industry-specific regulatory requirements. Organizations must plan carefully, especially when data and workloads are hosted outside their home country or in a country with strict privacy laws. Note that a cloud provider's presence in a particular location might imply jurisdiction, and a need to comply with local regulations.
