SQL Server Backup and Restore Guide
Databases
[Link]/adminBD/[Link]
Plan
1. Introduction
2. Backup and Restore
3. Manage logins and server roles
4. Implement and maintain indexes
5. Import and export data
6. Manage SQL Server Agent
7. Manage and configure databases
8. Identify and resolve concurrency problems
9. Collect and analyse troubleshooting data
10. Audit SQL Server Instances
11. Additional SQL Server components
Introduction
Downloading SQL Server Developer 2019
Download a free specialized edition
▪ Basic, Custom or Download Media
▪ Language: English
▪ Package ISO or CAB
Editions and supported features of SQL Server 2019: Enterprise, Developer, Express,…
[Link]15?view=sql-server-ver15
AdventureWorks
[Link]/adminBD/sampleBD/[Link]
Backup and Restore
Restoring AdventureWorks with SSMS
T-SQL (Transact-SQL)
Restoring a backup creates a database containing exactly the data that was captured in the backup.
Backing up Database
BACKUP DATABASE [AdventureWorks2014] TO DISK = N'E:\adminBD\DBA\AdventureWorksBackup'
WITH NOFORMAT, NOINIT,
NAME = N'AdventureWorks2014-Full-Database-Backup',
SKIP, NOREWIND, NOUNLOAD, STATS = 1
Recovery Model
❖ SIMPLE You can only restore the database to the most recent backup. Point-in-time recovery is not possible.
❖ FULL You can restore the database to any point in time, provided that you have the necessary transaction log backups.
❖ BULK LOGGED point-in-time recovery is possible for all transactions except those that are part of bulk operations.
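The recovery model is set per database. A minimal sketch of inspecting and switching it, using the AdventureWorks2014 database from the earlier slides:

```sql
-- Check the current recovery model of every database
SELECT name, recovery_model_desc FROM sys.databases;

-- Switch AdventureWorks2014 to FULL so point-in-time recovery becomes possible
ALTER DATABASE [AdventureWorks2014] SET RECOVERY FULL;

-- Or back to SIMPLE (the log is truncated automatically; log backups are no longer possible)
-- ALTER DATABASE [AdventureWorks2014] SET RECOVERY SIMPLE;
```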
Different backup models
❖ FULL BACKUP A full backup captures the entire database, including all objects and data. It includes the transaction log at the time of the backup, ensuring a complete snapshot.
❖ DIFFERENTIAL BACKUP A differential backup captures only the data that has changed since the last full backup. It keeps track of changes and includes only the modified data.
❖ TRANSACTION LOG BACKUP A transaction log backup records all the transactions that have occurred since the last transaction log backup. It captures changes in the database at a granular level.
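The three backup types above map to three T-SQL statements; a sketch with illustrative file paths:

```sql
-- Full backup: the baseline that every other backup type depends on
BACKUP DATABASE [AdventureWorks2014]
TO DISK = N'E:\adminBD\DBA\AW_full.bak';

-- Differential: only the extents changed since the last full backup
BACKUP DATABASE [AdventureWorks2014]
TO DISK = N'E:\adminBD\DBA\AW_diff.bak'
WITH DIFFERENTIAL;

-- Transaction log: all log records since the last log backup
-- (requires the FULL or BULK_LOGGED recovery model)
BACKUP LOG [AdventureWorks2014]
TO DISK = N'E:\adminBD\DBA\AW_log.trn';
```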
Different backup models in SSMS
Point in time recovery
Quiz
Explanation: To restore to a point in time at 8:30 p.m., you would need:
The FULL midday backup (the last full backup before the restore point),
The DIFFERENTIAL 6 p.m. backup (the last differential backup before the restore point), and
The TRANSACTION LOG 8 p.m. backup (the transaction log closest to 8:30 p.m.).
Use NORECOVERY when you still plan to restore additional backups (such as differential or transaction log backups).
Use RECOVERY when you're done with restoring all backups and want the database to be online and available.
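The quiz scenario (restore to 8:30 p.m.) can be sketched as the following restore sequence; file paths and the date are illustrative:

```sql
-- 1. Restore the last full backup, leaving the database in RESTORING state
RESTORE DATABASE [AdventureWorks2014] FROM DISK = N'E:\adminBD\DBA\AW_full.bak'
WITH NORECOVERY;

-- 2. Restore the last differential taken before the target time
RESTORE DATABASE [AdventureWorks2014] FROM DISK = N'E:\adminBD\DBA\AW_diff.bak'
WITH NORECOVERY;

-- 3. Restore the log up to the target time, then bring the database online
RESTORE LOG [AdventureWorks2014] FROM DISK = N'E:\adminBD\DBA\AW_log.trn'
WITH STOPAT = '2024-01-15 20:30:00', RECOVERY;
```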
❖ Model: A template database used when creating new databases. Any modification to the Model database applies to every database created afterward.
❖ MSDB: Used by SQL Server Agent for scheduling alerts and jobs. It also stores backup and restore history, maintenance plans, and more.
❖ TempDB: A workspace for temporary objects. It is recreated every time SQL Server starts, so it cannot be backed up. It is used for various operations, such as sorting and storing temporary tables.
❖ Resource: A hidden, read-only database that contains the system objects shipped with SQL Server. It is used by the SQL Server engine and is not meant to be modified.
Perform backup/restore based on strategies
❖ What you are backing up and how often the data is updated
❖ How much data can you afford to lose?
❖ How much space will a full database backup use?
❖ Backup redundancy
❖ Reliability
❖ Expiration of the backup
❖ Compression
❖ Encryption
Recover from a corrupted drive
❖ What happens if the backup media is damaged ?
✓ Mirrored Media
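To guard against damaged backup media, the same backup can be written to several devices at once; a sketch (paths illustrative; MIRROR TO is an Enterprise edition feature):

```sql
-- Write identical copies of the backup to two devices;
-- either copy can be used for the restore
BACKUP DATABASE [AdventureWorks2014]
TO DISK = N'E:\adminBD\DBA\AW_full.bak'
MIRROR TO DISK = N'F:\mirror\AW_full.bak'
WITH FORMAT;  -- MIRROR TO requires creating a new media set
```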
CREATE LOGIN [DESKTOP-ICEQ7B9\SQLTest]
FROM WINDOWS
WITH DEFAULT_DATABASE=[AdventureWorks2014]

FROM WINDOWS means Windows Authentication: the login credentials of the Windows user (in this case, SQLTest) are used to authenticate the user when they log in to SQL Server.
WITH DEFAULT_DATABASE specifies the default database used when the login connects; in this case, AdventureWorks2014 is set as the default database for this login.
Manage access to the server
❖ Serveradmin
❖ Alter settings, shutdown, alter any endpoint, create any endpoint
❖ Securityadmin
❖ Alter any login
❖ Processadmin
❖ Alter any connection
❖ Setupadmin
❖ Alter any linked server
❖ Bulkadmin
❖ Administer bulk operations
❖ Diskadmin
❖ Alter resources
❖ Dbcreator
❖ Alter any database, create any database
❖ Sysadmin
❖ Can perform any activity on the server
❖ Public
❖ No server-level permission (except View Any Database and Connect permissions)
The overall purpose of this command is to: create a user named SQLTest in the current database; map this user to the Windows login DESKTOP-ICEQ7B9\SQLTest, allowing it to access the database using that login; and set the default schema for the user to dbo, making it easier to reference database objects owned by the dbo schema.
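The command described above can be sketched as follows, reusing the login created earlier:

```sql
USE [AdventureWorks2014];

-- Map the Windows login to a database user whose default schema is dbo
CREATE USER [SQLTest] FOR LOGIN [DESKTOP-ICEQ7B9\SQLTest]
WITH DEFAULT_SCHEMA = [dbo];
```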
Fixed database-level roles
❖ db_owner
❖ Has all permissions in the database
❖ db_securityadmin
❖ Alter any role, create role, view definition
❖ db_accessadmin
❖ Alter any user, connect
❖ db_backupoperator
❖ Backup database, backup log, checkpoint
❖ db_ddladmin
❖ Data Definition Language commands in the database
❖ db_datawriter
❖ Grant Insert, Update, Delete on the database
❖ db_denydatawriter
❖ Deny Insert, Update, Delete on the database
❖ db_datareader
❖ Grant Select on the database
❖ db_denydatareader
❖ Deny Select on the database
❖ public
❖ No database-level permissions (except some database permissions, for example View Any Column Master Key Definition and Select permission on system tables)
❖ Schema owner
The owner of a schema has the ability to manage the objects within that schema,
including granting permissions to other users and creating new objects.
Creating access to server/database with least privilege
❖ Principle of least privilege
❖ Use fixed server roles
❖ Restrict use of sysadmin
❖ Assign permissions to roles
❖ Use stored procedures and functions
❖ Permission statements
❖ Grant, Deny, Revoke (REVOKE SELECT ON [Link] TO myNewDBRole)
REVOKE removes a previously granted or denied permission from a user or role. It essentially "resets" the permission: the user no longer holds that specific permission, but nothing explicitly prevents them from having it if other sources grant it.
DENY explicitly prevents a user or role from having a specific permission, overriding any grants of that permission they may receive from other roles or groups. It is a stronger restriction than REVOKE.
❖ Ownership chains
When a SQL query involves multiple database objects (such as views, tables, or stored procedures), SQL Server checks whether these objects are owned by the same database principal (usually a user or role). If they are, the objects are said to be in the same ownership chain, and permission checks on the underlying objects can be bypassed.
In SQL Server, if you have denied a permission to a user or role, you cannot grant that same permission to them until the DENY is removed, because a DENY always takes precedence over a GRANT. Even if you issue a GRANT after a DENY, the DENY will still block the user from accessing the resource.
Protect objects from being modified
use [AdventureWorks2014]
❖ Non-clustered indexes
❖Non-unique
Implement indexes
CREATE CLUSTERED INDEX [IX_NewTable_ID] ON [dbo].[NewTable]
(
[ID] ASC
)
Fragmentation
[Diagram series: rows (ObjectName, ColorName, ID) from Table/Black/1 through Table/Gold/10 are spread across index pages A–D in key order. As new rows (Book/Brown/11, then Book/Yellow/12) are inserted into already-full pages, the pages split and a new page E appears, so the logical key order no longer matches the physical page order: the index is fragmented.]
Fragmentation
❖ Reorganize
❖ Defragments the leaf level of the index in place, compacting pages and reordering them to match the logical key order (a lightweight, online operation)
Fragmentation
❖ Rebuild
❖ Drops the index and starts from scratch
❖ Sorts the entries back into order (Book, Bookcase, Computer, …)
Only use rebuild when it is absolutely necessary, when fragmentation has reached such an extent that you have to delete the index and start again.
Fragmentation
❖ Reorganize
❖ Rebuild
❖ user_scans: counts scans, i.e. reads of all the pages in the index. The goal of an index is to avoid scans (to find rows quickly), so if this counter keeps increasing, the index is not doing its job.
❖ user_updates: incremented for every update that touches the index. A high value means a high maintenance cost; weigh lookup speed against maintenance cost when deciding whether to drop the index.
After disabling an index you must rebuild it to use it again: there is no ENABLE option, so REBUILD is what re-enables it.
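The counters above come from sys.dm_db_index_usage_stats; a sketch of reading them, joined to index names, followed by the disable/rebuild cycle on the index created earlier:

```sql
-- Compare how often each index is read vs. how often it must be maintained
SELECT i.name AS index_name,
       s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.database_id = DB_ID();

-- Disabling an index keeps its definition but drops its data;
-- a REBUILD is required to make it usable again
ALTER INDEX [IX_NewTable_ID] ON [dbo].[NewTable] DISABLE;
ALTER INDEX [IX_NewTable_ID] ON [dbo].[NewTable] REBUILD;
```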
Quiz
Because the data is duplicated (non-unique).
>30% (rebuild threshold)
Import and export data
Transfer data previously seen : backup restore
❖ Export data
❖ Copy database (SQL Server Agent): SQL Server Agent must be enabled first
Bulk Insert
create table [Link] (Heading1 varchar(50), Heading2 varchar(50))
with
(FIELDTERMINATOR=',',
ROWTERMINATOR='\n',
FIRSTROW=2
)
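The WITH clause above belongs to a BULK INSERT statement whose target table and source file were elided on the slide; a complete sketch with hypothetical names:

```sql
-- Hypothetical staging table and CSV path; adjust to your environment
CREATE TABLE dbo.Staging (Heading1 varchar(50), Heading2 varchar(50));

BULK INSERT dbo.Staging
FROM 'E:\adminBD\data.csv'
WITH (
    FIELDTERMINATOR = ',',  -- columns separated by commas
    ROWTERMINATOR = '\n',   -- one row per line
    FIRSTROW = 2            -- skip the header row
);
```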
Quiz
The test (DS) covers up to here; understand the notions and review the two labs: TP1 and TP2.
Multiple-choice questions
Command questions
Questions asking to explain these phenomena
Manage SQL Server Agent
For automation (we create jobs)
Create, maintain and monitor jobs
❖ Automation
Autoclose and Autoshrink
❖ Autoclose
!! Reopening the database for each new connection adds overhead and can cause significant performance issues in production systems.
❖ Autoshrink
ALTER DATABASE [AdventureWorks2014] SET AUTO_SHRINK ON
The AUTO_SHRINK option, when enabled, causes the database to automatically shrink its size when SQL Server detects unused space in the database files.
!! Frequent shrinking leads to fragmentation of indexes and database files, which further harms performance.
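Given the warnings above, both options are usually left off in production; a sketch of enforcing and verifying that:

```sql
-- Both options default to OFF and are best left that way in production
ALTER DATABASE [AdventureWorks2014] SET AUTO_CLOSE OFF;
ALTER DATABASE [AdventureWorks2014] SET AUTO_SHRINK OFF;

-- Verify the current settings
SELECT name, is_auto_close_on, is_auto_shrink_on
FROM sys.databases
WHERE name = N'AdventureWorks2014';
```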
In SQL Server, filegroups are logical structures that help manage the physical storage of database files. They allow you to group one or more data files for better management and improved performance. Filegroups are particularly useful for
databases that require large amounts of storage, have high I/O demands, or need data distribution across multiple storage devices.
❖ Secondary data file (.ndf) Optional files that can be associated with user-defined filegroups.
User-Defined Filegroups:
You can create additional filegroups (user-defined filegroups) to distribute data files and objects for
specific purposes (e.g., separating tables and indexes for performance optimization).
❖ Filegroups
❖ Data Files on different filegroups
Creating database with multiple filegroups
This script creates a database with:
A primary data file (.mdf) for critical system data.
A secondary data file (.ndf) in a custom filegroup for other objects.
A transaction log file (.ldf).
CREATE DATABASE [DBAdatabase]
CONTAINMENT = NONE
ON PRIMARY
FILEGROUP [Secondary]
LOG ON

CONTAINMENT = NONE specifies that the database is not contained: it depends on the SQL Server instance and on server-level objects, like logins.

IF NOT EXISTS (SELECT name FROM [Link] WHERE is_default=1 AND name = N'Secondary')
This guard checks whether the Secondary filegroup is already the default before changing it.
You use RECOVERY when you're done applying backups, meaning the database should now be made available to users.
Manage file space including adding new filegroups
ALTER DATABASE [DBAdatabase] ADD FILEGROUP [third]
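A filegroup added this way is still empty; a data file must be added to it before objects can be placed there. A sketch (logical file name and physical path are illustrative):

```sql
-- A filegroup is only usable once it contains at least one data file
ALTER DATABASE [DBAdatabase]
ADD FILE (
    NAME = N'DBAdatabase_third1',                     -- logical file name
    FILENAME = N'E:\adminBD\DBAdatabase_third1.ndf',  -- physical path
    SIZE = 64MB, FILEGROWTH = 64MB
) TO FILEGROUP [third];
```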
Partitioning
❖ Create filegroups: Filegroups are logical containers that group one or more database files together. For partitioning, multiple filegroups are created to distribute the partitions.
❖ Create partition function: A partition function defines how the data will be divided into partitions based on a specific column (e.g., a date or a range of values).
CREATE PARTITION FUNCTION [PartitionFunctionPartition](date) AS RANGE RIGHT FOR VALUES (N'2018-01-01', N'2022-01-01')
This function splits the data into three partitions based on the date column.
❖ Create the partitioned table: the CREATE TABLE statement ends with ([dateOfEntry]) ON [PartitionSchemeParttition]([dateOfEntry]), so rows are placed according to the partition scheme.
COMMIT TRANSACTION ends the transaction and saves the changes to the database, ensuring all actions performed within the transaction are permanently applied.
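A minimal end-to-end sketch of the steps above; the slide showed only fragments, so the scheme definition (ALL TO [PRIMARY]) and the table columns are assumptions:

```sql
-- Partition function: RANGE RIGHT puts each boundary value
-- in the partition to its right
CREATE PARTITION FUNCTION [PartitionFunctionPartition](date)
AS RANGE RIGHT FOR VALUES (N'2018-01-01', N'2022-01-01');

-- Partition scheme maps the three partitions to filegroups
-- (here all to PRIMARY for simplicity)
CREATE PARTITION SCHEME [PartitionSchemeParttition]
AS PARTITION [PartitionFunctionPartition] ALL TO ([PRIMARY]);

-- Table placed on the scheme: rows are routed by dateOfEntry
CREATE TABLE [dbo].[partitionTable] (
    id int IDENTITY,
    dateOfEntry date
) ON [PartitionSchemeParttition]([dateOfEntry]);
```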
Partitioning
select *, $[Link](dateOfEntry) as PartitionNumber
from [dbo].[partitionTable]
The query retrieves all columns (*) from the table [partitionTable] and adds an extra column called PartitionNumber.
PartitionNumber indicates the partition ID (starting from 1) in which the dateOfEntry value is stored.
select $[Link]('2018-01-01')
You can back up specific filegroups of a database instead of the entire database. This allows you to back up just the data that has changed or is important.
Filegroup and Page restore
RESTORE DATABASE [DBAdatabase] FILE = N'DBAdatabase2' FROM DISK = N'C:\Program Files\Microsoft SQL Server\[Link]\MSSQL\Backup\[Link]'
WITH FILE = 1, NOUNLOAD, STATS = 10

RESTORE DATABASE [AdventureWorks2014] PAGE='1:12'
PAGE = '1:12' refers to the file ID and page ID of the specific page being restored: 1 is the file ID, and 12 is the page ID within that file.
Storage structure: SQL Server stores data in pages within data files. These files are divided into multiple pages, and each page can store multiple rows of data, depending on the size of the rows.
Manage log file growth
select * from sys.dm_db_log_space_usage
This view provides insights into how much of the log file is in use and how much is free.
The 4 in the shrink command specifies the target size of the log file (in MB); SQL Server will attempt to shrink the log file to this size.
Note: Shrinking the transaction log should be done cautiously because it can cause fragmentation and might lead to poor performance if done frequently. After shrinking, SQL Server may need to grow the log file again as database activity increases.
❖ Enable Autogrowth
Autogrowth is enabled by default for most SQL Server files, but it’s a good idea to review the
settings to ensure that they meet the requirements of your database environment.
FILEGROWTH = 128MB: Specifies that the log file will grow by 128 MB each time it runs out of space.
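The pieces above fit together roughly as follows; the logical log file name is an assumption:

```sql
-- How full is the log?
SELECT used_log_space_in_percent FROM sys.dm_db_log_space_usage;

-- Shrink the log file to a 4 MB target (use sparingly)
DBCC SHRINKFILE (N'DBAdatabase_log', 4);

-- Configure autogrowth so the file grows in fixed 128 MB steps
ALTER DATABASE [DBAdatabase]
MODIFY FILE (NAME = N'DBAdatabase_log', FILEGROWTH = 128MB);
```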
DBCC
dbcc shrinkdatabase(DBAdatabase,20)
❖ File Id
◦ select * from sys.database_files
Use this command with the TRUNCATEONLY option when the file has just a small amount of unused space at the end that you want to remove, without moving any data within the file.
dbcc shrinkfile(DBAdatabase3,emptyfile)
EMPTYFILE moves all data out of the file into the other files of the same filegroup, leaving it empty so it can be removed. Use it sparingly and only when necessary.
Implement and configure contained databases and logins
❖ Moving a database from one particular instance of SQL Server to another
❖ A contained database
❖ A database that is isolated from the instance of SQL Server
N'1': the value 1 enables contained database authentication. When set to 1, SQL Server allows databases to manage their own user authentication, independently of SQL Server instance-level authentication.
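A sketch of the steps implied above; the database and user names are illustrative:

```sql
-- Enable contained database authentication at the instance level
EXEC sp_configure N'contained database authentication', N'1';
RECONFIGURE;

-- Mark a database as partially contained
ALTER DATABASE [DBAdatabase] SET CONTAINMENT = PARTIAL;

-- Create a user with its own password, independent of any server login
USE [DBAdatabase];
CREATE USER [ContainedUser] WITH PASSWORD = N'Str0ng!Passw0rd';
```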
Data Compression
❖ Prefix compression
Prefix compression is a technique where common prefixes (values that appear frequently at the beginning of data) are replaced with references to a stored prefix elsewhere.
✓ Stores commonly used prefixes elsewhere
✓ Prefix values are replaced by a reference to the prefix: values that share a common prefix are replaced with a reference to that prefix.
❖ Dictionary compression
Dictionary compression replaces frequently used values with references to a dictionary that stores those values. Whenever a data value matches an entry in the dictionary, it is replaced with a pointer (reference) to that value in the dictionary.
❖ Replaces commonly used values
ALTER TABLE [HumanResources].[Employee] REBUILD PARTITION = ALL
WITH (DATA_COMPRESSION = PAGE)
Here, PAGE compression is used, which works at the page level and applies multiple compression techniques (such as row, prefix, and dictionary compression) to reduce the data size stored on each page.
This applies PAGE compression to the index, which can reduce its size and improve performance by decreasing the number of I/O operations required to access the index:
ALTER INDEX [IX_Employee_OrganizationNode] ON
[HumanResources].[Employee] REBUILD PARTITION = ALL WITH
(DATA_COMPRESSION = PAGE)
Running this stored procedure gives you an idea of how much storage you could save by applying PAGE compression. This is helpful for decision-making, as you can assess the potential benefits of compression before applying it to production environments.
sp_estimate_data_compression_savings: this system stored procedure estimates the potential space savings of applying data compression (in this case, PAGE compression) to a table or index.
[HumanResources]: the schema name.
[Employee]: the table name.
1: the partition number to estimate; here 1 refers to the first partition. If there are multiple partitions, you can run the estimation for each partition.
null: indicates that no specific index is given (here you estimate the table's compression savings, not an index's).
'PAGE': specifies that the estimate is for PAGE compression.
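The parameters described above correspond to a call like this; named parameters are used to avoid any ambiguity in argument order:

```sql
EXEC sp_estimate_data_compression_savings
    @schema_name      = N'HumanResources',
    @object_name      = N'Employee',
    @index_id         = NULL,    -- estimate the whole table, not one index
    @partition_number = 1,       -- first partition
    @data_compression = N'PAGE'; -- compression type to estimate
```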
Sparse columns in SQL Server are a special type of column that is optimized for storing null values in a more efficient way. These columns are especially useful when you
have tables with many nullable columns, but only a small number of rows actually contain data for those columns.
Sparse columns
❖ Optimized space for null values
❖Reduce the space requirements for null values
❖ Sparse columns require more storage space for non-NULL values than the space required for
identical data that is not marked SPARSE
❖ What percent of the data must be NULL for a net space savings?
❖ Estimated space savings by data type:
[Link]server-ver15
Typically, when a column is NULL, it still takes up space in SQL Server (e.g., 1 byte). However, when you define a column as sparse, SQL Server optimizes storage for rows where this column contains a
NULL value by not using storage for those NULL entries. This results in significant space savings when the column has many NULL values.
Sparse columns
create table sparsetable (heading1 nvarchar(10) sparse null)
Columnstore indexes are a special type of index in SQL Server designed to store data in a columnar format rather than the traditional row-based format. This approach optimizes the storage and retrieval
of large amounts of data, especially for analytics and reporting workloads.
Columnstore Indexes
❖ Clustered and nonclustered columnstore index
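A sketch of a nonclustered columnstore index on a hypothetical fact table (the table and its columns are assumptions for illustration):

```sql
-- Hypothetical fact table for analytics workloads
CREATE TABLE dbo.SalesFact (
    SaleDate  date,
    ProductID int,
    Amount    money
);

-- A nonclustered columnstore index stores these columns in columnar format,
-- which compresses well and speeds up large scans and aggregations
CREATE NONCLUSTERED COLUMNSTORE INDEX IX_SalesFact_CS
ON dbo.SalesFact (SaleDate, ProductID, Amount);
```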
❖ Livelock: shared locks prevent another process from acquiring an exclusive lock (one reading process after another keeps winning, so the writer keeps waiting)
Deadlock cycle:
Transaction 1 is waiting for a lock on Table2, which is held by Transaction 2.
Transaction 2 is waiting for a lock on Table1, which is held by Transaction 1.
Neither transaction can proceed, resulting in a deadlock.
dbcc traceon(1204,-1)
dbcc traceon(1222,-1) The -1 argument enables the trace flag globally for all sessions on the SQL Server instance.
❖ DMVs
❖ Performance monitor
Diagnose performance problems with DMVs (Dynamic Management Views)
❖CPU usage
select current_workers_count, work_queue_count, pending_disk_io_count
from sys.dm_os_schedulers
where scheduler_id <=255
❖ Processor
❖ Processor: % Privileged Time
❖ Processor: % User Time
❖ System: Processor Queue Length
IO, Memory and CPU bottlenecks
❖ IO Primary
❖PhysicalDisk: Avg Disk sec/Write
❖PhysicalDisk: Avg Disk sec/Read
❖ IO Secondary
❖PhysicalDisk: Avg Disk queue length
❖PhysicalDisk: Disk Bytes/sec
❖ PhysicalDisk: Disk Transfer/sec
Quiz
Audit SQL Server Instances
Implement a security strategy for auditing and controlling the instance
❖ Server-level audits
Server-level audits focus on tracking activities and events at the SQL Server instance level. These audits monitor actions that affect the entire server, such as login attempts, permission changes, and server configuration changes.
❖ Database-level audits
Database-level audits capture activities and changes specific to a particular database within the SQL Server instance. These audits typically track DML (Data Manipulation Language) operations such as INSERT, UPDATE, DELETE, and SELECT statements on sensitive data, schema changes, or permission alterations within the database.
❖ Components of audits
An audit in SQL Server consists of several components that define how the audit operates and where the data is stored.
❖ The audit itself
The audit itself is the process of capturing, recording, and analyzing events.
❖ Server audit specification
A Server Audit Specification defines the server-level events that you want to capture. This specification works in conjunction with a server-level audit object and allows you to define specific events, such as login attempts or permission changes, to be audited.
❖ Database audit specification
A Database Audit Specification defines the database-level events you want to track, such as SELECT, INSERT, UPDATE, DELETE operations, and schema changes. It can be used to audit actions on specific tables, views, or stored procedures within a database.
❖ Target
A target is where the audit logs and events are stored. SQL Server allows you to store audit logs in various locations, and each target has different features.
Configure server audits
Configure server audits – monitor the attempts to connect
Configure server audits – log file
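The audit components are created in this order; a minimal sketch (the audit name, action group, and file path are illustrative):

```sql
-- 1. The audit itself, with a file target
CREATE SERVER AUDIT [Audit_Logins]
TO FILE (FILEPATH = N'E:\adminBD\audit\');

-- 2. A server audit specification capturing failed login attempts
CREATE SERVER AUDIT SPECIFICATION [Audit_Logins_Spec]
FOR SERVER AUDIT [Audit_Logins]
ADD (FAILED_LOGIN_GROUP);

-- 3. Enable both (they are created in the OFF state)
ALTER SERVER AUDIT [Audit_Logins] WITH (STATE = ON);
ALTER SERVER AUDIT SPECIFICATION [Audit_Logins_Spec] WITH (STATE = ON);
```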
Monitor elevated privileges
❖A fixed list of what the privileges currently are:
select [Link], perm.permission_name
from sys.server_permissions as perm join
sys.server_principals as princ on
perm.grantee_principal_id =
princ.principal_id
select *
from [Person].[Address]
where CONTAINS(AddressLine1, 'Drive NEAR Glaze')
Filestream
▪ Allows storing unstructured data like documents and images on the file system.
▪ Filestream is not automatically enabled
Filestream
exec sp_configure filestream_access_level, 2
reconfigure
FileTable
The FileTable feature in SQL Server extends the capabilities of FILESTREAM by allowing you to store and manage unstructured data (such as documents and files) directly in the database while maintaining native file system functionality: file names, types, and directory paths. FileTables provide an easy way to work with files and folders directly within SQL Server, without the need to manage file paths or file system operations separately.
▪ Store files and documents in special tables in SQL Server called FileTables
OPENROWSET(BULK 'c:\[Link]', SINGLE_BLOB) is used to read the contents of the file 'c:\[Link]' as a single large binary object (SINGLE_BLOB).
x.* refers to all columns in the result of the OPENROWSET function, which is the binary data of the file.
The file is inserted into the name column (as the file name, '[Link]') and the file_stream column (as the file's binary data).
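A sketch of loading a file into a FileTable this way; the FileTable name and file path are hypothetical, since the slide's actual names were elided:

```sql
-- Hypothetical FileTable named Documents; the path is illustrative
INSERT INTO dbo.Documents (name, file_stream)
SELECT N'report.docx', x.*
FROM OPENROWSET(BULK N'C:\files\report.docx', SINGLE_BLOB) AS x;
```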
ALTER DATABASE [filestramdatabase]
SET FILESTREAM(NON_TRANSACTED_ACCESS = FULL, DIRECTORY_NAME = 'MyFiles')