SQL Server Backup and Restore Guide


Database Administration
[Link]/adminBD/[Link]
Plan
1. Introduction
2. Backup and Restore
3. Manage logins and server roles
4. Implement and maintain indexes
5. Import and export data
6. Manage SQL Server Agent
7. Manage and configure databases
8. Identify and resolve concurrency problems
9. Collect and analyse troubleshooting data
10. Audit SQL Server Instances
11. Additional SQL Server components
Introduction
Downloading SQL Server Developer 2019
Download a free specialized edition
▪ Basic, Custom or Download Media
▪ Language: English
▪ Package ISO or CAB

Editions and supported features of SQL Server 2019: Enterprise, Developer, Express, …
[Link]15?view=sql-server-ver15

Hardware and software requirements:
[Link]and-software-requirements-for-installing-sql-server-ver15?view=sql-server-ver15
Introduction
Installing SQL Server Developer 2019
SQL Server Installation Center – Installation
▪ New SQL Server stand-alone installation or add features to an existing installation
▪ Free edition: Developer
▪ Feature selection: Database Engine Services
▪ Instance Configuration: Default instance
▪ Database Engine Configuration: Add current user
Introduction
Installing SQL Server Management Studio (SSMS)
[Link]
ssms?redirectedfrom=MSDN&view=sql-server-ver15
Introduction
Downloading Demo Database

AdventureWorks
[Link]/adminBD/sampleBD/[Link]
Backup and Restore
Restoring AdventureWorks with SSMS
T-SQL (Transact-SQL)

RESTORE DATABASE [AdventureWorks2014]
FROM DISK = N'E:\adminBD\Adventure+Works+2014+Full+Database+Backup\AdventureWo[Link]'
WITH FILE = 1,
MOVE N'AdventureWorks2014_Data' TO N'C:\Program Files\Microsoft SQL Server\[Link]\MSSQL\DATA\AdventureWorks2014_Data.mdf',
MOVE N'AdventureWorks2014_Log' TO N'C:\Program Files\Microsoft SQL Server\[Link]\MSSQL\DATA\AdventureWorks2014_Log.ldf',
NOUNLOAD, STATS = 5
Backup and Restore
The backup and restore component provides an essential safeguard for protecting critical data
stored in your databases.
Why backup?
▪ Backing up is the only way to protect your data
▪ Recover your data from many failures, such as:
✓ User errors, for example, dropping a table by mistake.
✓ Media and Hardware failures, for example, a damaged disk drive or permanent loss of a server.
✓ Natural disasters.

Restoring a backup creates a database containing exactly the data captured in the backup.
T-SQL (Transact-SQL)
Backing up Database

BACKUP DATABASE [AdventureWorks2014] TO DISK = N'E:\adminBD\DBA\AdventureWorksBackup'
WITH NOFORMAT, NOINIT,
NAME = N'AdventureWorks2014-Full-Database-Backup',
SKIP, NOREWIND, NOUNLOAD, STATS = 1
Recovery Model
❖ SIMPLE: You can only restore the database to the most recent backup. Point-in-time recovery is not possible.

❖ FULL: You can restore the database to any point in time, provided that you have the necessary transaction log backups.

❖ BULK LOGGED: Point-in-time recovery is possible for all transactions except those that are part of bulk operations.
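The recovery model can be inspected and changed with ALTER DATABASE; a minimal sketch against the AdventureWorks2014 demo database:

```sql
-- Check the current recovery model
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = N'AdventureWorks2014';

-- Switch to FULL so transaction log backups (and point-in-time restore) become possible
ALTER DATABASE [AdventureWorks2014] SET RECOVERY FULL;
```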
Different backup models
❖ FULL BACKUP: A full backup captures the entire database, including all objects and data. It includes the portion of the transaction log active at the time of the backup, ensuring a complete snapshot.

❖ DIFFERENTIAL BACKUP: A differential backup captures only the data that has changed since the last full backup. It keeps track of changes and includes only the modified data.

❖ TRANSACTION LOG BACKUP: A transaction log backup records all the transactions that have occurred since the last transaction log backup. It captures changes in the database at a granular level.
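The three backup types correspond to three BACKUP statements; a sketch with illustrative target paths:

```sql
-- Full backup: the base that later differential and log backups build on
BACKUP DATABASE [AdventureWorks2014]
TO DISK = N'E:\adminBD\DBA\AW_Full.bak';

-- Differential backup: only the changes since the last full backup
BACKUP DATABASE [AdventureWorks2014]
TO DISK = N'E:\adminBD\DBA\AW_Diff.bak'
WITH DIFFERENTIAL;

-- Transaction log backup (requires the FULL or BULK_LOGGED recovery model)
BACKUP LOG [AdventureWorks2014]
TO DISK = N'E:\adminBD\DBA\AW_Log.trn';
```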
Different backup models in SSMS
Point in time recovery
Quiz
Explanation: To restore to a point in time at 8:30 p.m., you would need:

The FULL midday backup (the last full backup before the restore point),
The DIFFERENTIAL 6 p.m. backup (the last differential backup before the restore point), and
The TRANSACTION LOG 8 p.m. backup (the transaction log closest to 8:30 p.m.).
Use NORECOVERY when you still plan to restore additional backups (such as differential or transaction log backups).
Use RECOVERY when you're done with restoring all backups and want the database to be online and available.

Using NORECOVERY and RECOVERY

When you specify NORECOVERY, the database remains in a restoring state, allowing you to apply additional backups without bringing the database online.

You use RECOVERY when you're done applying backups, meaning the database should now be made available to users.

1. Take a transaction log backup, to ensure you have a recent log backup that covers the point in time you're restoring to
2. Restore the full database backup, ensuring you do not fully recover the database yet
3. Restore the transaction log to the exact moment just before the row was inserted, using the time you noted

Using NORECOVERY and RECOVERY

BACKUP LOG [AdventureWorks2014]
TO DISK = N'C:\Program Files\Microsoft SQL Server\[Link]\MSSQL\Backup\AdventureWorks2014_LogBackup_2022-02-11_07-[Link]'
WITH NOFORMAT, NOINIT, NAME = N'AdventureWorks2014_LogBackup_2022-02-11_07-49-49',
NOSKIP, NOREWIND, NOUNLOAD, NORECOVERY, STATS = 5

RESTORE DATABASE [AdventureWorks2014Backup4]
FROM DISK = N'C:\Program Files\Microsoft SQL Server\[Link]\MSSQL\Backup\AdventureWorks2014_LogBackup_2022-02-04_08-[Link]'
WITH FILE = 2,
MOVE N'AdventureWorks2014_Data' TO N'C:\Program Files\Microsoft SQL Server\[Link]\MSSQL\DATA\AdventureWorks2014Backup4_Data.mdf',
MOVE N'AdventureWorks2014_Log' TO N'C:\Program Files\Microsoft SQL Server\[Link]\MSSQL\DATA\AdventureWorks2014Backup4_Log.ldf',
NORECOVERY, NOUNLOAD, STATS = 5

RESTORE DATABASE [AdventureWorks2014Backup4]
FROM DISK = N'C:\Program Files\Microsoft SQL Server\[Link]\MSSQL\Backup\AdventureWorks2014_LogBackup_2022-02-04_08-[Link]'
WITH FILE = 3, NORECOVERY, NOUNLOAD, STATS = 5

RESTORE LOG [AdventureWorks2014Backup4]
FROM DISK = N'C:\Program Files\Microsoft SQL Server\[Link]\MSSQL\Backup\AdventureWorks2014_LogBackup_2022-02-04_08-[Link]'
WITH FILE = 4, NORECOVERY, NOUNLOAD, STATS = 5

RESTORE LOG [AdventureWorks2014Backup4]
FROM DISK = N'C:\Program Files\Microsoft SQL Server\[Link]\MSSQL\Backup\AdventureWorks2014_LogBackup_2022-02-11_07-[Link]'
WITH NOUNLOAD, STATS = 5, STOPAT = N'2022-02-11T[Link]'
Using NORECOVERY and RECOVERY

USE [master]
GO
RESTORE DATABASE [AdventureWorks2014] WITH RECOVERY

Backup an SQL server environment and
system databases
❖ Master: Contains all the system-level information for a SQL Server instance, including login accounts, configuration settings, and information about databases.

❖ Model: A template database used when creating new databases. Any modifications to the Model database will apply to any new database created afterward.

❖ MSDB: Used by SQL Server Agent for scheduling alerts and jobs. It also stores backup and restore history, maintenance plans, and more.

❖ TempDB: A workspace for temporary objects. It is recreated every time SQL Server starts, so it cannot be backed up. It is used for various operations, such as sorting and storing temporary tables.

❖ Resource: A hidden, read-only database that contains the system objects that are included with SQL Server. It is primarily used by the SQL Server engine and is not meant to be modified.
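The writable system databases are backed up with ordinary BACKUP DATABASE statements; the target paths below are illustrative:

```sql
-- master holds logins and instance configuration; back it up after any server-level change
BACKUP DATABASE [master] TO DISK = N'E:\adminBD\DBA\master.bak';

-- msdb holds Agent jobs, alerts, and backup/restore history
BACKUP DATABASE [msdb] TO DISK = N'E:\adminBD\DBA\msdb.bak';

-- model is the template for new databases
BACKUP DATABASE [model] TO DISK = N'E:\adminBD\DBA\model.bak';

-- tempdb cannot be backed up: it is rebuilt at every instance restart
```

Note that restoring master is special: the instance must be started in single-user mode first.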
Perform backup/restore based on strategies
❖ What you are backing up and how often the data is updated
❖ How much data can you afford to lose
❖ How much space will a full database use?
❖ Backup redundancy
❖ Reliability
❖ Expiration of the backup
❖ Compression
❖ Encryption
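Several of these strategy points map directly onto BACKUP ... WITH options; a sketch (the path and date are illustrative):

```sql
BACKUP DATABASE [AdventureWorks2014]
TO DISK = N'E:\adminBD\DBA\AW_Strategy.bak'
WITH COMPRESSION,               -- smaller backup file, at the cost of CPU during backup
     CHECKSUM,                  -- verify page checksums while writing (reliability)
     EXPIREDATE = N'2025-12-31'; -- the media can only be overwritten after this date

-- Verify the backup is readable without actually restoring it
RESTORE VERIFYONLY FROM DISK = N'E:\adminBD\DBA\AW_Strategy.bak';
```

Backup encryption additionally requires a certificate or asymmetric key, via WITH ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = ...).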
Recover from a corrupted drive
❖ What happens if the backup media is damaged?

✓ Mirrored media

✓ Allow the restore to continue despite the errors

  o Database consistency check
  o Repair data loss (single-user mode and ROLLBACK IMMEDIATE)
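Mirrored media can be produced at backup time with the MIRROR TO clause (Enterprise edition), which writes the same backup to two devices; the paths here are illustrative:

```sql
BACKUP DATABASE [AdventureWorks2014]
TO DISK = N'E:\adminBD\DBA\AW_Primary.bak'
MIRROR TO DISK = N'F:\mirror\AW_Mirror.bak'
WITH FORMAT;  -- MIRROR TO requires a freshly formatted media set
```

If the primary copy later turns out to be unreadable, the restore can simply be run from the mirror instead.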
Recover from a corrupted drive
RESTORE DATABASE [AdventureWorks2014backup]
FROM DISK = N'C:\Program Files\Microsoft SQL Server\[Link]\MSSQL\Backup\AdventureWorks2014_LogBackup_2022-01-27_13-[Link]'
WITH CONTINUE_AFTER_ERROR, NORECOVERY, FILE = 6

ALTER DATABASE [AdventureWorks2014backup] SET SINGLE_USER WITH ROLLBACK IMMEDIATE


DBCC CHECKDB ([AdventureWorks2014backup], REPAIR_ALLOW_DATA_LOSS)
SET SINGLE_USER: Changes the database to single-user mode, allowing only one connection to the database. This is useful for maintenance tasks.
WITH ROLLBACK IMMEDIATE: Forces any other connections to the database to disconnect immediately.

ALTER DATABASE [AdventureWorks2014backup] SET MULTI_USER
Quiz
"NORECOVERY" means that the database is not fully recovered and is not accessible for normal operations. This state is typically used during a restore process where additional transaction log or differential backups are expected to be applied before the database can be brought online.
A tail-log backup is a backup taken just before a database is restored, to capture any transactions that occurred after the most recent backup.
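A tail-log backup is typically taken with NORECOVERY, which both saves the log tail and leaves the database in a restoring state, ready for the restore sequence; the path is illustrative:

```sql
BACKUP LOG [AdventureWorks2014]
TO DISK = N'E:\adminBD\DBA\AW_TailLog.trn'
WITH NORECOVERY;  -- capture the log tail and put the database into RESTORING
```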
Manage logins and server roles
Create login accounts
❖ Logins
❖ Create a login
❖ Windows authentication / SQL Server authentication

CREATE LOGIN [DESKTOP-ICEQ7B9\SQLTest]
FROM WINDOWS
WITH DEFAULT_DATABASE=[AdventureWorks2014]

DESKTOP-ICEQ7B9 is the machine name and SQLTest is the specific Windows user. Windows Authentication means the login credentials of the Windows user (in this case, SQLTest) are used to authenticate the user when they log in to SQL Server.
DEFAULT_DATABASE specifies the default database that will be used when the login is established; in this case, AdventureWorks2014 is set as the default database for this login.
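For comparison, a SQL Server authentication login carries its own password; the login name and password below are purely illustrative:

```sql
CREATE LOGIN [demoSqlLogin]
WITH PASSWORD = N'Str0ng!Passw0rd',   -- subject to the Windows password policy by default
     DEFAULT_DATABASE = [AdventureWorks2014];
```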
Manage access to the server
❖ Serveradmin
❖ Alter settings, shutdown, alter any endpoint, create any endpoint

❖ Securityadmin
❖ Alter any login

❖ Processadmin
❖ Alter any connection

❖ Setupadmin
❖ Alter any linked server

❖ Bulkadmin
❖ Administer bulk operations
Manage access to the server
❖ Diskadmin
❖ Alter resources

❖ Dbcreator
❖ Alter any database, create any database

❖ Sysadmin
❖ Can perform any activity on the server

❖ Public
❖ No server-level permission (except view any database and connect permission)

ALTER SERVER ROLE [sysadmin] ADD MEMBER [DESKTOP-ICEQ7B9\SQLTest]


ALTER SERVER ROLE [sysadmin] DROP MEMBER [DESKTOP-ICEQ7B9\SQLTest]
Create and maintain user-defined server roles
❖ Create my own server role
❖ Name
❖ Owner: AUTHORIZATION
❖ Securables (endpoints, logins, servers, availability groups, server roles)

CREATE SERVER ROLE [myServerRole1]

ALTER SERVER ROLE [myServerRole1] ADD MEMBER [DESKTOP-ICEQ7B9\SQLTest]

GRANT ALTER ANY LOGIN TO [myServerRole1]
The ALTER ANY LOGIN permission allows members of the myServerRole1 role to create, modify, and delete any login in the SQL Server instance.
Create database user accounts
❖ Add user account

CREATE USER [SQLTest] FOR LOGIN [DESKTOP-ICEQ7B9\SQLTest] WITH DEFAULT_SCHEMA=[dbo]

dbo stands for "database owner," which is the default schema in SQL Server. Schemas are used to group database objects like tables, views, and procedures, and having a default schema allows the user to access these objects without specifying the schema name.

The overall purpose of this command is to: create a user named SQLTest in the current database; map this user to the Windows login DESKTOP-ICEQ7B9\SQLTest, allowing it to access the database using that login; and set the default schema for the user to dbo, making it easier to reference database objects owned by the dbo schema.
Fixed database-level roles
❖ db_owner
❖ Has all permissions in the database

❖ db_securityadmin
❖ Alter any role, create role, view definition

❖ db_accessadmin
❖ Alter any user, connect

❖ db_backupoperator
❖ Backup database, backup log, checkpoint

❖ db_ddladmin
❖ Data Definition Language commands in the database
Fixed database-level roles
❖ db_datawriter
❖ Grant insert, update, delete on the database
❖ db_denydatawriter
❖ Deny insert, update, delete on the database
❖ db_datareader
❖ Grant select on the database
❖ db_denydatareader
❖ Deny select on the database
❖ public
❖ No database-level permissions (except some database permissions, for example: view any column master key definition and select permission on system tables)

ALTER ROLE [db_datareader] ADD MEMBER [SQLTest]


User database-level roles
❖ New database role
❖ Name
❖ Owner
❖ Members
❖ Securables
❖ Permissions (alter, control, select, insert, update, delete, references, execute, take ownership, …)

CREATE ROLE [myNewDBRole]


ALTER ROLE [myNewDBRole] ADD MEMBER [SQLTest]
GRANT SELECT ON [HumanResources].[Department] TO [myNewDBRole]
DENY SELECT ON [HumanResources].[Employee] TO [myNewDBRole]
Creating and using schemas
USE [AdventureWorks2014]
GO
CREATE SCHEMA [myNewSchema] AUTHORIZATION [SQLTest]
GO
A schema is a way to logically group database objects (such as tables, views, and stored procedures) within a
database. Schemas help manage and organize database objects and can also control permissions.

❖ Schema owner
The owner of a schema has the ability to manage the objects within that schema,
including granting permissions to other users and creating new objects.
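Objects are then created inside a schema by qualifying their names; moving an existing object between schemas uses ALTER SCHEMA ... TRANSFER. A sketch against the schema created above (the table names are illustrative):

```sql
-- Create a table inside the new schema
CREATE TABLE [myNewSchema].[Projects]
(
    ProjectID INT PRIMARY KEY,
    ProjectName NVARCHAR(100)
);

-- Move an existing dbo table into the schema (illustrative table name)
-- ALTER SCHEMA myNewSchema TRANSFER dbo.SomeTable;
```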
Creating access to server/database with least privilege
❖ Principle of least privilege
❖ Use fixed server roles
❖ Restrict use of sysadmin
❖ Assign permissions to roles
❖ Use stored procedures and functions
❖ Permission statements
❖ Grant, Deny, Revoke (REVOKE SELECT ON [Link] TO myNewDBRole)

REVOKE removes a previously granted or denied permission from a user or role. It essentially "resets" the permission, meaning the user no longer has the specific permission, but it does not explicitly prevent them from having it if other sources grant it.

DENY explicitly prevents a user or role from having a specific permission, overriding any grants of the permission they may receive from other roles or groups. It is a stronger restriction than REVOKE.

❖ Ownership chains
When a SQL query involves multiple database objects (such as views, tables, or stored procedures), SQL Server checks whether these objects are owned by the same database principal (usually a user or role). If they are, the objects are said to be in the same ownership chain, and permission checks on the underlying objects can be bypassed.

In SQL Server, if you have denied a permission to a user or role, you cannot grant that same permission to them directly until the DENY is removed. This is because a DENY always takes precedence over a GRANT. Even if you issue a GRANT after a DENY, the DENY will still block the user from accessing the resource.
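The precedence rules can be demonstrated directly; the role, user, and table names reuse the examples from the earlier slides:

```sql
GRANT SELECT ON [HumanResources].[Department] TO [myNewDBRole];
DENY  SELECT ON [HumanResources].[Department] TO [SQLTest];
-- SQLTest is blocked: the DENY on the user overrides the GRANT received via the role

REVOKE SELECT ON [HumanResources].[Department] TO [SQLTest];
-- REVOKE removes the recorded DENY entry; SQLTest can now SELECT again through myNewDBRole
```

REVOKE removes whichever entry (GRANT or DENY) was recorded for that principal, which is why it is the way out of a DENY.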
Protect objects from being modified
use [AdventureWorks2014]

DENY ALTER ON [Production].[Culture] TO [SQLTest]


DENY CONTROL ON [Production].[Culture] TO [SQLTest]
DENY DELETE ON [Production].[Culture] TO [SQLTest]
DENY INSERT ON [Production].[Culture] TO [SQLTest]
DENY TAKE OWNERSHIP ON [Production].[Culture] TO [SQLTest]
DENY UPDATE ON [Production].[Culture] TO [SQLTest]
Quiz
Integrated Security refers to a method where SQL Server uses the Windows credentials of the user to authenticate them. This is synonymous with Windows Authentication in SQL Server. It does not require a separate login and password; instead, it leverages the user's Windows login credentials.
Public is a fixed server role that every user belongs to by default. Although you cannot remove a user from this role, you can grant or revoke additional permissions to this role, which will then apply to all users.
Roles like Processadmin and Bulkadmin have specific administrative permissions, but you cannot add extra permissions to them as they are predefined with fixed privileges.
Implement and maintain indexes
What are indexes?
❖ Clustered indexes
❖ Unique column properties

❖ Non-clustered indexes
❖ Non-unique
Implement indexes
CREATE CLUSTERED INDEX [IX_NewTable_ID] ON [dbo].[NewTable]
(
[ID] ASC
)

CREATE NONCLUSTERED INDEX [IX_NewTable_Color] ON [dbo].[NewTable]
(
[ColorName] ASC
)
INCLUDE([ObjectName])

DROP INDEX [IX_NewTable_ID] ON [dbo].[NewTable]
Fragmentation

[Slide diagrams: a 10-row table (ObjectName, ColorName, ID) — from Table/Black/1 through Table/Gold/10 — is stored across index pages A–D. Successive inserts (Book/Brown/11, Book/Yellow/12) fill the pages and force page splits, allocating a new page E. After the splits, the logical index order (Book, Bookcase, Computer, Printer, Table) no longer matches the physical page order (A, B, C, D, E): the index has become fragmented.]
Fragmentation
❖ Reorganize

[Slide diagram: REORGANIZE rearranges the existing pages in place so that their physical order again matches the logical index order, without rebuilding the index.]
Fragmentation
❖ Rebuild
❖ Drops the index and starts from scratch
❖ Sorts the data out (Book, Bookcase, Computer, …)

Only use REBUILD when it is absolutely necessary, when the fragmentation has reached such an extent that you just have to delete the index and start again.
Fragmentation
❖ Reorganize

ALTER INDEX [IX_NewTable_ID] ON [dbo].[NewTable] REORGANIZE

❖ Rebuild

ALTER INDEX [IX_NewTable_ID] ON [dbo].[NewTable] REBUILD PARTITION = ALL WITH (ONLINE = ON)
How fragmented are the indexes
select * from sys.dm_db_index_physical_stats(
    DB_ID('AdventureWorks2014'),
    OBJECT_ID('[Person].[Address]'),
    null, null, null) as stats
join sys.indexes as si
  on stats.object_id = si.object_id
 and stats.index_id = si.index_id
Fill Factor
❖ When you create a page in the first place, don't make it 100% full.
❖ Reorganize less
❖ Rebuild less

ALTER INDEX [IX_NewTable_Color] ON [dbo].[NewTable] REBUILD PARTITION = ALL WITH (FILLFACTOR = 80)
Optimise indexes
CREATE NONCLUSTERED INDEX [NCDemo] ON [Person].[Address]
(
[AddressID] ASC
)
WHERE city='London'
Identify unused indexes
❖ user_seeks combien de fois l index est utilisé, incrementé à chaque fois, index est bon

❖ user_scans parcourir table/ soit parcourir toute les pages dans l index : objectif de l index est de ne pas faire scan (rapidement le trouver)
a chaque fois incrementer c que l'index n'est pas bon!!!

❖ user_lookups c les deux, on est entrain d'utiliser l'index + table d'origine

❖ user_updates pour chaque update de l'index, nb elevé , cout maintenance important il faut voir si on élimine l index ou pas rapport rappidité
maintenance

select * from sys.dm_db_index_usage_stats as stats


pour conaitre join [Link] as si
le nom index
sans join
juste ID on stats.object_id=si.object_id and stats.index_id=si.index_id
Disable or drop unused indexes
ALTER INDEX [IX_NewTable_ID] ON [dbo].[NewTable] DISABLE

DROP INDEX [IX_NewTable_Color] ON [dbo].[NewTable]

A disabled index must be rebuilt if you want to enable it again (there is no ENABLE option), so re-enabling is done with REBUILD.
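The disable/re-enable cycle can be sketched as:

```sql
-- Disable: the index definition is kept, but the index data is dropped
ALTER INDEX [IX_NewTable_ID] ON [dbo].[NewTable] DISABLE;

-- There is no ENABLE option; rebuilding recreates the index data and reactivates it
ALTER INDEX [IX_NewTable_ID] ON [dbo].[NewTable] REBUILD;
```

Note that disabling a clustered index makes the whole table inaccessible until the index is rebuilt.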
Quiz
car les données sont dupliqués non unique
>30%
Import and export data
Transfer data — previously seen: backup and restore

❖ Detach and attach a database
Detach: the database is no longer accessible. Useful, for example, when moving the database files to another SQL Server instance, where you then attach them.

detach:
EXEC [Link].sp_detach_db @dbname = N'AdventureWorks2014backup'

attach:
CREATE DATABASE [AdventureWorks2014backup] ON
( FILENAME = N'C:\Program Files\Microsoft SQL Server\[Link]\MSSQL\DATA\AdventureWorks2014backup_Data.mdf' ),
( FILENAME = N'C:\Program Files\Microsoft SQL Server\[Link]\MSSQL\DATA\AdventureWorks2014backup_Log.ldf' )
FOR ATTACH
Transfer data
❖ Import flat file — flat files are generally text files, such as CSV
❖ Import data — more general
❖ Export data
❖ Copy database (SQL Server Agent) — SQL Server Agent must be enabled first
Bulk Insert
create table [Link] (Heading1 varchar(50), Heading2 varchar(50))

bulk insert [dbo].[flatFile] from 'E:\[Link]' import

with
(FIELDTERMINATOR=',',
ROWTERMINATOR='\n',
FIRSTROW=2
)
Quiz
DS YEKEF HNE, efhem notions + rajaa les 2 tps: tp1,tp2

questions QCM
questions Commandes
questions expliquer ces phenomènes
Manage SQL Server Agent
For automation (we create jobs)
Create, maintain and monitor jobs
❖ Automation

❖ Enable/Start SQL Server Agent

❖Job: Full backup every day at midnight


Create, maintain and monitor jobs
❖ Job steps
Create, maintain and monitor jobs
Create, maintain and monitor jobs
❖ Job schedule
Administer jobs
USE msdb

Go

select * from sysjobs

select * from sysjobsteps

select * from syssessions

select * from sysjobactivity

select * from sysjobhistory

select * from sysschedules


RAISERROR
❖ User-defined error
RAISERROR (id, severity, state) — with id >= 50000; severity <= 10 informational, >= 19 fatal error; state between 0 and 255

select * from [Link]

exec sp_addmessage 50001, 16, 'I am raising an alert'

RAISERROR (50001, 16, 1)


Alerts
❖ SQL Server event alert

❖ Server performance condition alert

❖ WMI (Windows Management Instrumentation) event alert


Create Event Alerts
Create Event Alerts
RAISERROR and event Alerts
RAISERROR (50001, 16, 1) WITH LOG
Create alerts on critical server condition
Operators
❖ Database Mail
❖Configuration

❖ Set up database mail


❖E-mail profile
❖ SMTP accounts
Database mail configuration
Database mail configuration
❖ SQL Server Agent – Enable mail profile
Adding operators to jobs and alerts
Quiz
Manage and configure databases
Autoclose and Autoshrink
❖ Autoclose
ALTER DATABASE [AdventureWorks2014] SET AUTO_CLOSE ON
AUTO_CLOSE
The AUTO_CLOSE option, when enabled, causes the database to automatically close when the last user connection to the database is closed. When a new connection is made, the database automatically reopens.

!!Reopening the database for each new connection adds overhead and can cause significant performance issues in production systems.

❖ Autoshrink
ALTER DATABASE [AdventureWorks2014] SET AUTO_SHRINK ON
The AUTO_SHRINK option, when enabled, causes the database to automatically shrink its size when SQL Server detects unused space in the database files.

!!Frequent shrinking leads to fragmentation of indexes and database files, which further harms performance.
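Because of these drawbacks, both options are normally left off in production; a minimal sketch:

```sql
ALTER DATABASE [AdventureWorks2014] SET AUTO_CLOSE OFF;
ALTER DATABASE [AdventureWorks2014] SET AUTO_SHRINK OFF;

-- Verify the current settings
SELECT name, is_auto_close_on, is_auto_shrink_on
FROM sys.databases
WHERE name = N'AdventureWorks2014';
```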
Design multiple filegroups
In SQL Server, filegroups are logical structures that help manage the physical storage of database files. They allow you to group one or more data files for better management and improved performance. Filegroups are particularly useful for databases that require large amounts of storage, have high I/O demands, or need data distribution across multiple storage devices.

❖ Big database: one filegroup?
❖ Primary data file (.mdf): By default, all database objects (tables, indexes, etc.) are created in the primary filegroup unless specified otherwise.
❖ Secondary data files (.ndf): Optional files that can be associated with user-defined filegroups.
❖ User-defined filegroups: You can create additional filegroups to distribute data files and objects for specific purposes (e.g., separating tables and indexes for performance optimization).
❖ Data files on different filegroups
Creating database with multiple filegroups

This script creates a database with:
- A primary data file (.mdf) for critical system data.
- A secondary data file (.ndf) in a custom filegroup for other objects.
- A transaction log file (.ldf).
It then ensures that new objects are stored in the Secondary filegroup if it is not already the default. This approach helps organize database objects and improve manageability.

CREATE DATABASE [DBAdatabase]
CONTAINMENT = NONE  -- not a contained database: it depends on the instance and on server-level objects, like logins
ON PRIMARY
( NAME = N'DBAdatabase', FILENAME = N'C:\Program Files\Microsoft SQL Server\[Link]\MSSQL\DATA\[Link]',
  SIZE = 102400KB, FILEGROWTH = 65536KB ),
FILEGROUP [Secondary]
( NAME = N'DBAdatabase2', FILENAME = N'C:\Program Files\Microsoft SQL Server\[Link]\MSSQL\DATA\[Link]',
  SIZE = 102400KB, FILEGROWTH = 65536KB )
LOG ON
( NAME = N'DBAdatabase_log', FILENAME = N'C:\Program Files\Microsoft SQL Server\[Link]\MSSQL\DATA\DBAdatabase_log.ldf',
  SIZE = 8192KB, FILEGROWTH = 65536KB )
GO

-- Changes the default filegroup to Secondary if it is not already set;
-- new objects created without a specified filegroup will then be stored in Secondary.
IF NOT EXISTS (SELECT name FROM sys.filegroups WHERE is_default=1 AND name = N'Secondary')
ALTER DATABASE [DBAdatabase] MODIFY FILEGROUP [Secondary] DEFAULT


Manage file space including adding new filegroups
ALTER DATABASE [DBAdatabase] ADD FILEGROUP [third]

ALTER DATABASE [DBAdatabase] ADD FILE
( NAME = N'DBAdatabase3',
  FILENAME = N'C:\Program Files\Microsoft SQL Server\[Link]\MSSQL\DATA\[Link]',
  SIZE = 8192KB, FILEGROWTH = 65536KB )
TO FILEGROUP [third]
Manage file space - moving objects
CREATE CLUSTERED INDEX [ClusteredIndex-20220407-111813] ON [dbo].[NewTable]
(
[heading1] ASC, [heading2] ASC
)
ON [third]
Manage file space - moving objects
CREATE TABLE [dbo].[NewTable2]
(
Heading1 int, heading2 int
)
ON [third]
Partitioning
Partitioning in SQL Server is a technique to improve database performance and manageability by dividing large tables or indexes into smaller, more manageable parts, called partitions, across multiple filegroups.

❖ Create filegroups: Filegroups are logical containers that group one or more database files together. For partitioning, multiple filegroups are created to distribute the partitions.

ALTER DATABASE [YourDatabase] ADD FILEGROUP [FG_Partition1];

❖ Create partition function: A partition function defines how the data will be divided into partitions based on a specific column (e.g., a date or a range of values).

CREATE PARTITION FUNCTION PF_Yearly (INT)
AS RANGE LEFT FOR VALUES (2019, 2020, 2021);

❖ Create partition scheme (uses the partition function and filegroups): A partition scheme maps the partitions defined by the partition function to specific filegroups. Note that three boundary values define four partitions, so four filegroups are needed:

CREATE PARTITION SCHEME PS_Yearly
AS PARTITION PF_Yearly TO (FG_Partition1, FG_Partition2, FG_Partition3, FG_Partition4);

❖ Create/modify tables/indexes using the partition scheme: Tables or indexes can now use the partition scheme to distribute their data across the partitions. The partitioning column must match the type of the partition function (INT here), so the year is stored in its own column:

CREATE TABLE Sales
(
    SaleID INT NOT NULL,
    SaleDate DATE NOT NULL,
    SaleYear INT NOT NULL,
    Amount DECIMAL(10, 2)
)
ON PS_Yearly (SaleYear);
Benefits of Partitioning
Improved Query Performance: Queries that target specific partitions avoid scanning the entire table.
Ease of Management: You can back up, restore, or maintain individual partitions.
Efficient Data Archiving: Older data can be easily moved or deleted by partition.
Scalability: Allows for managing very large datasets.
Partitioning
BEGIN TRANSACTION
-- Starts a transaction: all subsequent statements are executed as a single unit of work.
-- If any part of the transaction fails, you can roll back to maintain data consistency.

CREATE PARTITION FUNCTION [PartitionFunctionPartition](date) AS RANGE RIGHT FOR VALUES (N'2018-01-01', N'2022-01-01')
-- This function splits the data into three partitions based on the date column.

CREATE PARTITION SCHEME [PartitionSchemeParttition] AS PARTITION [PartitionFunctionPartition] TO ([PRIMARY], [Secondary], [third])
-- This scheme determines where each partition's data will physically reside.

CREATE CLUSTERED INDEX [ClusteredIndex_on_PartitionSchemeParttition_637849306408154072] ON [dbo].[partitionTable]
(
    [dateOfEntry]
) ON [PartitionSchemeParttition]([dateOfEntry])

DROP INDEX [ClusteredIndex_on_PartitionSchemeParttition_637849306408154072] ON [dbo].[partitionTable]

COMMIT TRANSACTION
-- Ends the transaction and saves the changes to the database;
-- all actions performed within the transaction are permanently applied.
Partitioning
select *, $PARTITION.PartitionFunctionPartition(dateOfEntry) as PartitionNumber
from [dbo].[partitionTable]

The query retrieves all columns (*) from [partitionTable] and adds an extra column called PartitionNumber, which indicates the partition ID (starting from 1) where the dateOfEntry value is stored.

select $PARTITION.PartitionFunctionPartition('2018-01-01')

This retrieves the partition number for the value '2018-01-01': it determines which partition that date would be mapped to based on the partition function's rules.
Filegroup backup
❖ How can you manage a VERY BIG database?
❖ How do you back it up? It is huge (hours, days)
❖ Back up only part of the database instead of the whole thing
BACKUP DATABASE [DBAdatabase]
FILEGROUP = N'Secondary'
TO DISK = N'C:\Program Files\Microsoft SQL Server\[Link]\MSSQL\Backup\[Link]'

You can back up specific filegroups of a database instead of the entire database. This allows you to back up just the data that has changed or is important.
Filegroup and Page restore
RESTORE DATABASE [DBAdatabase] FILE = N'DBAdatabase2' FROM DISK = N'C:\Program
Files\Microsoft SQL Server\[Link]\MSSQL\Backup\[Link]'
WITH FILE = 1, NOUNLOAD, STATS = 10

RESTORE DATABASE [AdventureWorks2014] PAGE = '1:12'
FROM DISK = N'C:\Program Files\Microsoft SQL Server\[Link]\MSSQL\Backup\AdventureWorks2014_LogBackup_2022-02-28_07-[Link]'

PAGE = '1:12': Refers to the file ID and page ID of the specific page being restored. 1 is the file ID, and 12 is the page ID within that file.

In SQL Server, a page is the smallest unit of data storage within a database.

Storage Structure: SQL Server stores data in pages within data files. These files are divided into multiple pages, and each page can store multiple rows of data, depending on the size of the rows.
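Before attempting a page restore, you normally identify which pages are damaged. A sketch using the suspect_pages table, which SQL Server maintains automatically in msdb:

```sql
-- Pages SQL Server has flagged as damaged (823/824 errors, bad checksums, torn pages)
SELECT database_id, file_id, page_id, event_type, error_count, last_update_date
FROM msdb.dbo.suspect_pages;
```

The file_id:page_id pairs reported here are what you pass to RESTORE DATABASE ... PAGE.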
Manage log file growth
select * from sys.dm_db_log_space_usage
This view provides insights into how much of the log file is in use and how much is free.

❖ Add additional log files

ALTER DATABASE [DBAdatabase]
ADD LOG FILE (NAME = N'DBAdatabase_log2',
FILENAME = N'C:\Program Files\Microsoft SQL Server\[Link]\MSSQL\DATA\DBAdatabase_log2.ldf',
SIZE = 1024MB,
FILEGROWTH = 128MB);

❖ Do a transaction log backup

BACKUP LOG [DBAdatabase]
TO DISK = N'C:\Program Files\Microsoft SQL Server\[Link]\MSSQL\Backup\DBAdatabase_log.bak';
❖ Shrinking the file
dbcc shrinkfile(DBAdatabase_log, 4)
If your transaction log file has grown too large over time due to heavy transaction processing (e.g., after large data imports or bulk operations), you might want to shrink the log file to free up disk space.

4: This specifies the target size of the log file (in MB). SQL Server will attempt to shrink the log file to this size.

Note: Shrinking the transaction log should be done cautiously because it can cause fragmentation and might lead to poor performance if done frequently. After shrinking, SQL Server may need to grow the log file again as the database activity increases.

❖ Enable Autogrowth
Autogrowth is enabled by default for most SQL Server files, but it’s a good idea to review the
settings to ensure that they meet the requirements of your database environment.

FILEGROWTH = 128MB: Specifies that the log file will grow by 128 MB each time it runs out of space.
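One way to review the current size and autogrowth settings is the catalog view already used for file IDs; a sketch:

```sql
-- Size and growth settings for every file of the current database
-- (size and growth are stored in 8 KB pages, hence the * 8 / 1024 conversion to MB)
SELECT name, type_desc,
       size * 8 / 1024 AS size_mb,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(10)) + ' %'
            ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
       END AS growth_setting
FROM sys.database_files;
```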
DBCC
dbcc shrinkdatabase(DBAdatabase,20)
Shrinks all files of the database, leaving 20 percent free space in each file.

❖ File Id
◦ select * from sys.database_files

dbcc shrinkfile(3,truncateonly)
3: This is the file ID of the file you want to shrink.
TRUNCATEONLY: Releases all unused space at the end of the file back to the operating system without moving any pages within the file. Use this command when the file has unused space at the end that you want to return to the OS without rearranging data.

dbcc shrinkfile(DBAdatabase3,emptyfile)
EMPTYFILE: Migrates all data from the specified file into the other files of the same filegroup, leaving the file empty so that it can then be removed (ALTER DATABASE ... REMOVE FILE). Use this sparingly and only when you intend to drop the file.
Implement and configure contained
databases and logins
❖ Moving a database from one instance of SQL Server to another

❖ A contained database
❖ A database that is isolated from the instance of SQL Server

❖ Contained database users
❖ Configured directly at the database level and don't require an associated login
❖ Authenticate users by passwords
Implement and configure contained
databases and logins
Enable Contained Database Authentication:
EXEC sys.sp_configure N'contained database authentication', N'1'
GO
N'1': The value 1 enables contained database authentication. When set to 1, SQL Server allows databases to manage their own user authentication, independent of SQL Server instance-level authentication.

Apply Configuration Changes:
RECONFIGURE WITH OVERRIDE
The RECONFIGURE WITH OVERRIDE command makes the configuration change take effect immediately.

Set the Containment Level to Partial:
ALTER DATABASE [DBAdatabase] SET CONTAINMENT = PARTIAL WITH NO_WAIT
PARTIAL means that the database can manage some of its own features (such as user authentication) while still depending on some instance-level configurations.

Why Use Contained Databases?


Portability: Contained databases can be moved between SQL Server instances more easily because they are less dependent on instance-level configurations.
Self-contained Authentication: With contained database authentication, users can authenticate directly within the database, making it easier to manage logins and security when moving databases to different environments (e.g., cloud-based or distributed
environments).
Minimizing Dependencies: By setting the containment level to PARTIAL, you give the database some autonomy, allowing it to manage certain aspects independently (e.g., users and logins), while still leveraging instance-level settings when needed.
Implement and configure contained
databases and logins
❖ SQL users with password

CREATE USER [ContainedUser] WITH PASSWORD=N'ContainedUser'
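You can confirm that a user is contained (authenticates at the database level) by checking its authentication type; a minimal sketch:

```sql
-- Contained (database-authenticated) SQL users in the current database
SELECT name, authentication_type_desc
FROM sys.database_principals
WHERE type = 'S'                 -- SQL users
  AND authentication_type = 2;   -- 2 = DATABASE authentication
```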


Quiz
It defines 4 partitions because it includes:
Values less than or equal to 1
Values greater than 1 and less than or equal to 100
Values greater than 100 and less than or equal to 1000
Values greater than 1000
1- RANGE RIGHT: each boundary value belongs to the partition on its right, so partitions compare with < and >=.

2- RANGE LEFT: each boundary value belongs to the partition on its left, so partitions compare with <= and >.

With LEFT and boundary values (1, 100, 1000), the ranges are: <= 1 | > 1 and <= 100 | > 100 and <= 1000 | > 1000

Data Compression
❖ Page compression
Page compression reduces the size of data at the page level, which is the fundamental unit of storage in SQL Server.

❖ Row compression
Row compression reduces the size of individual rows by using more efficient storage formats:
✓ Reduces metadata overhead
✓ Uses variable-length storage for numeric-based types
✓ Uses variable-length character strings

❖ Prefix compression
Prefix compression is a technique where common prefixes (values that appear frequently at the beginning of data) are replaced with references to a prefix stored elsewhere:
✓ Stores commonly used prefixes elsewhere
✓ Values that share a common prefix are replaced with a reference to that prefix

❖ Dictionary compression
Dictionary compression replaces commonly used values with references to a dictionary that stores those values. Whenever a data value matches an entry in the dictionary, it is replaced with a pointer (reference) to that value in the dictionary.
Data Compression
ALTER TABLE [HumanResources].[Employee] REBUILD PARTITION = ALL
WITH (DATA_COMPRESSION = PAGE)
Here, PAGE compression is used, which works at the page level and applies multiple compression techniques (such as row, prefix, and dictionary compression) to reduce the data size stored on each page.

ALTER INDEX [IX_Employee_OrganizationNode] ON [HumanResources].[Employee]
REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = PAGE)
This applies PAGE compression to the index, which can reduce its size and improve performance by decreasing the number of I/O operations required to access the index.

exec sp_estimate_data_compression_savings [HumanResources], [Employee], 1, null, 'PAGE'
This stored procedure estimates the potential storage savings of applying PAGE compression to the Employee table in the HumanResources schema. Running it gives you an idea of how much storage you could save, which helps you assess the potential benefits of compression before applying it to production environments.

sp_estimate_data_compression_savings: This system stored procedure estimates the potential space savings of applying data compression (in this case, PAGE compression) to a table or index.
[HumanResources]: The schema name.
[Employee]: The table name.
1: The index ID; 1 is the clustered index (the table data itself). Pass null to estimate all indexes.
null: The partition number; null estimates all partitions.
'PAGE': Specifies that the estimate is for PAGE compression.
Sparse columns
Sparse columns in SQL Server are a special type of column optimized for storing NULL values more efficiently. They are especially useful when you have tables with many nullable columns, but only a small number of rows actually contain data for those columns.

❖ Optimized space for null values
❖ Reduce the space requirements for null values
❖ Sparse columns require more storage space for non-NULL values than the space required for identical data that is not marked SPARSE
❖ What percent of the data must be NULL for a net space savings?
❖ Estimated space savings by data type:
[Link]
server-ver15

Typically, when a column is NULL, it still takes up space in SQL Server (e.g., 1 byte). However, when you define a column as sparse, SQL Server optimizes storage for rows where this column contains a NULL value by not using storage for those NULL entries. This results in significant space savings when the column has many NULL values.

Sparse columns
create table sparsetable (heading1 nvarchar(10) sparse null)
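A quick way to see the effect is to load some mostly-NULL rows and inspect space usage; a sketch using the sparsetable just created:

```sql
-- Insert 10,000 rows where roughly 1 in 100 is non-NULL, then check space usage
INSERT INTO sparsetable (heading1)
SELECT CASE WHEN n % 100 = 0 THEN N'value' ELSE NULL END
FROM (SELECT TOP (10000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
      FROM sys.all_objects) AS t;

EXEC sp_spaceused 'sparsetable';
```

Comparing the reported data size against an identical table without SPARSE shows the net savings once enough values are NULL.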
Columnstore Indexes
Columnstore indexes are a special type of index in SQL Server designed to store data in a columnar format rather than the traditional row-based format. This approach optimizes the storage and retrieval of large amounts of data, especially for analytics and reporting workloads.
❖ Clustered and nonclustered columnstore index

CREATE NONCLUSTERED COLUMNSTORE INDEX [NonClusteredColumnStoreIndex-20220414-122043] ON [dbo].[NewTable]
(
[newcolumn]
)

CREATE CLUSTERED COLUMNSTORE INDEX [ClusteredColumnStoreIndex-20220414-122043] ON [dbo].[NewTable]
Quiz
Identify and resolve concurrency problems
Diagnose blocking, live locking and
deadlocking
❖ Blocking – one connection holds a lock, so a second connection that needs an incompatible lock must wait

❖ Live locking – a continuous stream of shared locks keeps another process from acquiring an exclusive lock (each shared request wins in turn, while the exclusive request keeps losing)

❖ Deadlocking – two processes each hold a resource the other one needs, so neither can proceed


Diagnose deadlocking - practice
❖ Transaction 1 – Q-58
-- step 1
begin transaction
update [dbo].[Table1]
set column1 = column1 + 1
-- step 4: blocks, waiting for a lock on Table2
select * from [dbo].[Table2]

❖ Transaction 2 – Q-59
-- step 2
begin transaction
update [dbo].[Table2]
set ColorName = 'Brown2'
where ColorName = 'Brown'
-- step 3: blocks, waiting for a lock on Table1
select * from [dbo].[Table1]

Deadlock cycle
Transaction 1 is waiting for a lock on Table2, which is held by Transaction 2.
Transaction 2 is waiting for a lock on Table1, which is held by Transaction 1.
Neither transaction can proceed, resulting in a deadlock.

SQL Server will choose one of the processes to kill (the deadlock victim).


Diagnose deadlocking - practice
exec sp_who2
Diagnose deadlocking - practice
❖ Activity Monitor
Monitor via DMV (Dynamic management
view)
select resource_type, request_status, request_mode, request_session_id from
sys.dm_tran_locks
Session 58 is holding a lock on a resource (e.g., a row or page in Table1) and is waiting for a lock held by Session 59.
Session 59 is holding a lock on a resource (e.g., a row or page in Table2) and is waiting for a lock held by Session 58.
Transaction (Process ID 58) was deadlocked on lock resources with
another process and has been chosen as the deadlock victim
select * from sys.dm_os_waiting_tasks
where session_id in(58,59)

select * from sys.dm_exec_requests


Examine deadlocking issues using the
SQL server logs
❖ Trace flags
❖ Flag 1222 returns deadlock information
❖ Flag 1204 provides information about the nodes involved in the deadlock

Execute the following commands to enable the trace flags globally:

dbcc traceon(1204,-1)
dbcc traceon(1222,-1) The -1 argument enables the trace flag globally for all sessions on the SQL Server instance.

Automating Deadlock Analysis


For long-term monitoring:
Use Extended Events or SQL Profiler to capture deadlock information without relying on manual trace flags.
Deadlock events are captured by the sqlserver.xml_deadlock_report event (also recorded by the built-in system_health session).
Quiz
Collect and analyse troubleshooting data
Collect trace data by using SQL Server
Profiler
❖ SQL Server Profiler

❖ Finding problem slow queries

❖ Capturing a series of T-SQL statements that lead to a problem

❖Analyzing the performance of SQL server

❖ Correlating performance counters to diagnose problems


Use XEvents (Extended Events)
Extended Events XEvents) is a lightweight and highly configurable system in SQL Server that allows users to collect, store, and analyze data about various
activities in the SQL Server instance. It is a powerful tool for monitoring and troubleshooting SQL Server performance, helping database administrators DBAs
capture and analyze events such as query execution, waits, deadlocks, and more.
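A minimal event session capturing deadlock graphs to a file target, as a sketch (the session and file names are arbitrary examples):

```sql
CREATE EVENT SESSION [CaptureDeadlocks] ON SERVER
ADD EVENT sqlserver.xml_deadlock_report        -- the Extended Events deadlock-graph event
ADD TARGET package0.event_file (SET filename = N'CaptureDeadlocks.xel')
WITH (STARTUP_STATE = ON);                     -- start automatically with the instance

ALTER EVENT SESSION [CaptureDeadlocks] ON SERVER STATE = START;
```

The built-in system_health session already records xml_deadlock_report, so a dedicated session like this is only needed for longer retention or custom filtering.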

Know what affects performance
❖ Blocking, deadlocking and locking

❖ DMVs

❖ Performance monitor
Diagnose performance problems with
DMVs (Dynamic Management Views)

❖CPU usage
select current_workers_count, work_queue_count, pending_disk_io_count
from sys.dm_os_schedulers
where scheduler_id <=255

❖ Buffer Pool/ Data cache


select count(database_id)*8/1024.0 as [cache in MB], database_id
from sys.dm_os_buffer_descriptors
group by database_id
Diagnose performance problems with
DMVs
select * from [Link]
where object_name like
'SQLServer:Buffer Manager%'
order by counter_name
Collect performance data by using
Performance Monitor
❖ Processor: % Privileged Time
❖ The percentage of time the processor spends in kernel mode, for example servicing input/output requests from SQL Server

❖ Processor: % User Time
❖ The percentage of time the processor spends in user mode, executing applications such as SQL Server

❖ System: Processor queue length
❖ The number of threads waiting for processor time (a sustained queue indicates the CPU is the bottleneck)
Collect performance data by using
Performance Monitor
❖ Data collector sets
IO, Memory and CPU bottlenecks
❖ Memory
❖Memory: Available bytes
❖Memory: Pages/sec
❖ Process: Working Set
❖SQL Server: Buffer Manager: Buffer Cache Hit Ratio
❖SQL Server: Buffer Manager: Databases Pages
❖SQL Server: Memory Manager: Total Server Memory

❖ Processor
❖Processor : % Privileged Time
❖Processor : % User Time
❖ System: Processor queue length
IO, Memory and CPU bottlenecks
❖ IO Primary
❖PhysicalDisk: Avg Disk sec/Write
❖PhysicalDisk: Avg Disk sec/Read

❖ IO Secondary
❖PhysicalDisk: Avg Disk queue length
❖PhysicalDisk: Disk Bytes/sec
❖ PhysicalDisk: Disk Transfer/sec
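Similar I/O latency figures can be pulled from a DMV instead of Performance Monitor; a sketch:

```sql
-- Average read/write latency per database file
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.name AS file_name,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_latency_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON vfs.database_id = mf.database_id AND vfs.file_id = mf.file_id;
```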
Quiz
Audit SQL Server Instances
Implement a security strategy for
auditing and controlling the instance
❖ Server-level audits
Server-level audits focus on tracking activities and events at the SQL Server instance level. These audits monitor actions that affect the entire server, such as login attempts, permission changes, and server configuration changes.

❖ Database-level audits
Capture activities and changes specific to a particular database within the SQL Server instance. These audits typically track DML (Data Manipulation Language) operations such as INSERT, UPDATE, DELETE, and SELECT statements on sensitive data, schema changes, or permission alterations within the database.

❖ Components of audits
An audit in SQL Server consists of several components that define how the audit operates and where the data is stored.

❖ Audit itself
The audit itself is the process of capturing, recording, and analyzing events.

❖ Server audit specification
A Server Audit Specification defines the server-level events that you want to capture. This specification works in conjunction with a server-level audit object and allows you to define specific events, such as login attempts or permission changes, to be audited.

❖ Database audit specification
A Database Audit Specification defines the database-level events you want to track, such as SELECT, INSERT, UPDATE, DELETE operations, and schema changes. It can be used to audit actions on specific tables, views, or stored procedures within a database.

❖ Target
A target is where the audit logs and events are stored. SQL Server allows you to store audit logs in various locations, and each target has different features.
Configure server audits
Configure server audits – monitor the
attempts to connect
Configure server audits – log file
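The configuration shown in the screenshots can also be expressed in T-SQL. A sketch of a file-target server audit that records failed logins (the audit names and path are examples):

```sql
USE master;

CREATE SERVER AUDIT [LoginAudit]
TO FILE (FILEPATH = N'C:\AuditLogs\');

CREATE SERVER AUDIT SPECIFICATION [LoginAuditSpec]
FOR SERVER AUDIT [LoginAudit]
ADD (FAILED_LOGIN_GROUP)      -- records failed connection attempts
WITH (STATE = ON);

ALTER SERVER AUDIT [LoginAudit] WITH (STATE = ON);

-- Read the captured events back:
SELECT event_time, action_id, server_principal_name
FROM sys.fn_get_audit_file(N'C:\AuditLogs\*.sqlaudit', DEFAULT, DEFAULT);
```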
Monitor elevated privileges
❖A fixed list of what the privileges currently are:
select [Link], perm.permission_name
from sys.server_permissions as perm
join sys.server_principals as princ
  on perm.grantee_principal_id = princ.principal_id

❖ Audit action type (examples for privileges)


❖Database object permission change
❖ Database object access group
❖ Database role member change group
❖Database change group
Configure database-level audit – track
who modified an object
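A database audit specification for tracking object modifications might look like this sketch; it assumes a server audit named [LoginAudit] already exists and is enabled:

```sql
USE [DBAdatabase];

CREATE DATABASE AUDIT SPECIFICATION [ObjectChangeAudit]
FOR SERVER AUDIT [LoginAudit]       -- assumed pre-existing server audit
ADD (SCHEMA_OBJECT_CHANGE_GROUP)    -- CREATE / ALTER / DROP of schema objects
WITH (STATE = ON);
```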
Additional SQL server components
Full-text indexing
▪ Additional Features (in setup): Full-Text and Semantic Extractions for Search

▪ Define a full-text index on a column or columns
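Creating the index in T-SQL might look like the following sketch for AdventureWorks' Person.Address; verify the key index name (the table's primary key) with sp_help before running:

```sql
CREATE FULLTEXT CATALOG AddressCatalog AS DEFAULT;

-- The KEY INDEX must be a unique, single-column, non-nullable index (usually the PK)
CREATE FULLTEXT INDEX ON [Person].[Address] (AddressLine1)
KEY INDEX PK_Address_AddressID
ON AddressCatalog;
```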


Full-text indexing
select *
from [Person].[Address]
where CONTAINS(AddressLine1, 'Drive')

select *
from [Person].[Address]
where CONTAINS(AddressLine1, 'Drive NEAR Glaze')
Filestream
▪ Allows you to store unstructured data like documents and images on the file system.
▪ FILESTREAM is not enabled by default.
Filestream
exec sp_configure filestream_access_level, 2
reconfigure

▪ Restart SQL server service

▪Create a filestream database


Filestream
The FileTable feature in SQL Server extends the capabilities of FILESTREAM by allowing you to store and manage unstructured data (such as documents and files) directly in the database while maintaining native file system functionality. FileTables provide an easy way to work with files and folders directly within SQL Server, without the need to manage file paths or file system operations separately.

FileTables allow SQL Server to store files and directories within the database, with each row representing a file or folder.
Non-transactional access enables integration with SQL Server while using native file system APIs.
Files can be inserted into FileTables using OPENROWSET to read files from the file system.
FileTables automatically handle storing files in a designated directory and provide metadata about each file, such as file names, types, and directory paths.

Filestream
CREATE DATABASE [filestramdatabase]


CONTAINMENT = NONE
ON PRIMARY
( NAME = N'filestramdatabase', FILENAME = N'C:\Program Files\Microsoft SQL
Server\[Link]\MSSQL\DATA\[Link]' , SIZE = 8192KB , FILEGROWTH =
65536KB ),
FILEGROUP [filestreamdata] CONTAINS FILESTREAM
( NAME = N'filestreamdata', FILENAME = N'C:\Program Files\Microsoft SQL
Server\[Link]\MSSQL\DATA\filestreamdata' )
LOG ON
( NAME = N'filestramdatabase_log', FILENAME = N'C:\Program Files\Microsoft SQL
Server\[Link]\MSSQL\DATA\filestramdatabase_log.ldf' , SIZE = 8192KB , FILEGROWTH =
65536KB )
GO
FileTable
▪ FileTables remove a significant barrier to the use of SQL Server for the storage and management of unstructured data

▪ Store files and documents in special tables in SQL Server called FileTables

▪ Every row in a FileTable represents a file or a directory

▪ Require non-transactional access

ALTER DATABASE [filestramdatabase]
SET FILESTREAM(NON_TRANSACTED_ACCESS = FULL, DIRECTORY_NAME = 'MyFiles')

create table [Link] as FILETABLE
WITH(FileTable_Directory='MyFiles', FileTable_Collate_Filename=database_default);

FileTable_Directory = 'MyFiles': Specifies the name of the folder on the file system where files will be stored, corresponding to the directory MyFiles set in the previous ALTER DATABASE command.
FileTable_Collate_Filename = database_default: Specifies that the file name collation uses the default collation settings for the database.

INSERT INTO [Link](name,file_stream)
SELECT '[Link]', x.* from OPENROWSET (BULK 'c:\[Link]', SINGLE_BLOB) AS x

OPENROWSET (BULK 'c:\[Link]', SINGLE_BLOB) reads the contents of the file as a single large binary object (SINGLE_BLOB).
x.* refers to the result of the OPENROWSET function, which is the binary data of the file.
The file is inserted into the name column (as the file name) and the file_stream column (as the file's binary data).
