Error Detection Methods in Data Link Layer

1. The document discusses different techniques for error detection in computer networks, including parity checks, checksums, and cyclic redundancy checks (CRCs).
2. Parity checks add an extra bit to make the total number of 1s in a data block even or odd for error detection. Checksums and CRCs use mathematical calculations on data blocks to generate check values for detecting errors.
3. The document provides examples to illustrate how parity checks, checksums, and CRCs are used to detect errors during data transmission.

Unit 3

24 February 2023 19:18

After Academy

Error
An error is a condition in which the receiver's information does not match the sender's information. During transmission, digital signals suffer from noise that can introduce errors into the binary bits travelling from sender to receiver: a 0 bit may change to 1, or a 1 bit may change to 0.

Error Detection Techniques:


The following are the different error detection techniques used in computer networks:
1. Simple parity check
2. Two-dimensional parity check
3. Checksum
4. Cyclic redundancy check (CRC)

Simple Parity Check


Simple parity check is the simplest error detection mechanism in computer networks, and a parity bit is used in this technique. Parity is simply an additional bit, also called a redundant bit. The parity bit is appended at the end of the data unit before transmitting the data.
This increases the size of the data because one extra bit is added. For example, if we are sending 8-bit data, then after appending the parity bit the data block becomes 9 bits.
There are two types of parity bits that can be added for error detection as follows:
Even Parity: In even parity, an additional bit is added at the end of the data block to make the number of 1's even.
If the number of 1's in the data is already even, the parity bit 0 is added. If the number of 1's in the data block is odd, the parity bit 1 is added.
Odd Parity: In odd parity, an additional bit is added to make the number of 1's odd.
If the number of 1's in the data unit is already odd, the parity bit 0 is added at the end of the data unit. If the number of 1's is even, the parity bit 1 is added.
Two-Dimensional Parity Check
In a two-dimensional parity check, a table is used to arrange the data: the whole data is divided into rows, and both row and column parities are calculated. The parity bits are appended at the end of the data block and sent along with the data.
Take the example of the following data to understand the concept of a two-dimensional parity
check.
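Row and column parity can be sketched in Python; the 7-bit data rows below are hypothetical examples chosen for illustration:

```python
def two_d_parity(rows):
    """Append an even-parity bit to each row, then an even-parity column row."""
    with_row_parity = [r + str(r.count("1") % 2) for r in rows]
    col_parity = "".join(
        str(sum(int(r[i]) for r in with_row_parity) % 2)
        for i in range(len(with_row_parity[0]))
    )
    return with_row_parity + [col_parity]

block = two_d_parity(["1100111", "1011101", "0111001"])
for row in block:
    print(row)   # every row, including the parity row, has an even number of 1's
```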

Checksum-
Checksum is an error detection method.

Error detection using checksum method involves the following steps-

Step-01:
At the sender's side,

• If an m-bit checksum is used, the data unit is divided into segments of m bits.
• All the m-bit segments are added.
• The sum is then complemented using 1's complement arithmetic.
• The value so obtained is called the checksum.

Step-02:

• The data along with the checksum value is transmitted to the receiver.

Step-03:
At the receiver's side,

• If an m-bit checksum is being used, the received data unit is divided into segments of m bits.
• All the m-bit segments are added along with the checksum value.
• The value so obtained is complemented and the result is checked.

Then, following two cases are possible-

Case-01: Result = 0

If the result is zero,

• Receiver assumes that no error occurred in the data during the transmission.
• Receiver accepts the data.

Case-02: Result ≠ 0

If the result is non-zero,

• Receiver assumes that error occurred in the data during the transmission.
• Receiver discards the data and asks the sender for retransmission.

Checksum Example-
Consider the data unit to be transmitted is-

10011001111000100010010010000100

Consider that an 8-bit checksum is used.

Step-01:

At sender side,
The given data unit is divided into segments of 8 bits as-

Now, all the segments are added and the result is obtained as-

• 10011001 + 11100010 + 00100100 + 10000100 = 1000100011


• Since the result consists of 10 bits, the extra 2 bits are wrapped around.
• 00100011 + 10 = 00100101 (8 bits)
• Now, 1’s complement is taken which is 11011010.
• Thus, checksum value = 11011010

Step-02:

• The data along with the checksum value is transmitted to the receiver.

Step-03:

At receiver side,

• The received data unit is divided into segments of 8 bits.


• All the segments along with the checksum value are added.
• Sum of all segments + Checksum value = 00100101 + 11011010 = 11111111
• Complemented value = 00000000
• Since the result is 0, receiver assumes no error occurred in the data and therefore accepts it.
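The worked example above can be reproduced with a minimal Python sketch of the 1's-complement checksum:

```python
def ones_complement_sum(segments, m=8):
    """Add m-bit segments, wrapping any carry bits back into the sum."""
    total = 0
    for seg in segments:
        total += int(seg, 2)
        while total >= (1 << m):                  # end-around carry
            total = (total & ((1 << m) - 1)) + (total >> m)
    return total

def checksum(segments, m=8):
    """1's complement of the wrapped sum."""
    return ((1 << m) - 1) ^ ones_complement_sum(segments, m)

data = "10011001111000100010010010000100"
segs = [data[i:i + 8] for i in range(0, len(data), 8)]
c = checksum(segs)
print(format(c, "08b"))                           # 11011010, as in the example

# receiver side: segments plus checksum complement to zero -> accept
print(checksum(segs + [format(c, "08b")]))        # 0
```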

From <[Link]

Cyclic Redundancy Check-

• Cyclic Redundancy Check (CRC) is an error detection method.


• It is based on binary division.

CRC Generator-

• CRC generator is an algebraic polynomial represented as a bit pattern.


• The bit pattern is obtained from the CRC generator using the following rule: each power of x present in the polynomial contributes a 1 bit, and each absent power contributes a 0 bit, from the highest degree down to x^0.

Example-
Consider the CRC generator is x^7 + x^6 + x^4 + x^3 + x + 1.

The corresponding binary pattern is 11011011 (one bit per power of x, from x^7 down to x^0).

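The polynomial-to-bit-pattern rule can be sketched as follows (the second polynomial, x^2 + 1, is a hypothetical extra example):

```python
def poly_to_bits(degrees):
    """Bit pattern of a generator polynomial given its nonzero powers of x."""
    n = max(degrees)
    return "".join("1" if d in degrees else "0" for d in range(n, -1, -1))

# x^7 + x^6 + x^4 + x^3 + x + 1
print(poly_to_bits({7, 6, 4, 3, 1, 0}))   # 11011011
# x^2 + 1
print(poly_to_bits({2, 0}))               # 101
```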

Steps Involved-

Error detection using CRC technique involves the following steps-

Step-01: Calculation Of CRC At Sender Side-

At sender side,

• Append (number of bits in the generator's binary pattern − 1) zeros to the end of the data.
• Perform binary (modulo-2) division of the resultant string by the CRC generator.
• The remainder so obtained after the division is called the CRC.
• The remainder is placed in place of the appended 0's.

Ex:

Step-02: Appending CRC To Data Unit-

At sender side,

• The CRC is obtained after the binary division.


• The string of n 0’s appended to the data unit earlier is replaced by the CRC remainder.

Step-03: Transmission To Receiver-

• The newly formed code word (Original data + CRC) is transmitted to the receiver.

Step-04: Checking at Receiver Side-

At receiver side,

• The transmitted code word is received.


• The received code word is divided with the same CRC generator.
• On division, the remainder so obtained is checked.
The following two cases are possible-

Case-01: Remainder = 0

If the remainder is zero,

• Receiver assumes that no error occurred in the data during the transmission.
• Receiver accepts the data.

Case-02: Remainder ≠ 0

If the remainder is non-zero,

• Receiver assumes that some error occurred in the data during the transmission.
• Receiver rejects the data and asks the sender for retransmission.
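Both the sender and receiver sides of the steps above can be sketched in Python; the data word 1001101 and generator 1011 are hypothetical example values, not taken from the text:

```python
def mod2_div(dividend: str, generator: str) -> str:
    """Remainder of modulo-2 (XOR-based) long division."""
    bits = list(dividend)
    n = len(generator) - 1
    for i in range(len(bits) - n):
        if bits[i] == "1":                    # XOR the generator in here
            for j, gbit in enumerate(generator):
                bits[i + j] = str(int(bits[i + j]) ^ int(gbit))
    return "".join(bits[-n:])

def crc(data: str, generator: str) -> str:
    """Sender: append len(generator)-1 zeros, divide, keep the remainder."""
    return mod2_div(data + "0" * (len(generator) - 1), generator)

data, gen = "1001101", "1011"
r = crc(data, gen)
codeword = data + r                           # original data + CRC
print(r)                                      # 101
print(mod2_div(codeword, gen))                # receiver: 000 -> accept
```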

From <[Link]

Data Link Controls


Data Link Control is the service provided by the Data Link Layer to ensure reliable data transfer over the physical medium. The data link layer also coordinates the devices so that no collision occurs.
The data link layer provides three functions:
• Framing
• Flow Control
• Error Control

Framing

In a point-to-point connection between two devices, data is transmitted over the wire as a stream of bits. Framing divides this stream into frames: when a packet is large, it is divided into smaller frames. These smaller frames make error and flow control more efficient.

Problems in Framing

Given below are some of the problems caused due to framing:
1. Detecting the start of the frame: Whenever a frame is transmitted, every station must be able to detect it. A station detects the frame by looking for a special sequence of bits marked at the beginning of the frame, the Starting Frame Delimiter (SFD).
2. How any station detects a frame: Every station in the network listens to the link. If an SFD is detected, the sequential circuit alerts the station. The station then checks the destination address in order to accept or reject the frame.
3. Detecting the end of the frame: That is, when to stop reading the frame.

Parts of a frame

Different parts of a frame are as follows:

1. Flag: A flag is used to mark the beginning and end of the frame.
2. Header: The frame header mainly contains the addresses of the source and the destination of the frame.
3. Trailer: The frame trailer mainly contains the error detection and error correction bits.
4. Payload Field: This field contains the message to be delivered.

Types of Framing

Framing is mainly categorized into two parts:


• Fixed-size Framing
• Variable-size Framing
Let us cover the above given two types one-by-one;

Fixed-size framing

In fixed-size framing, there is no need to define the boundaries of the frame; the size or length of the frame itself can be used as a delimiter.
One drawback of fixed-size framing is that it suffers from internal fragmentation if the size of the data is less than the size of the frame.

Variable-size framing

In variable-size framing, the size of each frame is different. Thus, a way is needed to define the end of one frame and the beginning of the next.
There are two approaches used for Variable-size framing:
Character-Oriented Protocols

In character-oriented protocols, data are carried as 8-bit characters.


Now the parts of the frame in Character-Oriented Framing are as follows:
1. Header
The header of the frame contains the addresses of the source and destination in the form of bytes.
2. Payload Field
The payload field mainly contains the message that is to be delivered. In this case, it is a variable sequence of data bytes.
3. Trailer
The trailer of the frame contains the bytes for error correction and error detection.
4. Flag
In order to separate one frame from the next, an 8-bit (that is, 1-byte) flag is added at the beginning and end of the frame.
Let us take a look at the frame in Character-Oriented Protocol:

Now any pattern that is used for the flag could also be a part of the Information. If this
happens then the receiver encounters this pattern in the middle of the data and then
thinks that it has reached the end of the frame.
In order to fix the above problem, the byte-stuffing strategy was added to the character-
oriented framing.
Byte-stuffing

If the flag byte's pattern is present in the message byte sequence, there must be a strategy so that the receiver does not mistake it for the end of the frame. Here, a special byte called the escape character (ESC) is added before every byte in the message that has the same pattern as the flag byte. If the ESC pattern itself appears in the message, another ESC byte is added before it.
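Byte stuffing can be sketched as follows; the flag byte `~` and escape byte `}` are hypothetical choices for illustration:

```python
FLAG, ESC = b"~", b"}"   # hypothetical flag and escape byte values

def byte_stuff(payload: bytes) -> bytes:
    """Escape flag/ESC bytes in the payload and wrap it in flag delimiters."""
    out = bytearray()
    for b in payload:
        if bytes([b]) in (FLAG, ESC):
            out += ESC               # prepend ESC before the conflicting byte
        out.append(b)
    return FLAG + bytes(out) + FLAG

def byte_unstuff(frame: bytes) -> bytes:
    """Strip the delimiting flags and drop every escape byte."""
    body = frame[1:-1]
    out, i = bytearray(), 0
    while i < len(body):
        if bytes([body[i]]) == ESC:
            i += 1                   # skip the escape, keep the next byte
        out.append(body[i])
        i += 1
    return bytes(out)

msg = b"ab~c}d"
frame = byte_stuff(msg)
print(frame)
print(byte_unstuff(frame) == msg)    # True
```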

Bit-Oriented Protocols

In Bit-oriented framing mainly the data section of the frame is a sequence of bits that are to
be interpreted by the upper layer as text, graphics, audio, video, etc.
In this, there is also a need for a delimiter in order to separate one frame from the other.
Let us take a look at the frame in Bit-oriented Protocol:

Bit-Stuffing
Most protocols use the 8-bit pattern 01111110 as the flag, which contains six consecutive 1's. To differentiate the message from the flag when the same sequence appears in the message, a 0 bit is stuffed after every five consecutive 1's in the data. When the receiver receives the data, it removes the stuffed 0; this is called bit un-stuffing.
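The stuffing and un-stuffing rules can be sketched in Python (the message bits are a hypothetical example):

```python
def bit_stuff(bits: str) -> str:
    """Insert a 0 after every run of five consecutive 1's."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")          # stuffed bit
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Remove the 0 that follows every run of five consecutive 1's."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:                     # this is a stuffed 0: drop it
            skip, run = False, 0
            continue
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            skip, run = True, 0
    return "".join(out)

msg = "0111110111111"
print(bit_stuff(msg))                # 011111001111101
print(bit_unstuff(bit_stuff(msg)) == msg)
```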
From <[Link]

Flow Control
Flow control tells the sender how much data should be sent to the receiver so that it
is not lost. This mechanism makes the sender wait for an acknowledgment before
sending the next data. There are two ways to control the flow of data:

1. Stop and Wait Protocol


2. Sliding Window Protocol

Stop and Wait Protocol


It is the simplest flow control method. In this, the sender will send one frame at a
time to the receiver. Until then, the sender will stop and wait for the
acknowledgment from the receiver. When the sender gets the acknowledgment then
it will send the next data packet to the receiver and wait for the acknowledgment
again and this process will continue. This can be understood by the diagram below.

Suppose a frame sent by the sender is lost and never reaches the receiver. The receiver will not send any acknowledgment, as it has not received any frame. The sender, in turn, will not send the next frame, because it is waiting for the acknowledgment of the previous frame. So a deadlock situation can arise here. To avoid such a situation there is a time-out timer: the sender waits a fixed amount of time for the acknowledgment, and if the acknowledgment is not received, it sends the frame again.

There are two types of delays while sending these frames:

• Transmission Delay: Time taken by the sender to send all the bits of the frame
onto the wire is called transmission delay. This is calculated by dividing the data
size(D) which has to be sent by the bandwidth(B) of the link.
Td = D / B

• Propagation Delay: Time taken by the last bit of the frame to reach from one
side to the other side is called propagation delay. It is calculated by dividing the
distance between the sender and receiver by the wave propagation speed.
Tp = d / s ; where d = distance between sender and receiver, s = wave propagation
speed

The propagation delay for sending the data frame and the acknowledgment frame is
the same as distance and speed will remain the same for both frames. Hence, the
total time required to send a frame is

Total time= Td(Transmission Delay) + Tp(Propagation Delay for data frame) +


Tp(Propagation Delay for acknowledgment frame)

The sender is doing work only for Td time and for the rest 2Tp time the sender is
waiting for the acknowledgment.

Efficiency = Useful Time/ Total Time

η = Td / (Td + 2Tp)
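The efficiency formula can be evaluated numerically; the link parameters below (frame size, bandwidth, distance, propagation speed) are hypothetical values chosen for illustration:

```python
def stop_and_wait_efficiency(data_bits, bandwidth_bps, distance_m, speed_mps):
    """Efficiency = Td / (Td + 2*Tp), ignoring ACK transmission time."""
    td = data_bits / bandwidth_bps    # transmission delay Td = D / B
    tp = distance_m / speed_mps       # propagation delay Tp = d / s
    return td / (td + 2 * tp)

# hypothetical link: 1000-bit frames, 1 Mbps, 2000 km, 2e8 m/s
print(stop_and_wait_efficiency(1000, 1e6, 2_000_000, 2e8))   # about 0.048
```

Note how the long propagation delay dominates here, which is exactly the low-efficiency case listed under the disadvantages.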

Advantages of Stop and Wait Protocol


1. It is very simple to implement.
Disadvantages of Stop and Wait Protocol
1. We can send only one packet at a time.
2. If the distance between the sender and the receiver is large then the
propagation delay would be more than the transmission delay. Hence, efficiency
would become very low.
3. After every transmission, the sender has to wait for the acknowledgment and
this time will increase the total transmission time.

Sliding Window Protocol


As we saw that the disadvantage of the stop and wait protocol is that the sender
waits for the acknowledgment and during that time the sender is idle. In sliding
window protocol we will utilize this time. We will change this waiting time into
transmission time.

A window is a buffer where we store the frames. Each frame in a window is


numbered. If the window size is n then the frames are numbered from the number 0
to n-1. A sender can send n frames at a time. When the receiver sends the
acknowledgment of the frame then we need not store that frame in our window as it
has already been received by the receiver. So, the window in the sender side slides to
the next frame and this window will now contain a new frame along with all the
previous unacknowledged frames of the window. At any instance of time window
will only contain the unacknowledged frames. This can be understood with
the example below:

1. Suppose the size of the window is 4. So, the frames would be numbered as
0,1,2,3,0,1,2,3,0,… so on.
2. Initially, the frames in the window are 0,1,2, 3. Now, the sender starts
transmitting the frames. The first frame is sent, then second and so on.
3. When the receiver receives the first frame i.e. frame 0. Then it sends an
acknowledgment.
4. When the acknowledgment is received by the sender then it knows that the first
frame has been received by the receiver and it need not keep its record. So,
the window slides to the next frame.
5. The new window contains the frame 1, 2, 3, 0. In this way, the window slides
hence the name sliding window protocol.

Using the sliding window protocol, the efficiency can be made maximum, i.e. 1. In the sliding window protocol we also use the propagation delay time for transmission. For this, the sender should be transmitting all the time, i.e. for Td + 2Tp time. So, what should be the number of packets such that the efficiency is maximum?

We will apply a simple unitary method to find this. In Td units of time, we can send
one packet. So in one unit of time, we can send 1/Td packets. We have total time as
Td+2Tp. Therefore, in Td+2Tp time we can send (Td+2Tp)/Td packets. Let a=Tp/Td.
So, if we send 1+2a packets then the efficiency is 1.

Td units of time → 1 packet transmitted

1 unit of time → (1/ Td) packet transmitted

Td + 2Tp units of time → (Td + 2Tp) / Td packets transmitted
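The unitary-method result above can be checked numerically; the delay values are hypothetical:

```python
import math

def optimal_window(td, tp):
    """Smallest window size N = 1 + 2a (a = Tp/Td) that keeps the sender busy."""
    a = tp / td
    return math.ceil(1 + 2 * a)

def sliding_window_efficiency(n, td, tp):
    """Efficiency min(1, N / (1 + 2a)) for a window of N frames."""
    a = tp / td
    return min(1.0, n / (1 + 2 * a))

td, tp = 1e-3, 0.01                    # hypothetical delays, in seconds
print(optimal_window(td, tp))          # 21 frames keep the link fully used
print(sliding_window_efficiency(21, td, tp))   # 1.0
print(sliding_window_efficiency(1, td, tp))    # stop-and-wait case
```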


This is how the flow of data is controlled using the above two mechanisms.

From <[Link]

Error Control
Error Control in the data link layer is a process of detecting and retransmitting
the data which has been lost or corrupted during the transmission of data. Any
reliable system must have a mechanism for detecting and correcting such errors.
Error detection and correction occur at both the transport layer and the data link
layer. Here we will talk about the data link layer and check bit by bit that if there
is any error or not.

Types of error
Single bit Error: When there is a change in only one bit of the sender's data then
it is called a single bit error.

Example: If the sender sends 101(5) to the receiver but the receiver receives
100(4) then it is a single bit error.

101(sent bits) → 100(received bits)

Burst Error: When there is a change in two or more bits of the sender’s data
then it is called a burst error.

Example: If the sender sends 1011(11) to the receiver but the receiver receives
1000(8) then it is a burst error.

1011(sent bits) → 1000(received bits)

Phases in Error Control


• Error Detection: First, the receiver needs to detect whether the received data contains an error or not.
• Acknowledgement: If any error is detected, the receiver sends a negative acknowledgement (NACK) to the sender.
• Retransmission: When the sender receives a negative acknowledgement, or if no acknowledgement is received from the receiver, the sender retransmits the data.

Error Detection

1. Simple parity check

2. Two-dimensional parity check

3. Checksum

4. CRC (cyclic redundancy check)

Retransmission
When any error is detected, the specified frames are sent again; this process is called Automatic Repeat reQuest (ARQ). Error control in the data link layer is based on ARQ.

The following error control techniques can be used once the error is detected.

1. Stop and wait ARQ


2. Sliding Window ARQ

Stop and Wait ARQ


A time-out counter is maintained on the sender's side. First, if the sender does not receive the acknowledgement of the sent data within the given time, it assumes that the sent data, or its acknowledgement, has been lost, so it retransmits the data to the receiver. Second, if the receiver detects an error in the data frame, indicating that it was corrupted during transmission, the receiver sends a NACK (negative acknowledgement). If the sender receives a negative acknowledgement for the data, it retransmits the data.

Sliding Window ARQ


In sliding window ARQ, a sender can send multiple data frames at the same time. The sender keeps a record of all the sent frames until they have been acknowledged. The receiver sends an ACK (acknowledgement) or a NACK (negative acknowledgement) depending on whether the data frame was received correctly, an error was detected, or the frame was lost.

The sliding window ARQ is of two types:

1. Go-Back-N ARQ
2. Selective Repeat ARQ

Go-Back-N ARQ
In this protocol, if any frame is lost or corrupted, then all the frames since the last acknowledged frame are sent once again. The sender's window size is N, but the receiver's window size is only one.

Example: Suppose we have a window size of 4 for the data frames which we are
going to send. Now, suppose while sending the data frame 2 some error
occurred and it got corrupted. So the receiver will send a negative
acknowledgement (NACK) of the data. All the data frames after the last
acknowledged(ACK) frames i.e after frame 1 will now be sent again.
Limitations of Go-Back-N ARQ
In this, we have to send all the frames once again even though some of them have no errors. In the above example, we had to send all the frames, i.e. 2, 3, 4, 5, once again even though the error was only in frame 2. How can we overcome this?

Selective Repeat ARQ


In this ARQ, if any frame is lost or corrupted, then only the frame that received a negative acknowledgement is sent again. The sender's window size and the receiver's window size are the same here.

It removes the problem of Go-Back-N ARQ: error-free frames can be accepted because the receiver's window size is now equal to the sender's, unlike Go-Back-N ARQ, where the receiver's window size was only 1. The retransmission method is modified so that only the individual frames are retransmitted.

Example: In the above example, if there was an error in frame 2, we would send only frame number 2 again.
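The two retransmission counts can be compared with a simplified batch model in Python. The window size, frame count, and corrupted frame below are hypothetical, and the model assumes the whole window is transmitted before the error is reported; real senders pipeline continuously:

```python
def go_back_n_transmissions(num_frames, window, lost_first_try):
    """Frames put on the wire when each frame in lost_first_try is corrupted
    once; on an error, everything from that frame onward is resent."""
    sent, base, failed = 0, 0, set(lost_first_try)
    while base < num_frames:
        batch = list(range(base, min(base + window, num_frames)))
        sent += len(batch)                       # the whole window goes out
        bad = next((f for f in batch if f in failed), None)
        if bad is None:
            base = batch[-1] + 1                 # all acknowledged, slide on
        else:
            failed.discard(bad)                  # fails only on first attempt
            base = bad                           # go back to the bad frame
    return sent

def selective_repeat_transmissions(num_frames, lost_first_try):
    return num_frames + len(lost_first_try)      # only bad frames are resent

print(go_back_n_transmissions(6, 4, {2}))        # 8 frames on the wire
print(selective_repeat_transmissions(6, {2}))    # 7 frames on the wire
```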
This is how the error control is done in networking.

From <[Link]

What are Multiple Access Protocols?


Data transmission between two nodes is handled by the Data Link Layer. Its primary
duties include data link control and multiple access control.

Data Link Control


By utilising methods like framing, error control, and flow control, data link control is in charge of ensuring that messages are reliably transmitted over transmission channels. Stop-and-Wait ARQ is one technique used for data link control.

Multiple Access Control


Data link control is sufficient if there is a dedicated link between the sender and the receiver; if there isn't, numerous stations can access the channel at once. Therefore, it is necessary to use multiple access protocols to reduce collisions and prevent crosstalk.

Classification of Multiple Access Protocol


Multiple access protocols may also be classified into: Random Access Protocol,
Controlled Access and Channelization.

1. Random Access Protocol


All stations in a random access protocol have equal priority, which means that no station has higher priority than any other. Depending on the status of the medium (idle or busy), any station may send data. It has two attributes:
• There is no set timing for data transmission.
• The order of the stations delivering data is not fixed.
The following are divisions of the random access protocols:

(a) ALOHA
Although ALOHA was created for wireless LANs, it can also be used on shared mediums. It allows simultaneous data transmission from numerous stations, which might cause collisions and jumbled data.

(b) CSMA
Fewer collisions are guaranteed by carrier sensing multiple access (CSMA) since the
station must first determine whether the medium is busy or idle before delivering
data. If it isn’t idle, it waits for the channel to become idle before sending data. Due to
propagation latency, there is still a potential for collision in CSMA.
Here are the CSMA access modes:
• Non-persistent: The node senses the channel; if it is free, it sends the data; if not, it waits a random amount of time (rather than sensing continuously) and then senses the medium again.
• 1-persistent: The node senses the channel and sends the data if it is idle; otherwise it continuously senses the medium and transmits unconditionally (with probability 1) as soon as the channel becomes idle.
• P-persistent: The node senses the medium and, if it is idle, sends data with probability p. If the data is not transmitted (probability 1−p), the node waits for a slot and checks the medium once more. If the medium is still idle, it again sends with probability p. This process repeats until the frame is sent. It is used in packet radio and WiFi systems.
• O-persistent: Transmission takes place in a sequence determined by the priority of the nodes. A node waits for its own time slot to send data when the medium is not in use.
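The p-persistent mode can be sketched as a small simulation; the slot model, busy period, and probability below are hypothetical simplifications of a real channel:

```python
import random

def p_persistent_send(p, channel_idle, rng):
    """Return the slot in which a p-persistent station finally transmits.
    channel_idle(t) reports whether the medium is idle in slot t."""
    t = 0
    while True:
        while not channel_idle(t):   # keep sensing until the channel is idle
            t += 1
        if rng.random() < p:         # transmit with probability p
            return t
        t += 1                       # with probability 1-p, wait one slot

rng = random.Random(42)              # fixed seed so the run is repeatable
slot = p_persistent_send(0.5, lambda t: t >= 3, rng)
print(slot)                          # some slot >= 3: channel busy until then
```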

(c) CSMA/CD
CSMA/CD is an abbreviation of Carrier Sense Multiple Access/Collision Detection. It
refers to the multiple access carrier with collision detection. In CSMA/CD, all the
stations have the ability to stop data transmission if a collision is found anywhere.

(d) CSMA/CA
CSMA/CA stands for Carrier Sense Multiple Access/Collision Avoidance. In multiple access with carrier sensing and collision avoidance, the sender depends on receiving acknowledgement signals rather than on detecting collisions directly. For collision detection, the sender must distinguish two cases: if only one signal (its own) is present, the data was delivered successfully, but if two signals are present (its own and the one it collided with), a collision has occurred. Distinguishing these cases requires the collision to significantly affect the received signal, which holds in wired networks but not in wireless networks; this is why CSMA/CA is employed in wireless networks.
The CSMA/CA prevents collisions by:
1. Interframe space – In order to prevent collisions caused by propagation delays,
the station waits for the medium to become idle before sending data. This waiting
period is known as the Interframe Space (IFS). Once more, it checks to see if the
medium is idle after this. The priority of the station affects the IFS duration.
2. Contention Window – Time is divided into slots. When the transmitter is ready to send data, it picks a random number of wait slots, and this number doubles each time the medium is found busy. If the medium is found to be in use, the process is not restarted from scratch; rather, the backoff timer simply resumes when the channel is again found idle.
3. Acknowledgement – If the acknowledgement is not received before the time-out, the sender resends the data.

2. Controlled Access Protocol


In controlled access, a station sends data only after it has been authorized to do so. The stations exchange information to determine which station has the right to send. To prevent message collisions on a shared medium, only one node is permitted to send at a time. These are the three controlled-access techniques:
• Token Passing
• Polling
• Reservation

3. Channelization
The channelization protocol allows numerous stations to access the same channel at
the same time by sharing the link’s available bandwidth according to time, frequency,
and code. The three types of channelization are: Frequency Division Multiple Access,
Time Division Multiple Access and Code Division Multiple Access.

(a) Frequency Division Multiple Access


In order to assign each station its own band, the available bandwidth is split into equal
bands. In order to prevent crosstalk and noise, guard bands are also included to ensure
that no two bands overlap.

(b) Time Division Multiple Access


The bandwidth is shared among several stations in time. Time is divided into slots in which stations transmit data, in order to prevent collisions. However, there is a synchronisation overhead, because each station needs to know its time slot; this can be addressed by including synchronisation bits in each slot. Propagation delay is also a problem in TDMA, but it can be remedied by adding guard times between slots.

(c) Code Division Multiple Access


All signals are broadcast simultaneously on one channel in this process; neither time nor bandwidth is divided. For instance, when many people in a room are speaking at once, two individuals can still communicate completely if only those two speak the same language. Similarly, data from several stations can be sent simultaneously using different code languages.
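The "code language" idea can be sketched with orthogonal chip sequences; this is a two-station toy example, and real CDMA systems use much longer spreading codes:

```python
# Walsh codes: orthogonal +1/-1 chip sequences (their dot product is 0)
C1 = [1, 1, 1, 1]
C2 = [1, -1, 1, -1]

def cdma_encode(bit, code):
    """Spread one data bit (1 -> +1, 0 -> -1) over the station's code."""
    d = 1 if bit else -1
    return [d * c for c in code]

def cdma_decode(channel, code):
    """Correlate the summed channel signal with a station's code."""
    score = sum(s * c for s, c in zip(channel, code))
    return score > 0

# both stations transmit at once; their signals simply add on the channel
sig = [a + b for a, b in zip(cdma_encode(1, C1), cdma_encode(0, C2))]
print(cdma_decode(sig, C1), cdma_decode(sig, C2))   # True False
```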

From <[Link]
