Error Detection and Correction Overview

This document discusses error detection and correction in networks. It explains that data can become corrupted during transmission and techniques are needed to detect and correct errors. There are two main types of errors - single-bit errors, where one bit is flipped, and burst errors, where multiple contiguous bits are flipped. Various coding schemes add redundancy to transmitted data to enable error detection and correction at the receiver. These include parity bits, forward error correction, retransmission, block coding, and cyclic redundancy checks. The minimum Hamming distance of a code is important for error detection and correction performance.

Uploaded by Maykie W.A.
© Attribution Non-Commercial (BY-NC)

Warasinee Apirakdacharchai, ID 5080048
DataCom Report: Error Detection and Correction

Networks must be able to transfer data from

one device to another with acceptable accuracy. However, whenever data are sent, they can be corrupted in transit by many factors. Some applications therefore require a mechanism for detecting and correcting errors. Errors occur because of interference, which can change the shape of the signal. There are two types of error: single-bit errors and burst errors. In a single-bit error, a 0 is changed to a 1 or vice versa, meaning that only one bit of a given data unit (such as a byte, character, or packet) is changed.

In a burst error, multiple bits are changed from 0 to 1 or vice versa. For example, we might send 0100010001000011 but receive 0101110101100011.
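The burst in the example above can be made visible by comparing the two words bit by bit. As a small sketch of my own (not part of the report), XOR-ing the sent and received words marks every flipped bit with a 1:

```python
# Comparing sent and received words: a 1 in the error pattern marks
# a flipped bit, exposing the burst.
sent     = "0100010001000011"
received = "0101110101100011"

error_pattern = "".join(
    "1" if s != r else "0" for s, r in zip(sent, received)
)
flipped = [i for i, bit in enumerate(error_pattern) if bit == "1"]

print(error_pattern)   # 0001100100100000
print(flipped)         # [3, 4, 7, 10] -- a burst spanning 8 bit positions
```

Here four bits are corrupted, spread over eight consecutive positions, which is exactly the burst pattern described above.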

A burst error is more likely to occur than a single-bit error. The duration of noise is normally longer than the duration of one bit, which means that when noise affects the data, it affects a whole set of bits. The central concept in detecting or correcting errors is redundancy. To detect errors, we send some extra bits along with the data; these bits allow the receiver to detect or correct corrupted bits. The redundant bits are added by the sender and removed by the receiver. The simplest scheme is a parity bit, which comes in two types: odd parity and even parity.
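To make the parity idea concrete, here is a minimal sketch of my own (not from the report) of an even-parity scheme: the sender appends one bit so that the total number of 1s is even, and the receiver recomputes the count to detect a single-bit error.

```python
# Even parity: append a bit so the total number of 1s is even.
def even_parity(bits: str) -> str:
    return bits + ("1" if bits.count("1") % 2 else "0")

codeword = even_parity("0100010")   # two 1s already, so append "0"

# The receiver's check: an even count of 1s means no (detected) error.
corrupted = "01000110"              # one bit of the codeword flipped
ok = corrupted.count("1") % 2 == 0
print(codeword, ok)
```

Note that parity detects any odd number of flipped bits but is blind to an even number, which is one reason stronger codes are needed for burst errors.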

In error detection, all we want to know is whether any error has occurred. Error correction is more difficult: after finding an error, we must also fix it, which requires knowing the exact number of corrupted bits and their locations in the message. The important factors are the number of errors and the size of the message. There are two main methods of error correction: forward error correction and retransmission. In forward error correction, the receiver tries to reconstruct the message by using the redundant bits. In retransmission, the receiver detects the occurrence of an error and asks the sender to resend the message; resending is repeated until the message is error-free.

Redundancy is achieved through various coding schemes, which we can divide into two broad categories: block coding and convolutional coding.

A checksum of a message is a modular-arithmetic sum of the message's code words of a fixed word length (e.g., byte values). The sum may be negated by means of a one's complement prior to transmission, to detect errors resulting in all-zero messages. Taking a bitwise exclusive-or in place of performing an addition is an example of modulo-2 arithmetic, which is one form of polynomial arithmetic; some authors call it "CRC arithmetic." Like many of the oddities that mathematicians like to study, modulo-2 arithmetic seems completely useless to a non-mathematician at first glance but turns out to have a very practical application: developing CRC checks. The basic idea is that we work in binary, but with no carry in addition and no borrow in subtraction. This means that addition and subtraction become the same operation: a bitwise exclusive-or. Because of this, the total ordering we expect of integers is replaced by a partial ordering: one number is greater than another if and only if its leftmost 1 is farther left than the other's. This has an impact on division. Multiplication is just like ordinary multiplication, except that the additions are performed with exclusive-ors. Division is like ordinary long division, except that the subtractions are replaced by exclusive-ors, and you can subtract any time the leftmost bits line up (since, by the partial ordering above, the numbers are regarded as equal in that case).

Block coding is a special case of error-control coding. Block coding techniques map a fixed number of message symbols to a fixed number of code symbols: we divide the message into blocks, and a block coder treats each block of data independently, making it a memoryless device. Convolutional codes are used extensively in numerous applications in order to achieve reliable
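The modulo-2 long division described above can be sketched in a few lines; this is my own illustration (not part of the report), and the remainder it produces is exactly what a CRC transmits.

```python
# Modulo-2 long division: subtraction is XOR, and we "subtract"
# whenever the leftmost bits line up, per the partial ordering above.
def mod2_divide(dividend: int, divisor: int) -> int:
    """Return the modulo-2 remainder of dividend / divisor."""
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        shift = dividend.bit_length() - dlen
        dividend ^= divisor << shift   # XOR plays the role of subtraction
    return dividend

# 0b1101 (x^3 + x^2 + 1) divided by 0b101 (x^2 + 1)
# leaves remainder 0b10 (the polynomial x):
r = mod2_divide(0b1101, 0b101)
print(bin(r))   # 0b10
```

Treating bit strings as polynomial coefficients in this way is what makes the "polynomial arithmetic" view of CRCs work.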

data transfer, including digital video, radio, mobile communication, and satellite communication. These codes are often implemented in concatenation with a hard-decision code, particularly Reed-Solomon. Prior to turbo codes, such constructions were the most efficient, coming closest to the Shannon limit.

One of the central concepts in coding for error control is the Hamming distance, named after Richard Hamming. It is used in telecommunication to count the number of flipped bits in a fixed-length binary word as an estimate of error, and is therefore sometimes called the signal distance. Hamming-weight analysis of bits is used in several disciplines, including information theory, coding theory, and cryptography. Although the concept of Hamming distance is the central point in dealing with error detection and correction codes, the measurement used for designing a code is the minimum Hamming distance: the smallest Hamming distance between all possible pairs of words in a set.

A linear block code is an error-correcting code in which any linear combination of codewords is another codeword of the code. Linear codes are traditionally partitioned into block codes and convolutional codes, although turbo codes can be seen as a hybrid of these two types. Linear codes allow for more efficient encoding and decoding algorithms than other codes. They are used in forward error correction and are applied in methods for transmitting symbols (e.g., bits) on a communications channel so that, if errors occur in the communication, some errors can be detected by the recipient of a message block. The "codes" in a linear block code are blocks of symbols which are encoded using more symbols than the original value to be sent; a linear code of length n transmits blocks containing n symbols.
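The minimum Hamming distance is easy to compute directly from its definition. As a small sketch of my own (the code and the example codebook are not from the report), for a tiny even-parity code:

```python
# Hamming distance: count of positions where two equal-length words
# differ. Minimum distance: smallest distance over all codeword pairs.
from itertools import combinations

def hamming(a: str, b: str) -> int:
    return sum(x != y for x, y in zip(a, b))

code = ["000", "011", "101", "110"]   # 2 data bits + 1 even-parity bit
d_min = min(hamming(a, b) for a, b in combinations(code, 2))
print(d_min)   # 2
```

A code with minimum distance d can detect up to d - 1 errors and correct up to (d - 1) // 2; with d_min = 2, this parity code detects one error but corrects none.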

Cyclic codes are special linear block codes with one extra property: if a codeword is cyclically shifted (rotated), the result is another codeword. A cyclic redundancy check (CRC) is a single-burst-error-detecting cyclic code and non-secure hash function designed to detect accidental changes to digital data in computer networks. It is characterized by a so-called generator polynomial, which is used as the divisor in a polynomial long division over a finite field, taking the input data as the dividend; the remainder becomes the result. Cyclic codes have favorable properties in that they are well suited for detecting burst errors. CRCs are particularly easy to implement in hardware, and are therefore commonly used in digital networks and storage devices such as hard disk drives. One advantage of cyclic codes is that the encoder and decoder can be implemented easily and cheaply in hardware with a handful of electronic devices; a hardware implementation also increases the speed of check-bit and syndrome calculation.
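In software, a CRC check is typically a one-liner. As a usage sketch of my own (the message string is invented for illustration), Python's standard-library CRC-32 shows the sender-appends, receiver-verifies pattern:

```python
# CRC in practice: the sender appends the CRC of the message; the
# receiver recomputes it and compares. Any mismatch signals corruption.
import zlib

message = b"Networks must transfer data accurately"
crc_sent = zlib.crc32(message)

# An undamaged message verifies:
assert zlib.crc32(message) == crc_sent

# A single flipped bit changes the remainder and is detected:
corrupted = bytes([message[0] ^ 0x01]) + message[1:]
detected = zlib.crc32(corrupted) != crc_sent
print(detected)   # True
```

CRC-32 here corresponds to one fixed, widely used generator polynomial; the polynomial-division machinery underneath is the modulo-2 arithmetic discussed earlier.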

