Understanding Bits and Data Representation
1. What is meant by the term "bit"? What is its role in data representation?

A bit, short for binary digit, is the smallest unit of data in computing and
can hold one of two values: 0 or 1. Bits are fundamental to digital systems
because all data—whether numbers, text, images, or sound—is ultimately
represented using combinations of bits. In data representation, bits are
grouped into larger units such as bytes (8 bits), kilobytes, and megabytes
to store more complex information. For example, a single character in
ASCII is typically stored using one byte. The binary nature of bits aligns
with the on/off states of electronic circuits, making them ideal for reliable
and efficient data processing.
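The grouping of bits into bytes can be seen directly in code. This is a minimal sketch (Python chosen for illustration) showing the eight bits behind one ASCII character:

```python
# A character is typically stored as one byte (8 bits).
ch = "A"
code = ord(ch)               # ASCII code point of 'A' is 65
bits = format(code, "08b")   # the same value as eight binary digits
print(ch, code, bits)        # A 65 01000001
```

Each of those eight positions is a single bit, holding either 0 or 1.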

2. Define binary system and explain why digital computers use it rather than the denary (decimal) system.

The binary system is a base-2 numeral system that uses only two digits: 0
and 1. Digital computers use the binary system because their internal
hardware operates using electrical signals that have two distinct states—
on and off—which correspond directly to binary digits. This makes binary
representation both efficient and reliable for processing and storing data.
In contrast, the denary system, which uses ten digits (0–9), does not align
with the binary nature of electronic circuits, making it unsuitable for direct
implementation in digital systems. Binary simplifies circuit design and
reduces the risk of signal degradation, especially over long distances.
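The base-2 positional idea can be demonstrated with a short conversion, shown here as an illustrative Python sketch:

```python
# Convert denary to binary and back; each bit is a power of two.
n = 13
binary = bin(n)[2:]       # '1101'
back = int(binary, 2)     # 13
# 1*8 + 1*4 + 0*2 + 1*1 = 13
print(binary, back)
```

The weights 8, 4, 2, 1 are the binary counterparts of the denary place values 1000, 100, 10, 1.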

3. What is ASCII? What is the difference between ASCII and Unicode?

ASCII (American Standard Code for Information Interchange) is a character encoding standard that uses 7 or 8 bits to represent up to 128 or 256 characters, including English letters, digits, and basic punctuation. Unicode is a more comprehensive system that represents over 143,000 characters from multiple languages and symbol sets worldwide; its encodings (such as UTF-8, UTF-16 and UTF-32) use up to 32 bits per character. The main difference is that ASCII is limited to the English language and basic symbols, while Unicode supports a vast range of characters, making it suitable for global communication and multilingual computing environments. Unicode also includes emojis and special characters, which are not supported in ASCII.
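The boundary between the two standards is easy to check in code, since ASCII covers only code points 0 to 127. A small illustrative Python sketch:

```python
# Only code points below 128 fit in ASCII; Unicode goes far beyond.
for ch in ["A", "é", "€"]:
    cp = ord(ch)                 # the Unicode code point
    print(ch, cp, cp < 128)      # only 'A' is within ASCII's range
```

Here 'A' (65) is ASCII, while 'é' (233) and '€' (8364) exist only in Unicode.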
4. Explain what is meant by most significant bit (MSB) and least
significant bit (LSB).

In a binary number, the most significant bit (MSB) is the bit with the
highest positional value, located furthest to the left. It determines the
largest portion of the number’s value. Conversely, the least significant bit
(LSB) is the bit with the lowest positional value, located furthest to the
right, and contributes the smallest portion to the overall value. For
example, in the binary number 1001, the MSB is 1 (value of 8) and the
LSB is 1 (value of 1). Understanding MSB and LSB is essential in data
manipulation, binary arithmetic, and error detection techniques such as
parity checking.
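The MSB and LSB of the example number 1001 can be picked out with bit operations, sketched here in Python:

```python
# Extract the MSB and LSB of a 4-bit number using shifts and masks.
n = 0b1001           # denary 9
lsb = n & 1          # rightmost bit (value of 1)
msb = (n >> 3) & 1   # leftmost bit of a 4-bit value (value of 8)
print(msb, lsb)      # 1 1
```

The same masking pattern underlies parity checks and other low-level bit manipulation.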

5. Why do computers need character encoding standards? Give two examples.

Character encoding standards are necessary to ensure that text data is stored and interpreted consistently across different systems and platforms. Without a standard, the same binary value could represent different characters on different machines, leading to data corruption or misinterpretation. Two common encoding standards are ASCII and Unicode. ASCII is used for basic English text, while Unicode supports a wide range of characters from various languages, enabling global communication and compatibility in modern computing systems. These standards also allow for efficient data exchange between devices and software applications.
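The misinterpretation problem can be reproduced directly: the same bytes decoded under two different encodings give two different strings. An illustrative Python sketch:

```python
# The same byte sequence means different text under different encodings.
data = "café".encode("utf-8")     # b'caf\xc3\xa9'
right = data.decode("utf-8")      # 'café'  - sender and receiver agree
wrong = data.decode("latin-1")    # 'cafÃ©' - receiver guessed wrongly
print(right, wrong)
```

This kind of mismatch is exactly what a shared encoding standard prevents.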

6. Compare the advantages and disadvantages of using Unicode over ASCII.

Unicode offers the advantage of supporting a vast array of characters from multiple languages, symbols, and scripts, making it ideal for international applications and multilingual software. It is also backward-compatible with ASCII, allowing seamless integration with legacy systems. Unicode supports emojis and special symbols, which are not available in ASCII. However, Unicode can require more storage space per character (up to 32 bits, compared to ASCII's 7 or 8 bits), which can increase file sizes and memory usage. Despite this, the benefits of global compatibility and linguistic diversity outweigh the drawbacks in most modern computing environments.
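The storage cost is visible when text is encoded as UTF-8, which uses one byte for ASCII characters but more for others. An illustrative Python sketch:

```python
# UTF-8 stores ASCII in 1 byte; other Unicode characters need 2-4 bytes.
for ch in ["A", "é", "€", "😀"]:
    print(ch, len(ch.encode("utf-8")), "bytes")   # 1, 2, 3, 4 bytes
```

This also shows the backward compatibility: plain English text costs the same as it would in ASCII.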
7. Explain what is meant by sampling rate and sampling
resolution. How do they affect sound quality and file size?

Sampling rate refers to the number of samples taken per second when
converting analogue sound into digital format, measured in Hertz (Hz).
Sampling resolution, or bit depth, indicates the number of bits used to
store each sample. Higher sampling rates capture more detail from the
original sound, improving audio quality but increasing file size. Similarly,
greater bit depth allows for more precise representation of sound
amplitude, enhancing fidelity but also requiring more storage. Therefore,
both parameters directly influence the trade-off between sound quality
and file size. For example, CD-quality audio typically uses a 44,100 Hz
sampling rate and 16-bit resolution.
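The trade-off between quality and file size follows directly from multiplying the parameters. A minimal Python sketch for uncompressed CD-quality stereo audio (illustrative values from the paragraph above):

```python
# Estimate the size of one minute of uncompressed CD-quality audio.
sampling_rate = 44_100   # samples per second (Hz)
bit_depth = 16           # bits per sample
channels = 2             # stereo
seconds = 60

total_bits = sampling_rate * bit_depth * channels * seconds
total_bytes = total_bits // 8
print(total_bytes)       # 10,584,000 bytes, roughly 10 MB per minute
```

Halving either the sampling rate or the bit depth halves the file size, at a cost in fidelity.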

8. Why is analogue sound sampled for digital storage? What limitations are there from sampling?

Analogue sound is sampled to convert continuous audio signals into discrete digital data that can be stored, processed, and transmitted by computers. Sampling involves measuring the amplitude of the sound wave at regular intervals and encoding these values in binary. However, this process introduces limitations: if the sampling rate or resolution is too low, important details may be lost, resulting in poor sound quality. Additionally, sampling cannot perfectly replicate the original analogue signal, leading to a degree of approximation and potential distortion. The Nyquist theorem states that the sampling rate must be at least twice the highest frequency to avoid aliasing.
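The aliasing limitation can be sketched numerically: a pure tone above half the sampling rate shows up at a lower, incorrect frequency. This is an illustrative Python sketch (the helper `alias_frequency` is a hypothetical name, not from the source):

```python
def alias_frequency(f, fs):
    """Apparent frequency after sampling a pure f Hz tone at fs Hz."""
    f_mod = f % fs
    return min(f_mod, fs - f_mod)

# 3000 Hz tone sampled too slowly vs fast enough (Nyquist: fs >= 2*f).
print(alias_frequency(3000, 4000))   # 1000 - undersampled, aliased
print(alias_frequency(3000, 8000))   # 3000 - captured correctly
```

At 4,000 Hz the 3,000 Hz tone is indistinguishable from a 1,000 Hz one, which is exactly the distortion the Nyquist condition avoids.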

9. Describe how digital images are represented in a computer (mention pixels, colour depth etc.).

Digital images are represented as a grid of pixels, where each pixel corresponds to a tiny square of colour. The resolution of an image refers to the number of pixels in its width and height, which determines the level of detail and sharpness. Each pixel's colour is stored as a binary value, and the number of bits used per pixel is known as the colour depth or bit depth. Most images use the RGB colour model, where each pixel's colour is created by combining values for red, green, and blue channels. In a 24-bit image, 8 bits are used for each channel, allowing over 16 million possible colours. Higher resolution and colour depth improve image quality but also increase file size, which is calculated by multiplying the total number of pixels by the bits per pixel.
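The file-size calculation described above can be sketched for an uncompressed 24-bit image (illustrative dimensions):

```python
# Uncompressed image size = total pixels * bits per pixel.
width, height = 1920, 1080
bits_per_pixel = 24                 # 8 bits each for red, green, blue
size_bytes = width * height * bits_per_pixel // 8
print(size_bytes)                   # 6,220,800 bytes, about 5.9 MB
```

Real image files are usually smaller than this because formats such as PNG and JPEG apply compression.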
10. What is colour depth / bit depth in images? How does
increasing colour depth affect quality and storage?

Colour depth, also known as bit depth, refers to the number of bits used to represent the colour of a single pixel in a digital image. A higher bit depth allows for more distinct colours, resulting in smoother gradients and more realistic images. For instance, an 8-bit image can display 256 colours, while a 24-bit image can show over 16 million. Increasing colour depth enhances visual quality but also increases the amount of data required to store the image, leading to larger file sizes and greater memory usage. This trade-off must be considered when balancing image quality and storage efficiency.
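The figures quoted above follow from the rule that n bits give 2^n distinct colours, as this short Python sketch shows:

```python
# Each extra bit of colour depth doubles the number of distinct colours.
for depth in [1, 8, 16, 24]:
    print(depth, "bits ->", 2 ** depth, "colours")
# 1 -> 2, 8 -> 256, 16 -> 65536, 24 -> 16777216
```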

11. What is the difference between lossless and lossy compression? Give examples of when each would be used.

Lossless compression reduces the size of a file without losing any data.
The original file can be perfectly reconstructed from the compressed
version. This method is typically used for text documents, spreadsheets,
and software files where accuracy is essential. Common formats that use
lossless compression include ZIP, PNG, and FLAC. Lossy compression, in
contrast, permanently removes some data to achieve higher compression
ratios. The original file cannot be fully restored, but the loss is often
imperceptible to human senses. This method is suitable for multimedia
files such as JPEG images, MP3 audio, and MP4 video, where some
reduction in quality is acceptable.
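The defining property of lossless compression, perfect reconstruction, can be demonstrated with Python's standard `zlib` module (the same DEFLATE algorithm family used by ZIP and PNG):

```python
import zlib

# Lossless compression: the original data is recovered exactly.
original = b"AAAA BBBB AAAA BBBB " * 100    # highly repetitive data
compressed = zlib.compress(original)
restored = zlib.decompress(compressed)

print(len(original), len(compressed))       # compressed is far smaller
print(restored == original)                 # True - nothing was lost
```

Lossy codecs such as JPEG or MP3 have no equivalent of that final equality check: decompression yields an approximation, not the original.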

12. Explain why compression is needed for images/sound/data. What are the trade-offs?

Compression is needed to reduce the size of files for efficient storage, faster transmission, and lower bandwidth usage. This is especially important for multimedia content such as images and sound, which can be large and resource-intensive. The trade-off lies in the balance between file size and quality. Lossless compression maintains quality but offers limited size reduction, while lossy compression achieves greater savings at the cost of fidelity. Choosing the appropriate method depends on the context and the importance of preserving original data. Compression also helps reduce load times and improves performance in web and mobile applications.
13. What are the units of data storage (byte, kilobyte, megabyte
etc.)? Why is it important to understand these in data
representation?

Data storage units include bytes (B), kilobytes (KB), megabytes (MB),
gigabytes (GB), and terabytes (TB), each representing increasing powers
of two. For example, 1 KB = 1024 bytes, 1 MB = 1024 KB, and so on.
Understanding these units is essential in data representation to estimate
file sizes, manage storage capacity, and optimize system performance. It
also helps users make informed decisions when transferring, compressing,
or archiving data, ensuring compatibility and efficiency across different
platforms and devices. Additionally, reported sizes may differ between operating systems and manufacturers because of differing conventions (powers of 10 vs powers of 2).
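The powers-of-10 versus powers-of-2 discrepancy explains why a "500 GB" drive appears smaller in the operating system. An illustrative Python sketch:

```python
# Manufacturers count in powers of 10; many operating systems in powers of 2.
decimal_bytes = 500 * 10**9          # "500 GB" as advertised
binary_gib = decimal_bytes / 2**30   # same bytes reported in GiB (1024^3)
print(round(binary_gib, 1))          # 465.7 - the size the OS shows
```

The bytes are identical; only the unit convention differs.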

Describe how a sound is stored in a computer from the point of recording to digital representation. Include the terms sampling rate and sampling resolution. [5 marks]

When sound is recorded, it begins as an analogue signal. To store it digitally, the signal is sampled at regular intervals. The sampling rate refers to how many samples are taken per second, measured in Hertz (Hz). Each sample records the amplitude of the sound wave.

The sampling resolution (or bit depth) is the number of bits used to
store each sample, which determines how accurately the amplitude is
recorded. Higher sampling rates and resolutions result in better sound
quality but increase file size.

Each sample is converted into a binary value and stored sequentially, forming a digital audio file that can be processed and played back by the computer.
