
Digital Image Processing Question Bank

The document is a question bank for the course ECE 308T on Digital Image Processing at Maharaja Agrasen Institute of Technology. It includes short and long answer questions divided into four units covering topics such as image processing steps, components of image systems, image enhancement techniques, filtering methods, image compression, and restoration. The questions aim to assess students' understanding of fundamental concepts and applications in digital image processing.


Maharaja Agrasen Institute of Technology

Question Bank - ECE 308T


Digital Image Processing
UNIT 1

Short Answer Type

1.​ List the steps involved in digital image processing.


2.​ Specify the basic components of the image processing system.
3.​ Illustrate the term ‘Image’.
4.​ Mention the applications of image processing.
5. Classify image sensors and give short notes on each.
6.​ What are the primary and secondary colours?
7. Write short notes on the terms: hue, saturation, grey level.
8.​ How can a digital image be represented?
9. Construct the electromagnetic spectrum.
10.​Identify the various multispectral bands and their applications.
11.​Differentiate brightness and contrast.
12.​Identify the difference between regions and boundaries.
13.​Compare Brightness and Contrast.
14.​What do you mean by convolution? Explain all the properties of convolution.
15.​Explain CMY model.
16. Write a note on the Mach band effect.
17. Differentiate between photopic and scotopic vision.

Long Answer Questions

1. Explain in detail the fundamental steps involved in digital image processing systems.
2.​ What are the components of a digital image processing system? Explain each in
detail.
3. Explain the image acquisition system in detail.

4. Compute the Euclidean distance (D1), city-block distance (D2) and chessboard distance (D3) for points p and q, where p = (5, 2) and q = (1, 5). Give answers in the form (D1, D2, D3).
5. Write the expression for the number of bits required to store a digital image. Find the number of bits required to store a 256 × 256 image with 32 gray levels.
6. Illustrate how an image is digitized by the sampling and quantization processes.
7. Describe the various image sensors in detail.
8.​ Evaluate the various colour models. Explain each of them in detail.
9. Define briefly the following terms: i) image restoration, ii) compression, iii) segmentation, iv) morphological processing.
10. Analyze the various parameters of image processing: i) band number, ii) spectrum, iii) wavelengths, iv) applications.
11.​Explain the colour image representation.
12.​Explain the process of Analog to Digital Conversion.
13.​Explain the Quantization process with suitable examples.
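For question 4 above, the three distance measures can be checked with a short Python sketch (the function name `distances` is illustrative, not part of the syllabus):

```python
import math

def distances(p, q):
    """Return (Euclidean, city-block, chessboard) distances between pixels p and q."""
    dx, dy = abs(p[0] - q[0]), abs(p[1] - q[1])
    d_e = math.hypot(dx, dy)  # Euclidean (D1): sqrt(dx^2 + dy^2)
    d_4 = dx + dy             # city-block (D2): |dx| + |dy|
    d_8 = max(dx, dy)         # chessboard (D3): max(|dx|, |dy|)
    return d_e, d_4, d_8

print(distances((5, 2), (1, 5)))  # → (5.0, 7, 4)
```

So for p = (5, 2) and q = (1, 5), the answer in the requested form is (5, 7, 4).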
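For question 5, the storage requirement follows b = M × N × k bits, where the number of gray levels is L = 2^k. A minimal sketch, assuming the gray-level count is a power of two (`storage_bits` is an illustrative name):

```python
import math

def storage_bits(rows, cols, gray_levels):
    """Bits to store an image: b = M * N * k, where 2**k = number of gray levels."""
    k = int(math.log2(gray_levels))  # bits per pixel
    return rows * cols * k

print(storage_bits(256, 256, 32))  # → 327680 (k = 5 bits per pixel)
```

So a 256 × 256 image with 32 gray levels needs 256 × 256 × 5 = 327,680 bits (40,960 bytes).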

UNIT 2

Short Answer Type

1.​ What is meant by image filtering?


2.​ Summarize histogram equalization.
3.​ Explain the two categories of image enhancement.
4.​ Write expressions for Gray, Log and Gamma transformations.
5.​ Specify the need for image enhancement.
6.​ What is the spatial domain method?
7. Identify the effect of the undersampling process.
8. Illustrate linear and nonlinear filters with examples.
9.​ Evaluate the 2D sampling theorem.
10.​Define frequency domain method.
11.​List various gray level transformation techniques.
12.​Identify the IHPF, BHPF, GHPF frequency domain transfer functions.
13.​Recall the term histogram specification.
14. Describe a histogram.
15. Identify the properties of the Fourier transform.

Long Answer Questions

1.​ Explain the histogram equalization method of image enhancement.


2.​ Explain histogram specification technique in detail with equations.
3. Explain the basics of the following with examples:
i) Spatial smoothing
ii) Spatial sharpening
4. Write a detailed note on:
i) Spatial domain enhancement
ii) Frequency domain enhancement
5. Show the various frequency-domain techniques to enhance an image, with necessary examples.
6. Distinguish spatial correlation and convolution. Explain each with suitable examples.
7. Illustrate the 2D Fourier transform and its pair. State and prove their properties.
8.​ Discuss the following spatial enhancement techniques
a) Spatial averaging
b) Median filtering
9.​ Compare the various image transformation techniques.
10.​Compare the various filters available under frequency domain for image
enhancement
11.​With examples explain in detail about spatial averaging.
12.​Describe in detail about various types of mean filters.
13.​Compare smoothing & sharpening in frequency domain
14.​Distinguish between spatial & frequency domain image enhancement
15. Analyze the performance of the following sharpening filter: the ideal high-pass filter (IHPF).
16.​What is a histogram? Explain the histogram processing with an example.
17.​Explain the Spatial Filtering with suitable examples.
18.​Explain Fourier transformation and its properties.
19.​Explain the colour models for image processing.
20.​Explain the Pseudo colouring process with suitable examples.
21.​Explain the frequency domain filtering with its types.
22. Explain homomorphic filtering with suitable examples.

UNIT 3

Short Answer Type

1.​ Define Weber ratio.


2.​ Mention the applications of image processing.
3. Classify image sensors and give short notes on each.
4. What are the primary and secondary colours?
5. Write short notes on the terms: hue, saturation, grey level.
6. Explain homomorphic filtering.
7. Explain the process of filtering and compare all filtering methods.
8.​ Explain the process of Analog to Digital Conversion.
9.​ Explain the Quantization process with suitable examples.
10.​What are the operations performed by error free compression?
11.​What is meant by Image Restoration?
12. What are the two properties of a linear operator?
13. Explain the additivity property of a linear operator.
a) How is a degradation process modeled?
b) Explain the homogeneity property of a linear operator.
14. Give the relation for the degradation model of a continuous function.
a) Define a circulant matrix.
b) What is the concept of the algebraic approach?
15. What are the two methods of the algebraic approach? Define gray-level interpolation.
16.​What is meant by Noise probability density function?
17.​Why is the restoration called unconstrained restoration?
18. Which method is most frequently used to overcome the difficulty of formulating the spatial relocation of pixels?
19.​What are the three methods of estimating the degradation function?
20.​What are the types of noise models?
21. Write the properties of Singular Value Decomposition (SVD).
22.​What is inverse filtering?
23.​What is a pseudo inverse filter?
24.​What is meant by the least mean square filter?

Long Answer Questions

1.​ What is image degradation and restoration? Explain them with example.
2.​ Explain the Noise model for image restoration.
3.​ Explain the inverse filtering with suitable examples.
4. Explain homomorphic filtering.
5. Explain the process of filtering and compare all filtering methods.
6.​ With suitable examples explain the noise effect in image processing.
7.​ Write a short note on a) Image restoration b) Image Degradation.
8.​ What are the two approaches for blind image restoration? Explain in detail.
9.​ What is image restoration? Explain the degradation model for continuous
function in detail.

UNIT 4

Short Answer Type

1.​ What is image compression?


2.​ What is Data Compression?
3. What are the two main types of data compression?
4.​ What are different Compression Methods?
5.​ Define coding redundancy?
6.​ What is run length coding?
7.​ Define compression ratio.
8.​ Define encoder
9.​ Define source encoder
10.​Define channel encoder
11.​What are the types of decoder?
12.​What are the operations performed by error free compression?
13.​Define Huffman coding
14.​What are three categories of constant area coding?
15. How does sub-image size selection affect transform coding error?
16.​What is segmentation?
17.​Write the applications of segmentation
18.​What are the three types of discontinuity in digital image?

19. How are the derivatives obtained in edge detection during formulation?


20.​Write about linking edge points.

Long Answer Questions

1. What is data redundancy? Explain the three basic data redundancies.


2.​What is image compression? Explain any four variable length coding
compression schemes.
3. Define image compression.
4.​Explain about Image compression model?
5.​Explain about Error free Compression?
6.​Explain about Lossy compression?
7.​Explain the schematics of image compression standard JPEG.
8.​Differentiate between lossless and lossy compression and explain the transform
coding system with a neat diagram.
9. Explain boundary descriptors in detail with a neat diagram.
10.​ Explain regional descriptors.
11.​ Explain the two techniques of region representation.
12.​ Explain the segmentation techniques that are based on finding the region
directly.

Common questions


Lossless compression reduces the size of an image file without any loss of information or image quality, allowing the original data to be perfectly reconstructed from the compressed data. Lossy compression reduces file size by permanently eliminating certain data, especially less significant details, which results in some loss of quality that cannot be recovered. This trade-off means lossless is preferred for applications requiring exact restoration, whereas lossy is often used where some loss can be tolerated in exchange for more significant size reductions.
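The "perfectly reconstructed" property can be demonstrated with run-length coding, one of the lossless schemes asked about in Unit 4 (a minimal sketch; `rle_encode` and `rle_decode` are illustrative names):

```python
def rle_encode(data):
    """Run-length encode a sequence into (value, run-length) pairs -- a simple lossless scheme."""
    runs = []
    for v in data:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1      # extend the current run
        else:
            runs.append([v, 1])   # start a new run
    return [(v, n) for v, n in runs]

def rle_decode(pairs):
    """Expand (value, run-length) pairs back into the original sequence."""
    return [v for v, n in pairs for _ in range(n)]

row = [0, 0, 0, 255, 255, 0]          # one row of a binary image
print(rle_encode(row))                 # → [(0, 3), (255, 2), (0, 1)]
print(rle_decode(rle_encode(row)) == row)  # → True: perfect reconstruction
```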

Image segmentation divides an image into meaningful structures or regions for further analysis. It is essential for tasks like object recognition or scene interpretation since it separates objects from the background, thus simplifying the analysis process. However, segmentation can be challenging due to variability in the image, such as noise, occlusion, or variation in illumination. The effectiveness of segmentation often depends on the method used and the specific characteristics of the image being processed, and incorrect segmentation can lead to erroneous analysis results.
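The simplest segmentation method is global thresholding, which splits pixels into foreground and background by intensity (a sketch on a toy 3×3 image; `threshold_segment` is an illustrative name):

```python
def threshold_segment(image, t):
    """Segment a grayscale image: label a pixel 1 (foreground) if brighter than t, else 0."""
    return [[1 if px > t else 0 for px in row] for row in image]

img = [[10,  20, 200],
       [15, 210, 220],
       [12,  18,  25]]
print(threshold_segment(img, 100))  # → [[0, 0, 1], [0, 1, 1], [0, 0, 0]]
```

The bright region in the upper-right is labeled as the object; a poor choice of t (or noisy pixels) would merge or split regions, which is the failure mode the paragraph describes.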

Image digitization involves converting an analog image into a digital format. This process comprises two main phases: sampling and quantization. Sampling refers to selecting a set of discrete points from the continuous image, which corresponds to the pixel grid. Quantization involves mapping the continuous set of pixel values to discrete levels. The implications of this process are significant, as it affects the resolution and quality of the digital image. Higher sampling rates result in more data and potentially better image quality, while a larger number of quantization levels improves the image's precision and detail representation.
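The quantization phase can be sketched as a uniform mapping of continuous intensities to discrete levels (an illustrative `quantize` helper, assuming intensities normalized to [0, 1]):

```python
def quantize(value, levels, vmax=1.0):
    """Map a continuous intensity in [0, vmax] to one of `levels` discrete gray levels."""
    step = vmax / levels
    return min(int(value / step), levels - 1)  # uniform quantizer; clamp the top edge

# 4-level (2-bit) quantization of a few continuous samples
print([quantize(v, 4) for v in (0.0, 0.3, 0.6, 0.99)])  # → [0, 1, 2, 3]
```

With more levels the steps shrink, so the discrete values track the continuous signal more closely, which is why quantization depth controls precision.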

The image compression ratio represents the proportion by which an image's file size is reduced. While a higher compression ratio indicates more storage efficiency, it often leads to reduced image quality, particularly in lossy compression. This relationship is complex as it depends on the balance between storage savings and the acceptable level of quality degradation. Ideally, efficient compression achieves a high ratio with minimal perceptible quality loss, which is critical for applications with limited storage or bandwidth capacity.
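The ratio is usually written C_R = n1/n2, with relative data redundancy R_D = 1 − 1/C_R. A small sketch (illustrative function name):

```python
def compression_ratio(original_bits, compressed_bits):
    """Return (C_R, R_D): compression ratio n1/n2 and relative redundancy 1 - 1/C_R."""
    cr = original_bits / compressed_bits
    rd = 1 - 1 / cr
    return cr, rd

cr, rd = compression_ratio(327680, 81920)
print(cr, rd)  # → 4.0 0.75, i.e. 75% of the original data is redundant
```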

The fundamental steps in digital image processing include image acquisition, preprocessing, segmentation, representation and description, recognition and interpretation, and finally, knowledge base development. Image acquisition entails capturing the image which is then preprocessed to enhance its quality. Segmentation divides the image into its constituent parts. Representation and description focus on converting the segmented image into a form that is suitable for processing. Recognition involves assigning labels to objects in the image, and interpretation is about making sense of these objects within the context of specific applications.

The Fourier transform plays a crucial role in image processing by allowing a spatial domain image to be transformed into its frequency domain representation. This transformation is beneficial for filtering and noise reduction as it helps in identifying different frequency components within an image. Key properties of the Fourier transform include linearity, symmetry, and periodicity, which simplify the mathematical manipulation of signals. Additionally, the inverse Fourier transform allows conversion back to the spatial domain for practical applications.
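The forward/inverse pair can be sketched with a direct 1-D DFT, the building block of the separable 2-D transform (a minimal, unoptimized sketch; real systems use the FFT):

```python
import cmath

def dft(x):
    """Direct 1-D discrete Fourier transform (the 2-D DFT applies this to rows, then columns)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse 1-D DFT: recovers the spatial samples from the frequency components."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

x = [1.0, 2.0, 3.0, 4.0]
x_back = idft(dft(x))                          # round trip: spatial -> frequency -> spatial
print([round(v.real, 6) for v in x_back])      # → [1.0, 2.0, 3.0, 4.0]
```

The round trip demonstrates the invertibility the paragraph mentions; frequency-domain filtering works by modifying the `dft` output before applying `idft`.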

Pseudo coloring in image processing involves assigning false colors to grayscale images to highlight features or patterns. This is particularly useful for visualizations where color differentiation can enhance the understanding of spatial relationships and distinctions in the data. Common applications include medical imaging, where pseudo coloring helps in identifying tissues or abnormalities, and remote sensing, where it aids in distinguishing various landforms or vegetation types.
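A minimal form of pseudo coloring is intensity slicing, which maps ranges of gray levels to fixed colors (the `pseudo_color` helper and the particular color choices are illustrative):

```python
def pseudo_color(gray):
    """Map a gray level in 0..255 to an (R, G, B) triple by intensity slicing."""
    if gray < 85:
        return (0, 0, 255)   # dark range  -> blue
    elif gray < 170:
        return (0, 255, 0)   # mid range   -> green
    return (255, 0, 0)       # bright range -> red

print(pseudo_color(40), pseudo_color(120), pseudo_color(230))
# → (0, 0, 255) (0, 255, 0) (255, 0, 0)
```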

A digital image can be represented as a two-dimensional matrix consisting of pixels, each with an intensity level. These intensity levels represent different attributes such as color or brightness. The key components that make up this representation include the array of pixels, usually stored in a specific format with dimensions and color depth, which collectively determine the image's appearance and quality.
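This matrix view can be written down directly (a toy 3×4, 8-bit grayscale image; the pixel values are illustrative):

```python
# A 3x4 8-bit grayscale image as a 2-D matrix of intensities in 0..255
image = [[  0,  64, 128, 255],
         [ 32,  96, 160, 224],
         [ 16,  80, 144, 208]]

rows, cols = len(image), len(image[0])
bit_depth = 8                      # 2**8 = 256 possible gray levels
print(rows, cols, image[1][2])     # → 3 4 160: (row, column) coordinates index a pixel
```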

Brightness refers to the overall lightness or darkness of an image and is dependent on the intensity of light emitted or reflected by the image. Contrast, on the other hand, measures the difference in luminance or color that makes an object in an image distinguishable. In image processing, brightness adjustments can enhance an image's visibility while contrast adjustments can provide clarity and emphasize different features within the image by making edges and textures more apparent.
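Both adjustments can be sketched as a point operation applied to each pixel (an illustrative `adjust` helper; here contrast is scaled about mid-gray 128, one common convention):

```python
def adjust(pixel, brightness=0, contrast=1.0):
    """Brightness shifts all intensities; contrast scales them about mid-gray (128)."""
    v = contrast * (pixel - 128) + 128 + brightness
    return max(0, min(255, round(v)))  # clamp to the 8-bit range

row = [50, 128, 200]
print([adjust(p, brightness=30) for p in row])  # → [80, 158, 230]  (uniformly lighter)
print([adjust(p, contrast=1.5) for p in row])   # → [11, 128, 236]  (values spread apart)
```

Brightness shifts every value by the same amount, while contrast stretches dark values darker and bright values brighter, which is the distinction drawn above.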

Histogram equalization is a method used in image enhancement to improve the contrast of an image. It works by effectively spreading out the most frequent intensity values, thereby flattening and broadening the dynamic range of the histogram. This technique is useful as it enhances the visibility within an image, making details more distinguishable, especially in conditions where the image's usable data is represented by very close contrast values.
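The standard mapping s_k = (L − 1) · cdf(k) can be sketched on a tiny 8-level strip (`equalize` is an illustrative name):

```python
from collections import Counter

def equalize(pixels, levels=8):
    """Histogram equalization: map each gray level k through the scaled CDF, s_k = (L-1)*cdf(k)."""
    n = len(pixels)
    hist = Counter(pixels)
    total, mapping = 0, {}
    for level in range(levels):
        total += hist.get(level, 0)            # running histogram sum = CDF numerator
        mapping[level] = round((levels - 1) * total / n)
    return [mapping[p] for p in pixels]

# a low-contrast strip clustered at levels 2-4 spreads across the 0-7 range
print(equalize([2, 2, 3, 3, 3, 4, 4, 2]))  # → [3, 3, 5, 5, 5, 7, 7, 3]
```

The clustered input occupied levels 2–4; after equalization the same pixels span 3–7, widening the dynamic range exactly as described above.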
