B.Tech (SEM VI) Theory Examination 2021-22
Image Processing (KCS062)
Section-wise Detailed Answers
SECTION A
(Attempt all questions – descriptive explanations)
Q1(a) Photopic and Scotopic Vision
Human vision operates in two modes, photopic and scotopic, depending on lighting conditions. Photopic vision occurs under bright light and is mediated by the cone cells of the retina. It provides color perception, high spatial resolution, and sharp visual detail: the three types of cone cells respond to different bands of wavelengths, allowing humans to distinguish colors effectively.
Scotopic vision, on the other hand, operates under low light or night conditions and is mediated by rod cells. Rod cells are more sensitive to light intensity but cannot perceive color. As a result, vision under scotopic conditions appears in shades of gray with reduced sharpness. In image processing, understanding these two vision modes helps in designing display systems and enhancement techniques that match human visual perception.
Q1(b) Gamma Correction in Image Processing
Gamma correction is a nonlinear intensity transformation used to compensate for the nonlinear response of display devices and human vision. Most display devices do not respond linearly to input voltage, and without correction, images may appear either too dark or too bright. Gamma correction adjusts pixel intensity values using a power-law relationship so that the displayed image appears visually correct. It is widely used in image enhancement, television systems, and digital cameras to preserve brightness consistency.
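The power-law mapping s = c·r^γ behind gamma correction can be sketched in a few lines of Python; the function name, the 8-bit range, and c = 1 are illustrative assumptions, not part of any standard API.

```python
import math

def gamma_correct(pixel, gamma, c=1.0, max_level=255):
    """Apply the power-law transform s = c * r^gamma to one intensity."""
    r = pixel / max_level                 # normalize to [0, 1]
    s = c * math.pow(r, gamma)            # power-law mapping
    return round(s * max_level)           # rescale back to [0, max_level]

# gamma < 1 brightens dark pixels; gamma > 1 darkens them
dark = gamma_correct(64, 0.5)   # maps mid-dark 64 up toward mid-gray
```

Note that the endpoints 0 and max_level are fixed points of the transform for any gamma, which is why gamma correction changes tonal distribution without clipping the dynamic range.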
Q1(c) Need of Fourier Transform
The Fourier transform is required in image processing to analyze images in the frequency domain. Many image properties such as edges, textures, and noise are more easily analyzed and manipulated in the frequency domain than in the spatial domain. Fourier transform helps in filtering, image restoration, compression, and enhancement by separating low-frequency components, which represent smooth regions, from high-frequency components, which represent edges and fine details.
Q1(d) Relevance of DFT in Image Processing
The Discrete Fourier Transform (DFT) is highly relevant in image processing because digital images are discrete in nature. DFT converts a digital image from the spatial domain into its frequency representation, enabling practical implementation of frequency-domain techniques. It allows efficient filtering, periodic noise removal, image compression, and spectral analysis. Fast Fourier Transform (FFT), an efficient algorithm for computing DFT, makes frequency-based image processing computationally feasible.
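As a concrete sketch, a naive 1-D DFT (the same sum a 2-D image transform applies along rows and then columns) can be written directly from its definition. This O(N²) version is for illustration only; in practice an FFT routine would be used.

```python
import cmath

def dft(signal):
    """Naive 1-D DFT: X[k] = sum_n x[n] * exp(-2*pi*j*k*n/N)."""
    N = len(signal)
    return [sum(x * cmath.exp(-2j * cmath.pi * k * n / N)
                for n, x in enumerate(signal))
            for k in range(N)]

# A constant signal concentrates all energy in the zero-frequency (DC) term
spectrum = dft([1, 1, 1, 1])
```

The DC coefficient equals the sum of the samples, and all other coefficients vanish (up to floating-point error), matching the intuition that smooth regions live at low frequencies.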
Q1(e) Motion Blur in Image Restoration
Motion blur occurs when there is relative movement between the camera and the object during image acquisition. This results in image degradation where edges appear smeared in the direction of motion. In image restoration, motion blur is modeled mathematically using a degradation function that depends on motion velocity and direction. Restoration techniques aim to reverse this degradation using inverse filtering or Wiener filtering, provided the blur parameters are known or estimated.
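Uniform linear motion is commonly modeled as convolution with a flat averaging point-spread function (PSF) along the motion direction. The sketch below blurs a single scanline with a hypothetical length-3 horizontal PSF; zero padding at the border is an assumption made for simplicity.

```python
def motion_blur_1d(row, length):
    """Blur a 1-D scanline by averaging `length` samples along the motion
    direction (a uniform PSF); out-of-range samples are taken as zero."""
    n = len(row)
    out = []
    for i in range(n):
        window = [row[i + j] if i + j < n else 0 for j in range(length)]
        out.append(sum(window) / length)
    return out

blurred = motion_blur_1d([0, 0, 9, 0, 0], 3)  # a point smears over 3 pixels
```

A single bright pixel spreads its energy evenly over the PSF length, which is exactly the smearing that inverse or Wiener filtering attempts to undo.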
Q1(f) Band Pass and Band Reject Filters
A band pass filter allows only a specific range of frequencies to pass through while attenuating frequencies outside this range. It is useful for extracting particular features such as textures or patterns from an image. A band reject filter, also known as a band stop filter, removes a specific frequency band while preserving others. Band reject filters are commonly used to remove periodic noise from images.
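An ideal band-reject transfer function can be sketched as a 0/1 mask over the centered spectrum; the mask would then be multiplied element-wise with the shifted DFT of the image. The spectrum size, center frequency, and band width below are illustrative values.

```python
def ideal_band_reject(size, center_freq, width):
    """Build an ideal band-reject mask for a size x size centered spectrum:
    0 inside the annulus at distance center_freq +/- width/2 from the
    spectrum center, 1 everywhere else."""
    c = size / 2
    mask = []
    for u in range(size):
        row = []
        for v in range(size):
            d = ((u - c) ** 2 + (v - c) ** 2) ** 0.5
            inside = center_freq - width / 2 <= d <= center_freq + width / 2
            row.append(0 if inside else 1)
        mask.append(row)
    return mask

mask = ideal_band_reject(8, 2.0, 1.0)  # zeros form a ring of radius ~2
```

Replacing `0 if inside else 1` with its complement turns the same construction into an ideal band-pass mask.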
Q1(g) Watershed Segmentation
Watershed segmentation is a region-based image segmentation technique inspired by topographic interpretation: the image is treated as a surface in which pixel intensities represent elevation. Flooding this surface from its regional minima forms catchment basins, and the watershed lines where basins meet become the region boundaries. The method is effective at separating touching or overlapping objects but may lead to over-segmentation unless controlled, for example with markers.
Q1(h) Dilation and Erosion in Morphological Image Processing
Dilation and erosion are fundamental morphological operations used to process binary and grayscale images. Dilation expands the boundaries of foreground objects, filling small holes and connecting nearby regions. Erosion shrinks object boundaries, removing small noise and separating connected components. These operations are widely used in shape analysis, noise removal, and object extraction.
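For a binary image stored as nested lists, 3x3 dilation and erosion reduce to any/all tests over each pixel's neighborhood. The 3x3 square structuring element and zero padding are assumptions made for this sketch.

```python
def _get(img, i, j):
    """Read pixel (i, j), treating out-of-range positions as background 0."""
    return img[i][j] if 0 <= i < len(img) and 0 <= j < len(img[0]) else 0

def dilate(img):
    """Binary dilation: a pixel becomes 1 if ANY 3x3 neighbor is 1."""
    return [[int(any(_get(img, i + di, j + dj)
                     for di in (-1, 0, 1) for dj in (-1, 0, 1)))
             for j in range(len(img[0]))] for i in range(len(img))]

def erode(img):
    """Binary erosion: a pixel survives only if ALL 3x3 neighbors are 1."""
    return [[int(all(_get(img, i + di, j + dj)
                     for di in (-1, 0, 1) for dj in (-1, 0, 1)))
             for j in range(len(img[0]))] for i in range(len(img))]

dot = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
grown = dilate(dot)   # the single pixel expands to fill its neighborhood
shrunk = erode(dot)   # the single pixel is too small to survive erosion
```

Composing them gives opening (erode then dilate) and closing (dilate then erode), the usual tools for noise removal and hole filling.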
Q1(i) Huffman Encoding and Shift Codes
Huffman encoding is a lossless data compression technique that assigns shorter codewords to symbols with higher probabilities and longer codewords to less frequent symbols, minimizing the average code length. Shift codes are variable-length codes in which the symbol set is divided into equal-sized blocks; a dedicated shift codeword signals a move to another block, so rare symbols are encoded as one or more shift codewords followed by a within-block codeword. Huffman coding is widely used in image compression standards due to its efficiency and simplicity.
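A minimal Huffman table builder using Python's heapq can illustrate the idea; the insertion-order counter is a tie-breaker that keeps heap comparisons well defined, and the symbol probabilities are illustrative.

```python
import heapq

def huffman_codes(freqs):
    """Build a Huffman code table from {symbol: probability}.
    More frequent symbols receive shorter binary codewords."""
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)   # two least probable subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

codes = huffman_codes({"a": 0.5, "b": 0.25, "c": 0.25})
```

Here "a", with probability 0.5, gets a 1-bit codeword while "b" and "c" get 2 bits each, so the average length equals the source entropy for this dyadic distribution.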
Q1(j) Regional Descriptors
Regional descriptors describe properties of image regions rather than boundaries. These include area, perimeter, centroid, orientation, and moments. Regional descriptors are useful for object recognition, classification, and shape analysis because they provide quantitative information about object size, shape, and spatial distribution.
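Area and centroid, two of the simplest regional descriptors, can be computed directly from a binary mask; the function name and dictionary layout are illustrative.

```python
def region_descriptors(mask):
    """Compute area (pixel count) and centroid of a binary region (1 = object)."""
    coords = [(i, j) for i, row in enumerate(mask)
                     for j, v in enumerate(row) if v]
    area = len(coords)
    cy = sum(i for i, _ in coords) / area   # mean row index
    cx = sum(j for _, j in coords) / area   # mean column index
    return {"area": area, "centroid": (cy, cx)}

d = region_descriptors([[0, 1, 1],
                        [0, 1, 1],
                        [0, 0, 0]])
```

Higher-order moments, orientation, and perimeter follow the same pattern of summing over the region's pixel coordinates.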
SECTION B
(Attempt any three – detailed explanations)
Q2(a) Sampling, Quantization, and Aliasing
Sampling converts a continuous image into a discrete grid of pixels, while quantization assigns discrete intensity values to sampled pixels. Sampling affects spatial resolution, whereas quantization affects gray-level resolution. Aliasing occurs when sampling frequency is insufficient to capture image details, causing high-frequency components to appear as low-frequency artifacts. Proper sampling and anti-aliasing filters are essential to prevent distortion.
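Uniform quantization maps each sample into one of a fixed number of bins. The sketch below quantizes an 8-bit intensity to a smaller number of gray levels, reconstructing each sample as its bin midpoint (truncating to the bin's lower edge is an equally common convention).

```python
def quantize(value, levels, max_val=255):
    """Uniformly quantize an intensity in [0, max_val] to `levels` gray
    levels, returning the reconstructed (bin-midpoint) intensity."""
    step = (max_val + 1) / levels
    index = min(int(value // step), levels - 1)   # which bin the sample hits
    return round(index * step + step / 2)          # midpoint of that bin

# 2-bit quantization (4 levels) of 8-bit samples
q = quantize(100, 4)   # all values 64..127 collapse to one output level
```

Reducing `levels` makes the quantization step coarser, which is exactly the loss of gray-level resolution (and eventual false contouring) described above.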
Q2(b) Gray-Level Transformations on Given Image
Given an image with gray levels ranging from 0 to 7, intensity transformations such as inversion, square root, square, and logarithmic functions modify pixel values differently. Inversion reverses brightness, making dark regions bright and vice versa. Square root transformation enhances darker regions, while square transformation emphasizes brighter regions. Logarithmic transformation compresses high-intensity values and expands low-intensity values, improving visibility in dark areas. These transformations significantly alter image contrast and visual appearance.
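For an 8-level image (gray levels 0 to 7) the four transformations can be tabulated directly; the scaling constants that keep outputs inside [0, 7] are a common convention, not the only choice.

```python
import math

L = 8  # gray levels 0..7

def negative(r):   return (L - 1) - r                                # inversion
def sqrt_tf(r):    return round((L - 1) * math.sqrt(r / (L - 1)))    # expands dark
def square_tf(r):  return round((L - 1) * (r / (L - 1)) ** 2)        # expands bright
def log_tf(r):     return round((L - 1) * math.log(1 + r) / math.log(L))

negatives = [negative(r) for r in range(L)]   # brightness reversed end to end
```

Comparing the four columns for r = 0..7 shows the square-root and log curves lifting low intensities while the square curve suppresses them, as described above.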
Q2(c) Homomorphic Filter and Higher-Order Derivatives
A homomorphic filter separates illumination and reflectance components of an image by operating in the frequency domain. It enhances contrast by suppressing low-frequency illumination variations and amplifying high-frequency reflectance details. Higher-order derivative filters are generally avoided because they amplify noise and lead to instability in image enhancement.
Q2(d) Threshold-Based Image Segmentation
Thresholding segments an image by classifying pixels based on intensity values. Global thresholding uses a single threshold for the entire image, while adaptive thresholding adjusts threshold values based on local image characteristics. Threshold-based segmentation is simple and effective for images with well-separated object and background intensities.
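Global thresholding is a single comparison per pixel; the threshold value and the tiny test image below are illustrative.

```python
def global_threshold(img, t):
    """Classify each pixel: 1 if intensity > t (object), else 0 (background)."""
    return [[1 if p > t else 0 for p in row] for row in img]

seg = global_threshold([[10, 200],
                        [50, 180]], 128)   # bright pixels become object
```

Adaptive thresholding follows the same comparison but recomputes t from statistics of a local window around each pixel instead of using one global value.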
Q2(e) Need of Data Compression and Run Length Encoding
Data compression is required in image processing to reduce storage requirements and transmission bandwidth. Run Length Encoding (RLE) is a simple lossless compression technique that replaces consecutive identical pixel values with a single value and count. RLE is particularly effective for images with large uniform regions, such as binary images and scanned documents.
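RLE in its simplest (value, count) form, together with the round trip back to pixels, can be sketched as follows; the list-of-tuples encoding format is an illustrative choice.

```python
def rle_encode(pixels):
    """Replace each run of identical values with a (value, count) pair."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([p, 1])       # start a new run
    return [tuple(r) for r in runs]

def rle_decode(runs):
    """Expand (value, count) pairs back into the original pixel sequence."""
    return [v for v, n in runs for _ in range(n)]

encoded = rle_encode([0, 0, 0, 0, 1, 1, 0])
```

Seven pixels compress to three pairs here; on a binary scanned page with long uniform runs the savings are far larger, while on noisy natural images RLE can even expand the data.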
SECTION C
(Attempt any one part – long descriptive answers)
Q3(a) Histogram Equalization and Specification
Histogram equalization enhances image contrast by redistributing gray-level values so that the output histogram becomes approximately uniform. From the frequency distribution of gray levels, the cumulative distribution function is computed and scaled to obtain the new intensity mapping. Histogram specification, also known as histogram matching, modifies an image so that its histogram matches a desired distribution. Both techniques improve visual quality and highlight important features.
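The equalization mapping s = round((L-1)·CDF(r)) can be computed straight from the histogram; the 8-level pixel counts below are illustrative.

```python
def equalize_map(hist, L=8):
    """Map each gray level r to s = round((L-1) * CDF(r)),
    where hist[r] is the pixel count of level r."""
    total = sum(hist)
    cdf, running = [], 0
    for count in hist:
        running += count
        cdf.append(running / total)   # cumulative distribution up to level r
    return [round((L - 1) * c) for c in cdf]

# counts of gray levels 0..7 in a hypothetical 4096-pixel image
mapping = equalize_map([790, 1023, 850, 656, 329, 245, 122, 81])
```

Levels crowded at the dark end are spread apart (0 maps to 1, 1 maps to 3) while the sparse bright levels merge, flattening the output histogram.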
Q3(b) Colour Models and Conversion
Colour models represent colors using mathematical representations. Common models include RGB, CMY, HSV, and YCbCr. RGB is used in display systems, CMY in printing, HSV in image analysis, and YCbCr in video compression. Conversion between color models involves mathematical transformations that separate luminance and chrominance components, improving processing efficiency and perceptual accuracy.
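Two of the simplest conversions can be sketched directly: RGB to CMY is a per-channel complement, and the BT.601 luma used by YCbCr is a weighted sum of the channels. Function names and the 8-bit range are illustrative.

```python
def rgb_to_cmy(r, g, b, max_val=255):
    """CMY is the subtractive complement of RGB: C = max - R, and so on."""
    return (max_val - r, max_val - g, max_val - b)

def rgb_luma(r, g, b):
    """ITU-R BT.601 luma used by YCbCr: Y = 0.299 R + 0.587 G + 0.114 B."""
    return 0.299 * r + 0.587 * g + 0.114 * b

cmy = rgb_to_cmy(255, 0, 0)   # pure red absorbs no cyan ink
```

Separating luma from the two chroma differences (Cb, Cr) is what lets video codecs subsample color more aggressively than brightness.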