(SEM VIII) THEORY EXAMINATION 2017-18 DATA COMPRESSION
DATA COMPRESSION (NCS-085)
This overview follows the structure of the question paper.
The Data Compression examination is structured into three sections: A, B, and C. The paper begins with basic definitions and theoretical concepts, progresses to entropy and coding calculations, and finally evaluates advanced coding schemes, quantization methods, and compression standards.
Below is a detailed explanation of each section in descriptive format.
Section A – Fundamental Concepts of Data Compression (20 Marks)
Section A consists of ten compulsory short-answer questions, each carrying two marks. This section evaluates your understanding of the basic principles of data compression and related terminology.
The questions cover definitions of data compression and its necessity, the difference between compression and reconstruction, the limitations and applications of Huffman coding, a comparison between binary codes and Huffman codes, the Graphic Interchange Format (GIF), the rate-distortion criterion, uniform and non-uniform quantization, predictive coding, and vector quantization.
This section focuses on conceptual clarity. For example, when defining the rate-distortion criterion, you must explain the trade-off between compression rate and reconstruction quality. Similarly, predictive coding should be described as a technique in which the current sample is predicted from previous samples, so that only the (typically small) prediction residual needs to be encoded.
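As a minimal sketch of the predictive-coding idea described above, the following uses the simplest possible predictor (the previous sample); the function names are illustrative, not from any standard library:

```python
def predictive_encode(samples):
    """First-order predictive coding: transmit the first sample as-is,
    then only the difference from the previous sample (the residual)."""
    residuals = [samples[0]]
    for i in range(1, len(samples)):
        residuals.append(samples[i] - samples[i - 1])
    return residuals

def predictive_decode(residuals):
    """Invert the predictor: rebuild each sample from the running sum."""
    samples = [residuals[0]]
    for r in residuals[1:]:
        samples.append(samples[-1] + r)
    return samples
```

For slowly varying signals the residuals cluster near zero, so they can be entropy-coded with fewer bits than the raw samples, which is exactly the redundancy reduction the definition refers to.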
Although the questions are short, they form the foundation for understanding entropy, coding efficiency, and lossy versus lossless compression.
Section B – Entropy, Coding Algorithms, and Compression Models (30 Marks)
Section B requires you to attempt any three questions, each carrying ten marks. This section focuses on entropy calculations, Huffman coding design, adaptive compression, Prediction by Partial Matching (PPM), and vector quantization procedures.
One question requires calculating first-order entropy using given symbol probabilities. You must apply Shannon’s entropy formula:
H = − Σ P(x) log₂ P(x)
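The formula above can be applied directly in a few lines of Python; the probabilities below are an assumed example distribution, not taken from the paper:

```python
import math

def first_order_entropy(probs):
    """Shannon first-order entropy H = -sum(p * log2(p)), in bits per symbol.
    Zero-probability symbols contribute nothing and are skipped."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Example: four symbols with dyadic probabilities
probs = [0.5, 0.25, 0.125, 0.125]
print(first_order_entropy(probs))  # 1.75 bits/symbol
```

For a dyadic distribution like this one, the entropy equals the average length of an optimal Huffman code, which is why such distributions appear often in exam calculations.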
Another question requires constructing a Huffman code for a given probability distribution, computing entropy, and finding average code length. This evaluates your understanding of optimal prefix coding and coding efficiency.
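The construction described above can be sketched with a priority queue: repeatedly merge the two least-probable nodes, prefixing "0" to one subtree and "1" to the other. The symbol set and probabilities here are assumed for illustration:

```python
import heapq

def huffman_code(probs):
    """Build a Huffman code for a dict mapping symbol -> probability.
    Each heap entry carries a partial codebook; a counter breaks ties."""
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)   # least probable subtree
        p2, _, c2 = heapq.heappop(heap)   # second least probable
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (p1 + p2, counter, merged))
        counter += 1
    return heap[0][2]

probs = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}
code = huffman_code(probs)
avg_len = sum(probs[s] * len(code[s]) for s in probs)
```

Comparing `avg_len` with the entropy of the same distribution gives the coding efficiency the question asks for; for dyadic probabilities the two coincide exactly.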
The section also includes conceptual topics such as adaptive versus statistical compression schemes. Adaptive schemes update probabilities dynamically during encoding, while statistical schemes use predefined probabilities.
Vector quantization questions require explanation of codebook generation and mapping input vectors to nearest code vectors.
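The mapping step mentioned above (assigning an input vector to its nearest code vector) can be sketched as a brute-force nearest-neighbour search under squared Euclidean distance; the codebook here is an assumed toy example:

```python
def nearest_code_vector(x, codebook):
    """Return the index of the codebook entry closest to x
    under squared Euclidean distance (brute-force search)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(range(len(codebook)), key=lambda i: dist2(x, codebook[i]))

# Toy codebook of three 2-D code vectors
codebook = [(0.0, 0.0), (1.0, 1.0), (4.0, 4.0)]
idx = nearest_code_vector((0.9, 1.2), codebook)  # index 1
```

The encoder transmits only the index `idx`; the decoder looks up the corresponding code vector, which is where the compression comes from. Codebook generation itself (e.g. the LBG algorithm covered in Section C) iterates this assignment step with a centroid update.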
This section tests your mathematical skills, coding algorithm design, and understanding of source modeling.
Section C – Advanced Coding Techniques and Lossy Compression (50 Marks)
Section C carries the highest weightage and requires you to attempt one part from each question. This section focuses on advanced coding techniques, image compression standards, quantization models, and source modeling.
The topics include source models (physical, probabilistic, Markov, composite), uniquely decodable codes, Tunstall coding, Golomb coding, facsimile encoding techniques (MH, MMR, JBIG), adaptive quantization, distortion measures in lossy compression, Linde-Buzo-Gray (LBG) algorithm, and additive noise model of quantizer.
For example, Tunstall coding requires building a variable-to-fixed-length code for a memoryless source. Golomb coding requires designing codes for a given parameter m applied to a sequence of non-negative integers n.
Facsimile encoding and JBIG standards focus on bilevel image compression techniques, particularly run-length encoding and arithmetic coding methods.
Adaptive quantization and additive noise models require understanding how quantization error is modeled and how distortion is measured using metrics such as Mean Square Error (MSE) and Signal-to-Noise Ratio (SNR).
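The two distortion metrics named above can be sketched directly from their definitions; the signal values below are an assumed example, not from the paper:

```python
import math

def mse(orig, recon):
    """Mean squared error between original and reconstructed signals."""
    return sum((o - r) ** 2 for o, r in zip(orig, recon)) / len(orig)

def snr_db(orig, recon):
    """Signal-to-noise ratio in decibels: average signal power
    divided by the quantization-error power (the MSE)."""
    signal_power = sum(o ** 2 for o in orig) / len(orig)
    return 10 * math.log10(signal_power / mse(orig, recon))

orig = [1.0, 2.0, 3.0, 4.0]
recon = [1.1, 1.9, 3.2, 3.8]   # e.g. output of a quantizer
```

Under the additive noise model, `recon[i] - orig[i]` is treated as an independent noise term, so the MSE is exactly the power of that noise and the SNR measures how far the signal rises above it.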
This section evaluates deep theoretical knowledge, algorithm design skills, and understanding of both lossless and lossy compression methods.
Overall Paper Structure and Preparation Strategy
The paper is structured progressively:
Section A tests basic definitions and conceptual understanding.
Section B focuses on entropy calculations and coding algorithm design.
Section C evaluates advanced compression techniques and mathematical modeling.
To perform well:
Master Shannon entropy and Huffman coding.
Practice code construction and average length calculation.
Understand source modeling and Markov processes.
Study vector quantization and LBG algorithm carefully.
Learn image compression standards such as JBIG and facsimile coding.