(SEM VII) THEORY EXAMINATION 2024-25
INFORMATION THEORY & CODING (KEC075) – COMPLETE SOLVED PAPER
Time: 3 Hours | Max Marks: 100
Instructions: Attempt all Sections
SECTION A (2 × 10 = 20 Marks)
Attempt all questions in brief
a) Mutual information & relation to entropy
Mutual information I(X;Y) measures the amount of information shared between two random variables.
I(X;Y) = H(X) − H(X|Y)
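For concreteness, a minimal Python sketch (toy joint distribution chosen for illustration) computing I(X;Y) through this identity:

```python
import math

# Hypothetical joint distribution p(x, y) over X, Y in {0, 1}
p_xy = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
p_x = {x: sum(p for (xx, _), p in p_xy.items() if xx == x) for x in (0, 1)}
p_y = {y: sum(p for (_, yy), p in p_xy.items() if yy == y) for y in (0, 1)}

def H(dist):
    """Shannon entropy in bits of a probability dictionary."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# H(X|Y) = H(X,Y) - H(Y), so I(X;Y) = H(X) + H(Y) - H(X,Y)
print(H(p_x) - (H(p_xy) - H(p_y)))   # ≈ 0.278 bits
```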
b) Relative entropy vs mutual information
Relative entropy (Kullback–Leibler divergence) D(p‖q) measures how one distribution differs from another (it is not a true metric), while mutual information measures the dependence between two variables. Since I(X;Y) = D(p(x,y)‖p(x)p(y)), mutual information is a special case of relative entropy.
c) Kraft inequality
For a prefix code with codeword lengths l_i:
∑ 2^(−l_i) ≤ 1
Significance: Necessary and sufficient condition for existence of a prefix code.
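As a quick check, a short snippet (lengths chosen for illustration) evaluating the Kraft sum for a candidate set of lengths:

```python
lengths = [1, 2, 3, 3]                      # e.g. codewords 0, 10, 110, 111
kraft_sum = sum(2 ** (-l) for l in lengths)
print(kraft_sum, kraft_sum <= 1)            # 1.0 True -> a prefix code exists
```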
d) Huffman coding vs Shannon–Fano coding
| Huffman | Shannon–Fano |
|---|---|
| Always optimal | Not always optimal |
| Bottom-up | Top-down |
| Minimum average length | Not guaranteed minimum |
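A minimal Huffman sketch (example probabilities assumed) showing the bottom-up merging the table refers to:

```python
import heapq

def huffman_codes(probs):
    """probs: {symbol: probability}. Returns {symbol: codeword}."""
    # Heap entries: (probability, tiebreak, {symbol: partial code})
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)    # merge the two least probable
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (p1 + p2, count, merged))
        count += 1
    return heap[0][2]

print(huffman_codes({"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}))
# lengths 1, 2, 3, 3 -> average length equals the entropy here
```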
e) Channel coding theorem
It states that reliable communication is possible if transmission rate R < channel capacity C, using proper encoding and decoding.
f) Symmetric channels
Channels whose transition-matrix rows are permutations of each other (and likewise the columns), e.g., the Binary Symmetric Channel (BSC).
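For the BSC specifically, capacity has the closed form C = 1 − H(p); a one-line check (crossover probability chosen for illustration):

```python
import math

def binary_entropy(p):
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

p = 0.1                          # example crossover probability
print(1 - binary_entropy(p))     # C ≈ 0.531 bits per channel use
```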
g) Minimum distance of a block code
Minimum Hamming distance between any two distinct codewords.
h) Purpose of parity-check matrix
The parity-check matrix H defines the code: c is a codeword iff Hcᵀ = 0. The syndrome s = Hrᵀ of a received word is used to detect and locate errors.
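A small syndrome-decoding sketch for the (7,4) Hamming code (one common column convention assumed), showing how H both detects and locates a single error:

```python
import numpy as np

# Columns of H are the numbers 1..7 in binary (MSB in the top row)
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

r = np.array([1, 0, 1, 1, 1, 1, 0])     # codeword with bit 5 flipped
s = H @ r % 2                            # syndrome = H r^T (mod 2)
if s.any():
    pos = int("".join(map(str, s)), 2)   # syndrome reads out the position
    r[pos - 1] ^= 1                      # correct the located bit
print(r)                                 # [1 0 1 1 0 1 0], a valid codeword
```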
i) Viterbi algorithm
A maximum likelihood decoding algorithm used for convolutional codes, finding the most probable path in a trellis.
j) Convolutional codes & representation
Codes in which each output depends on the current and previous input bits. Represented using:
Generator polynomials
Trellis diagram
State diagram
SECTION B (10 × 3 = 30 Marks)
Attempt any three
a) Entropy, joint entropy & conditional entropy
Entropy: H(X) = −∑ p(x) log₂ p(x)
Joint entropy: H(X,Y) = −∑ p(x,y) log₂ p(x,y)
Conditional entropy: H(X|Y) = −∑ p(x,y) log₂ p(x|y)
Example: Fair coin → H = 1 bit.
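A tiny helper (illustrative) implementing the entropy formula and reproducing the fair-coin value:

```python
import math

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))   # 1.0 bit, the fair coin
```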
b) Asymptotic Equipartition Property (AEP)
AEP states that for a long sequence of n i.i.d. symbols, almost all of the probability mass falls on the set of typical sequences, each having probability ≈ 2^(−nH).
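A numeric illustration (parameters assumed) that a sequence with the expected symbol counts indeed has probability about 2^(−nH):

```python
import math

p, n = 0.3, 1000                 # Bernoulli(0.3) source, n symbols
Hb = -p * math.log2(p) - (1 - p) * math.log2(1 - p)   # ≈ 0.881 bits
k = round(p * n)                 # a sequence with the typical count of ones
log_prob = k * math.log2(p) + (n - k) * math.log2(1 - p)
print(log_prob / n, -Hb)         # both ≈ -0.881, i.e. P ≈ 2^(-nH)
```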
c) Jointly typical sequences
Sequence pairs (xⁿ, yⁿ) whose joint probability is close to 2^(−nH(X,Y)).
Used in source coding and channel coding proofs.
d) Single parity-check code
Adds one parity bit to enforce even (or odd) overall parity.
It detects any single-bit error but cannot correct it.
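A minimal even-parity sketch (assumed example) showing detection without correction — see also Q6(a) below:

```python
def encode(bits):
    return bits + [sum(bits) % 2]     # append the parity bit

def error_detected(word):
    return sum(word) % 2 != 0         # odd weight -> parity violated

w = encode([1, 0, 1, 1])              # [1, 0, 1, 1, 1]
w[2] ^= 1                             # inject a single-bit error
print(error_detected(w))              # True: detected, but position unknown
```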
e) Viterbi algorithm in decoding
Uses dynamic programming to minimize the cumulative path metric through the trellis, providing maximum-likelihood decoding of convolutional codes.
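A hard-decision Viterbi sketch (illustrative, written for the standard rate-1/2, K = 3 code with generators 7 and 5 in octal, not a code from the paper); it keeps one survivor path per trellis state and still recovers the message after a channel error:

```python
def conv_outputs(state, bit):
    """state = (s1, s0), the two previous input bits; returns (new_state, outputs)."""
    s1, s0 = state
    o1 = bit ^ s1 ^ s0              # generator g1 = 111
    o2 = bit ^ s0                   # generator g2 = 101
    return (bit, s1), (o1, o2)

def viterbi(received):
    """received: list of (bit, bit) pairs. Returns the decoded input bits."""
    states = [(a, b) for a in (0, 1) for b in (0, 1)]
    # Path metric and survivor path per state; encoding starts in state (0,0)
    metric = {s: (0 if s == (0, 0) else float("inf")) for s in states}
    path = {s: [] for s in states}
    for r in received:
        new_metric = {s: float("inf") for s in states}
        new_path = {s: [] for s in states}
        for s in states:
            for bit in (0, 1):
                ns, out = conv_outputs(s, bit)
                # Branch metric: Hamming distance to the received pair
                m = metric[s] + (out[0] ^ r[0]) + (out[1] ^ r[1])
                if m < new_metric[ns]:          # keep the survivor
                    new_metric[ns] = m
                    new_path[ns] = path[s] + [bit]
        metric, path = new_metric, new_path
    best = min(states, key=lambda s: metric[s])
    return path[best]

# Encode 1,0,1,1, flip one channel bit; Viterbi still recovers the message.
msg, state, tx = [1, 0, 1, 1], (0, 0), []
for b in msg:
    state, out = conv_outputs(state, b)
    tx.append(out)
tx[1] = (tx[1][0] ^ 1, tx[1][1])    # single channel error
print(viterbi(tx))                   # [1, 0, 1, 1]
```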
SECTION C (10 × 5 = 50 Marks)
Attempt any one part from each question
Q3(a) Entropy calculation
Given:
P = (1/2, 1/4, 1/8, 1/16, 1/32, 1/32)
H = −∑ p_i log₂ p_i = (1/2)(1) + (1/4)(2) + (1/8)(3) + (1/16)(4) + (1/32)(5) + (1/32)(5) = 1.9375 bits/symbol
Rate of information (at a symbol rate of 16 symbols/sec):
R = 16 × 1.9375 = 31 bits/sec
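A quick check of this arithmetic (the 16 symbols/sec rate is taken from the computation above):

```python
import math

P = [1/2, 1/4, 1/8, 1/16, 1/32, 1/32]
H = -sum(p * math.log2(p) for p in P)
print(H, 16 * H)   # 1.9375 bits/symbol, 31.0 bits/sec
```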
Q3(b) Log-sum inequality
For non-negative numbers a_i, b_i:
∑ a_i log(a_i/b_i) ≥ (∑ a_i) log(∑ a_i / ∑ b_i)
Used in proving entropy inequalities and channel capacity results.
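A numeric spot-check of the inequality (arbitrary positive values chosen for illustration):

```python
import math

a = [0.2, 0.5, 0.3]
b = [0.4, 0.4, 0.2]
lhs = sum(ai * math.log2(ai / bi) for ai, bi in zip(a, b))
rhs = sum(a) * math.log2(sum(a) / sum(b))
print(lhs, rhs, lhs >= rhs)   # equality iff a_i / b_i is constant
```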
Q4(a) Data compression & types
Data compression reduces redundancy.
Types:
Lossless: Huffman coding, run-length encoding (RLE)
Lossy: JPEG (images), MP3 (audio)
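A one-function run-length-encoding sketch, one of the lossless methods named above:

```python
from itertools import groupby

def rle(s):
    """Collapse each run of repeated characters to (char, run length)."""
    return [(ch, len(list(run))) for ch, run in groupby(s)]

print(rle("aaabbc"))   # [('a', 3), ('b', 2), ('c', 1)]
```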
Q4(b) Shannon–Fano–Elias coding algorithm
Compute cumulative probabilities F(x)
Take the midpoint of each symbol's probability interval and convert it to binary
Truncate to ⌈log₂(1/p(x))⌉ + 1 bits and assign the result as the codeword
This construction guarantees the prefix property.
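A minimal implementation of these steps (example probabilities assumed):

```python
import math

def sfe_codes(probs):
    codes, F = {}, 0.0
    for sym, p in probs.items():
        mid = F + p / 2                        # midpoint of the interval
        l = math.ceil(math.log2(1 / p)) + 1    # codeword length
        bits, frac = [], mid                   # binary expansion of mid,
        for _ in range(l):                     # truncated to l bits
            frac *= 2
            bits.append(str(int(frac)))
            frac -= int(frac)
        codes[sym] = "".join(bits)
        F += p                                 # advance the cumulative sum
    return codes

print(sfe_codes({"a": 0.25, "b": 0.5, "c": 0.25}))
# {'a': '001', 'b': '10', 'c': '111'} - a prefix code
```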
Q5(a) Channel capacity properties
C = max over input distributions p(x) of I(X;Y)
Non-negative
Additive for independent channels
Upper bound on the rate of reliable transmission
Impact: Determines the maximum achievable data rate.
Q5(b) Channel coding theorem (importance)
It provides the fundamental limit of communication, guiding practical code design.
Q6(a) (8,7) parity-check code (p = 0.01)
Probability of correct reception (no bit errors):
P_c = (1 − p)⁸ = 0.9227
Probability of a single bit error (detected by the parity check):
P_e = 8p(1 − p)⁷ = 0.0746
Probability of two or more errors:
P_f = 1 − (P_c + P_e) ≈ 0.0027
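Reproducing the three probabilities with the binomial formula:

```python
from math import comb

p, n = 0.01, 8
Pc = (1 - p) ** n                         # no errors
Pe = comb(n, 1) * p * (1 - p) ** (n - 1)  # exactly one (detected) error
print(round(Pc, 4), round(Pe, 4), round(1 - Pc - Pe, 4))
# 0.9227 0.0746 0.0027
```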
Q6(b) Hamming distance
Number of differing bit positions.
Relation to code capability:
Error detection: up to d_min − 1 errors
Error correction: up to ⌊(d_min − 1)/2⌋ errors
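Computing d_min for a small code and the two bounds above (example code assumed):

```python
from itertools import combinations

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

code = ["000", "011", "101", "110"]          # even-weight (3,2) code
d_min = min(hamming(a, b) for a, b in combinations(code, 2))
print(d_min, d_min - 1, (d_min - 1) // 2)    # 2: detects 1, corrects 0
```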
Q7(a) Generator matrices (convolutional codes)
Defines the relationship between the input and output bit streams, implemented with shift registers and modulo-2 adders.
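A shift-register encoder sketch (generators g1 = 1 + D + D², g2 = 1 + D² assumed, matching the Viterbi example in Section B):

```python
def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    reg = [0, 0, 0]                     # shift register, newest bit first
    out = []
    for b in bits:
        reg = [b] + reg[:-1]            # shift in the new input bit
        o1 = sum(g * r for g, r in zip(g1, reg)) % 2   # tap and XOR
        o2 = sum(g * r for g, r in zip(g2, reg)) % 2
        out.append((o1, o2))
    return out

print(conv_encode([1, 0, 1, 1]))        # [(1, 1), (1, 0), (0, 0), (0, 1)]
```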
Q7(b) Short notes
i) Code tree: Graphical representation of code sequences
ii) Trellis diagram: Time-expanded state diagram
iii) State diagram: Shows transitions between encoder states