(SEM VII) THEORY EXAMINATION 2022-23 INFORMATION THEORY & CODING
SECTION A – Short Answers (2 Marks Each)
(a) Conditional entropy
Conditional entropy H(X|Y) is the average uncertainty remaining in random variable X when Y is known.
(b) “Sun rises in the south” – amount of information
This statement carries a very large amount of information because its probability is almost zero; since I = log₂(1/p), the information content grows without bound as p → 0.
(c) Kraft’s Inequality
Kraft’s inequality provides a necessary and sufficient condition for the existence of a prefix code.
∑ 2^(−l_i) ≤ 1, where l_i is the length of the i-th codeword.
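As a quick numerical illustration, the minimal sketch below checks the inequality for a set of codeword lengths (the lengths are assumed examples, not from the question paper):

```python
# Minimal sketch: check Kraft's inequality for binary codeword lengths.
def kraft_sum(lengths, r=2):
    """Return sum of r^(-l_i); a prefix code with these lengths exists iff <= 1."""
    return sum(r ** -l for l in lengths)

print(kraft_sum([1, 2, 3, 3]))  # 1.0  -> prefix code exists (e.g. 0, 10, 110, 111)
print(kraft_sum([1, 1, 2]))     # 1.25 -> no prefix code possible
```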
(d) Optimal solution for source coding
An optimal source code minimizes the average codeword length, bringing it as close as possible to the source entropy.
(e) Parameters affecting channel capacity
Channel capacity depends on bandwidth, signal power, noise power, and signal-to-noise ratio (S/N).
(f) Noisy channel and its matrix representation
A noisy channel introduces errors during transmission. It is represented using a channel transition probability matrix.
(g) Hamming distance
Hamming distance is the number of differing bits between two codewords.
Example: Distance between 1011 and 1110 is 2.
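A minimal sketch of the computation, counting the positions where the two codewords differ:

```python
# Minimal sketch: Hamming distance between two equal-length binary strings.
def hamming_distance(a: str, b: str) -> int:
    assert len(a) == len(b), "codewords must have equal length"
    return sum(x != y for x, y in zip(a, b))

print(hamming_distance("1011", "1110"))  # 2, matching the example above
```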
(h) Prefix code
A prefix code is a code in which no codeword is a prefix of another codeword, ensuring instantaneous decoding.
(i) Parity bits and importance
Parity bits are extra bits added for error detection. They help identify single-bit errors during transmission.
(j) Significance of k in convolution coding
In (n, k, K) notation, k is the number of message bits taken into the encoder per step, fixing the code rate k/n, while the constraint length K is the number of successive input bits that can influence each encoder output.
SECTION B – Long Answers (10 Marks Each)
(a) Entropy and information rate of a discrete source
Entropy is calculated using
H = −∑ p_i log₂ p_i
The information rate is R = rH, where r is the symbol rate in symbols per second; it represents the average information generated per second in bits/second.
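A minimal sketch of both calculations, using an assumed probability distribution and an assumed symbol rate:

```python
import math

# Minimal sketch: entropy and information rate of a discrete memoryless source.
def entropy(probs):
    """H = -sum p_i log2 p_i, in bits per symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

probs = [0.5, 0.25, 0.125, 0.125]   # assumed source statistics
r = 1000                            # assumed symbol rate, symbols/second
H = entropy(probs)
print(f"H = {H} bits/symbol, R = r*H = {r * H} bits/second")  # H = 1.75
```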
(b) Shannon–Fano coding and efficiency
Shannon–Fano coding sorts the symbols in descending order of probability and recursively splits the list into two groups of nearly equal total probability, assigning 0 to one group and 1 to the other.
Efficiency = Entropy / Average code length.
It measures how close the code is to the optimum.
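A minimal sketch of the recursive split and the resulting efficiency, for an assumed four-symbol source:

```python
import math

# Minimal sketch of Shannon-Fano coding; symbols and probabilities are assumed.
def shannon_fano(items):
    """items: list of (symbol, prob) sorted by descending prob. Returns {symbol: code}."""
    if len(items) == 1:
        return {items[0][0]: ""}
    total, acc, split, best = sum(p for _, p in items), 0.0, 1, float("inf")
    # Split where the cumulative probability is closest to half the total.
    for i in range(1, len(items)):
        acc += items[i - 1][1]
        if abs(total / 2 - acc) < best:
            best, split = abs(total / 2 - acc), i
    codes = {}
    for prefix, group in (("0", items[:split]), ("1", items[split:])):
        for sym, code in shannon_fano(group).items():
            codes[sym] = prefix + code
    return codes

items = [("A", 0.4), ("B", 0.3), ("C", 0.2), ("D", 0.1)]  # assumed probabilities
codes = shannon_fano(items)
H = -sum(p * math.log2(p) for _, p in items)
L = sum(p * len(codes[s]) for s, p in items)
print(codes, f"efficiency = {H / L:.3f}")  # ~0.972 for these probabilities
```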
(c) Channel capacity of AWGN channel
C = W log₂(1 + S/N)
Increasing bandwidth or SNR increases capacity, but the gain from bandwidth alone is limited: widening W also admits more noise power N = N₀W, so capacity approaches the finite limit (S/N₀) log₂ e ≈ 1.44 S/N₀ as W → ∞.
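A quick numerical illustration with assumed figures (a 3.1 kHz telephone channel at 30 dB SNR):

```python
import math

# Minimal sketch: Shannon-Hartley capacity for assumed bandwidth and SNR.
def awgn_capacity(bandwidth_hz, snr_linear):
    """C = W log2(1 + S/N), in bits/second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

snr = 10 ** (30 / 10)                       # 30 dB -> linear ratio of 1000
print(f"C = {awgn_capacity(3100, snr):.0f} bits/second")  # ~30,898 bits/second
```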
(d) (6,3) block code
All code vectors are obtained by multiplying each message vector with the generator matrix: c = mG (modulo-2).
Error correction capability = ⌊(dmin − 1)/2⌋,
Error detection capability = dmin − 1.
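A minimal sketch that enumerates all 2³ code vectors and checks dmin; the parity submatrix P here is an assumed example, not necessarily the one from the question:

```python
import itertools
import numpy as np

# Minimal sketch for a systematic (6,3) linear block code with G = [I | P].
P = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]])                   # assumed parity submatrix
G = np.hstack([np.eye(3, dtype=int), P])

codewords = [(np.array(m) @ G) % 2 for m in itertools.product([0, 1], repeat=3)]
d_min = min(int(c.sum()) for c in codewords if c.any())  # minimum nonzero weight
print(f"d_min = {d_min}, detects {d_min - 1} errors, "
      f"corrects {(d_min - 1) // 2} errors")             # d_min = 3 here
```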
(e) (2,1,3) convolutional encoder
It uses shift registers and modulo-2 adders; each output stream is a modulo-2 combination of the current input bit and the stored bits. In the transform-domain approach, each output sequence is the product of the input polynomial with the corresponding generator polynomial over GF(2).
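A minimal sketch of such an encoder, assuming the common generator taps g1 = (1,1,1) and g2 = (1,0,1) (the question's taps may differ):

```python
# Minimal sketch of a rate-1/2 convolutional encoder with assumed generators.
def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    state = [0] * (len(g1) - 1)          # shift-register memory
    out = []
    for b in bits:
        window = [b] + state             # current input plus stored bits
        v1 = sum(t * x for t, x in zip(g1, window)) % 2  # modulo-2 adder 1
        v2 = sum(t * x for t, x in zip(g2, window)) % 2  # modulo-2 adder 2
        out += [v1, v2]
        state = window[:-1]              # shift the register
    return out

print(conv_encode([1, 0, 1, 1]))  # interleaved (v1, v2) pairs: 11 10 00 01
```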
SECTION C – Long Answers (10 Marks Each)
3(a) Mutual information
I(X;Y) = H(X) + H(Y) − H(X,Y)
Properties include symmetry, non-negativity, and relation with entropy.
It measures information shared between X and Y.
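A minimal sketch computing I(X;Y) from an assumed joint distribution:

```python
import math

# Minimal sketch: I(X;Y) = H(X) + H(Y) - H(X,Y) for an assumed joint pmf.
def H(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

joint = [[0.25, 0.25],   # assumed p(x, y); rows index X, columns index Y
         [0.00, 0.50]]
px = [sum(row) for row in joint]          # marginal of X
py = [sum(col) for col in zip(*joint)]    # marginal of Y
pxy = [p for row in joint for p in row]   # flattened joint
print(f"I(X;Y) = {H(px) + H(py) - H(pxy):.4f} bits")  # ~0.3113
```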
3(b) Log-sum inequality
Log-sum inequality states that
∑ a_i log(a_i/b_i) ≥ (∑ a_i) log(∑ a_i / ∑ b_i)
Applications include entropy proofs and channel capacity derivations.
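A quick numerical check with assumed values of a_i and b_i (equality holds iff a_i/b_i is constant):

```python
import math

# Minimal sketch: numerically verify the log-sum inequality for assumed values.
a = [0.2, 0.5, 0.3]
b = [0.4, 0.4, 0.2]
lhs = sum(ai * math.log2(ai / bi) for ai, bi in zip(a, b))
rhs = sum(a) * math.log2(sum(a) / sum(b))
print(f"LHS = {lhs:.4f} >= RHS = {rhs:.4f}")  # 0.1365 >= 0.0000
```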
4(a) Stop-and-wait ARQ
In stop-and-wait ARQ, the sender transmits one frame and waits for an acknowledgment. If the frame is received in error (a NAK arrives) or the timer expires, the frame is retransmitted.
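A minimal simulation sketch, assuming each frame is corrupted independently with probability p (the frame count and error rate are illustrative):

```python
import random

# Minimal sketch: stop-and-wait ARQ; a corrupted frame is retransmitted
# until it is acknowledged.
def stop_and_wait(frames, p_error=0.3, seed=1):
    rng = random.Random(seed)
    transmissions = 0
    for frame in frames:
        while True:
            transmissions += 1
            if rng.random() > p_error:   # frame received intact -> ACK
                break                    # else NAK/timeout -> retransmit
    return transmissions

print(stop_and_wait(range(10)))  # total transmissions needed for 10 frames
```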
4(b) Huffman coding
Huffman coding produces the minimum average code length among all prefix codes for a given set of symbol probabilities.
Code variance measures variability in code lengths.
Efficiency compares average length with entropy.
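A minimal sketch of the Huffman procedure using a min-heap, with assumed symbol probabilities:

```python
import heapq

# Minimal sketch of Huffman coding: repeatedly merge the two least
# probable nodes, prefixing 0/1 to the codes in each merged subtree.
def huffman(probs):
    """probs: {symbol: probability} -> {symbol: binary code}."""
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    tie = len(heap)                      # tiebreaker so dicts are never compared
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (p1 + p2, tie, merged))
        tie += 1
    return heap[0][2]

probs = {"A": 0.4, "B": 0.3, "C": 0.2, "D": 0.1}   # assumed source
codes = huffman(probs)
avg_len = sum(p * len(codes[s]) for s, p in probs.items())
print(codes, f"average length = {avg_len} bits/symbol")  # 1.9 bits/symbol
```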
5(a) Binary Symmetric Channel (BSC)
The input symbol probabilities are obtained from the given message statistics.
Efficiency = Information rate / Channel capacity.
Channel capacity of BSC:
C = 1 − H(p), where H(p) = −p log₂ p − (1−p) log₂(1−p) is the binary entropy function and p is the crossover probability.
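A minimal numeric sketch for an assumed crossover probability p = 0.1:

```python
import math

# Minimal sketch: capacity of a binary symmetric channel.
def bsc_capacity(p):
    """C = 1 - H(p), with H the binary entropy function."""
    if p in (0, 1):
        return 1.0
    return 1 + p * math.log2(p) + (1 - p) * math.log2(1 - p)

print(f"C = {bsc_capacity(0.1):.4f} bits/use")  # ~0.5310
```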
5(b) Cascaded BSC
Mutual information decreases with cascading: by the data-processing inequality, I(X;Y) ≥ I(X;Z) for the chain X → Y → Z, because each extra stage accumulates noise.
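A minimal sketch of the capacity loss, using the fact that two identical BSCs in cascade act as one BSC with effective crossover 2p(1−p), an error in exactly one hop (the per-hop p is an assumed value):

```python
import math

# Minimal sketch: capacity of one BSC hop vs. two cascaded hops.
def bsc_capacity(p):
    return 1.0 if p in (0, 1) else 1 + p * math.log2(p) + (1 - p) * math.log2(1 - p)

p = 0.1                                  # assumed per-hop crossover probability
p_eff = 2 * p * (1 - p)                  # error in exactly one of the two hops
print(f"one hop:  C = {bsc_capacity(p):.4f} bits/use")      # 0.5310
print(f"two hops: C = {bsc_capacity(p_eff):.4f} bits/use")  # 0.3199
```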
6(a) (7,4) block code
The parity-check matrix H = [Pᵀ | I] is obtained from the systematic generator matrix G = [I | P].
Code vectors are obtained by encoding all possible messages.
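A minimal sketch for a systematic (7,4) code; the parity submatrix P below is one standard choice and may differ from the question's:

```python
import itertools
import numpy as np

# Minimal sketch: build G and H for a (7,4) Hamming code and enumerate codewords.
P = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 1, 1],
              [1, 0, 1]])                           # assumed parity submatrix
G = np.hstack([np.eye(4, dtype=int), P])            # G = [I | P]
H = np.hstack([P.T, np.eye(3, dtype=int)])          # H = [P^T | I]

codewords = [(np.array(m) @ G) % 2 for m in itertools.product([0, 1], repeat=4)]
print(f"{len(codewords)} codewords; "
      f"G H^T = 0: {not ((G @ H.T) % 2).any()}")    # parity-check consistency
```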
6(b) ARQ system block diagram
ARQ system includes transmitter, channel, receiver, error detector, feedback path, and control unit.
7(a) Convolutional encoder output
The code sequence is obtained by shifting the input bits through the register and combining the taps by modulo-2 addition according to the generator polynomials.
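A minimal transform-domain sketch using the same assumed generators (111, 101) as the shift-register sketch in Section B(e): each output stream is the GF(2) polynomial product of the input with a generator, and the trailing output pairs correspond to flushing the encoder memory:

```python
# Minimal sketch: transform-domain encoding as modulo-2 polynomial products.
def poly_mul_gf2(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] ^= x & y          # modulo-2 accumulate
    return out

msg = [1, 0, 1, 1]                       # assumed input sequence
v1 = poly_mul_gf2(msg, [1, 1, 1])        # output of first modulo-2 adder
v2 = poly_mul_gf2(msg, [1, 0, 1])        # output of second modulo-2 adder
print(v1, v2)  # interleave pairwise to get the transmitted code sequence
```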
7(b) Convolutional coder design
A rate-½ coder with constraint length 6 produces two output bits per input bit.
Tree and trellis diagrams represent encoder state transitions.