(SEM VII) THEORY EXAMINATION 2024-25
DEEP LEARNING (KDS078) – COMPLETE SOLVED PAPER
Time: 3 Hours Max Marks: 100
Instructions: Attempt all Sections
SECTION A (2 × 10 = 20 Marks)
Attempt all questions in brief
a) Perceptron vs Support Vector Machine (SVM)
Perceptron: Linear classifier, updates weights using misclassified samples, no margin maximization.
SVM: Maximizes margin between classes, uses kernel trick, better generalization.
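As a quick illustration of the perceptron's mistake-driven update rule (a minimal NumPy sketch, assuming labels in {-1, +1}; not tied to any particular library implementation):

```python
import numpy as np

def perceptron_train(X, y, lr=1.0, epochs=10):
    """Perceptron: update weights only on misclassified samples (labels in {-1, +1})."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:   # misclassified: wrong side of the hyperplane
                w += lr * yi * xi        # nudge the boundary toward the sample
                b += lr * yi
    return w, b
```

Unlike the SVM, nothing here pushes the boundary away from the data once every point is classified correctly, which is why the perceptron has no margin guarantee.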
b) Loss function in neural networks
A loss function measures the difference between predicted output and true output. It guides weight updates during backpropagation.
Examples: Mean Squared Error, Cross-Entropy Loss.
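A short NumPy sketch of both losses on dummy predictions (the values are illustrative assumptions):

```python
import numpy as np

y_true = np.array([1.0, 0.0, 1.0])
y_pred = np.array([0.9, 0.2, 0.7])

# Mean Squared Error: average squared gap between prediction and target
mse = np.mean((y_true - y_pred) ** 2)

# Binary Cross-Entropy: penalizes confident wrong probabilities heavily
eps = 1e-12  # avoid log(0)
bce = -np.mean(y_true * np.log(y_pred + eps) + (1 - y_true) * np.log(1 - y_pred + eps))

print(f"MSE = {mse:.4f}, BCE = {bce:.4f}")
```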
c) Role of convolutional layers in CNNs
- Extract spatial features (edges, textures)
- Use weight sharing (illustrated in the sketch below)
- Reduce parameters
- Preserve spatial locality
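To make weight sharing concrete, here is a minimal NumPy sketch of a valid 2-D convolution; the Sobel-style kernel and the image size are illustrative assumptions:

```python
import numpy as np

def conv2d(image, kernel):
    """Naive valid 2-D convolution: one shared kernel slides over the whole image."""
    H, W = image.shape
    kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kH, j:j+kW] * kernel)
    return out

# A 3x3 edge-detecting kernel reused at every position (weight sharing)
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
feature_map = conv2d(np.random.rand(8, 8), sobel_x)
```

The 3 × 3 kernel contributes only 9 parameters regardless of input size, which is exactly the parameter reduction the list above refers to.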
d) Deep vs shallow networks
| Deep Networks | Shallow Networks |
|---|---|
| High representational power | Limited feature learning |
| Complex patterns | Simple patterns |
| High computation | Low computation |
e) Impact of batch normalization
- Reduces internal covariate shift
- Speeds up convergence
- Allows higher learning rates
- Acts as regularizer
f) Autoencoders for low-dimensional representation
Autoencoders compress input data into a latent space via an encoder and reconstruct it via a decoder, learning compact feature representations.
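A minimal PyTorch-style sketch of this encoder/decoder structure; the 784-to-32 layer sizes are illustrative assumptions, not a prescribed architecture:

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Encoder compresses a 784-dim input to a 32-dim latent code; decoder reconstructs it."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 32))
        self.decoder = nn.Sequential(nn.Linear(32, 128), nn.ReLU(),
                                     nn.Linear(128, 784), nn.Sigmoid())

    def forward(self, x):
        z = self.encoder(x)       # low-dimensional latent representation
        return self.decoder(z)    # reconstruction of the input

model = Autoencoder()
x = torch.rand(16, 784)                    # a dummy batch
loss = nn.MSELoss()(model(x), x)           # reconstruction loss drives training
```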
g) Role of non-convex optimization
Deep learning loss landscapes are non-convex with multiple local minima. Optimization focuses on finding good enough minima, not global minima.
h) Challenges of stochastic optimization
- Noisy gradients
- Slow convergence
- Sensitive to learning rate
- Risk of overfitting
i) Deep learning in computer vision
Revolutionized tasks like:
- Image classification
- Object detection
- Face recognition
- Medical image analysis
j) Challenges in modeling audio signals
- Temporal dependencies
- Noise sensitivity
- Variable-length signals
- High dimensionality
SECTION B (10 × 3 = 30 Marks)
Attempt any three
a) Mathematical foundation of SVM & kernel trick
SVM solves a convex optimization problem by maximizing margin.
Kernel trick maps data into higher-dimensional space for non-linear classification.
Difference from Logistic Regression: SVM focuses on margin; logistic regression models probability.
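A small scikit-learn sketch contrasting a linear margin with the RBF kernel trick on a non-linearly separable dataset (dataset and parameters are illustrative choices):

```python
from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.15, random_state=0)

linear_svm = SVC(kernel="linear").fit(X, y)    # linear margin only
rbf_svm = SVC(kernel="rbf", C=1.0).fit(X, y)   # kernel trick: implicit high-dim mapping

print("linear accuracy:", linear_svm.score(X, y))
print("RBF accuracy:   ", rbf_svm.score(X, y))
```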
b) Batch normalization derivation & role
Batch norm normalizes activations: $\hat{x} = \frac{x - \mu}{\sigma}$, where $\mu$ and $\sigma$ are the mean and standard deviation computed over the current mini-batch.
It stabilizes learning, reduces covariate shift, and accelerates training.
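A minimal NumPy sketch of the forward pass, with the usual small $\epsilon$ added for numerical stability and the learnable scale/shift parameters $\gamma$, $\beta$:

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Normalize each feature over the batch, then rescale with learnable gamma, beta."""
    mu = x.mean(axis=0)                     # per-feature batch mean
    var = x.var(axis=0)                     # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)   # zero mean, unit variance
    return gamma * x_hat + beta             # restore representational power

x = np.random.randn(64, 10) * 5 + 3         # badly scaled activations
out = batch_norm_forward(x, gamma=np.ones(10), beta=np.zeros(10))
print(out.mean(axis=0).round(3), out.std(axis=0).round(3))  # ~0 and ~1
```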
c) Hyperparameter optimization in ConvNets
Key hyperparameters:
- Learning rate
- Batch size
- Number of layers
- Filter size
- Dropout rate

Methods: grid search, random search, Bayesian optimization (a random-search sketch follows).
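A minimal random-search sketch; the search-space values and the `train_and_evaluate` callback are hypothetical placeholders for an actual training loop:

```python
import random

# Hypothetical search space for a ConvNet
space = {
    "learning_rate": [1e-1, 1e-2, 1e-3, 1e-4],
    "batch_size": [16, 32, 64, 128],
    "dropout": [0.0, 0.2, 0.5],
}

def random_search(n_trials, train_and_evaluate):
    """Sample random configurations and keep the best-scoring one."""
    best_score, best_config = -float("inf"), None
    for _ in range(n_trials):
        config = {k: random.choice(v) for k, v in space.items()}
        score = train_and_evaluate(config)   # e.g. validation accuracy
        if score > best_score:
            best_score, best_config = score, config
    return best_config, best_score
```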
d) LSTM vs traditional RNN
| RNN | LSTM |
|---|---|
| Suffers vanishing gradients | Solves vanishing gradients |
| Short-term memory | Long-term memory |
| Simple structure | Gated architecture |
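A short PyTorch sketch of the two recurrent modules side by side; shapes are illustrative. The LSTM returns an extra cell state, the vehicle for its long-term memory:

```python
import torch
import torch.nn as nn

x = torch.randn(8, 50, 32)   # (batch, sequence length, features)

rnn = nn.RNN(input_size=32, hidden_size=64, batch_first=True)
lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)

out_rnn, h_n = rnn(x)              # single hidden state, plain tanh recurrence
out_lstm, (h_n, c_n) = lstm(x)     # gated cell state c_n carries long-term memory
```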
e) Deep learning in bioinformatics
Applications:
- Protein structure prediction
- Gene expression analysis
- Drug discovery
- Disease diagnosis using CNNs and RNNs
SECTION C (10 × 5 = 50 Marks)
Attempt any one part from each question
Q3(a) Stochastic Gradient Descent (SGD)
SGD updates weights using one sample at a time.
| Criterion | Batch GD | Mini-Batch GD | SGD |
|---|---|---|---|
| Gradient estimate | Accurate | Balanced | Noisy |
| Speed per update | Slow | Moderate | Fast |
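All three variants can be seen as one algorithm parameterized by the batch size, as in this minimal NumPy sketch for linear regression (learning rate and epoch count are illustrative):

```python
import numpy as np

def sgd_linear_regression(X, y, lr=0.01, epochs=50, batch_size=1):
    """batch_size=len(X): batch GD; batch_size=1: SGD; in between: mini-batch GD."""
    w = np.zeros(X.shape[1])
    n = len(X)
    for _ in range(epochs):
        idx = np.random.permutation(n)            # reshuffle each epoch
        for start in range(0, n, batch_size):
            batch = idx[start:start + batch_size]
            grad = 2 * X[batch].T @ (X[batch] @ w - y[batch]) / len(batch)
            w -= lr * grad                        # noisy but cheap when batch is small
    return w
```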
Q3(b) Perceptron vs Logistic Regression
- Perceptron uses hard threshold
- Logistic regression uses sigmoid function
- Logistic regression provides probability output (contrasted in the snippet below)
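A tiny NumPy snippet contrasting the two output rules on assumed pre-activation values:

```python
import numpy as np

z = np.array([-2.0, 0.5, 3.0])            # pre-activations w·x + b

perceptron_out = (z > 0).astype(int)      # hard threshold: 0/1, no confidence
logistic_out = 1 / (1 + np.exp(-z))       # sigmoid: probability in (0, 1)

print(perceptron_out)   # [0 1 1]
print(logistic_out)     # [0.119 0.622 0.953]
```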
Q4(a) Generative Adversarial Network (GAN)
GAN consists of:
- Generator: creates fake data
- Discriminator: distinguishes real vs fake

Challenges: mode collapse, unstable training
Mitigation: Wasserstein GAN, gradient penalty (a minimal training-step sketch follows)
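A minimal PyTorch sketch of one adversarial training step; the network sizes and the synthetic "real" data are illustrative assumptions, not a recipe for stable training:

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))   # noise -> fake sample
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))    # sample -> real/fake logit

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(32, 2) + 3   # stand-in "real" data
z = torch.randn(32, 16)

# Discriminator step: push real toward 1, fake toward 0
fake = G(z).detach()            # detach so this step does not update G
d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: fool the discriminator (fake toward 1)
g_loss = bce(D(G(z)), torch.ones(32, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```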
Q4(b) Probabilistic theory in deep learning
Bayesian deep learning incorporates:
- Prior over weights
- Posterior estimation
- Uncertainty modeling

Used in Bayesian neural networks and variational inference.
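One lightweight way to approximate predictive uncertainty is Monte Carlo dropout; the sketch below is an illustrative stand-in for that approach (an assumption here, not full variational inference):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Dropout(0.5), nn.Linear(64, 1))
model.train()                              # keep dropout active at inference time

x = torch.randn(1, 10)
with torch.no_grad():
    samples = torch.stack([model(x) for _ in range(100)])  # 100 stochastic passes

mean, std = samples.mean(), samples.std()  # predictive mean; spread proxies uncertainty
```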
Q5(a) PCA vs LDA
| PCA | LDA |
|---|---|
| Maximizes variance | Maximizes class separation |
| Unsupervised | Supervised |
| Dimensionality reduction | Classification |
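A short scikit-learn sketch on the Iris dataset showing that PCA ignores the labels while LDA consumes them:

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)

X_pca = PCA(n_components=2).fit_transform(X)       # unsupervised: y never used
X_lda = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, y)  # supervised
```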
Q5(b) Reconstruction loss of autoencoder
$L = \|X - \hat{X}\|^2$
Lower reconstruction loss indicates better compression and feature learning.
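A minimal NumPy computation of this loss; `X_hat` below is a noisy stand-in for real decoder outputs:

```python
import numpy as np

X = np.random.rand(100, 784)                       # original inputs
X_hat = X + 0.05 * np.random.randn(*X.shape)       # stand-in reconstructions

loss = np.mean(np.sum((X - X_hat) ** 2, axis=1))   # per-sample squared error, averaged
print(f"reconstruction loss: {loss:.3f}")
```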
Q6(a) Deep Reinforcement Learning (DRL)
DRL combines:
- Reinforcement learning
- Deep neural networks

Used in robotics, games (AlphaGo), and autonomous driving.
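The reinforcement-learning half can be illustrated with a tabular Q-learning update (a minimal sketch; deep RL such as DQN replaces the table with a neural network):

```python
import numpy as np

n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.99

def q_update(s, a, r, s_next):
    """Tabular Q-learning step; DQN swaps the table Q for a network over states."""
    target = r + gamma * Q[s_next].max()    # Bellman target
    Q[s, a] += alpha * (target - Q[s, a])   # move estimate toward target

q_update(s=0, a=1, r=1.0, s_next=2)
```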
Q6(b) RNN language model architecture
- Input: word embeddings
- Hidden state: RNN/LSTM
- Output: probability of next word

Limitations: long-term dependency issues, slow training.
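A minimal PyTorch sketch of this pipeline; vocabulary and layer sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn

class RNNLanguageModel(nn.Module):
    """Embeddings -> LSTM -> per-step distribution over the next word."""
    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):                  # tokens: (batch, seq_len) int ids
        h, _ = self.lstm(self.embed(tokens))
        return self.out(h)                      # logits; softmax gives next-word probs

model = RNNLanguageModel()
logits = model(torch.randint(0, 10000, (4, 20)))  # shape (4, 20, 10000)
```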
Q7(a) Limitations & ethics in deep learning
Limitations:
- Lack of interpretability
- Data bias
- High computational cost

Solutions:
- Explainable AI
- Fairness constraints
- Ethical AI guidelines
Q7(b) Face recognition deep learning model
Uses CNNs (e.g., FaceNet):
- Feature embeddings
- Data augmentation
- Normalization

Robust to pose, illumination, and expression variations (a verification sketch follows).
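A hedged sketch of embedding-based verification: `embed_net` below is an untrained stand-in for a real trained model such as FaceNet, and the threshold is an arbitrary assumption:

```python
import torch
import torch.nn.functional as F

# Hypothetical embedding network standing in for a trained face model
embed_net = torch.nn.Sequential(torch.nn.Flatten(),
                                torch.nn.Linear(3 * 112 * 112, 128))

def verify(face_a, face_b, threshold=0.7):
    """Declare same identity if the embeddings are close in cosine similarity."""
    ea = F.normalize(embed_net(face_a), dim=1)
    eb = F.normalize(embed_net(face_b), dim=1)
    return (ea * eb).sum(dim=1) > threshold

same = verify(torch.rand(1, 3, 112, 112), torch.rand(1, 3, 112, 112))
```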