(SEM VII) THEORY EXAMINATION 2024-25 ARTIFICIAL INTELLIGENCE
ARTIFICIAL INTELLIGENCE (KDS071) – COMPLETE SOLVED PAPER
Time: 3 Hours Max Marks: 100 Instructions: Attempt all sections
SECTION A (2 × 10 = 20 Marks)
Attempt all questions in brief
a) Examples of convex optimization problems
Linear Programming (LP)
Least Squares Regression
Support Vector Machines (SVM)
Lasso and Ridge Regression
Quadratic Programming
b) Karush–Kuhn–Tucker (KKT) conditions For constrained optimization:
Stationarity
Primal feasibility
Dual feasibility
Complementary slackness
c) Basic steps of mirror descent algorithm
Initialize the parameter
Compute the gradient
Map to the dual space
Perform the gradient step
Map back using the mirror function
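A minimal Python sketch of these steps, assuming the entropy mirror map on the probability simplex (the exponentiated-gradient special case); the function and variable names are illustrative, not from the syllabus:

```python
import numpy as np

def mirror_descent_simplex(grad, x0, eta=0.1, n_iters=100):
    """Mirror descent on the probability simplex with the entropy mirror map.

    The dual map is log(x), the gradient step is taken in log-space,
    and mapping back is exponentiation followed by normalisation.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        g = grad(x)                  # compute gradient at the current point
        y = np.log(x) - eta * g      # map to dual space and take the gradient step
        x = np.exp(y)                # map back to the primal space
        x /= x.sum()                 # renormalise onto the simplex
    return x

# Example: minimise a linear cost <c, x> over the simplex
c = np.array([3.0, 1.0, 2.0])
print(mirror_descent_simplex(lambda x: c, x0=np.ones(3) / 3))
# mass concentrates on the coordinate with the smallest cost
```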
d) Proximal gradient methods
They solve non-smooth convex problems by combining:
Gradient descent (smooth part)
Proximal operator (non-smooth part)
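As an illustration, a minimal proximal-gradient (ISTA) sketch for the lasso problem $\min \frac{1}{2}\|Ax-b\|^2 + \lambda\|x\|_1$; the split into a gradient step on the smooth part and a soft-thresholding proximal step on the L1 part follows the idea above (names and data below are assumptions for the example):

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (handles the non-smooth L1 part)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient_lasso(A, b, lam=0.1, eta=None, n_iters=500):
    """ISTA: gradient step on the smooth least-squares term,
    then the proximal (soft-thresholding) step on the L1 term."""
    m, n = A.shape
    if eta is None:
        eta = 1.0 / np.linalg.norm(A, 2) ** 2   # step size from the Lipschitz constant
    x = np.zeros(n)
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)                # gradient of 0.5 * ||Ax - b||^2
        x = soft_threshold(x - eta * grad, eta * lam)
    return x

# Example: sparse recovery on random data
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 100))
x_true = np.zeros(100); x_true[:5] = 1.0
x_hat = proximal_gradient_lasso(A, A @ x_true, lam=0.1)
```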
e) Monotone operators
An operator $T$ is monotone if $(Tx - Ty)^T (x - y) \ge 0$ for all $x, y$.
Used in convex optimization and variational inequalities.
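For instance, the gradient of a convex function is a monotone operator; a quick numerical check of the inequality for $T(x) = 2x$, the gradient of $\|x\|^2$ (a toy verification, not part of the original answer):

```python
import numpy as np

# The gradient of a convex function is monotone; check for T(x) = 2x.
rng = np.random.default_rng(0)
T = lambda x: 2 * x
for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    assert (T(x) - T(y)) @ (x - y) >= 0   # (Tx - Ty)^T (x - y) >= 0
```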
f) Intuition behind Douglas–Rachford splitting
It solves problems with two complex constraints by splitting them into simpler subproblems and iteratively combining their solutions.
g) Langevin dynamics
A stochastic optimization method that adds Gaussian noise to gradient descent to escape local minima and sample from probability distributions.
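A minimal sketch of the unadjusted Langevin update (step size, potential, and starting point below are illustrative assumptions):

```python
import numpy as np

def langevin_dynamics(grad_U, x0, eta=0.01, n_iters=5000, rng=None):
    """Unadjusted Langevin algorithm: gradient descent on the potential U
    plus Gaussian noise scaled by sqrt(2 * eta), so the iterates
    approximately sample from the density proportional to exp(-U(x))."""
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_iters):
        noise = rng.normal(size=x.shape)
        x = x - eta * grad_U(x) + np.sqrt(2 * eta) * noise
        samples.append(x.copy())
    return np.array(samples)

# Example: sampling from a standard Gaussian, U(x) = 0.5 * x^2, grad_U(x) = x
samples = langevin_dynamics(lambda x: x, x0=np.array([5.0]))
print(samples.mean(), samples.std())   # roughly 0 and 1, ignoring the initial transient
```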
h) Purpose of averaging methods in stochastic optimization
Reduces variance
Improves convergence stability
Produces smoother trajectories
i) Supervised vs Unsupervised learning
| Supervised | Unsupervised |
|---|---|
| Labeled data | Unlabeled data |
| Prediction | Pattern discovery |
| Examples: Regression | Examples: Clustering |
j) Feature extraction in machine learning
Transforming raw data into meaningful numerical features to improve model accuracy and efficiency.
SECTION B (10 × 3 = 30 Marks)
Attempt any three
a) Duality in optimization (with example)
The primal problem minimizes the objective subject to the constraints; the dual problem maximizes a lower bound on the primal optimum obtained from the Lagrangian.
Example:
Minimize $x^2$ subject to $x \ge 1$.
Dual formulation provides bounds and optimality checks.
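Working this example through makes the bound explicit: the Lagrangian is $L(x,\lambda) = x^2 + \lambda(1 - x)$ with $\lambda \ge 0$. Minimizing over $x$ gives $x = \lambda/2$, so the dual function is $g(\lambda) = \lambda - \lambda^2/4$. Maximizing over $\lambda \ge 0$ gives $\lambda^* = 2$ and $g(\lambda^*) = 1$, which equals the primal optimum $f(1) = 1$ (zero duality gap, strong duality).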
b) ODE interpretations and optimization algorithms
Gradient descent can be viewed as a discretized Ordinary Differential Equation (ODE), explaining convergence behavior and stability.
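A minimal sketch of this view: the forward-Euler discretisation of the gradient-flow ODE $\dot{x} = -\nabla f(x)$ is exactly the gradient-descent update, with the step size playing the role of the time step (the test function $f(x)=x^2$ below is an illustrative assumption):

```python
# Gradient descent as the forward-Euler discretisation of the gradient-flow
# ODE  dx/dt = -grad f(x), with step size eta playing the role of dt.
def gradient_flow_euler(grad_f, x0, eta=0.1, n_steps=50):
    x = float(x0)
    for _ in range(n_steps):
        x = x - eta * grad_f(x)   # x_{k+1} = x_k + dt * (-grad f(x_k))
    return x

# For f(x) = x^2 the exact flow is x(t) = x0 * exp(-2t); the iterates
# x_k = x0 * (1 - 2*eta)^k track it when eta is small.
print(gradient_flow_euler(lambda x: 2 * x, x0=1.0))
```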
c) Augmented Lagrangian method (quadratic constraint)
Adds penalty term to Lagrangian:
$L(x,\lambda) = f(x) + \lambda g(x) + \frac{\rho}{2} g(x)^2$
Improves convergence for constrained problems.
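A minimal sketch of the method of multipliers built on this augmented Lagrangian, assuming an equality constraint $g(x)=0$ and using plain gradient descent for the inner minimisation (all names, step sizes, and the toy problem are illustrative):

```python
def augmented_lagrangian(f_grad, g, g_grad, x0, rho=10.0,
                         n_outer=20, n_inner=200, eta=0.01):
    """Method of multipliers for min f(x) s.t. g(x) = 0.

    Each outer iteration approximately minimises
    L(x, lam) = f(x) + lam*g(x) + (rho/2)*g(x)^2 in x,
    then updates the multiplier: lam += rho * g(x).
    """
    x, lam = float(x0), 0.0
    for _ in range(n_outer):
        for _ in range(n_inner):
            grad = f_grad(x) + lam * g_grad(x) + rho * g(x) * g_grad(x)
            x -= eta * grad
        lam += rho * g(x)          # multiplier update
    return x, lam

# Example: min x^2 subject to x - 1 = 0  ->  x* = 1, lam* = -2
x_star, lam_star = augmented_lagrangian(
    f_grad=lambda x: 2 * x, g=lambda x: x - 1, g_grad=lambda x: 1.0, x0=0.0)
print(x_star, lam_star)
```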
d) Polyak–Juditsky averaging
Averages all previous iterates: $\bar{x}_k = \frac{1}{k}\sum_{i=1}^{k} x_i$
Reduces noise in stochastic gradients.
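A minimal sketch of SGD with this iterate averaging, maintained incrementally (the noisy objective and all parameter values below are assumptions for the example):

```python
import numpy as np

def averaged_sgd(grad_fn, x0, eta=0.05, n_iters=2000, rng=None):
    """SGD with Polyak-Juditsky (iterate) averaging.

    The running average x_bar_k = (1/k) * sum_{i<=k} x_i is maintained
    incrementally; it smooths out the noise in the individual iterates.
    """
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    x_bar = x.copy()
    for k in range(1, n_iters + 1):
        g = grad_fn(x, rng)               # noisy gradient estimate
        x = x - eta * g                   # plain SGD step
        x_bar += (x - x_bar) / k          # incremental average of the iterates
    return x, x_bar

# Example: minimise f(x) = 0.5*||x||^2 with noisy gradients grad = x + noise
noisy_grad = lambda x, rng: x + rng.normal(scale=1.0, size=x.shape)
x_last, x_avg = averaged_sgd(noisy_grad, x0=np.array([5.0, -3.0]))
print(x_last, x_avg)   # the averaged iterate is typically much closer to the optimum 0
```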
e) Application domains of machine learning
Healthcare (diagnosis)
Finance (fraud detection)
NLP (chatbots)
Computer Vision
Autonomous vehicles
Recommendation systems
SECTION C (10 × 5 = 50 Marks)
Attempt any one part from each question
Q3(a) Apply KKT conditions to a quadratic programming problem
Minimize: $f(x) = x^2$
Subject to: $x \ge 1$
Steps:
Lagrangian: $L(x,\lambda) = x^2 + \lambda(1 - x)$ with $\lambda \ge 0$
Stationarity: $2x - \lambda = 0$; primal feasibility: $x \ge 1$; dual feasibility: $\lambda \ge 0$; complementary slackness: $\lambda(1 - x) = 0$
If $\lambda = 0$, stationarity gives $x = 0$, which violates $x \ge 1$, so the constraint is active
Optimal solution: $x = 1$ with $\lambda = 2$ and $f(1) = 1$
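A quick numerical check of this result (a sketch using scipy.optimize.minimize, which applies a constrained solver when constraints are supplied):

```python
from scipy.optimize import minimize

# Numerical check of the KKT solution for  min x^2  s.t.  x >= 1
res = minimize(lambda x: x[0] ** 2,
               x0=[3.0],
               constraints=[{"type": "ineq", "fun": lambda x: x[0] - 1.0}])
print(res.x)   # approximately [1.0], matching the KKT analysis above
```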
Q4(b) Gradient descent for $f(x) = x^2 + 4x + 4$
Gradient: $f'(x) = 2x + 4$
Update rule: $x_{k+1} = x_k - \eta (2x_k + 4)$; the iterates converge to the stationary point where $f'(x) = 0$, i.e. $x = -2$
Minimum value: $f(-2) = 0$
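A short numerical sketch of the iteration (step size and starting point chosen arbitrarily for illustration):

```python
# Gradient descent iterations for f(x) = x^2 + 4x + 4, with f'(x) = 2x + 4
x, eta = 5.0, 0.1
for k in range(50):
    x -= eta * (2 * x + 4)       # x_{k+1} = x_k - eta * f'(x_k)
print(x, x**2 + 4*x + 4)         # converges to x = -2, where f(-2) = 0
```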
Q5(a) Augmented Lagrangian vs Standard Lagrangian
| Standard | Augmented |
|---|---|
| Slow convergence | Faster |
| No penalty term | Penalty added |
| Sensitive to constraints | More stable |
Q6(a) Polyak–Juditsky averaging application
Used in SGD
Improves convergence speed
Reduces oscillations
Common in deep learning optimizers
Q7(a) Bayesian classifier use case
Email Spam Detection
Uses Bayes’ theorem
Computes the probability of spam given the words in the message
Fast, interpretable, efficient for text data
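A minimal naive Bayes spam-filter sketch with scikit-learn; the toy corpus, labels, and pipeline choices below are illustrative assumptions:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training data (illustrative only)
emails = ["win money now", "lowest price guaranteed",
          "meeting at noon", "project report attached"]
labels = ["spam", "spam", "ham", "ham"]

# Bag-of-words features followed by a multinomial naive Bayes classifier
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

print(model.predict(["win a free prize now"]))          # likely 'spam'
print(model.predict_proba(["win a free prize now"]))    # [P(ham), P(spam)] via Bayes' theorem
```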
Q7(b) PCA – real-world optimization scenario
Image Compression
High-dimensional pixel data is reduced to a few principal components
Retains maximum variance
Faster computation, less storage, and less noise
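A brief sketch of the idea using scikit-learn's PCA on the built-in digits images (the component count is an arbitrary choice for illustration):

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

X = load_digits().data                 # 1797 images, 64 pixel features each
pca = PCA(n_components=16)             # keep 16 principal components
X_reduced = pca.fit_transform(X)       # compressed representation
X_restored = pca.inverse_transform(X_reduced)   # approximate reconstruction

print(X.shape, X_reduced.shape)                 # (1797, 64) -> (1797, 16)
print(pca.explained_variance_ratio_.sum())      # fraction of variance retained
```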