(SEM VI) THEORY EXAMINATION 2022-23 ADVANCED MACHINE LEARNING
ADVANCED MACHINE LEARNING – KML-061
Section-wise Important Questions & Ready Answers
SECTION A
(Attempt all questions – 2 marks each)
(a) Role of Inductive Bias in ANN
Inductive bias refers to the set of assumptions a neural network makes in order to generalize beyond the training data. In an ANN, the inductive bias comes from the network architecture, the activation functions, and the learning algorithm, helping the model learn meaningful patterns instead of memorizing the data.
(b) Choosing Hidden Layers and Nodes in MLP
The number of hidden layers and nodes depends on the complexity of the problem, the size of the dataset, and the required accuracy. Simple problems need few layers, while complex nonlinear problems require deeper networks. Trial and error, cross-validation, and heuristics are commonly used to choose them.
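A minimal sketch, assuming scikit-learn is available, of choosing hidden-layer sizes by cross-validation rather than pure guesswork; the synthetic dataset and the candidate architectures are only illustrative:

# Hypothetical illustration: selecting hidden-layer sizes by cross-validation.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Candidate architectures: one or two hidden layers of varying width.
param_grid = {"hidden_layer_sizes": [(16,), (64,), (64, 32), (128, 64)]}
search = GridSearchCV(MLPClassifier(max_iter=1000, random_state=0),
                      param_grid, cv=5)
search.fit(X, y)
print(search.best_params_)   # architecture with the best cross-validated score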
(c) Bayesian Learning
Bayesian learning is a probabilistic approach where model parameters are treated as random variables. Learning is performed by updating prior beliefs using observed data to obtain posterior probabilities through Bayes’ theorem.
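The update behind this is Bayes' theorem, written here with D for the observed data and theta for the model parameters:

\[
P(\theta \mid D) = \frac{P(D \mid \theta)\,P(\theta)}{P(D)}, \qquad
\text{posterior} \propto \text{likelihood} \times \text{prior}.
\]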
(d) Incorporating Prior Knowledge in Bayesian Models
Prior knowledge is incorporated using prior probability distributions over model parameters. These priors influence learning, especially when data is limited, and are updated using observed evidence.
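As an illustrative sketch (not part of the original answer), a Beta prior on a coin's bias is a standard way to encode prior knowledge; observing data simply shifts the prior's parameters. The prior counts and observations below are placeholders, and SciPy is assumed:

# Beta prior updated by Bernoulli observations (a conjugate prior-likelihood pair).
from scipy.stats import beta

prior_a, prior_b = 2, 2          # prior belief: the coin is roughly fair
heads, tails = 7, 3              # observed evidence

post_a, post_b = prior_a + heads, prior_b + tails   # posterior Beta parameters
print(beta.mean(post_a, post_b))  # posterior mean estimate of P(heads), about 0.64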
(e) Pruning in Decision Tree Models
Pruning removes unnecessary branches from a decision tree to reduce complexity. It prevents overfitting by eliminating splits that provide little predictive power on unseen data.
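A minimal sketch, assuming scikit-learn, of post-pruning via cost-complexity pruning; the dataset and the alpha value are placeholders chosen only for illustration:

# Cost-complexity (post-)pruning sketch with scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

full = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
pruned = DecisionTreeClassifier(ccp_alpha=0.01, random_state=0).fit(X_tr, y_tr)

# The pruned tree has fewer nodes and often generalizes better on unseen data.
print(full.tree_.node_count, full.score(X_te, y_te))
print(pruned.tree_.node_count, pruned.score(X_te, y_te))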
(f) Overfitting and Underfitting in Decision Trees
Overfitting occurs when a tree becomes too complex and captures noise in data, reducing generalization. Underfitting occurs when the tree is too simple to capture underlying patterns, resulting in poor accuracy.
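To make the contrast concrete, a short sketch (assuming scikit-learn; the depths are arbitrary) comparing a depth-1 stump, which tends to underfit, with an unrestricted tree, which tends to overfit:

# Tree depth versus over/underfitting, measured on held-out data.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for depth in (1, None):   # 1 = likely underfit, None = grow until pure (may overfit)
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_tr, y_tr)
    print(depth, tree.score(X_tr, y_tr), tree.score(X_te, y_te))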
(g) Reinforcement Learning vs Other ML Types
Reinforcement learning differs from supervised and unsupervised learning because it learns through interaction with an environment using rewards and penalties instead of labeled datasets.
(h) Feedback Network in Reinforcement Learning
A feedback network uses rewards returned from the environment to update future actions. This feedback loop helps the agent improve its policy over time.
(i) Random Forest
A random forest is an ensemble learning technique that builds multiple decision trees using random subsets of data and features, and combines their outputs for final prediction.
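A short sketch, assuming scikit-learn and a synthetic dataset, showing how a random forest aggregates many randomized trees; the parameter values are illustrative:

# Random forest sketch: many trees on bootstrap samples with random feature subsets.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

forest = RandomForestClassifier(n_estimators=200,      # number of trees
                                max_features="sqrt",   # random feature subset per split
                                random_state=0)
print(cross_val_score(forest, X, y, cv=5).mean())       # averaged predictions, scored by CV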
(j) Random Forest vs Decision Tree
A decision tree uses a single model and is prone to overfitting, whereas a random forest uses multiple trees and improves accuracy, robustness, and generalization.
SECTION B
(Attempt any three – 10 marks each)
2(a) Backpropagation in Neural Networks
Backpropagation is a supervised learning algorithm used to train multilayer neural networks. It works by computing the error at the output layer and propagating it backward through hidden layers using the chain rule. Weights are updated using gradient descent to minimize the loss function, enabling the network to learn complex mappings.
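A self-contained NumPy sketch of backpropagation for a tiny one-hidden-layer network with a sigmoid hidden layer and squared-error loss; the layer sizes, data, and learning rate are placeholders, not a prescribed implementation:

# Backpropagation sketch: forward pass, output error, chain rule backward, gradient step.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))          # 4 samples, 3 inputs
y = rng.normal(size=(4, 1))          # targets
W1, W2 = rng.normal(size=(3, 5)), rng.normal(size=(5, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.1                             # learning rate

for _ in range(100):
    h = sigmoid(X @ W1)              # hidden activations (forward pass)
    out = h @ W2                     # linear output layer
    err = out - y                    # error at the output layer
    grad_W2 = h.T @ err              # chain rule: gradient for output weights
    grad_W1 = X.T @ ((err @ W2.T) * h * (1 - h))  # error propagated through the sigmoid
    W1 -= lr * grad_W1               # gradient descent weight updates
    W2 -= lr * grad_W2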
2(b) Markov Chain Monte Carlo (MCMC) in Bayesian Learning
MCMC methods are used to sample from complex probability distributions when direct computation is infeasible. Algorithms like Metropolis-Hastings and Gibbs sampling generate samples from the posterior distribution, enabling parameter estimation and model selection in Bayesian learning.
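A minimal Metropolis-Hastings sketch in NumPy, sampling one parameter from an assumed unnormalized posterior; the target density and proposal width are placeholders for illustration only:

# Metropolis-Hastings sketch: random-walk proposals accepted with probability min(1, ratio).
import numpy as np

def unnorm_posterior(theta):
    # Placeholder target: a standard normal density, known only up to a constant.
    return np.exp(-0.5 * theta**2)

rng = np.random.default_rng(0)
theta, samples = 0.0, []
for _ in range(10_000):
    proposal = theta + rng.normal(scale=0.5)           # random-walk proposal
    ratio = unnorm_posterior(proposal) / unnorm_posterior(theta)
    if rng.uniform() < ratio:                          # accept with probability min(1, ratio)
        theta = proposal
    samples.append(theta)

print(np.mean(samples), np.std(samples))               # roughly 0 and 1 for this target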
2(c) Decision Trees vs Neural Networks and SVMs
Decision trees are easy to interpret and fast to train but may overfit. Neural networks handle complex nonlinear patterns but lack interpretability. Support Vector Machines provide strong theoretical guarantees and work well with high-dimensional data but require careful kernel selection.
2(d) Learning Models in Reinforcement Learning
Reinforcement learning models include value-based methods like Q-learning, policy-based methods, and model-based learning. Each approach balances exploration and exploitation differently to maximize cumulative reward.
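The tabular Q-learning update, the canonical value-based method mentioned above, as a short hedged sketch; the state/action counts, reward, and transition are placeholder values:

# Tabular Q-learning update: Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
import numpy as np

n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.9                     # learning rate and discount factor

def q_update(s, a, reward, s_next):
    target = reward + gamma * Q[s_next].max()   # bootstrapped estimate of future return
    Q[s, a] += alpha * (target - Q[s, a])       # move Q(s,a) toward the target

q_update(s=0, a=1, reward=1.0, s_next=2)        # one illustrative transition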
2(e) Ensemble Learning and Model Performance
An ensemble combines multiple models to improve accuracy and stability. Techniques like bagging, boosting, and stacking reduce variance and bias, leading to better generalization than single models.
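A hedged scikit-learn sketch contrasting a single tree with bagged and boosted ensembles on the same synthetic data; the models and scores are illustrative, not benchmark results:

# Ensemble sketch: bagging and boosting versus a single decision tree.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

models = {
    "single tree": DecisionTreeClassifier(random_state=0),
    "bagging": BaggingClassifier(n_estimators=100, random_state=0),   # mainly reduces variance
    "boosting": GradientBoostingClassifier(random_state=0),           # mainly reduces bias
}
for name, model in models.items():
    print(name, cross_val_score(model, X, y, cv=5).mean())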
SECTION C
3(a) Gradient Descent in Backpropagation
Gradient descent is an optimization algorithm that minimizes error by adjusting weights in the direction of the negative gradient of the loss function. In backpropagation, gradients are computed layer-by-layer and used to update weights iteratively.
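The update applied to every weight at each step, with eta the learning rate and L the loss:

\[
w \leftarrow w - \eta \, \frac{\partial L}{\partial w}
\]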
3(b) Applications of MLP and Neural Networks
MLPs are widely used in image recognition, speech processing, medical diagnosis, fraud detection, stock market prediction, and natural language processing due to their ability to model nonlinear relationships.