(SEM V) THEORY EXAMINATION 2022-23 MACHINE LEARNING TECHNIQUES
SECTION A – Short Answer Type Questions (2 Marks each)
(a) Discuss Model Representation of Artificial Neuron.
An artificial neuron is a mathematical model inspired by a biological neuron. It processes input signals to produce an output based on weights, a bias, and an activation function.
Mathematical Model:
y = f\left(\sum_{i=1}^{n} w_i x_i + b\right)
where:
x_i: Input signals
w_i: Weights
b: Bias
f: Activation function (e.g., sigmoid, ReLU)
y: Output
Explanation:
Inputs are multiplied by weights and summed with bias.
The activation function introduces non-linearity.
The neuron "fires" (produces output) if the total input exceeds a threshold.
Example: Used in perceptrons and neural networks for pattern recognition.
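The model above can be sketched in a few lines of pure Python; the weights, bias, and inputs below are made-up illustrative values, and sigmoid is chosen as the activation:

```python
import math

def neuron(inputs, weights, bias):
    """Weighted sum of inputs plus bias, passed through a sigmoid activation."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid squashes the output into (0, 1)

# Example: two inputs with illustrative weights
out = neuron([1.0, 0.5], [0.4, -0.2], bias=0.1)
print(out)  # z = 0.4, so the output is sigmoid(0.4) ≈ 0.599
```

Replacing the sigmoid with a hard threshold on z recovers the classic perceptron.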
(b) Differentiate Between Gradient Descent and Stochastic Gradient Descent.
| Feature | Gradient Descent (GD) | Stochastic Gradient Descent (SGD) |
|---|---|---|
| Data Processing | Uses the entire dataset for one update. | Updates parameters after each sample. |
| Speed | Slower for large datasets. | Faster per update, but updates are noisy. |
| Accuracy | More stable and precise. | May fluctuate around minimum. |
| Memory Requirement | High (needs all samples). | Low (one sample at a time). |
Conclusion:
SGD is preferred for large datasets and online learning, while GD suits small or static datasets.
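The contrast in the table can be seen on a toy one-parameter regression; the data, learning rate, and step counts below are illustrative assumptions, fitting y = w·x with squared-error loss:

```python
import random

# Toy data lying exactly on y = 2x; the goal is to learn w ≈ 2
data = [(x, 2.0 * x) for x in [1.0, 2.0, 3.0, 4.0]]

def batch_gd(steps=100, lr=0.01):
    w = 0.0
    for _ in range(steps):
        # One update uses the gradient averaged over the ENTIRE dataset
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def sgd(steps=100, lr=0.01, seed=0):
    w = 0.0
    rng = random.Random(seed)
    for _ in range(steps):
        x, y = rng.choice(data)          # ONE random sample per update
        grad = 2 * (w * x - y) * x
        w -= lr * grad
    return w

print(batch_gd(), sgd())  # both approach 2.0; SGD follows a noisier path
```

Note that batch GD touches all four samples per update while SGD touches one, which is exactly the memory/speed trade-off in the table.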
SECTION B – Long Answer Type Questions (10 Marks each)
(a) Explain Supervised and Unsupervised Learning Techniques.
1. Supervised Learning:
Input data is labeled, meaning the correct output is known.
The model learns a mapping function f(x) → y.
Examples:
Regression: Predicting house prices.
Classification: Spam email detection.
Algorithms: Linear Regression, Decision Trees, SVM, Neural Networks.
2. Unsupervised Learning:
No labeled outputs; the model finds hidden patterns in data.
Examples:
Clustering: Customer segmentation using K-Means.
Association: Market basket analysis using Apriori algorithm.
Key Difference:
Supervised learning focuses on prediction, while unsupervised focuses on pattern discovery.
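A minimal sketch of the difference, on one-dimensional data (the points, the mean-based threshold classifier, and the 2-means loop are all illustrative simplifications, not standard library routines):

```python
# Supervised: labeled pairs let us fit a simple threshold classifier
labeled = [(1.0, "small"), (1.2, "small"), (5.0, "big"), (5.5, "big")]
threshold = sum(x for x, _ in labeled) / len(labeled)  # crude: split at the mean

def classify(x):
    return "big" if x > threshold else "small"

# Unsupervised: the SAME points without labels; 1-D 2-means discovers the groups
points = [1.0, 1.2, 5.0, 5.5]
c1, c2 = points[0], points[-1]           # initialize centroids at the extremes
for _ in range(10):
    g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
    g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
    c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)

print(classify(4.8), (c1, c2))  # a prediction vs. two discovered cluster centers
```

The supervised half predicts a label for a new point; the unsupervised half only reports structure (two centroids), with no notion of "small" or "big".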
(b) Explain Various Types of Activation Functions with Examples.
Activation functions introduce non-linearity to neural networks, enabling complex pattern learning.
1. Step Function:
f(x) = \begin{cases} 1 & \text{if } x \geq 0 \\ 0 & \text{if } x < 0 \end{cases}
Used in perceptrons.
2. Sigmoid Function:
f(x) = \frac{1}{1 + e^{-x}}
Smooth curve; output between 0 and 1. Used in binary classification.
3. Tanh Function:
f(x) = \tanh(x)
Range = (-1, 1); centered around zero.
4. ReLU (Rectified Linear Unit):
f(x) = \max(0, x)
Computationally efficient for deep networks; mitigates the vanishing-gradient problem.
5. Leaky ReLU:
Allows small negative slope for negative inputs to avoid dead neurons.
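The five functions above translate directly into Python (the 0.01 slope for Leaky ReLU is a common default, used here as an illustrative choice):

```python
import math

def step(x):
    return 1.0 if x >= 0 else 0.0          # hard threshold (perceptron)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))      # output in (0, 1)

def tanh(x):
    return math.tanh(x)                    # output in (-1, 1), zero-centered

def relu(x):
    return max(0.0, x)                     # zero for negatives, identity otherwise

def leaky_relu(x, alpha=0.01):
    return x if x > 0 else alpha * x       # small slope keeps negative units alive

for f in (step, sigmoid, tanh, relu, leaky_relu):
    print(f.__name__, f(-2.0), f(2.0))
```

Evaluating each at -2 and 2 makes the ranges in the text concrete, e.g. ReLU maps -2 to 0 while Leaky ReLU maps it to -0.02.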
SECTION C – Very Long Answer Type Questions (10 Marks each)
(a) Demonstrate K-Nearest Neighbors (KNN) Algorithm for Classification with Example.
Definition:
KNN is an instance-based learning algorithm that classifies a new sample based on the majority label of its K nearest neighbors.
Algorithm Steps:
Choose the number of neighbors K.
Compute distance between test sample and all training samples (e.g., Euclidean distance).
Select K nearest samples.
Assign the most frequent class label among them to the test point.
Example:
If K = 3 and among the nearest points two belong to class “A” and one to “B,” the new point is classified as Class A.
Applications:
Image recognition.
Recommendation systems.
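The algorithm steps above can be sketched with Euclidean distance and majority voting (the training points below are illustrative):

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """train: list of (features, label) pairs; query: a feature tuple."""
    # Step 2: sort training samples by Euclidean distance to the query
    nearest = sorted(train, key=lambda sample: math.dist(sample[0], query))
    # Steps 3-4: take the K closest and return the most frequent label
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]

train = [((1, 1), "A"), ((1, 2), "A"), ((5, 5), "B"), ((6, 5), "B"), ((2, 1), "A")]
print(knn_classify(train, (1.5, 1.5)))  # the three nearest neighbors are all "A"
```

Note there is no training phase at all: the dataset itself is the model, which is why KNN is called instance-based (or "lazy") learning.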
(b) Explain Different Layers Used in Convolutional Neural Network (CNN) with Suitable Examples.
1. Input Layer:
Accepts image data (e.g., 28×28×3 for RGB).
2. Convolutional Layer:
Applies filters to extract features like edges or textures.
Y = X * W + b
Example: Edge detection using 3×3 filter.
3. Activation Layer (ReLU):
Applies non-linearity to feature maps.
4. Pooling Layer:
Reduces spatial dimensions using Max or Average Pooling, retaining important information.
5. Fully Connected Layer:
Combines all extracted features for classification.
6. Output Layer:
Applies softmax activation for multi-class classification (e.g., handwritten digit recognition).
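The convolution and ReLU layers above can be sketched in pure Python; the tiny 4×4 image and the 2×2 edge-detecting filter are illustrative, and (as in most CNN libraries) the operation implemented is cross-correlation:

```python
def conv2d(image, kernel, bias=0.0):
    """Valid 2-D convolution (cross-correlation) with ReLU applied to each output."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            # Slide the filter over the patch and take the weighted sum
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(max(0.0, s + bias))  # ReLU activation layer, fused in
        out.append(row)
    return out

# A vertical-edge detector on an image whose right half is bright
img = [[0, 0, 1, 1]] * 4
edge = [[-1, 1], [-1, 1]]   # responds where intensity jumps left-to-right
print(conv2d(img, edge))    # strongest response in the middle column
```

The feature map is largest exactly where the intensity changes, which is the "edge detection" example mentioned for the convolutional layer; a pooling layer would then shrink this map before the fully connected stage.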