1. Linear Discriminant Analysis:
Linear discriminants, usually encountered through Linear Discriminant Analysis (LDA), form the basis of a dimensionality reduction technique used in machine learning and statistics. It is particularly useful for supervised classification tasks, where the goal is to project data onto a lower-dimensional space while maximizing class separability.
What Is Meant by "Discriminant"?
A discriminant is a function or value that helps distinguish between different classes or
categories in a dataset.
In Linear Discriminant Analysis (LDA), discriminants are linear combinations of features
that maximize the separation between different classes.
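As a quick illustration, a fitted linear discriminant is just a linear combination w · x + b of the input features whose sign separates the two classes. The following is a minimal sketch using scikit-learn; the toy data is hypothetical:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical 2-D, two-class toy data
X = np.array([[1.0, 2.0], [1.5, 1.8], [1.2, 2.2],   # class 0
              [4.0, 5.0], [4.5, 4.8], [4.2, 5.2]])  # class 1
y = np.array([0, 0, 0, 1, 1, 1])

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)

# The discriminant is the linear combination w . x + b;
# for a binary problem, its sign decides the predicted class.
w, b = lda.coef_[0], lda.intercept_[0]
score = X @ w + b
print((score > 0).astype(int))  # same labels as lda.predict(X) here
```

Inspecting `w` shows which features contribute most to the class separation, which is the interpretability benefit discussed below.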
Linear Discriminant Analysis (LDA) is used in machine learning and statistics
primarily for the following reasons:
1. Dimensionality Reduction
LDA reduces the number of features while preserving class information, making it
useful for high-dimensional datasets.
Helps avoid the curse of dimensionality, improving computational efficiency and
reducing overfitting.
2. Classification Improvement
LDA enhances class separability, making classification tasks easier for models like
Logistic regression, SVM, and Naive Bayes.
It projects data onto a lower-dimensional space where classes are more distinguishable.
3. Noise Reduction
By focusing on the features that maximize class separation, LDA filters out irrelevant
variations in the data.
This improves model robustness, especially in datasets with noisy or correlated
features.
4. Better Interpretability
LDA provides a linear transformation that is easy to interpret compared to complex
deep learning models.
The generated linear discriminants can be analyzed to understand which features
contribute most to class separation.
5. Performance Boost in Small Datasets
In contrast to deep learning, LDA can perform well even on small datasets. (Note that when the number of samples is smaller than the number of features, the within-class scatter matrix becomes singular; this is usually handled with shrinkage or a preliminary PCA step, as in Fisherfaces.)
It's particularly useful in medical diagnosis, face recognition (Fisherfaces), and speech
recognition. Fisherfaces: Face Recognition Using Linear Discriminant
Analysis (LDA)
Fisherfaces is a face recognition technique that applies Linear Discriminant Analysis
(LDA) to extract facial features for classification. It was introduced by Belhumeur,
Hespanha, and Kriegman in 1997 as an improvement over Eigenfaces (which uses
PCA).
Applications of LDA:
Face Recognition (e.g., Fisherfaces)
Medical Diagnosis
Financial Predictions
Image and Speech Recognition
Text Classification
Example : Linear Discriminant Analysis Algorithm for Classification
Let’s consider the Iris dataset, a classic dataset for machine learning.
The Iris dataset is one of the most famous datasets in machine learning and statistics. It was
introduced by Ronald Fisher in 1936 as an example of linear discriminant analysis (LDA).
The dataset is widely used for classification tasks and serves as a benchmark for testing
machine learning algorithms.
The Iris dataset contains 150 samples in total, with 50 samples per class (Setosa,
Versicolor, and Virginica).
Here are 10 example samples from the dataset in a tabular format (measurements in cm):

Sepal Length | Sepal Width | Petal Length | Petal Width | Species
5.1 | 3.5 | 1.4 | 0.2 | Setosa
4.9 | 3.0 | 1.4 | 0.2 | Setosa
4.7 | 3.2 | 1.3 | 0.2 | Setosa
4.6 | 3.1 | 1.5 | 0.2 | Setosa
7.0 | 3.2 | 4.7 | 1.4 | Versicolor
6.4 | 3.2 | 4.5 | 1.5 | Versicolor
6.9 | 3.1 | 4.9 | 1.5 | Versicolor
6.3 | 3.3 | 6.0 | 2.5 | Virginica
5.8 | 2.7 | 5.1 | 1.9 | Virginica
7.1 | 3.0 | 5.9 | 2.1 | Virginica
Given the 150 samples of Iris Setosa, Iris Versicolor, and Iris Virginica above, we first apply Linear Discriminant Analysis to project the samples onto linear discriminants that separate the classes, and then train Logistic Regression (a machine learning algorithm) on the transformed data to obtain the final accuracy.
Output :
Accuracy after LDA using Logistic Regression : 100.00%
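The pipeline described above can be sketched as follows with scikit-learn. This is a minimal sketch; the exact accuracy figure depends on the train/test split chosen:

```python
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

# Step 1: reduce 4 features to 2 linear discriminants
# (LDA yields at most n_classes - 1 = 2 components here)
lda = LinearDiscriminantAnalysis(n_components=2)
X_train_lda = lda.fit_transform(X_train, y_train)
X_test_lda = lda.transform(X_test)

# Step 2: classify in the discriminant space
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train_lda, y_train)
acc = accuracy_score(y_test, clf.predict(X_test_lda))
print(f"Accuracy after LDA using Logistic Regression : {acc * 100:.2f}%")
```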
2. The Perceptron Classifier:
The Perceptron is the simplest type of artificial neural network, used for binary
classification problems. It was invented by Frank Rosenblatt in 1958.
The Perceptron Algorithm is a binary classifier that makes predictions based on a
linear decision boundary. The core of the algorithm is the weighted sum of inputs,
which is then passed through an activation function.
o If the sum is greater than 0, it predicts Class 1.
o If the sum is less than or equal to 0, it predicts Class 0.
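The decision rule above can be written out directly. This is a minimal sketch; the weights, bias, and input values are hypothetical:

```python
import numpy as np

def perceptron_predict(x, w, b):
    """Weighted sum of inputs passed through a step activation function."""
    weighted_sum = np.dot(w, x) + b
    return 1 if weighted_sum > 0 else 0  # Class 1 if sum > 0, else Class 0

# Hypothetical weights and bias
w = np.array([0.5, -0.3])
b = 0.1
print(perceptron_predict(np.array([1.0, 1.0]), w, b))   # 0.5 - 0.3 + 0.1 = 0.3 > 0  -> 1
print(perceptron_predict(np.array([-1.0, 1.0]), w, b))  # -0.5 - 0.3 + 0.1 = -0.7 <= 0 -> 0
```

Training consists of adjusting `w` and `b` whenever a prediction is wrong, which is what the learning rule does in the fruit example later in this section.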
What is ANN?
An Artificial Neural Network (ANN) is a machine learning model inspired by the
human brain. It consists of interconnected layers of neurons that process input data,
extract patterns, and make predictions.
ANNs are used for complex tasks like image recognition, speech processing,
medical diagnosis, and self-driving cars.
Structure of an ANN
A typical ANN consists of three types of layers:
1. Input Layer – Takes in raw data (e.g., an image, text, or numbers).
2. Hidden Layers – Perform mathematical operations on the data (feature extraction).
3. Output Layer – Produces the final result (e.g., classification label, probability).
Each neuron (node) in a layer is connected to the next layer and uses weights and
activation functions to process the data.
How is ANN a Machine Learning Model?
Machine Learning (ML) is a broad field where models learn from data to make
predictions. ANN is one such model that learns patterns and relationships from
input data.
In traditional ML models like Logistic Regression, Decision Trees, or SVM, the
model learns based on predefined mathematical functions.
ANN, on the other hand, learns through interconnected neurons that adjust
weights automatically, making it a more flexible and powerful ML model.
When Does ANN Become Deep Learning?
A basic ANN with one hidden layer is considered a shallow neural network and is
part of machine learning.
When an ANN has multiple hidden layers (more than one), it is called a Deep
Neural Network (DNN).
Deep Neural Networks = Deep Learning because they have many layers, which
help in learning complex patterns.
Example : Perceptron Classification Algorithm
Let’s consider a dataset of fruits in which Apple indicates Class 0 and Orange indicates Class 1, with two features: weight and texture (smooth or rough).
We now want to predict whether a new fruit [145, 1] belongs to the Apple or the Orange category by applying the perceptron classification algorithm: we standardize the data and then evaluate the final decision conditions.
In Step 2 we standardize the data (for normalization purposes) using the formula
x' = (x − μ) / σ
where μ is the mean and σ is the standard deviation of each feature. We then substitute the calculated values into this formula to transform the new fruit [145, 1] and decide whether it belongs to the Apple group or the Orange group.
According to the perceptron algorithm's decision conditions, the new fruit [145, 1] belongs to the Orange group: its weighted sum is greater than 0, so the predicted class is 1, which indicates Orange.
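The full workflow (standardize, train, predict) can be sketched as follows. This is a minimal sketch: the training data below is hypothetical, chosen only to illustrate the standardize-then-classify steps, and it assumes texture is encoded as 0 = smooth (apples) and 1 = rough (oranges):

```python
import numpy as np
from sklearn.linear_model import Perceptron
from sklearn.preprocessing import StandardScaler

# Hypothetical fruit data: [weight (g), texture (0 = smooth, 1 = rough)].
# Weights deliberately overlap between classes, so texture carries
# the class information.
X = np.array([[140, 0], [150, 0], [145, 0], [155, 0],   # apples  -> Class 0
              [140, 1], [150, 1], [160, 1], [135, 1]])  # oranges -> Class 1
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

# Step 2: standardize each feature, x' = (x - mean) / std
scaler = StandardScaler()
X_std = scaler.fit_transform(X)

# Train the perceptron on the standardized data
clf = Perceptron(tol=None, random_state=0)
clf.fit(X_std, y)

# The new fruit [145, 1] must be standardized with the same mean/std
new_fruit = scaler.transform([[145, 1]])
print("Orange" if clf.predict(new_fruit)[0] == 1 else "Apple")  # -> Orange
```

Note that the new sample is transformed with the statistics computed on the training data; standardizing it separately would put it on a different scale and break the decision rule.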