Classification Models Overview and Techniques

Introduction to classification problem as extension from regression in machine learning


Classification models:

Introduction
Classification problem
• The goal in classification is to take an input vector x
and to assign it to one of the K discrete classes Ck
where k=1,2,….K.
• In most scenarios, the classes are taken to be disjoint.
• The input space is thereby divided into decision regions
whose boundaries are called decision boundaries or
decision surfaces.
• Datasets whose classes can be separated exactly by
linear decision surfaces are said to be linearly
separable.
Reference Pg: 179
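As a minimal sketch of linear separability, the classic perceptron rule (not from these slides; shown here only as an illustration, with a made-up toy dataset) finds a separating line whenever the two classes are linearly separable:

```python
# Hypothetical sketch: the perceptron rule converges to a separating
# line when the two classes are linearly separable.

def perceptron(points, labels, epochs=100):
    """Labels are +1/-1; returns weights (w0, w1, w2) for w0 + w1*x + w2*y,
    or None if no separating line is found within the epoch budget."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        errors = 0
        for (x, y), t in zip(points, labels):
            activation = w[0] + w[1] * x + w[2] * y
            if t * activation <= 0:          # misclassified point
                w[0] += t
                w[1] += t * x
                w[2] += t * y
                errors += 1
        if errors == 0:                      # every point on the correct side
            return w
    return None

# Toy classes on either side of the line x = 0 (linearly separable).
pts = [(-2, 1), (-1, -1), (1, 2), (2, -1)]
lbl = [-1, -1, 1, 1]
w = perceptron(pts, lbl)
```

The returned weights define a linear decision surface w0 + w1*x + w2*y = 0; on a dataset that is not linearly separable the loop never reaches zero errors and the sketch returns None.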
Inference and decision
• We have broken the classification problem down into
two separate stages, the inference stage in which we
use training data to learn a model for p(Ck|x), and the
subsequent decision stage in which we use these
posterior probabilities to make optimal class
assignments.
• An alternative possibility would be to solve both
problems together and simply learn a function that
maps inputs x directly into decisions. Such a function is
called a discriminant function.

Reference Pg: 42-43
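The two stages above can be sketched in a few lines: an inference stage supplies posterior probabilities p(Ck|x), and the decision stage assigns x to the class with the largest posterior. The posterior values below are made up for illustration.

```python
# Sketch of the decision stage: given posteriors p(Ck|x) from the
# inference stage, pick the class with the highest posterior
# (which minimises the probability of misclassification).

def decide(posteriors):
    """Return the index k of the largest p(Ck|x)."""
    return max(range(len(posteriors)), key=lambda k: posteriors[k])

# Suppose the inference stage produced these posteriors for one input x
# (hypothetical values):
p = [0.1, 0.7, 0.2]        # p(C1|x), p(C2|x), p(C3|x)
chosen = decide(p)         # index 1, i.e. class C2
```

A discriminant function, by contrast, would map x straight to the chosen class label without ever producing the intermediate posteriors.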


Approaches to solve a classification
problem
• There are three approaches available to solve a linearly
separable classification problem. They are
1. Generative models
2. Discriminative models
3. Discriminant functions

Reference Pg: 43
Generative
Models

Reference Pg: 43
Discriminative
Models

Reference Pg: 43
Discriminant
Functions

Reference Pg: 43
Identification of decision boundary
using class density and posterior probability

Reference Pg:44
Probabilistic classification models
• For probabilistic models, the most convenient, in the case of two-
class problems, is the binary representation in which there is a single
target variable t ∈ {0, 1} such that t = 1 represents class C1 and t =
0 represents class C2.
• We can interpret the value of t as the probability that the class is C1,
with the values of probability taking only the extreme values of 0
and 1.
• For K > 2 classes, it is convenient to use a 1-of-K coding scheme
in which t is a vector of length K such that if the class is Cj, then all
elements tk of t are zero except element tj, which takes the value 1.
• For instance, if we have K = 5 classes, then a pattern from class 2
would be given the target vector t = (0, 1, 0, 0, 0)^T
Reference Pg:180
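The 1-of-K coding scheme above can be sketched as a small helper (the function name is ours, chosen for illustration):

```python
# Sketch of the 1-of-K (one-hot) target coding described above.

def one_of_k(class_index, K):
    """Return the length-K target vector for class C_{class_index} (1-based):
    all elements zero except the one for the given class."""
    t = [0] * K
    t[class_index - 1] = 1
    return t

# K = 5 classes, pattern from class 2 -> t = (0, 1, 0, 0, 0)^T
t = one_of_k(2, 5)   # [0, 1, 0, 0, 0]
```

Each element tk can then be read as the probability that the class is Ck, taking only the extreme values 0 and 1.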
Generalized linear models for
classification
• A generalization of the linear regression model in which we transform the
linear function of w using a nonlinear function f(·), so that

y(x) = f(wᵀx + w₀)

• In the machine learning literature f(·) is known as an activation
function, whereas its inverse is called a link function in the statistics
literature.
• The decision surfaces correspond to y(x) = constant, so that
wᵀx + w₀ = constant
and hence the decision surfaces are linear functions of x, even if the
function f(·) is nonlinear. For this reason, the class of models described
above are called generalized linear models.
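A minimal sketch of such a model, taking the logistic sigmoid as the activation function f(·) (the weight values below are made up for illustration):

```python
# Sketch of a generalized linear model y(x) = f(w^T x + w0) with the
# logistic sigmoid as the activation function f.
import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def y(x, w, w0):
    """Nonlinear output, but the decision surface y(x) = const is linear."""
    a = sum(wi * xi for wi, xi in zip(w, x)) + w0   # a = w^T x + w0
    return sigmoid(a)

# Hypothetical weights: the decision surface y(x) = 0.5 is exactly
# w^T x + w0 = 0, i.e. the line x1 = x2.
w, w0 = [1.0, -1.0], 0.0
```

Any point with x1 = x2 gives a = 0 and hence y = 0.5, confirming that the surface where the output crosses the threshold is a linear function of x even though sigmoid(·) is nonlinear.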
