Variational Autoencoders - Fashion MNIST

Team Members:
1CR20IS061 BHARATH KUMAR REDDY
1CR20IS042 [Link] SAI KUMAR REDDY

Project Domain: ML
Abstract
Autoencoders are a type of neural network that can be used to learn efficient
codings of input data. An autoencoder comprises two parts: an encoder network
and a decoder network.

The variational autoencoder builds on this idea to form a deep learning model
that can generate new data samples.

Autoencoders help reduce noise in data by compressing the input data, encoding
it, and then reconstructing it as an output.[1]

Autoencoders also allow you to reduce dimensionality and focus only on the
features of real value.

Our aim is to build an autoencoder to reconstruct images using the
Fashion-MNIST dataset, as sketched below.
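
As an illustrative sketch of the encoder-decoder idea in Keras (the layer
sizes here are our own assumptions, not the project's final architecture):

    import tensorflow as tf
    from tensorflow.keras import layers

    # Encoder: compress a flattened 28x28 image into a small code vector.
    encoder = tf.keras.Sequential([
        layers.Input(shape=(784,)),
        layers.Dense(128, activation="relu"),
        layers.Dense(32, activation="relu"),  # 32-dim code (illustrative)
    ])

    # Decoder: reconstruct the image from the code.
    decoder = tf.keras.Sequential([
        layers.Input(shape=(32,)),
        layers.Dense(128, activation="relu"),
        layers.Dense(784, activation="sigmoid"),  # pixel values in [0, 1]
    ])

    autoencoder = tf.keras.Sequential([encoder, decoder])
    autoencoder.compile(optimizer="adam", loss="binary_crossentropy")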
Base Papers
• Auto-Encoding Variational Bayes by Diederik P. Kingma and Max Welling, Dec
2013.

• Conditional Introspective Variational Autoencoder for Image Synthesis, IEEE,
Jun 2020.
Introduction
The variational autoencoder was proposed by Kingma and Welling in 2013 at the
University of Amsterdam.

A VAE provides a probabilistic manner for describing an observation in latent
space. Thus, rather than building an encoder that outputs a single value to
describe each latent state attribute, we formulate our encoder to describe a
probability distribution for each latent attribute.[2]

VAEs have additional parameters, which gives significant control over how we
model the latent distribution.
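
A minimal sketch of such a probabilistic encoder in Keras: it outputs a mean
and a log-variance per latent dimension, then draws a sample with the
reparameterization trick so the sampling step stays differentiable (the layer
sizes and latent dimension are illustrative assumptions):

    import tensorflow as tf
    from tensorflow.keras import layers

    latent_dim = 2  # illustrative choice

    class Sampling(layers.Layer):
        # Reparameterization trick: z = mean + sigma * epsilon, eps ~ N(0, I).
        def call(self, inputs):
            z_mean, z_log_var = inputs
            eps = tf.random.normal(shape=tf.shape(z_mean))
            return z_mean + tf.exp(0.5 * z_log_var) * eps

    inputs = tf.keras.Input(shape=(784,))
    h = layers.Dense(256, activation="relu")(inputs)
    z_mean = layers.Dense(latent_dim)(h)     # one mean per latent attribute
    z_log_var = layers.Dense(latent_dim)(h)  # one log-variance per attribute
    z = Sampling()([z_mean, z_log_var])
    encoder = tf.keras.Model(inputs, [z_mean, z_log_var, z])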
Problem Statement
To build a variational autoencoder using the Fashion-MNIST dataset, available
through the Keras Datasets API, in either TensorFlow or PyTorch.

The goal is to develop a model capable of generating diverse fashion items
while exploring the dataset's latent space. This project will demonstrate
proficiency in VAEs, deep learning frameworks, and generative modeling for
creative applications in computer vision.
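
Loading and normalizing the dataset through the Keras Datasets API is
straightforward (the preprocessing shown is a typical choice, not one mandated
by the project):

    import tensorflow as tf

    # Fashion-MNIST: 60,000 training and 10,000 test images, 28x28 grayscale.
    (x_train, _), (x_test, _) = tf.keras.datasets.fashion_mnist.load_data()

    # Scale pixels to [0, 1] and flatten; the labels are unused by a VAE.
    x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
    x_test = x_test.reshape(-1, 784).astype("float32") / 255.0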
Scope of the Project
The variational autoencoder is implemented in TensorFlow/PyTorch, which are
end-to-end machine learning platforms.

Concepts such as latent spaces, the encoder-decoder architecture, and the loss
function (including the reconstruction loss and the KL divergence) are
implemented, as sketched below.[5]
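
A sketch of that loss, assuming the probabilistic encoder above, flattened
inputs in [0, 1], and a binary cross-entropy reconstruction term:

    import tensorflow as tf

    def vae_loss(x, x_recon, z_mean, z_log_var):
        # Reconstruction term: mean per-pixel binary cross-entropy,
        # rescaled to a per-image sum over the 784 pixels.
        recon = 784.0 * tf.keras.losses.binary_crossentropy(x, x_recon)

        # KL divergence between N(z_mean, sigma^2) and the N(0, I) prior:
        # KL = -0.5 * sum(1 + log(sigma^2) - mean^2 - sigma^2)
        kl = -0.5 * tf.reduce_sum(
            1.0 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var), axis=-1)

        return tf.reduce_mean(recon + kl)  # average over the batch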

It will be tested using the Fashion-MNIST dataset, which is already available
in the Keras Datasets API.
Tools and Technologies
1. TensorFlow
2. Programming Language: Python
3. Fashion-MNIST dataset (Keras package)
Conclusion
In conclusion, this Variational Autoencoder (VAE) project will showcase the
potential of VAEs in data representation and generative modeling. It will address
challenges such as image recreation and mode collapse, highlighting the model's
adaptability. The VAE's latent space will facilitate data generation and anomaly
detection, with practical applications in diverse domains. This work will underscore
the importance of future research to refine VAEs and overcome existing limitations,
offering promising opportunities for further advancements in the field of deep
learning.
References
[1] Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new
perspectives. 2013.

[2] David M. Blei, Michael I. Jordan, and John W. Paisley. Variational Bayesian inference with stochastic
search. In Proceedings of the 29th International Conference on Machine Learning (ICML-12), pages
1367-1374, 2012.

[3] Yoshua Bengio and Eric Thibodeau-Laufer. Deep generative stochastic networks trainable by
backprop. arXiv preprint arXiv:1306.1091, 2013.

[4] Luc Devroye. Sample-based non-uniform random variate generation. In Proceedings of the 18th
Conference on Winter Simulation, pages 260-265. ACM, 1986.

[5] John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning
and stochastic optimization. Journal of Machine Learning Research, 12:2121-2159, 2011.
