Deep Learning Laboratory Manual
Department of Artificial Intelligence and Machine Learning
Mysore University School of Engineering
University of Mysore
Mysuru, Karnataka
Deep Learning Laboratory
Semester: -
No. of Lecture hours/Week: -    CIE Marks: 50
No. of Tutorial hours/week: -   SEE Marks: 50
Total No. of Lecture hours: -   Exam Hours: 03
L:T:P: -                        Credits: 01
Sl. No.  Title of the Experiment
1 Design a single unit perceptron for classification of a linearly separable binary dataset without
using pre-defined models. Use the Perceptron() from sklearn.
2 Identify the problem with a single unit Perceptron. Classify using OR, AND, and XOR-ed
data and analyze the result.
3 Build an Artificial Neural Network by implementing the Backpropagation algorithm and
test the same using appropriate datasets. Vary the activation functions used and compare
the results.
4 Build a Deep Feed Forward ANN by implementing the Backpropagation algorithm and test
the same using appropriate datasets. Use the number of hidden layers ≥ 4.
5 Design and implement an image classification model to classify a dataset of images using
Deep Feed Forward NN. Record the accuracy corresponding to the number of epochs. Use
the MNIST, CIFAR-10 datasets.
6 Design and implement a CNN model (with 2 and with 4+ layers of convolutions) to classify
multi-category image datasets. Use the MNIST, Fashion MNIST, CIFAR-10 datasets. Set
the No. of Epochs as 5, 10, and 20. Record the accuracy and execution time on CPU and
GPU (Colab). Compare architectures:
a. Base Model
b. Model with L1 Regularization
c. Model with L2 Regularization
d. Model with Dropout
e. Model with both L2 (or L1) and Dropout.
7 Use the concept of Data Augmentation to increase the data size from a single image.
8 Design and implement a CNN model to classify the CIFAR-10 image dataset. Use Data
Augmentation while designing the CNN model. Record accuracy corresponding to the
number of epochs.
9 Implement the standard LeNet-5 CNN architecture model to classify multi-category image
datasets (MNIST, Fashion MNIST) and check the accuracy.
10 Implement the standard VGG-16 & VGG-19 CNN architecture models to classify multi-
category image datasets and check the accuracy.
11 Implement RNN for sentiment analysis on movie reviews.
12 Implement Bidirectional LSTM for sentiment analysis on movie reviews.
Laboratory Outcomes
At the end of the course, the student will be able to:
• Understand and implement basic deep learning models.
• Understand and implement the Backpropagation Algorithm.
• Explore Convolutional Neural Networks (CNNs) and work with advanced CNN architectures.
• Implement Deep Feed Forward Networks, explore Recurrent Neural Networks (RNNs) and LSTMs.
• Leverage GPUs for training deep learning models.
Assessment Details (both CIE and SEE)
The weightage of the Continuous Internal Evaluation (CIE) is 50% and of the Semester End Exam (SEE) is 50%.
A student must obtain a minimum of 40% marks individually in both CIE and SEE to pass. The Practical
Semester End Exam (SEE) is conducted for 100 marks (3 hours duration), and grades are awarded on this
basis.
• Continuous Internal Evaluation (CIE): 75% (35 marks) is based on the weekly evaluation of
laboratory journals/reports after the conduction of every experiment, and 25% (15 marks) is based on
one practical test. The total CIE marks are the sum of marks secured in the above events.
• Semester End Evaluation (SEE): The practical examinations will be conducted as per the
university timetable in a batch-wise manner with a student strength of 10-12 per batch.
– All laboratory experiments are included in the practical examination.
– The instructions printed on the cover page of the answer script must be strictly adhered to by
the examiners.
– The break-up of marks is 15% for the procedure part and 70% for the conduct of the experiment
and results, with the remainder allotted to viva-voce.
– Students can pick one experiment from the question lot prepared by the examiners.
– Change of experiment is allowed only once; in that case, the 15% marks allotted to the procedure
part are reduced to zero.
Web References
• Deep Learning with Python
• TensorFlow Tutorials
• PyTorch Tutorials and Understanding CNNs
Descriptions
The programs can be implemented in Python, and datasets can be taken from standard repositories.
1. Design a single unit perceptron for classification of a linearly
separable binary dataset without using pre-defined models. Use
the Perceptron() from sklearn.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import Perceptron
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
import matplotlib.pyplot as plt

# Generate a synthetic linearly separable dataset
X, y = make_classification(
    n_samples=100, n_features=2, n_informative=2,
    n_redundant=0, n_clusters_per_class=1,
    flip_y=0, random_state=42
)

# Split the dataset into training and testing subsets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Train and evaluate the Perceptron
perceptron = Perceptron(max_iter=1000, eta0=1, random_state=42)
perceptron.fit(X_train, y_train)
accuracy = accuracy_score(y_test, perceptron.predict(X_test))
print(f'Accuracy: {accuracy:.2f}')

# Plot the decision boundary
xx, yy = np.meshgrid(
    np.linspace(X[:, 0].min() - 1, X[:, 0].max() + 1, 100),
    np.linspace(X[:, 1].min() - 1, X[:, 1].max() + 1, 100)
)
Z = perceptron.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.figure(figsize=(10, 6))
plt.contourf(xx, yy, Z, alpha=0.3, cmap='coolwarm')
plt.scatter(X[:, 0], X[:, 1], c=y, edgecolor='k', cmap='coolwarm', marker='o')
plt.title('Perceptron Classification')
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
plt.show()
Output:
2. Identify the problem with a single unit Perceptron. Classify
using OR-, AND-, and XOR-ed data and analyze the result.
import numpy as np
from sklearn.linear_model import Perceptron
from sklearn.metrics import accuracy_score

# Data for AND, OR, XOR gates (labels are 0 and 1)
data = {
    'AND': (np.array([[0, 0], [0, 1], [1, 0], [1, 1]]), np.array([0, 0, 0, 1])),
    'OR':  (np.array([[0, 0], [0, 1], [1, 0], [1, 1]]), np.array([0, 1, 1, 1])),
    'XOR': (np.array([[0, 0], [0, 1], [1, 0], [1, 1]]), np.array([0, 1, 1, 0])),
}

# Classify AND, OR, XOR gates
for gate, (X, y) in data.items():
    # Initialize and train the Perceptron
    perceptron = Perceptron(max_iter=10, eta0=1, random_state=42)
    perceptron.fit(X, y)
    # Make predictions
    y_pred = perceptron.predict(X)
    # Calculate accuracy
    acc = accuracy_score(y, y_pred) * 100
    print(f"{gate} gate accuracy: {acc:.2f}%")
    # Output predictions and true labels for clarity
    print(f"Predictions: {y_pred}")
    print(f"True Labels: {y}")
Output:
AND gate accuracy: 100.00%
Predictions: [0 0 0 1]
True Labels: [0 0 0 1]
OR gate accuracy: 100.00%
Predictions: [0 1 1 1]
True Labels: [0 1 1 1]
XOR gate accuracy: 50.00%
Predictions: [0 0 0 0]
True Labels: [0 1 1 0]
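The XOR failure above is the classic limitation of a single linear unit: no straight line separates the XOR classes. Adding one hidden layer makes the data separable. As an illustrative sketch (not part of the prescribed experiment; the hidden-layer size, solver, and random state here are arbitrary choices), sklearn's MLPClassifier solves XOR:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])  # XOR labels

# One hidden layer supplies the non-linearity that XOR requires.
mlp = MLPClassifier(hidden_layer_sizes=(8,), activation='tanh',
                    solver='lbfgs', random_state=0, max_iter=2000)
mlp.fit(X, y)
print("XOR predictions:", mlp.predict(X))
print("XOR accuracy:", mlp.score(X, y))
```

Comparing this with the Perceptron result above makes the role of the hidden layer concrete.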
3. Build an Artificial Neural Network by implementing the Backpropagation
algorithm and test the same using appropriate datasets.
Vary the activation functions used and compare the results.
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.utils import to_categorical
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Load and preprocess the Iris dataset
iris = load_iris()
X = iris.data
y = iris.target

# One-hot encode labels
y = to_categorical(y)

# Standardize features
scaler = StandardScaler()
X = scaler.fit_transform(X)

# Split dataset into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Build the neural network (five fully connected hidden layers).
# To compare activation functions, replace 'relu' with 'tanh' or 'sigmoid' and re-run.
model = Sequential([
    Dense(64, activation='relu', input_shape=(X_train.shape[1],)),  # Hidden layer 1
    Dense(64, activation='relu'),   # Hidden layer 2
    Dense(64, activation='relu'),   # Hidden layer 3
    Dense(64, activation='relu'),   # Hidden layer 4
    Dense(64, activation='relu'),   # Hidden layer 5
    Dense(3, activation='softmax')  # Output layer with 3 units for 3 classes
])

# Compile the model
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Train the model
history = model.fit(X_train, y_train, epochs=10, batch_size=10, validation_split=0.2, verbose=1)

# Evaluate the model
loss, accuracy = model.evaluate(X_test, y_test)
print(f'Test Loss: {loss:.4f}')
print(f'Test Accuracy: {accuracy * 100:.2f}%')

# Make predictions
predictions = model.predict(X_test)
predicted_classes = np.argmax(predictions, axis=1)
true_classes = np.argmax(y_test, axis=1)
accuracy = tf.reduce_mean(tf.cast(predicted_classes == true_classes, tf.float32))
print(f'Predicted Accuracy: {accuracy.numpy() * 100:.2f}%')
Output:
Epoch 1/10
9/9 4s 38ms/step - accuracy: 0.4640 - loss: 1.0965 - val_accuracy: 0.9524 - val_loss: 1.0140
Epoch 2/10
9/9 0s 11ms/step - accuracy: 0.8771 - loss: 0.9489 - val_accuracy: 0.7143 - val_loss: 0.8705
Epoch 3/10
9/9 0s 12ms/step - accuracy: 0.8318 - loss: 0.7586 - val_accuracy: 0.7143 - val_loss: 0.6709
Epoch 4/10
9/9 0s 15ms/step - accuracy: 0.7989 - loss: 0.5677 - val_accuracy: 0.7619 - val_loss: 0.4985
Epoch 5/10
9/9 0s 11ms/step - accuracy: 0.8308 - loss: 0.3653 - val_accuracy: 0.9048 - val_loss: 0.3965
Epoch 6/10
9/9 0s 13ms/step - accuracy: 0.8579 - loss: 0.3118 - val_accuracy: 0.9048 - val_loss: 0.3752
Epoch 7/10
9/9 0s 11ms/step - accuracy: 0.7998 - loss: 0.2841 - val_accuracy: 0.9048 - val_loss: 0.3447
Epoch 8/10
9/9 0s 12ms/step - accuracy: 0.9444 - loss: 0.1501 - val_accuracy: 0.9524 - val_loss: 0.3688
Epoch 9/10
9/9 0s 11ms/step - accuracy: 0.9317 - loss: 0.1919 - val_accuracy: 0.9524 - val_loss: 0.3074
Epoch 10/10
9/9 0s 17ms/step - accuracy: 0.9239 - loss: 0.1670 - val_accuracy: 0.9524 - val_loss: 0.3382
2/2 0s 25ms/step - accuracy: 1.0000 - loss: 0.0951
Test Loss: 0.0969
Test Accuracy: 100.00%
2/2 0s 88ms/step
Predicted Accuracy: 100.00%
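Program 3 relies on Keras to perform backpropagation automatically. To make the algorithm itself visible, the following minimal sketch (illustrative only; the layer size, learning rate, and epoch count are arbitrary choices) trains a one-hidden-layer sigmoid network on XOR with hand-written gradient updates:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 4 sigmoid units, one sigmoid output unit
W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))
lr = 1.0

losses = []
for epoch in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(np.mean((out - y) ** 2))
    # Backward pass: chain rule applied layer by layer (MSE loss)
    d_out = 2 * (out - y) / len(X) * out * (1 - out)
    dW2 = h.T @ d_out; db2 = d_out.sum(axis=0, keepdims=True)
    d_h = d_out @ W2.T * h * (1 - h)
    dW1 = X.T @ d_h; db1 = d_h.sum(axis=0, keepdims=True)
    # Gradient descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"initial loss {losses[0]:.4f} -> final loss {losses[-1]:.4f}")
```

The same forward/backward/update structure is what GradientTape automates in Program 4.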
4. Build a Deep Feed Forward ANN by implementing the Backpropagation
algorithm and test the same using appropriate datasets.
Use the number of hidden layers ≥ 4.
import tensorflow as tf
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler, OneHotEncoder
import numpy as np

# Load and preprocess the Iris dataset
iris = load_iris()
X = iris.data
y = iris.target
encoder = OneHotEncoder(sparse_output=False)
y = encoder.fit_transform(y.reshape(-1, 1))
scaler = StandardScaler()
X = scaler.fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Deep feed-forward network with four hidden layers
class DeepNN(tf.keras.Model):
    def __init__(self):
        super(DeepNN, self).__init__()
        self.dense1 = tf.keras.layers.Dense(64, activation='relu')
        self.dense2 = tf.keras.layers.Dense(64, activation='relu')
        self.dense3 = tf.keras.layers.Dense(64, activation='relu')
        self.dense4 = tf.keras.layers.Dense(64, activation='relu')
        self.output_layer = tf.keras.layers.Dense(3, activation='softmax')

    def call(self, inputs):
        x = self.dense1(inputs)
        x = self.dense2(x)
        x = self.dense3(x)
        x = self.dense4(x)
        return self.output_layer(x)

model = DeepNN()

def compute_loss(y_true, y_pred):
    return tf.reduce_mean(tf.keras.losses.categorical_crossentropy(y_true, y_pred))

optimizer = tf.keras.optimizers.Adam()

def train_step(X_batch, y_batch):
    with tf.GradientTape() as tape:
        y_pred = model(X_batch, training=True)
        loss = compute_loss(y_batch, y_pred)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    return loss

epochs = 100
batch_size = 10
for epoch in range(epochs):
    for i in range(0, len(X_train), batch_size):
        X_batch = X_train[i:i+batch_size]
        y_batch = y_train[i:i+batch_size]
        loss = train_step(X_batch, y_batch)
    if epoch % 10 == 0:
        print(f'Epoch {epoch}, Loss: {loss.numpy()}')

def evaluate_model(X_test, y_test):
    y_pred = model(X_test, training=False)
    correct_predictions = tf.equal(tf.argmax(y_test, axis=1), tf.argmax(y_pred, axis=1))
    accuracy = tf.reduce_mean(tf.cast(correct_predictions, tf.float32))
    return accuracy.numpy()

accuracy = evaluate_model(X_test, y_test)
print(f'Test Accuracy: {accuracy * 100:.2f}%')
Output:
Epoch 0, Loss: 0.9263695478439331
Epoch 10, Loss: 0.06529425084590912
Epoch 20, Loss: 0.00856734998524189
Epoch 30, Loss: 0.0026814767625182867
Epoch 40, Loss: 0.0012963423505425453
Epoch 50, Loss: 0.0006400287384167314
Epoch 60, Loss: 0.0003800290869548917
Epoch 70, Loss: 0.00228338735178113
Epoch 80, Loss: 0.0003549569519236684
Epoch 90, Loss: 0.00012473043170757592
Test Accuracy: 100.00%
5. Design and implement an image classification model to classify
a dataset of images using Deep Feed Forward NN. Record
the accuracy corresponding to the number of epochs. Use the
MNIST, CIFAR-10 datasets.
import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.utils import to_categorical

# Load and preprocess the MNIST dataset
(X_train, y_train), (X_test, y_test) = mnist.load_data()

# Normalize the input data
X_train = X_train.astype('float32') / 255.0
X_test = X_test.astype('float32') / 255.0

# Convert labels to one-hot encoding
y_train = to_categorical(y_train, num_classes=10)
y_test = to_categorical(y_test, num_classes=10)

# Define the model
model = Sequential([
    Flatten(input_shape=(28, 28)),   # Flatten the input images
    Dense(128, activation='relu'),   # First hidden layer
    Dense(256, activation='relu'),   # Second hidden layer
    Dense(10, activation='softmax')  # Output layer
])

# Compile the model
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# Train the model
model.fit(X_train, y_train, epochs=5, batch_size=64, validation_split=0.2)

# Evaluate the model on the test set
test_loss, test_accuracy = model.evaluate(X_test, y_test)
print(f'Test Loss: {test_loss:.4f}')
print(f'Test Accuracy: {test_accuracy:.2f}')
Output:
Epoch 1/5
750/750 7s 7ms/step - accuracy: 0.8480 - loss: 0.5174 - val_accuracy: 0.9487 - val_loss: 0.1753
Epoch 2/5
750/750 6s 8ms/step - accuracy: 0.9632 - loss: 0.1192 - val_accuracy: 0.9658 - val_loss: 0.1092
Epoch 3/5
750/750 4s 5ms/step - accuracy: 0.9764 - loss: 0.0772 - val_accuracy: 0.9675 - val_loss: 0.1053
Epoch 4/5
750/750 6s 6ms/step - accuracy: 0.9829 - loss: 0.0537 - val_accuracy: 0.9693 - val_loss: 0.1095
Epoch 5/5
750/750 4s 5ms/step - accuracy: 0.9889 - loss: 0.0379 - val_accuracy: 0.9743 - val_loss: 0.0931
313/313 1s 2ms/step - accuracy: 0.9737 - loss: 0.0892
Test Loss: 0.0773
Test Accuracy: 0.98
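The task asks to record the accuracy corresponding to the number of epochs. Keras stores these values in `history.history`; a small helper (the function name is our own, and it works on any such dict) can tabulate them. The demo values below are copied from the training log above:

```python
def tabulate_accuracy(history_dict):
    """Format a Keras-style history dict as an epoch/accuracy table."""
    rows = ["epoch  accuracy  val_accuracy"]
    for epoch, (a, v) in enumerate(zip(history_dict["accuracy"],
                                       history_dict["val_accuracy"]), start=1):
        rows.append(f"{epoch:>5}  {a:>8.4f}  {v:>12.4f}")
    return "\n".join(rows)

# Per-epoch values taken from the run recorded above
demo_history = {"accuracy": [0.8480, 0.9632, 0.9764, 0.9829, 0.9889],
                "val_accuracy": [0.9487, 0.9658, 0.9675, 0.9693, 0.9743]}
print(tabulate_accuracy(demo_history))
```

In the actual experiment, pass `history.history` returned by `model.fit` instead of the hand-copied dict.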
6. Design and implement a CNN model (with 2 and with 4+ layers
of convolutions) to classify multi-category image datasets.
Use the MNIST, Fashion MNIST, CIFAR-10 datasets. Set the
No. of Epochs as 5, 10, and 20, making the necessary changes
wherever required. Record the accuracy corresponding to the
number of epochs, the time required to run the program on
CPU as well as on GPU in Colab, and the test accuracy
corresponding to the following architectures:
• Base Model
• Model with L1 Regularization
• Model with L2 Regularization
• Model with Dropout
• Model with both L2 (or L1) and Dropout.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.datasets import fashion_mnist
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.regularizers import l1, l2

# Load and preprocess the dataset
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
train_images = train_images.reshape((60000, 28, 28, 1)).astype('float32') / 255
test_images = test_images.reshape((10000, 28, 28, 1)).astype('float32') / 255
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)

# Function to build the CNN model (four convolutional layers)
def build_model(regularizer=None, dropout_rate=None):
    model = models.Sequential()
    model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1),
                            kernel_regularizer=regularizer))
    model.add(layers.Conv2D(64, (3, 3), activation='relu', kernel_regularizer=regularizer))
    model.add(layers.MaxPooling2D((2, 2)))
    if dropout_rate:
        model.add(layers.Dropout(dropout_rate))  # Dropout layer if specified
    model.add(layers.Conv2D(128, (3, 3), activation='relu', kernel_regularizer=regularizer))
    model.add(layers.Conv2D(128, (3, 3), activation='relu', kernel_regularizer=regularizer))
    model.add(layers.MaxPooling2D((2, 2)))
    if dropout_rate:
        model.add(layers.Dropout(dropout_rate))  # Dropout layer if specified
    model.add(layers.Flatten())
    model.add(layers.Dense(128, activation='relu', kernel_regularizer=regularizer))
    if dropout_rate:
        model.add(layers.Dropout(dropout_rate))  # Dropout layer if specified
    model.add(layers.Dense(10, activation='softmax'))
    return model

# Function to compile, train and evaluate a model
def compile_and_train(model, model_name):
    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
    model.fit(train_images, train_labels, epochs=5, batch_size=64, validation_split=0.2)
    test_loss, test_acc = model.evaluate(test_images, test_labels)
    print(f"{model_name} Test Accuracy: {test_acc:.4f}")

# Base Model (No Regularization, No Dropout)
print("\nTraining Base Model...")
base_model = build_model()
compile_and_train(base_model, "Base Model")

# L1 Regularization Model
print("\nTraining L1 Regularization Model...")
l1_model = build_model(regularizer=l1(0.001))
compile_and_train(l1_model, "L1 Regularization Model")

# L2 Regularization Model
print("\nTraining L2 Regularization Model...")
l2_model = build_model(regularizer=l2(0.001))
compile_and_train(l2_model, "L2 Regularization Model")

# Dropout Model
print("\nTraining Dropout Model...")
dropout_model = build_model(dropout_rate=0.5)
compile_and_train(dropout_model, "Dropout Model")

# L2 Regularization + Dropout Model (the fifth architecture in the task)
print("\nTraining L2 + Dropout Model...")
l2_dropout_model = build_model(regularizer=l2(0.001), dropout_rate=0.5)
compile_and_train(l2_dropout_model, "L2 + Dropout Model")
Output:
Downloading data from [Link]
[Link]
29515/29515 0s 0us/step
Downloading data from [Link]
[Link]
26421880/26421880 0s 0us/step
Downloading data from [Link]
[Link]
5148/5148 0s 0us/step
Downloading data from [Link]
[Link]
4422102/4422102 0s 0us/step
Training Base Model...
/usr/local/lib/python3.11/dist-packages/keras/src/layers/convolutional/base_conv.py:107:
UserWarning: Do not pass an ‘input_shape‘/‘input_dim‘ argument to a layer.
When using Sequential models, prefer using an ‘Input(shape)‘
object as the first layer in the model instead.
super().__init__(activity_regularizer=activity_regularizer, **kwargs)
750/750 244s 320ms/step - accuracy: 0.7415 - loss: 0.6963 -
val_accuracy: 0.8795 - val_loss: 0.3296
313/313 13s 42ms/step - accuracy: 0.8837 - loss: 0.3387
Base Model Test Accuracy: 0.8788
Training L1 Regularization Model...
750/750 247s 326ms/step - accuracy: 0.6251 - loss: 3.3622 -
val_accuracy: 0.7746 - val_loss: 0.9674
313/313 14s 43ms/step - accuracy: 0.7713 - loss: 0.9884
L1 Regularization Model Test Accuracy: 0.7699
Training L2 Regularization Model...
750/750 241s 316ms/step - accuracy: 0.7384 - loss: 0.9840 -
val_accuracy: 0.8598 - val_loss: 0.5422
313/313 13s 43ms/step - accuracy: 0.8650 - loss: 0.5530
L2 Regularization Model Test Accuracy: 0.8613
Training Dropout Model...
750/750 246s 323ms/step - accuracy: 0.6357 - loss: 0.9961 -
val_accuracy: 0.8588 - val_loss: 0.3955
313/313 13s 43ms/step - accuracy: 0.8605 - loss: 0.4070
Dropout Model Test Accuracy: 0.8580
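The task also asks for execution time on CPU versus GPU. The timing pattern can be sketched with `time.perf_counter`; here a NumPy matrix multiplication stands in for the training call so the sketch is self-contained. In Colab you would instead wrap `compile_and_train(...)` in the same pattern, once under `tf.device('/CPU:0')` and once under `tf.device('/GPU:0')`:

```python
import time
import numpy as np

def time_workload(fn, repeats=3):
    """Run fn several times and return the best wall-clock time in seconds."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - start)
    return best

# Stand-in workload for a training step (illustrative only)
a = np.random.rand(400, 400)
elapsed = time_workload(lambda: a @ a)
print(f"elapsed: {elapsed:.6f} s")
```

Taking the best of several repeats reduces the influence of one-off startup costs on the recorded time.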
7. Use the concept of Data Augmentation to increase the data size
from a single image.
import cv2
import numpy as np
import albumentations as A
import os

def augment_image(image_path, output_folder, num_augmented=10):
    """
    Applies data augmentation techniques to generate multiple variations of an image.

    Parameters:
    - image_path: Path to the input image.
    - output_folder: Folder where augmented images will be saved.
    - num_augmented: Number of augmented images to generate.
    """
    if not os.path.exists(output_folder):
        os.makedirs(output_folder)

    # Read the image and convert from BGR (OpenCV default) to RGB
    image = cv2.imread(image_path)
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

    # Composition of random transforms, each applied with probability p
    transform = A.Compose([
        A.HorizontalFlip(p=0.5),
        A.VerticalFlip(p=0.3),
        A.Rotate(limit=30, p=0.5),
        A.RandomBrightnessContrast(p=0.5),
        A.GaussianBlur(blur_limit=(3, 7), p=0.3),
        A.GaussNoise(var_limit=(10.0, 50.0), p=0.3),  # newer albumentations versions rename this argument
    ])

    for i in range(num_augmented):
        augmented = transform(image=image)
        augmented_image = augmented["image"]
        save_path = os.path.join(output_folder, f"augmented_{i}.jpg")
        cv2.imwrite(save_path, cv2.cvtColor(augmented_image, cv2.COLOR_RGB2BGR))

    print(f"{num_augmented} augmented images saved in {output_folder}")

# Example Usage
image_path = "input_image.jpg"  # Change this to the actual image path
output_folder = "augmented_images"
augment_image(image_path, output_folder, num_augmented=10)
Output:
10 augmented images saved in augmented_images
<ipython-input-6-548755f45b14>:28: UserWarning: Argument(s) 'var_limit' are not valid for transform
A.GaussNoise(var_limit=(10.0, 50.0), p=0.3),
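The augmentation idea can also be demonstrated with nothing beyond NumPy: simple geometric transforms (flips and 90° rotations) already turn one image into several distinct training samples. A minimal sketch (the helper name is our own):

```python
import numpy as np

def basic_augmentations(image):
    """Return a list of geometric variants of a single H x W (x C) image array."""
    return [
        image,                 # original
        np.fliplr(image),      # horizontal flip
        np.flipud(image),      # vertical flip
        np.rot90(image, k=1),  # rotate 90 degrees counter-clockwise
        np.rot90(image, k=2),  # rotate 180 degrees
        np.rot90(image, k=3),  # rotate 270 degrees
    ]

# A tiny 2x2 "image" makes each transform easy to verify by eye
img = np.array([[1, 2],
                [3, 4]])
variants = basic_augmentations(img)
print(len(variants), "variants from one image")  # → 6 variants from one image
```

Library pipelines such as albumentations add photometric transforms (brightness, blur, noise) on top of these geometric ones.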
8. Design and implement a CNN model to classify the CIFAR-10
image dataset. Use Data Augmentation while designing the CNN
model. Record accuracy corresponding to the number of epochs.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import matplotlib.pyplot as plt
import numpy as np

# Load the CIFAR-10 dataset
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()

# Normalize pixel values (0 to 255 -> 0 to 1)
x_train, x_test = x_train / 255.0, x_test / 255.0

# Convert labels to categorical format
y_train = tf.keras.utils.to_categorical(y_train, 10)
y_test = tf.keras.utils.to_categorical(y_test, 10)

# Data Augmentation
datagen = ImageDataGenerator(
    rotation_range=15,
    width_shift_range=0.1,
    height_shift_range=0.1,
    horizontal_flip=True,
    zoom_range=0.1
)
datagen.fit(x_train)

# Define the CNN model
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', padding='same', input_shape=(32, 32, 3)),
    layers.Conv2D(32, (3, 3), activation='relu', padding='same'),
    layers.MaxPooling2D((2, 2)),
    layers.Dropout(0.25),

    layers.Conv2D(64, (3, 3), activation='relu', padding='same'),
    layers.Conv2D(64, (3, 3), activation='relu', padding='same'),
    layers.MaxPooling2D((2, 2)),
    layers.Dropout(0.25),

    layers.Conv2D(128, (3, 3), activation='relu', padding='same'),
    layers.Conv2D(128, (3, 3), activation='relu', padding='same'),
    layers.MaxPooling2D((2, 2)),
    layers.Dropout(0.25),

    layers.Flatten(),
    layers.Dense(512, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(10, activation='softmax')
])

# Compile the model
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Train the model and record accuracy
epochs = 30
history = model.fit(datagen.flow(x_train, y_train, batch_size=64),
                    validation_data=(x_test, y_test),
                    epochs=epochs,
                    verbose=1)

# Plot accuracy vs. epochs
plt.plot(range(1, epochs+1), history.history['accuracy'], label='Training Accuracy')
plt.plot(range(1, epochs+1), history.history['val_accuracy'], label='Validation Accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.title('CNN Model Accuracy on CIFAR-10')
plt.show()

# Evaluate on test data
test_loss, test_acc = model.evaluate(x_test, y_test, verbose=2)
print(f"Final Test Accuracy: {test_acc:.4f}")
Output:
Epoch 1/2
782/782 46s 50ms/step - accuracy: 0.2630 - loss: 1.9741 - val_accuracy: 0.4860 - val_loss: 1.3860
Epoch 2/2
782/782 35s 45ms/step - accuracy: 0.4618 - loss: 1.4634 - val_accuracy: 0.5690 - val_loss: 1.1867
313/313 - 1s - 3ms/step - accuracy: 0.5690 - loss: 1.1867
Final Test Accuracy: 0.5690
9. Implement the standard LeNet-5 CNN architecture model to
classify multi-category image datasets (MNIST, Fashion MNIST)
and check the accuracy.
import tensorflow as tf
from tensorflow.keras import layers, models
import matplotlib.pyplot as plt

# Load MNIST and Fashion-MNIST datasets
datasets = {
    "MNIST": tf.keras.datasets.mnist.load_data(),
    "Fashion-MNIST": tf.keras.datasets.fashion_mnist.load_data()
}

for dataset_name, (data_train, data_test) in datasets.items():
    (x_train, y_train), (x_test, y_test) = data_train, data_test

    # Normalize pixel values (0 to 255 -> 0 to 1)
    x_train, x_test = x_train / 255.0, x_test / 255.0

    # Reshape data to fit CNN input format (28x28 images with 1 channel)
    x_train = x_train.reshape(-1, 28, 28, 1)
    x_test = x_test.reshape(-1, 28, 28, 1)

    # Convert labels to categorical format (one-hot encoding)
    y_train = tf.keras.utils.to_categorical(y_train, 10)
    y_test = tf.keras.utils.to_categorical(y_test, 10)

    # Define LeNet-5 model
    model = models.Sequential([
        layers.Conv2D(6, (5, 5), activation='tanh', padding='same', input_shape=(28, 28, 1)),
        layers.AveragePooling2D((2, 2)),
        layers.Conv2D(16, (5, 5), activation='tanh'),
        layers.AveragePooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(120, activation='tanh'),
        layers.Dense(84, activation='tanh'),
        layers.Dense(10, activation='softmax')  # Output layer with 10 classes
    ])

    # Compile the model
    model.compile(optimizer='adam',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])

    # Train the model
    history = model.fit(x_train, y_train, epochs=15, batch_size=128,
                        validation_data=(x_test, y_test), verbose=1)

    # Evaluate accuracy
    test_loss, test_acc = model.evaluate(x_test, y_test, verbose=2)
    print(f"\n{dataset_name} Test Accuracy: {test_acc:.4f}\n")

    # Plot accuracy vs. epochs
    plt.plot(history.history['accuracy'], label='Training Accuracy')
    plt.plot(history.history['val_accuracy'], label='Validation Accuracy')
    plt.xlabel('Epochs')
    plt.ylabel('Accuracy')
    plt.legend()
    plt.title(f'LeNet-5 Accuracy on {dataset_name}')
    plt.show()
Output:
Downloading data from [Link]
11490434/11490434 0s 0us/step
Downloading data from [Link]
29515/29515 0s 0us/step
Downloading data from [Link]
26421880/26421880 0s 0us/step
Downloading data from [Link]
5148/5148 0s 1us/step
Downloading data from [Link]
4422102/4422102 0s 0us/step
Epoch 1/2
469/469 8s 10ms/step - accuracy: 0.8219 - loss: 0.6148 - val_accuracy: 0.9535 - val_loss: 0.1498
Epoch 2/2
469/469 6s 4ms/step - accuracy: 0.9595 - loss: 0.1369 - val_accuracy: 0.9716 - val_loss: 0.0925
313/313 - 1s - 4ms/step - accuracy: 0.9716 - loss: 0.0925
MNIST Test Accuracy: 0.9716
Downloading data from [Link]
11490434/11490434 0s 0us/step
Downloading data from [Link]
29515/29515 0s 0us/step
Downloading data from [Link]
26421880/26421880 0s 0us/step
Downloading data from [Link]
5148/5148 0s 1us/step
Downloading data from [Link]
4422102/4422102 0s 0us/step
Epoch 1/2
469/469 7s 8ms/step - accuracy: 0.7142 - loss: 0.8074 - val_accuracy: 0.8270 - val_loss: 0.4756
Epoch 2/2
469/469 2s 5ms/step - accuracy: 0.8460 - loss: 0.4266 - val_accuracy: 0.8492 - val_loss: 0.4126
313/313 - 1s - 3ms/step - accuracy: 0.8492 - loss: 0.4126
Fashion-MNIST Test Accuracy: 0.8492
10. Implement the standard VGG-16 & VGG-19 CNN architecture
models to classify multi-category image datasets and check the
accuracy.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16, VGG19
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import matplotlib.pyplot as plt

# Load CIFAR-10 dataset (can be replaced with MNIST/Fashion-MNIST)
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()

# Normalize pixel values (0 to 255 -> 0 to 1)
x_train, x_test = x_train / 255.0, x_test / 255.0

# Convert labels to categorical format
y_train = tf.keras.utils.to_categorical(y_train, 10)
y_test = tf.keras.utils.to_categorical(y_test, 10)

# Data Augmentation
datagen = ImageDataGenerator(
    rotation_range=15,
    width_shift_range=0.1,
    height_shift_range=0.1,
    horizontal_flip=True,
    zoom_range=0.1
)
datagen.fit(x_train)

# Function to build a VGG model (trained from scratch: weights=None)
def build_vgg_model(vgg_type="VGG16"):
    if vgg_type == "VGG16":
        base_model = VGG16(weights=None, include_top=False, input_shape=(32, 32, 3))
    else:
        base_model = VGG19(weights=None, include_top=False, input_shape=(32, 32, 3))
    model = models.Sequential([
        base_model,
        layers.Flatten(),
        layers.Dense(512, activation='relu'),
        layers.Dropout(0.5),
        layers.Dense(10, activation='softmax')
    ])
    model.compile(optimizer='adam',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model

# Train and evaluate both VGG models
for vgg_type in ["VGG16", "VGG19"]:
    print(f"\nTraining {vgg_type} Model...\n")
    model = build_vgg_model(vgg_type)
    history = model.fit(datagen.flow(x_train, y_train, batch_size=64),
                        validation_data=(x_test, y_test),
                        epochs=20, verbose=1)

    # Evaluate test accuracy
    test_loss, test_acc = model.evaluate(x_test, y_test, verbose=2)
    print(f"\n{vgg_type} Test Accuracy: {test_acc:.4f}\n")

    # Plot accuracy vs. epochs
    plt.plot(history.history['accuracy'], label='Training Accuracy')
    plt.plot(history.history['val_accuracy'], label='Validation Accuracy')
    plt.xlabel('Epochs')
    plt.ylabel('Accuracy')
    plt.legend()
    plt.title(f'Accuracy of {vgg_type} on CIFAR-10')
    plt.show()
Output:
Training VGG16 Model...
Epoch 1/2
/usr/local/lib/python3.11/dist-packages/keras/src/trainers/data_adapters/py_dataset_adapter.py:121:
self._warn_if_super_not_called()
782/782 67s 71ms/step - accuracy: 0.0989 - loss: 2.3029 - val_accuracy: 0.1000 - val_loss: 2.3027
Epoch 2/2
782/782 50s 63ms/step - accuracy: 0.0995 - loss: 2.3028 - val_accuracy: 0.1000 - val_loss: 2.3026
313/313 - 2s - 8ms/step - accuracy: 0.1000 - loss: 2.3026
VGG16 Test Accuracy: 0.1000
Training VGG19 Model...
Epoch 1/2
782/782 71s 79ms/step - accuracy: 0.1000 - loss: 2.3046 - val_accuracy: 0.1000 - val_loss: 2.3026
Epoch 2/2
782/782 56s 72ms/step - accuracy: 0.0973 - loss: 2.3028 - val_accuracy: 0.1000 - val_loss: 2.3026
313/313 - 3s - 10ms/step - accuracy: 0.1000 - loss: 2.3026
VGG19 Test Accuracy: 0.1000
11. Implement RNN for sentiment analysis on movie reviews.
import tensorflow as tf
from tensorflow.keras import layers, models, preprocessing
import matplotlib.pyplot as plt

# Load IMDb dataset
vocab_size = 10000  # Limit vocabulary size
max_length = 200    # Pad sequences to this length
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.imdb.load_data(num_words=vocab_size)

# Pad sequences to ensure uniform length
x_train = preprocessing.sequence.pad_sequences(x_train, maxlen=max_length)
x_test = preprocessing.sequence.pad_sequences(x_test, maxlen=max_length)

# Define RNN model for sentiment analysis
model = models.Sequential([
    layers.Embedding(input_dim=vocab_size, output_dim=128, input_length=max_length),
    layers.SimpleRNN(64, activation='tanh', return_sequences=True),
    layers.SimpleRNN(32, activation='tanh'),
    layers.Dense(1, activation='sigmoid')  # Binary classification (positive/negative sentiment)
])

# Compile the model
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])

# Train the model
history = model.fit(x_train, y_train, epochs=10, batch_size=64,
                    validation_data=(x_test, y_test), verbose=1)

# Evaluate model performance
test_loss, test_acc = model.evaluate(x_test, y_test, verbose=2)
print(f"\nTest Accuracy: {test_acc:.4f}\n")

# Plot accuracy vs. epochs
plt.plot(history.history['accuracy'], label='Training Accuracy')
plt.plot(history.history['val_accuracy'], label='Validation Accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.title('RNN Accuracy on IMDb Sentiment Analysis')
plt.show()
Output:
Downloading data from [Link]
17464789/17464789 0s 0us/step
Epoch 1/2
/usr/local/lib/python3.11/dist-packages/keras/src/layers/core/embedding.py: UserWarning: Argumen
warnings.warn(
391/391 21s 44ms/step - accuracy: 0.6766 - loss: 0.5675 - val_accuracy: 0.7679 - val_loss: 0.5023
Epoch 2/2
391/391 16s 37ms/step - accuracy: 0.8592 - loss: 0.3394 - val_accuracy: 0.6352 - val_loss: 0.6844
782/782 - 7s - 9ms/step - accuracy: 0.6352 - loss: 0.6844
Test Accuracy: 0.6352
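Both recurrent programs depend on pad_sequences to give every review the same length. Its default behaviour (keep the last maxlen tokens, left-pad short sequences with zeros) can be mimicked in plain Python to see exactly what the RNN receives:

```python
def pad_sequence(seq, maxlen, value=0):
    """Mimic the Keras pad_sequences defaults for one sequence:
    keep the last maxlen items, then left-pad with `value`."""
    truncated = seq[-maxlen:]
    return [value] * (maxlen - len(truncated)) + truncated

print(pad_sequence([5, 7, 9], maxlen=5))           # → [0, 0, 5, 7, 9]
print(pad_sequence([1, 2, 3, 4, 5, 6], maxlen=4))  # → [3, 4, 5, 6]
```

Left-padding keeps the informative tokens at the end of the sequence, which is where a forward-running RNN's final state is most influenced.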
12. Implement Bidirectional LSTM for sentiment analysis on movie
reviews.
import tensorflow as tf
from tensorflow.keras import layers, models, preprocessing
import matplotlib.pyplot as plt

# Load IMDb dataset
vocab_size = 10000  # Limit vocabulary size
max_length = 200    # Pad sequences to this length
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.imdb.load_data(num_words=vocab_size)

# Pad sequences to ensure uniform length
x_train = preprocessing.sequence.pad_sequences(x_train, maxlen=max_length)
x_test = preprocessing.sequence.pad_sequences(x_test, maxlen=max_length)

# Define Bidirectional LSTM model
model = models.Sequential([
    layers.Embedding(input_dim=vocab_size, output_dim=128, input_length=max_length),
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),  # First LSTM layer
    layers.Bidirectional(layers.LSTM(32)),                         # Second LSTM layer
    layers.Dense(1, activation='sigmoid')  # Binary classification (positive/negative sentiment)
])

# Compile the model
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])

# Train the model
history = model.fit(x_train, y_train, epochs=10, batch_size=64,
                    validation_data=(x_test, y_test), verbose=1)

# Evaluate model performance
test_loss, test_acc = model.evaluate(x_test, y_test, verbose=2)
print(f"\nTest Accuracy: {test_acc:.4f}\n")

# Plot accuracy vs. epochs
plt.plot(history.history['accuracy'], label='Training Accuracy')
plt.plot(history.history['val_accuracy'], label='Validation Accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.title('Bidirectional LSTM Accuracy on IMDb Sentiment Analysis')
plt.show()
Output:
Epoch 1/2
391/391 24s 44ms/step - accuracy: 0.7269 - loss: 0.5115 - val_accuracy: 0.8709 - val_loss: 0.3074
Epoch 2/2
391/391 17s 44ms/step - accuracy: 0.9156 - loss: 0.2268 - val_accuracy: 0.8672 - val_loss: 0.3142
782/782 - 10s - 13ms/step - accuracy: 0.8672 - loss: 0.3142
Test Accuracy: 0.8672