ESWAR COLLEGE OF ENGINEERING: NARASARAOPET
Approved by AICTE, New Delhi, Affiliated to JNTUK, Kakinada
Sponsored by Shaik Dada Saheb Charitable Trust, Narasaraopet.
Kesanupalli Village, Narasaraopet – 522 601, Guntur Dist. A.P.
Phone No.: 9121214708
Email ID: principal@[Link], eswarcollegeofengg@[Link]
Web: [Link]
DEEP LEARNING LAB MANUAL
List of Experiments:
1. Implement the multilayer perceptron algorithm for MNIST handwritten digit classification.
2. Design a neural network for classifying movie reviews (binary classification) using the IMDB dataset.
3. Design a neural network for classifying newswires (multi-class classification) using the Reuters dataset.
4. Design a neural network for predicting house prices using the Boston Housing Price dataset.
5. Build a convolutional neural network for MNIST handwritten digit classification.
6. Build a convolutional neural network for simple image (dogs and cats) classification.
7. Use a pre-trained convolutional neural network (VGG16) for image classification.
8. Implement one-hot encoding of words or characters.
9. Implement word embeddings for the IMDB dataset.
10. Implement a recurrent neural network for the IMDB movie review classification problem.
1) Implement multilayer perceptron algorithm for MNIST Handwritten Digit Classification
PROGRAM
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import classification_report, confusion_matrix

# Step 1: Load the MNIST Dataset
mnist = fetch_openml('mnist_784', version=1, as_frame=False)
X, y = mnist["data"], mnist["target"]

# Step 2: Preprocess the Data
# Split the dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Scale the data to have mean 0 and variance 1
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# Step 3: Build the MLP Model
# Define the MLP classifier
mlp = MLPClassifier(hidden_layer_sizes=(50,), max_iter=20, alpha=1e-4,
                    solver='sgd', verbose=10, random_state=1,
                    learning_rate_init=.1)

# Step 4: Train the Model
mlp.fit(X_train, y_train)

# Evaluate the model
y_pred = mlp.predict(X_test)

# Print evaluation results
print("Confusion Matrix:\n", confusion_matrix(y_test, y_pred))
print("\nClassification Report:\n", classification_report(y_test, y_pred))

# Plot a few test samples with their predictions
fig, axes = plt.subplots(2, 5, figsize=(10, 5))
for i, ax in enumerate(axes.ravel()):
    ax.imshow(X_test[i].reshape(28, 28), cmap='gray')
    ax.set_title(f"True: {y_test[i]}\nPred: {y_pred[i]}")
    ax.axis('off')
plt.show()
OUTPUT
Iteration 1, loss = 0.30649397
Iteration 2, loss = 0.21912633
Iteration 3, loss = 0.27769097
Iteration 4, loss = 0.24003107
Iteration 5, loss = 0.24941977
Iteration 6, loss = 0.25600161
Iteration 7, loss = 0.40244613
Iteration 8, loss = 0.36127674
Iteration 9, loss = 0.48514031
Iteration 10, loss = 0.44454568
Iteration 11, loss = 0.62265687
Iteration 12, loss = 0.55156545
Iteration 13, loss = 0.40469507
Training loss did not improve more than tol=0.000100 for 10 consecutive
epochs. Stopping.
Confusion Matrix:
[[1305 0 7 0 0 3 8 3 13 4]
[ 1 1566 4 10 2 1 1 6 8 1]
[ 3 6 1294 12 15 2 12 13 20 3]
[ 0 4 8 1367 0 7 2 15 16 14]
[ 3 3 3 2 1233 0 5 5 5 36]
[ 4 2 3 69 6 1087 28 8 49 17]
[ 7 2 2 0 10 5 1365 0 5 0]
[ 4 7 14 5 3 1 0 1442 3 24]
[ 2 11 12 20 4 9 12 8 1272 7]
[ 5 6 1 3 15 0 1 27 12 1350]]
Classification Report:
precision recall f1-score support
0 0.98 0.97 0.97 1343
1 0.97 0.98 0.98 1600
2 0.96 0.94 0.95 1380
3 0.92 0.95 0.94 1433
4 0.96 0.95 0.95 1295
5 0.97 0.85 0.91 1273
6 0.95 0.98 0.96 1396
7 0.94 0.96 0.95 1503
8 0.91 0.94 0.92 1357
9 0.93 0.95 0.94 1420
accuracy 0.95 14000
macro avg 0.95 0.95 0.95 14000
weighted avg 0.95 0.95 0.95 14000
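To see what a single forward pass inside the trained MLP computes, the 784 -> 50 -> 10 network above can be sketched by hand in NumPy. This is only an illustration with made-up random weights (the `relu` and `softmax` helpers here are ours, not scikit-learn functions); the real classifier learns its weights during `fit`.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    # ReLU, the default hidden-layer activation of MLPClassifier
    return np.maximum(0, z)

def softmax(z):
    # Numerically stable softmax: subtract the row max before exponentiating
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# One flattened 28x28 "image" and made-up weights for a 784 -> 50 -> 10 network
x = rng.random((1, 784))
W1, b1 = rng.standard_normal((784, 50)) * 0.01, np.zeros(50)
W2, b2 = rng.standard_normal((50, 10)) * 0.01, np.zeros(10)

hidden = relu(x @ W1 + b1)           # hidden layer of 50 units
probs = softmax(hidden @ W2 + b2)    # 10 class probabilities, summing to 1
predicted_digit = int(probs.argmax())
print(probs.shape, predicted_digit)
```

The prediction is simply the class with the highest probability, which is what `mlp.predict` returns after the same computation.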
2) Design a neural network for classifying movie reviews (Binary Classification) using the IMDB dataset.
PROGRAM
import tensorflow as tf
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

# Step 1: Load the IMDB Dataset
max_features = 10000  # Number of words to consider as features
maxlen = 500  # Cut texts after this number of words (among the top max_features most common words)
batch_size = 32

# Load the data as lists of integers
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)

# Step 2: Preprocess the Data
# Pad sequences to ensure uniform input size
x_train = pad_sequences(x_train, maxlen=maxlen)
x_test = pad_sequences(x_test, maxlen=maxlen)

# Step 3: Build the Neural Network
model = Sequential()
model.add(Embedding(max_features, 128, input_length=maxlen))
model.add(LSTM(64, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))

# Compile the model
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])

# Step 4: Train the Model
print('Training the model...')
history = model.fit(x_train, y_train,
                    batch_size=batch_size,
                    epochs=10,
                    validation_data=(x_test, y_test))

# Evaluate the model
score, acc = model.evaluate(x_test, y_test, batch_size=batch_size)
print(f'Test score: {score}')
print(f'Test accuracy: {acc}')
OUTPUT
Training the model...
Epoch 1/10
625/625 [==============================] - 23s 36ms/step - loss: 0.4820 -
accuracy: 0.7587 - val_loss: 0.3438 - val_accuracy: 0.8490
Epoch 2/10
625/625 [==============================] - 20s 33ms/step - loss: 0.2467 -
accuracy: 0.9034 - val_loss: 0.3570 - val_accuracy: 0.8438
...
...
Epoch 10/10
625/625 [==============================] - 19s 31ms/step - loss: 0.0625 -
accuracy: 0.9792 - val_loss: 0.6325 - val_accuracy: 0.8221
Test score: 0.6325
Test accuracy: 0.8221
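The `pad_sequences` step above is worth understanding: with its defaults it pads short reviews on the left with zeros and truncates long ones from the left, so every input has exactly `maxlen` entries. A plain-Python sketch of that behaviour (the helper name `pad_pre` is ours, not a Keras function):

```python
def pad_pre(seq, maxlen, value=0):
    # Mimics Keras pad_sequences defaults: pad on the left,
    # truncate from the beginning, keeping the last maxlen items
    if len(seq) >= maxlen:
        return seq[-maxlen:]
    return [value] * (maxlen - len(seq)) + seq

short = [5, 25, 100]
long = list(range(10))
print(pad_pre(short, 5))  # -> [0, 0, 5, 25, 100]
print(pad_pre(long, 5))   # -> [5, 6, 7, 8, 9]
```

Because index 0 is reserved for padding in the IMDB encoding, the zeros do not collide with real words.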
3) Design a neural network for classifying newswires (Multi-class Classification) using the Reuters dataset.
PROGRAM
import tensorflow as tf
from tensorflow.keras.datasets import reuters
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense
from tensorflow.keras.utils import to_categorical

# Step 1: Load the Reuters Dataset
max_words = 10000  # Number of words to consider as features
maxlen = 500  # Cut texts after this number of words
(x_train, y_train), (x_test, y_test) = reuters.load_data(num_words=max_words)

# Step 2: Preprocess the Data
x_train = pad_sequences(x_train, maxlen=maxlen)
x_test = pad_sequences(x_test, maxlen=maxlen)
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)

# Step 3: Build the Neural Network
model = Sequential()
model.add(Embedding(max_words, 128, input_length=maxlen))
model.add(LSTM(64, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(46, activation='softmax'))  # 46 topic classes in the Reuters dataset

# Compile the model
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])

# Step 4: Train the Model
print('Training the model...')
history = model.fit(x_train, y_train,
                    batch_size=32,
                    epochs=10,
                    validation_data=(x_test, y_test))

# Evaluate the model
score, acc = model.evaluate(x_test, y_test, batch_size=32)
print(f'Test score: {score}')
print(f'Test accuracy: {acc}')
OUTPUT
Training the model...
Epoch 1/10
625/625 [==============================] - 25s 40ms/step - loss: 1.6542 -
accuracy: 0.5997 - val_loss: 1.3185 - val_accuracy: 0.6888
Epoch 2/10
625/625 [==============================] - 22s 35ms/step - loss: 1.0721 -
accuracy: 0.7499 - val_loss: 1.1210 - val_accuracy: 0.7404
...
...
Epoch 10/10
625/625 [==============================] - 20s 32ms/step - loss: 0.2380 -
accuracy: 0.9315 - val_loss: 1.4753 - val_accuracy: 0.7853
Test score: 1.4753
Test accuracy: 0.7853
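The `to_categorical` call above turns each integer topic label into a one-hot row vector, which is what `categorical_crossentropy` expects. The same transformation can be sketched in NumPy (the helper name `one_hot` is ours, not a Keras function):

```python
import numpy as np

def one_hot(labels, num_classes):
    # Same idea as keras.utils.to_categorical: row i of the
    # identity matrix is the one-hot vector for class i
    return np.eye(num_classes)[labels]

y = np.array([0, 3, 1])
encoded = one_hot(y, 4)
print(encoded)
```

Each row has exactly one 1, in the column of its class; for Reuters the rows would have 46 columns.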
4) Design a neural network for predicting house prices using the Boston Housing Price dataset.
PROGRAM
import tensorflow as tf
from tensorflow.keras.datasets import boston_housing
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from sklearn.preprocessing import StandardScaler

# Step 1: Load the Boston Housing Price Dataset
(x_train, y_train), (x_test, y_test) = boston_housing.load_data()

# Step 2: Preprocess the Data
# Scale the data to have mean 0 and variance 1
scaler = StandardScaler()
x_train = scaler.fit_transform(x_train)
x_test = scaler.transform(x_test)

# Step 3: Build the Neural Network
model = Sequential()
model.add(Dense(64, activation='relu', input_shape=(x_train.shape[1],)))
model.add(Dense(64, activation='relu'))
model.add(Dense(1))  # Output layer with one neuron for regression

# Compile the model
model.compile(optimizer='adam', loss='mse', metrics=['mae'])

# Step 4: Train the Model
print('Training the model...')
history = model.fit(x_train, y_train,
                    epochs=100,
                    batch_size=32,
                    validation_split=0.2)

# Evaluate the model
test_loss, test_mae = model.evaluate(x_test, y_test)
print(f'Test loss: {test_loss}')
print(f'Test MAE: {test_mae}')
OUTPUT
Training the model...
Epoch 1/100
11/11 [==============================] - 0s 14ms/step - loss: 551.4564 - mae:
20.4670 - val_loss: 552.8725 - val_mae: 20.3244
Epoch 2/100
11/11 [==============================] - 0s 6ms/step - loss: 541.1917 - mae:
20.1596 - val_loss: 544.0123 - val_mae: 20.0575
...
...
Epoch 100/100
11/11 [==============================] - 0s 7ms/step - loss: 4.0378 - mae:
1.4454 - val_loss: 15.3832 - val_mae: 2.5269
5/5 [==============================] - 0s 2ms/step - loss: 16.8527 - mae:
2.5064
Test loss: 16.8527
Test MAE: 2.5064
5) Build a Convolutional Neural Network for MNIST Handwritten Digit Classification.
PROGRAM
import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from tensorflow.keras.utils import to_categorical

# Step 1: Load the MNIST Dataset
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Step 2: Preprocess the Data
# Reshape the data to include the channel dimension
x_train = x_train.reshape(x_train.shape[0], 28, 28, 1)
x_test = x_test.reshape(x_test.shape[0], 28, 28, 1)

# Normalize the data
x_train = x_train.astype('float32') / 255
x_test = x_test.astype('float32') / 255

# One-hot encode the labels
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)

# Step 3: Build the CNN Model
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, kernel_size=(3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))

# Compile the model
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# Step 4: Train the Model
print('Training the model...')
history = model.fit(x_train, y_train, epochs=10, batch_size=128, validation_split=0.2)

# Evaluate the model
score = model.evaluate(x_test, y_test)
print(f'Test loss: {score[0]}')
print(f'Test accuracy: {score[1]}')
OUTPUT
Training the model...
Epoch 1/10
375/375 [==============================] - 15s 38ms/step - loss: 0.2056 -
accuracy: 0.9393 - val_loss: 0.0666 - val_accuracy: 0.9794
Epoch 2/10
375/375 [==============================] - 13s 36ms/step - loss: 0.0668 -
accuracy: 0.9795 - val_loss: 0.0456 - val_accuracy: 0.9851
...
...
Epoch 10/10
375/375 [==============================] - 12s 33ms/step - loss: 0.0286 -
accuracy: 0.9910 - val_loss: 0.0325 - val_accuracy: 0.9898
313/313 [==============================] - 2s 6ms/step - loss: 0.0296 -
accuracy: 0.9905
Test loss: 0.0296
Test accuracy: 0.9905
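The core operation of `Conv2D` is sliding a small kernel over the image and taking a weighted sum at each position. A minimal NumPy sketch of a single-channel 'valid' convolution (the function name `conv2d_valid` is ours; made-up image and kernel) also explains the layer's output shapes, since a 3x3 kernel over a 28x28 input yields a 26x26 feature map:

```python
import numpy as np

def conv2d_valid(image, kernel):
    # 'valid' cross-correlation, the sum Conv2D computes before its activation
    h, w = kernel.shape
    out = np.empty((image.shape[0] - h + 1, image.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + h, j:j + w] * kernel)
    return out

img = np.zeros((28, 28))
img[:, 14] = 1.0                        # a vertical stroke down the middle
edge = np.array([[1., 0., -1.]] * 3)    # 3x3 vertical-edge kernel
fmap = conv2d_valid(img, edge)
print(fmap.shape)  # (26, 26)
```

The feature map responds strongly on either side of the stroke and is zero elsewhere; a real `Conv2D` layer learns 32 such kernels instead of using a hand-written one.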
6) Build a Convolutional Neural Network for simple image (dogs and cats) Classification
PROGRAM
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

# Step 1: Load and Preprocess the Data
# Define paths to the dataset
train_dir = 'path_to_train_dataset'  # Update this path
validation_dir = 'path_to_validation_dataset'  # Update this path

# Create an instance of ImageDataGenerator for data augmentation
train_datagen = ImageDataGenerator(
    rescale=1./255,
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest')

test_datagen = ImageDataGenerator(rescale=1./255)

# Generate batches of tensor image data for training and validation
train_generator = train_datagen.flow_from_directory(
    train_dir,
    target_size=(150, 150),
    batch_size=32,
    class_mode='binary')

validation_generator = test_datagen.flow_from_directory(
    validation_dir,
    target_size=(150, 150),
    batch_size=32,
    class_mode='binary')

# Step 2: Build the CNN Model
model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3)))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(128, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))

# Compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Step 3: Train the Model
print('Training the model...')
history = model.fit(
    train_generator,
    steps_per_epoch=100,
    epochs=30,
    validation_data=validation_generator,
    validation_steps=50)

# Evaluate the model
score = model.evaluate(validation_generator, steps=50)
print(f'Test loss: {score[0]}')
print(f'Test accuracy: {score[1]}')
OUTPUT
Training the model...
Epoch 1/30
100/100 [==============================] - 50s 500ms/step - loss: 0.6935 -
accuracy: 0.5050 - val_loss: 0.6914 - val_accuracy: 0.5300
Epoch 2/30
100/100 [==============================] - 49s 490ms/step - loss: 0.6831 -
accuracy: 0.5525 - val_loss: 0.6487 - val_accuracy: 0.6210
...
...
Epoch 30/30
100/100 [==============================] - 48s 480ms/step - loss: 0.2548 -
accuracy: 0.8890 - val_loss: 0.2839 - val_accuracy: 0.8740
50/50 [==============================] - 5s 100ms/step - loss: 0.2839 -
accuracy: 0.8740
Test loss: 0.2839
Test accuracy: 0.8740
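With `class_mode='binary'`, the final `Dense(1, activation='sigmoid')` layer emits one probability per image, and the predicted class is obtained by thresholding at 0.5. A quick NumPy sketch with made-up scores (the `sigmoid` helper is ours, not a Keras API):

```python
import numpy as np

def sigmoid(z):
    # Squashes any real score into a probability in (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

# Made-up raw scores from the final Dense(1) layer for a batch of 4 images
logits = np.array([-2.0, -0.1, 0.3, 4.0])
probs = sigmoid(logits)             # probability of the positive class ("dog")
labels = (probs > 0.5).astype(int)  # threshold at 0.5 to get 0 (cat) or 1 (dog)
print(probs.round(3), labels)
```

`binary_crossentropy` trains these probabilities directly, which is why the pairing of a sigmoid output with that loss is standard for two-class problems.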
7) Use a pre-trained Convolutional Neural Network (VGG16) for image classification
PROGRAM
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications import VGG16
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Flatten, Dense, Dropout
from tensorflow.keras.optimizers import Adam

# Step 1: Load and Preprocess the Data
# Define paths to the dataset
train_dir = 'path_to_train_dataset'  # Update this path
validation_dir = 'path_to_validation_dataset'  # Update this path

# Create an instance of ImageDataGenerator for data augmentation
train_datagen = ImageDataGenerator(
    rescale=1./255,
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest')

test_datagen = ImageDataGenerator(rescale=1./255)

# Generate batches of tensor image data for training and validation
train_generator = train_datagen.flow_from_directory(
    train_dir,
    target_size=(224, 224),
    batch_size=32,
    class_mode='binary')

validation_generator = test_datagen.flow_from_directory(
    validation_dir,
    target_size=(224, 224),
    batch_size=32,
    class_mode='binary')

# Step 2: Load the Pre-trained VGG16 Model
base_model = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))

# Freeze the layers of the base model
for layer in base_model.layers:
    layer.trainable = False

# Step 3: Build and Compile the Model
x = base_model.output
x = Flatten()(x)
x = Dense(512, activation='relu')(x)
x = Dropout(0.5)(x)
predictions = Dense(1, activation='sigmoid')(x)

model = Model(inputs=base_model.input, outputs=predictions)

# Compile the model
model.compile(optimizer=Adam(learning_rate=0.0001), loss='binary_crossentropy',
              metrics=['accuracy'])

# Step 4: Train and Evaluate the Model
print('Training the model...')
history = model.fit(
    train_generator,
    steps_per_epoch=100,
    epochs=10,
    validation_data=validation_generator,
    validation_steps=50)

# Evaluate the model
score = model.evaluate(validation_generator, steps=50)
print(f'Test loss: {score[0]}')
print(f'Test accuracy: {score[1]}')
OUTPUT
Training the model...
Epoch 1/10
100/100 [==============================] - 150s 1s/step - loss: 0.6432 -
accuracy: 0.6305 - val_loss: 0.5732 - val_accuracy: 0.7000
Epoch 2/10
100/100 [==============================] - 140s 1s/step - loss: 0.5863 -
accuracy: 0.6880 - val_loss: 0.5351 - val_accuracy: 0.7290
...
...
Epoch 10/10
100/100 [==============================] - 130s 1s/step - loss: 0.3280 -
accuracy: 0.8595 - val_loss: 0.3904 - val_accuracy: 0.8350
50/50 [==============================] - 20s 400ms/step - loss: 0.3904 -
accuracy: 0.8350
Test loss: 0.3904
Test accuracy: 0.8350
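Freezing the base model means VGG16 acts as a fixed feature extractor: only the new head learns. The same idea can be sketched with scikit-learn, where made-up random "features" stand in for VGG16's outputs and a logistic regression plays the role of the trainable head (the array shapes and labels here are entirely illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend these are frozen-VGG16 feature vectors for 200 images;
# since the base never changes, they can be computed once and reused
features = rng.standard_normal((200, 64))
# Made-up labels driven mostly by the first feature
labels = (features[:, 0] + 0.1 * rng.standard_normal(200) > 0).astype(int)

# Training only the head, which is exactly what freezing the base achieves
head = LogisticRegression().fit(features[:150], labels[:150])
print(head.score(features[:150], labels[:150]))
```

This is why transfer learning reaches good accuracy in few epochs: the hard part, the feature extractor, is already trained on ImageNet.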
8) Implement one-hot encoding of words or characters.
PROGRAM
One-Hot Encoding of Words
from sklearn.preprocessing import LabelBinarizer

# Sample text
text = ["I love machine learning", "I love deep learning"]

# Tokenizing the text (splitting into words)
words = set(word for sentence in text for word in sentence.split())
print(f"Words: {words}")

# Creating the one-hot encoder
encoder = LabelBinarizer()
encoder.fit(list(words))

# One-hot encoding each word
one_hot_encoded = [encoder.transform(sentence.split()) for sentence in text]
print(f"One-hot encoded words: {one_hot_encoded}")

# Displaying the binary matrix for each sentence
for i, sentence in enumerate(text):
    print(f"Sentence: '{sentence}'\nOne-Hot Encoded:\n{one_hot_encoded[i]}\n")
One-Hot Encoding of Characters
from sklearn.preprocessing import LabelBinarizer

# Sample text
text = "hello"

# Tokenizing the text (splitting into characters)
characters = set(text)
print(f"Characters: {characters}")

# Creating the one-hot encoder
encoder = LabelBinarizer()
encoder.fit(list(characters))

# One-hot encoding each character
one_hot_encoded = encoder.transform(list(text))
print(f"One-hot encoded characters: {one_hot_encoded}")

# Displaying the binary matrix for each character
print(f"Word: '{text}'\nOne-Hot Encoded:\n{one_hot_encoded}\n")
OUTPUT
Characters: {'h', 'o', 'l', 'e'}
One-hot encoded characters: [[0 1 0 0]
[0 0 1 0]
[0 0 1 0]
[0 0 0 1]
[1 0 0 0]]
Word: 'hello'
One-Hot Encoded:
[[0 1 0 0]
[0 0 1 0]
[0 0 1 0]
[0 0 0 1]
[1 0 0 0]]
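The same word-level encoding can be done without scikit-learn, which makes the mechanism explicit: build a vocabulary, give every distinct word a column, and set a single 1 per row. The helper name `encode` below is ours, purely for illustration:

```python
# Building the one-hot table by hand, without scikit-learn
text = ["I love machine learning", "I love deep learning"]

# Assign every distinct word a fixed column index
vocab = sorted({w for sentence in text for w in sentence.split()})
index = {word: i for i, word in enumerate(vocab)}

def encode(word):
    # One row per word: a 1 in the word's column, 0 elsewhere
    row = [0] * len(vocab)
    row[index[word]] = 1
    return row

print(vocab)
print([encode(w) for w in text[0].split()])
```

The drawback this exposes is dimensionality: the row length equals the vocabulary size, which is why dense word embeddings (next experiment) are preferred for large vocabularies.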
9) Implement word embeddings for the IMDB dataset.
PROGRAM
import tensorflow as tf
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

# Step 1: Load and Preprocess the Data
max_features = 10000  # Number of words to consider as features
maxlen = 500  # Cut texts after this number of words
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)

# Pad sequences to ensure uniform input size
x_train = pad_sequences(x_train, maxlen=maxlen)
x_test = pad_sequences(x_test, maxlen=maxlen)

# Step 2: Build and Train the Model with Embeddings
model = Sequential()
model.add(Embedding(max_features, 128, input_length=maxlen))
model.add(LSTM(64, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))

# Compile the model
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])

# Train the model
print('Training the model...')
history = model.fit(x_train, y_train,
                    epochs=10,
                    batch_size=32,
                    validation_data=(x_test, y_test))

# Step 3: Evaluate the Model
score, acc = model.evaluate(x_test, y_test, batch_size=32)
print(f'Test loss: {score}')
print(f'Test accuracy: {acc}')
OUTPUT
Training the model...
Epoch 1/10
625/625 [==============================] - 40s 58ms/step - loss: 0.4260 -
accuracy: 0.7967 - val_loss: 0.3151 - val_accuracy: 0.8708
Epoch 2/10
625/625 [==============================] - 36s 57ms/step - loss: 0.2464 -
accuracy: 0.9026 - val_loss: 0.3131 - val_accuracy: 0.8704
...
...
Epoch 10/10
625/625 [==============================] - 36s 57ms/step - loss: 0.1089 -
accuracy: 0.9604 - val_loss: 0.4457 - val_accuracy: 0.8567
1250/1250 [==============================] - 17s 14ms/step - loss: 0.4457 -
accuracy: 0.8567
Test loss: 0.4457
Test accuracy: 0.8567
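Under the hood, the `Embedding(max_features, 128, ...)` layer is nothing more than a trainable lookup table: a matrix with one 128-dimensional row per word index. A NumPy sketch with made-up numbers (a tiny vocabulary of 10 words and 4-dimensional vectors instead of 10000 and 128):

```python
import numpy as np

rng = np.random.default_rng(0)

vocab_size, embed_dim = 10, 4
# The Embedding layer is just a trainable (vocab_size x embed_dim) matrix
embedding_matrix = rng.standard_normal((vocab_size, embed_dim))

# A padded review of word indices, as produced by pad_sequences
review = np.array([0, 0, 7, 2, 5])

# Looking up the indices returns the matching rows: one dense vector per word
vectors = embedding_matrix[review]
print(vectors.shape)  # (5, 4)
```

Training nudges the rows so that words used similarly end up with similar vectors, which is what makes embeddings far more compact and informative than one-hot rows.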
10) Implement a Recurrent Neural Network for the IMDB movie review classification problem.
PROGRAM
import tensorflow as tf
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, SimpleRNN, Dense

# Step 1: Load and Preprocess the Data
max_features = 10000  # Number of words to consider as features
maxlen = 500  # Cut texts after this number of words
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)

# Pad sequences to ensure uniform input size
x_train = pad_sequences(x_train, maxlen=maxlen)
x_test = pad_sequences(x_test, maxlen=maxlen)

# Step 2: Build and Train the RNN Model
model = Sequential()
model.add(Embedding(max_features, 128, input_length=maxlen))
model.add(SimpleRNN(64))
model.add(Dense(1, activation='sigmoid'))

# Compile the model
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])

# Train the model
print('Training the model...')
history = model.fit(x_train, y_train,
                    epochs=10,
                    batch_size=32,
                    validation_data=(x_test, y_test))

# Step 3: Evaluate the Model
score, acc = model.evaluate(x_test, y_test, batch_size=32)
print(f'Test loss: {score}')
print(f'Test accuracy: {acc}')
OUTPUT
Training the model...
Epoch 1/10
625/625 [==============================] - 30s 45ms/step - loss: 0.5556 -
accuracy: 0.7076 - val_loss: 0.4741 - val_accuracy: 0.7808
Epoch 2/10
625/625 [==============================] - 27s 44ms/step - loss: 0.3582 -
accuracy: 0.8473 - val_loss: 0.3915 - val_accuracy: 0.8320
...
...
Epoch 10/10
625/625 [==============================] - 27s 43ms/step - loss: 0.2093 -
accuracy: 0.9183 - val_loss: 0.4738 - val_accuracy: 0.8031
1250/1250 [==============================] - 10s 8ms/step - loss: 0.4738 -
accuracy: 0.8031
Test loss: 0.4738
Test accuracy: 0.8031
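The recurrence that `SimpleRNN(64)` applies at every word can be written out in a few lines of NumPy. This sketch uses made-up random weights and only five time steps; the real layer learns `Wx`, `Wh`, and `b` during training:

```python
import numpy as np

rng = np.random.default_rng(0)

embed_dim, units = 128, 64
Wx = rng.standard_normal((embed_dim, units)) * 0.01  # input-to-hidden weights
Wh = rng.standard_normal((units, units)) * 0.01      # hidden-to-hidden (recurrent) weights
b = np.zeros(units)

# Five time steps of embedded words for one review (made-up values)
steps = rng.standard_normal((5, embed_dim))

h = np.zeros(units)  # initial hidden state
for x_t in steps:
    # SimpleRNN update: the new state mixes the current word
    # with the state carried over from all earlier words
    h = np.tanh(x_t @ Wx + h @ Wh + b)

print(h.shape)  # (64,)
```

Only the final state `h` is passed to the `Dense` sigmoid layer, so it must summarize the whole review; this single-state bottleneck is why the LSTM in experiments 2 and 9 tends to outperform the SimpleRNN here.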