Deep Learning for Lung Tumor Detection
Deep Learning for Lung Tumor Detection
by
G JAYASURYA (41130194)
JIBU ALEX KOSHY (41130199)
SATHYABAMA
INSTITUTE OF SCIENCE AND TECHNOLOGY
(DEEMED TO BE UNIVERSITY)
Accredited with Grade “A++” by NAAC
APRIL 2025
DEPARTMENT OF ELECTRONICS AND COMMUNICAITON ENGINEERING
BONAFIDE CERTIFICATE
Internal Guide
ii
DECLARATION
DATE:
1.
2.
iii
ACKNOWLEDGEMENT
iv
ABSTRACT
In the segmentation phase, lung regions and potential tumors are isolated
from the background, enhancing the focus on relevant features. Feature
extraction involves identifying key characteristics of the tumor, such as shape
and texture, which are crucial for accurate diagnosis. These features are then
used to train various machine learning classification algorithms, including
Support Vector Machines (SVM), Random Forest, and Artificial Neural Networks
(ANN). Each algorithm's performance is evaluated based on accuracy,
sensitivity, specificity, and other metrics to determine the presence of
malignancy or benignity in tumors. The potential impact of this system extends
to improving lung cancer screening and early diagnosis. Its integration into
clinical settings could enhance the efficiency of cancer screening programs and
support timely interventions, ultimately leading to improved survival rates and
quality of life for patients.
v
TABLES OF CONTENTS
CHAPTER NO. TITLE PAGE NO.
ABSTRACT v
LIST OF TABLES ix
1 INTRODUCTION 1
2 LITERATURE SURVEY
3 EXISTING SYSTEM
3.2 Aim 16
3.2 Scope 18
vi
4 PROPOSED SYSTEM
6.1 Summary 42
6.2 Conclusions 43
REFERENCES 45
vii
LIST OF FIGURES
3.2 CT scan 12
viii
LIST OF TABLES
2D-CNN
ix
CHAPTER 1
INTRODUCTION
Lung cancer remains one of the most prevalent and fatal forms of cancer
worldwide, causing a significant percentage of cancer deaths every year. Early and
accurate detection of the disease is the key to improving survival rates since early
treatment profoundly increases the effectiveness of treatment methods like surgical
procedure, chemotherapy, and radiation therapy. However, the process of diagnosing
lung cancer is complex and requires careful analysis of medical imaging data,
particularly CT scans, which provide detailed pictures of the lungs. Traditional
diagnostic methods, like radiologists' visual examination of images, are prone to human
fallibility and inconsistency, thereby underlining the increasing demand for automatic
and reliable detection methods. New developments in deep learning algorithms, i.e.,
Convolutional Neural Networks (CNNs) and U-Net architectures, have proved effective
tools in the field of medical image analysis, offering improved accuracy in lung tumor
detection and classification.
1
Even though deep learning of lung cancer detection has made significant
progress in automatic detection, some challenges exist. The performance of CNN-
based models is highly data-dependent on the diversity and quality of the training set.
Medical images are also subject to variations in noise levels, contrast, and resolution,
affecting the accuracy of automated systems of detection. Huge amounts of annotated
data are also needed for deep learning models to generalize across various patient
bases. It is, however, challenging to obtain high-quality annotated medical data
because of ethical issues, patient confidentiality legislation, and annotation expertise.
Due to the requirement of strong and generalizable models, researchers have turned
to hybrid methods that integrate deep learning with conventional image processing
methods, including morphological operators and genetic algorithms, to perform
segmentation and classification better.
2
classification accuracy. Fine-tuning pre-trained networks for medical use, however,
needs domain knowledge to guarantee that the features learned from general data are
applicable to lung cancer detection. Furthermore, interpretability is still a major
challenge, as deep learning models are "black boxes," and it is hard to comprehend
the reasoning behind their predictions. Researchers are developing explainable AI
methods to offer more transparent insights into CNN decision-making processes, which
will improve trust and uptake in clinical environments.
3
CHAPTER 2
LITERATURE SURVEY
D. Muruga Radha Devi et al., (2024) has pioneered convolutional neural networks
to find lung cancer using CT scans. The authors employed a pre-processed Kaggle
dataset and yielded more than 80% accuracy. The research emphasizes the strong
performance of ZF NET than other models on account of more pixel information
contained in smaller-sized filters and resulting in better performance in detection.
Mani and Viswanadhapalli, (2021) the article present a DCNN-based method for
the classification of lung tumors as malignant or benign using varying architectures
with varying patch sizes. The approach is more effective than traditional classifiers like
ANNs with high accuracy in early tumor detection and classification. The study shows
the potential of deep learning algorithms to enhance medical image analysis.
Young-Jin Park et al. (2023) suggests an approach for the identification of lung
cancer tumors via preprocessing of DICOM images, lung and soft tissue region
segmentation, and improvement of image clarity. The method achieved impressive
performance metrics with 97.96% sensitivity, 99.23% precision, and F1-score of
98.56%.The results indicate that the method significantly enhances the
4
neural network learning process for lung cancer identification. The models,
nevertheless, can be tested by instances of high fluctuation or rapid changes in the
levels of accuracy, which requires models that are capable of coping with dynamic
change.
Spyrou et al. (2021) compare CNN and LSTM models in forecasting CT scans at
a port area, concluding that LSTM performs better than CNN in handling temporal
relationships. The capability of LSTM to deal with non-linearities and sequential data
ensures it is well-suited for intricate time-series forecasting. Yet, the need for vast
amounts of training data and computational power remains a challenge. In addition,
LSTM models are likely harder to interpret than conventional statistical models such
as CNN, which can deter their real-world use.
Humaira Jahan Tuba et al. (2022) suggests a state-of-the-art approach for lung
tumor detection and segmentation based on U-Net architecture augmented with dilated
convolutions. The authors utilize 3D lung CT scans of the NSCLC- Radiomics Dataset
to develop a pipeline that enhances segmentation accuracy while decreasing
computational complexity. Using a weighted binary cross- entropy loss function instead
of the normal dice loss allows the model to obtain a 53% dice coefficient and overall
accuracy of 78%. Better segmentation with lesser computational resources in
comparison to LungNet makes the proposed model more of an enhanced innovation
in technologies for early detection of lung cancer.
Mst. Farhana Khatun et al. (2022) suggests a deep learning model intended to
improve lung cancer detection and classification from medical images. By combining
sophisticated neural network methods, the model produces high classification
accuracy, which renders it a sound tool for minimizing diagnostic mistakes. The study
highlights the significance of deep learning in medical diagnosis by showing that the
suggested model is capable of differentiating between benign and malignant tumors.
5
P Jayapriya et al. (2023) investigate machine learning methods for detecting lung
tumors at an early stage using CT scans. The paper is centered on the importance of
preprocessing and using sophisticated machine learning algorithms to enhance the
accuracy of tumor detection. The suggested method has a high potential to detect
tumors at an early stage, which is crucial to enhance survival rates. The study
highlights the application of machine learning to improve the accuracy of medical
diagnosis, providing a useful insight into the development of early detection of cancer.
T. Senthil Kumar et al. (2022) investigates the use of Deep Convolutional Neural
Networks (DCNN) in classifying lung tumors based on differentiating benign from
malignant tumors. The authors have come up with three unique DCNN architectures,
with different patch sizes, to fine-tune the classification accuracy. The models were
validated on the LIDC database, wherein the DCNN showed better performance than
conventional classifiers such as Artificial Neural Networks (ANN). This study affirms
the efficacy of DCNN in producing accurate tumor classification that is critical in the
early and accurate detection of lung cancer.
M. Ariful Haque et al.(2020) method for automated lung tumor detection and
segmentation based on a hybrid 3D convolutional neural network. The authors present
a new architecture that blends 2D and 3D convolutions to better extract 3D structural
information in lung CT scans. The model was validated on the NSCLC-Radiomics
Dataset and obtained an average dice coefficient of 65.7%, outperforming other state-
of-the-art models such as LungNet. The research contributes considerably to the area
of medical imaging by providing a superior approach to tumor segmentation, which is
fundamental to precise diagnosis and treatment planning.
6
2.1 INFERENCES AND CHALLENGES IN EXISTING SYSTEM
2.2.1 Inferences
The effectiveness of CNN-based methods has also been reinforced by the use of
transfer learning and image pre-processing techniques, which improve model
generalization, even with limited medical imaging datasets. Some studies have
proposed automated pipelines combining 2D and 3D feature extractions, which refine
segmentation precision and reduce false positives in lung cancer detection.
Additionally, hybrid approaches that combine CT and PET scan data have been
explored, leveraging the strengths of both imaging modalities for improved tumor
classification. The integration of deep learning models into web applications and real-
world clinical environments is also gaining traction, making AI-assisted diagnosis more
accessible for radiologists and medical professionals. The implementation of these
technologies in medical imaging could significantly enhance diagnostic accuracy,
reduce radiologist workload, and facilitate earlier interventions, ultimately improving
patient outcomes.
7
2.2.2 Challenges
Despite In spite of the outstanding progress being made in AI-based lung tumor
detection, numerous challenges still need to be overcome in order to make these
approaches more dependable and universally applicable. One critical challenge lies in
the computational cost of training deep learning models, particularly architectures
employing dilated convolutions, hybrid 3D networks, and ZF Net, which demand large
processing capacity and memory. Availability of large, varied, and well-labeled
datasets is also a big challenge since deep learning models are dependent on high-
quality labeled medical images for successful training. The accuracy of segmentation
also depends on image quality, with the model having a difficult time distinguishing
between tumors and overlapping lung structures, noise, and differences in imaging
conditions.
9
CHAPTER 3
EXISTING SYSTEM
Previously, technologies used for detecting lung tumors have improved over
time with advances in medical diagnostics and imaging. Chest X-ray (CXR) was one of
the first techniques used, delivering a simple 2D representation of the lungs but with
poor detection of small or initial-stage tumors. For better accuracy, computed
tomography (CT) scans were developed, providing sophisticated cross-sectional views
of the lungs, which enabled more accurate detection of tumors. In a few instances,
magnetic resonance imaging (MRI) was utilized, particularly when differentiation of soft
tissues was required, although it was not as frequently utilized for the diagnosis of lung
cancer. Another important development was the positron emission tomography (PET)
scan, usually in conjunction with CT (PET-CT), which identified metabolically active
cancerous tissue and tumor spread.
In addition to imaging, sputum cytology was employed to scan lung mucus for
cancer cells, but it was only effective against tumors that released measurable cells.
More invasive methods of diagnosis comprised bronchoscopy with biopsy, in which a
flexible scope with a camera was passed down the airways to obtain tissue specimens,
and fine-needle aspiration (FNA) biopsy, which involved passing a needle into the lung
to obtain cells for analysis. There were also early computer-aided detection (CAD)
systems based on artificial intelligence (AI) that were used to aid radiologists in
detecting lung abnormalities, though with limited accuracy at first. These conventional
techniques have, over time, given way to advanced deep learning methods like 3D
convolutional neural networks (3D CNNs) that can detect 3D medical images with
higher accuracy, enhancing early diagnosis and detection of lung tumors.
10
3.1.1 Chest X-Ray – The Fig:3.1 demonstrates that the Chest X-ray (CXR) is among
the oldest and most popular imaging methods for the detection of lung tumors. It offers
a two-dimensional projection of the lungs, hence an inexpensive and rapid initial
screening tool. Nevertheless, its principal disadvantage is that it is low in sensitivity to
detect small tumors, particularly at the initial stages of lung cancer. Most lung
pathology, including benign nodules and infections, looks the same on an X-ray, so
malignancy cannot easily be ruled in or out based on this mode of imaging. In spite of
these limitations, X-ray persists in widespread usage in everyday practice because it is
cheap and universally available.
11
Fig:3.2: CT scan
.
3.1.3 Magnetic Resonance Imaging (MRI) – Magnetic Resonance Imaging or MRI is
a non-invasive imaging method which employs strong magnetic fields and radio waves
to create high-quality images of soft tissue. MRI, although not used initially for detection
of lung cancer, is instrumental in evaluating spread of tumors into neighboring
structures like chest wall or mediastinum. As apparent from Fig:3.3, MRI, unlike CT
scans, does not involve ionizing radiation, thus it is a safer method for repeated
imaging. However, MRI suffers from limitations in lung imaging because of lower
concentration of hydrogen atoms in lung tissues, which impacts the clarity of the image.
Therefore, it is generally employed as an adjunct tool in situations where further tissue
characterization is desired.
12
3.1.4 Positron Emission Tomography (PET) Scan – Positron Emission
Tomography (PET) is a nuclear medicine imaging modality that measures metabolic
activity in tissues. As demonstrated in Fig:3.4, it requires the injection of a radiotracer,
usually fluorodeoxyglucose (FDG), which concentrates in very active cancer cells,
highlighting them as "hot spots" on the image. PET scans are usually combined with
CT scans (PET-CT) for better localization and diagnostic accuracy. This method is
especially valuable for the discrimination of malignant from benign lung nodules and
the cancer spread evaluation (staging). PET scans have a very good sensitivity but at
times can raise false-positive interpretations in infection and inflammatory disorders.
These must then be further validated by other types of tests..
13
Fig:3.5: Sputum Cytology
14
3.1.7 Fine-Needle Aspiration (FNA) Biopsy – Fine-Needle Aspiration (FNA) biopsy
is an intra thoracic procedure involved to take samples of lung tissue for microscopic
analysis. A thin needle is guided through the chest wall by imaging modalities like CT
or ultrasound to obtain a sample from the involved tumor area as illustrated in Fig:3.7.
It is a very good method of proving malignancy and is routinely employed when lung
nodule is identified by imaging. FNA biopsy is not without risks, though - pneumothorax
(pneumonia) or bleeding - and is thus a procedure to be carefully performed. It is still
one of the best diagnostic methods for tumors of the lung, even with such risks.
3.2 AIM
The Goal of these past and ongoing research studies in lung tumor detection is
to develop computational models that work on medical imaging data to identify and
classify lung tumors. Researchers have tried many methods to improve accuracy and
efficiency of tumor detection with the help of advanced imaging methods like CT scans
and X-rays. The research studies are mostly aimed at using machine learning and deep
learning models to enhance diagnostic accuracy and minimize human interpretation
dependency.
16
Previous methods for detecting lung tumors used conventional image
processing methods, where radiologists visually examined tumor features like shape,
texture, and intensity changes. Even though these methods were rudimentary, they
were highly subject to human expertise and could not be applied to big data sizes. To
address this, machine learning models were proposed, wherein algorithms like Support
Vector Machines (SVM), k-Nearest Neighbors (k-NN), Decision Trees, and Random
Forests were used to classify lung tumors. These models utilized handcrafted features
of medical images, enhancing classification performance but with reduced adaptability
and automation.
In addition, researchers have also looked into using transfer learning in improving
17
tumor detection models. Using pre-trained CNN models pre-trained on large image
datasets, it has been demonstrated that it is possible to significantly improve the
detection performance of the models by fine-tuning the models using lung tumor
datasets.
This practice is used to reduce issues associated with small medical image
datasets while attaining stable classification performance. Even with these
advancements, current approaches are still burdened by the issues of data shortage,
variability of tumor appearance, and the requirement for more computational efficiency.
Current research continues to uncover emerging techniques, such as attention
mechanisms and combined deep neural network architectures, in order to further
enhance lung tumor detection and classification.
3.3 SCOPE
19
The state of the art remains to develop, mirroring the development of ongoing
initiatives aimed at improving the accuracy and practical applicability of computerized
lung cancer detection systems.
The methods are especially useful in separating the tumor areas from noisy
backgrounds to enhance the classification results. Hybrid methods that integrate
segmentation models and classification networks have been the focus of some studies,
enabling a more precise exploration of tumor characteristics.
20
CHAPTER 4
PROPOSED SYSTEM
Conventional medical testing tools like 2D X-ray and CT scans cannot reveal all
the elements of tumors because they only see in two dimensions. The suggested model
uses 3D Convolutional Neural Networks (3D CNNs) to find lung tumors from volumetric
medical imaging data. The approach conforms to IEEE P2801 standards which
enables trusted AI-based medical imaging systems.
Our system starts by processing both 2D X-ray images and 3D CT scan datasets
which serve as input data. Preprocessing begins with image enhancement and lung
area isolation steps before reducing the image dimensions with both Principal
Component Analysis and autoencoders. The 3D processed data helps train our unique
3D CNN architecture through labeled datasets. Model performance depends on how
we tune its essential parameters. After training the model it analyzes new lung scan
pictures to find and identify tumors. Our testing depends on key performance measures
such as accuracy, precision, recall and F1-score to determine reliability of diagnostic
output. The last stage includes showing and validating tumor finding results as well as
generating diagnosis reports to confirm the model predictions against doctor
assessments.
21
4.1 PROPOSED BLOCK DIAGRAM
The block diagram represents the step-by-step workflow for lung tumor detection
and classification using deep learning techniques. The entire process is structured into
distinct phases, each playing a critical role in ensuring accurate tumor identification.
These phases include Data Acquisition & Preprocessing, 2D to 3D Data Conversion,
Feature Extraction & Selection, Model Training & Classification, and Output Generation
& Evaluation. Each of these stages is further subdivided into key tasks that contribute
to the efficiency and reliability of the system.
The first stage involves acquiring medical imaging data from publicly available
datasets such as LIDC-IDRI, NSCLC-Radiomics, and Kaggle repositories. These
datasets contain lung CT scans and X-ray images used for model training and
evaluation. The data acquisition process ensures that a diverse range of images,
including both benign and malignant cases, is collected for comprehensive analysis.
Once the images are acquired, they undergo preprocessing to enhance quality
and prepare them for deep learning models.
This includes:
Noise Reduction: Medical images often contain noise due to scanning artifacts.
Various filters, such as Gaussian and median filtering, are applied to remove noise and
enhance image clarity.
Contrast Enhancement: Since medical images can have poor contrast, techniques
such as histogram equalization and adaptive contrast stretching are applied to improve
visibility.
Normalization: Image intensity values are normalized to ensure uniformity across
different samples, preventing bias during model training.
Dimensionality Reduction: To improve computational efficiency, techniques like
principal component analysis (PCA) are used to remove redundant features.
23
This phase ensures that high-quality, standardized images are fed into the system for
accurate tumor detection and classification.
Once the data is in a suitable format, essential tumor features are extracted for
analysis. Feature extraction is a key step in deep learning-based classification, as it
helps the model learn critical characteristics of lung tumors. This involves:
Shape & Texture Analysis: Tumor shape, edges, and texture variations are extracted
to differentiate between benign and malignant growths.
Intensity & Density Features: CT scans provide information about tissue density,
which helps in tumor classification. Hounsfield unit (HU) values are analyzed to
distinguish cancerous tissues from healthy lung structures.
24
Deep Feature Extraction via CNNs: The convolutional layers in deep neural networks
automatically extract hierarchical features, such as edges, contours, and patterns, that
are crucial for accurate tumor identification.
In this phase, the preprocessed and feature-extracted images are fed into a
Convolutional Neural Network (CNN)-based model for classification.
The training process involves Splitting Data into Training & Testing Sets. The dataset
is divided into training, validation, and testing subsets to ensure proper model
evaluation. Different CNN architectures, such as ResNet and VGGNet are considered
to determine the most effective model for classification.
Training Process: The model is trained using backpropagation and optimization
techniques, such as Adam and Stochastic Gradient Descent (SGD), to minimize
classification errors.
Loss Function Optimization: To improve classification accuracy, loss functions such
as cross-entropy loss and weighted binary cross-entropy are used.
During training, the model learns to recognize complex patterns in tumor images,
refining its ability to distinguish between malignant and benign cases. The trained
model is then tested on unseen images to evaluate its performance.
The final phase involves generating classification results and evaluating the
model’s performance. This includes:
Classification Output: The model predicts whether the detected tumor is benign or
malignant, providing confidence scores for each prediction.
25
Performance Metrics: The system is evaluated using various metrics, such as:
Accuracy: Measures the overall correctness of predictions.
Sensitivity (Recall): Indicates the model’s ability to detect malignant tumors.
Precision: Reflects how often predicted malignant cases are actually malignant.
F1-Score: A balance between precision and recall.
This outlines the structured workflow for lung tumor detection using deep
learning. By incorporating data preprocessing, 2D to 3D conversion, feature extraction,
model training, and performance evaluation, the system aims to enhance the accuracy
and reliability of tumor classification. The use of CNNs and 3D volumetric data ensures
improved detection performance, making this approach a valuable contribution to the
field of medical image analysis. The structured pipeline facilitates real-world
implementation, bridging the gap between AI-based diagnostics and clinical
applications.
26
4.3 SELECTED METHODOLOGIES
Extension of 2D CNNs:
• 3D CNNs, on the other hand, work with three-dimensional data, such as voxel
data from CT scans, where each voxel represents a point in 3D space.
3D Convolutions:
• The core operation in a CNN is convolution, where filters (or kernels) slide
across the input data to extract features.
27
Volumetric Feature Extraction:
• Lung cancer detection: 3D CNNs are widely used to analyze CT scans for the
detection and classification of lung nodules.
• Brain imaging: They are also used for analyzing MRI scans of the brain to
detect abnormalities such as tumors or lesions.
Advantages of 3D CNNs:
Challenges:
28
• Overfitting: Due to the large number of parameters, 3D CNNs are prone to
overfitting, especially when trained on small datasets.
Preprocessing and data augmentation are critical methods in deep learning for
lung cancer detection, especially with medical imaging data. Preprocessing improves
data quality by normalizing pixel values, removing noise, resampling to uniform voxel
spacing, performing CT windowing to emphasize relevant tissues, and segmenting out
extraneous information, thereby enhancing model input. Data augmentation
synthetically increases the training dataset by applying transformations such as
rotation, translation, flipping, scaling, shearing, adding noise, and elastic deformations,
and in 3D, volumetric transformations, which are essential for reducing overfitting and
enhancing model generalization, particularly considering the usually small size of
medical imaging datasets. These methods allow the model to learn strong features and
identify lung nodules under varying conditions, ultimately improving detection accuracy
and clinical utility.
29
4.3.4 Model Training and Validation:
Model training and validation and are critical to the creation of resilient lung
cancer detection systems. Training entails iteratively tuning model parameters with a
training set, with guidance from a loss function and optimization algorithm, while
tracking performance on a validation set to optimize hyperparameters and avoid
overfitting. The data is usually divided into training, validation, and test sets, with
validation making sure the model performs well on unseen data through metrics such
as accuracy, precision, recall, F1-score, AUC, Dice score, and sensitivity/specificity,
particularly important in medical imaging. Cross-validation adds further robustness to
the evaluation by employing several data folds. Lastly, the test set gives an objective
measurement of the final performance of the model, a measure of clinical usefulness
and reliability in lung cancer detection.
After validation, the system can be incorporated into clinical workflows, which can
benefit radiologists by automating tumor detection and marking areas of interest in lung
scans. This can greatly minimize the time needed for diagnosis and enhance patient
outcomes by allowing early detection and treatment. In conclusion, our proposed
system combines the cutting-edge feature extraction features of 3D CNNs with the
efficiency of transfer learning to develop a strong and precise lung tumor detection
device. This method has the potential to improve the diagnostic process and aid
healthcare professionals in offering timely and accurate cancer treatment.
Deployment and integration are the last essential steps in bringing a successful lung
cancer detection model from the research lab into operational clinical use. Deployment
entails encapsulating the trained model into an operational form, usually in the form of
a software application or web-based tool, so that it can run on new patient CT scans
with optimal speed. Integration focuses on smoothly integrating this AI tool into current
hospital workflows and imaging systems, necessitating keen attention to data security,
patient privacy, and compatibility with electronic health records. This process requires
extensive testing in operational clinical environments, overcoming possible technical
30
issues, and including proper training for radiologists and medical personnel for proper
utilization. Finally, successful
• Medical Imaging:
o SimpleITK: For medical image processing (install with !pip install SimpleITK).
o Nibabel: For working with neuroimaging data (install with !pip install nibabel).
o pydicom: For reading DICOM files (install with !pip install pydicom).
• Visualization:
o Matplotlib: For basic plotting.
o Seaborn: For enhanced statistical plots.
o Plotly: for interactive plots.
31
CHAPTER 5
The Fig:5.1 represents a CT scan of the chest, commonly used in medical imaging
for lung tumor detection and diagnosis. The scan is an axial (cross-sectional) view,
showing detailed internal structures of the chest cavity. Key anatomical features visible
include the lungs, trachea, and major blood vessels. The lungs appear as large, dark
regions on either side, indicating air-filled spaces, while the surrounding
32
lighter areas represent soft tissues, bones, and organs. Circular structures in the center
denote major blood vessels such as the aorta and pulmonary arteries.
The Fig:5.2 represents the 2D output of a lung tumor detection and segmentation
model applied to a CT scan. The left side, labeled "Original Image," shows a grayscale
cross-sectional CT scan of the lungs, while the right side, labeled "Tumor Highlighted,"
displays the same scan with red-colored regions indicating detected tumor areas.
The below Fig:5.3 represents a CT scan of the chest with tumor regions highlighted,
serving as an output from our lung tumor detection [Link] scan is a 2D axial cross-
section of the lungs, with red-marked areas indicating detected tumor regions. These
highlighted areas suggest abnormal tissue growth, which may correspond to malignant
lung tumors or other pathological changes.
33
Fig:5.3: Tumor Highlighted lung image
34
This Fig:5.4 represents a 3D reconstructed model of the lungs, generated as an
extension of the 2D lung tumor detection output from our project. The visualization
showcases a volumetric representation of the lungs, enabling a more comprehensive
analysis of the tumor distribution. By converting 2D CT scan slices into a 3D model.
The Fig:5.5 represents a 3D visualization of tumor spread within the lungs, plotted using
volumetric medical imaging data, likely from a CT scan. The blue points illustrate the
detected tumor regions distributed across the lung structure. The grid background and
axis labels indicate the spatial dimensions of the scan, helping to map the tumor's
extent accurately.
35
Fig:5.6: Ribcage of lung
The above Fig:5.6 represents a 3D reconstructed model of the rib cage, obtained
as part of our lung tumor detection project using CT scan data. The visualization
showcases the skeletal structure surrounding the lungs, including the rib bones and
portions of the thoracic cavity. This reconstruction is generated from volumetric medical
imaging by applying 3D segmentation and rendering techniques to extract bone
structures from the original CT scan data.
36
advancement in AI-assisted medical imaging, improving diagnostic capabilities and
supporting better clinical decision-making.
• 3D CNN performs the best across all metrics, achieving the highest accuracy,
precision, recall, and confidence, indicating its superior capability in tumor
detection and classification.
• 2D CNN shows a noticeable decline in performance, performing worse than 3D
CNN but better than RNN. This suggests that while 2D CNNs are effective, they
lack the ability to fully capture spatial and volumetric information.
• RNN has the lowest performance among the three models, suggesting that
recurrent neural networks, which are typically suited for sequential data, are not
optimal for medical image classification tasks.
37
Table: [Link] Performance Analysis of 3D-CNN and 2D-CNN
From the data, the 3D-CNN model outperforms the 2D-CNN model in all key
performance metrics. The accuracy of 3D-CNN is 90%, which is higher than the 85%
achieved by 2D-CNN. Similarly, precision, which measures the proportion of correctly
identified positive cases among all predicted positives, is 88% for 3D-CNN, compared
to 82% for 2D-CNN, indicating better reliability in detecting tumors. The recall, which
signifies the model’s ability to identify actual tumor cases, is significantly better in 3D-
CNN (91%) than in 2D-CNN (80%), suggesting that the 3D model is more effective in
detecting true positives and reducing false negatives.
38
Additionally, the F1-score, which balances precision and recall, is 90.5% for 3D-
CNN, compared to 81% for 2D-CNN, further highlighting the robustness of the 3D
model. The probability of correct classification is also higher in 3D-CNN (90-95%) than
in 2D-CNN (80-85%), reflecting its superior predictive capability. In terms of error rates,
3D-CNN exhibits a lower False Positive Rate (FPR) and False Negative Rate (FNR),
both classified as low, whereas 2D-CNN has medium levels of FPR and FNR, indicating
a higher chance of incorrect classifications.
The comparison between 3D-CNN and 2D-CNN models for lung tumor detection
reveals that 3D-CNN outperforms 2D-CNN in several key metrics. The accuracy of 3D-
CNN is 90%, which is higher than the 85% achieved by 2D-CNN, indicating better
overall classification performance. Precision, which measures the model’s ability to
39
correctly identify positive cases while minimizing false positives, is also higher in 3D-
CNN (88%) compared to 2D-CNN (82%). Similarly, recall, which determines how well
the model identifies actual positive cases, is significantly better in 3D-CNN (91%) than
in 2D-CNN (80%), demonstrating superior sensitivity to lung tumors. Additionally, the
F1-score, which balances precision and recall, is 90.5% for 3D-CNN and only 81% for
2D-CNN, further proving the robustness of the 3D model. The probability of correct
classification also favors 3D-CNN, ranging from 90-95%, whereas 2D-CNN achieves
only 80-85%. Furthermore, the false positive rate (FPR) and false negative rate (FNR)
are lower in 3D-CNN, whereas 2D-CNN has a medium level of FPR and FNR, indicating
a higher likelihood of misclassifications. Overall, the results suggest that 3D-CNN is a
more effective model for lung tumor detection, offering higher accuracy, better
classification probabilities, and lower error rates compared to 2D-CNN.
40
from medical professionals. Preprocessing steps, such as noise reduction and
normalization, add further complexities in standardizing input data.
41
CHAPTER 6
6.1 SUMMARY
Early diagnosis of lung tumors becomes very important because this disease kills
more patients around the world than any other disease. Trials that depend on chest X-
rays CT scans MRI and PET scans plus biopsy tests for lung tumor identification need
much human participation from radiologists who might mistake results because of
complex lung tumor structures. This project creates an automated system based on
deep learning through 3D Convolutional Neural Networks to better detect lung tumors
from 3D CT scan data.
The proposed 3D-CNN model works with multiple CT images to find important
image details to pinpoint tumor areas and tell malignant from non-malignant tissue. 3D-
CNN analysis outperforms 2D-CNN methods because it evaluates complete volumes
of images to find tumors better. We begin with image preparation that involves reducing
noise and enhancing contrast while segmenting lung areas before sending processed
images into our model.
Our study found that 3D-CNN generated better results when compared to 2D-CNN
by important performance standards. Tests show that the 3D-CNN model delivers 90%
accuracy with 88% precision and 91% recall for an F1-score of 90.5% which proves
superior results when compared with the 2D-CNN model achieving 85% in accuracy
while scoring 82% precision, 80% recall, and 81% F1-score. The new 3D-CNN
approach showed lower FPCR and FNR rates which made disease identification more
reliable for doctors.
The project results show that 3D-CNNs enhance lung tumor detection better than
traditional methods through improved accuracy and faster processing. The research
supports AI systems for faster lung cancer detection with deep learning and medical
image analysis which helps to prevent deaths from this disease.
6.2 CONCLUSIONS
Our project shows that 3D-CNN helps medical imaging by making better tumor
diagnosis and helping computers spot tumors. The study shows that 3D deep learning
methods detect tumors more effectively than standard 2D systems because they
process complete data volumes better than flat images. The new procedure lets
doctors find tumors earlier when lung cancer screening works best to lower mortality
rates from lung cancer.
The 3D model of the lungs plus rib cage improves diagnosis results for radiologists
and oncologists by providing them with clear anatomical information. Thanks to
improved 3D-CNN performance the model needs less human interpretation to enhance
medical diagnosis activities. The investigation displays excellent performance but
requires additional research using extensive varied medical images to boost accuracy
and decrease possible biases.
The system must work correctly with healthcare partners to show effectiveness
before medical teams start using it at their clinics. New research should examine better
model designs and include multiple imaging scans like PET-CT and MRI along with
creating AI tools to help medical experts in their work. The project shows clear AI
potential to detect lung tumors by using 3D-CNNs as an early cancer detection tool.
43
Medical imaging can achieve better diagnostic results and lower the risk of
mistakes through deep learning applications and enhanced treatment planning
benefits patients. AI healthcare technologies are advancing to help defeat lung cancer
and save lives from dangerous diseases.
44
REFERENCES
[1] Alakwaa, W., Nassef, M., Badr, A., "Lung cancer diagnosis and classification
using 3D convolutional neural networks," International Journal of Advanced
Computer Science and Applications, 2017.
[2] Armato, S. G., Giger, M. L., Moran, C. J., Blackburn, J. T., Doi, K., MacMahon,
H., "Computerized detection of pulmonary nodules on CT scans," RadioGraphics,
vol. 19, no. 5, pp. 1303-1311, 1999.
[3] Armato, S. G., Giger, M. L., Moran, C. J., Blackburn, J. T., Doi, K., MacMahon,
H., "Computerized detection of pulmonary nodules on CT scans," RadioGraphics,
vol. 19, no. 5, pp. 1303-1311, 1999.
[4] Bhuvaneswari, K., Bharathy, S., Padmasri, K. K., Tanushree, G., "A deep learning
approach to detect and classify lung cancer," International Journal of Machine
Learning and Cybernetics, vol. 14, no. 3, pp. 711-722, 2023.
[Link]
[5] Brown, A., Wilson, E., & Davis, F. (2022). Interpretable AI for Lung Cancer
Diagnosis: Understanding Model Decision-Making. Nature Medicine.
[6] Chon, A., Lu, P., "Deep CNN for lung cancer diagnosis," Stanford University,
2017.
[7] Daassi, I., Sayadi, M., Ben Slama, A., Trabelsi, H., Barbaria, S., "An Effective
Method for Lung Tumor Screening Using CT Dataset," 2023.
[8] Gao, H., Yi, M., Yu, J., Li, J., Yu, X., "Character segmentation-based coarse-fine
approach for automobile dashboard detection," IEEE Transactions on Industrial
Informatics, vol. 15, no. 10, pp. 5413-5424, 2019.
[9] Garcia, R., Lopez, A., & Martinez, C. (2023). Integration of AI-Assisted Diagnosis
into Clinical Workflows for Lung Cancer Screening. Radiology.
[10] Hossain, S., Najeeb, S., Shahriyar, A., Abdullah, Z. R., Haque, M. A., "Lung
cancer classification using deep convolutional neural networks," International
Journal of Computer Applications, vol. 182, no. 42, pp. 25-31, 2023.
[Link]
45
46