AI-ENABLED GROCERY PLANNER FOR
EFFICIENCY AND SUSTAINABILITY
A PROJECT REPORT
Submitted by
PRIYADHARSHINI B (411721205034)
SUJITHA K (411721205049)
In partial fulfillment for the award of the degree
Of
BACHELOR OF TECHNOLOGY
IN
INFORMATION TECHNOLOGY
PRINCE SHRI VENKATESHWARA PADMAVATHY
ENGINEERING COLLEGE [AN AUTONOMOUS INSTITUTION]
PONMAR, CHENNAI 600127.
ANNA UNIVERSITY :: CHENNAI - 600 025
MAY 2025
ANNA UNIVERSITY :: CHENNAI 600 025
BONAFIDE CERTIFICATE
Certified that this project report “AI-ENABLED GROCERY PLANNER
FOR EFFICIENCY AND SUSTAINABILITY” is the bonafide work of
“PRIYADHARSHINI B (411721205034), SUJITHA K (411721205049)”
who carried out the project work under my supervision.
SIGNATURE SIGNATURE
Dr. D. VENKATA SUBRAMANIAN, Ph.D., [Link], M.E.,
HEAD OF THE DEPARTMENT SUPERVISOR
PROFESSOR ASSISTANT PROFESSOR
Department of IT, Department of IT,
Prince Shri Venkateshwara Prince Shri Venkateshwara
Padmavathy Engineering Padmavathy Engineering
College, Ponmar, College, Ponmar,
Chennai- 600 127. Chennai- 600 127.
Submitted for the project Viva-Voce examination held on
INTERNAL EXAMINER EXTERNAL EXAMINER
ACKNOWLEDGEMENT
First and foremost, we wish to express our sincere thanks to our
FOUNDER AND CHAIRMAN Dr. K. VASUDEVAN, M.A., [Link]., Ph.D.,
and VICE CHAIRMAN Dr. V. VISHNU KARTHIK, MBBS., M.D., for
their endeavor in educating us in their premier institution.
We would like to express our deep gratitude to the
ADMINISTRATIVE OFFICER, Er. K. PARTHASARATHY, B.E., for his
valuable support.
We wish to express our appreciation and gratefulness to our DEAN,
Dr. V. MAHALAKSHMI, M.E., Ph.D., for her encouragement and sincere
guidance.
We are highly indebted to our PRINCIPAL, Dr. G. INDIRA, M.E.,
Ph.D., for her valuable guidance which has promoted our efforts.
We also wish to convey our thanks to our HEAD OF THE
DEPARTMENT, Dr. D. VENKATA SUBRAMANIAN, Ph.D., for his
support and for providing us ample time to complete our project.
We would like to express our sincere gratitude to our PROJECT
COORDINATOR, Mrs. R. REENA, M.E., (Ph.D.), Department of
Information Technology, for her guidance and support throughout our project.
We would like to express our sincere gratitude to our PROJECT
SUPERVISOR, Mrs. [Link], M.E., Department of
Information Technology, for her guidance and support throughout our project.
We wish to convey our sincere thanks to all the teaching and non-
teaching staff of the Department of Information Technology, without whose
coordination this venture would not have been a success.
ABSTRACT
In today’s fast-paced world, managing grocery purchases efficiently and
sustainably has become a critical challenge for many households. This project
introduces an AI-Enabled Grocery Planner for Efficiency and Sustainability,
designed to streamline grocery tracking using smart technologies. The
application integrates barcode scanning, receipt analysis, expiration tracking,
and a recipe suggestion feature that recommends meals based on available
pantry items, helping users make the most of what they already have. By
suggesting recipes that align with nearing-expiry ingredients, the app further
reduces food waste and promotes efficient usage. Items are categorized based
on expiration proximity—such as “Good One,” “Expires Today,” “Expires
Tomorrow,” “Expires Soon,” and “Expires Later”—enabling proactive food
usage and waste reduction. Built using Android Studio with Kotlin and
Firebase Firestore, the system leverages Google’s ML Kit for barcode and text
recognition. Users can upload grocery receipts, and the app extracts purchased
items using OCR and machine learning for easy review and addition. Smart
shopping list generation and timely notifications for low-stock and expiring
items support resource-conscious decisions. This intelligent solution
encourages mindful consumption and sustainable living, aligning with global
efforts toward eco-friendly practices.
TABLE OF CONTENTS
CHAPTER NO. TITLE PG NO.
ABSTRACT iii
LIST OF FIGURES viii
LIST OF ABBREVIATIONS ix
1 INTRODUCTION
1.1 Introduction to artificial intelligence 1
1.2 Evolution of Artificial Intelligence 2
1.2.1 How AI Functions 2
1.2.2 Importance of AI 3
1.3 Core Components of Artificial Intelligence 3
1.4 The AI Landscape 11
1.5 AI in the Healthcare Industry 11
2 LITERATURE SURVEY 21
3 SYSTEM ANALYSIS 26
3.1 Existing System 26
3.2 Proposed System 27
3.3 System Requirements 27
3.3.1 Introduction 27
3.3.2 Hardware requirements 28
3.3.3 Software Requirements 29
4 SYSTEM DESIGN 35
4.1 System Architecture 35
4.2 Data Flow Diagram (DFD) 36
4.3 UML diagrams 38
4.3.1 Use Case Diagram 38
4.3.2 Sequence Diagram 40
4.3.3 Activity Diagram 42
5 IMPLEMENTATION
5.1 Design and Implementation 44
5.1.1 Module Design 44
5.2 Design and Implementation Constraints 46
5.2.1 Constraint in Analysis 46
5.2.2 Constraints in Design 47
5.2.3 Constraints in Implementation 47
5.3 Other Non-functional Requirements 47
5.3.1 Performance Requirements 47
5.3.2 Safety Requirements 48
6 SOFTWARE TESTING 51
6.1 Testing 51
6.2 Taxonomy of testing 52
6.2.1 White box testing 52
6.2.2 Black box testing 52
6.2.3 Software Testing Strategy 53
6.3 Types of Testing 53
6.3.1 Unit Testing 53
6.3.2 Functional Testing 54
6.3.3 Integration Testing 56
6.3.4 Validation Testing 57
6.3.5 System Testing 57
7 CONCLUSION AND FUTURE ENHANCEMENTS 59
7.1 Conclusion 59
7.2 Future Enhancement 59
APPENDICES
A1: SAMPLE CODING 60
A2: OUTPUT SCREENSHOTS 68
A3: PUBLICATION 71
REFERENCE 73
LIST OF TABLES
TABLE NO. TITLE PAGE NO.
[Link] FUNCTIONAL TESTING 54
LIST OF FIGURES
FIGURE NO. TITLE PAGE NO.
4.1.1 Architecture Diagram 35
4.2.1 Data Flow Diagram Level 0 36
4.2.2 Data Flow Diagram Level 1 37
4.2.3 Data Flow Diagram Level 2 37
[Link] Use Case Diagram 39
[Link] Sequence Diagram 41
[Link] Activity Diagram 42
LIST OF ABBREVIATIONS
UPC Universal Product Code
API Application Programming Interface
RL Reinforcement Learning
ML Machine Learning
NLP Natural Language Processing
DNN Deep Neural Networks
PCA Principal Component Analysis
DQNs Deep Q-Networks
NER Named Entity Recognition
DFD Data Flow Diagram
EHRs Electronic Health Records
IDE Integrated Development Environment
AVD Android Virtual Device
ZXing Zebra Crossing
DAOs Data Access Objects
FCM Firebase Cloud Messaging
CHAPTER 1
1.1 Introduction to Artificial Intelligence
Artificial Intelligence (AI) is a branch of computer science focused on
creating systems capable of performing tasks that typically require human
intelligence, such as problem-solving, decision-making, and pattern
recognition. AI aims to simulate human cognitive abilities, enabling machines
to perform activities involving perception, reasoning, and understanding. The
journey of AI began in the 1950s with pioneers like Alan Turing laying the
foundation for machine intelligence, and the term "Artificial Intelligence" was
coined by John McCarthy in 1955. AI grew through advancements in
algorithms, computational power, and data availability, leading to subfields
such as Machine Learning (ML), which enables systems to learn from data, and
Natural Language Processing (NLP), which allows machines to understand
human language. Key components of AI also include Robotics and Computer
Vision, enabling autonomous physical tasks and visual data interpretation. AI
is transforming industries with applications in healthcare, finance, and
transportation, aiding in medical diagnostics, fraud detection, and autonomous
vehicles. Its potential extends to solving global challenges like climate change,
resource management, and healthcare accessibility. However, ethical concerns
around privacy, bias, and job displacement remain, necessitating responsible
regulation. With rapid technological advancements, the impact of AI is
expected to grow significantly in the coming decades. In conclusion, AI is not
just a technological advancement but a transformative force shaping the future
of humanity.
1.2 Evolution of Artificial Intelligence
The history of AI can be traced back to the mid-20th century when
researchers began to explore the possibility of machines mimicking human
cognitive functions.
In the 1950s, Alan Turing proposed the Turing Test to assess a machine's
ability to exhibit intelligent behavior. The 1960s saw the creation of early AI
programs, such as expert systems designed to mimic decision-making abilities.
AI research stagnated in the 1970s due to limited computing power and the
narrow scope of early applications; these periods of reduced funding and
interest are often referred to as "AI winters."
In the 1990s, with the advent of more powerful computers, AI began to evolve
with new techniques, including machine learning. In recent years, the
breakthrough in deep learning (a subset of machine learning) has fueled
massive advances in AI, enabling complex tasks such as speech recognition
and natural language processing.
1.2.1 How AI Functions
AI works by mimicking cognitive functions through algorithms and
models. One of the core concepts is machine learning (ML), where algorithms
improve from experience without being explicitly programmed. Machine
learning can be categorized into three types:
Supervised learning: The model is trained on labeled data, where the input and
output are both known. The algorithm learns to map inputs to outputs.
Unsupervised learning: The model works with unlabeled data and must find
patterns or structure in the data on its own.
Reinforcement learning: The model learns through trial and error, receiving
rewards or penalties based on its actions.
At the heart of many AI systems are neural networks, which are designed to
simulate the way the human brain processes information. These networks are
made up of layers of interconnected neurons, where each layer processes data
and passes it to the next layer. As data moves through these layers, the neural
network learns to identify patterns and make decisions.
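The layered flow described above can be sketched in a few lines of Python. This is an illustrative toy in which the weights are hand-picked rather than learned, and all numbers are arbitrary examples:

```python
import math

def sigmoid(x):
    # Squashes any real number into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each neuron computes a weighted sum of its inputs plus a bias,
    # then applies the sigmoid activation.
    return [sigmoid(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# A tiny 2-input -> 2-hidden -> 1-output network with hand-picked weights.
hidden = layer([0.5, 0.8],
               weights=[[0.4, -0.6], [0.7, 0.2]],
               biases=[0.1, -0.3])
output = layer(hidden, weights=[[1.2, -0.9]], biases=[0.05])
print(round(output[0], 3))
```

In a trained network the weights and biases would be adjusted from data (for example by backpropagation) instead of being fixed by hand.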
1.2.2 Importance of AI
Artificial Intelligence (AI) is pivotal in driving automation, improving
efficiency, and solving problems that would be time-consuming or even
impossible for humans to address on their own. AI has significantly contributed
to various fields by automating repetitive tasks, analyzing massive datasets,
solving complex problems, and enhancing decision-making. In automation, AI
reduces human effort and increases productivity by handling routine tasks with
speed and precision—an example being AI-driven robots in manufacturing that
assemble parts efficiently. For data analysis, AI systems can process large
volumes of information quickly, extracting meaningful insights in areas like
healthcare, finance, and marketing. These insights help organizations make
informed decisions based on accurate, data-driven predictions. AI is also
capable of tackling highly complex problems such as climate modeling, drug
discovery, and predictive maintenance, where traditional methods may fall
short. Furthermore, AI enhances decision-making by recognizing patterns in
data, providing forecasts, and recommending optimal actions. Its ability to
integrate with technologies like cloud computing, big data, and the Internet of
Things has further expanded its capabilities, making AI a cornerstone in
today’s digital transformation across industries.
1.3 Core Components of Artificial Intelligence (AI)
1. Machine Learning (ML):
Machine Learning (ML) is a foundational subfield of Artificial
Intelligence (AI) that focuses on the development of algorithms and statistical
models that enable machines to perform specific tasks without explicit
instructions. Instead, these systems learn patterns and relationships from data
and make predictions or decisions based on that data. ML has become the
driving force behind many real-world AI applications, from search engines and
recommendation systems to self-driving cars and predictive analytics. At its
core, machine learning involves training a model using historical or observed
data so that it can generalize and perform accurately on new, unseen data. This
ability to adaptively improve performance based on experience sets ML apart
from traditional rule-based programming.
Types of Machine Learning:
Supervised Learning
Unsupervised Learning
Reinforcement Learning (RL)
Supervised Learning - Supervised Learning is one of the most
commonly used types of machine learning, where the algorithm is trained using
a labeled dataset. This means that for every input in the dataset, the
corresponding output (or label) is already known. The primary objective is to
learn a mapping from inputs to outputs so that the system can predict the correct
label for new, unseen data. This method is particularly effective in scenarios
where historical data can be used to predict future outcomes. Common
applications include email spam detection, image recognition, medical
diagnosis, and loan approval systems. Algorithms widely used in supervised
learning include Linear Regression, Logistic Regression, Decision Trees,
Random Forests, Support Vector Machines, and various types of Neural
Networks. The performance of a supervised learning model heavily depends
on the quality and quantity of the labeled training data, and a well-trained
model can make highly accurate predictions.
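The idea of learning a mapping from labeled inputs to outputs can be sketched with the simplest possible supervised model: a one-variable linear regression fitted by closed-form least squares. The (input, label) pairs below are invented for illustration:

```python
# Toy supervised learning: fit y = a*x + b to labeled (input, output) pairs
# using closed-form least squares, then predict a label for unseen input.
xs = [1.0, 2.0, 3.0, 4.0]      # inputs
ys = [2.1, 4.0, 6.2, 7.9]      # known outputs (labels), roughly y = 2x

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
# Slope = covariance(x, y) / variance(x); intercept follows from the means.
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

def predict(x):
    return a * x + b

print(round(predict(5.0), 2))   # label predicted for an unseen input
```

The same train-then-predict pattern underlies the larger supervised algorithms named above (decision trees, SVMs, neural networks); only the model family changes.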
Unsupervised Learning - Unsupervised Learning involves training a
machine learning model on data that has no labeled responses. Unlike
supervised learning, there is no predefined output, and the algorithm attempts
to identify hidden patterns, groupings, or structures within the data. This type of
learning is particularly useful when exploring unknown data or when labeling
data is too costly or impractical. A common application of unsupervised
learning is customer segmentation, where businesses categorize customers
based on purchasing behavior or demographics without having predefined
categories. Other applications include anomaly detection (such as fraud
detection or network security), dimensionality reduction, and data
compression. Popular algorithms include K-Means Clustering, Hierarchical
Clustering, DBSCAN (Density-Based Spatial Clustering of Applications with
Noise), and Principal Component Analysis (PCA). The success of
unsupervised learning depends largely on how well the data is preprocessed
and the ability of the algorithm to detect meaningful structures.
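As a minimal sketch of unsupervised learning, the following toy K-Means run groups unlabeled one-dimensional points into two clusters; the data points and the initialization are arbitrary examples:

```python
# Toy unsupervised learning: 1-D K-Means clustering with k = 2.
# No labels are given; the algorithm discovers the two groups itself.
data = [1.0, 1.2, 0.8, 8.0, 8.3, 7.9]
centroids = [data[0], data[3]]          # naive initialization

for _ in range(10):                     # a few refinement iterations
    clusters = [[], []]
    for x in data:                      # assign each point to nearest centroid
        nearest = min((0, 1), key=lambda i: abs(x - centroids[i]))
        clusters[nearest].append(x)
    # Move each centroid to the mean of its assigned points
    # (this toy assumes neither cluster becomes empty).
    centroids = [sum(c) / len(c) for c in clusters]

print([round(c, 2) for c in centroids])
```

Real implementations work in many dimensions, use better initialization (e.g. k-means++), and check for convergence rather than running a fixed number of iterations.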
Reinforcement Learning - Reinforcement Learning (RL) is a unique
type of machine learning where an agent learns to make decisions by
interacting with its environment. The agent takes actions and receives feedback
in the form of rewards or penalties. Over time, by trial and error, the agent
learns an optimal strategy or policy that maximizes cumulative rewards. Unlike
supervised learning, reinforcement learning does not require labeled
input/output pairs but rather focuses on sequential decision-making. It is
inspired by behavioral psychology and is particularly effective in dynamic and
complex environments. Applications of reinforcement learning include
robotics, autonomous vehicles, industrial automation, personalized
recommendations, and gaming—one famous example being AlphaGo, the AI
that defeated human champions in the game of Go. Reinforcement learning
methods are based on concepts like Markov Decision Processes, Q-learning,
Deep Q-Networks (DQNs), and Policy Gradient methods. This type of learning
is powerful for tasks that involve planning, control, and learning from delayed
feedback.
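The reward-driven trial-and-error loop described above can be sketched with tabular Q-learning on a toy four-state corridor. The environment, the reward of 1 at the goal, and the hyperparameters are all invented for illustration:

```python
import random
random.seed(0)

# Toy reinforcement learning: tabular Q-learning on a 4-state corridor.
# The agent starts at state 0 and earns a reward of 1 for reaching state 3.
n_states, actions = 4, [-1, +1]          # move left or right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration

for _ in range(200):                      # training episodes
    s = 0
    while s != 3:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        a = random.choice(actions) if random.random() < epsilon \
            else max(actions, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s2 == 3 else 0.0
        # Q-learning update: move Q toward reward plus discounted best future value.
        best_next = max(Q[(s2, b)] for b in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy should move right (+1) in every state.
policy = {s: max(actions, key=lambda act: Q[(s, act)]) for s in range(3)}
print(policy)
```

The update rule is the standard Q-learning step; Deep Q-Networks (DQNs), mentioned above, replace the table `Q` with a neural network so that the same idea scales to large state spaces.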
2. Deep Learning:
Deep learning is a branch of artificial intelligence that mimics the
human brain’s ability to learn from data. It relies on artificial neural networks
with multiple layers that process input data to recognize patterns and make
decisions. The "deep" in deep learning refers to the use of many layers in the
network, each helping to analyze data at different levels of abstraction. Unlike
traditional machine learning, which often requires manual feature selection,
deep learning automatically extracts important features from raw data. Early
layers detect basic patterns, while deeper layers identify more complex
structures, enabling the model to understand images, audio, or text more
effectively. This capability makes deep learning especially powerful for tasks
involving unstructured data. Training deep learning models usually requires
large datasets and high computational power, typically provided by GPUs.
However, the performance of these models in areas like speech recognition,
image analysis, and language translation often exceeds traditional methods. In
applications, deep learning powers technologies such as facial recognition,
autonomous vehicles, and voice assistants. In computer vision, it enables
systems to classify images and detect objects. In language processing, it helps
machines understand and respond to human language more naturally. Overall,
deep learning plays a vital role in modern AI by enabling machines to learn
complex patterns and perform tasks that once required human intelligence.
3. Natural Language Processing (NLP)
NLP allows machines to understand, interpret, and generate human
language. It bridges the gap between human communication and computer
understanding.
Core tasks in NLP:
Text classification
Named entity recognition (NER)
Sentiment analysis
Machine translation
Question answering
Text classification- Text classification is a foundational task in NLP
that involves assigning predefined categories or labels to a given body of text.
This technique is widely used in various applications such as spam detection
in email systems, where incoming messages are automatically filtered as
"spam" or "not spam" based on their content. It is also used in topic detection
for news articles, organizing documents, and even in online platforms to
categorize customer complaints or reviews. By training models on labeled data,
text classification enables machines to understand the context of text and make
accurate decisions about its category.
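A real classifier would be trained on a large labeled corpus; the toy sketch below only illustrates the core idea of learning word statistics per class from labeled examples and scoring new text against them. All messages are made up:

```python
# Toy text classification: label a message "spam" or "not spam" by which
# class's training vocabulary it overlaps with more (a crude bag-of-words model).
training = {
    "spam":     ["win a free prize now", "free money claim your prize"],
    "not spam": ["meeting moved to monday", "lunch with the project team"],
}

# Build a word-count table per class from the labeled examples.
counts = {label: {} for label in training}
for label, texts in training.items():
    for text in texts:
        for word in text.split():
            counts[label][word] = counts[label].get(word, 0) + 1

def classify(message):
    # Score each class by how often it has seen the message's words.
    scores = {label: sum(c.get(w, 0) for w in message.split())
              for label, c in counts.items()}
    return max(scores, key=scores.get)

print(classify("claim your free prize"))      # overlaps the spam examples
print(classify("project meeting on monday"))  # overlaps the non-spam examples
```

Production systems use the same bag-of-words intuition with probabilistic weighting (e.g. Naive Bayes) or learned embeddings instead of raw counts.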
Named Entity Recognition (NER) - Named Entity Recognition refers
to the process where the system scans a text and identifies named entities such
as names of people, places, organizations, dates, currencies, and more. For
example, in the sentence "Apple Inc. was founded in California by Steve Jobs,"
NER helps detect that “Apple Inc.” is an organization, “California” is a
location, and “Steve Jobs” is a person. This task is essential in extracting
structured information from unstructured text, making it easier to process large
volumes of data in fields like journalism, finance, or legal analysis.
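Production NER relies on trained statistical models; the sketch below uses a simple dictionary (gazetteer) lookup only to show the input/output shape of the task, reusing the example sentence from the text:

```python
# Toy named entity recognition: a small gazetteer (entity dictionary) lookup.
# Real NER systems learn to recognize unseen entities from context; this
# sketch only labels entities it has been told about in advance.
gazetteer = {
    "Apple Inc.": "ORGANIZATION",
    "California": "LOCATION",
    "Steve Jobs": "PERSON",
}

def recognize(sentence):
    # Report every dictionary entity that appears in the sentence.
    return {name: label for name, label in gazetteer.items() if name in sentence}

entities = recognize("Apple Inc. was founded in California by Steve Jobs")
print(entities)
```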
Sentiment analysis- Sentiment analysis is another core task that
involves determining the emotional tone or attitude expressed in a piece of text.
It aims to classify whether the sentiment behind the content is positive,
negative, or neutral. This is particularly useful in understanding customer
feedback, analyzing social media posts, or monitoring public opinion on
various topics. Companies use sentiment analysis to measure user satisfaction,
respond to negative comments promptly, and gauge the success of marketing
campaigns based on public reactions.
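A minimal lexicon-based scorer illustrates the positive/negative/neutral decision; the word list is an arbitrary example, and trained models would replace it with learned weights:

```python
# Toy sentiment analysis: each known word carries a polarity score, and
# the sign of the summed score gives the overall sentiment.
lexicon = {"great": 1, "love": 1, "excellent": 1,
           "bad": -1, "terrible": -1, "slow": -1}

def sentiment(text):
    # Strip simple punctuation, lowercase, and sum the word polarities.
    score = sum(lexicon.get(w.strip(".,!").lower(), 0) for w in text.split())
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("The delivery was slow and the packaging was terrible."))
print(sentiment("Great product, I love it!"))
```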
Machine translation- Machine translation allows a system to translate
text or speech from one language to another. This task has grown in accuracy
and usefulness with the help of advanced neural network models. Tools like
Google Translate are common examples where machine translation breaks
down language barriers, helping people communicate across different regions.
It is widely used in global business operations, international customer service,
and cross-cultural academic research. Good machine translation requires the
system to not only replace words but also understand grammar, idioms, and
cultural nuances of both languages involved.
Question answering- Question answering involves building systems
that can automatically answer questions posed by humans, using a given
context or by retrieving information from large datasets or the internet. This is
the technology behind intelligent virtual assistants such as Alexa, Google
Assistant, and Siri. These systems are capable of understanding the user’s
question, searching for relevant information, and responding with a concise and
accurate answer. Question answering tasks are particularly challenging because
they require both understanding natural language and reasoning based on the
context provided.
4. Expert Systems
Expert Systems are AI programs that emulate the decision-making
abilities of a human expert. They rely on a knowledge base and inference rules
to draw conclusions.
Components:
Knowledge base
Inference engine
User interface
Knowledge base- The knowledge base is one of the core components
of an expert system. It contains domain-specific information in the form of
facts and rules that mimic the knowledge of a human expert. This component
serves as the central repository where all relevant information needed for
decision-making is stored. For example, in a medical diagnosis expert system,
the knowledge base might include symptoms, diseases, and treatment
protocols. The strength of an expert system largely depends on the richness and
accuracy of its knowledge base, as it forms the foundation for reasoning and
problem-solving.
Inference engine- The inference engine is the brain of the expert
system. It processes the information stored in the knowledge base and applies
logical rules to derive conclusions or new facts. This component simulates the
reasoning process of a human expert by analyzing input data, comparing it
against known facts, and using a set of rules to generate answers or solutions.
It can use forward chaining (data-driven) or backward chaining (goal-driven)
reasoning methods. The effectiveness of the inference engine directly
influences the system’s ability to solve complex problems accurately and
efficiently.
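The interplay of knowledge base and inference engine can be sketched with a forward-chaining loop over if-then rules. The facts and rules below are invented toy examples, not medical guidance:

```python
# Toy expert system: a rule-based knowledge base plus a forward-chaining
# (data-driven) inference engine.
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "refer_to_doctor"),
]

def forward_chain(facts, rules):
    # Repeatedly fire any rule whose conditions are all satisfied,
    # adding its conclusion as a new fact, until nothing changes.
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"fever", "cough", "short_of_breath"}, rules)
print("refer_to_doctor" in derived)
```

Backward chaining, the goal-driven alternative mentioned above, would instead start from the conclusion "refer_to_doctor" and work backwards to check whether its conditions can be established.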
User interface- The user interface acts as the communication bridge
between the user and the expert system. It allows users to input queries, view
results, and receive recommendations in a clear and interactive manner. A well-
designed user interface ensures that the system is user-friendly and accessible,
even to individuals without technical knowledge of how the expert system
functions internally. It plays a vital role in making the expert system practical
and usable across different fields like healthcare, finance, or engineering.
5. Data (Big Data)
Data (Big Data) plays a foundational role in artificial intelligence. AI
systems depend heavily on data to learn patterns, make predictions, and
improve their performance over time. Big Data refers to the massive volume
of structured and unstructured data generated from numerous sources at high
speeds. This data acts as the fuel that powers AI models, enabling them to
understand complex relationships, make decisions, and continuously evolve.
Without access to vast and diverse datasets, AI algorithms would lack the
information necessary to become accurate or reliable.
Characteristics of Big Data:
Volume
Velocity
Variety
Veracity
Volume- Volume refers to the sheer quantity of data produced every
second across the globe. In the age of digital transformation, organizations and
devices generate terabytes and even petabytes of data daily. This includes
everything from user clicks on websites to sensor readings from smart devices.
AI requires such massive volumes of data to identify meaningful patterns and
learn from a wide range of examples. For instance, training a speech
recognition AI system would involve thousands of hours of audio data to
account for different accents, tones, and languages.
Velocity- Velocity is about the speed at which data is created,
transmitted, and processed. With advancements in technology, data is now
being generated in real time from sources such as online transactions, GPS-
enabled devices, and live video streams. For AI systems to remain effective
and up to date, they must be able to process and analyze data at the same pace
it is produced. Real-time data processing enables applications like fraud
detection, personalized recommendations, and autonomous vehicle navigation.
Variety- Variety addresses the different types of data that AI systems
need to handle. Unlike traditional systems that only dealt with structured data
(like spreadsheets), modern AI models must process unstructured and semi-
structured data such as images, audio, video, social media content, and sensor
outputs. Each type of data requires different techniques for processing and
understanding. For example, computer vision models analyze images and
video, while natural language processing models work with text and speech.
Veracity- Veracity concerns the quality and trustworthiness of the
data. AI systems must rely on accurate and consistent data to make correct
decisions. If the data is incorrect, incomplete, or biased, it can lead to faulty AI
outputs. Ensuring data veracity involves cleaning the data, validating its
sources, and eliminating noise and inconsistencies. High-quality data is crucial
in sensitive fields like healthcare and finance, where incorrect AI predictions
can have serious consequences.
1.4 The AI Landscape
The AI landscape refers to the broad and continuously evolving
environment where artificial intelligence is researched, developed, and applied.
It includes various core areas such as machine learning, deep learning, natural
language processing, computer vision, and robotics. These technologies enable
machines to learn from data, interpret human language, recognize images and
patterns, and perform complex decision-making tasks. As these capabilities
grow, AI systems are increasingly integrated into everyday applications,
ranging from digital assistants to autonomous vehicles.
This landscape is shaped by contributions from academic institutions,
private tech companies, startups, and government bodies. These stakeholders
invest in research, build innovative tools, and establish policies to guide
responsible AI use. Ethical concerns, including data privacy, fairness, and the
impact of automation on jobs, have prompted global discussions and regulatory
efforts. As AI continues to transform industries and society, understanding its
components and developments is crucial for navigating the opportunities and
challenges it presents.
1.5 AI in the Healthcare Industry
Artificial Intelligence (AI) is rapidly transforming the healthcare
industry, revolutionizing the way healthcare providers diagnose, treat, and
manage patients. The integration of AI technologies in healthcare has led to the
development of more efficient, accurate, and personalized medical practices,
addressing challenges in diagnostics, patient care, drug discovery, and
administrative processes. AI's ability to process and analyze large amounts of
data enables healthcare professionals to make better-informed decisions,
ultimately improving patient outcomes and reducing healthcare costs.
AI in Medical Diagnostics
One of the most promising applications of AI in healthcare is in the
field of medical diagnostics. AI-powered tools, especially those based on
machine learning (ML) and deep learning (DL) algorithms, have shown
immense potential in enhancing diagnostic accuracy. These tools can analyze
medical images, such as X-rays, CT scans, MRIs, and ultrasound images, far
more quickly and accurately than traditional methods. For instance, AI
algorithms trained to identify patterns in radiology images can detect early
signs of conditions such as cancer, heart disease, and neurological disorders
with remarkable precision.
Additionally, AI is increasingly used in the analysis of patient data,
including electronic health records (EHRs), to identify potential risks and
diagnose conditions that might otherwise go unnoticed. By analyzing vast
amounts of data, AI can uncover hidden patterns and predict the likelihood of
diseases before they manifest clinically. Early detection through AI not only
improves patient outcomes but also lowers the cost of treatment by enabling
earlier interventions.
AI in Early Disease Detection and Prevention
Perhaps one of the most promising aspects of AI in medical
diagnostics is its ability to detect diseases at their earliest stages, sometimes
even before symptoms appear. Early detection is key to improving survival
rates and minimizing the impact of many diseases. By analyzing vast amounts
of medical data—such as genetic profiles, lifestyle factors, and environmental
exposures—AI systems can identify individuals at high risk for certain
conditions, allowing for early interventions and preventive measures.
For instance, AI systems are being used to predict the onset of diabetes
by analyzing blood sugar levels, genetic factors, and lifestyle habits such as
diet and physical activity. By identifying individuals at risk early on, healthcare
providers can implement preventive measures, such as lifestyle changes, to
delay or even prevent the onset of the disease. Similarly, AI systems are being
used to predict the likelihood of heart disease by analyzing factors such as
cholesterol levels, blood pressure, and family history. Early intervention, such
as prescribing medications or recommending lifestyle changes, can
significantly reduce the risk of heart attacks and strokes.
AI in Predictive Diagnostics
Apart from imaging, AI is also proving invaluable in predictive
diagnostics. Through the analysis of patient data, including electronic health
records (EHRs), laboratory results, and other medical histories, AI can identify
potential risks, predict the development of certain conditions, and recommend
preventive measures. By examining a patient's complete health record and
comparing it with vast databases of similar patient profiles, AI can uncover
hidden patterns that may not be immediately obvious to a healthcare provider.
This form of diagnostic analysis can often reveal risk factors for diseases before
they even manifest.
For example, AI can analyze a patient's medical history, family
history, genetic predisposition, and lifestyle choices to predict the likelihood of
developing conditions such as cardiovascular diseases or diabetes. Early
identification of such risks can prompt preventive measures such as lifestyle
modifications or medications to mitigate the potential onset of the disease.
AI can also provide doctors with predictive models for the progression
of existing conditions, such as chronic illnesses. For example, in patients with
chronic kidney disease (CKD), AI can analyze kidney function over time to
predict the likelihood of renal failure, allowing for early intervention and more
personalized treatment plans. AI can also help healthcare providers anticipate
complications that may arise during surgery or other treatments by evaluating
a patient's overall health and response to past treatments.
The combination of machine learning and vast amounts of data allows AI to
offer insights that were previously unavailable through traditional diagnostic
methods. This predictive capability is particularly valuable in preventive
medicine, as it allows healthcare providers to intervene before a disease or
condition becomes acute, leading to better health outcomes and reduced
healthcare costs in the long term.
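As a simplified sketch of how such risk prediction works underneath, the following fits a tiny logistic regression by gradient descent to synthetic, invented "patient" records; it is illustrative only and nothing like a clinical model:

```python
import math

# Toy predictive model: logistic regression trained by gradient descent.
# Feature: one normalized risk-factor reading; label: whether the condition
# later developed. All records are synthetic, invented data.
records = [(0.1, 0), (0.2, 0), (0.3, 0), (0.4, 0),
           (0.6, 1), (0.7, 1), (0.8, 1), (0.9, 1)]

w, b, lr = 0.0, 0.0, 1.0
for _ in range(2000):
    for x, y in records:
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))   # predicted risk
        # Gradient step on the log-loss for this single record.
        w -= lr * (p - y) * x
        b -= lr * (p - y)

def risk(x):
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

print(round(risk(0.15), 2), round(risk(0.85), 2))  # low- vs high-reading patient
```

Real predictive diagnostics combine many features (labs, history, genetics) and far larger models, but the principle is the same: learn from past outcomes, then score new patients by risk.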
AI in Medical Imaging
One of the most significant areas of AI in medical diagnostics is the
analysis of medical images, which includes imaging modalities like X-rays, CT
scans, MRIs, and ultrasound images. Traditionally, these images are
interpreted by radiologists, who assess the images visually for any
abnormalities or signs of disease. However, this process is not only time-
consuming but also prone to human error due to fatigue or the sheer volume of
images that need to be reviewed.
AI tools, particularly those powered by deep learning algorithms, can
analyze these medical images far more efficiently and accurately than human
radiologists in certain instances. Deep learning models are trained on vast
datasets of labeled images to recognize patterns that may indicate the presence
of a disease. For example, AI has been shown to significantly outperform
human radiologists in detecting early signs of lung cancer, breast cancer, and
brain tumors in imaging scans. These algorithms can identify small, subtle
anomalies that may be missed by human eyes, making early detection possible.
For instance, AI systems trained on radiology images can detect the initial signs
of lung cancer at stages when it is not visible to the naked eye. Early detection
significantly increases the chances of successful treatment and survival.
Similarly, AI tools used for mammography can more accurately detect breast
cancer compared to traditional methods, often identifying cancers that are
harder to see in standard mammograms. Moreover, AI can be used in the analysis of
neurological conditions through the examination of brain scans. By identifying
patterns in MRIs, AI can help doctors recognize early-stage Alzheimer's
disease, multiple sclerosis, or even stroke. This ability to detect abnormalities
at a microscopic level ensures more accurate and timely diagnoses, which is
crucial for patient outcomes.
AI in Personalized Medicine
Personalized medicine is transforming healthcare by tailoring medical
treatments to individual patients based on their unique genetic makeup,
lifestyle, and other factors. Unlike traditional approaches that offer generalized
treatments for all patients with the same condition, personalized medicine uses
AI algorithms to analyze large datasets, such as genetic information, medical
history, and environmental factors. These algorithms identify specific markers
that can influence disease risk, drug effectiveness, and potential side effects.
As a result, patients receive treatments that are specifically suited to their
unique biology, improving the overall effectiveness of healthcare while
minimizing adverse effects. AI's role in processing and interpreting vast and
complex datasets allows healthcare providers to create individualized treatment
plans, optimizing outcomes for patients across different medical fields.
In oncology, for example, AI helps in the early detection and
treatment of cancer by analyzing genetic profiles of tumors. By using machine
learning models, AI can identify genetic mutations that drive cancer growth,
enabling doctors to select targeted therapies that specifically address these
mutations. AI-powered systems can also predict how a patient’s cancer will
respond to certain drugs, which is crucial for avoiding ineffective treatments
and minimizing the side effects commonly associated with traditional cancer
therapies. This move towards precision oncology ensures that patients receive
the most effective drugs, tailored to the specific genetic alterations in their
cancer cells, significantly improving survival rates and reducing the need for
trial-and-error treatment approaches. In cardiology, AI’s ability to analyze both
genetic and lifestyle data plays a crucial role in predicting a patient’s risk for
heart disease. By integrating data such as genetic predispositions, exercise
habits, and diet, AI models can assess a patient's likelihood of developing
cardiovascular diseases. This allows doctors to intervene earlier, implementing
lifestyle changes or medications that can prevent the onset of heart disease. AI
can also be used to develop personalized treatment regimens for heart disease
patients, ensuring that the prescribed medications are the most effective for the
individual. In this way, AI not only improves diagnostic accuracy but also
offers proactive strategies to manage or prevent chronic conditions, further
enhancing personalized care.
AI’s integration with pharmacogenomics is another breakthrough in
personalized medicine. Pharmacogenomics involves the study of how genes
influence an individual’s response to drugs. AI algorithms can quickly analyze
genetic data to predict how a person will react to a particular medication,
determining the optimal dosage and identifying potential risks of adverse
reactions. This ensures that patients receive medications that are not only
effective but also safe, reducing the occurrence of harmful side effects. As AI
continues to learn from the ever-growing body of genetic and medical data, the
potential for more accurate predictions of drug responses becomes more
refined, further supporting personalized medicine’s goal of providing
individualized healthcare solutions.
AI in Routine Patient Care
Artificial Intelligence (AI) is becoming an integral part of routine
patient care by streamlining clinical workflows, enhancing diagnostic
accuracy, and improving patient engagement. In day-to-day healthcare
settings, AI is used to automate administrative tasks such as scheduling
appointments, managing patient records, and sending reminders for
medications or follow-ups. Virtual assistants and AI-powered chatbots are
being deployed in clinics and hospitals to help patients navigate their care
plans, check symptoms, and receive guidance without always needing to see a
healthcare professional. This not only increases operational efficiency but also
frees up valuable time for clinicians to focus more on complex patient needs.
As a result, healthcare institutions can deliver more responsive and patient-
centric services.
One of the most impactful contributions of AI in routine care is in
clinical decision support systems (CDSS). These systems use machine learning
algorithms to analyze medical data and assist doctors in making better-
informed decisions. For example, when a physician inputs a patient's
symptoms, history, and lab results into an AI system, it can cross-reference the
data with thousands of case histories to suggest possible diagnoses or
recommend additional tests. This helps reduce diagnostic errors and ensures
that patients receive the most accurate and timely treatment. AI can also flag
early warning signs of deteriorating health, prompting immediate intervention
and potentially preventing complications.
Remote monitoring is another area where AI plays a critical role in
routine care. With the help of wearable devices and smart sensors, AI systems
can continuously track vital signs like heart rate, oxygen levels, and glucose
levels. These systems analyze the data in real-time and alert both patients and
providers when readings fall outside the normal range. For chronic disease
patients, this means fewer unnecessary hospital visits and more consistent
management of their conditions. The continuous data stream also allows for
personalized adjustments to treatment plans, creating a more dynamic and
responsive healthcare experience.
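The alerting logic described above can be illustrated with a short sketch. The Java sketch below is purely illustrative and is not taken from any particular monitoring product; the numeric ranges are simplified placeholders, not clinical reference values:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of threshold-based remote-monitoring alerts. The
// ranges are simplified placeholders, not clinical reference values; a
// real system would personalize them per patient.
class VitalAlerts {
    // Returns alert messages for readings outside the assumed normal ranges.
    static List<String> checkVitals(int heartRate, int spo2, int glucoseMgDl) {
        List<String> alerts = new ArrayList<>();
        if (heartRate < 60 || heartRate > 100) alerts.add("Heart rate out of range: " + heartRate);
        if (spo2 < 94) alerts.add("Low oxygen saturation: " + spo2);
        if (glucoseMgDl < 70 || glucoseMgDl > 180) alerts.add("Glucose out of range: " + glucoseMgDl);
        return alerts;
    }
}
```

In practice the thresholds would be tuned per patient and the alerts routed to both the patient and the care provider.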
Moreover, AI supports routine imaging and diagnostic processes
through tools that can read X-rays, CT scans, and MRIs with remarkable
precision. These tools are not meant to replace radiologists but rather to serve
as a second opinion, increasing diagnostic confidence. In some cases, AI can
detect anomalies that are too subtle for the human eye to catch. This capability
is especially valuable in busy healthcare settings where large volumes of
imaging need to be reviewed quickly. As these technologies continue to evolve,
the role of AI in everyday patient care is likely to grow, making healthcare
more efficient, accurate, and personalized for all patients.
AI in Surgical Robotics
One of the most significant applications of AI in healthcare robotics
is in the field of surgery. Surgical robots, such as the da Vinci Surgical System,
have been revolutionizing the way surgeries are performed. These robots are
not designed to replace surgeons, but rather to augment their capabilities by
offering enhanced precision, control, and flexibility during operations. The da
Vinci Surgical System, for example, uses AI algorithms to filter out tremors in
a surgeon’s hand, allowing for smoother, more accurate movements. This
precision is particularly valuable in complex, delicate surgeries, such as those
involving the heart, brain, or prostate.
AI's role in surgical robotics doesn't stop at precision. By integrating
machine learning and data analytics, robotic systems can help surgeons make
real-time decisions during an operation. These systems are capable of
analyzing live data, such as the patient's vital signs and surgical progress, and
providing recommendations to the surgeon, ensuring the procedure stays on
track. In addition, AI can analyze historical data from previous surgeries to
predict potential complications and suggest the best course of action.
A key benefit of AI-powered surgical robots is the ability to perform
minimally invasive surgeries. Minimally invasive procedures are performed
through small incisions, reducing the trauma to the body and allowing surgeons
to work with greater accuracy in confined spaces. Because of this, patients
experience significantly smaller scars, reduced blood loss, and less pain after
surgery. The smaller incisions also lead to a shorter recovery time, meaning
patients can return to their daily lives much sooner. This not only improves the
patient’s experience but also helps reduce the costs associated with hospital
stays, making healthcare more efficient.
In addition, AI-powered robotics can help surgeons plan surgeries
with greater precision. By using pre-operative data, such as medical imaging
and 3D modeling, AI can create detailed surgical plans, enabling surgeons to
visualize the procedure before even making the first incision. This allows for a
more organized and methodical approach to surgery, which can significantly
reduce the risk of errors and complications.
Advantages of AI in the Healthcare Industry
Faster and Accurate Diagnosis: AI systems can process large volumes of
medical data quickly, helping doctors detect diseases like cancer, heart issues,
and neurological disorders more accurately and at an early stage.
Improved Treatment Planning: AI algorithms help in creating personalized
treatment plans by analyzing patient history, genetic information, and current
conditions.
24/7 Patient Monitoring: With AI-powered wearables and monitoring
systems, patients can be tracked continuously, allowing timely interventions
and reducing the need for frequent hospital visits.
Automation of Administrative Tasks: AI can handle routine tasks such as
patient record management, billing, and appointment scheduling, which
reduces administrative burden and human errors.
Cost Reduction: Long-term use of AI reduces operational costs by optimizing
hospital workflows, reducing unnecessary tests, and avoiding readmissions.
Drug Discovery and Development: AI speeds up the process of finding new
drugs by simulating how different drugs react in the human body, thus
accelerating research and development.
Enhanced Radiology and Imaging: AI tools assist radiologists in interpreting
scans and images with greater precision and speed, minimizing chances of
oversight.
Disadvantages of AI in the Healthcare Industry
Lack of Human Empathy: AI cannot replace the human touch, compassion,
and emotional understanding provided by healthcare professionals.
High Implementation Costs: Setting up and maintaining AI systems requires
significant investment in infrastructure, software, and training.
Data Privacy and Security Risks: Healthcare data is highly sensitive; if AI
systems are not properly secured, this can lead to data breaches or misuse.
Dependence on Data Quality: AI outcomes are only as good as the data
provided. Incomplete, biased, or incorrect data can lead to wrong diagnoses or
treatment suggestions.
Job Displacement Concerns: Some roles, especially administrative or
diagnostic support roles, may be reduced or replaced by automation, causing
concern over employment in the sector.
Regulatory and Legal Challenges: AI in healthcare must comply with strict
medical laws and standards, and accountability becomes complex if AI causes
harm.
Limited Flexibility in Complex Cases: AI systems may not perform well in
rare or unexpected medical cases where human judgment is critical.
CHAPTER 2
2. LITERATURE SURVEY
AI-Driven Produce Management and Self-Checkout System for Supermarkets
Author(s): V. Nandhakumar, B. Jyothsna, and S. Gnanapriya
Published Year: 2023
In this study, the authors developed an AI-powered self-checkout
system for supermarkets focused on recognizing fruits and vegetables without
barcodes. The system utilizes computer vision and machine learning
algorithms to identify items, weigh them, and automatically generate prices,
eliminating the need for manual input. By integrating with inventory databases,
it ensures real-time stock updates and promotes a seamless shopping
experience. The system significantly reduces human error, customer wait
times, and labor costs, offering a modern solution to conventional checkout
processes. A notable advantage is its automation of fresh produce handling—a
segment often resistant to digitization. However, the system requires advanced
imaging hardware and complex integration with point-of-sale systems, which
results in a high initial implementation cost. These limitations restrict its
adoption, particularly in small-scale retail stores with limited budgets or legacy
infrastructure. The model supports multilingual interfaces to accommodate diverse
customer demographics. It also allows for training customization based on
regional produce variations. Implementation may be more feasible through
phased integration with existing billing counters. The project demonstrates
AI’s potential to streamline checkout while improving customer autonomy.
Real-Time Supermarket Product Recognition on Mobile Devices
Author(s): Julian Strohmayer and Martin Kampel
Published Year: 2023
This work presents a mobile-based solution for real-time supermarket
product recognition using lightweight deep learning techniques. The system
uses convolutional neural networks (CNNs) optimized for smartphone
processors, enabling users to scan product GTIN codes through their device’s
camera to receive instant pricing and nutritional information. Designed with
energy efficiency and low latency in mind, the framework supports self-
checkout and is especially suited for low-resource environments such as
developing countries. One of the key strengths of this system is its wide
compatibility with Android devices, enhancing accessibility and user adoption.
The use of mobile AI significantly lowers infrastructure requirements, making
it more viable than fixed-station solutions. However, the model’s effectiveness
is limited by real-world variables such as lighting conditions, camera
resolution, and potential occlusions during scanning. The developers also
included offline caching to handle poor network areas. User data privacy is
maintained through local-only processing unless cloud backup is enabled.
Integration with loyalty programs and mobile payment systems adds customer
value. Overall, it provides an affordable, scalable tech upgrade for grocery
stores worldwide. Enhancing the user interface could further support elderly or
visually impaired users. Future updates may use AR overlays to assist in faster
product identification. The system can also support multilingual label
recognition to aid diverse customer bases. Retailers may use collected data to
adjust shelf arrangements for optimal scanning. Continuous training from user
interactions helps improve long-term recognition precision.
Inventory Control Using RFID and AI-Based Drone
Author(s): Filip Benes, Pavel Stasa, and Jiri Svub
Published Year: 2022
This study explores the application of AI-guided aerial drones
equipped with RFID scanners for inventory control in large supermarkets and
warehouses. The drones autonomously follow pre-programmed routes,
scanning RFID-tagged items and transmitting real-time data to a central
inventory management system. The integration of object detection and
navigation algorithms ensures accurate path following and item identification.
This solution greatly reduces manual labour and enhances inventory auditing
speed and accuracy. It is particularly effective for large storage areas where
manual stock-taking is time-consuming and error-prone. However, the
approach requires complete RFID coverage across all SKUs, which involves
high upfront tagging costs. Additionally, environmental interference, indoor
flight safety regulations, and maintenance needs present further operational
challenges. To increase reliability, collision avoidance systems are incorporated
using ultrasonic sensors. Rechargeable battery packs allow for extended
operation with minimal downtime. The system can integrate with warehouse
management software for automated restocking. Future updates may include
AI anomaly detection for stock discrepancies. Machine learning can optimize
drone flight paths over time. Battery monitoring systems alert operators before
operational failures occur. Night vision support allows inventory checks during
off-hours. The drones can also collect environmental data like temperature and
humidity. This data can help preserve perishable items stored in large
warehouses.
AI to Reduce Food Waste in Small Grocery Stores
Author(s): Alexandre M. Nascimento and Anna C. M. Queiroz
Published Year: 2022
This research introduces an AI-based demand forecasting system
tailored for small and medium-sized grocery stores to minimize food waste.
The model incorporates historical sales, perishability indicators, and seasonal
trends to optimize stock levels, ensuring that perishable items are ordered in
quantities that match predicted demand. The system is designed to function
effectively with smaller datasets, offering adaptability to store-specific sales
patterns. A key benefit is the potential for immediate reduction in overstock
and spoilage, improving sustainability and profitability. The researchers
highlight its suitability for independent grocery stores that often lack access to
enterprise-level analytics tools. However, the system's broader applicability is
hindered by the diversity of consumer behavior, variations in local supply
chains, and the initial need for structured sales data to train the model. An
intuitive dashboard allows store owners to visualize demand trends in real-
time. Updates can be made automatically to reordering systems based on
prediction confidence. Manual override features ensure user control in
exceptional cases. The system supports integration with point-of-sale platforms
for seamless data flow. Suggestions for discounts on near-expiry products can
be generated dynamically. AI can also help cluster stores with similar profiles
for shared learning. Support for multilingual interfaces makes the platform
usable in diverse regions. Cloud-based deployment allows remote access and
monitoring. The system can also flag anomalies in customer purchasing trends
for further analysis.
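The core idea of the forecasting model, ordering perishables to match predicted demand, can be sketched with a simple moving-average baseline. The Java sketch below is only illustrative; the study's actual model is richer (perishability indicators, seasonal trends), and all names and figures here are hypothetical:

```java
// Illustrative sketch of a moving-average demand baseline for ordering
// perishables. The study's model is richer (perishability indicators,
// seasonality); this only shows the idea of matching orders to forecast
// demand. All names and figures are hypothetical.
class DemandForecast {
    // Average of the last `window` days of sales, used as the daily forecast.
    static double movingAverage(int[] dailySales, int window) {
        int n = dailySales.length;
        int w = Math.min(window, n);
        if (w == 0) return 0;
        double sum = 0;
        for (int i = n - w; i < n; i++) sum += dailySales[i];
        return sum / w;
    }

    // Order enough to cover forecast demand over the lead time, net of stock.
    static int suggestedOrder(int[] dailySales, int window, int leadTimeDays, int stockOnHand) {
        double perDay = movingAverage(dailySales, window);
        int needed = (int) Math.ceil(perDay * leadTimeDays) - stockOnHand;
        return Math.max(needed, 0);
    }
}
```

A small store could run such a baseline with nothing more than its point-of-sale history, which is precisely the low-data setting the study targets.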
Digitizing Fresh Food Supply Chains to Reduce Loss and Waste
Author(s): Emily Crofton
Published Year: 2023
This study focuses on the use of IoT sensors and predictive analytics
to enhance transparency and reduce waste in fresh food supply chains. A digital
platform was developed to assess real-time produce shelf life by analyzing
environmental conditions like temperature, humidity, and handling stress. This
information enables a First Expired, First Out (FEFO) inventory strategy across
distribution centers and retail points. The implementation resulted in
significant waste reduction and better stock rotation. Key strengths include
enhanced freshness, traceability, and decision-making based on data analytics.
However, the model relies heavily on continuous sensor calibration, data
accuracy, and consistent connectivity throughout the supply chain. Such
requirements can be difficult to meet in rural or fragmented logistical
networks. The system also features alert mechanisms for threshold breaches in
storage conditions. Data collected can be visualized through dashboards to aid
logistics planning. Mobile access allows on-site personnel to view inventory
status in real-time. It provides actionable insights for supply chain partners to
optimize delivery timing and reduce spoilage. Future improvements may
include blockchain for tamper-proof traceability. AI models can evolve by
learning from past spoilage incidents. Predictive maintenance alerts for
refrigeration units could be added to prevent failures. The solution may also be
adapted for use in export/import monitoring. Partnerships with logistics firms
can expand its global usability.
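The FEFO strategy mentioned above can be sketched in a few lines: batches leave the warehouse in ascending order of expiry date, unlike FIFO, which uses arrival order. The Java sketch below is illustrative only; class and field names are assumptions:

```java
import java.time.LocalDate;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Illustrative sketch of First Expired, First Out (FEFO) picking:
// batches are dispatched in ascending order of expiry date. Names
// are hypothetical.
class FefoPicker {
    static class Batch {
        final String id;
        final LocalDate expiry;
        Batch(String id, LocalDate expiry) { this.id = id; this.expiry = expiry; }
    }

    // Returns batch ids in the order they should leave the warehouse.
    static List<String> pickingOrder(List<Batch> batches) {
        List<Batch> sorted = new ArrayList<>(batches);
        sorted.sort(Comparator.comparing(b -> b.expiry));
        List<String> ids = new ArrayList<>();
        for (Batch b : sorted) ids.add(b.id);
        return ids;
    }
}
```

In the platform described above, the expiry date itself would be predicted from sensor data (temperature, humidity, handling stress) rather than taken from the label.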
CHAPTER 3
3. SYSTEM ANALYSIS
3.1 EXISTING SYSTEM:
The current systems for grocery management primarily involve basic
inventory tracking and expiry date management, often relying on manual
entries and spreadsheets. These systems fail to provide users with the flexibility
to manage groceries efficiently or reduce food waste effectively. Most existing
solutions do not have an integrated, intelligent way of tracking expiry dates,
and users are often left to remember when items are approaching their
expiry date. In addition, many grocery management apps provide
limited or no assistance in recipe suggestions, which leads to users either
wasting food or having to manually search for ways to use ingredients that are
about to expire. These systems typically lack smart notification features,
meaning users must manually check the expiry dates, which can lead to missed
items and food wastage. Some systems include basic barcode scanning for
inventory management, but the process is typically slow and lacks the
sophistication needed for large inventories. Also, these solutions do not provide
automatic updates when an item is used or when new items are added to the
inventory. Most existing solutions still require a lot of manual input, leading to
user fatigue and inconsistent data, which further hinders the goal of reducing
food waste. The existing systems also do not take full advantage of cloud
technologies like Firebase for real-time updates or machine learning to provide
smarter notifications and alerts. Users can enter their items manually but are
left with minimal insights into food usage patterns or suggestions on when to
restock items. Additionally, there’s no integration with external services, like
recipe suggestion platforms, which could further optimize the use of groceries.
3.2 PROPOSED SYSTEM:
The proposed system aims to address the growing issue of food waste by
providing a comprehensive mobile-based grocery management solution. This
system allows users to efficiently track and manage their grocery items through
features such as barcode scanning and receipt Optical Character Recognition
(OCR). By leveraging the ZXing API for barcode scanning and Google ML
Kit for OCR, the system facilitates easy product addition to the inventory. Once
products are added, the system organizes them by category and expiry date,
enabling users to monitor their groceries effectively.
A key component of the system is its automatic expiry date tracking.
As products near their expiration date, the system sends notifications to the
user, helping to reduce the risk of food spoilage. Additionally, the integration
with the Spoonacular API allows for personalized recipe recommendations
based on the current inventory, encouraging users to use the items they already
have before purchasing more. The system also features automatic inventory
updates, ensuring that the inventory is always accurate by adjusting the stock
count whenever items are consumed or added. This streamlined approach to
grocery management not only enhances user convenience but also contributes
significantly to reducing food waste.
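The expiry-alert decision at the heart of this feature can be sketched as follows. The Java sketch illustrates the logic only (the application itself is built in Kotlin), and the three-day threshold is an assumed default rather than a value fixed by this report:

```java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

// Illustrative sketch of the expiry-alert decision made before a push
// notification is sent. The three-day threshold is an assumed default.
class ExpiryTracker {
    static final long ALERT_THRESHOLD_DAYS = 3;

    // True when the item expires today or within the next few days.
    static boolean shouldNotify(LocalDate today, LocalDate expiryDate) {
        long daysLeft = ChronoUnit.DAYS.between(today, expiryDate);
        return daysLeft >= 0 && daysLeft <= ALERT_THRESHOLD_DAYS;
    }
}
```

In the app this check would run as a periodic background task, with the positive cases handed to Firebase Cloud Messaging for delivery.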
3.3 SYSTEM REQUIREMENTS
3.3.1 INTRODUCTION
The goal of this project is to develop a mobile-based solution that
helps users manage their groceries efficiently while reducing food waste. The
system tracks expiry dates, manages inventory, and provides personalized
recipe suggestions based on available ingredients. Through features like
barcode scanning, receipt OCR, and automatic stock updates, the system
ensures users are always aware of their food inventory. Notifications are sent
when items are nearing expiration, and the system also offers smart restocking
alerts. This smart grocery management solution is designed to be user-friendly,
leveraging technologies like Kotlin, Jetpack Compose, Firebase, and APIs like
Spoonacular for recipe suggestions.
The key functionalities include a barcode scanning feature to easily
add items, receipt OCR for extracting text from grocery receipts, and an
inventory management system using Firebase Firestore. Expiry tracking is a
critical feature, which notifies users about items approaching their expiration
date through Firebase Cloud Messaging (FCM). Recipe recommendations are
generated based on the available ingredients in the user's inventory. The system
is also designed for auto stock updates when users mark items as used, and
smart restocking alerts based on usage patterns.
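The smart restocking alert can be sketched as a simple projection: estimate how many days the current stock will last at the observed consumption rate, and flag the item when depletion falls inside a restock window. The Java sketch below is illustrative only; the names and figures are assumptions:

```java
// Illustrative sketch of the smart restocking alert: project how long the
// current stock lasts at the observed consumption rate and flag the item
// when depletion falls within a restock window. Names/values are assumptions.
class RestockAlert {
    // Days until depletion at the given consumption rate (units per day).
    static double daysUntilEmpty(double unitsInStock, double unitsPerDay) {
        if (unitsPerDay <= 0) return Double.POSITIVE_INFINITY;
        return unitsInStock / unitsPerDay;
    }

    static boolean needsRestock(double unitsInStock, double unitsPerDay, double restockWindowDays) {
        return daysUntilEmpty(unitsInStock, unitsPerDay) <= restockWindowDays;
    }
}
```

The consumption rate itself would be derived from the auto stock updates recorded whenever a user marks an item as used.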
3.3.2 HARDWARE REQUIREMENTS
As the proposed system is a mobile-based application aimed at
helping users manage groceries and minimize food wastage, the hardware
requirements are minimal and primarily focus on the end-user's mobile device.
Below is a detailed breakdown of the necessary hardware components:
SMARTPHONE (ANDROID DEVICE)
BARCODE SCANNING MODULE (VIA SMARTPHONE CAMERA)
SMARTPHONE (ANDROID DEVICE)
The core hardware component required for the functioning of this
application is an Android smartphone. The device must meet the following
minimum specifications:
Operating System: Android 5.0 (Lollipop) or higher.
Processor: Quad-core 1.5 GHz or better.
RAM: Minimum 2 GB.
Storage: At least 100 MB of free space for app installation, with additional
storage required for data caching, barcode data, and receipt image processing.
Camera: Functional rear-facing camera with autofocus capability, essential for
barcode scanning and receipt OCR.
Internet Connectivity: Required for synchronizing with Firebase services,
retrieving recipe data from Spoonacular API, and sending notifications via
Firebase Cloud Messaging (FCM).
Smartphones serve as the user interface for all interactions, including data
entry, notifications, inventory visualization, and system updates. The mobile
device should be capable of handling background services for expiry checks
and push notification processing without performance degradation.
BARCODE SCANNING MODULE (VIA SMARTPHONE CAMERA)
Instead of requiring any dedicated external hardware for barcode
scanning, the application leverages the inbuilt smartphone camera paired with
the ZXing (Zebra Crossing) Android Embedded API. The ZXing library
enables the camera to function as a barcode scanner with the following
characteristics:
Barcode Formats Supported: UPC-A, UPC-E, EAN-8, EAN-13, Code 39,
Code 93, Code 128, QR Code, etc.
Real-time Detection: The module processes barcode data in real-time, enabling
users to add grocery items with minimal manual input.
Integration with Inventory: Upon successful scanning, product information (if
available) is populated automatically, streamlining inventory management.
The barcode scanning functionality is optimized for standard lighting
conditions and uses autofocus features of the camera to ensure high accuracy
and fast response times.
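As an illustration of the validation step that scanning libraries such as ZXing perform internally, the following Java sketch checks an EAN-13 check digit: the weighted sum of the thirteen digits, with weights alternating 1 and 3, must be a multiple of 10. This is a conceptual sketch, not code from the application:

```java
// Illustrative sketch of the EAN-13 check-digit test that barcode libraries
// such as ZXing apply before accepting a decoded code: the weighted sum of
// the 13 digits (weights alternating 1 and 3) must be a multiple of 10.
class Ean13 {
    static boolean isValid(String code) {
        if (code == null || !code.matches("\\d{13}")) return false;
        int sum = 0;
        for (int i = 0; i < 13; i++) {
            int digit = code.charAt(i) - '0';
            sum += (i % 2 == 0) ? digit : digit * 3; // odd positions x1, even x3
        }
        return sum % 10 == 0;
    }
}
```

This check is why a single misread digit almost always causes a rejected scan rather than a wrong product being added to the inventory.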
3.3.3 SOFTWARE REQUIREMENTS
ANDROID STUDIO (INTEGRATED DEVELOPMENT
ENVIRONMENT)
Android Studio is the official Integrated Development Environment
(IDE) for Android application development, developed and maintained by
Google. Built on JetBrains’ IntelliJ IDEA, Android Studio provides a powerful
platform that combines intelligent code editing, real-time error detection,
refactoring tools, and extensive developer support, making it the preferred
choice for building robust and scalable Android applications. For this grocery
management project, Android Studio plays a central role in managing the
application’s source code, designing user interfaces, configuring dependencies,
and integrating external libraries such as ZXing for barcode scanning and ML
Kit for OCR processing.
In addition to code development, Android Studio offers advanced
tools for debugging and testing. The built-in Android Virtual Device (AVD)
Manager allows the developer to simulate a wide range of devices, screen sizes,
and Android versions, which ensures that the application works reliably across
diverse hardware configurations. It also includes a Gradle-based build system
that automates tasks like compiling, packaging, and deploying the app.
Moreover, its support for version control systems like Git makes collaborative
development seamless and ensures proper code tracking and maintenance. This
combination of features makes Android Studio not just a coding tool but a
comprehensive development environment that supports the full lifecycle of
Android app development.
KOTLIN PROGRAMMING LANGUAGE
Kotlin is a statically typed, modern programming language officially
supported by Google for Android development. It enhances productivity
through its concise syntax, improved type safety, and support for functional
programming constructs. In the grocery management application, Kotlin is
used to handle all backend logic, including inventory updates, data
synchronization, and API interactions. Its seamless interoperability with Java
and Android SDK makes it an excellent choice for building scalable and
maintainable mobile apps. The language’s built-in support for coroutines
allows for efficient handling of asynchronous tasks such as fetching recipes,
scanning barcodes, or reading OCR data—ensuring that these operations don’t
block the main UI thread. Kotlin also improves code readability and reduces
the likelihood of common programming errors, such as null pointer exceptions,
through features like null safety. This leads to a more stable and user-friendly
mobile application.
GOOGLE ML KIT OCR
Google’s ML Kit Optical Character Recognition (OCR) is a machine
learning-powered toolkit that allows text extraction from images. In this
application, ML Kit is used to scan grocery receipts and extract relevant text
such as product names and expiry dates. This feature automates the manual
entry process and significantly speeds up inventory creation, enhancing user
convenience. ML Kit OCR runs on-device, ensuring quick response times and
preserving user privacy.
The OCR engine can recognize printed text in multiple languages and
is designed to be robust in varying lighting conditions and font styles. It is
integrated within the application via the camera interface, and the
extracted text is parsed and stored in the app’s local or cloud database. This
seamless integration of AI-powered OCR ensures greater accuracy in data
entry and reduces errors, which is especially useful for users with lengthy
shopping receipts.
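The parsing step that follows OCR can be sketched as below. The assumed layout, a product name followed by a price at the end of the line, is an illustration only; real receipts vary widely and would need store-specific rules on top of this:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative sketch of turning one OCR'd receipt line into a product name
// and price. The "NAME ... PRICE" layout is an assumption; real receipts
// vary widely and need store-specific parsing rules on top of this.
class ReceiptParser {
    private static final Pattern LINE = Pattern.compile("^(.+?)\\s+(\\d+[.,]\\d{2})$");

    // Returns {name, price} or null when the line does not match the layout.
    static String[] parseLine(String ocrLine) {
        Matcher m = LINE.matcher(ocrLine.trim());
        if (!m.matches()) return null;
        return new String[] { m.group(1).trim(), m.group(2).replace(',', '.') };
    }
}
```

Lines that do not match (store headers, totals, greetings) are simply skipped, which keeps OCR noise out of the inventory.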
ZXING API FOR BARCODE SCANNING
ZXing (“Zebra Crossing”) is an open-source barcode scanning library
widely used in Android applications. In this grocery management app, ZXing
is integrated to enable users to scan product barcodes using their smartphone
cameras. Upon scanning, the app retrieves relevant product data and adds it to
the user’s inventory, streamlining the input process. This significantly reduces
the time required for manual data entry and minimizes input errors. The
integration is achieved via the ZXing Android Embedded library, which wraps
the core ZXing functionality for easier use in modern Android projects. By
embedding barcode scanning directly into the UI flow, the app improves both
convenience and user engagement.
The ZXing API supports various barcode formats including UPC,
EAN, QR codes, and more, making it versatile for different grocery items. It is
lightweight and runs efficiently on Android devices, providing a smooth user
experience even on older phones. Barcode scanning is one of the most
interactive and user-friendly features of the application, contributing to its goal
of simplifying grocery tracking and encouraging consistent use. Additionally,
the camera is dynamically controlled within the Compose UI structure to
ensure a seamless scanning experience. The robustness of ZXing’s decoding
engine also allows for accurate scans even in low lighting or with slightly
damaged barcode labels.
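ZXing performs this validation internally when it decodes a symbol, but the underlying rule that lets the decoder reject most misreads is simple to show: EAN/UPC codes carry a mod-10 check digit. A standalone sketch of that standard GS1 check (the function name is ours, not a ZXing API):

```kotlin
// Standard GS1 mod-10 check-digit validation, the same rule ZXing applies
// when decoding EAN-8, UPC-A (12-digit) or EAN-13 barcodes. Reading the
// payload from the right, every other digit is weighted 3, the rest 1.
fun isValidEanUpc(code: String): Boolean {
    if (code.length !in setOf(8, 12, 13) || code.any { !it.isDigit() }) return false
    val digits = code.map { it - '0' }
    val checkDigit = digits.last()
    val sum = digits.dropLast(1)
        .reversed()
        .mapIndexed { i, d -> if (i % 2 == 0) d * 3 else d }
        .sum()
    return (10 - sum % 10) % 10 == checkDigit
}
```

A check like this could also be run before querying a product API, so obviously corrupted scans are rejected without a network round trip.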
FIREBASE FIRESTORE
Firebase Firestore is a NoSQL cloud database developed by Google
that allows for scalable and real-time data storage. It is used in this application
to store users’ grocery inventories, including item names, quantities, and expiry
dates. Firestore supports automatic syncing across devices, so users can access
their inventory from any Android phone logged in with the same account. Its
flexible schema makes it easy to update and expand data models as new
features are added to the app.
Firestore also supports offline capabilities, meaning users can still
view and modify their data without an internet connection. Once the device
reconnects, changes are synchronized automatically with the cloud. This
ensures data reliability and a seamless experience for the user. With built-in
security rules, Firestore helps protect user data while allowing for custom
access control based on authentication.
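Since the Firestore SDK itself requires an Android runtime, the document model can be sketched in plain Kotlin. The map produced below uses the field names that appear elsewhere in this report (`name`, `quantity`, `expirationDate`, `category`) and is the shape that would be handed to `collection("items").add(...)`; the `GroceryItem` class and `fromMap` helper are illustrative, not part of the Firebase API.

```kotlin
// Plain data model mirroring one document in the "items" collection.
data class GroceryItem(
    val name: String,
    val quantity: Int,
    val expirationDate: String,   // stored as dd/MM/yyyy
    val category: String
) {
    // The map handed to Firestore when adding the item.
    fun toMap(): Map<String, Any> = mapOf(
        "name" to name,
        "quantity" to quantity,
        "expirationDate" to expirationDate,
        "category" to category
    )

    companion object {
        // Rebuild an item from a snapshot's data map; returns null when
        // required fields are missing or have the wrong type.
        fun fromMap(data: Map<String, Any?>): GroceryItem? {
            val name = data["name"] as? String ?: return null
            val quantity = (data["quantity"] as? Number)?.toInt() ?: return null
            val date = data["expirationDate"] as? String ?: return null
            val category = data["category"] as? String ?: return null
            return GroceryItem(name, quantity, date, category)
        }
    }
}
```

Keeping serialization in one place like this is what makes Firestore's flexible schema easy to evolve as new fields are added.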
ROOM DATABASE (LOCAL CACHE)
The Room Persistence Library is part of Android Jetpack and serves as
the local database solution for this app. While Firestore handles cloud storage,
Room is used for offline caching and local operations, ensuring that the app
remains functional without network access. Room abstracts the underlying
SQLite database and provides a cleaner, object-oriented approach to data
access using Data Access Objects (DAOs).
This local caching mechanism improves app performance by reducing the
number of calls to the cloud database and allowing quick retrieval of stored
data. Room also supports reactive programming through LiveData and Kotlin
Flow, enabling the app to automatically update the UI in response to database
changes. This dual-layered approach of local and cloud storage offers a highly
resilient and responsive system architecture.
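The dual-layer idea can be illustrated as a read-through cache: reads try the local store first and fall back to the remote fetcher only on a miss, caching the result for next time. The `LocalStore` interface below is a simplified stand-in for a Room DAO, and the lambda stands in for a Firestore query; all names are illustrative.

```kotlin
// Read-through cache sketch of the Room-plus-Firestore layering.
interface LocalStore {
    fun get(id: String): String?
    fun put(id: String, value: String)
}

class InMemoryStore : LocalStore {
    private val map = mutableMapOf<String, String>()
    override fun get(id: String) = map[id]
    override fun put(id: String, value: String) { map[id] = value }
}

class ReadThroughRepository(
    private val local: LocalStore,
    private val remoteFetch: (String) -> String?   // e.g. a Firestore lookup
) {
    var remoteCalls = 0
        private set

    fun getItem(id: String): String? {
        local.get(id)?.let { return it }                    // cache hit: no network
        remoteCalls++
        return remoteFetch(id)?.also { local.put(id, it) }  // miss: fetch and cache
    }
}
```

The cache-hit path is what keeps the app responsive offline and reduces calls to the cloud database, as described above.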
FIREBASE CLOUD MESSAGING (FCM)
Firebase Cloud Messaging is a cross-platform messaging solution used
to send notifications to users. In this application, FCM is employed to alert
users when a grocery item is nearing its expiry date or has already expired.
These real-time alerts play a crucial role in reducing food waste by prompting
timely consumption or disposal of items.
FCM is highly customizable and allows for both scheduled and event-
triggered notifications. The notifications are lightweight, secure, and reliable,
even in low-bandwidth conditions. With FCM, users can stay informed without
actively opening the app, making it a key driver of user engagement and
proactive grocery management.
SPOONACULAR API
The Spoonacular API is an external service that offers food-related data
including recipes, nutritional information, and meal planning. In this app, the
API is used to suggest recipes based on the ingredients available in the user’s
inventory. This feature not only encourages users to make use of soon-to-expire
items but also inspires creativity in meal preparation.
The API fetches recipes by matching user inventory items with
ingredient lists in its vast recipe database. Users receive suggestions that are
both relevant and practical, often with cooking instructions and preparation
time. This integration transforms the app from a passive tracking tool into an
active assistant that enhances everyday cooking and supports sustainable eating
habits.
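As a sketch, a request to Spoonacular's ingredient-based search can be assembled from the inventory list as below. The endpoint and parameter names (`findByIngredients`, `ingredients`, `number`, `apiKey`) follow Spoonacular's public REST API as we understand it and should be confirmed against the current documentation; the API key is a placeholder.

```kotlin
import java.net.URLEncoder

// Builds the query URL for Spoonacular's recipe-by-ingredients search.
fun buildRecipeSearchUrl(
    inventory: List<String>,
    maxResults: Int = 5,
    apiKey: String = "YOUR_API_KEY"   // placeholder, supplied by the user
): String {
    // Ingredients are passed as one comma-separated, URL-encoded parameter.
    val ingredients = URLEncoder.encode(inventory.joinToString(","), "UTF-8")
    return "https://api.spoonacular.com/recipes/findByIngredients" +
        "?ingredients=$ingredients&number=$maxResults&apiKey=$apiKey"
}
```

In the app, the inventory list passed in would be filtered to favor items nearing expiry, so the returned recipes use them up first.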
CHAPTER 4
4. SYSTEM DESIGN
4.1 SYSTEM ARCHITECTURE
FIG 4.1.1 : SYSTEM ARCHITECTURE
4.2 DATA FLOW DIAGRAMS
The classic visual representation of how information moves through a
system is the data flow diagram (DFD), also known as a bubble chart. A tidy,
understandable DFD can graphically represent the appropriate amount of
system detail. It shows how information enters and exits the system, what
modifies the data, and where information is kept. A DFD’s goal is to outline
the boundaries and scope of a system as a whole, and it can be used as a
channel of communication between a system analyst and anyone involved in
the decision-making process that leads to the redesign of a system. A data
flow diagram has no control flow: there are no decision rules and no loops.
DFDs are used to graphically represent the flow of data in a business
information system, and they describe the processes involved in transferring
data from the input to file storage and reports.
DFD LEVEL-0
FIG 4.2.1 : DFD LEVEL-0
DFD LEVEL-1
FIG 4.2.2 : DFD LEVEL-1
DFD LEVEL-2
FIG 4.2.3 : DFD LEVEL-2
4.3 UML DIAGRAMS
4.3.1 USE CASE DIAGRAM
Unified Modeling Language (UML) is a standardized general-purpose
modeling language in software engineering. The standard is managed and was
created by the Object Management Group. UML includes a set of graphic
notation techniques to create visual models of software-intensive systems. This
language is used to specify, visualize, modify, construct, and document the
artifacts of an object-oriented software-intensive system under development. A
Use case Diagram is used to present a graphical overview of the functionality
provided by a system in terms of actors, their goals, and any dependencies
between those use cases.
The use case diagram consists of two parts:
Use case: A use case describes a sequence of actions that provides measurable
value to an actor and is drawn as a horizontal ellipse.
Actor: An actor is a person, organization, or external system that plays a role
in one or more interactions with the system.
The diagram shows the actors that the system interacts with, the system itself,
and the use cases or services that the system provides; the connections
between these elements are represented by lines.
Subsystems: Subsystems are a sort of stereotyped component used in UML
models that represent autonomous, behavioral elements within a system.
Large-scale components in the system you are modeling are represented as
subsystems in class, component, and use-case diagrams.
Relationships in use-case diagrams: In the UML, a relationship is a link
between model elements. A UML relationship is itself a type of model
element: it gives a model semantics by specifying the structure and behavior
between the model elements.
FIG 4.3.1.1 : USE CASE DIAGRAM
4.3.2 SEQUENCE DIAGRAM
UML sequence diagrams are interaction diagrams that describe the
steps used to complete an operation. They depict how objects interact
within the context of a collaboration. Because sequence diagrams have
a time focus, they visually depict the order of interaction, using the
vertical axis of the diagram to represent time and showing which
messages are transmitted and when.
Lifelines – A lifeline is a named element that depicts an individual
participant in a sequence diagram; each instance in a sequence diagram
is represented by a lifeline. Lifeline elements are located at the top of a
sequence diagram. The UML standard for naming a lifeline follows the
format:
Instance Name : Class Name
Messages – Communication between objects is depicted using
messages, which appear in sequential order on the lifelines and are
drawn as arrows. Lifelines and messages form the core of a sequence
diagram. Messages can be broadly classified into the following
categories:
o Synchronous messages
o Asynchronous messages
o Create message
o Delete message
o Self message
o Reply message
o Found message
o Lost message
FIG 4.3.2.1 : SEQUENCE DIAGRAM
4.3.3 ACTIVITY DIAGRAM
An activity diagram is a graphical representation of workflows of stepwise
activities and actions with support for choice, iteration, and concurrency. An
activity diagram shows the overall flow of control. In essence, it is a flowchart
that shows how one activity leads to another. The action might be referred to
as a system operation. One operation leads to the next in the control flow.
Activity diagram -Module 1
Activity diagram -Module 2
Activity diagram -Module 3
FIG 4.3.3.1 : ACTIVITY DIAGRAM
CHAPTER 5
5.1 Design and Implementation
5.1.1 Module Design
List of Modules:
o Barcode Scanner
o Receipt OCR Processor
o Inventory Management System
o Expiry Tracker and Notifications
o Recipe Recommendation Engine
Description of Modules
Barcode Scanner
This module is responsible for scanning product barcodes and automatically
retrieving product names and categories. It uses the ZXing API integrated into
the Kotlin-based Android application. When a user adds a new grocery item,
they simply scan the barcode using their mobile camera. The barcode scanner
decodes the Universal Product Code (UPC) and checks the corresponding
product information in the app’s database or a third-party API if available.
Once scanned, the user can manually add the expiry date and quantity, or let
the system suggest typical shelf-life durations based on the product type. This
greatly reduces manual entry and enhances accuracy. Integration with the
camera hardware via Jetpack Compose ensures smooth scanning functionality
with minimal delays.
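The shelf-life suggestion mentioned above can be sketched as a simple category-to-duration lookup. The day counts below are illustrative defaults chosen for the example, not values prescribed by the system.

```kotlin
// Typical shelf-life defaults per product category (illustrative values),
// used when a scanned product carries no printed expiry date.
val defaultShelfLifeDays = mapOf(
    "Dairy" to 7,
    "Bakery" to 4,
    "Snacks" to 90,
    "Beverages" to 180,
    "Frozen" to 120
)

// Unknown categories fall back to a conservative 30-day default.
fun suggestedExpiry(category: String, purchased: java.time.LocalDate): java.time.LocalDate =
    purchased.plusDays((defaultShelfLifeDays[category] ?: 30).toLong())
```

The user can always override the suggested date before the item is saved, matching the confirmation step described above.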
Receipt OCR Processor
The receipt OCR module uses Google ML Kit to extract text data from physical
or digital grocery receipts. The system scans the receipt image, detects text
blocks, and identifies relevant details like product names, quantities, and
prices. This is done using machine learning-based text recognition and heuristic
filtering.
After extraction, the data is structured and displayed to the user for
confirmation before adding items to the inventory. This feature reduces the
manual effort required to input groceries after a shopping trip and ensures
inventory is updated swiftly and accurately.
Inventory Management System
This is the central module where all user grocery data is stored and maintained.
It is built on Firebase Firestore, which provides a scalable NoSQL cloud
database that syncs data in real time across devices. Users can view their
available groceries, categorize items (e.g., dairy, vegetables, frozen), and
update quantities as they consume products.
Each item includes attributes such as name, quantity, category, and expiry date.
The system supports bulk addition, editing, and deletion of items. The
inventory automatically syncs with barcode and receipt inputs, ensuring the
data remains updated without duplication.
Expiry Tracker and Notifications
This module monitors the expiry dates of all grocery items in the database. It
compares the current date with stored expiry dates and generates expiry alerts.
Notifications are scheduled using Android’s WorkManager or AlarmManager,
depending on app state and battery optimization settings.
Users are notified of items approaching expiry 3 days in advance, on the day
of expiry, and optionally post-expiry. A color-coded interface (green for fresh,
yellow for nearing expiry, red for expired) allows quick assessment. In future
iterations, this module may incorporate AI-based predictions of spoilage based
on user environment (e.g., temperature, frequency of usage).
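The color-coding rule can be expressed directly in Kotlin: items past their expiry date are red, items inside the three-day warning window (including the expiry day itself) are yellow, and everything else is green. Dates follow the dd/MM/yyyy format the app stores; the `Freshness` enum and function names are ours.

```kotlin
import java.time.LocalDate
import java.time.format.DateTimeFormatter
import java.time.temporal.ChronoUnit

// Freshness states matching the color-coded interface described above.
enum class Freshness { GREEN, YELLOW, RED }

val expiryFormat = DateTimeFormatter.ofPattern("dd/MM/yyyy")

fun classify(expiry: String, today: LocalDate, warningDays: Long = 3): Freshness {
    val expiryDate = LocalDate.parse(expiry, expiryFormat)
    val daysLeft = ChronoUnit.DAYS.between(today, expiryDate)
    return when {
        daysLeft < 0 -> Freshness.RED               // already expired
        daysLeft <= warningDays -> Freshness.YELLOW // inside the warning window
        else -> Freshness.GREEN                     // still fresh
    }
}
```

The same `daysLeft` computation would drive the background notification check, so the list coloring and the alerts never disagree.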
Recipe Recommendation Engine
This module uses the Spoonacular API to suggest recipes based on current
inventory. It queries available food items and fetches recipes that maximize
usage of items nearing expiry. This not only reduces food waste but also
enhances user convenience.
Recipes are filtered by user preferences such as dietary restrictions (e.g.,
vegetarian, gluten-free) and cuisine types. Each recipe comes with detailed
ingredients, instructions, and estimated prep time. Integration with expiry
tracking ensures that recipes are dynamically updated based on real-time
inventory.
5.2 Design and Implementation Constraints
5.2.1 Constraints in Analysis
Language Processing Constraints: OCR and barcode modules depend on
structured text formats. Variations in receipt designs or damaged barcodes
may affect accuracy.
Operational Restrictions: The app requires camera access and internet
connectivity for optimal functioning. Offline mode has limited features.
Constraints Integrated into Existing Model Concepts: Firebase schema
must align with expected inventory structure; OCR and scanner modules must
conform to input expectations.
Constraints as a Separate Concept: Non-standard input (e.g., hand-written
receipts) may not be processed unless advanced models are integrated.
Constraints Implied by Model Structure: Real-time updates require atomic
operations and concurrency handling in Firestore to avoid data loss or race
conditions.
5.2.2 Constraints in Design
Periodic Update Determination: Expiry check runs daily in the background,
affecting notification delivery if battery optimization interferes.
Action Determination: Scanning and OCR must trigger inventory updates
automatically to ensure consistency.
Data Connectivity: Real-time sync with Firebase requires consistent internet
access; offline mode stores data locally until reconnected.
Network Bandwidth: Use of APIs like Spoonacular and Firebase Cloud
Messaging must be optimized to reduce unnecessary data consumption.
Global Actions and Constraint Realization: Notifications, sync, and OCR
functions must be coordinated to avoid resource clashes and user disruption.
5.2.3 Constraints in Implementation
Barcode and OCR Limitations: Poor lighting, smudged receipts, or partial
barcodes can result in scan failures or incorrect parsing.
Cloud Dependency: While Firebase offers convenience, any service outage
can halt real-time updates or syncing.
Battery Optimization: On certain Android devices, background tasks
(expiry alerts) may not run as intended without user-permitted settings.
User Interaction Lag: Recipe suggestions depend on up-to-date inventory;
delays in user inputs can reduce recommendation accuracy.
Hardware Variability: Differences in camera resolution and processor speed
may affect scan and OCR performance.
5.3 Other Non-functional Requirements
5.3.1 Performance Requirements
The app is optimized to deliver smooth, real-time updates with low latency.
The barcode module scans within 1–2 seconds, and the OCR engine processes
a receipt in under 5 seconds under optimal conditions. Firebase ensures sub-
second read/write operations. The app consumes minimal memory and battery
by offloading heavy tasks to cloud-based services and running periodic tasks
only when necessary.
The barcode and OCR modules are performance-tested on mid-
range Android devices (4GB RAM, Android 10+). The Spoonacular API
caches common queries to reduce latency and bandwidth consumption.
Additionally, the app utilizes image compression techniques for scanned
receipts to minimize storage use and ensure fast loading times. The app is also
designed to work efficiently in low network conditions, allowing users to still
access most features even with limited connectivity. Performance is
continuously monitored, with automatic updates optimized for better speed and
reduced memory footprint.
5.3.2 Safety Requirements
The grocery management mobile application prioritizes user safety through
data protection, operational accuracy, and secure device interaction. While not
a safety-critical system, the app is designed to ensure privacy, secure
functionality, and accurate food management, minimizing risks related to food
spoilage and data breaches.
1. Data Safety and Privacy
User Data Protection: The app uses Firebase Firestore to store user data
securely, ensuring encryption both in transit and at rest. Firebase
Authentication controls access, allowing only authorized users to manage their
inventory and other data.
Privacy Compliance: The app adheres to GDPR standards, providing users
with full control over their data, including options to view, modify, or delete it.
Only essential information, like product names, quantities, and expiry dates, is
collected, ensuring minimal data storage.
Notification Privacy: To protect sensitive information, notifications avoid
displaying product details on lock screens, ensuring user privacy even in public
spaces.
2. Operational Accuracy and Safety
Expiry Tracking: A key feature of the app is real-time expiry tracking, which
ensures that users receive timely reminders about products nearing their expiry.
Expiry dates are validated to prevent errors, and users can verify scanned data
before it’s saved.
Barcode Scanning and OCR Verification: To minimize errors from
scanning, users are prompted to manually verify information before it is added
to the inventory. This step ensures accuracy, especially with receipt scanning
or barcode mismatches.
Notification Management: Notifications about expiring products are designed
to be timely but not overwhelming. Reminders are sent in advance, on the
expiry date, and post-expiry, helping users manage their groceries effectively
without unnecessary alarms.
3. Device Interaction and Safety
Battery Efficiency: The app is optimized to reduce battery consumption by
limiting background processes and only activating power-heavy features like
GPS and barcode scanning when necessary.
Performance Optimization: Background tasks such as barcode scanning and
OCR processing do not interfere with the app’s responsiveness. This ensures
that the app remains smooth and user-friendly on a variety of devices.
Cross-Device Synchronization: The app uses Firebase’s real-time database to
ensure that data is synced across all devices, allowing users to seamlessly
transition between their phone and tablet without data loss.
CHAPTER 6
6. SOFTWARE TESTING
6.1 TESTING
Testing is the process of executing a program with the goal of
identifying errors. A good test case is one that has a high chance of
spotting an error that has not yet been found, and an effective test is one
that actually identifies such an error. System testing, a stage of
implementation, aims to make sure before live operation begins that the
system performs precisely and efficiently as intended, and it confirms
that all of the programs work together as a group. System testing
involves numerous essential actions and steps for running a program, a
string, and the system as a whole, and it is crucial for the adoption of a
successful new system. It is the final opportunity to find and fix faults
before the system is installed for user acceptance testing. Software
testing begins after the program has been developed and the associated
documentation and data structures have been designed. Error correction
in software depends on testing; without it, the project or program cannot
be considered finished. Software testing, a crucial component of
software quality assurance, represents the most thorough examination of
specification, design, and code. Functional testing of a component still
aims to verify that the component functions in accordance with its
requirements, although the specifics of how that testing is carried out
differ from those of a design from a previous decade.
6.2 TAXONOMY OF TESTING
6.2.1 WHITE BOX TESTING
White box testing is a testing method that examines the
internal organization, code, and design of software in order to validate
input-output functionality and enhance design, usability, and security.
White box testing is also known as clear box testing, open box testing,
transparent box testing, code-based testing, and glass box testing since
code is visible to testers during this type of testing.
System, integration, and unit testing are all possible during the
software development process. One of white box testing's fundamental
objectives is verifying that an application's working flow operates as
intended. It compares a sequence of specified inputs against desired or
expected outputs in order to identify bugs whenever a particular input
does not produce the expected outcome.
Basis path testing:
o Flow graph notation
o Cyclomatic complexity
o Deriving test cases
o Graph matrices
o Control structure testing
6.2.2 BLACK BOX TESTING
Black box testing focuses primarily on the software's functional
requirements. Knowing the function a product is designed to perform,
testers run tests that demonstrate each function is fully operational while
searching for errors in it, without regard to the product's internal
workings. Test cases are derived from the specification by supplying
inputs and comparing the outputs against the expected results. The
following procedures are used in black-box test case design:
o Graph-based testing methods
o Equivalence partitioning
o Boundary value analysis
o Comparison testing
6.2.3 SOFTWARE TESTING STRATEGIES
A plan for software testing serves as a guide for the software
developer. Testing is an established activity that can be planned in advance
and carried out methodically. For this reason, a blueprint for software
testing should consist of a series of steps into which particular test case
design techniques can be inserted:
o Testing begins at the module level and works “outward” toward the
integration of the entire computer-based system.
o Different testing techniques are appropriate at different points in time.
o The developer of the software and an independent test group conduct
testing.
o Testing and debugging are different activities, but debugging must be
accommodated in any testing strategy.
6.3 TYPES OF TESTING
6.3.1 UNIT TESTING
In the software development process known as unit testing, the
smallest testable components of an application, known as units, are
separately and independently examined for correct operation. Software
developers, and occasionally QA staff, use this testing methodology
during the development phase. Unit testing's major goal is to isolate
written code so that it can be tested and checked to see whether it
functions as intended. Unit testing is a crucial step in the development
process because, when carried out properly, it can help find code defects
early that could be more challenging to identify in later testing stages.
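As an illustration, the app's add-item validation rule can be unit-tested in isolation. On Android this would normally live in a JUnit test class; plain `check` assertions are used here to keep the sketch self-contained, and `isValidItemInput` is an extracted stand-in for the inline validation in MainActivity.

```kotlin
// The validation rule: name and date must be non-empty, and a real
// category must have been chosen from the spinner.
fun isValidItemInput(name: String, date: String, category: String): Boolean =
    name.isNotBlank() && date.isNotBlank() && category != "Select Category"

// A minimal unit-level test of that rule, one assertion per failure mode.
fun runUnitTests() {
    check(isValidItemInput("Milk", "15/06/2025", "Dairy"))
    check(!isValidItemInput("", "15/06/2025", "Dairy"))               // missing name
    check(!isValidItemInput("Milk", "", "Dairy"))                     // missing date
    check(!isValidItemInput("Milk", "15/06/2025", "Select Category")) // no category
}
```

Isolating the rule this way lets it be tested without an emulator, exactly the early defect detection the paragraph above describes.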
6.3.2 FUNCTIONAL TESTING
A type of software testing known as functional testing verifies a
software system against functional specifications and requirements. Each
function of the software program is tested using functional tests, which involve
supplying the right input and comparing the output to the functional
requirements. Functional testing mostly involves "black box" testing and is
unconcerned with the application's source code. This testing examines the
Application Under Test's User Interface, APIs, Database, Security,
Client/Server connection, and other functionality. Testing can be carried
out manually or automatically. In contrast to non-functional testing,
functional testing does not examine the reliability, security, or efficiency
of the application's underlying source code, nor does it assess scalability
or speed.
TABLE 6.3.2.1 : FUNCTIONAL TEST CASES

S.No | Test Case | Action / Input | Expected Result | Actual Result | Pass/Fail
1 | Add item manually | Enter name, quantity, expiry date | Item saved to Firebase database | Item saved to Firebase database | Pass
2 | Scan barcode | Scan product barcode | Product details retrieved and saved | Product details retrieved and saved | Pass
3 | Edit item | Update quantity or expiry date | Item details updated in database | Item details updated in database | Pass
4 | Delete item | Click delete on selected item | Item removed from database | Item removed from database | Pass
5 | Expiry alert trigger | Date check runs in background | Alert notification sent for expiring items | Alert notification sent for expiring items | Pass
6 | Notification display | Receive expiry alert | Message shown in notification bar | Message shown in notification bar | Pass
7 | Barcode API response | Send barcode to API | Correct product info received | Correct product info received | Pass
8 | App data sync | Add / edit / delete item | Inventory syncs with Firebase in real time | Inventory syncs with Firebase in real time | Pass
6.3.3 INTEGRATION TESTING
Integration testing is a methodical approach to building the
program's structure while simultaneously running tests to uncover
interface issues; it is the thorough testing of the collection of modules
that make up the product. The goal is to assemble the software structure
from unit-tested modules, and the tester should pinpoint the crucial
modules and test them as early as feasible. One strategy is to wait until
every unit has passed testing before combining them and testing them
together; this strategy evolved from the testing of small, unstructured
programs. A different approach is to build the product in increments of
tested units: after a small group of modules has been merged and tested,
an additional module is introduced and its interactions are tested, and so
on. The benefit of this strategy is that interface errors can be quickly
identified and fixed, and localized errors affect only the new module and
its communications.
Linking errors were the biggest problem during this project: when all
the modules were integrated, the connections with the supporting files
were not properly set, so we examined the linkages and connectivity.
Once modules complete unit testing, product development can be staged
and modules added in; testing is finished when the final module has been
merged and tested.
When individual modules, which are particularly susceptible to
interface problems, are combined, it should not be assumed that
everything will work right away. The difficulty lies, naturally, in
bringing them together: data may be lost across the interfaces between
sub-functions, which could prevent the desired principal function from
being produced; individually acceptable imprecision may be amplified
to unacceptable levels; and global data structures may cause issues.
6.3.4 VALIDATION TESTING
At the conclusion of integration testing, the software is fully
assembled as a package and interface issues have been found and fixed;
the final round of software testing, validation testing, then begins. Many
definitions of validation testing exist, but a straightforward one is that
validation succeeds when the program performs as the client would
reasonably anticipate. Software validation is accomplished through a
series of black box tests that demonstrate conformity with requirements.
When a validation test has been completed, there are two possibilities:
either the function or performance characteristics meet the specification
and are accepted, or a deviation from the specification is uncovered and
a deficiency list is created. Any deviations or problems found at this
stage are fixed before the project is finished, with the user's assistance,
by negotiating a procedure for resolving them. Validation testing was
therefore used to examine the proposed system, and it was found to be
functioning satisfactorily; the system had certain flaws, but they were
not severe.
6.3.5 SYSTEM TESTING
Testing is done to find mistakes and is employed for quality
control; it runs through the entire process of development and
maintenance. In addition to ensuring the accuracy of the design itself,
the testing phase's objectives are to confirm that the specification has
been accurately and completely incorporated into the design. For
instance, the design must be free of logic errors before any coding is
done, or the cost of rectifying the errors later will be significantly
higher. Both inspection and walkthrough techniques can be used to find
design flaws. Testing is one of the crucial elements in the software
development process. The following kinds of test cases are used during
project-wide testing to check for errors:
o Static analysis is used to look into the structural characteristics of
the source code.
o Dynamic testing runs the program on test data to examine the
behavior of the source code.
CHAPTER 7
7. CONCLUSION AND FUTURE ENHANCEMENT
7.1 CONCLUSION
In this project, we have developed a mobile-based grocery management
application aimed at reducing food waste and improving kitchen inventory
efficiency. The app enables users to scan barcodes and receipts using OCR to
add items, track expiry dates, and receive timely reminders. By integrating real-
time inventory updates and recipe suggestions based on available ingredients,
the system supports smarter grocery usage and meal planning. The use of
Firebase Firestore ensures secure data storage and synchronization across
devices. The application helps users monitor stock levels and avoid food
spoilage, ultimately promoting sustainable household practices. It is especially
beneficial for individuals managing busy lifestyles, large families, or those
aiming to reduce food waste.
7.2 FUTURE ENHANCEMENT
Currently, the app supports single-user access with basic features like scanning
and expiry tracking. In the future, multi-user support can be implemented so
that all family members can access and update a shared inventory. The app can
also be enhanced by integrating AI-based expiry prediction and nutrition
tracking based on consumption habits. Cloud-based analytics could be
introduced to offer personalized insights, and IoT device integration, such as
smart fridges, can provide real-time storage data. These enhancements will
make the system smarter, more collaborative, and more efficient in minimizing
food waste.
APPENDICES
A1: SAMPLE CODE
MainActivity.kt
package [Link].pantry360
import [Link]
import [Link]
import [Link]
import [Link]
import [Link]
import [Link]
import [Link]
import [Link]
import [Link]
import [Link]
import [Link]
import [Link]
import [Link]
import [Link]
import [Link]
import [Link]
import [Link]
import [Link]
import [Link]
class MainActivity : AppCompatActivity() {
private lateinit var db: FirebaseFirestore
private lateinit var nameEditText: EditText
private lateinit var quantityEditText: EditText
private lateinit var dateEditText: EditText
private lateinit var categorySpinner: Spinner
private lateinit var barcodeLauncher: ActivityResultLauncher<Intent>
@SuppressLint("MissingInflatedId")
override fun onCreate(savedInstanceState: Bundle?) {
[Link](savedInstanceState)
setContentView([Link].activity_add_item)
db = [Link]()
nameEditText = findViewById([Link])
quantityEditText = findViewById([Link])
60
dateEditText = findViewById([Link])
categorySpinner = findViewById([Link])
val addButton = findViewById<Button>([Link])
val viewItemsBtn = findViewById<Button>([Link])
val smartListBtn = findViewById<Button>([Link])
val recipeBtn = findViewById<Button>([Link])
val btnBarcodeScan = findViewById<Button>([Link])
val btnUploadReceipt =
findViewById<Button>([Link])
[Link] {
startActivity(Intent(this,UploadReceiptActivity::[Link])) }
val categories = listOf("Select Category", "Snacks", "Dairy",
"Beverages", "Bakery", "Frozen", "Others")
val categoryAdapter = ArrayAdapter(this,
[Link].simple_spinner_item, categories)
[Link]([Link].simple_spinn
er_dropdown_item)
[Link] = categoryAdapter
[Link] {
startActivity(Intent(this, SmartShoppingActivity::[Link]))
}
[Link] {
startActivity(Intent(this, RecipeActivity::[Link]))
}
barcodeLauncher =
registerForActivityResult([Link]()) {
result ->
if ([Link] == RESULT_OK) {
val scannedValue = [Link]?.getStringExtra("barcode_result")
[Link](scannedValue)
[Link](this, "Scanned: $scannedValue",
Toast.LENGTH_SHORT).show()
}
}
[Link] {
val intent = Intent(this, BarcodeScannerActivity::[Link])
[Link](intent)
61
}
[Link] {
val intent = Intent(this, UploadReceiptActivity::[Link])
startActivity(intent)
}
[Link] = false
[Link] = true
[Link] {
val calendar = [Link]()
val year = [Link]([Link])
val month = [Link]([Link])
val day = [Link](Calendar.DAY_OF_MONTH)
val datePicker = DatePickerDialog(this, { _, selectedYear,
selectedMonth, selectedDay ->
val formattedDate = "${[Link]().padStart(2,
'0')}/${(selectedMonth + 1).toString().padStart(2, '0')}/$selectedYear"
[Link](formattedDate)
}, year, month, day)
[Link]()
}
[Link] {
startActivity(Intent(this, ViewItemsActivity::[Link]))
}
[Link] {
val name = [Link]().trim()
val quantity = [Link]().toIntOrNull() ?: 0
val date = [Link]().trim()
val selectedCategory = [Link]()
if ([Link]() || [Link]() || selectedCategory == "Select
Category") {
[Link](this, "Please fill all fields and select category",
Toast.LENGTH_SHORT).show()
return@setOnClickListener
}
val item = hashMapOf(
"name" to name,
62
"quantity" to quantity,
"expirationDate" to date,
"category" to selectedCategory
)
[Link]("items")
.add(item)
.addOnSuccessListener {
[Link](this, "Item added successfully!",
Toast.LENGTH_SHORT).show()
if (quantity <= 2) {
[Link](this, "Warning: Low stock for $name!",
Toast.LENGTH_LONG).show()
}
val sdf = SimpleDateFormat("dd/MM/yyyy",
[Link]())
val today = [Link]().time
val expDate = [Link](date)
if (expDate != null && [Link](today)) {
[Link](this, "Item already expired!",
Toast.LENGTH_LONG).show()
}
[Link]()
[Link]()
[Link]()
[Link](0)
}
.addOnFailureListener {
[Link](this, "Error: ${[Link]}",
Toast.LENGTH_SHORT).show()
}
}
val workRequest = PeriodicWorkRequestBuilder<ExpiryWorker>(12,
[Link]).build()
[Link](this).enqueue(workRequest)
}
}
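MainActivity enqueues a periodic ExpiryWorker, but that class is not reproduced in this listing. A minimal sketch of such a worker is shown below; its body is assumed, not taken from the report, and it simply scans the Firestore "items" collection for entries expiring within the next two days:

```kotlin
package com.example.pantry360

import android.content.Context
import androidx.work.Worker
import androidx.work.WorkerParameters
import com.google.firebase.firestore.FirebaseFirestore
import java.text.SimpleDateFormat
import java.util.Calendar
import java.util.Locale

// Hypothetical sketch of the worker scheduled in MainActivity.
// Notification plumbing is omitted.
class ExpiryWorker(context: Context, params: WorkerParameters) :
    Worker(context, params) {

    override fun doWork(): Result {
        val sdf = SimpleDateFormat("dd/MM/yyyy", Locale.getDefault())
        // Anything expiring within two days counts as "expiring soon"
        val soon = Calendar.getInstance().apply { add(Calendar.DAY_OF_YEAR, 2) }.time
        FirebaseFirestore.getInstance().collection("items")
            .get()
            .addOnSuccessListener { snapshot ->
                for (doc in snapshot) {
                    val expiry = sdf.parse(doc.getString("expirationDate") ?: continue)
                    if (expiry != null && expiry.before(soon)) {
                        // TODO: post a notification for doc.getString("name")
                    }
                }
            }
        return Result.success()
    }
}
```

Because the Firestore query is asynchronous, a production version would use CoroutineWorker and await the query before returning a result.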
AddItemActivity.kt
// NOTE: package prefix and resource IDs reconstructed; identifiers not legible
// in the source are assumed.
package com.example.pantry360

import android.app.DatePickerDialog
import android.content.Intent
import android.os.Bundle
import android.widget.Button
import android.widget.EditText
import android.widget.Toast
import androidx.appcompat.app.AppCompatActivity
import com.google.firebase.firestore.FirebaseFirestore
import java.util.Calendar

class AddItemActivity : AppCompatActivity() {
    private lateinit var editName: EditText
    private lateinit var editQuantity: EditText
    private lateinit var editDate: EditText
    private lateinit var btnAddItem: Button
    private lateinit var btnViewItems: Button
    private val db = FirebaseFirestore.getInstance()

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_add_item)
        editName = findViewById(R.id.editName)
        editQuantity = findViewById(R.id.editQuantity)
        editDate = findViewById(R.id.editDate)
        btnAddItem = findViewById(R.id.btnAddItem)
        btnViewItems = findViewById(R.id.btnViewItems)

        // Date picker setup
        editDate.setOnClickListener {
            val calendar = Calendar.getInstance()
            val datePicker = DatePickerDialog(
                this,
                { _, year, month, dayOfMonth ->
                    val selectedDate = String.format("%02d/%02d/%04d", dayOfMonth,
                        month + 1, year)
                    editDate.setText(selectedDate)
                },
                calendar.get(Calendar.YEAR),
                calendar.get(Calendar.MONTH),
                calendar.get(Calendar.DAY_OF_MONTH)
            )
            datePicker.show()
        }

        btnAddItem.setOnClickListener {
            val name = editName.text.toString().trim()
            val quantity = editQuantity.text.toString().toIntOrNull()
            val expiry = editDate.text.toString().trim()
            if (name.isEmpty() || quantity == null || expiry.isEmpty()) {
                Toast.makeText(this, "Please fill all fields",
                    Toast.LENGTH_SHORT).show()
                return@setOnClickListener
            }
            val item = hashMapOf(
                "name" to name,
                "quantity" to quantity,
                "expirationDate" to expiry
            )
            db.collection("pantry_items")
                .add(item)
                .addOnSuccessListener {
                    Toast.makeText(this, "Item Added!", Toast.LENGTH_SHORT).show()
                    editName.text.clear()
                    editQuantity.text.clear()
                    editDate.text.clear()
                }
                .addOnFailureListener {
                    Toast.makeText(this, "Failed to Add", Toast.LENGTH_SHORT).show()
                }
        }

        btnViewItems.setOnClickListener {
            startActivity(Intent(this, ViewItemsActivity::class.java))
        }
    }
}
build.gradle.kts (Module :app)
// NOTE: the namespace prefix and the version-catalog aliases were not legible
// in the source; the values below follow the standard Android Studio template
// and the dependencies the app code requires.
plugins {
    alias(libs.plugins.android.application)
    alias(libs.plugins.kotlin.android)
    id("com.google.gms.google-services")
}

android {
    namespace = "com.example.pantry360"
    compileSdk = 35

    defaultConfig {
        applicationId = "com.example.pantry360"
        minSdk = 21
        targetSdk = 35
        versionCode = 1
        versionName = "1.0"
        testInstrumentationRunner = "androidx.test.runner.AndroidJUnitRunner"
    }
    buildFeatures {
        viewBinding = true
        dataBinding = true
    }
    buildTypes {
        release {
            isMinifyEnabled = false
            proguardFiles(
                getDefaultProguardFile("proguard-android-optimize.txt"),
                "proguard-rules.pro"
            )
        }
    }
    compileOptions {
        sourceCompatibility = JavaVersion.VERSION_1_8
        targetCompatibility = JavaVersion.VERSION_1_8
    }
    kotlinOptions {
        jvmTarget = "1.8"
    }
}

dependencies {
    // AndroidX / Material basics (catalog aliases assumed from the template)
    implementation(libs.androidx.core.ktx)
    implementation(libs.androidx.appcompat)
    implementation(libs.material)
    implementation(libs.androidx.activity)
    implementation(libs.androidx.constraintlayout)
    // Firebase Firestore (aliases assumed; required by the app code)
    implementation(libs.firebase.firestore)

    // ML Kit Barcode Scanner
    implementation("com.google.android.material:material:1.11.0")
    implementation("com.google.mlkit:barcode-scanning:17.2.0")
    implementation("com.google.guava:guava:31.1-android")
    implementation("com.squareup.retrofit2:retrofit:2.9.0")
    implementation("com.squareup.retrofit2:converter-gson:2.9.0")
    implementation("com.squareup.okhttp3:okhttp:4.12.0")
    implementation("com.squareup.okhttp3:logging-interceptor:4.12.0")
    implementation("com.github.bumptech.glide:glide:4.16.0")
    annotationProcessor("com.github.bumptech.glide:compiler:4.16.0")
    implementation("com.google.mlkit:text-recognition:16.0.1")

    // CameraX
    implementation("androidx.camera:camera-core:1.1.0")
    implementation("androidx.camera:camera-camera2:1.1.0")
    implementation("androidx.camera:camera-lifecycle:1.1.0")
    implementation("androidx.camera:camera-view:1.1.0")

    implementation("androidx.work:work-runtime-ktx:2.7.1")
    implementation("com.journeyapps:zxing-android-embedded:4.3.0")
    testImplementation(libs.junit)
    androidTestImplementation(libs.androidx.junit)
    androidTestImplementation(libs.androidx.espresso.core)
}
A2: OUTPUT AND SCREENSHOTS
A3: PUBLICATION
TITLE: AI-Enabled Grocery Planner for Efficiency and Sustainability
CONFERENCE: International Conference on "Nexus of Knowledge using
AI & IoT" (ICNKAI-2025)
PUBLISHER: Sriram Engineering College
REFERENCES
[1] V. Nandhakumar, B. Jyothsna, and S. Gnanapriya, "AI-Driven Produce
Management and Self-Checkout System for Supermarkets," presented at the
2023 4th International Conference on Electronics and Sustainable
Communication Systems (ICESC), 2023. Available at: IEEE Xplore |
ResearchGate.
[2] J. Sun, K. Radecka, and Z. Zilic, "FoodTracker: A Real-time Food
Detection Mobile Application by Deep Convolutional Neural Networks," IEEE
Access, vol. 7, pp. 154-163, 2019. Available at: IEEE Xplore.
[3] J. Quino, J. M. Maja, J. Robbins, R. T. Fernandez, J. S. Owen Jr., and M.
Chappell, "RFID and Drones: The Next Generation of Plant Inventory,"
AgriEngineering, vol. 3, no. 2, pp. 168-181, 2021. Available at: MDPI.
[4] A. M. Nascimento, A. C. M. Queiroz, V. V. de Melo, and F. S. Meirelles,
"Applying Artificial Intelligence to Reduce Food Waste in Small Grocery
Stores," ISLA 2022 Proceedings, 2022. Available at: IEEE Xplore |
ResearchGate.
[5] I. Kollia, J. Stevenson, and S. Kollias, "AI-enabled Efficient and Safe Food
Supply Chain," IEEE Transactions on Automation Science and Engineering,
vol. 18, no. 2, pp. 1223-1234, 2021. Available at: IEEE Xplore.
[6] B. Hardy, Kotlin for Android Developers: Learn Kotlin the Easy Way
While Developing an Android App. Leanpub, 2015.
[7] Android Developers, "Building Apps with Android SDK," [Online].
Available: [Link]
[8] Google ML Kit, "Text Recognition with ML Kit for Android," [Online].
Available: [Link]
[9] Firebase, "Firebase Realtime Database and Authentication
Documentation," [Online]. Available: [Link]
[10] Spoonacular, "Spoonacular Food and Recipe API," [Online]. Available:
[Link]