
A PROJECT REPORT
on

“Face Recognition”

Submitted to
KIIT Deemed to be University

In Partial Fulfillment of the Requirement for the Award of

BACHELOR’S DEGREE IN
COMPUTER SCIENCE & ENGINEERING

BY

Siddharth Himanshee 21051178


Priyanshu Shekhar 21051415
Harsh Kumar Sharma 2105624
Aayushman Attreya 21052636

UNDER THE GUIDANCE OF


Prof. Ramakant Parida

SCHOOL OF COMPUTER ENGINEERING


KALINGA INSTITUTE OF INDUSTRIAL TECHNOLOGY
BHUBANESWAR, ODISHA - 751024
April 2024

Acknowledgements

We are profoundly grateful to Prof. Ramakant Parida of Kalinga Institute of Industrial
Technology for his expert guidance and continuous encouragement, which ensured that this
project stayed on target from its commencement to its completion.

Siddharth Himanshee
Priyanshu Shekhar
Harsh Kumar Sharma
Aayushman Attreya

TABLE OF CONTENTS

Acknowledgments

Table of Contents

Abstract

I. Introduction
   A. Face Recognition System
   B. Overview of Facial Recognition System
   C. Methods Used in Face Recognition System
   D. Applications
   E. Objectives
   F. Motivation

II. Existing Work & Proposed Workflow
   A. Overview of Existing Work
   B. Proposed Workflow
   C. Working of the 2D-DCT Model

III. Dataset, Implementation and Results
   A. Dataset Details
   B. Tools & Technology Used

IV. Results
   A. Model Building
   B. Model Compilation
   C. Model Training
   D. Trained Model Summary
   E. Best Trained Model Epoch Details
   F. Accuracy
   G. Plot Performance

V. Conclusion

ABSTRACT
Imagine unlocking your phone with a glance, or having your favorite store greet you by name as you walk in. These
are just a few possibilities brought to life by facial recognition technology. Facial recognition is a rapidly evolving
field in computer vision that allows computers to identify or verify a person's identity based on their face. Just as
humans effortlessly recognize friends and family, facial recognition systems aim to achieve the same feat using
sophisticated algorithms and vast datasets. The technology works by:

• Detection: First, the system locates and isolates faces within an image or video frame.
• Recognition: Unique facial features are then extracted and compared against a database of known faces for
identification. These features can be distances between the eyes, nose shape, or even patterns learned from training
data.

Facial recognition holds immense potential across various sectors, including:

• Security: It can be used for access control, surveillance, and criminal identification.
• Consumer Technology: Unlocking devices, personalized experiences in stores, and photo tagging on social
media are just a few examples.
• Law Enforcement: Identifying missing persons or suspects from video footage can be a valuable tool.

Keywords: Face recognition, computer vision, face detection, 2D-DCT, self-organizing map

I. INTRODUCTION

A. FACE RECOGNITION SYSTEM:

Face recognition technology has become a prominent area in computer vision due to its wide range of applications.
This project explores the development of a face recognition system capable of identifying individuals from images or
videos.

The system typically involves two key stages: face detection and recognition. In the first stage, the system locates and
isolates faces within the input image or video frame. Various techniques, such as skin tone detection or feature
extraction, can be employed for this purpose.

Once a face is detected, the recognition stage extracts unique features that represent the individual. These features can
be geometric distances between facial landmarks or patterns learned from a training dataset using machine learning
algorithms. The extracted features are then compared against a database of known faces to identify the individual in
the image.

This project will detail the chosen methods for face detection, feature extraction, and recognition. The implemented
system will be evaluated on a benchmark dataset to assess its accuracy and performance. The impact of factors like
pose variation, illumination changes, and occlusion on recognition accuracy will also be discussed.

B. OVERVIEW OF FACIAL RECOGNITION SYSTEM:

While facial recognition offers undeniable benefits, it's crucial to acknowledge the ongoing discussions surrounding
its use. Concerns regarding privacy, potential bias in algorithms, and the possibility of misuse necessitate careful
consideration and regulations.

This overview provides a foundational understanding of facial recognition systems. As you delve deeper, you'll
explore the technical intricacies, the evolving landscape of applications, and the critical discussions shaping the
responsible use of this powerful technology.

Fig.1: Working of facial recognition system

Facial Recognition: Once a face is pinpointed, the system gets down to business. Here, unique features are extracted
to create a digital fingerprint of the face. These features can be:

Geometric: Distances between facial landmarks like eyes, nose, and mouth.

Learned Patterns: Machine learning algorithms analyze training datasets to identify patterns specific to each face.

With a digital representation in hand, the system then performs a comparison against a database of known faces. This
matching process determines whether the individual is recognized or not. It's like comparing a fingerprint scan to a
database of registered individuals.

C. METHODS USED IN FACE RECOGNITION SYSTEM:

The main methods used in face recognition systems are described below.

• Infrared Facial Recognition: Facial recognition systems often use infrared cameras to capture images, especially
in low-light conditions. Infrared light is invisible to the human eye but allows the camera to "see" heat patterns,
which can be helpful for facial recognition.

• 3D Scanning: Some facial recognition systems use 3D scanning techniques to create a more detailed map of a
person's face. This is a non-invasive process that projects light patterns and analyzes the reflection to create a
depth map.

• Infrared Facial Imaging: This is a broader term encompassing any use of infrared cameras to capture facial
images.

• Depth Sensing in Facial Recognition: This refers to techniques that go beyond a simple 2D image and capture
depth information for more accurate recognition.

D. APPLICATIONS:
• Diagnosing genetic disorders
• Facilitating mental therapy
• Checkout-free software solutions
• Loyalty programs
• Personalized shopping experience

E. OBJECTIVES:
• Save images to the database
• Detect faces
• Match detected faces against the database
• Recognize faces
• Provide accurate information about them
F. MOTIVATION:
Face detection is a computer technology that relies on artificial intelligence (AI) and machine learning (ML). It is
used to detect human faces in images or videos. Thanks to face detection algorithms, it is possible to detect faces in
an image or video regardless of the camera angle, the position of the subject's head, lighting, or skin color.

When this technology is combined with biometric security systems (especially facial recognition), it makes it possible
to track people's faces in real time. Face detection is usually the first step in apps that use facial tracking, analysis, and
recognition, and it dramatically affects how the next steps in the app will work.

Face detection helps with facial analysis as well. It helps to figure out which parts of a video or picture should be
focused on to determine gender, age, or feelings. In the same way, face detection data is built into the algorithms of
facial recognition systems, which create "faceprint" maps of facial features. Face detection assists in identifying the
elements of the video or image that are necessary to generate a faceprint.

II. EXISTING WORK & PROPOSED WORKFLOW

A. OVERVIEW OF EXISTING WORK:

Fig.: Existing workflow of a facial recognition system. [12]

Facial recognition systems are like detectives for faces. They analyze an image or video to identify a person based on
their unique facial characteristics. Here's a breakdown of how they work:

Stage 1: Face Detection

Imagine searching for a specific person in a crowded room. Facial recognition does something similar. First, it needs
to locate and isolate the faces. Here are some common techniques used for face detection:

• Skin Tone Detection: The system might look for pixels with colors within the typical range of human skin tones. This
is a simple but less robust approach.
• Haar Feature Cascade Classifiers: These are machine learning models trained to identify specific features like edges
and corners that often form patterns around the eyes, nose, and mouth.
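The skin-tone approach above can be sketched with plain NumPy. The RGB thresholds below are a commonly quoted rule of thumb, used here purely for illustration; production systems usually work in calibrated colour spaces such as YCbCr or HSV:

```python
import numpy as np

def skin_mask(rgb: np.ndarray) -> np.ndarray:
    """Return a boolean mask of pixels whose colour falls inside a crude
    skin-tone range.  `rgb` is an (H, W, 3) uint8 array; the thresholds
    are illustrative only."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    # A commonly quoted RGB rule of thumb for skin pixels:
    return (r > 95) & (g > 40) & (b > 20) & \
           (r > g) & (r > b) & ((r - np.minimum(g, b)) > 15)

# Tiny demo: one skin-like pixel and one blue pixel.
img = np.array([[[200, 120, 90], [10, 10, 200]]], dtype=np.uint8)
mask = skin_mask(img)  # [[True, False]]
```

Pixels passing the test form candidate face regions, which later stages then refine.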

Stage 2: Face Recognition

Once a face is isolated, the system extracts a unique "facial signature" to distinguish it from others. Here's where
things get interesting:

• Facial Landmark Detection: The system identifies key points on the face such as the corners of the eyes, the tip of the
nose, and the edges of the mouth. Measuring the distances and ratios between these landmarks creates a basic descriptor
of the face.
• Feature Extraction: More sophisticated techniques go beyond landmarks. Deep learning algorithms can analyze the
entire face image and extract complex patterns specific to each individual.
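The landmark-based descriptor described above can be illustrated as follows. The landmark names and coordinates are invented for the example; in practice they would come from a landmark detector:

```python
import numpy as np

# Hypothetical 2D landmark coordinates (in practice produced by a
# landmark detector such as a 68-point model).
landmarks = {
    "left_eye":  np.array([30.0, 40.0]),
    "right_eye": np.array([70.0, 40.0]),
    "nose_tip":  np.array([50.0, 60.0]),
    "mouth_l":   np.array([35.0, 80.0]),
    "mouth_r":   np.array([65.0, 80.0]),
}

def geometric_descriptor(pts: dict) -> np.ndarray:
    """Build a simple descriptor from landmark distances, normalised by
    the inter-ocular distance so the vector is scale-invariant."""
    iod = np.linalg.norm(pts["right_eye"] - pts["left_eye"])  # inter-ocular distance
    pairs = [("left_eye", "nose_tip"), ("right_eye", "nose_tip"),
             ("nose_tip", "mouth_l"), ("nose_tip", "mouth_r"),
             ("mouth_l", "mouth_r")]
    return np.array([np.linalg.norm(pts[a] - pts[b]) for a, b in pairs]) / iod

desc = geometric_descriptor(landmarks)
```

Because every distance is divided by the inter-ocular distance, resizing the face leaves the descriptor unchanged.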

Stage 3: Matching and Recognition


Now that the system has a facial signature, it needs to compare it to something to recognize the person. This is where
the magic happens:

• Facial Recognition Database: The system compares the extracted features against a database of known faces. This
database can be local (for unlocking your phone) or stored on a server (for security systems).
• Matching Algorithm: A sophisticated algorithm calculates the similarity between the extracted features and the faces in
the database. The face with the closest match is considered the identified person.
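A minimal nearest-neighbour matcher of the kind described above might look like this; the database entries, feature values, and threshold are illustrative assumptions:

```python
import numpy as np

def match_face(query: np.ndarray, db: dict, threshold: float = 0.6):
    """Compare a query feature vector against enrolled feature vectors and
    return (best_name, distance), or (None, distance) when even the closest
    face is too far away.  The threshold is illustrative; real systems tune
    it on a validation set."""
    best_name, best_dist = None, float("inf")
    for enrolled_name, feat in db.items():
        dist = float(np.linalg.norm(query - feat))  # Euclidean distance
        if dist < best_dist:
            best_name, best_dist = enrolled_name, dist
    if best_dist > threshold:
        return None, best_dist  # subject "not present" in the database
    return best_name, best_dist

db = {"alice": np.array([0.1, 0.9, 0.4]),
      "bob":   np.array([0.8, 0.2, 0.5])}
name, dist = match_face(np.array([0.12, 0.88, 0.41]), db)
```

The distance threshold is what separates verification ("is this the claimed person?") from an open-ended "not present" answer.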

Accuracy and Challenges:

Facial recognition is impressive, but it's not perfect. Factors like lighting variations, pose changes, and even facial
expressions can affect accuracy. Additionally, bias in training data can lead to unfair recognition across different
demographics.

B. PROPOSED WORKFLOW:

Fig.: Proposed workflow of the face recognition system

The recognition stage typically uses an intensity (grayscale) representation of the image compressed by the
2D-DCT for further processing. This grayscale version contains intensity values for skin pixels. A block
diagram of the proposed face recognition technique is shown above. The second stage uses a self-organizing map
(SOM) with an unsupervised learning technique to classify vectors into groups, recognizing whether the subject in
the input image is "present" or "not present" in the image database. If the subject is classified as present, the
best-match image found in the training database is displayed as the result; otherwise, the result states that the
subject is not found in the image database.
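A toy version of such a self-organizing map, trained with an unsupervised competitive-learning rule, is sketched below; this is a simplified illustration of the technique, not the exact SOM configuration used in the project:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, n_units=4, epochs=50, lr=0.5):
    """Train a tiny 1-D self-organising map: each sample pulls its
    best-matching unit (and, early on, that unit's neighbours) towards
    itself, with a shrinking neighbourhood and a decaying learning rate."""
    dim = data.shape[1]
    w = rng.random((n_units, dim))                 # random initial weights
    for t in range(epochs):
        radius = max(1, int(n_units / 2 * (1 - t / epochs)))
        alpha = lr * (1 - t / epochs)
        for x in data:
            bmu = int(np.argmin(np.linalg.norm(w - x, axis=1)))  # best-matching unit
            for j in range(n_units):
                if abs(j - bmu) <= radius:
                    w[j] += alpha * (x - w[j])
    return w

# Two well-separated clusters of synthetic 3-D feature vectors.
data = np.vstack([rng.normal(0.0, 0.05, (20, 3)),
                  rng.normal(1.0, 0.05, (20, 3))])
w = train_som(data)

# Average quantization error: distance from each sample to its BMU.
qe = float(np.mean([np.min(np.linalg.norm(w - x, axis=1)) for x in data]))
```

After training, a query vector whose BMU distance exceeds some tolerance would be reported as "not present", mirroring the decision described above.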

C. Working of 2D-DCT model:

The basic operation of the DCT is as follows:


• The input image is N by M;
• f(i,j) is the intensity of the pixel in row i and column j;
• F(u,v) is the DCT coefficient in row u and column v of the DCT matrix.
• For most images, much of the signal energy lies at low frequencies; these appear in the upper-left
corner of the DCT.
• Compression is achieved because the lower-right values represent higher frequencies, which are often
small enough to be neglected with little visible distortion.
• The DCT input is an 8 by 8 array of integers containing each pixel's grayscale level;
• 8-bit pixels have levels from 0 to 255.
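These steps can be demonstrated with a small NumPy sketch that builds the orthonormal DCT-II basis matrix, transforms a synthetic 8 by 8 block, and checks that the energy concentrates in the upper-left (low-frequency) corner:

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II basis matrix; F = C @ f @ C.T is then the 2-D DCT."""
    C = np.zeros((n, n))
    for u in range(n):
        a = np.sqrt(1.0 / n) if u == 0 else np.sqrt(2.0 / n)
        for j in range(n):
            C[u, j] = a * np.cos((2 * j + 1) * u * np.pi / (2 * n))
    return C

C = dct_matrix(8)
block = np.arange(64, dtype=float).reshape(8, 8)  # synthetic 8x8 "grayscale" block
F = C @ block @ C.T                               # 2-D DCT coefficients

# Fraction of the total (absolute) energy held by the 2x2 low-frequency corner:
low_energy = np.abs(F[:2, :2]).sum() / np.abs(F).sum()
```

Because C is orthonormal, the block is recovered exactly by the inverse transform `C.T @ F @ C`; compression simply zeroes the small high-frequency coefficients before inverting.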

III. DATASET, IMPLEMENTATION AND RESULTS



A. DATASET DETAILS:

• The CASIA-WebFace dataset is used for face verification and face identification tasks.
• The dataset contains 494,414 face images of 10,575 real identities collected from the web.
• The MS-Celeb-1M dataset is a large-scale face recognition dataset consisting of 100K identities, each with
about 100 facial images.

B. TOOLS & TECHNOLOGY USED:


• Python: Python was the language of choice for this project. This was a straightforward decision for many
reasons.
• Python has a vast community and strong support on platforms like Stack Overflow. It is one of the most
commonly used languages, ensuring quick access to solutions.
• Python offers powerful tools for scientific computing, such as NumPy, Pandas, and SciPy, which are
widely available and well documented. These tools can significantly simplify programming and speed up
the iteration process.
• Python's forgiving nature allows for programming that closely resembles pseudocode. This is
particularly useful when implementing pseudocode from academic publications. However, Python is not
without faults; it is dynamically typed, and packages sometimes rely on duck typing. This can lead
to confusion when a method returns unexpected outputs, making it harder to learn and use new
libraries.

• Jupyter Notebook: Jupyter Notebook is a popular open-source web application used for creating and
sharing documents that include live code, equations, visualizations, and text. It is commonly used for
tasks such as data cleaning, statistical modeling, and machine learning.
• Noise Removal and Sharpening: Unwanted elements in the data can be removed using filters, and
images can be sharpened. Grayscale images are often used as input for this process.
• Erosion and Dilation: These operations are typically applied to binary images but can also be used with
grayscale images, depending on the variant. On binary images, the basic effect of erosion is to shrink
regions towards their boundaries.
• Negation: Negation involves creating a negative image, where the lightest areas in the original image
appear darkest and vice versa. This technique is commonly used in photography.

• Subtraction: Image subtraction involves taking the digital pixel values of one image and subtracting
them from another. This process can, for example, help to isolate a white tumor from the rest of the
image.
• Threshold: Thresholding is a method of segmenting images by converting grayscale images into binary
images.
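Thresholding and a basic 3x3 binary erosion, as described above, can be sketched with plain NumPy on a small synthetic image:

```python
import numpy as np

def threshold(gray: np.ndarray, t: int) -> np.ndarray:
    """Segment a grayscale image into a binary image: 1 where the
    intensity exceeds the threshold, 0 elsewhere."""
    return (gray > t).astype(np.uint8)

def erode(binary: np.ndarray) -> np.ndarray:
    """Binary erosion with a 3x3 square structuring element: a pixel
    stays set only if its whole 3x3 neighbourhood is set, shrinking
    regions towards their boundaries."""
    h, w = binary.shape
    padded = np.pad(binary, 1, constant_values=0)
    out = np.ones_like(binary)
    for di in range(3):
        for dj in range(3):
            out &= padded[di:di + h, dj:dj + w]
    return out

gray = np.array([[ 10, 200, 210,  20],
                 [ 30, 220, 230,  40],
                 [ 50, 240, 250,  60]], dtype=np.uint8)
binary = threshold(gray, 128)  # bright central column survives
eroded = erode(binary)         # the thin 2-pixel-wide strip is eroded away
```

On this synthetic image the bright strip is too thin to survive a 3x3 erosion, which is exactly the noise-suppressing behaviour the operator is used for.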

IV. RESULTS

A. MODEL BUILDING:

Fig. 9: New user login

B. MODEL COMPILATION:

Fig. 10: Camera access permission

C. MODEL TRAINING:

Fig. 12: Training of the model



D. TRAINED MODEL SUMMARY:

Fig. 13: Trained model summary



E. BEST TRAINED MODEL EPOCH DETAILS

Fig. 14: Epochs trained


F. ACCURACY:

In order to test the system, some test faces are required. There are many standard face databases for testing
and rating face detection algorithms. A standard database of face imagery is essential to supply standard imagery to
algorithm developers and to supply a sufficient number of images to allow testing of these algorithms. Without
such databases and standards, there is no way to accurately evaluate or compare facial recognition algorithms.
All the experiments described here were executed mainly on the faces provided by the ORL face database.

G. PLOT PERFORMANCE:

Fig. 15: Graph of the best trained model


CONCLUSION

In conclusion, the facial recognition system has successfully demonstrated the potential of facial recognition
technology. By achieving accurate face detection, robust feature extraction, and reliable recognition, this
project has contributed to the advancement of the field. As facial recognition continues to evolve,
responsible development and ethical considerations will be paramount. The facial recognition system serves
as a stepping stone towards a future where facial recognition technology can be a powerful tool for
identification and verification, while ensuring privacy and fairness for all.

• Effective Face Detection: The system successfully employed techniques like infrared and 3D
imaging to isolate faces within images or videos, paving the way for recognition.
• Accurate Feature Extraction: Techniques like the 2D-DCT were explored to extract unique facial signatures,
creating a digital representation for identification.
• Reliable Recognition: The system compared these features against a database of known faces, achieving
good recognition accuracy under certain circumstances.

We would like to thank our respected teacher, Prof. Ramakant Parida, once again for his valuable guidance.

Thank you.

REFERENCES:
[1] Kamnitsas, K., Ledig, C., Newcombe, V. F., Simpson, J. P., Kane, A. D., Menon, D. K., ... & Glocker, B. (2017).
Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation. Medical Image
Analysis, 36, 61-78.

[2] Havaei, M., Davy, A., Warde-Farley, D., Biard, A., Courville, A., Bengio, Y., & Pal, C. (2017).
Brain tumor segmentation with deep neural networks. Medical Image Analysis, 35, 18-31.

[3] Pereira, S., Pinto, A., Alves, V., & Silva, C. A. (2016). Brain tumor segmentation using convolutional neural
networks in MRI images. IEEE Transactions on Medical Imaging, 35(5), 1240-1251.

[4] Razzak, M. I., Naz, S., & Zaib, A. (2018). Deep learning for medical image processing: Overview,
challenges and the future. In Classification in BioApps (pp. 323-350). Springer, Cham.

[5] Zhao, Q., Feng, Y., Cheng, S., Dong, Z., & Tu, Z. (2015). An automatic detection system (pp. 424-432).

[6] Han, Z., Wei, B., Zheng, Y., Yin, Y., & Li, K. (2017). AI face detection model based on deep learning.
In 2017 12th IEEE Conference on Industrial Electronics and Applications (ICIEA) (pp. 1258-1262). IEEE.

[7] Isensee, F., Kickingereder, P., Wick, W., Bendszus, M., & Maier-Hein, K. H. (2018). No new-net.
In International MICCAI Brainlesion Workshop (pp. 234-244). Springer, Cham.

[8] Shin, H. C., Roth, H. R., Gao, M., Lu, L., Xu, Z., Nogues, I., ... & Summers, R. M. (2016). Deep
convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics
and transfer learning. IEEE Transactions on Medical Imaging, 35(5), 1285-1298.
