
NFC Institute of Engineering & Technology Multan, Pakistan (Federal degree awarding Institute) The project Implementation of Face Recognition based Attendance System presented by: Ramsha Tariq Zaira Ashraf Fatima Masood

(2K9-CSE-129) (2K9-CSE-130) (2k9-CSE-153)

Under the supervision of their project advisor, and approved by the project examination committee, this project has been accepted by the NFC Institute of Engineering & Technology in partial fulfillment of the requirements for the four-year degree of B.Sc. (Computer Systems Engineering).

(Engr. Shahzad Ashraf) Lecturer Internal Examiner

(Dr. M Shoaib) Professor External Examiner

 (Dr. Kamran Liaqat Bhatti) Associate Professor Head, Department of Electrical Engineering


Dedication

This project work is dedicated to our beloved parents and teachers, who inspired us to the higher ideals of life.


Acknowledgements
All praise to Allah Almighty, Lord of all the worlds, the Most Beneficent, the Most Merciful, Owner of the Day of Judgment. We ask Allah to bestow His blessings and salutations of peace upon our noble Prophet Muhammad (PBUH). We pay our humblest gratitude to Allah, who bestowed upon us the blessings that guided and helped us to complete this report. We are thankful that we have come this far in knowledge, knowing that its pursuit never comes to an end.

At the beginning, we would like to acknowledge the assistance and contributions of the NFC Institute of Engineering & Technology, Multan, for supporting us with everything we needed, from the books to the full care it provided, to help us become professionals in the field of Information Technology. We sincerely thank our parents, families and friends for all the support, encouragement and patience they have given us throughout.

It is our greatest pleasure to acknowledge the efforts, guidance and contributions of our supervisor, Engr. Shahzad Ashraf. We would also like to acknowledge the efforts and knowledge of the NFC Institute of Engineering & Technology, Multan staff, the professors and instructors who provided us with help, support and guidance throughout.


Table of Contents

LIST OF FIGURES
LIST OF TABLES

CHAPTER 1 INTRODUCTION
1.1 Background
1.2 Problem Definition
1.3 Current / Existing Systems
1.3.1 Entertainment / Human-Computer Interaction
1.3.2 Smart Cards and Face ID
1.3.3 Security
1.3.4 Law Enforcement and Surveillance
1.3.5 Others
1.4 Proposed Scope and Enhancement
1.5 Scope Excluded (due to constraints, mainly time)
1.6 Development of Project Objectives
1.6.1 Academic Objectives
1.6.2 Managerial Objectives

CHAPTER 2 PLANNING AND REQUIREMENTS
2.1 Scope Initiation
2.2 Activities Definition
2.3 Information Gathering
2.4 Literature Review and Significant Prior Research
2.5 Emergence
2.6 Biometric Recognition
2.6.1 Iris
2.6.2 Retina
2.6.3 Face
2.6.4 Finger
2.6.5 Hand
2.6.5.1 Hand Geometry
2.6.5.2 Vein Pattern Analysis
2.6.5.3 Palm Identification
2.7 Modules of a Biometric System
2.7.1 Sensor or Capturing Module
2.7.2 Feature Extraction Module
2.7.3 Matcher Module
2.7.4 System Database Module
2.8 Advantages

CHAPTER 3 FACIAL RECOGNITION
3.1 Facial Recognition
3.1.1 Face Detection
3.1.2 Face Alignment
3.1.3 Feature Extraction
3.1.4 Face Matching
3.2 Facial Recognition Techniques
3.2.1 Eigenfaces
3.2.2 Neural Networks
3.2.3 Graph Matching
3.2.4 Hidden Markov Models (HMMs)
3.2.5 Geometrical Feature Matching
3.2.6 Template Matching
3.2.7 3D Morphable Model
3.2.8 Automatic Facial Recognition Process Flow
3.3 Limitations and Challenges of Face Recognition Technologies
3.4 Accuracy
3.5 Pattern Classes and Patterns
3.6 Fundamental Problems in Pattern Recognition System Design
3.7 Supervised and Unsupervised Pattern Recognition
3.8 Outline of a Typical Pattern Recognition System
3.9 Training and Learning
3.10 Security
3.11 Privacy
3.12 System Requirements
3.12.1 Student Requirements
3.12.2 Teaching Staff Requirements
3.12.3 Administrator Requirements
3.13 System Development Requirements
3.13.1 Hardware
3.13.2 Software

CHAPTER 4 SYSTEM ANALYSIS AND DESIGN
4.1 Logical Data Flow (DFD)
4.2 System with Proposed Automated Attendance Module
4.3 Context Diagram (Level 0)
4.4 High-Level Diagram (Level 1)
4.5 Low-Level Diagram (Level 2)
4.6 Data Dictionary
4.7 Data Dictionary Cards
4.8 Flow Chart of the Complete Project

CHAPTER 5 HARDWARE AND SOFTWARE
5.1 MAX232
5.2 Features of MAX232
5.3 Applications
5.4 Functional Description of MAX232
5.5 GSM
5.6 GSM/GPRS Module M10
5.7 SIM 900 GSM Module
5.8 Software Used

CHAPTER 6 IMPLEMENTATION AND RESULTS
6.1 Introduction
6.2 Face Recognition
6.3 Background
6.3.1 Outline of a Typical Face Recognition System
6.3.2 Problems That May Occur During Face Recognition
6.4 Feature-Based Face Recognition
6.4.1 Introduction
6.4.2 Effective Feature Selection
6.5 Color-Based Technique
6.5.1 Description
6.6 Problems Faced
6.7 Limitations
6.8 Recommendations
6.9 Principal Component Analysis
6.9.1 Description
6.9.2 PCA-Based Face Recognition Algorithm
6.9.2.1 Initialization
6.10 Main Algorithm Phases
6.10.1 Face Database Formation Phase
6.10.2 Training Phase
6.10.3 Recognition and Learning Phase
6.11 The Use of Eigenfaces for Recognition
6.11.1 Overview of the Algorithm Using Eigenfaces
6.11.2 Eigenvectors and Eigenvalues Definitions
6.11.3 Problems Faced
6.11.4 Limitations
6.11.5 Recommendation
6.12 Calculation of Eigenfaces with PCA
6.12.1 Basic Definitions
6.12.2 Computation Steps
6.13 Classifying Images
6.13.1 Input and Output
6.13.2 Visual Basic Implementation
6.13.3 Database Implementation
6.13.3.1 Database of Face Images

CHAPTER 7 CONCLUSION
7.1 Conclusion
7.2 Future Recommendations for Organizations
7.2.1 Bulk Load of Employees
7.2.2 Storage Size
7.2.3 Access of the Data
7.3 Additional Features

LIST OF ABBREVIATIONS
APPENDIX A


LIST OF FIGURES

Figure 2.1 Typical biometric recognition system
Figure 3.1 Face recognition processing flow
Figure 3.2 Biometrics system errors
Figure 3.3 Position, lighting, expression
Figure 3.4 Occlusion/blockage/hiding of some features
Figure 3.5 Ageing
Figure 3.6 Two disjoint pattern classes
Figure 3.7 Functional block diagram of an adaptive pattern recognition system
Figure 4.1 Context Diagram (Level 0)
Figure 4.2 High-Level Diagram (Level 1)
Figure 4.3 Low-Level Diagram (Level 2, Process 2)
Figure 4.4 Low-Level Diagram (Level 2, Process 3)
Figure 4.5 Flow chart
Figure 5.1 MAX232
Figure 5.2 GSM Module
Figure 6.1 Outline of a typical face recognition system
Figure 6.2 Problems
Figure 6.3 Geometric figures for feature-based recognition
Figure 6.4 Sample eigenfaces from the sample training set
Figure 6.5 PCA training phase
Figure 6.6 PCA recognition phase
Figure 6.7 Training set
Figure 6.8 Normalized training set
Figure 6.9 Mean image
Figure 6.10 Eigenfaces
Figure 6.11 The input image and its reconstructed image
Figure 6.12 The weights of the input image and its Euclidean distance

ABSTRACT

This research aims at providing a system that automatically records students' attendance during lecture hours in a hall or room using facial recognition technology, instead of the traditional manual methods. The objective behind this research is to thoroughly study the field of pattern recognition (facial recognition), which is very important and is used in various applications such as identification and detection. A webcam is used to capture the faces of the students, and the images of the students are stored in a database built in MS Access. When a face is matched against the database, attendance is marked, and a confirmation message is sent to the student's mobile phone to assure that the attendance has been marked successfully. The designed system is tested with different faces to evaluate its performance.

CHAPTER 1 INTRODUCTION

1.1 Background
With the rapid development in the field of pattern recognition and its uses in different areas (e.g. signature recognition, facial recognition) arises the importance of utilizing this technology in large organizations. This is mainly because these applications help top management take decisions that improve the performance and effectiveness of the organization. On the other hand, for an organization to be effective, it needs accurate and fast means of recording the performance of the people inside it. Biometric recognition has the potential to become an irreplaceable part of many identification systems used for evaluating the performance of those working within the organization. Although biometric technologies are being applied in many fields, they have not yet delivered their promise of guaranteeing automatic human recognition.

1.2 Problem Definition
Every time a lecture, section or laboratory starts, the lecturer or teaching assistant delays the session to record students' attendance. This is a lengthy process that takes a lot of time and effort, especially in a lecture with a huge number of students. It also causes a lot of disturbance and interruption when an exam is held. Moreover, the attendance sheet is subject to damage and loss while being passed between different students or teaching staff. And when the number of students enrolled in a certain course is huge, the doctors tend to call a couple of student names at random, which is not a fair student evaluation process either. Finally, these attendance records are used by the staff to monitor the students' attendance rates.

1.3 Current / Existing systems
Automatic face recognition techniques have been utilized in many applications over the past years (Li and Jain, 2004). Here are some application areas and examples presented in research conducted by Zhao et al. (2003), Li and Jain (2004) and Roethenbaugh (2005) [9][10][14].

1.3.1 Entertainment / Human-Computer Interaction
Video gaming, virtual reality, training programs, proactive computing.

1.3.2 Smart Cards and Face ID
User authentication, stored-value passports, voters' registration.

1.3.3 Security
TV parental control, device logon, application security, database security, drivers' licenses, national ID, file encryption, secure trading terminals, medical records, internet and intranet security, terrorist alerts.

1.3.4 Law Enforcement and Surveillance
Advanced video surveillance, crime stopping and suspect alerts, suspect tracking and investigation, suspect background checks, post-event analysis, shoplifter recognition.

1.3.5 Others
Time attendance and monitoring.

Face identification applications are becoming more and more widely used, and this use is expected to keep growing in both small- and large-scale applications.

1.4 Proposed scope and enhancement
Our project proposes solutions to all the above-mentioned problems by providing an automated attendance system for all the students that attend a lecture, section, laboratory or exam at its specific time, thus saving time and effort and reducing distractions and disturbance. Another advantage concerning exams is that when the doctor or the advisor accidentally loses an exam paper, or a student lies about attending the exam, there will be a record of the students' attendance for the exam at that time, thus protecting both doctors' and students' rights. In addition, an automated performance evaluation would provide more accurate and reliable results, avoiding human error.

1.5 Scope excluded (due to constraints, mainly time)
We have about 16 weeks to deliver a well-documented system with full implementation and functionality. We will also exclude costing from our project.

1.6 Development of Project Objectives

1.6.1 Academic objectives
The objective behind this research is to thoroughly study the field of pattern recognition (more specifically facial recognition), which is very important and is used in various applications like identification and detection.

1.6.2 Managerial objectives
The project aims at helping the academic staff (the end users) at the Petra University for information technology in evaluating the students' performance according to their attendance, which has a vital relationship with their grades in quizzes and exams. "The main objective of the system is to provide an automated attendance system that is practical, reliable and eliminates disturbance and time loss in traditional attendance systems." "A further objective is to present a system that can accurately evaluate students' performances depending on their recorded attendance rates."

CHAPTER 2 PLANNING AND REQUIREMENTS

2.1 Scope Initiation
We will begin the project by understanding the problem and gathering information about the system by interviewing our supervisor. We will then analyze the information from the interviews, write the functional and non-functional requirements, draw the data flow diagrams, design the interface, implement the system using VB6, and finally document and present it.

2.2 Activities definition
We used a Gantt chart diagram to show the tasks and the time allocated to each of them; the Gantt chart is shown in Figure 1. The WBS is in the following order:

- Project Initiating
- Planning and Requirements
- Analysis and Design
- Implementation

2.3 Information Gathering
Data will be collected and analyzed by conducting interviews to get familiar with, and know in detail, all the process workflow information and rules needed to understand the functionality of the students' attendance process. We conducted an interview with our supervisor, Engr. Shahzad Ashraf, to gather general information about the system to help us with our project; he told us that the main goal is to have a computerized attendance system for students. We then collected information about recognition systems and how they work through internet research.

2.4 Literature review and Significant prior research
We have studied the literature available in the fields of pattern recognition and biometric systems, with a focus on facial recognition. In addition, a study of previous attendance systems and their automation attempts was conducted to examine similar previous systems.

The literature is divided into three main parts. The first part examines the different biometric characteristics and systems. After that, an in-depth study of facial recognition is conducted, for it is the technology to be used by our proposed project. The final portion of this literature review presents the different time and attendance systems offered by different researchers and vendors. Among these three types, scientists and researchers consider biometric recognition systems to be high-level security systems. They define the term biometrics as the "science involving the statistical analysis of biological characteristics".[12][7][5] Biometric recognition generally matches a live digital image of a portion of a certain physical body part with a previously recorded image of that same portion, whether in identification mode, where one-to-many comparisons take place, or verification (authentication) mode, where one-to-one comparisons occur.[1][9]
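The identification (one-to-many) versus verification (one-to-one) distinction can be sketched in code. The snippet below is an illustrative toy, not the project's implementation; the template vectors, user IDs and threshold are invented for the example:

```python
# Verification: one-to-one comparison against a claimed identity.
# Identification: one-to-many comparison against the whole database.

def dist(a, b):
    """Euclidean distance between two template vectors."""
    return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

db = {"alice": [0.1, 0.9], "bob": [0.8, 0.2]}  # invented stored templates
THRESH = 0.3                                   # invented accept threshold

def verify(sample, claimed_id):
    """One-to-one: does the sample match the claimed user's template?"""
    return dist(sample, db[claimed_id]) <= THRESH

def identify(sample):
    """One-to-many: which enrolled user, if any, does the sample match?"""
    best = min(db, key=lambda uid: dist(sample, db[uid]))
    return best if dist(sample, db[best]) <= THRESH else None

sample = [0.15, 0.85]
print(verify(sample, "alice"))  # True
print(identify(sample))         # alice
```

Note that identification degrades as the database grows (more chances of a false match), which is why the two modes are usually evaluated separately.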

2.5 Emergence
For thousands of years, humans have used biometrics and physical characteristics such as face and voice to identify other human beings for a wide variety of purposes, ranging from simple business interactions to criminal investigations.[5][10][12] The study of fingerprints dates back to ancient China, while the ancient Jordanians commonly used physical traits such as height, scars, and eye and hair color to identify individuals for business transactions [5]. Generally speaking, the human mind automatically recognizes different people depending on their external physical and behavioral features.

2.6 Biometric recognition
There are three different types of authentication: something you know, such as passwords; something you have, such as badges or cards; and finally something you are, which mainly depends on biometrics and physical traits. Each of these three authentication methods has its advantages and disadvantages, and each is considered appropriate for certain types of application.

2.6.1 Iris
The iris is the colored tissue forming the ring surrounding the eye pupil. Each person's iris has a unique structure and a complex pattern. In addition, it is believed that artificially duplicating an iris is virtually impossible. It is also known that the iris is among the first body parts to decay after death; therefore it is unlikely that a dead iris could be used to bypass a biometric system.

2.6.2 Retina
The retina is the layer of blood vessels situated at the back of the eye. Just like the iris, the retina forms a unique pattern and decays quickly after death. Retina recognition systems are complex, but at the same time they are regarded as the most secure biometric systems.

2.6.3 Face
For a computerized system to mimic the human ability to recognize faces, sophisticated and complex artificial intelligence and machine learning are needed to compare and match human faces with different poses, facial hair, glasses, etc. That is why these systems depend greatly on the extraction and comparison engines. Different tools may be used in these systems, such as standard video, still imaging or thermal imaging.

2.6.4 Finger
In this type of study, the fingertips are analyzed for the unique pattern and print produced by the finger minutiae, in addition to finger geometry, which concentrates on the shape of the finger rather than the print itself.

2.6.5 Hand
There are mainly three biometric systems using the characteristics of the human hand: hand geometry, vein pattern analysis, and palm identification.

2.6.5.1 Hand geometry
Just as with finger geometry, a hand geometry analyzing system captures a three-dimensional image of the hand and measures the shape and length of the fingers and knuckles. Hand geometry was among the first biometric systems used for access control applications. Although it is not very accurate, it is considered very convenient, and large volumes can be processed quickly.

2.6.5.2 Vein Pattern analysis
The past number of years has witnessed much development in the area of analyzing the patterns of the veins in the back of the human hand. This technique is called the "vein check"; it examines the unique pattern of the blood vessels, or what is called the "vein tree", which can be captured by infrared light.

2.6.5.3 Palm identification
Palm biometrics is very similar to the study of fingerprints: the print and the patterns of ridges, valleys and minutiae found on the palm are analyzed and studied.

2.6.6 Voice
Voice biometrics focuses on the sound of the voice, not on what is being said. That is why it is important to distinguish between this technology and technologies that recognize words and commands. The sound of the human voice is caused by vibration in the vocal cords. The shape and size of the vocal tract, in addition to the shape of the mouth and the nasal cavities, all contribute to the way a voice sounds. Voice recognition techniques may use either text-dependent or text-independent methods. In other words, voice may be captured by speaking out any phrase, word or number (text-independent), or by specifically saying a certain password combining phrases, words or numbers (text-dependent). However, this biometric system may be challenged by background noise, which reduces the quality of the data and the system's performance.

2.7 Modules of a Biometric System
Biometric systems are designed using the following four modules.

2.7.1 Sensor or Capturing Module
This module captures the biometric data. Different devices are used for different biometric systems according to the feature that is being captured; for example, a video camera, thermal camera, voice recorder or infrared sensor.

2.7.2 Feature Extraction Module
This module processes the captured biometric data to extract a set of discriminatory features forming a template. For example, the position and orientation of minutiae points in a fingerprint image are extracted in the feature extraction module of a fingerprint-based biometric system.

2.7.3 Matcher Module
In this module, the features extracted by the previous module are compared to those stored, and decisions regarding the matching score are made. For example, in a fingerprint-based biometric system, the number of matching minutiae points determines the matching score and consequently the match/non-match result.

2.7.4 System Database Module
This module is used by the biometric system to store the templates of the enrolled users. The enrollment module is responsible for enrolling individuals into the biometric system database, where the biometric characteristic of an individual is first scanned by the sensor to produce a digital representation (initial stored template) of the characteristic.
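The four modules above can be sketched as a minimal pipeline. This is an illustrative toy, not the system described in this report: the "sensor" returns a hard-coded 4x4 grayscale image, the "features" are a crude intensity histogram, and the enrolled template is invented:

```python
# Minimal sketch of the four biometric modules described above.

def capture_sensor_data():
    """Sensor module: stand-in for a camera frame (4x4 grayscale)."""
    return [[10, 10, 200, 200],
            [10, 10, 200, 200],
            [90, 90, 90, 90],
            [90, 90, 90, 90]]

def extract_features(image):
    """Feature extraction module: a crude 3-bin intensity histogram."""
    flat = [p for row in image for p in row]
    return [sum(1 for p in flat if lo <= p < hi)
            for lo, hi in [(0, 64), (64, 128), (128, 256)]]

def match(features, template, threshold=2.0):
    """Matcher module: Euclidean distance against a stored template."""
    d = sum((f - t) ** 2 for f, t in zip(features, template)) ** 0.5
    return d <= threshold, d

# System database module: enrolled user templates keyed by ID.
database = {"2K9-CSE-129": [4, 8, 4]}

feats = extract_features(capture_sensor_data())
ok, d = match(feats, database["2K9-CSE-129"])
print(ok, d)  # the toy image matches its own enrolled template
```

A real system replaces each stub with the corresponding hardware or algorithm (camera capture, facial feature extraction, a robust matcher, and a persistent database), but the data flow between the four modules is the same.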

2.8 Advantages
The main advantage of biometric systems over normal automated systems is that they really do what they are supposed to do, which is authenticating the user in a way that imitates human capabilities, using real human physical characteristics, which are almost impossible to change. In addition, some researchers propose that biometrics are not subject to theft, loss or passing to anyone else, as happens with cards or passwords; others object, pointing out that they are not a secret and could in many cases be falsified or stolen from computer systems.[11][8] Another advantage of using biometrics is speed. For example, authentication of a regular user with iris recognition takes two to three seconds, while using a key to open a door needs five to ten seconds.

Moreover, as Zhao et al. (2003) mentioned in their paper, the normal human perception system has an impressive capability of recognizing and distinguishing different faces, but this capability is limited by the number and type of faces that can be easily processed. This limitation is overcome by computerized facial recognition systems, which can mimic the human mind's capability in addition to storing and processing as many people as necessary.[14] Finally, it is important to point out that the technologies able to support biometric-based applications are becoming more and more available, which makes biometric-based systems more accessible to users.

CHAPTER 3 FACIAL RECOGNITION

3.1 Facial Recognition
According to Zhao et al. (2003), in addition to Li and Jain (2004), face recognition is considered to be one of the most successful applications of image analysis and processing; that is the main reason behind the great attention it has been given in the past several years. This attention is clearly evident in the emergence of many research conferences targeting the field of facial recognition, such as the International Conference on Audio- and Video-Based Person Authentication (AVBPA) and the International Conference on Automatic Face and Gesture Recognition (AFGR). In addition, many systematic, empirical evaluation techniques have been developed in this field (FRT), including FERET (2000), FRVT (2000 and 2002), and XM2VTS (1999).[14]

3.1.1 Face Detection
This process separates the facial area from the rest of the background image. In the case of video streams, faces can be tracked using a face tracking component.

3.1.2 Face Alignment
This process focuses on finding the best localization and normalization of the face. Where the detection step roughly estimates the position of the face, this step outlines the facial components, such as the face outline, eyes, nose, ears and mouth. Afterwards, normalization with respect to geometrical transforms such as size and pose, in addition to photometrical properties such as illumination and grey scale, takes place.

3.1.3 Feature Extraction
After the previous two steps, feature extraction is performed, resulting in effective information that is useful for distinguishing between the faces of different persons and stable with respect to geometrical and photometrical variations.

3.1.4 Face Matching
The extracted features are compared to those stored in the database, and decisions are made according to whether there is sufficient confidence in the match score.

3.2 Facial Recognition Techniques
This section gives an overview of the major human face recognition techniques, which apply mostly to frontal faces.

3.2.1 Eigenfaces
The eigenface approach is one of the most thoroughly investigated approaches to face recognition. Variants of it have been proposed that are less sensitive to appearance changes than the standard eigenface method.

3.2.2 Neural Networks
The attractiveness of using neural networks is largely due to the nonlinearity of the network. In general, neural network approaches encounter problems when the number of classes (i.e., individuals) increases.

3.2.3 Graph Matching
Graph matching is another approach to face recognition, presented as a dynamic link structure. In general, dynamic link architecture is superior to other face recognition techniques in terms of rotation invariance; however, the matching process is computationally expensive.

3.2.4 Hidden Markov Models (HMMs)
Stochastic modeling of non-stationary vector time series based on HMMs has been very successful for speech applications, and the method has also been applied to human face recognition. Faces are intuitively divided into regions such as the eyes, nose, mouth, etc., which can be associated with the states of a hidden Markov model.

3.2.5 Geometrical Feature Matching
Geometrical feature matching techniques are based on the computation of a set of geometrical features from the picture of a face. Current automated face feature location algorithms do not provide a high degree of accuracy and require considerable computational time.

3.2.6 Template Matching
A simple version of template matching is that a test image, represented as a two-dimensional array of intensity values, is compared, using a suitable metric such as the Euclidean distance, with a single template representing the whole face.

3.2.7 3D Morphable Model
The morphable face model is based on a vector space representation of faces, constructed such that any convex combination of the shape and texture vectors of a set of examples describes a realistic human face.

3.2.8 Automatic Facial Recognition Process Flow
Generally, any biometric system goes through the same processes of the four modules explained earlier: biometric capture, feature extraction, and comparison with the templates available in the database. The facial recognition process is similar to the general biometric recognition process. As explained by Li and Jain (2005), in face-based biometric systems, detection, alignment, feature extraction, and matching take place.[9][10]
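The simple template-matching idea of section 3.2.6 can be shown in a few lines. The images below are tiny invented intensity arrays, not faces; the point is only the metric:

```python
# Toy template matching: compare a test "image" to a stored template
# by Euclidean distance over the raw intensity values.

def euclidean_distance(image, template):
    """Distance between two equal-sized 2-D intensity arrays."""
    return sum((p - q) ** 2
               for row_i, row_t in zip(image, template)
               for p, q in zip(row_i, row_t)) ** 0.5

template = [[100, 100], [50, 50]]   # stored whole-face template
test_a   = [[100, 100], [50, 50]]   # identical image
test_b   = [[110, 100], [50, 50]]   # one slightly brighter pixel

print(euclidean_distance(test_a, template))  # 0.0  (perfect match)
print(euclidean_distance(test_b, template))  # 10.0 (small difference)
```

The same distance applied to raw pixels is very sensitive to lighting and pose, which is exactly why the more elaborate techniques above (eigenfaces, feature matching) were developed.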

Figure 3.1 Face recognition processing flow

The facial recognition process can be divided into two main stages: processing before detection, where face detection and alignment take place (localization and normalization); and afterwards, recognition, which occurs through the feature extraction and matching steps.

3.3 Limitations and Challenges of Face Recognition Technologies
As mentioned earlier, face recognition technology, just as any other biometric technology, has not yet delivered its promise. In spite of all its potential, it is still quite limited in its applied scope. Many researchers have identified different problems for the biometric system; they can be categorized into four main challenges:

3.4 Accuracy
Two biometric samples collected from the same person are not exactly the same, due to imperfect imaging conditions. In addition, face recognition technology is not robust enough to handle uncontrolled and unconstrained environments; in consequence, the accuracy of the results is not acceptable. As explained in Figure 3.2, inaccuracy can occur in two different forms: either a False Non-Match (False Reject / Type 1 error), in which the system falsely declares the failure of a match between the instance and the correct stored template; or a False Match (False Accept / Type 2 error), in which the system incorrectly declares a successful match between the instance and one of the templates in the database.

Figure 3.2 Biometrics System Errors (Biometrics at the Frontiers: Assessing the Impact on Society, 2005)

These errors are mainly caused by the complexity and difficulty of the recognition process, because of uncontrollable variables such as lighting, pose, expression, aging, weight gain or loss, hairstyle and accessories; Figures 3.3, 3.4 and 3.5 present examples of some types of variation. This challenge is reduced as more

Figure 3.3 Position, lighting, expression

Figure 3.4 Occlusion/blockage/hiding of some features

Figure 3.5 Ageing

3.5 Pattern classes and patterns
Pattern recognition can be defined as the categorization of input data into identifiable classes via the extraction of significant features or attributes of the data from a background of irrelevant detail. A pattern class is a category determined by some given common attributes or features. The features of a pattern class are the characterizing attributes common to all patterns belonging to that class; such features are often referred to as intra-set features. The features which represent the differences between pattern classes may be referred to as inter-set features. A pattern is the description of any member of a category representing a pattern class. For convenience, patterns are usually represented by a vector such as:

x = [x1, x2, ..., xn]^T

where each element xj represents a feature of that pattern. It is often useful to think of a pattern vector as a point in an n-dimensional Euclidean space.

3.6 Fundamental problems in pattern recognition system design
The design of automatic pattern recognition systems generally involves several major problem areas:

A- First of all, we have to deal with the representation of the input data, which can be measured from the objects to be recognized. This is the sensing problem. Each measured quantity describes a characteristic of the pattern or object. In other words, a pattern vector that describes the input data has to be formed. The pattern vectors contain all the measured information available about the patterns. The set of patterns belonging to the same class corresponds to an ensemble of points scattered within some region of the measurement space. A simple example of this is shown in Figure 3.6 for two pattern classes, denoted by w1 and w2.

Figure 3.6 Two disjoint pattern classes. Each pattern is characterized by two measurements, height and weight; the pattern vector therefore has the form x = {x1, x2}.
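The height/weight example of Figure 3.6 can be written directly as pattern vectors in code. The numbers below are invented for illustration only:

```python
# A pattern is a point in n-dimensional Euclidean space.  Here n = 2:
# each person is the vector x = [height_cm, weight_kg].
import math

def distance(x, y):
    """Euclidean distance between two pattern vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

pattern_a = [170.0, 65.0]   # a member of class w1
pattern_b = [172.0, 68.0]   # another member of w1
pattern_c = [110.0, 20.0]   # a member of class w2

# Patterns of the same class cluster together in the measurement space:
print(distance(pattern_a, pattern_b) < distance(pattern_a, pattern_c))  # True
```

The fact that same-class points lie close together is what makes the decision-boundary formulation in the next problem area possible.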

B- The second problem in pattern recognition concerns the extraction of characteristic features or attributes from the received input data and the reduction of the dimensionality of pattern vectors. This is often referred to as the pre-processing and feature extraction problem. The elements of intra-set features which are common to all pattern classes under consideration carry no discriminatory information and can be ignored. If a complete set of discriminatory features for each pattern class can be determined from the measured data, the recognition and classification of patterns will present little difficulty. Automatic recognition may be reduced to a simple matching process or a table look-up scheme. However, in most pattern recognition problems which arise in practice, the determination of a complete set of discriminatory features is extremely difficult, if not impossible.

C- The third problem in pattern recognition system design involves the determination of the optimum decision procedures, which are needed in the identification and classification process. After the observed data from patterns to be recognized have been expressed in the form of pattern points or measurement vectors in the pattern space, we want the machine to decide to which pattern class these data belong. Let the system be capable of recognizing M different pattern classes. Then the pattern space can be considered as consisting of M regions, each of which encloses the pattern points of a class. The recognition problem can now be viewed as that of generating the decision boundaries which separate the M pattern classes on the basis of the observed measurement vectors. These decision boundaries are generally determined by decision functions.
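A decision function of this kind can be sketched with a minimum-distance (nearest class mean) classifier, in which each class region is the set of points closer to that class's mean than to any other; the class names and sample values below are illustrative only:

```python
# Minimal sketch of a decision procedure: each class w_i is represented by
# its mean vector, and a pattern is assigned to the class whose mean is
# nearest. The decision boundaries are the perpendicular bisectors
# between class means.
def class_mean(patterns):
    """Component-wise mean of a list of pattern vectors."""
    n = len(patterns)
    return tuple(sum(p[j] for p in patterns) / n for j in range(len(patterns[0])))

def classify(x, means):
    """Return the label of the nearest class mean (minimum-distance rule)."""
    def sqdist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(means, key=lambda label: sqdist(x, means[label]))

w1 = [(150.0, 50.0), (155.0, 55.0)]   # training patterns of class w1
w2 = [(180.0, 85.0), (185.0, 90.0)]   # training patterns of class w2
means = {"w1": class_mean(w1), "w2": class_mean(w2)}
print(classify((158.0, 57.0), means))  # w1
```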

3.7 Supervised and Unsupervised Pattern Recognition
In most cases, representative patterns from each class under consideration are available. In these situations, supervised pattern recognition techniques are applicable. In a supervised learning environment, the system is taught to recognize patterns by means of various adaptive schemes. The essentials of this approach are a set of training patterns of known classification and the implementation of an appropriate learning procedure. In some applications, only a set of training patterns of unknown classification may be available. In these situations, unsupervised pattern recognition techniques are applicable. As mentioned above, supervised pattern recognition is characterized by the fact that the correct classification of every training pattern is known. In the unsupervised case however, one is faced with the problem of actually learning the pattern classes present in the given data. This problem is also known as “learning without a teacher”.
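The unsupervised case, "learning without a teacher", can be illustrated with a toy example: given unlabelled 1-D measurements, a few iterations of plain k-means must discover the two pattern classes on its own (the values below are made up):

```python
# Toy unsupervised learning: 1-D k-means with two cluster centres.
# No class labels are given; the algorithm alternates between assigning
# points to the nearest centre and recomputing each centre as its
# cluster's mean.
def kmeans_1d(points, c1, c2, iterations=10):
    """Cluster 1-D points around two centres with plain k-means."""
    for _ in range(iterations):
        g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
        g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
        if g1:
            c1 = sum(g1) / len(g1)
        if g2:
            c2 = sum(g2) / len(g2)
    return c1, c2

points = [1.0, 1.2, 0.8, 9.0, 9.4, 8.6]   # two unlabelled "classes"
print(kmeans_1d(points, 0.0, 5.0))  # centres settle near 1.0 and 9.0
```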

3.8 Outline of a Typical Pattern Recognition System
In Figure 3.7, functional block diagram of an adaptive pattern recognition system is shown. Although the distinction between optimum decision and pre-processing or feature extraction is not essential, the concept of functional breakdown provides a clear picture for the understanding of the
pattern recognition problem.

Fig 3.7 Functional block diagram of an adaptive pattern recognition system.

Correct recognition will depend on the amount of discriminating information contained in the measurements and the effective utilization of this information. In some applications, contextual information is indispensable in achieving accurate recognition. For instance, in the recognition of cursive handwritten characters and the classification of fingerprints, contextual information is extremely desirable. When we wish to design a pattern recognition system which is resistant to distortions, flexible under large pattern deviations, and capable of self-adjustment, we are confronted with the adaptation problem.

3.9 Training and Learning
The decision functions can be generated in a variety of ways. When complete a priori knowledge about the patterns to be recognized is available, the decision function may be determined with precision on the basis of this information. When only qualitative knowledge about the patterns is available, reasonable guesses of the forms of the decision functions can be made. In this case the decision boundaries may be far from correct, and it is necessary to design the machine to achieve satisfactory performance through a sequence of adjustments. The more general situation is that there exists little, if any, a priori knowledge about the patterns to be recognized. Under these circumstances pattern recognizing machines are best designed using a training or learning procedure.

Arbitrary decision functions are initially assumed, and through a sequence of iterative training steps these decision functions are made to approach optimum or satisfactory forms. It is important to keep in mind that learning or training takes place only during
the design (or updating) phase of a pattern recognition system. Once acceptable results have been obtained with the training set of patterns, the system is applied to the task of actually performing recognition on samples drawn from the environment in which it is expected to operate. The quality of the recognition performance will be largely determined by how closely the training patterns resemble the actual data with which the system will be confronted during normal operation.

3.10 Security

Facial recognition and other biometric systems are used for many security applications, claiming that biometrics is a secure way of authenticating access. But in fact, the security of biometrics (especially the face) is very questionable [11]. This is caused by two main reasons:
a. Biometrics is not a secret: anyone, including an attacker, can know exactly the biometric features of the targeted user.
b. Biometrics is not recoverable: one cannot change his face in case it becomes compromised.

3.11 Privacy

The use of recognition-based systems has raised many concerns about possible privacy violations. For example, the American Civil Liberties Union (ACLU) opposes the use of face recognition software at airports due to ineffectiveness and privacy concerns [9][10]. The database of a biometric system holds irrefutable proof of one's identity, and there are no regulations or guarantees on how this information might be used or what it could be used for. These privacy issues mostly result in the reluctance of users to use biometric systems [11]. On the other hand, Roethenbaugh (2005) argues that this is not true. He proposed that biometrics is a privacy protection tool rather than an intrusion on civil rights, achieved by managing data protection and encryption alongside the biometric system.

3.12 System Requirements

Analyzing user requirements and needs is a vital task in any system development process. End users must be the main concern of the system designer in order to produce a valid, useful and user-satisfying system. This section examines and analyzes the requirements and needs of the possible different system end users.

3.12.1 Student Requirements

The student needs to keep track of his attendance. This requires him to log in to the system using his ID and password. The system will accept him if his ID and password match the ones saved in the database, and a page will appear according to the student's privileges, which are viewing his progress, course and attendance reports. If the information entered does not match the database, an error page will appear and the student will be asked to enter the ID and password again. In the event of forgetting the password, the system will display a message asking the student to go to the administration department to acquire a new password or receive it on his personal email.

3.12.2 Teaching Staff Requirements

The teaching staff needs an efficient and reliable automated system for recording the students' attendance during lectures, sections, labs and exams. This system should be able to calculate and process the performance of students according to their attendance rates. The teaching staff needs to keep track of their courses and the students' attendance in these courses. This requires them to enter their ID and password, after which the system will accept them if the ID and password match the ones saved in the database, and a page will appear according to their privileges, which are viewing their students' progress, course and attendance reports. If the information entered does not match the database, an error page will appear and they will be asked to enter the ID and password again.

In the event of forgetting the password, the system will display a message asking them to go to the administration department to acquire a new password or receive it on their personal email.

3.12.3 Administrator Requirements

The administrator should be able to enter all the users' (students, doctors and teaching assistants) information and create IDs and passwords for them to access the system. The administrator assigns doctors and teaching assistants to courses when adding new doctors to the system, and is responsible for providing a new password in the event of users forgetting their login details.
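The login flow shared by the requirements above could be sketched as follows; the user IDs, roles and record layout are assumptions made for illustration, and a real deployment would query the database and store salted password hashes rather than plain text:

```python
# Sketch of the credential check described in the requirements: look up
# the ID, compare the password hash, and return the user's role (which
# determines the privilege page shown) or None (caller shows the error page).
import hashlib

USERS = {  # stands in for the system's user table (illustrative records)
    "2K9-CSE-129": {"role": "student", "pw_hash": hashlib.sha256(b"secret1").hexdigest()},
    "T-001":       {"role": "teacher", "pw_hash": hashlib.sha256(b"secret2").hexdigest()},
}

def login(user_id, password):
    """Return the user's role on success, or None on failure."""
    record = USERS.get(user_id)
    if record is None:
        return None
    if hashlib.sha256(password.encode()).hexdigest() != record["pw_hash"]:
        return None
    return record["role"]

print(login("2K9-CSE-129", "secret1"))  # student
print(login("2K9-CSE-129", "wrong"))   # None
```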

3.13 System Development Requirements
The system needs some hardware and software to achieve the best results.

3.13.1 Hardware
1. A surveillance camera
2. University server
3. A desktop or laptop computer for the system users
4. A desktop or laptop computer to run the program in the room, with the following specifications:
   - Intel Core 2 Duo processor, 2.4 GHz or higher, 2 MB cache
   - 3 GB DDR2 RAM or higher
   - 250 GB storage space

3.13.2 Software
1. Operating system (Microsoft Windows 7)
2. Web server (Apache)
3. Database (SQL Server / MySQL)
4. Web browser
5. Visual Studio 2005
6. Video for Windows (VFW)
7. Camera driver

CHAPTER 4 SYSTEM ANALYSIS AND DESIGN

4.1 Logical Data Flow Diagram (DFD)
The data flow diagram is a diagram that shows how the data manipulated by a system flows through the various processes. It provides no information about the timing or ordering of processes, or about whether processes will operate in sequence or in parallel. It is therefore quite different from a flowchart, which shows the flow of control through an algorithm, allowing a reader to determine what operations will be performed, in what order, and under what circumstances, but not what kinds of data will be input to and output from the system, nor where the data will come from and go to, nor where the data will be stored (all of which are shown on a DFD). A DFD consists of four main symbols, which are:
1. External entities
2. Data stores
3. Processes
4. Data flow lines

The system will be illustrated on three different levels of DFD. The context diagram (level 0) (Figure 4.1) consists of all the external entities, one main process only, and the data flow between them; it has no further details. The next level of DFD, the high-level diagram (level 1) (Figure 4.2), shows the data stores and breaks the single main process down into the major high-level processes performed by the system and the data flow between them. The third level of DFD, the low-level diagram (level 2) (Figure 4.3), has the purpose of breaking the main processes down into even smaller ones (child processes) to make their work clearer to the reader. Together the child processes wholly and completely describe the parent process, and combined must perform its full capacity. This decomposition of the parent process is called explosion of the process.

4.2 System with proposed automated attendance module
In the figures below is the proposed system data flow that will be operating on automated facial recognition attendance recording methods.

4.3 Context Diagram (Level 0)

Figure 4.1 (Context Diagram- Level 0)

4.4 High-Level Diagram (Level 1)

Figure 4.2 (High-Level Diagram - Level 1)

4.5 Low-Level Diagram (Level 2)
This is the level where we explode, or break down, the processes into smaller ones.

We will only examine the segmentation and the face detection processes, because these are the processes where the automation and most of the project work will take place.

Figure 4.3 (Low Level Diagram - Level 2 (Process 2))

Figure 4.4 (Low Level Diagram - Level 2 (Process 3))

4.6 Data Dictionary
Database users and application developers can benefit from an authoritative data dictionary document that catalogs the organization, contents, and conventions of one or more databases. This typically includes the names and descriptions of the various tables and fields in each database, plus additional details, like the type and length of each data element.
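For illustration, one data-dictionary entry might be recorded and rendered like this; the table name, field names and lengths are assumptions, not the project's actual schema:

```python
# One illustrative data-dictionary entry: the table, its description, and
# the name, type, length and meaning of each field.
STUDENT_TABLE = {
    "table": "Student",
    "description": "One row per enrolled student",
    "fields": [
        {"name": "student_id", "type": "CHAR",    "length": 12, "desc": "Primary key, e.g. 2K9-CSE-129"},
        {"name": "full_name",  "type": "VARCHAR", "length": 60, "desc": "Student's full name"},
        {"name": "email",      "type": "VARCHAR", "length": 80, "desc": "Address used for password recovery"},
    ],
}

def describe(entry):
    """Render a data-dictionary card as plain text."""
    lines = [f"{entry['table']}: {entry['description']}"]
    for f in entry["fields"]:
        lines.append(f"  {f['name']}  {f['type']}({f['length']})  -- {f['desc']}")
    return "\n".join(lines)

print(describe(STUDENT_TABLE))
```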

4.7 Data Dictionary Cards
These are cards that will be preserved with the documentation of the system for any inquiry by developers, analysts and programmers, as a manual to guide them through the data flow and database design, such as the entities and what they represent in the system.

4.8 Flow chart of the complete project

CHAPTER 5: HARDWARE AND SOFTWARE
5.1 MAX232
The MAX232 is an integrated circuit that converts signals from an RS-232 serial port to signals suitable for use in TTL-compatible digital logic circuits. The MAX232 is a dual driver/receiver and typically converts the RX, TX, CTS and RTS signals. Since RS-232 voltage levels are not compatible with today's microprocessors and microcontrollers, we need a line driver (voltage converter) to convert the RS-232 signals to TTL voltage levels that will be acceptable to the controller's TxD and RxD pins. One such converter is the MAX232 from Maxim Corp. The MAX232 converts from RS-232 voltage levels to TTL voltage levels and vice versa.

Figure 5.1 Max 232

Features of MAX232
- Operates from a single 5-V power supply with 1.0-µF charge-pump capacitors
- Operates up to 120 kbit/s
- Two drivers and two receivers
- 3-state driver and receiver outputs
- ±30-V input levels
- Low-power receive mode in shutdown
- Low supply current: 8 mA typical
- Meets all EIA/TIA-232E and V.28 specifications
- Multiple drivers and receivers

Applications
TIA/EIA-232-F, Battery-Powered Systems, Terminals, Modems, and Computers

5.4 Functional Description of MAX232
Maxim's MAX232 is one of those wonderful components that solves so many signal conversions. This chip converts RS-232 signal voltage levels to TTL voltage levels and vice versa; hence, if you need to communicate with a PC through its serial port (COM1 or COM2), this is the chip that can perform that function. If you have a microcontroller circuit, or a phone, or a calculator that requires a PC connection, then this is the chip needed to make that communication happen. The RS-232 serial port protocol (V.24) uses -15 V to represent binary 1 and +15 V to represent binary 0. For TTL communication this is incompatible, since TTL uses 0 V to represent binary 0 and +5 V to represent binary 1. The MAX232 chip converts serial signal voltage levels to TTL levels, and vice versa. Two of the support capacitors serve the internal voltage inverter that creates the negative voltage level for serial communication, and the other two serve the voltage doubler that raises the TTL-side 5 V supply to the RS-232 drive level. As can be seen, there are two drivers and two receivers in the MAX232 package. This can make the chip look more complicated than it really is, but for most applications only one driver and one receiver are used; in this design pins 7, 8, 9 and 10 are used, and the other driver/receiver pair is left as a spare.
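The level conversion the MAX232 performs can be stated as a pure mapping, sketched below; this is an illustration of the logic only (the chip does this in analog hardware), using the standard RS-232 convention that -3 V to -15 V is logic 1 and +3 V to +15 V is logic 0:

```python
# Sketch of the inverted level mapping a MAX232 implements between the
# RS-232 line and the TTL-side UART pins.
def rs232_to_ttl_bit(volts):
    """Decode an RS-232 line voltage into the logic bit the UART sees."""
    if volts <= -3.0:
        return 1          # negative RS-232 voltage = logic 1 (mark)
    if volts >= 3.0:
        return 0          # positive RS-232 voltage = logic 0 (space)
    raise ValueError("undefined region between -3 V and +3 V")

def ttl_bit_to_rs232(bit):
    """Encode a TTL logic bit as a nominal RS-232 drive voltage."""
    return -9.0 if bit == 1 else 9.0

print(rs232_to_ttl_bit(-12.0))  # 1
print(ttl_bit_to_rs232(0))      # 9.0
```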

5.5 GSM

GSM (Global System for Mobile Communications; originally Groupe Spécial Mobile) is a standard set developed by the European Telecommunications Standards Institute (ETSI) to describe protocols for second-generation (2G) digital cellular networks used by mobile phones. It became the de facto global standard for mobile communications, with over 80% market share.

5.6 GSM/GPRS Module M10
The M10 is a complete quad-band GSM/GPRS solution in an SMD package which can be embedded in customer applications, offering high reliability and robustness. Featuring an industry-standard interface, the M10 delivers GSM/GPRS 850/900/1800/1900 MHz performance for voice, SMS, data and fax in a small form factor and with extremely low power consumption. With a tiny configuration of 29 mm x 29 mm x 3.6 mm, the M10 can fit into almost all M2M applications, including VTS, smart metering, wireless POS, security, telemetry and other mobile data communication systems.

- Quad-band GSM/GPRS module with a size of 29 mm x 29 mm x 3.6 mm
- SMD package suited to customer applications
- Embedded powerful Internet service protocols
- Based on a mature and field-proven platform, backed up by support service from definition to design and production

Product features:
- Quad-band: 850/900/1800/1900 MHz
- GPRS multi-slot: class 12/10/8
- GPRS mobile station: class B
- Compliant to GSM: class 4 (2 W @ 850/900 MHz)

5.7 SIM 900 GSM MODULE
Designed for the global market, the SIM900 is a quad-band GSM/GPRS engine that works on the frequencies GSM 850 MHz, EGSM 900 MHz, DCS 1800 MHz and PCS 1900 MHz. The SIM900 features GPRS multi-slot class 10 / class 8 (optional) and supports the GPRS coding schemes CS-1, CS-2, CS-3 and CS-4. With a tiny configuration of 24 mm x 24 mm x 3 mm, the SIM900 can meet almost all the space requirements in your applications, such as M2M, smart phone, PDA and other mobile devices. The physical interface to the mobile application is a 68-pin SMT pad, which provides all hardware interfaces between the module and the customer's board. The SIM900 is integrated with the TCP/IP protocol; extended TCP/IP AT commands are provided so that customers can use the TCP/IP protocol easily, which is very useful for data transfer applications.
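Modules like the M10 and SIM900 are driven from the host over a serial link with the standard GSM text-mode AT commands (AT+CMGF, AT+CMGS). The sketch below only builds the byte sequence a host program would write to the serial port; the phone number is a placeholder, and the serial-port handling itself is omitted:

```python
# Sketch of the standard AT-command dialogue for sending one SMS in text
# mode. Each item is one write; between writes the host would wait for the
# modem's response (OK, or the '>' prompt after AT+CMGS).
CTRL_Z = b"\x1a"  # terminates the message body and triggers the send

def sms_command_sequence(number, text):
    """Return the ordered list of writes for sending `text` to `number`."""
    return [
        b"AT\r",                           # sanity check, expect OK
        b"AT+CMGF=1\r",                    # select SMS text mode
        f'AT+CMGS="{number}"\r'.encode(),  # recipient, expect '>' prompt
        text.encode() + CTRL_Z,            # body, then Ctrl+Z to send
    ]

for write in sms_command_sequence("+923001234567", "Attendance recorded"):
    print(write)
```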

5.8 SOFTWARE USED
- Operating system (Microsoft Windows 7 / XP)
- Visual Studio (VB6 language)
- Database (SQL Server / MySQL)
- Web cam drivers

CHAPTER 6 IMPLEMENTATION AND RESULTS

6.1 Introduction
In this project, different techniques have been studied, such as color-based detection and Principal Component Analysis (PCA) for face detection, and PCA and Linear Discriminant Analysis (LDA) for feature extraction. For detection, the color-based technique was implemented, which depends on detecting human skin color, with all its different variations, in the image. The skin area of the image is then segmented and passed to the recognition process. For recognition, the PCA technique has been implemented, which is a statistical approach that deals with pure mathematical matrices rather than image processing like the color-based technique used for detection. PCA can also be used for detection.
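A color-based skin test of this kind can be sketched with the widely used RGB rule of Peer et al.; the thresholds below are that rule's published defaults, not necessarily the ones tuned for this project:

```python
# Sketch of per-pixel skin-color detection in RGB space. A pixel is
# labelled skin when it is bright enough, sufficiently non-gray, and
# red-dominant -- the classic Peer et al. rule.
def is_skin(r, g, b):
    """Classify one RGB pixel (0-255 channels) as skin / not skin."""
    return (r > 95 and g > 40 and b > 20
            and max(r, g, b) - min(r, g, b) > 15
            and abs(r - g) > 15 and r > g and r > b)

def skin_mask(image):
    """Binary mask for a row-major list of rows of (r, g, b) pixels."""
    return [[1 if is_skin(*px) else 0 for px in row] for row in image]

row = [(220, 170, 140), (40, 60, 200)]   # a skin-like pixel and a blue pixel
print(skin_mask([row]))  # [[1, 0]]
```

The resulting mask marks the candidate skin region that is then segmented and passed on to recognition.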

6.2 Face Recognition
Face recognition is a pattern recognition task performed specifically on faces. It can be described as classifying a face as either "known" or "unknown", after comparing it with stored known individuals. It is also desirable to have a system that has the ability to learn to recognize unknown faces. Computational models of face recognition must address several difficult problems. This difficulty arises from the fact that faces must be represented in a way that best utilizes the available face information to distinguish a particular face from all other faces. Faces pose a particularly difficult problem in this respect because all faces are similar to one another: they contain the same set of features, such as the eyes, nose and mouth, arranged in roughly the same manner.

6.3 Background
Much of the work in computer recognition of faces has focused on detecting individual features such as the eyes, nose, mouth, and head outline, and defining a face model by the position, size, and relationships among these features. Such approaches have proven difficult to extend to multiple views and have often been quite fragile, requiring a good initial guess to guide them. Research in human strategies of face recognition, moreover, has shown that individual features and their immediate relationships comprise an insufficient representation to account for the performance of adult human face identification. [15] Nonetheless, this approach to face recognition remains the most popular one in the computer vision literature. Bledsoe [16, 17] was the first to attempt semi-automated face recognition with a hybrid human-computer system that classified faces on the basis of fiducial marks entered on photographs by hand. Parameters for the classification were normalized distances and ratios among points such as eye corners, mouth corners, nose tip, and chin point.

Later work at Bell Labs developed a vector of up to 21 features, and recognized faces using standard pattern classification techniques. Fischler and Elschlager attempted to measure similar features automatically. They described a linear embedding algorithm that used local feature template matching and a global measure of fit to find and measure facial features. This template matching approach has been continued and improved by the recent work of Yuille and Cohen. [18] Their strategy is based on deformable templates, which are parameterized models of the face and its features in which the parameter values are determined by interactions with the face image. Connectionist approaches to face identification seek to capture the configurational nature of the task. Kohonen, and Kohonen and Lehtio, describe an associative network with a simple learning algorithm that can recognize face images and recall a face image from an incomplete or noisy version input to the network.

Fleming and Cottrell extend these ideas using nonlinear units, training the system by back-propagation. Others have approached automated face recognition by characterizing a face by a set of geometric parameters and performing pattern recognition based on the parameters. Kanade's face identification system was the first system in which all steps of the recognition process were automated, using a top-down control strategy directed by a generic model of expected feature characteristics. His system calculated a set of facial parameters from a single face image and used a pattern classification technique to match the face from a known set, a purely statistical approach depending primarily on local histogram analysis and absolute gray-scale values. Recent work by Burt uses a smart sensing approach based on multi-resolution template matching. This coarse-to-fine strategy uses a special-purpose computer built to calculate multi-resolution pyramid images quickly, and has been demonstrated identifying people in near real time.

6.3.1 Outline of a typical Face Recognition System
In Figure 6.1, the outline of a typical face recognition system is given. This outline heavily carries the characteristics of a typical pattern recognition system.

Fig 6.1 Outline of a typical face recognition system

There are six main functional blocks, whose responsibilities are given below:

The acquisition module. This is the entry point of the face recognition process. It is the module where the face image under consideration is presented to the system. In other words, the user is asked to present a face image to the face recognition system in this module. An acquisition module can request a face image from several different environments: the face image can be an image file that is located on a magnetic disk, it can be captured by a frame grabber, or it can be scanned from paper with the help of a scanner.

The pre-processing module. In this module, by means of early vision techniques, face images are normalized and, if desired, enhanced to improve the recognition performance of the system. Some or all of the following pre-processing steps may be implemented in a face recognition system:

Image size normalization. This is usually done to change the acquired image size to a default image size, such as 128 x 128, on which the face recognition system operates.

This is mostly encountered in systems where face images are treated as a whole, like the one proposed in this thesis.

Histogram equalization. This is usually done on too-dark or too-bright images in order to enhance image quality and to improve face recognition performance. It modifies the dynamic range (contrast range) of the image and, as a result, some important facial features become more apparent.

Median filtering. For noisy images, especially those obtained from a camera or a frame grabber, median filtering can clean the image without losing information.

High-pass filtering. Feature extractors that are based on facial outlines may benefit from the results of an edge detection scheme. High-pass filtering emphasizes the details of an image, such as contours, which can dramatically improve edge detection performance.

Background removal. In order to deal primarily with the facial information itself, the face background can be removed. This is especially important for face recognition systems where the entire information contained in the image is used.
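One of the pre-processing steps above, histogram equalization, can be sketched in pure Python for an 8-bit grayscale image; this is a minimal illustration of the standard method, not the project's implementation:

```python
# Histogram equalization: map each gray level through the normalized
# cumulative histogram so the output levels spread over the full 0..255
# dynamic range.
def equalize(pixels):
    """Equalize a flat list of 8-bit gray values."""
    n = len(pixels)
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    cdf, total = [0] * 256, 0
    for level in range(256):
        total += hist[level]
        cdf[level] = total
    cdf_min = next(c for c in cdf if c > 0)   # first non-zero CDF value
    return [round((cdf[p] - cdf_min) * 255 / (n - cdf_min)) for p in pixels]

dark = [10, 10, 12, 12, 14, 14, 16, 16]   # a low-contrast "image"
print(equalize(dark))  # [0, 0, 85, 85, 170, 170, 255, 255]
```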

It is obvious that, for background removal, the preprocessing module should be capable of determining the face outline.

Translational and rotational normalizations. In some cases, it is possible to work on a face image in which the head is somehow shifted or rotated. The head plays the key role in the determination of facial features. Especially for face recognition systems that are based on the frontal views of faces, it may be desirable that the preprocessing module determines and normalizes these shifts and rotations.

Illumination normalization. Face images taken under different illuminations can degrade recognition performance, especially for face recognition systems based on principal component analysis, in which the entire face information is used for recognition. A picture can be equivalently viewed as an array of reflectivities r(x). Thus, under a uniform illumination I, the corresponding picture is given by:

p(x) = I r(x)
The normalization comes in imposing a fixed level of illumination I0 at a reference point x0 on a picture. The normalized picture is given by:

p'(x) = (I0 / p(x0)) p(x)
In actual practice, the average of two reference points, such as one under each eye, each consisting of a 2 x 2 array of pixels, can be used.

The feature extraction module. After performing some pre-processing (if necessary), the normalized face image is presented to the feature extraction module in order to find the key features that are going to be used for classification. In other words, this module is responsible for composing a feature vector that represents the face image well enough.

The classification module. In this module, with the help of a pattern classifier, the extracted features of the face image are compared with the ones stored in a face library (or face database). After this comparison, the face image is classified as either known or unknown.

Training set. Training sets are used during the "learning phase" of the face recognition process. The feature extraction and classification modules adjust their parameters in order to achieve optimum recognition performance by making use of training sets.

Face library or face database. After being classified as "unknown", face images can be added to a library (or to a database) with their feature vectors for later comparisons. The classification module makes direct use of the face library.
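The classification module's known/unknown decision can be sketched as nearest-neighbour matching against the face library with a rejection threshold; the feature values and the threshold below are illustrative assumptions, not values from the project:

```python
# Sketch of the classification module: compare a feature vector against
# every entry in the face library; if even the best match is worse than
# the threshold, report "unknown".
import math

def classify_face(features, library, threshold=10.0):
    """Return the best-matching identity from `library`, or 'unknown'."""
    best_name, best_dist = "unknown", threshold
    for name, stored in library.items():
        d = math.dist(features, stored)   # Euclidean distance (Python 3.8+)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name

library = {"ramsha": (1.0, 2.0, 3.0), "zaira": (8.0, 8.0, 8.0)}
print(classify_face((1.2, 2.1, 2.9), library))     # ramsha
print(classify_face((50.0, 50.0, 50.0), library))  # unknown
```

Faces rejected as "unknown" are the ones that may then be added to the face library with their feature vectors.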

6.3.2 Problems that may occur during Face Recognition
Due to the dynamic nature of face images, a face recognition system encounters various problems during the recognition process. It is possible to classify a face recognition system as either "robust" or "weak" based on its recognition performance under these circumstances. The objectives of a robust face recognition system are given below:

Scale invariance. The same face can be presented to the system at different scales as shown in Figure 6.2-b. This may happen due to the focal distance between the face and the camera. As this distance gets closer, the face image gets bigger.

Shift invariance. The same face can be presented to the system at different perspectives and orientations as shown in Figure 6.2-c. For instance, face images of the same person could be taken from frontal and profile views. Besides, head orientation may change due to translations and rotations.

Illumination invariance. Face images of the same person can be taken under different illumination conditions; for instance, the position and the strength of the light source can be modified, like the ones shown in Figure 6.2-d.

Emotional expression and detail invariance. Face images of the same person can differ in expressions when smiling or laughing. Also, like the ones shown in Figure 6.2-e, some details such as dark glasses, beards or moustaches can be present.

Noise invariance. A robust face recognition system should be insensitive to noise generated by frame grabbers or cameras. Also, it should function under partially occluded images. A robust face recognition system should be capable of classifying a face image as "known" even under the above conditions, if it has already been stored in the face database.

6.4 Feature Based Face Recognition
It was mentioned before that there are two basic approaches to the face recognition problem: feature-based face recognition and principal component analysis methods. Although feature-based face recognition can be divided into two different categories, based on frontal views and profile silhouettes, they share some common properties and we will treat them as a whole. In this section, the basic principles of feature-based face recognition from frontal views are presented.

6.4.1 Introduction

The first step of human face identification is to extract the features from facial images. In the area of feature selection, the question has been addressed in studies of cue salience, in which discrete features such as the eyes, mouth, chin and nose have been found to be important cues for discrimination and recognition of faces.

After knowing what the effective features are for face recognition, some methods should be utilized to get the contours of the eyes, eyebrows, mouth, nose, and face. For different facial contours, different models should be used to extract them from the original portrait. Because the shapes of the eyes and mouth are similar to some geometric figures, as shown in Figure 6.3, they can be extracted in terms of the deformable template model. The other facial features, such as the eyebrows, nose and face, are so variable that they have to be extracted by the active contour model. These two models are illustrated in the following:

Fig 6.3 Geometric figures for Feature based recognition

Deformable template model. The deformable templates are specified by a set of parameters which uses a priori knowledge about the expected shape of the features to guide the contour deformation process. The templates are flexible enough to change their size and other parameter values so as to match themselves to the data. The final values of these parameters can be used to describe the features. This method works well regardless of variations in the scale, tilt, and rotation of the head. Variations of the parameters should allow the template to fit any normal instance of the feature. The deformable templates interact with the image in a dynamic manner. An energy function is defined which contains terms attracting the template to salient features such as peaks and valleys in the image intensity, edges, and the intensity itself. The minima of the energy function correspond to the best fit with the image. The parameters of the template are then updated by steepest descent.

Active contour model (snake). The active contour, or snake, is an energy-minimizing spline guided by external constraint forces and influenced by image forces that pull it toward features such as lines and edges. Snakes lock onto nearby edges, localizing them accurately. Because the snake is an energy-minimizing spline, energy functions whose local minima comprise the set of alternative solutions to higher-level processes should be designed. Selection of an answer from this set is accomplished by the addition of energy terms that push the model toward the desired solution.

The result is an active model that falls into the desired solution when placed near it. In the active contour model issues such as the connectivity of the contours and the presence of corners affect the energy function and hence the detailed structure of the locally optimal contour. These issues can be resolved by very high-level computations.
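Both models ultimately minimize an energy by steepest descent on the template or contour parameters. A one-parameter toy energy makes the update rule concrete (purely illustrative, not a real template energy):

```python
# Toy steepest-descent energy minimization: a single template parameter t
# is pulled toward the minimum of the energy E(t) = (t - 3)^2 by stepping
# against the (numerically estimated) gradient.
def steepest_descent(t, step=0.1, iterations=100):
    """Iteratively update t against the gradient of the energy."""
    def energy(u):
        return (u - 3.0) ** 2
    h = 1e-6   # finite-difference half-width for the gradient estimate
    for _ in range(iterations):
        grad = (energy(t + h) - energy(t - h)) / (2 * h)
        t -= step * grad
    return t

print(round(steepest_descent(0.0), 3))  # converges to 3.0, the energy minimum
```

In the real models the parameter vector describes the template shape or the snake control points, and the energy combines image terms (edges, intensity peaks and valleys) with internal smoothness terms, but the descent step is the same idea.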

6.4.2 Effective Feature Selection

Before mentioning the facial feature extraction procedures, there are two considerations to be taken into account: the picture-taking environment must be fixed in order to get a good snapshot, and the effective features that can be used to identify a face efficiently should be known. Despite the marked similarity of faces as spatial patterns, we are able to differentiate and remember a potentially unlimited number of faces. With sufficient familiarity, the faces of any two persons can be discriminated. The skill depends on the ability to extract invariant structural information from the transient situation of a face, such as changing hairstyles, emotional expression, and facial motion effects. Features are the basic elements for object recognition. Therefore, to identify a face, we need to know what features are used effectively in the face recognition process. Because the variance of each feature associated with the face recognition process is relatively large, the features are classified into three major types:

First-order feature values. Discrete features such as the eyes, eyebrows, mouth, chin, and nose, which have been found to be important in face identification and are specified without reference to other facial features, are called first-order features. Important first-order features are given in Table 6.1.

Second-order feature values. A configural set of features that characterize the spatial relationships between the positions of the first-order features, together with information about the shape of the face, are called second-order features. Important second-order features are given in Table 6.2; the second-order features related to the nose, where the nose is noticeable, are given in Table 6.3.

Higher-order feature values. There are also higher-level features whose values depend on a complex set of feature values. For instance, age might be a function of hair coverage, hair color, skin tension, presence of wrinkles and age spots, forehead height (which changes with a receding hairline), and so on. Variability such as emotional expression or skin tension exists in the higher-order features, and their complexity, as a function of the first- and second-order features, is very difficult to predict. Permanent information belonging to the higher-order features cannot be obtained simply from the first- and second-order features. For a robust face recognition system, features that are invariant to changes in the picture-taking environment should be used; thus, the feature set should contain only first-order and second-order features.

These effective feature values cover almost all of the information obtainable from the portrait and are sufficient for the face recognition process. The second-order feature values are more important than the first-order ones and dominate the feature vector. Before describing the facial feature extraction process, two preprocessing steps must be dealt with:
1. Threshold assignment. The brightness threshold must be known in order to discriminate a feature from other areas of the face. Generally, different thresholds are used for the eyebrows, eyes, mouth, nose, and face, according to the brightness of the picture.
2. Rough Contour Estimation Routine (RCER). The left eyebrow is the first feature to be extracted.

The first step is to estimate the rough contour of the left eyebrow and find the contour points. Generally, the position of the left eyebrow is about one-fourth of the facial width. Having this a priori information, the coarse position of the left eyebrow can be found and its rough contour can be captured. Once the rough contour of the left eyebrow is established, the rough contours of other facial features such as left eye, right eyebrow, mouth or nose can be estimated by RCER. After the rough contour is obtained, its precise contour will be extracted by the deformable template model or the active contour model.
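The a-priori placement step can be sketched as follows. Only the one-fourth-of-the-facial-width prior comes from the text; the vertical fraction and the window size below are our own illustrative assumptions:

```python
# Sketch of RCER's a-priori placement: given a face bounding box, start the
# left-eyebrow search at about one quarter of the facial width (from the text).
# The vertical prior (0.3 of face height) and window size are assumptions.
def rough_eyebrow_window(face_x, face_y, face_w, face_h):
    """Return (x, y, w, h) of a coarse search window for the left eyebrow."""
    cx = face_x + face_w // 4          # horizontal prior stated in the text
    cy = face_y + int(0.3 * face_h)    # assumed vertical prior
    w, h = face_w // 4, face_h // 8    # assumed window size
    return (cx - w // 2, cy - h // 2, w, h)

print(rough_eyebrow_window(0, 0, 200, 240))  # -> (25, 57, 50, 30)
```

The precise contour inside this window would then be extracted by the deformable template model or the active contour model, as the text describes.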

6.5 Color-based Technique

6.5.1 Description
In this technique the image is converted into a new color space, HCbCr, which combines the Hue component of the HSI color space with the Cb and Cr components calculated by the formulas stated in Equation 6.1.

Cb = -(0.148 * r) - (0.291 * g) + (0.439 * b) + 128
Cr = (0.439 * r) - (0.368 * g) - (0.071 * b) + 128        (Equation 6.1)
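A direct implementation of these chroma formulas (the standard BT.601 studio-range form, in which the R term of Cb carries a leading minus that is easy to lose in transcription) might look like:

```python
# Per-pixel chroma conversion per Equation 6.1 (BT.601 studio-range form).
def rgb_to_cbcr(r, g, b):
    cb = -0.148 * r - 0.291 * g + 0.439 * b + 128
    cr = 0.439 * r - 0.368 * g - 0.071 * b + 128
    return cb, cr

print(rgb_to_cbcr(255, 0, 0))  # pure red: Cb below 128, Cr well above 128
```

Both components are offset by 128 so that neutral gray maps to the centre of the 8-bit range.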

The new image is scanned pixel by pixel; if a pixel satisfies the condition stated in Equation 6.2, which describes the range of human skin, its value is set to white, otherwise it is set to black.

If (h(x,y) >= 0.01) && (h(x,y) <= ...) && (cr(x,y) >= 140) && (cr(x,y) <= ...) && (cb(x,y) >= 140) && (cb(x,y) <= ...)        (Equation 6.2)
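The pixel scan can be sketched as below. The lower bounds (h >= 0.01, Cr >= 140, Cb >= 140) come from the text; the upper bounds used here are our own illustrative assumptions, since published skin-range limits vary between sources:

```python
# Sketch of the Equation 6.2 pixel scan. Upper bounds are assumed values.
import numpy as np

def skin_mask(h, cb, cr,
              h_lo=0.01, h_hi=0.1,                     # hue range (upper assumed)
              c_lo=140.0, cb_hi=195.0, cr_hi=165.0):   # chroma (uppers assumed)
    """h, cb, cr are same-shape arrays; returns a white/black (255/0) mask."""
    skin = ((h >= h_lo) & (h <= h_hi) &
            (cr >= c_lo) & (cr <= cr_hi) &
            (cb >= c_lo) & (cb <= cb_hi))
    return np.where(skin, 255, 0).astype(np.uint8)

h  = np.array([[0.05, 0.5]])
cb = np.array([[150.0, 150.0]])
cr = np.array([[150.0, 150.0]])
print(skin_mask(h, cb, cr))  # first pixel passes every test, second fails on hue
```

Vectorizing the test over whole arrays, as here, avoids an explicit per-pixel loop while implementing the same white-or-black decision.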

