Facial recognition system

A face recognition system is a computer application capable of identifying or verifying a person from a digital image or a video frame from a video source. One of the ways to do this is by comparing selected facial features from the image and a face database.

It is typically used in security systems and can be compared to other biometrics such as fingerprint or eye iris recognition systems. Recently, it has also become popular as a commercial identification and marketing tool.

Traditional

Some face recognition algorithms identify facial features by extracting landmarks, or features, from an image of the subject's face. For example, an algorithm may analyze the relative position, size, and/or shape of the eyes, nose, cheekbones, and jaw. These features are then used to search for other images with matching features. Other algorithms normalize a gallery of face images and then compress the face data, only saving the data in the image that is useful for face recognition. A probe image is then compared with the face data. One of the earliest successful systems is based on template matching techniques applied to a set of salient facial features, providing a sort of compressed face representation.

Recognition algorithms can be divided into two main approaches: geometric, which looks at distinguishing features, and photometric, which is a statistical approach that distills an image into values and compares those values with templates to eliminate variances.

Popular recognition algorithms include principal component analysis using eigenfaces, linear discriminant analysis, elastic bunch graph matching using the Fisherface algorithm, the hidden Markov model, multilinear subspace learning using tensor representation, and the neuronally motivated dynamic link matching.
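
For instance, the eigenface approach applies principal component analysis to a gallery of face images and matches a probe by its projection onto the leading components. The following is a minimal sketch, assuming the images are already aligned, cropped to a common size, and flattened into rows of a NumPy array; the names and the number of components are illustrative.

```python
# Minimal eigenfaces sketch (principal component analysis over face images).
# Assumes a gallery of grayscale faces, aligned and flattened to equal length.
import numpy as np

def train_eigenfaces(gallery, num_components=50):
    """gallery: array of shape (n_images, n_pixels), one flattened face per row."""
    mean_face = gallery.mean(axis=0)
    centered = gallery - mean_face
    # The right singular vectors of the centered gallery are the eigenfaces.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = vt[:num_components]       # each row is one eigenface
    weights = centered @ eigenfaces.T      # gallery projected into "face space"
    return mean_face, eigenfaces, weights

def identify(probe, mean_face, eigenfaces, weights):
    """Return the index of the gallery face whose projection lies closest to the probe."""
    probe_weights = (probe - mean_face) @ eigenfaces.T
    distances = np.linalg.norm(weights - probe_weights, axis=1)
    return int(np.argmin(distances))
```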

3-dimensional recognition

A newly emerging trend, claimed to achieve improved accuracy, is three-dimensional face recognition. This technique uses 3D sensors to capture information about the shape of a face. This information is then used to identify distinctive features on the surface of a face, such as the contour of the eye sockets, nose, and chin.

One advantage of 3D face recognition is that it is not affected by changes in lighting like other techniques. It can also identify a face from a range of viewing angles, including a profile view. Three-dimensional data points from a face vastly improve the precision of face recognition. 3D research is enhanced by the development of sophisticated sensors that do a better job of capturing 3D face imagery. The sensors work by projecting structured light onto the face. Up to a dozen or more of these image sensors can be placed on the same CMOS chip—each sensor captures a different part of the spectrum.

Even a perfect 3D matching technique could be sensitive to expressions. To address this, a group at the Technion applied tools from metric geometry to treat expressions as isometries. A company called Vision Access created a solution for 3D face recognition. The company was later acquired by the biometric access company Bioscrypt Inc., which developed a version known as 3D FastPass.

A newer method captures a 3D picture using three tracking cameras pointed at different angles: one camera points at the front of the subject, a second at the side, and a third at an angle. The cameras work together to track a subject’s face in real time and to detect and recognize it.

Skin texture analysis

Another emerging trend uses the visual details of the skin, as captured in standard digital or scanned images. This technique, called skin texture analysis, turns the unique lines, patterns, and spots apparent in a person’s skin into a mathematical space.

Tests have shown that with the addition of skin texture analysis, performance in recognizing faces can increase 20 to 25 percent.

Thermal cameras

A different way of acquiring input data for face recognition is to use thermal cameras. With this approach the camera detects only the shape of the head and ignores accessories such as glasses, hats, or makeup. A problem with using thermal images for face recognition is that existing thermal face databases are limited. Diego Socolinsky and Andrea Selinger (2004) researched the use of thermal face recognition in real-life and operational scenarios, and at the same time built a new database of thermal face images. The research used low-sensitivity, low-resolution ferroelectric sensors capable of acquiring long-wave thermal infrared (LWIR) imagery. The results showed that a fusion of LWIR and conventional visual cameras gave the best results for outdoor probes: indoors, visual imagery achieved 97.05% accuracy, LWIR 93.93%, and the fusion 98.40%, whereas outdoors visual achieved 67.06%, LWIR 83.03%, and the fusion 89.02%. The study used 240 subjects over a period of 10 weeks to create the new database, with data collected on sunny, rainy, and cloudy days.
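
The study's own fusion method is not described here; the sketch below only illustrates the general idea of score-level fusion of a visible-light matcher and a thermal (LWIR) matcher. The weights, score scale, and threshold are illustrative assumptions.

```python
# Illustrative score-level fusion of two matchers' similarity scores.
def fuse_scores(visual_score, lwir_score, visual_weight=0.5):
    """Both scores are assumed to be similarity values in [0, 1]."""
    return visual_weight * visual_score + (1.0 - visual_weight) * lwir_score

def verify(visual_score, lwir_score, threshold=0.8):
    """Accept the identity claim if the fused similarity clears the threshold."""
    return fuse_scores(visual_score, lwir_score) >= threshold

# A probe that neither sensor matches confidently on its own can still be
# accepted once the two scores are combined (0.5 * 0.75 + 0.5 * 0.90 = 0.825).
print(verify(visual_score=0.75, lwir_score=0.90))   # True with the defaults
```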

Software

Notable software with face recognition ability includes:

  • digiKam (KDE)
  • iPhoto (Apple)
  • OnTime Pro (ClockedIn)
  • Lightroom (Adobe)
  • OpenCV (Open Source)
  • OpenFace (Open Source)
  • Photos (Apple)
  • Photoshop Elements (Adobe Systems)
  • Google Photos (Google)
  • Picture Motion Browser (Sony)
  • Windows Live Photo Gallery (Microsoft)
  • DeepFace (Facebook)
  • visage SDK (Visage Technologies)
  • Ayonix face SDK (Ayonix 3D face technology)
  • TrueKey (Intel)

Notable users and deployments

The Australian and New Zealand Customs Services have an automated border processing system called SmartGate that uses face recognition. The system compares the face of the individual with the image in the e-passport microchip to verify that the holder of the passport is the rightful owner.

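The comparison such a gate performs is a one-to-one verification rather than a database search. Below is a minimal sketch of that step, assuming some face-feature extractor has already produced fixed-length vectors for the live camera image and the chip photo; the cosine-similarity measure and threshold are illustrative assumptions, not SmartGate's actual method.

```python
# Illustrative 1:1 verification: is the live face close enough to the chip photo?
import numpy as np

def verify_traveller(live_features: np.ndarray,
                     passport_features: np.ndarray,
                     threshold: float = 0.6) -> bool:
    """Return True if the two feature vectors are similar enough to be the same person."""
    # Cosine similarity between the live-image and chip-photo feature vectors.
    similarity = float(np.dot(live_features, passport_features) /
                       (np.linalg.norm(live_features) * np.linalg.norm(passport_features)))
    return similarity >= threshold
```
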
Law enforcement agencies in the United States, including the Los Angeles County Sheriff, use arrest mugshot databases in their forensic investigative work. Law enforcement has been rapidly building a database of photos in recent years.

The U.S. Department of State operates one of the largest face recognition systems in the world, with 117 million American adults, mostly law-abiding citizens, in its database. The photos are typically drawn from driver's license photos. Although it is still far from completion, it is being put to use in certain cities to give clues as to who was in a photo. The FBI uses the photos as an investigative lead, not for positive identification.

In recent years Maryland has used face recognition by comparing people's faces to their driver's license photos. The system drew controversy when it was used in Baltimore to arrest unruly protesters after the death of Freddie Gray in police custody. Many other states are using or developing a similar system; however, some states have laws prohibiting its use.

The FBI has also instituted its Next Generation Identification program to include face recognition, as well as more traditional biometrics like fingerprints and iris scans, which can pull from both criminal and civil databases.

The Tocumen International Airport in Panama operates an airport-wide surveillance system using hundreds of live face recognition cameras to identify wanted individuals passing through the airport.

Major Canadian airports will be using a new facial recognition program as part of the Primary Inspection Kiosk program that will compare people's faces to their passports. This program will first come to Ottawa International Airport in early 2017 and to other airports in 2018.

In 2017, Time & Attendance company ClockedIn released facial recognition as a form of attendance tracking for businesses and organisations looking to have a more automated system of keeping track of hours worked as well as for security and health & safety control.

Additional uses

In addition to being used for security systems, authorities have found a number of other applications for face recognition systems. While earlier post-9/11 deployments were well-publicized trials, more recent deployments are rarely written about due to their covert nature.

At Super Bowl XXXV in January 2001, police in Tampa Bay, Florida used Viisage face recognition software to search for potential criminals and terrorists in attendance at the event. Nineteen people with minor criminal records were potentially identified.

In the 2000 presidential election, the Mexican government employed face recognition software to prevent voter fraud. Some individuals had been registering to vote under several different names, in an attempt to place multiple votes. By comparing new face images to those already in the voter database, authorities were able to reduce duplicate registrations. Similar technologies are being used in the United States to prevent people from obtaining fake identification cards and driver’s licenses.

There are also a number of potential uses for face recognition that are currently being developed. For example, the technology could be used as a security measure at ATMs. Instead of using a bank card or personal identification number, the ATM would capture an image of the customer's face, and compare it to the account holder's photo in the bank database to confirm the customer's identity.

Face recognition systems are used to unlock software on mobile devices. An independently developed Android Marketplace app called Visidon Applock makes use of the phone's built-in camera to take a picture of the user. Face recognition is used to ensure only this person can use certain apps which they choose to secure.

Face detection and face recognition are integrated into the iPhoto application for Macintosh, to help users organize and caption their collections.

In addition to biometric uses, modern digital cameras often incorporate a facial detection system that allows the camera to focus and measure exposure on the face of the subject, thus guaranteeing a focused portrait of the person being photographed. Some cameras also incorporate a smile shutter, or automatically take a second picture if someone blinks during exposure.

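Face detection of this kind (finding faces in an image, not identifying them) can be illustrated with OpenCV, which appears in the software list above. The following is a minimal sketch using OpenCV's bundled Haar cascade; the file names are placeholders.

```python
# Minimal face-detection sketch: locate faces in a still image with OpenCV.
import cv2

image = cv2.imread("portrait.jpg")                  # placeholder input file
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Load the frontal-face Haar cascade that ships with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# Returns a list of (x, y, width, height) boxes, one per detected face.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("portrait_faces.jpg", image)            # placeholder output file
```
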
Because of certain limitations of fingerprint recognition systems, face recognition systems could be used as an alternative way to confirm employee attendance at work for the claimed hours.

Another use could be a portable device to assist people with prosopagnosia in recognizing their acquaintances.

Compared to other technologies

Among the different biometric techniques, face recognition may not be the most reliable and efficient. However, one key advantage is that it does not require the cooperation of the test subject to work. Properly designed systems installed in airports, multiplexes, and other public places can identify individuals in a crowd without passers-by even being aware of the system. Other biometrics like fingerprints, iris scans, and speech recognition cannot perform this kind of mass identification. However, questions have been raised about the effectiveness of face recognition software in cases of railway and airport security.

Weaknesses

Face recognition is far from perfect and struggles to perform under certain conditions. Ralph Gross, a researcher at the Carnegie Mellon Robotics Institute, describes one obstacle related to the viewing angle of the face: "Face recognition has been getting pretty good at full frontal faces and 20 degrees off, but as soon as you go towards profile, there've been problems."

Current face recognition still often misidentifies people, which can sometimes lead to controversy. Google was criticized for racism in its system when a black couple were misidentified as gorillas. Face recognition software generally does not perform as well at identifying minorities when most of the subjects used to test the technology were from the majority group.

Other conditions where face recognition does not work well include poor lighting, sunglasses, hats, scarves, beards, long hair, makeup or other objects partially covering the subject’s face, and low-resolution images.

Another serious disadvantage is that many systems are less effective if facial expressions vary. Even a big smile can render the system less effective; for instance, Canada now allows only neutral facial expressions in passport photos.

There is also inconsistency in the datasets used by researchers. Researchers may use anywhere from several subjects to scores of subjects, and from a few hundred images to thousands of images. It is important for researchers to make the datasets they used available to each other, or at least to use a standard dataset.

Effectiveness

Critics of the technology complain that the London Borough of Newham scheme has, as of 2004, never recognized a single criminal, despite several criminals in the system's database living in the Borough and the system having been running for several years: "Not once, as far as the police know, has Newham's automatic face recognition system spotted a live target." This seems to conflict with claims that the system was credited with a 34% reduction in crime (which is why it was also rolled out to Birmingham). It can, however, be explained by the notion that when the public is regularly told they are under constant video surveillance with advanced face recognition technology, this fear alone can reduce the crime rate, whether or not the face recognition system technically works. This has been the basis for several other face recognition based security systems, where the technology itself does not work particularly well but the users' perception of it does.

An experiment in 2002 by the local police department in Tampa, Florida, had similarly disappointing results.

A system at Boston's Logan Airport was shut down in 2003 after failing to make any matches during a two-year test period.

As of 2016, facial recognition is still not effective for most applications even though accuracy has substantially improved. Although systems are often advertised as having accuracy near 100%, this is misleading, as the studies often use much smaller sample sizes than would be necessary for large-scale applications. Because facial recognition is not completely accurate, it creates a list of potential matches. A human operator must then look through these potential matches, and studies show that operators pick the correct match out of the list only about half the time. This creates the risk of targeting the wrong suspect.

Privacy issues

Civil rights organizations and privacy campaigners such as the Electronic Frontier Foundation and the ACLU express concern that privacy is being compromised by the use of surveillance technologies. Some fear that it could lead to a “total surveillance society,” with the government and other authorities having the ability to know the whereabouts and activities of all citizens around the clock. This knowledge has been, is being, and could continue to be deployed to prevent the lawful exercise of citizens' rights to criticize those in office, specific government policies, or corporate practices. Many centralized power structures with such surveillance capabilities have abused their privileged access to maintain control of the political and economic apparatus and to curtail populist reforms.

Face recognition can be used not just to identify an individual, but also to unearth other personal data associated with an individual – such as other photos featuring the individual, blog posts, social networking profiles, Internet behavior, travel patterns, etc. – all through facial features alone. Moreover, individuals have limited ability to avoid or thwart face recognition tracking unless they hide their faces. This fundamentally changes the dynamic of day-to-day privacy by enabling any marketer, government agency, or random stranger to secretly collect the identities and associated personal information of any individual captured by the face recognition system.

Social media web sites such as Facebook have very large numbers of photographs of people, annotated with names. This represents a database that could be abused by governments for face recognition purposes. Face recognition was used in Russia to harass women allegedly involved in online pornography. In Russia, the app 'FindFace' can identify faces with about 70% accuracy using the social media platform VK. Such an app would not be feasible in countries that do not use VK, since photos on other social media platforms are not stored in the same way as on VK.

In July 2012, a hearing was held before the Subcommittee on Privacy, Technology and the Law of the Committee on the Judiciary, United States Senate, to address issues surrounding what face recognition technology means for privacy and civil liberties.

In 2014, the National Telecommunications and Information Administration (NTIA) began a multi-stakeholder process to engage privacy advocates and industry representatives to establish guidelines regarding the use of face recognition technology by private companies. In June 2015, privacy advocates left the bargaining table over what they felt was an impasse based on the industry representatives being unwilling to agree to consent requirements for the collection of face recognition data. The NTIA and industry representatives continued without the privacy representatives, and draft rules are expected to be presented in the spring of 2016.

States have begun enacting legislation to protect citizens' biometric data privacy. Illinois enacted the Biometric Information Privacy Act in 2008. Facebook's DeepFace has become the subject of several class action lawsuits under the Biometric Information Privacy Act, with claims alleging that Facebook is collecting and storing face recognition data of its users without obtaining informed consent, in direct violation of the Act. The most recent case was dismissed in January 2016 because the court lacked jurisdiction. It therefore remains unclear whether the Biometric Information Privacy Act will be effective in protecting biometric data privacy rights.

History

Pioneers of automated face recognition include Woody Bledsoe, Helen Chan Wolf, and Charles Bisson.

During 1964 and 1965, Bledsoe, along with Helen Chan and Charles Bisson, worked on using the computer to recognize human faces (Bledsoe 1966a, 1966b; Bledsoe and Chan 1965). He was proud of this work, but because the funding was provided by an unnamed intelligence agency that did not allow much publicity, little of the work was published. Given a large database of images (in effect, a book of mug shots) and a photograph, the problem was to select from the database a small set of records such that one of the image records matched the photograph. The success of the method could be measured in terms of the ratio of the answer list to the number of records in the database. Bledsoe (1966a) described the difficulties posed by this problem.

This project was labeled man-machine because the human extracted the coordinates of a set of features from the photographs, which were then used by the computer for recognition. Using a graphics tablet (GRAFACON or RAND TABLET), the operator would extract the coordinates of features such as the centers of the pupils, the inside and outside corners of the eyes, the point of the widow's peak, and so on. From these coordinates, a list of 20 distances, such as the width of the mouth and the width of the eyes, pupil to pupil, was computed. These operators could process about 40 pictures an hour. When building the database, the name of the person in the photograph was associated with the list of computed distances and stored in the computer. In the recognition phase, the set of distances was compared with the corresponding distances for each photograph, yielding a distance between the photograph and the database record. The closest records are returned.

Because it is unlikely that any two pictures would match in head rotation, lean, tilt, and scale (distance from the camera), each set of distances is normalized to represent the face in a frontal orientation. To accomplish this normalization, the program first tries to determine the tilt, the lean, and the rotation. Then, using these angles, the computer undoes the effect of these transformations on the computed distances. To compute these angles, the computer must know the three-dimensional geometry of the head. Because the actual heads were unavailable, Bledsoe (1964) used a standard head derived from measurements on seven heads.

After Bledsoe left PRI in 1966, this work was continued at the Stanford Research Institute, primarily by Peter Hart. In experiments performed on a database of over 2000 photographs, the computer consistently outperformed humans when presented with the same recognition tasks (Bledsoe 1968). Peter Hart (1996) enthusiastically recalled the project with the exclamation, "It really worked!"

By about 1997, the system developed by Christoph von der Malsburg and graduate students of the University of Bochum in Germany and the University of Southern California in the United States outperformed most systems with those of Massachusetts Institute of Technology and the University of Maryland rated next. The Bochum system was developed through funding by the United States Army Research Laboratory. The software was sold as ZN-Face and used by customers such as Deutsche Bank and operators of airports and other busy locations. The software was "robust enough to make identifications from less-than-perfect face views. It can also often see through such impediments to identification as mustaches, beards, changed hair styles and glasses—even sunglasses".

In about January 2007, image searches were "based on the text surrounding a photo," for example text near an image that mentions its content. Polar Rose technology can guess from a photograph, in about 1.5 seconds, what any individual may look like in three dimensions, and the company said it "will ask users to input the names of people they recognize in photos online" to help build a database. Identix, a company based in Minnesota, has developed face recognition software called FaceIt. FaceIt can pick out someone’s face in a crowd and compare it to databases worldwide to recognize and put a name to a face. The software detects multiple features of the human face: it can measure the distance between the eyes, the width of the nose, the shape of the cheekbones, the length of the jawline, and many other facial features. It does this by mapping the image of the face onto a faceprint, a numerical code that represents the human face. Face recognition software used to have to rely on a 2D image with the person almost directly facing the camera. Now, with FaceIt, a 3D image can be compared to a 2D image by choosing three specific points from the 3D image and converting it into a 2D image using a special algorithm that can be scanned through almost all databases.

In 2006, the performance of the latest face recognition algorithms was evaluated in the Face Recognition Grand Challenge (FRGC). High-resolution face images, 3D face scans, and iris images were used in the tests. The results indicated that the new algorithms were 10 times more accurate than the face recognition algorithms of 2002 and 100 times more accurate than those of 1995. Some of the algorithms were able to outperform human participants in recognizing faces and could uniquely identify identical twins.

U.S. Government-sponsored evaluations and challenge problems have helped spur over two orders of magnitude of improvement in face-recognition system performance. Since 1993, the error rate of automatic face-recognition systems has decreased by a factor of 272. The reduction applies to systems that match people with face images captured in studio or mugshot environments. In Moore's-law terms, the error rate decreased by one-half every two years.
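
As a rough consistency check based only on the figures quoted above (a back-of-the-envelope calculation, not a number from the underlying evaluations), a factor-of-272 reduction corresponds to about eight halvings of the error rate, or roughly sixteen years at one halving every two years:

```python
import math

# Express the factor-of-272 reduction as repeated halvings of the error rate.
halvings = math.log2(272)     # ≈ 8.1 halvings
years = 2 * halvings          # ≈ 16.2 years at one halving every two years
print(f"{halvings:.1f} halvings over roughly {years:.1f} years")
```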

Low-resolution images of faces can be enhanced using face hallucination. Further improvements in high-resolution, megapixel cameras in the last few years have also helped to resolve the issue of insufficient resolution.

Emotion detection

Facial recognition systems have also been used for emotion recognition. In 2016 Facebook acquired the emotion detection startup FacioMetrics.

Anti-facial recognition systems

In January 2013, Japanese researchers from the National Institute of Informatics created 'privacy visor' glasses that use near-infrared light to make the face underneath unrecognizable to face recognition software. The latest version uses a titanium frame, light-reflective material, and a mask that uses angles and patterns to disrupt facial recognition technology by both absorbing and bouncing back light sources. In December 2016, a form of anti-CCTV and facial recognition sunglasses called 'reflectacles' was invented by Scott Urban, a custom-spectacle craftsman based in Chicago. They reflect infrared and, optionally, visible light, which makes the wearer's face a white blur to cameras. The project easily surpassed its crowdfunding goal of $28,000, and reflectacles will be commercially available by June 2017. It is conceivable that such technology might be fused with future head-mounted displays such as potential successors of HoloLens.
