Face Matching

Face matching is the process of determining whether images of two faces belong to the same person. This is particularly useful in onboarding scenarios when we want to check if the holder of the document is present during onboarding.

Another typical use case is login, when the user is verified with a previously stored face.

Face matching is provided by the DOT Face mobile libraries for Android and iOS.

Better face matching precision can be achieved by calling the Core Server.

The Innovatrics face biometric algorithm ranks among the top in the NIST FRVT.

Matching steps

In order to verify two faces, the following steps must be performed:

  • Face detection - find the position of the face in the image
  • Template extraction - compute a representation of the face used for matching
  • Matching - compare two face templates and output a similarity score
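
The three steps can be sketched as a small pipeline. This is only an illustration; `detect_face`, `extract_template`, and `match_templates` are hypothetical placeholders, not the real SDK API:

```python
# Illustrative sketch of the three-step verification pipeline.
# The callables are hypothetical stand-ins for the real SDK operations.

def verify_faces(reference_image: bytes, probe_image: bytes,
                 detect_face, extract_template, match_templates) -> float:
    """Detect a face in each image, extract templates, and return a similarity score."""
    ref_face = detect_face(reference_image)        # step 1: face detection
    probe_face = detect_face(probe_image)
    ref_template = extract_template(ref_face)      # step 2: template extraction
    probe_template = extract_template(probe_face)
    return match_templates(ref_template, probe_template)  # step 3: matching
```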

Face detection

The first step in face matching is face detection. This step is important because the picture may contain no face at all, or multiple faces. Once a face is detected, it can be used in the matching process. Several face detection modes are available: fast mode provides lower latency, while accurate mode is more precise. Mobile devices only support fast mode.

Template extraction

Once the face has been detected, the face template can be generated. Templates can be cached at the application level to speed up matching: once the reference image is uploaded to the server, its template is generated and cached, and when a user logs in, face detection and extraction are performed only on the probe image while the reference template is pulled from the cache. Fast and accurate modes are available for extraction as well; as with detection, mobile devices support only fast mode. Only templates generated by the same mode and product version can be matched. During major product upgrades, templates must be regenerated as described in the respective product changelog.
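
The caching flow described above might look like this in application code. The extraction and matching callables are hypothetical placeholders for the real SDK operations:

```python
# Illustrative template cache for a login use case: extract the reference
# template once at enrollment, then only the probe image needs detection
# and extraction at login time.

class TemplateCache:
    def __init__(self, extract_template, match_templates):
        self._extract = extract_template   # hypothetical SDK call
        self._match = match_templates      # hypothetical SDK call
        self._cache = {}                   # user_id -> reference template

    def enroll(self, user_id: str, reference_image: bytes) -> None:
        # Extract once when the reference image is uploaded.
        self._cache[user_id] = self._extract(reference_image)

    def verify(self, user_id: str, probe_image: bytes) -> float:
        # Only the probe image is processed at login; the reference
        # template comes from the cache.
        probe_template = self._extract(probe_image)
        return self._match(self._cache[user_id], probe_template)
```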

Matching

Matching is a very fast operation that compares two templates and produces a matching score: the higher the score, the more similar the faces.

Matching threshold

The final decision on whether two faces belong to the same person should be based on the similarity score and a threshold: a score above the threshold is interpreted as accepted, a score below it as rejected.
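
This accept/reject rule reduces to a single comparison (the threshold value used in the test is only an example; ties are treated as accepted here):

```python
def decide(score: float, threshold: float) -> str:
    """Map a similarity score to an accept/reject decision."""
    return "accept" if score >= threshold else "reject"
```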

The following characteristics were measured on our ICAO face quality testing dataset using the Core Server (accurate extraction mode):

FAR level    FAR [%]    FRR [%]    Score threshold
1:50         1.999      0.009      18.34
1:100        1.000      0.011      20.18
1:500        0.200      0.020      25.23
1:1000       0.100      0.022      27.17
1:5000       0.020      0.040      32.28
1:10000      0.010      0.058      34.52
1:50000      0.002      0.171      41.32
EER          0.034      0.034      30.42

The following characteristics were measured on our ICAO face quality testing dataset using the DOT Face mobile library for Android and iOS (fast extraction mode):

FAR level    FAR [%]    FRR [%]    Score threshold
1:100        1.000      0.476      22.38
1:500        0.200      0.867      27.60
1:1000       0.100      1.205      29.77
1:5000       0.020      2.483      35.81
1:10000      0.010      3.216      37.88
1:50000      0.002      5.250      42.82
EER          0.573      0.573      24.31

Example

If we require a FAR level of 1:5000 with the Core Server (accurate extraction mode), we have to set the threshold on the resulting matching score to 32.28. With a representative set of 10000 matching face pairs, statistically 4 will be incorrectly marked as not matching, even though they are (FRR 0.040 %). With 10000 non-matching pairs, statistically 2 will be wrongly marked as matching (FAR 0.020 %).
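
The same arithmetic, using the FAR and FRR percentages from the Core Server table above at the 1:5000 FAR level:

```python
def expected_errors(pairs: int, rate_percent: float) -> int:
    """Expected number of errors among `pairs` trials at the given error rate."""
    return round(pairs * rate_percent / 100)

# Core Server, accurate mode, 1:5000 FAR level: FRR = 0.040 %, FAR = 0.020 %
false_rejects = expected_errors(10_000, 0.040)  # matching pairs wrongly rejected
false_accepts = expected_errors(10_000, 0.020)  # non-matching pairs wrongly accepted
```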


Setting the correct threshold depends on the security/convenience balance that is required for the specific use case.

During the initial configuration of the system, two thresholds can be set. If the score is below the bottom threshold, the result is automatically rejected; if the score is above the top threshold, it is automatically accepted. If the score falls between the two thresholds, the images are sent to a back-office operator for review and final decision.
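
A sketch of this two-threshold decision (the threshold values used in the test are illustrative, not recommendations):

```python
def decide_with_review(score: float, bottom: float, top: float) -> str:
    """Three-way decision with a manual-review band between the two thresholds."""
    if score < bottom:
        return "reject"   # automatic rejection
    if score > top:
        return "accept"   # automatic acceptance
    return "review"       # back-office operator makes the final decision
```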


NOTE

To add matching to your workflow, please consider the following:

  • Image quality - if image quality is low, the accuracy of matching decreases
  • Age difference between the images - if the time difference between the capture of the two images is several years, the person's appearance might have changed significantly

Image vs template usage

When performing matching using images, face detection is always called internally and a template is generated. When using templates, face detection is skipped.

Using images

If you do not need the result of the face detection for other purposes you can simply invoke matching with images. This is particularly useful when matching is performed only once during the flow. An example would be a simple selfie vs identity document face comparison.

Using templates

If you need more data about the face, such as age estimation or passive liveness, the recommended approach is as follows:

  1. Invoke face detection with all needed attributes and also template extraction enabled
  2. Cache this template
  3. Use it for matching

This approach can be used when we want to evaluate the passive liveness and also perform face matching. Calling verify with at least one template reduces processing time.
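
The three steps above might look like this in application code. The `detect` and `verify` callables are hypothetical placeholders for the server API, not its real signatures:

```python
# Illustrative template-based flow: a single detection call returns both
# the requested face attributes and the template, and the template is
# then reused for matching so face detection is skipped on both sides.

def evaluate_face(image: bytes, detect, verify, reference_template):
    # 1. Invoke face detection with attributes and template extraction enabled.
    result = detect(image, attributes=["passive_liveness"], extract_template=True)
    # 2. Keep the template for later calls (e.g. cache it).
    template = result["template"]
    # 3. Use the template for matching against the stored reference.
    score = verify(reference_template, template)
    return result["passive_liveness"], score
```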

Templates can also be cached at the application level for use cases like login, where the same reference face is needed repeatedly. Please note that templates are incompatible across major product upgrades and must be regenerated by invoking face detection on the source images again.

Usage

To perform matching on mobile, please check our mobile SDKs. For use on the server, please check the DOT Core Server verify operation.