Fast and Interpretable Face Identification for Out-Of-Distribution Data Using Vision Transformers

Hai Phan, Cindy X. Le, Vu Le, Yihui Le, Anh Totti Nguyen

Links: pdf | code | project page

Most face identification approaches employ a Siamese neural network to compare two images at the image-embedding level. Yet, this technique is susceptible to occlusion (e.g., faces wearing masks or sunglasses) and out-of-distribution data. DeepFace-EMD (Phan et al. 2022) reaches state-of-the-art accuracy on out-of-distribution data by first comparing two images at the image level and then at the patch level. However, its subsequent patch-wise re-ranking stage incurs a large O(n³ log n) time complexity (for n patches per image) due to the optimal-transport optimization. In this paper, we propose a novel 2-image Vision Transformer (ViT) that compares two images at the patch level using cross-attention. After training on 2M image pairs from CASIA Webface (Yi et al. 2014), our model achieves accuracy comparable to DeepFace-EMD on out-of-distribution data while running more than twice as fast at inference (Phan et al. 2022). In addition, a human study shows that visualizations of our cross-attention maps offer promising explainability. We believe our work can inspire further exploration of ViTs for face identification.
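To make the patch-level comparison concrete, below is a minimal sketch (in PyTorch) of a 2-image model that lets the patch embeddings of one face crop attend to those of another via cross-attention and outputs a match score. This is an illustration under stated assumptions, not the authors' exact H2L architecture: the class name `PatchCrossAttention`, the mean pooling, the scoring head, and all dimensions are hypothetical placeholders, and the patch embeddings are assumed to come from a CNN backbone (the "hybrid" part).

```python
# Illustrative sketch only -- not the authors' H2L implementation.
import torch
import torch.nn as nn

class PatchCrossAttention(nn.Module):
    def __init__(self, embed_dim: int = 512, num_heads: int = 8):
        super().__init__()
        # Cross-attention: patches of image A attend to patches of image B.
        self.cross_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(embed_dim)
        # Small head mapping the pooled, fused representation to a match score.
        self.head = nn.Sequential(nn.Linear(embed_dim, embed_dim), nn.ReLU(),
                                  nn.Linear(embed_dim, 1))

    def forward(self, patches_a: torch.Tensor, patches_b: torch.Tensor) -> torch.Tensor:
        # patches_a, patches_b: (batch, n_patches, embed_dim), e.g. a CNN feature
        # map flattened into a grid of patch embeddings.
        fused, _ = self.cross_attn(query=patches_a, key=patches_b, value=patches_b)
        fused = self.norm(fused + patches_a)   # residual connection
        pooled = fused.mean(dim=1)             # average over patches
        return self.head(pooled).squeeze(-1)   # higher = more likely same identity

# Usage: score a batch of image pairs from precomputed patch embeddings.
a = torch.randn(4, 49, 512)   # 4 pairs, 7x7 patch grid, 512-d embeddings
b = torch.randn(4, 49, 512)
scores = PatchCrossAttention()(a, b)   # shape: (4,)
```

In this sketch, the pairwise interaction happens inside the attention weights (every patch of image A scores its similarity to every patch of image B), which is what distinguishes a 2-image model from a Siamese setup that only compares pooled embeddings.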

Acknowledgment: This work is supported by the National Science Foundation under Grant No. 2145767, Adobe Research, and donations from the NaphCare Foundation.

Conference: IEEE/CVF Winter Conference on Applications of Computer Vision (WACV 2024) (acceptance rate: 476/1226 ≈ 38.8%).

Figure 1: The architectures of the six networks evaluated in this work, including our proposed H2L.


Figure 2: The hybrid (CNN-ViT) architecture of the proposed ViT-based Model H2L.

Figure 3: Actual runtime in seconds (lower is better) for the re-ranking computation in face identification under occlusion. Our proposed model is at least 2x faster than the state-of-the-art DeepFace-EMD [40] across all datasets.
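As rough intuition for this speedup, a back-of-the-envelope comparison of the asymptotic per-pair comparison cost (illustrative only, assuming the re-ranking stage solves an exact optimal-transport problem over n patches and the cross-attention stage is a single attention pass over n patch tokens of dimension d):

```latex
\[
\underbrace{\mathcal{O}\!\left(n^{3}\log n\right)}_{\text{optimal-transport (EMD) re-ranking}}
\qquad \text{vs.} \qquad
\underbrace{\mathcal{O}\!\left(n^{2}\,d\right)}_{\text{cross-attention over } n \text{ patch tokens}}
\]
```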

Figure 4: Properties of the six networks evaluated in this work. We categorize them into two types: 1-image and 2-image models. The 1-image models are CNN (C) and ViT (V), while the 2-image group contains DeepFace-EMD (D). Hybrid-ViTs can be 1-image (H1) or 2-image (H2 and H2L). H2 and H2L differ in the Transformer output used: [CLS] vs. 2-Linear, respectively.