PCNN: Probable-Class Nearest-Neighbor Explanations Improve Fine-Grained Image Classification Accuracy for AIs and Humans

Giang Nguyen, Valerie Chen, Mohammad Reza Taesiri, Anh Totti Nguyen

Links: pdf | code | project page

Nearest neighbors (NN) are traditionally used to compute final decisions, e.g., in Support Vector Machines or k-NN classifiers, and to provide users with explanations for the model's decision. In this paper, we show a novel utility of nearest neighbors: to improve predictions of a frozen, pretrained classifier C. We leverage an image comparator S that compares the input image with NN images from the top-K most probable classes, and we use S's output scores to weight the confidence scores of C. Our method consistently improves fine-grained image classification accuracy on CUB-200, Cars-196, and Dogs-120. A human study also finds that showing lay users our probable-class nearest neighbors (PCNN) reduces over-reliance on AI, improving their decision accuracy over prior work, which shows only examples from the top-1 class.
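
In symbols, the method reweights C's top-K probabilities by S's similarity scores (a minimal sketch in our own notation; the paper's exact formulation may differ):

$$\hat{p}_k \;=\; S(x, \mathrm{nn}_k) \cdot C(x)_k \quad \text{for each class } k \text{ in the top-}K, \qquad \hat{y} \;=\; \arg\max_k \hat{p}_k,$$

where $\mathrm{nn}_k$ is the training image of class $k$ nearest to the query $x$.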

Acknowledgment: This work is supported by the National Science Foundation under Grant No. 2145767, Adobe Research, and the NaphCare Charitable Foundation.

Published in Transactions on Machine Learning Research (TMLR), August 2024. Reviews: https://openreview.net/forum?id=OcFjqiJ98b

Figure 1: C × S re-ranking algorithm: for each of the top-K classes predicted by C, we find the nearest neighbor nn to the query x and compute a sigmoid similarity score S(x, nn), which weights the corresponding C(x) probability; the reweighted scores then re-rank the labels.
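
A sketch of how this re-ranking might look in code, assuming PyTorch; the helpers `comparator` (returning a raw logit) and `find_nearest_neighbor` (returning a class's nearest training image to x) are hypothetical stand-ins, not the authors' released API:

```python
# Minimal sketch of the C x S re-ranking described in Figure 1.
# `comparator` and `find_nearest_neighbor` are assumed helpers, not the paper's code.
import torch

def rerank(x, classifier, comparator, find_nearest_neighbor, K=10):
    """Re-rank classifier C's top-K classes using comparator S's similarity scores."""
    probs = torch.softmax(classifier(x.unsqueeze(0)), dim=1).squeeze(0)  # C(x)
    topk_probs, topk_classes = probs.topk(K)          # top-K most probable classes

    reweighted = torch.empty(K)
    for i, cls in enumerate(topk_classes.tolist()):
        nn_img = find_nearest_neighbor(x, cls)        # nearest neighbor of class `cls`
        s = torch.sigmoid(comparator(x, nn_img))      # sigmoid similarity S(x, nn)
        reweighted[i] = s * topk_probs[i]             # weight C's confidence by S's score

    best = topk_classes[reweighted.argmax()].item()   # re-ranked top-1 label
    return best, reweighted
```

Under this reading, C stays frozen; inference only adds K comparator calls (one per probable class) on top of a single forward pass through C.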

The C × S model successfully corrects predictions that ResNet-50 originally got wrong.

Figure 6: When pairing a well-trained comparator S1 with an unseen, black-box classifier (⏺), our C × S models (★) consistently yield higher accuracy on CUB-200. Along the x-axis, each star (★) marks the total parameter count of S1 plus its paired classifier.