Strike (with) a Pose: Neural networks are easily fooled by strange poses of familiar objects
Michael Alcorn, Qi Li, Zhitao Gong, Chengfei Wang, Long Mai, Wei-Shinn Ku, Anh Nguyen
Links: pdf | code | project page
Despite excellent performance on stationary test sets, deep neural networks (DNNs) can fail to generalize to out-of-distribution (OoD) inputs, including natural, non-adversarial ones, which are common in real-world settings. In this paper, we present a framework for discovering DNN failures that harnesses 3D renderers and 3D models. That is, we estimate the parameters of a 3D renderer that cause a target DNN to misbehave in response to the rendered image. Using our framework and a self-assembled dataset of 3D objects, we investigate the vulnerability of DNNs to OoD poses of well-known objects in ImageNet. For objects that are readily recognized by DNNs in their canonical poses, DNNs incorrectly classify 97% of their pose space. In addition, DNNs are highly sensitive to slight pose perturbations (e.g., an 8° rotation). Importantly, adversarial poses transfer across models and datasets. We find that 99.9% and 99.4% of the poses misclassified by Inception-v3 also transfer to the AlexNet and ResNet-50 image classifiers trained on the same ImageNet dataset, respectively, and 75.5% transfer to the YOLOv3 object detector trained on MS COCO.
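For readers curious what searching a renderer's pose parameters for classifier failures might look like, below is a minimal sketch of a zeroth-order (random) pose search, assuming PyTorch and a pretrained Inception-v3 from torchvision. The `render_object` stub, the yaw/pitch/roll-plus-depth parameterization, and the sampled depth range are illustrative assumptions, not the paper's actual implementation (see the code link above for that).

```python
import torch
from torchvision import transforms
from torchvision.models import inception_v3

# Target classifier (the paper's main target network).
model = inception_v3(pretrained=True).eval()

# Standard ImageNet preprocessing for Inception-v3 (299x299 inputs).
preprocess = transforms.Compose([
    transforms.Resize(299),
    transforms.CenterCrop(299),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def render_object(yaw, pitch, roll, depth):
    """Hypothetical stub: render the 3D object at the given pose and
    return a PIL image. A real implementation would call a 3D renderer
    here; the paper uses both non-differentiable and differentiable
    renderers."""
    raise NotImplementedError

def find_adversarial_poses(true_label, n_trials=1000):
    """Random search over the pose space for renders the classifier
    gets wrong."""
    adversarial = []
    with torch.no_grad():
        for _ in range(n_trials):
            # Sample a pose uniformly: full rotations plus an assumed
            # depth range (units depend on the renderer's scene setup).
            yaw, pitch, roll = (torch.rand(3) * 360.0).tolist()
            depth = float(torch.empty(1).uniform_(5.0, 25.0))
            image = render_object(yaw, pitch, roll, depth)
            logits = model(preprocess(image).unsqueeze(0))
            pred = int(logits.argmax(dim=1))
            if pred != true_label:
                adversarial.append((yaw, pitch, roll, depth, pred))
    return adversarial
```

A gradient-based variant would replace the random sampling with backpropagation through a differentiable renderer into the pose parameters themselves.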
Conference: CVPR 2019 (acceptance rate: 25.2%).
Acknowledgment: This material is based upon work supported by the National Science Foundation under Grant No. 1850117 and a donation from Adobe Inc.
Press coverage:
- Nature. Why deep-learning AIs are so easy to fool
- ZDNet. Google’s image recognition AI fooled by new tricks
- Nautilus. Why Robot Brains Need Symbols
- Gizmodo. Google’s ‘Inception’ Neural Network Tricked By Images Resembling Bad Video Games
- New Scientist. The best image-recognition AIs are fooled by slightly rotated images (pdf)
- Gary Marcus. The deepest problem with deep learning
- Communications of the ACM. March 2019 news.
- Manifold.ai. We need to build interactive computer vision systems
- Medium. AI is about to get bigger, better, and more boring
- Adobe. Neural Networks Easily Fooled by Common Objects Seen from New Angles
- Gizmodo. Thousands of Reasons That We Shouldn’t Trust a Neural Network to Analyze Images
- Binary District Journal. AI Applications and “Black Boxes”
- Facebook AI blog. Building AI that can understand variation in the world around us