Inverting Adversarially Robust Networks for Image Synthesis
Renan A. Rojas-Gomez, Raymond A. Yeh, Minh N. Do, Anh Nguyen
Links: pdf | code | project page
Recent research on adversarially robust classifiers suggests that their representations tend to be aligned with human perception, which makes them attractive for image synthesis and restoration applications. Despite favorable empirical results on a few downstream tasks, their advantages so far have been limited to slow and sensitive optimization-based techniques. Moreover, their use in generative models remains unexplored. This work proposes using robust representations as a perceptual primitive for feature inversion models, and shows their benefits over standard non-robust image features. We empirically show that adopting robust representations as an image prior significantly improves the reconstruction accuracy of CNN-based feature inversion models. Furthermore, it enables reconstructing images at multiple scales out of the box. Following these findings, we propose an encoding-decoding network based on robust representations and show its advantages for applications such as anomaly detection, style transfer, and image denoising.
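To make the idea of CNN-based feature inversion concrete, below is a minimal, hypothetical PyTorch sketch: a frozen encoder (here a stand-in torchvision ResNet-50; in practice one would load adversarially robust weights) produces intermediate features, and a small upsampling decoder is trained to map those features back to the image. The class names, layer choices, and pixel-only loss are illustrative assumptions, not the architecture or training objective used in the paper.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class FeatureEncoder(nn.Module):
    """Frozen encoder returning an intermediate feature map (up to layer3)."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet50(weights=None)  # stand-in; load robust weights in practice
        self.features = nn.Sequential(
            backbone.conv1, backbone.bn1, backbone.relu, backbone.maxpool,
            backbone.layer1, backbone.layer2, backbone.layer3,
        )
        for p in self.parameters():
            p.requires_grad_(False)

    def forward(self, x):
        return self.features(x)  # (B, 1024, H/16, W/16)

class Decoder(nn.Module):
    """Upsampling CNN that maps the encoder's features back to RGB images."""
    def __init__(self, in_ch=1024):
        super().__init__()
        layers, ch = [], in_ch
        for out_ch in (512, 256, 128, 64):
            layers += [
                nn.Upsample(scale_factor=2, mode="nearest"),
                nn.Conv2d(ch, out_ch, 3, padding=1),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            ]
            ch = out_ch
        layers += [nn.Conv2d(ch, 3, 3, padding=1), nn.Sigmoid()]
        self.net = nn.Sequential(*layers)

    def forward(self, feats):
        return self.net(feats)

encoder, decoder = FeatureEncoder().eval(), Decoder()
opt = torch.optim.Adam(decoder.parameters(), lr=1e-4)

x = torch.rand(4, 3, 224, 224)            # stand-in for a training batch
with torch.no_grad():
    feats = encoder(x)                     # robust features (frozen encoder)
x_hat = decoder(feats)                     # feed-forward reconstruction
loss = nn.functional.l1_loss(x_hat, x)     # pixel loss only; illustrative
loss.backward()
opt.step()
```

Once trained, such a decoder reconstructs images in a single forward pass, in contrast to the slow optimization-based inversion mentioned in the abstract.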
Acknowledgment: This work is supported by the National Science Foundation under Grant No. 1850117.
Conference: Asian Conference on Computer Vision (ACCV 2022). Acceptance rate: 279/836 = 33.3%.