ImageNet-D: Benchmarking Neural Network Robustness on Diffusion Synthetic Object

CVPR 2024 Highlight (2.8%)

1KAIST   2University of Michigan   3McGill University   4MILA
*Corresponding author
@InProceedings{Zhang_2024_CVPR,
    author    = {Zhang, Chenshuang and Pan, Fei and Kim, Junmo and Kweon, In So and Mao, Chengzhi},
    title     = {ImageNet-D: Benchmarking Neural Network Robustness on Diffusion Synthetic Object},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {21752-21762}
}
Teaser Figure

Abstract

We establish rigorous benchmarks for visual perception robustness. Synthetic test sets such as ImageNet-C, ImageNet-9, and Stylized ImageNet provide specific types of evaluation over synthetic corruptions, backgrounds, and textures, yet these robustness benchmarks are restricted to specific variations and suffer from low synthetic quality. In this work, we introduce generative models as a data source for synthesizing hard images that benchmark deep models' robustness. Leveraging diffusion models, we are able to generate images with more diversified backgrounds, textures, and materials than any prior work, which we term ImageNet-D. Experimental results show that ImageNet-D causes a significant accuracy drop for a range of vision models, from the standard ResNet visual classifier to the latest foundation models like CLIP and MiniGPT-4, reducing their accuracy by up to 60%. Our work suggests that diffusion models can be an effective source to test vision models.

Method: Benchmark Robustness by Diffusion

We create ImageNet-D by first generating a large image pool using diffusion models. To make the test set challenging, we keep only the hard images from this pool, i.e., those that cause multiple surrogate models to fail. The test set is then refined through human verification to ensure labeling quality.
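Below is a minimal sketch of the hard-sample selection step, assuming the surrogate models are off-the-shelf ImageNet classifiers from torchvision. The specific surrogate set, file paths, and class indices are illustrative assumptions, not the official pipeline.

```python
# Illustrative hard-image selection: keep a generated image only if every
# surrogate classifier misclassifies it (sketch, not the official code).
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# Surrogate classifiers; the exact set used for ImageNet-D is an assumption here.
surrogates = [
    models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2),
    models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1),
]
for m in surrogates:
    m.eval().to(device)

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def is_hard(image_path: str, true_class: int) -> bool:
    """Return True if all surrogate models predict the wrong class."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0).to(device)
    with torch.no_grad():
        preds = [m(x).argmax(dim=1).item() for m in surrogates]
    return all(p != true_class for p in preds)

# Hypothetical usage: filter the generated pool down to the hard subset.
# pool = [("gen/backpack_background_lava_00.png", 414), ...]
# hard_pool = [(path, y) for path, y in pool if is_hard(path, y)]
```

Images that survive this filter then go to human annotators, who verify that the labeled object is actually present and recognizable.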

Results: SOTA Model Accuracy Drops by up to 60%

Model accuracy on ImageNet vs. ImageNet-D. Each data point corresponds to one tested model. The plots reveal a significant accuracy drop from ImageNet to our new test set, ImageNet-D.

Visualizations: Higher Quality and Diverse Variations

Compared to prior synthetic test sets, ImageNet-D achieves higher image quality and more diverse variations. ImageNet-D can also be scaled efficiently to include more categories and nuisances.
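As a rough illustration of how the test set scales, a text-to-image diffusion pipeline can be prompted with every (category, nuisance) combination. The prompt template, nuisance lists, and model checkpoint below are assumptions for this sketch, not the exact setup used to build ImageNet-D.

```python
# Sketch: scale image generation by pairing each object category with a
# nuisance attribute (background / material / texture) in the text prompt.
import os
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Categories and nuisances below are examples, not the full ImageNet-D lists.
categories = ["backpack", "teapot", "umbrella"]
nuisances = {
    "background": ["in a wheat field", "on a lava flow"],
    "material": ["made of glass", "made of wicker"],
    "texture": ["with a plaid pattern", "with a marble texture"],
}

os.makedirs("gen", exist_ok=True)
for category in categories:
    for kind, values in nuisances.items():
        for value in values:
            prompt = f"a photo of a {category} {value}"
            image = pipe(prompt).images[0]
            image.save(f"gen/{category}_{kind}_{value.replace(' ', '_')}.png")
```

Adding a new category or nuisance only requires extending the corresponding list, which is what makes the benchmark easy to grow.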

Visualizations: CLIP Fails on ImageNet-D

For each group of images, the ground-truth label is shown in green, while the categories predicted by CLIP (ViT-L/14) for each image are shown in black.
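For reference, a hedged sketch of how such zero-shot CLIP predictions can be reproduced: the image is scored against one text prompt per candidate category, and the highest-scoring category is taken as the prediction. The category list and file name here are illustrative.

```python
# Zero-shot classification with OpenAI's CLIP package (sketch).
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-L/14", device=device)

# Candidate categories are an illustrative subset, not the full label set.
categories = ["backpack", "teapot", "umbrella", "sleeping bag", "wallet"]
text = clip.tokenize([f"a photo of a {c}" for c in categories]).to(device)

# Hypothetical ImageNet-D sample path.
image = preprocess(Image.open("imagenet_d_sample.png")).unsqueeze(0).to(device)
with torch.no_grad():
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1)

print("CLIP prediction:", categories[probs.argmax().item()])
```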

Visualizations: MiniGPT-4 and LLaVA-1.5 Fail

Both MiniGPT-4 and LLaVA-1.5 fail to recognize the objects in ImageNet-D, highlighting the challenges the dataset poses for these models.