Tianlong Chen (陈天龙)

What does not kill you makes you stronger

(NeurIPS 2020) Adversarial Contrastive Learning: Harvesting More Robustness from Unsupervised Pre-Training

[Paper] [Code]

Abstract

Recent work has shown that, when integrated with adversarial training, self-supervised pre-training with several pretext tasks can lead to state-of-the-art robustness. In this work, we show that contrasting features against random and adversarial perturbations for consistency can benefit robustness-aware pre-training even further. Our approach leverages a recent contrastive learning framework, which learns representations by maximizing feature consistency under differently augmented views. This fits particularly well with the goal of adversarial robustness, as one cause of adversarial fragility is the lack of feature invariance: small input perturbations can result in undesirably large changes in features or even predicted labels. We explore various options for formulating the contrastive task and demonstrate that, by injecting adversarial augmentations, contrastive pre-training indeed contributes to learning data-efficient robust models. We extensively evaluate the proposed Adversarial Contrastive Learning (ACL) and show that it consistently outperforms state-of-the-art methods. For example, on the CIFAR-10 dataset, ACL outperforms the latest unsupervised robust pre-training approach by substantial margins: 2.99% on robust accuracy and 2.14% on standard accuracy. We further demonstrate that ACL pre-training can improve semi-supervised adversarial training, even at very low label rates. The code is publicly available.
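
To make the core idea concrete, below is a minimal PyTorch sketch of one adversarial-contrastive variant: a PGD attack crafts a perturbation that maximizes the contrastive (NT-Xent) loss between two views, and the encoder is then trained to remain consistent under that adversarial view. This is an illustrative sketch under assumed hyperparameters (eps, alpha, steps), not the authors' released implementation; the function names (nt_xent_loss, pgd_contrastive, acl_step) are hypothetical.

```python
# Illustrative sketch of the adversarial contrastive idea; not the official ACL code.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss over two batches of view embeddings (N, d)."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)      # (2N, d), unit-norm
    sim = z @ z.t() / temperature                           # scaled cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float('-inf'))                   # exclude self-similarity
    # The positive for sample i in the first half is i + n, and vice versa.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

def pgd_contrastive(encoder, x1, x2, eps=8/255, alpha=2/255, steps=5):
    """PGD in the L-inf ball that *maximizes* the contrastive loss (adversarial view)."""
    delta = torch.zeros_like(x1).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = nt_xent_loss(encoder(torch.clamp(x1 + delta, 0, 1)), encoder(x2))
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return torch.clamp(x1 + delta, 0, 1).detach()

def acl_step(encoder, optimizer, x1, x2):
    """One pre-training step: contrast the adversarial view against a standard view."""
    x1_adv = pgd_contrastive(encoder, x1, x2)
    loss = nt_xent_loss(encoder(x1_adv), encoder(x2))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the perturbation is generated against the contrastive objective itself, no labels are needed, which is what lets this style of adversarial pre-training remain fully unsupervised.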