Tianlong Chen (陈天龙)

What does not kill you makes you stronger

(NeurIPS 2020) Pre-Training Graph Neural Networks: A Contrastive Learning Framework with Augmentations

[Paper] [Code]

Abstract

Generalizable, transferable, and robust representation learning on graph-structured data remains a challenge for current graph neural networks (GNNs). Unlike what has been developed for convolutional neural networks (CNNs) on image data, self-supervised learning and pre-training are intrinsically difficult to pursue, and indeed rarely explored, for GNNs. In this paper, we propose a graph contrastive learning (GraphCL) framework to learn perturbation-invariant unsupervised representations of graph data. To this end, we first design four types of graph augmentations that incorporate various priors. We then systematically assess, summarize, and rationalize the impact of contrasting various combinations of graph augmentations across datasets, in semi-supervised, unsupervised, and transfer learning settings as well as under adversarial attacks. The results show that, even without tuning augmentation extents or using sophisticated GNN architectures, our GraphCL framework can produce graph representations of similar or better generalizability, transferability, and robustness than state-of-the-art methods. We also investigate the impact of parameterized graph augmentation extents and patterns, and observe further performance gains in preliminary experiments. The code is publicly available.
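
Since the abstract only names the recipe, here is a minimal NumPy sketch of its two ingredients: the four graph augmentation families described in the paper (node dropping, edge perturbation, attribute masking, subgraph sampling) and a normalized temperature-scaled contrastive (NT-Xent-style) loss between two augmented views. This is an illustrative sketch, not the released implementation: the function names, default ratios, breadth-first subgraph sampler, toy one-step mean-pooling "encoder", and the loss variant using only cross-view negatives are all assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def drop_nodes(adj, feat, ratio=0.2):
    """Node dropping: remove a random fraction of nodes and their edges."""
    n = adj.shape[0]
    keep = np.sort(rng.choice(n, size=max(1, int(n * (1 - ratio))), replace=False))
    return adj[np.ix_(keep, keep)], feat[keep]

def perturb_edges(adj, feat, ratio=0.2):
    """Edge perturbation: toggle a random set of node pairs (kept symmetric)."""
    adj = adj.copy()
    iu, ju = np.triu_indices(adj.shape[0], k=1)
    n_flip = max(1, int(ratio * adj[iu, ju].sum()))
    idx = rng.choice(len(iu), size=min(n_flip, len(iu)), replace=False)
    adj[iu[idx], ju[idx]] = 1 - adj[iu[idx], ju[idx]]
    adj[ju[idx], iu[idx]] = adj[iu[idx], ju[idx]]
    return adj, feat

def mask_attributes(adj, feat, ratio=0.2):
    """Attribute masking: zero out the feature vectors of random nodes."""
    feat = feat.copy()
    feat[rng.random(feat.shape[0]) < ratio] = 0.0
    return adj, feat

def sample_subgraph(adj, feat, ratio=0.8):
    """Subgraph sampling: keep nodes reached by a BFS walk from a random seed."""
    n, target = adj.shape[0], max(1, int(adj.shape[0] * ratio))
    visited = {int(rng.integers(n))}
    frontier = list(visited)
    while frontier and len(visited) < target:
        for u in np.flatnonzero(adj[frontier.pop(0)]):
            if int(u) not in visited and len(visited) < target:
                visited.add(int(u))
                frontier.append(int(u))
    keep = np.array(sorted(visited))
    return adj[np.ix_(keep, keep)], feat[keep]

def toy_encoder(adj, feat):
    """Stand-in for a GNN: one propagation step, then mean pooling."""
    return ((adj + np.eye(adj.shape[0])) @ feat).mean(axis=0)

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent-style loss: matched views (the diagonal) are the positives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                          # cosine similarities
    sim -= sim.max(axis=1, keepdims=True)          # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.diag(log_prob).mean()

# Toy usage: contrast two augmented views of a small batch of graphs.
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 1],
                [1, 1, 0, 1],
                [0, 1, 1, 0]], dtype=float)
feat = rng.random((4, 8))
batch = [(adj, feat)] * 4
z1 = np.stack([toy_encoder(*drop_nodes(a, x)) for a, x in batch])
z2 = np.stack([toy_encoder(*mask_attributes(a, x)) for a, x in batch])
print(nt_xent(z1, z2))
```

Each augmentation encodes a different data prior (e.g., node dropping assumes that losing a few vertices does not change a graph's semantics), which is why the abstract emphasizes assessing combinations of augmentations across datasets rather than a single universal choice.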