Pre-Training Graph Neural Networks for Generic Structural Feature Extraction

Ziniu Hu, Changjun Fan, Ting Chen, Kai-Wei Chang, and Yizhou Sun, in ICLR 2019 Workshop: Representation Learning on Graphs and Manifolds, 2019.

Abstract

Graph neural networks (GNNs) have proven successful at modeling applications with graph structure. However, training an accurate GNN model requires a large collection of labeled data and expressive features, which may be inaccessible for some applications. To tackle this problem, we propose a pre-training framework that captures generic graph structural information transferable across tasks. Our framework leverages three tasks: 1) denoising link reconstruction, 2) centrality score ranking, and 3) cluster preserving. The pre-training procedure can be conducted purely on synthetic graphs, and the pre-trained GNN is then adapted for downstream applications. With the proposed pre-training procedure, generic structural information is learned and preserved, so the pre-trained GNN requires less labeled data and fewer domain-specific features to achieve high performance on downstream tasks. Comprehensive experiments demonstrate that the proposed framework significantly enhances performance on a variety of node-, link-, and graph-level tasks.
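To make the first pre-training task concrete, below is a minimal numpy sketch of denoising link reconstruction on a small synthetic graph: edges are randomly dropped, an encoder produces node embeddings from the corrupted graph, and the embeddings are trained to reconstruct the original adjacency. This is an illustration under simplifying assumptions, not the paper's implementation: the GNN encoder is replaced by a single mean-aggregation step over learnable node embeddings, and all names and hyperparameters here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic graph: two 4-node cliques joined by a single bridge edge.
n = 8
edges = [(i, j) for i in range(4) for j in range(i + 1, 4)]
edges += [(i, j) for i in range(4, 8) for j in range(i + 1, 8)]
edges += [(3, 4)]
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

# Denoising corruption: randomly drop a fraction of the true edges.
A_noisy = A.copy()
for i, j in edges:
    if rng.random() < 0.2:
        A_noisy[i, j] = A_noisy[j, i] = 0.0

def encode(Z, A_hat):
    """Stand-in for a GNN encoder: one mean-aggregation step (with a
    self-loop) over the *noisy* graph, applied to node embeddings Z."""
    deg = A_hat.sum(1, keepdims=True) + 1.0
    return (A_hat @ Z + Z) / deg

def loss_and_grad(Z, A_hat, A_true):
    """Binary cross-entropy between sigmoid(H H^T) and the clean
    adjacency, plus the gradient w.r.t. Z (hand-derived chain rule)."""
    H = encode(Z, A_hat)
    logits = H @ H.T
    P = 1.0 / (1.0 + np.exp(-logits))
    eps = 1e-9
    loss = -np.mean(A_true * np.log(P + eps)
                    + (1 - A_true) * np.log(1 - P + eps))
    dlogits = (P - A_true) / (n * n)        # grad of mean BCE w.r.t. logits
    dH = (dlogits + dlogits.T) @ H          # symmetric decoder H H^T
    deg = A_hat.sum(1, keepdims=True) + 1.0
    dZ = A_hat @ (dH / deg) + dH / deg      # back through mean aggregation
    return loss, dZ

# Pre-train the embeddings by plain gradient descent.
Z = rng.normal(scale=0.1, size=(n, 16))
losses = []
for _ in range(300):
    loss, dZ = loss_and_grad(Z, A_noisy, A)
    Z -= 5.0 * dZ
    losses.append(loss)
```

The reconstruction loss decreases over training, showing that embeddings computed from the corrupted graph can recover the clean structure; in the paper's setting the encoder would be a full GNN whose weights (rather than raw embeddings) are transferred to downstream tasks.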


Bib Entry

@inproceedings{hu2019pretraining,
  author = {Hu, Ziniu and Fan, Changjun and Chen, Ting and Chang, Kai-Wei and Sun, Yizhou},
  title = {Pre-Training Graph Neural Networks for Generic Structural Feature Extraction},
  booktitle = {ICLR 2019 Workshop: Representation Learning on Graphs and Manifolds},
  year = {2019}
}