Volume 18, No. 4
Graph Neural Network Training Systems: A Performance Comparison of Full-Graph and Mini-Batch
Abstract
Graph Neural Networks (GNNs) have gained significant attention in recent years due to their ability to learn representations of graph-structured data. Two common methods for training GNNs are mini-batch training and full-graph training. Since these two methods require different training pipelines and systems optimizations, two separate classes of GNN training systems emerged, each tailored for one method. Works that introduce systems belonging to a particular category predominantly compare them with other systems within the same category, offering limited or no comparison with systems from the other category. Some prior work also justifies its focus on one specific training method by arguing that it achieves higher accuracy than the alternative. The literature, however, has incomplete and contradictory evidence in this regard. In this paper, we provide a comprehensive empirical comparison of representative full-graph and mini-batch GNN training systems. We find that mini-batch training systems consistently converge faster than full-graph training ones across multiple datasets, GNN models, and system configurations. We also find that mini-batch training converges to accuracy values similar to, and often higher than, those of full-graph training, showing that mini-batch sampling is not necessarily detrimental to accuracy. Our work highlights the importance of comparing systems across different classes, using time-to-accuracy rather than epoch time for performance comparison, and selecting appropriate hyperparameters for each training method separately.
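To make the two training methods concrete, here is a minimal sketch of both pipelines written with PyTorch Geometric. This is not the paper's experimental setup: the 2-layer GCN, the Cora dataset, the neighbor fan-out of 10 per layer, the batch size, and the learning rate are all illustrative assumptions. Full-graph training performs one gradient step per epoch over the entire graph, while mini-batch training performs many steps per epoch, each on a subgraph sampled around a batch of seed nodes.

```python
# Illustrative sketch only; models, dataset, and hyperparameters are assumptions.
import torch
import torch.nn.functional as F
from torch_geometric.datasets import Planetoid
from torch_geometric.loader import NeighborLoader
from torch_geometric.nn import GCNConv

class GCN(torch.nn.Module):
    """A simple 2-layer GCN used for both training methods."""
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hid_dim)
        self.conv2 = GCNConv(hid_dim, out_dim)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)

data = Planetoid(root="/tmp/Cora", name="Cora")[0]  # illustrative dataset
model = GCN(data.num_node_features, 64, 7)  # Cora has 7 classes
opt = torch.optim.Adam(model.parameters(), lr=0.01)

# Full-graph training: a single forward/backward pass over the whole graph
# per epoch, so every epoch is one optimizer step.
def full_graph_epoch():
    model.train()
    opt.zero_grad()
    out = model(data.x, data.edge_index)
    loss = F.cross_entropy(out[data.train_mask], data.y[data.train_mask])
    loss.backward()
    opt.step()

# Mini-batch training: sample a subgraph per batch of seed nodes
# (10 neighbors per layer for the 2-layer model), yielding many
# optimizer steps per epoch.
loader = NeighborLoader(data, num_neighbors=[10, 10], batch_size=128,
                        input_nodes=data.train_mask)

def mini_batch_epoch():
    model.train()
    for batch in loader:
        opt.zero_grad()
        out = model(batch.x, batch.edge_index)
        # Seed nodes come first in the batch; compute loss only on them.
        loss = F.cross_entropy(out[:batch.batch_size],
                               batch.y[:batch.batch_size])
        loss.backward()
        opt.step()
```

The sketch shows why the two methods need different pipelines: full-graph training keeps the entire graph in memory and scales per-step cost with graph size, while mini-batch training adds a sampling stage but bounds per-step memory and takes many more gradient steps per epoch, which is why the abstract argues for comparing time-to-accuracy rather than epoch time.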