Abstract: Communication is a key bottleneck for distributed graph neural network (GNN) training. Existing GNN training systems fail to scale to deep GNNs because of the tremendous amount of inter-GPU ...
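To make the scaling claim concrete, here is a minimal back-of-the-envelope sketch (an illustration under stated assumptions, not the paper's system or measurements): with full-graph partitioning across GPUs, each GNN layer must fetch the current hidden features of boundary (halo) nodes owned by other partitions, so communication volume grows linearly with network depth. The helper name comm_volume_bytes and all numbers below are hypothetical.

    # Hypothetical sketch: per-GPU communication for an L-layer GNN,
    # assuming one halo exchange of boundary-node features per layer
    # in each forward pass (backward roughly doubles this).
    def comm_volume_bytes(num_boundary_nodes, hidden_dim, num_layers,
                          bytes_per_elem=4):
        return num_layers * num_boundary_nodes * hidden_dim * bytes_per_elem

    boundary = 500_000   # boundary nodes per partition (assumed)
    hidden   = 256       # hidden feature dimension (assumed)
    for layers in (2, 8, 32):
        gb = comm_volume_bytes(boundary, hidden, layers) / 1e9
        print(f"{layers:2d} layers -> ~{gb:.1f} GB per GPU per forward pass")

Under these assumptions, going from a 2-layer to a 32-layer GNN multiplies the exchanged volume by 16x, which is why deeper models hit the inter-GPU communication wall first.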