According to the authors of GraphSAGE:
“GraphSAGE is a framework for inductive representation learning on large graphs. GraphSAGE is used to generate low-dimensional vector representations for nodes, and is especially useful for graphs that have rich node attribute information.”
GraphSAGE generalizes to unseen data better than previous graph learning methods. It is often described as leveraging inductive learning, as opposed to transductive learning, meaning the patterns the model learns transfer more readily to unseen test data. To do this, the algorithm samples node features in the local neighborhood of each node in the graph and then learns how to aggregate the information each node receives as it passes through the GNN layers. As each neural network layer processes the message-passing data, aggregation functions pool information to learn each node’s neighborhood structure, yielding a model that produces embeddings that are more transferable than those of transductive modeling approaches. A key property of the aggregation function is that it is invariant to permutations of the local neighborhood, and thus to graph isomorphisms. The power of this is clear to anyone familiar with CNNs:
neighborhood-permutation invariance in a GNN is an extension of the spatial invariance realized by CNNs as the algorithm slides feature-detecting filters across the 2D grid of an image.
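The permutation-invariance property above can be seen in a few lines of NumPy. This is a minimal sketch of a single GraphSAGE-style layer with mean aggregation, not the paper’s reference implementation; the function name, the toy graph, and the random weights are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sage_mean_layer(features, neighbors, W_self, W_neigh):
    """One GraphSAGE-style layer with mean aggregation (illustrative sketch).

    features:  (num_nodes, in_dim) node feature matrix
    neighbors: dict mapping node id -> list of neighbor ids
    W_self, W_neigh: weight matrices of shape (in_dim, out_dim)
    """
    out = np.zeros((features.shape[0], W_self.shape[1]))
    for v, nbrs in neighbors.items():
        # The mean is invariant to the order of the neighbor list,
        # so any permutation of nbrs yields the same embedding.
        agg = features[nbrs].mean(axis=0) if nbrs else np.zeros(features.shape[1])
        h = features[v] @ W_self + agg @ W_neigh
        out[v] = np.maximum(h, 0)  # ReLU nonlinearity
    # L2-normalize each embedding, as the paper's algorithm does
    norms = np.linalg.norm(out, axis=1, keepdims=True)
    return out / np.clip(norms, 1e-12, None)

# Tiny example: 4 nodes, 3-dim input features, 2-dim output embeddings
X = rng.normal(size=(4, 3))
adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
Ws, Wn = rng.normal(size=(3, 2)), rng.normal(size=(3, 2))
emb = sage_mean_layer(X, adj, Ws, Wn)

# Permuting each neighborhood leaves the embeddings unchanged
adj_perm = {0: [2, 1], 1: [3, 0], 2: [0], 3: [1]}
emb_perm = sage_mean_layer(X, adj_perm, Ws, Wn)
assert np.allclose(emb, emb_perm)
```

Swapping the mean for another permutation-invariant pooling function (sum, max) preserves this property, which is why the framework can treat the aggregator as a pluggable component.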
The original paper presenting the GraphSAGE framework is titled “Inductive Representation Learning on Large Graphs.” The loss function described in the paper “encourages nearby nodes to have similar representations, while enforcing that the representations of disparate nodes are highly distinct.” The formulation works in both supervised and unsupervised settings. Another important advancement presented in the GraphSAGE paper is uniform neighborhood sampling, which Intel also used in their experiments with the RGCN model.
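The two ideas in that paragraph can each be sketched in a few lines. Below, `sample_neighborhood` illustrates uniform neighborhood sampling to a fixed size `k`, and `unsupervised_loss` is a NumPy rendering in the spirit of the paper’s negative-sampling objective (attract nearby nodes, repel `Q`-weighted random negatives). The function names and the toy graph are illustrative assumptions, not the paper’s reference code.

```python
import random
import numpy as np

def sample_neighborhood(neighbors, node, k, rng):
    """Uniformly sample a fixed-size set of k neighbors (illustrative helper).

    Samples without replacement when the node has at least k neighbors,
    otherwise resamples with replacement to pad to k; a fixed fanout keeps
    the per-batch compute and memory footprint bounded on large graphs.
    """
    nbrs = neighbors[node]
    if len(nbrs) >= k:
        return rng.sample(nbrs, k)
    return [rng.choice(nbrs) for _ in range(k)]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def unsupervised_loss(z_u, z_v, z_negs, Q=1.0):
    """Negative-sampling loss in the spirit of the paper's unsupervised objective."""
    # Pull node u's embedding toward a nearby (co-occurring) node v ...
    pos = -np.log(sigmoid(z_u @ z_v))
    # ... and push it away from Q-weighted random negative samples.
    neg = -Q * np.mean([np.log(sigmoid(-z_u @ z_n)) for z_n in z_negs])
    return pos + neg

rng = random.Random(0)
adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
sample = sample_neighborhood(adj, 0, 2, rng)
assert len(sample) == 2 and set(sample) <= {1, 2, 3}
```

In a supervised setting, the same sampled-and-aggregated embeddings would instead feed a task-specific loss such as cross-entropy on node labels.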
To learn more about how to build Graph Neural Networks with SigOpt, I encourage you to read this case study or watch our GNN panel with Amazon AI, PayPal, and Intel Labs at the SigOpt Summit. If you’d like to learn more about Graph Neural Networks, we have provided an Overview of Graph Neural Networks. If you’d like to apply hyperparameter optimization to your Graph Neural Networks, sign up to use SigOpt for free.