Like GraphSAGE, Relational Graph Convolutional Networks (RGCNs) extend the Graph Convolutional Network (GCN). The layers of a GCN generalize the convolutional layers of a CNN to data where each node can have a variable number of neighbors, rather than sitting on a fixed grid like the pixels of an image. Where GraphSAGE generalizes GCNs with trainable aggregation functions, RGCN extends GCNs to operate on multigraphs, where there is more than one edge type. There are many variations on GCN layers, and more are being added all the time.
There is an RGCN implementation in Deep Graph Library, as well as RGCN-hetero, which further extends the framework to heterographs containing multiple node and edge types. The original paper presenting the RGCN framework is Modeling Relational Data with Graph Convolutional Networks.
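To make the multigraph idea concrete, here is a minimal dense-matrix sketch of the RGCN propagation rule in NumPy: each relation type gets its own weight matrix and normalized adjacency, the per-relation messages are summed, and a self-loop term is added. The function name, shapes, and toy graph below are illustrative assumptions, not DGL's API; a real implementation would use sparse operations and basis-decomposition regularization as described in the paper.

```python
import numpy as np

def rgcn_layer(h, adj_per_rel, W_rel, W_self):
    """One RGCN layer: sum normalized messages per relation, plus a self-loop.

    h:            (N, d_in) node features
    adj_per_rel:  list of (N, N) adjacency matrices, one per relation type
    W_rel:        (R, d_in, d_out) one weight matrix per relation
    W_self:       (d_in, d_out) weight for the self-connection
    """
    out = h @ W_self                                # W_0 h_i: self-connection term
    for A, W in zip(adj_per_rel, W_rel):
        deg = A.sum(axis=1, keepdims=True)          # c_{i,r}: per-relation degree
        norm = A / np.maximum(deg, 1.0)             # 1/c_{i,r} normalization
        out = out + norm @ (h @ W)                  # sum over relation-r neighbors
    return np.maximum(out, 0.0)                     # ReLU activation

# Toy multigraph: 4 nodes, two relation types, each with its own adjacency.
rng = np.random.default_rng(0)
h = rng.standard_normal((4, 5))
adj = [np.eye(4, k=1), np.eye(4, k=-1)]            # relation 0 and relation 1 edges
W_rel = rng.standard_normal((2, 5, 8))
W_self = rng.standard_normal((5, 8))
print(rgcn_layer(h, adj, W_rel, W_self).shape)     # (4, 8)
```

Because each relation keeps a separate weight matrix, the parameter count grows linearly with the number of edge types; the paper's basis decomposition shares parameters across relations to keep this tractable on graphs with many relation types.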
To learn more about how Intel uses RGCNs with SigOpt, I encourage you to read this case study. If you’d like to learn more about Graph Neural Networks, we have provided an Overview of Graph Neural Networks. If you’d like to apply hyperparameter optimization to your Graph Neural Networks, sign up to use SigOpt for free.