Overview of Graph Neural Networks

Eddie Mattia
Artificial Intelligence, Graph Neural Networks, Knowledge Graphs, Machine Learning

What is a Graph Neural Network? 

Graph Neural Networks are neural networks that operate on graph data. Distill's informative intro to GNNs, “A Gentle Introduction to Graph Neural Networks,” defines them as “an optimizable transformation on all attributes of the graph (nodes, edges, global-context) that preserves graph symmetries (permutation invariances).”

GNN layers take in graph feature data and apply a transformation to generate embeddings (vector representations). The graph's connectivity structure does not change across neural network layers; each layer takes in the previous layer's node features and transforms them into new node embeddings. A similar process happens for edge and graph-level embeddings.
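To make this concrete, here is a minimal sketch of one popular instantiation of such a layer (a GCN-style layer) in NumPy. The adjacency matrix, feature sizes, and ReLU nonlinearity are illustrative assumptions, not the only choices:

```python
import numpy as np

def gnn_layer(adj, node_feats, weight):
    """One GCN-style GNN layer: node features in, node embeddings out.

    adj        : (N, N) adjacency matrix (connectivity; unchanged across layers)
    node_feats : (N, d_in) node features from the previous layer
    weight     : (d_in, d_out) learnable transformation
    """
    # Add self-loops so each node keeps its own features.
    a_hat = adj + np.eye(adj.shape[0])
    # Symmetric degree normalization keeps embedding scales stable.
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt
    # Transform and propagate, then apply a ReLU nonlinearity.
    return np.maximum(0.0, a_norm @ node_feats @ weight)

# Tiny 3-node example: connectivity stays fixed, features become embeddings.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))        # 4 input features per node
w = rng.normal(size=(4, 8))        # 8-dimensional output embeddings
embeddings = gnn_layer(adj, x, w)  # shape (3, 8)
```

In practice the weight matrices are learned by gradient descent; the key point is that only the features change from layer to layer, while the adjacency structure is reused as-is.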

In fact, GNNs and the broader field of geometric deep learning generalize other kinds of neural networks, such as CNNs and Transformers.

A Graph about Graph ML Research – Screenshot from Connected Papers website on December 7, 2021

How can multilayer GNNs learn patterns related to connectivity in graph data?

Message passing allows multilayer GNNs to learn patterns in graph data.

Message passing is a concept introduced by Gilmer et al. in “Neural Message Passing for Quantum Chemistry,” which allows connectivity information to propagate throughout the graph structure during model training. The key steps of message passing can be summarized as follows (a code sketch appears after the list):

  1. Gather all messages in the neighborhood of each node 
  2. Aggregate all the messages for each node 
  3. Apply an update function to each node's aggregated messages (e.g., a multilayer perceptron)
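The following NumPy sketch makes these three steps explicit; the toy adjacency list, mean aggregation, and two-layer MLP are illustrative assumptions:

```python
import numpy as np

def mlp_update(aggregated, w1, w2):
    """Step 3: a small MLP applied to a node's aggregated messages."""
    return np.maximum(0.0, aggregated @ w1) @ w2

def message_passing_step(neighbors, node_feats, w1, w2):
    """One round of message passing over all nodes."""
    updated = []
    for node, nbrs in neighbors.items():
        # 1. Gather messages: each neighbor's feature vector,
        #    plus the node's own features.
        messages = [node_feats[n] for n in nbrs] + [node_feats[node]]
        # 2. Aggregate with a permutation-invariant reduction (mean here).
        aggregated = np.mean(messages, axis=0)
        # 3. Update: transform the aggregate with the MLP.
        updated.append(mlp_update(aggregated, w1, w2))
    return np.stack(updated)

# Toy path graph 0 - 1 - 2 as an adjacency list; 4 features per node.
neighbors = {0: [1], 1: [0, 2], 2: [1]}
rng = np.random.default_rng(0)
feats = rng.normal(size=(3, 4))
w1, w2 = rng.normal(size=(4, 16)), rng.normal(size=(16, 4))
# After K rounds, each node has seen information from K hops away.
for _ in range(2):  # K = 2
    feats = message_passing_step(neighbors, feats, w1, w2)
```

Note that the aggregation must be permutation invariant (mean, sum, max, and so on), since a node's neighbors have no natural ordering; after K rounds, each node's embedding reflects its K-hop neighborhood, which connects to the receptive-field discussion below.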

Message passing on graph data achieves an effect similar to what a convolutional layer achieves on image data. In a CNN, the convolutional layers typically take in a slice of a 2D grayscale image matrix or a 3D grid with color channels, and output a feature map encoding the presence or absence of a feature at each pixel location. Message passing applies a similar process to the node data. The difference is that nodes in a graph can have a varying number of neighbors, whereas pixels in an image have a fixed number of neighbors. After K message passing steps, or “hops,” each node will have accumulated information from nodes up to K connections away.

With each step of message passing, the receptive field grows, as it does in CNNs. For CNNs, stacking multiple layers (expanding the receptive field) is not a problem when an image is 1024×1024 pixels; however, for a graph with millions, and sometimes even billions, of nodes, it becomes a computational challenge known as neighborhood explosion.
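One common mitigation is neighbor sampling, as popularized by GraphSAGE: instead of expanding the full K-hop neighborhood, sample a fixed number of neighbors per hop. Here is a minimal sketch, where the fanout values and adjacency-list format are assumptions:

```python
import random

def sample_neighborhood(neighbors, seed_nodes, fanout_per_hop):
    """GraphSAGE-style sampling: cap the number of neighbors per hop.

    neighbors      : adjacency list {node: [neighbor, ...]}
    seed_nodes     : nodes in the current mini-batch
    fanout_per_hop : e.g. [10, 5] samples at most 10 neighbors at hop 1
                     and 5 at hop 2, instead of the full neighborhood
    """
    frontier, visited = set(seed_nodes), set(seed_nodes)
    for fanout in fanout_per_hop:
        next_frontier = set()
        for node in frontier:
            nbrs = neighbors.get(node, [])
            # Sample a fixed-size subset rather than the full neighbor
            # list, keeping per-node cost bounded even on huge graphs.
            sampled = random.sample(nbrs, min(fanout, len(nbrs)))
            next_frontier.update(sampled)
        visited |= next_frontier
        frontier = next_frontier
    return visited  # the subgraph nodes needed to embed the seeds
```

The trade-off is some variance from sampling in exchange for bounded memory and compute per mini-batch.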

Sample of impactful choices when designing a GNN architecture 

These are some questions that GNN experts ask during the design phase of experimentation (a hypothetical configuration sketch follows the list):

  • How deep should the GNN be for the task? 
  • What aggregation function should you choose? 
  • What message passing algorithm(s) should you use? 
  • How should you set the dimensionality of embeddings? 
  • Should you sample mini-batches of the graph or train on the full graph?
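These choices often become tunable hyperparameters. Purely as a hypothetical illustration (the names and ranges below are not tied to any particular framework), a search space might look like:

```python
# Hypothetical GNN experiment search space; names and value ranges
# are illustrative assumptions, not a specific library's API.
gnn_search_space = {
    "num_layers": [2, 3, 4],                         # how deep should the GNN be?
    "aggregation": ["mean", "sum", "max"],           # which aggregation function?
    "message_passing": ["gcn", "graphsage", "gat"],  # which algorithm(s)?
    "embedding_dim": [64, 128, 256],                 # dimensionality of embeddings
    "neighbor_fanout": [None, [10, 5]],              # full graph vs. sampled batches
}
```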

To learn more about GNN design and optimization, I encourage you to watch our GNN panel with Amazon AI, PayPal, and Intel Labs at the SigOpt Summit. If you’d like to start your own Intelligent Experimentation with Graph Neural Networks, sign up to use SigOpt for free.

Eddie Mattia, Machine Learning Specialist