Overview of Temporal Graph Neural Networks

Luis Bermudez
Graph Neural Networks

The popularity of Graph Neural Networks (GNNs) has risen in recent years. They have been successfully applied to many areas such as particle physics, biology, social networks, and recommendation systems. Most of this impressive work, however, has focused on static graphs. In the internet age, data changes over time, so we need GNNs that can handle graph data that changes over time as well. This is where Temporal Graph Neural Networks (TGNNs) come in.

Temporal Graph Neural Networks are part of a larger encoder-decoder architecture. The encoder takes graph data as input and produces node embeddings, while the decoder takes those node embeddings as input and outputs task-specific predictions. A TGNN is a particular type of encoder whose input graphs carry temporal data; graph data that changes over time is referred to as a dynamic graph. The TGNN takes a continuous-time dynamic graph as input and outputs a node embedding for each timestamp.
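To make that split concrete, here is a minimal sketch in PyTorch of how such an encoder-decoder pair might be wired up for temporal link prediction. The class names (TemporalEncoder, LinkDecoder) and the static embedding table standing in for the encoder are illustrative assumptions, not the actual internals of a TGNN.

```python
import torch
import torch.nn as nn

class TemporalEncoder(nn.Module):
    """Stand-in for a TGNN encoder: (node ids, timestamps) -> node embeddings."""
    def __init__(self, num_nodes: int, dim: int):
        super().__init__()
        # A real TGNN would combine per-node memory with temporal neighborhood
        # information; a static embedding table stands in here to show the interface.
        self.table = nn.Embedding(num_nodes, dim)

    def forward(self, node_ids: torch.Tensor, timestamps: torch.Tensor) -> torch.Tensor:
        return self.table(node_ids)

class LinkDecoder(nn.Module):
    """Task-specific decoder: scores whether an edge (src, dst) exists at time t."""
    def __init__(self, dim: int):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, z_src: torch.Tensor, z_dst: torch.Tensor) -> torch.Tensor:
        return self.mlp(torch.cat([z_src, z_dst], dim=-1))

# Usage: embed two nodes at a timestamp, then score the potential interaction.
encoder, decoder = TemporalEncoder(num_nodes=100, dim=32), LinkDecoder(dim=32)
src, dst, t = torch.tensor([3]), torch.tensor([7]), torch.tensor([12.5])
score = decoder(encoder(src, t), encoder(dst, t))
```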

Figure 1: Computations performed by a Temporal GNN on a batch of time-stamped interactions. The memory is updated, and then the embeddings are produced by the embedding module using the temporal graph and the node's memory.

There are 5 key components of a Temporal GNN (a minimal sketch putting them together follows the list):

  1. Memory. The memory of each visited node is updated after each node event. A node's memory is initialized the first time the node is seen; afterwards, any event involving the node updates its memory. This node-level memory enables the TGNN to memorize long-term dependencies.
  2. Message Function. A message is generated during each event and is used to update the node's memory. The message function can start out as the identity, but it is learnable (e.g., a multilayer perceptron). Events include interactions between two nodes (which generate a message for each node) as well as deletion events.
  3. Message Aggregator. At any given time, multiple events can involve the same node, which can place multiple messages for that node in the same batch. In these cases, the messages for a node at a given time are aggregated, typically with a non-learnable function (e.g., the most recent message or the mean of the messages).
  4. Memory Updater. When a node-level event happens, the node's memory needs to be updated. This is done with a memory updater, a learnable memory update function that can be implemented with Long Short-Term Memory (LSTM) or Gated Recurrent Units (GRUs).
  5. Embedding. Node memory is only updated when a new event occurs, so the memory can become stale if a long time passes between events. To mitigate stale memory, a graph embedding module aggregates information from neighboring nodes that have been active more recently, producing an up-to-date embedding for the node.
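The sketch below shows one plausible way to wire these five components together in PyTorch. It is a simplified illustration under several assumptions: messages are generated only on the source side, the aggregator keeps the most recent message per node, and the memory update is detached from the autograd graph. The class name TinyTGN and all method names are invented for this example, not part of any published implementation.

```python
import torch
import torch.nn as nn

class TinyTGN(nn.Module):
    """Toy illustration of the five TGNN components; not a full implementation."""
    def __init__(self, num_nodes: int, mem_dim: int, emb_dim: int):
        super().__init__()
        # 1. Memory: one state vector per node, zero-initialized at the first event.
        self.register_buffer("memory", torch.zeros(num_nodes, mem_dim))
        self.register_buffer("last_update", torch.zeros(num_nodes))
        # 2. Message function: learnable MLP over (src memory, dst memory, time delta).
        self.msg_fn = nn.Sequential(nn.Linear(2 * mem_dim + 1, mem_dim), nn.ReLU())
        # 4. Memory updater: a GRU cell folds the aggregated message into the memory.
        self.updater = nn.GRUCell(mem_dim, mem_dim)
        # 5. Embedding module: combines a node's memory with its neighbors' memory.
        self.embed = nn.Linear(2 * mem_dim, emb_dim)

    def process_batch(self, src: torch.Tensor, dst: torch.Tensor, t: torch.Tensor):
        """Consume a batch of time-stamped interactions (src -> dst at time t)."""
        delta = (t - self.last_update[src]).unsqueeze(-1)
        # 2. One message per interaction (source side only, for brevity).
        raw = self.msg_fn(torch.cat([self.memory[src], self.memory[dst], delta], dim=-1))
        # 3. Message aggregator: keep the most recent message per node (non-learnable).
        latest = {}
        for i, node in enumerate(src.tolist()):
            latest[node] = raw[i]                  # later messages overwrite earlier ones
        nodes = torch.tensor(list(latest.keys()))
        msgs = torch.stack(list(latest.values()))
        # 4. Memory updater (detached here; real training handles gradients more carefully).
        self.memory[nodes] = self.updater(msgs, self.memory[nodes]).detach()
        self.last_update[src] = t

    def embedding(self, nodes: torch.Tensor, neighbors: torch.Tensor) -> torch.Tensor:
        """5. Embedding: mean of neighbor memories concatenated with the node's own memory."""
        neigh = self.memory[neighbors].mean(dim=1)  # neighbors: [num_nodes, k] node ids
        return self.embed(torch.cat([self.memory[nodes], neigh], dim=-1))

# Usage: update memory with a small batch of interactions, then embed node 0.
model = TinyTGN(num_nodes=10, mem_dim=16, emb_dim=8)
model.process_batch(torch.tensor([0, 2]), torch.tensor([1, 3]), torch.tensor([5.0, 6.0]))
z = model.embedding(torch.tensor([0]), torch.tensor([[1, 2, 3]]))
```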

Temporal GNNs have the ability to outperform previous architectures, and they do so with higher computational efficiency. This efficiency makes TGNNs a practical method for many more data scientists and machine learning engineers in the future. However, there are still many experiments that can be conducted to improve the message functions, message aggregators, embeddings, and more. To learn how to run more experiments like these, sign up now.

Luis Bermudez, AI Developer Advocate

Want more content from SigOpt? Sign up now.