Temporal Graph-Convolutional Layers

Convolutions for time-varying graphs (temporal graphs) such as the TemporalSnapshotsGNNGraph.
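A temporal graph can be built from a vector of snapshot graphs. The sketch below is only illustrative and assumes three snapshots over the same 10 nodes, each carrying 4 node features:

using GNNLux, Lux, Random

rng = Random.default_rng()

# three snapshots over the same 10 nodes, each with 4 node features
snapshots = [rand_graph(rng, 10, 20; ndata = rand(rng, Float32, 4, 10)) for _ in 1:3]
tg = TemporalSnapshotsGNNGraph(snapshots)

tg.num_snapshots   # 3
tg.ndata.x         # vector with one 4×10 feature matrix per snapshot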

Docs

GNNLux.A3TGCN (Type)
A3TGCN(in => out; use_bias = true, init_weight = glorot_uniform, init_state = zeros32, init_bias = zeros32, add_self_loops = false, use_edge_weight = true)

Attention Temporal Graph Convolutional Network (A3T-GCN) model from the paper A3T-GCN: Attention Temporal Graph Convolutional Network for Traffic Forecasting.

Performs a TGCN layer, followed by a soft attention layer.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • use_bias: Add learnable bias. Default true.
  • init_weight: Weights' initializer. Default glorot_uniform.
  • init_state: Initializer for the hidden state of the GRU layer. Default zeros32.
  • init_bias: Bias initializer. Default zeros32.
  • add_self_loops: Add self loops to the graph before performing the convolution. Default false.
  • use_edge_weight: If true, consider the edge weights in the input graph (if available). If add_self_loops=true, the new self-loop weights are set to 1. This option is ignored if the edge_weight is explicitly provided in the forward pass. Default true.

Examples

using GNNLux, Lux, Random

# initialize random number generator
rng = Random.default_rng()

# create data
g = rand_graph(rng, 5, 10)
x = rand(rng, Float32, 2, 5)

# create A3TGCN layer
l = A3TGCN(2 => 6)

# setup layer
ps, st = LuxCore.setup(rng, l)

# forward pass
y, st = l(g, x, ps, st)      # result size (6, 5)
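
Like other Lux layers, the parameters can be differentiated with an AD backend. A minimal sketch, assuming Zygote is available and using the sum of the outputs as a toy loss:

using Zygote

# toy scalar loss: sum of the layer output
loss(ps) = sum(first(l(g, x, ps, st)))
grads = Zygote.gradient(loss, ps)[1]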
GNNLux.EvolveGCNO (Type)
EvolveGCNO(in => out; use_bias = true, init_weight = glorot_uniform, init_state = zeros32, init_bias = zeros32)

Evolving Graph Convolutional Network (EvolveGCNO) layer from the paper EvolveGCN: Evolving Graph Convolutional Networks for Dynamic Graphs.

Performs a graph convolutional layer whose parameters are evolved by a Long Short-Term Memory (LSTM) cell across the snapshots of the temporal graph.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • use_bias: Add learnable bias. Default true.
  • init_weight: Weights' initializer. Default glorot_uniform.
  • init_state: Initializer for the hidden state of the LSTM layer. Default zeros32.
  • init_bias: Bias initializer. Default zeros32.

Examples

using GNNLux, Lux, Random

# initialize random number generator
rng = Random.default_rng()

# create data
tg = TemporalSnapshotsGNNGraph([
    rand_graph(rng, 10, 20; ndata = rand(rng, Float32, 4, 10)),
    rand_graph(rng, 10, 14; ndata = rand(rng, Float32, 4, 10)),
    rand_graph(rng, 10, 22; ndata = rand(rng, Float32, 4, 10))])

# create layer
l = EvolveGCNO(4 => 5)

# setup layer
ps, st = LuxCore.setup(rng, l)

# forward pass
y, st = l(tg, tg.ndata.x, ps, st)      # y is a 3-element vector; each y[t] has size (5, 10)
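
The output contains one feature matrix per snapshot. As an illustration (mean pooling is only an example readout, not part of the layer), each snapshot can be reduced to a graph-level embedding:

using Statistics

# y is a 3-element vector; pool each 5×10 snapshot output over the nodes
pooled = [mean(yt, dims = 2) for yt in y]   # one 5×1 embedding per snapshot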
GNNLux.DCGRU (Method)
DCGRU(in => out, k; use_bias = true, init_weight = glorot_uniform, init_state = zeros32, init_bias = zeros32)

Diffusion Convolutional Recurrent Neural Network (DCGRU) layer from the paper Diffusion Convolutional Recurrent Neural Network: Data-driven Traffic Forecasting.

Performs a Diffusion Convolutional layer to model spatial dependencies, followed by a Gated Recurrent Unit (GRU) cell to model temporal dependencies.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • k: Number of diffusion steps.
  • use_bias: Add learnable bias. Default true.
  • init_weight: Weights' initializer. Default glorot_uniform.
  • init_state: Initializer for the hidden state of the GRU layer. Default zeros32.
  • init_bias: Bias initializer. Default zeros32.

Examples

using GNNLux, Lux, Random

# initialize random number generator
rng = Random.default_rng()

# create data
g = rand_graph(rng, 5, 10)
x = rand(rng, Float32, 2, 5)

# create layer
l = DCGRU(2 => 5, 2)

# setup layer
ps, st = LuxCore.setup(rng, l)

# forward pass
y, st = l(g, x, ps, st)      # result size (5, 5)
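
The layer can be composed with other GNNLux/Lux layers. The sketch below assumes DCGRU composes inside GNNChain like the other GNNLux layers and adds a dense node-level readout:

# hypothetical composition: DCGRU followed by a dense readout
model = GNNChain(DCGRU(2 => 5, 2), Dense(5 => 1))
ps, st = LuxCore.setup(rng, model)
ŷ, st = model(g, x, ps, st)      # result size (1, 5), one value per node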
GNNLux.GConvGRU (Method)
GConvGRU(in => out, k; use_bias = true, init_weight = glorot_uniform, init_state = zeros32, init_bias = zeros32)

Graph Convolutional Gated Recurrent Unit (GConvGRU) recurrent layer from the paper Structured Sequence Modeling with Graph Convolutional Recurrent Networks.

Performs a layer of ChebConv to model spatial dependencies, followed by a Gated Recurrent Unit (GRU) cell to model temporal dependencies.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • k: Chebyshev polynomial order.
  • use_bias: Add learnable bias. Default true.
  • init_weight: Weights' initializer. Default glorot_uniform.
  • init_state: Initializer for the hidden state of the GRU layer. Default zeros32.
  • init_bias: Bias initializer. Default zeros32.

Examples

using GNNLux, Lux, Random

# initialize random number generator
rng = Random.default_rng()

# create data
g = rand_graph(rng, 5, 10)
x = rand(rng, Float32, 2, 5)

# create layer
l = GConvGRU(2 => 5, 2)

# setup layer
ps, st = LuxCore.setup(rng, l)

# forward pass
y, st = l(g, x, ps, st)      # result size (5, 5)
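
For a sequence of feature snapshots on the same graph, the recurrent hidden state is expected to be carried through the layer state st, as is conventional for Lux recurrent layers (stated here as an assumption rather than a guarantee):

# two consecutive time steps on the same graph
x1 = rand(rng, Float32, 2, 5)
x2 = rand(rng, Float32, 2, 5)

y1, st = l(g, x1, ps, st)      # first step
y2, st = l(g, x2, ps, st)      # second step, reusing the returned state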
GNNLux.GConvLSTM (Method)
GConvLSTM(in => out, k; use_bias = true, init_weight = glorot_uniform, init_state = zeros32, init_bias = zeros32)

Graph Convolutional Long Short-Term Memory (GConvLSTM) recurrent layer from the paper Structured Sequence Modeling with Graph Convolutional Recurrent Networks.

Performs a layer of ChebConv to model spatial dependencies, followed by a Long Short-Term Memory (LSTM) cell to model temporal dependencies.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • k: Chebyshev polynomial order.
  • use_bias: Add learnable bias. Default true.
  • init_weight: Weights' initializer. Default glorot_uniform.
  • init_state: Initializer for the hidden state of the LSTM layer. Default zeros32.
  • init_bias: Bias initializer. Default zeros32.

Examples

using GNNLux, Lux, Random

# initialize random number generator
rng = Random.default_rng()

# create data
g = rand_graph(rng, 5, 10)
x = rand(rng, Float32, 2, 5)

# create GConvLSTM layer
l = GConvLSTM(2 => 5, 2)

# setup layer
ps, st = LuxCore.setup(rng, l)

# forward pass
y, st = l(g, x, ps, st)      # result size (5, 5)
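
The Chebyshev order k controls how many hops the spatial convolution aggregates over; a larger k (here 3, chosen only for illustration) widens the receptive field:

# same layer with a wider spatial receptive field (k = 3)
l3 = GConvLSTM(2 => 5, 3)
ps3, st3 = LuxCore.setup(rng, l3)
y3, st3 = l3(g, x, ps3, st3)     # result size (5, 5)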
GNNLux.TGCN (Method)
TGCN(in => out; use_bias = true, init_weight = glorot_uniform, init_state = zeros32, init_bias = zeros32, add_self_loops = false, use_edge_weight = true)

Temporal Graph Convolutional Network (T-GCN) recurrent layer from the paper T-GCN: A Temporal Graph Convolutional Network for Traffic Prediction.

Performs a layer of GCNConv to model spatial dependencies, followed by a Gated Recurrent Unit (GRU) cell to model temporal dependencies.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • use_bias: Add learnable bias. Default true.
  • init_weight: Weights' initializer. Default glorot_uniform.
  • init_state: Initializer for the hidden state of the GRU layer. Default zeros32.
  • init_bias: Bias initializer. Default zeros32.
  • add_self_loops: Add self loops to the graph before performing the convolution. Default false.
  • use_edge_weight: If true, consider the edge weights in the input graph (if available). If add_self_loops=true, the new self-loop weights are set to 1. This option is ignored if the edge_weight is explicitly provided in the forward pass. Default true.

Examples

using GNNLux, Lux, Random

# initialize random number generator
rng = Random.default_rng()

# create data
g = rand_graph(rng, 5, 10)
x = rand(rng, Float32, 2, 5)

# create TGCN layer
tgcn = TGCN(2 => 6)

# setup layer
ps, st = LuxCore.setup(rng, tgcn)

# forward pass
y, st = tgcn(g, x, ps, st)      # result size (6, 5)
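
The keyword arguments documented above can be set at construction; for instance, adding self loops and ignoring edge weights (values chosen only for illustration):

# TGCN with self loops added and edge weights ignored
tgcn2 = TGCN(2 => 6; add_self_loops = true, use_edge_weight = false)
ps2, st2 = LuxCore.setup(rng, tgcn2)
y2, st2 = tgcn2(g, x, ps2, st2)     # result size (6, 5)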