Temporal Graph-Convolutional Layers

Convolutions for time-varying graphs (temporal graphs) such as the `TemporalSnapshotsGNNGraph`.
GraphNeuralNetworks.A3TGCN — Type

    A3TGCN(in => out; [bias, init, init_state, add_self_loops, use_edge_weight])
Attention Temporal Graph Convolutional Network (A3T-GCN) model from the paper A3T-GCN: Attention Temporal Graph Convolutional Network for Traffic Forecasting.
Performs a TGCN layer, followed by a soft attention layer.
Arguments
- `in`: Number of input features.
- `out`: Number of output features.
- `bias`: Add learnable bias. Default `true`.
- `init`: Weights' initializer. Default `glorot_uniform`.
- `init_state`: Initial state of the hidden state of the GRU layer. Default `zeros32`.
- `add_self_loops`: Add self loops to the graph before performing the convolution. Default `false`.
- `use_edge_weight`: If `true`, consider the edge weights in the input graph (if available). If `add_self_loops=true`, the new weights will be set to 1. This option is ignored if `edge_weight` is explicitly provided in the forward pass. Default `false`.
Examples
julia> a3tgcn = A3TGCN(2 => 6)
A3TGCN(2 => 6)
julia> g, x = rand_graph(5, 10), rand(Float32, 2, 5);
julia> y = a3tgcn(g,x);
julia> size(y)
(6, 5)
julia> Flux.reset!(a3tgcn);
julia> y = a3tgcn(rand_graph(5, 10), rand(Float32, 2, 5, 20));
julia> size(y)
(6, 5)
Failing to call `Flux.reset!` when the input batch size changes can lead to unexpected behavior.
GraphNeuralNetworks.EvolveGCNO — Type

    EvolveGCNO(ch; bias = true, init = glorot_uniform, init_state = Flux.zeros32)
Evolving Graph Convolutional Network (EvolveGCNO) layer from the paper EvolveGCN: Evolving Graph Convolutional Networks for Dynamic Graphs.
Performs a graph convolutional layer with parameters derived from a Long Short-Term Memory (LSTM) layer across the snapshots of the temporal graph.
Arguments
- `in`: Number of input features.
- `out`: Number of output features.
- `bias`: Add learnable bias. Default `true`.
- `init`: Weights' initializer. Default `glorot_uniform`.
- `init_state`: Initial state of the hidden state of the LSTM layer. Default `zeros32`.
Examples
julia> tg = TemporalSnapshotsGNNGraph([rand_graph(10,20; ndata = rand(4,10)), rand_graph(10,14; ndata = rand(4,10)), rand_graph(10,22; ndata = rand(4,10))])
TemporalSnapshotsGNNGraph:
num_nodes: [10, 10, 10]
num_edges: [20, 14, 22]
num_snapshots: 3
julia> ev = EvolveGCNO(4 => 5)
EvolveGCNO(4 => 5)
julia> size(ev(tg, tg.ndata.x))
(3,)
julia> size(ev(tg, tg.ndata.x)[1])
(5, 10)
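As the REPL output shows, the layer returns one feature matrix per snapshot. When a dense tensor is more convenient downstream, the per-snapshot results can be stacked; a minimal sketch, re-creating a temporal graph like the one above:

```julia
using GraphNeuralNetworks, Flux

# Three-snapshot temporal graph: 10 nodes and 4 node features per snapshot.
tg = TemporalSnapshotsGNNGraph(
    [rand_graph(10, 20; ndata = rand(Float32, 4, 10)) for _ in 1:3])

ev = EvolveGCNO(4 => 5)

outs = ev(tg, tg.ndata.x)   # vector of 3 matrices, each of size (5, 10)

# Stack into a single 5×10×3 array, one slice per snapshot.
h = cat(outs...; dims = 3)
```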
GraphNeuralNetworks.DCGRU — Method

    DCGRU(in => out, k, n; [bias, init, init_state])
Diffusion Convolutional Recurrent Neural Network (DCGRU) layer from the paper Diffusion Convolutional Recurrent Neural Network: Data-driven Traffic Forecasting.
Performs a Diffusion Convolutional layer to model spatial dependencies, followed by a Gated Recurrent Unit (GRU) cell to model temporal dependencies.
Arguments
- `in`: Number of input features.
- `out`: Number of output features.
- `k`: Diffusion step.
- `n`: Number of nodes in the graph.
- `bias`: Add learnable bias. Default `true`.
- `init`: Weights' initializer. Default `glorot_uniform`.
- `init_state`: Initial state of the hidden state of the GRU cell. Default `zeros32`.
Examples
julia> g1, x1 = rand_graph(5, 10), rand(Float32, 2, 5);
julia> dcgru = DCGRU(2 => 5, 2, g1.num_nodes);
julia> y = dcgru(g1, x1);
julia> size(y)
(5, 5)
julia> g2, x2 = rand_graph(5, 10), rand(Float32, 2, 5, 30);
julia> z = dcgru(g2, x2);
julia> size(z)
(5, 5, 30)
GraphNeuralNetworks.GConvGRU — Method

    GConvGRU(in => out, k, n; [bias, init, init_state])
Graph Convolutional Gated Recurrent Unit (GConvGRU) recurrent layer from the paper Structured Sequence Modeling with Graph Convolutional Recurrent Networks.
Performs a layer of ChebConv to model spatial dependencies, followed by a Gated Recurrent Unit (GRU) cell to model temporal dependencies.
Arguments
- `in`: Number of input features.
- `out`: Number of output features.
- `k`: Chebyshev polynomial order.
- `n`: Number of nodes in the graph.
- `bias`: Add learnable bias. Default `true`.
- `init`: Weights' initializer. Default `glorot_uniform`.
- `init_state`: Initial state of the hidden state of the GRU cell. Default `zeros32`.
Examples
julia> g1, x1 = rand_graph(5, 10), rand(Float32, 2, 5);
julia> ggru = GConvGRU(2 => 5, 2, g1.num_nodes);
julia> y = ggru(g1, x1);
julia> size(y)
(5, 5)
julia> g2, x2 = rand_graph(5, 10), rand(Float32, 2, 5, 30);
julia> z = ggru(g2, x2);
julia> size(z)
(5, 5, 30)
GraphNeuralNetworks.GConvLSTM — Method

    GConvLSTM(in => out, k, n; [bias, init, init_state])
Graph Convolutional Long Short-Term Memory (GConvLSTM) recurrent layer from the paper Structured Sequence Modeling with Graph Convolutional Recurrent Networks.
Performs a layer of ChebConv to model spatial dependencies, followed by a Long Short-Term Memory (LSTM) cell to model temporal dependencies.
Arguments
- `in`: Number of input features.
- `out`: Number of output features.
- `k`: Chebyshev polynomial order.
- `n`: Number of nodes in the graph.
- `bias`: Add learnable bias. Default `true`.
- `init`: Weights' initializer. Default `glorot_uniform`.
- `init_state`: Initial state of the hidden state of the LSTM cell. Default `zeros32`.
Examples
julia> g1, x1 = rand_graph(5, 10), rand(Float32, 2, 5);
julia> gclstm = GConvLSTM(2 => 5, 2, g1.num_nodes);
julia> y = gclstm(g1, x1);
julia> size(y)
(5, 5)
julia> g2, x2 = rand_graph(5, 10), rand(Float32, 2, 5, 30);
julia> z = gclstm(g2, x2);
julia> size(z)
(5, 5, 30)
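Because the layer is recurrent, its hidden state carries over between calls, so a sequence of feature snapshots on the same graph can also be fed one step at a time. A minimal sketch, assuming `Flux.reset!` applies to this layer as to the other recurrent layers on this page:

```julia
using GraphNeuralNetworks, Flux

g = rand_graph(5, 10)
gclstm = GConvLSTM(2 => 5, 2, g.num_nodes)

# Four feature snapshots on the same graph, processed sequentially;
# the LSTM hidden state is carried across the calls.
xs = [rand(Float32, 2, 5) for _ in 1:4]
Flux.reset!(gclstm)
ys = [gclstm(g, x) for x in xs]   # each element has size (5, 5)
```

The final element of `ys` then reflects the whole sequence seen so far, which is the usual way these recurrent graph layers are used for step-by-step forecasting.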
GraphNeuralNetworks.TGCN — Method

    TGCN(in => out; [bias, init, init_state, add_self_loops, use_edge_weight])
Temporal Graph Convolutional Network (T-GCN) recurrent layer from the paper T-GCN: A Temporal Graph Convolutional Network for Traffic Prediction.
Performs a layer of GCNConv to model spatial dependencies, followed by a Gated Recurrent Unit (GRU) cell to model temporal dependencies.
Arguments
- `in`: Number of input features.
- `out`: Number of output features.
- `bias`: Add learnable bias. Default `true`.
- `init`: Weights' initializer. Default `glorot_uniform`.
- `init_state`: Initial state of the hidden state of the GRU layer. Default `zeros32`.
- `add_self_loops`: Add self loops to the graph before performing the convolution. Default `false`.
- `use_edge_weight`: If `true`, consider the edge weights in the input graph (if available). If `add_self_loops=true`, the new weights will be set to 1. This option is ignored if `edge_weight` is explicitly provided in the forward pass. Default `false`.
Examples
julia> tgcn = TGCN(2 => 6)
Recur(
TGCNCell(
GCNConv(2 => 6, σ), # 18 parameters
GRUv3Cell(6 => 6), # 240 parameters
Float32[0.0; 0.0; … ; 0.0; 0.0;;], # 6 parameters (all zero)
2,
6,
),
) # Total: 8 trainable arrays, 264 parameters,
# plus 1 non-trainable, 6 parameters, summarysize 1.492 KiB.
julia> g, x = rand_graph(5, 10), rand(Float32, 2, 5);
julia> y = tgcn(g, x);
julia> size(y)
(6, 5)
julia> Flux.reset!(tgcn);
julia> tgcn(rand_graph(5, 10), rand(Float32, 2, 5, 20)) |> size # batch size of 20
(6, 5, 20)
Failing to call `Flux.reset!` when the input batch size changes can lead to unexpected behavior.
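To show how such a recurrent layer fits into an ordinary Flux training loop, here is a minimal sketch using standard Flux machinery; `target` is a hypothetical per-node regression label matrix, not part of the API, and the loop assumes a Flux version providing `Flux.setup`/`Flux.withgradient`:

```julia
using GraphNeuralNetworks, Flux

g = rand_graph(5, 10)
x = rand(Float32, 2, 5)        # 2 input features, 5 nodes
target = rand(Float32, 6, 5)   # hypothetical per-node regression targets

tgcn = TGCN(2 => 6)
opt = Flux.setup(Adam(1.0f-2), tgcn)

for epoch in 1:10
    Flux.reset!(tgcn)          # clear the GRU state before each pass over the data
    loss, grads = Flux.withgradient(tgcn) do m
        Flux.mse(m(g, x), target)
    end
    Flux.update!(opt, tgcn, grads[1])
end
```

Resetting inside the loop keeps each epoch's hidden state independent, mirroring the `Flux.reset!` warning above.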