Temporal Graph-Convolutional Layers
Convolutions for time-varying graphs (temporal graphs), such as the TemporalSnapshotsGNNGraph.
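For example, a temporal graph can be assembled from a vector of snapshot graphs. A minimal sketch (random graphs stand in for real data here):
julia> using GraphNeuralNetworks
julia> snapshots = [rand_graph(10, 2t) for t in 1:5];  # 5 timesteps, varying number of edges
julia> tg = TemporalSnapshotsGNNGraph(snapshots);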
GraphNeuralNetworks.DCGRUCell
— Type
DCGRUCell(in => out, k; [bias, init])
Diffusion Convolutional Recurrent Neural Network (DCGRU) cell from the paper Diffusion Convolutional Recurrent Neural Network: Data-driven Traffic Forecasting.
Applies a DConv layer to model spatial dependencies, in combination with a Gated Recurrent Unit (GRU) cell to model temporal dependencies.
Arguments
- in: Number of input node features.
- out: Number of output node features.
- k: Diffusion step for the DConv.
- bias: Add learnable bias. Default true.
- init: Convolution weights' initializer. Default glorot_uniform.
Forward
cell(g::GNNGraph, x, [h])
- g: The input graph.
- x: The node features. It should be a matrix of size in x num_nodes.
- h: The current state of the GRU cell. It is a matrix of size out x num_nodes. If not provided, it is assumed to be a matrix of zeros.
Performs one recurrence step and returns a tuple (h, h), where h is the updated hidden state of the GRU cell.
Examples
julia> using GraphNeuralNetworks, Flux
julia> num_nodes, num_edges = 5, 10;
julia> d_in, d_out = 2, 3;
julia> timesteps = 5;
julia> g = rand_graph(num_nodes, num_edges);
julia> x = [rand(Float32, d_in, num_nodes) for t in 1:timesteps];
julia> cell = DCGRUCell(d_in => d_out, 2);
julia> state = Flux.initialstates(cell);
julia> y = state;
julia> for xt in x
y, state = cell(g, xt, state)
end
julia> size(y) # (d_out, num_nodes)
(3, 5)
GraphNeuralNetworks.EvolveGCNOCell
— Type" EvolveGCNOCell(in => out; bias = true, init = glorot_uniform)
Evolving Graph Convolutional Network cell of type "-O" from the paper EvolveGCN: Evolving Graph Convolutional Networks for Dynamic Graphs.
Uses a GCNConv layer to model spatial dependencies and an LSTMCell to model temporal dependencies. It can work with time-varying graphs and node features.
Arguments
- in => out: A pair where in is the number of input node features and out is the number of output node features.
- bias: Add learnable bias for the convolution and the LSTM cell. Default true.
- init: Weights' initializer for the convolution. Default glorot_uniform.
Forward
cell(g::GNNGraph, x, [state]) -> x, state
- g: The input graph.
- x: The node features. It should be a matrix of size in x num_nodes.
- state: The current state of the cell. A state is a tuple (weight, lstm) where weight is the convolution's weight and lstm is the LSTM's state. If not provided, it is generated by calling Flux.initialstates(cell).
Returns the updated node features x and the updated state.
Examples
julia> using GraphNeuralNetworks, Flux
julia> num_nodes, num_edges = 5, 10;
julia> d_in, d_out = 2, 3;
julia> timesteps = 5;
julia> g = [rand_graph(num_nodes, num_edges) for t in 1:timesteps];
julia> x = [rand(Float32, d_in, num_nodes) for t in 1:timesteps];
julia> cell1 = EvolveGCNOCell(d_in => d_out)
EvolveGCNOCell(2 => 3) # 321 parameters
julia> cell2 = EvolveGCNOCell(d_out => d_out)
EvolveGCNOCell(3 => 3) # 696 parameters
julia> state1 = Flux.initialstates(cell1);
julia> state2 = Flux.initialstates(cell2);
julia> outputs = [];
julia> for t in 1:timesteps
zt, state1 = cell1(g[t], x[t], state1)
yt, state2 = cell2(g[t], zt, state2)
push!(outputs, yt)
end
julia> size(outputs[end]) # (d_out, num_nodes)
(3, 5)
GraphNeuralNetworks.GConvGRUCell
— Type
GConvGRUCell(in => out, k; [bias, init])
Graph Convolutional Gated Recurrent Unit (GConvGRU) recurrent cell from the paper Structured Sequence Modeling with Graph Convolutional Recurrent Networks.
Uses ChebConv to model spatial dependencies, followed by a Gated Recurrent Unit (GRU) cell to model temporal dependencies.
Arguments
- in => out: A pair where in is the number of input node features and out is the number of output node features.
- k: Chebyshev polynomial order.
- bias: Add learnable bias. Default true.
- init: Weights' initializer. Default glorot_uniform.
Forward
cell(g::GNNGraph, x, [h])
- g: The input graph.
- x: The node features. It should be a matrix of size in x num_nodes.
- h: The current hidden state of the GRU cell. If given, it is a matrix of size out x num_nodes. If not provided, it is assumed to be a matrix of zeros.
Performs one recurrence step and returns a tuple (h, h), where h is the updated hidden state of the GRU cell.
Examples
julia> using GraphNeuralNetworks, Flux
julia> num_nodes, num_edges = 5, 10;
julia> d_in, d_out = 2, 3;
julia> timesteps = 5;
julia> g = rand_graph(num_nodes, num_edges);
julia> x = [rand(Float32, d_in, num_nodes) for t in 1:timesteps];
julia> cell = GConvGRUCell(d_in => d_out, 2);
julia> state = Flux.initialstates(cell);
julia> y = state;
julia> for xt in x
y, state = cell(g, xt, state)
end
julia> size(y) # (d_out, num_nodes)
(3, 5)
GraphNeuralNetworks.GConvLSTMCell
— Type
GConvLSTMCell(in => out, k; [bias, init])
Graph Convolutional LSTM recurrent cell from the paper Structured Sequence Modeling with Graph Convolutional Recurrent Networks.
Uses ChebConv to model spatial dependencies, followed by a Long Short-Term Memory (LSTM) cell to model temporal dependencies.
Arguments
- in => out: A pair where in is the number of input node features and out is the number of output node features.
- k: Chebyshev polynomial order.
- bias: Add learnable bias. Default true.
- init: Weights' initializer. Default glorot_uniform.
Forward
cell(g::GNNGraph, x, [state])
- g: The input graph.
- x: The node features. It should be a matrix of size in x num_nodes.
- state: The current state of the LSTM cell. If given, it is a tuple (h, c) where both h and c are arrays of size out x num_nodes. If not provided, it is assumed to be a tuple of matrices of zeros.
Performs one recurrence step and returns a tuple (output, state), where output is the updated hidden state h of the LSTM cell and state is the updated tuple (h, c).
Examples
julia> using GraphNeuralNetworks, Flux
julia> num_nodes, num_edges = 5, 10;
julia> d_in, d_out = 2, 3;
julia> timesteps = 5;
julia> g = rand_graph(num_nodes, num_edges);
julia> x = [rand(Float32, d_in, num_nodes) for t in 1:timesteps];
julia> cell = GConvLSTMCell(d_in => d_out, 2);
julia> state = Flux.initialstates(cell);
julia> y = state[1];
julia> for xt in x
y, state = cell(g, xt, state)
end
julia> size(y) # (d_out, num_nodes)
(3, 5)
GraphNeuralNetworks.GNNRecurrence
— Type
GNNRecurrence(cell)
Construct a recurrent layer that applies the graph recurrent cell forward multiple times to process an entire temporal sequence of node features at once.
The cell has to satisfy the following interface for the forward pass: yt, state = cell(g, xt, state), where xt is the input node features, yt is the updated node features, and state is the cell state to be updated.
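Any object implementing this interface can be wrapped. As an illustration, here is a minimal sketch of a hypothetical custom cell (MyGCNCell is not part of the package): it applies a GCNConv at each timestep and carries no recurrent state.
julia> using GraphNeuralNetworks, Flux
julia> struct MyGCNCell{C}  # hypothetical cell: spatial update only
           conv::C
       end
julia> Flux.@layer MyGCNCell
julia> Flux.initialstates(::MyGCNCell) = nothing;  # no state to initialize
julia> function (cell::MyGCNCell)(g::GNNGraph, xt, state)
           yt = cell.conv(g, xt)  # update the node features
           return yt, state      # conform to yt, state = cell(g, xt, state)
       end;
julia> layer = GNNRecurrence(MyGCNCell(GCNConv(2 => 3)));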
Forward
layer(g, x, [state])
Applies the recurrent cell to each timestep of the input sequence.
Arguments
- g: The input GNNGraph or TemporalSnapshotsGNNGraph.
  - If GNNGraph, the same graph is used for all timesteps.
  - If TemporalSnapshotsGNNGraph, a different graph is used for each timestep. Not all cells support this.
- x: The time-varying node features.
  - If g is GNNGraph, it is an array of size in x timesteps x num_nodes.
  - If g is TemporalSnapshotsGNNGraph, it is a vector of length timesteps, with element t of size in x num_nodes_t.
- state: The initial state for the cell. If not provided, it is generated by calling Flux.initialstates(cell).
Return
Returns the updated node features:
- If g is GNNGraph, returns an array of size out_features x timesteps x num_nodes.
- If g is TemporalSnapshotsGNNGraph, returns a vector of length timesteps, with element t of size out_features x num_nodes_t.
Examples
The following example considers a static graph and time-varying node features.
julia> using GraphNeuralNetworks, Flux
julia> num_nodes, num_edges = 5, 10;
julia> d_in, d_out = 2, 3;
julia> timesteps = 5;
julia> g = rand_graph(num_nodes, num_edges)
GNNGraph:
  num_nodes: 5
  num_edges: 10
julia> x = rand(Float32, d_in, timesteps, num_nodes);
julia> cell = GConvLSTMCell(d_in => d_out, 2)
GConvLSTMCell(2 => 3, 2) # 168 parameters
julia> layer = GNNRecurrence(cell)
GNNRecurrence(
GConvLSTMCell(2 => 3, 2), # 168 parameters
) # Total: 24 arrays, 168 parameters, 2.023 KiB.
julia> y = layer(g, x);
julia> size(y) # (d_out, timesteps, num_nodes)
(3, 5, 5)
Now consider a time-varying graph and time-varying node features.
julia> d_in, d_out = 2, 3;
julia> timesteps = 5;
julia> num_nodes = [10, 10, 10, 10, 10];
julia> num_edges = [10, 12, 14, 16, 18];
julia> snapshots = [rand_graph(n, m) for (n, m) in zip(num_nodes, num_edges)];
julia> tg = TemporalSnapshotsGNNGraph(snapshots);
julia> x = [rand(Float32, d_in, n) for n in num_nodes];
julia> cell = EvolveGCNOCell(d_in => d_out)
EvolveGCNOCell(2 => 3) # 321 parameters
julia> layer = GNNRecurrence(cell)
GNNRecurrence(
EvolveGCNOCell(2 => 3), # 321 parameters
) # Total: 5 arrays, 321 parameters, 1.535 KiB.
julia> y = layer(tg, x);
julia> length(y) # timesteps
5
julia> size(y[end]) # (d_out, num_nodes[end])
(3, 10)
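Since GNNRecurrence is an ordinary Flux layer, it can be trained by differentiating through the whole unrolled sequence. A minimal single-training-step sketch, assuming Flux's explicit-gradient API and hypothetical random regression targets:
julia> using GraphNeuralNetworks, Flux
julia> g = rand_graph(5, 10);
julia> x = rand(Float32, 2, 5, 5);         # in x timesteps x num_nodes
julia> y_target = rand(Float32, 3, 5, 5);  # hypothetical targets
julia> layer = GNNRecurrence(GConvGRUCell(2 => 3, 2));
julia> opt_state = Flux.setup(Adam(1.0f-3), layer);
julia> loss(m) = Flux.mse(m(g, x), y_target);
julia> grads = Flux.gradient(loss, layer)[1];
julia> Flux.update!(opt_state, layer, grads);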
GraphNeuralNetworks.TGCNCell
— Type
TGCNCell(in => out; kws...)
Recurrent graph convolutional cell from the paper T-GCN: A Temporal Graph Convolutional Network for Traffic Prediction.
Uses two stacked GCNConv layers to model spatial dependencies, and a GRU mechanism to model temporal dependencies.
in and out are the number of input and output node features, respectively. The keyword arguments are passed to the GCNConv constructor.
Forward
cell(g::GNNGraph, x, [state])
- g: The input graph.
- x: The node features. It should be a matrix of size in x num_nodes.
- state: The current state of the cell. If not provided, it is generated by calling Flux.initialstates(cell). The state is a matrix of size out x num_nodes.
Returns the updated node features and the updated state.
Examples
julia> using GraphNeuralNetworks, Flux
julia> num_nodes, num_edges = 5, 10;
julia> d_in, d_out = 2, 3;
julia> timesteps = 5;
julia> g = rand_graph(num_nodes, num_edges);
julia> x = [rand(Float32, d_in, num_nodes) for t in 1:timesteps];
julia> cell = TGCNCell(d_in => d_out);
julia> state = Flux.initialstates(cell);
julia> y = state;
julia> for xt in x
y, state = cell(g, xt, state)
end
julia> size(y) # (d_out, num_nodes)
(3, 5)
GraphNeuralNetworks.DCGRU
— Method
DCGRU(args...; kws...)
Construct a recurrent layer corresponding to the DCGRUCell cell. It can be used to process an entire temporal sequence of node features at once.
The arguments are passed to the DCGRUCell constructor. See GNNRecurrence for more details.
Examples
julia> using GraphNeuralNetworks, Flux
julia> num_nodes, num_edges = 5, 10;
julia> d_in, d_out = 2, 3;
julia> timesteps = 5;
julia> g = rand_graph(num_nodes, num_edges);
julia> x = rand(Float32, d_in, timesteps, num_nodes);
julia> layer = DCGRU(d_in => d_out, 2)
GNNRecurrence(
DCGRUCell(2 => 3, 2), # 189 parameters
) # Total: 6 arrays, 189 parameters, 1.184 KiB.
julia> y = layer(g, x);
julia> size(y) # (d_out, timesteps, num_nodes)
(3, 5, 5)
GraphNeuralNetworks.EvolveGCNO
— Method
EvolveGCNO(args...; kws...)
Construct a recurrent layer corresponding to the EvolveGCNOCell cell. It can be used to process an entire temporal sequence of graphs and node features at once.
The arguments are passed to the EvolveGCNOCell constructor. See GNNRecurrence for more details.
Examples
julia> using GraphNeuralNetworks, Flux
julia> d_in, d_out = 2, 3;
julia> timesteps = 5;
julia> num_nodes = [10, 10, 10, 10, 10];
julia> num_edges = [10, 12, 14, 16, 18];
julia> snapshots = [rand_graph(n, m) for (n, m) in zip(num_nodes, num_edges)];
julia> tg = TemporalSnapshotsGNNGraph(snapshots);
julia> x = [rand(Float32, d_in, n) for n in num_nodes];
julia> layer = EvolveGCNO(d_in => d_out)
GNNRecurrence(
EvolveGCNOCell(2 => 3), # 321 parameters
) # Total: 5 arrays, 321 parameters, 1.535 KiB.
julia> y = layer(tg, x);
julia> length(y) # timesteps
5
julia> size(y[end]) # (d_out, num_nodes[end])
(3, 10)
GraphNeuralNetworks.GConvGRU
— Method
GConvGRU(args...; kws...)
Construct a recurrent layer corresponding to the GConvGRUCell cell. It can be used to process an entire temporal sequence of node features at once.
The arguments are passed to the GConvGRUCell constructor. See GNNRecurrence for more details.
Examples
julia> using GraphNeuralNetworks, Flux
julia> num_nodes, num_edges = 5, 10;
julia> d_in, d_out = 2, 3;
julia> timesteps = 5;
julia> g = rand_graph(num_nodes, num_edges);
julia> x = rand(Float32, d_in, timesteps, num_nodes);
julia> layer = GConvGRU(d_in => d_out, 2)
GNNRecurrence(
GConvGRUCell(2 => 3, 2), # 108 parameters
) # Total: 12 arrays, 108 parameters, 1.148 KiB.
julia> y = layer(g, x);
julia> size(y) # (d_out, timesteps, num_nodes)
(3, 5, 5)
GraphNeuralNetworks.GConvLSTM
— Method
GConvLSTM(args...; kws...)
Construct a recurrent layer corresponding to the GConvLSTMCell cell. It can be used to process an entire temporal sequence of node features at once.
The arguments are passed to the GConvLSTMCell constructor. See GNNRecurrence for more details.
Examples
julia> using GraphNeuralNetworks, Flux
julia> num_nodes, num_edges = 5, 10;
julia> d_in, d_out = 2, 3;
julia> timesteps = 5;
julia> g = rand_graph(num_nodes, num_edges);
julia> x = rand(Float32, d_in, timesteps, num_nodes);
julia> layer = GConvLSTM(d_in => d_out, 2)
GNNRecurrence(
GConvLSTMCell(2 => 3, 2), # 168 parameters
) # Total: 24 arrays, 168 parameters, 2.023 KiB.
julia> y = layer(g, x);
julia> size(y) # (d_out, timesteps, num_nodes)
(3, 5, 5)
GraphNeuralNetworks.TGCN
— Method
TGCN(args...; kws...)
Construct a recurrent layer corresponding to the TGCNCell cell.
The arguments are passed to the TGCNCell constructor. See GNNRecurrence for more details.
Examples
julia> using GraphNeuralNetworks, Flux
julia> num_nodes, num_edges = 5, 10;
julia> d_in, d_out = 2, 3;
julia> timesteps = 5;
julia> g = rand_graph(num_nodes, num_edges);
julia> x = rand(Float32, d_in, timesteps, num_nodes);
julia> layer = TGCN(d_in => d_out);
julia> y = layer(g, x);
julia> size(y) # (d_out, timesteps, num_nodes)
(3, 5, 5)