Convolutional Layers
Many different types of graph convolutional layers have been proposed in the literature, and choosing the right layer for your application may require some exploration. Multiple graph convolutional layers are typically stacked together to create a graph neural network model (see GNNChain).
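As a minimal sketch of such stacking (assuming GNNChain composes GNNLux layers the same way Lux.Chain composes Lux layers, with the graph passed as the first argument, as in the examples further below):
using GNNLux, Lux, Random

rng = Random.default_rng()

# a small random graph with 3-dimensional node features
g = rand_graph(rng, 5, 6)
x = randn(rng, Float32, 3, g.num_nodes)

# stack two graph convolutional layers into a single model
model = GNNChain(GCNConv(3 => 16, relu),
                 GCNConv(16 => 5))

ps, st = LuxCore.setup(rng, model)
y, st = model(g, x, ps, st) # size: (5, num_nodes)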
The table below lists all graph convolutional layers implemented in GNNLux.jl. It also highlights the presence of some additional capabilities with respect to basic message passing:
- Sparse Ops: implements message passing as multiplication by a sparse adjacency matrix instead of the gather/scatter mechanism. This can lead to better CPU performance but is not supported on GPU yet.
- Edge Weight: supports scalar weights (or equivalently scalar features) on edges.
- Edge Features: supports feature vectors on edges.
- Heterograph: supports heterogeneous graphs (see GNNHeteroGraph).
- TemporalSnapshotsGNNGraphs: supports temporal graphs (see TemporalSnapshotsGNNGraph) by applying the convolution layers to each snapshot independently.
Layer | Sparse Ops | Edge Weight | Edge Features | Heterograph | TemporalSnapshotsGNNGraphs |
---|---|---|---|---|---|
AGNNConv | | | ✓ | | |
CGConv | | | ✓ | ✓ | ✓ |
ChebConv | | | | | ✓ |
EGNNConv | | | ✓ | | |
EdgeConv | | | | | ✓ |
GATConv | | | ✓ | ✓ | ✓ |
GATv2Conv | | | ✓ | ✓ | ✓ |
GatedGraphConv | ✓ | | | | ✓ |
GCNConv | ✓ | ✓ | | | ✓ |
GINConv | ✓ | | | ✓ | ✓ |
GMMConv | | | ✓ | | |
GraphConv | ✓ | | | ✓ | ✓ |
MEGNetConv | | | ✓ | | |
NNConv | | | ✓ | | |
ResGatedGraphConv | | | | ✓ | ✓ |
SAGEConv | ✓ | | | ✓ | ✓ |
SGConv | ✓ | | | | ✓ |
Docs
GNNLux.AGNNConv
— Type
AGNNConv(; init_beta=1.0f0, trainable=true, add_self_loops=true)
Attention-based Graph Neural Network layer from paper Attention-based Graph Neural Network for Semi-Supervised Learning.
The forward pass is given by
\[\mathbf{x}_i' = \sum_{j \in N(i)} \alpha_{ij} \mathbf{x}_j\]
where the attention coefficients $\alpha_{ij}$ are given by
\[\alpha_{ij} =\frac{e^{\beta \cos(\mathbf{x}_i, \mathbf{x}_j)}} {\sum_{j'}e^{\beta \cos(\mathbf{x}_i, \mathbf{x}_{j'})}}\]
with the cosine similarity defined by
\[\cos(\mathbf{x}_i, \mathbf{x}_j) = \frac{\mathbf{x}_i \cdot \mathbf{x}_j}{\lVert\mathbf{x}_i\rVert \lVert\mathbf{x}_j\rVert}\]
and $\beta$ a trainable parameter if trainable=true.
Arguments
- init_beta: The initial value of $\beta$. Default 1.0f0.
- trainable: If true, $\beta$ is trainable. Default true.
- add_self_loops: Add self loops to the graph before performing the convolution. Default true.
Examples:
using GNNLux, Lux, Random
# initialize random number generator
rng = Random.default_rng()
# create data
s = [1,1,2,3]
t = [2,3,1,1]
g = GNNGraph(s, t)
x = randn(rng, Float32, 3, g.num_nodes)
# create layer
l = AGNNConv(init_beta=2.0f0)
# setup layer
ps, st = LuxCore.setup(rng, l)
# forward pass
y, st = l(g, x, ps, st)
GNNLux.CGConv
— Type
CGConv((in, ein) => out, act = identity; residual = false,
use_bias = true, init_weight = glorot_uniform, init_bias = zeros32)
CGConv(in => out, ...)
The crystal graph convolutional layer from the paper Crystal Graph Convolutional Neural Networks for an Accurate and Interpretable Prediction of Material Properties. Performs the operation
\[\mathbf{x}_i' = \mathbf{x}_i + \sum_{j\in N(i)}\sigma(W_f \mathbf{z}_{ij} + \mathbf{b}_f)\, act(W_s \mathbf{z}_{ij} + \mathbf{b}_s)\]
where $\mathbf{z}_{ij}$ is the node and edge features concatenation $[\mathbf{x}_i; \mathbf{x}_j; \mathbf{e}_{j\to i}]$ and $\sigma$ is the sigmoid function. The residual $\mathbf{x}_i$ is added only if residual=true and the output size is the same as the input size.
Arguments
- in: The dimension of input node features.
- ein: The dimension of input edge features. If ein is not given, assumes that no edge features are passed as input in the forward pass.
- out: The dimension of output node features.
- act: Activation function.
- residual: Add a residual connection.
- init_weight: Weights' initializer. Default glorot_uniform.
- init_bias: Bias initializer. Default zeros32.
- use_bias: Add learnable bias. Default true.
Examples
using GNNLux, Lux, Random
# initialize random number generator
rng = Random.default_rng()
# create random graph
g = rand_graph(rng, 5, 6)
x = rand(rng, Float32, 2, g.num_nodes)
e = rand(rng, Float32, 3, g.num_edges)
l = CGConv((2, 3) => 4, tanh)
# setup layer
ps, st = LuxCore.setup(rng, l)
# forward pass
y, st = l(g, x, e, ps, st) # size: (4, num_nodes)
# No edge features
l = CGConv(2 => 4, tanh)
ps, st = LuxCore.setup(rng, l)
y, st = l(g, x, ps, st) # size: (4, num_nodes)
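Since the residual connection requires the output size to equal the input node feature size, a minimal sketch of a residual CGConv on the same data might look like this (the output dimension is simply set to match the input):
# residual connection: output size must equal the input node feature size (here 2)
l = CGConv((2, 3) => 2, tanh, residual = true)
ps, st = LuxCore.setup(rng, l)
y, st = l(g, x, e, ps, st) # size: (2, num_nodes)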
GNNLux.ChebConv
— Type
ChebConv(in => out, k; init_weight = glorot_uniform, init_bias = zeros32, use_bias = true)
Chebyshev spectral graph convolutional layer from paper Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering.
Implements
\[X' = \sum^{K-1}_{k=0} W^{(k)} Z^{(k)}\]
where $Z^{(k)}$ is the $k$-th term of Chebyshev polynomials, and can be calculated by the following recursive form:
\[\begin{aligned} Z^{(0)} &= X \\ Z^{(1)} &= \hat{L} X \\ Z^{(k)} &= 2 \hat{L} Z^{(k-1)} - Z^{(k-2)} \end{aligned}\]
with $\hat{L}$ the scaled_laplacian.
Arguments
- in: The dimension of input features.
- out: The dimension of output features.
- k: The order of the Chebyshev polynomial.
- init_weight: Weights' initializer. Default glorot_uniform.
- init_bias: Bias initializer. Default zeros32.
- use_bias: Add learnable bias. Default true.
Examples
using GNNLux, Lux, Random
# initialize random number generator
rng = Random.default_rng()
# create data
s = [1,1,2,3]
t = [2,3,1,1]
g = GNNGraph(s, t)
x = randn(rng, Float32, 3, g.num_nodes)
# create layer
l = ChebConv(3 => 5, 5)
# setup layer
ps, st = LuxCore.setup(rng, l)
# forward pass
y, st = l(g, x, ps, st) # size of the output y: 5 × num_nodes
GNNLux.DConv
— Type
DConv(in => out, k; init_weight = glorot_uniform, init_bias = zeros32, use_bias = true)
Diffusion convolution layer from the paper Diffusion Convolutional Recurrent Neural Networks: Data-Driven Traffic Forecasting.
Arguments
- in: The dimension of input features.
- out: The dimension of output features.
- k: Number of diffusion steps.
- init_weight: Weights' initializer. Default glorot_uniform.
- init_bias: Bias initializer. Default zeros32.
- use_bias: Add learnable bias. Default true.
Examples
using GNNLux, Lux, Random
# initialize random number generator
rng = Random.default_rng()
# create random graph
g = GNNGraph(rand(rng, 10, 10), ndata = rand(rng, Float32, 2, 10))
dconv = DConv(2 => 4, 4)
# setup layer
ps, st = LuxCore.setup(rng, dconv)
# forward pass
y, st = dconv(g, g.ndata.x, ps, st) # size: (4, num_nodes)
GNNLux.EGNNConv
— Type
EGNNConv((in, ein) => out; hidden_size=2in, residual=false)
EGNNConv(in => out; hidden_size=2in, residual=false)
Equivariant Graph Convolutional Layer from E(n) Equivariant Graph Neural Networks.
The layer performs the following operation:
\[\begin{aligned} \mathbf{m}_{j\to i} &=\phi_e(\mathbf{h}_i, \mathbf{h}_j, \lVert\mathbf{x}_i-\mathbf{x}_j\rVert^2, \mathbf{e}_{j\to i}),\\ \mathbf{x}_i' &= \mathbf{x}_i + C_i\sum_{j\in\mathcal{N}(i)}(\mathbf{x}_i-\mathbf{x}_j)\phi_x(\mathbf{m}_{j\to i}),\\ \mathbf{m}_i &= C_i\sum_{j\in\mathcal{N}(i)} \mathbf{m}_{j\to i},\\ \mathbf{h}_i' &= \mathbf{h}_i + \phi_h(\mathbf{h}_i, \mathbf{m}_i) \end{aligned}\]
where $\mathbf{h}_i$, $\mathbf{x}_i$, $\mathbf{e}_{j\to i}$ are invariant node features, equivariant node features, and edge features respectively. $\phi_e$, $\phi_h$, and $\phi_x$ are two-layer MLPs, and $C_i$ is a normalization constant, computed as $1/|\mathcal{N}(i)|$.
Constructor Arguments
- in: Number of input features for h.
- out: Number of output features for h.
- ein: Number of input edge features.
- hidden_size: Hidden representation size.
- residual: If true, add a residual connection. Only possible if in == out. Default false.
Forward Pass
l(g, x, h, e=nothing, ps, st)
Forward Pass Arguments:
- g: The graph.
- x: Matrix of equivariant node coordinates.
- h: Matrix of invariant node features.
- e: Matrix of invariant edge features. Default nothing.
- ps: Parameters.
- st: State.
Returns updated h and x.
Examples
using GNNLux, Lux, Random
# initialize random number generator
rng = Random.default_rng()
# create random graph
g = rand_graph(rng, 10, 10)
h = randn(rng, Float32, 5, g.num_nodes)
x = randn(rng, Float32, 3, g.num_nodes)
egnn = EGNNConv(5 => 6, 10)
# setup layer
ps, st = LuxCore.setup(rng, egnn)
# forward pass
(hnew, xnew), st = egnn(g, h, x, ps, st)
GNNLux.EdgeConv
— Type
EdgeConv(nn; aggr=max)
Edge convolutional layer from paper Dynamic Graph CNN for Learning on Point Clouds.
Performs the operation
\[\mathbf{x}_i' = \square_{j \in N(i)}\, nn([\mathbf{x}_i; \mathbf{x}_j - \mathbf{x}_i])\]
where nn generally denotes a learnable function, e.g. a linear layer or a multi-layer perceptron.
Arguments
- nn: A (possibly learnable) function.
- aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).
Examples:
using GNNLux, Lux, Random
# initialize random number generator
rng = Random.default_rng()
# create data
s = [1,1,2,3]
t = [2,3,1,1]
in_channel = 3
out_channel = 5
g = GNNGraph(s, t)
x = rand(rng, Float32, in_channel, g.num_nodes)
# create layer
l = EdgeConv(Dense(2 * in_channel, out_channel), aggr = +)
# setup layer
ps, st = LuxCore.setup(rng, l)
# forward pass
y, st = l(g, x, ps, st)
GNNLux.GATConv
— Type
GATConv(in => out, σ = identity; heads = 1, concat = true, negative_slope = 0.2, init_weight = glorot_uniform, init_bias = zeros32, use_bias = true, add_self_loops = true, dropout=0.0)
GATConv((in, ein) => out, ...)
Graph attentional layer from the paper Graph Attention Networks.
Implements the operation
\[\mathbf{x}_i' = \sum_{j \in N(i) \cup \{i\}} \alpha_{ij} W \mathbf{x}_j\]
where the attention coefficients $\alpha_{ij}$ are given by
\[\alpha_{ij} = \frac{1}{z_i} \exp(LeakyReLU(\mathbf{a}^T [W \mathbf{x}_i; W \mathbf{x}_j]))\]
with $z_i$ a normalization factor.
In case ein > 0 is given, edge features of dimension ein will be expected in the forward pass and the attention coefficients will be calculated as
\[\alpha_{ij} = \frac{1}{z_i} \exp(LeakyReLU(\mathbf{a}^T [W_e \mathbf{e}_{j\to i}; W \mathbf{x}_i; W \mathbf{x}_j]))\]
Arguments
- in: The dimension of input node features.
- ein: The dimension of input edge features. Default 0 (i.e. no edge features passed in the forward).
- out: The dimension of output node features.
- σ: Activation function. Default identity.
- heads: Number of attention heads. Default 1.
- concat: Concatenate layer output or not. If not, the layer output is averaged over the heads. Default true.
- negative_slope: The parameter of LeakyReLU. Default 0.2.
- init_weight: Weights' initializer. Default glorot_uniform.
- init_bias: Bias initializer. Default zeros32.
- use_bias: Add learnable bias. Default true.
- add_self_loops: Add self loops to the graph before performing the convolution. Default true.
- dropout: Dropout probability on the normalized attention coefficient. Default 0.0.
Examples
using GNNLux, Lux, Random
# initialize random number generator
rng = Random.default_rng()
# create data
s = [1,1,2,3]
t = [2,3,1,1]
in_channel = 3
out_channel = 5
g = GNNGraph(s, t)
x = randn(rng, Float32, 3, g.num_nodes)
# create layer
l = GATConv(in_channel => out_channel; add_self_loops = false, use_bias = false, heads=2, concat=true)
# setup layer
ps, st = LuxCore.setup(rng, l)
# forward pass
y, st = l(g, x, ps, st)
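For the edge-feature variant GATConv((in, ein) => out, ...) described above, a minimal sketch reusing the graph and features from the example (the choice ein = 4 and add_self_loops = false are illustrative assumptions):
# edge features of dimension ein are expected in the forward pass
ein = 4
e = randn(rng, Float32, ein, g.num_edges)
l = GATConv((in_channel, ein) => out_channel, add_self_loops = false)
ps, st = LuxCore.setup(rng, l)
y, st = l(g, x, e, ps, st)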
GNNLux.GATv2Conv
— Type
GATv2Conv(in => out, σ = identity; heads = 1, concat = true, negative_slope = 0.2, init_weight = glorot_uniform, init_bias = zeros32, use_bias = true, add_self_loops = true, dropout=0.0)
GATv2Conv((in, ein) => out, ...)
GATv2 attentional layer from the paper How Attentive are Graph Attention Networks?.
Implements the operation
\[\mathbf{x}_i' = \sum_{j \in N(i) \cup \{i\}} \alpha_{ij} W_1 \mathbf{x}_j\]
where the attention coefficients $\alpha_{ij}$ are given by
\[\alpha_{ij} = \frac{1}{z_i} \exp(\mathbf{a}^T LeakyReLU(W_2 \mathbf{x}_i + W_1 \mathbf{x}_j))\]
with $z_i$ a normalization factor.
In case ein > 0 is given, edge features of dimension ein will be expected in the forward pass and the attention coefficients will be calculated as
\[\alpha_{ij} = \frac{1}{z_i} \exp(\mathbf{a}^T LeakyReLU(W_3 \mathbf{e}_{j\to i} + W_2 \mathbf{x}_i + W_1 \mathbf{x}_j)).\]
Arguments
- in: The dimension of input node features.
- ein: The dimension of input edge features. Default 0 (i.e. no edge features passed in the forward).
- out: The dimension of output node features.
- σ: Activation function. Default identity.
- heads: Number of attention heads. Default 1.
- concat: Concatenate layer output or not. If not, the layer output is averaged over the heads. Default true.
- negative_slope: The parameter of LeakyReLU. Default 0.2.
- add_self_loops: Add self loops to the graph before performing the convolution. Default true.
- dropout: Dropout probability on the normalized attention coefficient. Default 0.0.
- init_weight: Weights' initializer. Default glorot_uniform.
- init_bias: Bias initializer. Default zeros32.
- use_bias: Add learnable bias. Default true.
Examples
using GNNLux, Lux, Random
# initialize random number generator
rng = Random.default_rng()
# create data
s = [1,1,2,3]
t = [2,3,1,1]
in_channel = 3
out_channel = 5
ein = 3
g = GNNGraph(s, t)
x = randn(rng, Float32, 3, g.num_nodes)
# create layer
l = GATv2Conv((in_channel, ein) => out_channel, add_self_loops = false)
# setup layer
ps, st = LuxCore.setup(rng, l)
# edge features
e = randn(rng, Float32, ein, length(s))
# forward pass
y, st = l(g, x, e, ps, st)
GNNLux.GCNConv
— Type
GCNConv(in => out, σ=identity; [init_weight, init_bias, use_bias, add_self_loops, use_edge_weight])
Graph convolutional layer from paper Semi-supervised Classification with Graph Convolutional Networks.
Performs the operation
\[\mathbf{x}'_i = \sum_{j\in N(i)} a_{ij} W \mathbf{x}_j\]
where $a_{ij} = 1 / \sqrt{|N(i)||N(j)|}$ is a normalization factor computed from the node degrees.
If the input graph has weighted edges and use_edge_weight=true, then $a_{ij}$ will be computed as
\[a_{ij} = \frac{e_{j\to i}}{\sqrt{\sum_{j \in N(i)} e_{j\to i}} \sqrt{\sum_{i \in N(j)} e_{i\to j}}}\]
Arguments
- in: Number of input features.
- out: Number of output features.
- σ: Activation function. Default identity.
- init_weight: Weights' initializer. Default glorot_uniform.
- init_bias: Bias initializer. Default zeros32.
- use_bias: Add learnable bias. Default true.
- add_self_loops: Add self loops to the graph before performing the convolution. Default false.
- use_edge_weight: If true, consider the edge weights in the input graph (if available). If add_self_loops=true, the new weights will be set to 1. This option is ignored if the edge_weight is explicitly provided in the forward pass. Default false.
Forward
(::GCNConv)(g, x, [edge_weight], ps, st; norm_fn = d -> 1 ./ sqrt.(d), conv_weight=nothing)
Takes as input a graph g, a node feature matrix x of size [in, num_nodes], optionally an edge weight vector, and the parameters and state of the layer. Returns a node feature matrix of size [out, num_nodes].
The norm_fn parameter allows for custom normalization of the graph convolution operation by passing a function as argument. By default, it computes $\frac{1}{\sqrt{d}}$, i.e. the inverse square root of the degree (d) of each node in the graph. If conv_weight is an AbstractMatrix of size [out, in], then the convolution is performed using that weight matrix.
Examples
using GNNLux, Lux, Random
# initialize random number generator
rng = Random.default_rng()
# create data
s = [1,1,2,3]
t = [2,3,1,1]
g = GNNGraph(s, t)
x = randn(rng, Float32, 3, g.num_nodes)
# create layer
l = GCNConv(3 => 5)
# setup layer
ps, st = LuxCore.setup(rng, l)
# forward pass
y, st = l(g, x, ps, st) # size of y: 5 × num_nodes
# convolution with edge weights and custom normalization function
w = [1.1, 0.1, 2.3, 0.5]
custom_norm_fn(d) = 1 ./ sqrt.(d .+ 1) # custom normalization function
y, st = l(g, x, w, ps, st; norm_fn = custom_norm_fn)
# Edge weights can also be embedded in the graph.
g = GNNGraph(s, t, w)
l = GCNConv(3 => 5, use_edge_weight=true)
ps, st = LuxCore.setup(rng, l)
y, st = l(g, x, ps, st) # same as l(g, x, w, ps, st)
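As described in the Forward section, a conv_weight matrix of size [out, in] can be supplied to run the convolution with an externally provided weight matrix; a minimal sketch reusing the layer above (the random matrix is just a placeholder):
# use a custom [out, in] weight matrix instead of the layer's own parameters
W = randn(rng, Float32, 5, 3)
y, st = l(g, x, ps, st; conv_weight = W)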
GNNLux.GINConv
— Type
GINConv(f, ϵ; aggr=+)
Graph Isomorphism convolutional layer from paper How Powerful are Graph Neural Networks?.
Implements the graph convolution
\[\mathbf{x}_i' = f_\Theta\left((1 + \epsilon) \mathbf{x}_i + \sum_{j \in N(i)} \mathbf{x}_j \right)\]
where $f_\Theta$ typically denotes a learnable function, e.g. a linear layer or a multi-layer perceptron.
Arguments
- f: A (possibly learnable) function acting on node features.
- ϵ: Weighting factor.
Examples:
using GNNLux, Lux, Random
# initialize random number generator
rng = Random.default_rng()
# create data
s = [1,1,2,3]
t = [2,3,1,1]
in_channel = 3
out_channel = 5
g = GNNGraph(s, t)
x = randn(rng, Float32, in_channel, g.num_nodes)
# create dense layer
nn = Dense(in_channel, out_channel)
# create layer
l = GINConv(nn, 0.01f0, aggr = mean)
# setup layer
ps, st = LuxCore.setup(rng, l)
# forward pass
y, st = l(g, x, ps, st) # size: out_channel × num_nodes
GNNLux.GMMConv
— Type
GMMConv((in, ein) => out, σ=identity; K = 1, residual = false, init_weight = glorot_uniform, init_bias = zeros32, use_bias = true)
Graph mixture model convolution layer from the paper Geometric deep learning on graphs and manifolds using mixture model CNNs. Performs the operation
\[\mathbf{x}_i' = \mathbf{x}_i + \frac{1}{|N(i)|} \sum_{j\in N(i)}\frac{1}{K}\sum_{k=1}^K \mathbf{w}_k(\mathbf{e}_{j\to i}) \odot \Theta_k \mathbf{x}_j\]
where $w^a_{k}(e^a)$ for feature a and kernel k is given by
\[w^a_{k}(e^a) = \exp(-\frac{1}{2}(e^a - \mu^a_k)^T (\Sigma^{-1})^a_k(e^a - \mu^a_k))\]
$\Theta_k, \mu^a_k, (\Sigma^{-1})^a_k$ are learnable parameters.
The input to the layer is a node feature array x of size (num_features, num_nodes) and an edge pseudo-coordinate array e of size (num_features, num_edges). The residual $\mathbf{x}_i$ is added only if residual=true and the output size is the same as the input size.
Arguments
- in: Number of input node features.
- ein: Number of input edge features.
- out: Number of output features.
- σ: Activation function. Default identity.
- K: Number of kernels. Default 1.
- residual: Residual connection. Default false.
- init_weight: Weights' initializer. Default glorot_uniform.
- init_bias: Bias initializer. Default zeros32.
- use_bias: Add learnable bias. Default true.
Examples
using GNNLux, Lux, Random
# initialize random number generator
rng = Random.default_rng()
# create data
s = [1,1,2,3]
t = [2,3,1,1]
g = GNNGraph(s,t)
nin, ein, out, K = 4, 10, 7, 8
x = randn(rng, Float32, nin, g.num_nodes)
e = randn(rng, Float32, ein, g.num_edges)
# create layer
l = GMMConv((nin, ein) => out, K=K)
# setup layer
ps, st = LuxCore.setup(rng, l)
# forward pass
y, st = l(g, x, e, ps, st) # size: out × num_nodes
GNNLux.GatedGraphConv
— Type
GatedGraphConv(out, num_layers;
aggr = +, init_weight = glorot_uniform)
Gated graph convolution layer from Gated Graph Sequence Neural Networks.
Implements the recursion
\[\begin{aligned} \mathbf{h}^{(0)}_i &= [\mathbf{x}_i; \mathbf{0}] \\ \mathbf{h}^{(l)}_i &= GRU(\mathbf{h}^{(l-1)}_i, \square_{j \in N(i)} W \mathbf{h}^{(l-1)}_j) \end{aligned}\]
where $\mathbf{h}^{(l)}_i$ denotes the $l$-th hidden variables passing through GRU. The dimension of the input $\mathbf{x}_i$ needs to be less than or equal to out.
Arguments
- out: The dimension of output features.
- num_layers: The number of recursion steps.
- aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).
- init_weight: Weights' initializer. Default glorot_uniform.
Examples:
using GNNLux, Lux, Random
# initialize random number generator
rng = Random.default_rng()
# create data
s = [1,1,2,3]
t = [2,3,1,1]
out_channel = 5
num_layers = 3
g = GNNGraph(s, t)
x = rand(rng, Float32, 3, g.num_nodes) # input feature dimension must be ≤ out_channel
# create layer
l = GatedGraphConv(out_channel, num_layers)
# setup layer
ps, st = LuxCore.setup(rng, l)
# forward pass
y, st = l(g, x, ps, st) # size: out_channel × num_nodes
GNNLux.GraphConv
— Type
GraphConv(in => out, σ = identity; aggr = +, init_weight = glorot_uniform, init_bias = zeros32, use_bias = true)
Graph convolution layer from the paper Weisfeiler and Leman Go Neural: Higher-order Graph Neural Networks.
Performs:
\[\mathbf{x}_i' = W_1 \mathbf{x}_i + \square_{j \in \mathcal{N}(i)} W_2 \mathbf{x}_j\]
where the aggregation type is selected by aggr.
Arguments
- in: The dimension of input features.
- out: The dimension of output features.
- σ: Activation function.
- aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).
- init_weight: Weights' initializer. Default glorot_uniform.
- init_bias: Bias initializer. Default zeros32.
- use_bias: Add learnable bias. Default true.
Examples
using GNNLux, Lux, Random
# initialize random number generator
rng = Random.default_rng()
# create data
s = [1,1,2,3]
t = [2,3,1,1]
in_channel = 3
out_channel = 5
g = GNNGraph(s, t)
x = randn(rng, Float32, 3, g.num_nodes)
# create layer
l = GraphConv(in_channel => out_channel, relu, use_bias = false, aggr = mean)
# setup layer
ps, st = LuxCore.setup(rng, l)
# forward pass
y, st = l(g, x, ps, st) # size of the output y: 5 × num_nodes
GNNLux.MEGNetConv
— Type
MEGNetConv(ϕe, ϕv; aggr=mean)
MEGNetConv(in => out; aggr=mean)
Convolution from the Graph Networks as a Universal Machine Learning Framework for Molecules and Crystals paper. In the forward pass, takes as input node features x and edge features e and returns updated features x' and e' according to
\[\begin{aligned} \mathbf{e}_{i\to j}' = \phi_e([\mathbf{x}_i;\, \mathbf{x}_j;\, \mathbf{e}_{i\to j}]),\\ \mathbf{x}_{i}' = \phi_v([\mathbf{x}_i;\, \square_{j\in \mathcal{N}(i)}\,\mathbf{e}_{j\to i}']). \end{aligned}\]
aggr defines the aggregation to be performed.
If the neural networks ϕe and ϕv are not provided, they will be constructed from the in and out arguments instead, as multi-layer perceptrons with one hidden layer and relu activation.
Examples
using GNNLux, Lux, Random
# initialize random number generator
rng = Random.default_rng()
# create a random graph
g = rand_graph(rng, 10, 30)
x = randn(rng, Float32, 3, 10)
e = randn(rng, Float32, 3, 30)
# create a MEGNetConv layer
m = MEGNetConv(3 => 3)
# setup layer
ps, st = LuxCore.setup(rng, m)
# forward pass
(x′, e′), st = m(g, x, e, ps, st)
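Custom networks can also be passed with the MEGNetConv(ϕe, ϕv) constructor. The sketch below is an illustration in which the input dimensions of ϕe and ϕv are assumptions derived from the concatenations in the formula above (2·3 + 3 = 9 features for ϕe, 3 + 3 = 6 for ϕv), reusing g, x and e from the example:
# ϕe acts on [xᵢ; xⱼ; eᵢⱼ] (assumed 9 input features),
# ϕv acts on [xᵢ; aggregated e′] (assumed 6 input features)
ϕe = Chain(Dense(9 => 3, relu), Dense(3 => 3))
ϕv = Chain(Dense(6 => 3, relu), Dense(3 => 3))
m = MEGNetConv(ϕe, ϕv)
ps, st = LuxCore.setup(rng, m)
(x′, e′), st = m(g, x, e, ps, st)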
GNNLux.NNConv
— Type
NNConv(in => out, f, σ=identity; aggr=+, init_bias = zeros32, use_bias = true, init_weight = glorot_uniform)
The continuous kernel-based convolutional operator from the Neural Message Passing for Quantum Chemistry paper. This convolution is also known as the edge-conditioned convolution from the Dynamic Edge-Conditioned Filters in Convolutional Neural Networks on Graphs paper.
Performs the operation
\[\mathbf{x}_i' = W \mathbf{x}_i + \square_{j \in N(i)} f_\Theta(\mathbf{e}_{j\to i})\,\mathbf{x}_j\]
where $f_\Theta$ denotes a learnable function (e.g. a linear layer or a multi-layer perceptron). Given an input of batched edge features e of size (num_edge_features, num_edges), the function f will return a batched array of matrices of size (out, in, num_edges). For convenience, functions returning a single matrix of size (out*in, num_edges) are also allowed.
Arguments
- in: The dimension of input node features.
- out: The dimension of output node features.
- f: A (possibly learnable) function acting on edge features.
- aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).
- σ: Activation function.
- init_weight: Weights' initializer. Default glorot_uniform.
- init_bias: Bias initializer. Default zeros32.
- use_bias: Add learnable bias. Default true.
Examples:
using GNNLux, Lux, Random
# initialize random number generator
rng = Random.default_rng()
# create data
n_in = 3
n_in_edge = 10
n_out = 5
s = [1,1,2,3]
t = [2,3,1,1]
g = GNNGraph(s, t)
x = randn(rng, Float32, n_in, g.num_nodes)
e = randn(rng, Float32, n_in_edge, g.num_edges)
# create dense layer
nn = Dense(n_in_edge => n_out * n_in)
# create layer
l = NNConv(n_in => n_out, nn, tanh, use_bias = true, aggr = +)
# setup layer
ps, st = LuxCore.setup(rng, l)
# forward pass
y, st = l(g, x, e, ps, st) # size: n_out × num_nodes
GNNLux.ResGatedGraphConv
— Type
ResGatedGraphConv(in => out, act=identity; init_weight = glorot_uniform, init_bias = zeros32, use_bias = true)
The residual gated graph convolutional operator from the Residual Gated Graph ConvNets paper.
The layer's forward pass is given by
\[\mathbf{x}_i' = act\big(U\mathbf{x}_i + \sum_{j \in N(i)} \eta_{ij} V \mathbf{x}_j\big),\]
where the edge gates $\eta_{ij}$ are given by
\[\eta_{ij} = sigmoid(A\mathbf{x}_i + B\mathbf{x}_j).\]
Arguments
- in: The dimension of input features.
- out: The dimension of output features.
- act: Activation function.
- init_weight: Weights' initializer. Default glorot_uniform.
- init_bias: Bias initializer. Default zeros32.
- use_bias: Add learnable bias. Default true.
Examples:
using GNNLux, Lux, Random
# initialize random number generator
rng = Random.default_rng()
# create data
s = [1,1,2,3]
t = [2,3,1,1]
in_channel = 3
out_channel = 5
g = GNNGraph(s, t)
x = randn(rng, Float32, in_channel, g.num_nodes)
# create layer
l = ResGatedGraphConv(in_channel => out_channel, tanh, use_bias = true)
# setup layer
ps, st = LuxCore.setup(rng, l)
# forward pass
y, st = l(g, x, ps, st) # size: out_channel × num_nodes
GNNLux.SAGEConv
— Type
SAGEConv(in => out, σ=identity; aggr=mean, init_weight = glorot_uniform, init_bias = zeros32, use_bias=true)
GraphSAGE convolution layer from paper Inductive Representation Learning on Large Graphs.
Performs:
\[\mathbf{x}_i' = W \cdot [\mathbf{x}_i; \square_{j \in \mathcal{N}(i)} \mathbf{x}_j]\]
where the aggregation type is selected by aggr.
Arguments
- in: The dimension of input features.
- out: The dimension of output features.
- σ: Activation function.
- aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).
- init_weight: Weights' initializer. Default glorot_uniform.
- init_bias: Bias initializer. Default zeros32.
- use_bias: Add learnable bias. Default true.
Examples:
using GNNLux, Lux, Random
# initialize random number generator
rng = Random.default_rng()
# create data
s = [1,1,2,3]
t = [2,3,1,1]
in_channel = 3
out_channel = 5
g = GNNGraph(s, t)
x = rand(rng, Float32, in_channel, g.num_nodes)
# create layer
l = SAGEConv(in_channel => out_channel, tanh, use_bias = false, aggr = +)
# setup layer
ps, st = LuxCore.setup(rng, l)
# forward pass
y, st = l(g, x, ps, st) # size: out_channel × num_nodes
GNNLux.SGConv
— Type
SGConv(in => out, k = 1; init_weight = glorot_uniform, init_bias = zeros32, use_bias = true, add_self_loops = true, use_edge_weight = false)
SGC layer from the paper Simplifying Graph Convolutional Networks. Performs the operation
\[H^{K} = (\tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2})^K X \Theta\]
where $\tilde{A}$ is $A + I$.
Arguments
- in: Number of input features.
- out: Number of output features.
- k: Number of hops k. Default 1.
- add_self_loops: Add self loops to the graph before performing the convolution. Default false.
- use_edge_weight: If true, consider the edge weights in the input graph (if available). If add_self_loops=true, the new weights will be set to 1. Default false.
- init_weight: Weights' initializer. Default glorot_uniform.
- init_bias: Bias initializer. Default zeros32.
- use_bias: Add learnable bias. Default true.
Examples
using GNNLux, Lux, Random
# initialize random number generator
rng = Random.default_rng()
# create data
s = [1,1,2,3]
t = [2,3,1,1]
g = GNNGraph(s, t)
x = randn(rng, Float32, 3, g.num_nodes)
# create layer
l = SGConv(3 => 5; add_self_loops = true)
# setup layer
ps, st = LuxCore.setup(rng, l)
# forward pass
y, st = l(g, x, ps, st) # size: 5 × num_nodes
# convolution with edge weights
w = [1.1, 0.1, 2.3, 0.5]
y, st = l(g, x, w, ps, st)
# Edge weights can also be embedded in the graph.
g = GNNGraph(s, t, w)
l = SGConv(3 => 5, add_self_loops = true, use_edge_weight=true)
ps, st = LuxCore.setup(rng, l)
y, st = l(g, x, ps, st) # same as l(g, x, w, ps, st)
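Since k is the second positional argument of the constructor, a 2-hop variant of the layer above might be sketched as follows, reusing g and x from the example:
# 2-hop simplified graph convolution (k passed as the second positional argument)
l = SGConv(3 => 5, 2; add_self_loops = true)
ps, st = LuxCore.setup(rng, l)
y, st = l(g, x, ps, st) # size: 5 × num_nodes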