Developer Notes
Developing and Managing the Monorepo
Development Environment
GraphNeuralNetworks.jl is a package hosted in a monorepo that contains multiple packages. The GraphNeuralNetworks.jl package depends on GNNGraphs.jl, which is hosted in the same monorepo. To develop against the local copy of GNNGraphs.jl, activate the project and `dev` the subpackage:
```julia
pkg> activate .
pkg> dev ./GNNGraphs
```
Add a New Layer
To add a new graph convolutional layer and make it available in both the Flux-based frontend (GraphNeuralNetworks.jl) and the Lux-based frontend (GNNLux), you need to:
- Add the functional version to GNNlib
- Add the stateful version to GraphNeuralNetworks
- Add the stateless version to GNNLux
- Add the layer to the table in docs/api/conv.md
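As a sketch of the pattern (the names MyConv and my_conv are hypothetical, and the message-passing calls mirror existing layers such as GCNConv, which remain the authoritative reference):

```julia
using Flux, GraphNeuralNetworks  # in the repo, each piece lives in its own package

# 1. Functional version (goes in GNNlib): a pure function of the layer, graph, and features.
function my_conv(l, g::GNNGraph, x::AbstractMatrix)
    m = propagate(copy_xj, g, +; xj = x)      # sum the neighbors' features
    return l.σ.(l.weight * m .+ l.bias)
end

# 2. Stateful version (goes in GraphNeuralNetworks): a Flux layer holding the parameters
#    and delegating the forward pass to the functional version.
struct MyConv{W, B, F} <: GNNLayer
    weight::W
    bias::B
    σ::F
end

Flux.@layer MyConv

MyConv((in, out)::Pair{Int, Int}, σ = identity) =
    MyConv(Flux.glorot_uniform(out, in), zeros(Float32, out), σ)

(l::MyConv)(g::GNNGraph, x::AbstractMatrix) = my_conv(l, g, x)

# 3. Stateless version (goes in GNNLux): a LuxCore-style layer whose parameters and state
#    are passed explicitly; follow an existing GNNLux layer for the exact interface.
```

A quick smoke test of the Flux version might look like `l = MyConv(3 => 4, relu); l(rand_graph(10, 40), rand(Float32, 3, 10))`.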
Versions and Tagging
Each PR should update the version number in the Project.toml file of each involved package, if required by semantic versioning. For instance, when adding new features, GNNGraphs could move from "1.17.5" to "1.18.0-DEV". The "DEV" suffix is removed when the package is tagged and released. Also pay attention to updating the compat bounds, e.g. GraphNeuralNetworks might require a newer version of GNNGraphs.
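For instance, the example above would correspond to entries along these lines (the version numbers and compat bound are illustrative only):

```toml
# GNNGraphs/Project.toml
version = "1.18.0-DEV"

# GraphNeuralNetworks/Project.toml
[compat]
GNNGraphs = "1.18"
```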
Generate Documentation Locally
To generate the documentation locally:
```
cd docs
julia
```

```julia
(@v1.10) pkg> activate .
  Activating project at `~/.julia/dev/GraphNeuralNetworks/docs`

(docs) pkg> dev ../ ../GNNGraphs/
   Resolving package versions...
  No Changes to `~/.julia/dev/GraphNeuralNetworks/docs/Project.toml`
  No Changes to `~/.julia/dev/GraphNeuralNetworks/docs/Manifest.toml`

julia> include("make.jl")
```
Benchmarking
You can benchmark the performance impact of your commits using the script perf/perf.jl.
First, check out and benchmark the master branch:
julia> include("perf.jl")
julia> df = run_benchmarks()
# observe results
julia> for g in groupby(df, :layer); println(g, "\n"); end
julia> @save "perf_master_20210803_mymachine.jld2" dfmaster=df
Now check out your branch and do the same:
```julia
julia> df = run_benchmarks()

julia> @save "perf_pr_20210803_mymachine.jld2" dfpr=df
```
Finally, compare the results:
julia> @load "perf_master_20210803_mymachine.jld2"
julia> @load "perf_pr_20210803_mymachine.jld2"
julia> compare(dfpr, dfmaster)
Caching Tutorials
Tutorials in GraphNeuralNetworks.jl are written in Pluto and rendered using DemoCards.jl and PlutoStaticHTML.jl. Rendering a Pluto notebook is time- and resource-consuming, especially in a CI environment, so we use the caching functionality provided by PlutoStaticHTML.jl to reduce CI time.
If you are contributing a new tutorial or making changes to an existing notebook, generate the docs locally before committing/pushing. For caching to work, the cache environment (your local machine) and the documenter CI must run the same Julia version (e.g. "v1.9.1"; the patch number must match as well), so use the documenter CI Julia version when generating the docs locally.
```
julia --version # check julia version before generating docs
julia --project=docs docs/make.jl
```
Note: Use juliaup for easy switching of Julia versions.
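For example (the version number below is only a placeholder; match whatever the documenter CI uses):

```
juliaup add 1.9.1                          # install the CI's Julia version
julia +1.9.1 --project=docs docs/make.jl   # generate the docs with that version
```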
During the doc generation process, DemoCards.jl stores the cached notebooks in docs/pluto_output, so include any changes made to this folder in your git commit. Remember that every file in this folder is machine-generated and should not be edited manually.
```
git add docs/pluto_output # add generated cache
```
Check the documenter CI logs to ensure that it used the local cache: