PyMPDATA-MPI
PyMPDATA-MPI is a PyMPDATA + numba-mpi coupler enabling numerical solutions of transport equations with the MPDATA numerical scheme in a hybrid parallelisation model featuring both multi-threading and MPI distributed-memory communication. PyMPDATA-MPI adheres to the PyMPDATA API, adding domain decomposition logic.
Example gallery
In a minimal setup, PyMPDATA-MPI can be used to solve the following transport equation: $$\partial_t (G \psi) + \nabla \cdot (Gu \psi) = 0$$ on a domain distributed over multiple nodes, with every node (process) responsible for computing its part of the decomposed domain.
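For orientation, below is a minimal single-node sketch of such a setup using plain PyMPDATA (on which PyMPDATA-MPI builds); the grid size, initial condition, Courant numbers and number of steps are illustrative choices only, and in a distributed run PyMPDATA-MPI substitutes MPI-aware boundary conditions (e.g., MPIPeriodic) along the decomposed dimension.

```python
# minimal single-node sketch using plain PyMPDATA (the library PyMPDATA-MPI builds on);
# grid size, initial condition and Courant numbers are arbitrary illustration choices (G = 1)
import numpy as np
from PyMPDATA import Options, ScalarField, VectorField, Stepper, Solver
from PyMPDATA.boundary_conditions import Periodic

nx, ny = 24, 24
options = Options(n_iters=2)  # upwind + one corrective MPDATA iteration
halo = options.n_halo
bcs = (Periodic(), Periodic())  # in an MPI run, PyMPDATA-MPI swaps in MPI-aware counterparts

x = np.arange(nx)[:, np.newaxis]
y = np.arange(ny)[np.newaxis, :]
psi0 = np.exp(-((x - nx / 2) ** 2 + (y - ny / 2) ** 2) / 8)  # Gaussian blob to be advected

advectee = ScalarField(data=psi0, halo=halo, boundary_conditions=bcs)
advector = VectorField(
    data=(np.full((nx + 1, ny), 0.5), np.full((nx, ny + 1), 0.25)),  # constant Courant numbers
    halo=halo,
    boundary_conditions=bcs,
)
stepper = Stepper(options=options, grid=(nx, ny), n_threads=1)
solver = Solver(stepper=stepper, advectee=advectee, advector=advector)
solver.advance(n_steps=100)
state = solver.advectee.get()  # numpy view of the transported field
```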
Spherical scenario (2D)
In spherical geometry, the $G$ factor represents the Jacobian of coordinate transformation.
In this example (based on a test case from Williamson & Rasch 1989), domain decomposition is done by cutting the sphere along meridians. The inner dimension uses the MPIPolar boundary condition class, while the outer dimension uses MPIPeriodic.
Note that the spherical animations below depict simulations without MPDATA corrective iterations, i.e. only the plain first-order upwind scheme is used.
1 worker (n_threads = 1)
2 workers (MPI_DIM = 0, n_threads = 1)
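For context, here is a hedged single-node sketch of how the spherical G factor (the coordinate-transformation Jacobian) enters a PyMPDATA solver; the grid size and the Jacobian formula are illustrative assumptions, the Polar boundary-condition arguments follow PyMPDATA's spherical example but are not guaranteed here, and in PyMPDATA-MPI runs the MPIPeriodic/MPIPolar counterparts take over the cross-worker halo exchanges.

```python
# hedged sketch of a single-node spherical setup in plain PyMPDATA; grid size and the
# dlmb*dphi*cos(lat) Jacobian formula are illustrative assumptions, and the Polar
# boundary-condition arguments follow PyMPDATA's spherical example (not guaranteed here);
# in PyMPDATA-MPI runs, MPIPeriodic/MPIPolar handle the cross-worker halo exchanges
import numpy as np
from PyMPDATA import Options, ScalarField, VectorField, Stepper, Solver
from PyMPDATA.boundary_conditions import Periodic, Polar

nlon, nlat = 64, 32                 # longitude (outer) x latitude (inner) grid
options = Options(n_iters=1)        # plain first-order upwind, as in the animations above
halo = options.n_halo

dlmb = 2 * np.pi / nlon             # longitude grid spacing
dphi = np.pi / nlat                 # latitude grid spacing
lat = -np.pi / 2 + (np.arange(nlat) + 0.5) * dphi
g_data = np.repeat((dlmb * dphi * np.cos(lat))[np.newaxis, :], nlon, axis=0)

bcs = (
    Periodic(),                                                 # outer (longitude) dimension
    Polar(grid=(nlon, nlat), longitude_idx=0, latitude_idx=1),  # inner (latitude) dimension
)
g_factor = ScalarField(data=g_data, halo=halo, boundary_conditions=bcs)

solver = Solver(
    stepper=Stepper(options=options, grid=(nlon, nlat), non_unit_g_factor=True),
    advectee=ScalarField(data=np.zeros((nlon, nlat)), halo=halo, boundary_conditions=bcs),
    advector=VectorField(
        data=(np.zeros((nlon + 1, nlat)), np.zeros((nlon, nlat + 1))),
        halo=halo,
        boundary_conditions=bcs,
    ),
    g_factor=g_factor,
)
```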
Cartesian scenario (2D)
In the Cartesian example below (based on a test case from Arabas et al. 2014), a constant advector field $u$ is used (and $G=1$). MPI (Message Passing Interface) is used for handling data transfers and synchronisation, with the domain decomposition across MPI workers done in either the inner or the outer dimension (user setting). Multi-threading (using, e.g., OpenMP via Numba) is used for shared-memory parallelisation within subdomains (indicated by dotted lines in the animations below), with the threading subdomain split done across the inner dimension (internal PyMPDATA logic). In this example, two corrective MPDATA iterations are employed. A decomposition sketch is given after the animation captions below.
1 worker (n_threads=3)
2 workers (MPI_DIM = OUTER, n_threads = 3)
2 workers (MPI_DIM = INNER, n_threads = 3)
3 workers (MPI_DIM = OUTER, n_threads = 3)
3 workers (MPI_DIM = INNER, n_threads = 3)
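As a back-of-the-envelope illustration of what the MPI_DIM setting controls, here is a sketch (not PyMPDATA-MPI's actual code; the `subdomain` helper name is made up for this example) of slab decomposition of the chosen dimension among MPI workers.

```python
# illustrative sketch (not PyMPDATA-MPI's actual code; the `subdomain` helper is made up
# for this example) of slab decomposition of the MPI_DIM dimension among MPI workers
import math

def subdomain(full_size: int, rank: int, n_ranks: int) -> slice:
    """index range owned by `rank` when `full_size` grid cells along the decomposed
    dimension are split as evenly as possible among `n_ranks` workers"""
    chunk = math.ceil(full_size / n_ranks)
    start = min(rank * chunk, full_size)
    stop = min(start + chunk, full_size)
    return slice(start, stop)

# example: 96 cells in the decomposed (outer) dimension shared by 3 MPI workers
assert [subdomain(96, r, 3) for r in range(3)] == [slice(0, 32), slice(32, 64), slice(64, 96)]
```

With such one-dimensional (slab) decomposition, each worker exchanges halo data with at most two neighbouring workers, while PyMPDATA's threading further splits each slab along the inner dimension.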
Shallow Water Scenario
The Shallow Water Scenario is based on a numerical test case from Jarecka et al. 2015. It follows the Cartesian scenario, but implements its own solver logic to account for changes of the advector field induced by the advectee.
MPDATA with "nonoscillatory" and "infinite-gauge" options, n_threads = 1, MPI_DIM=OUTER
MPDATA with "nonoscillatory" and "infinite-gauge" options, n_threads = 1, MPI_DIM=INNER
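The "nonoscillatory" and "infinite-gauge" variants named in the captions above map onto PyMPDATA option flags; a minimal sketch follows (the n_iters value is an illustrative choice, not necessarily the one used in this scenario).

```python
# minimal sketch of enabling the MPDATA variants named in the captions above;
# n_iters=2 (upwind + one corrective pass) is an illustrative choice only
from PyMPDATA import Options

options = Options(n_iters=2, nonoscillatory=True, infinite_gauge=True)
print(options.n_halo)  # halo width implied by the selected options
```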
Package architecture
```mermaid
flowchart BT
    H5PY ---> HDF{{HDF5}}
    subgraph pythonic-dependencies [Python]
        TESTS --> H[pytest-mpi]
        subgraph PyMPDATA-MPI ["PyMPDATA-MPI"]
            TESTS["PyMPDATA-MPI[tests]"] --> CASES(simulation scenarios)
            A1["PyMPDATA-MPI[examples]"] --> CASES
            CASES --> D[PyMPDATA-MPI]
        end
        A1 ---> C[py-modelrunner]
        CASES ---> H5PY[h5py]
        D --> E[numba-mpi]
        H --> X[pytest]
        E --> N
        F --> N[Numba]
        D --> F[PyMPDATA]
    end
    H ---> MPI
    C ---> slurm{{slurm}}
    N --> OMPI{{OpenMP}}
    N --> L{{LLVM}}
    E ---> MPI{{MPI}}
    HDF --> MPI
    slurm --> MPI
    style D fill:#7ae7ff,stroke-width:2px,color:#2B2B2B
    click H "https://pypi.org/p/pytest-mpi"
    click X "https://pypi.org/p/pytest"
    click F "https://pypi.org/p/PyMPDATA"
    click N "https://pypi.org/p/numba"
    click C "https://pypi.org/p/py-modelrunner"
    click H5PY "https://pypi.org/p/h5py"
    click E "https://pypi.org/p/numba-mpi"
    click A1 "https://pypi.org/p/PyMPDATA-MPI"
    click D "https://pypi.org/p/PyMPDATA-MPI"
    click TESTS "https://pypi.org/p/PyMPDATA-MPI"
```
Rectangular boxes indicate pip-installable Python packages (click to go to pypi.org package site).
Credits & acknowledgments:
PyMPDATA-MPI started as a separate project developed for the MSc thesis of Kacper Derlatka (@Delcior). Integration of PyMPDATA-MPI into the PyMPDATA repo was carried out as part of the BEng project of Michał Wroński.
Development of PyMPDATA-MPI has been supported by Poland's National Science Centre (grant no. 2020/39/D/ST10/01220).
We acknowledge Poland's high-performance computing infrastructure PLGrid (HPC Centers: ACK Cyfronet AGH) for providing computer facilities and support within computational grant no. PLG/2023/016369.
copyright: Jagiellonian University & AGH University of Krakow
licence: GPL v3
Design goals
- MPI support for PyMPDATA implemented externally (i.e., not incurring any overhead or additional dependencies for PyMPDATA users)
- MPI calls within Numba njitted code (hence using numba-mpi rather than mpi4py) - see the sketch after this list
- hybrid domain-decomposition parallelism: threading (internal in PyMPDATA, in the inner dimension) + MPI (either inner or outer dimension)
- example simulation scenarios featuring HDF5/MPI-IO output storage (using h5py)
- py-modelrunner simulation orchestration
- portability across Linux & macOS (no Windows support as of now due to challenges in getting HDF5/MPI-IO to work there)
- Continuous Integration (CI) with different OSes and different MPI implementations (leveraging mpi4py's setup-mpi GitHub Action)
- full test coverage including CI builds asserting on identical results from multi-node and single-node computations (with the help of pytest-mpi)
- ships as a pip-installable package - aimed to be a dependency of domain-specific packages
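As a minimal illustration of the "MPI calls within Numba njitted code" design goal, the sketch below calls numba-mpi directly from compiled code; it is not PyMPDATA-MPI code, and the assumption that allreduce defaults to a sum reduction is flagged in the comments.

```python
# minimal sketch of calling MPI from inside Numba-njitted code via numba-mpi
# (not PyMPDATA-MPI code; assumes numba_mpi.allreduce defaults to a sum reduction)
import numba
import numba_mpi as mpi
import numpy as np

@numba.njit()
def rank_sum():
    """sums the MPI rank numbers across all workers without leaving compiled code"""
    value = np.empty(1, dtype=np.float64)
    value[0] = mpi.rank()
    total = np.empty_like(value)
    mpi.allreduce(value, total)  # sum reduction assumed as the default operator
    return total[0]

if __name__ == "__main__":
    # e.g. `mpiexec -n 3 python example.py` should print 3.0 (= 0 + 1 + 2) on every rank
    print(rank_sum(), "computed on rank", mpi.rank(), "of", mpi.size())
```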
Related resources
open-source Large-Eddy-Simulation and related software
Julia
- https://github.com/CliMA/ClimateMachine.jl/
C++
- https://github.com/igfuw/UWLCM
- https://github.com/mrnorman/portUrb
C/CUDA
- https://github.com/NCAR/FastEddy-model
FORTRAN
- https://github.com/uclales/uclales
- https://github.com/UCLALES-SALSA/UCLALES-SALSA
- https://github.com/igfuw/bE_SDs
- https://github.com/pencil-code/pencil-code
- https://github.com/AtmosFOAM/AtmosFOAM
- https://github.com/scale-met/scale
Python (incl. Cython)
- https://github.com/pnnl/pinacles
- https://github.com/google-research/swirl-jatmos
- https://github.com/MetLab-HKUST/LEX/