Boot CAMP: Part 4 - Message Passing

This part of Boot CAMP shows how to use CAMP's message passing functions. If you're only interested in using CAMP on a single processor, you can skip this part and move on to Boot CAMP: Part 5 - Aerosol Representations.

We'll wrap our MPI code in a compiler flag named CAMP_USE_MPI to make sure our box model can be built with or without MPI. The order of operations is important for MPI runs and is summarized in the following table.

Process     Operation
-------     ---------
primary     camp_core => camp_core_t( input_files )
primary     call camp_core%initialize( )
primary     access camp_core_t properties / set up update_data_t objects
primary     pack all objects onto a buffer
all         pass the buffer
secondary   camp_core => camp_core_t( )
secondary   unpack the camp_core_t and other objects from the buffer
all         call camp_core%solver_initialize( )
all         use update_data_t objects to update rates, etc.
all         call camp_core%solve( camp_state, time_step )
all         deallocate all objects

We'll go through this step by step, updating our box model and discussing why each step is done when and where it is.

Note that the CAMP MPI functions use MPI_COMM_WORLD by default, but they accept an optional comm argument if you would like to use a different communicator. See the specific function documentation for details.
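For example, if your model splits its processes into sub-communicators, you could pass one explicitly. This is just a sketch of the optional argument, not part of our box model; sub_comm here is a hypothetical communicator handle created by your model:

  ! broadcast within a model-defined sub-communicator instead of
  ! the default MPI_COMM_WORLD (sub_comm is hypothetical)
  call camp_mpi_bcast_integer( idx_o3, sub_comm )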

First, let's add the modules we need for MPI. We'll use the standard mpi module along with CAMP's camp_mpi module, which provides wrapper functions for common MPI operations.

#ifdef CAMP_USE_MPI
  use mpi
  use camp_mpi
#endif

Now we'll declare a buffer, a position index, and a pack size:

#ifdef CAMP_USE_MPI
  integer(kind=i_kind) :: pos, pack_size
  character, allocatable :: buffer(:)
#endif

Next, let's initialize MPI and wrap some of our existing code in a conditional block that ensures we load the input data and initialize CAMP on the primary process only (the existing calls to the camp_core_t constructor and camp_core_t::initialize() are included to show where the new conditional block begins):

#ifdef CAMP_USE_MPI
  call camp_mpi_init( )
  if( camp_mpi_rank( ) .eq. 0 ) then
#endif
  camp_core => camp_core_t( "my_config_file.json" )
  call camp_core%initialize( )

The camp_core_t::initialize() subroutine instructs the internal model elements to take their input data and condense it down into a small data block containing only the information they need to solve the chemical system during calls to camp_core_t::solve(). The camp_core_t MPI functions pass only this condensed data to other processes. So, after the core is passed, you will not have access to the raw input data or model property_t objects that we used to set up the rxn_update_data_t objects in part 3. Thus, all the setup of rxn_update_data_t objects must be done on the primary process, before passing the core and update objects to the other processes.

So, let's end our first MPI conditional block after we set up the \(\ce{NO2}\) photolysis rxn_update_data_t object and before the call to camp_core_t::solver_initialize(). The first step is to get the size of the buffer that will be used to pass the objects (the existing check that the \(\ce{NO2}\) photolysis update data object is attached is included to show the placement of the following code block):

#ifdef CAMP_USE_MPI
  ! Pack the core and the NO2 photolysis update data object
  pack_size = camp_core%pack_size( ) + &
              no2_photolysis%pack_size( )
  allocate( buffer( pack_size ) )

After we allocate the buffer on the primary process, we'll pack it with the object data:

  pos = 0
  call camp_core%bin_pack( buffer, pos )
  call no2_photolysis%bin_pack( buffer, pos )
  end if ! primary process

Next, we'll pass the species indexes we looked up. (Remember, we won't be able to look these up on the secondary processes.)

  call camp_mpi_bcast_integer( idx_o3 )
  call camp_mpi_bcast_integer( idx_no )
  call camp_mpi_bcast_integer( idx_no2 )
  call camp_mpi_bcast_integer( idx_o2 )

After we pack the objects and exit the primary process block, we'll pass the buffer to the other processes:

  call camp_mpi_bcast_integer( pack_size )
  if( camp_mpi_rank( ) .gt. 0 ) then
    allocate( buffer( pack_size ) )
  end if
  call camp_mpi_bcast_packed( buffer )

Next, we'll unpack the objects on the secondary processes:

  if( camp_mpi_rank( ) .gt. 0 ) then
    camp_core => camp_core_t( )
    pos = 0
    call camp_core%bin_unpack( buffer, pos )
    call no2_photolysis%bin_unpack( buffer, pos )
  end if
  deallocate( buffer )
#endif

Note that we call the camp_core_t constructor without passing the input file list. This creates an empty core on the secondary processes that we can fill with the packed data from the buffer. After unpacking the objects and deallocating the buffer, our message passing is complete, and the rest of the code remains the same, beginning with the call to solver_initialize().
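During development, it can be helpful to confirm that unpacking consumed exactly the number of bytes that were packed. A minimal sketch of such a check (our suggestion, not part of the original box model), placed just before the end of the secondary-process block:

  ! sanity check: bin_unpack should advance pos to exactly the
  ! pack_size broadcast from the primary process
  if( pos .ne. pack_size ) then
    write(*,*) "unpack size mismatch: ", pos, " /= ", pack_size
    stop 3
  end if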

This is not a very useful parallelization of our box model, as we're just solving the same system on every process, but it demonstrates how to initialize and pass the camp_core_t and update_data_t objects. The camp_state_t::state_var(:) array can be accessed directly and passed however your model passes double-precision floating-point arrays, or you can use the camp_mpi_pack_size_real_array(), camp_mpi_pack_real_array(), and camp_mpi_unpack_real_array() functions.
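For example, a minimal sketch of broadcasting the state array with those functions might look like the following. This assumes they follow the same buffer-and-position convention as the bin_pack()/bin_unpack() calls above; check the function documentation before using:

  ! pack the state array on the primary process ...
  if( camp_mpi_rank( ) .eq. 0 ) then
    pack_size = camp_mpi_pack_size_real_array( camp_state%state_var )
    allocate( buffer( pack_size ) )
    pos = 0
    call camp_mpi_pack_real_array( buffer, pos, camp_state%state_var )
  end if

  ! ... broadcast the size and the packed data ...
  call camp_mpi_bcast_integer( pack_size )
  if( camp_mpi_rank( ) .gt. 0 ) allocate( buffer( pack_size ) )
  call camp_mpi_bcast_packed( buffer )

  ! ... and unpack it on the secondary processes
  if( camp_mpi_rank( ) .gt. 0 ) then
    pos = 0
    call camp_mpi_unpack_real_array( buffer, pos, camp_state%state_var )
  end if
  deallocate( buffer )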

To finish up, let's add a conditional block around the output to print the results from the first secondary process, just to make sure our message passing is working, and finalize MPI.

#ifdef CAMP_USE_MPI
  if( camp_mpi_rank( ) .eq. 1 ) then
#endif
  write(*,fmt_hdr) "time", "O3", "NO", "NO2", "O2"
  do i_time = 1, 100
    call camp_core%solve( camp_state, 1.0d-15 ) ! time step in s
    write(*,fmt_dat) i_time*1.0e-15, &
        camp_state%state_var( idx_o3 ), &
        camp_state%state_var( idx_no ), &
        camp_state%state_var( idx_no2 ), &
        camp_state%state_var( idx_o2 )
  end do
#ifdef CAMP_USE_MPI
  end if
  call camp_mpi_finalize( )
#endif

To compile the model code with MPI, be sure to include the CAMP_USE_MPI flag definition:

mpif90 -o run_box_model box_model.F90 -DCAMP_USE_MPI -lcamp -I/usr/local/include/camp
mpirun -v -np 2 run_box_model > output.txt

In later installments of Boot CAMP we'll include a section towards the end that describes any MPI-related code needed to run the updates they introduce.

Now that our messages are passed, it's aerosol time. That's the topic of the next installment of Boot CAMP!


Docker Instructions

To run a Docker container with MPI support, we'll need to build the image locally. So, we'll clone the CAMP repo, build the container with MPI support, and then run it:

git clone https://github.com/open-atmos/camp.git
cd camp
docker build -f Dockerfile.mpi -t camp-test-mpi .
docker run --name camp -it camp-test-mpi bash

Inside the container:

sudo dnf install -y gnuplot
mkdir boot-camp
cd boot-camp
cp ../camp/doc/camp_tutorial/boot_camp/part_4_code/* .
mpif90 -o run_box_model box_model.F90 -DCAMP_USE_MPI -lcamp -I/usr/local/include/camp
mpirun -v -np 2 run_box_model > output.txt
gnuplot plot.conf
exit

Back outside the container:

docker cp camp:/home/test_user/boot-camp/results.png .
docker container rm camp
open results.png

You should get the same results as described in Boot CAMP: Part 3 - Updating CAMP Parameters.


< Previous: Boot CAMP: Part 3 - Updating CAMP Parameters Index Next: Boot CAMP: Part 5 - Aerosol Representations >