CAMP 1.0.0
Chemistry Across Multiple Phases
This part of Boot CAMP shows how to use CAMP's message passing functions. If you're only interested in using CAMP on a single processor, you can skip this part and move on to Boot CAMP: Part 5 - Aerosol Representations.
We'll wrap our MPI code with a compiler flag named USE_MPI to make sure our box model can be built with or without MPI. The order of operations is important for MPI runs and is summarized in the following table.
Process | Operation |
---|---|
primary | camp_core => camp_core_t( input_files ) |
primary | call camp_core%initialize( ) |
primary | access camp_core_t properties/set up update_data_t objects |
primary | pack all objects on a buffer |
all | pass the buffer |
secondary | camp_core => camp_core_t( ) |
secondary | unpack the camp_core_t and other objects from the buffer |
all | call camp_core%solver_initialize( ) |
all | use update_data_t objects to update rates, etc. |
all | call camp_core%solve( camp_state, time_step ) |
all | deallocate all objects |
We'll go through this step by step, updating our box model and discussing why each step is done when and where it is.
Note that the CAMP MPI functions use MPI_COMM_WORLD by default, but they accept an optional comm argument if you would like to use a different communicator. See the specific function documentation for details.
First, let's add the modules we need for MPI. We'll use the standard mpi module and the CAMP MPI module, which provides some custom wrapper functions.
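A minimal sketch of the imports, assuming CAMP's wrappers live in a module named camp_mpi and using our USE_MPI flag to keep the build optional:

```fortran
#ifdef USE_MPI
  use mpi       ! standard MPI interface
  use camp_mpi  ! CAMP MPI wrapper functions
#endif
```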
Now we'll declare a buffer, a position index, and a pack size:
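For example (the variable names are our own choices, and we assume the camp_mpi pack routines operate on a character buffer):

```fortran
#ifdef USE_MPI
  character, allocatable :: buffer(:) ! memory buffer for the packed objects
  integer :: pos                      ! current position in the buffer
  integer :: pack_size                ! total size of the packed data
#endif
```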
Next, let's initialize MPI and wrap some of our existing code in a conditional statement that ensures we load the input data and initialize CAMP on the primary process only (we're including the existing call to the camp_core_t constructor and camp_core_t::initialize() to show where our new conditional block begins):
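A sketch of this block, assuming camp_mpi_init() and camp_mpi_rank() wrappers from the camp_mpi module (input_files is the configuration-file list from the earlier parts):

```fortran
#ifdef USE_MPI
  ! initialize MPI and run the setup code on the primary process only
  call camp_mpi_init( )
  if( camp_mpi_rank( ) .eq. 0 ) then
#endif

    ! existing code from parts 1-3
    camp_core => camp_core_t( input_files )
    call camp_core%initialize( )
```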
The camp_core_t::initialize() subroutine instructs the internal model elements to take their input data and condense it down into a small data block containing only the information they need to solve the chemical system during calls to camp_core_t::solve(). The camp_core_t MPI functions pass only this condensed data to other processes. So, after the core is passed, you will not have access to the raw input data or model property_t objects that we used to set up the rxn_update_data_t objects in part 3. Thus, all the setup of rxn_update_data_t objects must be done on the primary process, before passing the core and update objects to the other processes.
So, let's end our first MPI conditional block after we set up the \(\ce{NO2}\) photolysis rxn_update_data_t object and before the call to camp_core_t::solver_initialize(). The first step is to get the size of the buffer we'll use to pass the objects (the existing check that the \(\ce{NO2}\) photolysis update data object is attached is included to show the placement of the following code block):
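A sketch, assuming pack_size() type-bound functions on the core and update-data objects and a camp_mpi_pack_size_integer() wrapper; NO2_photolysis, idx_NO2, and idx_O3 stand in for the variable names from part 3, and we also reserve space for the species indexes we'll pack below:

```fortran
#ifdef USE_MPI
    ! total size of the core, the update-data object, and the species indexes
    pack_size = camp_core%pack_size( ) &
                + NO2_photolysis%pack_size( ) &
                + camp_mpi_pack_size_integer( idx_NO2 ) &
                + camp_mpi_pack_size_integer( idx_O3 )
    allocate( buffer( pack_size ) )
#endif
```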
After we allocate the buffer on the primary process, we'll pack it with the object data:
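Something like the following, assuming bin_pack() type-bound routines that advance the buffer position as they go:

```fortran
#ifdef USE_MPI
    ! pack the objects at the start of the buffer
    pos = 0
    call camp_core%bin_pack( buffer, pos )
    call NO2_photolysis%bin_pack( buffer, pos )
#endif
```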
Next, we'll pack the species indexes we looked up, so they can be passed along with the core. (Remember, we won't be able to look them up on the secondary processes.)
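Continuing the sketch with an assumed camp_mpi_pack_integer() wrapper, and closing the primary-process block:

```fortran
#ifdef USE_MPI
    call camp_mpi_pack_integer( buffer, pos, idx_NO2 )
    call camp_mpi_pack_integer( buffer, pos, idx_O3 )
  end if ! end of the primary-process block
#endif
```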
After we pack the objects and exit the primary process block, we'll pass the buffer to the other processes:
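A sketch using assumed broadcast wrappers (camp_mpi_bcast_integer() and camp_mpi_bcast_packed() are our guesses at the camp_mpi naming; check the module documentation):

```fortran
#ifdef USE_MPI
  ! send the buffer size and the packed data to every process
  call camp_mpi_bcast_integer( pack_size )
  if( camp_mpi_rank( ) .gt. 0 ) allocate( buffer( pack_size ) )
  call camp_mpi_bcast_packed( buffer )
#endif
```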
Next, we'll unpack the objects on the secondary processes:
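Mirroring the packing code, with assumed bin_unpack() and camp_mpi_unpack_integer() routines:

```fortran
#ifdef USE_MPI
  if( camp_mpi_rank( ) .gt. 0 ) then
    ! create an empty core and fill it from the buffer
    camp_core => camp_core_t( )
    pos = 0
    call camp_core%bin_unpack( buffer, pos )
    call NO2_photolysis%bin_unpack( buffer, pos )
    call camp_mpi_unpack_integer( buffer, pos, idx_NO2 )
    call camp_mpi_unpack_integer( buffer, pos, idx_O3 )
  end if
  deallocate( buffer )
#endif
```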
Note that we call the camp_core_t constructor without passing the input file list. This creates an empty core on the secondary processes that we can fill with the packed data from the buffer. After unpacking the objects and deallocating the buffer, our message passing is complete, and the rest of the code remains the same, beginning with the call to solver_initialize().
This is not a very useful parallelization of our box model, as we're just solving the same system on every process, but it demonstrates how to initialize and pass the camp_core_t and update_data_t objects. The camp_state_t::state_var(:) array can be accessed directly and passed however your model passes double-precision floating-point arrays, or you can use the camp_mpi_pack_size_real_array(), camp_mpi_pack_real_array(), and camp_mpi_unpack_real_array() functions.
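For example, a rough sketch of round-tripping the state array with these functions (signatures assumed to mirror the integer wrappers above):

```fortran
#ifdef USE_MPI
  pack_size = camp_mpi_pack_size_real_array( camp_state%state_var )
  allocate( buffer( pack_size ) )
  pos = 0
  call camp_mpi_pack_real_array( buffer, pos, camp_state%state_var )
  ! ...broadcast the buffer, then on the receiving processes...
  pos = 0
  call camp_mpi_unpack_real_array( buffer, pos, camp_state%state_var )
  deallocate( buffer )
#endif
```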
To finish up, let's add a conditional block around the output to print the results from the first secondary process, just to make sure our message passing is working, and finalize MPI.
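A sketch, assuming a camp_mpi_finalize() wrapper and taking rank 1 as the first secondary process:

```fortran
#ifdef USE_MPI
  if( camp_mpi_rank( ) .eq. 1 ) then
#endif

    ! ... existing output code ...

#ifdef USE_MPI
  end if
  call camp_mpi_finalize( )
#endif
```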
To compile the model code with MPI, be sure to include the USE_MPI flag definition:
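One possible invocation (the compiler wrapper, source file name, library name, and CAMP install paths are placeholders for your own setup; note the .F90 extension so the preprocessor runs):

```bash
mpif90 -DUSE_MPI -o box_model box_model.F90 \
    -I/path/to/camp/include -L/path/to/camp/lib -lcamp
```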
In later installments of Boot CAMP, we'll include a section towards the end that describes any MPI-related code needed for the updates introduced in that installment.
Now that our messages are passed, it's aerosol time. That's the topic of the next installment of Boot CAMP!
To run a Docker container with MPI support, we'll need to build the image locally. So, we'll clone the CAMP repo, build the container with MPI and then run it:
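A hypothetical sequence (the image tag is our choice, and the exact Dockerfile options for enabling MPI are described in the CAMP repo):

```bash
git clone https://github.com/open-atmos/camp.git
cd camp
docker build -t camp-mpi .
docker run -it camp-mpi bash
```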
Inside the container:
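For example, assuming the box model executable built as above (names are placeholders):

```bash
mpirun -np 2 ./box_model
```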
Back outside the container:
You should get the same results as described in Boot CAMP: Part 3 - Updating CAMP Parameters.
< Previous: Boot CAMP: Part 3 - Updating CAMP Parameters Index Next: Boot CAMP: Part 5 - Aerosol Representations >