|
|
## Motivation
|
|
|
|
|
|
We use MPI parallelism in order to run bigger stuff faster. To achieve that, our library is made MPI aware, meaning that our disk object model explicitly uses an MPI communicator to distribute all its scalar fields over a set of MPI processes.
|
|
|
|
|
|
|
|
|
That same library is used by our pre/post treatment tools to modify, visualize, and check the consistency of the generated disks.
|
|
|
Yet, there are a few situations where having to run an MPI application can be problematic:
|
|
|
* Some HPC facilities won't let you run an MPI application anywhere else than on compute nodes, which rules out login nodes and pre/post treatment nodes.
|
|
|
* You do not have access to an MPI implementation (unlikely).
|
|
|
|
|
|
Unfortunately, some clusters do not allow running MPI jobs on login or post treatment nodes, even on a single MPI process, and some MPI implementations won't let us run a single-process job without an MPI launcher.
|
|
|
If you're unlucky enough, these two flavors of stupidity will eventually collide.
|
|
|
|
|
|
Because of that, we want to be able to build a non-MPI version of our code. And since we're not number crunchers, we want to keep a single source code and minimize the use of the preprocessor.
|
|
|
We offer two ways to deal with that.
|
|
|
|
|
|
## Selected solution
|
|
|
### Generate both parallel and sequential builds
|
|
|
|
|
|
We developed a minimal dummy Boost.MPI in the `boost::noopmpi` namespace that mimics a one-process communicator. Such a library does not need an actual MPI implementation.
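
To give an idea of the shape such a drop-in can take, here is a minimal sketch. It is illustrative only: the real `boost::noopmpi` mirrors whatever subset of the Boost.MPI interface our code actually uses, and the exact member set below is an assumption.

```cpp
// Sketch of a no-op, single-process stand-in for Boost.MPI.
// The member set shown here is illustrative, not the actual code.
namespace boost { namespace noopmpi {

class environment {
public:
    environment(int& /*argc*/, char**& /*argv*/) {}  // nothing to initialize
};

class communicator {
public:
    int rank() const { return 0; }  // always the one and only process
    int size() const { return 1; }  // a fixed one-process "world"
    void barrier() const {}         // no peers to synchronize with

    // Point-to-point calls have no valid peer in a one-process world;
    // a real implementation would likely assert or throw here.
    template <typename T>
    void send(int /*dest*/, int /*tag*/, const T& /*value*/) const {}

    template <typename T>
    void recv(int /*source*/, int /*tag*/, T& /*value*/) const {}
};

}} // namespace boost::noopmpi
```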
|
|
|
This is the default build configuration. It compiles the code twice, once with the regular MPI implementation and once with the empty one. Sequential versions of the tools are prefixed with `seq_` (for example, `seq_fargoInit` is generated alongside `fargoInit`).
|
|
|
|
|
|
Instead of including the Boost.MPI headers directly, our code goes through "switching" headers that define the `fmpi` namespace.
|
|
|
Depending on whether the **FARGO_SEQ** macro is defined, each switching header imports either the `boost::mpi` implementation or the `boost::noopmpi` one.
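
In practice, such a switching header can be as small as a conditional include plus a namespace alias. The sketch below is illustrative; the file name, include paths, and exact layout are assumptions, not the actual project files.

```cpp
// boost_communicator.hpp -- sketch of a switching header (file name
// and include paths are illustrative, not the exact project layout).
#pragma once

#ifdef FARGO_SEQ
    #include "boost/noopmpi/communicator.hpp"  // the dummy implementation
    namespace fmpi = boost::noopmpi;
#else
    #include <boost/mpi/communicator.hpp>      // the real Boost.MPI one
    namespace fmpi = boost::mpi;
#endif
```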
|
|
|
In this build configuration, the tests are still run through MPI.
|
|
|
|
|
|
The header correspondence is of the form `#include "boost/mpi/communicator.hpp"` $`\Rightarrow`$ `#include "boost_communicator.hpp"`.
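
The benefit is that client code is written once against `fmpi` and compiles unchanged in both flavors. A hypothetical example (the switching header names are assumptions, matching the sketches above):

```cpp
#include "boost_communicator.hpp"  // hypothetical switching header
#include "boost_environment.hpp"   // idem, for fmpi::environment

#include <iostream>

int main(int argc, char** argv) {
    fmpi::environment env(argc, argv);
    fmpi::communicator world;

    // Identical source for both builds: in the sequential build,
    // rank() is always 0 and size() is always 1.
    std::cout << "process " << world.rank()
              << " of " << world.size() << '\n';
    return 0;
}
```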
|
|
|
### Generate a sequential version
|
|
|
|
|
|
|
|
|
|
If you configure the code with `cmake -DFARGO_SEQUENTIAL_ONLY=ON ...`, everything is compiled in sequential mode only.
|
|
|
|
|
|
In that mode, tests involving more than one MPI process are disabled.

There are a few other places where we still need to rely on the **FARGO_SEQ** macro definition to explicitly select the code to compile. We hope to clean that stain on our karma at some point.