* [Requirements](#requirements)
* [Getting the code](#getting-the-code)
  * [The current release](#the-current-release)
  * [The current development version](#the-current-development-version)
* [Configuration](#configuration)
* [Compilation](#compilation)
  * [Performance Notes](#performance-notes)
* [Testing](#testing)
* [Machine specific stuff](#machine-specific-stuff)
  * [Occigen](#occigen-cines)
  * [Licallo](#licallo-oca)
  * [Ada](#ada-idris)

----

## Requirements

Installing FargOCA relies on some tools and libraries. Some are mandatory:

* **Git**: you will need [git](https://git-scm.com/) to retrieve the code, and [git-lfs](https://git-lfs.github.com/) to run the tests and do any development work.
* **CMake**: we use [CMake](https://www.cmake.org) for configuration. Version 3.9.x is currently used and tested; older versions might work.
* **MPI**: any conforming MPI-2 implementation should do. Intel MPI 2017.4 and OpenMPI have been tested.
* **C++**: a C++11-compliant compiler. The Intel and GNU compilers have been tested.
* **Boost**: a recent Boost distribution with Boost.MPI enabled. It must be compatible with the MPI implementation and C++ compiler used.

Some are (strongly) recommended:

* **Python**: [Python 3.6](https://www.python.org) with [numpy](http://www.numpy.org) and [h5py](http://www.h5py.org/) is used for the testing and post-processing scripts.
* **HDF5**: with C++ bindings. We plan to use [HDF5](https://support.hdfgroup.org/HDF5/) for data storage. Although we can currently build without it (auto-detection should disable it if no HDF5 library is found on the machine), that configuration is less tested.

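Before configuring, the mandatory command-line tools can be probed from a shell. This is just a convenience sketch, not part of the project; the tool list mirrors the bullets above, so extend it as needed:

```shell
# Pre-flight sketch: report which of the mandatory tools are on PATH.
# The list mirrors the requirements above; extend it if needed.
for tool in git git-lfs cmake; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```

Any line reporting `MISSING` points at a prerequisite to install before going further.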
## Getting the code

#### The current release

Not available yet.

#### The current development version

A [git](https://git-scm.com) client is required, which is probably already available on your machine.

You will need access to our [GitLab server](https://gitlab.oca.eu/).
Assuming you registered your [ssh key](https://gitlab.oca.eu/help/ssh/README):

```
$ git clone git@gitlab.oca.eu:DISC/fargOCA.git
Cloning into 'fargOCA'...
...
Checking connectivity... done.
$
```

## Configuration

It is advised (read: mandatory) not to build directly in the source code directory.

```
[...fargOCA]$ pwd
/beegfs/home/alainm/tmp/fargOCA
[...fargOCA]$ mkdir build
[...fargOCA]$ cd build/
[...build]$
```

If all the required libraries are available in standard locations, you only need to run `cmake ..` (or, in its general form, `cmake <fargOCA distrib path>`):

```
[alainm@pollux build]$ cmake ..
-- The CXX compiler identification is Intel 17.0.4.20170411
.....
-- Boost version: 1.65.1
-- Found the following Boost libraries:
--   mpi
--   serialization
-- Configuring done
-- Generating done
-- Build files have been written to: /beegfs/home/alainm/tmp/fargOCA/build
[alainm@pollux build]$
```

#### Troubleshooting

Sometimes libraries are not installed in their default locations. In that case you need to tell CMake where to find them through configuration variables:

```
<builddir>$ cmake -D<varname>=<value> <sourcedir>
```

As an example, if the C and C++ compilers are not the default ones (or if the environment variables **CC** and **CXX** point to other compilers) you can specify them explicitly (here using **icc** and **icpc**):

```
[alainm@pollux build]$ cmake -DCMAKE_C_COMPILER=icc -DCMAKE_CXX_COMPILER=icpc ..
-- Build files have been written to: /beegfs/home/alainm/tmp/fargOCA/build
[alainm@pollux build]$
```

##### Libraries

* **BOOST_ROOT**: the root of the Boost installation (can also be specified through an environment variable)
* **HDF5_ROOT**: the root of the HDF5 installation (can also be specified through an environment variable)

##### MPI

MPI installations can be quite exotic, especially on HPC clusters. If your installation is not automatically detected, you can try setting these variables:

* **MPI_C_COMPILER**: the name/path of the C MPI wrapper; common names include mpicc, mpiicc, mpiC... If that does not work:
* **MPI_C_INCLUDE_PATH** and **MPI_C_LIBRARIES**: the include path and libraries required to build C MPI applications. If that does not work:
* **MPI_CXX_COMPILER**: the name/path of the C++ MPI wrapper; common names include mpicxx, mpic++, mpiicpc... If that does not work:
* **MPI_CXX_INCLUDE_PATH** and **MPI_CXX_LIBRARIES**: the include path and libraries required to build C++ MPI applications.

These are just provided as troubleshooting hints; please refer to the CMake documentation for more details.

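Putting these variables together, an invocation pointing CMake at a non-default Boost and MPI wrapper could look like the following. This is a dry-run sketch: the path and wrapper name are placeholders, not values from this project.

```shell
# Dry-run sketch: compose a cmake invocation with hypothetical locations.
BOOST_ROOT=/opt/boost-1.65      # placeholder path: adjust to your site
MPI_CXX_WRAPPER=mpiicpc         # placeholder wrapper name
echo cmake -DBOOST_ROOT="$BOOST_ROOT" -DMPI_CXX_COMPILER="$MPI_CXX_WRAPPER" ..
```

Drop the `echo` and run it from the build directory once the values match your installation.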
## Compilation

From the build directory:

```
[alainm@pollux build]$ make
```

Note that the makefile system generated by CMake supports parallel builds:

```
[alainm@pollux build]$ make -j<nb cores>
```

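The core count can be filled in automatically; a small sketch, assuming GNU coreutils' `nproc` (on macOS, `sysctl -n hw.ncpu` plays the same role):

```shell
# Sketch: derive the job count from the number of available cores.
jobs=$(nproc)
echo make -j"$jobs"   # printed as a dry run; drop the echo to actually build
```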
### Performance Notes

#### valarray

We are using [std::valarray](https://en.cppreference.com/w/cpp/numeric/valarray); recent Intel compilers provide [special optimizations](https://software.intel.com/en-us/cpp-compiler-developer-guide-and-reference-using-intel-s-valarray-implementation) that can significantly improve performance. So, if you are using that compiler, it is probably a good idea to add the `-use-intel-optimized-headers` flag in the `<build directory>/CMakeCache.txt` file:

```
[alainm@pollux ISO]$ grep header ../../../../CMakeCache.txt
CMAKE_CXX_FLAGS_RELEASE:STRING=-O3 -DNDEBUG -use-intel-optimized-headers
CMAKE_CXX_FLAGS_RELWITHDEBINFO:STRING=-O3 -g -DNDEBUG -use-intel-optimized-headers
CMAKE_EXE_LINKER_FLAGS:STRING=-use-intel-optimized-headers
CMAKE_EXE_LINKER_FLAGS_RELWITHDEBINFO:STRING=-use-intel-optimized-headers
[alainm@pollux ISO]$
```

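One way to splice the flag into the cache is with `sed`. The sketch below operates on a stand-in cache file so it is self-contained; in practice the target is `<build directory>/CMakeCache.txt`, and GNU `sed -i` syntax is assumed:

```shell
# Sketch: append the Intel valarray flag to the cached release flags.
# A stand-in cache file is created here so the example is self-contained;
# point "cache" at your real <build directory>/CMakeCache.txt instead.
cache=/tmp/CMakeCache.example.txt
printf 'CMAKE_CXX_FLAGS_RELEASE:STRING=-O3 -DNDEBUG\n' > "$cache"
sed -i 's/^\(CMAKE_CXX_FLAGS_RELEASE:STRING=.*\)/\1 -use-intel-optimized-headers/' "$cache"
cat "$cache"
```

After editing the real cache, re-run `make` so the new flags take effect.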
## Testing

You need to have [Git LFS](https://docs.gitlab.com/ee/workflow/lfs/manage_large_binaries_with_git_lfs.html#using-git-lfs) enabled on your git client in order to retrieve the test data.

Testing is available on all platforms. From the build directory, just run:

```
$ ctest
....
$
```
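When iterating locally, standard CTest options can narrow things down. These flags belong to CTest itself, not to this project; they are echoed here as a dry run since this snippet does not live inside a real build tree:

```shell
# Dry-run sketch of generally useful ctest options:
echo "ctest --output-on-failure"   # show the log of each failing test
echo "ctest -R <name-regex>"       # run only the tests whose name matches
echo "ctest -j 4"                  # run up to 4 tests concurrently
```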

It can take some time. Some scripts have been developed to speed things up on some clusters; see the [Machine Specific Stuff](Building#machine-specific-stuff) section.

## Machine specific stuff

Some cluster-specific scripts have been developed that allow running the tests in parallel. These scripts automatically dispatch the tests on the cluster and track the results. From the build directory:

* Licallo: `./tools/dev/licTest.sh`
* Occigen: `./tools/dev/occTest.sh`

Refer to each script's help for more details.

### Occigen (CINES)

* **Environment**

You need to load the environment this way in order to work around an *occigen*-specific environment dependency issue:

```
module purge
module load intel/17.2 python/3.6.3
module rm intel
module load intel/18.0 openmpi/intel/2.0.1 boost/1.65.1
module rm intel
module load intel/18.1 cmake/3.9.0
module load hdf5-seq/1.8.17
```

Note that on *occigen*, loading the compiler module does not set the `CC` and `CXX` environment variables, so you need to:

```
$ export CC=icc
$ export CXX=icpc
```

Once this environment is loaded, you just need to run `cmake`:

```
[elega@login0:~/fargOCA/build]$ cmake [options] ..
```

* **Testing**: platform-specific scripts allow you to run the tests in parallel:

  * `<builddir>/tools/dev/occTest.sh` dispatches the tests on the cluster and tracks the results. You can interrupt the monitoring phase and resume it later with...
  * `<builddir>/tools/dev/occTrackTest.sh`, which tracks the dispatched tests.

Refer to each script's help (`-h` option) for more details.

### Licallo (O.C.A.)

#### Environment

You need to load the environment this way in order to work around a *licallo*-specific environment dependency issue:

```
module purge
module load userspace/OCA
module load cmake
module load python/3.6.3_anaconda3
module load intel/mkl/64/2019.0.045
module load intel/mpi/64/2019.0.045
module load intel/compiler/64/2019.0.045
module load intel/tbb/64/2019.0.045
module load boost-intel19/1.68.0
module load hdf5-intel19/1.8.20-seq
module load git
```

#### Configuration

Once this environment is loaded, you just need to run `cmake`:

```
[elega@login0:~/fargOCA/build]$ cmake [options] ..
```

#### Build

As usual.

#### Testing

Platform-specific scripts allow you to run the tests in parallel:

* `<builddir>/tools/dev/licTest.sh` dispatches the tests on the cluster and tracks the results. You can interrupt the monitoring phase and resume it later with...
* `<builddir>/tools/dev/licTrackTest.sh`, which tracks the dispatched tests.

Refer to each script's help (`-h` option) for more details.

### Ada (Idris)

#### Environment

The following environment has been tested:

```
[roth005@ada338: bld2]$ more ~/.modules
module purge
module load intel/2018.2
module load boost/1.67.0
module load hdf5/seq/1.8.14
# for some reason, hdf5 wants to link static and we need shared (the run
# path is embedded, so LD_LIBRARY_PATH does not need updating)
export WRAPPER_LDFLAGS="$(h5c++ -show -shlib | sed -e 's/.*-D\w\+//')"
module load python/3.6.1
module load gcc/4.9.4 # only loaded for cmake, which could have been linked static
module load cmake/3.7.2
[roth005@ada338: bld2]$ source ~/.modules
(remove) cmake version 3.7.2
(remove) gcc version 4.9.4
(remove) python version 3.6.1
(remove) hdf5 seq 1.8.14
(remove) boost version 1.67.0
(remove) Intel(R) Parallel Studio XE 2018 Update 2 Cluster Edition for Linux*
(load) Intel(R) Parallel Studio XE 2018 Update 2 Cluster Edition for Linux*
...
(load) boost version 1.67.0
(load) hdf5 seq 1.8.14
(load) python version 3.6.1
(load) gcc version 4.9.4
(load) cmake version 3.7.2
[roth005@ada338: bld2]$
```

#### Configuration

MPI detection does not work that well, so we need to provide the MPI wrappers as compilers:

```
[roth005@ada338: bld2]$ cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_C_COMPILER=mpiicc -DCMAKE_CXX_COMPILER=mpiicpc ../
-- The CXX compiler identification is Intel 18.0.2.20180210
-- Check for working CXX compiler: /smplocal/pub/Modules/IDRIS/wrappers/mpiicpc
-- Check for working CXX compiler: /smplocal/pub/Modules/IDRIS/wrappers/mpiicpc -- works
...
-- Boost version: 1.67.0
-- Found the following Boost libraries:
--   mpi
--   program_options
--   chrono
--   filesystem
--   system
-- Configuring done
-- Generating done
-- Build files have been written to: /workgpfs/rech/oth/roth005/fargOCA/bld2
[roth005@ada338: bld2]$
```

#### Build

As usual.

#### Testing

We run the unit tests and the integration tests (with 4 MPI processes) with:

```
<builddir>$ ./tools/dev/adaTest.sh
```

You can interrupt the tracking phase and relaunch it with:

```
<builddir>$ ./tools/dev/adaTrackTest.sh
```