As the core counts of many-core processors keep increasing, MPI+X is becoming a promising programming model for large-scale SMP clusters. It has the potential to exploit both intra-node and inter-node parallelism with the appropriate execution units and granularity.
Argobots is a low-level threading/task infrastructure developed jointly by Argonne National Laboratory, the University of Illinois at Urbana-Champaign, the University of Tennessee, Knoxville, and Pacific Northwest National Laboratory. It provides a lightweight execution model that combines low-latency thread and task scheduling with optimized data-movement functionality.
One benefit of Argobots is that it brings asynchrony and communication/computation overlap to MPI. The idea is to issue multiple blocking MPI calls at the same time from multiple ULTs: if one MPI call blocks in ULT A, the MPI runtime detects this and context switches to another ULT to make progress on the other blocking calls. Once the other ULTs finish their execution, control switches back to ULT A so it can continue. In this way, the CPU stays busy doing useful work instead of waiting in a blocking call. A rough sketch of this pattern is shown below.
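The following sketch is illustrative only (it is not one of the repository's examples): rank 0 creates two ULTs on its main ES, each posting a blocking MPI_Recv, while rank 1 sends the matching messages. The Argobots calls are standard API, but the expectation that a blocked call yields to the other ULT assumes the Argobots-aware MPICH build described below, and the ABT_init/MPI_Init_thread ordering is our assumption.

/* Illustrative sketch: two ULTs each make a blocking MPI call; when one
 * blocks, the Argobots-aware MPI runtime is expected to switch to the
 * other. Assumes MPICH built with --with-thread-package=argobots. */
#include <mpi.h>
#include <abt.h>

static void recv_ult(void *arg)
{
    int tag = *(int *)arg;
    int buf;
    /* Blocking call: instead of spinning, the runtime can switch ULTs here. */
    MPI_Recv(&buf, 1, MPI_INT, 1, tag, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
}

int main(int argc, char *argv[])
{
    int provided, rank, i;
    int tags[2] = {0, 1};
    ABT_xstream xstream;
    ABT_pool pool;
    ABT_thread ults[2];

    ABT_init(argc, argv);   /* initialization order is an assumption */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Put both ULTs in the main pool of the current (only) ES. */
        ABT_xstream_self(&xstream);
        ABT_xstream_get_main_pools(xstream, 1, &pool);
        for (i = 0; i < 2; i++)
            ABT_thread_create(pool, recv_ult, &tags[i],
                              ABT_THREAD_ATTR_NULL, &ults[i]);
        for (i = 0; i < 2; i++)
            ABT_thread_free(&ults[i]);  /* joins the ULT, then frees it */
    } else if (rank == 1) {
        int val = 42;
        for (i = 0; i < 2; i++)
            MPI_Send(&val, 1, MPI_INT, 0, tags[i], MPI_COMM_WORLD);
    }

    MPI_Finalize();
    ABT_finalize();
    return 0;
}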
However, the two-level parallelism of MPI+X introduces new problems, such as lock contention between threads inside MPI. To avoid unnecessary locking between execution units, MPI+Argobots explicitly controls the context switches between User-Level Threads (ULTs) and Execution Streams (ESs). When switching between ULTs in the same ES, no lock is needed.
Argobots read-only clone URL: git://git.mcs.anl.gov/argo/argobots.git
mpich-dev read-only clone URL: git://git.mpich.org/mpich-dev.git
To contribute to Argobots and MPI+Argobots, please contact Dr. Pavan Balaji.
Follow the instructions at https://collab.mcs.anl.gov/display/ARGOBOTS/Getting+and+Building to build Argobots:
$ export ABT_INSTALL_PATH=/path/to/install
$ git clone --origin argobots git://git.mcs.anl.gov/argo/argobots.git argobots
$ cd argobots
$ ./autogen.sh  # can be skipped if built from tarball
$ ./configure --prefix=$ABT_INSTALL_PATH
$ make -j 4
$ make install
$ export LD_LIBRARY_PATH=$ABT_INSTALL_PATH/lib:$LD_LIBRARY_PATH
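As a quick sanity check of the install, a minimal program that just initializes and finalizes Argobots can be compiled against it. The file name abt_hello.c and the build line in the comment are our own, not part of the Argobots distribution.

/* abt_hello.c -- minimal sketch to verify the Argobots install.
 * Suggested build (paths assumed from above):
 *   gcc abt_hello.c -I$ABT_INSTALL_PATH/include \
 *       -L$ABT_INSTALL_PATH/lib -labt -o abt_hello */
#include <stdio.h>
#include <abt.h>

int main(int argc, char *argv[])
{
    /* ABT_init sets up the primary execution stream and scheduler. */
    if (ABT_init(argc, argv) != ABT_SUCCESS) {
        fprintf(stderr, "ABT_init failed\n");
        return 1;
    }
    printf("Argobots initialized and finalized successfully\n");
    return ABT_finalize() == ABT_SUCCESS ? 0 : 1;
}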
MPI+Argobots is currently under development in the mpich-dev repository. To get the source code, run:
$ git clone --origin mpich-dev git://git.mpich.org/mpich-dev.git mpich-dev
$ cd mpich-dev
$ git checkout mpi-argobots
Then configure and build MPICH with the Argobots thread package:
$ ./autogen.sh  # can be skipped if built from tarball
$ ./configure --prefix=$MPICH_INSTALL_PATH --with-thread-package=argobots CFLAGS="-I$ABT_INSTALL_PATH/include" LDFLAGS="-L$ABT_INSTALL_PATH/lib"
$ make -j 8
$ make install
Build and Run MPI+Argobots Examples
Set the paths to use the newly installed mpicc and mpiexec, then compile the example:
$ export PATH=$MPICH_INSTALL_PATH/bin:$PATH
$ export LD_LIBRARY_PATH=$MPICH_INSTALL_PATH/lib:$LD_LIBRARY_PATH
$ cd mpich-dev/examples/argobots
$ mpicc -o ./sendrecv_ult sendrecv_ult.c -labt
$ mpiexec -n 2 ./sendrecv_ult
New Thread Level for ULT: MPIX_THREAD_ULT
We propose another thread level for MPI-and-thread integration: MPIX_THREAD_ULT. At this level, there is only one ES per process, with multiple ULTs in that ES. Because the ULTs do not execute concurrently, no lock is needed when entering or exiting MPI calls. In addition, when yielding, the current ULT yields to other ULTs in the same ES, as opposed to yielding to other ESs, as happens with MPI_THREAD_MULTIPLE.
Because MPI+Argobots supports running multiple execution streams (ESs), MPICH is compiled with "--enable-threads=multiple". In an application, you choose whether multiple ESs are needed by passing either MPIX_THREAD_ULT or MPI_THREAD_MULTIPLE to MPI_Init_thread. MPIX_THREAD_ULT means there will be only one ES per process, with multiple ULTs in that ES; MPI_THREAD_MULTIPLE places no restriction on ESs and ULTs.
MPI_Init_thread(&argc, &argv, MPIX_THREAD_ULT, &provided);
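A minimal initialization sketch might look as follows; checking the granted level against the requested one is our own defensive addition, not something the branch requires.

/* Minimal sketch: request MPIX_THREAD_ULT and verify what was granted.
 * MPIX_THREAD_ULT is provided by the mpi-argobots branch's mpi.h. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int provided;
    MPI_Init_thread(&argc, &argv, MPIX_THREAD_ULT, &provided);
    if (provided != MPIX_THREAD_ULT)
        fprintf(stderr, "warning: requested MPIX_THREAD_ULT, got %d\n",
                provided);
    /* ... create ULTs and make MPI calls from them ... */
    MPI_Finalize();
    return 0;
}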
1. Argobots Home, https://collab.mcs.anl.gov/display/ARGOBOTS/Argobots+Home
2. Huiwei Lu, Sangmin Seo, and Pavan Balaji. MPI+ULT: Overlapping Communication and Computation with User-Level Threads. The 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC '15), New York, USA, August 24-26, 2015.