Revision as of 15:03, 20 August 2015 by Huiweilu (talk | contribs)

As the core count of many-core processors keeps increasing, MPI+X is becoming a promising programming model for large-scale SMP clusters. It has the potential to exploit both intra-node and inter-node parallelism with appropriate execution units and granularity.

Argobots is a low-level threading/task infrastructure developed by a joint effort of Argonne National Laboratory, University of Illinois at Urbana-Champaign, University of Tennessee, Knoxville and Pacific Northwest National Laboratory. It provides a lightweight execution model that combines low-latency thread and task scheduling with optimized data-movement functionality.

A benefit of Argobots is providing asynchrony/overlap to MPI. The idea is to make multiple blocking MPI calls at the same time in multiple ULTs: if one MPI call blocks in ULT A, the MPI runtime detects it and context-switches to another ULT to make progress on the other blocking calls. Once the other ULTs finish their execution, control switches back to ULT A to continue its execution. In this way, the CPU stays busy doing useful work instead of waiting on the blocking call.
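The overlap described above can be sketched as follows. This is a sketch under stated assumptions, not code from the repository: it assumes MPICH built with Argobots support, uses the public Argobots API (abt.h), and the function name recv_ult and the choice to initialize Argobots before MPI are illustrative assumptions, not documented requirements.

```c
/* Sketch only (hypothetical example, not the repository's hello_abt test).
 * Rank 0 creates two ULTs on one ES, each posting a blocking MPI_Recv;
 * when one ULT blocks, the runtime can switch to the other ULT. */
#include <mpi.h>
#include <abt.h>

static void recv_ult(void *arg)
{
    int tag = *(int *)arg, buf;
    /* If this call blocks, the MPI runtime yields to another ULT. */
    MPI_Recv(&buf, 1, MPI_INT, 1, tag, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
}

int main(int argc, char **argv)
{
    int provided, rank, tags[2] = {0, 1};
    ABT_xstream xstream;
    ABT_pool pool;
    ABT_thread ults[2];

    ABT_init(argc, argv);   /* assumed ordering: Argobots before MPI */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Run both ULTs in the main pool of the single main ES. */
        ABT_xstream_self(&xstream);
        ABT_xstream_get_main_pools(xstream, 1, &pool);
        for (int i = 0; i < 2; i++)
            ABT_thread_create(pool, recv_ult, &tags[i],
                              ABT_THREAD_ATTR_NULL, &ults[i]);
        for (int i = 0; i < 2; i++) {
            ABT_thread_join(ults[i]);
            ABT_thread_free(&ults[i]);
        }
    } else if (rank == 1) {
        int msg = 42;
        for (int tag = 0; tag < 2; tag++)
            MPI_Send(&msg, 1, MPI_INT, 0, tag, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    ABT_finalize();         /* after MPI_Finalize, as noted below */
    return 0;
}
```

Run with `mpiexec -n 2`; the two receives on rank 0 can complete in either order since the blocked ULT yields to its sibling.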

However, the two-level parallelism of MPI+X introduces new problems, such as lock contention between threads inside MPI. To avoid unnecessary locking between execution units, MPI+Argobots explicitly controls the context switches between user-level threads (ULTs) and execution streams (ESs). When switching between ULTs in the same ES, no lock is needed.

New Thread Level for ULT: MPI_THREAD_ULT

We propose another thread level for MPI and thread integration: MPI_THREAD_ULT. At this level, there is only one ES per process and multiple ULTs in that ES. Because ULTs do not execute concurrently, no lock is needed when entering or exiting MPI calls. On the other hand, when yielding, the current ULT yields to other ULTs in the same ES, as opposed to yielding to other ESs as with MPI_THREAD_MULTIPLE.

MPI_Init_thread(&argc, &argv, MPI_THREAD_ULT, &provided);

Build MPI+Argobots

Git repos:

Argobots read-only clone URL: git://
mpich-dev read-only clone URL: git://

Build Argobots

Follow the instructions below to build Argobots.

$ export INSTALL_PATH=/path/to/install
$ git clone --origin argobots git:// argobots
$ cd argobots
$ ./
$ ./configure --prefix=$INSTALL_PATH
$ make -j 4
$ make install


MPI+Argobots is currently under development in the mpich-dev repository. To get the source code, do

$ git clone --origin mpich-dev git:// mpich-dev
$ cd mpich-dev
$ git checkout mpi-argobots

Set paths to link against the Argobots library.



$ ./
$ CFLAGS="-I$INSTALL_PATH/include" ./configure --prefix=$INSTALL_PATH --enable-threads=multiple --with-thread-package=argobots
$ make -j 8
$ make install

Build and Run MPI+Argobots Examples

Set your path to use the newly installed mpicc and mpiexec.


Run examples.

$ cd mpich-dev/test/mpi/threads/argobots
$ mpiexec -n 2 ./hello_abt

Example of MPI+Argobots


This is a template for MPI+Argobots applications. Note that ABT_finalize must be called after MPI_Finalize: MPI+Argobots makes Argobots calls inside MPI, so Argobots must not be finalized before MPI. Also, some users may need to make Argobots calls after finalizing MPI, so Argobots is finalized manually by the user.
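A minimal template matching that finalization order might look like the following. This is a hedged sketch, not the repository's example: the placement of ABT_init before MPI_Init_thread is an assumption made here for symmetry, and the requested thread level is only illustrative.

```c
/* Template sketch for an MPI+Argobots application. The one documented
 * constraint is the finalization order: ABT_finalize after MPI_Finalize.
 * Initializing Argobots first is an assumption of this sketch. */
#include <mpi.h>
#include <abt.h>

int main(int argc, char **argv)
{
    int provided;

    ABT_init(argc, argv);        /* assumed: Argobots up before MPI */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    /* Argobots calls: create ESs/ULTs here and make MPI calls from them. */

    MPI_Finalize();
    ABT_finalize();              /* must come after MPI_Finalize */
    return 0;
}
```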


Because Argobots supports both execution streams (ESs) and user-level threads (ULTs), MPICH is compiled with "--enable-threads=multiple". At execution time, you can choose whether multiple ESs are needed by selecting the thread level MPIX_THREAD_ULT or MPI_THREAD_MULTIPLE in MPI_Init_thread. MPIX_THREAD_ULT means there is only one ES per process with multiple ULTs in that ES. MPI_THREAD_MULTIPLE means there is no restriction on ESs and ULTs.
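The runtime choice between the two levels can be sketched like this. As usual for MPI_Init_thread, the provided level should be checked rather than assumed; the fallback message below is purely illustrative.

```c
/* Sketch: request the ULT thread level and check what was granted.
 * MPIX_THREAD_ULT is the MPICH extension level described above. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;

    MPI_Init_thread(&argc, &argv, MPIX_THREAD_ULT, &provided);
    if (provided == MPIX_THREAD_ULT)
        printf("one ES per process, multiple ULTs: lock-free MPI entry/exit\n");
    else
        printf("ULT level not granted; provided level is %d\n", provided);

    MPI_Finalize();
    return 0;
}
```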