PMI v2 Design Thoughts

TODO: port from http://www-unix.mcs.anl.gov/mpi/mpichold/developer/design/pmiv2.htm . Only the first part has been moved.

[[Category:Stubs]]
[[Category:Design Documents]]

The PMI interface has worked well as an abstract interface supporting process management functions, particularly for MPI-1 programs. Experience in MPICH2 has shown some limitations with respect to MPI-2 functions, including dynamic process management, singleton init, and multi-threaded applications. This document reviews some of the issues and makes some suggestions. The document is divided into four parts: Issues and limitations of the current PMI interface, Changes to the PMI Client API, Changes to the PMI Wire Protocol, and Interaction of PMI, mpiexec, and applications.

==Issues and limitations of the current PMI interface==
As originally designed, PMI provided a unified API for three distinct groups of functions:
# Process management, used to start, stop, and abort processes
# Information about the parallel job, such as the rank and number of processes
# Exchange of connection information (this is the Key-Value-Space or KVS set of routines in PMI)
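
For reference, here are representative routines from the current PMI client API, grouped by the three categories above. The prototypes are abbreviated and paraphrased from the PMI-1 header; see pmi.h in the MPICH2 source for the authoritative signatures.

<pre>
/* Group 1: process management */
int PMI_Init( int *spawned );        /* connect to the process manager */
int PMI_Finalize( void );
int PMI_Abort( int exit_code, const char error_msg[] );
int PMI_Spawn_multiple();            /* long argument list omitted here;
                                        used to implement MPI_Comm_spawn */

/* Group 2: information about the parallel job */
int PMI_Get_rank( int *rank );
int PMI_Get_size( int *size );

/* Group 3: exchange of connection information (the KVS routines) */
int PMI_KVS_Get_my_name( char kvsname[], int length );
int PMI_KVS_Put( const char kvsname[], const char key[], const char value[] );
int PMI_KVS_Commit( const char kvsname[] );
int PMI_KVS_Get( const char kvsname[], const char key[],
                 char value[], int length );
int PMI_Barrier( void );             /* synchronizes the put/get exchange */
</pre>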

Experience has shown that the second of these (information about the parallel job) needs to be extended with information about the compute node; this is needed to support topology information and optimizations for SMPs. These have been grouped together because a process manager is in an excellent position to support all of these services. However, many process managers provide only the first of these; some also provide facilities for implementing the second. With the exception of MPD (and some Grid process managers), no process manager provides the facilities for implementing the third (the KVS routines). The next version of PMI needs to address the following issues, arranged by primary area:

===API changes===

# Modularity. The PMI interface contains two major sets of functions. One manages processes (starts, stops, and possibly signals them). The other manages the information that processes use to contact each other, and it also supports the MPI name-publishing features (the "KVS" features). While some process managers may provide both sets of services, others may provide only the basic process management functions. To allow MPICH2 to fit easily into such environments, a clean separation of these functions is needed; a sketch of one possible split follows this list. One test would be an implementation that used Bproc on a cluster to start processes and provided some other mechanism (such as LDAP or a separate distributed information manager) to implement the KVS routines.
# Thread safety. The PMI routines are blocking and the current implementation assumes that only one thread at a time will call them. This violates the spirit if not the letter of the MPI standard; for example, in the current implementation, a call to MPI_Comm_spawn will block any other thread from executing a PMI call until the spawn completes.
# Error returns from PMI operations (such as spawn) that may return multiple error codes. The error codes need to be efficiently integrated into the PMI wire protocol, and they must be convertible into MPI error codes.
# Error codes from PMI routines should be MPI error codes rather than PMI-specific error codes. The reason for this is to take advantage of the powerful and detailed MPI error reporting without duplicating that error-reporting machinery. In addition, returning MPI error codes directly avoids having the MPI routines convert PMI codes into MPI codes, as the current implementation must.
# Info objects should not require any translation between the PMI and MPI layers. That is, there is no reason for PMI to define a different implementation of info objects than the one used by MPI. (If PMI were used by other tools, one could consider a PMI-specific implementation of info, but, as with the error codes, there is little gained and much unnecessary complexity introduced by adding this flexibility.)
# The current KVS design does not support dynamic processes in MPI. That is, the PMI_KVS_Get routine (and its relatives) cannot be used to get connection information about processes that are not in the MPI_COMM_WORLD of the calling process. This has led to complex code within the CH3 implementation (which would need to be duplicated in other devices) that emulates the KVS routines. This part of the PMI interface should provide a way to make use of KVS when the MPI-2 dynamic process routines are used; see the KVS sketch following this list.
# Topology and other system information. The current MPICH2 implementation discovers that processes are on the same node by comparing their node names. This is not scalable, and it may not reflect the intent of the user (who may not want to exploit the "SMP"ness of the system, particularly when developing algorithms or testing code). Instead, this information should be communicated through the process manager interface into the MPI process. This requires a new PMI interface and should be coordinated with the topology routines.
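
To make items 1, 4, and 5 concrete, here is a minimal sketch of what the process-management half of a split client API might look like. The PMI2_PM_* names are hypothetical illustrations, not a proposed final interface; the points are that a process manager can implement this half alone, that every routine returns an MPI error code, and that MPI_Info handles are used without translation.

<pre>
#include "mpi.h"  /* for MPI_Info and MPI error codes (MPI_SUCCESS, ...) */

/* Hypothetical process-management half of a modular PMI v2 client API.
   A process manager that only starts and stops processes would implement
   just these routines.  Each returns an MPI error code (MPI_SUCCESS on
   success), and the intent is that they may be called concurrently from
   several threads (item 2). */
int PMI2_PM_Init( int *spawned, int *rank, int *size );
int PMI2_PM_Finalize( void );
int PMI2_PM_Abort( int exit_code, const char error_msg[] );

/* Spawn takes the MPI_Info objects from MPI_Comm_spawn_multiple unchanged
   and fills in one MPI error code per requested process. */
int PMI2_PM_Spawn( int count, const char *cmds[], const char **argvs[],
                   const int maxprocs[], const MPI_Info info[],
                   int errcodes[] );
</pre>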
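
A corresponding sketch of the information/KVS half, again with purely illustrative names. It could be implemented by the process manager itself or by a separate service (LDAP, a distributed information manager, and so on). The job identifier is what allows connection data to be looked up for processes outside the caller's MPI_COMM_WORLD (item 6), and the node-level query is one way the process manager could tell a process which ranks share its compute node (item 7).

<pre>
/* Hypothetical information/KVS half of the same sketch. */
int PMI2_KVS_Put( const char key[], const char value[] );
int PMI2_KVS_Fence( void );    /* collective commit of pending puts */

/* jobid names the MPI_COMM_WORLD (or spawned job) whose KVS is queried;
   the caller would obtain it from Init or from connect/accept/spawn. */
int PMI2_KVS_Get( const char jobid[], int src_rank,
                  const char key[], char value[], int maxlen );

/* Node-level information supplied by the process manager, for example the
   ranks on the caller's node, so MPICH2 need not compare node names. */
int PMI2_Get_node_ranks( int ranks[], int maxranks, int *nranks );
</pre>
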
===Wire Protocol===
# Backward compatibility is required, at least to detect protocol mismatches. It would be best if the first bytes exchanged by the PMI wire protocol established the PMI version and authentication, but that is not how the first version of PMI evolved. Any later version of PMI must at least identify PMI messages from earlier versions, and it must ensure that code implementing earlier versions of PMI can detect that a later version is being used, in order to generate a user-friendly error message.
# Security in the connection is required, since PMI commands can cause the system to start programs on behalf of the user. Of particular concern is a man-in-the-middle attack, for example using packets with forged source addresses and correctly predicted TCP sequence numbers.
# Singleton init. The process by which a program that was not started with mpiexec can become an MPI process and make use of all MPI features, including MPI_Comm_spawn, needs to be designed and documented, with particular attention to the disposition of standard I/O. Not all process managers will want to, or even be able to, create a new mpiexec process, so this needs to be negotiated. Similarly, the disposition of stdio needs to be negotiated between the singleton process and the process manager. To address these issues, a new singleton init protocol has been implemented and tested with the gforker process manager.
# Consistent command and response. A number of the commands return a value for success or failure, and some include a reason for the failure, but these field names and their values are not uniform. Many use "rc" for the return code and "msg" for the reason (on failure), but some use "reason" instead of "msg", and not all use "rc". A consistent set of names simplifies the code and ensures that errors are reliably detected and reported; an illustrative exchange follows this list.
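
To illustrate the version handshake (item 1) and a uniform response format (item 4): the existing wire protocol exchanges newline-terminated key=value text, and a hypothetical v2 exchange might look like the following (the command and field names are illustrative, not a specification).

<pre>
client: cmd=init pmi_version=2 pmi_subversion=0
server: cmd=response_to_init rc=0 pmi_version=2 pmi_subversion=0

client: cmd=kvs_get jobid=job0 srcid=3 key=businesscard
server: cmd=kvs_get_response rc=0 value=(business card of rank 3)

client: cmd=kvs_get jobid=job7 srcid=0 key=businesscard
server: cmd=kvs_get_response rc=1 msg=no-such-job
</pre>

Every response carries the same "rc" field (0 on success) and adds a "msg" field only on failure, and the version exchange is the first command on the connection, so that an implementation of an earlier version can detect the mismatch and report a clear error.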
