SimGrid 3.11
Versatile Simulation of Distributed Systems
SMPI


Programming environment for the simulation of MPI applications.

This programming environment enables the study of MPI applications by emulating them on top of the SimGrid simulator. This is particularly useful for studying existing MPI applications within the comfort of the simulator. The motivation for this work is detailed in the reference article (available at http://hal.inria.fr/inria-00527150).

Our goal is to enable the study of unmodified MPI applications, although we are not quite there yet (see What can run within SMPI?). In addition, you can modify your code to speed up your studies or otherwise increase their scalability (see Adapting your MPI code to the use of SMPI).

Who should use SMPI (and who shouldn't)

SMPI is now considered stable and you can use it in production. You will probably want to read the scientific publications that detail the models used and their limits, but this should not be strictly necessary. If you already write and use MPI applications fluently, SMPI should feel very familiar: use smpicc instead of mpicc, and smpirun instead of mpirun (see below for more details).

Of course, if you don't know what MPI is, the SMPI documentation will seem a bit terse to you. You should pick up a good MPI tutorial on the Internet (or a course at your favorite university) and come back to SMPI once you know a bit more about MPI. Alternatively, you may want to turn to the other SimGrid interfaces, such as the MSG environment or the SimDag one.

What can run within SMPI?

You can run unmodified MPI applications (both C and Fortran) within SMPI, provided that (1) you only use MPI calls that we have implemented in SMPI and (2) you don't use any global variables in your application.

MPI coverage of SMPI

Our coverage of the interface is decent but still incomplete: given the size of the MPI standard, we may well never implement absolutely all existing primitives. One-sided communications and I/O primitives are not targeted for now. Still, the current state is quite usable: we pass most of the MPICH coverage tests.

The full list of functions that are not yet implemented is documented in the file include/smpi/smpi.h of the archive, between two lines containing the FIXME marker. If you really need a missing feature, please get in touch with us: we can guide you through the SimGrid code to help you implement it, and we would be glad to integrate it into the main project afterward if you contribute it back.
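For instance, assuming a standard grep is available, the following command locates these markers in the archive:

grep -n FIXME include/smpi/smpi.h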

Issues with the globals

Concerning the globals, the problem comes from the fact that MPI processes usually run as separate UNIX processes, whereas in SMPI they are all folded into threads of a single system process. Global variables that are normally private to each MPI process thus become shared between all simulated processes. This is rather problematic, and it currently forces you to modify your application to privatize its global variables.
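As a purely illustrative sketch of the problem (this is not SMPI-specific code, just a pattern that breaks under it), consider a counter kept in a global variable:

/* In a regular MPI run, each rank owns a private copy of this variable.
 * Under SMPI, all ranks are threads of a single process, so they all
 * increment the very same counter, which changes the program's behavior. */
static int iterations_done = 0;

void one_step(void) {
  iterations_done++;  /* shared (and racy) across all simulated ranks */
}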

We tried several techniques to work around this. We used to have a script that automatically privatized the globals through static analysis of the source code, but it was not robust enough for production use. This issue, as well as several potential solutions, is discussed in the article "Automatic Handling of Global Variables for Multi-threaded MPI Programs", available at http://charm.cs.illinois.edu/newPapers/11-23/paper.pdf (note that this article does not deal with SMPI but with a concurrent solution called AMPI, which suffers from the same issue).

Currently, we have no solution to offer, because all the proposed solutions would alter the performance of your application (in the computational sections). Sacrificing realism for usability is not very satisfying, so we have not implemented them yet. You will thus have to modify your application if it uses global variables. We are working on another solution, leveraging distributed simulation to keep each MPI process in a separate system process, but it is far from ready at the moment.

Compiling your code

Compiling your code is very simply done with the smpicc script. If you have ever compiled MPI code before, you already know how to use it. If not, you should first try to get your MPI code running on top of a real MPI implementation before giving SMPI a spin. Actually, it is very simple even if this is your first MPI code: just use smpicc as your compiler (in place of gcc or your usual compiler), and you're set.
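As a minimal sketch (file and program names are only illustrative), a classical MPI hello world needs no change:

/* hello.c -- plain MPI code, nothing SMPI-specific in the source. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
  int rank, size;
  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);
  printf("Hello from rank %d out of %d\n", rank, size);
  MPI_Finalize();
  return 0;
}

It is then compiled with smpicc in place of your usual MPI compiler:

smpicc hello.c -o hello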

Executing your code on top of the simulator

This is done through the smpirun script, as follows. my_hostfile.txt is a classical MPI hostfile (that is, a file listing the machines on which the processes must be dispatched, one per line). my_platform.xml is a classical SimGrid platform file; of course, the hosts of the hostfile must exist in the provided platform. ./program is the MPI program that you want to simulate (it must have been compiled with smpicc), while -arg is a command-line parameter passed to this program.

smpirun -hostfile my_hostfile.txt -platform my_platform.xml ./program -arg
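For reference, my_hostfile.txt could simply contain the following (the host names are illustrative and must match hosts declared in my_platform.xml):

host1
host2
host3
host4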

smpirun accepts other parameters, such as -np if you don't want to use all the hosts defined in the hostfile, -map to display on which host each rank gets mapped, or -trace to activate tracing during the simulation. You can get the full list by running

smpirun -help

Adapting your MPI code to the use of SMPI

As detailed in the reference article (available at http://hal.inria.fr/inria-00527150), you may want to adapt your code to improve the simulation performance. But these tricks may seriously hinder the quality of your results (or even prevent the application from running) if used wrongly. We assume that if you want to simulate an HPC application, you know what you are doing. Don't prove us wrong!

Reducing your memory footprint

If you run short on memory (the whole application is executed on a single node when simulated), have a look at the SMPI_SHARED_MALLOC and SMPI_SHARED_FREE macros. They allow you to share memory areas between processes: the same malloc line in every process will point to the exact same memory area. So if you have a malloc of 2M and 16 processes, these macros reduce your memory consumption from 2M*16 to 2M: a single block for all processes.

If your program is fine with a block containing garbage values, because all processes write and read to the same place without any kind of coordination, then this macro can dramatically shrink your memory consumption. For example, this is very beneficial to a matrix multiplication code, as all blocks will be stored in the same area. Of course, the resulting computations will be useless, but you can still study the application behavior this way.

Naturally, this won't work if your code is data-dependent. For example, a Jacobi iterative computation depends on the values it computes to detect convergence, so turning them into garbage by sharing the same memory area between processes is not very wise. You cannot use the SMPI_SHARED_MALLOC macro in this case, sorry.

This feature is demoed by the example file examples/smpi/NAS/DT-folding/dt.c
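As a minimal sketch of the usage pattern (the buffer size and variable names are made up; see include/smpi/smpi.h for the exact macro definitions, assumed here to take the same arguments as malloc and free):

#include <smpi/smpi.h>   /* declares the SMPI_SHARED_* macros */

void compute(void) {
  /* Every rank executes this allocation, but the macro maps it to a
   * single block shared by all ranks instead of one block per rank. */
  double *block = (double*) SMPI_SHARED_MALLOC(2 * 1024 * 1024);

  /* ... computation reading and writing 'block'; its content is garbage
   * shared by all ranks, which is fine if the values do not matter ... */

  SMPI_SHARED_FREE(block);
}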

Toward faster simulations

If your application is too slow to simulate, try using SMPI_SAMPLE_LOCAL, SMPI_SAMPLE_GLOBAL and friends to indicate which computation loops can be sampled. Some of the loop iterations will be executed to measure their duration, and this duration will then be used for the subsequent iterations. These samples are taken per process with SMPI_SAMPLE_LOCAL, and shared between all processes with SMPI_SAMPLE_GLOBAL. Of course, none of this will work if the execution times of your loop iterations are not stable.

This feature is demoed by the example file examples/smpi/NAS/EP-sampling/ep.c
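As a rough sketch (the exact macro arguments may differ between SimGrid versions; check include/smpi/smpi.h and the EP example above for the real signature, and heavy_computation is a hypothetical kernel), the intended pattern is to wrap the body of an ordinary loop:

for (int i = 0; i < iterations; i++) {
  /* Assumed arguments: how many iterations to really execute, and a
   * precision threshold. Once enough samples are gathered, the remaining
   * bodies are skipped and only accounted for in simulated time. */
  SMPI_SAMPLE_LOCAL(10, 0.05) {
    heavy_computation(i);   /* hypothetical compute kernel */
  }
}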

Simulating collective operations

MPI collective operations can be implemented very differently from one library to another. Indeed, all existing libraries implement several algorithms for each collective operation, and by default select at runtime which one to use for the current call, depending on the sizes sent, the number of nodes, the communicator, or the underlying communication library. These decisions are based on empirical results and theoretical complexity estimations, but they can sometimes be suboptimal. Manual selection is possible in these cases, letting the user tune the library and pick a better collective when the default one is not good enough.

SMPI tries to apply the same logic, regrouping algorithms from the OpenMPI and MPICH libraries, as well as from STAR-MPI. This collection of more than a hundred algorithms allows a simple and effective comparison of their behavior and performance, making SMPI a tool of choice for the development of such algorithms.

Tracing of internal communications

For each collective, the default tracing only outputs global data; internal communication operations are not traced, to avoid making the trace too large. To debug and compare algorithms, this can be changed with the tracing/smpi/internals configuration item, whose default value is 0. As an illustration, this makes it possible to compare two alltoall collective runs on 16 nodes, one with a ring algorithm and the other with a pairwise one.
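With smpirun, this could be activated with something like:

smpirun -trace --cfg=tracing/smpi/internals:1 -hostfile my_hostfile.txt -platform my_platform.xml ./program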


Selectors

The default selection logic of OpenMPI (version 1.7) and MPICH (version 3.0.4) has been replicated and can be used by setting the smpi/coll_selector configuration item to either ompi or mpich. The code and details for each selector can be found in the src/smpi/colls/smpi_(openmpi/mpich)_selector.c files. As this is still in development, we do not guarantee that all algorithms are correctly replicated or that they will behave exactly as the real ones. If you notice a difference, please contact the SimGrid developers mailing list.
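For instance, to simulate using the OpenMPI selection logic, one could run something like:

smpirun --cfg=smpi/coll_selector:ompi -hostfile my_hostfile.txt -platform my_platform.xml ./program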

The default selector uses the legacy algorithms shipped with SimGrid versions prior to 3.10. They should not be used for performance studies and may be removed in the future, with a different selector becoming the default.

Available algorithms

For each of the collectives listed below, several versions are available, coming from the STAR-MPI, MPICH or OpenMPI implementations. Details can be found in the code, or in the STAR-MPI paper for the STAR-MPI algorithms.

Each collective can be selected using the corresponding configuration item. For example, to use the pairwise alltoall algorithm, one should add --cfg=smpi/alltoall:pair to the command line. This overrides the selector (for this collective only) if one is provided, allowing better flexibility.
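Put together with a selector, this could look like the following, where the alltoall item overrides the MPICH selector for that collective only:

smpirun --cfg=smpi/coll_selector:mpich --cfg=smpi/alltoall:pair -hostfile my_hostfile.txt -platform my_platform.xml ./program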

Warning: some collectives may require specific conditions to be executed correctly (for instance, a communicator with a power-of-two number of nodes), which are currently not enforced by SimGrid. Crashes can be expected when trying these algorithms with unusual sizes or parameters.

Algorithm variants are provided for the following collectives; the versions available for each of them can be found in the code, as noted above:

MPI_Alltoall (most of these are best described in STAR-MPI)
MPI_Alltoallv
MPI_Gather
MPI_Barrier
MPI_Scatter
MPI_Reduce
MPI_Allreduce
MPI_Reduce_scatter
MPI_Allgather
MPI_Allgatherv
MPI_Bcast

Automatic evaluation

(Warning: this is experimental and may be removed or crash easily.)

An automatic version is available for each collective (or even as a selector). This version loops over all the other implemented algorithms for that particular collective and applies them, benchmarking the time taken by each process. It then outputs the quickest algorithm for each process, as well as the globally quickest one. This is still unstable, and a few algorithms that require a specific number of nodes may crash.

Add an algorithm

To add a new algorithm, one should check how the other algorithms are coded in the src/smpi/colls folder. Plain MPI code cannot be used inside SimGrid, so algorithms have to be changed to use the smpi versions of the calls instead (MPI_Send becomes smpi_mpi_send). Some functions may have different signatures than their MPI counterparts; please check the other algorithms or contact us via the SimGrid developers mailing list.

Example: adding a "pair" version of the Alltoall collective.
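As a purely illustrative sketch (this is not the actual SimGrid source: the function signature and the helpers smpi_comm_rank, smpi_comm_size, smpi_datatype_get_extent and smpi_mpi_sendrecv are assumptions that should be checked against the existing files in src/smpi/colls), a pairwise exchange could look roughly like this:

/* alltoall-pair.c -- illustrative sketch only.
 * Pairwise exchange: at step i, rank r trades its block with rank (r XOR i),
 * which assumes a power-of-two number of processes. */
int smpi_coll_tuned_alltoall_pair(void *sendbuf, int sendcount, MPI_Datatype sendtype,
                                  void *recvbuf, int recvcount, MPI_Datatype recvtype,
                                  MPI_Comm comm)
{
  int rank = smpi_comm_rank(comm);                        /* assumed helper */
  int size = smpi_comm_size(comm);                        /* assumed helper */
  MPI_Aint sendext = smpi_datatype_get_extent(sendtype);  /* assumed helper */
  MPI_Aint recvext = smpi_datatype_get_extent(recvtype);  /* assumed helper */

  for (int step = 0; step < size; step++) {
    int peer = rank ^ step;  /* partner for this step */
    smpi_mpi_sendrecv((char *)sendbuf + peer * sendcount * sendext, sendcount, sendtype, peer, 0,
                      (char *)recvbuf + peer * recvcount * recvext, recvcount, recvtype, peer, 0,
                      comm, MPI_STATUS_IGNORE);
  }
  return MPI_SUCCESS;
}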