Speeding Up Synchronization and Reducing Communication for Exascale

The increasing size of simulations in many scientific domains requires ever larger resources to keep computation times feasible. While such resources are provided by clusters and supercomputers, in particular for the upcoming Exascale era, scientists are in charge of partitioning their problems, either in terms of data or of the model, and distributing them across multiple computing nodes. This process is supported by parallel programming models such as those developed and enhanced in the EPEEC project.

Scientific problems are often solved iteratively: an initial guess converges towards an acceptable solution, and each iteration typically requires synchronization and exchange of information among the participating computing nodes. Operations in which all computing nodes take part in such an exchange are called collective operations. One example of a synchronization collective is the barrier: every process must stop at this point and cannot proceed until all other processes have reached it. Another is the allreduce collective, which performs a reduction (e.g. a summation) over partial results distributed across the processes and sends the accumulated result back to every process; it is commonly used to compute convergence criteria such as the residual.
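To make this concrete, the sketch below shows how such a convergence check might look with the GASPI API as provided by GPI-2 (header GASPI.h). It is only an illustration: return-code checking is omitted for brevity, and compute_local_residual_sq is a hypothetical placeholder for the application-specific part of the computation.

#include <GASPI.h>
#include <math.h>
#include <stdlib.h>

/* Hypothetical placeholder: squared residual of the locally owned
 * portion of the data (dummy value here, just for illustration). */
static double compute_local_residual_sq(void)
{
  return (double)rand() / RAND_MAX;
}

int main(void)
{
  gaspi_proc_init(GASPI_BLOCK);               /* join the GASPI job */

  double local_sq, global_sq;
  int converged = 0;

  for (int iter = 0; iter < 100 && !converged; ++iter) {
    /* ... update the locally owned part of the solution ... */

    local_sq = compute_local_residual_sq();

    /* Allreduce: sum the partial residuals of all processes and
     * return the accumulated value to every process. */
    gaspi_allreduce(&local_sq, &global_sq, 1,
                    GASPI_OP_SUM, GASPI_TYPE_DOUBLE,
                    GASPI_GROUP_ALL, GASPI_BLOCK);

    converged = sqrt(global_sq) < 1e-8;

    /* Barrier: no process starts the next iteration before all
     * processes have reached this point (shown for illustration;
     * the allreduce above already synchronizes the group). */
    gaspi_barrier(GASPI_GROUP_ALL, GASPI_BLOCK);
  }

  gaspi_proc_term(GASPI_BLOCK);
  return EXIT_SUCCESS;
}

After the allreduce, every process holds the same global residual and therefore takes the same convergence decision.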

In EPEEC, we aim to extend and enhance the current set of collectives of the GASPI programming model and its implementation GPI, as well as to provide customized collectives for machine and deep learning codes developed within the project.
 

Figure 1. Allreduce exchange of model updates. Source: A. Kadav, E. Kruus, C. Ungureanu, "Distributed data-parallelism for existing ML applications," EuroSys '15.

Figure 2. Eventually consistent data.

One scenario is to build more powerful, scalable collectives on top of GASPI using eventually consistent data types (where the data on each process is not necessarily identical), thereby dropping consistent views on global properties. Another scenario is to provide blocking and non-blocking collectives that overlap often-heavy computation with communication, and to significantly reduce the amount of data communicated; a sketch of such overlap is shown below. Together, (1) highly scalable constructs, (2) customized collective operations that go beyond the standards, and (3) the Comprex compression library will greatly help application developers target Exascale platforms efficiently and increase their coding productivity.
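As an illustration of the overlap idea, the sketch below assumes GASPI's time-based blocking semantics for collectives: a call with the GASPI_TEST timeout returns immediately, and if it reports GASPI_TIMEOUT the operation is not yet complete and the call is repeated with identical arguments, leaving room for independent computation in between. do_independent_work is a hypothetical placeholder, and error handling is omitted.

#include <GASPI.h>
#include <stdlib.h>

/* Hypothetical placeholder: computation that does not depend on the
 * allreduce result and can therefore run while it is in flight. */
static void do_independent_work(void) { /* ... */ }

int main(void)
{
  gaspi_proc_init(GASPI_BLOCK);

  /* Each process contributes 1.0, so the reduced value should
   * equal the number of processes. */
  double local_sum = 1.0, global_sum = 0.0;
  gaspi_return_t ret;

  /* With the GASPI_TEST timeout the call returns immediately,
   * either with GASPI_SUCCESS or with GASPI_TIMEOUT. */
  do {
    ret = gaspi_allreduce(&local_sum, &global_sum, 1,
                          GASPI_OP_SUM, GASPI_TYPE_DOUBLE,
                          GASPI_GROUP_ALL, GASPI_TEST);

    if (ret == GASPI_TIMEOUT)
      do_independent_work();   /* overlap computation with communication */
  } while (ret == GASPI_TIMEOUT);

  /* global_sum now holds the reduced value on every process. */

  gaspi_proc_term(GASPI_BLOCK);
  return EXIT_SUCCESS;
}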

A first prototype of the (eventually consistent) collectives library and the compression library can be found in our GitHub repository.