Application codes and porting scenarios

Application codes are an essential component of the EPEEC project, as they help design, validate, and demonstrate the capabilities of the highly productive programming environment developed in the project. The five application codes considered in EPEEC cover a wide spectrum of applications, as summarized in Table 1.

Application name | URL | Scientific fields
AVBP | https://cerfacs.fr/avbp7x | Fluid dynamics, combustion
DIOGENeS | https://diogenes.inria.fr | Nanophotonics, plasmonics
OSIRIS | http://epp.tecnico.ulisboa.pt/osiris | Plasma physics
Quantum ESPRESSO | http://www.quantum-espresso.org | Materials science
SMURFF | https://github.com/ExaScience/smurff/ | Life sciences

Table 1. Application codes in EPEEC.


These application codes have core features that will allow a smooth design and assessment of the capabilities of the productive programming environment that the project will develop. In particular, they cover the entire range of considered compute/data patterns and programming languages, and the most common combinations of programming models as a starting point: MPI optionally complemented with OpenMP and CUDA.

Application developers have thus engaged in porting activities to the GASPI+OmpSs and OmpSs@ArgoDSM models, following two main scenarios in line with the capabilities of Parallelware technology at the start of the project. The general scenario for porting the applications to the programming models proposed in EPEEC is sketched in Figure 1. The first step is to exploit Parallelware technology to generate pragma-based code. Automatically generated OpenACC, OpenMP, and OmpSs pragmas provide an excellent starting point for domain developers not experienced with these programming models, as well as when coding from scratch or beginning to port code to the proposed programming model bundle. The high-level semantics of OpenACC/OpenMP accelerator kernel definitions, however, can lead to low performance in those cases in which the compiler is not able to generate highly optimized code. Therefore, in a second step, any kernels identified as performance bottlenecks by profiling tools will be ported to low-level accelerator programming models such as CUDA or OpenCL. This second step runs iteratively, as depicted in Figure 1, so as to progressively refine the porting and optimize performance.

Because the version of Parallelware technology available at project start cannot handle C++ and Fortran codes, two porting scenarios are followed: (1) for application codes written in C, the workflow depicted in Figure 1 is adopted from the beginning; (2) for application codes written in C++ and Fortran, the adopted scenario starts with manual porting to OmpSs and GASPI before switching to Parallelware's automatic parallelization tools.
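As an illustration of the first step, the sketch below shows the kind of directive-annotated C code that source-to-source tools in this style can emit for a simple loop (a SAXPY kernel; the function name and pragma clauses are illustrative, not actual Parallelware output). Without an OpenACC-capable compiler the pragma is simply ignored and the loop runs sequentially, which is what makes this style a convenient, portable starting point.

```c
#include <stddef.h>

/* Step 1 sketch: a SAXPY loop annotated with an OpenACC pragma, the style
 * of directive-based code that automatic parallelization tools can emit.
 * The data clauses state the movement: x is copied to the device, y is
 * copied in and back out.  Compilers without OpenACC support ignore the
 * pragma and execute the loop sequentially on the host. */
void saxpy(size_t n, float a, const float *x, float *y)
{
    #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
    for (size_t i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}
```

If profiling later flags such a kernel as a bottleneck (the second step of the workflow), the loop body would be rewritten as a CUDA or OpenCL kernel while keeping the host-side interface unchanged.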


Figure 1. High-level development methodology proposed to leverage EPEEC’s highly productive coding possibilities.