Pedro Ceyrat

Técnico Lisboa
With the increasing effort to push supercomputers to new levels of performance, it is crucial to investigate which programming models are most effective for High-Performance Computing (HPC). Today, the most widely used frameworks in HPC are MPI, which follows a message-passing paradigm, and OpenMP, a shared-memory framework. Over the years, alternative frameworks have emerged, such as GASPI and OmpSs. GASPI is an implementation of the partitioned global address space, an approach to the distributed shared-memory paradigm that uses one-sided communication. OmpSs works on shared memory using tasks with data dependencies. The goal of this thesis is to compare these programming models. To do so, we started from ZPIC, a sequential, open-source, simplified version of OSIRIS, a particle-in-cell code in the plasma simulation field. From ZPIC, we built different implementations using the emerging programming models mentioned above. The different versions were experimentally evaluated with three real test cases that exercise different capabilities of ZPIC. The experiments were run on the MareNostrum supercomputer, using up to 12,288 cores. The results show that the strategy implemented with OmpSs achieved better results than the OpenMP versions, and that GASPI shows promising results.