Abstract

A typical large‐scale CFD code based on adaptive, edge‐based finite‐element formulations for the solution of compressible and incompressible flow is taken as a test bed for porting such codes to graphics hardware (graphics processing units, GPUs) using semi‐automatic techniques. In previous work, a GPU version of this code was presented in which, for many run configurations, all mesh‐sized loops required throughout time stepping were ported. This approach simultaneously achieves the fine‐grained parallelism required to fully exploit many‐core GPUs, avoids the crippling bottleneck of GPU–CPU data transfer, and uses a transposed memory layout to meet the distinct memory access requirements of GPUs. The present work describes the next step of this porting effort: integrating GPU‐based, fine‐grained parallelism with Message‐Passing‐Interface‐based, coarse‐grained parallelism to obtain a code capable of running on multi‐GPU clusters. This is carried out in a semi‐automated fashion: the existing Fortran–Message Passing Interface code is preserved, with the translator inserting data transfer calls as required. Benchmarks indicate a performance advantage of up to a factor of 2 for the NVIDIA Tesla M2050 GPU (Santa Clara, CA, USA) over the six‐core Intel Xeon X5670 CPU (Santa Clara, CA, USA) for certain run configurations. In addition, good scalability is observed when running across multiple GPUs. The approach should be of general interest, as the question of how best to run on GPUs is presently being considered for many so‐called legacy codes.
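
The "transposed memory layout" mentioned above refers to storing nodal unknowns as a structure of arrays rather than an array of structures, so that consecutive GPU threads access consecutive addresses. The following CUDA kernel is a minimal sketch of this idea, not code from the paper; the names (`update_unknowns`, `unk`, `rhs`, `npoin`, `nvar`) are illustrative assumptions.

```cuda
#include <cuda_runtime.h>

// Hypothetical nodal update kernel illustrating the transposed
// (structure-of-arrays) layout: unknowns are stored as
// unk[ivar*npoin + ipoin] rather than unk[ipoin*nvar + ivar],
// so that consecutive threads (consecutive mesh points) touch
// consecutive addresses and the loads/stores coalesce.
__global__ void update_unknowns(float* unk, const float* rhs,
                                float dt, int npoin, int nvar)
{
    int ipoin = blockIdx.x * blockDim.x + threadIdx.x;
    if (ipoin >= npoin) return;
    for (int ivar = 0; ivar < nvar; ++ivar) {
        // Coalesced: threads ipoin, ipoin+1, ... access adjacent words.
        unk[ivar * npoin + ipoin] += dt * rhs[ivar * npoin + ipoin];
    }
}
```

With the conventional per-node layout, each thread would stride through memory by `nvar` elements, fragmenting the memory transactions; the transposed layout is what lets a mesh-sized loop saturate GPU memory bandwidth.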
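The integration of GPU parallelism with the existing MPI layer hinges on inserting GPU–CPU transfer calls around the pre-existing message-passing calls. The sketch below shows, under stated assumptions, what such an inserted halo exchange could look like; all names (`halo_exchange`, the buffer arguments, `peer`) are hypothetical and do not come from the paper, which works on Fortran–MPI code rather than C.

```cuda
#include <mpi.h>
#include <cuda_runtime.h>

// Hypothetical sketch of the kind of transfer calls a translator
// might insert around an existing MPI halo exchange: copy interface
// values from device to host, exchange via MPI, copy the received
// values back to the device.
void halo_exchange(float* d_sendbuf, float* d_recvbuf,
                   float* h_sendbuf, float* h_recvbuf,
                   int nsend, int nrecv, int peer, MPI_Comm comm)
{
    // (A device kernel would first pack interface unknowns
    //  into d_sendbuf; omitted here.)

    // Inserted transfer: device -> host before the MPI call.
    cudaMemcpy(h_sendbuf, d_sendbuf, nsend * sizeof(float),
               cudaMemcpyDeviceToHost);

    // The pre-existing MPI exchange is preserved as-is.
    MPI_Sendrecv(h_sendbuf, nsend, MPI_FLOAT, peer, 0,
                 h_recvbuf, nrecv, MPI_FLOAT, peer, 0,
                 comm, MPI_STATUS_IGNORE);

    // Inserted transfer: host -> device after the MPI call.
    cudaMemcpy(d_recvbuf, h_recvbuf, nrecv * sizeof(float),
               cudaMemcpyHostToDevice);

    // (A device kernel would then scatter d_recvbuf back into
    //  the unknowns array; omitted here.)
}
```

Because only the narrow interface (halo) data crosses the GPU–CPU boundary, the bulk of the solution data stays resident on the device, which is consistent with the paper's goal of avoiding GPU–CPU transfer as a bottleneck.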

Document information

Published on 01/01/2011

DOI: 10.1002/fld.2664
Licence: CC BY-NC-SA
