Ludovic Räss

GPU computing and geo-HPC. Researcher at ETH Zurich, Switzerland.

Talks:

13:00 UTC

Differentiable modelling on GPUs

07/25/2023, 1:00 PM — 4:00 PM UTC
32-123

Why wait hours for computations to complete when they could take only a few seconds? Tired of prototyping code in an interactive, high-level language, only to rewrite it in a lower-level language to get high performance? Unsure about the applicability of differentiable programming? Or simply curious about how parallel and GPU computing and automatic differentiation are game changers in physics-based and data-driven modelling?
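
As a taste of the workshop's premise, here is a minimal, hypothetical sketch (not workshop material; the `step!` helper, array size, and parameter values are illustrative assumptions): a stencil update written once in high-level Julia array syntax runs unchanged on a GPU via CUDA.jl.

```julia
using CUDA  # assumes an NVIDIA GPU is available

# one explicit diffusion step, written once in high-level array syntax
@views function step!(C2, C, D, dt, dx)
    C2[2:end-1] .= C[2:end-1] .+ dt * D / dx^2 .*
                   (C[1:end-2] .- 2 .* C[2:end-1] .+ C[3:end])
    return nothing
end

C  = CUDA.rand(Float64, 2^20)  # swap for `rand(2^20)` to run on the CPU
C2 = copy(C)
step!(C2, C, 1.0, 1e-6, 1e-3)  # broadcasts fuse into a single GPU kernel
```

The same function body serves as both the interactive CPU prototype and the GPU production code, which illustrates the prototype-once idea the abstract alludes to.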

14:30 UTC

Julia for High-Performance Computing

07/27/2023, 2:30 PM — 3:30 PM UTC
32-G449 (Kiva)

The Julia for HPC minisymposium gathers current and prospective Julia practitioners from various disciplines in the field of high-performance computing (HPC). Each year, we invite participation from science, industry, and government institutions interested in Julia’s capabilities for supercomputing. Our goal is to provide a venue for showcasing the state of the art, sharing best practices, discussing current limitations, and identifying future developments in the Julia HPC community.

15:30 UTC

Massively parallel inverse modelling on GPUs with Enzyme

07/28/2023, 3:30 PM — 4:00 PM UTC
26-100

We present an efficient and scalable approach to inverse PDE-based modelling with the adjoint method. We use automatic differentiation (AD) with Enzyme to automatically generate the building blocks for the inverse solver. We utilize the efficient pseudo-transient iterative method to achieve performance that is close to the hardware limit for both the forward and adjoint problems. We demonstrate close-to-optimal parallel efficiency on GPUs in a series of benchmarks.
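
As an illustration of the kind of building block Enzyme can generate, here is a minimal, hypothetical sketch (the `residual!`, `dR`, and `dC` names are ours, not from the talk): reverse-mode AD through a mutating residual evaluation yields the adjoint product (∂R/∂C)ᵀ R̄ without hand-deriving the transposed operator.

```julia
using Enzyme

# residual of steady 1-D diffusion, R = d2C/dx2 + f (interior points only)
function residual!(R, C, f, dx)
    for i in 2:length(C)-1
        R[i] = (C[i-1] - 2C[i] + C[i+1]) / dx^2 + f[i]
    end
    return nothing
end

n  = 32; dx = 1.0 / (n - 1)
C  = rand(n); f = ones(n); R = zeros(n)

dR = ones(n)   # incoming adjoint of the residual (e.g. from a misfit)
dC = zeros(n)  # accumulates (dR/dC)' * dR, the adjoint building block

# Enzyme.jl call pattern for mutating functions; exact signatures may
# vary across Enzyme versions
autodiff(Reverse, residual!, Const,
         Duplicated(R, dR), Duplicated(C, dC), Const(f), Const(dx))
```

Chaining such AD-generated adjoint products with the pseudo-transient iteration is, in outline, how a gradient-based inverse solver can be assembled without writing the adjoint PDE by hand.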

18:00 UTC

Scalable 3-D PDE Solvers Tackling Hardware Limit

07/28/2023, 6:00 PM — 6:30 PM UTC
32-124

We present an efficient approach for developing 3-D partial differential equation solvers that tackle the hardware limit of modern GPUs and scale to the world's largest supercomputers. The approach relies on the accelerated pseudo-transient method and on the automatic generation of computation kernels with optimized on-chip memory usage. We report performance and scaling results on LUMI and Piz Daint, an AMD GPU and an NVIDIA GPU supercomputer, respectively.
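
For context, the accelerated pseudo-transient method augments a steady PDE with damped pseudo-time derivatives and iterates the resulting local updates to convergence. A 1-D CPU sketch of the idea (problem size, physics values, and iteration caps are illustrative assumptions; the talk targets 3-D GPU kernels):

```julia
# accelerated pseudo-transient solve of steady 1-D diffusion d/dx(D dC/dx) = 0
function pt_diffusion_1D(; lx=20.0, D=1.0, nx=200, ϵtol=1e-8)
    dx  = lx / nx
    xc  = LinRange(dx/2, lx - dx/2, nx)
    re  = 2π                   # near-optimal iteration parameter for diffusion
    ρ   = (lx / (D * re))^2    # pseudo-density setting the damped dynamics
    dτ  = dx / sqrt(1 / ρ)     # pseudo-time step
    C   = @. 1.0 + exp(-(xc - lx/4)^2) - xc / lx
    qx  = zeros(nx - 1)
    err, iter = Inf, 0
    while err >= ϵtol && iter < 100nx
        # implicit flux relaxation, then explicit mass-balance update
        qx         .-= dτ ./ (ρ * D + dτ) .* (qx .+ D .* diff(C) ./ dx)
        C[2:end-1] .-= dτ .* diff(qx) ./ dx
        iter += 1
        iter % nx == 0 && (err = maximum(abs.(diff(D .* diff(C) ./ dx) ./ dx)))
    end
    return C, iter
end
```

The updates are matrix-free and purely local, which is what lets the iteration map so well onto GPU stencil kernels and scale across many nodes.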
