Ivan Utkin

Ivan Utkin's scientific record revolves around numerical modelling of natural processes in the geosciences, with a strong emphasis on resolving coupled multi-physics interactions. It includes studies of fluid flow in porous rocks: compaction-driven flow focusing, ground displacement caused by the elastic response of rocks to fluid pressure, and the influence of chemical interactions between fluids and rocks on the flow dynamics and on the chemical composition observed in rocks. A particular focus of Utkin's research is the development of numerical techniques that resolve these processes at high resolution on massively parallel computing architectures such as graphics processing units (GPUs).

Talks:

13:00 UTC

Differentiable modelling on GPUs

07/25/2023, 1:00 PM – 4:00 PM UTC
32-123

Why wait hours for computations to complete when they could take only a few seconds? Tired of prototyping code in an interactive, high-level language, only to rewrite it in a lower-level language to get high performance? Unsure about the applicability of differentiable programming? Or simply curious about how parallel and GPU computing and automatic differentiation are becoming game changers in physics-based and data-driven modelling?
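The core idea behind differentiable programming — that a numerical program can propagate derivatives alongside values — can be illustrated with a toy forward-mode automatic differentiation sketch using dual numbers. This is an illustration only, assuming a pure-Python operator-overloading approach; the tools discussed in the talk (e.g. Enzyme) instead differentiate compiled code directly:

```python
class Dual:
    """Dual number a + b*eps with eps**2 = 0: carries a value and its derivative."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate f and df/dx at x in a single forward pass."""
    return f(Dual(x, 1.0)).der

# d/dx (3*x*x + 2*x) = 6*x + 2, so at x = 2.0 this is 14.0
print(derivative(lambda x: 3 * x * x + 2 * x, 2.0))  # prints 14.0
```

The same mechanism generalises to arrays and whole simulations, which is what makes gradients of physics-based models cheap enough for inversion and training.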

15:30 UTC

Massively parallel inverse modelling on GPUs with Enzyme

07/28/2023, 3:30 PM – 4:00 PM UTC
26-100

We present an efficient and scalable approach to inverse PDE-based modelling with the adjoint method. We use automatic differentiation (AD) with Enzyme to automatically generate the building blocks for the inverse solver. We utilize the efficient pseudo-transient iterative method to achieve performance close to the hardware limit for both the forward and adjoint problems. We demonstrate close-to-optimal parallel efficiency on GPUs in a series of benchmarks.
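The adjoint method named in the abstract computes the gradient of an objective with respect to model parameters at the cost of one extra linear solve, independent of the number of parameters. A minimal NumPy sketch for a dense linear "PDE" A(θ)u = b, assuming an objective J(u) = ½‖u‖² and a stand-in Laplacian operator (the talk's actual solvers are matrix-free, GPU-resident, and differentiated by Enzyme):

```python
import numpy as np

def laplacian(n):
    """1-D Laplacian stencil matrix (Dirichlet BCs), a stand-in PDE operator."""
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def forward(theta, n=8):
    """Forward problem: (K + theta*I) u = b."""
    A = laplacian(n) + theta * np.eye(n)
    u = np.linalg.solve(A, np.ones(n))
    return A, u

def grad_adjoint(theta, n=8):
    """Gradient of J(u) = 0.5*||u||^2 w.r.t. theta via a single adjoint solve."""
    A, u = forward(theta, n)
    lam = np.linalg.solve(A.T, u)   # adjoint solve: A^T lam = dJ/du = u
    dA_dtheta = np.eye(n)           # sensitivity of the operator to theta
    return -lam @ (dA_dtheta @ u)   # dJ/dtheta = -lam^T (dA/dtheta) u

# Sanity check against central finite differences
theta, h = 0.5, 1e-6
J = lambda t: 0.5 * np.sum(forward(t)[1] ** 2)
fd = (J(theta + h) - J(theta - h)) / (2 * h)
print(grad_adjoint(theta), fd)  # the two estimates agree closely
```

In practice, AD tools such as Enzyme generate the transposed (adjoint) operator applications automatically, so the hand-derived `lam` solve above never has to be written by hand.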

18:00 UTC

Scalable 3-D PDE Solvers Tackling Hardware Limit

07/28/2023, 6:00 PM – 6:30 PM UTC
32-124

We present an efficient approach for the development of 3-D partial differential equation solvers that are able to tackle the hardware limit of modern GPUs and scale to the world's largest supercomputers. The approach relies on the accelerated pseudo-transient method and on the automatic generation of computation kernels with optimized on-chip memory usage. We report performance and scaling results on LUMI and Piz Daint, an AMD-GPU and an NVIDIA-GPU supercomputer, respectively.
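The accelerated pseudo-transient method mentioned above reaches a steady-state PDE solution by integrating fictitious damped wave dynamics in pseudo-time, so each iteration is a cheap, stencil-local update that maps well to GPUs. A 1-D NumPy toy for u'' = -1 on (0, 1) with u(0) = u(1) = 0; the pseudo-time step and damping parameter below are illustrative choices for this problem, not the tuned values from the talk:

```python
import numpy as np

def solve_poisson_pt(n=101, tol=1e-8, max_iter=100_000):
    """Steady state of u'' = -1 on (0,1), u(0) = u(1) = 0, reached by
    accelerated pseudo-transient iteration (damped wave dynamics)."""
    dx = 1.0 / (n - 1)
    dt = 0.9 * dx          # pseudo-time step from a wave-like CFL condition
    nu = 2.0 * np.pi       # damping roughly tuned to the lowest mode
    u = np.zeros(n)
    v = np.zeros(n - 2)    # pseudo-velocity: carries memory between iterations
    for it in range(max_iter):
        # residual of u'' + 1 = 0 at interior points
        r = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2 + 1.0
        v = (1.0 - nu * dt) * v + dt * r   # damped velocity update
        u[1:-1] += dt * v                  # advance in pseudo-time
        if np.max(np.abs(r)) < tol:
            break
    return u, it

u, iters = solve_poisson_pt()
x = np.linspace(0.0, 1.0, u.size)
exact = 0.5 * x * (1.0 - x)   # analytical solution for comparison
print(iters, np.max(np.abs(u - exact)))
```

Without the velocity term the scheme degenerates to plain explicit diffusion, which needs O(n²) iterations; the damped-wave acceleration cuts this to O(n), which is what makes the method competitive at the hardware limit.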
