I am an economist and computer scientist currently studying for a PhD in Trustworthy Artificial Intelligence (AI) at Delft University of Technology. My research sits at the intersection of AI and Financial Economics. In particular, I am interested in Explainable AI, Counterfactual Explanations, Bayesian ML, and Causal Inference, and their applications to Financial Economics.
Previously, I worked as an economist at the Bank of England, where I was involved in research, monetary policy briefings, and market intelligence. I hold bachelor's and master's degrees in Economics, Finance, and Data Science.
Treating deep neural networks probabilistically comes with numerous advantages, including improved robustness and greater interpretability. These factors are key to building artificial intelligence (AI) that is trustworthy. A drawback commonly associated with existing Bayesian methods is that they increase computational costs. Recent work has shown that Bayesian deep learning can be effortless through Laplace approximation. This talk presents an implementation in Julia.
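To illustrate the core idea behind the Laplace approximation, here is a minimal, self-contained sketch for a one-parameter logistic regression. This is purely illustrative and not the API of the Julia package presented in the talk; the toy data, the prior precision `λ`, and the function names are assumptions made for the example.

```julia
σ(z) = 1 / (1 + exp(-z))

# Fit the MAP estimate of a single logistic-regression weight w under a
# Gaussian prior with precision λ, then return the Laplace approximation
# of the posterior: a Gaussian centred at w_map with variance 1 / H(w_map).
function laplace_fit(x, y; λ = 1.0, steps = 25)
    # Gradient and Hessian of the negative log-joint in w
    grad(w) = -sum((y .- σ.(w .* x)) .* x) + λ * w
    hess(w) = sum(σ.(w .* x) .* (1 .- σ.(w .* x)) .* x .^ 2) + λ
    w = 0.0
    for _ in 1:steps            # Newton iterations towards the MAP estimate
        w -= grad(w) / hess(w)
    end
    return w, 1 / hess(w)       # (posterior mean, posterior variance)
end

x = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]   # toy inputs (hypothetical)
y = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]      # toy binary labels
w_map, var_post = laplace_fit(x, y)
```

The appeal of the approach is visible even in this sketch: the only work beyond ordinary (MAP) training is one Hessian evaluation at the optimum, which is what makes Bayesian uncertainty estimates comparatively cheap.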
This session hosts all of this year's experience talks.
Speakers: Julia Frank, Agustin Covarrubias, Valeria Perez, Saranjeet Kaur Bhogal, Marina Cagliari, Patrick Altmeyer, Garrek Stemo, Jeremiah Lasquety-Reyes, Dr. Vikas Negi, Martin Smit, Fábio Rodrigues Sodré, Arturo Erdely, Olga Eleftherakou, Charlie Kawczynski
CounterfactualExplanations.jl: a package for explaining black-box models through counterfactuals. Counterfactual explanations are based on the simple idea of strategically perturbing model inputs to change model predictions. Our package is novel, easy to use, and extensible. It can be used to explain custom predictive models, including models developed and trained in other programming languages.
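The "strategically perturbing inputs" idea can be sketched in a few lines of plain Julia. This is a toy one-dimensional example, not the package's API: the black-box `predict` function, the learning rate `η`, and the stopping threshold are all hypothetical choices made for illustration.

```julia
σ(z) = 1 / (1 + exp(-z))

# A fixed "black-box" scoring function: higher output ⇒ class 1.
# (Hypothetical toy model standing in for any trained classifier.)
predict(x) = σ(2.0 * x - 1.0)

# Gradient-based counterfactual search: nudge the input x until the
# model's predicted probability for the target class is close enough.
function counterfactual(x; target = 1.0, η = 0.1, steps = 200)
    for _ in 1:steps
        p = predict(x)
        abs(target - p) < 0.1 && break
        # Gradient of the cross-entropy loss w.r.t. x for this toy model:
        # d(-log p)/dx = -(1 - p) * 2.0 when target = 1
        g = -(target - p) * 2.0
        x -= η * g
    end
    return x
end

x₀ = -1.0               # factual input, classified as class 0
x′ = counterfactual(x₀) # perturbed input, now classified as class 1
```

In practice the loss would also penalise the distance between `x′` and `x₀` so that the counterfactual stays close to the original instance; that term is omitted here to keep the sketch minimal.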