This is an inspirational event, bringing people from various fields together. We want to mix problem owners with solution providers. Throughout the day, we will schedule four main talks on the topic of uncertainty in research, intertwined with poster sessions in which we invite all participants to present their work, either from the perspective of a challenge or of a machine learning solution waiting to be tested on a real-world problem. The target group consists of PhD students and postdoctoral researchers in all engineering disciplines, data science, and (applied) mathematics, but we welcome people from all fields.
As an expert in your field, you know exactly which problems are worth solving.
You know what to solve, but not necessarily the best way to do so...
Stuck on a problem in your research?
Do you feel like you are doing repetitive steps over and over again?
Do you feel like there is probably a smarter way to do things?
As an expert in your field, you tackle multivariate distributions daily.
You just know how to efficiently reach a solution.
Do you have a great machine learning method only demonstrated on toy data?
Do you need a real-world problem to test your novel solutions?
Then this is the symposium for you! Today, we bridge the world of the problem owners and the solution providers.
Everybody is free to present a poster. We propose some non-binding guidelines: the format is A0. Posters presenting challenges should use red as their primary colour; posters presenting solutions should use blue. These posters are primarily meant to attract potential collaborators. Two toy examples are given above. You can include anything you like, but the focus should not be on tables, formulas, etc. Focus on the possibilities!
Everything will take place at Campus Middelheim
(https://www.uantwerpen.be/nl/overuantwerpen/campussen/campus-middelheim/)
in Aula M.A.143 (aula Patrice Lumumba)
08:30 - 09:00 Welcome and registration
09:00 - 09:30 Introductory presentation
Prof. dr. Rudi Penne (UA) and dr. Ivan De Boi (UA)
09:30 - 10:45 Session 1, Uncertainty Quantification in Neural Networks: From Trustworthy Decision-Making to Efficient Data Collection
drs. Arthur Thuy (UGent)
10:45 - 11:15 Coffee break and poster session 1
11:15 - 12:30 Session 2, To BayesOpt and Beyond
Prof. dr. Henry Moss (Lancaster University & University of Cambridge)
12:30 - 14:30 Lunch and poster session 2
14:30 - 15:45 Session 3, Making AI systems more trustworthy through uncertainty disentanglement
Prof. dr. Willem Waegeman (UGent)
15:45 - 16:15 Coffee break and poster session 3
16:15 - 17:30 Session 4, Approximate Bayesian Inference of Composite Functions
Prof. dr. Carl Henrik Ek (Cambridge)
17:30 - 18:30 Networking Reception
Drs. Arthur Thuy - Uncertainty Quantification in Neural Networks: From Trustworthy Decision-Making to Efficient Data Collection
Abstract: Neural networks (NNs) are increasingly prominent in engineering applications and are deployed across a wide range of data modalities, including tabular, textual, and image data. While NNs demonstrate strong predictive performance, they are also known to be confidently wrong under distribution shifts, where the test data diverge from the training distribution. In such scenarios, model performance can degrade without clear warning. Uncertainty quantification (UQ) provides a principled approach to address this issue and indicates when outputs should—or should not—be trusted. This talk explores three key roles of UQ in enhancing model robustness and usability. First, we position UQ as a tool for model explainability, connecting it to downstream decision-making through classification with rejection. This approach allows uncertain predictions to be deferred to human experts, thereby reducing the number of misclassifications. Second, we examine the role of UQ in active learning, where it guides the efficient selection of informative samples for annotation, minimizing labeling effort while preserving model accuracy. Finally, we discuss methods for obtaining high-quality UQ from efficient neural network ensembles, reducing computational costs and increasing the practicality of UQ in real-world applications.
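The classification-with-rejection idea described in the abstract can be sketched in a few lines. The snippet below is a toy illustration under our own assumptions, not the speaker's actual method: it averages the predicted class probabilities of a small ensemble and defers (rejects) any sample whose predictive entropy exceeds a threshold, so that uncertain cases can be passed to a human expert.

```python
import numpy as np

def predict_with_rejection(ensemble_probs, threshold=0.5):
    """Average ensemble member probabilities, then defer (reject)
    predictions whose predictive entropy exceeds `threshold`.

    ensemble_probs: array of shape (members, samples, classes).
    Returns (predictions, reject_mask); rejected samples get label -1.
    """
    mean_probs = ensemble_probs.mean(axis=0)  # (samples, classes)
    entropy = -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=1)
    preds = mean_probs.argmax(axis=1)
    reject = entropy > threshold
    preds[reject] = -1  # defer these samples to a human expert
    return preds, reject

# Three ensemble members, two samples: one confident, one ambiguous.
probs = np.array([
    [[0.95, 0.05], [0.60, 0.40]],
    [[0.90, 0.10], [0.40, 0.60]],
    [[0.92, 0.08], [0.55, 0.45]],
])
preds, reject = predict_with_rejection(probs, threshold=0.5)
```

Here the first sample is confidently classified while the second, on which the members disagree, is deferred; in practice the threshold trades off coverage against misclassification cost.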
Biography: Arthur Thuy is a doctoral researcher in the research group of Prof. dr. Dries Benoit at Ghent University and a PhD fellow of the Research Foundation – Flanders (FWO). He is also affiliated with the CVAMO Lab at Flanders Make. His research focuses on supporting business decision-making with uncertainty quantification in neural networks, communicating when a model's output should (not) be trusted. His work spans applications in both the manufacturing industry and educational data mining. His research has been published in the European Journal of Operational Research and Annals of Operations Research, and has been presented at ECML-PKDD.
Prof. dr. Henry Moss - To BayesOpt and Beyond
Abstract: Bayesian optimisation (BO) pairs Gaussian-process surrogates with exploration-aware acquisition rules to locate the optimum of costly, black-box functions in just a handful of trials. In this introductory talk we unpack how GPs supply calibrated uncertainty that powers the explore-exploit trade-off, walk through the classical BO loop and its staple acquisition functions, and outline practical considerations for noisy, constrained, and moderately high-dimensional settings. We then cast an eye to the GenAI era, sketching how BO’s core ideas adapt to this new landscape.
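The classical BO loop the abstract walks through can be sketched as follows. This is an illustrative toy using scikit-learn's `GaussianProcessRegressor` with an upper-confidence-bound acquisition rule, under our own assumed objective and hyperparameters; it is not the speaker's code.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def objective(x):
    # Stand-in for a costly black-box function; maximum at x = 0.3.
    return -(x - 0.3) ** 2

grid = np.linspace(0.0, 1.0, 201).reshape(-1, 1)  # candidate points
X = np.array([[0.0], [1.0]])                      # two initial evaluations
y = objective(X).ravel()

for _ in range(10):  # the classical BO loop
    # Fit a GP surrogate to all evaluations so far.
    kernel = RBF(length_scale=0.2, length_scale_bounds="fixed")
    gp = GaussianProcessRegressor(kernel=kernel, alpha=1e-6)
    gp.fit(X, y)
    # Calibrated uncertainty drives the explore-exploit trade-off:
    # the upper confidence bound favours high mean OR high std.
    mu, sigma = gp.predict(grid, return_std=True)
    x_next = grid[np.argmax(mu + 2.0 * sigma)]
    X = np.vstack([X, [x_next]])
    y = np.append(y, objective(x_next))

best = X[np.argmax(y)].item()  # should land near the optimum at 0.3
```

Swapping the acquisition function (expected improvement, Thompson sampling, ...) or the surrogate changes the exploration behaviour, but the loop structure stays the same.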
Biography: Henry Moss is a Lecturer (equiv. Assistant Professor) in Mathematical AI at Lancaster University and an Early Career Advanced Fellow in the Department of Applied Mathematics and Theoretical Physics at the University of Cambridge. His interests include scalable Bayesian models to help scientists better understand the world around us; active learning and Bayesian optimisation to accelerate the design of new technologies; machine learning for calibrating scientific models; and molecular search and gene design.
Prof. dr. Willem Waegeman - Making AI systems more trustworthy through uncertainty disentanglement
Abstract: Given the increasing use of machine learning (ML) models for decisions that directly affect humans, it is essential that these models not only provide accurate predictions but also offer a credible representation of their uncertainty. Recent advances have led to probabilistic models capable of disentangling two types of uncertainty: aleatoric and epistemic. Aleatoric uncertainty is inherent to the data and cannot be eliminated, while epistemic uncertainty is related to the ML model and can be reduced with better modeling approaches or more data. In this talk I will elaborate on the opportunities and limitations of uncertainty disentanglement in explaining why an ML model fails to deliver accurate predictions. Furthermore, I will discuss several use cases that demonstrate the potential of uncertainty disentanglement for biotechnology applications.
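The aleatoric/epistemic split described above has a simple, widely used entropy-based formulation for ensembles, sketched below (a minimal illustration under our own assumptions, not the speaker's specific approach): total predictive uncertainty is the entropy of the averaged prediction, aleatoric uncertainty is the average of the members' entropies, and the epistemic part is their difference (the mutual information).

```python
import numpy as np

def disentangle(ensemble_probs, eps=1e-12):
    """Entropy-based decomposition of an ensemble's predictive uncertainty.

    ensemble_probs: array of shape (members, samples, classes).
    total     = entropy of the mean prediction,
    aleatoric = mean of the member entropies,
    epistemic = total - aleatoric (the mutual information).
    """
    mean_probs = ensemble_probs.mean(axis=0)
    total = -(mean_probs * np.log(mean_probs + eps)).sum(axis=-1)
    aleatoric = -(ensemble_probs * np.log(ensemble_probs + eps)).sum(axis=-1).mean(axis=0)
    epistemic = total - aleatoric
    return total, aleatoric, epistemic

# Members agree on a 50/50 split: irreducible (aleatoric) uncertainty.
agree = np.array([[[0.5, 0.5]], [[0.5, 0.5]]])
# Members confidently disagree: reducible (epistemic) uncertainty dominates.
disagree = np.array([[[0.99, 0.01]], [[0.01, 0.99]]])
```

Both inputs yield the same total uncertainty, yet the decomposition attributes it to different sources, which is exactly what makes the disentanglement useful for diagnosing why a model is uncertain.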
Biography: Willem Waegeman is an associate professor at Ghent University, and group leader of the BIOML group of the Department of Data Analysis and Mathematical Modelling (www.bioml.ugent.be). His main research interests are machine learning and bioinformatics. Specific interests include uncertainty quantification and complex prediction problems, such as multi-target and structured prediction problems. He is an author of more than 100 peer-reviewed papers in journals and conferences, and his work has won several prizes. In recent years he has served on the program committees of leading conferences in AI (ICML, NeurIPS, ECML/PKDD, ICLR, UAI, AISTATS, IJCAI, etc.).
Prof. dr. Carl Henrik Ek - Approximate Bayesian Inference of Composite Functions
Abstract:
Over the last decade, machine learning methods built on composite functions have led to a remarkable increase in predictive power. Building complex structures from compositions of simple mathematical objects has led to surprising generalisation success across many domains. The aspiration of Bayesian deep learning is to combine the predictive advantages of deep neural networks with the theoretical justifications of the Bayesian framework, hoping to achieve the benefits of both. Success in achieving the promises of a Bayesian treatment of deep neural networks has been so elusive that it raises the question of whether there is something inherent in their structure that renders them numb to a Bayesian approach.
In this talk, I will first highlight the inherent structures in compositional models that make Bayesian approximate inference challenging. We will provide a characterisation of why Bayesian learning has not succeeded, showing that for deep learning to harvest the benefits of a Bayesian treatment, this structure needs to be reflected in the approximate posterior. We will then discuss how tools from differential geometry can be used to characterise these parameter spaces, allowing us to design efficient computational models that circumvent these issues, along with tailored approximate inference schemes.
Biography:
Carl Henrik Ek is a Professor of Statistical Learning at the University of Cambridge and a Professor at Karolinska Institute in
Stockholm. He is the co-director for the UKRI AI Centre for Doctoral Training in Decision Making for Complex Systems which is a collaboration between University of Cambridge and the University of Manchester. He is also a fellow and Director of Studies in Computer Science at Pembroke College, Cambridge.
Carl Henrik's research interests focus on machine learning and, in particular, probabilistic models. He has worked extensively in Bayesian non-parametrics and has developed techniques for latent variable models using Gaussian processes, as well as methods for performing efficient inference in such models. He has also worked extensively on applications of these techniques in a range of different domains, from renewable energy to astrophysics. Over the last few years his main focus has been applications within health and medicine.
Use this link to register: https://forms.gle/bcKBrvYTzL21c8157
The symposium is free for all, but capacity is limited, so please let us know if you wish to cancel your registration. Thank you for your understanding.
You can contact us at ivan.deboi@uantwerpen.be.
Copyright © UAntwerp InViLab research group 2025