2:00 – 2:10 pm | Opening Remarks |
2:10 – 2:40 pm |
Keynote Talk 1: Uncertainty Quantification in Machine Learning: From Aleatoric to Epistemic
Dr. Eyke Hüllermeier, Ludwig-Maximilians-Universität München, Germany
Abstract: Due to the steadily increasing relevance of machine learning for practical applications, many of which come with safety requirements, the notion of uncertainty has received growing attention in machine learning research in recent years. This talk will address questions regarding the adequate representation and quantification of (predictive) uncertainty in (supervised) machine learning and elaborate on the distinction between two important types of uncertainty, often referred to as aleatoric and epistemic. Roughly speaking, while aleatoric uncertainty is due to the randomness inherent in the data-generating process, epistemic uncertainty is caused by the learner's ignorance of the true underlying model. Bayesian methods are commonly used to quantify both types of uncertainty, but alternative approaches have become popular in recent years, notably so-called evidential deep learning methods. By elaborating on conceptual and theoretical issues of such approaches, the talk will elucidate the challenging nature of epistemic uncertainty quantification. |
2:40 – 3:40 pm |
Accepted Paper Talks 1 |
3:40 – 4:10 pm |
Keynote Talk 2: Uncertainty in Cyber Security: Origin and Opportunities
Dr. Yufei Han, National Institute for Research in Computer Science and Automation, France
Abstract: The uncertainty in AI-based cyber attack detection stems mainly from the diversity of attackers' action choices: attackers may choose different vulnerabilities to exploit yet still reach the same attack goal. Security researchers and service providers therefore have to rely on a variety of sources that profile how cyber attacks are delivered, including threat intelligence reports, sandbox analysis, and honeypots that capture network traffic. A key question is thus: how can one reason over these sources to draw a confident estimate of a system's security status and predict potential threats? This talk will offer a comprehensive introduction to previous research efforts on uncertainty reasoning techniques deployed in cyber security knowledge encoding and cyber risk prediction. |
4:10 – 4:25 pm | Coffee Break |
4:25 – 4:55 pm |
Keynote Talk 3: Uncertainty (Mis-)Estimation: Beware of the Costs!
Dr. Meelis Kull, University of Tartu, Estonia
Abstract: In machine learning, uncertainty estimates are inherently approximate and, even with careful calibration, they remain imperfect: sometimes overestimated and sometimes underestimated. The Expected Calibration Error (ECE), a widely used evaluation metric, treats over- and under-estimation as equally problematic. However, in real-world applications, particularly in safety-critical domains such as autonomous driving, the costs associated with these two kinds of error can be highly asymmetric. This talk explores the implications of this cost asymmetry and how ignoring it can lead to serious consequences. We will look into the interplay between cost scenarios and proper scoring rules, discuss strategies for asymmetry-aware estimation, and examine how to measure their effectiveness. |
4:55 – 5:55 pm |
Accepted Paper Talks 2 |
5:55 – 6:00 pm | Closing |