Speaker: Daniel Ramos Castro

Abstract: In this talk, we will present the article by Nixon et al. 2020, “Measuring Calibration in Deep Learning”, published in CVPR Workshops 2020. In this paper, the currently most popular measure of calibration for deep learning, i.e., the Expected Calibration Error (ECE), is criticized, and some alternatives are proposed to tackle its identified problems. We further critique the paper in order to define a line of research for the AUDIAS group: developing a calibration measure that overcomes all the identified difficulties.
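For context, the ECE discussed in the talk is commonly computed by binning predictions by confidence and averaging the gap between accuracy and mean confidence per bin. Below is a minimal sketch of this standard equal-width-binning formulation (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Equal-width-binning ECE sketch.

    confidences: array of predicted confidences in [0, 1]
    correct: binary array, 1 if the prediction was correct
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(confidences)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Assign each sample to one bin (half-open on the left)
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = correct[mask].mean()       # empirical accuracy in bin
            conf = confidences[mask].mean()  # mean confidence in bin
            ece += (mask.sum() / n) * abs(acc - conf)
    return ece
```

A perfectly calibrated set of predictions (e.g. confidence 0.75 with 75% accuracy) yields an ECE of zero, while overconfident predictions yield a positive gap; the paper's criticisms concern, among other things, the sensitivity of this estimate to the binning scheme.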