Speaker: Sergio Márquez
Abstract: Today’s Deep Neural Networks (DNNs) are used for numerous classification tasks and achieve high accuracy. In many settings, probabilistic classifiers are used, which assign a confidence value to each prediction. For these systems to perform optimally, the confidence values must be well calibrated, where calibration means agreement between the confidence reported by the classifier and the true probability of the prediction being correct. Recent studies have shown that modern probabilistic classifiers are not well calibrated.
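For reference, a classifier is well calibrated when, among all predictions reported with confidence p, a fraction p are actually correct. Miscalibration is commonly summarized with the Expected Calibration Error (ECE); the NumPy sketch below illustrates the idea (the equal-width binning and the bin count are illustrative assumptions, not details taken from this project):

    import numpy as np

    def expected_calibration_error(confidences, correct, n_bins=15):
        # confidences: max softmax probability per prediction, shape (N,)
        # correct: boolean array, True where the prediction was right
        bins = np.linspace(0.0, 1.0, n_bins + 1)
        ece = 0.0
        for lo, hi in zip(bins[:-1], bins[1:]):
            in_bin = (confidences > lo) & (confidences <= hi)
            if in_bin.any():
                # gap between empirical accuracy and mean confidence in the bin
                gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
                ece += in_bin.mean() * gap  # weight by bin occupancy
        return ece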
This final degree project aims to design and implement new calibration strategies for multiclass probabilistic classifiers, focusing on the case of multimedia classifiers (audio, image), so that they can serve as an alternative to current methods that underperform in certain cases.
To this end, several calibration techniques with state-of-the-art performance (Matrix Scaling, Temperature Scaling, Gaussian Backend) have been studied, evaluating them both on synthetic data sets and on the outputs of a neural network (EfficientNet-B0) trained on image data sets (CIFAR-3, CIFAR-10, CIFAR-100).
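For context on these baselines, Temperature Scaling fits a single scalar T > 0 that divides the network's logits before the softmax, chosen to minimize the negative log-likelihood on a held-out set, while Matrix Scaling instead learns a full affine map of the logits. A minimal NumPy/SciPy sketch of the former follows (the optimizer choice, its bounds, and the function name are assumptions made for illustration):

    import numpy as np
    from scipy.optimize import minimize_scalar

    def fit_temperature(logits, labels):
        # logits: uncalibrated network outputs, shape (N, K)
        # labels: integer class labels, shape (N,)
        def nll(T):
            z = logits / T
            z = z - z.max(axis=1, keepdims=True)  # numerical stability
            log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
            return -log_probs[np.arange(len(labels)), labels].mean()
        # Optimize the single positive scalar T on held-out data.
        return minimize_scalar(nll, bounds=(0.05, 20.0), method="bounded").x

Because it rescales all logits by one shared constant, Temperature Scaling cannot change the predicted class, which is part of why it is such a popular post-hoc calibrator.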
Building on these tests of the different calibration methods, several algorithms based on Normalizing Flows are then proposed to overcome the limitations identified in the previous study without compromising performance.
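The abstract does not detail the proposed algorithms, so purely as an illustration of the core building block: a normalizing flow computes exact densities through the change-of-variables formula, log p(x) = log p_base(f(x)) + log |det J_f(x)|, by stacking invertible layers. The sketch below shows this for the simplest possible flow, an elementwise affine map onto a standard normal base (the single-layer choice and all names are hypothetical, not the project's method):

    import numpy as np

    def affine_flow_logpdf(x, scale, shift):
        # Invertible map y = scale * x + shift (elementwise, scale != 0);
        # scale and shift are vectors matching x's last axis.
        y = scale * x + shift
        # Log-density of the standard normal base evaluated at y.
        log_base = -0.5 * (y ** 2 + np.log(2.0 * np.pi)).sum(axis=-1)
        log_det = np.log(np.abs(scale)).sum()  # log |det Jacobian|
        return log_base + log_det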