Speaker: Almudena Aguilera
Abstract: Speaker Recognition Systems aim to automatically recognize the identity of an individual from a recording of his/her speech or voice. Despite the progress of these systems in terms of accuracy, we must ask ourselves: "What happens when we make important decisions about someone based on the scores given by the system?" The answer is clear: discrimination and fairness problems arise. In this talk, and in the paper it is based on, we aim to explore the disparity in performance achieved by state-of-the-art deep speaker recognition systems across different demographic groups (divided by gender, age, and language). In addition, the researchers investigate whether a balanced representation of the different groups in the training set and test set can help to mitigate or reduce this problem. The main goal of the study is to provide a solid basis for studying the fairness problem in speaker recognition.
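
As a rough illustration of how such a performance disparity could be quantified (this is a minimal sketch, not code from the paper; the group labels, trial data, and function names are purely hypothetical), one might compute the Equal Error Rate of a speaker verification system separately for each demographic group and inspect the gap between groups:

```python
import numpy as np
from sklearn.metrics import roc_curve

def eer(labels, scores):
    """Equal Error Rate: operating point where false-accept rate ~= false-reject rate."""
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1.0 - tpr
    i = np.nanargmin(np.abs(fnr - fpr))          # threshold index where FAR and FRR cross
    return (fpr[i] + fnr[i]) / 2.0

def per_group_eer(trials):
    """trials: iterable of (group, label, score); returns {group: EER}.
    label 1 = same-speaker (target) trial, 0 = impostor trial."""
    by_group = {}
    for group, label, score in trials:
        labels_list, scores_list = by_group.setdefault(group, ([], []))
        labels_list.append(label)
        scores_list.append(score)
    return {g: eer(np.array(ls), np.array(ss)) for g, (ls, ss) in by_group.items()}

# Illustrative usage with made-up verification trials and demographic groups.
trials = [
    ("group_A", 1, 0.91), ("group_A", 0, 0.30), ("group_A", 1, 0.62), ("group_A", 0, 0.55),
    ("group_B", 1, 0.88), ("group_B", 0, 0.20), ("group_B", 1, 0.75), ("group_B", 0, 0.10),
]
eers = per_group_eer(trials)
print(eers)
# A large gap between the groups' EERs would indicate the kind of disparity discussed in the talk.
print("EER gap:", max(eers.values()) - min(eers.values()))
```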