The AUDIAS group has regularly taken part in competitive technology evaluations organized by prestigious institutions. These evaluations drive research lines in the field, provide a common framework for testing the systems developed, and bring together well-recognized research laboratories and industrial companies from around the world.
In recent years, we can highlight our participation in the following evaluations:
NIST (National Institute of Standards and Technology) Evaluations
- NIST SRE (Speaker Recognition Evaluation) 2019: For this evaluation, AUDIAS collaborated again with Brno University of Technology, Phonexia, Shanghai Jiao Tong University, Omilia – Conversational Intelligence and CRIM. The evaluation comprised two tracks: telephone and audio from video (VAST). Moreover, the latter track was not restricted to audio alone; audiovisual systems were also implemented.
- NIST SRE 2018: In this evaluation, AUDIAS participated in collaboration with Brno University of Technology, Phonexia, Nuance Communications, Omilia – Conversational Intelligence and CRIM. The systems submitted were all based on the x-vector paradigm, with different features, DNN topologies and backends.
- NIST LRE (Language Recognition Evaluation) 2017: The aim of this evaluation was similar to that of LRE 2015. In this case, AUDIAS participated in collaboration with Brno University of Technology (BUT, Czech Republic), Politecnico di Torino (Italy) and Phonexia (Czech Republic). The final submission combined several i-vector systems based on bottleneck features with a DNN-embedding-based system.
- NIST SRE 2016: This evaluation targeted speaker verification, and AUDIAS took part in collaboration with SRI International, Universidad de Buenos Aires and CONICET. The systems developed included different DNN, bottleneck-feature and i-vector systems, with several strategies for domain adaptation and calibration.
- NIST LRE 2015: This evaluation aimed at identifying the language of given recordings among more than 20 languages, grouped into clusters of similar languages. The AUDIAS submission consisted of the then recently introduced LSTM-based end-to-end systems for language recognition, in combination with well-established i-vector systems.
Previously, we have participated in most of the NIST speaker and language recognition evaluations since 2000.
For details on the NIST Speaker evaluations, we suggest reading (open access):
- J. Gonzalez-Rodriguez, “Evaluating Automatic Speaker Recognition systems: An overview of the NIST Speaker Recognition Evaluations (1996-2014)”, Loquens, CSIC, Vol. 1, No. 1, pp. 1-15, January 2014.
For a sample AUDIAS/ATVS participation in NIST LRE:
- J. Gonzalez-Dominguez, I. Lopez-Moreno, J. Franco-Pedroso, D. Ramos, D. T. Toledano and J. Gonzalez-Rodriguez, “Multilevel and Session Variability Compensated Language Recognition: ATVS-UAM Systems at NIST LRE 2009”, IEEE Journal on Selected Topics in Signal Processing, Vol. 4, No. 6, pp. 1084-1093, December 2010.
For a sample AUDIAS/ATVS participation in NIST SRE:
- J. Gonzalez-Rodriguez, D. Ramos-Castro, D. Torre-Toledano, A. Montero-Asenjo, J. Gonzalez-Dominguez, I. Lopez-Moreno, J. Fierrez-Aguilar, D. Garcia-Romero and J. Ortega-Garcia, “On the Use of High-Level Information for Speaker Recognition: the ATVS-UAM System at NIST SRE 05”, IEEE Aerospace and Electronic Systems Magazine, Vol. 22, No. 1, January 2007.
ALBAYZIN Evaluations
As part of the IberSPEECH conferences, and sponsored by the Spanish Speech Technology Thematic Network (RTTH), the ALBAYZIN evaluations have been organized biennially since 2008.
AUDIAS co-organized one of the evaluations (ALBAYZIN Search-on-Speech) in 2012, 2014, 2016, 2018 and 2020. This evaluation deals with the problem of searching for words or sequences of words within an audio repository. The queries can be given in textual form or as audio samples (Query-by-Example). AUDIAS has participated in all of these evaluations except the 2014 edition.
AUDIAS has also participated in other ALBAYZIN evaluations uninterruptedly since 2008, particularly in the fields of speaker diarization and language recognition.
Other Evaluations
In 2017 we took part in ASVspoof 2017 (Automatic Speaker Verification Spoofing and Countermeasures Challenge). We developed an audio fingerprinting system and evaluated it within the framework provided by the ASVspoof 2017 challenge. However, since our system did not comply with the protocol required by the evaluation, it was not formally submitted. For a description of our system, we refer to the following paper:
- J. Gonzalez-Rodriguez, A. Escudero, D. d. Benito-Gorrón, B. Labrador and J. Franco-Pedroso, “An Audio Fingerprinting Approach to Replay Attack Detection on ASVSPOOF 2017 Challenge Data”, in Proc. Odyssey 2018: The Speaker and Language Recognition Workshop, pp. 304-311, 2018.