ASVspoof 2017:

Automatic Speaker Verification

Spoofing and Countermeasures Challenge

Theme in the 2017 edition: Audio replay attack detection


The ASVspoof 2017 joint ASV+countermeasure protocol related to our Interspeech 2018 paper can be downloaded here.


The ASVspoof 2017 Version 2 database with evaluation keys and extended metadata is available online:

The 2nd Automatic Speaker Verification Spoofing and Countermeasures Challenge (ASVspoof 2017) Database, Version 2

An overview paper of the ASVspoof 2017 Version 2.0 database, accepted to Odyssey 2018, is available here.

Slides of the challenge overview presented in the Interspeech 2017 special session are available here.

Introduction

Are you good at machine learning for audio signals? Are you good at discriminating 'fake' signals from authentic ones? Are you looking for new audio processing challenges? Do you work in the domain of speaker recognition or a related field? If so, you are invited to take part in the Automatic Speaker Verification Spoofing and Countermeasures (ASVspoof) Challenge.

The overall objective of the challenge series is to enhance the security of automatic speaker verification (ASV) systems against intentional circumvention using fake audio recordings, also known as 'spoofing attacks' or 'presentation attacks'. ASVspoof 2017 is the second edition of a challenge first held in 2015 (find the old page here), which focused on the detection of artificial speech created using speech synthesis and voice conversion methods. The new focus of the 2017 edition is replay attacks, especially those encountered under 'unseen' conditions - for instance, involving replay environments, playback devices or talkers different from those in the training data. An ideal replay detection system should be robust to both known and unknown conditions – our data will contain a mixture of both. This is in line with the general direction of improving the generalization of spoofing attack detection to unforeseen conditions.

Despite 'ASV' being in the title, no prior knowledge of automatic speaker verification technology is required to participate in the challenge! The task is 'standalone' replay audio detection, which can be addressed as a generic audio pattern classification problem using your favorite machine learning techniques from other domains. We welcome as many new ideas as possible!

Challenge task

Given a short clip of speech audio, determine whether it is a GENUINE human voice (live recording) or a REPLAY recording (fake). You will be provided a labeled development set of genuine and replay audio examples, along with further metadata about the speech content, playback devices and replay environments. Your task is to develop a system that assigns a single 'liveness' or 'genuineness' score to a new audio clip, and to execute your system on a set of test files for which the ground truth is not provided.
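To make the required interface concrete: a submission ultimately boils down to a function mapping one audio clip to one real-valued score, with higher values indicating GENUINE. The Python sketch below is purely illustrative - the audio I/O library, the statistic and all names are placeholders, not anything prescribed by the challenge:

    # Illustrative only: the challenge fixes the *output* contract (one score
    # per clip), not the features or classifier used to produce it.
    import numpy as np
    import soundfile as sf  # assumed audio I/O library, not challenge-provided

    def score_clip(wav_path: str) -> float:
        """Higher score = more likely GENUINE; lower = more likely REPLAY."""
        audio, _sr = sf.read(wav_path)
        # Placeholder statistic standing in for a trained classifier's output.
        return float(np.log(np.var(audio) + 1e-12))

A real system would replace the placeholder statistic with a trained model's output, but the one-score-per-clip contract stays the same.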

For more details, refer to the evaluation plan: Automatic Speaker Verification Spoofing and Countermeasures Challenge Evaluation Plan (PDF)

Obtaining the data

ASVspoof 2017 data is based primarily on the ongoing RedDots data collection project (link), processed through various replay conditions. To obtain the development data:

  1. Send a request to info@asvspoof.org to obtain a download link. Please indicate your institute in the email.
  2. The development package should be 346.87 MB in size. You may additionally verify the md5 checksum of the package (see the sketch below): 3a7e3fffa50609dc31781d5ba1807581
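The checksum can be verified with standard tools (e.g. md5sum on Linux) or with a few lines of Python; the package file name below is an assumption, so use whatever name your download link provides:

    import hashlib

    EXPECTED_MD5 = "3a7e3fffa50609dc31781d5ba1807581"

    def md5sum(path: str, chunk: int = 1 << 20) -> str:
        h = hashlib.md5()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(chunk), b""):
                h.update(block)
        return h.hexdigest()

    # 'ASVspoof2017_dev.zip' is a hypothetical file name.
    assert md5sum("ASVspoof2017_dev.zip") == EXPECTED_MD5, "checksum mismatch"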

In addition, there will also be a mailing list for the challenge.

Baseline replay attack detector

To get started quickly with your experiments on the development data, you may use our Matlab-based reference replay attack detector here: baseline_CM.zip
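For those working outside Matlab, here is a minimal Python sketch of a two-class GMM countermeasure of the same general flavour: one GMM trained on genuine speech features, one on replay features, and test clips scored by their average frame log-likelihood ratio. The MFCC features (via librosa) and all parameter values are stand-ins chosen for illustration, not necessarily the baseline's actual configuration:

    import numpy as np
    import librosa                      # assumed feature-extraction library
    from sklearn.mixture import GaussianMixture

    def feats(wav_path: str) -> np.ndarray:
        y, sr = librosa.load(wav_path, sr=16000)
        return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20).T  # (frames, 20)

    def train_gmm(wav_paths, n_components=64):
        # Pool frames from all training clips of one class into a single GMM.
        X = np.vstack([feats(p) for p in wav_paths])
        return GaussianMixture(n_components=n_components,
                               covariance_type="diag").fit(X)

    def llr_score(wav_path, gmm_genuine, gmm_replay) -> float:
        X = feats(wav_path)
        # score() is the average per-frame log-likelihood, so this is the
        # average log-likelihood ratio: positive favours GENUINE.
        return float(gmm_genuine.score(X) - gmm_replay.score(X))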

Further information

Please refer to the earlier 2015 challenge edition here for general background. We will also keep adding other useful readings to this page.

The ASVspoof 2017 challenge overview paper to appear at INTERSPEECH 2017 is available:

T. Kinnunen, M. Sahidullah, H. Delgado, M. Todisco, N. Evans, J. Yamagishi, K. A. Lee, "The ASVspoof 2017 Challenge: Assessing the Limits of Replay Spoofing Attack Detection", Proc. INTERSPEECH 2017 (to appear). [PDF]

T. Kinnunen, M. Sahidullah, M. Falcone, L. Costantini, R. Gonzalez Hautamäki, D. Thomsen, A. Sarkar, Z.-H. Tan, H. Delgado, M. Todisco, N. Evans, V. Hautamäki, K. A. Lee, "RedDots Replayed: A New Replay Spoofing Attack Corpus for Text-Dependent Speaker Verification Research", Proc. ICASSP 2017. [PDF]

Z. Wu, J. Yamagishi, T. Kinnunen, C. Hanilçi, M. Sahidullah, A. Sizov, N. Evans, M. Todisco, H. Delgado, “ASVspoof: the Automatic Speaker Verification Spoofing and Countermeasures Challenge”, IEEE Journal on Selected Topics in Signal Processing (to appear, https://doi.org/10.1109/JSTSP.2017.2671435) [PDF]

M. Todisco, H. Delgado, N. Evans, “Constant Q Cepstral Coefficients: A Spoofing Countermeasure for Automatic Speaker Verification”, Computer Speech and Language (to appear, http://dx.doi.org/10.1016/j.csl.2017.01.001) [PDF]

Submission of results

Each team is required to submit a brief system description in PDF format, and can submit up to six score files as specified in the evaluation plan. Score files can be submitted in separate emails, or compressed into a single archive; the name of the archive should include the team name. Participants should indicate which submission is their primary one, as it will be used by the organizers to rank the results. The evaluation set score files should be submitted as an email attachment to info@asvspoof.org.
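The authoritative score file syntax is the one defined in the evaluation plan; as a rough illustration only, the sketch below assumes a one-trial-per-line '<file_id> <score>' layout, and the helper names and archive naming are hypothetical:

    import zipfile

    def write_scores(path: str, scores: dict) -> None:
        # scores: trial file name -> floating-point detection score,
        # written one trial per line as "<file_id> <score>" (assumed layout).
        with open(path, "w") as f:
            for trial, s in sorted(scores.items()):
                f.write(f"{trial} {s:.6f}\n")

    def package(team: str, score_files: list) -> str:
        # Archive name includes the team name, as requested above.
        archive = f"{team}_scores.zip"
        with zipfile.ZipFile(archive, "w") as z:
            for p in score_files:
                z.write(p)
        return archive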

Paper Submission

The results of the Challenge are planned to be disseminated at a Special Session of INTERSPEECH 2017 (official information pending) (link). We invite you to submit a manuscript describing your challenge work to INTERSPEECH 2017. The organizers will also submit a general challenge overview paper, which will be uploaded to this page before the submission deadline (estimated mid-February).

Challenge papers and results

Will be added here later.

Important dates

December 23, 2016 Development data published
February 10, 2017 Evaluation data published
February 24, 2017 Evaluation scores due
March 3, 2017 Results available
March 14, 2017 Interspeech paper submission deadline
May 2017 Metadata/keys of evaluation data released
August 2017 Interspeech 2017 (Stockholm, Sweden)

Organisers

Tomi Kinnunen, University of Eastern Finland, FINLAND
Nicholas Evans, EURECOM, FRANCE
Junichi Yamagishi, National Institute of Informatics, JAPAN / University of Edinburgh, UK
Kong Aik Lee, Institute for Infocomm Research, SINGAPORE
Md Sahidullah, University of Eastern Finland, FINLAND
Massimiliano Todisco, EURECOM, FRANCE
Héctor Delgado, EURECOM, FRANCE