Two-channel mixtures of speech and real-world background noise
We propose to repeat the "Two-channel mixtures of speech and real-world background noise" task without the CHiME corpus, since reference speech data have already been provided in the second CHiME challenge.
This task aims to evaluate denoising and DOA estimation techniques on the SiSEC 2010 noisy speech dataset.
Description of the dataset
We consider two-channel mixtures of one speech source and real-world background noise sampled at 16 kHz.
These data are part of the SiSEC 2010 noisy speech dataset. Background noise signals were recorded via a pair of omnidirectional microphones spaced by 8.6 cm in six different public environments:
Su1: subway car moving
Su2: subway car standing at station
Ca1: cafeteria 1
Ca2: cafeteria 2 (another cafeteria than Ca1)
Sq1: square 1
Sq2: square 2 (another square than Sq1)
and in two different positions within each environment:
Ce: center
Co: corner
Two recordings, identified by a letter (A or B), were made in each case. Mixtures were then generated by adding a speech signal to the background noise signal. For the reverberant environments (Su and Ca), the speech signals were recorded in an office room using the same microphone pair. For the outdoor environments (Sq), the speech signals were mixed anechoically through simulation. The distance between the sound source and the array centroid was 1.0 m for female speech and 0.8 m for male speech. The direction of arrival (DOA) of the speech source was different in each mixture, and the signal-to-noise ratio (SNR) was drawn randomly between -17 and +12 dB.
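As an illustration of how such mixtures are constructed, the following Python sketch scales a speech signal so that the mixture reaches a target SNR; the `mix_at_snr` helper is illustrative and not part of the dataset's tools:

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale the speech and add it to the noise so that the mixture has
    the requested speech-to-noise ratio (in dB). `speech` and `noise`
    are float arrays of the same shape (samples x channels)."""
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2)
    # Gain such that 10*log10(gain^2 * Ps / Pn) == snr_db.
    gain = np.sqrt(noise_power / speech_power * 10.0 ** (snr_db / 10.0))
    return gain * speech + noise

# Draw an SNR uniformly in [-17, +12] dB, as in the dataset.
rng = np.random.default_rng(0)
snr_db = rng.uniform(-17.0, 12.0)
```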
Download the test set (13 MB)
The data consist of 20 stereo WAV audio files that can be imported in Matlab using the wavread command. These files are named test_<env>_<cond>_<take>_mix.wav, where
<env>: noise environment (Su1, Su2, Ca1, Ca2, Sq1, or Sq2)
<cond>: recording condition (Ce or Co)
<take>: take (A or B)
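Outside of Matlab, these 16-bit stereo files can be loaded with standard tools; for example, the following Python sketch (using only the standard-library `wave` module and NumPy, with an illustrative `read_stereo_wav` helper) mirrors what wavread returns:

```python
import wave

import numpy as np

def read_stereo_wav(path):
    """Read a 16-bit PCM stereo WAV file into a float array in [-1, 1)
    of shape (num_samples, 2), plus the sampling rate."""
    with wave.open(path, "rb") as w:
        # The dataset files are 16-bit (2 bytes) stereo at 16 kHz.
        assert w.getsampwidth() == 2 and w.getnchannels() == 2
        fs = w.getframerate()
        raw = w.readframes(w.getnframes())
    x = np.frombuffer(raw, dtype="<i2").reshape(-1, 2)
    return x.astype(np.float64) / 32768.0, fs
```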
Download the development set (11 MB)
The data consist of 36 WAV audio files and 10 text files. These files are named as follows:
dev_<env>_<cond>_<take>_src.wav: single-channel speech signal
dev_<env>_<cond>_<take>_sim.wav: two-channel spatial image of the speech source
dev_<env>_<cond>_<take>_noi.wav: two-channel spatial image of the background noise
dev_<env>_<cond>_<take>_mix.wav: two-channel mixture signal
dev_<env>_<cond>_<take>_DOA.txt: DOA of the speech source (see the SiSEC 2010 wiki for the convention adopted to measure DOA)
where
<env>: noise environment (Su1, Su2, Ca1, Ca2, Sq1, or Sq2)
<cond>: recording condition (Ce or Co)
<take>: take (A or B)
Since the source DOAs were measured geometrically in the Su and Ca environments, they might contain measurement errors of up to a few degrees; by contrast, there is no such error in the Sq environments, because the spatial images of the speech source were simulated. Note also that not all takes are available for the Co condition of the Ca1 environment.
Tasks and reference software
We propose the following three tasks:
- speaker DOA estimation: estimate the DOA of the speech source
- speech signal estimation: estimate the single-channel speech source
- speech and noise spatial image estimation: decompose the mixture signal into two two-channel signals corresponding to the speech source and the background noise
Participants are welcome to use some of the Matlab reference software below to build their own algorithms:
- stft_multi.m: multichannel STFT
- istft_multi.m: multichannel inverse STFT
- example_denoising.m: TDOA estimation by GCC-PHATmax, ML target and noise variance estimation under a diffuse noise model, and multichannel Wiener filtering
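The TDOA estimation step performed by example_denoising.m can be sketched as follows. This is a minimal Python illustration of GCC-PHAT and of a far-field TDOA-to-DOA conversion for the 8.6 cm microphone pair, not the reference implementation; function names and the assumed speed of sound are ours:

```python
import numpy as np

def gcc_phat_tdoa(x1, x2, fs, max_tau):
    """Estimate the TDOA (positive when channel 2 lags channel 1) by
    GCC-PHAT: whiten the cross-spectrum to unit magnitude, then pick
    the lag maximizing its inverse transform."""
    n = len(x1) + len(x2)
    X1 = np.fft.rfft(x1, n)
    X2 = np.fft.rfft(x2, n)
    cross = X2 * np.conj(X1)
    cross /= np.abs(cross) + 1e-12        # PHAT weighting
    cc = np.fft.irfft(cross, n)
    max_shift = int(max_tau * fs)
    # Restrict the search to physically possible lags for the array.
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs

def tdoa_to_doa(tau, d=0.086, c=343.0):
    """Far-field DOA (degrees) for two microphones spaced d = 8.6 cm
    apart, with speed of sound c (m/s)."""
    return np.degrees(np.arcsin(np.clip(c * tau / d, -1.0, 1.0)))
```

For a pair spaced 8.6 cm apart, the maximum physical TDOA is d/c, i.e. about 0.25 ms, which bounds the lag search.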
Each participant is asked to submit the results of his/her algorithm for task 2 and/or 3 over all or part of the mixtures in the development and test datasets. Results for task 1 may also be submitted if possible.
Each participant should make his/her results available online in the form of a tarball with the following file naming convention:
<set>_<env>_<cond>_<take>_src.wav: single-channel speech signal
<set>_<env>_<cond>_<take>_sim.wav: two-channel spatial image of the speech source
<set>_<env>_<cond>_<take>_noi.wav: two-channel spatial image of the background noise
<set>_<env>_<cond>_<take>_DOA.txt: DOA of the speech source
where <set> is dev or test and <env>, <cond>, and <take> are as above.
Each participant should then send an email to “onono (at) nii.ac.jp” and “zbynek.koldovsky (at) tul.cz” providing:
- contact information (name, affiliation)
- basic information about his/her algorithm, including its average running time (in seconds per test excerpt and per GHz of CPU) and a bibliographical reference if possible
- the URL of the tarball
The submitted audio files will be made available on a website under the terms of the Licensing section below.
Evaluation criteria
We propose to use the same evaluation criteria as in SiSEC 2010, except that the order of the estimated sources must be recovered.
The estimated speaker DOAs in task 1 will be evaluated in terms of the absolute difference from the true DOAs.
The estimated speech signals in task 2 will be evaluated via the energy ratio criteria defined in the BSS_EVAL toolbox, which allow arbitrary filtering between the estimated source and the true source.
The estimated speech and noise spatial image signals in task 3 will be evaluated via the energy ratio criteria introduced for the Stereo Audio Source Separation Evaluation Campaign and via the perceptually motivated criteria in the PEASS toolkit.
Performance will be compared to that of ideal binary masking as a benchmark (i.e., binary masks providing maximum SDR), computed over an STFT or a cochleagram.
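The idea behind the ideal binary masking benchmark can be sketched as follows: given the true speech and noise spatial images (hence "ideal"), each time-frequency bin of the mixture is assigned to whichever source dominates it. This Python sketch operates on precomputed STFT coefficients and is only an illustration; the actual benchmark may differ in windowing and masking details:

```python
import numpy as np

def ideal_binary_mask(speech_stft, noise_stft):
    """Oracle mask: 1 in each time-frequency bin where the speech
    magnitude exceeds the noise magnitude, 0 elsewhere."""
    return (np.abs(speech_stft) > np.abs(noise_stft)).astype(float)

def apply_mask(mask, mixture_stft):
    """Estimate the speech by masking the mixture STFT; the noise
    estimate is the complementary (1 - mask) part."""
    return mask * mixture_stft
```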
The above performance criteria are implemented in the BSS_EVAL toolbox and the PEASS toolkit, respectively. An example use is given in example_denoising.m.
Licensing
All files are distributed under the terms of the Creative Commons Attribution-Noncommercial-ShareAlike 3.0 license. The files to be submitted by participants will be made available on a website under the terms of the same license.
Public environment data were authored by Ngoc Q. K. Duong and Nobutaka Ito.
Task proposed by the Audio Committee