The Audio Research Group at Tampere University conducts internationally renowned research in audio, speech, and music signal processing.

Main Research Areas

Content analysis of sounds

Content analysis of general audio is concerned with automatically extracting relevant information from audio signals, such as the characteristics of the acoustic scene and the sound sources present. Applications range from simple classification tasks that enable context awareness, to security and surveillance systems based on detecting sounds of interest, to the organization, indexing, tagging, and querying of large audio databases.
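
As a concrete illustration, the sketch below classifies whole audio clips from log-mel features using librosa and scikit-learn. It is a minimal toy baseline rather than any of the group's systems, and the file paths and scene labels are placeholders.

    # Minimal clip-level audio classification sketch (illustrative only):
    # log-mel features averaged over time, fed to a standard classifier.
    import numpy as np
    import librosa
    from sklearn.linear_model import LogisticRegression

    def clip_features(path, sr=22050, n_mels=64):
        # Load audio, compute a log-mel spectrogram, and average over time
        # to get one fixed-length feature vector per clip.
        y, _ = librosa.load(path, sr=sr, mono=True)
        mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
        return librosa.power_to_db(mel).mean(axis=1)

    train_files = ["street_01.wav", "office_01.wav"]   # placeholder paths
    train_labels = ["street_traffic", "office"]        # placeholder labels

    X = np.stack([clip_features(f) for f in train_files])
    clf = LogisticRegression(max_iter=1000).fit(X, train_labels)

    # Predict the class of a previously unseen clip (placeholder path).
    print(clf.predict(clip_features("unknown_clip.wav")[None, :]))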

Spatial audio

Microphone arrays link the physical locations of sound sources to computer software and allow the sound field to be captured. Applications of microphone arrays include localization tasks such as speaker localization and speaker position tracking, as well as signal enhancement and separation.
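
For example, a common first step in speaker localization is estimating the time difference of arrival (TDOA) between two microphones. The sketch below implements the standard GCC-PHAT estimator in NumPy on a synthetic signal; it is illustrative only and not tied to any particular system of ours.

    # Time-difference-of-arrival estimation between two channels with GCC-PHAT.
    import numpy as np

    def gcc_phat(sig, ref, fs, max_tau=None):
        # Cross-power spectrum with phase transform (PHAT) weighting.
        n = sig.shape[0] + ref.shape[0]
        SIG = np.fft.rfft(sig, n=n)
        REF = np.fft.rfft(ref, n=n)
        R = SIG * np.conj(REF)
        cc = np.fft.irfft(R / (np.abs(R) + 1e-12), n=n)
        max_shift = n // 2 if max_tau is None else min(int(fs * max_tau), n // 2)
        # Re-centre the correlation so index max_shift corresponds to lag 0.
        cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
        shift = np.argmax(np.abs(cc)) - max_shift
        return shift / float(fs)   # delay in seconds (positive: sig lags ref)

    fs = 16000
    ref = np.random.randn(fs)       # synthetic 1-second reference signal
    sig = np.roll(ref, 8)           # same signal delayed by 8 samples
    print(gcc_phat(sig, ref, fs))   # approx. 8 / 16000 = 0.0005 s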

Source separation and signal enhancement

Source separation refers to the task of estimating the signals produced by individual sound sources from a mixture signal containing several sources. This is a fundamental problem in many audio signal processing tasks, since isolated sources can be analyzed and processed with much better accuracy than mixtures of sounds.
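
As a simple illustration of the idea, the sketch below separates a mixture into two sources with an unsupervised NMF decomposition of the magnitude spectrogram followed by soft masking. This is a generic textbook baseline rather than one of our published methods; 'mixture.wav' and the grouping of components into sources are placeholders.

    # Unsupervised source separation sketch: NMF on the magnitude spectrogram
    # plus Wiener-style soft masking of the complex mixture spectrogram.
    import numpy as np
    import librosa
    import soundfile as sf
    from sklearn.decomposition import NMF

    y, sr = librosa.load("mixture.wav", sr=None, mono=True)
    S = librosa.stft(y, n_fft=1024, hop_length=256)
    V = np.abs(S)                                  # magnitude spectrogram

    # Factorize V ~ W @ H into spectral templates (W) and activations (H).
    model = NMF(n_components=8, init="nndsvda", max_iter=400)
    W = model.fit_transform(V)
    H = model.components_

    # Assign the first half of the components to source 1 and the rest to
    # source 2 (an arbitrary split for illustration), then build soft masks.
    V1 = W[:, :4] @ H[:4, :]
    V2 = W[:, 4:] @ H[4:, :]
    mask1 = V1 / (V1 + V2 + 1e-12)
    source1 = librosa.istft(mask1 * S, hop_length=256)
    source2 = librosa.istft((1.0 - mask1) * S, hop_length=256)

    sf.write("source1.wav", source1, sr)
    sf.write("source2.wav", source2, sr)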

Interested in collaborating with us?

Sound event localization and detection (SELD) is the combined task of identifying the temporal onset and offset of a sound event, tracking its spatial location while it is active, and associating a textual label describing the sound event. Our work on SELD was recently published in the IEEE Journal of Selected Topics in Signal Processing (JSTSP). The proposed method was compared with two...
Acoustic scene classification is the task of classifying a sound segment (e.g., 30 seconds long) into an acoustic scene class, such as airport, metro station, or office. We take a recording, give it as input to our acoustic scene classification method, and the method outputs the acoustic scene the recording came from. To develop our method, we use a dataset of recordings of a list of...
A collection of Python utilities for Detection and Classification of Acoustic Scenes and Events (DCASE) research has been released. These utilities were originally created for the DCASE challenge baseline systems (2016 & 2017) and are now bundled into a standalone library to allow their reuse in other research projects.
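
A short usage sketch of the dcase_util library is shown below. The calls follow the library's documented tutorial-style API, but treat the parameter choices and the placeholder filename as examples only.

    # Load an audio file and extract mel-band features with dcase_util
    # (install with: python -m pip install dcase_util).
    import dcase_util

    # Load audio into an AudioContainer and mix down to mono.
    audio = dcase_util.containers.AudioContainer().load(filename="scene.wav").mixdown()

    # Extract a mel-band representation of the signal.
    mel_extractor = dcase_util.features.MelExtractor(fs=audio.fs)
    mel_data = mel_extractor.extract(audio)
    print(mel_data.shape)   # (mel bands, frames)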
