
Title: Deep learning architectures for music audio classification: a personal (re)view

Speaker: Jordi Pons i Puig (Dolby Laboratories)

Organizer: Jesús Javier Rodríguez Sala

Date: Tuesday, February 23, 2021, at 12:00.

Venue: Online. https://meet.google.com/toj-jchk-ahh

Abstract: A brief review of the state of the art in music informatics research shows that deep learning models achieve competitive results on several music-related tasks. In this talk I will provide insights into which deep learning architectures perform best (in our experience) for audio classification. To this end, I will first review the available front-ends (the part of the model that interacts with the input signal to map it into a latent space) and back-ends (the part that predicts the output from the representation produced by the front-end). Finally, to ground this discussion of front-ends and back-ends, I will present cases we encountered while researching which deep learning architectures work best for music audio tagging.
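The front-end/back-end split described in the abstract can be illustrated with a minimal sketch. This is not the speaker's actual model; all shapes, names, and layer choices here are illustrative assumptions: a strided convolutional front-end maps a raw waveform into a latent feature sequence, and a back-end pools that sequence over time and predicts per-tag scores.

```python
import numpy as np

rng = np.random.default_rng(0)

def front_end(waveform, filters, hop=256):
    """Map a raw waveform into a latent space: a strided 1-D convolution
    with a bank of filters, followed by a ReLU non-linearity."""
    n_filters, width = filters.shape
    n_frames = 1 + (len(waveform) - width) // hop
    latent = np.empty((n_frames, n_filters))
    for t in range(n_frames):
        window = waveform[t * hop : t * hop + width]
        latent[t] = filters @ window          # one latent vector per frame
    return np.maximum(latent, 0.0)            # ReLU

def back_end(latent, weights, bias):
    """Predict tag scores from the front-end representation:
    temporal mean-pooling plus a sigmoid output layer (multi-label tagging)."""
    pooled = latent.mean(axis=0)              # collapse the time axis
    logits = weights @ pooled + bias
    return 1.0 / (1.0 + np.exp(-logits))      # independent sigmoid per tag

# Hypothetical sizes: 1 s of 16 kHz audio, 32 filters, 4 music tags.
waveform = rng.standard_normal(16000)
filters = rng.standard_normal((32, 512)) * 0.01
weights = rng.standard_normal((4, 32)) * 0.1
bias = np.zeros(4)

tags = back_end(front_end(waveform, filters), weights, bias)
print(tags.shape)
```

In real systems the front-end choice (waveform vs. spectrogram input, filter shapes) and the back-end choice (pooling strategy, recurrent or attention layers) are exactly the design axes the talk reviews; this sketch only fixes one simple point in that design space.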