Do Music Source Separation Models Preserve Spatial Information in Binaural Audio?

Richa Namballa

rn2214@nyu.edu

Music Technology
New York University
New York City, USA

Agnieszka Roginska

ar137@nyu.edu

Music Technology
New York University
New York City, USA

Magdalena Fuentes

mf3734@nyu.edu

Music Technology / IDM
New York University
New York City, USA


Accepted at ISMIR 2025


Summary

Binaural audio remains underexplored within the music information retrieval community. Motivated by the rising popularity of virtual and augmented reality experiences, as well as potential applications to accessibility, we investigate how well current state-of-the-art music source separation (MSS) models perform on binaural audio. Although these models process two-channel inputs, it is unclear how effectively they retain spatial information. In this work, we evaluate how well popular MSS models preserve spatial information on both standard stereo and novel binaural datasets. Our binaural data is synthesized using stems from MUSDB18-HQ and open-source head-related transfer functions by positioning instrument sources randomly along the horizontal plane. We then assess the spatial quality of the separated stems using signal processing and interaural cue-based metrics. Our results show that stereo MSS models fail to preserve the spatial information critical for maintaining the immersive quality of binaural audio, and that the degradation depends on the model architecture as well as the target instrument. Finally, we highlight valuable opportunities for future work at the intersection of MSS and immersive audio.
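As a rough illustration of the pipeline described above, the sketch below renders a mono stem binaurally by convolving it with a left/right head-related impulse response (HRIR) pair, then estimates the two interaural cues commonly used to assess spatialization: the interaural level difference (ILD) and the interaural time difference (ITD). This is a minimal, hypothetical example with synthetic impulse responses, not the paper's actual code; the function names and the broadband cross-correlation ITD estimator are illustrative assumptions.

```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Place a mono stem at an HRIR pair's measured azimuth.

    Hypothetical helper: convolves the stem with each ear's impulse
    response and stacks the results into a (2, N) binaural signal.
    """
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])

def interaural_cues(binaural, sr):
    """Estimate broadband ILD (dB) and ITD (seconds).

    ILD > 0 means the left ear is louder; ITD > 0 means the left
    ear leads (source toward the listener's left). A real evaluation
    would compute these per frequency band and per time frame.
    """
    left, right = binaural
    eps = 1e-12  # avoid log(0) for silent stems
    ild = 10.0 * np.log10((np.sum(left**2) + eps) / (np.sum(right**2) + eps))
    # Peak of the cross-correlation gives the lag of right relative to left.
    xcorr = np.correlate(right, left, mode="full")
    lag = int(np.argmax(xcorr)) - (len(left) - 1)
    itd = lag / sr
    return ild, itd

# Toy demo: a source on the left arrives earlier and louder at the left ear.
sr = 48000
rng = np.random.default_rng(0)
mono = rng.standard_normal(1024)
hrir_l = np.zeros(64); hrir_l[0] = 1.0    # direct, full level
hrir_r = np.zeros(64); hrir_r[20] = 0.5   # delayed 20 samples, attenuated
sig = render_binaural(mono, hrir_l, hrir_r)
ild, itd = interaural_cues(sig, sr)       # ild ≈ +6 dB, itd = +20/sr s
```

Comparing these cues between the separated stems and the binaural references, rather than inspecting them in isolation, is what reveals how much spatial information a separation model has degraded.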



Audio Examples