In my research, I combine acoustics, audio technology, and digital signal processing to reliably measure, simulate, and design spatial soundscapes. A central focus is virtual acoustics, i.e., room acoustic simulation and auralization, including dynamic sound sources such as moving musicians and the pitch-dependent directivity of musical instruments. Building on this, I work on methods for immersive recording and rendering (e.g., motion-tracked binaural recording, Ambisonics, head-tracked headphone playback) as well as on perceptual evaluation methods for spatial audio quality.
In application-oriented projects, I develop measurement methods and models for microphone data, work on the standardization of directivity data, and investigate acoustic feedback systems in cooperation with industry partners. Another focus is generative acoustics, i.e., the automated optimization of room acoustic treatments by combining numerical simulation (including FEM) with machine learning and optimization methods. I am open to collaborations with partners from the audio and media industry, ranging from product development (microphones, DSP, spatial audio) to the planning and optimization of demanding production and event environments.