INTERACTIVE AUDIO SYSTEMS SYMPOSIUM


Recent advances in low-cost motion-tracking and sensing technologies, coupled with increased computing power, have paved the way for new methods of audio production and reproduction. Interactive audio systems can now readily utilise non-tactile data such as listener location, orientation, gestural control and even biometric feedback, such as heart rate, to intelligently adjust sound output. Such systems offer new creative possibilities for a diverse range of applications, including virtual reality, mobile technologies, in-car audio, gaming, social media, and film and TV production.

On 23rd September 2016, the University of York held the first symposium dedicated to the topic of interactive audio systems. The symposium explored the perceptual, signal processing and creative challenges and opportunities arising from audio systems effected through enhanced human-computer interaction.

Posters

A comparison of subjective evaluation of soundscapes with physiological responses – Francis Stevens, Damian Murphy and Stephen Smith
A Filter Based Approach to Simulating Acoustic Radiation Patterns Through a Spherical Array of Loudspeakers – Calum Armstrong
Boundary element modelling of KEMAR for binaural rendering: Mesh production and validation – Kat Young, Gavin Kearney and Tony Tew
Echolocation in virtual reality – Darren Robinson and Gavin Kearney
In 3D Space, Everyone Can Hear You Scream… From All Directions – Sam Hughes
Vertical Amplitude Panning for Various Types of Sound Sources – Maksims Mironovs and Hyunkook Lee
Virtual Headphone Testing for Spatial Audio – Hugh O’Dwyer, Enda Bates and Francis Boland
Taking advantage of geometrical acoustic modeling using metadata – Dale Johnson and Hyunkook Lee
The effects of decreasing the magnitude of elevation-dependent notches in HRTFs on median plane localisation – Jade Clarke and Hyunkook Lee

Demonstrations

Spatial Audio for Domestic Interactive Entertainment – Gavin Kearney
Multi-User Virtual Acoustic Environments – Calum Armstrong and Jude Brereton
Listener-adaptive object-based stereo – Dylan Menzies
Demo of an array for adaptive personal audio and adaptive Transaural reproduction – Marcos Simón
Spatial sound via cranial tissue conduction – Peter Lennox and Ian McKenzie
High spatial-resolution parametric strategies for spatial sound recording and reproduction – Archontis Politis and Ville Pulkki
Sonicules – Jude Brereton
Preliminary Investigations into Virtual Reality Ensemble Singing – Gavin Kearney, Helena Daffern, Calum Armstrong, Lewis Thresh, Haroom Omodudu and Jude Brereton
Object-based reverberation for interactive spatial audio – Philip Coleman, Philip Jackson, Andreas Franck, Chris Pike
Bela – an open-source embedded platform for low-latency interactive audio – Giulio Moro
A simple algorithm for real-time decomposition of first order Ambisonics signals into sound objects controlled by eye gestures – Giso Grimm
Spatial Audio in Video Games for Improved Player Quality of Experience – Joseph Rees-Jones
