Object-Based Audio

Immersive and interactive audio across all devices

Adaptive and personalised audio

All our technology is built from the ground up with object-based audio at its heart. The main premise of object-based audio is to keep the audio components separate right through the signal chain, with accompanying metadata describing each component's location, source type and other characteristics.

Keeping the components separate like this means they can be adapted and remixed as required at various parts of the production/reproduction chain.
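As an illustration only (the names here are hypothetical, not MIXaiR's API), an audio object can be thought of as a buffer of samples travelling with its own descriptive metadata:

```python
from dataclasses import dataclass

@dataclass
class AudioObject:
    """One audio component kept separate through the chain,
    carrying the metadata that describes it."""
    name: str                      # e.g. "commentary", "crowd"
    samples: list                  # the audio itself
    position: tuple                # (x, y, z) location for rendering
    source_type: str = "unknown"   # e.g. "speech", "ambience", "effect"

# Because the objects stay separate, a downstream renderer can remix,
# reposition or drop any one of them without touching the others.
```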

MIXaiR uses AI processing to extract the individual audio components and triangulation to construct a metadata stream describing the location of each source. The metadata is carried in an S-ADM stream that can be output over IP for multiple purposes.
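The triangulation step can be sketched in miniature: given two sensor positions and the bearing each one hears a source from, the source sits at the intersection of the two rays. This is a hypothetical two-sensor, 2-D sketch under ideal conditions, not MIXaiR's implementation; a production system would fuse many sensors and handle noisy estimates.

```python
import math

def triangulate(p1, theta1, p2, theta2):
    """Estimate a source position from two sensor positions and the
    bearing angle (radians, from the x-axis) each sensor reports."""
    d1 = (math.cos(theta1), math.sin(theta1))
    d2 = (math.cos(theta2), math.sin(theta2))
    # Solve p1 + t1*d1 = p2 + t2*d2 for t1 (Cramer's rule on a 2x2 system).
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-9:
        raise ValueError("bearings are parallel; no unique intersection")
    bx, by = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (bx * (-d2[1]) - (-d2[0]) * by) / det
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# A source at (2, 2) heard from sensors at (0, 0) and (4, 0):
pos = triangulate((0.0, 0.0), math.atan2(2, 2), (4.0, 0.0), math.atan2(2, -2))
```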

Automatic separation and processing of audio objects
Object-based audio means sources can be adapted to meet end-user preferences.
Automatic extraction and localisation of objects for immersive rendering across any format.
Allow users to change the mix themselves, e.g. adjusting the level of the commentary.
Easily integrate with other next-generation audio protocols such as MPEG-H, Dolby Atmos, etc.
Serialised ADM (S-ADM) output communicates metadata over a network.
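Letting users change the mix is, at its simplest, a per-object gain applied when the separate objects are rendered. A minimal sketch (function and parameter names are illustrative, not MIXaiR's API):

```python
def render_mix(objects, gains_db):
    """Mix named audio objects into one channel, applying a per-object
    gain in dB so an end user can, e.g., raise or lower the commentary."""
    n = max(len(samples) for samples in objects.values())
    out = [0.0] * n
    for name, samples in objects.items():
        gain = 10 ** (gains_db.get(name, 0.0) / 20)  # dB -> linear
        for i, s in enumerate(samples):
            out[i] += gain * s
    return out

# Example: a listener boosts the commentary by 20 dB (a factor of 10)
# while leaving the crowd at its broadcast level.
mix = render_mix({"commentary": [0.1, 0.1], "crowd": [0.2, 0.2]},
                 {"commentary": 20.0})
```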
XR/360° video
Provides adaptive audio for immersive/volumetric video.

"This is like being there live, incredible."

Fan feedback

Object-Based Audio is the future of live broadcast for more immersive and interactive experiences for viewers across all devices. Get in touch to see how MIXaiR can help you implement it in your context.

Virtual crowd solution for games behind closed doors
AI audio mixes for immersive, immediate broadcast through multiple channels
Bespoke cross-platform fan sound experience
AI driven on-pitch sound detection and presentation