All our technology is built, from the ground up, with object-based audio at its heart. The main premise of object-based audio is to keep the audio components separate right through the signal chain, with accompanying metadata describing each component's location, source type and other characteristics.
Keeping the components separate like this means they can be adapted and remixed as required at various points in the production and reproduction chain.
MIXaiR uses AI processing to extract the individual audio components, and triangulation to construct a metadata stream describing where each source is. The metadata is built into a Serial ADM (S-ADM) stream which can be output over an IP network.
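To make the idea concrete, here is a minimal sketch of what "separate components plus per-object metadata" can look like. The real S-ADM format is an XML-based metadata model standardised by the ITU (ITU-R BS.2076/BS.2125); this sketch uses JSON purely for brevity, and every field name (object_id, azimuth_deg, etc.) is illustrative rather than taken from the standard or from MIXaiR itself.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AudioObject:
    """One audio object with positional metadata (hypothetical fields)."""
    object_id: str
    source_type: str     # e.g. "commentary", "crowd", "ball-kick"
    azimuth_deg: float   # horizontal angle relative to the listener
    elevation_deg: float # vertical angle relative to the listener
    distance_m: float    # distance from the reference point
    gain_db: float = 0.0 # per-object gain, adjustable downstream

def metadata_frame(objects, timestamp_s):
    """Bundle per-object metadata into one frame for streaming over IP."""
    return json.dumps({
        "timestamp_s": timestamp_s,
        "objects": [asdict(o) for o in objects],
    })

# Example: commentary placed front-centre, crowd ambience off to the left.
frame = metadata_frame(
    [AudioObject("obj1", "commentary", 0.0, 0.0, 1.0),
     AudioObject("obj2", "crowd", -60.0, 10.0, 25.0)],
    timestamp_s=12.48,
)
```

Because the audio essence and the metadata travel separately, a downstream renderer can remix the same objects for headphones, a soundbar or a full immersive speaker layout, or let the viewer adjust the commentary gain without touching the crowd sound.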
Object-based audio is the future of live broadcast, enabling more immersive and interactive experiences for viewers across all devices. Get in touch to see how MIXaiR can help you implement it in your context.