THX®, Qualcomm and NYU Demonstrate Next-Gen Immersive Live Stream at AES NY 2018
THX will be out in full force at AES NY 2018 this year to educate attendees on the THX Spatial Audio platform, which uses MPEG-H for the delivery of next-gen immersive entertainment content. In addition to hosting a panel and presenting new research on audio perception, THX and Qualcomm Technologies, Inc. are working with New York University (NYU) to conduct a live stream of international platinum Sony Music recording artist Ozark Henry, using higher-order Ambisonics and MPEG-H at NYU’s Frederick Loewe Theatre. Musicians from Trondheim, Norway, and Buenos Aires, Argentina, will accompany Henry during the performance.
Ozark Henry collaborates with the National Orchestra of Belgium – photo credit: Veerle Vercauteren
The concert will feature THX Spatial Audio, a suite of immersive sound technologies, alongside motion-capture-driven avatars and interactions with live dancers.
For this broadcast, the NYU Holodeck team will leverage the THX Spatial Audio platform and MPEG-H solution to deliver a live MPEG-H broadcast combining immersive sound technologies, such as Ambisonics and multi-channel immersive sound, with visual systems. The demonstration will use an off-the-shelf Ateme real-time encoder to stream MPEG-H audio and render THX Spatial Audio over loudspeakers in a 5.1.4 configuration.
We invite all AES NY attendees to experience THX Spatial Audio and MPEG-H during this live demonstration at the Frederick Loewe Theatre on NYU’s campus on October 18th. This is a free event open to the public.
MPEG-H has gained momentum globally due to its advantages in supporting the transport of multiple audio formats, including objects, Ambisonics, and channels, over existing broadcast networks, satellite, and next-generation 5G networks. The 3rd Generation Partnership Project (3GPP), the standards body for the global mobile industry, has selected MPEG-H as the audio codec for VR streaming over mobile networks.
MPEG-H achieves compression ratios of up to 600:1 to deliver a fully immersive 360° sound field, reducing up to 1 GB of audio data to a 2.7 MB file. MPEG-H can also be integrated seamlessly into today’s content development workflows. Using Ambisonics with MPEG-H, content creators can deliver immersive entertainment experiences across movies, live sports, gaming, 360° video, and VR/AR with a single audio stream. With Higher-Order Ambisonics broadcasts, users will be able to zoom into individual microphones placed throughout the stadium (e.g. crowd, athletes, commentators), change the commentary language, or adjust the volume of each sound source on the fly.
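The "one stream, many renderings" property described above comes from how Ambisonics represents a sound field: sources are encoded into a small set of spherical-harmonic channels, and the whole scene can then be rotated or rendered to any speaker layout downstream. As a rough illustration only, here is a minimal first-order (B-format) encoder sketch in Python. This is textbook Ambisonics math, not THX's or MPEG-H's actual implementation, and the function names (`encode_foa`, `rotate_yaw`) are hypothetical:

```python
import numpy as np

def encode_foa(mono, azimuth, elevation):
    """Encode a mono signal into first-order Ambisonics B-format (W, X, Y, Z).

    Classic B-format panning gains; azimuth and elevation are in radians,
    with azimuth measured counter-clockwise from straight ahead.
    """
    w = mono / np.sqrt(2.0)                          # omnidirectional component
    x = mono * np.cos(azimuth) * np.cos(elevation)   # front-back figure-eight
    y = mono * np.sin(azimuth) * np.cos(elevation)   # left-right figure-eight
    z = mono * np.sin(elevation)                     # up-down figure-eight
    return np.stack([w, x, y, z])

def rotate_yaw(bformat, angle):
    """Rotate the entire encoded sound field around the vertical axis.

    This is why Ambisonics suits interactive playback: re-orienting the
    scene (e.g. for head tracking) is one matrix multiply on the channel
    bed, with no need to re-encode individual sources.
    """
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[1.0, 0.0, 0.0, 0.0],
                    [0.0,   c,  -s, 0.0],
                    [0.0,   s,   c, 0.0],
                    [0.0, 0.0, 0.0, 1.0]])
    return rot @ bformat
```

Higher-order Ambisonics extends the same idea with more spherical-harmonic channels for sharper spatial resolution; the encoded bed stays layout-agnostic until a renderer maps it to the listener's actual speakers or headphones.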
MPEG-H is not just a conceptual technology. In the United States, the ATSC has selected MPEG-H as an option, while both China and Korea have adopted MPEG-H as the standard for next-generation Ultra-High Definition broadcasts. The first broadcast of MPEG-H using Higher-Order Ambisonics (HOA) was conducted by the European Broadcasting Union (EBU) for the 2018 European Athletics Championships in Berlin. The trial brought together more than 19 industry partners across the production workflow to shoot, process, record, and distribute live Ultra High Definition (UHD) content with High Frame Rate (HFR), High Dynamic Range (HDR), and Next Generation Audio (NGA) using MPEG-H. MPEG-H was also successfully tested this year during the 2018 Winter Olympics and the 2018 FIFA World Cup.
At THX, we’re very excited about the possibilities of delivering spatial audio experiences using higher-order Ambisonics and MPEG-H. As OEMs and device manufacturers begin integrating MPEG-H playback capabilities into their devices, we believe consumers will be the real winners.
To learn more about THX Spatial Audio or inquire about partnership opportunities, visit our THX Spatial Audio page here.