THX Spatial Audio Platform

The THX Spatial Audio Platform provides consumers with immersive sound while watching movies, listening to music, or playing games on mobile devices, personal computers, and other consumer electronic devices. With the proliferation of next-generation audio standards like MPEG-H, and video technologies including Ultra-High Definition (UHD) for broadcasting and video streaming, there is a growing need to deliver high-fidelity, immersive entertainment experiences to consumers.

The THX Spatial Audio Platform is an end-to-end positional audio solution built with the flexibility to support legacy entertainment content, open standards like Higher Order Ambisonics (HOA), and emerging formats for XR and 360 video.

Platform Capabilities

Seamless Integration

Seamless integration into existing content creation tools, distribution workflows, applications, and playback devices.

Optimization by Device

Audio playback optimized by device (mobile, PC, CE), content type (music, movies, games), and listening mode (headphones or speakers).


MPEG-H Support

Planned support for decoding and rendering of content transmitted using the MPEG-H audio standard.


Personalized Audio

Personalized audio experience using the Head-Related Transfer Function (HRTF), or hearing physics, of each individual user.

Open and Flexible

Open and flexible rendering engine with support for legacy channel content, ambisonics, and objects.

End-to-End Solution

Features can be implemented independently or as a complete end-to-end solution, ensuring maximum flexibility.

Driving High Fidelity Audio from Content Creation to Playback

THX is working with partners, content creators, and content distributors to deliver a comprehensive set of tools and features to provide a THX Spatial Audio experience for consumers across a broad range of devices and use cases.


Tools and Features

Content Creation

Content creation plugins support ambisonics, object-based, and legacy formats.

  • THX Spatial Audio content creation plugins seamlessly integrate into industry standard audio design tools and eliminate the need to learn new tools or master new proprietary audio formats
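As a hedged illustration of the kind of operation such a plugin performs, the sketch below pans a mono object into first-order ambisonics using the AmbiX convention (ACN channel order, SN3D normalization). The function name and convention choice are illustrative assumptions, not the plugin's actual API.

```python
import numpy as np

def encode_first_order_ambisonics(mono, azimuth_deg, elevation_deg):
    """Pan a mono object into first-order ambisonics (AmbiX: ACN order, SN3D)."""
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    w = mono                              # omnidirectional component (ACN 0)
    y = mono * np.sin(az) * np.cos(el)    # left/right component (ACN 1)
    z = mono * np.sin(el)                 # height component (ACN 2)
    x = mono * np.cos(az) * np.cos(el)    # front/back component (ACN 3)
    return np.stack([w, y, z, x])         # shape: (4, num_samples)
```

Higher orders add more spherical-harmonic channels, (order + 1)² in total, which is what makes ambisonics a scalable scene representation.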

MPEG-H Encoding and Decoding

Planned support for the encoding, decoding, and transport of audio for next-generation television broadcasts and streaming video.

  • MPEG-H scene-based audio format uses ambisonics and objects to encode and decode complex sound scenes in a single scalable format
  • The THX Spatial Audio Platform supports scene-based and object-based audio using MPEG-H with Qualcomm, and rendering of legacy content to provide a cost effective and flexible solution for the industry
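The data model below is a rough conceptual sketch of what a scene-based plus object-based frame carries: a block of Higher Order Ambisonics coefficients describing the overall sound field, alongside discrete objects with positional metadata. It is not MPEG-H bitstream syntax; all class and field names are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class ObjectStream:
    """One audio object: a mono signal plus positional metadata."""
    samples: np.ndarray       # shape: (num_samples,)
    azimuth_deg: float
    elevation_deg: float
    distance_m: float

@dataclass
class SceneBasedFrame:
    """Conceptual container for scene-based plus object-based audio:
    an HOA block carrying the sound scene, with objects rendered on top."""
    hoa_order: int
    hoa_coefficients: np.ndarray   # shape: ((hoa_order + 1) ** 2, num_samples)
    objects: List[ObjectStream]
    sample_rate_hz: int
```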

Rendering Engine

Spatializes legacy movie, music and gaming channel content, ambisonics, and object-based content through headphones and speakers across all consumer devices.

  • Headphone rendering engine optimizes playback of ambisonics, object and channel-based content
  • Cross-talk cancellation technology enables realistic, immersive playback of binaurally-processed audio over speakers
  • Advanced processing algorithms ensure minimal impact on battery life by utilizing a device’s DSP
  • Specially designed content modes (Music, Movies, Game, Podcast) reproduce the size, reflections and depth of virtual rooms to enable custom spatialization of audio content
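A minimal sketch of one common approach to headphone spatialization, rendering first-order ambisonics binaurally through a ring of virtual loudspeakers, is shown below. The HRIR set, speaker layout, and function names are assumptions for illustration, not the platform's internal design; all HRIRs are assumed to share one length and sample rate.

```python
import numpy as np
from scipy.signal import fftconvolve

def binaural_decode_foa(foa, speaker_azimuths_deg, hrirs):
    """Render first-order ambisonics (AmbiX: W, Y, Z, X) to binaural stereo
    through a horizontal ring of virtual loudspeakers.

    foa    : array of shape (4, num_samples), ACN order, SN3D normalization
    hrirs  : dict mapping speaker azimuth (deg) -> (left_hrir, right_hrir)
    """
    w, y, z, x = foa          # the height channel (z) is unused by a horizontal layout
    taps = len(next(iter(hrirs.values()))[0])
    out = np.zeros((2, foa.shape[1] + taps - 1))
    for az_deg in speaker_azimuths_deg:
        az = np.radians(az_deg)
        # Each virtual speaker feed is a virtual cardioid microphone pointed at
        # that speaker, scaled so the overall level stays roughly constant.
        feed = (w + x * np.cos(az) + y * np.sin(az)) / len(speaker_azimuths_deg)
        left_hrir, right_hrir = hrirs[az_deg]
        out[0] += fftconvolve(feed, left_hrir)
        out[1] += fftconvolve(feed, right_hrir)
    return out
```

Object- and channel-based content can be handled the same way by convolving each source or channel feed directly with the HRIR pair for its direction.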

Tuning and Device Optimization

Measures and calibrates audio playback to deliver the highest fidelity audio experience over headphones and speakers.

  • Spatialization filters provide a neutral frequency response to compensate for non-linearity in individual models of headphones
  • Stereo filters adjust individual headphone models to match the THX Certified frequency response curve
  • THX Loudness Plus technology maintains a consistent audible tonal balance as the listener adjusts the volume
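A minimal sketch of headphone compensation, under stated assumptions: given a measured magnitude response on a frequency grid, it designs a linear-phase FIR filter that approximately inverts that response, with the correction clamped to avoid extreme boosts or cuts. The measurement format, tap count, and clamp value are illustrative assumptions, not THX's tuning process.

```python
import numpy as np
from scipy.signal import firwin2, fftconvolve

def headphone_compensation_filter(freqs_hz, measured_db, fs=48_000,
                                  numtaps=1025, max_correction_db=12.0):
    """Design a linear-phase FIR that flattens a measured headphone response.

    freqs_hz    : ascending measurement frequencies, strictly between 0 and fs/2
    measured_db : measured magnitude response at those frequencies, in dB
    """
    # Invert the measured response, clamped to +/- max_correction_db so the
    # filter never applies extreme boosts or cuts (avoids ringing and clipping).
    correction_db = np.clip(-np.asarray(measured_db), -max_correction_db, max_correction_db)
    gain = 10.0 ** (correction_db / 20.0)
    # firwin2 needs the frequency grid to start at 0 Hz and end at Nyquist.
    grid = np.concatenate(([0.0], freqs_hz, [fs / 2]))
    gains = np.concatenate(([gain[0]], gain, [gain[-1]]))
    return firwin2(numtaps, grid, gains, fs=fs)

# Usage: eq = headphone_compensation_filter(freqs, response_db)
#        corrected = fftconvolve(audio, eq)[: len(audio)]
```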


HRTF Personalization

Delivers personalized audio profiles using Head-Related Transfer Functions (HRTFs) that are optimized for a listener’s unique hearing physics.

  • 2D Ear-image capture process rapidly analyzes tens of thousands of data points derived from an image of the user’s ear
  • Cloud-based deep learning system generates and applies a personalized HRTF based on the user’s unique hearing anatomy
  • Download and delivery system for applying the personalized HRTF to a wide range of consumer devices, systems, and applications
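Once a personalized HRTF set reaches a device, for example as a grid of impulse responses measured over azimuth and elevation, applying it can be as simple as selecting the nearest measured direction and convolving, as in this hedged sketch; the data layout and function names are assumptions, not the platform's delivery format.

```python
import numpy as np
from scipy.signal import fftconvolve

def nearest_hrir(directions_deg, hrirs, target_az_deg, target_el_deg):
    """Pick the measured HRIR pair closest to a requested direction.

    directions_deg : array of shape (n, 2) holding (azimuth, elevation) per entry
    hrirs          : array of shape (n, 2, taps) with left/right impulse responses
    """
    # Compare directions as unit vectors so azimuth wrap-around at +/-180
    # degrees is handled correctly.
    def to_unit(az_el):
        az = np.radians(az_el[..., 0])
        el = np.radians(az_el[..., 1])
        return np.stack([np.cos(el) * np.cos(az),
                         np.cos(el) * np.sin(az),
                         np.sin(el)], axis=-1)

    grid = to_unit(np.asarray(directions_deg, dtype=float))
    target = to_unit(np.array([target_az_deg, target_el_deg], dtype=float))
    idx = int(np.argmax(grid @ target))   # largest dot product = smallest angle
    return hrirs[idx]                     # (left_hrir, right_hrir)

def spatialize(mono, hrir_pair):
    """Convolve a mono source with a personalized HRIR pair."""
    return np.stack([fftconvolve(mono, hrir_pair[0]),
                     fftconvolve(mono, hrir_pair[1])])
```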
