Sprint 4 - DSP & 3D Audio
I needed to deviate from the plan a little in Sprint 4, spending time researching native audio and DSP in Unity and reviewing solutions for 3D sound. Ultimately, given the wide range of skills required to create a complete immersive experience of any size, I decided this would be a smart focus for future projects.
The Unity audio engine is based around Audio Sources that can be either audio files or tracker modules, routed through a hierarchy of mixers to balance levels and optionally apply FX. Sources can be played back in either 2D or 3D. This video from Unite 2015 gives a decent overview of the concepts.
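As a rough sketch of how that looks in a script - names like `sfxGroup` are my own placeholders, and the mixer group itself would be assigned in the Inspector from an AudioMixer asset:

```csharp
using UnityEngine;
using UnityEngine.Audio;

// Sketch: configure an AudioSource at runtime and route it through a mixer group.
// Assumes an AudioMixer asset with a group ("sfxGroup" here is hypothetical)
// has been created in the project and wired up in the Inspector.
public class SourceSetup : MonoBehaviour
{
    public AudioClip clip;
    public AudioMixerGroup sfxGroup; // levels/FX for this source are applied here

    void Start()
    {
        var source = gameObject.AddComponent<AudioSource>();
        source.clip = clip;
        source.outputAudioMixerGroup = sfxGroup;
        source.spatialBlend = 1f; // 0 = pure 2D, 1 = fully 3D positioned
        source.loop = true;
        source.Play();
    }
}
```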
That’s cool, but I was a little surprised (perhaps naively) that there is no built-in DSP for procedural audio - a term I was introduced to by Andy Farnell back in 2011. Certainly for the prototype ideas I’m considering, I was hoping to work directly with DSP on device. Unity’s approach seems to be to expose a plugin SDK and leave synthesiser creation as an exercise for the user if required. The audio dev community has filled in some of the gaps - the JUCE framework from ROLI now supports Unity as a build target (desktop only), and there are projects to publish Faust and PD patches as plugins via UnityPD or Heavy.
For simpler scenarios, you can bind to OnAudioFilterRead directly from a C# script, as in this example oscillator, but I suspect there’s a performance limit to how far you can go with this approach - here’s a quote from the Native Audio Plugin SDK:
“Unlike scripts and because of the high demands on performance this has to be compiled for any platform that you want to support, possibly with platform-specific optimizations”
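For reference, a minimal sketch of that kind of script-driven oscillator (my own version, not the linked example - field names and defaults are assumptions):

```csharp
using UnityEngine;

// Sketch: a sine oscillator generated in OnAudioFilterRead.
// Attach to a GameObject with an AudioSource. The callback runs on the
// audio thread, so avoid allocations and Unity API calls inside it.
[RequireComponent(typeof(AudioSource))]
public class SineOscillator : MonoBehaviour
{
    public float frequency = 440f; // Hz
    public float gain = 0.2f;      // keep well below 1.0 to avoid clipping

    private double phase;
    private double sampleRate;

    void Start()
    {
        sampleRate = AudioSettings.outputSampleRate;
    }

    void OnAudioFilterRead(float[] data, int channels)
    {
        double increment = frequency * 2.0 * Mathf.PI / sampleRate;
        for (int i = 0; i < data.Length; i += channels)
        {
            phase += increment;
            if (phase > 2.0 * Mathf.PI) phase -= 2.0 * Mathf.PI;
            float sample = gain * (float)System.Math.Sin(phase);
            for (int c = 0; c < channels; c++)
                data[i + c] = sample; // same signal on every channel
        }
    }
}
```

Even this trivial generator has to stay allocation-free and cheap per sample, which hints at why anything heavier is pushed down to native plugins.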
A higher-level commercial plugin solution does exist in the form of Audio Helm, which looks really promising and supports desktop and mobile targets. It has a built-in sequencer, sampler and synth engine with GUI and scripting support. There’s also a standalone version of the synth engine you can use for sound design, then publish the patch to Unity for native app use. Your patches can then have their parameters scripted, which is exactly what I was hoping for. It costs ~70 EUR but seems like a good workflow with minimal fuss.
Worth noting here is that Unreal Engine 4 from Epic seems to be well ahead in this regard, as this video from 2017 details. There still doesn’t seem to be much documentation around, but the API is here. It’s not clear to me if this stuff is out of beta, but this functionality alone is enough for me to take another look at UE4!
Audio immersion is essential for realistic XR experiences, and Unity ships with spatialization baked in - it includes a basic version of the Oculus Native Spatializer (ONSP), which can be upgraded to the full version or swapped out for solutions from Microsoft or Google’s Resonance Audio project. Check here for the high-level 3D sound concepts from Oculus and this guide for the details. I found this Spatial Audio video from Oculus Connect to be a really good overview of the concepts and tooling required for 3D sound.
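Handing a source over to whichever spatializer plugin is selected is only a few properties on the AudioSource - a sketch, assuming a spatializer (ONSP, Resonance Audio, etc.) has been chosen under Project Settings > Audio, and with distance values that are purely illustrative:

```csharp
using UnityEngine;

// Sketch: enable plugin spatialization on a 3D AudioSource.
// Assumes a Spatializer Plugin is selected in Project Settings > Audio.
[RequireComponent(typeof(AudioSource))]
public class SpatializedSource : MonoBehaviour
{
    void Start()
    {
        var source = GetComponent<AudioSource>();
        source.spatialBlend = 1f;  // fully 3D
        source.spatialize = true;  // hand positioning to the spatializer plugin
        source.rolloffMode = AudioRolloffMode.Logarithmic;
        source.minDistance = 1f;   // full volume within 1 m (illustrative)
        source.maxDistance = 20f;  // attenuate out to 20 m (illustrative)
    }
}
```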
For a real-world, end-to-end immersive audio production guide, check out this recent Spatialized Music for AR/VR presentation. It follows the entire process of producing the audio experience for the interactive Oculus First Steps tutorial that ships with the Quest - from sound designer/composer, through studio recording of the orchestra and post production, to final application integration. Along the way it covers the strengths and weaknesses of ambisonic mixing, and when to use head tracking for positioning audio versus just mixing in quad.
I also bookmarked the Facebook 360 Spatial Workstation which looks interesting: “..a software suite for designing spatial audio for 360 video and cinematic VR. It includes plugins for popular audio workstations, a time synchronised 360 video player and utilities to help design and publish spatial audio in a variety of formats.” This Spatial Audio for Cinematic VR and 360 Videos article covers creating immersive audio content targeting Facebook and YouTube as well as XR hardware.
Unity supports Ambisonics integration out of the box, so you can import multi-channel pre-spatialized sound files, but you’ll need a decoder from either the Oculus or Google audio SDKs I mentioned above.
Published on 17 Nov 2019
This project is gratefully supported by the Arts Council England as part of their DYCP fund