So what is The Ambisonic Toolkit (ATK)?
The Ambisonic Toolkit (ATK) brings together a number of classic and novel tools for the artist working with Ambisonic surround sound. The toolset is intended to be both ergonomic and comprehensive, framed so that the user is enabled to ‘think Ambisonic’.
The Ambisonic Toolkit addresses the holistic problem of creatively controlling a complete soundfield, facilitating spatial composition beyond simple placement of sounds in a sound-scene. The artist is empowered to address the impression and imaging of a soundfield – taking advantage of the native soundfield-kernel paradigm the Ambisonic technique presents.
The ATK is freely available for Reaper and SuperCollider. In this article I'll focus only on the Reaper version of the ATK.
The ATK comes as a set of encoder, decoder, and filter plugins written in Reaper's own JSFX scripting language, and working with the ATK in Reaper is pretty straightforward.
The basic workflow is to encode sound sources/recordings into a soundfield, apply filters and transformations to it, and finally decode it to the desired target format(s) on your monitor master channel.
Image Credits: ATK Project
An overview of the available encoders/decoders can be found here.
Reaper's very flexible multichannel routing features make setting everything up really easy. For a detailed step-by-step guide and usage example, please refer to the official ATK with Reaper tutorials. I highly recommend watching the video linked there, as it gives a comprehensive rundown of the available ATK encoders, decoders, and transformers.
Also, this informative video about mixing with the ATK has some really useful information regarding setup and routing. Since I don't have a 5.1 setup in my home studio, I've mainly worked with the Binaural Decoder so far, which comes with a set of HRTF presets that work pretty well.
The ATK also doesn't seem to have any negative impact on performance. I'm currently working on a sound redesign of the 360° Clash of Clans video. The session currently uses ~20 encoders, ~20 transformers, two decoders, and a couple of instances of Proximity, and the CPU is barely scratching the 7% mark.
Rendering to different target formats is pretty straightforward too: you essentially render to B-format or through any of the available decoders. Reaper now also supports multichannel audio rendering for video. I've yet to try rendering a 360° video with multichannel audio from within Reaper, though. I'm not 100% sure it works, but I'm fairly confident it does, since Reaper relies on ffmpeg, which I would otherwise use directly anyway.
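If the in-Reaper route fails, the ffmpeg fallback is a single muxing step. Here's a minimal sketch in Python that just builds the ffmpeg argument list; the file names are placeholders, the output container is my assumption, and the command only copies the four B-format channels alongside the video without injecting any spatial-audio metadata:

```python
# Hypothetical sketch: build an ffmpeg command that muxes a four-channel
# B-format WAV into an existing 360° video. File names are placeholders.

def mux_command(video_in, bformat_wav, out_file):
    """Return the ffmpeg argument list (not executed here)."""
    return [
        "ffmpeg",
        "-i", video_in,       # source 360° video
        "-i", bformat_wav,    # four-channel B-format audio (W, X, Y, Z)
        "-map", "0:v",        # take the video stream from the first input
        "-map", "1:a",        # take the audio stream from the second input
        "-c:v", "copy",       # don't re-encode the video
        "-c:a", "pcm_s16le",  # keep all four channels uncompressed
        out_file,
    ]

print(" ".join(mux_command("clash_360.mp4", "mix_bformat.wav", "clash_360_spatial.mov")))
```

I'm using a `.mov` output here because uncompressed PCM in an MP4 container is poorly supported; if the target platform expects AAC, the audio codec would need to change accordingly.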
The only thing I'm really missing in my workflow right now is a proper way to sync to video. At the moment I'm using my own little tool to synchronize Kolor Eyes to Reaper via OpenSoundControl (OSC). But Kolor Eyes doesn't really allow smooth playhead positioning, so spotting sounds to the right position on the timeline is clumsy and takes more time than I'd like.
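For the curious, the OSC packets such a sync tool exchanges are simple to build by hand. A minimal sketch in Python using only the standard library; the `/playhead` address is hypothetical, not something Kolor Eyes or Reaper define:

```python
import struct

def _osc_pad(s: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, per the OSC 1.0 spec."""
    return s + b"\x00" * (4 - len(s) % 4)

def osc_message(address: str, value: float) -> bytes:
    """Encode an OSC message carrying one float32 argument,
    e.g. a playhead position in seconds."""
    return (_osc_pad(address.encode("ascii"))  # address pattern
            + _osc_pad(b",f")                  # type tag string: one float
            + struct.pack(">f", value))        # big-endian float32

# A hypothetical sync packet: tell the player to jump to 12.5 s.
packet = osc_message("/playhead", 12.5)
```

Sending the packet is then just a UDP `socket.sendto` to whatever port the receiving tool listens on.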
I'm curious whether the VLC video player will get 360° support in the future, whether Reaper will then support it too, and ideally whether it will provide ways to access the viewport orientation data so that tools like the ATK could use it. That would be a dream come true :].
In my next VR audio blog post I will take a closer look at the Oculus Audio SDK. Oculus recently acquired TwoBigEars and discontinued their plugins for spatialized audio, so I expect some cool updates to the Oculus SDK in the near future.
I'm always looking to collaborate on exciting projects!