Audiovisualisr

 

Personal Project
2D: TouchDesigner / Resolume
Audio: Ableton, various hardware synths
Controllers: Novation ZeRO SL MkII MIDI controller, iPad running TouchOSC, Novation Circuit Tracks, Arturia KeyStep Pro

Audiovisualisr is a realtime generative audiovisual instrument / setup / process / whatever. It's an ongoing personal project which I dip into now and again, purely for fun. 

I've always loved the idea of jamming stuff out. I like pootling around on something for hours, then editing it down, finding the bits that work really well and snipping them out. I used to do this with synths, recording huge stretches of me just messing about, just to find those interesting mistakes and oddities that you can never plan for. So I set about doing this with an audiovisual instrument, the aim being to generate audio and visuals at the same time on the fly, to create a sort of realtime synergy and feedback that you can't get by doing them separately (usually music is added to visuals in post, or VJs sequence pre-packaged visuals to suit the music).

This presents a lot of problems.

How do you generate visuals using audio data (and vice versa)? A: TouchDesigner, Ableton, a shitton of M4L devices, and a lot of patience.

Can two programs run at the same time at a decent frame rate? A: Yes, barely, but at this level of complexity it's starting to creak.

How does all the data move back and forth? A: MIDI and OSC.

How do I realistically control both at the same time with only two hands, given there are dozens, maybe hundreds of parameters to change, menus to scroll, etc.? A: Firstly, I use several controllers. The main one, the Zero, is a bank of sliders and knobs which I've mapped to loads of parameters in TouchDesigner; that's the visual side. For the audio, I built a custom UI in TouchOSC on the iPad, which mainly controls things that can be randomly changed on the fly and still make sense. There are a few euclidean sequencers running, lots of variable instruments, a few universal LFOs and so on, so the whole thing can run almost entirely generatively and produce something totally new each time. Or you can turn off the autopilot and have more musical input yourself (using the KeyStep or the Circuit Tracks), and the visuals will still generate from that.
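
To give a flavour of the plumbing, here's a minimal sketch of the OSC side, the sort of thing the M4L devices do continuously. This is illustrative only: the port and address names are made up, not my actual setup, and it assumes TouchDesigner has an OSC In CHOP listening on that port.

    # Illustrative sketch: pushing audio-derived values into TouchDesigner
    # over OSC. Port and addresses are placeholders, not my real patch.
    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 7000)  # wherever the OSC In CHOP listens

    # e.g. an envelope follower value extracted in Ableton via an M4L device,
    # normalised to 0-1, driving some visual parameter:
    client.send_message("/audio/kick/env", 0.82)
    client.send_message("/audio/bass/pitch", 0.37)

TouchDesigner turns each incoming address into a CHOP channel, which can then be wired into whatever needs modulating.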

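The controller side works on the same principle. TouchDesigner's MIDI In CHOP handles it natively, but conceptually it's just: a CC value comes in from the Zero, gets normalised, and drives a parameter. A toy version, with a hypothetical port name and CC map:

    # Toy sketch of the Zero's knobs and sliders driving visual parameters.
    # In practice TouchDesigner's MIDI In CHOP does this; the port name and
    # the CC-to-parameter map here are hypothetical.
    import mido

    CC_MAP = {21: "feedback", 22: "noise_scale", 23: "hue_shift"}

    with mido.open_input("ZeRO MkII") as port:
        for msg in port:
            if msg.type == "control_change" and msg.control in CC_MAP:
                value = msg.value / 127.0  # 7-bit CC -> 0-1 range
                print(CC_MAP[msg.control], round(value, 3))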

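For the curious, the euclidean sequencers are a big part of why the autopilot keeps producing musically sensible rhythms: they spread a number of hits as evenly as possible across a number of steps. A toy generator (my actual sequencers are M4L devices, not this script):

    # Toy euclidean rhythm generator; produces rotations of the classic
    # Bjorklund patterns. Illustrative only.
    def euclidean(pulses, steps):
        """Spread `pulses` hits as evenly as possible over `steps` slots."""
        return [int((i * pulses) % steps < pulses) for i in range(steps)]

    print(euclidean(3, 8))  # [1, 0, 0, 1, 0, 0, 1, 0] -- the tresillo
    print(euclidean(5, 8))  # [1, 0, 1, 0, 1, 1, 0, 1]
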
I won't go into it all fully; it's incredibly, head-hurtingly complex, far more so than any of the more traditional CG I usually do. Plus it's constantly evolving: this general concept has had so many false starts, wrong turns and restarts that it's hard to keep up.

What I will say is that whilst the end result on this page might not look particularly different or innovative, when you're in the moment and making this live, it really feels like you're creating something audiovisual, as opposed to making visuals for audio or vice versa. The two concepts join as one in your head and you get lost in it in a way I don't get from any other medium. In this way, more than any of my other work, it feels genuinely new. I just need to work on how to get that across better.
 
