MAINFRAME will be an entire ecosystem of real-time audio-visual products, allowing music producers to explore the relationship between sound and light in a way that has not been possible before. The first product within this lineup is MAINFRAME-B.

HISTORY

I have been developing MAINFRAME since October 2020. The idea came to me when I was working on my DIY modular synthesizer, The Beat Doctor. The latest iteration of the synth had a little visualizer using 8-segment displays which changed based on the pitch of the oscillator. This seemingly small part of the product turned out to be the most popular with users. I wasn’t sure why, but I was curious.

Thus MAINFRAME was born.

Initially the plan was for MAINFRAME to be a more effective visualizer for my city project, as early experiments with visualizing music on the city were horrid. However, once development on MAINFRAME began, its intended application expanded far beyond anything I could have ever imagined.

MAIN GOALS OF MAINFRAME

Easy raves

Can you imagine a rave or concert without any lighting? Why does a live set at home have to be any different? I believe large-scale lighting setups (using the DMX512 protocol) are not suitable for the general public to use at home or to bring to small shows. The gear is usually large and expensive, and it requires specialized knowledge to set up and operate. How many people are really going to set up an entire truss system in their homes? Some might, most won’t.

I want to give music producers the opportunity to create a rave-like experience within their own homes and small live shows, through synchronization between audio and visuals.

Furthermore, in live productions and venues I believe real-time synchronization is sorely lacking. How many raves or clubs have you been to where the lighting was displaying random patterns, not at all synchronized with the music? Unless they have someone “playing” the lights live or have a team pre-render a show, at best they synchronize a strobe to the beat. I believe this is a wasted opportunity to create an amazing, immersive experience for the crowd. This observation is what will eventually lead me to develop DMX-based versions of MAINFRAME in the future.

Visual instruments

Initially, MAINFRAME started with simple trigger-based animations: you send a MIDI note and it plays some basic, predictable animation. It was cool for 30 seconds, then got boring. At best, MAINFRAME behaved as an interface or a utility.
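
For context, here is a minimal sketch of what that early trigger-based approach looked like in spirit. It is an illustration only, not MAINFRAME’s actual code: it assumes a mido MIDI input and represents the LEDs as a plain list of brightness values.

```python
# Illustrative sketch of a simple trigger-based visualizer (not MAINFRAME's
# actual code): every MIDI note-on fires the same predictable flash-and-decay.
import time
import mido  # assumes a MIDI input port and backend are available

NUM_LEDS = 16
brightness = [0.0] * NUM_LEDS  # stand-in for an LED strip, 0.0-1.0 per pixel

def trigger(note, velocity):
    """Map the note to a pixel and set it to the note's velocity."""
    brightness[note % NUM_LEDS] = velocity / 127.0

with mido.open_input() as port:  # default MIDI input
    while True:
        for msg in port.iter_pending():
            if msg.type == "note_on" and msg.velocity > 0:
                trigger(msg.note, msg.velocity)
        for i in range(NUM_LEDS):
            brightness[i] *= 0.90      # fixed, predictable decay every frame
        time.sleep(1 / 60)             # ~60 fps; push `brightness` to hardware here
```

Every note produces exactly the same motion, which is precisely why this approach stops being interesting so quickly.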

MAINFRAME has since developed to become an instrument in its own right. Its output is not entirely predictable and encourages exploration, just like a modular synthesizer. Turn a knob here, patch this over here, turn another knob, and you create something brand new. MAINFRAME behaves in a similar way, especially when outputting to an LED grid. 

Through customization, a key feature for all MAINFRAME products, artists can personalize their MAINFRAME to express themselves just like any other instrument.

There are two guiding principles that I have adhered to throughout the development of MAINFRAME:

Real Time Performance: After some experimentation in audio-visual synchronization, I found that visuals which synchronize perfectly with audio are a pleasurable and underutilized experience in the realm of music production. Functionally, an output must be observed within 30ms of the input sound (a rough way to check this budget is sketched after these principles).

Accuracy: What you hear should match what you see. Immersion and a pleasurable experience are based on this. If the visuals ever seem random and not based on the input sound’s characteristics, immersion is broken.
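
To make the 30ms budget concrete, below is a hedged sketch of how the software side of input-to-output latency could be checked. The render_frame and push_to_display functions are hypothetical placeholders, not MAINFRAME’s pipeline, and the measurement ignores whatever latency the display hardware itself adds.

```python
# Rough check of the 30 ms input-to-output budget: timestamp a MIDI note when
# it arrives, render and push a frame, and report how long the round trip took.
import time
import mido

BUDGET_S = 0.030  # 30 ms

def render_frame(msg):
    """Placeholder for the real render step."""
    return [msg.note, msg.velocity]

def push_to_display(frame):
    """Placeholder for pushing the frame to LEDs or a screen."""
    pass

with mido.open_input() as port:
    for msg in port:                      # blocks until a message arrives
        if msg.type != "note_on":
            continue
        t_in = time.monotonic()           # moment the note is received
        push_to_display(render_frame(msg))
        latency = time.monotonic() - t_in
        if latency > BUDGET_S:
            print(f"over budget: {latency * 1000:.1f} ms")
```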

Development of Technology

MAINFRAME, and especially the first product, MAINFRAME-B, is a stepping stone on (what I assume will be) a long journey towards interesting and novel technology. There are a number of questions I want to explore with MAINFRAME.

What is the nature of “immersion”? 

My hypothesis is that immersion, or the experience of it, is based on sensory synchronization. Imagine an FPS video game. When you shoot a gun, there’s the visual of the gun with recoil and muzzle flash, there’s the sound of the gunshot, and (usually) the controller vibrates a bit. If any one of those were out of sync or missing completely, immersion would be diminished or lost entirely.

How can a device “feel alive”?

I want MAINFRAME to feel like it’s alive, which raises some interesting questions. How can a digital system be imbued with life? My current hypothesis is that the answer resides in the realm of analog, so MAINFRAME internally runs analog simulations that form patterns based on the incoming MIDI data.
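
The post doesn’t detail those simulations, so the sketch below only illustrates the general idea under my own assumptions: a damped mass-spring oscillator, integrated in small time steps the way an analog circuit would evolve, excited by MIDI notes and sampled into a row of brightness values.

```python
# Illustration of the "analog simulation" idea (not MAINFRAME's actual model):
# each MIDI note kicks a damped mass-spring oscillator, and its motion is
# sampled every frame into brightness values for a small LED grid.
import math

class DampedOscillator:
    def __init__(self, freq_hz, damping=3.0):
        self.omega = 2 * math.pi * freq_hz  # natural frequency (rad/s)
        self.damping = damping              # higher = settles faster
        self.pos = 0.0
        self.vel = 0.0

    def excite(self, velocity):
        self.vel += velocity / 127.0        # a note-on "strikes" the system

    def step(self, dt):
        # semi-implicit Euler integration of x'' = -w^2*x - c*x'
        self.vel += (-self.omega ** 2 * self.pos - self.damping * self.vel) * dt
        self.pos += self.vel * dt
        return self.pos

def note_to_freq(note):
    return 440.0 * 2 ** ((note - 69) / 12)

GRID = 8
osc = DampedOscillator(note_to_freq(60) / 100.0)  # scaled down so it animates slowly
osc.excite(100)                                   # e.g. middle C at velocity 100

for frame in range(4):
    x = osc.step(1 / 60)  # advance one 60 fps frame
    row = [max(0.0, x * math.sin(math.pi * (i + 0.5) / GRID)) for i in range(GRID)]
    print([round(v, 2) for v in row])
```

Because the oscillator carries its state from one note into the next, two identical notes rarely land on exactly the same pattern, which is part of why a simulated system can feel less mechanical than a fixed animation.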

MAINFRAME is not an interface, or just a “tool” that does exactly what you want it to do. It has character and must be explored. The goal is for MAINFRAME to feel like a bandmate: it has its own ideas and style, and you play alongside it, sharing ideas.

I understand all these goals are lofty, but I can’t help but dream. MAINFRAME-B has very early-stage, even primitive, versions of all these concepts.
