I have a list of posts about composition concepts that I will be sharing in the coming week, once the first wave of material is released and the first performances at the gallery are complete. The timeframe for preparing these first materials required a very quick turnaround, which means that over the past week I have been unable to get my thoughts down as often (or as deeply) as I would like. However, I am installing the first wave of audio files tomorrow, and will be turning on the installation in the gallery stairwell following tomorrow evening's first performances. In the meantime, I wanted to share some of the progress on the digital side of things as the first performance approaches: most of my recent posts have dealt with the physical / hardware installation, so this one focuses on the digital systems and tools.
This first wave of material uses four primary recorded sound sources: radio broadcasts, sounds from a circuit-bent radio, EMF recordings of small electronics equipment, and sounds recorded using Ewa Justka's "Voice Odder" instrument design. These sounds were constructed so they could be excerpted easily while still maintaining some sonic relation to one another. I focused first on creating relatively short, sonically active miniatures that could suit the space and operate as sonic "Lego" bricks for development within a performance environment. I then worked with these small pieces to develop a general outline / arc for the performance to follow; after a lot of experimentation and practice, I began to achieve consistency from performance to performance, and the piece emerged relatively smoothly. In the end it came together as a formally challenging, sonically aggressive piece that is very carefully composed while still maintaining some improvisational elements. As a result, the performance system built in Max/MSP is more concrete than it was at the outset, in order to give the performances additional consistency: it uses the fixed-media chunks as a skeleton that guides the system while I perform supplemental material, record new sound, and make changes / adaptations on the fly. The piece is guided by a constantly running real-time analysis engine, which automatically controls aspects of the additional DSP, synthesis (mostly granular), and processing, all while giving me a spectrographic display to use for feedback. This allows the small fixed-media portions to dictate the compositional relationship with the live-generated materials. I retain some degree of control over the system, though the GUI has been streamlined as much as possible.
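The actual analysis engine lives in a Max/MSP patch, but the general idea of analysis-driven control can be sketched outside of Max. Below is a minimal, illustrative Python/NumPy sketch, assuming two stand-in features (RMS level and spectral centroid) and a hypothetical mapping to granular parameters; the feature choices, mapping curves, and parameter names are my own assumptions, not the author's patch:

```python
import numpy as np

def analyze_frame(frame, sr=44100):
    """Extract simple features from one audio frame: RMS level and
    spectral centroid (stand-ins for whatever the real engine measures)."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    rms = float(np.sqrt(np.mean(frame ** 2)))
    centroid = float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12))
    return rms, centroid

def map_to_grain_params(rms, centroid, sr=44100):
    """Hypothetical mapping from features to granular parameters:
    louder input -> denser grain cloud, brighter input -> shorter grains."""
    density = 5.0 + 95.0 * min(rms * 10.0, 1.0)               # grains/second
    grain_ms = 200.0 - 180.0 * min(centroid / (sr / 4), 1.0)  # grain length
    return density, grain_ms

# Example: one frame of a 440 Hz test tone drives the mapping.
sr = 44100
t = np.arange(1024) / sr
frame = 0.5 * np.sin(2 * np.pi * 440 * t)
rms, centroid = analyze_frame(frame, sr)
density, grain_ms = map_to_grain_params(rms, centroid, sr)
```

In a running system, a loop like this would consume frames from the fixed-media playback and continuously re-map the results onto the live synthesis, which is how the fixed-media skeleton can "dictate" the behavior of the generated material.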
As a result, my primary (and most challenging) task during performances is to supply the system with additional sonic material to weave into the fixed-media portions. This requires me to perform with both a circuit-bent radio and the Voice Odder, recording into buffers while controlling the larger-scale system interactions. Generally, the performed material is not amplified directly through the system; instead, it is recorded into buffers for granular resynthesis, wavetable synthesis, and so on. The result is a chaotic soundscape, which I have now recorded a couple of times in preparation for the performance, and will share after the exhibition opens. In the meantime, I have included a couple of screenshots below showing the work as it currently stands, from both the fixed-composition perspective and the performance perspective (Max/MSP GUI, etc.).
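For readers unfamiliar with the technique, granular resynthesis from a recorded buffer works roughly like this: short windowed "grains" are read from (often random) positions in the buffer and scattered across the output. A naive, self-contained Python/NumPy sketch, assuming a single mono buffer and a simple Hann window (the parameter values and scattering strategy here are illustrative, not the Max/MSP patch):

```python
import numpy as np

def granulate(buffer, sr=44100, grain_ms=60.0, density=40.0,
              out_seconds=1.0, seed=0):
    """Naive granular resynthesis: overlap-add Hann-windowed grains drawn
    from random positions in a recorded buffer onto an output stream."""
    rng = np.random.default_rng(seed)
    grain_len = int(sr * grain_ms / 1000.0)
    window = np.hanning(grain_len)
    out = np.zeros(int(sr * out_seconds) + grain_len)
    n_grains = int(density * out_seconds)
    for _ in range(n_grains):
        src = rng.integers(0, len(buffer) - grain_len)  # read position
        dst = rng.integers(0, len(out) - grain_len)     # write position
        out[dst:dst + grain_len] += buffer[src:src + grain_len] * window
    return out[:int(sr * out_seconds)]

# Example: granulate one second of noise standing in for a live recording.
sr = 44100
recorded = np.random.default_rng(1).uniform(-1, 1, sr)
cloud = granulate(recorded, sr=sr)
```

Because the grains are decoupled from the original timeline, the source performance (radio noise, Voice Odder) can be stretched, fragmented, and layered without ever being heard directly, which matches the role the buffers play here.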