[re]Glossolalia // Complete Performance
I am somewhat late with this post, the result of taking some time to rest and recover following the complete performance of [re]Glossolalia that took place on the evening of June 19 in the Talbot Rice Gallery's Georgian Gallery. While I apologize for the gap in blog posts, it is the result of two primary factors. The first is that I needed some time to get my health in check. Unfortunately, as a direct result of pushing myself for many weeks without sufficient sleep or rest, my health eventually started to fail. On the performance night of June 19, I was actually becoming very ill - and in the end gave the performance with a fever. The next several days following that performance and tear-down were then dedicated to recovering. The second reason for the delay is simply the schedule requirements for the performance, and the "catch-up" I then had to go through in order to edit and collate the various pieces of documentation I collected in the process. With that said, in this post I aim to achieve a number of things:
Document the details of the performance, including setup and tear-down.
Share an edited video and audio recording of that performance.
Document what developments have occurred in this project since the most recent post.
Cover a plan for where the project is headed next, including upcoming performances.
Explore the interrelationships of performer action and resultant sound in regards to this piece, and the performance of this piece.
I hope you're prepared for a long read, thanks in advance!
First off is documenting the details of the performance - including all of the technical details of setup, tear-down, and so on:
As a result of the schedule requirements to make this project a success, I was still finishing the actual composition of [re]Glossolalia up until a day or two before the performance. However, because I intimately knew the acoustic / spatial nature of the Georgian Gallery, I had the flexibility to tailor the performance to the space as best I could. It also allowed me to somewhat "internally delegate" tasks of the project. For example, I spoke at length with Martin Parker in a one-on-one meeting about speaker choice, placement, and the desired sonic outcome for the final performance. Despite the fact that this conversation happened when the piece was still only about 2/3 complete, it allowed me to move forward and tailor materials and sound-design to suit the specific speaker setup already agreed upon for the space. In the end, I decided to make use of a Meyer Sound PA system: a very powerful quadrophonic speaker rig with a dedicated sub for both the front and back of the room. On the morning of Monday June 18, I picked up the entire rig from Alison House, at which point it was taken directly to the Talbot Rice Gallery with the help of a hired van. Making this happen smoothly required a lot of coordination between myself, Roderick Buchanan-Dunlop, James Clegg, and Tommy Stuart - and I am incredibly thankful to all of them for their flexibility and help. Because this allowed load-in to take place the day before the performance, it provided me the perfect amount of time to tweak the setup and craft the sonic presence I desired. What further helped was that the only other requirement for the sound system that evening was a poetry reading, meaning the system didn't need to suit many "needs" beyond my own.
During this setup period, there was also one final question I needed to answer for myself in regards to the speaker setup and the performance: how to prepare the performance for a quad-channel rig? I had planned on working in quad for a couple of reasons, but first and foremost out of a desire to really fill the space with sound and provide a listening environment with as large a zone of "good audio" as possible. I knew the audience was potentially going to be somewhat large, dispersed throughout the space, and not traditionally seated (allowing them to move if they so chose). With these factors considered, a quadrophonic rig seemed a wise solution. However, that still left the question of whether it would be traditionally quadrophonic, some form of "cheap" quadrophonic, or expanded stereo. On Monday, once the physical setup was completed, I took some time and tried each method out. I had prepared a Max patch that diffused the various audio streams across the full quadrophonic range, and in the end decided that while it sounded good, it was a little underdeveloped for the following day's performance. Then I tried a couple of "cheap" quadrophonic methods, most notably simply delaying the rear channels by varying numbers of samples in order to provide a slight delay between events - ultimately simulating a quadrophonic output from a stereo source. What I found with this method, however, was that it robbed the intensity from events which I wanted to line up perfectly in all channels temporally. To be honest, it would work quite well in theory if I developed a more intelligent system driven by the analysis module of the patch, one which could change the delay parameters subtly in order to allow transient and rhythmically active events to still line up, while allowing more static events to shift slowly through a continuum of delay times.
However, this would then likely require the patch to work on a delay of a second or two minimally, in order to allow time for the system to adjust to "upcoming" events, seeing as it cannot read the future. Surely there is another solution here - at the moment I'm simply speculating on a possibility. In the end, I decided for this performance to work in an expanded stereo format, varying subtle spectral aspects of the mix between the front and rear sets of speakers rather than temporal ones. While at the time this felt like a concession to the technical restrictions of the time frame, in retrospect I found that the sonic result still achieved exactly what I was looking to accomplish.
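To make that "cheap" quad method concrete, here is a minimal sketch of the fixed rear-channel delay described above. The function name and the 240-sample delay value are illustrative assumptions of mine, not taken from the actual Max patch:

```python
import numpy as np

def pseudo_quad(stereo, rear_delay_samples=240):
    """Derive a four-channel feed from a stereo source by delaying the
    rear pair by a fixed number of samples (240 samples is 5 ms at 48 kHz).
    Columns of the result: front L, front R, rear L, rear R."""
    pad = np.zeros((rear_delay_samples, stereo.shape[1]))
    front = np.vstack([stereo, pad])   # front pair passes through unchanged
    rear = np.vstack([pad, stereo])    # rear pair is delayed, simulating depth
    return np.hstack([front, rear])

# One second of stereo noise at 48 kHz
src = np.random.randn(48000, 2)
quad = pseudo_quad(src)
```

The drawback noted above falls out directly: any transient in the source arrives at the rear pair 240 samples late, so an event meant to strike all four channels simultaneously is smeared in time.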
In regards to the specifics of the sound system used, the four primary speakers were Meyer Sound UPA-1P models, while the subwoofers were Meyer Sound USW-1P models. Further, with the help of Roderick, we used two DBX Driverack PX-6076 units to create a customized EQ for the room in order to complement (and combat) the nuances of the space. In regards to signal flow, the source of the signal was a 13" 2015 model Macbook Pro running Max/MSP 7, with an RME Fireface UFX audio interface. The RME ran four channels out to a Mackie 1642 VLZ4 mixer, which allowed me some diffusion and further nuance for tailoring the sound in the room (as well as a quickly accessible master fader). The output of the Mackie then ran to the DBX units, which conditioned the signal with the custom EQ and distributed the four-channel output between the subwoofers and loudspeakers by frequency. While the custom EQ got us extremely close to where I wanted the sound to be, having the mixer there allowed me to further tailor the balance and frequency content to be absolutely perfect for the piece. Nothing extreme was altered; I just applied a slight boost in the low-end and a slight reduction in the extreme high-end. The midrange was untouched mixer-side, as the DBX units had already done a superb job balancing it.
Below, is a series of images which cover the equipment setup in regards to speaker placement, load-in, etc:
With the equipment all in place, and with my final decisions made, I was able to leave on Monday secure in the knowledge that the system was ready, and that by and large I was already thoroughly sound-checked. As such, I arrived early Tuesday evening ready for the performance - despite my aforementioned health issues. I took a little time for a final sound-check, mostly in the interest of setting volume levels somewhere in the ballpark of full, present, and aggressive, without pushing so far as to cause anyone pain or discomfort (at least not due to the volume levels). In the end, I strongly feel that the preparation I afforded myself went a long way towards making the performance as successful as it was. The result of that dedicated setup day was that I could focus purely on the musical, compositional, and aesthetic nuances of the performance, certain in the knowledge that the technical aspects were thoroughly taken care of. Following the performance, a lot of audience members approached me wanting to learn more about both the content of the piece and the system driving the performance. I have found that because there is not always a 1:1 correspondence between my physical actions and the resultant sound, people tend to be more curious and inquisitive about the link between the two - more on that in a later section of this post. As such, I spent quite a lot of time talking through various aspects of the piece and its performance with people at the venue and afterwards at a bar, demonstrating what I could depending on the question. Ultimately, the performance was met with a lot of praise, and I'm ecstatic with the outcome; I couldn't have asked for a better way to wrap up my participation in the Trading Zone Exhibition.
The following day, Wednesday June 20, I proceeded to tear-down the equipment in the gallery and prepare it all for transport back to Alison House. There's a certain kind of catharsis that comes for me when I start tear-down, and this was one of the first times I have ever been able to delay that part of the process until the following day. As such, I was able to reflect quite thoroughly on the setup and performance, as well as take the time to get some key elements of documentation taken care of (photos, notes, etc). Finally, the equipment was returned to Alison House early on the morning of Thursday June 21 and checked-in to Roderick accordingly.
For the second topic of discussion, I want to share some media related to the performance itself. Having just covered all of the technical work required to make the performance successful, this is where I want to share the actual musical results. First is a video taken of the performance, which has now been edited and placed onto my YouTube page. The video covers the thematic and musical aspects of the work, as well as serving as a document of the performance itself. I must give special thanks to Sam Gluckstein for taking the video, even though I was too distracted to have asked him to. That video can be found below:
Additionally, if you are interested in only listening to the audio - or perhaps if you are looking to spare yourself the data of streaming a video on a mobile device, the following link is to a copy of the same recording posted to my SoundCloud:
The third topic of discussion is for me to cover the developments that have occurred with this project since the performance. While I have already discussed the time I took for recovery, that is not at all to suggest that nothing has been done between the performance and now. Quite the contrary, I have taken this as a chance to strike a number of small tasks off of my plate for the project.
First and most important is the editing of the recordings and materials created for and taken from the performance, and, coincidentally, a thorough reflection on the nature of the project as a whole. The final performance recording of approximately sixteen and-a-half minutes of continuous material, which I have linked to above, is what I consider to be the most "essential" short-form representation of what [re]Glossolalia is as a creative work. What I mean by this is that I acknowledge the continued life of this project beyond the Trading Zone Exhibition will most likely see it existing solely as an experimental electronic music composition for performance (essentially exactly what I presented in the final performance) - and indeed, in that capacity, additional performances are already quickly being scheduled in the United States at various electroacoustic music festivals this fall. The opportunity afforded me by the Talbot Rice Gallery to craft the performance and installation aspects is in many ways a "one time" opportunity. However, I also recognize that it can be difficult to collect the multiple aspects of this project together as a cohesive front-to-back "work" - yet their totality as a project is the essence of what [re]Glossolalia was always intended to be. The result is that the most complete impression of this project is its presentation at Trading Zone, with the multiple performances of the work in stages of completion, and the fragmentary sonic structures which then inhabited the space as "echoes" of those various performances. In the end, I thought a lot about how I could conceive of the form of this project as one structure. These thoughts took me back to an aspect that has existed in previous works of mine: the concept of understanding a multifaceted work's structure as one of paratactic interrelationships, as opposed to syntactic or hypotactic ones.
As such, all of the various aspects of this piece are interrelated paratactically - and instead of thinking about its totality "continuously", you might instead think about it "contiguously": the composition itself, the performances, the installation, the various fragments of material / echoes, the impulse responses of the spaces, the research necessary to make the piece work, the technical work and labor required for equipment, etc. With that in mind, I've been trying to sum up the core idea of this piece into as succinct a "program note" as I can:
"Written in 2018, [re]Glossolalia was composed as a companion work to another composition of mine from early 2017 for alto saxophone and live electronics. Both pieces heavily critique the sometimes insidious and subversive nature of late-night rural US radio broadcasts, utilizing excerpts recorded from a circuit-bent radio. The broadcasts used in this piece vary in content, covering topics ranging from predictions of the United States' role in bringing about the biblical apocalypse, to claims that monetary donations earn entry into the afterlife, to the "comforts" of mutually assured destruction. Compositionally, the piece presents an unpredictable, dense, and continuously fracturing interpretation of, and dialogue with, that radio content - exploring the balance between real and imaginary sound worlds in a type of liminal space. The sonic materials for [re]Glossolalia were created and transformed using custom-written granular synthesis programs, Duffing oscillator synthesis, FM synthesis, and hardware hacking. Please note: I don't intend for this piece to present a prescriptive worldview, or a condemnation of socio-political affiliation and / or religious beliefs. However, I do intend to utilize it as a platform from which I can bring blind hatred, anti-intellectualism, and dangerous zealotry under scrutiny. A further element of this work is the installation portion, placed in the Talbot Rice Gallery stairwell and elevator entry routes, which positioned fractured elements derived from the composition throughout the gallery as "echoes" inhabiting the space."
The second task I have taken the time to work on is the collation of many of the smaller elements of documentation that I had previously let pile up. This has meant contacting photographers and audience members whom I knew were at various performances or events related to the work, collecting their various pictures, videos, and drawings (!!), and collating them along with my own in an organized fashion. As a result, I now have a large amount of documentation which can accompany both my submission and the record of this project.
The third task I have completed is the creation of two new images which accompany the work, an element I have not previously spoken about at much length. While not necessarily a crucial element of the work, I have been working consciously to develop a visual aesthetic to accompany this project, and have created five visual art pieces to date, with more in development. While these images have appeared in other posts on this blog, I have collected all of them below:
The fourth task I have completed is the editing and documentation needed for the impulse responses taken in the stairwell and areas dedicated to the installation. My goal is to utilize them to create a binaurally encoded "walkthrough" video of the installation aspects of the project. This is the result of a lot of time spent thinking about how best to document the installation elements of the work - how best to represent something essentially ephemeral as a fixed-media product. To accompany this film, I also hope to create additional diagrams which demonstrate the placement of the speakers within the space, and explain the rationale behind their placement.
The fifth task I have taken some time to work on, is collating my various notes and writings into the beginnings of a document. I am thinking about the large-form structure of the document considering the high number of topics which it has to cover, and how I can best accommodate all of that material within only 6,000 words.
This brings me to the final task I have been working on, which is peripherally related to this project... Glossolalia, the original companion piece from which [re]Glossolalia was thematically derived, will be performed at the 2018 New York City Electroacoustic Music Festival (NYCEMF) on July 20 - and I am working on the final touches for the performance Max/MSP patch. This particular task will occupy me for the next couple of weeks, at which point I can return to document writing, etc.
For the fourth topic of discussion in this post, I simply want to cover the basic outline of the plan for this project moving forward. As I said previously, there are a number of tasks which I am currently working on, and many more that still have to be done. The rough plan for the immediate future is to focus on finishing the preparations of Glossolalia for NYCEMF, an opportunity I am thankfully able to undertake due to the generosity of the University of Edinburgh's Reid School of Music and the Andrew Grant Travel Bequest. As these two works are companions to one another, this is in many ways work crucial to my MSc thesis - and this performance will actually be Glossolalia's world premiere, almost a year after it was originally finished! A detail I still need to determine is how to weave this work into the documentation that accompanies my final MSc submission - will it simply end up as appendix material, or will it be covered more closely?
Beyond NYCEMF, and hopefully editing the recording I will get from the performance and rehearsals, I then need to take the remaining time to really focus on the document and remaining documentation for submission. A crucial element of the submission of this work will be the final "release" of all of the material onto BandCamp as a digital album. As such, one of the other final details I'll need to spend time on is the final edits / mixes of all of the audio for official release.
For the fifth and final section of this post, I wanted to spend some time sharing my thoughts on the relationship between performer action and resultant sound in electronic music and how that has manifested in this work.
In his article "Defining Timbre - Refining Timbre", Denis Smalley makes an excellent point about what he terms "source-bonding", defining it as:
"...the natural tendency to relate sounds to supposed sources and causes, and to relate sounds to each other because they appear to have shared or associated origins."
In thinking on this, and on other parts of Smalley's article, I have been thinking about agency in sound - and specifically the concept of agency in electroacoustic music performance. In the performance of acoustic music, for example, there is almost never a question of source bonding and agency. The acoustic / sonic properties of a violin are the result of the physical design and properties of the instrument, and of the tradition of performance practice that has emerged over hundreds of years. For most people, a violinist on a stage predisposes them as listeners to a certain source-bonding paradigm, and the acoustic ability to defy that paradigm is almost non-existent without the musical equivalent of a "man behind the curtain", so to speak. While that predisposition has already occurred, and while there are negative aspects of it from the perspective of the composer, it also leaves the audience with no question in regards to performer agency and resultant sound. If the violinist is performing in front of the audience, the listener will reasonably recognize the agency of the performer as the cause of the sound they hear - a 1:1 relationship that echoes one of the most fundamental laws of physics: for every action, there is an equal and opposite reaction.
However, electronic music performance is not bound by the 1:1 relationship of agency and result, nor is it bound by the traditional source-bonding paradigm. While this is not at all a revelation or original thought, it is a topic that I had to think about quite carefully in the performance practice and performance needs of this piece.
When I am performing the core piece at the center of [re]Glossolalia, I am making use of a number of tools, which are as follows:
13" Macbook Pro (2015 Model), running Max/MSP 7 - with the mousepad enabled as an X/Y control grid
A radio, with the circuit board exposed, and points of interest identified for circuit-bending
The "Voice Odder", an instrument designed by London-based musician Ewa Justka
A Korg NanoKontrol2, a cheap USB control surface which I use often due to its compact nature and easy replacement factor
This is by no means a lot of equipment, and most of it is very familiar to me. I have spent the better part of the last few years developing my performance abilities with only a laptop and a NanoKontrol2 - practicing with various patches much as you would a traditional instrument, a discipline that comes from my background as a saxophonist and guitarist. However, with the addition of the radio and the 'voice odder' as desired elements of the performance setup for this piece, I quickly ran into the problem that my limitations as a performer were not ones of practice or familiarity, but of sheer human ability. The reality is that I only have two hands, and while I'm using certain parts of the system, I am unable to manipulate others. There are two potential solutions to this problem: the first is to scale back the compositional elements of the piece to a point where it becomes physically possible to perform in a 1:1 relationship; the second is to re-frame the performance aspects of the system as no longer strictly 1:1, but rather as a mix of different strategies. In this case, the most desirable solution was an easy choice, as the thought of compromising the compositional facets of the piece to accommodate myself was completely out of the question. I knew from very early on that I wanted to explore the compositional limits of fragmentation and various extremes of timbre, temporality, spatialization, and so on - so the concept of performing it in a 1:1 relationship was sacrificed. What this choice opened up, however, has ultimately proved to be very fruitful territory for my own work...
By not restricting the piece to the 1:1 performance paradigm, I created a performance system which blends fixed-media, real-time algorithmically generated materials, and performed materials. The system has a fixed-media "skeleton" which is currently broken into two large-scale parts, though it can be fractured into as many or as few as desired. These parts are then triggered by the performer as desired from the NanoKontrol2 surface, at which point the system runs them through a real-time analysis system which automatically controls digital synthesis parameters that provide a significant additional layer of granular synthesis on top. However, this system is only automatically providing elements such as spectral dispersal of the synthesis, and volume - not the finer details such as grain size, grain rate, or the read area for example. This part of the system which is not automatically covered, and not fixed, is the performer's domain - and my primary degree of agency over the performance of the piece. As such, in performance, my tasks are to control the various aspects of the digital synthesis methods which are not automated, record new sounds into buffers for re-synthesis from the radio and voice-odder, trigger fixed elements, and control further DSP elements as desired such as distortion, reverb, comb-filtering, and additional spatialization parameters (ideally in the potential future quadrophonic development outlined earlier). The result of all of this is that there are parts of the performance of this piece which are 1:1, but much of my actions as the performer are more behind the scenes. However, this allows for the composition to maintain the musical aspects I originally wanted it to have - though that admittedly comes at the loss of the audience's understanding of source-bonding and agency in performance...
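As a rough sketch of the division of labor just described - an analysis stage automatically driving some granular-synthesis parameters while the rest remain in the performer's domain - consider the following. The class name, parameter ranges, feature mappings, and CC assignments are all hypothetical illustrations of mine, not code from the actual Max/MSP patch:

```python
class GranularLayer:
    """Splits granular-synthesis parameters into an automated set, driven
    by real-time analysis of the fixed-media layer, and a performed set,
    driven by the control surface."""

    def __init__(self):
        # Automated by the analysis system:
        self.volume = 0.0            # 0..1
        self.spectral_spread = 0.0   # 0..1, dispersal of grains in frequency
        # The performer's domain:
        self.grain_size_ms = 50.0
        self.grain_rate_hz = 20.0
        self.read_position = 0.0     # normalized position in the source buffer

    def analysis_update(self, rms, centroid_norm):
        """Map simple audio features onto the automated parameters:
        louder input raises the layer's volume, brighter input widens
        the spectral dispersal."""
        self.volume = min(1.0, rms * 4.0)
        self.spectral_spread = max(0.0, min(1.0, centroid_norm))

    def performer_update(self, cc, value):
        """Route a MIDI CC (value 0-127) from a surface such as the
        NanoKontrol2 to one of the non-automated parameters."""
        scaled = value / 127.0
        if cc == 0:
            self.grain_size_ms = 5.0 + scaled * 495.0   # 5 ms .. 500 ms
        elif cc == 1:
            self.grain_rate_hz = 1.0 + scaled * 99.0    # 1 Hz .. 100 Hz
        elif cc == 2:
            self.read_position = scaled
```

The point of the split is visible in the two update paths: the analysis path runs continuously without the performer's attention, while the performer path only moves when a fader or knob does.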
This is because while source-bonding and agency in electronic music are not as well understood as in acoustic music, they are still something I would argue the listener attends to - even if only subconsciously. Back to the violin briefly: if I were performing with an acoustic instrument, my actions would of course yield a direct sonic result. This can still exist in electroacoustic music, though the lack of historical context and of a collective cultural understanding makes it somewhat nebulous at best. As an example, if I'm performing on a traditional MIDI control surface in a 1:1 performance paradigm - while perhaps not as immediately apparent - it becomes obvious to a listener that the turning of a specific knob (depending on their sight-line) has a specific consequence. As such, they can begin to source-bond physical actions to sonic results, much the same way they might with an acoustic instrument. Many composers and electronic musicians have taken this a step further, making use of tools such as the Microsoft Kinect to explore further performer agency in electronic music. However, this is a relationship between action and result that can be consciously distorted - something which I explored in the Digital Media Studio Project class, in my group's research project into "Bias in Creative Technology". This is due to the flexibility of digital systems, something which Tom Mudd has explored directly in his installation work Control. However, when I'm performing aspects of [re]Glossolalia that are not 1:1, my role as a performer becomes even more nebulous to the audience. They can see that I'm working, but they don't necessarily understand the relationship between my actions and the resultant sound.
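As a toy illustration of that flexibility - the same physical gesture mapped either directly to sound or deliberately decoupled from it - consider the two mappings below. Both are hypothetical examples of mine, not taken from any specific piece or patch:

```python
import random

def direct_mapping(cc_value):
    """1:1 paradigm: knob position (0-127) maps straight to a filter
    cutoff in Hz, so a listener can source-bond gesture to sound."""
    return 100.0 + (cc_value / 127.0) * 10000.0

class DecoupledMapping:
    """Distorted paradigm: the knob only nudges an internal random walk,
    so identical gestures produce different results and the
    action-to-sound link becomes nebulous to an observer."""

    def __init__(self, seed=None):
        self.rng = random.Random(seed)
        self.cutoff = 1000.0

    def __call__(self, cc_value):
        # The gesture sets only the direction and rough magnitude of the
        # nudge; the system decides how far the parameter actually moves.
        nudge = (cc_value - 64) * self.rng.uniform(0.0, 2.0)
        self.cutoff = max(100.0, min(10100.0, self.cutoff + nudge))
        return self.cutoff
```

With the first mapping, turning the knob to the same position always produces the same cutoff; with the second, the same gesture repeated twice lands somewhere different each time, which is exactly what makes the performer's agency hard for an audience to read.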
For performers who are only using a laptop, this is what you might call the "checking your email" dilemma - in which there is no certainty, from the audience's perspective, that what you are doing as the performer actually has any consequence on the sound they're hearing. Speaking as an active performer on instruments which are not purely digital (guitar and saxophone), I do think agency is an important element of performance in any kind of music. However, in this instance, I also recognize that my compositional goals were at odds with the possibility of performing the piece in a 1:1 fashion.
In the end, I can only hope that the degree to which my performance influences the piece is still rewarding for the listener / audience, and still effective. Regardless, these are some of my specific reflections on this topic in regards to the nature of performing [re]Glossolalia specifically.
Well, that wraps up this post. I hope to get back to more regularly posting here, and apologize for the length of this post in particular. However, I hope that this explains the gap while also successfully continuing from where I had previously left off. If you've read this far, thank you for sticking around!