operatorError was created as a result of using various AI tools to build neat little video processing programs that use Python to manipulate video snippets. These programs use common glitch-art techniques to generate material that is completely foreign to the original input files. When designing these programs, I really emphasized the idea of a feedback loop: the video is processed, and the processed video is fed back into the program's input a number of times to further corrupt, distort, and layer the footage and effects.
Both programs are available on my GitHub; check them out if you are interested, though they will more than likely take some tinkering to get working properly.
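If you're curious what that loop looks like in practice, here is a stripped-down sketch (not the actual programs, which are on GitHub) that just re-encodes a clip at a low bitrate over and over with ffmpeg, so that every pass's output becomes the next pass's input:

```python
# A stripped-down sketch of the feedback-loop idea (not the actual
# programs, which are on GitHub): re-encode a clip at a low bitrate,
# then feed each pass's output back in as the next pass's input.
# Assumes ffmpeg is installed and on the PATH; file names are placeholders.
import subprocess

def feedback_passes(src: str, passes: int = 5, bitrate: str = "150k") -> str:
    current = src
    for i in range(passes):
        out = f"pass_{i}.mp4"
        subprocess.run(
            ["ffmpeg", "-y", "-i", current,
             "-c:v", "libx264", "-b:v", bitrate,
             "-c:a", "copy", out],
            check=True,
        )
        current = out  # the processed output becomes the next input
    return current

# e.g. feedback_passes("input.mp4", passes=8) -> a heavily degraded "pass_7.mp4"
```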
For this piece, I really leaned into the idea of overprocessing both the video and the audio material. This method produced a lot of really interesting moments, especially once I added further processing in Reaper. There are plenty of super-saturated segments, plenty of unintentional artifacts and glitches that I learned to embrace and exploit, and some sections where I found I could be quite expressive by leaning into the aggressive distortions that emerged after rendering things over and over again.
This was the first time I had attempted to create an 8-channel fixed-media piece.
I originally composed the piece for four channels in my living room, but with some help from Dr. John Thompson I was able to use a special plugin that could extrapolate the audio to four additional channels and then decode that 8-channel information down to stereo so that I could post it on YouTube.
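The plugin handled all of that for me, but the basic idea of folding eight channels down to two looks something like this naive numpy sketch (the azimuth layout and pan law here are hypothetical stand-ins, not the plugin's actual algorithm):

```python
# A naive numpy sketch of folding eight channels down to two. This is NOT
# the plugin's algorithm, just the general idea: place each channel at an
# azimuth around a ring and weight it into left/right with equal-power gains.
import numpy as np

def downmix_8_to_2(x: np.ndarray) -> np.ndarray:
    """x: (num_samples, 8) array of channel audio -> (num_samples, 2)."""
    azimuths = np.linspace(0, 2 * np.pi, 8, endpoint=False)
    pan = np.cos(azimuths)                 # pan position in [-1, 1]
    theta = (pan + 1) * np.pi / 4          # map to 0..pi/2 for the pan law
    left_gain = np.cos(theta)              # equal-power: L^2 + R^2 = 1
    right_gain = np.sin(theta)
    stereo = np.stack([x @ left_gain, x @ right_gain], axis=1)
    return stereo / (np.max(np.abs(stereo)) + 1e-9)  # normalize to avoid clipping
```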
This piece was really fun to make because I used a plethora of unconventional techniques to create the video you see here.
I used some of the same ideas from processOfAmalgamation, stacking a lot of footage on top of itself and varying the opacity of the different video feeds.
Additionally, I used an old JVC camcorder and a CRT television to create a sort of feedback loop between OBS and the television; this gave me a lot of strange analog distortions that wouldn't have been quite the same had I used the digital tools available.
This was also the first time I used any footage that wasn't computer generated in a fixed-media composition.
processOfAmalgamation is a piece I created when I was really interested in a technique called 'datamoshing.'
Datamoshing (at least the definition that appeared when I typed it into Google just now) is "a glitch art technique that manipulates video data to create distorted, surreal, and often unexpected visual effects." Throughout my life I have always found myself drawn to these kinds of fringe areas of crafting error into an aesthetic,
so when I discovered an application that let you morph video footage together and achieve this effect, I started working on this piece.
In keeping with the name of the piece, the process consisted of creating a sequence of visuals and then distorting them with various techniques to achieve more and more glitchiness as time went on. As I did this, I noticed more and more artifacts appearing in the visuals that I never set out to create.
I found the act of simply accepting these typically undesirable artifacts to be quite an interesting process.
It was almost as if the process itself were a co-author of the piece; embracing it helped me create something I never would have made had I tried to keep tighter control over these errors and artifacts.
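The application did the actual moshing, but the core trick (one clip's motion data smearing another clip's pixels when keyframes go missing) can be faked digitally. Here is a rough conceptual emulation using OpenCV optical flow; the file names are placeholders, and this is just the idea, not the tool I used:

```python
# Conceptual emulation of datamoshing: real datamoshing deletes I-frames so
# that motion vectors keep updating a frame that never gets refreshed. This
# sketch fakes that by warping one "stuck" frame with the optical flow
# measured from the rest of the clip.
import cv2
import numpy as np

cap = cv2.VideoCapture("input.mp4")   # placeholder file name
ok, frame = cap.read()
stuck = frame.copy()                  # this frame never gets "refreshed"
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
h, w = prev_gray.shape
grid_y, grid_x = np.mgrid[0:h, 0:w].astype(np.float32)

out = cv2.VideoWriter("moshed.avi", cv2.VideoWriter_fourcc(*"MJPG"), 30, (w, h))
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # Pull each output pixel from where the motion field says it came from,
    # so the stuck frame's pixels get dragged along the new clip's motion.
    stuck = cv2.remap(stuck, grid_x - flow[..., 0], grid_y - flow[..., 1],
                      cv2.INTER_LINEAR)
    out.write(stuck)
    prev_gray = gray
out.release()
```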
syntacticSugar is probably one of my favorite pieces I have ever composed. I remember that the inspiration for it came while I was watching an Amazing Max Tutorial video on YouTube about the jit.bfg object,
and I wondered whether I could create a feedback loop between Reaper and Max/MSP after finding a script online that let you record live video into Reaper like an audio track.
I then used some of Reaper's other functions to send audio back into Max and tied elements of the audio to certain visual elements, so that there would be a direct correlation between the sounds created and their visual representation.
I did this multiple times and eventually gathered several takes to work with, which I then edited further into an even more cohesive piece.
I very much enjoy the sort of 'quiet intensity' I get from this piece. A lot of electroacoustic music tends toward a more aggressive, in-your-face vibe; with this piece I really tried to keep a subdued and nuanced approach to sound design. I achieved this by limiting my palette to dial-up internet sounds, quiet mechanical noises, prepared piano samples, and an 808 drum machine.
This piece was actually a final for a class I was taking in my last semester of grad school. It was during the making of this piece that I learned I very much enjoy composing fixed media, contrary to my inner thoughts telling me that not performing the piece live would somehow lessen the experience.
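My actual routing ran between Reaper and Max/MSP, but the audio-to-visual mapping idea boils down to something like this numpy sketch, where a per-frame loudness envelope drives a brightness parameter (the specific mapping here is just illustrative):

```python
# Tiny sketch of tying sound to image: compute a per-video-frame RMS
# envelope from the audio and use it to drive a visual parameter
# (brightness here). The real routing was Reaper <-> Max/MSP; this only
# shows the audio-to-visual mapping idea in plain numpy.
import numpy as np

def rms_envelope(audio: np.ndarray, sr: int, fps: int = 30) -> np.ndarray:
    hop = sr // fps                       # audio samples per video frame
    n_frames = len(audio) // hop
    frames = audio[: n_frames * hop].reshape(n_frames, hop)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    return rms / (rms.max() + 1e-9)       # normalize to 0..1

def apply_brightness(image: np.ndarray, level: float) -> np.ndarray:
    # level in 0..1 scales pixel values: louder audio -> brighter frame
    return np.clip(image.astype(np.float32) * (0.25 + 0.75 * level),
                   0, 255).astype(np.uint8)
```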
The inspiration for this piece stemmed from watching a video of the infamous 'televangelist' pastor Kenneth Copeland declaring his own personal spiritual war
against the devastating, worldwide Covid-19 pandemic. The piece was an attempt to portray how the cult-like behaviors of Christian Nationalists at the time came across to me personally.
Throughout the piece you will hear audio snippets of him 'banishing' Covid-19, singing, and speaking in 'tongues,' all of which appear against a dystopian, almost void-like 3D animation I created in Jitter.
I assembled this piece by recording the visual output of Max via OBS and editing both audio and video simultaneously in Cockos Reaper.
I found that this process really granted me autonomy over the final product and allowed me to dial in how the audio and video synchronized and interacted with one another.
This performance served as the first live demonstration of my graduate project 'thwacK.' thwacK was an interactive interface for drummers and percussionists that used basic machine learning concepts to differentiate between multiple sound sources using one microphone.
If you watch the performance, you will see a single AKG 414 feeding my audio into thwacK, which manipulates the sample provided to it.
The program is trained before the performance on which sounds to expect, and several variables can alter those sounds further once the sound source has been identified.
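thwacK itself was built in Max/MSP, but the underlying concept can be sketched in Python: slice the mic signal at onsets, extract simple spectral features from each hit, and train a small classifier on hits you labeled ahead of time. The file names, labels, features, and model choice below are hypothetical stand-ins, not thwacK's actual internals:

```python
# Sketch of the concept behind thwacK: classify which sound source was
# struck, from a single microphone, using simple spectral features and a
# k-NN model. Everything here (files, labels, features) is illustrative.
import librosa
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def hit_features(y: np.ndarray, sr: int) -> np.ndarray:
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)  # one averaged feature vector per hit

# Training: short recordings of each sound source, labeled by hand.
X, labels = [], []
for path, label in [("snare.wav", "snare"), ("kick.wav", "kick"),
                    ("hihat.wav", "hihat")]:
    y, sr = librosa.load(path, sr=None)
    # Slice the recording at detected onsets so each hit is one example.
    onsets = librosa.onset.onset_detect(y=y, sr=sr, units="samples")
    for start in onsets:
        hit = y[start : start + sr // 4]  # ~250 ms per hit
        if len(hit) < 2048:
            continue                      # skip truncated hits at the end
        X.append(hit_features(hit, sr))
        labels.append(label)

model = KNeighborsClassifier(n_neighbors=3).fit(np.array(X), labels)
# At performance time, each detected onset would be classified the same
# way, and the predicted label routed to different sample manipulations.
```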
huskMeditations was composed much like closure. It is a deeper exploration of the ideas presented in closure, featuring a much cleaner, more in-depth notation system.
The instrumentation features what would otherwise be a traditional drumset, with the toms replaced by snare drums. The piece is performed with what drummers colloquially call 'broomsticks,' which allowed me to play and improvise at much lower volumes. This decision allowed the electronics to really shine amongst the acoustic elements of the composition.
This piece was featured in the 2021 New York City Electroacoustic Music Festival.
closure was the first piece I wrote for a very large and complex Max/MSP patch I was working on, which later became the final project for my graduate degree.
The piece called for a close-mic'd drumset, the signals from which were fed into a Max patch that manipulated a single sample of audio depending on which element of the drumset I played and how I played it.
The overarching goal of this piece, and the pieces that followed, was to facilitate interaction between the performer and the electronics. To better capture this interaction, I developed a
system of musical notation that gives the performer loose improvisatory guidelines to give the piece structure while still allowing them to make adjustments based on the feedback they receive from the computer audio.
Fun fact: the sample being manipulated in this piece is actually the shutdown sound from the Windows XP operating system.
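The real logic lives in the Max patch, but the 'one sample, many manipulations' idea looks roughly like this sketch, where the detected drum selects hypothetical playback parameters for the same source sample:

```python
# Sketch of closure's "one sample, many manipulations" idea: the same
# source sample is played back differently depending on which drum was
# detected. The parameter values are hypothetical; the real logic is in Max.
import numpy as np

MANIPULATIONS = {
    "kick":  {"rate": 0.5, "start": 0.00},   # slow, from the top
    "snare": {"rate": 1.0, "start": 0.25},   # normal speed, mid-sample
    "hihat": {"rate": 2.0, "start": 0.50},   # fast, later slice
}

def render_hit(sample: np.ndarray, drum: str) -> np.ndarray:
    p = MANIPULATIONS[drum]
    seg = sample[int(p["start"] * len(sample)):]
    # Crude varispeed: resample the segment at the chosen playback rate.
    idx = np.arange(0, len(seg) - 1, p["rate"])
    return np.interp(idx, np.arange(len(seg)), seg)
```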
waveTrain was composed for three percussionists, and was the first (and only) instance where I had enough proficient players available to not have to perform it myself.
Much like lessThanFidelity, this piece was composed for three performers (all percussionists in this case), each supplied with a tape player and a mixer. The idea for the piece evolved from writing a ton of marching band music for a local high school at the time.
I found that having the students play conflicting, simplistic rhythms, sometimes metered differently, created really interesting rhythmic content. I took this idea, applied it to a smaller instrumentation, and provided the performers with a shared stack cymbal.
I found that giving the performers a shared element amongst all the intertwining rhythms created the illusion of a sort of pseudo fourth performer.
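To make that phasing concrete: two simple patterns in different meters only realign after the least common multiple of their lengths, which is where the composite rhythm comes from. A quick sketch (the patterns here are made up, not taken from the score):

```python
# Two patterns in conflicting meters (3 and 4) drift apart and only
# realign every lcm(3, 4) = 12 pulses; the composite is where the
# interesting rhythmic content lives. Patterns are illustrative only.
from math import lcm

a = [1, 0, 0]        # pattern in 3
b = [1, 0, 1, 0]     # pattern in 4
period = lcm(len(a), len(b))
for i in range(period):
    pa = "x" if a[i % len(a)] else "."
    pb = "x" if b[i % len(b)] else "."
    print(f"pulse {i:2d}: player A {pa}  player B {pb}")
```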
Additionally, all of the elements were amplified through an octophonic speaker system with a generous amount of reverb added. Each player and tape player was assigned a localized group of speakers in the hall, strategically mixed to
give the audience a more spatialized experience.
The performers for this piece were Alex King, Ty Cundy, and Matthew Goodman.
This was the first piece I ever had performed live for an audience. This particular performance was for the Channel Noise Concert Series.
During this period of my life I was obsessed with cassette tape loops, and I wanted to find a way to incorporate them into a performance with real instruments.
The piece was composed for a guitarist, a drumset player, and a pianist, each outfitted with one of my cassette players and a small-format mixer.
The score instructs the performers to prepare cassette loops and pre-record loosely defined material onto them. Additionally, I had to create a custom parametric score that told the performers what to play on their traditional instruments, which tape to put in the tape player for a given movement, and how to manipulate the mixer to 'play' the tape loop.
This piece was premiered by Andres Correa, Eric Kollars, and myself.