
I had the opportunity to interview Jeff Kaiser, the trumpet-playing experimental electronics wizard, pfMENTUM label head, and professor of Composition and Music Technology at the University of Central Missouri, about his musical practice, the relationships in which he finds inspiration, and the questions that drive his unique and towering musical accomplishments.


For starters, can you tell us who you are and what you do?
My name is Jeff Kaiser. I am a performer, composer, and arts technologist who works mostly in improvisational settings. The music I love—and that has a strong influence on me—comes out of the experimental traditions in jazz and Western European art music. My preferred instrument is quartertone trumpet with electronics, but sometimes I just play the trumpet, and other times just electronics.

How did you get started patching in Max/MSP?
I started working in Max due to my extreme dislike (bordering on phobia) of carrying gear on tour. I first shared the story in a paper I presented at Spark in Minneapolis in 2006, “How I lost 150 lbs. thanks to Max/MSP!” The paper, in all its glorious naiveté, is still available at https://jeffkaiser.com/gear/. The basic storyline is: I am on tour with Andrew Pask in the UK, and I’m carrying around way too much gear. Two large cases, each weighing 75 lbs. Awful. Miserable. Especially after having a few drinks at the pub following a gig. Or after a long flight where the flight attendants keep giving you free drinks. Wrestling the bags and instrument cases into and out of planes, trains, and taxis can take a lot of joy from a tour. And then there is Andrew: sax on one shoulder, laptop and interface on the other. I wanted that. (I also didn’t want to troubleshoot failing boutique hardware on the road with no replacement anywhere to be found.) I was the model target for performance software. I explored the options and decided on Max for various reasons. What happened then (typical behavior for me, as my friends will note) is that I buried myself in it. Fueled on tea, vegan grub, and the occasional cigar, I immersed myself in the forums and patches for three months, and at the end of that time I had created a software patch that emulated my hardware rig to a close-enough degree. I began performing with it right away, sold my hardware, and have not looked back. My borderline phobia of heavy hardware continues to this day, with a preference for working in the box, and with (mostly) gear that doesn’t weigh much.

What is your performance rig and Max patch setup these days?
My Max patch, on the surface, appears not to have changed too much; the visual interface is very direct. On the inside, it keeps getting crazier and crazier. There is a series of twelve modules that take the mono input of my trumpet and process it into four discrete channels of whatever that module does (delays, granulation, distortion, and way, way more). These modules then feed into a variety of asynchronous four-channel phrase samplers and get further processed. While this is going on, my KaiGen software (video available at https://vimeo.com/jeffkaiser) is generating musical information that drives a Kontakt library made up of audio samples of the isolated mechanical internal sounds of the trumpet: valves being depressed, slides released, clunks, scrapes, hisses, et cetera. All of this is then spatialized using ambisonics, a full-sphere surround-sound technique.
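For readers who think in signal flow rather than patch cords, here is a rough sketch of that topology as Python pseudocode. It is a conceptual outline only, with hypothetical names and stand-in components, not code from Jeff's actual patch:

```python
# Conceptual sketch of the routing described above; every name is hypothetical.
import numpy as np

def process_block(mono_in, modules, samplers, spatialize):
    """mono_in: one block of trumpet audio, shape (block_size,)."""
    # Each of the twelve modules (delay, granulator, distortion, ...)
    # turns the mono input into four discrete output channels.
    quad_streams = [module(mono_in) for module in modules]        # each (4, block_size)
    # The quad streams feed asynchronous four-channel phrase samplers,
    # which capture and replay material at independent rates.
    resampled = [sampler(q) for sampler, q in zip(samplers, quad_streams)]
    # Everything is summed and placed in the sound field with ambisonics.
    return spatialize(np.sum(resampled, axis=0))

# Tiny smoke test with trivial stand-in components.
block = np.random.randn(512)
mods = [lambda x: np.tile(x, (4, 1)) for _ in range(12)]   # mono -> 4 channels
samps = [lambda q: q for _ in range(12)]                   # pass-through samplers
out = process_block(block, mods, samps, spatialize=lambda q: q)
print(out.shape)  # (4, 512)
```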

How do you use SoftStep with Max/MSP?
The SoftStep was one of the missing links in my gear. Up until that time, the decent MIDI foot controllers were SO HEAVY. They were also unidimensional, mostly on/off technology. The SoftStep is not only lightweight and easily portable, but the multiple axes of control via pressure allow the information to be parsed out in so many different and creative ways. Yes, it can function as a bank of on/off switches like the old MIDI foot controllers, but the buttons can also be mapped to information that varies: speed, volume, pitch…so much fun. I use it not only as a basic trigger, but as a core component of my phrase sampling, controlling the parameters mentioned above plus a few more: directionality, spatialization, snapping between asynchronous and synchronous modes, et cetera.
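The dual use described here, one pad acting as both a switch and a continuous controller, is easy to sketch outside of Max. Below is a minimal Python example using the mido library; the CC number and threshold are hypothetical, since SoftStep mappings are user-configurable:

```python
# Minimal sketch: read a SoftStep pad's pressure as a MIDI CC and use it
# both as a continuous control and as an on/off trigger. The CC number is
# a hypothetical assignment, not the SoftStep's fixed default.
import mido

PRESSURE_CC = 20          # hypothetical CC the pad's pressure is mapped to
TRIGGER_THRESHOLD = 64    # above this, treat the pad as a switch

with mido.open_input() as port:   # first available MIDI input
    for msg in port:
        if msg.type == 'control_change' and msg.control == PRESSURE_CC:
            # Continuous use: scale 0-127 pressure to a playback rate.
            rate = 0.25 + (msg.value / 127.0) * 3.75   # 0.25x to 4x speed
            # Switch-like use: cross a threshold to gate a sampler.
            gate = msg.value > TRIGGER_THRESHOLD
            print(f"rate={rate:.2f}  gate={gate}")
```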

What are your thoughts on composing and improvising in multi-channel environments? Does it require a different sensibility of performance?
I love working in more than two channels of audio. It requires a different way of thinking, panoramically, that makes sense to me. In stereo, if you have 48 sources located in the sound field and there is any complexity of movement, things get lost and can get muddy. Placing those same elements in a 360-degree environment allows the listener to still identify individual sources to a greater degree, which is something I like. That is, you can put your focal attention on a sound and follow its movement through complex trajectories, or you can just listen to everything at once. Of course, the temptation then is to keep upping the number of sound sources I am putting into the space. With granular stuff, you can end up with thousands of points of sound. Improvising requires an immediate relationship with musical elements, including spatialization, that is different from composing (where you can plan things out). I have a set of twelve trajectories that I use regularly, and I also do things like anchoring the sample library in the corners while putting the processing modules into swarming patterns, et cetera. I have written a few patches that allow me to use the musical information being generated live to spatialize the sound, but I have not incorporated that into my live patch as of this moment.
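As a concrete illustration of one simple trajectory, here is a sketch of first-order horizontal ambisonic (B-format) encoding of a mono source circling the listener. This is the generic textbook formula, not code from Jeff's live patch, and the function name is invented:

```python
# Sketch: encode a mono signal as first-order horizontal B-format (W/X/Y)
# while it orbits the listener. Generic encoding math, not Jeff's patch.
import numpy as np

def encode_circling(source, sr, revs_per_sec):
    """source: mono signal; returns channels stacked as (3, n_samples)."""
    t = np.arange(len(source)) / sr
    azimuth = 2 * np.pi * revs_per_sec * t     # angle sweeps over time
    w = source * (1 / np.sqrt(2))              # omnidirectional component
    x = source * np.cos(azimuth)               # front-back component
    y = source * np.sin(azimuth)               # left-right component
    return np.stack([w, x, y])                 # decode for your speaker rig

# One second of noise making two full revolutions around the listener.
sr = 44100
bformat = encode_circling(np.random.randn(sr), sr, revs_per_sec=2.0)
print(bformat.shape)  # (3, 44100)
```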


You also run the record label pfMENTUM, can you tell us a bit about the pfMENTUM philosophy?
pfMENTUM—and its sibling Angry Vegan Records—act as curated collectives that help document underrepresented musics and get those documents, from digital to vinyl, out into the world. We are interested in the more experimental forms, or music that blends traditional and experimental approaches. For example, we have an album of sea shanties coming out, but the songs go to very unexpected places.

What’s been inspiring you lately?
Relationships, and the dialogues that arise out of relationships, always inspire me. My friendship with David Borgo, and our duo KaiBorg (http://kaiborg.com/), is a constant source of inspiration. Our work together has had a powerful effect on how I play, and also on how I think—and articulate what I am thinking—about music. David has written about our duo in a fantastic collection on improvisation, Negotiated Moments: Improvisation, Sound, and Subjectivity. His chapter, “Openness from Closure: The Puzzle of Interagency in Improvised Music and a Neocybernetic Solution,” is engaging, not just because I’m in it, but because it makes me think about the role of nonhuman networks in the creative act. I get excited reading about these ideas and find them inspiring. We also have an earlier co-authored work on this idea, “Configurin(g) KaiBorg: Interactivity, ideology, and agency in electro-acoustic improvised music,” that is available free online.

There is a direct relationship between my work in Max and what I’m reading, writing, and thinking about: the words affect what I do. Struggling to define what improvisation meant to me led to the creation of my KaiGen-I software, improvising software whose algorithm was developed grammatically at first. The definition I work with is that improvisation is a live, interactive construction and ordering of sound, where the players/actors are not only constructing and ordering but are being informed, and presented with possibilities as to how to proceed, by that which is being interacted with, constructed, and ordered. This creates a feedback loop of possibilities where actors are both influenced and influencing, configured and configuring. The elements of influence are not just sound, but value systems, lineages, culture, and more. There is a cognitive portion that is as important as the aural component, but it is the aural component that is frequently given prominence; the cognitive portion is often left out of the conversation with words like “I just play,” or such. So KaiGen-I uses an algorithm that is a simple feedback loop: the sonic input of the player goes into the mix, the computer reacts and makes decisions, and then all of that goes back into the feedback loop. This algorithm, according to several researcher friends, should not work; it should just end up playing octaves. But because the space is also listened to, complexity creeps in and makes it all a bit messy. The KaiGen-I Max patch and Max for Live plugin (used in Ableton Live) are available for free on my website. You can take it apart and hack away at it as well.
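To make the feedback-loop idea concrete, here is a toy Python illustration of that structure: each decision reacts both to the player's last note and to what is already sounding in the room, including the machine's own past output. This is emphatically not the actual KaiGen-I algorithm, just a minimal sketch of the loop as described:

```python
# Toy sketch of the feedback loop described above, NOT KaiGen-I itself:
# output re-enters the "room" and shapes every future decision.
import random

def next_pitch(player_pitch, room):
    # React to the player: start from a simple interval off their last note.
    candidate = player_pitch + random.choice([-12, -7, -5, 0, 5, 7, 12])
    # "Listen to the space": nudge toward the center of everything sounding,
    # which keeps the loop from settling into bare octave-chasing.
    if room:
        center = sum(room) / len(room)
        candidate += 1 if candidate < center else -1
    return max(0, min(127, candidate))  # clamp to the MIDI range

room = []      # a crude stand-in for the sounding space
player = 60    # in performance, this would be the live trumpet input
for _ in range(16):
    pitch = next_pitch(player, room)
    room = (room + [pitch])[-8:]   # remember the last eight events
    player = pitch                 # feedback: output becomes input
    print(pitch)
```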

Other relationships that inspire me: my colleagues at the University of Central Missouri, who are incredibly hard at work on their creative practice/research in the midst of a heavy teaching load. In particular, Eric Honour is able to find the time to create new compositions, Max patches, and just a load of wild stuff. For example, he just created and performed a new work for Max and fences: big chunks of fencing from Home Depot, with contact mics on the different types of fences, performed with percussion mallets, brushes, and sticks. Elisabeth Stimpert is a remarkable clarinetist and improviser at UCM; we recently collaborated on an improvisatory work called Wise Toupée. Jake Sentgeorge, an operatic tenor at UCM, and I have also collaborated on building him an instrument for processing his voice that is based on Max and the Push 2. We have a recent work called Etched Tread of Charcoal Teeth. You can see the scores for this and other of my works at https://jeffkaiser.com/scores/.

Your duo project with Trevor Henthorn, “Made Audible,” recently had a residency at STEIM (steim.org). What’s the process of getting a residency at a research lab? What did you do?
The process of getting a residency seems direct, i.e., you apply. But of course, it is always much more than that: does your work fit the goals of the organization? Do you have a track record of completing projects? Do you have letters of support from known people in the world of that organization? Are you known as an individual to that organization? I had dreamed of working at STEIM as a young college student in the ’80s, but it seemed unattainable. In 2008, David Borgo invited me to join him on a summer tour in Sweden and The Netherlands, which included a stay at STEIM. It was an amazing time. Since then I’ve had five or six residencies there, and I was fortunate to have them at the old facility, which provided guest housing, studios, artistic and technical support, and usually an opportunity to be part of a concert. STEIM has since been forced to move due to budget cuts to arts organizations by the Dutch government. Trevor and I were very grateful to have one of the last residencies at the old location. There, we spent our time developing our Max for Live plugins: for me, that meant the beginnings of the KaiGen Suite (available for free at https://jeffkaiser.com/max/), while Trevor focused on his TrevoScrub plugs, which sonify data from MySQL databases. Our duo, Made Audible (http://madeaudible.com/), performs with these plugins, using Ableton Live and the Push 2, turning probabilities and other data into sound. It is so much fun. We recently performed in Tijuana, Mexico at Tres Generaciones, a 12-plus-hour event.
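As a rough illustration of what "sonifying database data" can look like in the simplest case, here is a minimal Python sketch that maps one numeric column from a MySQL table onto MIDI notes. The database, table, and column names are all hypothetical, and this is a generic example of the idea, not TrevoScrub's actual implementation:

```python
# Generic data-sonification sketch: turn rows from a hypothetical MySQL
# table into MIDI notes. Not TrevoScrub's actual method.
import time
import mido
import mysql.connector  # pip install mysql-connector-python

conn = mysql.connector.connect(user="user", password="pw", database="demo")
cur = conn.cursor()
cur.execute("SELECT value FROM measurements ORDER BY id")  # hypothetical table
values = [float(row[0]) for row in cur.fetchall()]

lo, hi = min(values), max(values)
span = (hi - lo) or 1.0
with mido.open_output() as out:  # first available MIDI output
    for v in values:
        note = int(36 + (v - lo) / span * 48)  # scale data to MIDI notes 36-84
        out.send(mido.Message("note_on", note=note, velocity=90))
        time.sleep(0.15)                       # one event every 150 ms
        out.send(mido.Message("note_off", note=note))
```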


Thanks for your time and wealth of information, Jeff! Wishing you the best of luck creating and spreading your musical practice far and wide.
Find more about Jeff’s music at: https://jeffkaiser.com/