Friday, November 25, 2016

Going Live

I've always been fascinated by how the atmosphere stratifies, and that it does so with such sharp boundaries.  Growing up in New Mexico, I was often treated to spectacular views of these layers, by virtue of the common 100-mile visibility and the mile-high -- and higher -- altitudes of the places where I lived.

This piece was inspired by my experiences during two trips I made this fall when I had the opportunity to observe the daytime sky a great deal, both from below on the ground and from within on the plane.  In the Land of Enchantment and southern Massachusetts, I was treated to incredibly varied and rapidly changing sky moods, and the light and the layers and the motion inspired me to express these experiences musically.

I'm very excited about this new piece for several reasons.  On a musical level, it's my first attempt at leaving behind the drone as a unifying structure, and I'm very happy with the result.  Some of the people whose music I admire most, even when their work is considered drone music, can vary the structure and timbre of a piece in evolving ways such that brief samples from different moments in the work might not sound related, yet, as the work unfolds, the changes feel like natural and intuitive developments.  I was pleased with the degree to which I was able to achieve that here.

On a technical level, it's the first piece I've written with Ableton Live, a new DAW that I recently invested in.  I had been growing increasingly frustrated with Logic Pro 9, both with the quality of its plugins and with its lack of an interface with Max.  Too, I had been seeking for some time a good MIDI controller interface:  an alternative to the standard keyboard with a good set of rotary encoders, launch buttons, and sliders -- and, ideally, a nice multitouch X/Y pad.  In a brief, serendipitous conversation with a local musician, I was reminded that Live interfaces seamlessly with Max and decided to give the trial version a chance, taking my idea for Layers of Sky as the guinea pig.

I was thrilled to be able both to import favorite Max patches into Live -- the super-long delay in the beginning of the piece came from the latest iteration of a patch I built when I could not find a commercial plugin that would let me go more than 10 seconds -- and to modify native Max for Live (M4L) patches to meet my needs.  In exploring Live, I was also reminded of the Ableton Push, which I had looked at previously in my investigations into hardware interfaces for Logic but had dismissed because it was so intimately tied to Live.  It's not an inexpensive piece of gear, but, as I fell more and more in love with Live, it made more and more sense to make the investment.  Before my 30-day Live trial was up, I was sold on the whole kit and caboodle.

A few technical notes about this piece in particular.  I used a range of native Live and third-party plugins, as well as some of my own samples.  These include several tracks running the Chromaphone 2 physical modeling synth, and Robert Henke's M4L granular synth Granulator II, which I set to chewing on a sample of one of my mother's Conniff wind bells.  The strings in the final section consist of my own viola playing (which, I think, is the first time that has made it into a final version of anything), plus a solo 'cello from Ableton's sampled orchestral strings.  Finally, I used two delays, the super-long filtered delay I mentioned above and a three-channel native M4L delay I modified, and then fed it all through Sean Costello's Valhalla VintageVerb, which also features a super-long (up to 70 second) reverb.

Right now, I am as excited about the path my music is taking as I have been in years:  I feel like I have the tools I need and can integrate them in efficient, flow-supporting ways.  I have several new works in mind (and new approaches to long-shelved ideas) and have already started what's next...

Sunday, October 2, 2016

The Circle Game

“I’ve been here before.  I feel like I’m going in circles.”  Who has not had that experience?  Sometimes it seems like we’re walking the same path, over and over.  On the other hand, it’s also a truism that “you can’t step in the same river twice.”

I’ve come to see how both of these can be true:  we can feel we are going in circles and yet also never be the same.  We can appear to be going around and around, repeating the same behaviors, having the same experiences, but in fact we are not, because hindsight itself makes each pass a new experience.  Our experience of now as being like the past includes that past, layer upon layer.  Rather than going in circles, a better metaphor might be that we travel in a helix, each trip around looking in some ways the same, yet moving inexorably forward.  Sometimes there are big, long patterns in which the helix is high frequency (loops are close together) and high amplitude (loops have a wide diameter), sometimes low frequency and low amplitude, sometimes different combinations.  Life is a series of helices, nested, stacked, winding and unwinding concurrently and in series.

This piece is something I first imagined in the winter of 2013/14; its long gestation has been due both to the technical challenges in realizing it and to an evolution in my musical sensibilities.  It’s not meant to be directly allegorical, as several of my pieces have been; rather, I sought to express musically the way the patterns of our lives can play off of each other, creating new patterns and disrupting others.

It’s best listened to with a good set of headphones or between well-separated speakers.

Building the piece started with an attempt to create as literal a helix as can be sonically represented in Max/MSP (indeed, I purchased the software specifically to make this piece); the result of that effort was the patch used to produce the low drone heard throughout the piece.  Apart from the obvious left-to-right panning and the conceit of volume and timbre working in tandem to approximate distance, careful listening will reveal that the pitch of this drone increases slowly throughout the piece, ending about a minor third higher than it started.  A strong reverb (a plate emulation, built by Tom Erbe in Max/MSP) was applied to the drone as well, creating a kind of shimmering in the overtones that I especially enjoy.  Two other voices were created in Max/MSP, entering at about the eight-minute mark and again at about 9’ 30”:  the first based on the same helix patch that was used to produce the bass drone and the second a sped-up Shepard tone that ended up sounding a bit like an air-raid siren.
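For the curious, the drone's behavior can be approximated in a few lines of code.  This is only a minimal sketch of the control curves described above -- the function name, the number of "turns," and the volume coupling are my own guesses, not the author's actual Max patch -- but it captures the idea of a pitch glide of a minor third coupled to circular left-right motion:

```python
import math

def helix_drone_controls(duration_s, base_hz, turns, steps_per_s=100):
    """Sketch of helix-style control curves: pitch glides up an
    equal-tempered minor third over the whole duration, while pan
    (and, with it, volume, standing in for distance) circles
    left-to-right once per turn of the helix."""
    n = int(duration_s * steps_per_s)
    minor_third = 2 ** (3 / 12)              # frequency ratio of a minor third
    curves = []
    for i in range(n):
        t = i / n                            # normalized time, 0..1
        hz = base_hz * minor_third ** t      # slow exponential pitch glide
        phase = 2 * math.pi * turns * t
        pan = math.sin(phase)                # -1 = hard left, +1 = hard right
        vol = 0.75 + 0.25 * math.cos(phase)  # louder when the helix is "nearer"
        curves.append((hz, pan, vol))
    return curves

curves = helix_drone_controls(duration_s=60, base_hz=55.0, turns=8)
```

In a real patch these curves would drive an oscillator's frequency and a stereo panner; here they simply illustrate how a single parameterized loop yields the circling, slowly rising drone.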

In recent years, I’ve become increasingly interested in bells; I am fascinated by the stretched and otherwise inharmonic overtones they typically generate.  I’m especially interested in alternative means of playing them, such as with “singing bowls” and bowed gongs.  Using mathematical models of the physical properties of various materials, good physical modeling synths (PMSs) can produce surprisingly natural bell tones and allow them to be played in unconventional ways, including some that would be impossible in the real world.  Logic, the DAW I use, has a native PMS, but I found it to be limited and not very natural sounding (sometimes you don’t want “natural” sounds, but it’s much easier to get a synthetic sound from a good PMS than a natural sound from a poor one).  All of the voices in Helix, other than those mentioned above, were created using a PMS called Chromaphone 2.  It’s not terribly intuitive to use, but I’m very happy with the textures I’ve been able to create with it; for most of them, I started with a metallic bar or plate as their primary “physical” component and then “excited” them using a bow-like function.

I’ve generally found Max/MSP to be much more intuitive (which is still not very) in the Max (control functions) objects than in the MSP (digital sound production) objects.  As a consequence, I have thus far used it mostly as a kind of robot-musician, playing virtual instruments, rather than as a means of generating timbres, as I had intended when I initially started working in it.  This is reflected in Helix both in the relative paucity of MSP-generated sounds (and the simplicity of the ones there are) and in my use of a Max-created MIDI controller that I used to give the “bouncing bell” Chromaphone voice its “bounce.”  Other performance-related aspects of the piece were controlled either from a keyboard or in Logic’s automation.

Finally, I was recently introduced to VintageVerb, a reverb plug-in.  In some of the Chromaphone-generated voices, I used a touch of the program’s native reverb, which is very nice, and I used Erbe’s reverb, mentioned above, for the bass drone, but I wanted an output-level reverb to tie the various voices together and give the piece a sense of expanse.  In my attempts to implement this, I became increasingly dissatisfied with Logic’s native reverbs, which generally sound muddy to me.  I learned about VintageVerb through a Max/MSP newsletter and initially considered it for another project -- among many other wonderful features, it has an outrageously luxurious 70-second reverberation period -- but when I began playing with it, I discovered that it was capable of producing a much clearer, smoother, and more natural-sounding reverb than what I’d been able to get with Logic’s reverbs.  The amount of reverb I added using it is small, but it provided what I felt was a necessary final touch.

I’ve learned a great deal putting this piece together, both technically and musically.  It’s hard to separate how much of that was as a result of my work on it and how much was a result of how long it’s taken.  In the end, though, I’m very satisfied with it and excited about the new directions I have been inspired to go as a result.

Tuesday, March 22, 2016

Theme and Experimentation

This is a piece I’ve been working on for several months.  It was initially inspired by a sonification by Milton Mermikides of a video of “pendulum waves.”  I had seen this particular video before and several others like it:  a set of pendula of slightly different but related periodicities are set in motion simultaneously and then viewed longitudinally, so, together, their differing periodicities create a visual wave and other patterns that eventually repeat.  However, Mermikides took it a step further and sonified the visual wave by coordinating the pendula’s periods with a marimba playing a D pentatonic scale.

When I saw Mermikides’ sonification, I thought, “Hey, I’ll bet I could do that in a Max/MSP patch.” It took quite a bit of futzing around, but I figured out how the pendulum waves worked and was able to build an emulation that followed Mermikides’ sonification note-for-note.  In the process, it also occurred to me that the emulator would work as a music controller.  Out of those ideas was born this piece.

The first section is the emulation of Mermikides’ sonification.  Following that are five experimental variations using Max patches based on or inspired by it as synth controllers.  I’ll discuss their structure and, where appropriate, original voices below.

It took me quite a while and several failed attempts to figure out the math for this.  Initially, I assumed a linear relationship between successive pendula, specifically, p+nx, where p = the period of the shortest-period pendulum, n = the pendulum number (1-16, in this case), and x = the difference in period between the shortest and the next-shortest pendula.  However, I found that, although my Max patches, effectively collections of virtual pendula, did interesting things, they never “wrapped around,” never repeated the way the real ones in the video did.  After puzzling about this for some days, it occurred to me that the waves manifested by the video’s pendula behaved analogously to the way waves in a vibrating string would, as in this diagram:

Since the ends of the string are bound, the waves are necessarily fractions of the total length of the string, or fundamental, and so have resonant relationships with each other.  In the case of the pendula in the video, I guessed that, if each one were to represent a partial of the same fundamental, it could explain how they wrap around as they do.  In other words, it might work if each pendulum’s period were 1/n of some fundamental period, where n = the number of the harmonic.

If you’re familiar with the overtone series, however, you know that the partials in the low end of the series are fairly far apart from each other frequency-wise (being an octave, a twelfth, two octaves, etc., above the frequency of the fundamental).  You’ll also note that in the video there are no adjacent pendula that swing in ratios of 1:2 or even 4:5, so they had to be pretty high up in the series.  And, indeed, they were:  I finally got my 16 virtual pendula to match those in the video swing-for-swing when I calibrated them with the lowest pendulum as the 51st partial of a fundamental period 60.5 seconds long.  In other words, the pendula in the video represented partials 51 through 66 of an overtone series with a fundamental of 0.0165 Hz.  Once I had this worked out, it was a relatively simple thing to connect the patch’s MIDI output to a marimba sample.  It is that output that I recorded for the Theme.
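The calibration just described can be sketched in a few lines of code -- Python here rather than Max, and the variable names are mine -- which also makes visible why this model "wraps around" where the linear one did not:

```python
# Reconstruction of the harmonic calibration: each pendulum's period is
# 1/n of a 60.5-second fundamental, for the partials n = 51 through 66.
T0 = 60.5                                      # fundamental period, seconds
f0 = 1 / T0                                    # ~0.0165 Hz

periods = {n: T0 / n for n in range(51, 67)}   # the 16 pendula
freqs = {n: n * f0 for n in periods}

# Because every period divides T0 evenly, each pendulum completes a
# whole number of swings per fundamental cycle, so all 16 realign --
# "wrap around" -- exactly once every 60.5 seconds.
cycles_per_T0 = {n: T0 / periods[n] for n in periods}
```

The slowest pendulum (partial 51) swings about once every 1.19 seconds and the fastest (partial 66) about once every 0.92 seconds, consistent with the fairly rapid motion visible in the video.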

Variation 1
The 1/n overtone series structure of the pendula raised the question:  what would it sound like to emulate a pendulum wave for the entire overtone series up to that partial (1/66)?  Building this was relatively simple, once I had worked out the patch for the theme:  I simply took the individual pendulum patches and stacked them, ending up with a 72-pendulum system, just because stacks of 12 were easier than stacks of 16.

A 72-note pentatonic scale would span a wider pitch range than was practical to listen to, so I thought it would be interesting to have the overtones themselves broken out, such that each pendulum might play a harmonic corresponding to its frequency.  Even this represented a wide range, so I pitched the note for the fundamental pendulum at the A at the bottom of a piano’s keyboard (27.5Hz), so that by the time we got to the top pendulum we’d still be well within human auditory range (1980Hz, or 4.5Hz sharp of the B three octaves above middle C).  I kept the fundamental periodicity about where it had been in the theme, at 0.0167Hz, or once every minute, making the periodicity for the top pendulum 1.2Hz.
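The pitch mapping above is simple enough to verify directly.  A short sketch (the names are mine; the numbers are from the text):

```python
# Variation 1 pitch mapping: pendulum n (n = 1..72) sounds the nth
# harmonic of A0 (27.5 Hz), the lowest A on a standard piano.
A0 = 27.5
harmonics = [n * A0 for n in range(1, 73)]

top = harmonics[-1]            # 72 * 27.5 = 1980.0 Hz
# B three octaves above middle C (B6), in equal temperament, sits 26
# semitones above A4 (440 Hz):
B6 = 440.0 * 2 ** (26 / 12)    # ~1975.5 Hz
sharpness = top - B6           # ~4.5 Hz sharp, as noted above
```

The 72nd harmonic of 27.5 Hz does indeed land just under 4.5 Hz above equal-tempered B6, comfortably inside the audible range.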

The MIDI instruments that I have mostly do not allow coding for non-tempered pitches, so, if I wanted to hear how the overtones were actually pitched, I needed to have the pendula output to a synth that could take frequency directly.  So, I added a simple softsynth to each pendulum:  a couple of sawtooth oscillators and a square wave, evenly balanced dynamically, with the duty cycle for the square wave set at about .65 and the whole thing passed through a low-pass filter set to 1760Hz cutoff (2-stage, so a shallow slope) and a resonance around .5.  This was run through an ADSR with a sharp attack and about two seconds of decay (0% S and 0ms R) — not much more than a “ping!”  My intention here was to make distinguishing the cycles and pitches of each pendulum as clear as possible, especially the highest notes.  Finally, just to make it a bit kinder on the ear, I added a bit of virtual plate reverb.  I then recorded this through a single cycle (1/1 of the fundamental).
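A very rough code sketch of that softsynth voice follows.  This is not the author's patch: the envelope time constant, attack length, and filter implementation are my approximations, and the filter's resonance is omitted entirely -- it is meant only to illustrate the signal path (oscillator mix, 2-stage low-pass, sharp-attack "ping" envelope):

```python
import math

SR = 44100  # sample rate, Hz

def ping(freq, dur=2.5):
    """Rough sketch of the voice described above: two saws plus a
    square at 0.65 duty cycle, evenly mixed, through a 2-stage one-pole
    low-pass at 1760 Hz (shallow 12 dB/oct slope; resonance omitted),
    shaped by a sharp-attack envelope with roughly two seconds of decay."""
    n = int(SR * dur)
    a = math.exp(-2 * math.pi * 1760 / SR)   # one-pole low-pass coefficient
    lp1 = lp2 = 0.0
    out = []
    for i in range(n):
        t = i / SR
        ph = (t * freq) % 1.0                # naive (aliasing) oscillators
        saw = 2 * ph - 1                     # the two saws are identical here
        sq = 1.0 if ph < 0.65 else -1.0      # square wave, duty cycle 0.65
        x = (saw + saw + sq) / 3             # evenly balanced mix
        lp1 = (1 - a) * x + a * lp1          # filter stage 1 (6 dB/oct)
        lp2 = (1 - a) * lp1 + a * lp2        # filter stage 2
        env = min(1.0, t / 0.005) * math.exp(-t / 0.35)  # "ping!" envelope
        out.append(lp2 * env)
    return out

samples = ping(220.0)
```

In the actual piece each of the 72 pendula would drive one such voice at its own harmonic frequency, which is exactly why a frequency-driven synth (rather than a tempered MIDI instrument) was needed.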

The result was not terribly musical, although I did find it interesting in other ways, especially as it reveals structures of the overtone series.  For example, you may notice that it appears to play more than one cycle.  What is actually happening is that what plays up through about 30 seconds is performed in reverse in the second 30 seconds; these two sections are separated by the sounding of the second harmonic/first overtone (second lowest pendulum) — in other words, the 1/2 period frequency.  So, in terms of the sequence of pitches, it’s a palindrome; the first half “winds up” a sequence which is then “unwound” in the second half.  If you look back at the visualization of the vibrating string above, this makes sense:  imagine tracking center-crossings from left to right and you’d get the same pattern.  If you listen closely, you can hear other parts of this pattern in the sequence, for example, the 1/4 pendulum also punctuates the ~15-second and ~45-second marks, and the top end pendula play in little runs together off and on.  Too, I found it interesting to hear how close together the partials in the upper range are, so much so that they almost make a glissando when played in order.
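The palindrome claim can be checked with exact arithmetic.  In this sketch (my own construction, assuming each harmonic n sounds at the times k·T/n within one fundamental cycle T), every event at time t has a mirror-image event at T − t on the same harmonic, so the pitch sequence reads the same forward and backward around the midpoint:

```python
from fractions import Fraction

# One fundamental cycle T; harmonic n (n = 1..72) sounds at times
# k*T/n.  Using exact fractions avoids any floating-point ambiguity
# when testing the mirror symmetry.
T = Fraction(60)
events = {(Fraction(k, n) * T, n)
          for n in range(1, 73) for k in range(n + 1)}

# Reflect every event around the midpoint of the cycle:
mirrored = {(T - t, n) for t, n in events}
```

Because the mirrored set is identical to the original, the sequence of pitches is a palindrome; and the quarter-period events of harmonic 4 fall exactly at the ~15- and ~45-second marks the post mentions.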

Variation 2
I felt that, although I was very proud of having “unlocked” the math behind the pendula in the video and thought that the result was interesting from a theory-of-sound point of view, the 1/n model was musically limited.  The most I saw that could be done with it was to sample sections of it, e.g., pendula 1/12-1/24 or 1/72-1/88 or 1/1-1/6, and possibly vary the speed, but they would always make essentially the same pattern.

Ironically, I thought my initial, incorrect model, p+nx, had more potential for creativity:  by varying the periodicity of a target pendulum (p) and the size of the difference between it and the next pendulum (x), I could produce a wide range of patterns.  This model would be the basis for the remaining variations.

For #2, along with the periodicity and difference ratios, I had also been playing around with key-center changes.  One configuration, with a fairly high periodicity for the highest-frequency pendulum (quarter note equals 250bpm, or a pulse every 100ms) and the same difference between pendula (i.e., the fastest pendulum pulses every 100ms, second every 200ms, third every 300ms, etc.) produced something that, rhythmically, reminded me very much of Steve Reich.  This is not surprising, as you could argue that each pendulum represents a polymetric pattern relative to the other pendula.  I also played with different key centers and programmed in a sequence that felt pleasing, even if it is not especially sophisticated.  This output was sent to a Yamaha piano sample I have, resulting in the above recording.
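The pulse grid behind this configuration is easy to sketch.  Here I assume the author's p+nx means pendulum n has period p + (n − 1)·x (so that the fastest pendulum has period p); the function and its name are mine:

```python
# Linear-model pulse grid with p = x = 100 ms: the fastest pendulum
# pulses every 100 ms, the second every 200 ms, and so on, so every
# pair of pendula forms a polymetric pattern (2-against-1,
# 3-against-2, ...), much as described above.
def pulse_times(p_ms, x_ms, n_pendula, span_ms):
    """Pulse times (ms) for each pendulum under the linear model."""
    grid = {}
    for n in range(1, n_pendula + 1):
        period = p_ms + (n - 1) * x_ms
        grid[n] = list(range(period, span_ms + 1, period))
    return grid

grid = pulse_times(100, 100, 16, 2400)
```

Unlike the harmonic 1/n model, nothing forces these pulses to realign within a musically short span -- the full grid only repeats at the least common multiple of all sixteen periods -- which is part of what gives the linear model its more open-ended, Reich-like character.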

Variation 3
This piece used essentially the same controller (actually an earlier, simpler version of the one used in Variation 2), but with two critical differences:  First, the primary pendulum periodicity was shorter than the differences between pendula (2:3 ratio), setting up a syncopated feel to the rhythm.  Second, by toggling the top (fastest) pendulum on and off separately from the rest and repeatedly starting and restarting the controller, I could “play” the emulator in an instrument-like way.  The result, to my sensibilities, has a more intentional, and therefore more musical, feel to it.  The voice was something I had originally created in Logic Pro 9’s ES2 for another piece some years ago; I never used it there, but really liked it.  I recorded myself improvising with the controller and sent Max’s MIDI into Logic to control the ES2 voice.

Variation 4
For this piece, I took the p+nx model and theme from Variation 2 and slowed them down quite a bit to quarter note equals ~40.  However, I did not make them precisely the same; the periodicity was 380ms and the difference between pendula was 375ms, which meant that the pendula would initially sound like they were playing together, but eventually drift apart.  Additionally, I took this variation as a chance to use a bell voice I had developed in Max/MSP and especially like.  The result, to me, sounds more aleatoric than the previous variations, especially as the 5ms difference between p and x progressively de-coordinates the pendula.
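The arithmetic of that 5 ms drift is worth spelling out.  Assuming, as before, that pendulum n's period is p + (n − 1)·x (my reading of the p+nx model), pendulum 2 swings every 755 ms, 5 ms short of twice pendulum 1's 380 ms, so each cycle pulls the pair 5 ms further out of step:

```python
# Drift between pendulum 2 and a perfectly "locked" 2:1 pendulum,
# with p = 380 ms and x = 375 ms as described above.
p, x = 380, 375

def period(n):
    """Period (ms) of pendulum n under the linear p + (n - 1) * x model."""
    return p + (n - 1) * x

def drift_ms(k):
    """How far pendulum 2's k-th pulse leads a true 2:1 pendulum --
    5 ms more on every cycle."""
    locked = k * 2 * period(1)   # where an exact 2:1 pendulum would pulse
    return locked - k * period(2)

drifts = [drift_ms(k) for k in (1, 10, 60)]
```

After 60 cycles (about 45 seconds) the offset has grown to 300 ms, nearly a full beat at the piece's slow tempo, which is why the texture decoheres so audibly as it unfolds.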

Variation 5
For this final experiment, I wanted to play with more scales than just the pentatonic or the overtone series and to arrange the pendula in something other than highest-to-lowest order.  I reconfigured the pendula such that the fastest pendulum (#1) would be the center pitch and the increasingly longer periods would alternate to either side, i.e., #2 would be next up from #1, #3 would be next down from #1, #4 would be next up from #2, #5 next down from #3, etc.  I then set up a mechanism to change scales periodically, beginning with a chromatic scale, then octatonic, major, minor, heptatonic blues, whole step, hexatonic blues, pentatonic, minor thirds, major thirds, fourths, and, finally, fifths, which the controller then cycled back through palindromically.  The velocity was allowed to vary increasingly from beginning to middle and then decreasingly from middle to end.  Middle C was retained as the tonal center through all of these changes.  For the voice, I chose a physical model marimba, rather than the sampled instrument from the theme; along with feeling like it was just a good instrument for the music, I liked the symmetry of using a marimba again for the final variation, with the twist of it being an entirely synthetic sound.  This piece also has a kind of aleatoric feel to it for me, although at times a flavor of intentionality seems to chime in, which I construe simply as artifacts of the tonal scales.
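The alternating layout reduces to a simple index-to-offset mapping.  A sketch (function name mine), following the description above -- #1 at the center, #2 one scale degree up, #3 one down, #4 two up, #5 two down, and so on:

```python
# Variation 5 layout: pendulum 1 takes the center scale degree, and
# successively slower pendula alternate above and below it.
def scale_offset(i):
    """Scale-degree offset from the center for pendulum i (1-indexed):
    0, +1, -1, +2, -2, +3, -3, ..."""
    if i == 1:
        return 0
    step = i // 2
    return step if i % 2 == 0 else -step

offsets = [scale_offset(i) for i in range(1, 10)]
```

Applied to any of the scales listed above with middle C as degree 0, this mapping keeps the fastest motion at the tonal center and pushes the slowest pendula toward the registral extremes.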

Overall, I am proud of this piece primarily because of the total hours invested in it, which are far more than for any other musical work I’ve done so far.  This is not important in a gratuitous sense of more hours = better, but rather as a reflection of my growing ability -- and confidence in my ability -- to see larger, more complex projects through.  Aesthetically, I think Variation 3 is the most interesting (indeed, I have some thoughts about building a more “performable” controller from it).  Timbrally, I’m very pleased with the bell tone synth in Variation 4 and it also is the result of many hours of experimentation.  Too, I’m proud of having figured out the math of the theme; this is not my strong suit and that I was able to work it out at all left me encouraged about future adventures in sound.  Much of the rest of the piece is not terribly musical or, to my ear, very interesting, but the project has from the start been an experiment, and the nature of such efforts is that some things work and some things don’t.  I am happy and grateful to be able to share the successes and the failures here.

Thursday, February 25, 2016

Electric Meditation

Short post about a (relatively) long piece (it's about 13 minutes).  This is something that was more or less tossed off in a few sessions.  I am, however, proud of it for a couple of reasons.  First, I'm just really happy with how it came out musically, despite the relatively brief recording and mixing time.  Second, two of the voices (four of the five tracks) I developed from scratch using Max/MSP, investing a considerable amount of time in doing so.  This, in turn, is particularly exciting for two reasons:  one, I originally got Max because I wanted to use it to develop sounds and so am very happy finally to be doing so successfully; two, taking those sounds and playing them polyphonically was a bit of an additional educational/technical leap that I also finally succeeded in making.

As I've discussed in previous posts, I am especially interested in the timbral aspects of music, and I'm learning that there is an entire class of minimalist music known as "drone" music that much of my taste and output fits into.  That is certainly the case here.

Some quick technical notes:  two voices, a bass based on square waves with duty-cycle sweeps plus a little triangle wave and a midrange based on saws with bandpass sweeps, were constructed in Max/MSP 7.1.  The choir that rides over the top of these is from Logic 9's native sampler, EXS24, and the whole thing was recorded and mixed in Logic.

Monday, January 4, 2016

Mom's Bells

I have inherited my mother's bell collection.  Thankfully, this did not require her death -- she's alive and well -- but I've been telling her for decades that I wanted her bells someday, and, since she and Dad are clearing out the house, 2015 meant the transfer of the bells to my care.

And what exquisite little clangers they are!  They range from giant 19" (not including the clapper) sheet metal wind chimes to tiny little 1cm jingles, with everything you can imagine in between.  Dozens of brass bells, animal bells, sleigh bells, gongs, crystal bells, even wooden ones, and they all produce the most delightful sounds.

When the bulk of the collection had arrived last fall, I spent an afternoon sampling some of my favorites.  I then took these samples and have been playing with them since, finally arranging them into a roughly eight-minute piece intended to show them off in a pleasant, meditative way.  Thus:

The principal bell here is a very large Conniff wind chime; it is constructed from three triangular pieces of sheet metal of different thicknesses (and thus having three pitches), struck by a nylon disk clapper suspended on a chain and normally activated by the wind.  In this piece, however, I played the chime like a bell, moving the clapper with the chain by hand to sound the different notes, but also rolling the clapper and dragging the chain along the edge of the chime for some lovely "extended" effects.  I also play a second, slightly smaller Conniff chime, a large dinner bell, several animal bells, and two sets of sleigh bells.  None of the bells' sounds are manipulated electronically except for volume and pan.

My intention here was, as I said, simply to enjoy the sounds of each bell and how they interact with each other.  I especially love the rich and often dissonant overtones that bells have; this piece was meant to give the listener time to explore them.  Too, I love the bright jitteriness of a cacophony of bells, and so included some of that in the piece as well.

So, this piece is for my mom:  Thanks, Mom!  I've always loved the sounds of your bells; I hope to make many new and wonderful noises with them to share with you.