Friday, December 6, 2013

Aurrery

Here's a new piece!  I invite you to listen to it before reading my comments: 



This came from a flash inspiration I had while driving home one night (something I spend a lot of time doing these days), when Venus was blazing in the sky well after sunset.  The sight got me thinking about the orbital periods of the planets and how they relate to each other, and it occurred to me that these relationships might be represented aurally, in a kind of musical orrery.

Over Thanksgiving, I spent several hours pulling numbers off of Wikipedia into a spreadsheet and figuring out how to get the ratios of the planet parameters to fit within the range of human hearing and the limits of my softsynth modules.  I had decided to represent the orbital periods with LFO-controlled LPFs (making a wah-wah), while the pitch of each line would represent the mass of each object.  The orbital periods converted easily enough to frequencies within the range of my LFO, but the range of the masses was wild:  if I assigned Jupiter the lowest pitch, say a nice fat 32 Hz, Mercury would be about 185 kHz.  So I took a page from my statistics classes and did a square-root transform on them, and they fit very nicely!
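For the curious, the transform is easy to sketch in a few lines of code (this is a reconstruction of the idea, not my actual spreadsheet; the masses are approximate published values, and only the 32 Hz anchor for Jupiter comes from the piece itself):

```python
import math

# Approximate planetary masses in kg (standard published values,
# not necessarily the exact figures I pulled into the spreadsheet).
masses = {
    "Mercury": 3.30e23,
    "Venus":   4.87e24,
    "Earth":   5.97e24,
    "Mars":    6.42e23,
    "Jupiter": 1.90e27,
    "Saturn":  5.68e26,
    "Uranus":  8.68e25,
    "Neptune": 1.02e26,
}

BASE_HZ = 32.0  # Jupiter, the heaviest planet, gets the lowest pitch

def pitch_linear(mass_kg):
    # Pitch inversely proportional to mass: lighter planet -> higher pitch.
    return BASE_HZ * masses["Jupiter"] / mass_kg

def pitch_sqrt(mass_kg):
    # Square-root transform compresses the ratios into the audible range.
    return BASE_HZ * math.sqrt(masses["Jupiter"] / mass_kg)

for name, m in sorted(masses.items(), key=lambda kv: -kv[1]):
    print(f"{name:8s}  linear: {pitch_linear(m):>9.0f} Hz   sqrt: {pitch_sqrt(m):>7.1f} Hz")
```

Run it and you can see the problem and the fix at a glance: mapped linearly, Mercury lands far above 150 kHz, but after the square-root transform every planet sits comfortably within human hearing.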

This was all well and good, but it wasn't very musical:  if you turned them all on at once, it was pretty cacophonous, kind of like Philip Glass, Steve Reich, and John Adams all going after each other with Wiffle Bats and piped through a tape delay.  Not to say that I don't enjoy a good cacophony now and again, mind you, but it needed, well, something. 

You can decide for yourself whether I succeeded in creating a passably engaging piece of music here; I was personally surprised by how well it turned out.  I filled out the family, of course, with tracks for the sun, the Asteroid Belt, a couple of innominate comets, and the Kuiper Belt.  As I worked on the piece, I imagined myself flying slowly (relatively speaking) outward from inside Mercury's orbit, through the rocky planets, across the Belt, around the Giants, and finally ending among the KBOs.  As I often do, I relied on texture to provide interest, playing with the wave forms (pre-LPF) and a few simple effects to create a voice for each object.  Overall, the piece is fairly representational, but not strictly so.  I hope you enjoy listening to it as much as I did making it. 

This was created using Apple Logic Pro 9, its native ES2 softsynth, TempoRubato's NLog PolySynth, and sounds shared on Freesound.org.

Tuesday, October 29, 2013

Perfect Existential Angst

Like many of us, I studied Macbeth in high school, along with a handful of other Shakespeare plays, but, also like many of us, I didn't really grasp it at the time.  In fact, it wasn't until the last year or so, when I've been on sort of a Bard binge, that I've come to feel I can really appreciate the play.  I think one has to accumulate a certain number of years to understand the fears underlying the self-destructive ambition that drives Macbeth and his wife.  In particular, the existentialism of his soliloquy in Act V, scene 5, upon learning of Lady Macbeth's suicide, demands of the reader/audience at least some awareness of one's own mortality. 

Tomorrow, and tomorrow, and tomorrow,
Creeps in this petty pace from day to day
To the last syllable of recorded time,
And all our yesterdays have lighted fools
The way to dusty death. Out, out, brief candle!
Life’s but a walking shadow, a poor player
That struts and frets his hour upon the stage
And then is heard no more. It is a tale
Told by an idiot, full of sound and fury,
Signifying nothing.

The more I saw productions of the play this past spring and summer, the more I was moved by the perfection with which Shakespeare conveys the inherent meaninglessness of life, the irony of existence.  Here is a man (Macbeth) who, as nearly all of us do, sought meaning outside himself, and is forced to face the empty-handedness of his grasping attempts.  (Fortunately, most of us do not come to this eventuality with the cruelty and finality that Macbeth does.) 

I do not find this speech dispiriting; rather, to me it describes the blank canvas of our lives, the surface upon which we may express whatever we wish.  Life does not come with meaning; if we want it, we must make it ourselves.  As, again and again, I listened to great (and not-so-great) actors say these words, as I read them to myself, I began to hear music with it.  Eventually, it became this: 



As you can tell, this is a very different piece from what I've been doing, but it was a lot of fun and a new kind of challenge to compose.  I was inspired by the words in ways I have not experienced before.  I don't think of myself as a songwriter and have gotten further away from music with words in it as I've gotten older; nonetheless, this really grabbed me.  

It was written as a duet for tenor and viola.  Unfortunately, as I confirmed in the attempt to record the piece, I am neither the tenor nor the violist that the music requires; hence the electronic version.  This performance was constructed in Logic 9 using, for the voice, the native EVOC-20 digital vocoder over a sawtooth wave from the native ES2 digital synth, and Native Instruments' Kontakt Player plug-in with the Garritan Personal Orchestra's solo viola sample.  Despite the piece's conceptualization as acoustic, I endeavored not to shy from the electronic sound; I believe it works reasonably well, though different in character from my original intent.  In composing it, I started with the vocal line and struggled for several weeks with how it should be accompanied.  After futzing with a lot of different ideas, I began to hear a simple accompanying line, which I worked out in its entirety before deciding -- realizing, really -- that it should be played on a viola.

As it was my original intention that it be performed acoustically, I would be thrilled if someone who has the chops I lack is interested in playing it.  I have the score and would happily share it with anyone who wished to take it on; I would only ask for a recording of the performance, either audio or video.  Just contact me via the comments. 

Friday, June 28, 2013

On Traveling and Distance

This piece started out with a very different intent and mood from how it ended up.  Initially, I was striving for something pretty dark, which is where I was at the time.  However, as I revisited it over the last eight weeks or so, I began hearing other things in it; it called from unexpected directions.  The experience was a lot like what I've heard from novelists talking about how characters develop:  you start, but they tell their own stories, which you are privileged to hear first and record.  So the dark tritones with all the overtones and the saw pulse with the pinging echo that I initially imagined being the foundation of a brooding meditation began demanding a more present, less introverted evolution.  What could I do but listen?



The title is also a reflection of this process.  The initial name, which actually made itself known at the same time the original ideas for the piece did, became not merely inapplicable to the final version, but actually felt counter to it.  The current title came to mind as I was working on it this evening and seemed to fit perfectly its new mood.  It may merely be the fact that I was primed to think of it because I have been reading about recent developments in and corrections to scientists' understanding of exactly where Voyager 1 is, but, regardless, it felt right.  It was not merely a descriptive name for the piece as it revealed itself, but a metaphor for how my own life feels right now:  passing a profound but poorly demarcated boundary, crossing into a new phase in a long, important journey.

This was performed on Arturia's Moog Modular V 2.6, controlled in real time variously by keyboard and Lemur 4 for iPad; it was recorded and tweaked in Apple Logic Pro 9.

Thursday, June 6, 2013

Messing Around

One thing I've been trying to do in organizing my studio is to set myself up so I can be more improvisatory in my music-making.  Initially, I didn't have a controller keyboard attached to my DAW; it took a while, but now I do.  I also got Lemur for iPad a while back and have spent quite some time playing with different configurations for that; I recently settled on one that covers a good bit of functionality for the Moog Modular V.  And getting to know one softsynth reasonably well (the MMV) has been important, too, making it easier for me to figure out how to create the sound that I want.  String all of this together, and music-making looks a lot more musical and a lot less like programming.

This makes it easier to create the kind of music I've been interested in making from the beginning.  At first, I thought the way into that sound was through careful -- even obsessive -- attention to structure and detail.  It turns out I'm not so good at that; I get overwhelmed and lose my focus and inspiration trying to manage tiny particles of music.  After reading some of Morton Subotnick's writing and hearing/witnessing a lecture/performance of his, I've discovered that I can get much closer to what I hear in my head by just messing around, creating some imperfect but honest expression of what I feel and then going back and pushing, prodding, tweaking it into shape.



This piece started with a couple of sounds I heard in my head:  I wanted to see how closely I could synthesize a dripping faucet, and there was a kind of crash sound that I made using the LMMS in In Three II that I really liked and wanted to try to recreate in the MMV.  As I worked on the faucet sound, I came up with the sound you hear here and decided I liked it more than a faucet.  I then controlled this "singing faucet," as I called it, with the sequencer in the MMV, coming up with a semi-random pattern controlled by two out-of-sync LFOs to trigger the sequencer on and off.

While looking through some of my old sound designs in the MMV that I might use as a basis for the crash, I found this bell-like sound I liked and added some noise to it, running it through the LPF with the frequency tied to the keyboard.  However, instead of controlling it with the keyboard, I used a pad grid I built in Lemur with a 1/2 step (x-axis) by tritone (y-axis) design.  As I messed with these, I began to imagine a shooshing sound cycling in and out; I implemented this using the MMV's pink noise run through its Bode frequency shifter (the change in the phase gives it the shooshing) and controlled it on the Lemur.

After that, it seemed to want some thick, dark, bottom notes; this was just a series of the MMV's square-wave oscillators tied together, with one of them taken out of phase periodically to give it some unpredictability, and then run through the LPF.  Using the Lemur, I was able to control the LPF's frequency and resonance simultaneously.
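The two-LFO trigger trick is easy to illustrate outside the MMV.  When the two rates form no simple ratio, the gate (open only while both LFOs are in the positive half of their cycle) drifts and never quite repeats.  This is just a sketch of the principle in code; the rates are invented for the demo, not taken from the actual patch:

```python
import math

def lfo(freq_hz, t):
    # A free-running sine LFO sampled at time t (seconds).
    return math.sin(2 * math.pi * freq_hz * t)

def gate_open(t, f1=0.31, f2=0.47):
    # Gate is open only when BOTH LFOs are in their positive half-cycle.
    # 0.31 and 0.47 Hz are arbitrary, deliberately non-simple rates.
    return lfo(f1, t) > 0 and lfo(f2, t) > 0

# Sample the gate at sequencer-step times (one step per quarter second)
# to see the semi-random on/off pattern the sequencer would receive.
pattern = "".join("x" if gate_open(step * 0.25) else "." for step in range(32))
print(pattern)
```

Printing a longer run makes the point: the on/off pattern keeps shifting instead of settling into a short loop, which is exactly the kind of unpredictability the sequencer triggering needed.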

What's exciting about this for me is that it's much easier to produce expressive sounds with this approach, and it sounds much more natural, too.  The real-time nature of it makes it feel more like a performance and less like I'm Ray Harryhausen's apprentice.  I hope you enjoy it!

Wednesday, May 1, 2013

Making Room

I decided to experiment with SoundCloud for posting new music.  It's a great place to post works, but it's not really a blog.  I like having the space to write about what I post and being able to customize my page, but Blogger doesn't allow uploading audio files, so you have to make videos, and the videos they import get compressed, so they sound like crap.  The solution is to post videos on YouTube, which I've been doing, and link them to Blogger, but all that work to make a video is a pain and superfluous, since none of my content is visual.  On the other hand, it's nice having an object to put in the text to mark the music, like "Here it is!" and Blogger doesn't have an equivalent object for SoundCloud.  Maybe I could have a pic of a crow or something and attach the link to that.  Dunno; we'll see.

In the meantime, here is a new piece:  I call it "(more space)".  It's a quickie and has lots of flaws -- it's barely mixed, there are several places where the gain is still too high, and the ending is clunky, just to name a few -- but I felt the need to do a piece in one sitting and get it published.  Things have been pretty challenging in my little corner of the universe recently, and it felt good to focus on my inner ear and just get something out into the world.

The piece is best listened to on a system with a good, clear bass; it's not bass-heavy, but its low end knits the whole thing together.  It represents the sort of thing I'm most interested in doing:  almost exclusively timbral in nature, minimally rhythmic, minimally melodic.  Harmony (in the broadest sense of what happens when more than one pitch plays at a time) cannot be ignored, but it's not emphasized either.  I find how sounds change fascinating, and I aim to produce music that draws the ear into that.  I hope you find something that draws you in here.

Tuesday, March 19, 2013

Heir on a G String

Looking at the DAW file, I see it was nearly a year ago when I started this.  It happened one night when I was in the grocery store and thought I heard the Bach "Air" from Suite #3 (BWV 1068) playing over the store sound system -- by a 60s blues band.  As I listened more carefully over the reverberating shoppers' din, I realized it wasn't Uncle Johann, but Procol Harum playing what for many qualifies them as one-hit wonders.  It struck me then that their instrumentation would actually make a nice ensemble for the Bach "Air."  (And, yes, PH seemed to have "inherited" from Mr. B. some especially lovely harmonic and melodic structures for their song.)  Thanks to the wonders of modern technology, a short trip home allowed me to assemble the necessary virtual performers and have a go at it.


You will probably notice a number of problems with it:  the expression in the organ part is uneven (at best), and I did nothing with the tempo except to put the ritardando in at the end (live musicians would have varied it slightly for expression).  Given the number of hours I put into it, you might expect more nuance, but, as an experiment and self-tutorial, it served its purpose and was fun to do. 

There is little for me to say about it technically.  I didn't do much to modify the emulators' presets (I know:  "Presets are for the weak"), but my interest here was not in sound design, but in learning how to coax musical expression out of a black box.  My principal avenues for this were velocity (which was the only expressive control for the guitar and bass tracks) and the "Expression" parameter on the tonewheel organ.  The latter is essentially a central portion of the total volume range; it does not affect the timbre as a true swell might, thus the sense of the organ moving away and coming closer, rather than getting softer or louder as such. 

So, in the end, it's down to being unwilling not to post something I put so much time into, even if it's merely the result of an exercise, rather than a small piece of my soul.  Still, I hope it is enjoyable. 

Sunday, March 10, 2013

11 Mar 2004

First:  yes, it has been nearly a year since my last post, and more than that since my last new piece (which was itself a very modest effort, to say the least).  Paradoxically, it's not been for lack of inspiration:  I've been working on dozens of projects, some of which have fallen by the wayside, some of which have been ignored in favor of new ideas but to which I plan to return, and some of which I'm still working on.  'Twas ever thus.

The piece I'm posting today is deeply personal; I debated with myself about putting it up.  In the end, I decided to go forward because I'm rather proud of it.  It is the first piece I've done with any spoken part (the voice in "Cars" was synthetic).  Also, the text is based on an eponymous journal entry from a particularly challenging time of my life.  Using it for a piece is an idea I've had in my head for some time now; I only just noticed that the date of its fruition is (roughly) the anniversary of its source. 


I initially intended to include the text as a scroll in the video, but decided that the act of reading interfered with the aural space I wanted to create.  I recommend listening to the piece first, without reading the text, then, if you wish, go back and listen again with the text.  It is as follows: 

I am walking in a dismal bog under dark branches and an overcast sky.  Next to me is a small, brown boy dressed in ragged cutoffs and a grubby red t-shirt.  He carries a white plastic bucket of rotting, infected, poisonous crabs.  He ladles crabs into the bog every few steps. 

As we walk, I sink deeper into the muck.  Eventually, the bog bottom dives down into the water and the water begins to move.  I leave the boy behind and follow the stream.  I am soon carried off. 

The clouds clear and the trees become verdant in the sun, the air sultry.  The stream is covered with tiny green petals, obscuring the water below.  I fear what I cannot see, but the stream is beautiful. 

I reach out an arm to stroke the water and it is seized by two unseen hands.  Frightened but deliberate, I lift my arm from the water and pull up a man, gasping for air as if nearly drowned.  He struggles, coughing, clinging desperately to me for a few moments.  He is naked and hairless and looks as if he were made of riverbottom. 

Soon, he is able to breathe normally.  He tells me he has been underwater for two and a half years.  He had tried to kill himself, but then refused to die.  He was very frightened. 

We drifted downstream together, he in my arms.  As we bobbed gently in the water, the mud washed from him, revealing his pink, wrinkled flesh.  Bits of him had died during his long immersion and this skin began to fall away into the stream. 

The water widened.  We began to see houses along the banks, a woman hanging laundry in her backyard.  


I leave the reader to interpret its meaning (note the duration of the man's immersion). 

Technically, I'm pleased with the result, even if it is a simple piece.  It is my first experiment with recording my voice and it went better than I expected (not to say there aren't problems).  Also, the realization of the piece matches closely what I heard in my head.  This represents a small triumph in part because it indicates that I'm getting more facile with the technology:  I typically spend a frustratingly large portion of time on a project figuring out how to do what I want (which sometimes ends in me giving up and compromising or even quitting the project).  With this piece, I was able to develop the synth sound I wanted, as well as the vocal effects I had in mind, relatively quickly. 

As I have several pending pieces I'm excited about, with luck (and energy) my next post will come sooner.