Program Notes

Seven Words, Seven Worlds

Seven Words, Seven Worlds is a piece for 12 singers that was premiered by OSSIA New Music on October 25, 2008. The following are the program notes from the premiere.

“The germ of this piece, the all-influencing cell from which the piece was birthed, is the ‘sameness’ of dissimilar things. Traditional ideas of unity and harmony are largely abandoned in favor of a piece-view that sees unity and harmony in the sameness of everything, no matter how disparate. This means that there are, amongst the musical organs of this work, those that are traditionally constructed using, for example, the idea of canon, and those that are generated by far less conventional methods. These sections, however, are not viewed as contrasting or conflicting, nor are they viewed as superior or inferior. The sameness value-system that informs this work stipulates that from note-to-note and section-to-section, sounds are simply sounds and pieces are pieces, and everything that exists is a measure of itself and nothing more.”

“I must rely on the acceptance of a Zen precept of immediacy if I am to make any further attempt to define this ‘sameness’ beyond the above. The precept is something like ‘That which is true and immediate is true, and that which is true but labored over is likely to lose its perceived trueness.’ As concerns this piece, trueness simply defines the honest creation of a self-reliant and self-fulfilling bit of music. While I cannot say that I did not labor over parts of this work, I can honestly attest to my willingness to get out of my own creative way. The individual sections of the piece were each composed from beginning to end before the next was contemplated, and only after all were composed were they placed together in as honest a sound-universe as I could conjure up for them. Practically, this honesty is embodied in the transitionless nature of the whole.”

“The piece is, in the end, a putting-together of seven sections of music with five ‘interludes.’ The material for the larger sections was composed with no thought for what came before or after. They are self-contained musical worlds that exist and operate under their own unique set of conditions. They share one common element: each is motivated by only one condition or state. As a mundane example, a section’s MO could be ‘gravity,’ where all of the notes could only go down from where they started. On a more interesting and larger scale, this idea could be expanded to control sound objects in a 3D virtual space, where gravity would mean that the pull of one musical idea affects that of another in a rule-based sound environment, like planets circling a sun. A ‘state’ that is employed in one of the sections of the work is controlled chaos.”
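To make the ‘gravity’ example concrete: a rule under which every note may only fall from its starting point can be sketched in a few lines. This code is purely illustrative and is not from the piece; the function name and the drop-size limit are my own inventions.

```python
import random

def gravity_line(start_pitch, length, max_drop=3, seed=None):
    """Generate a line of MIDI pitches that may only descend.

    Illustrative only: 'gravity' here means each note is at or
    below the previous one, falling by 0 to max_drop semitones.
    """
    rng = random.Random(seed)
    line = [start_pitch]
    for _ in range(length - 1):
        line.append(line[-1] - rng.randrange(max_drop + 1))
    return line

# Example: an eight-note line falling from C5 (MIDI 72)
print(gravity_line(72, 8, seed=2))
```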

“The interludes were constructed differently. The basic pitch material for the interludes was generated by an algorithm I wrote in a computer synthesis language called SuperCollider. This algorithm, when supplied with the appropriate information (number of voices, number of notes desired), spits out a string of pitch-class numbers. I then took these numbers and employed them in different ways to achieve different musical textures and styles from interlude to interlude.”
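The composer's actual algorithm was written in SuperCollider and is not reproduced here. As a hypothetical illustration of the idea, a generator that takes a number of voices and a number of notes and spits out strings of pitch-class numbers might look like this Python sketch (the function name and the no-immediate-repetition rule are my assumptions, not the composer's):

```python
import random

def pitch_class_strings(num_voices, num_notes, seed=None):
    """Return one list of pitch-class numbers (0-11) per voice.

    Hypothetical sketch: draws random pitch classes for each
    voice, avoiding immediate repetition within a voice.
    """
    rng = random.Random(seed)
    voices = []
    for _ in range(num_voices):
        line = [rng.randrange(12)]
        for _ in range(num_notes - 1):
            nxt = rng.randrange(12)
            while nxt == line[-1]:  # no repeated adjacent pitch class
                nxt = rng.randrange(12)
            line.append(nxt)
        voices.append(line)
    return voices

# Example: four voices, eight pitch classes each
for voice in pitch_class_strings(4, 8, seed=1):
    print(voice)
```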

“Some of the musical material, the constructive ideas and realization thereof, are very simple. Some are very complex. I hope that in experiencing this work, the listener will gain some insight into the ideas of sameness as described above. More than this, I hope the listener will enjoy the piece from moment to moment.”

Evolution I

Recorded August 19 through September 9, 2012
Scott Petersen, violin

This set of pieces, Evolution 1a, 1b, and 1c, was recorded over a period of three weeks. The works themselves are a blend of musical intuition (experience), planning, and chance. They are one-time creations and have not undergone post-production editing. The basic premise of the series is the exploration of a single musical idea while additional ideas are introduced by chance. Each additional element is explored and either incorporated or discarded. This process mimics algorithmic processes such as Markov chains and cellular automata, but in an instinctual, non-formalized way.
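For readers unfamiliar with the comparison, a first-order Markov chain chooses each new state based only on the current one, much as each improvised gesture here grows out of the one before it. The sketch below is purely illustrative; the playing-technique states and transition probabilities are invented for the example, not taken from the pieces.

```python
import random

# Invented transition table: from each playing state, the
# probability of moving to each other state (rows sum to 1).
TRANSITIONS = {
    "sustain":   {"sustain": 0.6, "tremolo": 0.3, "pizzicato": 0.1},
    "tremolo":   {"sustain": 0.4, "tremolo": 0.4, "pizzicato": 0.2},
    "pizzicato": {"sustain": 0.5, "tremolo": 0.2, "pizzicato": 0.3},
}

def walk(start, steps, seed=None):
    """Generate a chain of playing states, one weighted choice at a time."""
    rng = random.Random(seed)
    state, chain = start, [start]
    for _ in range(steps):
        choices, weights = zip(*TRANSITIONS[state].items())
        state = rng.choices(choices, weights=weights)[0]
        chain.append(state)
    return chain

print(walk("sustain", 10, seed=3))
```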

Each piece makes extensive use of extended techniques, and each has a different number of parts. For 1a, a single improvisation was performed with the bow in a semi-traditional manner. As changes were introduced by accident, they were incorporated into the rhythmic and harmonic language of the piece. For 1b, two parts were recorded, both employing only plucking techniques. The first part was recorded as in 1a, incorporating changes as they were introduced by chance. The second part was improvised while listening to the first, synthesizing the music of the first track with its own. 1c has three recorded parts. As in 1b, each successive recording is informed by the previous recording, with its musical possibilities expanded by both chance and the elements of the previous tracks.

Because the time scale is long and the rate of change is extremely slow, careful attention to micro-changes and polyphony will help the listener remain engaged. Additionally, headphones are highly recommended for listening.

keraclem

Notes for the premiere, February 27, 2006

It is with some difficulty that I attempt to sum up in a few words a piece whose composition was extremely difficult for me both conceptually and technically. Many ideas form the conceptual base of this piece. At the highest level, all ideas work to form an audible narrative. At lower levels, they consist of the employment of theories of probability, flocking, and grouping to inform linear movement. The vertical considerations of the work are controlled by two pitch-class clusters of different sizes. These in turn inform a trichord system that factors into the piece’s long-range harmonic progression. All of these base processes undergo a constant and relentless process of transformation, which results in the audible narrative. In addition, a complex relationship exists among the players’ parts; nothing occurs randomly and no action is isolated. All of the individual parts work together to form the whole.

For a first hearing, in order to grasp some order from a very dense and sometimes difficult texture, I recommend that the listener not try to hear the complex linear and vertical systems described above. Instead, listen for the more general, more obvious changes over time. It can take as long as several minutes for one idea to play out and transform into another, so pay attention as best you can to the music as it occurs. Do not try to remember the music in detail; just listen to the sounds as they grow, change, and interact with each other, as you would to a story unfolding.

IFS I

Improvised jointly with Brian Kane as El MuCo
Notes for the premiere, March 20, 2011

In this program, we work with custom-designed electronic instruments, digital signal processing, and feedback systems. We have designed a digital modular system to allow maximal configuration and reconfiguration in real time. The system is quasi-deterministic and semi-autonomous; into it we place sounds, both analog and digital. These sounds are the basis of an evolving, self-sustaining sonic texture with a high degree of complexity, unpredictability, and internal life.

1204-10-29

Recorded October 29, 2010
Scott Petersen, 1204 mixer

This piece is the result of an improvisation I performed on a Behringer 1204FX hardware mixer. I recorded 1204-10-29 in one take using only the 2-channel output from the mixer. There is no additional material in the recording, nor any post-processing aside from normalization.

Though typically used for audio routing and recording, any audio mixer can be turned into a feedback instrument when routed properly. This particular mixer has built-in digital signal processing, which allows for more sonic possibilities and richer timbres. The following is the routing recipe I used for this piece:

The first pair of feedback loops was connected as follows:

  • Alt 3 output –> channel 1 input (trim at +60) –> sent to Alt 3-4
  • Alt 4 output –> channel 2 input (trim at +60) –> sent to Alt 3-4

The second pair of loops was connected like so:

  • Aux Send 2 –> channel 5/6 L (+4) –> Main Mix (no Alt 3-4) –> Aux Sends 1-2 alternately as desired
  • Aux Send 1 –> channel 7/8 R (+4) –> Main Mix (no Alt 3-4) –> Aux Sends 1-2 alternately as desired
  • Aux Sends 1-2 at +15
  • Aux Returns at +5 to +10
  • Aux Return 1 to Aux Send 1 at +5

The reverberation heard is the built-in “Chapel” reverberation, program 19 on the mixer. I used the Control Room R & L output channels to route the audio to my laptop for recording. I monitored the sound using the headphone jack on the mixer with the volume as near to zero as I could get it.

The overall aesthetic of this piece is a dark one: the sonic palette is wet and mechanical, and the surface texture is both elusive and combustible. A formal analysis reveals a Moment Form with superimposed elements that return at semi-symmetrical intervals. Headphones are recommended for listening, with a caveat: the piece begins very softly, with the first loud sound at 1:23. Please adjust the overall volume of your sound system to this level.

EI4

Notes for the premiere performance
Scott Petersen, laptop, microphones

Electronic/computer music improvisation is not new. As early as the late 1960s, composers and musicians were forming groups to improvise music with analog electronic instruments. By the 1970s, at centers in the US and Europe, they were improvising not only with analog synthesizers but with computers as well. Today, improvisers have a vast number of tools at their disposal, both analog and digital, commercial and home-brewed. Through advances in technology and in computer-music-specific languages, it is easier than ever to create sounds and music “on the fly.” What remain are the artistic hurdles: those musical considerations that are the same today as they were then. This performance is an homage to the practices of those early pioneers. While the aesthetic is different, I have limited myself to the methods employed in those early days, namely the use of microphone inputs, feedback loops, and delay lines. I hope you enjoy it.

VT 1

VT 1 is one of a set of pieces currently in progress under the working title of Variations. Each piece in this set uses as its ‘theme’ a single sound source, most typically brief samples of either instruments or voices. For VT 1, a 7-second audio clip of a man’s voice was used as the base material. All of the sounds in the piece are derived from this one sample. Many processes were used to create the resulting work, including phase vocoder analysis and resynthesis, spectral analysis and extraction, and time and frequency manipulation.
The compositional structure of VT 1 may be thought of as akin to a double canon in arch form. Different yet similar versions of the primary ‘theme’ (a regular, percussive, and occasionally noisy version of the source sound) are layered at semi-regular time intervals, while the second ‘theme,’ a less rhythmic and more melodic version of the source sound, is added at specific moments throughout the first half of the work. The piece builds in density of texture and content until, at roughly the midpoint, a brief fusion of the two ideas occurs, after which the second-theme material maintains prominence while the first-theme material dies away. The second-theme sounds end the work in a manner similar to the way in which it began.
The programs used to create this work were SuperCollider, SoundHack, FScape, and Logic 8.

QU 1

QU 1 is one of a set of pieces currently in progress under the working title of Variations.  Each piece in this set uses as its ‘theme’ a single sound source, most typically brief samples of either instruments or voices.

The sound source for QU 1 is a 9-second, three-note sample of a quena.  The piece itself is not subtle.  The several versions of the source sound that are used are presented plainly.  Panning, spatialization, and other typical devices employed in “tape” pieces are either not used or are used sparingly.  The recording is highly compressed, in the way most pop/dance tracks are mastered.  The only program used to produce this piece was Audacity, along with a vast collection of AU and VST plug-ins.

Measuring Time and Place

All elements of Measuring Time and Place — pitch, rhythm (time), and spatialization — are connected.  The work is, literally, the measurement of a virtual space in time. These measurements result in a series of numbers that correspond to specific points in the performance venue.  The space measures 14 by 19 seconds.  Within these measurements of the space are areas of importance. They are important because they move from the virtual space to places of consequence in the real performance space. Examples of these places are where the performer begins the piece and where the speakers are positioned. The measurement of these places is made in the time it takes to traverse the distance from one to another.  For example, midway across the stage, where the player begins the piece, is 0, and it takes 7 seconds (steps) to either side to reach the end of the stage.  Eleven steps from 0 bring the performer to one of the front speakers, and it takes 26 steps to reach either of the rear speakers in the hall. These numbers, 7, 11, 14, and 26, join other numbers of importance, such as the time it takes to circumnavigate the entire space, to inform the large-scale structure and organization of musical events, as well as local-level events such as the placement of accents or the number of repetitions of a note. In addition, the pitch material is related to these numbers. Numbered on the quena from bottom to top, the pitches were chosen according to their relation to each other in reference to the important numbers.
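As a hypothetical illustration of how measured distances could seed local-level events such as accent placement (the mapping below is my invention, not the composer's actual scheme), one might mark a beat as accented whenever its index is a multiple of one of the measured numbers:

```python
# Hypothetical sketch: deriving accent positions from the
# measured distances 7, 11, 14, and 26 (in seconds/steps).
DISTANCES = [7, 11, 14, 26]

def accent_positions(total_beats):
    """Mark a beat as accented when its index is a multiple of
    any of the measured distances."""
    return [b for b in range(1, total_beats + 1)
            if any(b % d == 0 for d in DISTANCES)]

print(accent_positions(30))  # [7, 11, 14, 21, 22, 26, 28]
```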

An additional element of Measuring Time and Place is the interaction of the parts, that of the live performer and that which is prerecorded.  The prerecorded material may be thought of as four distinct additional players, each constrained to perform in the virtual environment that mirrors the real-world environment of the live performer. Each of these parts interacts with the special areas of its shared virtual space, with each other, and with the performer. As the performer moves through the real environment, the performance space, he traverses a real space that is simultaneously being traversed by the other parts in the parallel virtual space. The interaction of the performer with the prerecorded material may be thought of as the simultaneous measurement of past and real-time musical events. Conceptually, the prerecorded elements occupy multiple places in time. They were real-time when they were created, but were also created with the future in mind. Although they are technically of a past time, they are experienced in real-time during the performance of the piece, “interacting” and combining with the material of the live performer.

Four Pillars I

For live electronics (SuperCollider) and four-channel audio.

This piece is the first of a series (still in progress) of pieces that explore feedback networks, interactivity (perceived), and aural serendipity. The recording below is one realization of the piece; each time the code is executed, the results vary. It has also been reduced from the original four-channel version to stereo.

Here is a graph of the feedback structure:
And for your edification, here is the code for the piece.
/*_________________________________________________________________________

USE THIS FOR 2 CHANNEL VERSION: Pans the synths across the 2 channels instead
of making them left or right only

Execute I: allocates more memory, boots server, and registers the synthdef
___________________________________________________________________________*/

s.options.sampleFormat = "int24"; // sampleFormat (not sampleRate) takes a string like "int24"
s.options.memSize = 1024*10;
s.reboot;

SynthDef("four_pillars_I", {|out=0, pan=0|
var range, offset, amp, speed, input, fBLoopIn, processing, fBLoopOut;
range = EnvGen.kr(Env([0.5, 0.45, 0.35, 0.30, 0.1], [100, 140, 30, 120]), 1);
offset = EnvGen.kr(Env([0.7, 0.725, 0.75, 0.85, 0.3], [100, 140, 30, 120]), 1, doneAction:2);
amp = LFNoise0.kr(0.5, range, offset).round(1);
speed = LFNoise0.kr(0.5, 2, 2.1);
input = Impulse.ar(speed, mul:0.35);
input = Amplitude.kr(input);
fBLoopIn = LocalIn.ar(1);
processing = input + LeakDC.ar(DelayN.ar(fBLoopIn, 4.2, speed, 1.1*amp)); // maxdelaytime raised to cover speed's full range (up to 4.1)
processing = RLPF.ar(processing, LFNoise0.kr(speed, 400, 800), 0.15);
fBLoopOut = LocalOut.ar(processing);
processing = processing.thresh(0.45);
processing = Limiter.ar(processing).softclip;
processing = FreeVerb.ar(processing, 0.25, 0.45, 0.75);
processing = Pan2.ar(processing, pan);
Out.ar(out, processing);
}).add;

/*_________________________________________________________________________
Execute II: to begin the piece, double-click inside the parentheses below to select all of the code, then evaluate it by pressing 'cmd + return'.
___________________________________________________________________________*/

(
w = Synth("four_pillars_I", [\pan, -0.8]);
x = Synth("four_pillars_I", [\pan, -0.2]);
y = Synth("four_pillars_I", [\pan, 0.2]);
z = Synth("four_pillars_I", [\pan, 0.8]);
)