Here We Go Loop da Loop

Color me frustrated. After spending a couple of days with the new version of Ableton Live, I’ve come back around to the conclusion that I was right about it all along.

I decided this month to get into Live seriously because I have this peculiar desire to play a few small, local live gigs. You know, coffee house stuff. Not a career move, God knows — I’d just like to be able to share my music with a few people.

But what will they see when they sit down at a table with their decaf latte? An old guy hunched over a laptop. Yawn. For reasons that are deeply rooted in our species’ evolutionary history, when people go to hear live music they want to see something that demonstrates expertise — something that looks difficult and is therefore impressive. If all you’re going to do is press the start button, it’s not really live music.

I love producing music in Propellerhead Reason. It’s a great piece of software. But it’s not well suited for live use. Yes, I could mute the track that has the melody on it and then play the melody live on a MIDI keyboard — but that’s just an old guy noodling out a melody on a keyboard. It wouldn’t look good. Everybody has seen a thousand piano players; the parameters with which one impresses an audience while sitting at a black-and-white keyboard are well known, and I don’t qualify. I ain’t that good.

So then there’s Ableton. This software is well suited to live performance. You can cue up material on the fly, improvising an arrangement. And if you have a couple of tabletop boxes with blinky colored lights (such as Ableton Push), you look like you’re doing something. It’s mysterious, it’s high-tech, you must be a whiz! So I upgraded to Live 9.7 and got me a Push 2. Hooked up a monome grid to the PC too — it’s deliberately minimalist hardware, but what it can do with Live is actually more interesting than what the Push does, thanks to Max For Live, which is also extremely cool.

The trouble is, Live is about loops, and I’m not. I don’t think in loops. Oh, sure, I sometimes use repeating two-bar phrases (especially in the drum department). But I’m always adding fills, lead-ups, and variations. The way to use Live live is to construct your music in scenes (which are built out of loops), and I don’t think in scenes either. I certainly compose in sections, but to me a section is not like a cement block — it has a more fluid shape. If I write, let’s say, a 20-measure B section, there’s liable to be a new instrument entering in the background after 8 bars just to move the thing forward.

In a conventional track-oriented (non-live-performance) sequencer, this is an easy and natural thing to do. In Live’s Session View, it’s neither easy nor natural. A lead-up before the start of a phrase is not easy either.

There’s a lot to like about Live — I’m not trying to disrespect it. It’s a brilliant piece of software! But it doesn’t mesh well with the way I think about music. After putting together an intro and an A section of a new piece, I’m finding that none of my usual working methods is available. Let’s say I want to take that A section and start a new background sound in the middle of it. In order to accommodate the entrance of this new part, I have to move a whole bunch of clips to new locations in order to make an intermediate scene. In Reason, I could just start recording at the spot where I want the new part to go — no muss, no fuss.

And yes, I’m aware that Live also has an Arrangement View, which is much more a conventional linear multitrack sequencer. But if I’m using Arrangement View, where’s the advantage for live performance? Now I’m back to pressing the start button and then noodling out a melody on a keyboard.

This is a luxury problem, to be sure. But I would like to be able to do something live.


Something New…

Here are no fewer than three new pieces for y’all. First up is “Dance of the Boring Gypsies”:

And then we get to “Sailing”:

Last but not least, “Sunder Blunder”:

These were all done in Reason, and they use quite a number of third-party Rack Extensions. The gypsies were enamored of PSQ-1684, Oberon 2, Vecto, GSX, Beat Crush, Renoun, Revival, and Predator. In a couple of cases, such as the guitar riff near the beginning, I started with a preset that had its own built-in step sequencer pattern and turned off the step sequencer so I could write a part that had variations. Three of the presets are from Francis Preve’s library for Reason 9.

“Sailing” begins and ends with a neat trick that the Resonans physical modeling synth from Robotic Bean can do. There are two Resonanses operating, a bright one in the center and a more muted one off to the left. They’re being fed audio from a Redrum pattern; the audio input activates the model. But the Redrum signal to the muted Resonans passes through a delay line first, so it lags by a quarter-note. Expanse, Antidote, and ReSpire are also heard here and there.
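As an aside, syncing a delay line to a quarter-note lag is just arithmetic on the tempo. A minimal sketch (the 120 BPM figure is only an example, not necessarily the tempo of the track):

```python
def quarter_note_delay_ms(bpm: float) -> float:
    # One quarter note lasts 60 seconds divided by the tempo in BPM;
    # multiply by 1000 to get milliseconds for the delay line.
    return 60_000.0 / bpm

# At 120 BPM, a quarter-note delay is 500 ms.
delay = quarter_note_delay_ms(120)
```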

“Sunder Blunder” was also inspired by its opening rhythm riff, which started out as an experiment with Unfiltered Audio Sunder. The input to Sunder is a simple drum loop coming from Dr. Octo Rex. The outputs of Sunder’s three bands are sent, variously, through an Air Raid Audio Relapse (the squeaky noises in the high range), a Scream distortion, a couple of Steerpike delays, and probably some other stuff too.

Another Dr. Octo Rex goes through a Sononics GSX into a Synchronous and a phaser. There’s also a Proton synth going through an Etch Red filter and another Synchronous. Once I get patching, it’s hard to know where to stop!

Enjoy, if possible.

That Mellow Thing

This is embarrassing, but there’s no help for it. This afternoon I whipped up a 2-1/2 minute piece that is about as gauzy a mellow new age meditation soundtrack as I could manage. (Don’t ask why — that’s kind of embarrassing too.) Here it is:

Sadly, this mix is the first time I’ve been able to see age-related hearing loss clearly. The tambourine backbeat that starts about 30 seconds in — when I recorded it, it sounded nicely balanced with the other instruments, just the way I wanted it, but when I looked at the stereo mix wave file, the tambourine was about 6 dB hotter than the whole rest of the mix put together. So I EQed it and compressed it a little, and now it sounds flabby to me, but if your ears are good you’ll probably think it’s well balanced. I hope.
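For context, a 6 dB difference is roughly a doubling of amplitude, which is why the tambourine jumped out of the waveform so visibly. A quick sketch of the dB-to-amplitude conversion:

```python
def db_to_amplitude_ratio(db: float) -> float:
    # Amplitude (voltage/sample-value) ratio: 20 dB per factor of 10,
    # so ratio = 10 ** (dB / 20). +6 dB comes out very close to 2x.
    return 10 ** (db / 20)

ratio = db_to_amplitude_ratio(6)  # about 1.995 -- nearly double
```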

Now I’m worried about the breathy sound. Is that too loud? Somebody tell me. I’m deaf as a post above 7kHz. Sad.

Aleatoric Ambient

Can you do aleatoric ambient music in Reason? Why, yes — you can! This modest effort was put together using a couple of Parsec synthesizers, a couple of Fritz granular effects, a couple of Euclid gate generators, and a couple of Step note sequencers set to random playback order. And a couple of LFOs, of course.

Parsec can sound a bit like a steel drum, depending on what patch you use. Here we’re hearing “Ballophone” and “Neptune.” The Fritz effects toss the sounds around rather freely using the “Altostratus Translucidus Duplicatus” preset. I entered 32 random notes into each of the Step devices, and then panned the two sound sources slightly left and slightly right.

No actual notes were played or recorded during this track — it’s all being generated in real time by the Euclids and the Steps.
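For the curious, the core of a Euclidean gate generator is a simple distribution algorithm that spreads a given number of pulses as evenly as possible across the steps. Here’s a rough Python sketch of that idea, plus a step sequencer in random playback order (the note list is hypothetical, not the actual patch):

```python
import random

def euclidean_pattern(steps: int, pulses: int) -> list[int]:
    # Bresenham-style accumulator: add `pulses` each step and fire a
    # gate whenever the bucket overflows `steps`. This yields the same
    # evenly-spread hits a Euclidean gate generator produces.
    pattern, bucket = [], 0
    for _ in range(steps):
        bucket += pulses
        if bucket >= steps:
            bucket -= steps
            pattern.append(1)
        else:
            pattern.append(0)
    return pattern

# 16 steps, 5 pulses: the classic evenly-spaced "Euclidean" rhythm.
gates = euclidean_pattern(16, 5)

# Random playback order: on each gate, pick any stored note instead
# of stepping through the sequence in order.
stored_notes = [60, 62, 65, 67, 70]  # hypothetical MIDI note numbers
events = [random.choice(stored_notes) for g in gates if g]
```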


Keeping Score

When I was growing up, music meant notated sheet music, pretty much. Sure, jazz players improvised, but nobody was teaching improvisation (or jazz anything) where I went to high school. Someone handed you a page of dots, and you played the dots.

How times change. These days, I still use pages of dots if I’m playing cello or piano, but my composing is done in a computer sequencer. There’s no notated score, and no need for one. It’s all MIDI tracks or loops. The old-line MIDI sequencers (Cubase, for instance) have notation editing and printing facilities, but that’s all legacy code. Very few people would ever touch it. Newer sequencers such as Reason and FL Studio, which are mostly what I use these days, don’t do notation. Nor does Ableton Live.

A couple of days ago, for reasons that I don’t want to go into quite yet, I decided that it would be a good idea to have notated versions of the melodies of some of the music I’ve done in Reason. This turns out to be possible, but it’s a bit of a scramble.

First I tried pencil and paper. That works, but it’s a punishing regime for the hand holding the pencil. So how about extracting MIDI files from Reason and importing them into a notation program? The pages would be easier to read, and also easier to edit — for instance, if I decide I need to add eight bars in the middle of a piece.

For the benefit of anyone else who may be contemplating such a quixotic venture, here’s what I’ve learned.

First, for basic notation you don’t need an expensive notation program. MuseScore is free, and it works very well. It will load a MIDI file and interpret the data so as to produce notated pages. But that’s not the end of the story; it’s just the beginning.

If you just want to export a melody from Reason, the first thing to do is save a special copy of your song called something like “MySong Melody.reason.” This is so you won’t accidentally destroy the song data! Then select all of the tracks except the melody, right-click on one of them, and choose “Delete Tracks and Devices.” When you’ve done this, you’ll be exporting only a single MIDI track — the melody. If you export the whole song, you’ll have a multitrack MIDI file. MuseScore will import this, but editing it would take days.

If you’ve recorded a melody on a monophonic synth, you’ll probably never notice if a few notes overlap here or there. (If you’re using a preset that has legato enabled, you’ll probably want some overlaps, in fact.) But MuseScore handles overlapping notes by assigning them to a different voice on the staff. Voice 2 may have only one note in a measure, so MuseScore will strew rests across the rest of the measure and use conventional stem directions for what it thinks are the two separate voices.

Reason has a nice editing command for introducing a small, fixed-size gap between notes in a legato line. Use this before exporting the MIDI file, and most of the Voice 2 notes will drop back to Voice 1. But this command has to be used with care. If two notes overlap significantly (again, this will be inaudible if your preset is monophonic), the editing command will make the first note longer rather than shortening it, so there will still be an overlap. Also, you can’t do a select all on the track before using the command, because then all of the notes at the ends of phrases will be lengthened, perhaps radically, so that they reach almost to the first note in the next phrase. The way to use this command is by selecting one phrase at a time, clicking the button to use the command (the button is in the F8 tool kit), and then inspecting the whole track visually before exporting the MIDI file.

Even then, you may miss a couple of overlaps. You’ll see them when MuseScore imports the file, at which point you can go back to Reason, edit the offending notes, re-export the file, and re-import it into MuseScore.

And then we’re ready to print out the pages? No, not yet.

MuseScore analyzes MIDI files to figure out what key signature to use. This is a nice time-saver if your music is simple, but if you’ve changed key in the middle, or are using an exotic scale (as I will sometimes do), MuseScore may make a bad guess. The first piece I tried to import ended up in G-flat major when notated. This resulted in a whole big bunch of B-double-flats, among other enharmonic anomalies. Getting rid of the key signature didn’t change the way notes were displayed, other than adding a slew of new accidentals. From a quick trip to the MuseScore user forum, I learned about the Respell Pitches command. That took care of most (though not all) of the spelling problems. With the ones that remain, it’s click on a note, hit the J key. Click on another note, hit the J key.

The lengths of notes at the ends of phrases are not always easy to read. I had to delete a few sixteenth-notes that were tied to the previous note. Other notes had unnecessary staccato dots.

For some reason, MuseScore didn’t see my printer. I had to “print” to SnagIt (a screen-capture program) and save from SnagIt to PDF in order to print.

The next melody I tried extracting was deliberately recorded without quantization, and the tune has a gentle shuffle groove. Figuring MuseScore wouldn’t like that, I went through the track and quantized everything. The results were still a mess:


With this one (which fortunately isn’t too long) I’m going to have to transcribe using a pencil and then enter the data into MuseScore by hand. That’s almost bound to be easier than trying to thrash through that tangle.

I think I’m starting to get the hang of it, though. And the good news is, as you can see in the above clip, Reason exports time signature changes as part of the MIDI file. MuseScore happily inserted time signature changes in all the right spots.

I’m sure I’ll run into a few more snags along the way. The output is easier to read than pencil and paper, though, and I don’t have to worry quite so much about writer’s cramp.


I’ve been reconnecting with Francis Preve, a demon sound designer and regular Keyboard contributor whom I worked with some years ago. We were talking about Reason 9, and he said, “Search for FP in the new presets. You’ll find about 150 of mine.”

So naturally I got inspired and used three of them to make a new piece. Right now I’m calling it “Frandango,” but if he objects to that I’ll rename it “The Banjo Player Only Knows Three Chords.” (There are actually seven chords in it, but who’s counting?)

The catchy grinding rhythm that starts in bar 5 and is heard throughout is one of Francis’s presets. It’s a neat trick: A square wave LFO from Pulsar is patched into the Combinator’s CV input, and the CV is used to switch from one Alligator rhythm to another. The square wave is on a two-measure cycle, so it switches back and forth at the bar lines. I did have to fiddle with the phase offset of the LFO to get the rhythm to line up the way I wanted it.
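The switching logic amounts to sampling a square wave against the song position. A rough sketch, with the pattern names as stand-ins for the two Alligator rhythms:

```python
def square_lfo(beat: float, cycle_beats: float = 8.0, phase: float = 0.0) -> int:
    # Two measures of 4/4 = 8 beats per cycle. The output is high for
    # the first half of the cycle and low for the second, so the value
    # flips exactly at the bar line. `phase` shifts where the flip lands,
    # mimicking the LFO phase-offset fiddling described above.
    pos = ((beat + phase) % cycle_beats) / cycle_beats
    return 1 if pos < 0.5 else 0

def pick_pattern(beat: float) -> str:
    # The CV value selects which rhythm plays ("A"/"B" are stand-ins).
    return "A" if square_lfo(beat) else "B"
```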

The Theremin “voice” that sings the melody and the bass line with a kind of vowel transient in it are also Francis’s work.


The percussion patterns are managed by Lectric Panda ProPulsion (visible above in the right-side rack). I highly recommend this Rack Extension. Even if you’re just using Redrum, the ability to program an entire pattern graphically is sweet, and ProPulsion has lots of other great features. Other than that, it’s just stock Reason synths and a few REs — mainly Expanse, Vecto, and SubBoomBass. Enjoy.


Not much need be said about this, really. Well, maybe one or two things, just because WordPress likes to toss ads in at the bottom of posts that have mp3’s in them, and I’d rather push the ad down a little lower on the page.

The quasi-algorithmic patterns at the beginning and end were done on a pair of Lectric Panda PSQ-1684 devices, playing Subtractor synths through Jiggery-Pokery Steerpike delay lines. The electric piano sounds like a classic DX7 because it is, more or less — it’s Propellerhead’s own PX7.

And then there’s the Schenkerian analysis. Those who are attuned to music theory may notice that the entire piece is a greatly extended V-I progression. The bass avoids the tonic until the very end — the tonic chords before that are all 2nd inversion (with occasional excursions to 1st inversion). And yet, the tonic note (G) is present in all of the chord voicings. It’s never absent. The dominant note (D) drops a half-step in only two of the chords — an A7 and, a few bars later, a Db major triad with an added sharp 4th (which is of course the G). There’s also a B7#5 where the D (a sharp 9) is a bit hidden.

This may sound fussy and academic, and possibly it is, but there are times when music theory can work for you, not against you. I don’t think someone who didn’t know theory could have written this piece.