Arpeggios We Got

This may not be my best work, but I needed to finish it up and move on. It’s called “Carriage Ride,” for no particular reason other than that it needed a name. Made with Reason, as usual. There’s a lot of the Korde Sequencer module in it. Korde isn’t actually a sequencer; it’s an arpeggiator. I used it to drive a couple of the new Reason 10 sound modules — Klang and Pangea — among other things.

Enjoy, if possible.

Sweet Dreams

Did this last fall. Not sure why I set it aside when it was 99% finished, other than the fact that it’s not really a very creative arrangement. Well, I was thinking of doing some other pop tunes too, and this was one of the two that I mostly finished — everything except the last flourish. I had ended it on the A-flat chord (it’s in C minor), and that was obviously not right.

It’s definitely harder-edged than the original hit song. Not better, necessarily, just more aggressive. This was inspired by an instrument called SynthMaster 2. Believe it or not, the full bass line (heard clearly at the beginning of this track) is just a single note in SynthMaster: hold down one key and it plays the entire riff. So how could I resist building a whole track around it?

I don’t usually use this much compression, but a dance mix seems to call for it. Enjoy, if possible.

Losing Track

Being much in need of an absorbing and rewarding activity, and being less than excited at the prospect of yard work, I thought to take a fresh look at Csound. Also at Max. I’ve used both in the past. Max is fun, and because it’s a commercial program, not freeware, it has a smart, attractive user interface. I’m not sure what I’d do with it, but I’ll give it a few more days before I decide whether to stick with it.

Csound I’m more familiar with. It has a lot more synthesis power than Max — more kinds of oscillators and filters, that sort of thing. It’s less fun to use, however. So much less fun that I’m starting to wonder darkly about the motivations of the people who maintain it.

They’re very bright, that’s clear. And they know a hell of a lot more about computer programming than I do. But it’s not 1985 anymore. Musicians who use computers have a lot of options. I won’t say, “Csound needs to be more competitive,” because I’m not a big fan of competitiveness as a motivational factor or an economic model. Let’s just say, if you want to stay relevant in the 21st century, you need to look around and smell the coffee.

Just to be clear: I like Csound a lot. If you’re composing in Just Intonation or exploring complex polyrhythms, it would be a great choice. It’s also, I’m sure, a good resource for teaching undergraduates what’s really going on in digital synthesis and signal processing. But right now I’m looking at it from the outside. How would it appear to some random computer-savvy musician who was encountering it for the first time?

I thought I’d refresh my memory of Csound coding by looking at a few of the instrument files that are installed along with CsoundQt, the default IDE for Csound. These instruments are not even charmingly naive. They’re just bad. Not only is the sound cheesy and awful, but the response to incoming MIDI note messages is nowhere near low-latency.

ASIO has been around for more than 20 years now. Low-latency audio is not a weird, esoteric thing. Developing an audio software system that doesn’t use ASIO to operate with low latency is like writing a computer graphics program that only runs in black-and-white. Seriously — that’s a valid comparison.

Why does Csound still insist on using something called PortAudio for its audio I/O? I’m sure there must be a reason for it, in the minds of the developers. Maybe it’s a Linux thing. But looking at this choice from the outside, it’s hard to avoid the feeling that they just don’t care. It’s not even that they don’t care about their end users; it’s like they don’t care about the quality of their own work.

One developer (that would be Rory Walsh) has come up with a system that lets you bundle a Csound instrument or effect in VST format, so you can run it inside your favorite DAW. The system is called Cabbage; I don’t know why.

Using Cabbage, you could in theory run Csound-based instruments in a low-latency DAW environment. Sounds good, right? Problem solved! But here’s a screen shot from one of the Cabbage video tutorials:

[screenshot: a Cabbage instrument panel]

Seriously, now: Does anybody outside the Csound developer community actually think that panel is something a musician would want to experiment with in 2018?

It could be argued that Csound is a do-it-yourself system — that it’s not necessary for the developers to give users a plug-and-play, drag-and-drop experience. I get that. But if you’re serious about the value of your system, I do think it’s incumbent on you to show what it’s capable of.

If that screen is all it’s capable of, maybe it’s time to fold up your tent, hop on your camel, and ride off into the desert night.

In the world of do-it-yourself music software, forget Max (it’s $399). Consider Native Instruments Reaktor. That’s only $199. By the time you’ve bought a decent computer and a decent pair of speakers with which to listen to your music, $199 is not a brick wall that you can’t climb over. Most of the user-created Reaktor instruments look good, and many of them are very functional. (There are some turkeys, granted.)

I’m not going to ask, “Why doesn’t Cabbage look and feel even a little tiny bit like Reaktor?” That’s the wrong question; the answer is obvious. Rory Walsh doesn’t have cash flow, and he doesn’t have a team of full-time developers working on Cabbage. No, the question to ask is, “Why bother?” Why should anybody (including the developers) care about Cabbage? Or about Csound itself?

I don’t have an answer for that one.

Crash Course

Looking around online for a college-level course in music composition. It’s slim pickin’s. Some stuff for beginners. (That’s not me.) Some expensive courses from Berklee, aimed at pop musicians. (That’s not me either.) A couple of interesting items from MIT, but they’re not actually courses, they’re just packaged-up course materials.

The one on machine-generated composition looked interesting — it uses a free software system called AthenaCL, which generates scores for Csound. Pretty steep learning curve there, even for someone who knows Csound. (Yes, I do. Wrote a book about it once.)

Listened to a couple of pieces composed in AthenaCL. They were extremely harsh. Rude, grinding stuff. Also highly abstract. Clearly it will do other things, but rather than jump straight into AthenaCL coding, I figured I’d try doing something highly abstract, and perhaps only half as rude, in Reason.

This uses mostly the Nostromo synth and PSQ-1684 step sequencer from Lectric Panda. Nostromo has a nifty roll-the-dice button, with which you can generate random patches, so I started there. I also generated random step sequences in PSQ and grabbed the BZR-1 Chaotic Signal Generator from LoveOne to mangle things a bit further.

A lot more could be done with this concept. I just slapped this together to take a quick look-see. It’s not entirely random, you understand; I did fiddle with a lot of parameters and route patch cords here and there manually. But the result is completely abstract. Sometimes, if you want to strike sparks, you have to bang a few rocks together.

Breathing Out, Breathing In

My friend Peter Kirn, who runs (who for all practical purposes is) the high-profile Create Digital Music blog, recently called attention to a free VST plug-in called PaulXStretch. It’s only good for ambient music, but within that dreamy bailiwick it rocks. Today I whipped up a brief piece using it:

PaulXStretch plays the breathy drone you hear throughout; believe it or not, that’s a processed electric guitar strum. Also heard are three Reason synths: Grain, Mixfood 4, and Resonans.

The tricky bit is that you can’t do a normal file mixdown with PaulXStretch; it stutters like mad. Its output can only be bounced to an audio track in real time, not in sped-up file-render time. Once I learned how to do that, the rest was easy. Or fairly easy. Because the Mixfood and Grain notes sustain throughout the piece, the only way to audition the ending is to play the whole thing from the start.

Enjoy, if possible.

Thankin’

Our local Unitarian Church is making an effort to attract a younger and perhaps more diverse clientele. As a peripheral participant in the music part of the service, I got to thinking, “Yeah, the music here is awfully white, isn’t it? Awfully New Englandy.” The lyrics of the hymns are secular humanist (though the G-word does appear from time to time, much to my disgust), but the whole presentation — standing up and singing together, with or without the help of our somewhat ragtag choir — is straight out of 19th century Protestantism. And not the Southern Baptist kind of Protestantism, either. The Holy Spirit (speaking metaphorically here, and not a little sarcastically) is wearing a stovepipe hat.

So why not a little hip-hop, for Pete’s sake? (I worship Pete, actually.) Most of the services end with a short hymn called “We Give Thanks.” This song was written by Canadian folksinger Wendy Luella Perkins, and I’m a little nervous about asking her permission to use it. Folksingers don’t always appreciate synthesizers! Plus, judging by the bio on her website, she probably doesn’t worship Pete. Nevertheless, I did an instrumental arrangement of the tune:

Truth be told, I’m not sure it’s hip-hop anymore. The snare sound in the drum loop I chose was a lot more street, but it didn’t fit the vibe, so I replaced it with that pretty new age stick-tap. Even if it’s a lame-ass attempt at hip-hop, though, this is hipper (and hopper) than what you’ll hear on Sunday. Don’t get me wrong: Some of our music presentations are quite nice. But they’re nice in a thoroughly white, New Englandy way.

Not always. Last Sunday a young woman sang “De Colores” in Spanish. That part was terrific. But then the choir came in and sang the same tune in an English translation.

Diversity — still a work in progress. I’ll do my bit, but you know, I’m awfully white and New Englandy myself.

Wallpaper Music

We used to call it algorithmic composition, after a MIDI program called Algorithmic Composer, which was written for the Commodore 64 by (if memory serves) a fellow named Jim Johnson. I don’t know whether a newer term has become accepted, so I’ll call it generated music.

The idea is, you set up certain conditions and then let the computer choose, in some manner, the notes and rhythms. This might seem almost an abdication of the process of making music, but in practice it turns out not to be. Lots and lots of delicate choices still have to be made.

Several factors are front and center. First, what sort of mood or style are you aiming at? This will dictate your choices of particular sounds and rhythms. Are some of the tracks too busy, or too predictable? What harmonies do you want to imply? And then there’s the mix, which always has to be carefully balanced.
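
Just to make the idea concrete, here’s a minimal sketch of that kind of setup, written in Python rather than patched together in Reason (which is how I actually work). Everything in it (the scale, the rhythm values, the weights, the rest probability) is an arbitrary choice for illustration, not a transcription of any piece of mine, but those are exactly the kinds of conditions you set up before letting the computer roll the dice.

```python
import random

# The "conditions" set up in advance. Every value here is a musical decision,
# not something the computer figures out on its own.
SCALE = ["D", "E", "F#", "G#", "A", "B", "C#"]  # D major with a raised (Lydian) fourth
DURATIONS = [0.5, 1.0, 2.0, 4.0]                # note lengths, in beats
DURATION_WEIGHTS = [1, 3, 4, 2]                 # lean toward slow, wallpaper-ish values
REST_PROBABILITY = 0.3                          # how sparse the texture is

def generate_phrase(length=16, seed=None):
    """Choose pitches and rhythms at random, within the conditions above.

    Passing a seed pins down one particular "performance"; leaving it as
    None gives a different realization every time the piece is rendered.
    """
    rng = random.Random(seed)
    phrase = []
    for _ in range(length):
        beats = rng.choices(DURATIONS, weights=DURATION_WEIGHTS)[0]
        if rng.random() < REST_PROBABILITY:
            phrase.append(("rest", beats))
        else:
            # Uniform choice here; a fussier patch might weight scale degrees
            # or favor stepwise motion over leaps.
            phrase.append((rng.choice(SCALE), beats))
    return phrase

if __name__ == "__main__":
    for pitch, beats in generate_phrase():
        print(f"{pitch:<4} {beats} beats")
```

Run it twice and you get two different phrases; fix the seed and you get the same one every time, which is the difference between a process and a particular performance of it.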

During a discussion of generated music on Facebook, someone posted a couple of links to pieces she had created as background — sonic wallpaper, if you will — for her yoga practice. I don’t know a lot about yoga, but it seemed to me that her background music was too interesting. Too energetic. So I thought I’d try it myself and see if I could smooth things out.

Like most music that contains randomized elements, this piece will be different in detail every time it’s rendered, even though the large-scale processes are exactly the same. What you’re hearing is one particular “performance” of it. This one happened to start with a couple of prominent G#s (the raised fourth that gives D major a Lydian flavor). I could have tried again and gotten a different opening, but I decided I liked this one.

None of the sounds are entirely stock, but some of them are close. The “Ballophone” preset in Parsec and “Vergon 6” in Grain are both factory patches, and required only a little editing. In the Thor presets (this is all in Propellerhead Reason, by the way) I lowered the filter cutoff a little and lengthened the amplitude envelope release time, not much else.

That’s how it works, with generated music. You’re not choosing the moment-to-moment organization of notes, but you still have to choose everything else.