Post-Semester Rampage, Electronic Version

It shows my naïveté, after 20 years of teaching, that I still hold any illusions about academia. Until recently I had nurtured a belief that electronic music was one area of music in which the otherwise pervasive distinctions between academic and non-academic did not apply. After all, electronic music is the only department in which (you will excuse the term) Downtown composers have been able to find positions in universities. As far as I know, there are currently only two Downtown composers in the country who have ridden into permanent teaching positions on skills other than electronic technology; one of those, William Duckworth, did so on his music education degrees, and the other, myself, masqueraded as a musicologist. All the others work in electronic music, where, I fondly presumed, open-mindedness prevailed.

It’s not true. I’ve gradually become aware that, even among the Downtowners, there is a standard academic position regarding electronic music, and I’m learning how to articulate it. I’ve long known that, though much of my music emanates from computers and loudspeakers, I am not considered an electronic composer by the “real electronic composers.” Why not? I use MIDI and commercial synthesizers and samplers, which are disallowed, and which relegate my music to an ontological no-man’s genre. But more and more students have been telling me lately that their music is disallowed by their professors, and some fantastic composers outside academia have been explaining why academia will have nothing to do with them.

The official position seems to be that the composer must generate, or at least record, all his or her own sounds, and those sounds must be manipulated using only the most basic software or processes. Max/MSP is a “good” software because it provides nothing built in – the composer must build every instrument, every effects unit up from scratch. Build-your-own analogue circuitry is acceptable for the same reason. Sequencers are suspect, synthesizers with preset sounds even more so, and MIDI is for wusses. Commercial softwares – for instance, Logic, Reason, Ableton Live – are beyond the pale; they offer too many possibilities without the student understanding how they are achieved. Anything that smacks of electronica is to be avoided, and merely having a steady beat can raise eyebrows. Using software or pedals as an adjunct to your singing or instrument-playing is, if not officially discouraged, not taught, either. I’m an electronic amateur, and so I won’t swear I’m getting the description exactly right. Maybe you can help me. But at the heart of the academic conception of electronics seems to be a devout belief that the electronic composer proves his macho by MANIPULATION, by what he DOES to the sound. If you use some commercial program that does something to the sound at the touch of a button, and you didn’t DO IT YOURSELF, then, well, you’re not really “serious,” are you? In fact, you’re USELESS because you haven’t grasped the historical necessity of the 12-tone language. Uh, I’m sorry, I meant, uh, Max/MSP.

Where does this leave a composer like Henry Gwiazda, whom I have often called the Conlon Nancarrow of my generation? He makes electronic music from samples taken verbatim from sound effects libraries, and you know what he does to them? Nothing. Not a reverb, not a pitch shift, not a crossfade. He just places them next to each other in wild, poetic juxtapositions, and it’s so lovely. From what music department could he graduate doing that today? Is he rather, instead of Nancarrow, the Erik Satie of electronic music? the guy so egoless (or simply self-confident) that he doesn’t have to prove to you what a technonerd stud he is with all the manipulations he knows how to apply?

Now, there is one aesthetic fact so obviously incontrovertible that it hardly merits mentioning: a piece of music is not good because a certain type of software was employed in making it, nor is it bad because a different type of software was applied. Compelling music can be achieved with virtually any kind of software, and so can bad. You’d have to be a drooling moron to believe otherwise. Given that patent truth, it would seem to follow that there is no type of software a young composer should be prevented from using. The question then follows: are there pedagogical reasons to avoid some types of software and concentrate on others? I am assured that there are: 1. Since softwares come and go, it’s important that students learn the most basic principles, so that they can build their own programs if necessary, rather than rely on commercial electronics companies. And 2. Commercial software doesn’t need to be taught; all the student needs to do is read the instruction manual and use it on his own.

Let’s take the second rationale first. As someone who just spent six months struggling with Kontakt software to get to first base, I don’t buy it. There are a million things Kontakt will do that, at my current rate, it will take me until 2060 to figure out. Even after wading through the damn manual, I’d give anything for a lesson in it. But even given that some softwares, like Garage Band, are admittedly idiot-proof, there are a million programs out there, and a young composer would benefit (hell, I’d benefit) from an overview of what various packages can do. How about a course in teaching instrumentalists or vocalists how to interact with software? A thousand working musicians do it as their vocation, but academia seems uninterested in helping anyone reach that state. It’s unwise to base one’s life’s work on a single, ephemeral software brand – Max as well as anything else – but knowing how to use a few makes it easier to get into others, and some of my more interesting students have subverted cheap commercial software, making it do things for which it was never intended.

Rationale number one is more deeply theoretical. I’m all for teaching musicians first principles. You don’t want to send someone out in the world with a bunch of gadgets whose workings they don’t understand, dependent for their art on commercial manufacturers. Good, teach ‘em the basics, absolutely. You teach ‘em circuit design, I’ll teach ‘em secondary dominants. But why should either of us mandate that they use those things in their creative expression? Creativity, like sexual desire, has a yen for the irrational, and not every artist has the right kind of imagination to get creative in the labyrinth of logical baby steps that Max/MSP affords. I’ve seen young musicians terribly frustrated by the gap between the dinky little tricks they can do with a year’s worth of Max training and the music they envision. I heard so much about Max/MSP I bought it myself, and now have a feel for how depressingly long it would take me to learn to get fluent in it. I thought it must be some incredibly powerful program, from what I kept hearing about it – it turns out, the technonerds love it because it’s incredibly impotent in most people’s hands, until you’ve learned to stack dozens of pages of complicated designs.

There are at least two types of creativity that apply to electronic music, probably more, but at least two. One is the creativity of imagining the music you want to hear and employing the electronics to realize it. Another is learning to use the software or circuitry and seeing what interesting things you can finagle it into doing. There are certainly some composers who have excelled at the second – David Tudor leaps to mind. Perhaps there are a handful who have mastered the first in terms of Max/MSP, but it’s a long shot. Of course, if you’ve got the type of creative imagination that flows seamlessly into Max/MSP, by all means use it. “Good music can be achieved with any kind of software.” But why does academia turn everything into an either-or situation, whereby if A is smiled upon, B must be banished?

There’s an analogue in tuning. I’m a good, old-fashioned just intonationist with a lightning talent for fractions and logarithms. I can bury myself in numbers and get really creative. In nine years of teaching alternate tunings, I can count on one hand the students who have shown a similar talent. Faced with pages of fractions, most would-be microtonalists freeze up and can’t get their juices flowing. Were I a real academic, I would respond, “Tough shit, maggot – this is the REAL way to do microtonality, and if you can’t handle it, then you’re on your own.” But I’m not like that, and I let students work in any microtonal way they can feel comfortable, whether it’s the random tuning of found objects or just pitch bends on a guitar – as long as they understand the theory underlying it. Likewise, some young composers get caught up making drums beat and lights blink in different patterns in Max/MSP, lose sight of their goal, and never make the electronic music they’d had in mind.
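
About those pages of fractions, by the way: the arithmetic itself is tiny. For the curious, here is the whole ratio-to-cents conversion sketched in a few lines of C – my own illustration, not anything from a curriculum, just the standard formula (1200 times the base-2 logarithm of the ratio) you’ll find in any tuning text:

    #include <math.h>
    #include <stdio.h>

    /* Convert a just-intonation ratio to cents: 1200 * log2(ratio).
       Standard textbook formula - nothing software-specific here. */
    double ratio_to_cents(double numerator, double denominator)
    {
        return 1200.0 * log2(numerator / denominator);
    }

    int main(void)
    {
        printf("3/2 (perfect fifth):    %.2f cents\n", ratio_to_cents(3, 2)); /* ~701.96 */
        printf("5/4 (just major third): %.2f cents\n", ratio_to_cents(5, 4)); /* ~386.31 */
        printf("7/4 (septimal seventh): %.2f cents\n", ratio_to_cents(7, 4)); /* ~968.83 */
        return 0;
    }

That’s all the pages of fractions boil down to, repeated; the freeze-up is psychological, not mathematical.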

In fact, many years of listening to music made with Max/MSP, by both professionals and students, have not impressed me with the software’s results. I’ve heard a ton of undecipherable algorithms, heard a lot of scratchy noise, and I’ve heard instrumentalists play while the MSP part diffracts their sounds into myriad bits whose relevance I have to take on trust. In the hands of students, the pieces tend to come out rather dismally the same – and not only students. The only really beautiful Max/MSP piece I can name for you is John Luther Adams’s The Place Where You Go to Listen, and you wanna know how he did it? He worked out just the effects he wanted on some other software, and then hired a young Max-programming genius, Jim Altieri, to replicate it. He envisioned the sound, the effect, the affect, but he knew he didn’t possess the genius to create the instrument he needed. Meanwhile I hear lots of beautiful music by Ben Neill, Emily Bezar, Mikel Rouse and others using commercial software that does a lot of the work for them. If we can talk about software as an instrument (and we should), there’s a talent for making the instrument, and there’s a talent for playing the instrument. To assume that one shouldn’t be allowed to exist without the other is to claim Itzhak Perlman isn’t really a violinist because he didn’t carve his own violin. It’s ludicrous.

In short, it appears that academia has applied the same instinct to electronic music as to everything else: find the most difficult and unrewarding technique, declare it the only valid one, take failure as evidence of integrity, and parade your boring integrity at conferences. Whatever happened to the concept of artist as a magician with a suspicious bag of tricks? Art is about appearances, not reality, so who cares if you cheat? Our society is truly upside down. Our politicians and CEOs, whom one could wish to keep honest, dazzle us with virtuoso sleight-of-hand, while our musicians, who are supposed to entertain us, meticulously account for every waveform. It’s completely bass-ackwards.

Do I overgeneralize? I hope so. Please tell me that there’s an electronic music program that doesn’t make this pernicious distinction, and I will send droves of students applying to that school. I was living in a fool’s paradise, and I’m only reacting to what I’m hearing – from disenfranchised young composers, from electronic faculty who proudly affirm the truth of what I’m saying as though it’s a good thing, from fine composers who are whizzes at commercial software. One brilliant electronic student composer this year insisted that I advise his senior project: me, who can barely configure my own MIDI setup. I had nothing to teach him; our “lessons” consisted of me grilling him with questions about how to get the electronic effects I was trying to achieve. But I gave him permission to use synthesizers, and found sounds, and let him play the piano in synch with a prerecorded CD. I didn’t emasculate his imagination by forcing him back into a thicket of first principles from which he would never emerge. His music was lovely, crazy, expressive. Another student, a couple of years ago, enlisted me for a children’s musical he made entirely on Fruity Loops. It was a riot.

And so I say to all composers who got excited in high school about the possibility of musical software but feel intimidated by their professors’ insistence on doing everything from scratch: go ahead, use Logic, and Reason, and Ableton Live, and Sibelius, and Fruity Loops, and synthesizers, and stand-alone sequencers, and hell yes, even Garage Band, with my blessing. Be the Erik Saties and Frank Zappas and Charles Iveses of electronic music, not the Mario Davidovskys and Leon Kirchners. Resist the power structure that would tie anvils to your composing legs, with a pretense that they’re only temporary. The dogmatic, defensive ideology that’s in danger of being called Max/MSPism is merely an importation of 12-tone-style thinking into the realm of technology. Who needs it?

[N.B.: In the comments, some confusion is caused by the fact that there are two Paul Mullers, with different e-mail addresses. At least they agree with each other.]

Comments

  1. says

    I ran an experiment at one of my schools (which I will not mention) to test this very prejudice. In one class, I presented an electronic piece as it actually had been made, using synthesizer presets and so forth. In another class, I took the same piece and represented it as having been built from nothing. One class loved it and the other class hated it. I understand where you’re coming from.

  2. david toub says

    Kyle, I am quite confident you’re correct. It’s interesting how different the academic music world is from, say, software development. It used to be that one generated HTML code by hand (which was pretty laborious, and kept me from doing more intricate sites due to time constraints). Then we had tools like Dreamweaver to generate the code for us, and initially there was some bias against these tools, since even I wanted to feel like I was writing my own code. But eventually, all of us caved and stopped writing the majority of code by hand and used Dreamweaver, etc. Why? Because it’s the end result that matters, not how you get there.

    But in music, I guess that’s not the case, at least within academia. Should I care how a Ussachevsky made his electronic music as opposed to someone using samplers or Digital Performer? I don’t think so, any more than I should care whether someone wrote a nice canon or used serial techniques—it’s the music that matters, not the technique used to make it.

  3. Jun-Dai Bates-Kobashigawa says

    I think there is a fundamental fear that many (most?) people have when teaching things, which is that the only surefire way to bring someone up to your level of understanding is to force them through all of the lower layers first, in the same order that you learned them. There is a certain twisted logic to it, which goes something like this: by going through a grueling process of understanding all of the fundamentals, I have succeeded in acquiring a greater understanding of the subject, for which others hold me in high regard. If I teach all of those fundamentals, in order, to the pupil, then at least I have guaranteed them the opportunity to acquire the same level of understanding of the subject, and thus if they fail, it’s because they didn’t persevere. If, on the other hand, I skip some of those fundamentals (no matter how irrelevant today’s playing field seems to make them), then it is possible that the failure of my pupils will be _my_ fault, since I showed them a path that I didn’t travel myself and therefore cannot guarantee. Obviously it doesn’t take much to debunk this: the same paths are not equally effective for all people; but then I didn’t say that this was a _conscious_ approach.

    In the world of software development, there is a fairly large debate over a similar subject: whether it’s important for people to learn how to program by using C and BASIC (or some other low-level language), and writing their source code in notepad or pico, or some other raw text editor. Given that a very large number of programmers will never have to write C or BASIC (not to mention the fact that C and BASIC will, syntactically at least, be (very slow and frustrating) child’s play to an experienced software developer), and that they will use sophisticated IDEs in almost all of their programming, it seems pretty straightforward to me that writing a compiler in an assembly language should not be necessary as a first step to becoming a Web programmer. That said, many people feel that developers skipping the more low-level and/or arcane means of writing software will be missing something essential, and that their development will somehow be more frustrating or handicapped. Of course in academia, these sorts of debates would probably be discouraged, since whole careers and educational philosophies are at stake, rather than just some bloggers’ egos.

    Never mind the fact that teaching a software application has a very vocational vibe to it, whereas teaching a computer language seems much more theoretical and academic.

    KG replies: Hey, Jun-Dai. The odd thing is, though, that I went through my education backwards, learning all about Cage and The Rite of Spring long before getting into Romantic harmony or sonata form. And I think that’s pretty common, just as everyone admits they look through magazines backwards. The basics are the hardest things to approach, and lots of people save them for last.

  4. david says

    The obvious question to me is: how does Bard’s electronic music program fit into this argument today?

    It’s been about a decade since I graduated from Bard, but back then both the canonical Music Department (where I spent my time) and to a lesser extent the dysfunctionally brilliant Music Program Zero seemed fixated on composing from as close to the bare wires (struck, bowed, or charged) as possible. I remember Max/MSP getting a thorough workout there near the end of my stay, and 4-to-the-floor beats were always subject to the hairy eyeball.

    Presumably, the electronic music student you took on as an advisee found himself ostracized from the Electronic Music studio there. Much as I love and respect Richard Teitelbaum, I know that students like him had some difficulty finding a willing ear at Bard ten to fifteen years ago. I’d be saddened to think it’s been the same since.

  5. Jun-Dai Bates-Kobashigawa says

    Looks like Mr. Toub beat me to the software development comparison.

    I’m not sure I agree with his point that the means of producing art is irrelevant, however. We can’t divorce art from its context no matter how hard some of us might try. Part of the context of an artwork is how it was created, and this often informs our perception of the work in ways we can’t help.

    Try showing a painting to two people, telling the first that it was made by projecting a photograph onto a canvas and then tracing/painting over it. Then tell the second that it was done purely from the mind’s eye, and that the artist not only avoided using photography, but had completely envisioned the scene without ever having been to a place that looked like it. I suspect the responses will be different, though neither less genuine than the other (and, I would argue, it’s naïve to think of one response as being less genuine).

    It does indeed matter to me that a composer might construct the sounds by manipulating 1s and 0s by hand, because if I know this before hearing the piece, it will affect my perception in unavoidable ways, though I will probably try to divorce my appreciation of the piece from this knowledge. On the other hand, if I simply don’t know about this aspect of the piece, I might wonder about it. If I wonder enough, I might search the piece for clues, and this too, will clearly have an effect on my perception of the piece, because it has become a puzzle.

    This is not to say that learning that the piece was composed using Dreamweaver when we had earlier been told it was done on a per-bit basis is going to spoil it for us or cause us to have a dramatic shift in opinion. On the contrary, many of us will probably assume that our appreciation of the piece is based completely outside of our knowledge of its production, and will search for other reasons to justify the impressions that it left on us.

    On a side note, many people disliked Dreamweaver because it enabled people who knew nothing about HTML to build very bloated and difficult-to-manage Web pages. While I understand that the HTML output by Dreamweaver is not as bad as it once was (and now it contains many sophisticated tools for editing raw HTML, PHP, and JavaScript), it is still a pain point for many a developer because the HTML it produces can be very difficult to work with. As someone who programs dynamic functionality into Web sites, there’s nothing I enjoy less than wading through auto-generated HTML to figure out how to bring the page to life.

  6. david toub says

    Jun-Dai, you’re quite correct about Dreamweaver, which is why I initially stuck to my guns and hand wrote my own HTML in a text editor. But to get things done quickly and with a reasonable amount of non-sloppy code, tools like Dreamweaver became the way to go. But we’re both digressing…

    To your point about how the music is constructed influencing your perception: does it really do that? I used to marvel at the scores of Elliott Carter, which were pretty intricate and lovely to look at, and his methodology was interesting in terms of rhythm. I still despised his music. I may listen to a Cage piece, and while his construction is interesting, I love the music as music, period. I would dig Webern’s music whether or not I ever analyzed his scores. Same with late Stravinsky.

    You yourself may listen to music and respond to it very differently in terms of knowing how it is constructed. But other than fairly obvious aural elements such as a movement playing backward (as in a few of Berg’s or Ginastera’s pieces), a canonical sequence, or a persistent rhythm, I’m not sure how a knowledge of a work’s construction makes that much of a difference. And even then, it’s more on the level of “oh, the piece is now playing backwards in a retrograde of the first half…that’s nice!” In other words, I can recognize certain structural elements after studying the score, but it still doesn’t really affect my listening in a very major way.

    Can one really hear the golden section that Bartók used to organize some of his music? And if one did, would it really matter?

  7. Jun-Dai Bates-Kobashigawa says

    Well, now you’re talking about conscious perceptions. What I’m talking about is the attitude and awareness of the context of a piece affecting our perceptions of it without our knowing it. Obviously an impressive story around a piece’s construction will not make most people appreciate it if it is otherwise totally opaque to them.

    We can’t, for instance, imagine what it would be like listening to the Rite of Spring without having heard any other music from the 20th century. Other music that we’ve heard, as well as other details that make up who we are, affects how we hear something, whether we attribute our perceptions to that or not. Similarly, the context in which we come to understand a piece will affect our perceptions of it. It’s not as simple as causing us to like a piece – I see it as being more like gusts of air that push us in various directions on an axis, or like filters that make slight modifications to the way we look at something. If we hear two different pianists play the same piece, we may have a slightly different attitude towards it as we listen to it. If we hear a piece on a scratchy old record, and then hear it in live performance, it will affect our attitude. Similarly, if we have learned that the composer wrote the piece in one pass while skydiving before we listen to it, it will affect our perception of it as well, more for some of us than for others. Some people might even put themselves into the mind of the composer, falling through the air, as they listen to it.

    I propose a series of experiments, which I will most likely never carry out: take an electronic piece, and play it for four different groups. Before listening, group 1 is informed that the piece was written with Max/MSP and group 2 is informed that it was assembled using Garage Band and a number of samples the composer found online. Groups 3 and 4 are told the same things, but a week after hearing the piece. I wonder if groups 3 and 4 will have more similar responses, given that they had a more similar listening experience (giving the second pair of groups a week is important, since much of their digestion of the piece will probably occur upon reflection). Then follow all four groups through repeated listening to the piece over a few weeks, and see if their opinions normalize somewhat from their initial impressions.

  8. jimmymac says

    well harry partch can’t be fully appreciated without a knowledge of his theories about word-music, the instruments he built or his tonal flux methods. i do agree with kyle though, the goal of the modern composer should be the expression of his or her personality through sound. i love the almost purely explicit beauty of carl stone’s shing kee.
    all the disciplines of digital media do exhibit these biases though. the aesthetics of such seem to be much more worked out in the visually oriented art scene. there’s a great intro lecture by christiane paul here:
    http://dma.ucla.edu/events/calendar.php?ID=374
    i think the hacker ethos might fit into this a tiny bit as well.

  9. says

    This certainly was the case when I was at UCSD, 1990-1995. We grad students used to joke that the equivalent approach in a creative writing course would be to [1] learn UNIX and C [2] write a word processing program. Building a printer was the next step . . .

  10. jodru says

    I feel compelled to confess that, despite having no knowledge of Kyle’s music, as soon as I see ‘MIDI’ I jump to the same conclusion: you’re not an electronic music composer. The prejudice is so ingrained that it reminds me of Chappelle.
    This post was spot-on, and whoever made the point about teaching hit the nail on the head. Doctors continue to force an inhuman residency on their pupils simply because that’s what they had to do to become doctors.
    The truly damnable thing is that electronic music is already such a marginalized field that it’s only made more inaccessible by its own practitioners.

  11. says

    Great post, Kyle. I’ve always felt stigmatized by electronic composers. Of course, I call them “gadget” composers, but I think that’s more of a defensive reaction. I think part of the problem, which is so similar to academic composition, is that it’s a lot easier to teach the x’s and o’s (or 1’s and 0’s) than what to do with them – the craft versus the art, the objective vs. the subjective.

    Since it’s that time of year and I’ve been spending too much time mowing my lawn, I thought of a mowing analogy. Suppose I got down on my hands and knees and cut each blade of grass by hand instead of using a mower. Would that affect one’s appreciation of what I’d done, or would you just think I was nuts by not taking advantage of the convenience of a mower? (To say nothing of the fact that it probably wouldn’t even look as good.)

  12. says

    Couple of points:

    1. I find the analogy to medical internship spot on, and I come up with that same analogy in a number of areas in the music world – “we had to go through it, you have to go through it!”

    2. Another parallel drawing from the computation world goes back to the mainframe era, where access was carefully guarded by the cultists in the lab coats. The more obscure and arcane the interface, the more jargonesque the language, the greater the degree of power and control the upper caste has. The professors are aware of this.

    3. Lastly, jodru’s comment about “electronic music is already such a marginalized field” really only applies to that academic branch – at least one or two generations of people now regularly listen to electronic music. It just doesn’t come in an IRCAM wrapper.

    Kyle, I sincerely hope you manage to enjoy at least a couple years of teaching sans windmills before you eventually retire! :)

    Cheers,
    Jon

  13. Tim Walters says

    I studied electronic music at UMass (undergrad, ’88-’91) and Mills College (MFA, ’91-’93). I never once encountered anything like the attitudes you’re describing. Both students and professors used commercial, custom, and scratch-built hardware and software as their music dictated.

    Some programs (e.g. CCRMA, CNMAT, the Media Lab) are engaged in training researchers as much as composers. It seems plausible and reasonable that they would emphasize an engineering/computer-science approach in their teaching. But I’ve met several students, former students, and faculty from these programs, and they’re not particularly snotty about commercial equipment or the composers who use it. Maybe they just wouldn’t say it to me because I’m a Mills guy.

    Nor did these guys have a bad attitude:

    Electric Rainbow Coalition

    So, yes, I think you overgeneralize. Anyway, Max/MSP is for sissies. Real men use SuperCollider.

    P.S. Emily Bezar rocks. And she went to CCRMA, so they couldn’t have screwed her up too badly…

  14. bill says

    Yeah! Technological freedom for academic musicians! Now if someone will just give conceptual/intellectual/structural freedom to the rock/pop musicians…
    I learned Logic Audio from someone who had worked with Trent Reznor and Roli Mosimann in the early nineties, and this guy could SERIOUSLY manipulate this program. It was like watching a virtuoso piano player on the Mac keyboard. But what came out was inevitably four-four, techno-esque, pleasingly textural commercial music suitable for jingles or backing up songwriters. All that skill at manipulation and no big idea to tie it to, to pull it somewhere groundbreaking. Cart without a horse? Maybe these electronic academic fellows are in a similar boat.

  15. Jim Altieri says

    I agree that Max/MSP-ism is rampant through computer music departments in academia. I have to say, though, that anti-Max/MSPism is also quite ubiquitous in many new-music circles. I’ve heard a whole lot of people point out, as you have, Kyle, that lots of terrible pieces are made with MSP, and that the gratuitous use of random-number generators in pieces just makes them all end up sounding the same. But I think it’s just plain silly to imply that any smaller percentage of terrible pieces comes out of any other software package.

    Using higher-level environments such as Live or Garageband, the composer is stuck with certain decisions about how the piece should work on some levels. For many applications, these corporately-decided aspects might not hinder the expression of the composer’s music. In those cases, I would hope that professors encourage the use of these tools, because they will save the students valuable time that should be spent on compositional issues rather than in technical morasses. But it’s a shame when students’ imaginations are limited by these unseen assumptions, or when their music ends up being only a sketch of what it may be otherwise.

    The thing that’s really special about environments like PD or Max or Supercollider is that they let the composer/programmer shape the sound at every possible level of scale. Unfortunately, it takes a hell of a long time to actually do this kind of work, which is probably why the only Max/MSP piece you’ve liked, Kyle, took a year and a half of full-time programming to sculpt all of these structural and sonic levels. And thanks for the compliment, but I ain’t no genius – I just kept chipping away at that piece for a really long time.
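
    To give a concrete (if toy) sense of what “every possible level of scale” means: in these environments you end up writing things like the filter below yourself, sample by sample, instead of reaching for a built-in EQ. This is just a hypothetical C sketch of the idea, not anything from my piece or anyone’s patch:

        #include <stdio.h>

        int main(void)
        {
            /* An impulse fed through a one-pole lowpass filter:
               y[n] = a * x[n] + (1 - a) * y[n-1].
               Commercial software hands you this prebuilt; in
               Max/PD/SuperCollider-style work you wire it up yourself
               and can then bend it at any level you like. */
            double x[8] = { 1, 0, 0, 0, 0, 0, 0, 0 }; /* input: an impulse */
            double y = 0.0;                           /* filter state */
            double a = 0.3;    /* coefficient: smaller = darker, smoother */

            for (int n = 0; n < 8; n++) {
                y = a * x[n] + (1.0 - a) * y;
                printf("y[%d] = %f\n", n, y);
            }
            return 0;
        }

    Once you own that loop, every number in it is compositional material – which is exactly why it takes so hellishly long.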

    I respect when a professor can tell her student that he should do something the hard way, because the piece will be better for it. And when is a better time to learn the hard way than when you’re studying electronic music? Any prejudices that a student acquires while in school will probably be erased fairly quickly after graduation when they start going to more performances in the “real” world. They will hear awesome and horrendous pieces made with every possible level of technical sophistication.

    KG replies: “I agree that Max/MSP-ism is rampant through computer music departments in academia. I have to say, though, that anti-Max/MSPism is also quite ubiquitous in many new-music circles.”

    I have to laugh, Jim – this is like saying, “The Bush administration may be prejudiced, but poor Black people in New Orleans are prejudiced in a different way.” It’s hardly symmetrical. One kind of prejudice becomes state policy; the other is almost never publicly expressed.

    You’re absolutely right about Garage Band, etc. So why not start out a student on some cheap commercial software, wait till they hit the limitations, change to a more professional software, wait till they hit its limitations, and so on – and then, perhaps they’ll become so sophisticated that they’ll eventually need the least limiting software of all, Max/MSP! AND, they’ll know why they need it, instead of being blindly pushed into it as freshmen. Meanwhile, they’ll be building up composing skills as well as technological ones.

  16. says

    When I seriously got into studying electronic music I found that my teacher, Paul Rudy, was completely uninterested in what/how I used to manipulate the sounds. I was using Garageband and a whole slew of freeware apps (since I have no money for a studio at home).
    He simply listened to the music and he made musical suggestions. At the end of the day, it is the music that matters and not the software. Some of us get that point and are getting academic jobs. The pendulum is swinging back.

  17. Paul Muller says

    -david toub
    Just to set the record straight, neither C nor Basic is an “assembly language.” Neither one is a low-level language either. Well, an argument could be made for C because it allows you to access memory by address, but by the classical definition it is a high-level language.
    If you’re calling Basic a low-level language then let me guess: you’ve never written anything in machine code, have you? Assembly language and machine code are generally considered low-level because they don’t allow for the abstractions of functions, variables, and expression evaluation.
    Designing a web site and writing applications are two entirely different ball games. Sure you don’t need to have ever worked with a low-level language to design a webpage, but if you’re writing applications it’s almost a necessity. You have so many people these days writing applications in languages like Java that eat up memory and are practically garbage. When you write a program in assembly language the code is going to be far more optimized than when you compile it from a high-level language. There are many instances when this is important if not essential.
    I’m not trying to be snotty here either; I happily worked as a Visual Basic programmer for years before I decided to learn about computer architecture and assembly language. I can now say that I’ll only use Visual Basic if I need to write something in a matter of seconds. If it’s an application that I need to rely on or plan on distributing then I’ll at least write it in C or C++…
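
    If the high-level/low-level distinction seems abstract, here’s a toy fragment (made up for this comment, not from any real project) showing the gap – one line of C next to the hand-written x86-64 assembly it stands in for:

        #include <stdio.h>

        /* High-level: variables, expressions, and functions --
           abstractions C gives you for free. */
        int scale(int x)
        {
            return 3 * x + 7;
        }

        /* Roughly the same thing hand-written in x86-64 assembly
           (shown as comments): no variables, no expression evaluation,
           just registers and one instruction at a time.

               scale:
                   imul eax, edi, 3   ; eax = x * 3 (x arrives in edi,
                                      ;   per the System V convention)
                   add  eax, 7        ; eax = eax + 7
                   ret                ; result comes back in eax
        */

        int main(void)
        {
            printf("scale(5) = %d\n", scale(5)); /* prints scale(5) = 22 */
            return 0;
        }

    Both give you 22; the difference is what you had to know to get there.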
    -to the first poster
    Why shouldn’t it be this way? Take a similar example.
    Say that you record a violinist playing a scale at a speed that is almost humanly impossible. Then you play it for one class and say that it is a recording of one of the most virtuosic violinists in the world. Then you play it for another class and say that you threw a couple of notes into Reason and exported the audio. Who do you think is going to be more impressed? Your students weren’t just judging it as a piece of music; they were judging it based on the amount of work that went into its creation (you know this). If they think that somebody designed all of the sounds as well as the music then obviously they’re going to be impressed. This isn’t a “prejudice”; it’s just common sense, and I’d be more worried if your two classes were equally impressed than if they weren’t.
    What about the Well-Tempered Synthesizer? Has modern technology rendered this recording obsolete? I could make a similar recording in a few hours that wouldn’t sound all that different. The piece is respected (at least in part) because making it was such a laborious process.
    KG replies: Paul, I responded to your e-mail, and it bounced back; perhaps you should post with your right address. I shortened it because it was very, very long, and because I hadn’t claimed that teaching anyone fundamentals was a plot to hold them down. Fundamentals is what I teach. But I have seen students spend a year learning the myriad little symbols of Max/MSP without getting to the point of actually making any music. Some of the extremely “basic” electronics programs do tie up and thwart students’ creativity. It didn’t seem worth including several paragraphs to refute a point I hadn’t made. And it is my page.
    Cheers, and thanks for your contributions.

  18. says

    Hey Kyle–

    I think your observations about the existence of legit and non-legit forms of Electroacoustic music are pretty much spot-on. Having done half of my undergrad at Dartmouth, which has a pretty interesting Electroacoustic Music program as its only graduate composition program, I think I might be able to expand on what you’ve said in a couple more ways.

    The Dartmouth program is pretty non-judgemental — Jon Appleton became an electronic composer because the establishment wouldn’t let him write tonally for traditional instruments but he could sneak it in under the wire in electronic music — and of course you’re old buddies with Larry Polansky. That said, the program is very Max/MSP and Pro-Tools oriented, and while the grad population tends to be somewhat diverse and often interested in interesting sonic and conceptual experimentation, there’s not much MIDI-driven music for commercial synthesizers. On the other hand, there’s certainly no prohibition against having a steady beat. Not having ever sat in on the admissions meetings, I don’t know for sure, but I’d guess that the Max-ProTools bent of the grad population stems not from discrimination but from picking the students who are doing work that most interests the faculty. (My understanding is that Princeton is somewhat similar, although I couldn’t say for sure.)

    The “Electric Rainbow Coalition” concert mentioned by Tim Walters was indeed open to a wide variety of music (organizer Eric Lyon is both a great guy and a great composer — I’m really not sure why he didn’t get tenure; politics, probably), and there were some delightful oddities on the concert. In fact, I believe no submissions were turned down, but even so the vast majority of the pieces were generic take-some-sounds-and-do-weird-Fourier-transforms and were largely homogeneous. I’m not sure what that means, but it’s interesting.

    One potentially productive avenue of analysis might be taking a look at the makeup of the standard populations of undergrad “Intro to Computer Music” courses. These numbers are speculative, but I’d guess that at Dartmouth 80% of the students in that class are there because they want to learn how to do Techno, 10% because they’re composers who need a taste of Electroacoustic music to round out their educations, and 10% because they’re composers with a specific focus on computer music. (My sense of the analogous course at Brandeis is similar, but with a larger proportion of music majors looking for some variety — but that’s based on my impression from 1998, not any recent data.) Anyway, the faculty ends up frustrated that so many of the students aren’t there to learn the stuff that the course is supposed to be about — i.e., academic computer music — and tends to discourage the students from doing techno-oriented pieces in the interest of keeping them focused on the aims of the course. And in fact most of the students are up to the challenge. I can see both sides of this piece of the issue — on the one hand, I think most of the students are well served by being forced out of their comfort zones; on the other hand, I’m generally opposed to prohibitions and to prescriptive genre boundaries (i.e., “you can’t do that in classical music”), and there’s lots of interesting music to be made on the pop side of the spectrum, and I think music departments should cater to all musical tastes and goals.

    Secondly, to make a very broad generalization with a bazillion exceptions, I think historically a large proportion of the Electroacoustic Music population has been tech-geeks with some musical interest (which drives the DIY-tech-focus) or people who are excited about electronic music because they want to make all sorts of weird noises (which drives the aversion to non-weird generic synthesizer patches). These two types are doing classical music specifically as an alternative to popular electronic music, which is the obvious avenue for electronic music from the perspective of people not coming out of the classical tradition, and so often they feel a need to distance themselves from it. These people might be considered the Electroacoustic-music-as-a-way-of-life group. On the other hand, you have the smaller group of composers who see electroacoustic music as another tool, and many of them have no problem with synthesizers and music with a beat — but of course in order for most of the regular composers to get jobs they have to be relatively uptown, which comes with its own set of anti-populist aversions. Plus, the dominance of the no-synthesizers school of thought trains otherwise open-minded young composers, musicians, and audiences to equate canned synth sounds with cheesiness — I suffer from this very problem even though philosophically I’m in favor of canned synth sounds; I’ve just been successfully brainwashed. The cycle will remain self-reinforcing even in non-prohibitive environments.

    Anyway, sorry that was so long. Thanks for plowing through it :)

  19. Jay C says

    The percentage of composers (electronic and acoustic) who are doing truly interesting, engaging, and moving music is very low. In every general group of creative artists (e.g., painters, filmmakers, writers), the majority is creating mediocre things. I suspect that when teachers and students are not engaged, moved, or otherwise affected by a piece of electronic music, the best fallback is to talk about how the music was made. When the music itself has no substance, one can always talk about the tools and techniques used to create it.

    If the music is really bad, it is important to have very very complicated tools and techniques in order to successfully shift focus away from the composer’s lack of imagination. One can talk for hours about how to create something in Max/MSP whereas it’s more difficult to talk for hours about a more intuitive program such as Garage Band. Therefore, it is my conclusion that Max/MSP, SuperCollider, etc., are favored because they allow professors (and students) to focus on the complexity of technology rather than the lack of substance in the music.

    One can of course do complex and sophisticated things in just about any music program. I could take notes on how fussily I manipulate every possible parameter in the relatively straightforward sequencing/synth/sampler program REASON, but I don’t keep track of those things because if the music itself isn’t engaging enough, then there’s nothing to talk about anyway.

  20. Paul Muller says

    My day job is in electronic engineering, and one of the real problems is that engineers tend to use the programming tools that they first learned to program with, even if they are not the most efficient. I also know several engineers who can write tight, elegantly constructed assembly code because when they learned to program, memory space was at a premium (which is no longer the case). Point is, programming and programming tools change quickly, and this would seem to be reason enough to avoid them when learning the basics.
    Also, it would seem that electronic music turns the art of composing, or certainly orchestrating, completely upside down. Part of the composer’s art is specifying the combination of instruments to achieve a certain effect. At some point it will probably be possible to program any kind of sound electronically, and this exponentially increases the choices available to the composer. This would seem to be the real issue: what to do with all the new sounds, not how to create them.
    Final rant: I wonder if all of this ground was covered during the development of the pipe organ. Maybe electronic composers ultimately will have a unique signature sound. Pipe organs are a function of their construction and acoustic setting; maybe this is the fate of musical composition for electronics.

  21. Paul Muller says

    Somehow the names got mixed up above. The post listed as Jay C’s is mine and the next one isn’t.
    Also, Kyle did you censor the following out of my post because of its content?
    I’m hesitant to trust anyone claiming that an emphasis on learning the fundamentals of something is all just a plot to hold you down. If that person hasn’t yet learned the fundamentals themselves then I find it almost impossible to believe them. Let’s go back to music for a minute. Sure, you can be a composer without having received any education on the subject. Does that mean that music theory and music history are just obstacles to slow down aspiring young composers?
    If you want to just touch on a subject in your spare time then the higher level aspects of it are fine. If you want to learn how to write simple little programs on the weekends then by all means use Visual Basic. If you want your vocation to be “computer programmer” then do yourself a favor and learn what a computer is and how it works as well as how to control it from a low-level language. If you want to be in a band outside of work then just let your natural inspiration and years of radio conditioning flow. If you want to fancy yourself first and foremost a composer then learn about composers before you and what they did. If you love composing then it should interest you anyway.
    Can anybody guess where this is going?
    If your AIM screenname is something like “technorocker1989” and you aspire to smoke a bowl and stay up late jamming out sick Jungle beats on your midi keyboard every night only to wake up dazed and sleepy for highschool the next day, then of course you should download a cracked version of Reason immediamente and get crackalackin. If you call yourself an “electronic music composer” and aspire to be respected in that field then maybe you should learn how to use Max.

  22. david toub says

    paul muller—
    I think you’ve confused me with Jun-Dai. And I think he meant to say “lower-level languages” rather than imply that C and BASIC are assembly languages. I’m actually very familiar with the difference between machine code, assembly, 3GL and 4GL, although I’m not a programmer. To be fair to Jun-Dai, however, C is not as high-level as C++, of course, since it is not an object-oriented language.
    Either way, this is a new music blog, not a computer language blog, although it’s always nice to discuss these things 8-)

  23. Paul Muller says

    Just for the record, there may be something wacky with the comment posting software here. I did not write the well-written post with my name at the bottom, dated May 25, 2006, 1:14 PM. Whoever it was, though, I agree!

  24. Paul Muller says

    Sorry, I know that my post came off as a little bit more condescending than I meant it to be. I was trying to draw a parallel between using assembly language and Max. Similarly, you sacrifice time for control and a better end result (but that conclusion got snipped out of my post). Perhaps the major difference between fields like computer programming and electronic music and fields like classical composition is that you don’t need the fundamentals to start off with. If you’re going to be studying classical composition then it obviously makes sense to learn theory first and go from there. With programming it might be easier to start with a higher-level language and work your way down. It seems as if Kyle is suggesting the same for electronic music.
    If you can’t think creatively with numbers and lots of little steps, though, then maybe you’re not cut out for pure electronic composition. Just ’cause you start with a high-level synth doesn’t mean that you’re going to be any more cut out for Max down the road. Is it that terrible if they are trying to establish right off whether someone is really cut out to be an electronic music composer or just a composer who uses electronics? I hate to beat this programming analogy to death, but just because you can write a “hello world” program in VB doesn’t mean that you’re cut out to be a programmer. Starting with assembly language might help you establish that right off.
    Kyle, what’s your opinion of prejudices against electronics in the classical composition department? I seem to remember a lot of resistance to the idea of letting me have a listening of music made with high-level synths and recordings. I was told that I should instead somehow arrange a live performance unless there was a reason that this was totally impossible. If music like this stems from the classical tradition then it doesn’t belong in the electronic music department any more than it does in the classical one – why should it be any more accepted there?
    Oh and the names above are all amuck for me now.
    This was written by Paul Muller.
    KG replies: I don’t have any opinion about so-called classical prejudices against electronic music. But my intense irritation about people who think there is some such thing as “pure electronic composition,” as opposed to any other kind of electronic composition, is what this post is all about.

  25. Julian Day says

    Wow. There’s a fair bit of machismo going on here (even if I can’t work out who wrote which post!). I couldn’t agree less with these views:
    “Say that you record a violinist playing a scale at a speed that is almost humanly impossible. Then you play it for one class and say that it is a recording of one of the most virtuosic violinists in the world. Then you play it for another class and say that you threw a couple of notes into Reason and exported the audio. Who do you think is going to be more impressed?”
    “If you want your vocation to be “computer programmer” then do yourself a favor and learn what a computer is and how it works as well as how to control it from a low-level language. If you want to be in a band outside of work then just let your natural inspiration and years of radio conditioning flow. If you want to fancy yourself first and foremost a composer then learn about composers before you and what they did.”
    The first view assumes that art appeals in the same way as sport does: the faster the better, the longer the better, the harder the better. I recently had the pleasure to be on a panel choosing work to be submitted to a very prominent European music event. In the end, we passed over a complex, ‘well-written’ composition by an Ivy League lecturer in favour of a one-page score that, despite quoting two existing works as its source material (and using little else), absolutely captivated us with its beauty. Isn’t that what music, and art, is really about? The most ironic thing was that this composer had been booted out of university for submitting one-page scores! And here he is being featured now in a highly prestigious, and in fact quite academic, environment.
    The second comment assumes an immutability about artistic roles – which seems to me to be the antithesis of why art appeals, which is that, ultimately, anything goes. That’s why it’s not rocket science.
    Just finally, I recently saved myself $1500 by choosing Garage Band over Pro Logic Audio – and I’m having a ball!

  26. says

    Electronic studios and courses of study had a branching disconnect 20-some years ago: one path focused on hardware, the other on the computer. The distinctions overlap or become irrelevant in some small or great degree depending on the studio and staff, and more irrelevant with each passing year. But the “camps” are still definitely around.

    The whole issue becomes more irrelevant the farther you get from any academic community. When you’re on the outside, you tend to use whatever you can get (or afford) that will work for what you want to make. From out here, the whole debate looks rather hair-splitting and a bit pompous.

    Of course (and Kyle’s whole point) in the end all of it says pretty much nothing about the work as music. That’s in the artist, just as it always is and was. On the aesthetic level, the most elaborate acousmatic or computer-generated composition has to do what Rzewski already accomplished with just four terra-cotta flower pots (in “To the Earth”): make some real, outstanding musical art.

  27. mclaren says

    This post will be censored, so hey! Hot diggity! I can say anything I want!

    Gann certainly hit a nerve with his latest. Several articles on the net might prove momentarily diverting:

    Why Computer Music Sucks by Bob Ostertag

    http://creativetechnology.salford.ac.uk/fuchs/theory/authors/bob_ostertag.htm

    The crucial paragraph:

    For all the self-professed interest in using digital technology to create new musical forms, in fact the agenda of “computer music” quickly ossified around the concerns of the Western avant garde prevalent at the time of the introduction of computers into music (in fact, concerns which pre-dated the appearance of the computer in music): algorithmic composition (which is really a digital extension of serial music), and extended timbral exploration.

    Trenchantly, Ostertag remarks:

    …Despite the vastly increased power of the technology involved, the timbral sophistication of the most cutting edge technology is not significantly greater than that of the most mundane and commonplace systems. In fact, after listening to the 287 pieces submitted to Ars Electronica, I would venture to say that the pieces created with today’s cutting edge technology (spectral resynthesis, sophisticated phase vocoding schemes, and so on) have an even greater uniformity of sound among them than the pieces done on MIDI modules available in any music store serving the popular music market. This fact was highlighted during the jury session when it was discovered that a piece whose timbral novelty was noted by the jury as being exceptional was discovered to have been created largely with old Buchla analogue gear. (..) To put the matter in its bluntest form, it appears that the more technology is thrown at the problem, the more boring the results.

    Also of potential interest:

    The Aesthetics of Failure: `Post-Digital’ Tendencies in Contemporary Computer Music by Kim Cascone

    http://subsol.c3.hu/subsol_2/contributors3/casconetext.html

    Cascone discusses the ways in which, as William Gibson famously remarked, “The street finds its own uses for technology”…in particular, in modern music:

    Consider Paul Muller’s marvellously contemptuous gibe:
    If your AIM screenname is something like “technorocker1989” and you aspire to smoke a bowl and stay up late jamming out sick Jungle beats on your midi keyboard every night only to wake up dazed and sleepy for highschool [sic] the next day, then of course you should download a cracked version of Reason immediamente and get crackalackin. If you call yourself an “electronic music composer” and aspire to be respected in that field then maybe you should learn how to use Max [sic].

    And now compare with Cascone’s description of the selfsame script kiddie (whose AIM screen name is indeed quite probably “technorocker1989”!):

    Tools now aid composers in the deconstruction of digital files: exploring the sonic possibilities of a Photoshop file that displays an image of a flower, trawling word processing documents in search of coherent bytes of sound, using noise-reduction software to analyze and process audio in ways that the software designer never intended. Any selection of algorithms can be interfaced to pass data back and forth, mapping effortlessly from one dimension into another. In this way, all data can become fodder for sonic experimentation. (..)

    Quite a difference, eh?
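
    A concrete illustration of the practice Cascone describes, for anyone curious: the following minimal Python sketch (not from Cascone’s article; the file name and sample rate are placeholder assumptions) interprets the raw bytes of any file, such as that Photoshop flower, as 8-bit audio samples, using only the standard-library wave module.

        import sys
        import wave

        # Placeholder file name; any file at all (an image, a spreadsheet,
        # a word-processing document) can be "played" this way.
        SRC = sys.argv[1] if len(sys.argv) > 1 else "flower.psd"

        with open(SRC, "rb") as f:
            raw = f.read()

        with wave.open("databend.wav", "wb") as out:
            out.setnchannels(1)     # mono
            out.setsampwidth(1)     # 8-bit unsigned samples
            out.setframerate(8000)  # low sample rate: grittier, more "post-digital"
            out.writeframes(raw)    # the file's bytes, heard verbatim

    Run on a Photoshop file, it yields exactly the kind of raw, unmanipulated material the post-digital aesthetic trades in; whether that counts as composition is the question this whole thread is arguing about.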

    In The Minds of MAX, from the SEAMUS Journal (can’t recall the date, about 5 years back), the author makes essentially the same point Gann makes: namely, that MAX is not a musical technology but a programming language. If you study MAX, it’s just a visual iconography for representing program flow. The output of the language boasts a preponderance of MIDI commands and, in MSP, audio DSP commands, but this remains an accident of history, for MAX objects can just as easily be written to manipulate digital bitmap files, or digital video processors, or a MySQL database containing the bra sizes of the female population of Ecuador. There is nothing specifically musical about MAX, since MIDI commands can be mapped to control lighting or video processors rather than musical synths. The MAX programming language lacks even as rudimentary a musical facility as the ability to pick up again at the same place in a score once it has stopped playing back. Accordingly, no one should find it surprising that the sonic output of MAX lacks musicality. MAX itself is nothing but a visual programming language in which program flow is controlled by drawing onscreen blocks connected by little lines, rather than by writing ASCII-text if-then-else or while-do constructs. There is nothing inherently or specifically musical about MAX, any more than there is anything inherently or specifically musical about the C++ programming language. No one would expect a C++ database program translated into music (by mapping arbitrary program variables into equally arbitrary sounds) to sound especially musical — why would anyone expect the results to sound any more musical when such arbitrary and contingent mappings are performed in MAX? (I should quote from the original article, but, darn it, I can’t find my xerox of it!)
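
    To see how arbitrary such a mapping really is, here is a minimal sketch in Python rather than MAX (it assumes the third-party mido library; the data source and the byte-to-pitch rule are deliberately arbitrary placeholders):

        import mido

        # Illustration only: map arbitrary data (this script's own bytes)
        # onto MIDI notes. Nothing about the mapping is musical.
        mid = mido.MidiFile()
        track = mido.MidiTrack()
        mid.tracks.append(track)

        with open(__file__, "rb") as f:   # any "database" would do as well
            data = f.read(64)

        for b in data:
            note = 36 + (b % 48)          # fold each byte into four octaves
            track.append(mido.Message("note_on", note=note, velocity=64, time=0))
            track.append(mido.Message("note_off", note=note, velocity=64, time=120))

        mid.save("arbitrary_mapping.mid")

    The mapping runs and produces notes; by the SEAMUS author’s argument, nothing more can be promised.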

    In Aesthetic Direction in Electronic Music by Jon Appleton, in Heifetz, R. J. (ed.), On the Wires of Our Nerves: The Art of Electroacoustic Music, London: Associated University Press: 69-75, Appleton makes the familiar point that the music generated by the Institute for Really Crappy Automated Music (IRCAM) sounds dreadfully uniform and suffers from a general lack of musical skill and a basic dearth of musicality: i.e., a lack of vividly memorable melodies, a paucity of compelling functional harmonies, an absence of any sort of perceptible rhythmic pulse, and a notable dearth of audible organization. Judged from a compositional rather than a technological viewpoint, as Appleton points out, the general impression given by the output of temples of musical technology like the Institute for Really Crappy Automated Music remains one of unutterable boredom. When listening to concerts of electronic music from IRCAM, you tend to get the strange impression that you’re hearing the same electronic composition many different times.

    Appleton diagnoses a confusion twixt science and aesthetics — a familiar problem in high musical modernism. When that which proves scientifically (mathematically, technologically, computationally, algorithmically) impressive gets conflated with that which is musically (melodically, harmonically, rhythmically, dramatically) impressive, the results prove dire. Attempts to generate vividly memorable music solely by ratcheting up the mathematical or computational sophistication of an algorithm are akin to the effort to turn a sheep into a chihuahua by shaving it. Such escapades do not end well.

    It’s particularly wonderful to see how many high modernist musical canards have come spraying out of the piñata of computer music now that Gann has whacked it with a big stick. In particular, I love this one:

    The percentage of composers (electronic and acoustic) who are doing truly interesting, engaging, and moving music is very low. — Corey Dargel.

    My own personal experience of listening to countless thousands of hours of electronic and acoustic music suggests just the opposite. It seems to me that the percentage of composers (electronic and acoustic) who are doing truly interesting, engaging, and moving music is very high. In fact, it presents a fascinating conundrum. There exists so much truly interesting, engaging and moving music being created out there that I believe large parts of the ecology of serious contemporary music can be explained in terms of the social mechanisms required to cope with it. Indeed, the massive overload of high-quality, interesting, impressive music being created by absolute nobodies with no credentials, no academic stature and no social standing remains arguably THE major problem faced by electronic composers and audiences today. There’s too much good electronic music to listen to! And even more being created every day! And the rate of production of excellent electronic music continues to increase! How’s a listener to avoid options paralysis — the fatal vapor-lock that results from too many choices? How do you tell which 5% of serious contemporary music to listen to, since you can’t possibly listen to it all?

    Enter the social sieve.

    By applying extramusical criteria to remove large amounts of electronic music from consideration as “serious” and “worthwhile,” we cut down what we feel required to listen to. In that way, we arrive at a subset of serious contemporary electronic music small enough that we can actually have a shot at listening to all of it. Voilà! Options paralysis defeated!

    For example: the well-known fact that a Pulitzer prize has become a musical kiss of death. If someone gets a Pulitzer prize in music nowadays, this is virtually a sure indication of mediocrity. Why should that be?

    The reason becomes clear immediately once we understand that the basic problem in serious contemporary music is a massive overload of excellent music — and in many cases, excellent music created by people with no social standing. The social function of the Pulitzer prize is, of course, to discourage composers from producing music. With far too many talented composers already producing far too much good music, Pulitzer prizes serve the same function as graphite moderator rods in a nuclear reactor. The essential utility of the Pulitzer prize in music is not to reward one composer, but to punish all the others, thus discouraging and denigrating them. Thus, the Pulitzer prize serves to damp down the neutron flux (i.e., reduce the number of high-quality compositions to which we feel compelled to listen and, if we’re really lucky, encourage good composers who never win a Pulitzer to leave music entirely and stop composing) and prevent a chain reaction (i.e., prevent excellent composers with low social standing from eclipsing the reputations of talented composers with high social standing and proper academic credentials).

    Or consider the peculiar fact that MIDI gets sneered at as unmusical, while the output of mathematically sophisticated real-time DSP algorithms gets lauded by academia…even though the DSP stuff exhibits no discernible melody, no functional harmony, no perceptible rhythmic pulse and no audible organization. How can this be?

    Once again, the reason becomes obvious as soon as we recognize that the more mathematically sophisticated the real-time DSP algorithm, the faster and more expensive the computer required to run it in real time. If the DSP algorithm is really complex, a dedicated DSP chip will be required: the MSC8144 or the TMS320VC5502 in the latest Korg synthesizer, or, even better, the card cage full of TMS320-series DSP chips in the ultra-costly Kyma system. Once again, this serves the socially useful function of reducing the number of talented composers without social standing to whom we need to pay attention, since a composer without social standing is unlikely to have access to a $5,000 Sun workstation, or an even more expensive development system with multiple parallel DSP cards. Compare the cost of a card cage full of DSPs like the Kyma system with a typical Dell computer system: the Dell costs $600, the Kyma card cage costs $12,000. By sieving composers economically (i.e., only electronic music composed on a Kyma system is “real” electronic music — which is to say that only $12,000 computer systems produce real electronic music, while $600 computer systems never can), we eliminate still more composers from consideration. After all, how many composers can afford to pay $12,000 for a Kyma system? This greatly helps reduce the problem of massive overload of high-quality, interesting, moving music to which a listener feels compelled to pay attention.

    We could reduce the number of composers to whom we feel compelled to listen even further by applying even more extramusical criteria. For example: only composers who wear $3,000 suits are worth listening to. And only composers with $400 haircuts are worth listening to. And only left-handed lesbian composers with sickle-cell anemia are worth listening to. And so on. This really helps tremendously in cutting down the number of composers we have to hear. Soon tens of thousands of CDs turn into hundreds of CDs, and then we’re really cookin’.

    By applying multiple extramusical social sieves of this kind, we arrive at a subset of electronic music small enough that we no longer feel that fatal deer-in-the-headlights sensation when faced with a tidal wave of new electronic composers. And here I would have to respectfully disagree with Gann, albeit slightly. I’ve heard a lot of moving and impressive and compelling MAX/MSP compositions, or at least algorithmic compositions. For example, listen to Carter Scholz’s superb Hamilton Circuit (microtonal and JI too!) or Barry Truax’s Arras (microtonal and JI!) or John Chowning’s Stria (microtonal and non-just, non-equal-tempered!).

    Once we recognize that the major problem listeners of all stripes face nowadays involves the massive overload of excellent music to listen to, we can suddenly make sense of a lot of the seemingly arbitrary social strictures placed on what is considered “serious” contemporary electronic music and what gets dismissed contemptuously as “non-art.” The situation is exactly analogous to the requirement during the Great Depression that all applicants for the job of elevator operator at Macy’s department store have a four-year college degree. This made sense if you think about it: there was absolutely nothing in the job of elevator operator that required a college education, but the requirement did serve to drastically cut down the number of job applicants. That was the whole point. Once we understand that the problem is not a lack of excellent serious contemporary composers and compositions but rather an overload, much of the social structure of contemporary “serious” electronic music starts to make sense.

    It’s wonderfully liberating to be censored. You can say anything you want! Total freedom. Why, it’s like a breath of fresh air!

  28. Paul H. Muller says

    It may be that there are actually two Paul Mullers posting on this. I posted content once, on May 25 at 2:57 PM and I posted a clarification on May 25 at 7:18 PM. I will use my middle initial to avoid confusion. Sorry for detracting from what has become a lively thread.

  29. Kyle Gann says

    Webmeister Kyle Gann here, with a message for both Paul Mullers: I’ve tried to respond to e-mails you’ve each sent me, and they’ve bounced back in both cases. I can’t connect you nor respond to you if I don’t have working e-mail addresses for you. ??????

  30. Paul H. Muller says

    Sorry for all the confusion. I can’t decide if we are all victims of the infamous Gann computer karma [KG comments – always a possibility] or if I have somehow come back through time to torment myself. I did send Kyle an e-mail to the earthlink account listed on the homepage, and I filled out the spam filter form. In addition I just tested the e-mail listed with this post and it is working. I would like to hear from the other Paul Muller. Thanks for your help just before the holiday…
    Paul H. Muller

  31. Nick Collins says

    I’ve really enjoyed reading through these opinions, and there are salutary lessons here for educators and curators.
    I particularly enjoyed mclaren’s post on the social sieve and the tremendous number of composers in the world.
    This is where an attitude change is most required; I would argue that electronic music composition is now a universal activity for anyone with a computer (or some hardware…). And the amount of academic training required for trial-and-error production (exploration through the aural feedback loop) is zero. Thus, large amounts of fantastic music by people who haven’t paid any money to academic departments.
    Anyway, the value of academic composers, to use Steve Reich’s distinction, is usually as experimental composers, rather than masters utilising existing techniques. They can explore new technological areas, and they can teach others to also become composer-technologists. This does not guarantee subjectively wonderful music using traditional listening criteria, but it does (hopefully) open up new pathways for all those other composers out there.
    Of course, there are also technology hobbyists, but the real effort required in learning research skills and doing research development is often well supported by full-time grants, particularly in areas like musical artificial intelligence and perceptual research.
    The tendency of technology is eventually toward universal access, so you hope that this research activity filters down to commercial products, free software, etc.
    It’s exhausting keeping track of all the music in the world; in fact it’s impossible. You cheer up a bit when you realise you should just do your best, and that your contributions don’t have to be about establishing your 19th-century-style genius, but about the joy of exploration.
    Perhaps all academics (professors and students) should take a vow not to promote or produce their own music, but only to support the compositional activity of those unfunded.
    (I didn’t properly consider the psychological aspects of music here, but suffice it to say that much of the meaning of music is in the ear of the beholder, and we’re not necessarily going to reach agreement on the value of certain music as encultured listeners. Hence some of the difficulties associated with compositional aesthetics.)

  32. says

    Wow! Lots of comments on this one.
    From what I’ve read so far, I think maybe it’s time to lay “electronic music” to rest as a separate category. Fold it in with regular music courses and treat it as a technique and not a style or genre.
    A side note – the two camps presented in the discussions remind me of the early days of video art. On one side, you had people like Steve Rutt and Woody Vasulka who believed that the only true video art was made using only electronic signals. On the other side, you had people like Vito Acconci who felt that anything recorded on video was automatically art.

  33. says

    A few thoughts on software packages and pedagogy:

    As a bloodstained veteran of Logic since version 4.x on both Mac and PC platforms, I take issue with anyone who would reduce it to a glorified BandBox. True, too much radio music relies on simplistic use of samples, but there are many creative techniques using Logic, Cubase, etc. that can and should be taught to those studying “electronic music.” For example, if you’re using computers to store and process your recordings (which is the most cost-effective method—remember the prices of 24-track master tapes?) and you’ve got a recording based on a repetitive rhythm track, it’s far more economical to record the passage once, slice it up, and cut and paste rather than record the same repetitive track live.

    Logic and similar products offer a relatively simple way of chopping up and looping live sounds. This could be and was done in the past with tape and a razor blade. When I write “relatively simple,” though, I mean that the process is complex enough that there is no shame in teaching it at the undergraduate level. (Even if a few students may already be ahead of the curve.)
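
    The operation being described is, in fact, simple enough to sketch outside any commercial package. A minimal Python version, using only the standard-library wave module; the file name, tempo, and bar length are placeholder assumptions:

        import wave

        # Hypothetical inputs: a recorded rhythm track at 120 BPM, in 4/4.
        BPM, BEATS_PER_BAR, REPEATS = 120.0, 4, 8

        with wave.open("drums.wav", "rb") as src:
            params = src.getparams()
            frames_per_bar = int(params.framerate * 60.0 / BPM * BEATS_PER_BAR)
            one_bar = src.readframes(frames_per_bar)   # the razor-blade cut

        with wave.open("drums_looped.wav", "wb") as out:
            out.setparams(params)
            for _ in range(REPEATS):                   # the splicing-tape paste
                out.writeframes(one_bar)

    The razor blade survives as a read and the splicing tape as a loop; what Logic adds is the interface, not the concept.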

    I’ve never tried the software Kyle mentions (though, when I was a teenager, I did “build an effects unit from scratch”—with a soldering iron, copper wires, and a homemade circuit board). I’m not sure if making students go through that searing experience would improve their compositional skills.

    But here’s a thought about how to integrate the concept of building waves (which is what we should be talking about, not virtual instruments and effects) with off-the-shelf technology and even the dreaded MIDI.

    An analog-modeling synthesizer will allow students to change the presets, not only by reprogramming them but by turning lots and lots of different dials. There is also the possibility of feeding live sound or voice into a vocoder via the mic and line inputs. And, via MIDI, the whole thing is computer-synchable. Here is a way to create unique electronic sounds while incorporating samples and avoiding the two tyrannies of the factory preset and the instrument that must be virtually “built” from scratch.
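
    As a rough sketch of the “computer-synchable” part: the same dreaded MIDI will turn those dials for you from a laptop. This assumes the third-party mido library; the port name is a hypothetical placeholder, and CC 74 for filter cutoff is only a common convention, so check the synth’s MIDI implementation chart.

        import time
        import mido

        out = mido.open_output("Analog Modeling Synth")  # hypothetical port name

        CUTOFF_CC = 74  # filter cutoff by common convention; varies per synth
        for value in range(0, 128, 4):  # sweep the "dial" from the computer
            out.send(mido.Message("control_change", control=CUTOFF_CC, value=value))
            time.sleep(0.05)

    The same handful of messages can be recorded into any sequencer, which is all “computer-synchable” really means here.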

  34. Thibault Schneeberger says

    Didn’t Boulez use MIDI on …explosante-fixe… ? (an immensely boring piece of music by the way)