Continuing from my previous post, about applying criteria to measure how well orchestras play…
Because, to begin with, we for the most part discuss how well orchestras play only in the most general way. We have an idea, let's say, that Cleveland stands above most American orchestras (or at least this used to be the belief). Or that Berlin might be the best orchestra in the world. But what exactly do we mean by that?
Or we think that San Francisco, under MTT, stands very high. But do we mean that their programming does, or their playing? How does their playing rank, compared to other American orchestras their size?
Compare this to what any baseball fan knows. You’re a Mets fan? If you’re serious about it, you know their strengths and weaknesses, position by position. Stellar shortstop, really good third baseman (though he’s injured), promising young first baseman (also injured), left fielder who forgot how to hit.
And if you’re even more serious, you can compare the Mets, position by position, with every other team in the National League. These comparisons are written up in detail by sportswriters.
Orchestras, you’d think, are more important than baseball teams, unless we think that great art isn’t important. But position by position orchestral comparisons — or, to use orchestral terms, section by section comparisons — just aren’t available. So the public, it seems to me, is largely in the dark about how orchestras compare to each other. (If you think I’m wrong about this, please point me to the writing that proves me wrong!)
So that’s the public side of this. But now let’s look at the inside view. How are orchestras harmed from the inside, by not knowing (or openly talking about) how they compare to each other?
For one thing, the board can't properly govern the orchestra. How's their orchestra doing? Well, you'd think one measure of that would be how well the orchestra plays. But the board may well not know that. I very much doubt that board members make detailed comparisons. Some might, of course. But are the comparisons openly talked about at board meetings?
If baseball teams had boards, of course the comparisons would be talked about.
And this strikes home with special force in smaller orchestras. At least the big ones tour, and get reviewed around the country, especially in New York, when they come to Lincoln Center or Carnegie Hall. So the board at least can read the reviews.
But if you're on the board of (let's say) the Des Moines Symphony, what kind of information do you have about other orchestras of its size? Is your orchestra playing as well as it should? Well, sure, there are regional differences (availability of musicians, how far they have to drive to play concerts; if it's a long drive, it might be harder to get the players you want). But still. Are you getting, within your limitations, the best musicians you can? And are they playing up to their ability, or maybe (with a good music director) even above their ability? (As happened, just for instance, when Mariss Jansons was music director in Oslo.)
How does the board judge these things? Wouldn’t it be helpful if they had exactly the kind of detailed information any Mets fan has about the Mets? Or (which ought to be readily available) about the minor league team in their city, if the city happens to have one?
But instead, I suspect (from everything I've heard and, for that matter, in some cases seen firsthand) that boards of smaller orchestras often don't know how well their orchestra plays, compared to other orchestras of the same size. If comparative section by section rankings were readily available…
And no, I’m not saying that — at least under present conditions — it would be easy to get those rankings. But, really now: If you were responsible for the health of an orchestra, wouldn’t you want to know how well (compared to other orchestras their size) they play?