The Evaluation Problem

Diane Ragsdale and Devon Smith have written tandem arguments about the relative laxness of arts incubators compared to technology incubators. I won't go into all that they've said, as you should be reading their blogs anyway, but in sum it comes down to this: tech incubators adopt a put-up-or-shut-up attitude, supporting companies for an average of 33 months and then letting them fly or fall on their own. Arts incubators, by contrast, adopt a system that coddles companies and encourages them to stick around half-formed, like a college student on the couch.

The honest truth is that we don’t really prepare new arts organizations to function in the “real world.” Ragsdale, in her post, hypothesizes that 100 years or more of separating the “fine arts world” from the “mainstream audience” means that artists starting new organizations may feel less beholden to any bottom line or found audience. If I were to put it bluntly, I think it would go something like this: incubators, in those circumstances where they aren’t effectively preparing the artists at the core of their new endeavors for the real world, sometimes end up instead preparing them to believe that the arts aren’t really a collaborative enterprise between artist and audience, that they aren’t beholden to the interests and attitudes of whatever segment of the world they hope to lure through their door, that because they are the genius/next-big-thing, people will find them.

And so, as Devon points out, they favor “good process” (usually, here, artistic process) over good business practice. They’re “too nice.”

And yet, because I know of fantastic incubator programs with demonstrable goals, expectations, etc. (see, for example, Intersection for the Arts, for organizations, or the still-launching artist residency program at Z Space), I wonder how much of this attitude towards arts incubatees (and the incubators who enable them) can really be laid at the feet of either the incubators or the fledgling companies. What I think is hardest to incorporate into those programs, even the good ones, is the dual-pronged fork-in-the-butt called evaluation. Prong 1: predictive benchmarking and modeling for financial success, and evaluating yourself against those benchmarks. It is not (much as it may seem to be) blasphemy to say that an artistic organization needs to be able to demonstrate the ability to draw an audience, and to further say that that organization should be given a specified amount of time (and as much support as possible) to do so. Prong 2: evaluation of work. Just as it's not blasphemy to say that organizations are beholden to a bottom line, it's also not blasphemy to say that they are beholden to a certain quality of work—not necessarily meaning the flashiest costumes or the most expensive set, but meaning the alignment of audience expectations and satisfaction with the stated outcomes of a piece of work. No one who is trying to launch a company should be creating art for their own living room, unless there's a fundraiser going on in front of the fireplace.

In both of these evaluative areas, we are currently failing our fledgling companies. As both Devon and Diane point out, while we provide desks and phones and space, maybe some internet access, tech-world incubator projects are given access to the greatest minds Silicon Valley has to offer. They are given a strong, proven curriculum, a good chunk of money, and the collective wisdom of the group in exchange for a percentage of future profits. They are further evaluated (and evaluate themselves) on a standard set of metrics, so they always know where they are in relation to where they want to be.

This method is, of course, messy. Fifty percent or more of start-ups fail in the first few years out of the nest, often taking all of that financial and intellectual investment with them. Sometimes, despite all odds, a start-up deemed a "great idea" flatlines because of shifts in the marketplace or new or different attitudes from the consumer. And this method may not be a perfect fit for the arts—but we're not far enough along in the conversation to even judge that yet, are we?

As we continue to examine the now-completed surveying (over 25,000 responses) associated with the intrinsic impact project, there's a lot that we haven't really teased out yet. But even a cursory look at the responses makes a few things clear: in multiple cases, self-described "edgy" companies that felt their work would push their audiences into a lot of uncomfortable places generally don't; subscribers, while basically loyal to the company (and filling out the survey at a ratio of about 2 to 1), are generally less impressed and impacted by any individual show; and impact is not really dependent at all on the budget of the company, the size of the theatre, or the relative "quality" of the parts. Some of this is confirmation of existing thinking, some of it is new—but all of it, I think, provides a possible basic language for evaluation, both for members of the producing company and whatever support structures sit behind it.



Comment:

    I'd love to know more about the "intrinsic impact project" — where can I read more about it? In the meantime, you can read about my view on evaluation from inside one of the few university-based arts incubators (p.a.v.e., the one Devon mentions at the beginning of her piece):
    Evaluation IS a problem, as you note. What little business incubation evaluation literature is out there seems to measure success as "the business launched from the incubator" or "the business still exists 1 or 3 years after launch," but I don't think the business incubation model is all that concerned with your Prong 2, the quality of the work itself, believing that the market will assess product quality. I'm not convinced that that's an appropriate evaluation of quality in the arts.

    Let's keep the conversation about this critical issue going.