Arts Council England says it will use a standardized assessment system called Quality Metrics in evaluating the arts it considers funding. The system has been developed over several years and is an attempt to create a matrix by which arts experiences can be measured and evaluated.
Here are the criteria:
Self, peer and public:
- Concept: it was an interesting idea
- Presentation: it was well produced and presented
- Distinctiveness: it was different from things I’ve experienced before
- Challenge: it was thought-provoking
- Captivation: it was absorbing and held my attention
- Enthusiasm: I would come to something like this again
- Local impact: it is important that it’s happening here
- Relevance: it has something to say about the world in which we live
- Rigour: it was well thought through and put together
Self and peer only:
- Originality: it was ground-breaking
- Risk: the artists/curators really challenged themselves
- Excellence: it is one of the best examples of its type that I have seen
But following a pilot project, there has been mixed enthusiasm for the approach from the arts sector:
The pilot’s evaluation report reveals that arts organisations don’t view all the metrics as appropriate measures of quality. Rather than adopt the standardised measures, it concludes, they want a “bespoke, tailored approach” that aligns with their individual artistic objectives and allows them to “select metrics that measure what matters”.
The evaluation is at odds with earlier research by John Knell, Director of Counting What Counts Ltd, and former ACE Director of Research Catherine Bunting, who claimed that their study had “proved that the cultural sector is capable of generating a clear consensus on outcomes and standardised metric dimensions to capture the quality of their work”.
Among the concerns are the administrative burden, the potential cost, and, perhaps most important, “a lack of confidence in the reliability and validity of the data.”
But ACE says it will go ahead anyway, having made a substantial financial investment already. Why? The system “will help us all understand and talk about quality in a more consistent way,” the agency says. For arts advocates looking to make a case for the value of the arts, developing an objective, standard, easily understood measure of artistic quality is a Holy Grail. After all, how can you talk about quality if you can’t define and measure it?
And herein lies the problem. Art is a deeply personal experience. Standardized? Hard to imagine.
A bigger problem is imposing scores on arts experiences. Leave aside whether the scores are accurate. The score criteria can’t help but begin to define the art, not just measure it. Pretty soon we’re letting algorithms decide what art gets made and funded. Take this story from last week about publishers using algorithms to determine what will get published: “A handful of startups in the US and abroad claim to have created their own algorithms or other data-driven approaches that can help them pick novels and nonfiction topics that readers will love, as well as understand which books work for which audiences.”
Creativity in market-driven Hollywood is defined by market forces. Pop music gets reduced to tried-and-true formulas to get on the charts. That’s not to say that creativity doesn’t survive or even flourish in pop culture – but metrics drive the product in a way that doesn’t put art at the center. Scoring quality by polling audiences inevitably makes quality a popularity contest with measurable criteria. We’re living in a culture that is increasingly being defined by clicks, likes, RTs, etc. Popularity is squeezing out almost every other measure of excellence. Is this where we’re headed in the arts?