A few weeks ago I was invited to present on my concept of Arts Talk at the University of Chicago’s Cultural Policy Center. (Here’s the link to the video of my talk if you’re interested.)
Among the many valuable aspects of my Chicago visit was a stimulating dinner conversation with Executive Director Betty Farrell and Research Manager Jennifer Novak-Leonard about the definition of arts research. Under Farrell’s leadership, the Cultural Policy Center has made significant efforts to bring a variety of cultural researchers together for workshops, dialogue and collaborative projects. And Novak-Leonard’s measurement systems projects (in partnership with various foundations and the NEA) are highly regarded in the cultural policy field.
One of the issues that came up over dinner is the fact that the term “research” is often narrowly defined in the arts industry (at least that part of the industry controlled by funders and agencies). Many arts workers, conditioned by a ticket sales and marketing science ethos, assume that research refers solely to quantitative measurement of one kind or another. Others split their understanding of the term between “applied research” and “theoretical research.” The former is understood to solve problems (consultants do this kind of arts research). The latter is understood to build knowledge for knowledge’s sake (academics do this kind of arts research).
The arts industry didn’t invent this practical vs. theoretical binary, of course. But as many in the Cultural Research Network working group have noted, there is a particularly strong disconnect between arts industry researchers and arts-discipline academic researchers.
As a cultural historian and an industry theorist who spends time in both worlds, I see this divide as both unnecessary and, frankly, pretty costly. I don’t work with numbers, but I learn a lot from my colleagues who do (including Betty Farrell, Jennifer Novak-Leonard and the many smart people who participate in the CRN community).
And I have a lot in common with these colleagues on the “applied” side. To begin, we all gather data. My data come from the historical archive and from a variety of contemporary narrative sources that provide information about the audience experience.
But more importantly, we all analyze the data we gather. My analysis happens in my narrative writing (articles, books, blog posts). At that point in the research process, questions are brought to the data and answers are mined. The yield from that process is what we refer to more generally as “information”; that is, something that can be used.
I don’t think of the information my research process yields as impractical. Quite the opposite—I think of it as knowledge that can be used to better understand contemporary audience behavior. This knowledge may not be derived from numbers, and it may not be “empirical” in the way we assume statistics to be, but it is highly valuable. Historical and theoretical narratives can influence practice and promote institutional change.
I’ll be at the AFTA conference this weekend in Nashville, where I’m sure there will be lots of discussion about the role of research, the value of data collection, and the difficulties of integrating various research traditions. I look forward to joining the conversation.