The topic of this conference (going on now!) at Utrecht University raises an issue similar to one I raised in my article at LSE's Impact Blog: DH'ists have been brilliant at mining data but not always so brilliant at pooling data to address the traditional questions and theories that interest humanists. Here's the conference description (it focuses specifically on DH and history):
Across Europe, there has been much focus on digitizing historical collections and on developing digital tools to take advantage of those collections. What has been lacking, however, is a discussion of how the research results provided by such tools should be used as a part of historical research projects. Although many developers have solicited input from researchers, discussion between historians has been thus far limited.
The workshop seeks to explore how results of digital research should be used in historical research and to address questions about the validity of digitally mined evidence and its interpretation.
And here’s what I said in my Impact Blog article, using as an example my own personal hero’s research in literary geography:
[Digital humanists] certainly re-purpose and evoke one another’s methods, but to date, I have not seen many papers citing, for example, Moretti’s actual maps to generate an argument not about methods but about what the maps might mean. Just because Moretti generated these geographical data does not mean he has sole ownership over their implications or their usefulness in other contexts.
I realize now that the problem is still one of method—or, more precisely, of method incompatibility. And the conference statement above gets to the heart of it.
Mining results with quantitative techniques is ultimately just data gathering; the next and more important step is to build theories and answer questions with that data. The problem, in the humanities, is that the step from data gathering to theory building forces the researcher to shuttle between two seemingly incommensurable ways of working. Quantitative data mining is based on strict structuralist principles, requiring categorization and sometimes inflexible ontologies; humanistic theories about history or language, on the other hand, are almost always post-structuralist in their orientation. Even if we're not talking Foucault or Derrida, the tendency in the humanities is to build theories that reject empirical readings of the world that rely on strict categorization. The 21st century humanistic move par excellence is to uncover the influence of "socially constructed" categories on one's worldview (or one's experimental results).
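To make the first half of that claim concrete: even the most basic counting operation presupposes fixed categories. The sketch below is purely illustrative (a toy example, not any particular project's pipeline), but it shows how the "data" in data mining only come into existence after a series of categorical decisions.

```python
# A toy illustration: even a bare word count rests on fixed, pre-analytic categories.
from collections import Counter
import re

text = "The archive speaks; the archive is also constructed."

# Categorical decision 1: a "word" is a run of letters (what about hyphens, apostrophes?)
tokens = re.findall(r"[a-z]+", text.lower())

# Categorical decision 2: these words "don't count" (an inflexible ontology in miniature)
stopwords = {"the", "is", "also"}

counts = Counter(t for t in tokens if t not in stopwords)
print(counts.most_common())  # the resulting "data" already presuppose the categories above
```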
On Twitter, Melvin Wevers brings up the possibility of a “post-structuralist corpus linguistics.” To which James Baker and I replied that that might be a contradiction in terms. To my knowledge, there is no corpus project in existence that could be said to enact post-structuralist principles in any meaningful way. Such a project would require a complete overhaul of corpus technology from the ground up.
So where does that leave the digital humanities when it comes to the sorts of questions that got most of us interested in the humanities in the first place? Is DH condemned forever to gather interesting data without ever building (or challenging) theories from that data? Is it too much of an unnatural vivisection to insert structural, quantitative methods into a post-structuralist humanities?
James Baker throws an historical light on the question. When I said that post-structuralism and corpus linguistics are fundamentally incommensurable, he replied with the following point:
@SethLargo @melvinwevers And it makes sense, right. As post-structuralism was ~ a reaction to quant social science methods, cliometrics etc.
— James Baker (@j_w_baker) September 14, 2015
And he suggested that in his own work, he tries to follow this historical development:
@melvinwevers I enjoy being consciously positivist then subjecting my positivism to sustained critique. Helps me get somewhere #beyondmining
— James Baker (@j_w_baker) September 14, 2015
Structuralism and post-structuralism exist (or should exist) in dialectical tension. The latter is a real historical response to the former. It makes sense, then, to enact this tension in DH research. Start out as a positivist, end as a critical theorist, then go back around in a recursive process. This is probably what anyone working with DH methods does already. I think Baker's point is that my "problem" posed above (structuralist methods in a post-structuralist humanities) isn't so much a problem as a tension we need to be comfortable living with.
Not all humanistic questions or theories can be meaningfully tackled with structuralist methods, but some can. Perhaps a first step toward enacting the structuralist/post-structuralist dialectical tension in research is to discuss principles regarding which topics are or are not "fair game" for DH methods. Another step will be for skeptical peer reviewers to stop balking at structuralist methods and subtly trying to excise them with calls for more "nuance." Searching out the nuances of an argument—refining it—is the job of multiple researchers across years of coordinated effort. Knee-jerk post-structuralist critiques (or requests for an author to add them to her article) are unhelpful when a researcher has consciously chosen to use structuralist methods.
@jwbaker
Thanks for commenting, and thanks for the great aphorism! It’s really helped clarify a lot for me (and others, I’m sure).
But I'd still be quick to defend a lot of DH work as being more than "bean counting," at least in literature and rhetoric. Matt Jockers' plot and sentiment analysis, and the use of principal component analysis in his book, are obvious examples. Text networks are generated with centrality equations. Even the simplest corpus analysis relies on tf-idf and other ranking measurements.
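For anyone unfamiliar with that last term, here is a hand-rolled sketch of the kind of ranking measurement I mean: textbook tf-idf (term frequency weighted by inverse document frequency), not any particular toolkit's implementation, run over a made-up two-document "corpus" purely for illustration.

```python
# Textbook tf-idf: rank which terms characterize a document within a corpus.
import math
from collections import Counter

# Hypothetical toy corpus, for illustration only
docs = {
    "doc_a": "maps of the novel the novel in space".split(),
    "doc_b": "sentiment in the novel plot and sentiment".split(),
}

def tfidf(term, doc_tokens, all_docs):
    tf = Counter(doc_tokens)[term] / len(doc_tokens)       # term frequency in this document
    df = sum(1 for d in all_docs.values() if term in d)    # number of documents containing the term
    return tf * math.log(len(all_docs) / df)               # down-weight terms common to every document

for name, tokens in docs.items():
    ranked = sorted(set(tokens), key=lambda t: tfidf(t, tokens, docs), reverse=True)
    print(name, ranked[:3])  # the highest-ranked (most distinctive) terms per document
```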
I agree, though, that it’s important to keep in mind that “counting” is essentially the data-collection stage of analysis, not the analysis itself. And I think maybe the point is that switching into data analysis mode in the humanities can mean going back to theory or history and leaving the equations behind.
@Ben . . . Thanks for the clarification. I agree there’s always a danger for humanists to use these methods superficially, without understanding the underlying mathematics or what the methods are actually doing (that’s what the post below this one is about). It’s definitely beneficial for people who want to get into this stuff to use an interface that has a learning curve (like the NLTK or Gephi) rather than a “drop in your data and get some results” kind of interface (like Antconc or Textexture).
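To give a sense of what I mean by a learning curve, here is a minimal sketch of the hands-on work NLTK expects before you see any results (assuming NLTK is installed, its 'punkt' tokenizer models have been downloaded, and "my_corpus.txt" stands in for whatever plain-text file you happen to be studying):

```python
# A minimal NLTK session: you make the tokenization and counting decisions yourself,
# rather than dropping a file into a tool and reading off results.
import nltk

raw = open("my_corpus.txt", encoding="utf-8").read()  # hypothetical plain-text corpus file
tokens = nltk.word_tokenize(raw)                      # requires the 'punkt' models: nltk.download('punkt')
text = nltk.Text(tokens)

text.concordance("history", width=80)                 # keyword-in-context lines
fdist = nltk.FreqDist(w.lower() for w in tokens if w.isalpha())
print(fdist.most_common(20))                          # raw counts: the data-gathering stage, not the analysis
```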
Thanks for the interesting post, Seth. I wonder if some of the problems that you've encountered are also due to the basic confusion between quantitative analysis and…just counting. Numbers – quantities – are indeed meaningful for the humanities. But just showing that you've learned to sort thousands of words isn't quite enough. You have to do something with the numbers, or question what has been done with them. As far as I know, quantitative and qualitative analyses (or, as you say, structuralist and post-structuralist) have never been incommensurate with each other in other disciplines, such as archaeology. So I'm a bit baffled when I hear of such stark divisions, which I honestly have not really encountered in art history (hi Matt!).
There's a big difference between "post-structuralism" and "qualitative analysis." Post-structuralism, as I understand it, is about critiquing and deconstructing categories, which is the exact opposite of what both quant and qual analysis do: pre-suppose categories and then use them to count or analyze.
I’m not an expert, but from what I understand, certain strands in American post-processual archaeology would maybe fall under what I call post-structuralist.
Also, I’m not sure what you’re getting at in the middle of your comment. I’d respond, but I’m so baffled by your characterization of corpus linguistics that I’m not even sure where to begin without coming across like an ass.
Sorry, Seth, no offense was intended. Perhaps I wrote that way too fast on my way out the door. I didn't mean to conflate post-structuralism with qualitative analysis, although I see it came across that way. I was agreeing with you about the necessary interaction between category building and analysis/critique. Also, I wasn't singling out corpus linguistics, which has done some incredible work, such as that of Ian Gregory, or a lot of Moretti's work. I was thinking more generally about the danger in many fields of using methods such as data and text mining and visualizations superficially, especially because it's easy to be seduced by the immediacy of the results or not to question their underlying assumptions or practices. That's all, man. Just a misstated agreement.
Lovely post. I think you are spot on that "choosing" between structuralist and post-structuralist approaches is hardly a choice, and more of an iterative process.
I have been thinking about this question a lot lately from the perspective of historical network analysis. Working in the field of art history, the question of an artist's individual agency is and has been (quelle surprise) a rich topic of discussion for decades. I've been hit with the question a few times now: am I trying to claim that actors' choices are enabled/constrained by the networks they inhabit, or that they themselves are shaping these networks? I'm not alone in arguing that network analysis provides a perfectly reasonable answer: they are mutually constitutive processes, and it all depends on your perspective and scale of analysis. Padgett and Powell (2012) put it nicely: "In the short run, actors create relations; in the long run, relations create actors." Employing these quantitative, innately structuralist methods, I've found complex and delightfully contradictory evidence about how much my particular subject network (artistic print production networks in the early modern Low Countries) underwent both change and stasis, depending on the particular measure and scale you are looking at. I think (yes, I am biased 🙂) the resulting picture is far more vibrant than that provided by non-quantitative literature that takes a "fuzzy" view of what networks can and should mean.
p.s. I assume you have seen kieranhealy.org/files/papers/fuck-nuance.pdf
Thanks. I absolutely agree. The apparent intractability re: shaping or being shaped by networks (or, more grandly, being agents of history or slaves to non-human economic or evolutionary processes) is, like you say, a result of our inability to shift scale and appreciate that different things may be going on depending on the distance of our perspective. But we can’t begin to talk about the delightful tensions of scaled effects until, ironically, we adopt some structuralist priors. I’m glad to see that people looking at such different cultural data are nevertheless coming to the same conclusion, that structuralist methods are not at all incommensurate with “nuanced” (in the good sense!) theory building.
And, yes, I loved Healy’s piece. I meant to link it where I put nuance in scare-quotes in the post.
Great post. Glad someone picked up our backchannel rumblings!
You may be interested to know that in the two weeks running up to the conference I'd been preparing to teach some art history theory and methods modules and – surprise, surprise – structuralism and post-structuralism have come up a lot. I have to thank Melvin, though, for prodding me to connect my paper with my teaching prep. I guess my realisation there and then, sitting in my chair at the conference, was that I could frame my 'be positivist first, subject to all the critique later' approach via (post)structuralist theory, and that – as Matthew notes – I shouldn't really worry too much about that, as I hardly have a choice in the matter: that is just what we humanists do, of whatever ilk.
On Ben's point about bean counting not being quant, I agree. I have friends on the more science end of the spectrum who would giggle at the notion that what I do with a concordancer is quant (no real stats, no significance testing et al). What I'd say to that, however, is that *most* of the humanities, and certainly most of History, has spent the last four decades forgetting how to count (we don't teach our students to count, we don't put counting in our undergraduate 'what it is to be a Historian' books any more – I ranted a bit too far on this some time back… http://britishlibrary.typepad.co.uk/digital-scholarship/2014/04/digital-history-and-the-death-of-quant.html). So I think we're going to have to accept that quant means different things in different places (just as qual means different things in different places: what most historians do is rather removed from 'qualitative analysis' that codes data in SPSS and looks for trends…).