Cross-posted from The H Word.
Many writers have claimed that history can be, or should be, scientific. Different things are meant by this, of course, and such statements are provoked by different motivations, although generally they trade on the perceived successes, rewards, professionalism and certainty of the sciences.
There have, historically, been two opposing trends in “scientific history”. In one case the claim is that patterns and laws can be found if the historical record is studied in the right way. The ideal model has variously been Newtonian physics, statistics or mathematics. In the other, the “scientific” element is careful observation and recording, in the manner of natural history. These approaches produce radically different histories, and can underlie very different attitudes to, for example, the importance of individual agency.
Looking for broad patterns, or for the detailed “facts” among the archival or tangible remains of history, is a natural impulse, found throughout humanity’s attempts to understand or make use of the past. The claim of being “scientific” is a more recent phenomenon, dating from the cultural success of science in the 19th century.
I have written a couple of posts on my former blog relating to these 19th-century debates, including a review of Ian Hesketh’s book The Science of History in Victorian Britain. Henry Buckle is, here, the example of broad-sweep pattern-finding, while JR Seeley and the new breed of professional academic historians looked for legitimacy by focusing on the detailed examination of primary sources.
I do not believe that history can predict the future (although I certainly think that important lessons can be learned) but, as Hesketh suggested to me on Twitter, the regular revival of such approaches might itself be taken as evidence of a pattern.
The latest comes from Peter Turchin of the University of Connecticut, who coined the term “cliodynamics” in 2003 and was recently interviewed for Nature. The approach, which uses mathematical modelling to analyse long-term trends in, and interactions between, social and demographic systems, has a number of advocates, and there has even been a dedicated journal since the end of 2010.
As the Wikipedia article on cliodynamics suggests, its practitioners attempt “to explain ‘big history’ – things like the rise of empires, social discontent, civil wars, and state collapse”. Things, therefore, that capture the popular imagination, that might just convince those in power that this is useful knowledge and – significantly – things that academic historians, focused on primary sources and “micro-histories”, have perhaps tended to neglect.
A post on the History Today blog by Paul Lay suggests it is a kind of pseudoscience, adding, “Given the way in which mathematical modelling, using past data to predict future trends, has brought the global economy to its knees, this may not be the best time to introduce such methods to the more pragmatic discipline of history.”
Further doubts are voiced at Scientific American blogs, with Maria Konnikova’s post, “The humanities aren’t a science: stop treating them like one”.
I will admit that I have not read Turchin’s detailed work, or the other papers in the journal, and so my comments are based on the Nature interview and a 2008 article he wrote, again in Nature. His opening gambit here would do little to endear him to historians:
What caused the collapse of the Roman Empire? More than 200 explanations have been proposed, but there is no consensus about which explanations are plausible and which should be rejected. This situation is as risible as if, in physics, phlogiston theory and thermodynamics coexisted on equal terms.
The recent interview notes that academic historians are deeply sceptical about cliodynamics. This is not (just) a knee-jerk defence against interlopers from the sciences claiming that they know better than those who have trained long and hard in the ways of more standard approaches to history. There are many historians today who understand that other disciplines can offer us a number of useful tools. But their experience and training also helps them to understand that historical data is a complex business.
Turchin writes that his analysis is based on his collections of “quantitative data on demographic, social and political variables for several historical societies”, but, strikingly, gives no indication here of what his sources might be. The interview states that he and colleagues drew “on all the sources they can find – historical databases, newspaper archives, ethnographic studies”, and, from these, somehow locate factors such as “indicators of corruption … and political cooperation”.
Just how, I wonder, do they do that, across several cultures and vast stretches of time, with any degree of confidence? The detailed studies of historians have amply demonstrated that the information contained in sources cannot be taken on trust or treated as equivalent. We need a detailed understanding of the terminology of the period, of the compilers’ methods of collecting information and their political interests in sharing (or hiding) it, and a sense of who was writing, who was reading, and why.
Turchin’s interpretation of his results is also pretty strange. He claims that his work has revealed regular 50-year cycles of political violence in the United States: that it was “almost absent in the early 19th century, increased from the 1830s and reached a peak in around 1900. The American Civil War occurred during this period of growing unrest. The instability then subsided during the 1930s, and the following two decades were remarkably calm. Finally, in the 1960s, political violence increased again.”
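To make concrete what “finding a 50-year cycle” might mean computationally, here is a toy sketch, not Turchin’s actual method: a hypothetical yearly “instability index” is scanned for its dominant period using lagged autocorrelation. The data are synthetic by construction, which is rather the point; the procedure will happily report a period whether or not the underlying counts deserve to be treated this way.

```python
import math

# Hypothetical yearly "instability index": a pure 50-year cycle,
# standing in for the kind of series a cliodynamics study might construct.
series = [math.sin(2 * math.pi * t / 50) for t in range(200)]

def autocorr(values, lag):
    """Pearson correlation between the series and itself shifted by `lag` years."""
    a, b = values[:-lag], values[lag:]
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    sd_a = math.sqrt(sum((x - mean_a) ** 2 for x in a))
    sd_b = math.sqrt(sum((y - mean_b) ** 2 for y in b))
    return cov / (sd_a * sd_b)

# Scan candidate periods and pick the lag with the strongest correlation.
best_lag = max(range(10, 91), key=lambda lag: autocorr(series, lag))
print(best_lag)  # the dominant period, in years
```

On clean synthetic data this reports 50, as built in. The difficulty the surrounding discussion raises is precisely that real “political violence” counts are nothing like this series: what gets counted, and how, is itself a historical question.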
One has to wonder just what “political” and “violence” mean here, let alone “50-year cycle”. And, just because the Civil War was fought within the country, why is it counted while the War of 1812 and the Second World War are ignored?
Treating data like this and pulling out results like these seems to do neither science nor history any favours.