Abstract

The first section of this chapter gives an overview of how big data and their mathematical analysis enter the historical discourse. It introduces the two main issues that have so far prevented ‘big’ results from emerging. Firstly, the input is problematic because historical records cannot be easily and comprehensively decomposed into unambiguous fields, except for population and taxation records, which are rare and scattered across space and time until the nineteenth century. Secondly, even if we run machine-learning tools on properly structured data, big results cannot emerge until we build formal models with explanatory and predictive power. The second section of the chapter presents a complex-network, data-driven approach to mining historical sources and supporting the perennial historical quest for truth. In the time-integrated network obtained by overlaying all records from the historians’ databases, the nodes are actors, while the links are actions. The third section explains how this tool allows historians to deal with historical data issues (e.g., source criticism, fact validation, trade–conflict–diplomacy relationships), and to take advantage of the automatic extraction of key narratives to formulate and test their hypotheses on the courses of history against other actions or additional data sets. The conclusions describe the vision of how this narrative-driven analysis of historical big data can lead to the development of multiscale agent-based models and simulations that generate ensembles of counterfactual histories, deepening our understanding of why our actual history developed the way it did and of how to treasure these human experiences.
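The time-integrated network described above can be given a minimal sketch. The records, actor names, actions, and dates below are hypothetical illustrations (not drawn from the chapter's databases); the point is only the structure: overlaying all dated records so that actors become nodes and their actions accumulate on the links between them.

```python
from collections import defaultdict

# Hypothetical structured records: (source actor, action, target actor, year).
records = [
    ("Venice", "trade", "Constantinople", 1340),
    ("Venice", "conflict", "Genoa", 1350),
    ("Genoa", "trade", "Constantinople", 1352),
    ("Venice", "trade", "Constantinople", 1355),
]

# Time-integrated network: overlay every record, keeping all dated actions
# on the link between each ordered pair of actors.
network = defaultdict(list)
for src, action, dst, year in records:
    network[(src, dst)].append((action, year))

# Each link now carries the full action history between two actors,
# e.g. ("Venice", "Constantinople") -> [("trade", 1340), ("trade", 1355)].
for pair, actions in sorted(network.items()):
    print(pair, actions)
```

Keeping the per-link action list (rather than a single aggregated weight) preserves the temporal sequence needed for the narrative extraction and hypothesis testing discussed in the third section.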
