1,522 research outputs found
Reading the Readers: Modelling Complex Humanities Processes to Build Cognitive Systems
The ink and stylus tablets discovered at the Roman Fort of Vindolanda are a unique resource for scholars of ancient history. However, the stylus tablets have proved particularly difficult to read. This paper describes the initial stages in the development of a computer system designed to aid historians in the reading of the stylus tablets. A detailed investigation was undertaken, using Knowledge Elicitation techniques borrowed from Artificial Intelligence, Cognitive Psychology, and Computational Linguistics, to elicit the processes experts use whilst reading an ancient text. The resulting model was used as the basis of a computer architecture to construct a system which takes in images of the tablets and outputs plausible interpretations of the documents. It is demonstrated that using Knowledge Elicitation techniques can further the understanding of complex processes in the humanities, and that these techniques can provide the underlying structure for a computer system that replicates that process. As such it provides significant insight into how experts work in the humanities, whilst providing the means to develop tools to assist them in their complex task.
Towards a reading of the Vindolanda Stylus Tablets: Engineers and the Papyrologist
We introduce a collaborative project between the Department of Engineering Science and the Centre for the Study of Ancient Documents at the University of Oxford regarding the analysis and reading of the Vindolanda Stylus Tablets. We sketch the imaging and image processing techniques used to digitally capture and analyse the tablets, the development of the image analysis tools to aid papyrologists in the transcription of the texts, and the lessons that can be learned so far from such an inter-disciplinary project.
Disciplined: Using educational studies to analyse 'Humanities Computing'
Humanities Computing is an emergent field. The activities described as 'Humanities Computing' continue to expand in number and sophistication, yet no concrete definition of the field exists, and there are few academic departments that specialize in this area. Most introspection regarding the role, meaning, and focus of "Humanities Computing" has come from a practical and pragmatic perspective from scholars and educators within the field itself. This article provides an alternative, externalized, viewpoint of the focus of Humanities Computing, by analysing the discipline through its community, research, curriculum, teaching programmes, and the message they deliver, either consciously or unconsciously, about the scope of the discipline. It engages with Educational Theory to provide a means to analyse, measure, and define the field, and focuses specifically on the ACH/ALLC 2005 Conference to identify and analyse those who are involved with the humanities computing community. © 2006 Oxford University Press
Artefacts and Errors: Acknowledging Issues of Representation in the Digital Imaging of Ancient Texts
It is assumed, in palaeography, papyrology and epigraphy, that a certain amount of uncertainty is inherent in the reading of damaged and abraded texts. Yet we have not really grappled with the fact that, since many scholars now tend to deal with digital images of texts rather than handling the texts themselves, the procedures for creating those digital images can introduce further uncertainty into the resulting representation of the text. Technical distortions can lead to the unintentional introduction of ‘artefacts’ into images, which can affect the resulting representation. If we cannot trust our digital surrogates of texts, can we trust the readings made from them? How do scholars acknowledge the quality of digitised images of texts? Furthermore, this leads us to the kind of discussions of representation that have been present in Classical texts since Plato: digitisation can be considered an alternative form of representation, bringing to the modern debate on the use of digital technology in Classics the familiar theories of mimesis (imitation) and ekphrasis (description): the conversion of visual evidence into explicit descriptions of that information, stored in computer files in distinct linguistic terms, with all the difficulties of conversion understood in the ekphrastic process. The community has not yet considered what becoming dependent on digital texts means for the field, in both practical and theoretical terms. Issues of quality, copying, representation, and substance should be part of our dialogue when we consult digital surrogates of documentary material, yet we are only beginning to construct understandings of what it means to rely on virtual representations of artefacts. It is necessary to relate our understanding of uncertainty in palaeography and epigraphy to our understanding of the mechanics of visualization employed by digital imaging techniques if we are to fully understand the impact these will have.
The 8-vertex model with quasi-periodic boundary conditions
We study the inhomogeneous 8-vertex model (or equivalently the XYZ Heisenberg spin-1/2 chain) with all kinds of integrable quasi-periodic boundary conditions: periodic, $\sigma^x$-twisted, $\sigma^y$-twisted or $\sigma^z$-twisted. We show that in all these cases but the periodic one with an even number of sites $N$, the transfer matrix of the model is related, by the vertex-IRF transformation, to the transfer matrix of the dynamical 6-vertex model with antiperiodic boundary conditions, which we have recently solved by means of Sklyanin's Separation of Variables (SOV) approach. We show moreover that, in all the twisted cases, the vertex-IRF transformation is bijective. This allows us to completely characterize, from our previous results on the antiperiodic dynamical 6-vertex model, the twisted 8-vertex transfer matrix spectrum (proving that it is simple) and eigenstates. We also consider the periodic case for $N$ odd. In this case we can define two independent vertex-IRF transformations, neither of them bijective, and by using them we show that the 8-vertex transfer matrix spectrum is doubly degenerate, and that it, as well as the corresponding eigenstates, can also be completely characterized in terms of the spectrum and eigenstates of the dynamical 6-vertex antiperiodic transfer matrix. In all these cases we can adapt to the 8-vertex case the reformulations of the dynamical 6-vertex transfer matrix spectrum and eigenstates that had been obtained by T-Q functional equations, where the Q-functions are elliptic polynomials with twist-dependent quasi-periods. Such reformulations enable one to characterize the 8-vertex transfer matrix spectrum by the solutions of some Bethe-type equations, and to rewrite the corresponding eigenstates as the multiple action of some operators on a pseudo-vacuum state, in a similar way as in the algebraic Bethe ansatz framework.
Comment: 35 pages
Downs and Acrosses: Textual Markup on a Stroke Based Level
Textual encoding is one of the main focuses of Humanities Computing. However, existing encoding schemes and initiatives focus on 'text' from the character level upwards, and are of little use to scholars, such as papyrologists and palaeographers, who study the constituent strokes of individual characters. This paper discusses the development of a markup system used to annotate a corpus of images of Roman texts, resulting in an XML representation of each character on a stroke-by-stroke basis. The XML data generated allows further interrogation of the palaeographic data, increasing the knowledge available regarding the palaeography of the documentation produced by the Roman Army. Additionally, the corpus was used to train an Artificial Intelligence system to effectively 'read' in stroke data of unknown text and output possible, reliable interpretations of that text: the next step in aiding historians in the reading of ancient texts. The development and implementation of the markup scheme is introduced, the results of our initial encoding effort are presented, and it is demonstrated that textual markup on a stroke level can extend the remit of marked-up digital texts in the humanities.
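The project's actual markup scheme is not reproduced in this abstract; as a minimal sketch only, the stroke-level idea can be illustrated with a hypothetical XML encoding in which each character element groups the strokes it is built from (all element names, attribute names, and coordinates below are invented for illustration):

```python
# Minimal sketch of a hypothetical stroke-level markup: each <character>
# groups the <stroke> elements it is built from, and each stroke records
# its direction and endpoint coordinates on the tablet image.
import xml.etree.ElementTree as ET

def encode_character(glyph, strokes):
    """Build a <character> element from (direction, x1, y1, x2, y2) tuples."""
    char = ET.Element("character", value=glyph)
    for direction, x1, y1, x2, y2 in strokes:
        ET.SubElement(char, "stroke", direction=direction,
                      start=f"{x1},{y1}", end=f"{x2},{y2}")
    return char

# An 'E' written as one down-stroke and three across-strokes
# ("downs and acrosses"), with illustrative pixel coordinates.
e = encode_character("E", [
    ("down",   10, 10, 10, 40),
    ("across", 10, 10, 30, 10),
    ("across", 10, 25, 25, 25),
    ("across", 10, 40, 30, 40),
])
print(ET.tostring(e, encoding="unicode"))
```

Once characters are stored this way, stroke-level queries (e.g. counting how often a letter form uses a particular stroke order) become straightforward tree traversals.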
Antiperiodic XXZ chains with arbitrary spins: Complete eigenstate construction by functional equations in separation of variables
Generic inhomogeneous integrable XXZ chains with arbitrary spins are studied by means of the quantum separation of variables (SOV) method. Within this framework, a complete description of the spectrum (eigenvalues and eigenstates) of the antiperiodic transfer matrix is derived in terms of discrete systems of equations involving the inhomogeneity parameters of the model. We show here that one can reformulate this discrete SOV characterization of the spectrum in terms of functional T-Q equations of Baxter's type, hence proving the completeness of the solutions to the associated systems of Bethe-type equations. More precisely, we consider here two such reformulations. The first one is given in terms of Q-solutions, in the form of trigonometric polynomials of a given degree $N$, of a one-parameter family of T-Q functional equations with an extra inhomogeneous term. The second one is given in terms of Q-solutions, again in the form of trigonometric polynomials of degree $N$ but with double period, of Baxter's usual (i.e. without extra term) T-Q functional equation. In both cases, we prove the precise equivalence of the discrete SOV characterization of the transfer matrix spectrum with the characterization following from the consideration of the particular class of Q-solutions of the functional T-Q equation: to each transfer matrix eigenvalue corresponds exactly one such Q-solution and vice versa, and this Q-solution can be used to construct the corresponding eigenstate.
Comment: 38 pages
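The two reformulations mentioned above can be sketched schematically in generic Baxter T-Q notation (the precise functions and conventions of the paper may differ):

```latex
% Baxter's usual T-Q functional equation (the second reformulation):
t(\lambda)\, Q(\lambda) = a(\lambda)\, Q(\lambda - \eta) + d(\lambda)\, Q(\lambda + \eta)

% One-parameter family with an extra inhomogeneous term F (the first reformulation):
t(\lambda)\, Q(\lambda) = a(\lambda)\, Q(\lambda - \eta) + d(\lambda)\, Q(\lambda + \eta) + F(\lambda)
```

Here $t(\lambda)$ is the transfer matrix eigenvalue, $a(\lambda)$ and $d(\lambda)$ are fixed functions determined by the model's parameters, $\eta$ is the anisotropy parameter, and $Q$ is sought within the stated class of trigonometric polynomials; the Bethe-type equations arise from demanding that $Q$ belong to that class.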
Image and interpretation: using artificial intelligence to read ancient Roman texts
The ink and stylus tablets discovered at the Roman Fort of Vindolanda are a unique resource for scholars of ancient history. However, the stylus tablets have proved particularly difficult to read. This paper describes a system that assists expert papyrologists in the interpretation of the Vindolanda writing tablets. A model-based approach is taken that relies on models of the written form of characters, and statistical modelling of language, to produce plausible interpretations of the documents. Fusion of the contributions from the language, character, and image feature models is achieved by utilizing the GRAVA agent architecture, which uses Minimum Description Length as the basis for information fusion across semantic levels. A system is developed that reads in image data and outputs plausible interpretations of the Vindolanda tablets.
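The Minimum Description Length principle underlying this fusion can be sketched in a few lines: every candidate reading is scored by the bits needed to encode the word under a language model plus the bits needed to encode the image evidence given that reading, and the cheapest reading wins. The following is a toy illustration only, with all probabilities invented; it is not the GRAVA implementation:

```python
# Toy sketch of MDL-style fusion: total cost in bits for a candidate word is
# -log2 P(word) under a language model, plus -log2 P(evidence | character)
# for each character under a character model. The minimum-cost word wins.
import math

def description_length(word, char_scores, word_prior):
    """Total bits: -log2 P(word) + sum of -log2 P(image evidence | character)."""
    bits = -math.log2(word_prior[word])
    for p in char_scores[word]:
        bits += -math.log2(p)
    return bits

# Two candidate readings of the same damaged region, with per-character
# plausibilities from a hypothetical character model and word priors
# from a hypothetical Latin language model.
word_prior = {"miles": 0.02, "wiles": 0.0001}
char_scores = {
    "miles": [0.6, 0.9, 0.9, 0.8, 0.9],
    "wiles": [0.7, 0.9, 0.9, 0.8, 0.9],
}
best = min(word_prior, key=lambda w: description_length(w, char_scores, word_prior))
print(best)  # prints "miles": the language prior outweighs the slightly better 'w' fit
```

The appeal of the MDL framing is that evidence from different semantic levels (pixels, strokes, characters, words) is combined in a single common currency, bits, rather than through ad hoc weighting.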