
    The Electronic Explication

    The explication de texte, or commentary, has a distinguished record in the history of French education. It originated in biblical exegesis; by the seventeenth century it had become a fundamental component of classical training at Port-Royal, where the practice was for a master to "marquer" the text with different signs representing ideas, sentences, words or phrases for comment. It was systematised by the educational dogmatists who dominated the Académie Française in the first half of the twentieth century, and it became a focus of the pre-68 intellectual crisis, when it was increasingly subject to suspicion and challenge. It remains one of the principal methods of studying the works of French authors and of testing student competence in textual appreciation in Britain and France alike. With its requirement for different orders of commentary on context, culture, form and content, structure and lexis, meaning and mise-en-scène, amongst other considerations, this classic exercise stands to benefit from recent developments in electronic publishing, particularly those relating to the electronic critical edition such as XML, XSLT and the TEI Guidelines. Electronic editions of texts with associated materials are excellent aids to the preparation of the traditional explication de texte. The study of old texts requires easy access to associated materials so that, alongside the literary, linguistic and dramatic aspects, the social and political changes are also understood. Much work is currently being carried out by scholars to collate the disparate data available so that new insights into the texts can be gained. But how can computer-based tools benefit the undergraduate who has yet to gain a basic background knowledge of the texts, especially texts which present linguistic barriers? Two websites have been designed by the authors of this paper specifically with the explication in mind. One is Hypert(ex)te/plications, which provides information relating to seventeenth-century theatre studies: it contains the base texts of several plays by Corneille, Molière and Racine, with commentaries by staff and students on selected extracts, as well as associated background materials that the students can refer to in a user-centred hypertext fashion. The other, MedFrench, is a prototype XML version of a DOS program created at the University of Hull; it contains eight medieval French poems together with "pearls of wisdom" about their history, culture and language. Every word is annotated with its part-of-speech data, its modern French equivalent and its Old French stem, and detailed sentence-structure analysis is provided. The web version has been designed specifically to guide the reader through the materials in a linear way. Can the new methods of digitisation and electronic publishing allow for a new style of explication? Could the process of creating an electronic edition constitute a form of explication as well? The idea of students creating their own electronic editions is not new. Programs such as the Poetry Shell provided a friendly interface and easy-to-learn tools for students to add their own textual and graphic materials, but they shielded students from grappling with some important issues relating to encoding and the ontology of text. By marking up a text, the original practices of explication de texte as experienced by Racine himself at Port-Royal are revived.
    But now the student is empowered to guide the master rather than simply follow his example.
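
    As an illustration of what this marking-up might look like, here is a minimal TEI sketch of a student "marque" on a line from Racine's Phèdre. It is an illustrative assumption, not material from the Hypert(ex)te/plications site: the identifiers, the note type and the commentary text are all invented.

    <!-- Minimal illustrative TEI document: a verse line with a student's explication note -->
    <TEI xmlns="http://www.tei-c.org/ns/1.0">
      <teiHeader>
        <fileDesc>
          <titleStmt><title>Extract with student commentary (illustrative)</title></titleStmt>
          <publicationStmt><p>Student exercise</p></publicationStmt>
          <sourceDesc><p>Transcribed from a print edition</p></sourceDesc>
        </fileDesc>
      </teiHeader>
      <text>
        <body>
          <!-- The line carries an identifier so that commentary can point at it -->
          <l xml:id="l1">Le jour n'est pas plus pur que le fond de mon cœur.</l>
          <!-- The note plays the role of the master's "marque", anchoring the explication -->
          <note type="explication" target="#l1">A celebrated alexandrine; the imagery
            of light and purity binds the speaker's conscience to the day itself.</note>
        </body>
      </text>
    </TEI>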

    Using XML and XSLT for flexible elicitation of mental-health risk knowledge

    Current tools for assessing risks associated with mental-health problems require assessors to make high-level judgements based on clinical experience. This paper describes how new technologies can enhance qualitative research methods to identify the lower-level cues underlying these judgements, which can be collected by people without a specialist mental-health background. Methods and evolving results: Content analysis of interviews with 46 multidisciplinary mental-health experts exposed the cues and their interrelationships, which were represented as a mind map using software that stores maps as XML. All 46 mind maps were integrated into a single XML knowledge structure and analysed by a Lisp program to generate quantitative information about the number of experts associated with each part of it. The knowledge was then refined by the experts, using software developed in Flash to record their collective views within the XML itself. These views specified how the XML should be transformed by XSLT, a technology for transforming and rendering XML, which resulted in a validated hierarchical knowledge structure associating patient cues with risks. Conclusions: Changing knowledge-elicitation requirements were accommodated by flexible transformations of the XML data using XSLT, which also facilitated the generation of multiple data-gathering tools suited to different assessment circumstances and levels of mental-health knowledge.
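
    As a sketch of the final step, the stylesheet below shows how XSLT might render such a validated XML knowledge structure as a simple data-gathering checklist. The element names (knowledge, risk, cue) and the experts attribute are assumptions for illustration, not the project's actual schema.

    <xsl:stylesheet version="1.0"
                    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:output method="html" indent="yes"/>

      <!-- One checklist per risk category in the knowledge structure -->
      <xsl:template match="/knowledge">
        <html><body><xsl:apply-templates select="risk"/></body></html>
      </xsl:template>

      <xsl:template match="risk">
        <h2><xsl:value-of select="@name"/></h2>
        <ul>
          <!-- Keep only cues endorsed by at least five experts -->
          <xsl:for-each select="cue[@experts &gt;= 5]">
            <li>
              <input type="checkbox"/>
              <xsl:value-of select="."/>
              (<xsl:value-of select="@experts"/> experts)
            </li>
          </xsl:for-each>
        </ul>
      </xsl:template>
    </xsl:stylesheet>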

    Description-driven Adaptation of Media Resources

    The current multimedia landscape is characterized by significant diversity in terms of available media formats, network technologies, and device properties. This heterogeneity has resulted in a number of new challenges, such as providing universal access to multimedia content. A solution to this diversity is the use of scalable bit streams, together with the deployment of a complementary system capable of adapting scalable bit streams to the constraints imposed by a particular usage environment (e.g., the limited screen resolution of a mobile device). This dissertation investigates the use of an XML-driven (Extensible Markup Language) framework for the format-independent adaptation of scalable bit streams. Using this approach, the structure of a bit stream is first translated into an XML description. In the next step, the resulting XML description is transformed to reflect a desired adaptation of the bit stream. Finally, the transformed XML description is used to create an adapted bit stream that is suited for playback in the targeted usage environment. The main contribution of this dissertation is BFlavor, a new tool for exposing the syntax of binary media resources as an XML description. Its development was inspired by two other technologies, namely MPEG-21 BSDL (Bitstream Syntax Description Language) and XFlavor (Formal Language for Audio-Visual Object Representation, extended with XML features). Although created from different points of view, both languages offer solutions for translating the syntax of a media resource into an XML representation for further processing. BFlavor (BSDL+XFlavor) harmonizes the two technologies by combining their strengths and eliminating their weaknesses. The expressive power and performance of a BFlavor-based content adaptation chain, compared to tool chains based entirely on either BSDL or XFlavor, were investigated in several experiments. One series of experiments targeted the exploitation of multi-layered temporal scalability in H.264/AVC, paying particular attention to the use of sub-sequences and hierarchical coding patterns, as well as to the use of metadata messages to communicate the bit stream structure to the adaptation logic. BFlavor was the only tool to offer an elegant and practical solution for the XML-driven adaptation of H.264/AVC bit streams in the temporal domain.
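
    The following simplified stylesheet sketches the transformation step of such a chain: frames above a target temporal layer are removed from the XML description, after which a generator stage can produce the adapted bit stream. The frame element and layer attribute are invented for illustration; real BSDL/BFlavor descriptions record the byte ranges of syntactical structures rather than their payload.

    <xsl:stylesheet version="1.0"
                    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:output method="xml" indent="yes"/>
      <!-- Highest temporal layer to keep in the adapted stream -->
      <xsl:param name="maxLayer" select="1"/>

      <!-- Identity template: copy the description unchanged by default -->
      <xsl:template match="@*|node()">
        <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
      </xsl:template>

      <!-- Copy a frame only if it belongs to a retained temporal layer -->
      <xsl:template match="frame">
        <xsl:if test="@layer &lt;= $maxLayer">
          <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
        </xsl:if>
      </xsl:template>
    </xsl:stylesheet>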

    Adaptable Web content for e-learning communities

    In this paper we explore an easy-to-use methodology aimed at optimising the design, construction and maintenance of Web-based educational material, which leverages adaptive features to foster re-use and sharing between educational communities. Our approach is simpler than those proposed by other organizations: it borrows the principal idea of describing learning material through meta-information (about the properties and structures of the educational contents and the relationships between them), but discards the inherent complexity of their richer categorization schemas, which may overwhelm authors. This simplification will favour, in our opinion, a more lightweight and rapid production and delivery of educational content.
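
    A deliberately small, hypothetical record in this spirit might look as follows; every element name here is an illustrative assumption rather than part of a published schema.

    <!-- A few descriptive properties plus typed relationships between
         content units, instead of a full-blown categorization schema -->
    <learningObject id="xml-basics-01">
      <title>Introduction to XML</title>
      <description>A first lesson on elements, attributes and well-formedness.</description>
      <audience>undergraduate</audience>
      <language>en</language>
      <!-- The relationships are what adaptive delivery and re-use hang on -->
      <relation type="prerequisiteOf" ref="xslt-basics-01"/>
      <relation type="alternativeFor" ref="xml-basics-video-01"/>
    </learningObject>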

    FESA 3.0: Overcoming the XML/RDBMS Impedance Mismatch

    The Front End System Architecture (FESA) framework developed at CERN takes an XML-centric approach to modelling accelerator equipment software. Among other techniques, XML Schema is used for abstract model validation, while XSLT drives the generation of code. At the same time, all the information generated and used by the FESA framework is just a relatively small subset of a much wider realm of Controls Configuration data stored in a dedicated database and represented as a sophisticated relational model. Some data transformations occur in the XML universe, while others are handled by the database, depending on which technology is a better fit for the task at hand. This paper describes our approach to dealing with what we call the “XML/Relational impedance mismatch” – by analogy to the Object/Relational impedance mismatch – that is, how best to leverage the power of an RDBMS as a back-end for an XML-driven framework. We discuss which techniques work best for us, what to avoid, and where the potential pitfalls lie. All this is based on several years of experience with a living system used to control the world’s biggest accelerator complex.
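
    As a toy illustration of the schema-validation side (the element and attribute names are assumptions, not the actual FESA 3.0 schema), an equipment class with typed properties might be constrained like this:

    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
      <!-- An equipment class owns uniquely named, typed properties -->
      <xs:element name="equipment-class">
        <xs:complexType>
          <xs:sequence>
            <xs:element name="property" maxOccurs="unbounded">
              <xs:complexType>
                <xs:attribute name="name" type="xs:NCName" use="required"/>
                <xs:attribute name="type" type="xs:string" use="required"/>
              </xs:complexType>
            </xs:element>
          </xs:sequence>
          <xs:attribute name="name" type="xs:NCName" use="required"/>
        </xs:complexType>
      </xs:element>
    </xs:schema>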

    TIGRA - An architectural style for enterprise application integration


    Exploring manuscripts: sharing ancient wisdoms across the semantic web

    Recent work in digital humanities has seen researchers increasingly producing online editions of texts and manuscripts, particularly through adoption of the TEI XML format for online publishing. The benefits of semantic web techniques are underexplored in such research, however, with a lack of sharing and communication of research information. The Sharing Ancient Wisdoms (SAWS) project applies linked data practices to enhance and expand on what is possible with these digital text editions. Focussing on Greek and Arabic collections of ancient wise sayings, which are often related to each other, we use RDF to annotate the TEI documents and extract semantic information from them as RDF triples. This allows researchers to explore the conceptual networks that arise from these interconnected sayings. The SAWS project advocates a semantic-web-based methodology, enhancing rather than replacing current workflow processes, for digital humanities researchers to share their findings and collectively benefit from each other’s work.
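
    The fragment below sketches the kind of RDF such a workflow can yield; the URIs and the isVariantOf relation are illustrative assumptions, not the SAWS ontology itself.

    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:dcterms="http://purl.org/dc/terms/"
             xmlns:saws="http://example.org/saws/ontology#">
      <!-- A Greek saying linked to a related version in an Arabic collection -->
      <rdf:Description rdf:about="http://example.org/texts/greek-gnomologium#saying12">
        <dcterms:language>grc</dcterms:language>
        <saws:isVariantOf
            rdf:resource="http://example.org/texts/arabic-collection#saying7"/>
      </rdf:Description>
    </rdf:RDF>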

    HELM and the Semantic Math-Web
