
    Exploring the viability of semi-automated document markup

    Digital humanities scholarship has long acknowledged the abundant theoretical advantages of text encoding; more questionable is whether the advantages can, in practice and in general, outweigh the costs of the usually labor-intensive task of encoding. Markup of literary texts has not yet been undertaken on a scale large enough to realize many of its potential applications and benefits. If we can reduce the human labor required to encode texts, libraries and their users can take greater advantage of the hosts of texts being produced by various mass digitization projects, and can focus more attention on implementing tools that use the underlying encodings. How far can automation take an encoding effort? And what implications might that have for libraries and their users? Compelled by such questions, this paper explores the viability of semi-automated text encoding.

    A Vernacular for Coherent Logic

    We propose a simple yet expressive proof representation from which proofs for different proof assistants can easily be generated. The representation uses only a few inference rules and is based on a fragment of first-order logic called coherent logic. Coherent logic has been recognized by a number of researchers as a suitable logic for many everyday mathematical developments. The proposed proof representation is accompanied by a corresponding XML format and by a suite of XSL transformations for generating formal proofs for Isabelle/Isar and Coq, as well as proofs expressed in natural-language form (formatted in LaTeX or in HTML). Also, our automated theorem prover for coherent logic exports proofs in the proposed XML format. All tools are publicly available, along with a set of sample theorems. (CICM 2014 - Conferences on Intelligent Computer Mathematics, 2014.)
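    For context, and following standard presentations of coherent logic rather than anything quoted from the abstract above, coherent formulas are implicitly universally closed implications of the shape

        \forall \vec{x}\, \bigl( A_1 \land \cdots \land A_n \;\rightarrow\; \exists \vec{y}_1\, B_1 \lor \cdots \lor \exists \vec{y}_m\, B_m \bigr)

    where each A_i is an atomic formula and each B_j is a conjunction of atomic formulas (n or m may be zero). Proofs in this fragment proceed by direct forward reasoning, witness introduction, and case splits, which helps explain why one proof representation can be rendered as Isabelle/Isar, Coq, or natural-language text.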

    Automating property-based testing of evolving web services

    Web services are the most widely used service technology that drives the Service-Oriented Computing (SOC) paradigm. As a result, effective testing of web services is becoming increasingly important. In this paper, we present a framework and toolset for testing web services and for evolving test code in sync with the evolution of web services. Our approach to testing web services is based on the Erlang programming language and QuviQ QuickCheck, a property-based testing tool written in Erlang, and our support for test code evolution is added to Wrangler, the Erlang refactoring tool. The key components of our system include the automatic generation of initial test code, the inference of web service interface changes between versions, the provision of a number of domain-specific refactorings, and the automatic generation of refactoring scripts for evolving the test code. Our framework provides users with a powerful and expressive web service testing framework, while minimising users' effort in creating, maintaining and evolving the test model. The framework presented in this paper can be used by both web service providers and consumers, and can be used to test web services written in any language; the approach advocated here could also be adopted in other property-based testing frameworks and refactoring tools.
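    The paper's tooling is Erlang with QuviQ QuickCheck and Wrangler; as a rough, language-shifted illustration of what a property-based test of a web service operation looks like, the Python sketch below uses the hypothesis library against a made-up endpoint and 'add' operation (the URL, request shape, operation name, and invariant are all assumptions, not taken from the paper).

        # Hypothetical property-based test of a web service operation.
        # The service URL, request shape, and 'add' operation are invented for
        # illustration; the paper itself works with Erlang and QuickCheck.
        import requests
        from hypothesis import given, strategies as st

        SERVICE_URL = "http://example.org/calc"  # placeholder endpoint

        def call_add(a: int, b: int) -> int:
            """Invoke the remote 'add' operation and return its result."""
            resp = requests.post(SERVICE_URL, json={"op": "add", "args": [a, b]})
            resp.raise_for_status()
            return resp.json()["result"]

        # Property: for arbitrary generated integers, the remote result must
        # agree with local addition. A test runner such as pytest would call this.
        @given(st.integers(), st.integers())
        def test_add_matches_local_addition(a: int, b: int) -> None:
            assert call_add(a, b) == a + b

    Tests of this shape are the kind of artifact that the interface-change inference and refactoring scripts described above would rewrite mechanically as the service evolves.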

    Ontology technology for the development and deployment of learning technology systems - a survey

    The World-Wide Web is undergoing dramatic changes at the moment. The Semantic Web is an initiative to bring meaning to the Web; at its core, it is based on ontology technology, a knowledge representation framework. We illustrate the importance of this evolutionary development. We survey five scenarios demonstrating different forms of application of ontology technologies in the development and deployment of learning technology systems. Ontology technologies are highly useful for organising, personalising, and publishing learning content, and for discovering, generating, and composing learning objects.

    Using a Cruise Report to Generate XML Metadata

    Since 2005, metadata generation at the Center for Coastal and Ocean Mapping/Joint Hydrographic Center has slowly evolved from a painful and tedious process of copying and pasting to produce hundreds of files, to an automated system that generates 90% of the needed metadata from the data collected on cruises. However, one piece was still missing from the automated system: the wordy part of the metadata, such as the attribute accuracy report, the abstract, and the process description. This information cannot be mined from the raw survey data. This paper illustrates how to generate a template from a Microsoft Word-based cruise report that can be used in conjunction with another template (generated from the raw data collected on a cruise) to create XML metadata ready for submission to the NOAA/National Geophysical Data Center.
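    As a minimal sketch of the merging step described above (element names are simplified stand-ins, not the exact FGDC/NGDC schema the authors target), the narrative fields taken from the Word-based cruise report and the fields derived from the raw survey data can be combined into a single XML record roughly like this:

        # Illustrative sketch only: merge narrative fields from a cruise report
        # with data-derived fields into one XML metadata record. The element
        # names below are hypothetical placeholders.
        import xml.etree.ElementTree as ET

        narrative = {                      # would come from the Word-based cruise report
            "abstract": "Multibeam survey of ...",
            "attribute_accuracy_report": "Soundings corrected using ...",
            "process_description": "Raw data cleaned and gridded in ...",
        }
        data_derived = {                   # would come from the raw survey data
            "west_bounding_coordinate": "-70.9",
            "east_bounding_coordinate": "-70.6",
        }

        record = ET.Element("metadata")
        for name, text in {**data_derived, **narrative}.items():
            ET.SubElement(record, name).text = text

        ET.ElementTree(record).write("cruise_metadata.xml",
                                     encoding="utf-8", xml_declaration=True)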

    Schema matching for transforming structured documents

    Structured document content reuse is the problem of restructuring and translating data structured under a source schema into an instance of a target schema. A notion closely tied to structured document reuse is that of structure transformations. Schema matching is a critical step in structured document transformations. Manual matching is expensive and error-prone, so it is important to develop techniques that automate the matching process and thus the transformation process. In this paper, we contribute both to understanding the matching problem in the context of structured document transformations and to developing matching methods whose output serves as the basis for the automatic generation of transformation scripts.
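    To make the pipeline concrete, the toy Python sketch below shows the kind of artifact a matcher produces: a source-to-target element mapping that a later stage could compile into a transformation script (for example, XSLT). The name-similarity heuristic and the element names are purely illustrative and are not the matching methods developed in the paper.

        # Toy schema matcher: pair each source-schema element with the most
        # similarly named target-schema element. Real matchers also use
        # structure, types, and instance data.
        from difflib import SequenceMatcher

        source_elements = ["author", "title", "pubYear", "body"]
        target_elements = ["creator", "title", "year", "content"]

        def best_match(name, candidates):
            return max(candidates,
                       key=lambda c: SequenceMatcher(None, name.lower(), c.lower()).ratio())

        mapping = {s: best_match(s, target_elements) for s in source_elements}
        print(mapping)   # e.g. 'title' -> 'title', 'pubYear' -> 'year', ...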

    Interoperability of Information Systems and Heterogenous Databases Using XML

    Interoperability of information systems is the most critical issue facing businesses that need to access information from multiple information systems on different environments and diverse platforms. Interoperability has been a basic requirement for modern information systems in a competitive and volatile business environment, particularly with the advent of distributed network systems and the growing relevance of inter-network communications. Our objective in this paper is to develop a comprehensive framework to facilitate interoperability among distributed and heterogeneous information systems, and to develop prototype software to validate the application of XML to the interoperability of information systems and databases.
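    A minimal sketch of the general idea, not of the authors' prototype: rows held under two different local schemas are re-expressed in a single shared XML exchange format so that either system can consume the other's data. The table name, column names, and target element names below are invented for illustration.

        # Hypothetical example: export rows from one system's local schema into
        # a neutral XML exchange format that other systems agree on.
        import sqlite3
        import xml.etree.ElementTree as ET

        def rows_to_xml(rows, field_map):
            """Map heterogeneous column names onto a common set of XML element names."""
            root = ET.Element("customers")
            for row in rows:
                rec = ET.SubElement(root, "customer")
                for src_col, common_name in field_map.items():
                    ET.SubElement(rec, common_name).text = str(row[src_col])
            return ET.tostring(root, encoding="unicode")

        # System A stores customers as 'cust_name'/'cust_city'; the shared format
        # exposes them as <name>/<city>, so consumers never see A's column names.
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE clients (cust_name TEXT, cust_city TEXT)")
        conn.execute("INSERT INTO clients VALUES ('Acme Ltd', 'Kuala Lumpur')")
        conn.row_factory = sqlite3.Row
        rows = conn.execute("SELECT * FROM clients").fetchall()

        print(rows_to_xml(rows, {"cust_name": "name", "cust_city": "city"}))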