11,703 research outputs found

    Description of the LTG system used for MUC-7

    The basic building blocks in our muc system are reusable text handling tools which we have been developing and using for a number of years at the Language Technology Group. They are modular tools with stream input/output; each tool does a very specific job, but can be combined with other tools in a unix pipeline. Different combinations of the same tools can thus be used in a pipeline for completing different tasks. Our architecture imposes an additional constraint on the input/output streams: they should have a common syntactic format. For this common format we chose eXtensible Markup Language (xml). xml is an official, simplified version of Standard Generalised Markup Language (sgml), simplified to make processing easier [3]. We were involved in the development of the xml standard, building on our expertise in the design of our own Normalised sgml (nsl) and nsl tool lt nsl [10], and our xml tool lt xml [11]. A detailed comparison of this sgml-oriented architecture with more traditional database-oriented architectures can be found in [9]. A tool in our architecture is thus a piece of software which uses an api for all its access to xml and sgml data and performs a particular task: exploiting markup which has previously been added by other tools, removing markup, or adding new markup to the stream(s) without destroying the previously added markup.
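    The pipeline architecture the abstract describes can be sketched as follows. This is a minimal illustration only, not the LTG toolset's actual API: each "tool" is a function that reads an XML tree, adds its own layer of markup without destroying what upstream tools produced, and passes the result on.

```python
import xml.etree.ElementTree as ET

def tokenise(root):
    """Tool 1: wrap each whitespace-separated token of an <s> element in a <w> element."""
    for s in root.iter("s"):
        words = (s.text or "").split()
        s.text = None
        for word in words:
            w = ET.SubElement(s, "w")
            w.text = word
    return root

def tag_names(root):
    """Tool 2: exploit the <w> markup added upstream, flagging capitalised tokens.

    Existing markup is preserved; only an attribute is added."""
    for w in root.iter("w"):
        if w.text and w.text[0].isupper():
            w.set("type", "name")
    return root

doc = ET.fromstring("<doc><s>John visited Paris</s></doc>")
for tool in (tokenise, tag_names):   # the "unix pipeline", tool by tool
    doc = tool(doc)
print(ET.tostring(doc, encoding="unicode"))
```

    Because every tool consumes and emits the same common XML format, the same two functions could be recombined with others in a different order for a different task, which is the point of the shared-syntax constraint.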

    Ontology technology for the development and deployment of learning technology systems - a survey

    The World-Wide Web is undergoing dramatic changes at the moment. The Semantic Web is an initiative to bring meaning to the Web; at its core it is based on ontology technology, a knowledge representation framework. We illustrate the importance of this evolutionary development. We survey five scenarios demonstrating different forms of application of ontology technologies in the development and deployment of learning technology systems. Ontology technologies are highly useful to organise, personalise, and publish learning content and to discover, generate, and compose learning objects.

    The economics of Information Technologies Standards &

    This research investigates the problem of Information Technologies Standards or Recommendations from an economic point of view. In our competitive economy, most enterprises have adopted standardization processes, following the recommendations of specialized organisations such as ISO (International Organisation for Standardization), W3C (World Wide Web Consortium) and ISOC (Internet Society), in order to reassure their customers. But with the development of new and open internet standards, different enterprises from the same sectors decided to develop their own IT standards for their activities. So we will hypothesise that the development of a professional IT standard requires a network of enterprises, but also financial support, a particular organizational form and a precisely defined activity to describe. In order to demonstrate this hypothesis and understand how professionals organise themselves for developing and financing IT standards, we will take Financial IT Standards as an example. After a short and general presentation of IT Standards for the financial market, based on XML technologies, we will describe how professional IT standards could be created (nearly 10 professional norms or recommendations appeared at the beginning of this century). We will see why these standards are developed outside the classical circles of standardisation organisations, and what could be the "key factors of success" for the best IT standards in Finance. We will use a descriptive and analytical method, in order to evaluate the financial support and to understand these actors' strategies and the various economic models behind them. Then, we will understand why and how these standards have emerged and been developed. We will conclude this paper with a prospective view on future development of standards and recommendations.
    Keywords: information technologies, financial standards, development of standards, evaluation of the economical costs of standards

    Putting the Text back into Context: A Codicological Approach to Manuscript Transcription

    Textual scholars have tended to produce editions which present the text without its manuscript context. Even though digital editions now often present single-witness editions with facsimiles of the manuscripts, the text itself is still transcribed and represented as a linguistic object rather than a physical one. Indeed, this is explicitly stated as the theoretical basis for the de facto standard of markup for digital texts: the Guidelines of the Text Encoding Initiative (TEI). These explicitly treat texts as semantic units such as paragraphs, sentences, verses and so on, rather than physical elements such as pages, openings, or surfaces, and some scholars have argued that this is the only viable model for representing texts. In contrast, this chapter presents arguments for considering the document as a physical object in the markup of texts. The theoretical arguments about what constitutes a text are first reviewed, with emphasis on those used by the TEI and other theoreticians of digital markup. A series of cases is then given in which a document-centric approach may be desirable, with both modern and medieval examples. Finally, a step forward in this direction is presented, namely the results of the Genetic Edition Working Group in the Manuscript Special Interest Group of the TEI: this includes a proposed standard for documentary markup, whereby aspects of codicology and mise en page can be included in digital editions, putting the text back into its manuscript context.
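    The semantic-versus-documentary distinction the abstract draws can be made concrete with a small sketch. The element names below are loosely modelled on TEI conventions but heavily simplified for illustration; the sample verse and the folio number are invented stand-ins, not transcriptions from any edition.

```python
import xml.etree.ElementTree as ET

verses = ("Whan that Aprill", "with his shoures soote")

# Semantic view: the text as a linguistic object, a group of verse lines
# independent of where they happen to sit on a page.
semantic = ET.Element("lg")
for verse in verses:
    l = ET.SubElement(semantic, "l")
    l.text = verse

# Documentary view: the same words as marks on a physical surface,
# anchored to a specific folio and its written lines.
documentary = ET.Element("surface", {"n": "fol. 1r"})
for verse in verses:
    line = ET.SubElement(documentary, "line")
    line.text = verse

print(ET.tostring(semantic, encoding="unicode"))
print(ET.tostring(documentary, encoding="unicode"))
```

    The two trees carry the same words, but only the second records facts about the page; a documentary markup standard of the kind the chapter discusses lets an edition keep both views side by side.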

    Consistency checking of financial derivatives transactions

    Research Articles in Simplified HTML: a Web-first format for HTML-based scholarly articles

    Purpose. This paper introduces the Research Articles in Simplified HTML (or RASH), which is a Web-first format for writing HTML-based scholarly papers; it is accompanied by the RASH Framework, a set of tools for interacting with RASH-based articles. The paper also presents an evaluation that involved authors and reviewers of RASH articles submitted to the SAVE-SD 2015 and SAVE-SD 2016 workshops. Design. RASH has been developed aiming to: be easy to learn and use; share scholarly documents (and embedded semantic annotations) through the Web; support its adoption within the existing publishing workflow. Findings. The evaluation study confirmed that RASH is ready to be adopted in workshops, conferences, and journals and can be quickly learnt by researchers who are familiar with HTML. Research Limitations. The evaluation study also highlighted some issues in the adoption of RASH, and in general of HTML formats, especially by less technically savvy users. Moreover, additional tools are needed, e.g., for enabling additional conversions from/to existing formats such as OpenXML. Practical Implications. RASH (and its Framework) is another step towards enabling the definition of formal representations of the meaning of the content of an article, facilitating its automatic discovery, enabling its linking to semantically related articles, providing access to data within the article in actionable form, and allowing integration of data between papers. Social Implications. RASH addresses the intrinsic needs related to the various users of a scholarly article: researchers (focussing on its content), readers (experiencing new ways for browsing it), citizen scientists (reusing available data formally defined within it through semantic annotations), publishers (using the advantages of new technologies as envisioned by the Semantic Publishing movement). Value. RASH helps authors to focus on the organisation of their texts, supports them in the task of semantically enriching the content of articles, and leaves all the issues about validation, visualisation, conversion, and semantic data extraction to the various tools developed within its Framework.
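    The idea of an HTML-first scholarly article can be sketched as below. The skeleton uses a handful of plain HTML elements and is purely illustrative; it does not reproduce the official RASH element list or schema, and the `role` attribute value is an assumption borrowed from common web-document practice, not RASH's required vocabulary.

```python
from html.parser import HTMLParser

# A toy HTML-based article skeleton in the spirit of a Web-first format:
# structure (title, abstract, sections) expressed directly in HTML.
article = """<html>
<head><title>A Sample Article</title></head>
<body>
<h1>A Sample Article</h1>
<section role="doc-abstract"><p>We describe a toy example.</p></section>
<section><h2>Introduction</h2><p>Body text goes here.</p></section>
</body>
</html>"""

class TagCollector(HTMLParser):
    """Walk the article and record its start tags, the way a converter
    or validator in a tool framework might inspect the structure."""
    def __init__(self):
        super().__init__()
        self.tags = []
    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

parser = TagCollector()
parser.feed(article)
print(parser.tags)
```

    Because the article is ordinary HTML, it can be browsed directly on the Web, while downstream tooling (validation, conversion, semantic data extraction) works on the same single source.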