
    Modelling discourse in contested domains: A semiotic and cognitive framework

    This paper examines the representational requirements for interactive, collaborative systems intended to support sensemaking and argumentation over contested issues. We argue that a perspective supported by semiotic and cognitively oriented discourse analyses offers theoretical insights and motivates representational requirements for the semantics of tools for contesting meaning. We introduce our semiotic approach, highlighting its implications for discourse representation, before describing a research system (ClaiMaker) designed to support the construction of scholarly argumentation by allowing analysts to publish and contest 'claims' about scientific contributions. We show how ClaiMaker's representational scheme is grounded in specific assumptions concerning the nature of explicit modelling and the evolution of meaning within a discourse community. These characteristics allow the system to represent scholarly discourse as a dynamic process, in the form of continuously evolving structures. A cognitively oriented discourse analysis then shows how the use of a small set of cognitive relational primitives in the underlying ontology opens possibilities for offering users advanced forms of computational service for analysing collectively constructed argumentation networks.
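
    A minimal sketch of the kind of claim-and-relation data structure such a system implies, in Python; the relation names and the small API below are illustrative assumptions, not ClaiMaker's actual ontology or interface.

```python
from dataclasses import dataclass, field

# Illustrative relational primitives; the actual ClaiMaker ontology is not
# reproduced in the abstract, so these names are assumptions.
PRIMITIVES = {"supports", "challenges", "uses", "is-evidence-for"}

@dataclass
class Claim:
    claim_id: str
    text: str
    author: str

@dataclass
class Link:
    source: str     # claim_id of the originating claim
    relation: str   # one of PRIMITIVES
    target: str     # claim_id of the supported/contested claim

@dataclass
class ArgumentationNetwork:
    claims: dict = field(default_factory=dict)
    links: list = field(default_factory=list)

    def add_claim(self, claim: Claim) -> None:
        self.claims[claim.claim_id] = claim

    def connect(self, source: str, relation: str, target: str) -> None:
        if relation not in PRIMITIVES:
            raise ValueError(f"unknown relational primitive: {relation}")
        self.links.append(Link(source, relation, target))

    def contested(self) -> list:
        """Claims that are the target of at least one 'challenges' link."""
        return [self.claims[l.target] for l in self.links if l.relation == "challenges"]

net = ArgumentationNetwork()
net.add_claim(Claim("c1", "Method X scales linearly.", "analyst-a"))
net.add_claim(Claim("c2", "Replication shows super-linear cost.", "analyst-b"))
net.connect("c2", "challenges", "c1")
print([c.text for c in net.contested()])
```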

    The VERBMOBIL domain model version 1.0

    This report describes the domain model used in the German Machine Translation project VERBMOBIL. In order to make the design principles underlying the modeling explicit, we begin with a brief sketch of the VERBMOBIL demonstrator architecture from the perspective of the domain model. We then present some rather general considerations on the nature of domain modeling and its relationship to semantics. We claim that the semantic information contained in the model mainly serves two tasks: on the one hand, it provides the basis for a conceptual transfer from German to English; on the other, it provides information needed for disambiguation. We argue that these tasks pose different requirements, and that domain modeling in general is highly task-dependent. A brief overview of domain models or ontologies used in existing NLP systems confirms this position. We finally describe the different parts of the domain model, explain our design decisions, and present examples of how the information contained in the model can actually be used in the VERBMOBIL demonstrator. In doing so, we also point out the main functionality of FLEX, the Description Logic system used for the modeling.
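
    A minimal sketch of how a domain model can drive disambiguation, assuming a toy concept hierarchy; the actual VERBMOBIL concepts and the FLEX syntax are not reproduced in the summary above, so all names below are illustrative.

```python
# Toy is-a hierarchy standing in for a domain model; every concept name here
# is an assumption, not part of the VERBMOBIL model.
SUBSUMES = {
    "Entity": {"TemporalEntity", "PhysicalObject"},
    "TemporalEntity": {"Appointment", "CalendarDate"},
    "PhysicalObject": {"Room", "Fruit"},
}

def is_a(concept: str, ancestor: str) -> bool:
    """True if `concept` is subsumed by `ancestor` in the toy hierarchy."""
    if concept == ancestor:
        return True
    return any(is_a(concept, child) for child in SUBSUMES.get(ancestor, set()))

# Each reading of an ambiguous word is linked to a concept in the model; the
# selectional restriction of the governing verb then filters the readings.
READINGS = {"date": [("calendar-date", "CalendarDate"), ("fruit", "Fruit")]}
SELECTS = {"reschedule": "TemporalEntity", "eat": "PhysicalObject"}

def disambiguate(word: str, governing_verb: str) -> list:
    required = SELECTS[governing_verb]
    return [sense for sense, concept in READINGS[word] if is_a(concept, required)]

print(disambiguate("date", "reschedule"))  # ['calendar-date']
```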

    Retrieving Evidence from EHRs with LLMs: Possibilities and Challenges

    Unstructured Electronic Health Record (EHR) data often contains critical information, complementary to imaging data, that would inform radiologists' diagnoses. However, time constraints and the large volume of notes frequently associated with individual patients render manual perusal of such data to identify relevant evidence infeasible in practice. Modern Large Language Models (LLMs) provide a flexible means of interacting with unstructured EHR data, and may provide a mechanism to efficiently retrieve and summarize unstructured evidence relevant to a given query. In this work, we propose and evaluate an LLM (Flan-T5 XXL) for this purpose. Specifically, in a zero-shot setting we task the LLM with inferring whether a patient has, or is at risk of, a particular condition; if so, we prompt the model to summarize the supporting evidence. Enlisting radiologists for manual evaluation, we find that this LLM-based approach yields outputs consistently preferred over a standard information retrieval baseline, but we also highlight the key outstanding challenge: LLMs are prone to hallucinating evidence. However, we provide results suggesting that model confidence in outputs may signal when LLMs are hallucinating, potentially providing a means to address this.
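
    A minimal sketch of the zero-shot infer-then-summarize workflow described above, using the public google/flan-t5-xxl checkpoint via Hugging Face Transformers; the paper's exact prompts and confidence measure are not given here, so both (including the mean token log-probability used as a confidence proxy) are assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL = "google/flan-t5-xxl"  # a smaller flan-t5 variant also works for testing
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL, torch_dtype=torch.bfloat16)

def ask(prompt: str, max_new_tokens: int = 128):
    """Generate an answer and a rough confidence proxy (mean token log-prob)."""
    inputs = tokenizer(prompt, return_tensors="pt")
    out = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        return_dict_in_generate=True,
        output_scores=True,
    )
    text = tokenizer.decode(out.sequences[0], skip_special_tokens=True)
    logprobs = [
        torch.log_softmax(step, dim=-1)[0, tok]
        for step, tok in zip(out.scores, out.sequences[0, 1:])
    ]
    return text, float(torch.stack(logprobs).mean())

note = "..."  # unstructured EHR note text
condition = "pneumothorax"

answer, conf = ask(
    f"Does this note indicate the patient has or is at risk of {condition}? {note}"
)
if answer.strip().lower().startswith("yes"):
    evidence, ev_conf = ask(
        f"Summarize the evidence that the patient has or is at risk of {condition}: {note}"
    )
    # Low confidence may flag hallucinated evidence and a need for manual review.
    print(evidence, ev_conf)
```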

    Enhancing Recommendations in Specialist Search Through Semantic-based Techniques and Multiple Resources

    Information resources abound on the Internet, but mining these resources is a non-trivial task. This abundance has raised the need to enhance the services provided to users, such as recommendations. The purpose of this work is to explore how better recommendations can be provided to specialists in specific domains, such as bioinformatics, by introducing semantic techniques that reason across different resources and by using specialist search techniques. Such techniques exploit semantic relations and hidden associations that arise from the overlap of information among various concepts in multiple bioinformatics resources such as ontologies, websites and corpora. This work therefore introduces a new method that reasons over different bioinformatics resources and then discovers and exploits relations and information that may not exist in the original resources. Such relations, for example sibling and semantic-similarity relations, may be discovered as a consequence of this information overlap and used to enhance the accuracy of the recommendations provided on bioinformatics content (e.g. articles). In addition, this research introduces a set of semantic rules that can extract semantic information and relations inferred among various bioinformatics resources. This project introduces these semantic-based methods as part of a recommendation service within a content-based system. Moreover, it uses specialists' interests to enhance the recommendations provided, employing a method that collects user data implicitly. The data are then represented as an adaptive ontological user profile for each user, based on his/her preferences, which contributes to more accurate recommendations for each specialist in the field of bioinformatics.
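
    A minimal sketch, in Python, of how sibling relations discovered from an ontology could expand an ontological user profile and rank articles by concept overlap; the actual resources, semantic rules and profile format are not specified in the abstract, so everything below is illustrative.

```python
# Toy is-a ontology as a child -> parent map; all concept names are assumptions.
PARENT = {
    "BLAST": "sequence-alignment",
    "ClustalW": "sequence-alignment",
    "sequence-alignment": "bioinformatics-method",
}

def siblings(concept: str) -> set:
    """Concepts sharing a direct parent with `concept` (the sibling relation)."""
    parent = PARENT.get(concept)
    return {c for c, p in PARENT.items() if p == parent and c != concept} if parent else set()

def expand_profile(profile: set) -> set:
    """Enrich a user profile with sibling concepts inferred from the ontology."""
    expanded = set(profile)
    for concept in profile:
        expanded |= siblings(concept)
    return expanded

def recommend(profile: set, articles: dict, top_k: int = 3) -> list:
    """Rank articles by concept overlap with the expanded profile."""
    expanded = expand_profile(profile)
    scored = [(len(expanded & set(tags)), title) for title, tags in articles.items()]
    return [title for score, title in sorted(scored, reverse=True)[:top_k] if score > 0]

articles = {
    "Faster multiple alignment with ClustalW": ["ClustalW", "benchmarking"],
    "A survey of protein structure prediction": ["protein-structure"],
}
print(recommend({"BLAST"}, articles))  # the ClustalW article surfaces via the sibling relation
```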

    The Boltzmann Machine: a Connectionist Model for Supra-Classical Logic

    This thesis moves towards a reconciliation of two of the major paradigms of artificial intelligence by exploring the representation of symbolic logic in an artificial neural network. Previous attempts at the machine representation of classical logic are reviewed. We, however, consider the requirements of inference in the broader realm of supra-classical, non-monotonic logic. This logic is concerned with the tolerance of exceptions and is thought to be associated with common-sense reasoning. Biological plausibility extends these requirements in the context of human cognition. The thesis identifies the requirements of supra-classical, non-monotonic logic in relation to the properties of candidate neural networks. Previous research has theoretically identified the Boltzmann machine as a potential candidate. We provide experimental evidence supporting a version of the Boltzmann machine as a practical representation of this logic. The theme is pursued by looking at the benefits of utilising the relationship between the logic and the Boltzmann machine in two areas. We report adaptations to the machine architecture which select for different information distributions; these distributions correspond to state preference in traditional logic versus the concept of atomic typicality in contemporary approaches to logic. We also show that the learning algorithm of the Boltzmann machine can be adapted to implement pseudo-rehearsal during retraining. The results of machine retraining are then used to consider the plausibility of some current theories of belief revision in logic. Furthermore, we propose an alternative approach to belief revision based on the experimental results of retraining the Boltzmann machine.
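
    A minimal numpy sketch of a Boltzmann machine itself: symmetric weights, an energy function, and Gibbs sampling so that low-energy states dominate. The thesis's logic encoding, architectural adaptations and pseudo-rehearsal procedure are not reproduced here; the weights below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 4                      # one binary unit per propositional atom (illustrative)
W = np.zeros((n, n))
b = np.zeros(n)
# Illustrative constraint: "atom 0 and atom 1 tend to hold together".
W[0, 1] = W[1, 0] = 2.0

def energy(s: np.ndarray) -> float:
    """Standard Boltzmann machine energy: lower means more preferred."""
    return -0.5 * s @ W @ s - b @ s

def gibbs_sample(steps: int = 5000, temperature: float = 1.0) -> dict:
    """Counts of visited states; low-energy states should be most frequent."""
    s = rng.integers(0, 2, size=n).astype(float)
    counts = {}
    for _ in range(steps):
        i = rng.integers(n)
        gap = W[i] @ s + b[i]            # energy gap for turning unit i on
        s[i] = float(rng.random() < 1.0 / (1.0 + np.exp(-gap / temperature)))
        state = tuple(s.astype(int))
        counts[state] = counts.get(state, 0) + 1
    return counts

top = sorted(gibbs_sample().items(), key=lambda kv: -kv[1])[:3]
print(top)   # states with atoms 0 and 1 both on should appear most often
```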

    Novel Methods for Forensic Multimedia Data Analysis: Part I

    The increased use of digital media in daily life has created a demand for novel multimedia data analysis techniques that can help to use these data for forensic purposes. Processing such data for police investigations and as evidence in a court of law, in a way that makes data interpretation reliable, trustworthy, and efficient in terms of human time and other resources, will greatly speed up investigations and make them more effective. If such data are to be used as evidence in a court of law, techniques that can confirm their origin and integrity are necessary. In this chapter, we propose a new concept for multimedia processing techniques that address varied multimedia sources. We describe the background and motivation for our work, and explain the overall system architecture. We present the data to be used. After a review of the state of the art for the multimedia data we consider in this work, we describe the methods and techniques we are developing that go beyond it. The work is continued in Part II of this topic.
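
    A minimal sketch of a cryptographic integrity check of the kind such evidence workflows rely on; the chapter's own techniques are not detailed in the abstract, and the file path shown in the usage comments is hypothetical.

```python
import hashlib

def fingerprint(path: str, chunk_size: int = 1 << 20) -> str:
    """SHA-256 digest of a media file, recorded at acquisition time."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: str, recorded_digest: str) -> bool:
    """Re-hash the file and compare against the digest recorded at acquisition."""
    return fingerprint(path) == recorded_digest

# At acquisition:    digest = fingerprint("evidence/cam01_clip.mp4")
# Before court use:  assert verify("evidence/cam01_clip.mp4", digest)
```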