
    Featherweight VeriFast

    VeriFast is a leading research prototype tool for the sound modular verification of safety and correctness properties of single-threaded and multithreaded C and Java programs. It has been used as a vehicle for exploration and validation of novel program verification techniques and for industrial case studies; it has served well at a number of program verification competitions; and it has been used for teaching by multiple teachers independent of the authors. However, until now, while VeriFast's operation has been described informally in a number of publications, and specific verification techniques have been formalized, a clear and precise exposition of how VeriFast works has not yet appeared. In this article we present for the first time a formal definition and soundness proof of a core subset of the VeriFast program verification approach. The exposition aims to be both accessible and rigorous: the text is based on lecture notes for a graduate course on program verification, and it is backed by an executable machine-readable definition and machine-checked soundness proof in Coq.
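    VeriFast checks function contracts (preconditions and postconditions) statically, by symbolic execution over separation-logic assertions, before the program ever runs. As a rough illustration of the contract idea only, and not of VeriFast's static checking or its annotation syntax, a minimal runtime sketch in Python might look like this (the `contract` decorator is a hypothetical helper invented for this example):

```python
# Hypothetical runtime analogue of contract-based specification.
# VeriFast proves such contracts statically for all inputs; here they
# are merely checked on each concrete call via assertions.
import functools

def contract(requires=lambda *a, **k: True,
             ensures=lambda result, *a, **k: True):
    def wrap(f):
        @functools.wraps(f)
        def checked(*args, **kwargs):
            assert requires(*args, **kwargs), "precondition violated"
            result = f(*args, **kwargs)
            assert ensures(result, *args, **kwargs), "postcondition violated"
            return result
        return checked
    return wrap

@contract(requires=lambda xs: len(xs) > 0,
          ensures=lambda result, xs: result in xs
                                     and all(result >= x for x in xs))
def maximum(xs):
    """Return the largest element of a non-empty list."""
    m = xs[0]
    for x in xs[1:]:
        if x > m:
            m = x
    return m

print(maximum([3, 1, 4]))  # contracts hold; prints 4
```

    The key difference is that a static verifier such as VeriFast establishes the postcondition once for every possible input, whereas the sketch above only detects violations on the inputs that actually occur.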

    A Functional Architecture Approach to Neural Systems

    The technology for the design of systems to perform extremely complex combinations of real-time functionality has developed over a long period. This technology is based on the use of a hardware architecture with a physical separation into memory and processing, and a software architecture which divides functionality into a disciplined hierarchy of software components which exchange unambiguous information. This technology experiences difficulty in the design of systems to perform parallel processing, and extreme difficulty in the design of systems which can heuristically change their own functionality. These limitations derive from the approach to information exchange between functional components. A design approach in which functional components can exchange ambiguous information leads to systems with the recommendation architecture, which are less subject to these limitations. Biological brains have been constrained by natural pressures to adopt functional architectures with this different information exchange approach. Neural networks have not made a complete shift to the use of ambiguous information, and do not address adequate management of context for ambiguous information exchange between modules. As a result, such networks cannot be scaled to complex functionality. Simulations of systems with the recommendation architecture demonstrate the capability to organize themselves heuristically to perform complex functionality.

    Symbolizing Number: fMRI investigations of the semantic, auditory, and visual correlates of Hindu-Arabic numerals

    Humans are born with a sensitivity to numerical magnitude. In literate cultures, these numerical intuitions are associated with a symbolic notation (e.g., Hindu-Arabic numerals). While a growing body of neuroscientific research has been conducted to elucidate commonalities between symbolic (e.g., Hindu-Arabic numerals) and non-symbolic (e.g., arrays of objects) representations, relatively little is known about the neural correlates specific to the symbolic processing of numerical magnitude. To address this, I conducted the three fMRI experiments contained within this thesis to characterize the neuroanatomical correlates of the auditory, visual, audiovisual, and semantic processing of numerical symbols. In Experiment 1, the neural correlates of symbolic and non-symbolic number were contrasted to reveal that the left angular and superior temporal gyri responded specifically to numerals, while the right posterior superior parietal lobe only responded to non-symbolic arrays. Moreover, the right intraparietal sulcus (IPS) was activated by both formats. The results reflect divergent encoding pathways that converge upon a common representation across formats. In Experiment 2, the neural response to Hindu-Arabic numerals and Chinese numerical ideographs was recorded in individuals who could read both notations and a control group who could read only the numerals. A between-groups contrast revealed semantic processing of ideographs in the right IPS, while asemantic visual processing was found in the left fusiform gyrus. In contrast to the ideographs, the semantic processing of numerals was associated with left IPS activity. The role of these brain regions in the semantic and asemantic representation of numerals is discussed. In Experiment 3, the neural response of the visual, auditory, and audiovisual processing of numerals and letters was measured. The regions associated with visual and auditory responses to letters and numerals were highly similar. In contrast, the audiovisual response to numerals recruited a region of the right supramarginal gyrus, while the audiovisual letters activated left visual regions. In addition, an effect of congruency in the audiovisual pairs was comparable across numeral-number name pairs and letter-letter name pairs, but absent in letter-speech sound pairs. Taken together, these three experiments provide new insights into how the brain processes numerical symbols at different levels of description.

    Automated Deduction – CADE 28

    This open access book constitutes the proceedings of the 28th International Conference on Automated Deduction, CADE 28, held virtually in July 2021. The 29 full papers and 7 system descriptions presented together with 2 invited papers were carefully reviewed and selected from 76 submissions. CADE is the major forum for the presentation of research in all aspects of automated deduction, including foundations, applications, implementations, and practical experience. The papers are organized in the following topics: logical foundations; theory and principles; implementation and application; ATP and AI; and system descriptions.

    Ontology Enrichment from Free-text Clinical Documents: A Comparison of Alternative Approaches

    While the biomedical informatics community widely acknowledges the utility of domain ontologies, there remain many barriers to their effective use. One important requirement of domain ontologies is that they achieve a high degree of coverage of the domain concepts and concept relationships. However, the development of these ontologies is typically a manual, time-consuming, and often error-prone process. Limited resources result in missing concepts and relationships, as well as difficulty in updating the ontology as domain knowledge changes. Methodologies developed in the fields of Natural Language Processing (NLP), Information Extraction (IE), Information Retrieval (IR), and Machine Learning (ML) provide techniques for automating the enrichment of ontologies from free-text documents. In this dissertation, I extended these methodologies to biomedical ontology development. First, I reviewed existing methodologies and systems developed in the fields of NLP, IR, and IE, and discussed how existing methods can benefit the development of biomedical ontologies. This review, the first of its kind, was published in the Journal of Biomedical Informatics. Second, I compared the effectiveness of three methods from two different approaches, the symbolic (the Hearst method) and the statistical (the Church and Lin methods), using clinical free-text documents. Third, I developed a methodological framework for Ontology Learning (OL) evaluation and comparison. This framework permits evaluation of the two types of OL approaches that include the three OL methods. The significance of this work is as follows: 1) The results from the comparative study showed the potential of these methods for biomedical ontology enrichment. For the two targeted domains (NCIT and RadLex), the Hearst method yielded average new-concept acceptance rates of 21% and 11%, respectively. The Lin method produced a 74% acceptance rate for NCIT; the Church method, 53%. As a result of this study (published in the Journal of Methods of Information in Medicine), many suggested candidates have been incorporated into the NCIT; 2) The evaluation framework is flexible and general enough that it can analyze the performance of ontology enrichment methods for many domains, thus expediting the process of automation and minimizing the likelihood that key concepts and relationships would be missed as domain knowledge evolves.
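    The Hearst method extracts candidate hyponym (is-a) relationships from free text by matching lexico-syntactic patterns such as "NP such as NP1, NP2, ... and NPn", where each NP pair becomes a candidate (hyponym, hypernym) relation for the ontology. A minimal regex-based sketch of the "such as" pattern is shown below; the single-word noun-phrase matcher and the example sentence are simplifying assumptions for illustration, not the dissertation's implementation, which would rely on part-of-speech tagging and chunking:

```python
# Toy Hearst-pattern extractor: finds "X such as A, B and C" and emits
# (hyponym, hypernym) candidate pairs. A crude single-word regex stands
# in for a real noun-phrase chunker.
import re

NP = r"[A-Za-z][A-Za-z-]*"  # one word of letters/hyphens; a simplification

SUCH_AS = re.compile(
    rf"({NP})\s*,?\s+such as\s+"
    rf"({NP}(?:\s*,\s*{NP})*(?:\s*,?\s+(?:and|or)\s+{NP})?)"
)

def hearst_such_as(text):
    """Yield (hyponym, hypernym) candidates from 'X such as A, B and C'."""
    pairs = []
    for m in SUCH_AS.finditer(text):
        hypernym = m.group(1)
        for hyponym in re.split(r"\s*,\s*|\s+(?:and|or)\s+", m.group(2)):
            if hyponym:
                pairs.append((hyponym, hypernym))
    return pairs

text = "Imaging modalities such as CT, MRI and ultrasound are covered."
print(hearst_such_as(text))
# → [('CT', 'modalities'), ('MRI', 'modalities'), ('ultrasound', 'modalities')]
```

    Each extracted pair is only a candidate: as the acceptance rates above indicate, a domain expert still reviews the suggestions before they are incorporated into an ontology such as the NCIT or RadLex.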