
    Efficient Groundness Analysis in Prolog

    Boolean functions can be used to express the groundness of, and trace grounding dependencies between, program variables in (constraint) logic programs. In this paper, a variety of issues pertaining to the efficient Prolog implementation of groundness analysis are investigated, focusing on the domain of definite Boolean functions, Def. The systematic design of the representation of an abstract domain is discussed in relation to its impact on the algorithmic complexity of the domain operations; the most frequently called operations should be the most lightweight. This methodology is applied to Def, resulting in a new representation, together with new algorithms for its domain operations utilising previously unexploited properties of Def -- for instance, quadratic-time entailment checking. The iteration strategy driving the analysis is also discussed and a simple, but very effective, optimisation of induced magic is described. The analysis can be implemented straightforwardly in Prolog and the use of a non-ground representation results in an efficient, scalable tool which does not require widening to be invoked, even on the largest benchmarks. An extensive experimental evaluation is given. (31 pages; to appear in Theory and Practice of Logic Programming.)
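
To give a concrete feel for the Def domain, here is a toy Python sketch (Python rather than the paper's Prolog, and a naive fixpoint rather than the paper's quadratic-time algorithm): a Def formula is modelled as a set of definite clauses, and entailment of a grounding dependency is decided by closing the body under the clauses.

```python
def closure(clauses, seed):
    """Variables ground in every model of the clauses once the variables
    in `seed` are ground.  Each clause (head, body) reads: `head` is
    ground whenever every variable in `body` is ground."""
    ground = set(seed)
    changed = True
    while changed:
        changed = False
        for head, body in clauses:
            if head not in ground and body <= ground:
                ground.add(head)
                changed = True
    return ground

def entails(clauses, head, body):
    """Does the formula entail the clause `head <- body`?"""
    return head in closure(clauses, body)

# The Def formula (x <- y /\ z) /\ (y <- z):
f = [("x", frozenset({"y", "z"})), ("y", frozenset({"z"}))]
assert entails(f, "x", {"z"})      # grounding z grounds y, hence x
assert not entails(f, "x", {"y"})  # y alone grounds nothing further
```

The naive loop is cubic in the worst case; the point of the paper's representation work is precisely to do better than sketches like this one.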

    Coherence in simultaneous interpreting : an idealized cognitive model perspective

    This study explores two questions: 1) Does the interpreter’s relevant bodily experience help her to achieve coherence in the source text (ST) and the target text (TT)? 2) How does the mental effort the interpreter expends in achieving coherence reflect the textual structure of the ST? The findings contribute to the general understanding of how coherence is achieved in simultaneous interpreting (SI). The theoretical framework is based on the concept of the Idealized Cognitive Model (ICM), which emphasizes the role of bodily experience in organizing and understanding knowledge. An experiment grounded in bodily experience was conducted with two contrasting groups, experimental and control, involving thirty subjects from a China-based university who had Chinese as their first language and English as their second language. The data collected were recordings of English-to-Chinese simultaneous interpretations. Coherence in SI was analyzed with both quantitative and qualitative approaches, using coherence clues. The analysis shows that the interpreter’s bodily experience helped her to achieve coherence and to distribute her mental effort across both the ST and the TT. As the term suggests, the cognitive model is idealized: the ICM does not fit the real specifics of a textual structure perfectly or all the time. The ICM remains an open-ended model for analyzing the understanding of abstract concepts, especially in SI discourse, and needs more research. This study can contribute to SI research and training, suggesting that specialization is a trend in interpreting education.

    Knowledge-Based Aircraft Automation: Managers Guide on the use of Artificial Intelligence for Aircraft Automation and Verification and Validation Approach for a Neural-Based Flight Controller

    The ultimate goal of this report was to integrate the powerful tools of artificial intelligence into the traditional process of software development. To maintain the US aerospace competitive advantage, traditional aerospace and software engineers need to more easily incorporate the technology of artificial intelligence into the advanced aerospace systems being designed today. The future goal was to transition artificial intelligence from an emerging technology to a standard technology that is considered early in the life cycle process to develop state-of-the-art aircraft automation systems. This report addressed the future goal in two ways. First, it provided a matrix that identified typical aircraft automation applications conducive to various artificial intelligence methods. The purpose of this matrix was to provide top-level guidance to managers contemplating the possible use of artificial intelligence in the development of aircraft automation. Second, the report provided a methodology to formally evaluate neural networks as part of the traditional process of software development. The matrix was developed by organizing the discipline of artificial intelligence into the following six methods: logical, object representation-based, distributed, uncertainty management, temporal and neurocomputing. Next, a study of existing aircraft automation applications that have been conducive to artificial intelligence implementation resulted in the following five categories: pilot-vehicle interface, system status and diagnosis, situation assessment, automatic flight planning, and aircraft flight control. The resulting matrix provided management guidance to understand artificial intelligence as it applied to aircraft automation. 
The approach taken to develop a methodology for formally evaluating neural networks as part of the software engineering life cycle was to start with existing software quality assurance standards and extend them to cover neural network development. The changes added evaluation tools that can be applied to neural networks at each phase of the software engineering life cycle. The result was a formal evaluation approach that increases the product quality of systems that use neural networks in their implementation.
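
The shape of the guidance matrix described above can be sketched as a small data structure. The axes come from the abstract (six AI methods, five application categories); the single filled-in cell below is a hypothetical placeholder, not an entry from the report:

```python
# Axes of the report's guidance matrix (taken from the abstract).
AI_METHODS = ["logical", "object representation-based", "distributed",
              "uncertainty management", "temporal", "neurocomputing"]
APPLICATIONS = ["pilot-vehicle interface", "system status and diagnosis",
                "situation assessment", "automatic flight planning",
                "aircraft flight control"]

def empty_matrix():
    """A method x application table; each cell holds top-level guidance."""
    return {m: {a: None for a in APPLICATIONS} for m in AI_METHODS}

matrix = empty_matrix()
# Hypothetical illustrative entry, NOT an actual cell from the report:
matrix["neurocomputing"]["aircraft flight control"] = "candidate"
assert len(matrix) == 6 and all(len(row) == 5 for row in matrix.values())
```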

    Imaging deductive reasoning and the new paradigm

    There has been a great expansion of research into human reasoning at all of Marr’s explanatory levels. This work tends to progress within a single level, largely ignoring the others, which can lead to slippage between levels (Chater et al., 2003). It is argued that recent brain imaging research on deductive reasoning (the implementational level) has largely ignored the new paradigm in reasoning (the computational level) (Over, 2009). Consequently, recent imaging results are reviewed with a focus on how they relate to the new paradigm. The imaging results are drawn primarily from a recent meta-analysis by Prado et al. (2011), with further imaging results reviewed where relevant. Three main observations are made. First, the main function of the core brain region identified is most likely elaborative, defeasible reasoning, not deductive reasoning. Second, the subtraction methodology and the meta-analytic approach may remove all traces of the content-specific System 1 processes thought to underpin much human reasoning. Third, interpreting the function of the brain regions activated by a task depends on theories of the function that the task engages; when there are multiple interpretations of that function, what an active brain region is doing is not clear-cut. It is concluded that brain activation needs to be tied more tightly to function, which could be achieved using formalized computational-level models and a parametric variation approach.

    Recent Advances in General Game Playing

    The goal of General Game Playing (GGP) is to develop computer programs that can perform well across many game types. Human players naturally transfer knowledge from games they already know to other, similar games; GGP research attempts to design systems that work comparably well across different game types, including previously unseen games. In this review, we survey recent advances (2011 to 2014) in GGP for both traditional games and video games. Notably, GGP research has been expanding into modern video games, and a video GGP competition was recently launched. Monte-Carlo Tree Search and its enhancements have been the most influential techniques in both research domains, and international competitions have become important events that promote and advance GGP research. In this survey, we review recent progress in these most challenging research areas of Artificial Intelligence (AI) related to universal game playing.
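
Monte-Carlo Tree Search, cited above as the most influential GGP technique, is driven at its core by the UCT/UCB1 selection rule. Below is a minimal Python sketch of just that selection step (the `wins`/`visits` field names are illustrative, not tied to any particular GGP system):

```python
import math

def uct_select(children, total_visits, c=math.sqrt(2)):
    """Pick the child maximizing the UCB1 score used by Monte-Carlo
    Tree Search: mean reward (exploitation) plus an exploration bonus
    that shrinks as a child is visited more often."""
    def score(ch):
        if ch["visits"] == 0:          # always try unvisited moves first
            return float("inf")
        exploit = ch["wins"] / ch["visits"]
        explore = c * math.sqrt(math.log(total_visits) / ch["visits"])
        return exploit + explore
    return max(children, key=score)

children = [{"move": "a", "wins": 6, "visits": 10},
            {"move": "b", "wins": 2, "visits": 10},
            {"move": "c", "wins": 0, "visits": 0}]
assert uct_select(children, total_visits=20)["move"] == "c"
```

With all children visited, the rule trades off the observed win rate against how under-sampled a move is, which is what lets MCTS work without game-specific evaluation functions.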

    Can we Build Theories of Understanding on the Basis of Mirror Neurons?

    The discovery of mirror neurons and the characterization of their response properties is certainly an important achievement in neurophysiology and cognitive neuroscience. Reference to the role of mirror neurons in ‘reading’ the intentions of other creatures, and in learning, serves an explanatory function in accounting for many cognitive phenomena, ranging from imitation, through understanding, to complex social interactions. This paper reviews selected approaches to the role of mirror neurons in mental activities such as understanding, and concludes with some possible implications of mirror neuron research for philosophical theories of understanding.

    Inductive logic programming at 30: a new introduction

    Inductive logic programming (ILP) is a form of machine learning. The goal of ILP is to induce a hypothesis (a set of logical rules) that generalises training examples. As ILP turns 30, we provide a new introduction to the field. We introduce the necessary logical notation and the main learning settings; describe the building blocks of an ILP system; compare several systems on several dimensions; describe four systems (Aleph, TILDE, ASPAL, and Metagol); highlight key application areas; and, finally, summarise current limitations and directions for future research. (Paper under review.)
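
The ILP setting described above (induce rules that generalise examples) can be illustrated with a toy Python check. This is not how Aleph, TILDE, ASPAL, or Metagol work internally; real systems search a space of candidate clauses, whereas here one hypothetical rule for `grandparent/2` is fixed and merely tested for coverage:

```python
# Background knowledge: parent/2 facts.
parent = {("ann", "bob"), ("bob", "carl"), ("carl", "dee")}
# Positive and negative examples of the target predicate grandparent/2.
pos = {("ann", "carl"), ("bob", "dee")}
neg = {("ann", "bob"), ("ann", "dee")}

def rule(x, z):
    """Candidate hypothesis: grandparent(X,Z) :- parent(X,Y), parent(Y,Z)."""
    return any((x, y) in parent and (y, z) in parent
               for y in {b for (_, b) in parent})

# An acceptable hypothesis covers every positive example and no negative one.
assert all(rule(*e) for e in pos)
assert not any(rule(*e) for e in neg)
```

An ILP learner's job is to find such a clause automatically from the examples and background, rather than having it written down by hand as above.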