
    Structure and Problem Hardness: Goal Asymmetry and DPLL Proofs in SAT-Based Planning

    In Verification and in (optimal) AI Planning, a successful method is to formulate the application as Boolean satisfiability (SAT), and solve it with state-of-the-art DPLL-based procedures. There is a lack of understanding of why this works so well. Focussing on the Planning context, we identify a form of problem structure concerned with the symmetrical or asymmetrical nature of the cost of achieving the individual planning goals. We quantify this sort of structure with a simple numeric parameter called AsymRatio, ranging between 0 and 1. We run experiments in 10 benchmark domains from the International Planning Competitions since 2000; we show that AsymRatio is a good indicator of SAT solver performance in 8 of these domains. We then examine carefully crafted synthetic planning domains that allow control of the amount of structure, and that are clean enough for a rigorous analysis of the combinatorial search space. The domains are parameterized by size, and by the amount of structure. The CNFs we examine are unsatisfiable, encoding one planning step less than the length of the optimal plan. We prove upper and lower bounds on the size of the best possible DPLL refutations, under different settings of the amount of structure, as a function of size. We also identify the best possible sets of branching variables (backdoors). With minimum AsymRatio, we prove exponential lower bounds, and identify minimal backdoors of size linear in the number of variables. With maximum AsymRatio, we identify logarithmic DPLL refutations (and backdoors), showing a doubly exponential gap between the two structural extreme cases. The reasons for this behavior -- the proof arguments -- illuminate the prototypical patterns of structure causing the empirical behavior observed in the competition benchmarks.
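    The DPLL procedure underlying these solvers is, at its core, unit propagation plus case splitting on a branching variable; the refutation sizes bounded in the paper count the nodes of exactly this kind of search tree. As a minimal illustrative sketch (plain Python on a toy CNF, not the competition solvers or encodings used in the paper):

```python
# Minimal DPLL satisfiability check (illustrative sketch only).
# A CNF formula is a list of clauses; each clause is a list of
# nonzero ints, where a negative int is a negated variable.

def unit_propagate(clauses, assignment):
    """Repeatedly assign literals forced by unit clauses."""
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(lit in assignment for lit in clause):
                continue  # clause already satisfied
            free = [l for l in clause if -l not in assignment]
            if not free:
                return None  # every literal falsified: conflict
            if len(free) == 1:
                assignment.add(free[0])  # forced (unit) literal
                changed = True
    return assignment

def dpll(clauses, assignment=frozenset()):
    assignment = unit_propagate(clauses, set(assignment))
    if assignment is None:
        return None  # this branch is refuted
    assigned_vars = {abs(l) for l in assignment}
    for clause in clauses:
        for lit in clause:
            if abs(lit) not in assigned_vars:
                # branch on an unassigned variable, both polarities
                for choice in (abs(lit), -abs(lit)):
                    result = dpll(clauses, assignment | {choice})
                    if result is not None:
                        return result
                return None
    return assignment  # no conflict, nothing unassigned: SAT

# (p or q), (not p or q), (not q) is unsatisfiable, so the search
# tree explored here is a DPLL refutation and the result is None.
print(dpll([[1, 2], [-1, 2], [-2]]))
```

    On unsatisfiable CNFs like the paper's one-step-short planning encodings, the cost of this search is governed by how good the branching variables are, which is what makes small backdoor sets so consequential.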

    Recent advances in malaria genomics and epigenomics

    Malaria continues to impose a significant disease burden on low- and middle-income countries in the tropics. However, revolutionary progress over the last 3 years in nucleic acid sequencing, reverse genetics, and post-genome analyses has generated step changes in our understanding of malaria parasite (Plasmodium spp.) biology and its interactions with its host and vector. Driven by the availability of vast amounts of genome sequence data from Plasmodium species strains, relevant human populations of different ethnicities, and mosquito vectors, researchers can consider any biological component of the malarial process in isolation or in the interactive setting that is infection. In particular, considerable progress has been made in the area of population genomics, with Plasmodium falciparum serving as a highly relevant model. Such studies have demonstrated that genome evolution under strong selective pressure can be detected. These data, combined with reverse genetics, have enabled the identification of the region of the P. falciparum genome that is under selective pressure and the confirmation of the functionality of the mutations in the kelch13 gene that accompany resistance to the major frontline antimalarial, artemisinin. Furthermore, the central role of epigenetic regulation in gene expression, antigenic variation, and developmental fate in P. falciparum is becoming ever clearer. This review summarizes recent exciting discoveries that genome technologies have enabled in malaria research and highlights some of their applications to healthcare. The knowledge gained will help to develop surveillance approaches for the emergence or spread of drug resistance and to identify new targets for the development of antimalarial drugs and perhaps vaccines.
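    As a rough illustration of the kind of signal population-genomic selection scans look for (a textbook per-site statistic on made-up frequencies; the studies reviewed use far richer data and methods), loci under strong local selection, such as resistance mutations spreading in one population, show elevated allele-frequency differentiation between populations:

```python
# Toy selection scan: per-site Fst (Hudson's estimator) between two
# parasite populations. Hypothetical frequencies; real kelch13 scans
# use genome-wide data and multiple complementary statistics.

def hudson_fst(p1, p2, n1, n2):
    """Fst at one biallelic site from allele frequencies p1, p2
    and (haploid) sample sizes n1, n2."""
    num = ((p1 - p2) ** 2
           - p1 * (1 - p1) / (n1 - 1)
           - p2 * (1 - p2) / (n2 - 1))
    den = p1 * (1 - p2) + p2 * (1 - p1)
    return num / den if den > 0 else 0.0

# Four hypothetical sites; site 1 mimics a near-complete sweep in
# population 2, as expected under strong drug selection.
sites = [(0.10, 0.12), (0.05, 0.95), (0.50, 0.55), (0.30, 0.28)]
for i, (p1, p2) in enumerate(sites):
    print(f"site {i}: Fst = {hudson_fst(p1, p2, 100, 100):.3f}")
```

    Outlier sites in such a scan are candidates for follow-up with reverse genetics, which is how causal resistance mutations can be confirmed.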

    Deep Learning Features at Scale for Visual Place Recognition

    The success of deep learning techniques in the computer vision domain has triggered a range of initial investigations into their utility for visual place recognition, all using generic features from networks that were trained for other types of recognition tasks. In this paper, we train, at large scale, two CNN architectures for the specific place recognition task and employ a multi-scale feature encoding method to generate condition- and viewpoint-invariant features. To enable this training to occur, we have developed a massive Specific PlacEs Dataset (SPED) with hundreds of examples of place appearance change at thousands of different places, as opposed to the semantic place type datasets currently available. This new dataset enables us to set up a training regime that interprets place recognition as a classification problem. We comprehensively evaluate our trained networks on several challenging benchmark place recognition datasets and demonstrate that they achieve an average 10% increase in performance over other place recognition algorithms and pre-trained CNNs. By analyzing the network responses and their differences from pre-trained networks, we provide insights into what a network learns when training for place recognition, and what these results signify for future research in this area. (Comment: 8 pages, 10 figures. Accepted by International Conference on Robotics and Automation (ICRA) 2017. This is the submitted version; the final published version may be slightly different.)
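    A minimal sketch of the classification-style training regime described above (generic PyTorch with an off-the-shelf backbone; the paper's two architectures, the SPED labels, and the multi-scale feature encoding are not reproduced here, and NUM_PLACES is a hypothetical count):

```python
# Sketch: train a CNN to classify images by place identity, then
# reuse the penultimate layer as a place descriptor at query time.
import torch
import torch.nn as nn
from torchvision import models

NUM_PLACES = 2500  # hypothetical number of distinct training places

backbone = models.resnet18(weights=None)  # stand-in backbone
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_PLACES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(backbone.parameters(), lr=0.01, momentum=0.9)

def train_step(images, place_ids):
    """One step of place-recognition-as-classification training."""
    optimizer.zero_grad()
    loss = criterion(backbone(images), place_ids)
    loss.backward()
    optimizer.step()
    return loss.item()

def describe(images):
    """Drop the classifier head after training and use normalized
    penultimate features as descriptors, matched by cosine similarity."""
    backbone.fc = nn.Identity()
    with torch.no_grad():
        return nn.functional.normalize(backbone(images), dim=1)
```

    The design point worth noting is that classification over place identities is only a training proxy; at test time the logits are discarded and the learned features are what transfer across conditions and viewpoints.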

    Photonic Delay Systems as Machine Learning Implementations

    Nonlinear photonic delay systems present interesting implementation platforms for machine learning models. They can be extremely fast, offer high degrees of parallelism, and potentially consume far less power than digital processors. So far they have been successfully employed for signal processing using the Reservoir Computing paradigm. In this paper we show that their range of applicability can be greatly extended if we use gradient descent with backpropagation through time on a model of the system to optimize the input encoding of such systems. We perform physical experiments that demonstrate that the obtained input encodings work well in reality, and we show that optimized systems perform significantly better than the common Reservoir Computing approach. The results presented here demonstrate that common gradient descent techniques from machine learning may well be applicable to physical neuro-inspired analog computers.
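    To make the optimized-input-encoding idea concrete, here is a heavily simplified software stand-in (PyTorch; the dynamics, the mask, and the task are hypothetical, not the authors' physical system or model): a simulated single-node delay reservoir whose input mask is a trainable parameter, optimized jointly with a linear readout by backpropagation through time.

```python
# Sketch: delay-system reservoir with a trainable input encoding.
# In standard Reservoir Computing the mask below would be fixed and
# random, with only the readout trained; here gradients flow through
# a model of the dynamics into the mask, mirroring the paper's idea.
import torch

N_VIRTUAL = 50  # virtual nodes along the delay line (hypothetical)
mask = torch.randn(N_VIRTUAL, requires_grad=True)  # input encoding
readout = torch.nn.Linear(N_VIRTUAL, 1)

def run_reservoir(u):
    """Drive the simulated delay loop with a scalar sequence u."""
    state = torch.zeros(N_VIRTUAL)
    states = []
    for u_t in u:
        # each virtual node: nonlinearity of feedback + masked input
        state = torch.tanh(0.8 * state + mask * u_t)
        states.append(state)
    return torch.stack(states)  # shape (T, N_VIRTUAL)

opt = torch.optim.Adam([mask, *readout.parameters()], lr=1e-2)
u = torch.randn(200)
target = torch.sin(0.1 * torch.cumsum(u, 0)).unsqueeze(1)  # toy task

for step in range(100):
    opt.zero_grad()
    loss = torch.mean((readout(run_reservoir(u)) - target) ** 2)
    loss.backward()  # backpropagation through time over the loop
    opt.step()
```

    In the physical setting the same gradients are computed on a software model of the device, and only the resulting encoding is transferred to the hardware, which is what the paper's experiments validate.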

    What's on your mind? Recent advances in memory detection using the concealed information test

    Lie detectors can be applied in a wide variety of settings. But this advantage comes with a considerable cost: false positives. The applicability of the Concealed Information Test (CIT) is more limited, yet when it can be applied, the risk of false accusations can be set a priori at a very low level. The CIT assesses the recognition of critical information that is known only by the examiners and the culprit, for example, the face of an accomplice. Large effects are obtained with the CIT, whether combined with peripheral, brain, or motor responses. We see three important challenges for the CIT. First, the false negative rate of the CIT can be substantial, particularly under realistic circumstances. A possible solution seems to be restricting the CIT to highly salient details. Second, there exist effective faking strategies. Future research will tell whether faking can be detected or even prevented (e.g., using overt measures). Third, recognition of critical crime details does not necessarily result from criminal activity. It is therefore important to properly embed the CIT in the investigative process, while taking care when drawing conclusions from the test outcome (recognition, not guilt).
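    The inference logic of the CIT can be sketched in a few lines (hypothetical response values and cutoff; operational protocols use validated measures and scoring rules): the response to the critical probe item is standardized against the examinee's own responses to matched irrelevant items, and only a probe response that clearly stands out counts as recognition.

```python
# Sketch of CIT scoring: does the probe (true crime detail) evoke a
# larger response than equally plausible irrelevant items? Numbers
# and the cutoff are illustrative, not a validated protocol.
from statistics import mean, stdev

def cit_score(probe_response, irrelevant_responses):
    """Standardize the probe response against irrelevant items."""
    mu = mean(irrelevant_responses)
    return (probe_response - mu) / stdev(irrelevant_responses)

irrelevants = [0.42, 0.39, 0.45, 0.41, 0.38]  # foil items (a.u.)
probe = 0.71                                   # actual crime detail

score = cit_score(probe, irrelevants)
# A strict cutoff keeps the false-positive rate low a priori: an
# unknowing examinee has no reason to respond more to the probe.
verdict = "recognition indicated" if score > 2.0 else "no evidence of recognition"
print(f"score = {score:.2f} -> {verdict}")
```

    Note that a positive outcome indexes recognition of the detail, not guilt, which is exactly the interpretive caution the abstract calls for.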