
    Decision Making for Rapid Information Acquisition in the Reconnaissance of Random Fields

    Research into several aspects of robot-enabled reconnaissance of random fields is reported. The work has two major components: the underlying theory of information acquisition in the exploration of unknown fields and the results of experiments on how humans use sensor-equipped robots to perform a simulated reconnaissance exercise. The theoretical framework reported herein extends work on robotic exploration that has been reported by ourselves and others. Several new figures of merit for evaluating exploration strategies are proposed and compared. Using concepts from differential topology and information theory, we develop the theoretical foundation of search strategies aimed at rapid discovery of topological features (locations of critical points and critical level sets) of a priori unknown differentiable random fields. The theory enables study of efficient reconnaissance strategies in which the tradeoff between speed and accuracy can be understood. The proposed approach to rapid discovery of topological features has led in a natural way to the creation of parsimonious reconnaissance routines that do not rely on any prior knowledge of the environment. The design of topology-guided search protocols uses a mathematical framework that quantifies the relationship between what is discovered and what remains to be discovered. The quantification rests on an information-theory-inspired model whose properties allow us to treat search as a problem in optimal information acquisition. A central theme in this approach is that "conservative" and "aggressive" search strategies can be precisely defined, and search decisions regarding "exploration" vs. "exploitation" choices are informed by the rate at which the information metric is changing. Comment: 34 pages, 20 figures
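
    The idea of steering "exploration" vs. "exploitation" by the rate at which an information metric changes can be illustrated with a small sketch. The Python fragment below is purely illustrative and is not the authors' algorithm; the entropy measure, the threshold, and the function names are assumptions made for this example.

        import numpy as np

        def field_entropy(belief):
            """Shannon entropy of a discretized belief over the unknown field."""
            p = belief[belief > 0]
            return -np.sum(p * np.log(p))

        def choose_action(entropy_history, threshold=0.05):
            """Keep exploring while measurements still reduce uncertainty quickly;
            otherwise exploit the current estimate of the field's topology."""
            if len(entropy_history) < 2:
                return "explore"
            rate = entropy_history[-2] - entropy_history[-1]  # information gained per step
            return "explore" if rate > threshold else "exploit"

        # Example: the entropy trace plateaus, so the planner switches to exploitation.
        print(choose_action([4.0, 3.2, 2.7, 2.65, 2.64]))  # -> "exploit"

    In this toy reading, raising the threshold makes the planner give up on exploration sooner, which is the kind of speed-versus-accuracy tradeoff the paper formalizes.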

    Image Segmentation using PDE, Variational, Morphological and Probabilistic Methods

    The research in this dissertation has focused upon image segmentation and its related areas, using the techniques of partial differential equations, variational methods, mathematical morphological methods and probabilistic methods. An integrated segmentation method using both curve evolution and anisotropic diffusion is presented that utilizes both gradient and region information in images. A bottom-up image segmentation method is proposed to minimize the Mumford-Shah functional. Preferential image segmentation methods are presented that are based on the tree of shapes in mathematical morphology and the Kullback-Leibler distance in information theory. A thorough evaluation of the morphological preferential image segmentation method is provided, and a web interface is described. A probabilistic model based on particle filters is presented for image segmentation. These methods may be incorporated as components of an integrated image processing system. The system utilizes Internet Protocol (IP) cameras for data acquisition and image databases to provide prior information and store image processing results. Image preprocessing, image segmentation and object recognition are integrated in one stage in the system, using various methods developed in several areas. Interactions between data acquisition, integrated image processing and image databases are handled smoothly. A framework of the integrated system is implemented using Perl, C++, MySQL and CGI. The integrated system works for various applications such as video tracking, medical image processing and facial image processing. Experimental results on these applications are provided in the dissertation. Efficient computations such as multi-scale computing and parallel computing on graphics processors are also presented.
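
    As one concrete example of the ingredients listed above, the sketch below implements classic Perona-Malik anisotropic diffusion, the kind of edge-preserving smoothing that the dissertation combines with curve evolution. It is a minimal illustration written for this summary, not code from the thesis; the parameter names and values (kappa, dt, n_iter) are arbitrary choices.

        import numpy as np

        def anisotropic_diffusion(img, n_iter=20, kappa=30.0, dt=0.15):
            """Perona-Malik diffusion: smooth within regions, preserve strong edges."""
            u = img.astype(float).copy()
            g = lambda d: np.exp(-(d / kappa) ** 2)  # edge-stopping conductance g(|grad u|)
            for _ in range(n_iter):
                # finite differences to the four nearest neighbours
                dn = np.roll(u, -1, axis=0) - u
                ds = np.roll(u,  1, axis=0) - u
                de = np.roll(u, -1, axis=1) - u
                dw = np.roll(u,  1, axis=1) - u
                u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
            return u

        noisy = np.random.rand(64, 64) * 255.0
        smoothed = anisotropic_diffusion(noisy)

    The conductance g falls off where the local gradient is large, so diffusion flattens homogeneous regions while leaving strong edges largely intact; that is why it pairs naturally with gradient-driven curve evolution.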

    Are people really conformist-biased? An empirical test and a new mathematical model

    According to an influential theory in cultural evolution, within-group similarity of culture is explained by a human 'conformist-bias', which is a hypothesized evolved predisposition to preferentially follow a member of the majority when acquiring ideas and behaviours. However, this notion has little support from social psychological research. In fact, a major theory in social psychology (Latané and Wolf, 1981) argues for what is in effect a 'nonconformist-bias': by analogy to standard psychophysics, it predicts minority sources of influence to have relatively greater impact than majority sources. Here we present a new mathematical model and an experiment on social influence, both specifically designed to test these competing predictions. The results are in line with nonconformism. Finally, we discuss within-group similarity and suggest that it is not a general phenomenon but must be studied trait by trait.
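
    To make the competing predictions concrete, here is a toy numeric illustration, not the paper's actual model: in Latané and Wolf's psychophysics-style account, social impact is often written as growing sublinearly with the number of sources (roughly I = s * N^t with t < 1), so doubling a majority does not double its influence, whereas a conformist bias would weight the majority disproportionately. All values below are arbitrary.

        def social_impact(n_sources, strength=1.0, t=0.5):
            # Sublinear growth in the number of influence sources (t < 1).
            return strength * n_sources ** t

        majority, minority = 8, 2
        print(social_impact(majority) / social_impact(minority))  # 2.0: impact ratio under sublinear growth
        print(majority / minority)                                # 4.0: proportional (majority-weighted) baseline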

    Measure What Should be Measured: Progress and Challenges in Compressive Sensing

    Is compressive sensing overrated? Or can it live up to our expectations? What will come after compressive sensing and sparsity? And what has Galileo Galilei got to do with it? Compressive sensing has taken the signal processing community by storm. A large corpus of research devoted to the theory and numerics of compressive sensing has been published in the last few years. Moreover, compressive sensing has inspired and initiated intriguing new research directions, such as matrix completion. Potential new applications emerge at a dazzling rate. Yet some important theoretical questions remain open, and seemingly obvious applications keep escaping the grip of compressive sensing. In this paper I discuss some of the recent progress in compressive sensing and point out key challenges and opportunities as the area of compressive sensing and sparse representations keeps evolving. I also attempt to assess the long-term impact of compressive sensing
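
    For readers new to the area, the core computational problem the essay revolves around is recovering a sparse vector from far fewer linear measurements than unknowns. The sketch below uses plain ISTA (iterative soft-thresholding) for the l1-regularized least-squares formulation; it is a generic illustration chosen for this summary, not anything specific to the paper, and the step size, regularization weight and problem sizes are arbitrary.

        import numpy as np

        def ista(A, y, lam=0.1, n_iter=500):
            """Approximately solve min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
            L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the smooth part's gradient
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                grad = A.T @ (A @ x - y)
                z = x - grad / L
                x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
            return x

        rng = np.random.default_rng(0)
        A = rng.standard_normal((50, 200)) / np.sqrt(50)          # underdetermined sensing matrix
        x_true = np.zeros(200)
        x_true[[3, 77, 150]] = [1.0, -2.0, 1.5]                   # sparse ground truth
        x_hat = ista(A, A @ x_true)

    Recovering a 3-sparse, 200-dimensional signal from 50 random measurements is the regime where compressive sensing theory gives high-probability recovery guarantees for random sensing matrices of this kind.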

    Assessing schematic knowledge of introductory probability theory

    The ability to identify schematic knowledge is an important goal for both assessment and instruction. In the current paper, schematic knowledge of statistical probability theory is explored from the declarative-procedural framework using multiple methods of assessment. A sample of 90 undergraduate introductory statistics students was required to classify 10 pairs of probability problems as similar or different; to identify whether 15 problems contained sufficient, irrelevant, or missing information (text-edit); and to solve 10 additional problems. The complexity of the schema on which the problems were based was also manipulated. Detailed analyses compared text-editing and solution accuracy as a function of text-editing category and schema complexity. Results showed that text-editing tends to be easier than solution and is differentially sensitive to schema complexity. While text-editing and classification performance were correlated with solution accuracy, only text-editing problems with missing information uniquely predicted success. In light of previous research, these results suggest that text-editing is suitable for supplementing the assessment of schematic knowledge in development.