
    Benchmarking Particle Filter Algorithms for Efficient Velodyne-Based Vehicle Localization

    Keeping a vehicle well-localized within a prebuilt map is at the core of any autonomous vehicle navigation system. In this work, we show that both standard SIR sampling and rejection-based optimal sampling are suitable for efficient (10 to 20 ms) real-time pose tracking without feature detection, using raw point clouds from a 3D LiDAR. Motivated by the large amount of information captured by these sensors, we perform a systematic statistical analysis of how many points are actually required to reach an optimal ratio between efficiency and positioning accuracy. Furthermore, for initialization under adverse conditions, e.g., a poor GPS signal in urban canyons, we identify the optimal particle filter settings required to ensure convergence. Our findings include that a decimation factor between 100 and 200 on incoming point clouds provides large savings in computational cost with a negligible loss in localization accuracy for a VLP-16 scanner. Furthermore, an initial density of ∼2 particles/m² is required to achieve 100% convergence success for large-scale (∼100,000 m²) outdoor global localization without any additional hint from GPS or magnetic field sensors. All implementations have been released as open-source software.
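
    As a rough illustration of the decimation strategy this abstract describes (not the authors' released code), the sketch below subsamples an incoming scan by a fixed factor before weighting the particles against a prebuilt map. The 2D pose state, the occupancy-grid likelihood, and all parameter values are assumptions made for illustration only.

```python
import numpy as np

def decimate(points, factor):
    """Keep every factor-th point; the paper reports factors of 100-200 work for a VLP-16."""
    return points[::factor]

def weight_particles(particles, scan_xy, occ_grid, resolution, factor=150):
    """Score each (x, y, yaw) pose hypothesis against an occupancy grid.

    scan_xy: (N, 2) planar projection of the raw scan (simplifying assumption);
    occ_grid: 2D array of map occupancy indexed by (x_cell, y_cell).
    """
    pts = decimate(scan_xy, factor)
    weights = np.empty(len(particles))
    for i, (x, y, yaw) in enumerate(particles):
        c, s = np.cos(yaw), np.sin(yaw)
        world = pts @ np.array([[c, s], [-s, c]]) + (x, y)  # scan -> map frame
        cells = np.floor(world / resolution).astype(int)
        cells = np.clip(cells, 0, np.array(occ_grid.shape) - 1)
        # Sum of map occupancy under the transformed points as an unnormalized likelihood.
        weights[i] = occ_grid[cells[:, 0], cells[:, 1]].sum() + 1e-9
    return weights / weights.sum()
```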

    Real-time analysis and visualization for single-molecule based super-resolution microscopy

    Accurate multidimensional localization of isolated fluorescent emitters is a time-consuming process in single-molecule based super-resolution microscopy. We demonstrate a functional method for real-time reconstruction with automatic feedback control, without compromising localization accuracy. Compatible with the high frame rates of EM-CCD cameras, it relies on a wavelet segmentation algorithm together with a mixed CPU/GPU implementation. A combination with Gaussian fitting allows direct access to 3D localization. Automatic feedback control ensures optimal molecule density throughout the acquisition process. With this method, we significantly improve the efficiency and feasibility of localization-based super-resolution microscopy.
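
    A minimal sketch of the detect-then-fit pipeline the abstract outlines, using a difference-of-Gaussians filter as a stand-in for the paper's wavelet segmentation. The window size, threshold factor, and isotropic Gaussian PSF model are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter
from scipy.optimize import curve_fit

def detect_emitters(frame, k=1.5):
    """Find candidate emitters as local maxima of a band-pass-filtered frame."""
    detail = gaussian_filter(frame, 1.0) - gaussian_filter(frame, 2.0)
    peaks = (detail == maximum_filter(detail, size=5)) & (detail > k * np.std(detail))
    return np.argwhere(peaks)  # (row, col) candidate positions

def gauss2d(coords, x0, y0, sigma, amp, offset):
    x, y = coords
    return offset + amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))

def refine(frame, peak, r=4):
    """Sub-pixel localization by a least-squares 2D Gaussian fit around one peak.

    Assumes the peak lies at least r pixels from the frame border.
    """
    py, px = peak
    win = frame[py - r:py + r + 1, px - r:px + r + 1].astype(float)
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    p0 = (0.0, 0.0, 1.3, win.max() - win.min(), win.min())
    (x0, y0, *_), _ = curve_fit(gauss2d, (x.ravel(), y.ravel()), win.ravel(), p0=p0)
    return px + x0, py + y0  # sub-pixel (x, y) estimate
```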

    Simulation of Rapidly-Exploring Random Trees in Membrane Computing with P-Lingua and Automatic Programming

    Methods based on Rapidly-exploring Random Trees (RRTs) have been widely used in robotics to solve motion planning problems. In the membrane computing framework, models based on Enzymatic Numerical P systems (ENPS) have been applied to robot controllers, but planning algorithms based on membrane computing for robotics are still lacking. With this motivation, we provide a variant of ENPS called Random Enzymatic Numerical P systems with Proteins and Shared Memory (RENPSM), addressed to implementing RRT algorithms, and we illustrate it by simulating the bidirectional RRT algorithm. This paper is an extension of [21]. The software presented in [21] was an ad-hoc simulator, i.e., a tool for simulating computations of one and only one model that has been hard-coded. The main contribution of this paper with respect to [21] is the introduction of a novel solution for membrane computing simulators based on automatic programming. First, we have extended the P-Lingua syntax (a language to define membrane computing models) to write RENPSM models. Second, we have implemented a new parser based on Flex and Bison that reads RENPSM models and produces C source code for multicore processors with OpenMP. Finally, additional experiments are presented.
    Funding: Ministerio de Economía, Industria y Competitividad TIN2017-89842-
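
    For readers unfamiliar with the planning algorithm being simulated, the sketch below is a generic textbook version of bidirectional RRT in Python, not the authors' P-Lingua/C implementation. The sampling domain, step size, and iteration budget are arbitrary illustrative choices; configurations are assumed to be hashable 2D tuples.

```python
import math
import random

def bidirectional_rrt(start, goal, collision_free, step=0.5, iters=5000):
    """Grow one tree from start and one from goal; return a path when they connect."""
    trees = [{start: None}, {goal: None}]  # each tree maps node -> parent
    for _ in range(iters):
        q = (random.uniform(0, 10), random.uniform(0, 10))  # assumed workspace bounds
        for i, tree in enumerate(trees):
            near = min(tree, key=lambda n: math.dist(n, q))
            d = math.dist(near, q)
            new = q if d <= step else tuple(a + step * (b - a) / d
                                            for a, b in zip(near, q))
            if collision_free(near, new):
                tree[new] = near
                other = trees[1 - i]
                link = min(other, key=lambda n: math.dist(n, new))
                if math.dist(link, new) <= step and collision_free(link, new):
                    return stitch(trees[0], trees[1], new, link, i)
    return None  # no path found within the iteration budget

def stitch(t0, t1, a, b, grown):
    """Join the start-tree and goal-tree half-paths at the connection point."""
    def trace(tree, node):
        out = []
        while node is not None:
            out.append(node)
            node = tree.get(node)
        return out
    first, second = (a, b) if grown == 0 else (b, a)
    return trace(t0, first)[::-1] + trace(t1, second)
```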

    Fuzzy-based Propagation of Prior Knowledge to Improve Large-Scale Image Analysis Pipelines

    Many automatically analyzable scientific questions are well-posed and offer a variety of a priori information about the expected outcome. Although often neglected, this prior knowledge can be systematically exploited to make automated analysis operations sensitive to a desired phenomenon or to evaluate extracted content with respect to this prior knowledge. For instance, the performance of processing operators can be greatly enhanced by a more focused detection strategy and by direct information about the ambiguity inherent in the extracted data. We present a new concept for the estimation and propagation of uncertainty involved in image analysis operators. This allows the use of simple processing operators that are suitable for analyzing large-scale 3D+t microscopy images without compromising result quality. On the foundation of fuzzy set theory, we transform available prior knowledge into a mathematical representation and extensively use it to enhance the result quality of various processing operators. All presented concepts are illustrated on a typical bioimage analysis pipeline comprising seed point detection, segmentation, multiview fusion, and tracking. Furthermore, the functionality of the proposed approach is validated on a comprehensive simulated 3D+t benchmark data set that mimics embryonic development and on large-scale light-sheet microscopy data of a zebrafish embryo. The general concept introduced in this contribution represents a new approach to efficiently exploiting prior knowledge to improve the result quality of image analysis pipelines. In particular, the automated analysis of terabyte-scale microscopy data will benefit from sophisticated and efficient algorithms that enable a quantitative and fast readout. The generality of the concept, however, makes it applicable to practically any other field with processing strategies arranged as linear pipelines.
    Comment: 39 pages, 12 figures
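
    A minimal sketch of how fuzzy set theory can encode such prior knowledge: trapezoidal membership functions score an extracted object against expected ranges, and a fuzzy AND (minimum) combines them with the raw detector output. The membership bounds and the choice of features here are invented for illustration and are not taken from the paper.

```python
import numpy as np

def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy membership: 0 outside [a, d], 1 on [b, c], linear ramps between."""
    return float(np.clip(min((x - a) / max(b - a, 1e-9),
                             (d - x) / max(d - c, 1e-9)), 0.0, 1.0))

def fuse_with_prior(detection_score, radius_voxels, frame_index):
    """Down-weight detections that conflict with a priori expectations.

    Both priors below (expected object radius and expected temporal window)
    are hypothetical examples of knowledge one might have about an experiment.
    """
    mu_size = trapezoid(radius_voxels, 3.0, 5.0, 12.0, 18.0)   # plausible radii
    mu_time = trapezoid(frame_index, 0.0, 0.0, 200.0, 260.0)   # plausible frames
    return min(detection_score, mu_size, mu_time)              # fuzzy AND

# Example: a confident detection with an implausibly large radius gets suppressed.
print(fuse_with_prior(0.9, radius_voxels=17.0, frame_index=50))  # -> 0.166...
```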

    Compensating inaccurate annotations to train 3D facial landmark localisation models

    In this paper we investigate the impact of inconsistency in manual annotations when they are used to train automatic models for 3D facial landmark localization. We start by showing that it is possible to objectively measure the consistency of annotations in a database, provided that it contains replicates (i.e., repeated scans from the same person). Applying such a measure to the widely used FRGC database, we find that the manual annotations currently available are suboptimal and can strongly impair the accuracy of automatic models learnt from them. To address this issue, we present a simple algorithm to automatically correct a set of annotations and show that it can help to significantly improve the accuracy of the models in terms of landmark localization errors. This improvement is observed even when errors are measured with respect to the original (uncorrected) annotations. However, we also show that if errors are computed against an alternative set of manual annotations with higher consistency, the accuracy of models built using the corrections from the presented algorithm tends to converge to that achieved by building the models on the alternative, more consistent set.
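
    A sketch of how consistency can be measured from replicates, in the spirit of the abstract: rigidly align each replicate's landmark set onto a reference scan of the same subject and report the per-landmark spread. The Kabsch-based alignment below is a generic choice, not necessarily the paper's method.

```python
import numpy as np

def procrustes_align(A, B):
    """Rigidly align landmark set B (L, 3) onto A (L, 3) via the Kabsch algorithm."""
    ca, cb = A.mean(0), B.mean(0)
    H = (B - cb).T @ (A - ca)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return (B - cb) @ R.T + ca

def annotation_consistency(replicates):
    """Per-landmark standard deviation across replicate scans of one subject.

    replicates: list of (L, 3) manual annotations of the same face; higher values
    flag landmarks that annotators place inconsistently.
    """
    ref = replicates[0]
    aligned = np.stack([procrustes_align(ref, r) for r in replicates])
    return aligned.std(axis=0).mean(axis=1)      # shape (L,): spread per landmark
```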