
    Two Dimensional Positioning and Heading Solution for Flying Vehicles Using a Line-Scanning Laser Radar (LADAR)

    Emerging technology in small autonomous flying vehicles requires the systems to have a precise navigation solution in order to perform tasks. In many critical environments, such as indoors, GPS is unavailable, necessitating the development of supplemental aiding sensors to determine precise position. This research investigates the use of a line-scanning laser radar (LADAR) as a standalone two-dimensional position and heading navigation solution and sets up the device for augmentation into existing navigation systems. A fast histogram correlation method is developed to operate in real time on board the vehicle, providing position and heading updates at a rate of 10 Hz. LADAR navigation methods are adapted to three dimensions, with a simulation built to analyze performance loss due to attitude changes during flight. These simulations are then compared to experimental results collected using a SICK LD-OEM 1000 mounted on a moving cart. The histogram correlation algorithm applied in this work was shown to successfully navigate a realistic environment of the kind a quadrotor would operate in, over short flights of less than 5 min in larger rooms. Application in hallways shows great promise, providing a stable heading along with tracking of movement perpendicular to the hallway.
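    The abstract does not detail the fast histogram correlation method itself; as a rough illustration of the general angle/position histogram matching idea it builds on, the Python sketch below estimates heading and translation between two 2D scans, assuming each scan is an N×2 array of (x, y) points in the sensor frame (the function names, bin counts, and resolutions are illustrative assumptions, not the paper's).

```python
import numpy as np

def angle_histogram(points, bins=180):
    """Histogram of local edge directions between consecutive scan points (modulo pi)."""
    d = np.diff(points, axis=0)
    ang = np.arctan2(d[:, 1], d[:, 0]) % np.pi
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, np.pi))
    return hist.astype(float)

def circular_shift(h_ref, h_cur):
    """Bin shift of h_cur that best aligns it with h_ref (FFT circular cross-correlation)."""
    corr = np.fft.ifft(np.fft.fft(h_ref) * np.conj(np.fft.fft(h_cur))).real
    return int(np.argmax(corr))

def histogram_match(scan_ref, scan_cur, bins=180, xy_res=0.05):
    """Estimate (dx, dy, dtheta) of scan_cur relative to scan_ref."""
    # 1) heading from the angle-histogram correlation (180-degree ambiguity, see note below)
    shift = circular_shift(angle_histogram(scan_ref, bins), angle_histogram(scan_cur, bins))
    dtheta = shift * np.pi / bins
    # 2) de-rotate the current scan into the reference heading
    c, s = np.cos(dtheta), np.sin(dtheta)
    rotated = scan_cur @ np.array([[c, -s], [s, c]]).T
    # 3) translation from 1D occupancy-histogram correlations along x and y
    def axis_shift(a_ref, a_cur):
        lo = min(a_ref.min(), a_cur.min())
        hi = max(a_ref.max(), a_cur.max())
        edges = np.arange(lo, hi + xy_res, xy_res)
        h_r, _ = np.histogram(a_ref, bins=edges)
        h_c, _ = np.histogram(a_cur, bins=edges)
        corr = np.correlate(h_r.astype(float), h_c.astype(float), mode="full")
        return (np.argmax(corr) - (len(h_c) - 1)) * xy_res
    dx = axis_shift(scan_ref[:, 0], rotated[:, 0])
    dy = axis_shift(scan_ref[:, 1], rotated[:, 1])
    return dx, dy, dtheta
```

    Because wall directions repeat every 180 degrees, the angle histogram is built modulo pi, so the recovered heading carries a 180-degree ambiguity that a real system would resolve with odometry or a second correlation pass.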

    Fail-aware LIDAR-based odometry for autonomous vehicles

    Autonomous driving systems are set to become a reality in transport systems, and maximum acceptance is therefore being sought among users. Currently, the most advanced architectures require driver intervention when functional system failures or critical sensor operations take place, raising problems related to driver state, distraction, fatigue, and other factors that prevent safe control. This work therefore presents a redundant, accurate, robust, and scalable LiDAR odometry system with fail-aware features that can allow other systems to perform a safe stop manoeuvre without driver mediation. All odometry systems suffer from drift error, making it difficult to use them for localisation tasks over extended periods. For this reason, the paper presents an accurate LiDAR odometry system with a fail-aware indicator. This indicator estimates a time window in which the system manages the localisation task appropriately. The odometry error is minimised by applying a dynamic 6-DoF model and fusing measurements based on the Iterative Closest Point (ICP) algorithm, environment feature extraction, and Singular Value Decomposition (SVD). The obtained results are promising for two reasons. First, on the KITTI odometry data set, the proposed method ranks twelfth among LiDAR-based methods, with translation and rotation errors of 1.00% and 0.0041 deg/m, respectively. Second, the encouraging results of the fail-aware indicator demonstrate the safety of the proposed LiDAR odometry system. The results show that, in order to achieve an accurate odometry system, complex models and measurement fusion techniques must be used to improve its behaviour. Furthermore, if an odometry system is to be used for redundant localisation, it must integrate a fail-aware indicator so that it can be used safely.
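    The full fail-aware pipeline (dynamic 6-DoF model, feature extraction, measurement fusion) is not given in the abstract; the sketch below only illustrates the core ICP-plus-SVD alignment step that such odometry builds on, assuming two numpy point clouds in the same metric frame (names and iteration limits are assumptions for illustration).

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)                            # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

def icp(src, dst, iters=30, tol=1e-6):
    """Basic point-to-point ICP: returns the 4x4 pose of src expressed in dst's frame."""
    tree = cKDTree(dst)
    T = np.eye(4)
    cur = src.copy()
    prev_err = np.inf
    for _ in range(iters):
        dists, idx = tree.query(cur)                 # nearest-neighbour correspondences
        R, t = best_rigid_transform(cur, dst[idx])
        cur = cur @ R.T + t
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        T = step @ T                                 # accumulate the incremental motion
        err = dists.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return T
```

    Each iteration matches points to their nearest neighbours, solves the closed-form SVD problem for the best rigid motion, and accumulates the pose; the fail-aware indicator described in the paper would sit on top of such an estimator rather than inside it.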

    A Survey on Global LiDAR Localization

    Knowledge about its own pose is key for all mobile robot applications, and pose estimation is therefore one of the core functionalities of mobile robots. In the last two decades, LiDAR scanners have become a standard sensor for robot localization and mapping. This article surveys recent progress and advances in LiDAR-based global localization. We start with the problem formulation and explore the application scope. We then present a methodology review covering various global localization topics, such as maps, descriptor extraction, and consistency checks. The contents are organized under three themes. The first is the combination of global place retrieval and local pose estimation. The second is upgrading single-shot measurements to sequential ones for sequential global localization. The third is extending single-robot global localization to cross-robot localization in multi-robot systems. We end this survey with a discussion of open challenges and promising directions in global LiDAR localization.
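    To make the first theme concrete, here is a deliberately minimal Python sketch of global place retrieval: a crude rotation-invariant range-histogram descriptor and a nearest-neighbour lookup over map scans. The descriptor, bin counts, and distance metric are illustrative assumptions; the surveyed methods use far richer descriptors and consistency checks before local pose estimation (e.g. with ICP).

```python
import numpy as np

def range_histogram_descriptor(scan_xy, bins=64, max_range=50.0):
    """Crude rotation-invariant place descriptor: normalised histogram of point ranges."""
    r = np.linalg.norm(scan_xy, axis=1)
    h, _ = np.histogram(r, bins=bins, range=(0.0, max_range), density=True)
    return h

def retrieve_place(query_scan, map_scans):
    """Index of the map scan whose descriptor is closest to the query's (L2 distance)."""
    q = range_histogram_descriptor(query_scan)
    db = np.stack([range_histogram_descriptor(s) for s in map_scans])
    return int(np.argmin(np.linalg.norm(db - q, axis=1)))
```

    The retrieved candidate only provides a coarse hypothesis; a subsequent local pose estimation step refines it into a metric pose, which is exactly the retrieval-plus-refinement combination the survey's first theme covers.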

    Development of a hyperspectral imaging technique with internal scene scan for analysing the chemistry of food degradation

    Hyperspectral imaging (HSI) can provide valuable information about the spatial distribution of ingredients in an object, and the technique has therefore been widely adopted in numerous applications, ranging from remote sensing and land planning to food quality control and biomedical applications. However, HSI instruments are expensive, which has limited the technique to some high-end applications. In this study, we developed a cost-effective HSI technique with an internal scene-scan mechanism, which enables rapid acquisition of a scene without moving the instrument or the tested object. The apparatus was characterised, revealing an imaging resolution of 0.4 mm over a field of view (FoV) of 10 cm and a spectral resolution of 1.3 nm in the 400–800 nm visible light region. We successfully applied the apparatus to analyse the oxidation processes of apple and meat, demonstrating that the design and the associated data analysis are of high value for visualising chemistry related to food quality and safety.
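    As a sketch of how an internal-scan acquisition of this kind is typically turned into analysable data, the snippet below assembles successive line frames into a (y, x, wavelength) cube and extracts simple spectral summaries; the function names, ROI convention, and band-ratio index are assumptions for illustration, not the instrument's actual processing chain.

```python
import numpy as np

def assemble_hypercube(line_frames):
    """Stack successive line-scan frames (each spatial_x by wavelength) into a (y, x, lambda) cube."""
    return np.stack(line_frames, axis=0)

def mean_spectrum(cube, roi):
    """Average spectrum over a rectangular region of interest roi = (y0, y1, x0, x1)."""
    y0, y1, x0, x1 = roi
    return cube[y0:y1, x0:x1, :].reshape(-1, cube.shape[2]).mean(axis=0)

def band_ratio_index(cube, wavelengths, band_a_nm, band_b_nm):
    """Two-band ratio image, a simple way to map a spectral change (e.g. oxidation) over the scene."""
    ia = int(np.argmin(np.abs(wavelengths - band_a_nm)))
    ib = int(np.argmin(np.abs(wavelengths - band_b_nm)))
    return cube[:, :, ia] / (cube[:, :, ib] + 1e-9)
```

    Tracking such region spectra or ratio maps across repeated acquisitions is one simple way to follow a degradation process over time.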

    Beyond imaging with coherent anti-Stokes Raman scattering microscopy

    Optical microscopy techniques can sample biological specimens using many contrast mechanisms, providing good sensitivity and high spatial resolution while minimally interfering with the samples. Coherent anti-Stokes Raman scattering (CARS) microscopy is a nonlinear microscopy technique based on the Raman effect. It shares the common characteristics of other optical microscopy modalities with the added benefit of providing an endogenous contrast mechanism sensitive to molecular vibrations. CARS is now recognized as a valuable imaging modality, especially for in vivo experiments, since it eliminates the need for exogenous contrast agents and hence the problems related to the delivery, specificity, and invasiveness of those markers. However, there are still several obstacles preventing the wide-scale adoption of CARS in biology and medicine: the cost and complexity of current systems, the difficulty of operating and maintaining them, the lack of flexibility of the contrast mechanism, the low tuning speed, and the poor accessibility of adapted image analysis methods.
This doctoral thesis strives to move beyond some of the current limitations of CARS imaging in the hope that it might encourage a wider adoption of CARS as a microscopy technique. First, we introduced a new CARS spectral imaging system with a vibrational tuning speed many orders of magnitude faster than other narrowband techniques. The system presented in this original contribution is based on a synchronized picosecond fibre laser that is both robust and portable. It can access Raman lines over a significant portion of the high-wavenumber region (2700–2950 cm-1) at rates of up to 10,000 spectral points per second and is perfectly suitable for the acquisition of CARS spectral images in thick tissue. Secondly, we proposed a new image analysis method for the assessment of myelin health in images of longitudinal sections of spinal cord. We introduced a metric sensitive to the organization/disorganization of the myelin structure and showed how it could be used to study pathologies such as multiple sclerosis. Finally, we developed a fully automated segmentation method specifically designed for CARS images of transverse cross sections of nerve tissue. We used our method to extract nerve fibre morphology information from large-scale CARS images.
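    The thesis' own myelin-organization metric is not specified in the abstract; as a clearly labelled stand-in, the sketch below computes the standard structure-tensor coherence, a common generic measure of local fibre alignment in microscopy images (the smoothing scale and the Sobel/Gaussian choices are illustrative assumptions).

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def orientation_coherence(image, sigma=4.0):
    """Structure-tensor coherence map: ~1 where fibres are locally well aligned, ~0 where disorganized."""
    img = image.astype(float)
    gx = sobel(img, axis=1)
    gy = sobel(img, axis=0)
    # locally averaged structure-tensor components
    jxx = gaussian_filter(gx * gx, sigma)
    jyy = gaussian_filter(gy * gy, sigma)
    jxy = gaussian_filter(gx * gy, sigma)
    # (lambda1 - lambda2) / (lambda1 + lambda2) of the 2x2 tensor, written in closed form
    num = np.sqrt((jxx - jyy) ** 2 + 4.0 * jxy ** 2)
    den = jxx + jyy + 1e-12
    return num / den
```

    Averaging such a coherence map over a longitudinal section gives a single scalar that drops as the fibre structure becomes disorganized, which is the general kind of behaviour the thesis' metric is designed to capture.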

    Affine multi-view modelling for close range object measurement

    In photogrammetry, sensor modelling with 3D point estimation is a fundamental topic of research. Perspective frame cameras offer the mathematical basis for close range modelling approaches. The norm is to employ robust bundle adjustments for simultaneous parameter estimation and 3D object measurement. In 2D-to-3D modelling strategies, image resolution, scale, sampling and geometric distortion are key prior factors. Non-conventional image geometries that use uncalibrated cameras are established in computer vision approaches; these aim for fast solutions at the expense of precision. The projective camera is defined in homogeneous terms and linear algorithms are employed. An attractive sensor model free of projective distortions is the affine camera. Affine modelling has been studied in the contexts of geometry recovery, feature detection and texturing in vision; however, multi-view approaches for precise object measurement are not yet widely available. This project investigates affine multi-view modelling from a photogrammetric standpoint. A new affine bundle adjustment system has been developed for point-based data observed in close range image networks. The system allows calibration, orientation and 3D point estimation. It is processed as a least squares solution with high redundancy, providing statistical analysis. Starting values are recovered from a combination of implicit perspective and explicit affine approaches. System development focuses on the retrieval of orientation parameters, 3D point coordinates and internal calibration, with definition of the system datum, sensor scale and radial lens distortion. Algorithm development is supported by simulation-based descriptions of the methods. Initialization and implementation are evaluated with statistical indicators, algorithm convergence and parameter correlations. Object space is assessed by evaluating the 3D point correlation coefficients and error ellipsoids. Sensor scale is checked by comparing camera systems using quality and accuracy metrics. For independent method evaluation, testing is carried out against a perspective bundle adjustment tool with similar indicators. Test datasets are initialized from precise reference image networks. Real affine image networks are acquired with an optical system (~1M pixel CCD cameras with a 0.16x telecentric lens). Analysis of the tests shows that the affine method results in an RMS image misclosure at the sub-pixel level and precisions of a few tenths of microns in object space.
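    For readers unfamiliar with the affine sensor model, the sketch below shows its simplest form: a 2x4 projection with no perspective division, fitted to known 2D-3D correspondences by linear least squares. This illustrates only the underlying camera model, not the thesis' bundle adjustment, which additionally estimates orientation, datum, sensor scale and radial distortion.

```python
import numpy as np

def fit_affine_camera(X, x):
    """Least-squares affine camera: x_i is approximately P @ [X_i; 1] with P a 2x4 matrix.

    X : (N, 3) object-space points, x : (N, 2) observed image points, N >= 4.
    """
    Xh = np.hstack([X, np.ones((len(X), 1))])       # homogeneous object coordinates
    Z, *_ = np.linalg.lstsq(Xh, x, rcond=None)      # solves Xh @ Z = x column-wise
    return Z.T                                      # 2x4 affine projection matrix

def project_affine(P, X):
    """Project 3D points with the affine camera; note there is no perspective division."""
    Xh = np.hstack([X, np.ones((len(X), 1))])
    return Xh @ P.T
```

    Because the model is linear in its eight parameters, it is well suited to telecentric or long-focal-length imagery of the kind used in the tests, and it slots naturally into a least-squares bundle adjustment with the usual statistical quality measures.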

    Holographic Manipulation of Nanostructured Fiber Optics Enables Spatially-Resolved, Reconfigurable Optical Control of Plasmonic Local Field Enhancement and SERS

    Integration of plasmonic structures on step-index optical fibers is attracting interest for both applications and fundamental studies. However, the possibility of dynamically controlling the coupling between the guided light fields and the plasmonic resonances is hindered by the turbidity of light propagation in multimode fibers (MMFs). This pivotal point strongly limits the range of studies that can benefit from nanostructured fiber optics. Fortunately, harnessing the interaction between plasmonic modes on the fiber tip and the full set of guided modes can bring this technology to its next generation. Here, the intrinsic wealth of information carried by the guided modes is exploited to spatiotemporally control the plasmonic resonances of the coupled system. This concept is demonstrated by employing dynamic phase modulation to structure the response of plasmonic MMFs both on the plasmonic facet and in the corresponding Fourier plane, achieving spatially selective field enhancement and direct control of the probe's working point in the dispersion diagram. Such a conceptual leap would transform the biomedical applications of holographic endoscopic imaging by integrating new sensing and manipulation capabilities.
    L.C. and Fi.P. contributed equally to this work. M.D.V. and Fe.P. jointly supervised and are co-last authors of this work. L.C., D.Z., L.M.P., C.C., M.D.V., and Fe.P. acknowledge the European Union's Horizon 2020 Research and Innovation Program under Grant Agreement No. 828972. Fi.P., A.B., and Fe.P. acknowledge the European Research Council under the European Union's Horizon 2020 Research and Innovation Program under Grant Agreement No. 677683. Fi.P., M.D.V., and Fe.P. acknowledge the European Union's Horizon 2020 Research and Innovation Program under Grant Agreement No. 101016787. M.P. and M.D.V. acknowledge the European Research Council under the European Union's Horizon 2020 Research and Innovation Program under Grant Agreement No. 692943. M.P., Fe.P., and M.D.V. acknowledge the U.S. National Institutes of Health (Grant No. 1UF1NS108177-01). M.D.V. acknowledges the U.S. National Institutes of Health (Grant No. U01NS094190).
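    The abstract does not detail the holographic control scheme; as a generic, toy illustration of the wavefront-shaping principle it relies on, the sketch below assumes an already-measured complex transmission matrix for the multimode fiber and uses phase-only conjugation of one of its rows to enhance the field at a chosen output location (the sizes and the random matrix are stand-ins, not measured data).

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 256, 256                        # SLM segments and output speckle grains (toy sizes)

# Toy stand-in for a measured multimode-fiber transmission matrix (complex Gaussian entries)
T = (rng.normal(size=(n_out, n_in)) + 1j * rng.normal(size=(n_out, n_in))) / np.sqrt(2 * n_in)

target = 42                                   # output location where we want the field enhanced

# Phase-only conjugation of the target row: every input mode arrives in phase at `target`
phase = -np.angle(T[target])
e_in = np.exp(1j * phase) / np.sqrt(n_in)     # unit-power, phase-only shaped input field
e_out = T @ e_in

enhancement = np.abs(e_out[target]) ** 2 / np.mean(np.abs(e_out) ** 2)
print(f"intensity enhancement at the target output: {enhancement:.1f}x")
```

    In the article's setting the "output" would be a plasmonic hot spot on the nanostructured facet rather than a free-space pixel, and the phase pattern is updated dynamically, but the underlying idea of shaping the guided modes so they interfere constructively at a chosen location is the same.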