19 research outputs found

    Combining omnidirectional vision with polarization vision for robot navigation

    Polarization is the phenomenon that describes the orientations of the oscillations of light waves, which are restricted in direction. Polarized light has multiple uses in the animal kingdom, ranging from foraging, defense, and communication to orientation and navigation. Chapter (1) briefly covers some important aspects of polarization and explains our research problem. We aim to use a polarimetric-catadioptric sensor, since many applications in computer vision and robotics can benefit from such a combination, especially robot orientation (attitude estimation) and navigation. Chapter (2) mainly covers the state of the art of vision-based attitude estimation. As unpolarized sunlight enters the Earth's atmosphere, it is Rayleigh-scattered by air and becomes partially linearly polarized. This skylight polarization provides a significant clue for understanding the environment: its state conveys the information needed to obtain the sun orientation. Robot navigation, sensor planning, and many other applications may benefit from this navigation cue. Chapter (3) covers the state of the art in capturing skylight polarization patterns using omnidirectional sensors (e.g. fisheye and catadioptric sensors). It also explains the characteristics of skylight polarization and gives a new theoretical derivation of the skylight angle-of-polarization pattern. Our aim is to obtain an omnidirectional 360° view combined with polarization characteristics; hence, this work is based on catadioptric sensors, which are composed of reflective surfaces and lenses. The reflective surface is usually metallic, so the incident skylight polarization state, which is mostly partially linear, becomes elliptical after reflection. Given the measured reflected polarization state, we want to recover the incident polarization state. Chapter (4) proposes a method to measure the light polarization parameters using a catadioptric sensor. We show that the incident Stokes vector can be measured from three of the four components of the reflected Stokes vector. Once the incident polarization patterns are available, the solar zenith and azimuth angles can be estimated directly from these patterns. Chapter (5) discusses polarization-based robot orientation and navigation and proposes new algorithms to estimate these solar angles; to the best of our knowledge, this work is the first to estimate the sun zenith angle from the incident polarization patterns. We also propose to estimate the orientation of any vehicle from these polarization patterns. Finally, the work is concluded and possible future research directions are discussed in Chapter (6). More examples of skylight polarization patterns, their calibration, and the proposed applications are given in Appendix (B). Our work may pave the way from conventional polarization vision to omnidirectional polarization vision. It enables bio-inspired robot orientation and navigation applications and possible outdoor localization based on skylight polarization patterns, where the solar angles at a given date and time can be used to infer the vehicle's geographical location.
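
    A minimal sketch of the single-scattering Rayleigh model referred to in Chapter (3) may make the skylight patterns concrete: the degree of polarization depends only on the angular distance between the viewed sky point and the sun, and the e-vector lies perpendicular to the scattering plane. The function names and the maximum degree of polarization d_max below are illustrative assumptions, not the thesis implementation.

        import numpy as np

        def sky_dir(zenith, azimuth):
            """Unit vector for a sky direction given zenith and azimuth angles (radians)."""
            return np.array([np.sin(zenith) * np.cos(azimuth),
                             np.sin(zenith) * np.sin(azimuth),
                             np.cos(zenith)])

        def rayleigh_polarization(view_zen, view_az, sun_zen, sun_az, d_max=0.8):
            """Degree of linear polarization and e-vector direction for one sky point.

            Single-scattering Rayleigh model: the degree of polarization depends only
            on the scattering angle gamma between the viewing direction and the sun,
            and the e-vector is perpendicular to the scattering plane (sun, observer,
            sky point). d_max < 1 roughly accounts for atmospheric depolarization.
            """
            v = sky_dir(view_zen, view_az)
            s = sky_dir(sun_zen, sun_az)
            cos_gamma = np.clip(np.dot(v, s), -1.0, 1.0)
            dop = d_max * (1.0 - cos_gamma**2) / (1.0 + cos_gamma**2)
            e_vector = np.cross(v, s)              # perpendicular to the scattering plane
            norm = np.linalg.norm(e_vector)
            e_vector = e_vector / norm if norm > 1e-9 else np.zeros(3)
            return dop, e_vector

        # Example: sun at 60 deg zenith, 180 deg azimuth; look straight up at the zenith.
        dop, e_vec = rayleigh_polarization(0.0, 0.0, np.radians(60), np.radians(180))
        print(f"DoP at zenith: {dop:.2f}, e-vector: {e_vec}")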

    Sky segmentation with ultraviolet images can be used for navigation

    Inspired by ant navigation, we explore a method for sky segmentation using ultraviolet (UV) light. A standard camera is adapted to allow collection of outdoor images containing light in the visible range, in UV only and in green only. Automatic segmentation of the sky region using UV only is significantly more accurate and far more consistent than with visible wavelengths over a wide range of locations, times and weather conditions, and can be accomplished with a very low complexity algorithm. We apply this method to obtain compact binary (sky vs non-sky) images from panoramic UV images taken along a 2 km route in an urban environment. Using either sequence SLAM or a visual compass on these images produces reliable localisation and orientation on a subsequent traversal of the route under different weather conditions.
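
    A rough sketch of the two building blocks described above, assuming panoramic images are available as 2-D numpy arrays with azimuth along the columns; the mean-intensity threshold stands in for whatever low-complexity segmentation rule the paper actually uses, and the brute-force column-shift search is only one possible visual compass.

        import numpy as np

        def segment_sky_uv(uv_image, threshold=None):
            """Binary sky mask from a UV intensity image.

            Sky appears much brighter than ground in UV, so a single global threshold
            (here the image mean, a stand-in for the paper's rule) often suffices.
            """
            if threshold is None:
                threshold = uv_image.mean()
            return (uv_image > threshold).astype(np.uint8)

        def visual_compass(reference_mask, current_mask):
            """Estimate yaw as the column shift that best aligns two panoramic sky masks."""
            n_cols = reference_mask.shape[1]
            errors = [np.count_nonzero(np.roll(current_mask, shift, axis=1) != reference_mask)
                      for shift in range(n_cols)]
            best_shift = int(np.argmin(errors))
            return 360.0 * best_shift / n_cols   # rotation in degrees

        # Usage with synthetic data: a 360-column panorama rotated by 40 columns.
        rng = np.random.default_rng(0)
        ref = (rng.random((60, 360)) > 0.5).astype(np.uint8)
        cur = np.roll(ref, -40, axis=1)
        print(visual_compass(ref, cur))   # 40.0 degrees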

    Vision systems for autonomous aircraft guidance


    Mechanisms of place recognition and path integration based on the insect visual system

    Animals are often able to solve complex navigational tasks in very challenging terrain, despite using low-resolution sensors and minimal computational power, providing inspiration for robots. In particular, many species of insect are known to solve complex navigation problems, often combining an array of different behaviours (Wehner et al., 1996; Collett, 1996). Their nervous system is also comparatively simple relative to that of mammals and other vertebrates. In the first part of this thesis, the visual input of a navigating desert ant, Cataglyphis velox, was mimicked by capturing images in ultraviolet (UV) at wavelengths similar to those sensed by the ant’s compound eye. The natural segmentation of ground and sky led to the hypothesis that skyline contours could be used by ants as features for navigation. As proof of concept, sky-segmented binary images were used as input for an established localisation algorithm, SeqSLAM (Milford and Wyeth, 2012), validating the plausibility of this claim (Stone et al., 2014). A follow-up investigation sought to determine whether using the sky as a feature would help overcome image-matching problems that the ant often faces, such as variation in tilt and yaw rotation. A robotic localisation study showed that using spherical harmonics (SH), a representation in the frequency domain, combined with the extracted sky can greatly help robots localise on uneven terrain. Results showed improved performance over state-of-the-art point-feature localisation methods on fast, bumpy tracks (Stone et al., 2016a). In the second part, part of the brain of the sweat bee Megalopta genalis was modelled in an attempt to understand how insects perform a navigational task called path integration. A recent discovery that two populations of cells act as a celestial compass and visual odometer, respectively, led to the hypothesis that circuitry at their point of convergence in the central complex (CX) could give rise to path integration. A firing-rate-based model was developed with connectivity derived from the overlap of observed neural arborisations of individual cells and successfully used to build up a home vector and steer an agent back to the nest (Stone et al., 2016b). This approach has the appeal that neural circuitry is highly conserved across insects, so findings here could have wide implications for insect navigation in general. The developed model is the first functioning path integrator that is based on individual cellular connections.
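
    For readers unfamiliar with path integration, the sketch below shows only the abstract computation (accumulating a home vector from heading and distance estimates); it is not the firing-rate central-complex model developed in the thesis.

        import numpy as np

        def integrate_path(steps):
            """Accumulate a home vector from (heading, distance) pairs.

            Path integration in its simplest form: each step adds a displacement
            vector; the negated sum points back to the nest.
            """
            home = np.zeros(2)
            for heading, distance in steps:
                home += distance * np.array([np.cos(heading), np.sin(heading)])
            home_direction = np.arctan2(-home[1], -home[0])   # heading back to the nest
            home_distance = np.linalg.norm(home)
            return home_direction, home_distance

        # Example: an L-shaped outbound path of 10 m east then 5 m north.
        direction, distance = integrate_path([(0.0, 10.0), (np.pi / 2, 5.0)])
        print(np.degrees(direction), distance)   # about -153.4 degrees, 11.18 m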

    VGC 2023 - Unveiling the dynamic Earth with digital methods: 5th Virtual Geoscience Conference: Book of Abstracts

    Conference proceedings of the 5th Virtual Geoscience Conference, 21-22 September 2023, held in Dresden. The VGC is a multidisciplinary forum for researchers in geoscience, geomatics and related disciplines to share their latest developments and applications. Contents: Short Courses; Workshop Streams 1-3; Session 1 – Point Cloud Processing: Workflows, Geometry & Semantics; Session 2 – Visualisation, Communication & Teaching; Session 3 – Applying Machine Learning in Geosciences; Session 4 – Digital Outcrop Characterisation & Analysis; Session 5 – Airborne & Remote Mapping; Session 6 – Recent Developments in Geomorphic Process and Hazard Monitoring; Session 7 – Applications in Hydrology & Ecology; Poster Contributions.

    From skylight input to behavioural output : a computational model of the insect polarised light compass

    Many insects navigate by integrating the distances and directions travelled on an outward path, allowing direct return to the starting point. Fundamental to the reliability of this process is the use of a neural compass based on external celestial cues. Here we examine how such compass information could be reliably computed by the insect brain, given realistic constraints on the sky polarisation pattern and the insect eye sensor array. By processing the degree of polarisation in different directions for different parts of the sky, our model can directly estimate the solar azimuth and also infer the confidence of the estimate. We introduce a method to correct for tilting of the sensor array, as might be caused by travel over uneven terrain. We also show that the confidence can be used to approximate the change in sun position over time, allowing the compass to remain fixed with respect to ‘true north’ during long excursions. We demonstrate that the compass is robust to disturbances and can be effectively used as input to an existing neural model of insect path integration. We discuss the plausibility of mapping our model onto known neural circuits and of implementing it for robot navigation.
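
    As a highly simplified illustration of the inverse problem the compass model solves, the sketch below estimates the solar azimuth from a single zenith-pointing polarization unit with analysers at 0/45/90/135 degrees; the full model instead pools many sky directions and weights them by degree of polarisation, so this is only an assumption-laden toy version.

        import numpy as np

        def aop_from_intensities(i0, i45, i90, i135):
            """Angle of polarization from intensities behind 0/45/90/135 deg analysers."""
            s1 = i0 - i90
            s2 = i45 - i135
            return 0.5 * np.arctan2(s2, s1)

        def solar_azimuth_from_zenith(i0, i45, i90, i135):
            """Solar azimuth estimate from a zenith-pointing polarization unit.

            At the zenith the skylight e-vector is perpendicular to the solar meridian,
            so the solar azimuth is the measured AoP +/- 90 deg (a 180 deg ambiguity
            remains and must be resolved with other cues, e.g. the intensity gradient).
            """
            aop = aop_from_intensities(i0, i45, i90, i135)
            return (aop + np.pi / 2) % np.pi

        # Example: simulate a zenith measurement with the e-vector at 30 degrees.
        true_aop = np.radians(30)
        angles = np.radians([0, 45, 90, 135])
        intensities = 1.0 + 0.5 * np.cos(2 * (angles - true_aop))   # Malus-type response
        print(np.degrees(solar_azimuth_from_zenith(*intensities)))   # 120 degrees (mod 180)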

    Biologically Inspired Navigational Strategies Using Atmospheric Scattering Patterns

    A source of accurate and reliable heading is vital for the navigation of autonomous systems such as micro-air vehicles (MAVs). It is desirable that a passive, computationally efficient measure of heading be available even when magnetic heading is not. To address this scenario, a biologically inspired methodology to determine heading based on atmospheric scattering patterns is proposed. A simplified model of the atmosphere is presented, and a hardware analog to the photodetection of the insect Dorsal Rim Area (DRA) is introduced. Several algorithms are developed to map the patterns of polarized and unpolarized celestial light to heading relative to the sun. Temporal information is used to determine the current solar position, which is then merged with the solar-relative heading to yield absolute heading. Simulation and outdoor experimentation are used to validate the proposed heading determination methodology. Celestial heading measurements are shown to provide closed-loop heading control of a ground-based robot.
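
    The merge of solar-relative heading with solar position reduces to a small piece of angular arithmetic; in the sketch below the solar azimuth is assumed to come from an external ephemeris for the current date, time and location rather than being computed here, and the sign conventions are illustrative.

        def absolute_heading(sun_relative_heading_deg, solar_azimuth_deg):
            """Combine a sun-relative heading with the current solar azimuth.

            sun_relative_heading_deg: vehicle heading measured clockwise from the sun
                direction (e.g. derived from the celestial polarization pattern).
            solar_azimuth_deg: sun azimuth from an ephemeris for the current date,
                time and location (assumed given; any solar-position routine works).
            Returns the vehicle heading clockwise from true north, in [0, 360).
            """
            return (solar_azimuth_deg + sun_relative_heading_deg) % 360.0

        # Example: sun at azimuth 230 deg, vehicle pointing 75 deg clockwise of the sun.
        print(absolute_heading(75.0, 230.0))   # 305.0 degrees from true north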

    Development of high-precision snow mapping tools for Arctic environments

    Snow is highly variable in time and space, and thus many observation points are needed to describe the present state of the snowpack accurately. This description of the state of the snowpack is necessary to validate and improve snow modeling efforts and remote sensing applications. Traditional snowpit analysis delivers a highly detailed picture of the present state of the snow at a particular location but, being time-consuming, lacks distribution in space and time. On the opposite end of the spatial scale are orbital solutions covering the surface of the Earth at regular intervals, but at the cost of a much lower resolution. To improve the ability to collect spatial snow data efficiently during a field campaign, we developed a custom-made remotely piloted aircraft system (RPAS) that delivers snow depth maps over a few hundred square meters using Structure-from-Motion (SfM). The RPAS is capable of flying in extremely low temperatures, where no commercial solutions are available. The system achieves a horizontal resolution of 6 cm with a snow depth RMSE of 39% without vegetation (48.5% with vegetation). As the SfM method does not distinguish between different snow layers, I developed an algorithm for a frequency-modulated continuous-wave (FMCW) radar that distinguishes between the two main snow layers found regularly in the Arctic: depth hoar and wind slab. The distinction is important because the differing characteristics of these layers determine the amount of water stored in the snow that will be available to the ecosystem during the melt season. Depending on site conditions, the radar estimates the snow depth with an RMSE between 13% and 39%. Finally, I equipped the radar with a high-precision geolocation system. With this setup, the geolocation uncertainty of the radar is on average < 5 cm. From the radar measurement, the distance to the top and the bottom of the snowpack can be extracted. In addition to snow depth, this also delivers data points for interpolating an elevation model of the underlying solid surface. I used the Triangular Irregular Network (TIN) method for all interpolations. The system can be mounted on an RPAS or a snowmobile and thus offers a lot of flexibility. These tools will assist snow modeling as they provide data over an area instead of a single point. The data can be used to force or validate models. Improved models will help to predict the size, health, and movements of ungulate populations, whose survival depends on snow conditions (Langlois et al., 2017). Similar to the validation of snow models, the presented tools allow comparison and validation of other remote sensing data (e.g. satellite) and broaden our understanding. Finally, the resulting maps can help ecologists better assess the state of an ecosystem, since they give a more complete picture of the snow cover at a larger scale than could be achieved with traditional snowpits.
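
    As an illustration of how an FMCW radar return maps to snow-layer geometry, the sketch below converts echo beat frequencies to ranges using the standard relation R = c·f_beat/(2·B/T) and a relative permittivity per layer; the radar parameters and permittivity values are placeholders, not the calibrated values from this work.

        C = 299_792_458.0   # speed of light in vacuum, m/s

        def beat_to_range(f_beat_hz, bandwidth_hz, sweep_time_s, rel_permittivity=1.0):
            """Range corresponding to one FMCW beat frequency.

            Standard FMCW relation R = c * f_beat / (2 * slope), with slope = B / T.
            Inside snow the propagation speed drops by sqrt(eps_r), so distances
            between echoes within a layer are scaled accordingly.
            """
            slope = bandwidth_hz / sweep_time_s
            return C * f_beat_hz / (2.0 * slope * rel_permittivity**0.5)

        def layer_thicknesses(f_surface, f_interface, f_ground, bandwidth_hz, sweep_time_s,
                              eps_wind_slab=1.9, eps_depth_hoar=1.4):
            """Wind-slab and depth-hoar thicknesses from three echo beat frequencies."""
            slab = (beat_to_range(f_interface, bandwidth_hz, sweep_time_s, eps_wind_slab)
                    - beat_to_range(f_surface, bandwidth_hz, sweep_time_s, eps_wind_slab))
            hoar = (beat_to_range(f_ground, bandwidth_hz, sweep_time_s, eps_depth_hoar)
                    - beat_to_range(f_interface, bandwidth_hz, sweep_time_s, eps_depth_hoar))
            return slab, hoar

        # Example: 6 GHz bandwidth swept in 1 ms, echoes at 42, 48 and 55 kHz.
        print(layer_thicknesses(42e3, 48e3, 55e3, 6e9, 1e-3))   # roughly 0.11 m and 0.15 m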

    Polarization Sensor Design for Biomedical Applications

    Advances in fabrication technology have enabled the development of compact, rigid polarization image sensors by integrating pixelated polarization filters with standard image sensing arrays. These compact sensors enable new applications across a variety of disciplines; however, their design and use are influenced by many factors. The underlying image sensor, the pixelated polarization filters, and the incident lighting conditions all directly impact how the sensor performs. In this research endeavor, I illustrate how a complete understanding of these factors can lead to both new technologies and applications in polarization sensing. To investigate the performance of the underlying image sensor, I present a new CMOS image sensor architecture with a pixel capable of operation using either measured voltages or currents. I show a detailed noise analysis of both modes and that, as designed, voltage mode operates with lower noise than current mode. Further, I integrated aluminum nanowires with this sensor post-fabrication, realizing the design of a compact CMOS sensor with polarization sensitivity. I describe a full set of experiments designed as a benchmark to evaluate the performance of compact, integrated polarization sensors. I use these tests to evaluate the sensor's response to incident intensity, wavelength, focus, and polarization state, demonstrating the accuracy and limitations of polarization measurements with such a compact sensor. Using these as guides, I present two novel biomedical applications that rely on the compact, real-time nature of integrated polarimeters. I first demonstrate how these sensors can be used to measure the dynamics of soft tissue in real time, with no moving parts or complex optical alignment. I used a 2-megapixel integrated polarization sensor to measure the direction and strength of alignment in a bovine flexor tendon at over 20 frames per second, with results that match the current method of rotating polarizers. Second, I present a new technique for optical neural recording that uses intrinsic polarization reflectance and requires no fluorescent dyes or electrodes. Exposing the antennal lobe of the locust Schistocerca americana, I was able to measure a change in the polarization reflectance during the introduction of the odors hexanol and octanol with the integrated CMOS polarization sensor.
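
    To make the measurement of "direction and strength of alignment" concrete, the sketch below recovers the degree and angle of linear polarization from a pixelated-polarizer frame, assuming the common 2x2 superpixel layout of 0/45/90/135 degree analysers; the actual sensor layout and calibration in this work may differ.

        import numpy as np

        def polarization_from_superpixel(raw):
            """DoLP and AoP maps from a pixelated-polarizer image.

            Assumes a 2x2 superpixel layout with analysers at 0 (top-left),
            45 (top-right), 135 (bottom-left) and 90 (bottom-right) degrees;
            raw is a 2-D array with even height and width.
            """
            i0, i45 = raw[0::2, 0::2].astype(float), raw[0::2, 1::2].astype(float)
            i135, i90 = raw[1::2, 0::2].astype(float), raw[1::2, 1::2].astype(float)
            s0 = 0.5 * (i0 + i45 + i90 + i135)
            s1 = i0 - i90
            s2 = i45 - i135
            dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)   # strength of alignment
            aop = 0.5 * np.arctan2(s2, s1)                          # direction of alignment
            return dolp, aop

        # Example with a synthetic 4x4 frame.
        frame = np.array([[200, 150, 200, 150],
                          [ 50, 100,  50, 100],
                          [200, 150, 200, 150],
                          [ 50, 100,  50, 100]])
        dolp, aop = polarization_from_superpixel(frame)
        print(dolp, np.degrees(aop))   # DoLP ~0.57, AoP ~22.5 degrees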

    A New Cooperative PPP-RTK System with Enhanced Reliability in Challenging Environments

    Compared to traditional PPP-RTK methods, cooperative PPP-RTK methods provide expandable service coverage and eliminate the need for a conventional, expensive data processing center and for the establishment and maintenance of a permanently deployed network of dense GNSS reference stations. However, current cooperative PPP-RTK methods suffer from some major limitations. First, they require a long initialization period before the augmentation service can be made available from the reference stations, which decreases their usability in practical applications. Second, inter-reference-station baseline ambiguity resolution (AR) and regional atmospheric models, as presented in current state-of-the-art PPP-RTK and network RTK (NRTK) methods, are not utilized to improve the accuracy and service coverage of the network augmentation. Third, the positioning performance of current PPP-RTK methods is significantly degraded in challenging environments due to multipath effects, non-line-of-sight (NLOS) errors, and poor satellite visibility and geometry caused by severe signal blockages. Finally, current position-domain or ambiguity-domain partial ambiguity resolution (PAR) methods suffer from high false-alarm and missed-detection rates, particularly in challenging environments with poor satellite geometry and observations contaminated by NLOS effects, gross errors, biases, and high observation noise. This thesis proposes a new cooperative PPP-RTK positioning system, which offers fast initialization, scalable coverage, and decentralized real-time kinematic precise positioning with enhanced reliability in challenging environments. The system is composed of three major components. The first component is a new cooperative PPP-RTK framework in which a scalable chain of cooperative static or moving reference stations generates single-reference-station-derived or reference-station-network-derived state-space-representation (SSR) corrections for fast ambiguity resolution at surrounding user stations, with no need for a conventional, expensive data processing center. The second component is a new multi-feature support vector machine (SVM) signal-classifier-based weighting scheme for GNSS measurements to improve kinematic GNSS positioning accuracy in urban environments. The weighting scheme is based on the identification of important features in GNSS data in urban environments and intelligent classification of line-of-sight (LOS) and NLOS signals. The third component is a new PAR method based on machine learning, which employs a combination of two SVMs to effectively identify and exclude bias sources from PAR without relying on satellite geometry. A prototype of the new PPP-RTK system was developed and extensively tested using publicly available real-time SSR products from the International GNSS Service (IGS) Real-Time Service (RTS).
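
    A minimal sketch of the SVM-based LOS/NLOS weighting idea, using scikit-learn and a small, made-up feature set (C/N0, elevation, pseudorange residual); the actual features, training data and weighting rule used in the proposed system are not specified here, so everything below is illustrative only.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # Each row: [C/N0 (dB-Hz), elevation (deg), pseudorange residual (m)] for one satellite.
        # Values and labels are fabricated for illustration; 1 = LOS, 0 = NLOS/multipath.
        X_train = np.array([[45.0, 60.0, 0.4],
                            [47.0, 72.0, 0.2],
                            [42.0, 40.0, 1.1],
                            [44.0, 55.0, 0.7],
                            [33.0, 15.0, 6.5],
                            [30.0, 10.0, 9.1],
                            [28.0, 20.0, 7.8],
                            [31.0, 12.0, 5.9]])
        y_train = np.array([1, 1, 1, 1, 0, 0, 0, 0])

        # RBF-kernel SVM on standardized features; the signed distance to the decision
        # boundary is squashed into (0, 1) and used as a per-satellite measurement weight.
        classifier = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
        classifier.fit(X_train, y_train)

        new_obs = np.array([[38.0, 35.0, 2.0]])
        score = classifier.decision_function(new_obs)[0]
        weight = 1.0 / (1.0 + np.exp(-score))          # illustrative weighting rule
        print(f"decision score = {score:.2f}, measurement weight = {weight:.2f}")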