
    Event-Based Visual-Inertial Odometry on a Fixed-Wing Unmanned Aerial Vehicle

    Event-based cameras are a new type of visual sensor that operate under a unique paradigm. These cameras provide asynchronous data on the log-level changes in light intensity for individual pixels, independent of other pixels' measurements. Through this hardware-level approach to change detection, these cameras can achieve microsecond fidelity, millisecond latency, and ultra-wide dynamic range, all with very low power requirements. The advantages provided by event-based cameras make them excellent candidates for visual odometry (VO) for unmanned aerial vehicle (UAV) navigation. This document presents the research and implementation of an event-based visual-inertial odometry (EVIO) pipeline, which estimates a vehicle's 6-degrees-of-freedom (DOF) motion and pose utilizing an affixed event-based camera with an integrated Micro-Electro-Mechanical Systems (MEMS) inertial measurement unit (IMU). The front-end of the EVIO pipeline uses the current motion estimate of the pipeline to generate motion-compensated frames from the asynchronous event camera data. These frames are fed to the back-end of the pipeline, which uses a Multi-State Constrained Kalman Filter (MSCKF) [1] implemented with Scorpion, a Bayesian state estimation framework developed by the Autonomy and Navigation Technology (ANT) Center at the Air Force Institute of Technology (AFIT) [2]. This EVIO pipeline was tested on selections from the benchmark Event Camera Dataset [3] and on a dataset collected, as part of this research, during the ANT Center's first flight test with an event-based camera.
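    The motion-compensation step in such a front-end can be illustrated with a minimal sketch: each event is warped to a common reference time using the current motion estimate, then accumulated into a frame. This is a generic illustration under a rotation-only motion model, not the pipeline's actual code; all function names and parameters are assumptions.

```python
import numpy as np

def small_angle_rotation(theta):
    """First-order rotation matrix: exp([theta]_x) ~ I + [theta]_x."""
    tx, ty, tz = theta
    return np.eye(3) + np.array([[0.0, -tz,  ty],
                                 [ tz, 0.0, -tx],
                                 [-ty,  tx, 0.0]])

def motion_compensated_frame(events, t_ref, omega, K, shape):
    """Warp events to time t_ref under a rotation-only model and
    accumulate their polarities into a frame.

    events: iterable of (x, y, t, polarity)
    omega:  IMU body rates, rad/s, shape (3,)
    K:      3x3 camera intrinsic matrix
    shape:  (height, width) of the sensor
    """
    frame = np.zeros(shape, dtype=np.float32)
    K_inv = np.linalg.inv(K)
    for x, y, t, p in events:
        # Rotation accumulated over the time gap to the reference instant
        R = small_angle_rotation(omega * (t_ref - t))
        u, v, w = K @ (R @ (K_inv @ np.array([x, y, 1.0])))
        xi, yi = int(round(u / w)), int(round(v / w))
        if 0 <= yi < shape[0] and 0 <= xi < shape[1]:
            frame[yi, xi] += 1.0 if p > 0 else -1.0
    return frame
```

    Translational compensation would additionally require scene depth, which is why rotation-only warping is a common simplification for short accumulation windows.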

    Kinect Range Sensing: Structured-Light versus Time-of-Flight Kinect

    Recently, the new Kinect One was released by Microsoft, providing the next generation of real-time range sensing devices based on the Time-of-Flight (ToF) principle. Since the first Kinect version used a structured-light approach, one would expect various differences in the characteristics of the range data delivered by the two devices. This paper presents a detailed and in-depth comparison between both devices. To conduct the comparison, we propose a framework of seven different experimental setups, which forms a generic basis for evaluating range cameras such as the Kinect. The experiments have been designed to capture the individual effects of the Kinect devices in as isolated a manner as possible, and in a way that they can be adopted to evaluate any other range sensing device. The overall goal of this paper is to provide solid insight into the pros and cons of either device, so that scientists interested in using Kinect range sensing cameras in their specific application scenario can directly assess the expected benefits and potential problems of either device. (58 pages, 23 figures. Accepted for publication in Computer Vision and Image Understanding, CVIU.)
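    The ToF principle mentioned above can be made concrete with a short sketch: a continuous-wave ToF camera infers depth from the phase shift between emitted and received modulated light. The modulation frequency below is an illustrative value, not the Kinect One's documented setting.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def tof_depth(phase_shift, mod_freq_hz):
    """Depth from the measured phase shift (radians) at a given
    modulation frequency: d = c * dphi / (4 * pi * f).
    The range is unambiguous only up to c / (2 * f)."""
    return C * phase_shift / (4.0 * np.pi * mod_freq_hz)

# Example: a phase shift of pi/2 at an assumed 80 MHz modulation -> ~0.47 m
print(tof_depth(np.pi / 2, 80e6))
```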

    The Imaging Magnetograph eXperiment (IMaX) for the Sunrise balloon-borne solar observatory

    The Imaging Magnetograph eXperiment (IMaX) is a spectropolarimeter built by four institutions in Spain that flew on board the Sunrise balloon-borne telescope in June 2009 for almost six days over the Arctic Circle. As a polarimeter, IMaX uses fast polarization modulation (based on two liquid crystal retarders), real-time image accumulation, and dual-beam polarimetry to reach polarization sensitivities of 0.1%. As a spectrograph, the instrument uses a LiNbO3 etalon in double pass and a narrow-band pre-filter to achieve a spectral resolution of 85 mÅ. IMaX uses the highly Zeeman-sensitive Fe I line at 5250.2 Å and observes all four Stokes parameters at various points inside the spectral line. This allows vector magnetograms, Dopplergrams, and intensity frames to be produced that, after reconstruction, reach spatial resolutions in the 0.15-0.18 arcsec range over a 50x50 arcsec FOV. Time cadences vary between 10 and 33 seconds, although the shortest one only includes longitudinal polarimetry. The spectral line is sampled in various ways depending on the applied observing mode, from just two points inside the line to 11 of them. All observing modes include one extra wavelength point in the nearby continuum. Gauss-equivalent sensitivities are 4 G for longitudinal fields and 80 G for transverse fields per wavelength sample. The line-of-sight (LOS) velocities are estimated with statistical errors of the order of 5-40 m/s. The design, calibration, and integration phases of the instrument, together with the implemented data reduction scheme, are described in some detail. (17 figures.)
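    The polarization-modulation scheme described above can be illustrated with a minimal demodulation sketch: a stack of modulated intensity images is mapped back to the Stokes vector (I, Q, U, V) by inverting the modulation matrix. The ideal matrix below is a generic textbook scheme, not IMaX's calibrated modulation.

```python
import numpy as np

# Each row gives the Stokes weights of one modulation state:
# I_k = O[k] @ (I, Q, U, V)
O = 0.5 * np.array([[1.0,  1.0,  1.0,  1.0],
                    [1.0,  1.0, -1.0, -1.0],
                    [1.0, -1.0,  1.0, -1.0],
                    [1.0, -1.0, -1.0,  1.0]])
D = np.linalg.inv(O)  # demodulation matrix (use a pseudo-inverse for >4 states)

def demodulate(images):
    """Map a (4, H, W) stack of modulated frames to (4, H, W) Stokes maps."""
    return np.tensordot(D, images, axes=1)
```

    In a dual-beam setup the same demodulation is applied to both beams and the results are combined, which cancels seeing- and jitter-induced crosstalk to first order.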

    Event-Based Visual-Inertial Odometry Using Smart Features

    Event-based cameras are a novel type of visual sensor that operate under a unique paradigm, providing asynchronous data on the log-level changes in light intensity for individual pixels. This hardware-level approach to change detection allows these cameras to achieve ultra-wide dynamic range and high temporal resolution. Furthermore, the advent of convolutional neural networks (CNNs) has led to state-of-the-art navigation solutions that now rival or even surpass human-engineered algorithms. The advantages offered by event cameras and CNNs make them excellent tools for visual odometry (VO). This document presents the implementation of a CNN trained to detect and describe features within an image, as well as the implementation of an event-based visual-inertial odometry (EVIO) pipeline, which estimates a vehicle's 6-degrees-of-freedom (DOF) pose using an affixed event-based camera with an integrated inertial measurement unit (IMU). The front-end of this pipeline utilizes a neural network for generating image frames from asynchronous event camera data. These frames are fed into a multi-state constraint Kalman filter (MSCKF) back-end that uses the output of the developed CNN to perform measurement updates. The EVIO pipeline was tested on a selection from the Event-Camera Dataset [1], and on a dataset collected from a fixed-wing unmanned aerial vehicle (UAV) flight test conducted by the Autonomy and Navigation Technology (ANT) Center.
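    One piece of such a learned front-end can be sketched concisely: mutual nearest-neighbour matching of CNN descriptors between consecutive frames, which is one plausible way the feature tracks consumed by the MSCKF update could be associated. The descriptor dimensionality and distance threshold below are illustrative assumptions, not the document's settings.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, max_dist=0.7):
    """Return index pairs (i, j) that are mutual nearest neighbours in L2."""
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=-1)
    ab = d.argmin(axis=1)  # best match in b for each descriptor in a
    ba = d.argmin(axis=0)  # best match in a for each descriptor in b
    return [(i, j) for i, j in enumerate(ab)
            if ba[j] == i and d[i, j] < max_dist]

# Toy usage with random unit descriptors: each one matches itself
rng = np.random.default_rng(0)
a = rng.normal(size=(5, 256))
a /= np.linalg.norm(a, axis=1, keepdims=True)
print(match_descriptors(a, a))  # -> [(0, 0), (1, 1), ...]
```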

    Map-based localization for urban service mobile robotics

    Mobile robotics research is currently focused on exporting the autonomous navigation results achieved in indoor environments to more challenging environments such as, for instance, urban pedestrian areas. Developing mobile robots with autonomous navigation capabilities in such urban environments is a basic requirement for the upper-level services that could be provided to a community of users. However, exporting indoor techniques to outdoor urban pedestrian scenarios is not straightforward, due to the larger size of the environment, the dynamism of the scene caused by pedestrians and other moving obstacles, the sunlight conditions, and the high presence of three-dimensional elements such as ramps, steps, curbs, or holes. Moreover, GPS-based mobile robot localization has demonstrated insufficient performance for robust long-term navigation in urban environments. One of the key modules within autonomous navigation is localization. If localization assumes an a priori map, even one that is not a complete model of the environment, it is called map-based. This assumption is realistic, since city councils are increasingly building precise maps of their cities, especially of the most interesting places such as city downtowns. Having robots localized within a map allows for high-level planning and monitoring, so that robots can reach goal points expressed on the map by deliberatively following a previously planned route. This thesis deals with map-based mobile robot localization in urban pedestrian areas. The approach uses the particle filter algorithm, a well-known and widely used probabilistic and recursive method for data fusion and state estimation. The main contributions of the thesis are divided into four aspects: (1) long-term experiments of mobile robot 2D and 3D position tracking in real urban pedestrian scenarios within a fully autonomous navigation framework, (2) a fast and accurate technique to compute on-line range observation models in 3D environments, a basic step required for the real-time performance of the developed particle filter, (3) the formulation of a particle filter that integrates asynchronous data streams, and (4) a theoretical proposal to solve the global localization problem in an active and cooperative way, defining cooperation as either information sharing among the robots or planning joint actions to solve a common goal.
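    The particle-filter recursion at the core of the thesis, including its handling of asynchronous data streams (contribution 3), can be sketched generically: whichever sensor reading arrives next triggers a predict step over the elapsed time, a likelihood-based reweighting, and resampling when the effective sample size collapses. The motion and measurement models below are placeholders, not the thesis's own.

```python
import numpy as np

def pf_step(particles, weights, dt, z, meas_model, motion_std, meas_std):
    """One asynchronous predict/update/resample cycle.

    particles: (n, d) state hypotheses; weights: (n,) normalized weights
    dt: time elapsed since the previous measurement of any sensor
    z:  the new measurement; meas_model maps particles to expected z
    """
    n = len(particles)
    # Predict: diffuse the particles over the elapsed time (placeholder model)
    particles = particles + np.sqrt(dt) * motion_std * np.random.randn(*particles.shape)
    # Update: reweight by the Gaussian likelihood of the measurement
    weights = weights * np.exp(-0.5 * ((z - meas_model(particles)) / meas_std) ** 2)
    weights = weights / weights.sum()
    # Resample when the effective sample size drops below half the particles
    if 1.0 / np.sum(weights ** 2) < n / 2:
        idx = np.random.choice(n, size=n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights
```

    Because dt is an argument rather than a fixed period, measurements from different sensors can be folded in whenever they arrive, in timestamp order.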

    The Robo-AO-2 facility for rapid visible/near-infrared AO imaging and the demonstration of hybrid techniques

    We are building a next-generation laser adaptive optics system, Robo-AO-2, for the UH 2.2-m telescope that will deliver robotic, diffraction-limited observations at visible and near-infrared wavelengths in unprecedented numbers. The superior Maunakea observing site, expanded spectral range, and rapid response to high-priority events represent a significant advance over the prototype. Robo-AO-2 will include a new reconfigurable natural guide star sensor for exquisite wavefront correction on bright targets, and will demonstrate potentially transformative hybrid AO techniques that promise to extend the faintness limit of current and future exoplanet adaptive optics systems. (15 pages.)

    Vision Science and Technology at NASA: Results of a Workshop

    A broad review is given of vision science and technology within NASA. The subject is defined and its applications both in NASA and in the nation at large are noted. A survey of current NASA efforts is given, noting strengths and weaknesses of the NASA program.

    The Role of Vision Algorithms for Micro Aerial Vehicles

    This work investigates research topics related to visual aerial navigation in loosely structured and cluttered environments. During the inspection of the desired infrastructure, the robot is required to fly in an environment that is uncertain and only partially structured because, usually, no reliable layouts or drawings of the surroundings are available. To support these features, advanced cognitive capabilities are required, and the role played by vision is of paramount importance. The use of vision together with other onboard sensors such as the IMU and GPS plays a fundamental role in providing a high degree of autonomy to flying vehicles. The outline of this thesis is organized as follows:
    • Chapter 1 is a general introduction to the aerial robotics field, the quadrotor platform, and the use of onboard sensors like cameras and IMUs for autonomous navigation. A discussion of camera modeling and the current state of the art in vision-based control, navigation, environment reconstruction, and sensor fusion is presented.
    • Chapter 2 presents vision-based control algorithms useful for reactive tasks like collision avoidance, perching, and grasping. Two main contributions are presented, based on a relative depth map and on image-based visual servoing respectively (a sketch of the latter follows this outline).
    • Chapter 3 discusses the use of vision algorithms for localization and mapping. Compared to the previous chapter, the vision algorithms are more complex, involving vehicle pose estimation and environment reconstruction. An algorithm based on RGB-D sensors for localization, extendable to multiple vehicles, is presented. Moreover, an environment representation for planning purposes, applied to industrial environments, is introduced.
    • Chapter 4 introduces the possibility of combining vision measurements and the IMU to estimate the motion of the vehicle. A new contribution based on Pareto optimization, which outperforms classical Kalman filtering techniques, is presented.
    • Chapter 5 contains conclusions, remarks, and proposals for possible future developments.
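    As a sketch of the image-based visual servoing referenced in Chapter 2: the classical IBVS law commands a camera twist v = -lambda * L^+ e, where e stacks the errors between current and desired normalized image features and L is the interaction matrix of point features. This is the textbook formulation, with feature depths assumed known; it is not the thesis's specific controller.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """2x6 interaction matrix of one normalized image point (x, y) at depth Z."""
    return np.array([
        [-1/Z,    0, x/Z,      x*y, -(1 + x*x),  y],
        [   0, -1/Z, y/Z,  1 + y*y,       -x*y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Camera twist (vx, vy, vz, wx, wy, wz) driving features to their goals."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ e
```

    With at least three non-collinear point features the stacked matrix has full column rank, so the pseudo-inverse yields a unique 6-DOF velocity command.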