
    Intelligent Behavioral Action Aiding for Improved Autonomous Image Navigation

    In egomotion image navigation, errors are common, especially when traversing areas with few landmarks. Since image navigation is often used as a passive navigation technique in Global Positioning System (GPS) denied environments, egomotion accuracy is important for precise navigation in these challenging environments. One cause of egomotion errors is inaccurate landmark distance measurement, e.g., due to sensor noise. This research develops a landmark location egomotion error model that quantifies the effect of landmark locations on egomotion value uncertainty and error. The error model accounts for increases in landmark uncertainty due to landmark distance and image centrality. A robot then uses the error model to actively orient itself so that landmarks fall in the image positions that yield the least egomotion calculation uncertainty. Two action aiding solutions are proposed: (1) qualitative non-evaluative action aiding, and (2) quantitative evaluative action aiding with landmark tracking. Simulation results show that both action aiding techniques reduce position uncertainty compared to no action aiding, and physical testing substantiates the simulation results. Compared to no action aiding, non-evaluative action aiding reduced egomotion position errors by an average of 31.5%, while evaluative action aiding reduced them by an average of 72.5%. Physical testing also showed that evaluative action aiding enables egomotion to work reliably in areas with few features, achieving a 76% reduction in egomotion position error compared to no aiding.
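    The abstract does not give the error model's functional form, so the following sketch is only illustrative of evaluative action aiding: it assumes uncertainty grows with landmark range and with the landmark projection's distance from the image center, and scores candidate robot headings accordingly. All coefficients, the pinhole parameters, and the scoring rule are assumptions, not the thesis's actual model.

```python
# Illustrative evaluative action aiding: pick the heading that minimizes a
# landmark-based egomotion uncertainty score. The error-model form and all
# constants below are assumptions for illustration only.
import numpy as np

def landmark_uncertainty(range_m, pixel_offset, k_range=0.05, k_center=1e-5):
    # Assumed model: uncertainty grows with landmark distance and with the
    # projection's squared distance from the image center.
    return k_range * range_m + k_center * pixel_offset**2

def heading_score(landmarks_xy, heading, fx=500.0, half_width=320.0):
    # Mean uncertainty over the landmarks visible at a candidate heading.
    score, visible = 0.0, 0
    for x, y in landmarks_xy:
        bearing = np.arctan2(y, x) - heading
        bearing = (bearing + np.pi) % (2 * np.pi) - np.pi   # wrap to [-pi, pi]
        if abs(bearing) >= np.pi / 2:                       # behind the camera
            continue
        u = fx * np.tan(bearing)                            # pinhole projection
        if abs(u) < half_width:                             # inside the image
            score += landmark_uncertainty(np.hypot(x, y), u)
            visible += 1
    return score / visible if visible else np.inf

landmarks = [(4.0, 1.0), (6.0, -2.0), (3.0, 0.5)]
candidates = np.linspace(-np.pi, np.pi, 72, endpoint=False)
best = min(candidates, key=lambda h: heading_score(landmarks, h))
print(f"best heading: {np.degrees(best):.1f} deg")
```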

    Visual Odometry Estimation Using Selective Features

    The rapid growth in computational power and technology has enabled the automotive industry to do extensive research into autonomous vehicles. So-called self-driving cars are being developed by many companies, including Google, Mercedes-Benz, Delphi, Tesla, and Uber. One of the challenging tasks for these vehicles is to track incremental motion at runtime and to analyze the surroundings for accurate localization. This crucial information is used by many internal systems, such as active suspension control, autonomous steering, and lane change assist. All these systems rely on incremental motion to infer logical conclusions. Measuring incremental change in pose or perspective, in other words change in motion, using visual information alone is called visual odometry. This thesis proposes an approach to the visual odometry problem that uses stereo-camera vision to incrementally estimate the pose of a vehicle by examining the changes that motion induces on the background of the frames captured by the stereo cameras. The approach uses a selective feature based motion tracking method to track the motion of the vehicle by analyzing the motion of its static surroundings and discarding the motion induced by the dynamic background (outliers). The proposed approach accounts for the fact that the surroundings may contain moving objects, such as a truck, a car, or a pedestrian, whose motion differs from that of the vehicle. The stereo camera adds depth information, which is crucial for detecting and rejecting outliers. Refining the interest point locations using sinusoidal interpolation further increases the accuracy of the motion estimation. The results show that by choosing features only on the static background and tracking them accurately, robust semantic information can be obtained.
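    As a rough sketch of the kind of pipeline the abstract describes (stereo triangulation, rigid motion estimation, and rejection of features that sit on moving objects), one frame-to-frame step might look like the following. The camera parameters, residual threshold, and iteration count are placeholders, and the thesis's actual method, including the sinusoidal interpolation refinement, is not reproduced here.

```python
# Minimal stereo visual-odometry step: triangulate matched features from
# disparity, fit the rigid motion by least squares (Kabsch), and drop points
# whose residual suggests an independently moving object.
import numpy as np

def triangulate(u, v, disparity, fx=700.0, cx=320.0, cy=240.0, baseline=0.54):
    # Pinhole stereo: depth from disparity, then back-project.
    z = fx * baseline / disparity
    return np.array([(u - cx) * z / fx, (v - cy) * z / fx, z])

def rigid_fit(P, Q):
    # Least-squares rotation/translation mapping point set P onto Q (Kabsch).
    cp, cq = P.mean(0), Q.mean(0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

def estimate_motion(P_prev, P_curr, thresh=0.2, iters=3):
    # Iteratively refit, discarding likely-dynamic points (large residuals).
    inliers = np.ones(len(P_prev), bool)
    for _ in range(iters):
        R, t = rigid_fit(P_prev[inliers], P_curr[inliers])
        residual = np.linalg.norm((P_prev @ R.T + t) - P_curr, axis=1)
        inliers = residual < thresh
    return R, t, inliers

# Tiny synthetic check: forward translation plus one independently moving point.
rng = np.random.default_rng(3)
P_prev = rng.uniform(-5, 5, (30, 3)) + np.array([0.0, 0.0, 10.0])
P_curr = P_prev + np.array([0.0, 0.0, -0.5])    # camera moved 0.5 m forward
P_curr[0] += np.array([1.5, 0.0, 0.0])          # a moving object, not egomotion
R, t, inliers = estimate_motion(P_prev, P_curr)
print(np.round(t, 2), inliers[0])               # t ~ [0, 0, -0.5]; outlier False
```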

    Multiple Integrated Navigation Sensors for Improving Occupancy Grid FastSLAM

    An autonomous vehicle must accurately observe its location within the environment to interact with objects and accomplish its mission. When its environment is unknown, the vehicle must construct a map detailing its surroundings while using it to maintain an accurate location. Such a vehicle faces the circularly defined Simultaneous Localization and Mapping (SLAM) problem. However difficult, SLAM is a critical component of autonomous vehicle exploration, with applications to search and rescue. To current knowledge, this research presents the first SLAM solution to integrate stereo cameras, inertial measurements, and vehicle odometry into a Multiple Integrated Navigation Sensor (MINS) path. The implementation combines the MINS path with LIDAR to observe and map the environment using the FastSLAM algorithm. In real-world tests, a mobile ground vehicle equipped with these sensors completed a 140-meter loop around indoor hallways. This SLAM solution produces a path that closes the loop and remains within 1 meter of truth, reducing the error by 92% relative to an image-inertial navigation system and by 79% relative to odometry FastSLAM.
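    Schematically, the described variant replaces FastSLAM's usual raw-odometry motion model with the fused MINS path, and weights particles by how well the LIDAR scan agrees with each particle's occupancy grid. A heavily simplified particle update, with illustrative noise scales and a placeholder measurement model, might look like this:

```python
# Schematic FastSLAM-style update: the MINS-reported motion drives the
# proposal; LIDAR agreement with the particle's grid drives the weight.
import numpy as np

rng = np.random.default_rng(0)

def propagate(pose, mins_delta, noise=(0.02, 0.02, 0.01)):
    # Sample a new pose from the MINS motion increment plus Gaussian noise.
    dx, dy, dth = mins_delta + rng.normal(0, noise)
    x, y, th = pose
    return np.array([x + dx * np.cos(th) - dy * np.sin(th),
                     y + dx * np.sin(th) + dy * np.cos(th),
                     th + dth])

def weight(pose, scan_endpoints, grid_lookup):
    # Score a particle by the fraction of LIDAR endpoints on occupied cells.
    hits = sum(grid_lookup(pose, p) for p in scan_endpoints)
    return hits / max(len(scan_endpoints), 1)

def resample(particles, weights):
    # Standard importance resampling.
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return [particles[i] for i in idx]

# Dummy usage with a grid that reports every endpoint as occupied.
particles = [np.zeros(3) for _ in range(10)]
particles = [propagate(p, np.array([0.1, 0.0, 0.02])) for p in particles]
ws = [weight(p, [np.array([1.0, 0.0])], lambda pose, pt: 1.0) for p in particles]
particles = resample(particles, ws)
```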

    Mobile Robots Navigation

    Mobile robot navigation includes several interrelated activities: (i) perception, obtaining and interpreting sensory information; (ii) exploration, the strategy that guides the robot in selecting the next direction to go; (iii) mapping, the construction of a spatial representation from the sensory information perceived; (iv) localization, the strategy for estimating the robot's position within the spatial map; (v) path planning, the strategy for finding a path towards a goal location, optimal or not; and (vi) path execution, where motor actions are determined and adapted to environmental changes. The book addresses these activities by integrating results from the research work of several authors from all over the world. Research cases are documented in 32 chapters organized into 7 categories, described next.

    Visually-guided walking reference modification for humanoid robots

    Humanoid robots are expected to assist humans in the future. As for any robot with mobile characteristics, autonomy is an invaluable feature for a humanoid interacting with its environment. Autonomy, along with components from artificial intelligence, requires information from sensors. Vision sensors are widely accepted as the richest source of information about the surroundings of a robot. Visual information can be exploited in tasks ranging from object recognition, localization, and manipulation to scene interpretation, gesture identification, and self-localization. Any autonomous action of a humanoid trying to accomplish a high-level goal requires the robot to move between arbitrary waypoints and inevitably relies on its self-localization abilities. Because disturbances accumulate over the path, this can only be achieved by gathering feedback information from the environment. This thesis proposes a path planning and correction method for bipedal walkers based on visual odometry. A stereo camera pair is used to find distinguishable 3D scene points and track them over time in order to estimate the six degree-of-freedom position and orientation of the robot. The algorithm is developed and assessed on a benchmark stereo video sequence taken from a wheeled robot, and then tested in experiments with the humanoid robot SURALP (Sabanci University Robotic ReseArch Laboratory Platform).
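    The correction idea, comparing the visual-odometry pose estimate against the planned walking reference and steering the reference to cancel the accumulated drift, can be sketched as follows. The gains and the error parametrization are illustrative assumptions, not taken from the thesis.

```python
# Hedged sketch: turn the pose error between the walking reference and the
# visual-odometry estimate into lateral and heading corrections that a
# walking-pattern generator could absorb step by step.
import numpy as np

def reference_correction(vo_pose, ref_pose, k_lat=0.5, k_head=0.8):
    # vo_pose, ref_pose: (x, y, heading). Returns (lateral, heading) tweaks.
    dx, dy = ref_pose[0] - vo_pose[0], ref_pose[1] - vo_pose[1]
    th = vo_pose[2]
    # Express the position error in the robot frame; the lateral component
    # is the part a step-placement correction can cancel.
    lateral = -np.sin(th) * dx + np.cos(th) * dy
    heading_err = np.arctan2(np.sin(ref_pose[2] - th), np.cos(ref_pose[2] - th))
    return k_lat * lateral, k_head * heading_err

# Robot drifted 0.1 m left of the reference with a small heading offset.
print(reference_correction((0.0, 0.1, 0.05), (0.0, 0.0, 0.0)))
```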

    Detail Enhancing Denoising of Digitized 3D Models from a Mobile Scanning System

    The acquisition process of digitizing a large-scale environment produces an enormous amount of raw geometry data. This data is corrupted by system noise, which leads to 3D surfaces that are not smooth and details that are distorted. Any scanning system has noise associated with the scanning hardware, both digital quantization errors and measurement inaccuracies, but a mobile scanning system has additional system noise introduced by the pose estimation of the hardware during data acquisition. The combined system noise generates data that is not handled well by existing noise reduction and smoothing techniques. This research focuses on enhancing the 3D models acquired by mobile scanning systems used to digitize large-scale environments. These digitization systems combine a variety of sensors (including laser range scanners, video cameras, and pose estimation hardware) on a mobile platform for the quick acquisition of 3D models of real-world environments. The data acquired by such systems is extremely noisy, often with significant details on the same order of magnitude as the system noise. Using a unique 3D signal analysis tool, a denoising algorithm was developed that identifies regions of detail and enhances their geometry while removing the effects of noise on the overall model. The developed algorithm is useful for a variety of digitized 3D models, not just those from mobile scanning systems. The challenges faced in this study were the need for fully automatic processing in the enhancement algorithm and the need to fill a gap in the area of 3D model analysis in order to reduce the effect of system noise on the 3D models. In this context, the main contributions are the automation and integration of a data enhancement method not well known to the computer vision community, and the development of a novel 3D signal decomposition and analysis tool. The new technologies featured in this document are intuitive extensions of existing methods to new dimensionality and applications. The research has been applied to detail-enhancing denoising of scanned data from a mobile range scanning system, and results from both synthetic and real models are presented.
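    The abstract does not specify the 3D decomposition tool, but its guiding idea (separate a base shape from a detail layer, suppress the detail where it is statistically indistinguishable from noise, and amplify it where it is not) can be illustrated with a one-dimensional analogy. The window size, noise level, and gain below are placeholders.

```python
# 1D analogy of detail-enhancing denoising: split signal into base + detail,
# keep and amplify detail only where its local energy exceeds the expected
# noise energy. The thesis's 3D tool is far richer than this sketch.
import numpy as np

def detail_enhancing_denoise(signal, win=9, noise_sigma=0.05, gain=1.5):
    kernel = np.ones(win) / win
    base = np.convolve(signal, kernel, mode="same")      # smooth base layer
    detail = signal - base                               # candidate detail
    # Local detail energy vs. expected noise energy decides keep/suppress.
    energy = np.convolve(detail**2, kernel, mode="same")
    is_detail = energy > noise_sigma**2
    return base + np.where(is_detail, gain * detail, 0.0)

t = np.linspace(0, 1, 500)
clean = np.sign(np.sin(12 * t))          # signal with sharp "geometric" edges
noisy = clean + 0.05 * np.random.default_rng(1).normal(size=t.size)
restored = detail_enhancing_denoise(noisy)
```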

    Perception and Motion: use of Computer Vision to solve Geometry Processing problems

    Computer vision and geometry processing are often seen as two different and, in a certain sense, distant fields: the first works on two-dimensional data, while the other needs three-dimensional information. But are 2D and 3D data really disconnected? Consider human vision: each eye captures patterns of light, which the brain then uses to reconstruct the perception of the observed scene. In a similar way, if the eye detects a variation in the patterns of light, we are able to understand that the scene is not static; we therefore perceive the motion of one or more objects in the scene. This work shows how the perception of 2D motion can be used to solve two significant problems, both dealing with three-dimensional data. In the first part, we show how the so-called optical flow, representing the observed motion, can be used to estimate the alignment error of a set of digital cameras looking at the same object. In the second part, we see how the detected 2D motion of an object can be used to better understand its underlying geometric structure by detecting its rigid parts and the way they are connected.
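    For the first problem, a toy version of the idea is that a small in-plane camera misalignment produces an approximately global parametric flow field, so fitting that parametric motion to the observed flow recovers the alignment error. The sketch below fits a 2D similarity motion to a synthetic flow field; real flow estimation and the full 3D camera model are out of scope, and the parametrization is an assumption for illustration.

```python
# Fit flow ~ [a -b; b a] @ xy + t (rotation + scale + shift) by least squares;
# the recovered rotation angle is the in-plane alignment error.
import numpy as np

def fit_similarity_from_flow(xy, flow):
    x, y = xy[:, 0], xy[:, 1]
    A = np.zeros((2 * len(xy), 4))
    A[0::2] = np.c_[x, -y, np.ones_like(x), np.zeros_like(x)]
    A[1::2] = np.c_[y,  x, np.zeros_like(x), np.ones_like(x)]
    b = flow.reshape(-1)                 # interleaved (fx, fy) per point
    (a, bb, tx, ty), *_ = np.linalg.lstsq(A, b, rcond=None)
    angle = np.arctan2(bb, a + 1.0)      # a = cos(theta) - 1, bb = sin(theta)
    return np.degrees(angle), (tx, ty)

# Synthetic flow from a 0.5 degree in-plane misalignment.
pts = np.random.default_rng(2).uniform(-200, 200, (300, 2))
theta = np.radians(0.5)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
flow = pts @ R.T - pts
print(fit_similarity_from_flow(pts, flow))   # ~0.5 deg, near-zero shift
```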

    BIO-INSPIRED MOTION PERCEPTION: FROM GANGLION CELLS TO AUTONOMOUS VEHICLES

    Animals are remarkable at navigation, even in extreme situations. Through motion perception, animals compute their own movement (egomotion) and detect other objects (prey, predators, obstacles) and their motions in the environment. Analogous to animals, artificial systems such as robots also need to know where they are relative to the scene structure and to segment obstacles in order to avoid collisions. Even though substantial progress has been made in the development of artificial visual systems, they still struggle to achieve robust and generalizable solutions. To this end, I propose a bio-inspired framework that narrows the gap between natural and artificial systems. The standard approaches to robot motion perception seek to reconstruct a three-dimensional model of the scene and then use this model to estimate egomotion and segment objects. However, the scene reconstruction process is data-heavy and computationally expensive, and it fails in high-speed and dynamic scenarios. In contrast, biological visual systems excel in these difficult situations by extracting only the minimal information sufficient for motion perception tasks. Throughout this thesis, I derive minimalist/purposive ideas from biological processes and develop mathematical solutions for robot motion perception problems, focusing on egomotion estimation and motion segmentation. The four main contributions are:
    1. NFlowNet, a neural network that estimates normal flow (bio-inspired motion filters); normal flow estimation presents a new avenue for solving egomotion in a robust and qualitative framework.
    2. DiffPoseNet, a framework that uses normal flow to estimate egomotion by formulating the qualitative constraint in a differentiable optimization layer, allowing end-to-end learning.
    3. 0-MMS, a model-based optimization approach built on a neuromorphic event camera (a retina-inspired vision sensor) that uses event spikes to segment the scene into multiple moving parts in high-speed, dynamically lit scenarios.
    4. SpikeMS, a novel bio-inspired learning approach that fully capitalizes on the rich temporal information in event spikes to improve the precision of event-based motion perception across time.
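    Normal flow, the quantity NFlowNet is trained to estimate, is the component of image motion along the brightness gradient and, unlike full optical flow, is directly observable from spatio-temporal derivatives. A classical finite-difference version is shown below for intuition only; NFlowNet itself is a learned estimator.

```python
# Normal flow from the brightness-constancy constraint Ix*u + Iy*v + It = 0:
# the gradient-parallel flow component is -It * grad(I) / |grad(I)|^2.
import numpy as np

def normal_flow(frame0, frame1, eps=1e-6):
    Iy, Ix = np.gradient(frame0.astype(float))     # spatial derivatives
    It = frame1.astype(float) - frame0.astype(float)  # temporal derivative
    mag2 = Ix**2 + Iy**2 + eps                     # gradient magnitude squared
    u_n = -It * Ix / mag2
    v_n = -It * Iy / mag2
    return u_n, v_n

# Tiny check: a one-pixel horizontal shift of a random texture.
rng = np.random.default_rng(0)
f0 = rng.random((64, 64))
f1 = np.roll(f0, 1, axis=1)
u_n, v_n = normal_flow(f0, f1)
```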