295,834 research outputs found

    Model-based vs. model-free visual servoing: A Performance evaluation in microsystems

    In this paper, model-based and model-free image-based visual servoing (VS) approaches are implemented on a microassembly workstation, and their regulation and tracking performances are evaluated. Precise image-based VS relies on computation of the image Jacobian. In model-based visual servoing, the image Jacobian is computed by calibrating the optical system. A precisely calibrated model-based VS promises better positioning and tracking performance than the model-free approach. In the model-free approach, however, optical system calibration is not required because the Jacobian is estimated dynamically, which gives it the advantage of adapting to different operating modes.
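
    As a rough illustration of the image-Jacobian control law that both approaches build on (the gain, feature values, and the identity Jacobian below are illustrative assumptions, not taken from the paper), a minimal sketch:

```python
import numpy as np

def ibvs_velocity(features, desired, jacobian, gain=0.5):
    """Classical IBVS law: velocity command v = -gain * J^+ (s - s*)."""
    error = np.asarray(features, float) - np.asarray(desired, float)
    return -gain * np.linalg.pinv(jacobian) @ error

# Toy example: one point feature (u, v) driven by a 2-DOF positioning stage.
J = np.eye(2)                        # image Jacobian (identity, for illustration only)
s = np.array([120.0, 80.0])          # current pixel coordinates
s_star = np.array([128.0, 96.0])     # desired pixel coordinates
print(ibvs_velocity(s, s_star, J))   # -> [4. 8.] (stage velocity command)
```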

    Visual servoing of an autonomous helicopter in urban areas using feature tracking

    We present the design and implementation of a vision-based feature tracking system for an autonomous helicopter. Visual sensing is used to estimate the position and velocity of features in the image plane (urban features such as windows) in order to generate velocity references for the flight control. These vision-based references are then combined with GPS positioning references to navigate towards the features and then track them. We present results from experimental flight trials, performed on two UAV systems under different conditions, which show the feasibility and robustness of our approach.
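
    A minimal sketch of turning image-plane feature error into a bounded velocity reference and blending it with a GPS-derived reference; the gain, saturation limit, and blending weight are assumed values for illustration, not the paper's parameters:

```python
import numpy as np

def velocity_reference(feature_px, target_px, k_visual=0.002, v_max=1.0):
    """Map image-plane feature error (pixels) to a bounded velocity reference (m/s)."""
    err = np.asarray(target_px, float) - np.asarray(feature_px, float)
    return np.clip(k_visual * err, -v_max, v_max)

def blend_references(v_visual, v_gps, w_visual=0.7):
    """Weighted blend of the visual and GPS-derived velocity references."""
    return w_visual * np.asarray(v_visual) + (1.0 - w_visual) * np.asarray(v_gps)

v_vis = velocity_reference(feature_px=[300, 220], target_px=[320, 240])
print(blend_references(v_vis, v_gps=[0.2, 0.0]))   # combined velocity reference
```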

    Indoor pedestrian dead reckoning calibration by visual tracking and map information

    Pedestrian Dead Reckoning (PDR) systems are currently becoming more attractive in the indoor positioning market. This is mainly due to the development of cheap and light Micro-Electro-Mechanical Systems (MEMS) on smartphones and the reduced need for additional infrastructure in indoor areas. However, PDR still suffers from drift accumulation and needs support from external positioning systems. Vision-aided inertial navigation, one possible solution to that problem, has become very popular in indoor localization, offering better performance than a stand-alone PDR system. Previous studies in the literature, however, use a fixed platform and rely on feature-extraction-based methods for visual tracking. This paper instead contributes a distributed implementation of the positioning system and uses deep learning for visual tracking. Moreover, since both inertial navigation and the optical system can only provide relative positioning information, this paper contributes a method to integrate a digital map with real geographical coordinates to supply absolute location. This hybrid system has been tested on the two common smartphone operating systems, iOS and Android, using corresponding data collection apps, in order to test the robustness of the method. It also uses two different calibration strategies: time synchronization of positions, and heading calibration based on time steps. According to the results, localization information collected from both operating systems is significantly improved after integration with the visual tracking data.
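
    A minimal sketch of the PDR step update and a simple heading calibration against an external (e.g. visually estimated) heading; the step length, complementary-filter weight, and function names are illustrative assumptions, not the paper's method:

```python
import math

def pdr_update(position, heading_rad, step_length_m=0.7):
    """Advance a 2-D position by one detected step along the current heading."""
    x, y = position
    return (x + step_length_m * math.sin(heading_rad),
            y + step_length_m * math.cos(heading_rad))

def calibrate_heading(pdr_heading, external_heading, alpha=0.3):
    """Pull the drifting PDR heading toward an external heading estimate."""
    return (1 - alpha) * pdr_heading + alpha * external_heading

pos = (0.0, 0.0)
for heading in [0.0, 0.1, 0.1]:       # heading (rad) at each detected step
    pos = pdr_update(pos, heading)
print(pos)                             # accumulated relative position
```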

    Trajectory Servoing: Image-Based Trajectory Tracking Using SLAM

    This paper describes an image-based visual servoing (IBVS) system for a nonholonomic robot to achieve good trajectory following without real-time robot pose information and without a known visual map of the environment. We call it trajectory servoing. The critical component is a feature-based, indirect SLAM method that provides a pool of available features with estimated depth, so that they may be propagated forward in time to generate image feature trajectories for visual servoing. Short- and long-distance experiments show the benefits of trajectory servoing for navigating unknown areas without absolute positioning. Trajectory servoing is shown to be more accurate than pose-based feedback when both rely on the same underlying SLAM system.
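
    A minimal sketch of how a feature with estimated depth might be propagated forward to predict its image trajectory, assuming a pinhole camera and pure translation at constant velocity; the intrinsics and simplified motion model are illustrative assumptions, not the paper's SLAM pipeline:

```python
import numpy as np

def project(point_cam, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Pinhole projection of a camera-frame 3-D point to pixel coordinates."""
    X, Y, Z = point_cam
    return np.array([fx * X / Z + cx, fy * Y / Z + cy])

def predicted_feature_track(point_cam, camera_velocity, dt, steps):
    """Predict the image trajectory of a depth-known feature while the camera
    translates at constant velocity (no rotation, for simplicity)."""
    track, p = [], np.asarray(point_cam, float)
    for _ in range(steps):
        track.append(project(p))
        p = p - np.asarray(camera_velocity, float) * dt  # point moves opposite to camera
    return np.array(track)

print(predicted_feature_track([0.2, 0.0, 2.0], [0.5, 0.0, 0.0], dt=0.1, steps=3))
```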

    Multi-modal probabilistic indoor localization on a smartphone

    The satellite-based Global Positioning System (GPS) provides robust localization on smartphones outdoors. In indoor environments, however, no system is close to achieving a similar level of ubiquity, with existing solutions offering different trade-offs in terms of accuracy, robustness and cost. In this paper, we develop a multi-modal positioning system, targeted at smartphones, which aims to get the best out of each of its constituent modalities. More precisely, we combine Bluetooth low energy (BLE) beacons, round-trip-time (RTT) enabled WiFi access points and the smartphone’s inertial measurement unit (IMU) to provide a cheap, robust localization system that, unlike fingerprinting methods, requires no pre-training. To do this, we use a probabilistic algorithm based on a conditional random field (CRF). We show how to incorporate sparse visual information to improve the accuracy of our system, using pose estimation from pre-scanned visual landmarks to calibrate the system online. Our method achieves an accuracy of around 2 meters on two realistic datasets, outperforming other distance-based localization approaches. We also compare our approach with an ultra-wideband (UWB) system. While we do not match the performance of UWB, our system is cheap, smartphone compatible and provides satisfactory performance for many applications.
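
    Two of the ranging ingredients such a distance-based system typically relies on, sketched under assumed model parameters (the BLE transmit power, path-loss exponent, and RTT processing delay below are illustrative values, not taken from the paper):

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exp=2.0):
    """Log-distance path-loss model: estimate range (m) from a BLE RSSI reading."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))

def rtt_to_distance(rtt_ns, processing_ns=0.0):
    """WiFi RTT ranging: distance = c * (round-trip time - processing delay) / 2."""
    c = 0.299792458  # speed of light in metres per nanosecond
    return c * max(rtt_ns - processing_ns, 0.0) / 2.0

print(rssi_to_distance(-70.0))   # ~3.5 m under the assumed model parameters
print(rtt_to_distance(50.0))     # ~7.5 m for a 50 ns round trip
```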

    GNSS Shadow Matching in a Changing Urban Environment

    This publication describes apparatuses, methods, and techniques for performing Global Navigation Satellite System (GNSS) shadow matching in a changing urban environment. To do so, a user equipment (e.g., a smartphone) utilizes a comprehensive positioning algorithm. The smartphone can measure the signal strength of GNSS satellites. When the signal strength matches an expected shadow, the comprehensive positioning algorithm can utilize GNSS data, area network data, inertial data, and an Urban Canyon Positioning Algorithm. The Urban Canyon Positioning Algorithm uses GNSS shadow matching data to increase user location accuracy in the urban environment. When the signal strength does not match the expected shadow, the comprehensive positioning algorithm can estimate the user position using GNSS data, area network data, inertial data, and other optional localization signals (e.g., step counting, visual matches against a known model of a street-level visual map). Then, the comprehensive positioning algorithm can quantify differences between the signal strength and the expected shadows, and quantify discrepancies among the user-location estimates from the various localization signals. Based on the differences between the signal strength and the expected shadows, the comprehensive positioning algorithm can determine and map changes in the urban environment. When GNSS shadow matching determines the user location with a high degree of confidence and accuracy, the comprehensive positioning algorithm can use this information to find discrepancies in other localization signals that rely on a map model (e.g., terrain height data, street-level visual maps, WiFi® hot spots). Lastly, the comprehensive positioning algorithm can adjust updates from the Urban Canyon Positioning Algorithm near unmodeled physical features (e.g., buildings, bridges, tunnels) in the urban environment.
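
    A minimal sketch of the shadow-matching idea: candidate positions are scored by how well measured satellite signal strengths agree with the visibility predicted by a 3-D building (shadow) model; the C/N0 threshold, satellite IDs, and candidate names are illustrative assumptions:

```python
def shadow_matching_score(predicted_visible, measured_cn0, cn0_threshold=35.0):
    """Score a candidate position: +1 for every satellite whose measured C/N0
    agrees with the visibility predicted by the building (shadow) model."""
    score = 0
    for sat_id, visible in predicted_visible.items():
        strong = measured_cn0.get(sat_id, 0.0) >= cn0_threshold
        score += 1 if strong == visible else 0
    return score

# Toy example: two candidate positions, three satellites (C/N0 in dB-Hz).
measured = {"G01": 45.0, "G07": 20.0, "G12": 38.0}
candidates = {
    "east_sidewalk": {"G01": True, "G07": False, "G12": True},
    "west_sidewalk": {"G01": True, "G07": True, "G12": False},
}
best = max(candidates, key=lambda c: shadow_matching_score(candidates[c], measured))
print(best)  # east_sidewalk best explains the observed signal strengths
```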

    An Effective Multi-Cue Positioning System for Agricultural Robotics

    The self-localization capability is a crucial component for Unmanned Ground Vehicles (UGVs) in farming applications. Approaches based solely on visual cues or on low-cost GPS are easily prone to fail in such scenarios. In this paper, we present a robust and accurate 3D global pose estimation framework designed to take full advantage of heterogeneous sensory data. By modeling the pose estimation problem as a pose graph optimization, our approach simultaneously mitigates the cumulative drift introduced by motion estimation systems (wheel odometry, visual odometry, ...) and the noise introduced by raw GPS readings. Along with a suitable motion model, our system also integrates two additional types of constraints: (i) a Digital Elevation Model and (ii) a Markov Random Field assumption. We demonstrate how using these additional cues substantially reduces the error along the altitude axis and, moreover, how this benefit spreads to the other components of the state. We report exhaustive experiments combining several sensor setups, showing accuracy improvements ranging from 37% to 76% with respect to the exclusive use of a GPS sensor. We show that our approach provides accurate results even if the GPS unexpectedly changes positioning mode. The code of our system, along with the acquired datasets, is released with this paper.
    Comment: Accepted for publication in IEEE Robotics and Automation Letters, 201
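
    A toy one-dimensional analogue of the pose-graph idea: odometry increments (relative constraints) and raw GPS fixes (absolute constraints) are fused as weighted least-squares residuals; the measurements, weights, and the 1-D simplification are illustrative assumptions, not the paper's actual graph:

```python
import numpy as np
from scipy.optimize import least_squares

odom = [1.0, 1.0, 1.0]                  # measured displacements between consecutive poses
gps = [0.0, 1.3, 1.8, 3.4]              # noisy absolute fixes for poses x_0..x_3
w_odom, w_gps = 1.0 / 0.05, 1.0 / 0.5   # weights = inverse assumed standard deviations

def residuals(x):
    # Relative (odometry) constraints followed by absolute (GPS) constraints.
    r = [w_odom * ((x[i + 1] - x[i]) - odom[i]) for i in range(len(odom))]
    r += [w_gps * (x[i] - gps[i]) for i in range(len(gps))]
    return r

solution = least_squares(residuals, x0=np.array(gps))
print(solution.x)                        # smoothed poses trading off drift and GPS noise
```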

    Kinematic Visual Servo Control of a Quadrotor aerial vehicle

    Visual systems are key sensors for the control of small-scale unmanned aerial vehicles. In this paper we investigate a range of image-based visual servo control algorithms for positioning of flying vehicles capable of hover. The image-based outer control loop for the translation kinematics is coupled to a high-gain inner control loop that regulates translational velocities and the full attitude dynamics. Zero- and first-order image moments are used as visual features for the control design. Perspective projection moments with suitable scaling, along with a classical image-based visual servo control design, lead to satisfactory transients and asymptotic stability of the closed-loop system when the image plane remains parallel to the target. However, the system response may lack robustness for aggressive manoeuvres. To overcome this problem, several control schemes based on spherical image moments are designed and their performance is analysed. All designed control laws have been tested on a kinematic robotic manipulator to demonstrate the relative strengths and weaknesses of the different image-based visual servo control designs. The three most promising control algorithms have been successfully implemented on an autonomous aerial vehicle, showing excellent performance in all three cases.
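
    A minimal sketch of the zero- and first-order image moments used as visual features, computed here for a synthetic binary target image (the image size and target region are illustrative):

```python
import numpy as np

def image_moments(mask):
    """Zero and first order moments of a binary target image.

    m00 is the target area in pixels; (m10/m00, m01/m00) is its centroid,
    the kind of feature used for moment-based visual servo control."""
    ys, xs = np.nonzero(mask)
    m00 = float(len(xs))
    if m00 == 0:
        return 0.0, 0.0, 0.0
    m10, m01 = float(xs.sum()), float(ys.sum())
    return m00, m10 / m00, m01 / m00

mask = np.zeros((480, 640), dtype=bool)
mask[200:280, 300:380] = True          # synthetic square target
print(image_moments(mask))             # -> (6400.0, 339.5, 239.5)
```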