357 research outputs found

    Cooperative monocular-based SLAM for multi-UAV systems in GPS-denied environments

This work presents a cooperative monocular-based SLAM approach for multi-UAV systems operating in GPS-denied environments. The main contribution is to show that, using visual information obtained from monocular cameras mounted on board aerial vehicles flying in formation, the observability properties of the whole system are improved. This improvement is especially noticeable when compared with other related visual SLAM configurations. To improve the observability properties, measurements of the relative distance between the UAVs, also obtained from visual information, are included in the system. The proposed approach is theoretically validated by means of a nonlinear observability analysis, and an extensive set of computer simulations further validates it. The numerical simulation results show that the proposed system provides good position and orientation estimates for the aerial vehicles flying in formation.
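As a toy illustration of why the inter-UAV distance helps, one can check the rank of a numerically linearized measurement map. The sketch below is an assumption-laden simplification, not the paper's nonlinear Lie-derivative analysis: the state is just the two UAV positions, each UAV takes a bearing-only (unit-vector) measurement of one common landmark, and all numerical values are made up.

```python
import numpy as np

# Minimal rank test: does adding the relative UAV-UAV distance enlarge
# the row space of the measurement Jacobian? (Illustrative only; the
# paper performs a full nonlinear observability analysis.)

def numeric_jacobian(h, x, eps=1e-6):
    """Central-difference Jacobian of the measurement map h at x."""
    m = len(h(x))
    J = np.zeros((m, len(x)))
    for i in range(len(x)):
        d = np.zeros_like(x)
        d[i] = eps
        J[:, i] = (h(x + d) - h(x - d)) / (2.0 * eps)
    return J

landmark = np.array([1.0, 2.0, 0.0])   # assumed common landmark

def bearings(x):
    """Monocular (bearing-only) observations: range is unobservable."""
    p1, p2 = x[:3], x[3:]
    u1 = (landmark - p1) / np.linalg.norm(landmark - p1)
    u2 = (landmark - p2) / np.linalg.norm(landmark - p2)
    return np.concatenate([u1, u2])

def bearings_plus_distance(x):
    """Bearings augmented with the visually measured inter-UAV distance."""
    p1, p2 = x[:3], x[3:]
    return np.append(bearings(x), np.linalg.norm(p1 - p2))

x0 = np.array([0.0, 0.0, 1.0, 0.5, -0.5, 1.2])   # stacked [p1, p2]
print(np.linalg.matrix_rank(numeric_jacobian(bearings, x0)))               # 4
print(np.linalg.matrix_rank(numeric_jacobian(bearings_plus_distance, x0))) # 5
```

The extra distance row is generically independent of the bearing rows, so the rank rises by one; the same mechanism, in far more detail, underlies the observability improvement claimed above.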

    Homography-based pose estimation to guide a miniature helicopter during 3D-trajectory tracking

This work proposes a pose-based visual servoing control that uses planar homography to estimate the position and orientation of a miniature helicopter relative to a known pattern. Once the current flight information is available, the nonlinear underactuated controller presented in one of our previous works, which covers all flight phases, is used to guide the rotorcraft during a 3D-trajectory tracking task. The simulation framework and the results obtained with it are then presented and discussed, validating the proposed controller when a visual system is used to determine the helicopter's pose.
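For readers unfamiliar with the technique, a homography between a known planar pattern and its image determines the camera pose up to a choice among a few candidate decompositions. Below is a hedged sketch using OpenCV; the camera matrix `K`, the pattern geometry, and the detected pixel coordinates are invented placeholders, and the authors' actual pipeline and controller are not reproduced here.

```python
import cv2
import numpy as np

# Sketch: estimate the plane-induced homography from 4 pattern-image
# correspondences, then decompose it into candidate poses (R, t).

K = np.array([[700.0, 0.0, 320.0],     # assumed calibrated intrinsics
              [0.0, 700.0, 240.0],
              [0.0,   0.0,   1.0]])

pattern_pts = np.array([[0.0, 0.0], [0.2, 0.0], [0.2, 0.2], [0.0, 0.2]],
                       dtype=np.float32)        # pattern corners, metres
image_pts = np.array([[300.0, 220.0], [420.0, 225.0],
                      [415.0, 345.0], [295.0, 340.0]],
                     dtype=np.float32)          # detected pixels (made up)

H, _ = cv2.findHomography(pattern_pts, image_pts, cv2.RANSAC, 3.0)

# Up to four (R, t, n) candidates come back; a visibility test on the
# plane normal n is needed to keep the physically meaningful one.
_, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
for R, t, n in zip(Rs, ts, normals):
    if float(n[2]) > 0:          # crude front-facing-plane filter
        print("candidate R =\n", R, "\nt =", t.ravel())
```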

    External localization system for mobile robotics

We present fast and precise vision-based software intended for multiple-robot localization. The core component of the proposed localization system is an efficient method for black-and-white circular pattern detection. The method is robust to variable lighting conditions, achieves sub-pixel precision, and its computational complexity is independent of the processed image size. With off-the-shelf computational equipment and a low-cost camera, its core algorithm is able to process hundreds of images per second while tracking hundreds of objects with millimeter precision. We propose a mathematical model of the method that makes it possible to calculate its precision, area of coverage, and processing speed from the camera's intrinsic parameters and the hardware's processing capacity. The correctness of the presented model and the performance of the algorithm in real-world conditions are verified in several experiments. Apart from the method description, we also publish the source code so that it can be used as an enabling technology for various mobile robotics problems.
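The sub-pixel claim rests on a standard idea: the centroid of a segmented circular blob, weighted by intensity, is not quantized to the pixel grid. A toy sketch follows; it omits the published method's flood-fill segmentation, ring-validity tests, and adaptive thresholding, and the synthetic image is an assumption for demonstration.

```python
import numpy as np

def subpixel_centroid(gray, mask):
    """Intensity-weighted centroid of the masked (dark) pixels."""
    ys, xs = np.nonzero(mask)
    w = 255.0 - gray[ys, xs]              # darker pixels weigh more
    return (xs * w).sum() / w.sum(), (ys * w).sum() / w.sum()

# Synthetic frame: dark disc of radius 5 px centred at (20.3, 14.7).
yy, xx = np.mgrid[0:30, 0:40]
gray = np.where((xx - 20.3) ** 2 + (yy - 14.7) ** 2 < 25.0, 30.0, 220.0)

mask = gray < 128                          # crude fixed-threshold segmentation
print(subpixel_centroid(gray, mask))       # ~(20.3, 14.7), finer than 1 px
```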

Mobile Robot Localization Using External Surveillance Cameras Indoors

Localization is a technique that a service robot needs in order to drive indoors, and it has been studied in various ways. Most localization techniques have the robot measure environmental information to obtain its location, but they are costly because they require much equipment, and they also complicate robot development. If an external device could instead compute the robot's location and transmit it to the robot, the extra cost of onboard localization equipment would be reduced and robot development would be simplified. This study therefore suggests an effective way to control the robot by using its location as given in a map built from the visual information of surveillance cameras installed indoors. The size of an object is difficult to determine from a single image because of shadow components and occlusion. A combination of shadow removal, using HSV images of the indoor scene, and images from different perspectives related by homography is therefore suggested to create a two-dimensional map with accurate object information. In the experiment, the effectiveness of the suggested method is shown by analyzing the movement of the robot, which uses the location information from the two-dimensional map based on the multiple cameras, whose accuracy is measured in advance.
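As a rough sketch of the two ingredients named above, the fragment below builds an HSV shadow mask and warps the shadow-free view onto a metric floor map with a homography. The file name, thresholds, and ground-plane correspondences are illustrative assumptions, not values from the paper.

```python
import cv2
import numpy as np

frame = cv2.imread("cam1.png")             # assumed surveillance frame
if frame is None:                          # synthetic fallback so the sketch runs
    frame = np.full((480, 640, 3), 170, np.uint8)

hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Shadows tend to have low value at similar hue; thresholds are scene-dependent.
shadow = cv2.inRange(hsv, (0, 30, 0), (180, 255, 90))
objects = cv2.bitwise_and(frame, frame, mask=cv2.bitwise_not(shadow))

# Homography from image pixels to floor-plan coordinates, fitted once from
# four known ground-plane correspondences (made-up numbers below).
img_pts = np.float32([[100, 400], [540, 410], [500, 120], [140, 115]])
map_pts = np.float32([[0, 0], [500, 0], [500, 300], [0, 300]])
H = cv2.getPerspectiveTransform(img_pts, map_pts)

floor_map = cv2.warpPerspective(objects, H, (500, 300))
cv2.imwrite("floor_map.png", floor_map)    # 2-D map layer from this camera
```

Map layers built this way from several cameras can then be fused into the single two-dimensional map the robot navigates on.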

    Exploring Motion Signatures for Vision-Based Tracking, Recognition and Navigation

As cameras become more and more popular in intelligent systems, algorithms and systems for understanding video data become more and more important. There is a broad range of applications, including object detection, tracking, scene understanding, and robot navigation. Besides stationary information, video data contains rich motion information about the environment. Biological visual systems, like human and animal eyes, are very sensitive to motion information, which has inspired active research on vision-based motion analysis in recent years. The main focus of motion analysis has been on low-level motion representations of pixels and image regions, but motion signatures can benefit a broader range of applications if further in-depth analysis techniques are developed. In this dissertation, we discuss how to exploit motion signatures to solve problems in two applications: object recognition and robot navigation.

First, we use bird species recognition as the application for exploring motion signatures in object recognition. We begin with a study of the periodic wingbeat motion of flying birds. To analyze the wing motion of a flying bird, we establish kinematic models for bird wings and obtain the wingbeat periodicity in image frames after perspective projection. Time series of salient extremities on bird images are extracted, and the wingbeat frequency is acquired for species classification. Physical experiments show that the frequency-based recognition method is robust to segmentation errors and to measurement loss of up to 30%. In addition to the wing motion, the body motion of the bird is analyzed to extract the flying velocity in 3D space. An interacting multiple-model approach is then designed to capture the combined object motion patterns under different environment conditions. The proposed systems and algorithms are tested in physical experiments, and the results show a false positive rate of around 20% with a false negative rate close to zero.

Second, we explore motion signatures for vision-based vehicle navigation. We observe that the motion vectors (MVs) encoded in Moving Picture Experts Group (MPEG) videos provide rich information about the motion in the environment, which can be used to reconstruct the vehicle's ego-motion and the structure of the scene. However, MVs suffer from a high noise level. To handle this challenge, an error propagation model for MVs is first proposed. Several steps, including MV merging, plane-at-infinity elimination, and planar region extraction, are designed to further reduce noise. The extracted planes are used as landmarks in an extended Kalman filter (EKF) for simultaneous localization and mapping. Results show that the algorithm performs localization and plane mapping with a relative trajectory error below 5.1%. Exploiting the fact that MVs encode both environment information and moving obstacles, we further propose to track moving objects at the same time as localization and mapping. This allows the two critical navigation functionalities, localization and obstacle avoidance, to be performed in a single framework. MVs are labeled as stationary or moving according to their consistency with geometric constraints, so the extracted planes are separated into moving objects and the stationary scene. Multiple EKFs are used to track the static scene and the moving objects simultaneously. In physical experiments, we show a detection rate for moving objects of 96.6% and a mean absolute localization error below 3.5 meters.
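To make the frequency-based recognition step concrete, here is a hedged sketch of its core: take the time series of a tracked wing extremity, find the dominant FFT peak, and compare it against per-species frequency ranges. The frame rate, signal, and species ranges are all fabricated for illustration; the dissertation's kinematic models and classifiers are far richer.

```python
import numpy as np

fps = 60.0                                   # assumed camera frame rate
t = np.arange(0.0, 2.0, 1.0 / fps)
# Stand-in for the extracted extremity trajectory: 8 Hz wingbeat + noise.
signal = np.sin(2 * np.pi * 8.0 * t) + 0.3 * np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(t.size, d=1.0 / fps)
wingbeat_hz = freqs[np.argmax(spectrum)]     # dominant periodicity

species_ranges = {"gull": (2.0, 4.0), "pigeon": (5.0, 9.0)}  # hypothetical
match = [s for s, (lo, hi) in species_ranges.items() if lo <= wingbeat_hz <= hi]
print(f"estimated {wingbeat_hz:.1f} Hz -> {match}")
```

Because the decision depends only on a spectral peak, the estimate tolerates dropped or mis-segmented frames, which is consistent with the robustness to 30% measurement loss reported above.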

    A practical multirobot localization system

We present fast and precise vision-based software intended for multiple-robot localization. The core component of the software is a novel and efficient algorithm for black-and-white pattern detection. The method is robust to variable lighting conditions, achieves sub-pixel precision, and its computational complexity is independent of the processed image size. With off-the-shelf computational equipment and low-cost cameras, the core algorithm is able to process hundreds of images per second while tracking hundreds of objects with millimeter precision. In addition, we present the method's mathematical model, which makes it possible to estimate the expected localization precision, area of coverage, and processing speed from the camera's intrinsic parameters and the hardware's processing capacity. The correctness of the presented model and the performance of the algorithm in real-world conditions are verified in several experiments. Apart from the method description, we also make the source code public at http://purl.org/robotics/whycon so that it can be used as an enabling technology for various mobile robotics problems.
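The kind of model mentioned here can be paraphrased with ordinary pinhole-camera geometry. The numbers and formulas below are a back-of-the-envelope assumption, not the paper's exact expressions, but they show how precision, coverage, and frame rate all fall out of the intrinsics and the pixel-processing budget.

```python
# Expected performance of an overhead localization camera (all values assumed).
f_px = 750.0                      # focal length in pixels, from calibration
res_x, res_y = 1280, 1024         # sensor resolution
height_m = 3.0                    # camera height above the robots' plane
pixel_budget = 2.0e8              # pixels/s the hardware can process

ground_res = height_m / f_px                   # metres per pixel at nadir
subpixel_factor = 0.1                          # assumed sub-pixel refinement gain
precision_m = ground_res * subpixel_factor
coverage_m2 = (res_x * ground_res) * (res_y * ground_res)
fps = pixel_budget / (res_x * res_y)

print(f"~{precision_m * 1e3:.1f} mm precision over "
      f"{coverage_m2:.1f} m^2 at ~{fps:.0f} fps")
```

With these placeholder numbers the model predicts roughly 0.4 mm precision over about 21 m² at about 150 fps, the same order of magnitude as the claims in the abstract.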

    Real-time model-based video stabilization for microaerial vehicles

The emerging branch of micro aerial vehicles (MAVs) has attracted great interest for their indoor navigation capabilities, but they require high-quality video for teleoperated or autonomous tasks. A common problem of on-board video is the effect of undesired movements, and different approaches address it with either mechanical stabilizers or video-stabilization software. Very few video stabilization algorithms in the literature can be applied in real time, and those that can do not discriminate between intentional movements of the teleoperator and undesired ones. In this paper, a novel technique is introduced for real-time video stabilization with low computational cost, without generating false movements or degrading the stabilized video sequence. Our proposal combines geometric transformations and outlier rejection to obtain a robust inter-frame motion estimation, together with a Kalman filter, based on an ANN-learned model of the MAV that includes the control action, for motion-intention estimation.
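For orientation, the generic pipeline that such stabilizers build on can be sketched as follows: track features between frames, fit a robust similarity transform with RANSAC, accumulate the trajectory, and warp each frame toward a smoothed version of it. This is an assumption-level sketch; in particular, the plain exponential smoother below stands in for the paper's Kalman filter with its ANN-learned MAV model and control-action input.

```python
import cv2
import numpy as np

def interframe_motion(prev_gray, gray):
    """Robust 2-D similarity (rotation + translation) between two frames."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=8)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good = status.ravel() == 1
    # RANSAC rejects outliers from moving objects and bad matches.
    M, _ = cv2.estimateAffinePartial2D(pts[good], nxt[good], method=cv2.RANSAC)
    return M

def stabilize(frames, alpha=0.9):
    traj = np.zeros(3)                       # accumulated (dx, dy, dtheta)
    smooth = np.zeros(3)
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    out = [frames[0]]
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        M = interframe_motion(prev, gray)
        traj += [M[0, 2], M[1, 2], np.arctan2(M[1, 0], M[0, 0])]
        smooth = alpha * smooth + (1 - alpha) * traj   # stand-in for the KF
        dx, dy, da = smooth - traj                     # corrective motion
        C = cv2.getRotationMatrix2D((frame.shape[1] / 2.0,
                                     frame.shape[0] / 2.0),
                                    np.degrees(da), 1.0)
        C[:, 2] += [dx, dy]
        out.append(cv2.warpAffine(frame, C,
                                  (frame.shape[1], frame.shape[0])))
        prev = gray
    return out
```

The paper's contribution is precisely what this sketch lacks: a model-based filter that can tell the teleoperator's intentional motion from the disturbance, so intentional camera paths are preserved rather than smoothed away.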

    Aerial robotics in building inspection and maintenance

Buildings need periodic inspection of their condition: materials degrade with time, and repairs or renewals have to be made, driven by maintenance needs or safety requirements. This holds for all kinds of buildings and constructions: housing, architectural masterpieces, old and ancient buildings, and industrial buildings. Currently, nearly all of these tasks are carried out by human intervention. To carry out inspection or maintenance, humans need access to roofs, façades, or other hard-to-reach and potentially hazardous locations. In some cases, access for inspection may not be feasible at all. In industrial buildings, for instance, operation must often be interrupted to allow the safe execution of such tasks; these shutdowns not only lead to substantial production losses, but the shutdown and start-up operations themselves pose risks to people and the environment. In tourist buildings, access has to be restricted, with the consequent losses and inconvenience to visitors.

Aerial robots can help to perform this kind of hazardous operation autonomously, not only by teleoperation. The robots can carry sensors to detect failures of many types and to locate them on a previously generated map, which the robot also uses to navigate. Such sensors include cameras in different spectra (visual, near-infrared, UV), laser, LIDAR, ultrasound, and inertial sensing systems. If the sensing part is crucial for inspecting hazardous areas of buildings, actuation is also important: the aerial robot can carry small robots (mainly crawlers) to be deployed for more in-depth operations where contact between the sensors and the material is essential (any kind of metallic part: pipes, roofs, panels…). The aerial robot is also able to recover the deployed crawler so it can be reused. In this paper, the authors explain the research they are conducting in this area and propose future research directions and applications with aerial, ground, submarine, and other autonomous robots in the construction field.

    PWM and PFM for visual servoing in fully decoupled approaches

In this paper, novel visual servoing techniques based on Pulse Width Modulation (PWM) and Pulse Frequency Modulation (PFM) are presented. In order to apply these pulse modulations, a fully decoupled position-based visual servoing approach (i.e., with a block-diagonal interaction matrix) is considered, controlling translational and rotational camera motions independently. These techniques, working at high frequency, can be considered to address the sensor-latency problem inherent in visual servoing systems. The ripple expected from concentrating the control action in pulses is quantified and analyzed in a simulated scenario. This high-frequency ripple does not affect the system performance, since it is filtered by the manipulator dynamics; on the contrary, it can be seen as a dither signal that minimizes the impact of friction and overcomes backlash. This work was supported in part by the Spanish Government under Grant BES-2010-038486 and Project DPI2013-42302-R.

Muñoz Benavent, P.; Solanes Galbis, J. E.; Gracia Calandin, L. I.; Tornero Montserrat, J. (2015). PWM and PFM for visual servoing in fully decoupled approaches. Robotics and Autonomous Systems, 65(1), 57-64. doi:10.1016/j.robot.2014.11.011
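A toy rendering of the two modulations, applied to a scalar control action, may help: PWM keeps a fixed period and varies the pulse width, while PFM keeps a fixed pulse width and varies the firing rate. All constants below are arbitrary assumptions; the paper modulates the decoupled translational and rotational camera-velocity commands, not a single scalar.

```python
import numpy as np

def pwm(u, u_max, period, t):
    """Pulse Width Modulation: fixed period, duty cycle |u| / u_max."""
    duty = min(abs(u) / u_max, 1.0)
    return np.sign(u) * u_max if (t % period) < duty * period else 0.0

def pfm(u, u_max, pulse_width, t):
    """Pulse Frequency Modulation: fixed-width pulses, rate grows with |u|."""
    period = pulse_width * u_max / max(abs(u), 1e-9)
    return np.sign(u) * u_max if (t % period) < pulse_width else 0.0

ts = np.arange(0.0, 0.1, 1e-4)       # 100 ms window sampled at 10 kHz
u = 0.3                              # continuous command, with u_max = 1.0
print(np.mean([pwm(u, 1.0, 0.01, t) for t in ts]))    # ~0.3
print(np.mean([pfm(u, 1.0, 0.002, t) for t in ts]))   # ~0.3
```

Both schemes deliver the same average action as the continuous command; the pulse-shaped residual is the high-frequency ripple discussed above, which the manipulator dynamics filter out.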