5 research outputs found

    Error Model of Misalignment Error in a Radial 3D Scanner

    A radial 3D structured-light scanner was developed from a laser projector and a wide field-of-view machine vision camera to inspect two- to four-inch diameter pipes, primarily in the nuclear industry. To identify the nature and the spatial extent of defective regions, the system constructs a surface point cloud. A dominant source of error in the system is manufacturing tolerances, which lead to misalignment between the laser projector and the camera. This misalignment introduces a triangulation error that reduces the accuracy of the result. In this paper, we develop an error model for the misalignment between the laser and the image plane. For a given target distance, we derive an almost linear relationship between the angular error in degrees and the error in the reported radius (the distance from the probe to the surface) in mm, and find that to reach the target accuracy of 0.1 mm on a 4 inch pipe, the misalignment must be controlled to less than 0.05 degrees. Future work will consider a post-manufacturing calibration routine to compensate for this misalignment.
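
    As an illustration of the kind of sensitivity the abstract describes, the following is a minimal sketch of a single-ray triangulation model, not the paper's exact error model: a camera a baseline b away from the laser measures the bearing to the laser spot, and an angular misalignment biases that bearing, shifting the reported radius. The baseline value is an assumption chosen only to show the near-linear trend.

```python
import numpy as np

# Hedged sketch: range from bearing in a simple triangulation scanner,
# r = b / tan(alpha), with the measured bearing biased by a misalignment delta.
# The baseline below is an assumed value, not taken from the paper.
b = 30.0                       # assumed laser-camera baseline in mm
r_true = 50.8                  # 4 inch pipe -> roughly 50.8 mm radius
alpha = np.arctan2(b, r_true)  # true bearing of the spot as seen by the camera

for delta_deg in (0.01, 0.05, 0.10, 0.20):
    delta = np.radians(delta_deg)
    r_reported = b / np.tan(alpha - delta)  # bearing biased by the misalignment
    print(f"misalignment {delta_deg:4.2f} deg -> radius error {r_reported - r_true:+.3f} mm")
```

    For small angles the reported-radius error grows almost linearly with the misalignment, which is the behaviour the paper quantifies; the exact slope depends on the probe geometry.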

    Vision-Aided Inertial Navigation Using Virtual Features

    In this paper we consider an aerial vehicle equipped with a monocular camera and inertial sensors. Additionally, a laser pointer mounted on the vehicle produces a laser spot. The laser spot is observed by the monocular camera and is the only point feature used in the proposed approach. We focus our attention on the case when the vehicle moves in proximity of a planar surface and, in particular, when the laser spot belongs to this surface. The paper provides two main contributions. The first is the analytical derivation of all the observable modes, i.e. all the physical quantities that can be determined by using only the inertial data and the camera observations of the laser spot during a short time interval. Specifically, it is shown that the observable modes are: the distance of the vehicle from the planar surface; the component of the vehicle speed orthogonal to the planar surface; the relative orientation of the vehicle with respect to the planar surface; and the orientation of the planar surface with respect to gravity. The second contribution is the introduction of a simple recursive method to estimate all the aforementioned observable modes. This method is based on a local decomposition of the original system, which separates the observable modes from the rest of the system. The method is validated using synthetic data. Additionally, preliminary tests with real data are provided and more complete experiments are in progress. The presented approach can be integrated in the framework of autonomous take-off and landing, safe touch-down and low-altitude manoeuvres, even in dark or featureless environments.
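
    The geometry that makes the distance to the plane observable can be shown with a short, hedged sketch; it is not the paper's observability analysis or recursive filter. With a known laser-camera extrinsic calibration, the laser spot can be triangulated in the body frame, and the vehicle-to-plane distance follows once the plane normal (for example, from the estimated relative orientation) is available. All numerical values are invented for the example.

```python
import numpy as np

def triangulate_spot(p_cam, d_cam, p_las, d_las):
    """Midpoint of the common perpendicular between the camera ray
    (p_cam + t*d_cam) and the laser ray (p_las + s*d_las)."""
    d_cam = d_cam / np.linalg.norm(d_cam)
    d_las = d_las / np.linalg.norm(d_las)
    w = p_cam - p_las
    a, b, c = d_cam @ d_cam, d_cam @ d_las, d_las @ d_las
    d, e = d_cam @ w, d_las @ w
    denom = a * c - b * b
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    return 0.5 * ((p_cam + t * d_cam) + (p_las + s * d_las))

# Assumed rig: camera at the body origin, laser 10 cm away, both pointing down.
p_cam, d_cam = np.zeros(3), np.array([0.05, 0.0, -1.0])      # camera bearing to the spot
p_las, d_las = np.array([0.1, 0.0, 0.0]), np.array([0.0, 0.0, -1.0])
spot = triangulate_spot(p_cam, d_cam, p_las, d_las)

n_body = np.array([0.0, 0.0, 1.0])       # plane normal expressed in the body frame
distance_to_plane = abs(n_body @ spot)   # one of the observable modes listed above
print(spot, distance_to_plane)
```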

    Vision based attitude and altitude estimation for UAVs in dark environments

    This paper presents a novel approach to estimate, in real time, the pose and altitude of an unmanned aerial vehicle (UAV) in low-light environments, such as a cloudy day, night, or a dark indoor setting. The method makes full use of a calibrated projective camera and a laser pattern projector. The model is derived mathematically, validated in simulation, and then applied to a real set of images. The estimates from our method are compared with those of commercial sensors for accuracy and correctness. The results indicate that the proposed system is suitable for use in dark conditions, in particular at night or in dark indoor environments where GPS signals are unavailable for positioning. They confirm its suitability for autonomous take-off and landing and for low-altitude manoeuvres in the dark, and the lighter single-camera system leaves room for additional payload.
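
    As a hedged illustration of the simulation side of such a validation (not the authors' implementation, and with an assumed circular laser pattern and invented state values), the sketch below computes where the rays of a downward laser cone hit the ground plane for a given roll, pitch and altitude; synthetic data of this kind can be used to test an estimator.

```python
import numpy as np

def rotation_rp(roll, pitch):
    """Body-to-world rotation from roll (about x) and pitch (about y), in radians."""
    cr, sr, cp, sp = np.cos(roll), np.sin(roll), np.cos(pitch), np.sin(pitch)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    return Ry @ Rx

roll, pitch, altitude = np.radians(5.0), np.radians(-3.0), 1.5   # assumed vehicle state
half_angle = np.radians(20.0)                                    # assumed cone half-angle
R = rotation_rp(roll, pitch)
origin = np.array([0.0, 0.0, altitude])

ground_points = []
for phi in np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False):
    d_body = np.array([np.sin(half_angle) * np.cos(phi),
                       np.sin(half_angle) * np.sin(phi),
                       -np.cos(half_angle)])     # one ray of the laser cone, body frame
    d_world = R @ d_body
    t = -origin[2] / d_world[2]                  # intersect the ray with the plane z = 0
    ground_points.append(origin + t * d_world)
print(np.round(np.array(ground_points), 3))
```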

    Design of Flying Robots for Collision Absorption and Self-Recovery

    Flying robots have the unique advantage of being able to move through the air unaffected by the obstacles or precipices below them. This ability quickly becomes a disadvantage, however, as the amount of free space is reduced and the risk of collisions increases. Their sensitivity to any contact with the environment has kept them from venturing beyond large open spaces and obstacle-free skies. Recent efforts have concentrated on improving obstacle detection and avoidance strategies, modeling the environment and intelligent planning to navigate ever tighter spaces while remaining airborne. Though this strategy is yielding impressive and improving results, it is limited by the quality of the information that can be provided by on-board sensors. As evidenced by insects that collide with windows, there will always be situations in which sensors fail and a flying platform collides with the obstacles around it. It is this fact that inspired the topic of this thesis: enabling flying platforms to survive and recover from contact with their environment through intelligent mechanical design. Three main challenges are tackled in this thesis: robustness to contact, self-recovery and integration into flight systems.
    Robustness to contact involves protecting the fast-spinning propellers, the stiff inner frame of a flying robot and its embedded sensors from damage through the elastic absorption of collision energy. A method is presented for designing protective structures that transfer the lowest possible amount of force to the platform's frame while simultaneously minimizing weight and thus their effect on flight performance. The method is first used to design a teardrop-shaped spring configuration for absorbing the head-on collisions typically experienced by winged platforms. The design is implemented on a flying platform that can survive drops from a height of 2 m. A second design is then presented, this time using springs in a tetrahedral configuration that absorb energy through buckling. When embedded into a hovering platform, the tetrahedral protective mechanisms are able to absorb dozens of high-speed collisions while significantly reducing the forces on the platform's frame compared to the foam-based protection typically used on other platforms.
    Surviving a collision is only half of the equation and is only useful if a flying platform can subsequently return to flight without requiring human intervention, a process called self-recovery. The theory behind self-recovery as it applies to many types of flying platforms is first presented, followed by a method for designing and optimizing different types of self-recovery mechanisms. A gravity-based mechanism is implemented on an ultra-light (20.5 g) wing-based platform whose morphology and centre of gravity are optimized to always land on its side after a collision, ready to take off again. Such a mechanism, however, is limited to surfaces that are flat and obstacle-free and requires clear space in front of the platform to return to the air. A second, leg-based self-recovery mechanism is thus designed and integrated into a second hovering platform, allowing it to upright into a vertical takeoff position. The mechanism succeeds in returning the platform to the air in a variety of complex environments, including sloped surfaces, corners and surface textures ranging from smooth hardwood to gravel and rocks.
    In a final chapter, collision energy absorption and self-recovery mechanisms are integrated into a single hovering platform, the first example of a flying robot capable of crashing into obstacles, falling to the ground, uprighting and returning to the air, all without human intervention. These abilities are first demonstrated through a contact-based random search behaviour in which the platform explores a small enclosed room in complete darkness. After each collision with a wall the platform falls to the ground, recovers and then continues exploring. In a second experiment the platform is programmed with a basic phototaxis behaviour. Using only four photodiodes that provide a rough idea of the bearing to a source of light, the platform is able to consistently cross a 13 x 2.2 m corridor and traverse a doorway without using any obstacle avoidance, modeling or planning.
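
    The phototaxis behaviour lends itself to a small, hedged sketch. The controller below is not the thesis' flight code; the sensor layout (four photodiodes facing front, right, back and left) and the proportional gain are assumptions, but it shows how four intensity readings can be turned into a coarse bearing and a steering command.

```python
import math

def bearing_to_light(front, right, back, left):
    """Coarse bearing (rad) of the light source in the body frame:
    0 = straight ahead, positive = to the right."""
    return math.atan2(right - left, front - back)

def yaw_rate_command(front, right, back, left, gain=1.0):
    """Proportional steering toward the light; forward speed is held constant."""
    return gain * bearing_to_light(front, right, back, left)

# Example: the light is a little to the right of the current heading.
readings = dict(front=0.8, right=0.5, back=0.1, left=0.2)
print(math.degrees(bearing_to_light(**readings)), yaw_rate_command(**readings))
```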

    A Geometrical Approach for Vision Based Attitude and Altitude Estimation for UAVs in Dark Environments

    This paper presents a single camera and laser system dedicated to the real-time estimation of attitude and altitude for unmanned aerial vehicles (UAVs) under conditions ranging from low illumination to complete darkness. The fisheye camera covers a large field of view (FOV). The approach, close to structured-light systems, uses the geometric information obtained by projecting a laser circle onto the ground plane and observing it with the camera. We present experiments based on simulated data and real sequences. The results show good agreement with ground-truth values from commercial sensors in terms of accuracy and correctness. They also prove the system's suitability for autonomous take-off and landing, as well as for low-altitude manoeuvres in dark, GPS-denied, unknown environments with no pre-built map. Because the system is inexpensive and uses a lightweight micro-camera and laser, it also leaves room for additional payload for different applications.
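
    A hedged sketch of the recovery step is given below; it is not the paper's derivation. It assumes the laser-circle points have already been reconstructed as 3D points in the camera frame (for instance by intersecting the known projector rays with the camera bearings). Fitting a plane to those points yields the surface normal, hence the camera's tilt relative to the ground, and the perpendicular distance, i.e. the altitude. The synthetic circle used for the check is invented.

```python
import numpy as np

def plane_fit(points):
    """Least-squares plane through an (N, 3) point set: returns (unit normal, distance to origin)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                      # direction of least variance = plane normal
    if normal[2] < 0:                    # orient the normal toward the camera
        normal = -normal
    return normal, abs(normal @ centroid)

# Synthetic check: a 0.4 m laser circle on the ground, camera 1.2 m above it, rolled by 4 degrees.
phis = np.linspace(0.0, 2.0 * np.pi, 24, endpoint=False)
circle = np.stack([0.4 * np.cos(phis), 0.4 * np.sin(phis), np.zeros_like(phis)], axis=1)
roll = np.radians(4.0)
R = np.array([[1, 0, 0],
              [0, np.cos(roll), -np.sin(roll)],
              [0, np.sin(roll),  np.cos(roll)]])            # world-from-camera rotation
points_cam = (circle - np.array([0.0, 0.0, 1.2])) @ R       # ground points in the camera frame

normal, altitude = plane_fit(points_cam)
tilt = np.degrees(np.arccos(abs(normal[2])))                # angle between camera axis and surface normal
print(f"altitude ~ {altitude:.2f} m, relative tilt ~ {tilt:.1f} deg")
```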