
    On the Position Determination of Docking Station for AUVs Using Optical Sensor and Neural Network

    Detecting the relative position of the docking station is a critical issue for the homing of AUVs (Autonomous Underwater Vehicles). To detect the position of a light source, a pinhole-camera-like model structure was previously proposed. However, due to the limited sensor resolution and the distortion errors of the pinhole camera system, applying such a camera to docking in turbid sea environments is almost impossible. In this paper, a new method for detecting the position of the docking station using a light source is presented, together with a newly developed optical sensor that senses the light source underwater far more easily than a camera system during AUV homing. In addition, to improve the system, a neural network (NN) algorithm is proposed that models the relation between the light inputs and the optical sensor developed in this study. To evaluate the performance of the NN algorithm, experiments were first performed in air. The results show that the AUV docking system using the NN model outperforms the pinhole camera model.
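    The pinhole model the abstract uses as a baseline maps a 3D point in camera coordinates to image coordinates by perspective division. A minimal sketch (the focal lengths and principal point below are illustrative values, not from the paper):

```python
def project_pinhole(point_3d, fx, fy, cx, cy):
    """Project a 3D point (camera coordinates) to pixel coordinates
    using the ideal pinhole model: u = fx*X/Z + cx, v = fy*Y/Z + cy."""
    X, Y, Z = point_3d
    if Z <= 0:
        raise ValueError("point must lie in front of the camera")
    return (fx * X / Z + cx, fy * Y / Z + cy)

# Example: a light source 2 m ahead and 0.5 m to the right of the optical axis.
u, v = project_pinhole((0.5, 0.0, 2.0), fx=800.0, fy=800.0, cx=320.0, cy=240.0)
# (u, v) = (520.0, 240.0)
```

    The distortion and resolution limits the abstract mentions are exactly the deviations of a real underwater camera from this ideal mapping.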

    On the Issue of Camera Calibration with Narrow Angular Field of View

    This paper considers the issue of calibrating a camera with a narrow angular field of view using standard perspective methods from computer vision. In doing so, the significance of perspective distortion both for camera calibration and for pose estimation is revealed. Since narrow-field-of-view cameras make it difficult to obtain images rich in perspective cues, the accuracy of the calibration results is expectedly low. To address this, we propose an alternative method that compensates for this loss by utilizing the pose readings of a robotic manipulator. It facilitates accurate pose estimation by nonlinear optimization, simultaneously minimizing reprojection errors and errors in the manipulator transformations. Accurate pose estimation in turn enables accurate parametrization of a perspective camera.
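    The quantity minimized in such calibration pipelines is the reprojection error: the pixel distance between observed feature locations and those predicted by the current camera parameters. A minimal sketch of the error term (function and variable names are illustrative, not from the paper):

```python
import math

def rms_reprojection_error(observed, predicted):
    """Root-mean-square pixel distance between observed and predicted points."""
    sq = [(uo - up) ** 2 + (vo - vp) ** 2
          for (uo, vo), (up, vp) in zip(observed, predicted)]
    return math.sqrt(sum(sq) / len(sq))

err = rms_reprojection_error([(100.0, 50.0), (200.0, 80.0)],
                             [(101.0, 50.0), (200.0, 83.0)])
# err = sqrt((1 + 9) / 2) ≈ 2.236 pixels
```

    The paper's method adds a second term for errors in the manipulator transformations and minimizes both jointly.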

    Optical Geolocation for Small Unmanned Aerial Systems

    This paper presents an airborne optical geolocation system using four optical targets to provide position and attitude estimation for a sUAS supporting the NASA Acoustic Research Mission (ARM), where the goal is to reduce nuisance airframe noise during approach and landing. A large, precisely positioned microphone array captures the airframe noise for multiple passes of a Gulfstream III aircraft. For health monitoring of the microphone array, the Acoustic Calibration Vehicle (ACV) sUAS completes daily flights with an onboard speaker emitting tones at frequencies optimized for determining microphone functionality. An accurate position estimate of the ACV relative to the array is needed for microphone health monitoring. To this end, an optical geolocation system using a downward-facing camera mounted to the ACV was developed. The 3D position of the ACV is computed using the pinhole camera model. A novel optical geolocation algorithm first detects the targets, then a recursive algorithm tightens the localization of the targets. Finally, the position of the sUAS is computed using the image coordinates of the targets, the 3D world coordinates of the targets, and the camera matrix. A Real-Time Kinematic GPS system is used as a reference for comparison with the optical geolocation system.
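    For a downward-facing pinhole camera, the height above coplanar ground targets follows directly from the model: a ground distance d at height h projects to p = f*d/h pixels. A minimal one-dimensional sketch of this relationship (a simplification of the full camera-matrix solution the abstract describes; the numbers are illustrative):

```python
def height_from_targets(f_px, world_dist_m, pixel_dist):
    """Camera height from two ground targets a known distance apart,
    assuming a nadir-pointing ideal pinhole camera: h = f * d / p."""
    return f_px * world_dist_m / pixel_dist

# Two targets 2 m apart appearing 100 px apart under a 1000 px focal length.
h = height_from_targets(f_px=1000.0, world_dist_m=2.0, pixel_dist=100.0)
# h = 20.0 m
```

    The full system solves for all six degrees of freedom of the camera pose from the four targets; this scalar case only shows why known world coordinates plus the camera matrix are enough information.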

    Vision based motion control for a humanoid head

    This paper describes the design of a motion control algorithm for a humanoid robotic head, which consists of a neck with four degrees of freedom and two eyes (a stereo pair) that tilt on a common axis and rotate sideways freely. The kinematic and dynamic properties of the head are analyzed and modeled using screw theory. The motion control algorithm is designed to receive, as input, the output of a vision processing algorithm and to exploit the redundancy of the system for the realization of the movements. This algorithm enables the head to focus on and follow a target with human-like motions. The performance of the control algorithm has been tested in a simulated environment and then experimentally applied to the real humanoid head.

    Mechatronic design of the Twente humanoid head

    This paper describes the mechatronic design of the Twente humanoid head, which has been realized to provide a research platform for human-machine interaction. The design features a fast, four-degree-of-freedom neck with a long range of motion, and a vision system with three degrees of freedom, mimicking the eyes. To achieve fast target tracking, two degrees of freedom in the neck are combined in a differential drive, resulting in a low moving mass and the possibility to use powerful actuators. The performance of the neck has been optimized by minimizing backlash in the mechanisms and by using gravity compensation. The vision system is based on a saliency algorithm that uses the camera images to determine where the humanoid head should look, i.e., the focus of attention, computed according to biological studies. The motion control algorithm receives, as input, the output of the vision algorithm and controls the humanoid head to focus on and follow the target point. The control architecture exploits the redundancy of the system to produce human-like motions while looking at a target. The head has a translucent plastic cover onto which an internal LED system projects the mouth and the eyebrows, realizing human-like facial expressions.

    Characterizing driving behavior using automatic visual analysis

    In this work, we present the problem of rash-driving detection using a single wide-angle camera sensor, which is particularly useful in the Indian context. To our knowledge, this rash-driving problem has not been addressed using image processing techniques (existing works use other sensors such as accelerometers). The car image-processing literature, though rich and mature, does not address the rash-driving problem. In this work-in-progress paper, we present the need to address this problem, our approach, and our future plans to build a rash-driving detector.
    Comment: 4 pages, 7 figures, IBM-ICARE201

    Photogrammetry and ballistic analysis of a high-flying projectile in the STS-124 space shuttle launch

    A method combining photogrammetry with ballistic analysis is demonstrated to identify flying debris in a rocket launch environment. Debris traveling near the STS-124 Space Shuttle was captured on cameras viewing the launch pad within the first few seconds after launch. One particular piece of debris caught the attention of investigators studying the release of flame trench fire bricks because its high trajectory could indicate a flight risk to the Space Shuttle. Digitized images from two pad perimeter high-speed 16-mm film cameras were processed using photogrammetry software based on a multi-parameter optimization technique. Reference points in the image were found from 3D CAD models of the launch pad and from surveyed points on the pad. The three-dimensional reference points were matched to the equivalent two-dimensional camera projections by optimizing the camera model parameters using a gradient search optimization technique. Using this method of solving the triangulation problem, the xyz position of the object's path relative to the reference point coordinate system was found for every set of synchronized images. This trajectory was then compared to a predicted trajectory while performing regression analysis on the ballistic coefficient and other parameters. This identified, with a high degree of confidence, the object's material density and thus its probable origin within the launch pad environment. Future extensions of this methodology may make it possible to diagnose the underlying causes of debris-releasing events in near-real time, thus improving flight safety.
    Comment: 26 pages, 11 figures, 3 table
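    The triangulation step described above amounts to intersecting the viewing rays from the two synchronized cameras; because noisy rays never intersect exactly, a standard choice is the midpoint of the shortest segment between them. A minimal pure-Python sketch of that geometry (not the paper's multi-parameter optimizer):

```python
def triangulate_midpoint(p1, d1, p2, d2):
    """Midpoint of the shortest segment between rays p1 + t*d1 and p2 + s*d2."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    r = tuple(x - y for x, y in zip(p1, p2))
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b              # zero only for parallel rays
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1 = [p + t * u for p, u in zip(p1, d1)]   # closest point on ray 1
    q2 = [p + s * u for p, u in zip(p2, d2)]   # closest point on ray 2
    return tuple(0.5 * (x + y) for x, y in zip(q1, q2))

# Two camera rays that meet exactly at (1, 1, 0):
xyz = triangulate_midpoint((0.0, 0.0, 0.0), (1.0, 1.0, 0.0),
                           (2.0, 0.0, 0.0), (-1.0, 1.0, 0.0))
```

    Running this for every pair of synchronized frames yields the xyz trajectory that is then fed to the ballistic regression.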

    Rectification from Radially-Distorted Scales

    This paper introduces the first minimal solvers that jointly estimate lens distortion and affine rectification from repetitions of rigidly transformed coplanar local features. The proposed solvers incorporate lens distortion into the camera model and extend accurate rectification to wide-angle images that contain nearly any type of coplanar repeated content. We demonstrate a principled approach to generating stable minimal solvers by the Gröbner basis method, which is accomplished by sampling feasible monomial bases to maximize numerical stability. Synthetic and real-image experiments confirm that the solvers give accurate rectifications from noisy measurements when used in a RANSAC-based estimator. The proposed solvers demonstrate superior robustness to noise compared to the state of the art. The solvers work on scenes without straight lines and, in general, relax the strong assumptions on scene content made by the state of the art. Accurate rectifications on imagery taken with narrow focal length to near fish-eye lenses demonstrate the wide applicability of the proposed method. The method is fully automated, and the code is publicly available at https://github.com/prittjam/repeats.
    Comment: pre-prin
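    Incorporating lens distortion into the camera model commonly means adding a radial term such as the one-parameter division model, in which a distorted image point is undistorted as x_u = x_d / (1 + λ r_d²). A minimal sketch (the paper's exact parameterization may differ; λ and the principal point below are illustrative):

```python
def undistort_division(u, v, lam, cx, cy):
    """Undistort a pixel under the one-parameter division model."""
    x, y = u - cx, v - cy                  # centre the coordinates
    w = 1.0 + lam * (x * x + y * y)        # 1 + lambda * r_d^2
    return (cx + x / w, cy + y / w)

# lam = 0 reduces to the ideal (identity) pinhole mapping.
u0, v0 = undistort_division(640.0, 480.0, 0.0, 320.0, 240.0)
# A point 100 px right of the principal point under mild distortion.
u1, v1 = undistort_division(420.0, 240.0, 1e-6, 320.0, 240.0)
```

    Estimating λ jointly with the rectification, as the solvers do, avoids the straight-line requirement of classical distortion calibration.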