    Automatic Detection of Calibration Grids in Time-of-Flight Images

    It is convenient to calibrate time-of-flight cameras by established methods, using images of a chequerboard pattern. The low resolution of the amplitude image, however, makes it difficult to detect the board reliably. Heuristic detection methods, based on connected image-components, perform very poorly on this data. An alternative, geometrically-principled method is introduced here, based on the Hough transform. The projection of a chequerboard is represented by two pencils of lines, which are identified as oriented clusters in the gradient-data of the image. A projective Hough transform is applied to each of the two clusters, in axis-aligned coordinates. The range of each transform is properly bounded, because the corresponding gradient vectors are approximately parallel. Each of the two transforms contains a series of collinear peaks; one for every line in the given pencil. This pattern is easily detected, by sweeping a dual line through the transform. The proposed Hough-based method is compared to the standard OpenCV detection routine, by application to several hundred time-of-flight images. It is shown that the new method detects significantly more calibration boards, over a greater variety of poses, without any overall loss of accuracy. This conclusion is based on an analysis of both geometric and photometric error. Comment: 11 pages, 11 figures, 1 table.
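
    The two-pencil idea can be illustrated with a much simpler stand-in than the authors' projective transform: detect lines with a standard Hough transform and split them into two orientation clusters. The sketch below uses OpenCV (the same library as the baseline routine mentioned above); the Canny thresholds and the angle_gap_deg parameter are illustrative assumptions, not values from the paper.

        # Minimal sketch: group standard Hough lines into two "pencils" by angle.
        # This approximates only the clustering step; it is not the paper's
        # projective, axis-aligned Hough transform.
        import cv2
        import numpy as np

        def detect_pencils(amplitude_img, angle_gap_deg=20.0):
            edges = cv2.Canny(amplitude_img, 50, 150)
            lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=60)
            if lines is None:
                return [], []
            thetas = lines[:, 0, 1]              # line angles in [0, pi)
            ref = thetas[0]                      # reference orientation
            near = np.abs(thetas - ref) < np.radians(angle_gap_deg)
            return lines[near, 0], lines[~near, 0]

        # The baseline mentioned in the abstract is OpenCV's own detector, e.g.:
        # found, corners = cv2.findChessboardCorners(amplitude_img, (7, 7))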

    Real-time edge tracking using a tactile sensor

    Object recognition through the use of input from multiple sensors is an important aspect of an autonomous manipulation system. In tactile object recognition, it is necessary to determine the location and orientation of object edges and surfaces. A controller is proposed that utilizes a tactile sensor in the feedback loop of a manipulator to track along edges. In the control system, the data from the tactile sensor is first processed to find edges. The parameters of these edges are then used to generate a control signal for a hybrid controller. Theory is presented for tactile edge detection and an edge tracking controller. In addition, experimental verification of the edge tracking controller is presented.
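
    As a rough illustration of the pipeline described above (contact data, then edge parameters, then a control signal), the sketch below fits a line to the contacting taxels of a tactile frame and forms a proportional correction. The threshold, gain, and function names are assumptions for illustration, not the paper's controller.

        # Hedged sketch: estimate edge orientation from a tactile frame and
        # derive a simple proportional tracking command.
        import numpy as np

        def edge_orientation(tactile_frame, contact_threshold=0.5):
            ys, xs = np.nonzero(tactile_frame > contact_threshold)
            if len(xs) < 2:
                return None                        # no edge visible on the pad
            pts = np.column_stack([xs, ys]).astype(float)
            pts -= pts.mean(axis=0)
            _, _, vt = np.linalg.svd(pts, full_matrices=False)
            return np.arctan2(vt[0, 1], vt[0, 0])  # principal direction = edge angle

        def tracking_command(edge_angle, desired_angle=0.0, k_p=1.5):
            # Proportional term a hybrid controller could use to keep the
            # sensor aligned with the edge while moving along it.
            return k_p * (desired_angle - edge_angle)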

    Pattern recognition in a multi-sensor environment

    Current pattern recognition systems tend to operate on a single sensor, e.g., a camera. However, the need is now evident for pattern recognition systems which can operate in multi-sensor environments. For example, a robotics workstation may use range finders, cameras, tactile pads, etc. The Multi-sensor Kernel System (MKS) provides an efficient and coherent approach to the specification, recovery, and analysis of patterns in the data sensed by such a diverse set of sensors. We demonstrate how such a system can be used to support both feature-based object models and structural models. The problem solved is the localization of a three-dimensional object in 3-space. Moreover, MKS allows rapid reconfiguration of the available sensors and the high-level models.

    Automatic infrasound signal detection using the Hough transform

    The Hough transform is a mathematical device that allows the retrieval of parametric curve information from binary-pixelated data in the presence of noise. This slope-intercept transform maps each point in the image space S into a straight line in parameter space P, and has the very useful property that all points in S that lie along the same straight line map to straight lines in P with a common intersection point. Thus, with a suitable counting procedure, the problem of extended straight-line detection in noisy pixelated data becomes one of local peak finding, a problem that may be substantially more tractable. In this study, an algorithm is described that utilizes the Hough transform for the detection of signals in International Monitoring System-style infrasonic array data, by seeking periods of constant backazimuth that are associated with coherent acoustic signals. A system of synthetic signal implants is used to assess the performance of the detection algorithm by generating a set of pseudo Receiver Operating Characteristic curves. A feature of the detection algorithm is the ability to accommodate full three-dimensional array geometry.
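
    A minimal slope-intercept Hough accumulator of the kind described above can be sketched as follows; the bin grids are illustrative assumptions, and the backazimuth-specific processing of the infrasound detector is not reproduced here.

        # Hedged sketch: each pixel (x, y) votes for every line y = m*x + c
        # passing through it; local peaks in the accumulator mark extended lines.
        import numpy as np

        def hough_slope_intercept(points, slopes, intercepts):
            acc = np.zeros((len(slopes), len(intercepts)), dtype=int)
            step = intercepts[1] - intercepts[0]
            for x, y in points:
                for i, m in enumerate(slopes):
                    c = y - m * x
                    j = int(round((c - intercepts[0]) / step))
                    if 0 <= j < len(intercepts):
                        acc[i, j] += 1
            return acc

        # Example: points drawn from y = 2x + 1 pile up in a single bin.
        pts = [(x, 2 * x + 1) for x in range(20)]
        acc = hough_slope_intercept(pts, slopes=np.linspace(-5, 5, 101),
                                    intercepts=np.linspace(-10, 10, 201))
        print(np.unravel_index(acc.argmax(), acc.shape))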

    Automated calibration of multi-sensor optical shape measurement system

    A multi-sensor optical shape measurement system (SMS) based on the fringe projection method and temporal phase unwrapping has recently been commercialised as a result of its easy implementation, computer control using a spatial light modulator, and fast full-field measurement. The main advantage of a multi-sensor SMS is the ability to make measurements with 360° coverage without mounting the measured component on translation and/or rotation stages. However, for greater acceptance in industry, issues relating to user-friendly calibration of the multi-sensor SMS in an industrial environment, so that the measured data can be presented in a single coordinate system, need to be addressed. The calibration of multi-sensor SMSs typically requires a calibration artefact, and consequently demands significant user input during the processing of calibration data, in order to obtain each sensor's optimal imaging geometry parameters. The imaging geometry parameters provide a mapping from the acquired shape data to real-world Cartesian coordinates. However, the process of obtaining optimal sensor imaging geometry parameters (which involves a nonlinear numerical optimisation process known as bundle adjustment) requires labelling regions within each point cloud as belonging to known features of the calibration artefact. This thesis describes an automated calibration procedure which ensures that calibration data is processed through automated feature detection of the calibration artefact, artefact pose estimation, automated control point selection, and finally bundle adjustment itself. [Continues.]
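
    As a hedged sketch of the bundle-adjustment step referred to above, the code below refines a single sensor's imaging-geometry parameters by minimising the reprojection error of known calibration-artefact features; the simplified pinhole parameterisation and function names are assumptions, not the thesis's sensor model.

        # Illustrative bundle-adjustment-style refinement of one sensor's
        # imaging geometry (focal lengths, principal point, pose).
        import numpy as np
        from scipy.optimize import least_squares
        from scipy.spatial.transform import Rotation

        def project(params, pts3d):
            # params = [fx, fy, cx, cy, rotvec (3), tvec (3)]
            fx, fy, cx, cy = params[:4]
            R = Rotation.from_rotvec(params[4:7]).as_matrix()
            cam = pts3d @ R.T + params[7:10]
            return np.column_stack([fx * cam[:, 0] / cam[:, 2] + cx,
                                    fy * cam[:, 1] / cam[:, 2] + cy])

        def refine(params0, artefact_pts3d, detected_2d):
            # Minimise residuals between predicted and detected features.
            residual = lambda p: (project(p, artefact_pts3d) - detected_2d).ravel()
            return least_squares(residual, params0).x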

    Radar Technology

    In this book, “Radar Technology”, the chapters are divided into four main topic areas. Topic area 1: “Radar Systems” consists of chapters which treat whole radar systems, the environment, and the target functional chain. Topic area 2: “Radar Applications” shows various applications of radar systems, including meteorological radars, ground penetrating radars and glaciology. Topic area 3: “Radar Functional Chain and Signal Processing” describes several aspects of radar signal processing, from parameter extraction and target detection to tracking and classification technologies. Topic area 4: “Radar Subsystems and Components” covers the design of radar subsystem components, such as antenna design and waveform design.

    Coupling Vanishing Point Tracking with Inertial Navigation to Estimate Attitude in a Structured Environment

    This research aims to obtain accurate and stable estimates of a vehicle's attitude by coupling consumer-grade inertial and optical sensors. This goal is pursued by first modeling both inertial and optical sensors and then developing a technique for identifying vanishing points in perspective images of a structured environment. The inertial and optical processes are then coupled to enable each one to aid the other. The vanishing point measurements are combined with the inertial data in an extended Kalman filter to produce overall attitude estimates. This technique is experimentally demonstrated in an indoor corridor setting using a motion profile designed to simulate flight. Through comparison with a tactical-grade inertial sensor, the combined consumer-grade inertial and optical data are shown to produce a stable attitude solution accurate to within 1.5 degrees. A measurement bias is manifested which degrades the accuracy by up to another 2.5 degrees.
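
    A vanishing point of the kind used above can be estimated as the least-squares intersection of a set of image lines; the sketch below is a generic formulation (not the thesis implementation) whose output could then feed the extended Kalman filter as an attitude measurement.

        # Hedged sketch: least-squares vanishing point from lines given in
        # homogeneous form l = (a, b, c), with a*x + b*y + c = 0.
        import numpy as np

        def vanishing_point(lines):
            L = np.asarray(lines, dtype=float)    # one line per row
            _, _, vt = np.linalg.svd(L)
            v = vt[-1]                            # direction minimising |L v|
            return v[:2] / v[2]                   # inhomogeneous image coordinates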

    3D reconstruction and motion estimation using forward looking sonar

    Autonomous Underwater Vehicles (AUVs) are increasingly used in different domains including archaeology, the oil and gas industry, coral reef monitoring, harbour security, and mine countermeasure missions. As electromagnetic signals do not penetrate the underwater environment, GPS signals cannot be used for AUV navigation, and optical cameras have a very short range underwater, which limits their use in most underwater environments. Motion estimation for AUVs is a critical requirement for successful vehicle recovery and meaningful data collection. Classical inertial sensors, usually used for AUV motion estimation, suffer from large drift error. On the other hand, accurate inertial sensors are very expensive, which limits their deployment to costly AUVs. Furthermore, acoustic positioning systems (APS) used for AUV navigation require costly installation and calibration. Moreover, they have poor performance in terms of the inferred resolution. Underwater 3D imaging is another challenge in the AUV industry, as 3D information is increasingly demanded to accomplish different AUV missions. Different systems have been proposed for underwater 3D imaging, such as planar-array sonar and T-configured 3D sonar. While the former features good resolution in general, it is very expensive and requires huge computational power; the latter is cheaper to implement but requires a long time for a full 3D scan, even at short ranges. In this thesis, we aim to tackle AUV motion estimation and underwater 3D imaging by proposing relatively affordable methodologies and studying the different parameters affecting their performance. We introduce a new motion estimation framework for AUVs which relies on successive acoustic images to infer AUV ego-motion. We also propose an Acoustic Stereo Imaging (ASI) system for underwater 3D reconstruction based on forward-looking sonars; the proposed system is cheaper to implement than planar-array sonars and solves the delay problem of T-configured 3D sonars.
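
    The core idea of inferring ego-motion from successive acoustic images can be illustrated with a generic registration step: recover the inter-frame translation by phase correlation. This is a hedged stand-in, not the framework proposed in the thesis, and it ignores rotation and sonar-specific imaging geometry.

        # Hedged sketch: phase correlation between consecutive sonar frames.
        import numpy as np

        def frame_shift(prev_frame, curr_frame):
            F1 = np.fft.fft2(prev_frame)
            F2 = np.fft.fft2(curr_frame)
            cross = F2 * np.conj(F1)
            cross /= np.abs(cross) + 1e-12        # normalised cross-power spectrum
            corr = np.abs(np.fft.ifft2(cross))
            dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
            h, w = corr.shape
            if dy > h // 2: dy -= h               # wrap large shifts to negative values
            if dx > w // 2: dx -= w
            return dx, dy                         # pixel displacement between frames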