
    Edge and corner identification for tracking the line of sight

    This article presents an edge-corner detector, implemented within the GEIST project (a Computer Aided Touristic Information System), to extract straight edges and their intersections (image corners) from camera-captured (real-world) images and from computer-generated images (rendered from the database of Historical Monuments using observer position and orientation data). Both kinds of images are processed for reduction of detail, skeletonization, and corner-edge detection. The corners surviving the detection and skeletonization process in both images are treated as landmarks and fed to a matching algorithm, which estimates the sampling errors that usually contaminate the GPS and pose-tracking data driving the computer-image generator. In this manner a closed control loop is implemented, by which the system converges to an exact determination of the position and orientation of an observer traversing a historical scenario (in this case the city of Heidelberg). With this exact position and orientation, other modules of the GEIST project can project history tales onto the observer's field of view so that they match the intended scenario (the real image seen by the observer); in this way the tourist "sees" tales developing at actual, material historical sites of the city. To achieve these goals, this article presents the modification and articulation of algorithms such as the Canny edge detector, the SUSAN corner detector, and 1-D and 2-D filters, among others.
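    To make the edge-plus-corner landmark stage concrete, here is a minimal sketch assuming OpenCV. Since OpenCV ships no SUSAN detector, Shi-Tomasi corners stand in for it; the extract_landmarks helper and its thresholds are illustrative choices, not values from the paper.

```python
# Sketch of an edge + corner landmark extractor, assuming OpenCV.
# Shi-Tomasi corners substitute for SUSAN, which OpenCV does not provide.
import cv2
import numpy as np

def extract_landmarks(image_path, max_corners=200):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Reduce detail before edge extraction, as in the paper's pipeline.
    blurred = cv2.GaussianBlur(gray, (5, 5), 1.4)
    edges = cv2.Canny(blurred, 50, 150)
    # Corner candidates; keep only those lying on or near detected edges,
    # loosely mimicking the "corners surviving skeletonization" filtering.
    corners = cv2.goodFeaturesToTrack(blurred, max_corners,
                                      qualityLevel=0.01, minDistance=10)
    landmarks = []
    if corners is not None:
        for x, y in corners.reshape(-1, 2).astype(int):
            if edges[max(y - 2, 0):y + 3, max(x - 2, 0):x + 3].any():
                landmarks.append((x, y))
    return edges, landmarks
```

    The surviving landmark coordinates would then feed a matching step between the real and rendered views, which is outside the scope of this sketch.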

    Data Association and Map Management for Robot SLAM using Local Invariant Features

    To build a persistent map with visual landmarks is one of the most important steps in implementing visual simultaneous localization and mapping (SLAM). A corner detector is a common method for detecting visual landmarks when constructing a map of the environment. However, due to the scale-variant characteristic of corner detection, extensive computational cost is needed to recover the scale and orientation of corner features in SLAM tasks. The purpose of this paper is to build the map using a local invariant feature detector, namely speeded-up robust features (SURF), to detect scale- and orientation-invariant features and to provide a robust representation of visual landmarks for SLAM. The detection, description, and matching procedures of the regular SURF algorithm are modified in this paper to provide robust data association of visual landmarks in SLAM. Furthermore, an effective map-management method for SURF features in SLAM is designed to improve the accuracy of robot state estimation.
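    As a rough illustration of SURF detection and data association, the following sketch assumes opencv-contrib-python (SURF lives in cv2.xfeatures2d and may be disabled in patent-free builds). Lowe's ratio test stands in for the paper's modified matching procedure, and match_surf_landmarks with its thresholds is a hypothetical helper.

```python
# Sketch of SURF landmark detection and ratio-test association,
# assuming opencv-contrib-python; not the paper's modified procedure.
import cv2

def match_surf_landmarks(img_a, img_b, hessian_threshold=400, ratio=0.7):
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian_threshold)
    kp_a, desc_a = surf.detectAndCompute(img_a, None)
    kp_b, desc_b = surf.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    # Lowe's ratio test rejects ambiguous landmark associations.
    for pair in matcher.knnMatch(desc_a, desc_b, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    return kp_a, kp_b, good
```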

    A biologically inspired spiking model of visual processing for image feature detection

    To enable fast, reliable feature matching or tracking in scenes, features need to be discrete and meaningful; hence edge or corner features, commonly called interest points, are often used for this purpose. Experimental research has shown that biological vision systems use neuronal circuits to extract particular features, such as edges or corners, from visual scenes. Inspired by this biological behaviour, this paper proposes a biologically inspired spiking neural network for image feature extraction. Standard digital images are processed and converted to spikes in a manner similar to the processing that transforms light into spikes in the retina. Using a hierarchical spiking network, various types of biologically inspired receptive fields extract progressively more complex image features. The performance of the network is assessed by examining the repeatability of the extracted features, with visual results presented for both synthetic and real images.
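    To make the retina-like encoding concrete, here is an illustrative sketch, not the paper's network: pixel intensities are rate-coded into Poisson spike trains, and the accumulated spikes pass through a difference-of-Gaussians receptive field, a classic center-surround model. poisson_spikes, dog_response, and all parameters are assumptions.

```python
# Illustrative sketch: Poisson rate coding of pixel intensities followed by
# a difference-of-Gaussians (center-surround) receptive field. This is a
# classic retina model, not the paper's hierarchical spiking network.
import numpy as np
from scipy.ndimage import gaussian_filter

def poisson_spikes(image, timesteps=50, max_rate=0.5, rng=None):
    """Convert normalized pixel intensities into a binary spike train."""
    rng = np.random.default_rng() if rng is None else rng
    rates = (image / image.max()) * max_rate  # firing probability per step
    return rng.random((timesteps,) + image.shape) < rates

def dog_response(spike_counts, sigma_center=1.0, sigma_surround=2.0):
    """Center-surround receptive field applied to accumulated spike counts."""
    return (gaussian_filter(spike_counts, sigma_center)
            - gaussian_filter(spike_counts, sigma_surround))

# Usage: counts = poisson_spikes(img).sum(axis=0).astype(float)
#        response = dog_response(counts)
```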

    LIFT: Learned Invariant Feature Transform

    We introduce a novel Deep Network architecture that implements the full feature-point handling pipeline, that is, detection, orientation estimation, and feature description. While previous works have successfully tackled each of these problems individually, we show how to learn to do all three in a unified manner while preserving end-to-end differentiability. We then demonstrate that our deep pipeline outperforms state-of-the-art methods on a number of benchmark datasets, without the need for retraining. Comment: Accepted to ECCV 2016 (spotlight).
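    The following skeleton only illustrates how the three stages can be chained while staying differentiable; it assumes PyTorch, and FeaturePipeline with its layer sizes is invented for illustration, not the authors' architecture.

```python
# Schematic three-stage pipeline (detect -> orient -> describe) on image
# patches; module internals are placeholders, not the LIFT architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeaturePipeline(nn.Module):
    def __init__(self, patch=32, desc_dim=128):
        super().__init__()
        self.detector = nn.Sequential(nn.Conv2d(1, 8, 5, padding=2), nn.ReLU(),
                                      nn.Conv2d(8, 1, 5, padding=2))  # score map
        self.orient = nn.Sequential(nn.Flatten(),
                                    nn.Linear(patch * patch, 2))  # (cos, sin)
        self.descriptor = nn.Sequential(nn.Flatten(),
                                        nn.Linear(patch * patch, desc_dim))

    def forward(self, patches):          # patches: (N, 1, 32, 32)
        scores = self.detector(patches)  # keypoint score map per patch
        angle = self.orient(patches)     # differentiable orientation estimate
        desc = self.descriptor(patches)  # fixed-length descriptor
        return scores, angle, F.normalize(desc, dim=1)
```

    In the actual paper the estimated orientation is used to rotate the patch (via a differentiable warp) before description; this sketch omits that coupling for brevity.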

    Estimating Epipolar Geometry With The Use of a Camera Mounted Orientation Sensor

    Context: Image processing and computer vision are rapidly becoming commonplace, and the amount of information about a scene, such as its 3D geometry, that can be obtained from one or more images is steadily increasing thanks to higher resolutions, the wider availability of imaging sensors, and an active research community. In parallel, advances in hardware design and manufacturing allow devices such as gyroscopes, accelerometers, magnetometers, and GPS receivers to be included alongside imaging devices at the consumer level.

    Aims: This work investigates the use of orientation sensors in computer vision as sources of data to aid image processing and the determination of a scene's geometry, in particular the epipolar geometry of a pair of images, and devises a hybrid methodology from two sets of previous works in order to exploit the information available from orientation sensors alongside data gathered by image processing techniques.

    Method: A readily available consumer-level orientation sensor was used alongside a digital camera to capture images of a set of scenes while recording the orientation of the camera. The fundamental matrix of each image pair was calculated using a variety of techniques, both incorporating and excluding the orientation-sensor data.

    Results: Some methodologies could not produce an acceptable fundamental matrix for certain image pairs. A method described in the literature that used an orientation sensor always produced a result; however, in cases where the hybrid or purely computer-vision methods also produced a result, the sensor-only method was found to be the least accurate.

    Conclusion: The results show that capturing orientation-sensor data alongside images can improve both the accuracy and the reliability of scene-geometry calculations. However, noise from the orientation sensor limits this accuracy, and further research is needed to determine the magnitude of this problem and methods of mitigating it.
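    As a sketch of the two routes to the fundamental matrix compared above, assuming OpenCV and NumPy: one estimates F robustly from point correspondences, the other constructs it from a sensor-supplied relative rotation via F = K^-T [t]x R K^-1. K, R, and t are assumed inputs, and both helper names are hypothetical.

```python
# Two routes to the fundamental matrix: image-based RANSAC estimation
# versus construction from a known relative rotation (e.g. from an
# orientation sensor). Assumes OpenCV and NumPy; inputs are illustrative.
import cv2
import numpy as np

def f_from_correspondences(pts_a, pts_b):
    # Classic route: robust estimation from Nx2 point arrays with RANSAC.
    F, mask = cv2.findFundamentalMat(pts_a, pts_b, cv2.FM_RANSAC, 1.0, 0.999)
    return F, mask

def f_from_orientation(K, R, t):
    # Sensor-aided route: with relative rotation R and a translation
    # estimate t, F = K^-T [t]_x R K^-1, where [t]_x is the skew matrix.
    tx = np.array([[0.0, -t[2], t[1]],
                   [t[2], 0.0, -t[0]],
                   [-t[1], t[0], 0.0]])
    K_inv = np.linalg.inv(K)
    return K_inv.T @ tx @ R @ K_inv
```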