
    Centroid Distance Keypoint Detector for Colored Point Clouds

    Keypoint detection serves as the basis for many computer vision and robotics applications. Although colored point clouds can be readily obtained, most existing keypoint detectors extract only geometry-salient keypoints, which can impede the overall performance of systems that intend to (or have the potential to) leverage color information. To promote advances in such systems, we propose an efficient multi-modal keypoint detector that can extract both geometry-salient and color-salient keypoints in colored point clouds. The proposed CEntroid Distance (CED) keypoint detector comprises an intuitive and effective saliency measure, the centroid distance, that can be used in both 3D space and color space, and a multi-modal non-maximum suppression algorithm that can select keypoints with high saliency in two or more modalities. The proposed saliency measure directly leverages the distribution of points in a local neighborhood and requires neither normal estimation nor eigenvalue decomposition. We evaluate the proposed method in terms of repeatability and computational efficiency (i.e., running time) against state-of-the-art keypoint detectors on both synthetic and real-world datasets. Results demonstrate that our proposed CED keypoint detector requires minimal computational time while attaining high repeatability. To showcase one of the potential applications of the proposed method, we further investigate the task of colored point cloud registration. Results suggest that our proposed CED detector outperforms state-of-the-art handcrafted and learning-based keypoint detectors in the evaluated scenes. The C++ implementation of the proposed method is publicly available at https://github.com/UCR-Robotics/CED_Detector. Comment: Accepted to the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 2023; copyright will be transferred to IEEE upon publication.
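    The centroid-distance measure lends itself to a compact sketch. Below is a minimal NumPy illustration, assuming fixed-radius neighborhoods and a simple per-modality quantile threshold in place of the paper's multi-modal non-maximum suppression; all names, parameters, and data are illustrative and not taken from the authors' C++ implementation.

        import numpy as np
        from scipy.spatial import cKDTree

        def centroid_distance_saliency(coords, values, radius):
            # Saliency of a point = distance between its value and the
            # centroid of the values over its spatial neighborhood.
            # Passing coords as values gives geometric saliency;
            # passing colors gives color saliency.
            tree = cKDTree(coords)
            saliency = np.empty(len(coords))
            for i, p in enumerate(coords):
                idx = tree.query_ball_point(p, radius)
                saliency[i] = np.linalg.norm(values[i] - values[idx].mean(axis=0))
            return saliency

        # Keep points salient in both modalities -- a crude stand-in for
        # the paper's multi-modal non-maximum suppression.
        xyz = np.random.rand(2000, 3)    # placeholder geometry
        rgb = np.random.rand(2000, 3)    # placeholder colors in [0, 1]
        s_geo = centroid_distance_saliency(xyz, xyz, radius=0.1)
        s_col = centroid_distance_saliency(xyz, rgb, radius=0.1)
        keypoints = np.where((s_geo > np.quantile(s_geo, 0.95)) &
                             (s_col > np.quantile(s_col, 0.95)))[0]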

    Fast feature matching for simultaneous localization and mapping

    This bachelor's thesis deals with fast matching of local image features in large databases for simultaneous localization and mapping. It includes a brief survey of feature detectors and descriptors invariant to scale, rotation, translation, and affine transformations. For many computer vision applications (SLAM, object retrieval, robust wide-baseline stereo, tracking, ...), real-time response in matching is essential. We address the problem of sub-linear search complexity with multiple randomized KD-trees, and we propose a novel way of splitting the dataset across the multiple trees. In addition, a new general-purpose evaluation package (supporting KD-trees, BBD-trees, and k-means trees) was developed.
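    As a toy illustration of the multiple randomized KD-tree idea, the sketch below builds each tree on a random rotation of the descriptor space and queries all trees approximately, keeping the best candidate; production libraries such as FLANN instead randomize split dimensions and bound the number of leaf visits. All names and parameters here are illustrative, not the thesis's implementation.

        import numpy as np
        from scipy.spatial import cKDTree

        class RandomizedKDForest:
            def __init__(self, data, n_trees=4, seed=0):
                rng = np.random.default_rng(seed)
                d = data.shape[1]
                self.rotations, self.trees = [], []
                for _ in range(n_trees):
                    # Random orthonormal rotation (QR of a Gaussian matrix)
                    # so each tree splits along different directions.
                    q, _ = np.linalg.qr(rng.normal(size=(d, d)))
                    self.rotations.append(q)
                    self.trees.append(cKDTree(data @ q))

            def query(self, x):
                # Approximate search (eps > 0) in every tree; keep the
                # best candidate found across the ensemble.
                best_i, best_d = -1, np.inf
                for q, tree in zip(self.rotations, self.trees):
                    dist, idx = tree.query(x @ q, eps=1.0)
                    if dist < best_d:
                        best_d, best_i = dist, idx
                return best_i, best_d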

    Enhanced Approximated SURF Model For Object Recognition

    Computer vision applications such as camera calibration, 3D reconstruction, object recognition, and image registration are becoming widely popular. In this paper an enhanced model for Speeded Up Robust Features (SURF) is proposed that makes the object recognition process roughly three times faster than the common SURF model. The main idea is to use efficient data structures for both the detector and the descriptor. The detection of interest regions is considerably sped up by using an integral image for scale-space computation, and the descriptor, which is based on orientation histograms, is accelerated by the use of an integral orientation histogram. We present an analysis of the computational costs, comparing both parts of our approach to the conventional method. Extensive experiments show a speed-up by a factor of eight while the matching and repeatability performance decreases only slightly.
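    The integral-image trick behind both speed-ups fits in a few lines: once the cumulative table is built, any box sum costs four lookups regardless of the box size, which is what makes the scale-space filters cheap; applying the same idea per orientation bin yields the integral orientation histogram. A minimal sketch, assuming a grayscale NumPy image:

        import numpy as np

        def integral_image(img):
            # ii[y, x] holds the sum of img[:y, :x].
            ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
            ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
            return ii

        def box_sum(ii, y0, x0, y1, x1):
            # Sum over img[y0:y1, x0:x1] in four lookups, independent
            # of the box size.
            return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]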

    Data fusion for unsupervised video object detection, tracking and geo-positioning

    In this work we describe a system and propose a novel algorithm for moving object detection and tracking based on a video feed. Unlike many well-known algorithms, it performs detection in an unsupervised style, using a velocity criterion for object detection. The algorithm utilises data from a single camera and Inertial Measurement Unit (IMU) sensors and performs fusion of the video and sensory data captured from the UAV. It includes object detection and tracking, augmented by estimation of each object's geographical co-ordinates. The algorithm can be generalised to any particular video sensor and is not restricted to any specific application. For object tracking, a Bayesian filter scheme combined with approximate inference is utilised. Object localisation in real-world co-ordinates is based on the tracking results and the IMU sensor measurements.
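    The abstract does not name a specific filter; as one common instance of the Bayesian filter scheme mentioned, a constant-velocity Kalman filter over image coordinates is sketched below. The motion model, frame rate, and noise levels are illustrative assumptions.

        import numpy as np

        dt = 1.0 / 30.0                    # frame interval (assumed 30 fps)
        F = np.array([[1, 0, dt, 0],       # constant-velocity model over
                      [0, 1, 0, dt],       # state [x, y, vx, vy]
                      [0, 0, 1,  0],
                      [0, 0, 0,  1]])
        H = np.array([[1, 0, 0, 0],        # only image position is observed
                      [0, 1, 0, 0]])
        Q = 1e-2 * np.eye(4)               # process noise (illustrative)
        R = 1e-1 * np.eye(2)               # measurement noise (illustrative)

        def kalman_step(x, P, z):
            # One predict/update cycle given a detection z = [u, v].
            x, P = F @ x, F @ P @ F.T + Q
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
            x = x + K @ (z - H @ x)
            P = (np.eye(4) - K @ H) @ P
            return x, P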

    A performance evaluation of local descriptors


    A Vision System for Automating Municipal Waste Collection

    This thesis describes an industry need to make municipal waste collection more efficient. To address this need, Waterloo Controls Inc. and a research team at UWO are exploring the idea of combining a vision system and a robotic arm to automate the waste collection process. The system as a whole is described in the introduction of this report, but the specific goal of this thesis was the development of the vision system component. This component, the main contribution of this thesis, consists of a candidate selection step followed by a verification step.

    Towards object-based image editing


    Feature-based Image Comparison and Its Application in Wireless Visual Sensor Networks

    This dissertation studies a feature-based image comparison method and its application in Wireless Visual Sensor Networks (WVSNs). WVSNs, formed by a large number of low-cost, small-size visual sensor nodes, represent a new trend in surveillance and monitoring practices. Although each single sensor has very limited capability in sensing, processing, and transmission, by working together the sensors can achieve various high-level tasks. Sensor collaboration is essential to WVSNs and is normally performed among sensors having similar measurements, called neighbor sensors. The directional sensing characteristics of imagers and the presence of visual occlusion pose unique challenges to neighborhood formation, as geographically close neighbors might not monitor similar scenes. In addition, energy resources in WVSNs are very tight, with wireless communication and complicated computation consuming most of the energy. Therefore, a feature-based image comparison method is proposed that directly compares the captured images from the visual sensors in a way that is economical in terms of both computational cost and transmission overhead. The method aims to find similar image pairs using a set of local features from each image; an image feature is a numerical representation of the raw image and can be far more compact in data volume. The feature-based image comparison consists of three steps: feature detection, descriptor calculation, and feature comparison. For feature detection, the dissertation proposes two computationally efficient corner detectors. The first is based on the Discrete Wavelet Transform, which provides multi-scale corner point detection; scale selection is achieved efficiently through a Gaussian convolution approach. The second is based on a linear unmixing model, which treats a corner point as the intersection of two or three "line" bases in a 3-by-3 region. The line bases are extracted through a constrained Nonnegative Matrix Factorization (NMF) approach, and corner detection is accomplished by counting the number of contributing bases in the linear mixture. For descriptor calculation, the dissertation proposes an effective dimensionality reduction algorithm for the high-dimensional Scale Invariant Feature Transform (SIFT) descriptors. A set of 40 SIFT descriptor bases is extracted through constrained NMF from a large training set, and all SIFT descriptors are then projected onto the space spanned by these bases, achieving dimensionality reduction. The efficiency of the proposed corner detectors has been proven through theoretical analysis, and the effectiveness of the proposed corner detectors and the dimensionality reduction approach has been validated through extensive comparison with several state-of-the-art feature detector/descriptor combinations.
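    The descriptor-reduction step can be sketched as follows, substituting scikit-learn's standard NMF for the constrained NMF variant used in the dissertation; the training data and parameters below are placeholders.

        import numpy as np
        from sklearn.decomposition import NMF

        # Placeholder training set: SIFT descriptors are nonnegative
        # 128-D vectors, stored as rows.
        train = np.random.rand(5000, 128)

        # Learn 40 nonnegative bases (the dissertation uses a constrained
        # NMF, so this is only an approximation of that step).
        model = NMF(n_components=40, init='nndsvd', max_iter=400)
        model.fit(train)

        def reduce_sift(descriptors):
            # Project 128-D descriptors onto the 40 learned bases,
            # yielding 40-D codes that are cheaper to transmit and compare.
            return model.transform(descriptors)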

    Colour local feature fusion for image matching and recognition

    This thesis investigates the use of colour information for local image feature extraction. The work is motivated by an inherent limitation of the most widely used state-of-the-art local feature techniques: their disregard of colour information. Colour contains important information that improves the description of the world around us; by disregarding it, chromatic edges may be lost, decreasing the saliency and distinctiveness of the resulting grayscale image. This thesis addresses the question of whether colour can improve the distinctive and descriptive capabilities of local features, and whether this leads to better performance in image feature matching and object recognition applications. To ensure that the developed local colour features are robust to general imaging conditions and suitable for real-world applications, this work utilises the most prominent photometric colour-invariant gradients from the literature. The research addresses several limitations of previous studies that used colour invariants by implementing robust local colour features in the form of a Harris-Laplace interest region detector and a SIFT descriptor that characterises the detected image region. Additionally, a comprehensive and rigorous evaluation is performed that compares the largest number of colour invariants of any study to date. This research provides, for the first time, conclusive findings on the capability of the chosen colour invariants for practical real-world computer vision tasks. The last major aspect of the research is the proposal of a feature fusion extraction strategy that uses grayscale intensity and colour information conjointly. Two separate fusion approaches are implemented and evaluated: one for local feature matching tasks and another for object recognition. Results from the fusion analysis strongly indicate that the colour invariants contain unique and useful information that can enhance the performance of techniques based on grayscale-only features.
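    As one concrete example of the photometric colour-invariant gradients the thesis builds on, the sketch below computes edge strength from normalized-rgb chromaticity, a classic invariant to illumination intensity; the thesis evaluates many such invariants, and this particular choice and the RGB channel order are assumptions.

        import numpy as np

        def chromaticity_edge_strength(img):
            # Chromaticity r = R/(R+G+B), g = G/(R+G+B) discounts
            # intensity changes, so its gradients keep chromatic edges
            # that a grayscale gradient can miss.
            img = img.astype(np.float64)
            s = img.sum(axis=2, keepdims=True) + 1e-8
            chroma = np.concatenate([img[..., 0:1] / s,
                                     img[..., 1:2] / s], axis=2)
            gy, gx = np.gradient(chroma, axis=(0, 1))
            return np.hypot(gx, gy).sum(axis=2)   # combined invariant edges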

    ACS Without an Attitude

    The book ACS Without an Attitude is an introduction to spacecraft attitude control systems. It is based on a series of lectures that Dr. Hallock presented in the early 2000s to members of the GSFC flight software branch; the target audience is flight software engineers (developers and testers) who are fairly new to the field and desire an introductory understanding of spacecraft attitude determination and control.