
    Vehicle-borne Scanning for Detailed 3D Terrain Model Generation

    Three-dimensional models of real-world terrain have applications in a variety of tasks, but digitizing a large environment places constraints on the design of a 3D scanning system. We have developed a Mobile Scanning System that works within these constraints to quickly digitize large-scale real-world environments. We use a mobile platform to move our sensors past the scene to be digitized – fusing data from cm-level-accuracy laser range scanners, positioning and orientation instruments, and high-resolution video cameras – to provide the mobility and speed required to model the target scene quickly and accurately.
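    The core fusion step the abstract describes – combining range measurements with the platform's position and orientation – amounts to transforming each scan point from the sensor frame into the world frame using the pose at the scan's timestamp. A minimal numpy sketch of that step is below; the function names, the yaw-only rotation, and the example poses are illustrative assumptions (the actual system would use a full calibrated 6-DOF pose per timestamp), not the paper's implementation.

```python
import numpy as np

def yaw_rotation(theta):
    """Rotation matrix for a heading (yaw) angle, in radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def georeference(scan_points, platform_position, platform_yaw):
    """Transform range-scanner points from the sensor frame into the
    world frame using the platform's pose at the scan timestamp."""
    R = yaw_rotation(platform_yaw)
    return scan_points @ R.T + platform_position

# Example: one scan line, platform at (10, 5, 0) heading 90 degrees left.
scan = np.array([[1.0, 0.0, 0.0],   # 1 m straight ahead of the sensor
                 [0.0, 2.0, 0.5]])  # 2 m to the sensor's left, 0.5 m up
world = georeference(scan, np.array([10.0, 5.0, 0.0]), np.pi / 2)
```

    Accumulating such transformed scan lines as the platform moves is what turns a 1D scanner sweep into a 3D terrain model.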

    ESTIMATING ILLUMINATION CHROMATICITY via KERNEL REGRESSION

    We propose a simple nonparametric linear regression tool, known as kernel regression (KR), to estimate the illumination chromaticity. We design a Gaussian kernel whose bandwidth is selected empirically. Previously, nonlinear techniques such as neural networks (NN) and support vector machines (SVM) were applied to estimate the illumination chromaticity; however, neither technique was compared with linear regression tools. We show that the proposed method yields more accurate chromaticity estimation than the NN, SVM, and linear ridge regression (RR) approaches on the same data set. Index Terms — Kernel regression, Color constancy
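    Kernel regression with a Gaussian kernel can be sketched in a few lines: the prediction is a kernel-weighted average of the training targets (the Nadaraya-Watson form), which is linear in the targets even though the kernel weighting is nonlinear in the inputs. The feature vectors and chromaticity targets below are made-up placeholders, not the paper's data set, and the bandwidth would be chosen empirically as the abstract describes.

```python
import numpy as np

def gaussian_kernel(d2, bandwidth):
    """Gaussian kernel evaluated on squared distances."""
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def kr_predict(X_train, y_train, x_query, bandwidth):
    """Nadaraya-Watson kernel regression: the prediction is a
    kernel-weighted average of the training targets."""
    d2 = np.sum((X_train - x_query) ** 2, axis=1)
    w = gaussian_kernel(d2, bandwidth)
    return w @ y_train / np.sum(w)

# Hypothetical image features and their illumination chromaticities.
X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
y = np.array([[0.3, 0.30], [0.4, 0.35], [0.5, 0.40]])
estimate = kr_predict(X, y, np.array([0.0, 0.0]), bandwidth=0.5)
```

    A small bandwidth makes the estimate track the nearest training examples; a very large one collapses it toward the mean chromaticity, which is why the bandwidth must be tuned.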

    Image Fusion and Enhancement via Empirical Mode Decomposition

    In this paper, we describe a novel technique for image fusion and enhancement using Empirical Mode Decomposition (EMD). EMD is a non-parametric, data-driven analysis tool that decomposes non-linear, non-stationary signals into Intrinsic Mode Functions (IMFs). In this method, we decompose images, rather than signals, from different imaging modalities into their IMFs. Fusion is performed at the decomposition level, and the fused IMFs are reconstructed to realize the fused image. We have devised weighting schemes that emphasize features from both modalities by decreasing the mutual information between IMFs, thereby increasing the information and visual content of the fused image. We demonstrate how the proposed method improves the interpretive information of the input images by comparing it with widely used fusion schemes. Apart from comparing our method with some advanced techniques, we have also evaluated it against pixel-by-pixel averaging, a comparison which, incidentally, is not common in the literature.
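    The fuse-at-the-decomposition-level idea can be illustrated without a full 2D EMD implementation. The sketch below substitutes a crude multiscale decomposition (successive box blurs yielding detail layers plus a residue) for the IMFs, and a generic maximum-magnitude selection rule for the paper's mutual-information-based weighting scheme; both substitutions are stand-ins, not the authors' method. The point it demonstrates is the pipeline shape: decompose each modality, combine layer by layer, then sum the layers to reconstruct.

```python
import numpy as np

def box_blur(img, k=3):
    """Simple box blur via shifted averages (edges wrap; fine for a sketch)."""
    out = np.zeros_like(img, dtype=float)
    r = k // 2
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out / (k * k)

def decompose(img, levels=3):
    """Coarse stand-in for EMD: successive blurs yield detail layers
    (playing the role of IMFs) plus a smooth residue; the layers sum
    back to the original image exactly."""
    layers, current = [], img.astype(float)
    for _ in range(levels):
        smooth = box_blur(current)
        layers.append(current - smooth)  # detail at this scale
        current = smooth
    layers.append(current)               # residue
    return layers

def fuse(img_a, img_b, levels=3):
    """Fuse at the decomposition level: per layer, keep the coefficient
    with the larger magnitude, then sum the layers to reconstruct."""
    la, lb = decompose(img_a, levels), decompose(img_b, levels)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(la, lb)]
    return sum(fused)
```

    Because each decomposition sums back to its input, fusing an image with itself returns the image unchanged – a quick sanity check that reconstruction is lossless.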

    Improving Video-Based Robot Self Localization Through Outlier Removal

    The purpose of this paper is to present a method for rejecting false matches of points from successive views in a video sequence – e.g., one used to perform Pose from Motion for a mobile sensing platform. Invariably, the algorithms used to determine point correspondences between two images output false matches along with the true ones. These false matches negatively impact the calculations required to perform pose estimation from video. This paper presents a new algorithm for identifying these false matches and removing them from consideration in order to improve system performance. Experimental results show that our algorithm works in cases where the percentage of false matches is as high as 80%, producing a set of point correspondences whose true/false match ratio is much higher than that of the mutual-best-match method commonly used for outlier filtering – increasing the true/false match ratio by 2-3 times, with comparable or better outlier rejection, in only a fraction of the time.
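    The abstract does not specify the new algorithm itself, but it names the baseline it is compared against: mutual-best-match filtering, in which a correspondence (i, j) is kept only when i's nearest descriptor in the second image is j and j's nearest descriptor in the first image is i. A minimal numpy sketch of that baseline follows; the function name and toy descriptors are illustrative assumptions.

```python
import numpy as np

def mutual_best_matches(desc_a, desc_b):
    """Keep only pairs (i, j) where descriptor i's nearest neighbour in
    desc_b is j AND descriptor j's nearest neighbour in desc_a is i."""
    # Pairwise squared distances between the two descriptor sets.
    d2 = np.sum((desc_a[:, None, :] - desc_b[None, :, :]) ** 2, axis=2)
    best_ab = np.argmin(d2, axis=1)   # A -> B nearest neighbours
    best_ba = np.argmin(d2, axis=0)   # B -> A nearest neighbours
    return [(i, j) for i, j in enumerate(best_ab) if best_ba[j] == i]

# Toy descriptors: two features in image A, three candidates in image B.
a = np.array([[0.0, 0.0], [10.0, 10.0]])
b = np.array([[0.1, 0.0], [10.0, 10.1], [0.2, 0.1]])
matches = mutual_best_matches(a, b)
```

    Mutual checking discards one-sided matches (here, b's third descriptor, whose best match in a is already claimed), but it cannot reject a false pair that happens to be mutually nearest – the failure mode the paper's algorithm is designed to handle at much higher outlier rates.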