    Large Scale SfM with the Distributed Camera Model

    We introduce the distributed camera model, a novel model for Structure-from-Motion (SfM). This model describes image observations in terms of light rays with ray origins and directions rather than pixels. As such, the proposed model is capable of describing a single camera or multiple cameras simultaneously as the collection of all light rays observed. We show that the distributed camera model is a generalization of the standard camera model, and we describe a general formulation and solution to the absolute camera pose problem that works for standard or distributed cameras. The proposed method computes a solution that is up to 8 times more efficient than gDLS and is robust to rotation singularities. Finally, this method is used in a novel large-scale incremental SfM pipeline where distributed cameras are accurately and robustly merged together. This pipeline is a direct generalization of traditional incremental SfM; however, instead of incrementally adding one camera at a time, the reconstruction is grown by adding one distributed camera at a time. Our pipeline produces highly accurate reconstructions efficiently by avoiding the need for many bundle adjustment iterations, and it can compute a 3D model of Rome from over 15,000 images in just 22 minutes.
    Comment: Published at the 2016 3DV Conference
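
    To make the ray-based formulation concrete, below is a minimal Python sketch of a distributed camera as a collection of rays with per-ray origins and directions. The class and helper names are illustrative, not the authors' code: a pinhole camera is the special case in which every ray originates at the single camera center, and merging two cameras (or groups of cameras) is just ray concatenation.

    ```python
    import numpy as np

    class DistributedCamera:
        """A set of observation rays, each with its own origin and direction."""

        def __init__(self, origins, directions):
            # origins: (N, 3) ray origins; directions: (N, 3) ray directions
            self.origins = np.asarray(origins, dtype=float)
            d = np.asarray(directions, dtype=float)
            self.directions = d / np.linalg.norm(d, axis=1, keepdims=True)

        @classmethod
        def from_pinhole(cls, K, R, t, pixels):
            """A standard camera: every ray starts at the one camera center."""
            center = -R.T @ t                            # camera center in world frame
            pix_h = np.column_stack([pixels, np.ones(len(pixels))])
            dirs = (R.T @ np.linalg.inv(K) @ pix_h.T).T  # back-projected ray directions
            return cls(np.tile(center, (len(pixels), 1)), dirs)

        def merge(self, other):
            """Merging cameras is ray concatenation in the distributed model."""
            return DistributedCamera(np.vstack([self.origins, other.origins]),
                                     np.vstack([self.directions, other.directions]))
    ```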

    GraphMatch: Efficient Large-Scale Graph Construction for Structure from Motion

    We present GraphMatch, an approximate yet efficient method for building the matching graph for large-scale structure-from-motion (SfM) pipelines. Unlike modern SfM pipelines that use vocabulary (Voc.) trees to quickly build the matching graph and avoid a costly brute-force search over matching image pairs, GraphMatch does not require an expensive offline pre-processing phase to construct a Voc. tree. Instead, GraphMatch leverages two priors that can predict which image pairs are likely to match, thereby making the matching process for SfM much more efficient. The first is a score computed from the distance between the Fisher vectors of any two images. The second prior is based on the graph distance between vertices in the underlying matching graph. GraphMatch combines these two priors into an iterative "sample-and-propagate" scheme similar to the PatchMatch algorithm. Its sampling stage uses the Fisher similarity prior to guide the search for matching image pairs, while its propagation stage explores neighbors of matched pairs to find new ones with a high image similarity score. Our experiments show that GraphMatch finds more image pairs than competing approximate methods while also being the most efficient.
    Comment: Published at IEEE 3DV 2017
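
    The sample-and-propagate loop described in the abstract can be sketched as follows. Here `fisher_similarity` and `verify_match` are assumed stand-ins for the paper's Fisher-vector scoring and feature matching with geometric verification; this is a hedged illustration of the scheme, not the published implementation.

    ```python
    import itertools

    def graphmatch(num_images, fisher_similarity, verify_match,
                   iterations=10, samples_per_iteration=100):
        """Sketch of GraphMatch-style matching-graph construction.

        fisher_similarity(i, j) -> float : cheap appearance prior (assumed given)
        verify_match(i, j) -> bool       : expensive matching + geometric check
        """
        all_pairs = list(itertools.combinations(range(num_images), 2))
        edges, tested = set(), set()

        for _ in range(iterations):
            # Sampling stage: try untested pairs with the highest Fisher similarity.
            untested = [p for p in all_pairs if p not in tested]
            untested.sort(key=lambda p: fisher_similarity(*p), reverse=True)
            batch = untested[:samples_per_iteration]

            # Propagation stage: two images that match a common neighbor are close
            # in graph distance, so they are likely to match each other as well.
            neighbors = {}
            for i, j in edges:
                neighbors.setdefault(i, set()).add(j)
                neighbors.setdefault(j, set()).add(i)
            for nbrs in neighbors.values():
                batch.extend((a, b) for a in nbrs for b in nbrs
                             if a < b and (a, b) not in tested)

            for pair in batch:
                if pair not in tested:
                    tested.add(pair)
                    if verify_match(*pair):
                        edges.add(pair)
        return edges
    ```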

    Eye-CU: Sleep Pose Classification for Healthcare using Multimodal Multiview Data

    Manual analysis of the body poses of bed-ridden patients requires staff to continuously track and record patient poses. Two limitations in the dissemination of pose-related therapies are scarce human resources and unreliable automated systems. This work addresses these issues by introducing a new method and a new system for robust automated classification of sleep poses in an Intensive Care Unit (ICU) environment. The new method, coupled-constrained Least-Squares (cc-LS), uses multimodal and multiview (MM) data and finds the set of modality trust values that minimizes the difference between expected and estimated labels. The new system, Eye-CU, is an affordable multi-sensor modular system for unobtrusive data collection and analysis in healthcare. Experimental results indicate that the performance of cc-LS matches that of existing methods in ideal scenarios, and that it outperforms the latest techniques in challenging scenarios: by 13% in those with poor illumination and by 70% in those with both poor illumination and occlusions. Results also show that a reduced Eye-CU configuration can classify poses without pressure information with only a slight drop in performance.
    Comment: Ten-page manuscript including references and ten figures
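
    To illustrate the idea of fitting modality trust values, here is a generic nonnegative least-squares sketch: it minimizes the difference between expected and estimated labels and weights each modality by its fitted trust. This is a simplified stand-in for the paper's coupled-constrained formulation, and all names below are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def fit_modality_trusts(scores, expected):
        """Fit modality trust values by constrained least squares.

        scores   : (n_samples, n_modalities) per-modality scores for the true label
        expected : (n_samples,) expected label indicators (e.g., all ones)

        Solves min_w ||scores @ w - expected||^2 subject to w >= 0, then
        normalizes the trusts to sum to one.
        """
        w, _ = nnls(scores, expected)
        if w.sum() == 0:
            return np.full(scores.shape[1], 1.0 / scores.shape[1])
        return w / w.sum()

    def fuse_and_classify(per_modality_class_scores, trusts):
        """Weight each modality's class scores by its trust and take the arg max."""
        fused = trusts @ per_modality_class_scores  # (n_modalities, n_classes) -> (n_classes,)
        return int(np.argmax(fused))
    ```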

    Estimating Confidences for Classifier Decisions using Extreme Value Theory

    Classifiers generally lack a mechanism to compute decision confidences. As humans, when we sense that the confidence for a decision is low, we either take additional actions to improve our confidence or dismiss the decision. While this reasoning is natural to us, it is currently missing from most common decision algorithms (i.e., classifiers) used in computer vision and machine learning, which limits a machine's ability to take further actions to either improve a result or dismiss the decision. In this thesis, we design algorithms for estimating the confidence of decisions made by classifiers such as nearest-neighbor classifiers or support vector machines. We develop these algorithms by leveraging extreme value theory, using the statistical models that this theory provides for the classifier's decision scores on correct and incorrect outcomes. Our proposed algorithms exploit these statistical models to compute a correctness belief: the probability that the classifier's decision is correct. In this work, we show how these beliefs can be used to filter bad classifications and to speed up robust estimation via sample-and-consensus algorithms, which are used in computer vision for estimating camera motion and for reconstructing a scene's 3D structure. Moreover, we show how these beliefs improve the classification accuracy of one-class support vector machines. In conclusion, we show that extreme value theory leads to powerful mechanisms that can predict the correctness of a classifier's decision.
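
    As a rough illustration of how extreme value theory yields a correctness belief, the sketch below fits a Weibull model to the tail of non-match decision scores, in the spirit of meta-recognition-style confidence estimation. The tail size and shifting scheme are assumptions for the sketch, not the thesis's exact models. A score whose CDF value is near 1 lies far beyond the non-match tail, so the decision can be trusted; a low value would justify dismissing it.

    ```python
    import numpy as np
    from scipy.stats import weibull_min

    def fit_tail_model(nonmatch_scores, tail_size=20):
        """Fit a Weibull distribution to the largest non-match scores (the tail)."""
        tail = np.sort(np.asarray(nonmatch_scores, dtype=float))[-tail_size:]
        shift = tail.min() - 1e-6           # keep all exceedances strictly positive
        shape, _, scale = weibull_min.fit(tail - shift, floc=0.0)
        return shape, shift, scale

    def correctness_belief(score, model):
        """Belief that a decision is correct: the probability that its score
        exceeds what the non-match tail model would produce."""
        shape, shift, scale = model
        return float(weibull_min.cdf(score - shift, shape, loc=0.0, scale=scale))
    ```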

    Optimizing Fiducial Marker Placement for Improved Visual Localization

    Adding fiducial markers to a scene is a well-known strategy for making visual localization algorithms more robust. Traditionally, marker locations are selected by humans who are familiar with visual localization techniques. This paper explores the problem of automatic marker placement within a scene: given a predetermined set of markers and a scene model, we compute optimized marker positions within the scene that improve visual localization accuracy. Our main contribution is a novel framework for modeling camera localizability that incorporates both natural scene features and artificial fiducial markers added to the scene. We present optimized marker placement (OMP), a greedy algorithm based on this camera localizability framework. We have also designed a simulation framework for testing marker placement algorithms on 3D models and images generated from synthetic scenes. We evaluate OMP within this testbed and demonstrate an improvement in the localization rate of up to 20 percent on three different scenes.
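
    The greedy structure of such a placement algorithm can be sketched as follows; `is_localizable` is a hypothetical stand-in for the paper's localizability framework (which also accounts for natural scene features), so this is an illustration of the greedy selection pattern, not the published OMP algorithm.

    ```python
    def greedy_marker_placement(candidate_positions, test_poses, is_localizable, n_markers):
        """Greedy sketch in the spirit of OMP: at each step, place the marker
        that makes the largest number of additional test poses localizable.

        is_localizable(pose, placed_markers) -> bool is assumed to be provided.
        """
        placed = []
        for _ in range(n_markers):
            covered = sum(is_localizable(p, placed) for p in test_poses)
            best, best_gain = None, 0
            for c in candidate_positions:
                if c in placed:
                    continue
                gain = sum(is_localizable(p, placed + [c]) for p in test_poses) - covered
                if gain > best_gain:
                    best, best_gain = c, gain
            if best is None:   # no remaining candidate adds coverage; stop early
                break
            placed.append(best)
        return placed
    ```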