
    Cross-Domain Image Matching with Deep Feature Maps

    We investigate the problem of automatically determining what type of shoe left an impression found at a crime scene. This recognition problem is made difficult by the variability in types of crime scene evidence (ranging from traces of dust or oil on hard surfaces to impressions made in soil) and the lack of comprehensive databases of shoe outsole tread patterns. We find that mid-level features extracted by pre-trained convolutional neural nets are surprisingly effective descriptors for this specialized domain. However, the choice of similarity measure for matching exemplars to a query image is essential to good performance. For matching multi-channel deep features, we propose the use of multi-channel normalized cross-correlation and analyze its effectiveness. Our proposed metric significantly improves performance in matching crime scene shoeprints to laboratory test impressions. We also show its effectiveness in other cross-domain image retrieval problems: matching facade images to segmentation labels and aerial photos to map images. Finally, we introduce a discriminatively trained variant and fine-tune our system through our proposed metric, obtaining state-of-the-art performance.
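    The multi-channel normalized cross-correlation the abstract proposes can be sketched roughly as follows: normalize each feature channel to zero mean and unit variance, correlate corresponding channels, and average the per-channel scores. This is an illustrative sketch assuming aligned feature maps of equal size, not the paper's actual implementation (which operates over spatial search windows).

```python
import numpy as np

def mcncc(query, exemplar, eps=1e-8):
    """Multi-channel normalized cross-correlation (illustrative sketch).

    query, exemplar: (H, W, C) feature maps, e.g. mid-level activations
    from a pre-trained CNN. Each channel is normalized to zero mean and
    unit variance before correlation; per-channel scores are averaged.
    """
    scores = []
    for c in range(query.shape[2]):
        q = query[:, :, c]
        e = exemplar[:, :, c]
        q = (q - q.mean()) / (q.std() + eps)  # per-channel normalization
        e = (e - e.mean()) / (e.std() + eps)
        scores.append(np.mean(q * e))         # correlation of this channel
    return float(np.mean(scores))             # average over channels
```

    Identical feature maps score close to 1 and negated maps close to -1, giving a similarity bounded in [-1, 1] that is insensitive to per-channel gain and offset.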

    Automatic co-registration of aerial imagery and untextured model data utilizing average shading gradients

    The comparison of current image data with existing 3D model data of a scene provides an efficient method to keep models up to date. In order to transfer information between 2D and 3D data, a preliminary co-registration is necessary. In this paper, we present a concept to automatically co-register aerial imagery and untextured 3D model data. To refine a given initial camera pose, our algorithm computes dense correspondence fields using SIFT flow between gradient representations of the model and camera image, from which 2D–3D correspondences are obtained. These correspondences are then used in an iterative optimization scheme to refine the initial camera pose by minimizing the reprojection error. Since it is assumed that the model does not contain texture information, our algorithm is built upon an existing method based on Average Shading Gradients (ASG) to generate gradient images based on raw geometry information only. We apply our algorithm to co-register aerial photographs to an untextured, noisy mesh model. We have investigated different magnitudes of input error and show that the proposed approach can reduce the final reprojection error to a minimum of 1.27 ± 0.54 pixels, which is less than 10% of its initial value. Furthermore, our evaluation shows that our approach outperforms the accuracy of a standard Iterative Closest Point (ICP) implementation.
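    The reprojection error minimized in the pose refinement above can be sketched with a plain pinhole camera model: project the 3D model points through the current pose estimate and measure the mean pixel distance to the observed 2D correspondences. The function name and the simple (distortion-free) projection model are illustrative assumptions, not the paper's code.

```python
import numpy as np

def reprojection_error(points_3d, points_2d, K, R, t):
    """Mean reprojection error in pixels for a pinhole camera (sketch).

    points_3d: (N, 3) model points, points_2d: (N, 2) image observations,
    K: 3x3 intrinsics, R: 3x3 rotation, t: (3,) translation.
    """
    proj = K @ (R @ points_3d.T + t[:, None])  # project into the image
    proj = (proj[:2] / proj[2]).T              # perspective divide
    return float(np.mean(np.linalg.norm(proj - points_2d, axis=1)))
```

    An iterative scheme as described in the abstract would repeatedly adjust (R, t) to drive this quantity down, e.g. with a Gauss-Newton or Levenberg-Marquardt step.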

    Automatic alignment of paintings and photographs depicting a 3D scene

    This paper addresses the problem of automatically aligning historical architectural paintings with 3D models obtained using multi-view stereo technology from modern photographs. This is a challenging task because of the variations in appearance, geometry, color and texture due to environmental changes over time, the non-photorealistic nature of architectural paintings, and differences in the viewpoints used by the painters and photographers. Our contribution is two-fold: (i) we combine the gist descriptor [23] with the view-synthesis/retrieval of Irschara et al. [14] to obtain a coarse alignment of the painting to the 3D model by view-sensitive retrieval; (ii) we develop an ICP-like viewpoint refinement procedure, where 3D surface orientation discontinuities (folds and creases) and view-dependent occlusion boundaries are rendered from the automatically obtained and noisy 3D model in a view-dependent manner and matched to contours extracted from the paintings. We demonstrate the alignment of 19th-century architectural watercolors of the Casa di Championnet in Pompeii with a 3D model constructed from modern photographs using the PMVS public-domain multi-view stereo software.
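    The core least-squares step of an ICP-like refinement, as in the procedure above, can be sketched in 2D: given matched contour points, recover the rigid transform that best maps one set onto the other (the Kabsch/Procrustes solution). This is a generic sketch of the ICP inner step, not the paper's viewpoint-refinement code; a real ICP would re-match points and iterate.

```python
import numpy as np

def rigid_align_2d(src, dst):
    """One least-squares step of an ICP-style 2D alignment (sketch).

    Given matched points src -> dst, both (N, 2), recover rotation R and
    translation t minimizing ||dst - (src @ R.T + t)|| via the Kabsch
    (orthogonal Procrustes) solution.
    """
    mu_s, mu_d = src.mean(0), dst.mean(0)
    S, D = src - mu_s, dst - mu_d           # center both point sets
    U, _, Vt = np.linalg.svd(S.T @ D)       # SVD of the cross-covariance
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:                # guard against reflections
        Vt[-1] *= -1
        R = (U @ Vt).T
    t = mu_d - R @ mu_s
    return R, t
```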

    Cross-dimensional Analysis for Improved Scene Understanding

    Visual data have taken up an increasingly large role in our society. Most people have instant access to a high quality camera in their pockets, and we are taking more pictures than ever before. Meanwhile, through the advent of better software and hardware, the prevalence of 3D data is also rapidly expanding, and demand for data and analysis methods is burgeoning in a wide range of industries. The amount of information about the world implicitly contained in this stream of data is staggering. However, as these images and models are created in uncontrolled circumstances, the extraction of any structured information from the unstructured pixels and vertices is highly non-trivial. To aid this process, we note that the 2D and 3D data modalities are similar in content, but intrinsically different in form. Exploiting their complementary nature, we can investigate certain problems in a cross-dimensional fashion - for example, where 2D lacks expressiveness, 3D can supplement it; where 3D lacks quality, 2D can provide it. In this thesis, we explore three analysis tasks with this insight as our point of departure. First, we show that by considering the tasks of 2D and 3D retrieval jointly we can improve performance of 3D retrieval while simultaneously enabling interesting new ways of exploring 2D retrieval results. Second, we discuss a compact representation of indoor scenes called a "scene map", which represents the objects in a scene using a top-down map of object locations. We propose a method for automatically extracting such scene maps from single 2D images using a database of 3D models for training. Finally, we seek to convert single 2D images to full 3D scenes using a database of 3D models as input. Occlusion is handled by modelling object context explicitly, allowing us to identify and pose objects that would otherwise be too occluded to make inferences about. 
For all three tasks, we show the utility of our cross-dimensional insight by evaluating each method extensively and showing favourable performance over baseline methods.
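    The "scene map" representation described above can be sketched as a minimal data structure: object categories placed at top-down floor-plan coordinates. The class and method names are illustrative assumptions, not the thesis's code.

```python
from dataclasses import dataclass, field

@dataclass
class SceneMap:
    """Top-down 'scene map' of an indoor scene (illustrative sketch).

    Each entry places an object category at a floor-plan (x, y) position,
    abstracting away appearance and keeping only layout.
    """
    objects: list = field(default_factory=list)  # (label, x, y) tuples

    def add(self, label, x, y):
        """Place one object of the given category on the floor plan."""
        self.objects.append((label, float(x), float(y)))

    def query(self, label):
        """All floor-plan positions of a given object category."""
        return [(x, y) for (l, x, y) in self.objects if l == label]
```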

    3D Recording and Interpretation for Maritime Archaeology

    This open access peer-reviewed volume was inspired by the UNESCO UNITWIN Network for Underwater Archaeology International Workshop held at Flinders University, Adelaide, Australia in November 2016. Content is based on, but not limited to, the work presented at the workshop, which was dedicated to 3D recording and interpretation for maritime archaeology. The volume consists of contributions from leading international experts as well as up-and-coming early career researchers from around the globe. The content of the book includes recording and analysis of maritime archaeology through emerging technologies, including both practical and theoretical contributions. Topics include photogrammetric recording, laser scanning, marine geophysical 3D survey techniques, virtual reality, 3D modelling and reconstruction, data integration and Geographic Information Systems. The principal incentive for this publication is the ongoing rapid shift in the methodologies of maritime archaeology within recent years and a marked increase in the use of 3D and digital approaches. This convergence of digital technologies such as underwater photography and photogrammetry, 3D sonar, 3D virtual reality, and 3D printing has highlighted a pressing need for these new methodologies to be considered together, both in terms of defining the state-of-the-art and for consideration of future directions. As a scholarly publication, the audience for the book includes students and researchers, as well as professionals working in various aspects of archaeology, heritage management, education, museums, and public policy. It will be of special interest to those working in the field of coastal cultural resource management and underwater archaeology but will also be of broader interest to anyone interested in archaeology and to those in other disciplines who are now engaging with 3D recording and visualization.