
    Unwind: Interactive Fish Straightening

    The ScanAllFish project is a large-scale effort to scan all the world's 33,100 known species of fishes. It has already generated thousands of volumetric CT scans of fish species, which are available on open-access platforms such as the Open Science Framework. To achieve the scanning rate required for a project of this magnitude, many specimens are grouped together into a single tube and scanned all at once. The resulting data contain many fish that are often bent and twisted to fit into the scanner. Our system, Unwind, is a novel interactive visualization and processing tool which extracts, unbends, and untwists volumetric images of fish with minimal user interaction. Our approach enables scientists to interactively unwarp these volumes to remove the undesired torque and bending, using a piecewise-linear skeleton extracted by averaging isosurfaces of a harmonic function connecting the head and tail of each fish. The result is a volumetric dataset of an individual, straight fish in a canonical pose defined by the marine biologist expert user. We have developed Unwind in collaboration with a team of marine biologists; our system has been deployed in their labs and is presently being used for dataset construction, biomechanical analysis, and the generation of figures for scientific publication.
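    The abstract's core idea of a skeleton obtained by averaging isosurfaces of a harmonic function can be illustrated in a few lines. This is not the Unwind implementation; it is a minimal sketch under simplifying assumptions: a boolean voxel mask stands in for the segmented fish, the harmonic function is solved by plain Jacobi iteration with Dirichlet values 0 and 1 on hypothetical head and tail regions, and each skeleton vertex is the centroid of one isovalue band.

    ```python
    import numpy as np

    def harmonic_skeleton(mask, head_mask, tail_mask, n_levels=8, iters=3000):
        """Sketch: solve Laplace's equation inside `mask` (Jacobi iteration)
        with u=0 on the head region and u=1 on the tail region, then take
        the centroid of the voxels in each isovalue band of u as one
        vertex of a piecewise-linear skeleton."""
        u = np.zeros(mask.shape)
        m = mask.astype(float)
        for _ in range(iters):
            num = np.zeros_like(u)
            den = np.zeros_like(u)
            for axis in range(3):           # average over the 6-neighbourhood,
                for shift in (1, -1):       # counting only in-mask neighbours
                    num += np.roll(u * m, shift, axis)
                    den += np.roll(m, shift, axis)
            u = np.where(den > 0, num / np.maximum(den, 1.0), 0.0)
            u[head_mask] = 0.0              # Dirichlet boundary conditions
            u[tail_mask] = 1.0
        coords = np.argwhere(mask)
        vals = u[mask]
        levels = np.linspace(0.0, 1.0, n_levels + 2)[1:-1]
        skeleton = [coords[(vals >= lo) & (vals < hi)].mean(axis=0)
                    for lo, hi in zip(levels[:-1], levels[1:])
                    if ((vals >= lo) & (vals < hi)).any()]
        return np.array(skeleton)
    ```

    For a bent specimen the harmonic isosurfaces sweep smoothly from head to tail regardless of the pose, so their centroids trace the curved midline that the unbending step can then straighten.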

    Symmetry-guided nonrigid registration: the case for distortion correction in multidimensional photoemission spectroscopy

    Image symmetrization is an effective strategy to correct symmetry distortion in experimental data for which symmetry is essential in the subsequent analysis. In the process, a coordinate transform, the symmetrization transform, is required to undo the distortion. The transform may be determined by image registration (i.e. alignment) with symmetry constraints imposed in the registration target and in the iterative parameter tuning, which we call symmetry-guided registration. An example use case of image symmetrization is found in electronic band structure mapping by multidimensional photoemission spectroscopy, which employs a 3D time-of-flight detector to measure electrons sorted into the momentum (k_x, k_y) and energy (E) coordinates. In reality, imperfect instrument design, sample geometry and experimental settings cause distortion of the photoelectron trajectories and, therefore, the symmetry in the measured band structure, which hinders the full understanding and use of the volumetric datasets. We demonstrate that symmetry-guided registration can correct the symmetry distortion in the momentum-resolved photoemission patterns. Using proposed symmetry metrics, we show quantitatively that the iterative approach to symmetrization outperforms its non-iterative counterpart in the restored symmetry of the outcome while preserving the average shape of the photoemission pattern. Our approach is generalizable to distortion corrections in different types of symmetries and should also find applications in other experimental methods that produce images with similar features.
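    The paper's symmetry metrics and registration pipeline are not reproduced here, but the two ingredients it contrasts can be sketched for the simplest case: a hypothetical C4 (four-fold rotational) symmetry metric, and the non-iterative group-average symmetrization that serves as the baseline. All names and the choice of C4 are illustrative assumptions.

    ```python
    import numpy as np

    def c4_symmetry_error(img):
        """Hypothetical symmetry metric: mean absolute deviation of a square
        image from its average over the C4 rotation group (0, 90, 180, 270
        degrees). It is zero iff the image is exactly four-fold symmetric
        about its centre."""
        rots = [np.rot90(img, k) for k in range(4)]
        mean = np.mean(rots, axis=0)
        return float(np.mean([np.abs(r - mean).mean() for r in rots]))

    def c4_symmetrize(img):
        """Non-iterative baseline: replace the image by its group average.
        This forces symmetry but, unlike the registration-based approach in
        the abstract, it blurs features instead of undoing the distortion."""
        return np.mean([np.rot90(img, k) for k in range(4)], axis=0)
    ```

    A symmetry-guided registration would instead search for a coordinate transform that minimizes such a metric, so the corrected image is symmetric without averaging away the underlying band features.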

    Building profile reconstruction using TerraSAR-X data time-series and tomographic techniques

    This work aims to demonstrate the potential of SAR Tomography (TomoSAR) techniques for the 3-D characterization (height, reflectivity, time stability) of built-up areas using data acquired by the satellite sensor TerraSAR-X. For this purpose, 19 TerraSAR-X single-polarimetric multibaseline images acquired over the Paris urban area have been processed by applying classical nonparametric (Beamforming and Capon) and parametric (MUSIC) spectral estimation techniques.
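    The two nonparametric estimators named in the abstract can be sketched for a single pixel. This is a textbook formulation, not the authors' processing chain: Beamforming projects the sample covariance of the multibaseline stack onto steering vectors, while Capon inverts a (diagonally loaded) covariance to sharpen the height response. The variable names, loading factor, and simulated geometry are assumptions for illustration.

    ```python
    import numpy as np

    def tomosar_spectra(stack, kz, heights, load=1e-3):
        """Sketch of classical TomoSAR spectral estimation for one pixel.
        stack:   (N, L) complex SLC samples (N baselines, L looks)
        kz:      (N,) height-to-phase wavenumbers of the baselines
        heights: candidate elevations z to scan
        Returns the (beamforming, capon) power spectra over `heights`."""
        N, L = stack.shape
        R = stack @ stack.conj().T / L                        # sample covariance
        Ri = np.linalg.inv(R + load * np.trace(R).real / N * np.eye(N))
        bf, capon = [], []
        for z in heights:
            a = np.exp(1j * kz * z) / np.sqrt(N)              # steering vector
            bf.append((a.conj() @ R @ a).real)                # Beamforming
            capon.append(1.0 / (a.conj() @ Ri @ a).real)      # Capon
        return np.array(bf), np.array(capon)
    ```

    Peaks of these spectra along z give the heights of scatterers inside the resolution cell, which is how building profiles are reconstructed from the 19-image stack; MUSIC, the parametric alternative, would instead separate the covariance into signal and noise subspaces.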

    Semantic Visual Localization

    Robust visual localization under a wide range of viewing conditions is a fundamental problem in computer vision. Handling the difficult cases of this problem is not only very challenging but also of high practical relevance, e.g., in the context of life-long localization for augmented reality or autonomous robots. In this paper, we propose a novel approach based on a joint 3D geometric and semantic understanding of the world, enabling it to succeed under conditions where previous approaches failed. Our method leverages a novel generative model for descriptor learning, trained on semantic scene completion as an auxiliary task. The resulting 3D descriptors are robust to missing observations by encoding high-level 3D geometric and semantic information. Experiments on several challenging large-scale localization datasets demonstrate reliable localization under extreme viewpoint, illumination, and geometry changes.