
    3D Matting: A Soft Segmentation Method Applied in Computed Tomography

    Three-dimensional (3D) images, such as CT, MRI, and PET, are common in medical imaging applications and important in clinical diagnosis. Semantic ambiguity is a typical feature of many medical image labels. It can be caused by many factors, such as the imaging properties, pathological anatomy, and the weak representation of binary masks, which poses challenges for accurate 3D segmentation. In 2D medical images, characterizing lesions with soft masks generated by image matting, rather than binary masks, provides rich semantic information, describes the structural characteristics of lesions more comprehensively, and thus benefits subsequent diagnoses and analyses. In this work, we introduce image matting into 3D scenes to describe lesions in 3D medical images. Research on image matting in 3D modalities is limited, and no high-quality annotated 3D matting dataset exists, which slows the development of data-driven, deep-learning-based methods. To address this issue, we construct the first 3D medical matting dataset and verify its validity through quality control and downstream experiments in lung nodule classification. We then adapt four state-of-the-art 2D image matting algorithms to 3D scenes and further customize the methods for CT images. We also propose the first end-to-end deep 3D matting network and implement a solid 3D medical image matting benchmark, which will be released to encourage further research. (Comment: 12 pages, 7 figures)
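    The soft-mask idea described above can be sketched with the classic matting (compositing) equation applied voxel-wise, I = alpha * F + (1 - alpha) * B. The shapes, intensities, and the `composite_3d` helper below are illustrative assumptions, not the paper's method or data:

```python
import numpy as np

def composite_3d(alpha: np.ndarray, fg: np.ndarray, bg: np.ndarray) -> np.ndarray:
    """Voxel-wise matting composite: alpha in [0, 1] replaces a hard 0/1 mask."""
    assert alpha.shape == fg.shape == bg.shape
    return alpha * fg + (1.0 - alpha) * bg

# Toy 3D volume: a soft spherical "lesion" with a gradual boundary (the soft
# mask), inside a uniform background. All values here are made up.
zz, yy, xx = np.mgrid[:16, :16, :16]
dist = np.sqrt((zz - 8) ** 2 + (yy - 8) ** 2 + (xx - 8) ** 2)
alpha = np.clip(1.0 - dist / 6.0, 0.0, 1.0)  # soft edge instead of a hard cut
fg = np.full(alpha.shape, 1.0)               # lesion intensity
bg = np.full(alpha.shape, 0.2)               # background intensity
vol = composite_3d(alpha, fg, bg)
```

    The point of the soft mask is visible at the lesion boundary, where voxels take intermediate values rather than jumping between foreground and background.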

    Style transfer for headshot portraits

    Headshot portraits are a popular subject in photography, but achieving a compelling visual style requires advanced skills that a casual photographer lacks. Further, algorithms that automate or assist the stylization of generic photographs do not perform well on headshots, due to the feature-specific, local retouching that a professional photographer typically applies to such portraits. We introduce a technique to transfer the style of an example headshot photo onto a new one, allowing one to easily reproduce the look of renowned artists. At the core of our approach is a new multiscale technique to robustly transfer the local statistics of an example portrait onto a new one. This technique matches properties such as the local contrast and the overall lighting direction while being tolerant to the unavoidable differences between the faces of two different people. Additionally, because artists sometimes produce entire headshot collections in a common style, we show how to automatically find a good example to use as a reference for a given portrait, enabling style transfer without the user having to search for a suitable example for each input. We demonstrate our approach on data taken in a controlled environment as well as on a large set of photos downloaded from the Internet, and show that we can successfully handle styles by a variety of different artists.
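    The local-statistics matching at the core of the approach can be sketched for a single band: match the local mean of the input to the reference, then rescale the input's local detail by the ratio of local energies. This is a minimal single-level illustration only; the full method works per band of a multiscale (Laplacian pyramid) decomposition, the box blur stands in for the Gaussian weighting used in practice, and the gain clip is an assumed robustness heuristic:

```python
import numpy as np

def box_blur(img: np.ndarray, r: int) -> np.ndarray:
    """Separable box blur (stand-in for a Gaussian local-weighting window)."""
    k = 2 * r + 1
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda m: np.convolve(m, kernel, mode="same"), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, kernel, mode="same"), 1, out)

def transfer_local_stats(inp: np.ndarray, ref: np.ndarray,
                         r: int = 4, eps: float = 1e-4) -> np.ndarray:
    """Match local mean and local contrast (energy) of `inp` to `ref`."""
    mu_i, mu_r = box_blur(inp, r), box_blur(ref, r)
    hi_i, hi_r = inp - mu_i, ref - mu_r                # local detail layers
    e_i = np.sqrt(box_blur(hi_i ** 2, r)) + eps        # local energy of input
    e_r = np.sqrt(box_blur(hi_r ** 2, r))              # local energy of reference
    gain = np.clip(e_r / e_i, 0.1, 3.0)                # assumed gain limits
    return mu_r + hi_i * gain                          # ref lighting + rescaled detail
```

    Intuitively, `mu_r` carries the reference's overall lighting, while the gain map transfers its local contrast onto the input's own detail.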

    An evaluation of imagery from an unmanned aerial vehicle (UAV) for the mapping of intertidal macroalgae on Seal Sands, Tees Estuary, UK

    The Seal Sands area of Teesmouth is designated a Special Protection Area under the Habitats Directive because guideline concentrations of nutrients in coastal waters are exceeded. This may be responsible for extensive growth of the green filamentous macroalga Enteromorpha sp., and the literature suggests that algal cover in the intertidal zone is detrimental to the feeding behaviour of wading bird species. Although numerous studies have highlighted the causes and consequences of macroalgal cover, the complex spatial and temporal dynamics of macroalgal bloom growth are not as well understood, and hence there is a need for a precise and cost-effective method for mapping and quantifying algal biomass. Previous studies have highlighted several image processing techniques that could be applied to high-resolution airborne imagery in order to predict algal biomass. To test these methods, high-resolution imagery was acquired in the Seal Sands area using a lightweight SmartPlanes SmartOne unmanned aerial vehicle (UAV) equipped with a near-infrared-sensitive 5-megapixel Canon IXUS compact camera, a standard 6-megapixel Canon IXUS compact camera and a Garmin Geko 201 handheld GPS device. Imagery was acquired in November 2006 and June 2007 in order to examine the spectral response of Enteromorpha sp. at different stages of the macroalgal growth cycle. Images were mosaicked and georeferenced using ground control points located with a Leica 1200 differential GPS, and processed to allow for analysis of their spectral and textural properties. Samples of macroalgal cover were collected, georeferenced and their dry biomass content obtained for ground truthing.
Although textural entropy and inertia did not correlate significantly with macroalgal biomass, the normalised green-red difference index (NGRDI), the normalised difference vegetation index (NDVI) and colour saturation computed on the imagery showed a good degree of linear correlation with Enteromorpha sp. dry weight, achieving coefficients of determination in excess of r^2 = 0.6 for both the November 2006 and June 2007 image sets. Linear regression was used to establish predictive models to estimate macroalgal biomass from image spectral properties. Enteromorpha sp. biomass estimates of 71.4 g DW m^-2 and 7.9 g DW m^-2 were obtained for the November 2006 and June 2007 data acquisition sessions respectively. Despite a lack of previous biomass quantification for Seal Sands, the favourable performance of a UAV in terms of operating cost and man-hours required for image acquisition suggests that unmanned aerial vehicles may present a viable method for mapping intertidal algal biomass on an annual basis.
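    The two vegetation indices and the regression step above can be sketched directly. NDVI and NGRDI are standard per-pixel normalised-difference formulas; the index and biomass values below are synthetic illustrations, not the study's data:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Normalised difference vegetation index: (NIR - R) / (NIR + R)."""
    return (nir - red) / (nir + red + eps)

def ngrdi(green: np.ndarray, red: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Normalised green-red difference index: (G - R) / (G + R)."""
    return (green - red) / (green + red + eps)

# Fit biomass ~ index by ordinary least squares, as in the study's predictive
# models. The paired (index, biomass) samples here are made up for illustration.
idx = np.array([0.10, 0.25, 0.40, 0.55, 0.70])
bio = np.array([5.0, 20.0, 38.0, 55.0, 72.0])   # g DW m^-2, synthetic
slope, intercept = np.polyfit(idx, bio, 1)
pred = slope * idx + intercept
r2 = 1 - np.sum((bio - pred) ** 2) / np.sum((bio - bio.mean()) ** 2)
```

    With ground-truthed biomass samples in place of the synthetic arrays, `slope` and `intercept` define the predictive model, and `r2` is the coefficient of determination reported in the study.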

    Data driven approaches for investigating molecular heterogeneity of the brain

    It has been proposed that one of the clearest organizing principles for most sensory systems is the existence of parallel subcircuits and processing streams that form orderly and systematic mappings from stimulus space to neurons. Although the spatial heterogeneity of the early olfactory circuitry has long been recognized, we know comparatively little about the circuits that propagate sensory signals downstream. Investigating the potential modularity of the bulb’s intrinsic circuits is a difficult task, as the termination patterns of converging projections cannot, as with the bulb’s inputs, be feasibly mapped. Thus, if such circuit motifs exist, their detection essentially relies on identifying differential gene expression, or “molecular signatures,” that may demarcate functional subregions. With the arrival of comprehensive (whole-genome, cellular-resolution) datasets in biology and neuroscience, it is now possible to carry out large-scale investigations, making particular use of the densely catalogued, whole-genome expression maps of the Allen Brain Atlas to systematically investigate the molecular topography of the olfactory bulb’s intrinsic circuits. To address the challenges associated with high-throughput, high-dimensional datasets, a deep learning approach will form the backbone of our informatics pipeline. In the proposed work, we test the hypothesis that the bulb’s intrinsic circuits are parceled into distinct, parallel modules that can be defined by genome-wide patterns of expression. In pursuit of this aim, our deep learning framework will facilitate the group registration of the mitral cell layers of ~50,000 in situ olfactory bulb circuits to test this hypothesis.