
    Joint Object and Part Segmentation using Deep Learned Potentials

    Segmenting semantic objects from images and parsing them into their respective semantic parts are fundamental steps towards detailed object understanding in computer vision. In this paper, we propose a joint solution that tackles semantic object and part segmentation simultaneously, in which higher object-level context is provided to guide part segmentation, and more detailed part-level localization is utilized to refine object segmentation. Specifically, we first introduce the concept of semantic compositional parts (SCP), in which similar semantic parts are grouped and shared among different objects. A two-channel fully convolutional network (FCN) is then trained to provide the SCP and object potentials at each pixel. At the same time, a compact set of segments can also be obtained from the SCP predictions of the network. Given the potentials and the generated segments, in order to explore long-range context, we finally construct an efficient fully connected conditional random field (FCRF) to jointly predict the final object and part labels. Extensive evaluation on three different datasets shows that our approach can mutually enhance the performance of object and part segmentation, and outperforms the current state-of-the-art on both tasks.
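
    To make the pipeline concrete, here is a minimal PyTorch sketch of the two-channel idea described above: a shared encoder produces per-pixel object potentials and semantic compositional part (SCP) potentials, which would then serve as unaries for the fully connected CRF. The backbone, layer sizes, and class counts are assumptions for illustration, not the authors' implementation.

    import torch
    import torch.nn as nn

    class TwoChannelFCN(nn.Module):
        def __init__(self, num_object_classes=21, num_scp_classes=30):
            super().__init__()
            # Shared encoder (a stand-in for a full FCN backbone).
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            )
            # Two prediction heads: object potentials and SCP potentials.
            self.object_head = nn.Conv2d(128, num_object_classes, 1)
            self.scp_head = nn.Conv2d(128, num_scp_classes, 1)
            self.upsample = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)

        def forward(self, x):
            feats = self.encoder(x)
            # Per-pixel unaries that a downstream fully connected CRF could consume.
            return self.upsample(self.object_head(feats)), self.upsample(self.scp_head(feats))

    if __name__ == "__main__":
        model = TwoChannelFCN()
        obj_pot, scp_pot = model(torch.randn(1, 3, 256, 256))
        print(obj_pot.shape, scp_pot.shape)  # (1, 21, 256, 256), (1, 30, 256, 256)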

    Learning Material-Aware Local Descriptors for 3D Shapes

    Material understanding is critical for design, geometric modeling, and analysis of functional objects. We enable material-aware 3D shape analysis by employing a projective convolutional neural network architecture to learn material-aware descriptors from view-based representations of 3D points for point-wise material classification or material-aware retrieval. Unfortunately, only a small fraction of shapes in 3D repositories are labeled with physical materials, posing a challenge for learning methods. To address this challenge, we crowdsource a dataset of 3080 3D shapes with part-wise material labels. We focus on furniture models which exhibit interesting structure and material variability. In addition, we also contribute a high-quality expert-labeled benchmark of 115 shapes from Herman-Miller and IKEA for evaluation. We further apply a mesh-aware conditional random field, which incorporates rotational and reflective symmetries, to smooth our local material predictions across neighboring surface patches. We demonstrate the effectiveness of our learned descriptors for automatic texturing, material-aware retrieval, and physical simulation. The dataset and code will be publicly available. Comment: 3DV 201
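
    As an illustration of the view-based descriptor idea, the sketch below max-pools CNN features computed from several rendered views of a 3D point into a single material-aware descriptor, which is then classified. The rendering step is omitted, and the backbone, feature size, and number of material classes are assumptions rather than the paper's architecture.

    import torch
    import torch.nn as nn

    class ViewPooledMaterialNet(nn.Module):
        def __init__(self, num_materials=6, feat_dim=128):
            super().__init__()
            # Per-view CNN applied to patches centered on the projected 3D point.
            self.view_cnn = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, feat_dim), nn.ReLU(inplace=True),
            )
            self.classifier = nn.Linear(feat_dim, num_materials)

        def forward(self, views):            # views: (batch, n_views, 3, H, W)
            b, v, c, h, w = views.shape
            f = self.view_cnn(views.reshape(b * v, c, h, w)).reshape(b, v, -1)
            descriptor = f.max(dim=1).values  # pool across views -> one descriptor per point
            return self.classifier(descriptor), descriptor

    if __name__ == "__main__":
        logits, desc = ViewPooledMaterialNet()(torch.randn(4, 12, 3, 64, 64))
        print(logits.shape, desc.shape)  # (4, 6), (4, 128)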

    Deformable Kernel Networks for Joint Image Filtering

    Joint image filters are used to transfer structural details from a guidance image, used as a prior, to a target image, in tasks such as enhancing spatial resolution and suppressing noise. Previous methods based on convolutional neural networks (CNNs) combine nonlinear activations of spatially-invariant kernels to estimate structural details and regress the filtering result. In this paper, we instead learn explicitly sparse and spatially-variant kernels. We propose a CNN architecture and its efficient implementation, called the deformable kernel network (DKN), that outputs sets of neighbors and the corresponding weights adaptively for each pixel. The filtering result is then computed as a weighted average. We also propose a fast version of DKN that runs about seventeen times faster for an image of size 640 x 480. We demonstrate the effectiveness and flexibility of our models on the tasks of depth map upsampling, saliency map upsampling, cross-modality image restoration, texture removal, and semantic segmentation. In particular, we show that the weighted averaging process with sparsely sampled 3 x 3 kernels outperforms the state of the art by a significant margin in all cases. Comment: arXiv admin note: substantial text overlap with arXiv:1903.11286 (IJCV accepted)
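
    The filtering step itself is easy to state in code: for each pixel, sample the target image at K predicted offsets and combine the samples with K predicted weights. The sketch below (PyTorch, using grid_sample for sub-pixel sampling) is one plausible way to implement that weighted average; the offsets and weights are faked with random tensors here, and K = 9 stands in for the sparsely sampled 3 x 3 kernel. It is an assumption about the mechanics, not the released DKN code.

    import torch
    import torch.nn.functional as F

    def dkn_filter(target, offsets, weights):
        """target: (B,1,H,W); offsets: (B,2K,H,W) in pixels; weights: (B,K,H,W)."""
        b, _, h, w = target.shape
        k = weights.shape[1]
        weights = torch.softmax(weights, dim=1)              # normalize the kernel weights
        ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
        base = torch.stack((xs, ys), dim=-1).float()          # (H,W,2) pixel coordinates
        out = torch.zeros_like(target)
        for i in range(k):
            off = offsets[:, 2 * i:2 * i + 2].permute(0, 2, 3, 1)   # (B,H,W,2)
            grid = base.unsqueeze(0) + off
            # Convert pixel coordinates to the [-1, 1] range grid_sample expects.
            grid = torch.stack((2 * grid[..., 0] / (w - 1) - 1,
                                2 * grid[..., 1] / (h - 1) - 1), dim=-1)
            sampled = F.grid_sample(target, grid, mode="bilinear", align_corners=True)
            out += weights[:, i:i + 1] * sampled
        return out

    if __name__ == "__main__":
        b, k, h, w = 1, 9, 32, 32
        filtered = dkn_filter(torch.rand(b, 1, h, w),
                              torch.randn(b, 2 * k, h, w) * 2.0,
                              torch.randn(b, k, h, w))
        print(filtered.shape)  # (1, 1, 32, 32)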

    Model and Appearance Based Analysis of Neuronal Morphology from Different Microscopy Imaging Modalities

    Neuronal morphology analysis is key to understanding how a brain works. The process requires imaging neurons at single-cell resolution; however, no such system is feasible for the human brain. Fortunately, knowledge gained from the model organism Drosophila melanogaster can be transferred to the human system. This dissertation explores morphology analysis of Drosophila larvae at single-cell resolution in static images and image sequences, as well as across multiple microscopy imaging modalities. Our contributions cover both computational methods for morphology quantification and analysis of the influence of anatomical factors. We develop novel model- and appearance-based methods for morphology quantification and illustrate their significance in three neuroscience studies. First, modeling the structure and dynamics of neuronal circuits sheds light on how connectivity patterns form within a motor circuit and on whether the connectivity map of neurons can be deduced from estimates of neuronal morphology. To address this problem, we study both boundary-based and centerline-based approaches for neuron reconstruction in static volumes. Second, because neuronal mechanisms are related to morphology dynamics, patterns of morphological change are analyzed alongside other aspects; here, the relationship between neuronal activity and morphology dynamics is explored to analyze locomotion. Our tracking method models morphology dynamics in calcium image sequences designed for detecting neuronal activity, following a local-to-global design that handles calcium-imaging artifacts and the characteristics of neuronal movement. Lastly, modeling the link between structural and functional development reveals the correlation between neuron growth and protein interactions. This requires morphology analysis across different imaging modalities, which we address using part-wise volume segmentation with artificial templates, a standardized representation of neurons; our method follows a global-to-local approach to solve both part-wise segmentation and registration across modalities. Together, our methods address common issues in automated morphology analysis, from extracting morphological features to tracking neurons and mapping neurons across imaging modalities. The quantitative analysis delivered by our techniques enables a number of new applications and visualizations for advancing the investigation of phenomena in the nervous system.
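
    As a toy illustration of the centerline-based reconstruction mentioned above, the sketch below thresholds a 2D grayscale image with Otsu's method and skeletonizes the foreground with scikit-image to obtain a one-pixel-wide centerline. This is an assumed, simplified stand-in, not the dissertation's method: the actual pipeline works on 3D volumes and adds tracing, tracking, and registration steps on top.

    import numpy as np
    from skimage.filters import threshold_otsu
    from skimage.morphology import skeletonize

    def centerline_mask(image: np.ndarray) -> np.ndarray:
        """Return a boolean skeleton (centerline) mask for a 2D grayscale image."""
        mask = image > threshold_otsu(image)   # foreground via Otsu thresholding
        return skeletonize(mask)               # reduce the foreground to a 1-pixel centerline

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        img = rng.random((64, 64))
        img[30:34, :] += 2.0                    # a bright horizontal "neurite"
        print(centerline_mask(img).sum(), "centerline pixels")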