
    Network Representation Learning Guided by Partial Community Structure

    Network Representation Learning (NRL) is an effective way to analyze large-scale networks (graphs). In general, it maps network nodes, edges, subgraphs, etc. onto independent vectors in a low-dimensional space, thus facilitating network analysis tasks. As community structure is one of the most prominent mesoscopic structural properties of real networks, it is necessary to preserve the community structure of networks during NRL. In this paper, the concept of a k-step partial community structure is defined, and two Partial Community structure Guided Network Embedding (PCGNE) methods for node representation learning are proposed, based on two popular NRL algorithms (DeepWalk and node2vec, respectively). The idea behind this is that it is easier and more cost-effective to find a high-quality 1-step partial community structure than a high-quality whole community structure; the extracted partial community information is then used to guide the random walks in DeepWalk or node2vec. As a result, the learned node representations preserve the community structure of networks more effectively. The two proposed algorithms and six state-of-the-art NRL algorithms were examined through multi-label classification and (inner-community) link prediction on eight synthesized networks, in which the community structure properties could be controlled, and one real-world network. The results suggested that the two PCGNE methods could significantly improve the performance of their respective base algorithms and were competitive for node representation learning. In particular, compared with the baseline algorithms, the PCGNE methods could capture overlapping community structure much better, and thus could achieve better performance for multi-label classification on networks that have more overlapping nodes and/or larger overlapping memberships.
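    To make the guided-walk idea concrete, the sketch below shows one way a partial community assignment could bias neighbour selection in a DeepWalk/node2vec-style random walk. The toy graph, the community labels, and the bias parameter are illustrative assumptions, not the exact PCGNE rule from the paper.

        import random
        import networkx as nx

        def community_biased_walk(G, start, walk_length, communities, bias=2.0):
            """One random walk in which neighbours sharing a (partial) community
            with the current node are up-weighted by `bias` (illustrative sketch)."""
            walk = [start]
            for _ in range(walk_length - 1):
                cur = walk[-1]
                nbrs = list(G.neighbors(cur))
                if not nbrs:
                    break
                # up-weight neighbours that share at least one community with cur
                weights = [bias if communities.get(cur, set()) & communities.get(n, set())
                           else 1.0 for n in nbrs]
                walk.append(random.choices(nbrs, weights=weights, k=1)[0])
            return walk

        # toy usage: two triangles joined by a single bridge edge
        G = nx.Graph([(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 5), (5, 3)])
        partial = {0: {"A"}, 1: {"A"}, 2: {"A"}, 3: {"B"}, 4: {"B"}, 5: {"B"}}
        print(community_biased_walk(G, 0, 8, partial))

    A soft bias, rather than a hard restriction to the current community, keeps some probability of crossing community boundaries, so walks of this kind can still reflect overlapping memberships.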

    Deformable Shape Completion with Graph Convolutional Autoencoders

    The availability of affordable and portable depth sensors has made scanning objects and people simpler than ever. However, dealing with occlusions and missing parts is still a significant challenge. The problem of reconstructing a (possibly non-rigidly moving) 3D object from a single partial scan or multiple partial scans has received increasing attention in recent years. In this work, we propose a novel learning-based method for the completion of partial shapes. Unlike the majority of existing approaches, our method focuses on objects that can undergo non-rigid deformations. The core of our method is a variational autoencoder with graph convolutional operations that learns a latent space of complete, realistic shapes. At inference time, we optimize to find the representation in this latent space that best fits the generated shape to the known partial input. The completed shape exhibits a realistic appearance on the unknown part. We show promising results towards the completion of synthetic and real scans of human body and face meshes exhibiting different styles of articulation and partiality. (Comment: CVPR 2018)
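    The inference step described above, i.e. searching the learned latent space for the code whose decoded shape best matches the observed vertices, can be sketched as follows; the decoder, the latent dimensionality, and the plain L2 data term over known vertex indices are placeholders rather than the paper's exact architecture and loss.

        import torch

        def complete_partial_shape(decoder, partial_verts, known_idx,
                                   latent_dim=128, steps=500, lr=1e-2):
            """Fit a latent code of a pre-trained decoder to the observed vertices
            and return the decoded (completed) shape.  Sketch only."""
            z = torch.zeros(1, latent_dim, requires_grad=True)
            opt = torch.optim.Adam([z], lr=lr)
            for _ in range(steps):
                opt.zero_grad()
                full = decoder(z)                  # (1, num_verts, 3) completed mesh
                data_term = ((full[0, known_idx] - partial_verts) ** 2).mean()
                data_term.backward()
                opt.step()
            return decoder(z).detach()             # includes the unknown part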

    NiftyNet: a deep-learning platform for medical imaging

    Medical image analysis and computer-assisted intervention problems are increasingly being addressed with deep-learning-based solutions. Established deep-learning platforms are flexible but do not provide specific functionality for medical image analysis, and adapting them for this application requires substantial implementation effort. Thus, there has been substantial duplication of effort and incompatible infrastructure developed across many research groups. This work presents the open-source NiftyNet platform for deep learning in medical imaging. The ambition of NiftyNet is to accelerate and simplify the development of these solutions, and to provide a common mechanism for disseminating research outputs for the community to use, adapt and build upon. NiftyNet provides a modular deep-learning pipeline for a range of medical imaging applications including segmentation, regression, image generation and representation learning. Components of the NiftyNet pipeline, including data loading, data augmentation, network architectures, loss functions and evaluation metrics, are tailored to, and take advantage of, the idiosyncrasies of medical image analysis and computer-assisted intervention. NiftyNet is built on TensorFlow and supports TensorBoard visualization of 2D and 3D images and computational graphs by default. We present three illustrative medical image analysis applications built using NiftyNet: (1) segmentation of multiple abdominal organs from computed tomography; (2) image regression to predict computed tomography attenuation maps from brain magnetic resonance images; and (3) generation of simulated ultrasound images for specified anatomical poses. NiftyNet enables researchers to rapidly develop and distribute deep learning solutions for segmentation, regression, image generation and representation learning applications, or extend the platform to new applications. (Comment: Wenqi Li and Eli Gibson contributed equally to this work. M. Jorge Cardoso and Tom Vercauteren contributed equally to this work. 26 pages, 6 figures; update includes additional applications, updated author list and formatting for journal submission.)
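    As a rough illustration of the kind of components such a pipeline bundles (a patch-based 3D network plus a segmentation-specific loss), here is a minimal stand-alone sketch in plain TensorFlow/Keras. It does not use the NiftyNet API, and the network and loss are deliberately simplified.

        import tensorflow as tf

        def dice_loss(y_true, y_pred, eps=1e-6):
            """Soft Dice loss over one-hot 3D label maps (batch, D, H, W, classes)."""
            inter = tf.reduce_sum(y_true * y_pred, axis=[1, 2, 3, 4])
            union = tf.reduce_sum(y_true + y_pred, axis=[1, 2, 3, 4])
            return 1.0 - tf.reduce_mean((2.0 * inter + eps) / (union + eps))

        def tiny_3d_segmenter(num_classes, channels=16):
            """Minimal patch-based 3D convolutional segmentation network."""
            inp = tf.keras.Input(shape=(None, None, None, 1))   # D x H x W x 1 patch
            x = tf.keras.layers.Conv3D(channels, 3, padding="same", activation="relu")(inp)
            x = tf.keras.layers.Conv3D(channels, 3, padding="same", activation="relu")(x)
            out = tf.keras.layers.Conv3D(num_classes, 1, activation="softmax")(x)
            return tf.keras.Model(inp, out)

        model = tiny_3d_segmenter(num_classes=4)
        model.compile(optimizer="adam", loss=dice_loss)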