
    Learning to solve Minimum Cost Multicuts efficiently using Edge-Weighted Graph Convolutional Neural Networks

    The minimum cost multicut problem is the NP-hard/APX-hard combinatorial optimization problem of partitioning a real-valued edge-weighted graph so as to minimize the total cost of the partition. While graph convolutional neural networks (GNNs) have proven promising in the context of combinatorial optimization, most of them are tailored to, or tested only on, positive-valued edge weights, i.e. they do not comply with the nature of the multicut problem. We therefore adapt various GNN architectures, including Graph Convolutional Networks, Signed Graph Convolutional Networks and Graph Isomorphism Networks, to facilitate the efficient encoding of real-valued edge costs. Moreover, we employ a reformulation of the multicut ILP constraints as a polynomial program, used as a loss function, which allows feasible multicut solutions to be learned in a scalable way. We thus provide the first approach towards end-to-end trainable multicuts. Our findings show that GNN approaches can produce good solutions in practice while offering lower computation times and greatly improved scalability compared to LP solvers and optimized heuristics, especially on large instances.
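    To make the constraint reformulation concrete, here is a minimal sketch of the idea (not the authors' code): a toy edge scorer predicts cut probabilities from signed edge costs, and the ILP cycle inequalities are relaxed into a differentiable polynomial penalty over 3-cycles, where a triangle with exactly one cut edge is infeasible. The class names, the triangle-index input, and the weight lam are illustrative assumptions.

```python
# Illustrative sketch: GNN-style multicut training with a polynomial
# relaxation of the cycle constraints. Not the paper's implementation.
import torch
import torch.nn as nn

class EdgeScorer(nn.Module):
    """Toy stand-in for an edge-weighted GNN: maps each signed edge
    cost to a cut probability in (0, 1)."""
    def __init__(self, hidden=32):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, edge_costs):                  # edge_costs: (E,)
        return torch.sigmoid(self.mlp(edge_costs.unsqueeze(-1))).squeeze(-1)

def multicut_loss(p, costs, triangles, lam=1.0):
    """p: (E,) cut probabilities; costs: (E,) signed edge costs;
    triangles: (T, 3) long tensor of edge indices forming 3-cycles.
    Sums the relaxed multicut objective and a polynomial penalty on
    triangles with exactly one cut edge (infeasible configurations)."""
    objective = (costs * p).sum()
    a = p[triangles[:, 0]]
    b = p[triangles[:, 1]]
    c = p[triangles[:, 2]]
    infeasible = (a * (1 - b) * (1 - c) +
                  b * (1 - a) * (1 - c) +
                  c * (1 - a) * (1 - b))
    return objective + lam * infeasible.sum()
```

    Minimizing this loss pushes probabilities toward cheap, cycle-consistent cut patterns; the penalty term is polynomial in the edge variables, matching the abstract's description of the constraint reformulation in spirit.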

    Optimal Graph Filters for Clustering Attributed Graphs

    Many real-world systems can be represented as graphs, where entities are represented by nodes and their interactions by edges. An important task in studying large datasets is graph clustering. While there has been a lot of work on graph clustering using the connectivity between nodes, many real-world networks also have node attributes. Clustering attributed graphs requires joint modeling of graph structure and node attributes. Recent work has focused on graph convolutional networks and graph convolutional filters to combine structural and content information. However, these methods are mostly limited to low-pass filtering and do not explicitly optimize the filters for the clustering task. In this paper, we introduce a graph signal processing based approach in which we design polynomial graph filters optimized for clustering. The proposed approach is formulated as a two-step iterative optimization problem in which graph filters that are interpretable and optimal for the given data are learned while maximizing the separation between different clusters. The proposed approach is evaluated on attributed networks and compared to state-of-the-art graph convolutional network approaches.
    Comment: 5 pages, 3 figures
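    As an illustration of the core operation, the sketch below applies a polynomial graph filter H(L) = sum_k theta_k L^k to node attributes and then clusters the filtered features. The paper learns the coefficients theta_k jointly with a cluster-separation objective; here they are fixed to an assumed low-pass example, and k-means stands in for the clustering step.

```python
# Illustrative sketch of polynomial graph filtering for attributed-graph
# clustering. Fixed example coefficients; the paper optimizes them.
import numpy as np
from sklearn.cluster import KMeans

def normalized_laplacian(A):
    """Symmetric normalized Laplacian I - D^{-1/2} A D^{-1/2}."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.zeros_like(d, dtype=float)
    nz = d > 0
    d_inv_sqrt[nz] = d[nz] ** -0.5
    return np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def polynomial_filter(L, X, theta):
    """Compute H(L) X = sum_k theta_k L^k X via repeated products,
    never forming L^k explicitly."""
    out = theta[0] * X
    LX = X
    for coeff in theta[1:]:
        LX = L @ LX
        out = out + coeff * LX
    return out

def cluster_attributed_graph(A, X, n_clusters, theta=(1.0, -0.5, 0.05)):
    """A: (n, n) adjacency; X: (n, f) node attributes.
    Filters the attributes over the graph, then clusters them."""
    L = normalized_laplacian(A)
    X_filt = polynomial_filter(L, X, np.asarray(theta))
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X_filt)
```

    The design point worth noting is that the filter coefficients are an explicit, interpretable parameterization: unlike a fixed GCN propagation rule, they can emphasize or suppress arbitrary graph-frequency bands, which is what the paper's optimization exploits.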

    Depth from Monocular Images using a Semi-Parallel Deep Neural Network (SPDNN) Hybrid Architecture

    Deep neural networks have been applied to a wide range of problems in recent years. In this work, a Convolutional Neural Network (CNN) is applied to the problem of determining depth from a single camera image (monocular depth). Eight different networks are designed to perform depth estimation, each suited to a particular feature level; networks with different pooling sizes capture different feature levels. After designing this set of networks, the models are combined into a single network topology using graph optimization techniques. This "Semi-Parallel Deep Neural Network (SPDNN)" eliminates duplicated common network layers and can be further optimized by retraining to achieve an improved model compared to the individual topologies. In this study, four SPDNN models are trained and evaluated in two stages on the KITTI dataset. In the first part of the experiment, the ground truth images are provided by the benchmark; in the second part, the ground truth images are depth maps obtained by applying a state-of-the-art stereo matching method. The first evaluation demonstrates that using post-processing techniques to refine the network's target increases the accuracy of depth estimation on individual monocular images. The second evaluation shows that using segmentation data alongside the original data as input can improve depth estimation to the point where performance is comparable with stereo depth estimation. Computational time is also discussed in this study.
    Comment: 44 pages, 25 figures
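    The semi-parallel idea can be sketched as follows (a hypothetical toy, not the authors' architecture): branches that would duplicate the same early convolutions are merged into one shared stem, while the branch-specific layers, differing here only in pooling size, remain in parallel and are fused at the end. The layer sizes and the pool_sizes parameter are illustrative assumptions.

```python
# Illustrative sketch of merging duplicated layers into a shared stem with
# parallel pooling branches, loosely following the SPDNN description.
import torch
import torch.nn as nn

class SemiParallelNet(nn.Module):
    def __init__(self, pool_sizes=(2, 4, 8)):
        super().__init__()
        # Shared stem: the duplicated common layers the merge eliminates.
        self.stem = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        # Parallel branches differing only in pooling size (feature level);
        # each downsamples, convolves, and upsamples back to input size.
        self.branches = nn.ModuleList([
            nn.Sequential(nn.MaxPool2d(p),
                          nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
                          nn.Upsample(scale_factor=p, mode='bilinear',
                                      align_corners=False))
            for p in pool_sizes
        ])
        # Fusion head producing a single-channel depth map.
        self.head = nn.Conv2d(16 * len(pool_sizes), 1, 1)

    def forward(self, x):
        # x: (B, 3, H, W) with H, W divisible by max(pool_sizes).
        s = self.stem(x)
        feats = [branch(s) for branch in self.branches]
        return self.head(torch.cat(feats, dim=1))
```

    For example, SemiParallelNet()(torch.randn(1, 3, 64, 64)) yields a (1, 1, 64, 64) depth map; the single stem is computed once instead of once per branch, which is the saving the merge step targets.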