
    Enhancing cardiac image segmentation through persistent homology regularization

    Bachelor's thesis, Degree in Computer Engineering, Facultat de Matemàtiques, Universitat de Barcelona, Year: 2022, Advisors: Sergio Escalera Guerrero, Carles Casacuberta, and Rubén Ballester Bautista.
    Cardiovascular diseases are a major cause of death and disability. Deep learning-based segmentation methods could help reduce their severity by aiding early diagnosis, but high levels of accuracy are necessary. The vast majority of methods focus on correcting local errors and miss the global picture. To address this issue, researchers have developed techniques that incorporate global context and consider the relationships between pixels. Here, we apply persistent homology, a branch of topology that studies the topological structure of shapes, together with deep learning methods to improve heart segmentation. We use multidimensional topological losses to avoid spurious components and holes and to increase overall accuracy. We evaluate the performance of three approaches: the Dice and pixel-wise losses with the sum of persistences of label diagrams as a regularizer, the Dice and pixel-wise losses with the bottleneck distance as a regularizer, and both losses without any regularization. We find that, while more computationally demanding, the methods using topological regularizers outperform the unregularized method in terms of accuracy.
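    The sketch below illustrates the general shape of such a topology-regularized segmentation loss, not the thesis's actual implementation: it combines a Dice loss and a pixel-wise loss with a persistence-based penalty computed via GUDHI's cubical complexes on the predicted probability map. The weighting `lam` is an assumed hyperparameter, and the penalty here is non-differentiable and sums all finite bars, whereas the thesis uses differentiable losses restricted to spurious features of per-label diagrams.

```python
# Minimal sketch of a topology-regularized segmentation loss (assumptions:
# GUDHI cubical persistence on the predicted probability map; penalty over
# all finite bars; lam is an illustrative weight).
import numpy as np
import torch
import gudhi


def dice_loss(pred, target, eps=1e-6):
    # pred, target: (H, W) tensors with values in [0, 1]
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)


def total_persistence(prob_map):
    # Sum of finite bar lengths (dimensions 0 and 1) of the cubical
    # persistence diagram of a 2D probability map.
    cc = gudhi.CubicalComplex(top_dimensional_cells=prob_map)
    bars = cc.persistence()
    return sum(death - birth for _, (birth, death) in bars if np.isfinite(death))


def segmentation_loss(pred, target, lam=0.01):
    # Dice + pixel-wise BCE, plus a (non-differentiable here) topological
    # penalty discouraging spurious components and holes.
    bce = torch.nn.functional.binary_cross_entropy(pred, target)
    topo = total_persistence(pred.detach().cpu().numpy())
    return dice_loss(pred, target) + bce + lam * topo
```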

    Using topological data analysis for building Bayesian neural networks

    For the first time, a simplified approach to constructing Bayesian neural networks is proposed, combining computational efficiency with the ability to analyze the learning process. The approach is based on Bayesianization of a deterministic neural network by randomizing parameters only at the interface level, i.e., forming a Bayesian neural network from a given network by replacing its parameters with probability distributions whose means are the parameters of the original model. Efficiency metrics of the neural network constructed within this approach, and of a Bayesian neural network constructed through variational inference, were evaluated using topological data analysis methods. The Bayesianization procedure is implemented through graded variation of the randomization intensity. As an alternative, two neural networks with identical structure were used: a deterministic network and a classical Bayesian network. The networks were fed the original data of two datasets, in versions without noise and with added Gaussian noise. The zeroth and first persistent homologies were computed for the embeddings of the resulting neural networks at each layer. Classification quality was assessed with the accuracy metric. It is shown that, in all four scenarios, the barcodes for the embeddings at each layer of the Bayesianized neural network lie between the corresponding barcodes of the deterministic and Bayesian neural networks for both the zeroth and first persistent homologies; the deterministic neural network provides the lower bound and the Bayesian neural network the upper bound. It is shown that the structure of data associations within a Bayesianized neural network is inherited from the deterministic model but acquires the properties of a Bayesian one. It is established experimentally that there is a relationship between the normalized persistent entropy computed on neural network embeddings and the accuracy of the neural network. For predicting accuracy, the topology of the embeddings at the middle layer of the model turned out to be the most informative. The proposed approach can be used to simplify the construction of a Bayesian neural network from an already trained deterministic neural network, which opens up the possibility of increasing the accuracy of an existing network without ensembling it with additional classifiers. It also becomes possible to proactively evaluate the effectiveness of the resulting neural network on simplified data without running it on a real dataset, which reduces the resource intensity of its development.
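    A rough sketch of this "interface-level" Bayesianization is given below. It is not the authors' code: it assumes Gaussian perturbations centred on the trained weights with a single randomization-intensity parameter `alpha`, and the layer sizes in the usage example are illustrative.

```python
# Sketch: Bayesianize a trained deterministic network by resampling its
# weights from N(w, (alpha * |w|)^2) around the trained values w.
# (Assumptions: Gaussian form, single intensity alpha.)
import copy
import torch
import torch.nn as nn


def bayesianize(model: nn.Module, alpha: float = 0.1) -> nn.Module:
    trained = {k: v.clone() for k, v in model.state_dict().items()}
    noisy = copy.deepcopy(model)

    def resample():
        # Draw a fresh weight sample centred on the trained parameters.
        sampled = {k: v + alpha * v.abs() * torch.randn_like(v)
                   for k, v in trained.items()}
        noisy.load_state_dict(sampled)

    noisy.resample = resample  # call before each stochastic forward pass
    return noisy


# Usage: average predictions over several weight samples.
net = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
bnet = bayesianize(net, alpha=0.05)
x = torch.randn(8, 20)
preds = []
for _ in range(10):
    bnet.resample()
    preds.append(bnet(x).softmax(dim=-1))
mean_pred = torch.stack(preds).mean(dim=0)
```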

    Path homologies of deep feedforward networks

    We provide a characterization of two types of directed homology for fully-connected, feedforward neural network architectures. These exact characterizations of the directed homology structure of a neural network architecture are the first of their kind. We show that the directed flag homology of deep networks reduces to computing the simplicial homology of the underlying undirected graph, which is explicitly given by Euler characteristic computations. We also show that the path homology of these networks is non-trivial in higher dimensions and depends on the number and size of the layers within the network. These results provide a foundation for investigating homological differences between neural network architectures and their realized structure as implied by their parameters. Comment: To appear in the proceedings of IEEE ICMLA 201
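    Since the graph underlying a fully-connected feedforward architecture is connected, its simplicial homology is determined by the Euler characteristic: b0 = 1 and b1 = E - V + 1. The snippet below is a small illustration of that bookkeeping from the layer widths, not code from the paper; the example widths are arbitrary.

```python
# Betti numbers of the undirected graph underlying a fully-connected
# feedforward architecture with the given layer widths (assumes the graph
# is connected): b0 = 1, b1 = E - V + 1.
def graph_betti_numbers(widths):
    vertices = sum(widths)
    edges = sum(a * b for a, b in zip(widths, widths[1:]))
    euler = vertices - edges          # chi = b0 - b1 for a graph
    return 1, 1 - euler               # (b0, b1)


print(graph_betti_numbers([784, 128, 64, 10]))  # e.g. a small MLP
```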