Capturing Shape Information with Multi-Scale Topological Loss Terms for 3D Reconstruction
Reconstructing 3D objects from 2D images is challenging for both human brains
and machine learning algorithms. To support this spatial reasoning task,
contextual information about the overall shape of an object is critical.
However, such information is not captured by established loss terms (e.g. Dice
loss). We propose to complement geometrical shape information by including
multi-scale topological features, such as connected components, cycles, and
voids, in the reconstruction loss. Our method uses cubical complexes to
calculate topological features of 3D volume data and employs an optimal
transport distance to guide the reconstruction process. This topology-aware
loss is fully differentiable, computationally efficient, and can be added to
any neural network. We demonstrate the utility of our loss by incorporating it
into SHAPR, a model for predicting the 3D cell shape of individual cells based
on 2D microscopy images. Using a hybrid loss that leverages both geometrical
and topological information of single objects to assess their shape, we find
that topological information substantially improves the quality of
reconstructions, thus highlighting its ability to extract more relevant
features from image datasets.Comment: Accepted at the 25th International Conference on Medical Image
Computing and Computer Assisted Intervention (MICCAI
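The cubical-complex step above can be illustrated in miniature. The sketch below computes 0-dimensional sublevel-set persistence (connected components) of a small image via union-find with the elder rule; it is a toy stand-in for illustration only, assuming nothing about the paper's actual implementation, which covers higher-dimensional features (cycles, voids) and differentiability:

```python
import numpy as np

def persistence_0d(img):
    """0-dimensional sublevel-set persistence of a 2D image (4-connectivity).

    Pixels enter the filtration in order of increasing intensity; a connected
    component is born at its minimum and dies when it merges into an older
    component (elder rule). The global-minimum component never dies.
    """
    h, w = img.shape
    order = np.argsort(img, axis=None, kind="stable")
    parent, birth = {}, {}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    pairs = []
    for flat in order:
        i, j = divmod(int(flat), w)
        v = float(img[i, j])
        parent[(i, j)] = (i, j)
        birth[(i, j)] = v
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nb = (i + di, j + dj)
            if nb in parent:
                ra, rb = find((i, j)), find(nb)
                if ra == rb:
                    continue
                if birth[ra] > birth[rb]:
                    ra, rb = rb, ra  # rb is the younger root
                if birth[rb] < v:  # skip zero-persistence pairs
                    pairs.append((birth[rb], v))
                parent[rb] = ra
    pairs.append((float(img.min()), float("inf")))
    return pairs
```

The resulting birth-death pairs are the points of the persistence diagram that an optimal transport distance would then compare against the diagram of the reconstruction target.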
From Mathematics to Medicine: A Practical Primer on Topological Data Analysis (TDA) and the Development of Related Analytic Tools for the Functional Discovery of Latent Structure in fMRI Data
fMRI is the preeminent method for collecting signals from the human brain in vivo, for using these signals in the service of functional discovery, and for relating these discoveries to anatomical structure. Numerous computational and mathematical techniques have been deployed to extract information from the fMRI signal. Yet, the application of Topological Data Analysis (TDA) remains limited to certain sub-areas such as connectomics (that is, to summarized versions of fMRI data). While connectomics is a natural and important area of application of TDA, applications of TDA in the service of extracting structure from the (non-summarized) fMRI data itself are heretofore nonexistent. "Structure" within fMRI data is determined by dynamic fluctuations in spatially distributed signals over time, and TDA is well positioned to help researchers better characterize mass dynamics of the signal by rigorously capturing shape within it. To accurately motivate this idea, we a) survey an established method in TDA ("persistent homology") to reveal and describe how complex structures can be extracted from data sets generally, and b) describe how persistent homology can be applied specifically to fMRI data. We provide explanations for some of the mathematical underpinnings of TDA (with expository figures), building ideas in the following sequence: a) fMRI researchers can and should use TDA to extract structure from their data; b) this extraction serves an important role in the endeavor of functional discovery; and c) TDA approaches can complement other established approaches toward fMRI analyses (for which we provide examples). We also provide detailed applications of TDA to fMRI data collected using established paradigms, and offer our software pipeline for readers interested in emulating our methods.
This working overview is both an inter-disciplinary synthesis of ideas (to draw researchers in TDA and fMRI toward each other) and a detailed description of methods that can motivate collaborative research.
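As a concrete illustration of persistent homology applied to point-cloud data (such as embedded fMRI time courses), the sketch below computes the 0-dimensional Vietoris-Rips persistence diagram, which coincides with single-linkage merge heights; this is a toy example and is not taken from the authors' pipeline:

```python
import numpy as np
from itertools import combinations

def rips_h0(points):
    """0-dimensional Vietoris-Rips persistence of a point cloud.

    Every component is born at scale 0; deaths are the edge lengths of
    the minimum spanning tree (single-linkage merge heights), found by
    Kruskal's algorithm. The final component never dies.
    """
    n = len(points)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    edges = sorted(
        (float(np.linalg.norm(points[i] - points[j])), i, j)
        for i, j in combinations(range(n), 2)
    )
    deaths = []
    for dist, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(dist)
    return [(0.0, d) for d in deaths] + [(0.0, float("inf"))]
```

Long-lived points in such a diagram indicate well-separated clusters; higher-dimensional homology (cycles, voids) requires a full library but follows the same filtration idea.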
Topology-Aware Focal Loss for 3D Image Segmentation
The efficacy of segmentation algorithms is frequently compromised by
topological errors like overlapping regions, disrupted connections, and voids.
To tackle this problem, we introduce a novel loss function, namely
Topology-Aware Focal Loss (TAFL), that incorporates the conventional Focal Loss
with a topological constraint term based on the Wasserstein distance between
the ground truth and predicted segmentation masks' persistence diagrams. By
enforcing identical topology as the ground truth, the topological constraint
can effectively resolve topological errors, while Focal Loss tackles class
imbalance. We begin by constructing persistence diagrams from filtered cubical
complexes of the ground truth and predicted segmentation masks. We subsequently
utilize the Sinkhorn-Knopp algorithm to determine the optimal transport plan
between the two persistence diagrams. The resultant transport plan minimizes
the cost of transporting mass from one distribution to the other and provides a
mapping between the points in the two persistence diagrams. We then compute the
Wasserstein distance based on this transport plan to measure the topological
dissimilarity between the ground truth and predicted masks. We evaluate our
approach by training a 3D U-Net with the MICCAI Brain Tumor Segmentation
(BraTS) challenge validation dataset, which requires accurate segmentation of
3D MRI scans that integrate various modalities for the precise identification
and tracking of malignant brain tumors. Then, we demonstrate that segmentation
quality is enhanced by regularizing the focal loss through the addition of a
topological constraint as a penalty term.
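The Sinkhorn-Knopp step can be sketched in a few lines of NumPy. The function below computes an entropic-regularized transport plan between two weighted point sets (e.g. persistence-diagram points); it omits the diagonal matching that full persistence-diagram distances require, and the `eps` and iteration values are illustrative choices, not those of the paper:

```python
import numpy as np

def sinkhorn_plan(a, b, C, eps=0.05, n_iter=500):
    """Sinkhorn-Knopp iterations for entropic optimal transport.

    a, b: marginal weights (each summing to 1); C: pairwise cost matrix.
    Returns a transport plan P whose row sums match a and column sums
    approximately match b; (P * C).sum() approximates the transport cost.
    """
    K = np.exp(-C / eps)  # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)  # scale columns toward marginal b
        u = a / (K @ v)    # scale rows toward marginal a
    return u[:, None] * K * v[None, :]
```

Because every operation is a matrix-vector product, this plan is differentiable in the cost matrix, which is what lets the Wasserstein term act as a trainable penalty alongside Focal Loss.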
Integrating topological features to enhance cardiac disease diagnosis from 3D CMR images
Bachelor's thesis in Mathematics, Facultat de Matemàtiques, Universitat de Barcelona, Year: 2023, Supervisors: Carles Casacuberta and Polyxeni Gkontra. Persistent homology is a technique from the field of algebraic topology for the analysis and characterization of the shape and structure of datasets in multiple dimensions. Its use is based on the identification and quantification of topological patterns in the dataset across various scales. In this thesis, persistent homology is applied with the objective of extracting topological descriptors from three-dimensional cardiovascular magnetic resonance (CMR) imaging. Thereafter, the topological descriptors are used for the detection of cardiovascular diseases by means of Machine Learning (ML) techniques.
Radiomics has been one of the recently proposed approaches for disease diagnosis. This method involves the extraction and subsequent analysis of a significant number of quantitative descriptors from medical images. These descriptors offer a characterization of the spatial distribution, texture, and intensity of the structures present in the images.
This study demonstrates that radiomics and topological descriptors achieve comparable results, providing complementary insights into the underlying structures and characteristics of anatomical tissues. Moreover, the combination of these two methods leads to a further improvement of the performance of ML models, thereby enhancing medical diagnosis.
Time-dependent topological analysis for cardiovascular disease diagnosis using magnetic resonance
Master's thesis in Advanced Mathematics (Màster en Matemàtica Avançada), Facultat de Matemàtiques, Universitat de Barcelona, Course: 2022-2023, Supervisors: Carles Casacuberta and Polyxeni Gkontra. The present research project aims to study the topology of time-varying Cardiovascular Magnetic Resonance (CMR) images for disease diagnosis. CMR is a non-invasive technique that involves the acquisition of multiple 3D images at different cardiac phases throughout the cardiac cycle. Nonetheless, conventional assessment of CMR images typically involves the quantification of parameters related to the volumes, and more recently to the shape and texture by means of radiomics (Raisi-Estabragh, 2020), of the cardiac chambers at only two static time points: the end-systole and the end-diastole. Therefore, potentially rich information regarding the cardiac function and structure from other phases of the cardiac cycle might be lost.
To overcome this limitation, we propose to leverage Topological Data Analysis (TDA) to optimally exploit information from the entire cardiac cycle, by measuring the variation of persistence descriptors. This approach
seems promising since a time series might not exhibit relevant geometrical features in its respective point cloud embedding, but it may rather display topological cyclic patterns and their respective variations that can be captured with the proposed machinery. Subsequently, the novel TDA-based CMR descriptors encompassing the entire cardiac cycle are used to feed supervised machine learning classifiers for cardiovascular disease diagnosis.
A full framework from data gathering, to image processing, mathematical modelling and classifier implementation is presented for this purpose.
The performance of the proposed approach based on TDA features and ML is limited. Nonetheless, the approach could be easily adapted to other diseases and scenarios where the integration of ML and TDA could be more beneficial.
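The point-cloud embedding of a time series mentioned above is typically a delay (sliding-window) embedding; a periodic signal then traces a closed loop whose 1-dimensional persistence reflects the cycle. A minimal sketch follows (the thesis's exact construction may differ):

```python
import numpy as np

def sliding_window_embedding(x, dim, tau):
    """Delay embedding of a 1D time series into R^dim with lag tau.

    Point i is (x[i], x[i + tau], ..., x[i + (dim - 1) * tau]); periodic
    dynamics such as the cardiac cycle become loops in the embedded
    point cloud, which persistent homology can then quantify.
    """
    n = len(x) - (dim - 1) * tau
    return np.stack([x[j : j + n] for j in range(0, dim * tau, tau)], axis=1)
```

Persistence descriptors computed on such embeddings, tracked over the cardiac cycle, are the kind of time-dependent features the thesis feeds to supervised classifiers.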
BLIS-Net: Classifying and Analyzing Signals on Graphs
Graph neural networks (GNNs) have emerged as a powerful tool for tasks such
as node classification and graph classification. However, much less work has
been done on signal classification, where the data consists of many functions
(referred to as signals) defined on the vertices of a single graph. These tasks
require networks designed differently from those used for traditional GNN
tasks. Indeed, traditional GNNs rely on localized low-pass filters, and signals
of interest may have intricate multi-frequency behavior and exhibit long-range
interactions. This motivates us to introduce the BLIS-Net (Bi-Lipschitz
Scattering Net), a novel GNN that builds on the previously introduced geometric
scattering transform. Our network captures both local and global signal
structure as well as both low-frequency and high-frequency information. We make
several crucial changes to the original geometric scattering architecture which
we prove increase the ability of our network to capture information about the
input signal, and we show that BLIS-Net achieves superior performance on both
synthetic and real-world data sets based on traffic flow and fMRI data.
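To make the wavelet construction concrete, the sketch below computes first-order geometric scattering features of a graph signal using the lazy random walk P = (I + A D^-1)/2 and diffusion wavelets Psi_j = P^(2^(j-1)) - P^(2^j). This follows the general geometric scattering recipe, not the BLIS-Net architecture itself, and the choice of J and the L1 aggregation are illustrative:

```python
import numpy as np

def scattering_features(A, x, J=3):
    """First-order geometric scattering coefficients of a graph signal.

    A: adjacency matrix, x: signal on the nodes. The wavelets
    Psi_j = P^(2^(j-1)) - P^(2^j) separate frequency bands of the lazy
    random walk P; features are L1 norms of |Psi_j x|.
    """
    n = len(x)
    d = A.sum(axis=0)
    P = 0.5 * (np.eye(n) + A / d[None, :])  # column-stochastic lazy walk
    feats = []
    for j in range(1, J + 1):
        psi = (np.linalg.matrix_power(P, 2 ** (j - 1))
               - np.linalg.matrix_power(P, 2 ** j))
        feats.append(np.abs(psi @ x).sum())
    return np.array(feats)
```

Stacking such band-pass responses across scales, followed by a nonlinearity and further diffusion, is what allows scattering networks to retain high-frequency signal content that low-pass GNN filters discard.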
THEŌRIA: THE VENERATION OF ICONS VIA THE TECHNOETIC PROCESS
The Second Council of Nicaea, in 787 AD, marked the end of iconoclasm, while in 843 the Treaty of Verdun laid the foundations of Europe. With these agreements, a sustained period of imageless iconolatry was initiated. However, the veneration of icons was based on the absolute worship of matter and form, which replaced the prime spiritual concept of "image and likeness".
Millennia of research and thought resulted in imageless representations of natural phenomena. Pushing aside the topology of the image and its sign, the intelligent man, from the Age of Reason and onward, considered himself as an auto-authorised and teleologically free entity. To this end, he maximised the intelligibility of his space by designing an all-inclusive Cartesian cocoon in which to secure his mass and form. Yet, there he found his pet (Schrödinger's Cat) to be both dead and alive, and the apple, still forbidden, had become a bouncing ball, serving as evidence of gravity.
Hence, this intelligent design, by default, carries the residual fear of Manichaean and Augustinian devils, and is deemed to have converted to a de-sign crisis.
Relying on literature sources, this dissertation examines two dominant models that govern human cognition and the production of knowledge. Despite remarkable scientific achievements which resulted, the aftermath of human progress was, among others, the maximisation of residual fear, to such an extent that voracious black holes devour all matter.
Inaugurating the transhumanist period, the human becomes a Manchurian Candidate, still an upgraded ape and a victim of his own nature in the Anthropocene.
In an attempt to overcome this de-sign crisis, the research presented in this thesis aims to address the necessity of the restoration of icons, as evidenced by Byzantine art and philosophy but neglected in the name of human supremacy and imperialism. This thesis elucidates Classical and Late Antiquity manuscripts in an effort to set a new "restore point", endeavouring to launch the image in the current organosilicon substances; examples from Scripture narratives as well as from visual arts contribute to this effort.
The proposed concluding scheme is the Module of Theōria, which reflects the major transhumanistic elements such as transmutation, interaction and fluidity. Theōria functions through noetic mechanisms, using "image and likeness" as the prime carriers of knowledge.
The anticipated outcome is to reveal a human investment in a pro-nature incorruptibility with the advent of Theōria in the field of Technoetics, where one can administer "image and likeness" to gain capital liquidity.
Learning on sequential data with evolution equations
Data which have a sequential structure are ubiquitous in many scientific domains such as physical sciences or mathematical finance. This motivates an important research effort in developing statistical and machine learning models for sequential data. Recently, the signature map, rooted in the theory of controlled differential equations, has emerged as a principled and systematic way to encode sequences into finite-dimensional vector representations. The signature kernel provides an interface with kernel methods which are recognized as a powerful class of algorithms for learning on structured data. Furthermore, the signature underpins the theory of neural controlled differential equations, neural networks which can handle sequential inputs, and more specifically the case of irregularly sampled time-series.
This thesis is at the intersection of these three research areas and addresses key modelling and computational challenges for learning on sequential data. We make use of the well-established theory of reproducing kernels and the rich mathematical properties of the signature to derive an approximate inference scheme for Gaussian processes (Bayesian kernel methods) for learning with large datasets of multi-channel sequential data. Then, we construct new basis functions and kernel functions for regression problems where the inputs are sets of sequences instead of a single sequence. Finally, we use the modelling paradigm of stochastic partial differential equations to design a neural network architecture for learning functional relationships between spatio-temporal signals.
The role of differential equations of evolutionary type is central in this thesis as they are used to model the relationship between independent and dependent signals, and provide tractable algorithms for kernel methods on sequential data.
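The signature map discussed throughout can be computed exactly for piecewise-linear paths via Chen's identity. Below is a minimal depth-2 sketch, for illustration only (production libraries truncate at arbitrary depth): a linear segment with increment delta contributes delta at level 1 and outer(delta, delta)/2 at level 2, and Chen's identity stitches segments together.

```python
import numpy as np

def signature_level2(path):
    """Depth-2 signature of a piecewise-linear path in R^d.

    Returns (S1, S2): S1 is the total increment and S2 the matrix of
    second iterated integrals, accumulated segment by segment via
    Chen's identity.
    """
    d = path.shape[1]
    S1, S2 = np.zeros(d), np.zeros((d, d))
    for k in range(len(path) - 1):
        delta = path[k + 1] - path[k]
        # Chen's identity: cross term S1 (x) delta plus the segment's
        # own level-2 contribution outer(delta, delta) / 2
        S2 += np.outer(S1, delta) + 0.5 * np.outer(delta, delta)
        S1 += delta
    return S1, S2
```

The shuffle identity S2 + S2^T = S1 (x) S1 holds exactly, so the genuinely new level-2 information is the antisymmetric part (the Lévy area), which encodes the order in which the path's coordinates move.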