    Development of Quantitative Bone SPECT Analysis Methods for Metastatic Bone Disease

    Prostate cancer is one of the most prevalent types of cancer in males in the United States. Bone is a common site of metastases for metastatic prostate cancer. However, bone metastases are often considered “unmeasurable” using standard anatomic imaging and the RECIST 1.1 criteria. As a result, response to therapy is often suboptimally evaluated by visual interpretation of planar bone scintigraphy, with response criteria limited to the presence or absence of new lesions. With the commercial availability of quantitative single-photon emission computed tomography (SPECT) methods, it is now feasible to establish quantitative metrics of therapy response for skeletal metastases. Quantitative bone SPECT (QBSPECT) may provide the ability to estimate bone lesion uptake, volume, and the number of lesions more accurately than planar imaging. However, the accuracy of activity quantification in QBSPECT relies heavily on the precision with which bone metastases and bone structures are delineated. In this research, we aim to develop automated image segmentation methods for fast and accurate delineation of bone and bone metastases in QBSPECT. To begin, we developed registration methods to generate a dataset of realistic and anatomically varying computerized phantoms for use in QBSPECT simulations. Using these simulations, we then developed supervised, computer-automated segmentation methods to minimize intra- and inter-observer variation in delineating bone metastases. This project provides accurate segmentation techniques for QBSPECT and paves the way for the development of QBSPECT methods for assessing the therapy response of bone metastases.
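
    As an aside on what per-lesion quantification looks like in practice, the sketch below computes lesion count, volume, and uptake from a reconstructed activity volume and a binary segmentation mask. It is a minimal illustration, not the segmentation methods developed in this work; the array names, the synthetic data, and the voxel-volume parameter are all assumptions.

```python
import numpy as np
from scipy import ndimage

def quantify_lesions(activity, lesion_mask, voxel_volume_ml):
    """Per-lesion uptake and volume from a binary segmentation mask.

    activity        -- 3D array of reconstructed activity (e.g., Bq/ml)
    lesion_mask     -- 3D boolean array marking segmented metastases
    voxel_volume_ml -- volume of one voxel in millilitres
    """
    # Label connected components: each component is treated as one lesion.
    labels, n_lesions = ndimage.label(lesion_mask)
    results = []
    for lesion_id in range(1, n_lesions + 1):
        voxels = labels == lesion_id
        results.append({
            "lesion": lesion_id,
            "volume_ml": voxels.sum() * voxel_volume_ml,
            "mean_uptake": activity[voxels].mean(),
            "total_uptake": activity[voxels].sum() * voxel_volume_ml,
        })
    return results

# Toy example: two synthetic "lesions" in a noisy 3D volume.
rng = np.random.default_rng(0)
vol = rng.poisson(5.0, size=(32, 32, 32)).astype(float)
mask = np.zeros(vol.shape, dtype=bool)
mask[4:8, 4:8, 4:8] = True
mask[20:25, 20:25, 20:25] = True
for lesion in quantify_lesions(vol, mask, voxel_volume_ml=0.05):
    print(lesion)
```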

    Discovering Causal Relations and Equations from Data

    Physics is a field of science that has traditionally used the scientific method to answer questions about why natural phenomena occur and to make testable models that explain the phenomena. Discovering equations, laws and principles that are invariant, robust and causal explanations of the world has been fundamental in physical sciences throughout the centuries. Discoveries emerge from observing the world and, when possible, performing interventional studies on the system under study. With the advent of big data and the use of data-driven methods, the fields of causal and equation discovery have grown and made progress in computer science, physics, statistics, philosophy, and many applied fields. All these domains are intertwined and can be used to discover causal relations, physical laws, and equations from observational data. This paper reviews the concepts, methods, and relevant works on causal and equation discovery in the broad field of physics and outlines the most important challenges and promising future lines of research. We also provide a taxonomy for observational causal and equation discovery, point out connections, and showcase a complete set of case studies in Earth and climate sciences, fluid dynamics and mechanics, and the neurosciences. This review demonstrates that discovering fundamental laws and causal relations by observing natural phenomena is being revolutionised with the efficient exploitation of observational data, modern machine learning algorithms and the interaction with domain knowledge. Exciting times are ahead with many challenges and opportunities to improve our understanding of complex systems.
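
    One family of methods reviewed here is sparse regression over a library of candidate terms (in the spirit of SINDy). The sketch below is a toy illustration rather than any specific method from the paper: it recovers the equation dx/dt = -2x from a noisy trajectory via sequentially thresholded least squares; the library, threshold, and noise level are arbitrary choices.

```python
import numpy as np

# Equation discovery by sparse regression: regress observed derivatives
# on a library of candidate terms, then iteratively zero out small
# coefficients to enforce parsimony.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 2.0, 200)
x = 3.0 * np.exp(-2.0 * t)                     # trajectory of dx/dt = -2x
x_noisy = x + 0.001 * rng.standard_normal(t.size)
dxdt = np.gradient(x_noisy, t)                 # numerical derivative

# Candidate library: [1, x, x^2, x^3]
library = np.column_stack([np.ones_like(x_noisy), x_noisy,
                           x_noisy ** 2, x_noisy ** 3])

coef, *_ = np.linalg.lstsq(library, dxdt, rcond=None)
for _ in range(10):                            # sequentially thresholded LSQ
    small = np.abs(coef) < 0.1
    coef[small] = 0.0
    active = ~small
    coef[active], *_ = np.linalg.lstsq(library[:, active], dxdt, rcond=None)

print("coefficients of [1, x, x^2, x^3]:", np.round(coef, 3))
# Expect approximately [0, -2, 0, 0], i.e. dx/dt = -2x.
```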

    Data analysis with merge trees

    Today’s data are increasingly complex, and classical statistical techniques need increasingly refined mathematical tools to model and investigate them. Paradigmatic situations are represented by data which need to be considered up to some kind of transformation, and by all those circumstances in which the analyst needs to define a general concept of shape. Topological Data Analysis (TDA) is a field which is fundamentally contributing to such challenges by extracting topological information from data with a plethora of interpretable and computationally accessible pipelines. We contribute to this field by developing a series of novel tools, techniques and applications to work with a particular topological summary called the merge tree. To analyze sets of merge trees, we introduce a novel metric structure along with an algorithm to compute it, define a framework to compare different functions defined on merge trees, and investigate the metric space obtained with the aforementioned metric. Different geometric and topological properties of the space of merge trees are established, with the aim of obtaining a deeper understanding of such trees. To showcase the effectiveness of the proposed metric, we develop an application in the field of Functional Data Analysis, working with functions up to homeomorphic reparametrization, and in the field of radiomics, where each patient is represented via a clustering dendrogram.
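
    For intuition about the summary this thesis builds on: the merge tree of a one-dimensional function records the heights at which sublevel-set components, born at local minima, merge into older components. The union-find sketch below extracts these merge events from a sampled function; it is an illustrative baseline only, not the metric or the algorithms introduced in the thesis.

```python
import numpy as np

def sublevel_merge_events(f):
    """Merge events of the sublevel-set filtration of a sampled 1D function.

    Returns (birth, merge_height) pairs: each pair records a branch of the
    merge tree born at a local minimum and the height at which it joins an
    older branch.
    """
    order = np.argsort(f)
    parent = {}                     # union-find forest over activated samples
    birth = {}                      # root -> height at which component was born
    events = []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in order:
        parent[i] = i
        birth[i] = f[i]
        for j in (i - 1, i + 1):    # 1D neighbours that are already active
            if j in parent:
                ri, rj = find(i), find(j)
                if ri != rj:
                    # Younger component (larger birth value) dies at f[i].
                    young, old = (ri, rj) if birth[ri] > birth[rj] else (rj, ri)
                    if birth[young] < f[i]:      # skip zero-persistence pairs
                        events.append((birth[young], f[i]))
                    parent[young] = old
    return events

f = np.array([3.0, 1.0, 2.5, 0.5, 2.0, 1.5, 4.0])
print(sublevel_merge_events(f))     # one event per non-global local minimum
```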

    Generalizable deep learning based medical image segmentation

    Deep learning is revolutionizing medical image analysis and interpretation. However, its real-world deployment is often hindered by poor generalization to unseen domains (new imaging modalities and protocols). This lack of generalization ability is further exacerbated by the scarcity of labeled datasets for training: data collection and annotation can be prohibitively expensive in terms of labor and cost, because label quality depends heavily on the expertise of radiologists. Additionally, unreliable predictions caused by poor model generalization pose safety risks to downstream clinical applications. To mitigate labeling requirements, we investigate and develop a series of techniques to strengthen the generalization ability and data efficiency of deep medical image computing models. We further improve model accountability and identify unreliable predictions made on out-of-domain data by designing probability calibration techniques. In the first and second parts of the thesis, we discuss two types of problems for handling unexpected domains: unsupervised domain adaptation and single-source domain generalization. For domain adaptation, we present a data-efficient technique that adapts a segmentation model trained on a labeled source domain (e.g., MRI) to an unlabeled target domain (e.g., CT), using a small number of unlabeled training images from the target domain. For domain generalization, we focus on both image reconstruction and segmentation. For image reconstruction, we design a simple and effective domain generalization technique for cross-domain MRI reconstruction by reusing image representations learned from natural image datasets. For image segmentation, we perform a causal analysis of the challenging cross-domain image segmentation problem. Guided by this analysis, we propose an effective data-augmentation-based generalization technique for single-source domains. The proposed method outperforms existing approaches in a large variety of cross-domain image segmentation scenarios. In the third part of the thesis, we present a novel self-supervised method for learning generic image representations that can be used to analyze unexpected objects of interest. The proposed method is designed together with a novel few-shot image segmentation framework that can segment unseen objects of interest by taking only a few labeled examples as references. Our few-shot framework demonstrates superior flexibility over conventional fully-supervised models: it does not require any fine-tuning on novel objects of interest. We further build a publicly available comprehensive evaluation environment for few-shot medical image segmentation. In the fourth part of the thesis, we present a novel probability calibration model. To ensure safety in clinical settings, a deep model is expected to alert human radiologists when it has low confidence, especially when confronted with out-of-domain data. To this end, we present a plug-and-play model to calibrate prediction probabilities on out-of-domain data. It brings the prediction probability in line with the actual accuracy on the test data. We evaluate our method on both artifact-corrupted images and images from an unforeseen MRI scanning protocol. Our method demonstrates improved calibration accuracy compared with the state-of-the-art method. Finally, we summarize the major contributions and limitations of our work, and suggest future research directions that will benefit from the work in this thesis.
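
    The calibration model in the thesis is more elaborate, but a minimal baseline in the same spirit is temperature scaling: fit a single scalar on held-out data so that prediction probabilities better match empirical accuracy, without changing the predicted classes. The sketch below applies it to synthetic, deliberately overconfident logits; all names and the synthetic setup are assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def fit_temperature(logits, labels):
    """Fit T > 0 minimising the negative log-likelihood; dividing logits
    by T rescales confidence but leaves accuracy unchanged."""
    def nll(log_t):
        p = softmax(logits / np.exp(log_t))
        return -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()
    res = minimize_scalar(nll, bounds=(-3.0, 3.0), method="bounded")
    return np.exp(res.x)

# Synthetic overconfident classifier: informative logits, scaled up 4x.
rng = np.random.default_rng(2)
labels = rng.integers(0, 3, size=500)
logits = rng.standard_normal((500, 3))
logits[np.arange(500), labels] += 2.0
logits *= 4.0

T = fit_temperature(logits, labels)
print(f"fitted temperature: {T:.2f}")          # T > 1 signals overconfidence
print("mean confidence before:", softmax(logits).max(axis=1).mean().round(3))
print("mean confidence after: ", softmax(logits / T).max(axis=1).mean().round(3))
```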

    Shape-Graph Matching Network (SGM-net): Registration for Statistical Shape Analysis

    This paper focuses on the statistical analysis of shapes of data objects called shape graphs, each a set of nodes connected by articulated curves with arbitrary shapes. A critical need here is a constrained registration of points (nodes to nodes, edges to edges) across objects. This, in turn, requires optimization over the permutation group, made challenging by differences in nodes (in terms of numbers and locations) and edges (in terms of shapes, placements, and sizes) across objects. This paper tackles this registration problem using a novel neural-network architecture and an unsupervised loss function developed using the elastic shape metric for curves. This architecture results in (1) state-of-the-art matching performance and (2) an order-of-magnitude reduction in computational cost relative to baseline approaches. We demonstrate the effectiveness of the proposed approach using both simulated data and real-world 2D and 3D shape graphs. Code and data will be made publicly available after review to foster research.
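
    A standard way to make optimization over the permutation group amenable to neural networks is to relax permutations to doubly stochastic matrices via Sinkhorn normalization. The sketch below shows this generic relaxation on a toy node-matching problem; it is not the SGM-net architecture itself, and the cost, temperature, and iteration count are arbitrary.

```python
import numpy as np

def sinkhorn(cost, n_iters=200, tau=0.1):
    """Relaxed permutation matching: alternately normalise the rows and
    columns of exp(-cost / tau) to obtain a doubly stochastic matrix."""
    K = np.exp(-cost / tau)
    for _ in range(n_iters):
        K /= K.sum(axis=1, keepdims=True)   # rows sum to 1
        K /= K.sum(axis=0, keepdims=True)   # columns sum to 1
    return K

# Toy problem: match 2D nodes of one graph to a permuted, jittered copy.
rng = np.random.default_rng(3)
nodes_a = rng.standard_normal((5, 2))
perm = rng.permutation(5)
nodes_b = nodes_a[perm] + 0.01 * rng.standard_normal((5, 2))

cost = np.linalg.norm(nodes_a[:, None, :] - nodes_b[None, :, :], axis=-1)
P = sinkhorn(cost)
print("recovered assignment:       ", P.argmax(axis=1))
print("ground-truth correspondence:", np.argsort(perm))
```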

    Efficient Models and Algorithms for Image Processing for Industrial Applications

    Image processing and computer vision are now part of our daily life and allow artificial intelligence systems to see and perceive the world with a visual system similar to the human one. In the quest to improve performance, computer vision algorithms reach remarkable computational complexities. This high computational complexity is mitigated by the availability of hardware capable of supporting these computational demands. However, high-performance hardware cannot always be relied upon when one wants to make a research product usable. In this work, we have focused on the development of computer vision algorithms and methods with low computational complexity but high performance. The first approach is to study the relationship between Fourier-based metrics and Wasserstein distances in order to propose alternatives to the latter, considerably reducing the time required to obtain comparable results. In the second, we start from an industrial problem and develop a deep learning model for change detection, obtaining state-of-the-art performance while reducing the computational complexity required by at least a third compared to the existing literature.
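
    To make the first line of work concrete: in one dimension the Wasserstein-1 distance between two distributions is the L1 distance between their cumulative distribution functions, while a Fourier-side surrogate can be computed from a handful of low frequencies in O(n log n). The specific metric studied in this work is not given here, so the inverse-frequency weighting below (a negative-Sobolev-type norm) is an assumption chosen purely for illustration.

```python
import numpy as np

def wasserstein_1d(p, q, dx):
    """Exact 1D Wasserstein-1 distance between two probability mass
    vectors on a common grid: the L1 distance between their CDFs."""
    return np.abs(np.cumsum(p - q)).sum() * dx

def fourier_surrogate(p, q, n_freq=32):
    """Cheap Fourier-based surrogate (negative-Sobolev-type norm) built
    from the low frequencies of p - q; no transport problem is solved."""
    d = np.fft.rfft(p - q)[1:n_freq + 1]
    k = np.arange(1, n_freq + 1)
    return np.sqrt(((np.abs(d) / k) ** 2).sum())

x = np.linspace(-5.0, 5.0, 512)
dx = x[1] - x[0]
for shift in (0.1, 0.5, 1.0, 2.0):
    p = np.exp(-0.5 * x ** 2)
    p /= p.sum()
    q = np.exp(-0.5 * (x - shift) ** 2)
    q /= q.sum()
    print(f"shift={shift:.1f}  W1={wasserstein_1d(p, q, dx):.3f}  "
          f"Fourier={fourier_surrogate(p, q):.3f}")
# Both quantities grow with the shift; the surrogate tracks W1 for nearby
# distributions at a fraction of the cost.
```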

    Discovering causal relations and equations from data

    Physics is a field of science that has traditionally used the scientific method to answer questions about why natural phenomena occur and to make testable models that explain the phenomena. Discovering equations, laws, and principles that are invariant, robust, and causal has been fundamental in physical sciences throughout the centuries. Discoveries emerge from observing the world and, when possible, performing interventions on the system under study. With the advent of big data and data-driven methods, the fields of causal and equation discovery have developed and accelerated progress in computer science, physics, statistics, philosophy, and many applied fields. This paper reviews the concepts, methods, and relevant works on causal and equation discovery in the broad field of physics and outlines the most important challenges and promising future lines of research. We also provide a taxonomy for data-driven causal and equation discovery, point out connections, and showcase comprehensive case studies in Earth and climate sciences, fluid dynamics and mechanics, and the neurosciences. This review demonstrates that discovering fundamental laws and causal relations by observing natural phenomena is being revolutionised by the efficient exploitation of observational data and simulations, modern machine learning algorithms and their combination with domain knowledge. Exciting times are ahead with many challenges and opportunities to improve our understanding of complex systems.
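
    Complementing the sparse-regression sketch given earlier, a second building block of observational causal discovery is conditional independence testing: constraint-based methods such as the PC algorithm remove an edge between two variables once they are independent given some conditioning set. Below is a minimal partial-correlation version on a toy chain X → Z → Y, where X and Y decorrelate once Z is given; all numbers are synthetic.

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation of x and y after linearly regressing out z --
    the elementary test behind constraint-based causal discovery."""
    Z = np.column_stack([np.ones_like(z), z])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

# Toy chain X -> Z -> Y: X and Y are marginally correlated but
# (approximately) independent conditional on Z.
rng = np.random.default_rng(4)
x = rng.standard_normal(5000)
z = 0.8 * x + 0.3 * rng.standard_normal(5000)
y = 0.8 * z + 0.3 * rng.standard_normal(5000)

print("corr(X, Y)      =", np.corrcoef(x, y)[0, 1].round(3))  # clearly nonzero
print("pcorr(X, Y | Z) =", partial_corr(x, y, z).round(3))    # near zero
```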

    Functional Data Representation with Merge Trees

    In this paper we address the problem of representing functional data with the tools of algebraic topology. We represent functions by means of merge trees, and this representation is compared with the one offered by persistence diagrams. We show that these two topological summaries, although not equivalent, are both invariant under homeomorphic reparametrizations of the functions they represent, thus allowing for a statistical analysis which is indifferent to functional misalignment. We employ a novel metric for merge trees and prove a few theoretical results related to its specific implementation when merge trees represent functions. To showcase the good properties of our topological approach to functional data analysis, we first go through a few examples using data generated in silico, employed to illustrate and compare the different representations provided by merge trees and persistence diagrams, and then test it on the Aneurisk65 dataset, replicating, from our different perspective, the supervised classification analysis which contributed to making this dataset a benchmark for methods dealing with misaligned functional data.
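
    The invariance property is easy to check numerically: the merge tree of a one-dimensional function is determined by its critical values and their nesting, which a monotone (homeomorphic) reparametrization of the domain leaves untouched. The sketch below verifies this at the level of the critical values; the particular function and warp are arbitrary choices for illustration.

```python
import numpy as np

def critical_values(samples):
    """Sorted interior local extrema of a sampled function -- the heights
    from which the merge tree of a 1D function is assembled."""
    s = samples
    interior = s[1:-1]
    is_min = (interior < s[:-2]) & (interior < s[2:])
    is_max = (interior > s[:-2]) & (interior > s[2:])
    return np.sort(interior[is_min | is_max])

f = lambda t: np.sin(3 * t) + 0.5 * np.sin(7 * t)
g = lambda t: t ** 3                 # monotone warp of [0, 1] onto itself

x = np.linspace(0.0, 1.0, 20001)
orig = critical_values(f(x))
warped = critical_values(f(g(x)))    # f after homeomorphic reparametrization

print("same number of extrema:", orig.size == warped.size)
print("max |difference| in critical values:", np.abs(orig - warped).max())
# The difference is at the level of the grid resolution: the critical
# values, hence the merge tree, are unchanged by the warp.
```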

    Nonlocal Graph-PDEs and Riemannian Gradient Flows for Image Labeling

    In this thesis, we focus on the image labeling problem, i.e., the task of assigning to each pixel a unique label so as to simplify the image while reducing its redundant information. We build upon a recently introduced geometric approach to data labeling by assignment flows [APSS17] that comprises a smooth dynamical system for data processing on weighted graphs. We pursue two lines of research that give new applied and theoretical insights into the underlying segmentation task. First, using the example of Optical Coherence Tomography (OCT), the most widely used non-invasive method for acquiring large volumetric scans of human retinal tissue, we demonstrate how incorporating constraints on the geometry of the statistical manifold results in a novel, purely data-driven geometric approach to order-constrained segmentation of volumetric data in any metric space. In particular, diagnostic analysis of human eye diseases requires exact measurements of retinal layer thicknesses, which must be carried out for each patient separately, a demanding and time-consuming task. To ease clinical diagnosis, we introduce a fully automated segmentation algorithm that achieves high segmentation accuracy together with a high level of built-in parallelism. As opposed to many established retinal layer segmentation methods, we use only local information as input, without additional global shape priors. Instead, we enforce the physiological order of retinal cell layers and membranes through a new formulation of ordered pairs of distributions in a smoothed energy term. This systematically avoids bias pertaining to global shape and is hence suited for detecting anatomical changes of retinal tissue structure. To assess the performance of our approach, we compare two different choices of features on a dataset of manually annotated 3D OCT volumes of healthy human retina, and evaluate our method against the state of the art in automatic retinal layer segmentation, as well as against manually annotated ground-truth data, using different metrics. Second, we generalize the recent work [SS21] on a variational perspective on assignment flows and introduce a novel nonlocal partial difference equation (G-PDE) for labeling metric data on graphs. The G-PDE is derived as a nonlocal reparametrization of the assignment flow approach that was introduced in J. Math. Imaging & Vision 58(2), 2017. Due to this parameterization, solving the G-PDE numerically is shown to be equivalent to computing the Riemannian gradient flow with respect to a nonconvex potential. We devise an entropy-regularized difference-of-convex-functions (DC) decomposition of this potential and show that the basic geometric Euler scheme for integrating the assignment flow is equivalent to solving the G-PDE by an established DC programming scheme. Moreover, the viewpoint of geometric integration reveals a basic way to exploit higher-order information of the vector field that drives the assignment flow, in order to devise a novel accelerated DC programming scheme. A detailed convergence analysis of both numerical schemes is provided and illustrated by numerical experiments.
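
    To give a flavour of the geometric Euler scheme mentioned above: assignment flows evolve a row-stochastic assignment matrix W on a graph, and each integration step lifts a tangent update back onto the probability simplex. The sketch below is a heavily simplified toy (two labels, a 1D signal, 3-neighbourhood averaging), not the thesis's G-PDE or its DC programming schemes; the step size, selectivity, and iteration count are arbitrary.

```python
import numpy as np

def lift(W, V):
    """Geometric Euler step: lift the tangent update V onto the simplex
    by a multiplicative (exponential-map-like) update of the rows of W."""
    U = W * np.exp(V)
    return U / U.sum(axis=1, keepdims=True)

# Toy labeling: noisy 1D signal, two label prototypes (0 and 1).
rng = np.random.default_rng(5)
signal = np.concatenate([np.zeros(30), np.ones(30)])
signal += 0.3 * rng.standard_normal(60)
prototypes = np.array([0.0, 1.0])
dist = (signal[:, None] - prototypes[None, :]) ** 2

W = np.full((60, 2), 0.5)            # barycentre: uninformative start
h, rho = 0.5, 2.0                    # step size and selectivity
for _ in range(10):
    L = lift(W, -rho * dist)         # likelihood matrix from the data term
    up = np.vstack([L[:1], L[:-1]])  # neighbour above (edges replicated)
    down = np.vstack([L[1:], L[-1:]])
    S = (up + L + down) / 3.0        # similarity: spatial regularisation
    W = lift(W, h * np.log(S))       # geometric Euler update

print("recovered labels:", W.argmax(axis=1))
# Most pixels recover the piecewise-constant 0/1 labeling despite the noise.
```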