
    GradICON: Approximate Diffeomorphisms via Gradient Inverse Consistency

    We present an approach to learning regular spatial transformations between image pairs in the context of medical image registration. Contrary to optimization-based registration techniques and many modern learning-based methods, we do not directly penalize transformation irregularities but instead promote transformation regularity via an inverse consistency penalty. We use a neural network to predict a map between a source and a target image as well as the map when swapping the source and target images. Different from existing approaches, we compose these two resulting maps and regularize deviations of the Jacobian of this composition from the identity matrix. This regularizer -- GradICON -- results in much better convergence when training registration models compared to promoting inverse consistency of the composition of maps directly, while retaining the desirable implicit regularization effects of the latter. We achieve state-of-the-art registration performance on a variety of real-world medical image datasets using a single set of hyperparameters and a single non-dataset-specific training protocol. Comment: 29 pages, 16 figures, CVPR 202
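    For illustration, a minimal NumPy sketch of the gradient inverse consistency idea is given below (this is not the authors' released implementation). Coordinate maps are assumed to be stored as (2, H, W) grids; the forward map phi_ab and the backward map phi_ba are composed by interpolation, and the finite-difference Jacobian of the composition is penalized toward the identity. The helper names jacobian_fd, compose, and gradicon_penalty are illustrative.

        import numpy as np
        from scipy.ndimage import map_coordinates

        def jacobian_fd(phi):
            """Finite-difference Jacobian of a 2D coordinate map phi of shape (2, H, W)."""
            J = np.empty((2, 2) + phi.shape[1:])
            for i in range(2):
                dy, dx = np.gradient(phi[i])          # derivatives along axes 0 and 1
                J[i, 0], J[i, 1] = dy, dx
            return J

        def compose(phi_ab, phi_ba):
            """Evaluate phi_ba at the points phi_ab(x), i.e. the composition phi_ba o phi_ab."""
            return np.stack([map_coordinates(phi_ba[i], phi_ab, order=1, mode='nearest')
                             for i in range(2)])

        def gradicon_penalty(phi_ab, phi_ba):
            """Mean squared deviation of the Jacobian of phi_ba o phi_ab from the identity."""
            J = jacobian_fd(compose(phi_ab, phi_ba))
            identity = np.eye(2)[:, :, None, None]
            return np.mean((J - identity) ** 2)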

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    From Mouse Models to Patients: A Comparative Bioinformatic Analysis of HFpEF and HFrEF

    Heart failure (HF) represents an immense health burden for which there are currently no curative therapeutic strategies. Study of HF patient heterogeneity has led to the recognition of HF with preserved (HFpEF) and reduced ejection fraction (HFrEF) as distinct syndromes regarding molecular characteristics and clinical presentation. Until the recent past, HFrEF represented the focus of research, reflected in the development of a number of therapeutic strategies. However, the pathophysiological concepts applicable to HFrEF may not necessarily be applicable to HFpEF. HF induces a series of ventricular remodeling processes that involve, among others, hallmarks of hypertrophy, fibrosis, and inflammation, all of which can be observed to some extent in HFpEF and HFrEF. Thus, by direct comparative analysis between HFpEF and HFrEF, distinctive features can be uncovered, possibly leading to improved pathophysiological understanding and opportunities for therapeutic intervention. Moreover, recent advances in biotechnologies, animal models, and digital infrastructure have enabled large-scale collection of molecular and clinical data, making it possible to conduct a bioinformatic comparative analysis of HFpEF and HFrEF. Here, I first evaluated the field of HF transcriptome research by revisiting published studies and data sets to provide a consensus gene expression reference. I discussed the patient population that was captured, revealing that HFpEF patients were not represented. Thus, I applied alternative approaches to study HFpEF. I utilized a mouse surrogate model of HFpEF and analyzed single-cell transcriptomics to gain insights into interstitial tissue remodeling. I contrasted this analysis with the fibroblast activation patterns found in mouse models resembling HFrEF. The human reference was used to further demonstrate similarities between models and patients, and a possible novel biomarker for HFpEF was introduced. Mouse models only capture selected aspects of HFpEF but largely fail to imitate the complex multifactorial and multi-organ syndrome present in humans. To account for this complexity, I performed a top-down analysis in HF patients by analyzing phenome-wide comorbidity patterns. I derived clinical insights by contrasting HFpEF and HFrEF patients and their comorbidity profiles. These profiles were then used to predict associated genetic profiles, which could also be recovered in the HFpEF mouse model, providing hypotheses about the molecular links of comorbidity profiles. My work provided novel insights into the HFpEF and HFrEF syndromes and exemplified an interdisciplinary bioinformatic approach for a comparative analysis of both syndromes using different data modalities.

    Assisting digital volume correlation with mechanical image-based modeling: application to the measurement of kinematic fields at the architecture scale in cellular materials

    Measuring displacement and strain fields at small scales in complex microstructures remains a major challenge in experimental mechanics, in part because of the image acquisition process and the poor texture available at these scales. The problem is particularly acute for cellular materials, which, when imaged with conventional micro-tomographs, may undergo complex, highly non-linear local deformation mechanisms. As the validation of numerical models and the identification of mechanical properties rely on accurate measurements of displacement and strain fields, robust and reliable image correlation algorithms must be designed and implemented. With cellular materials, digital volume correlation (DVC) faces a paradox: in the absence of exploitable texture on or in the struts and cell walls, the material architecture itself must serve as the speckle pattern for correlation. Classical DVC techniques therefore fail to measure kinematics at the cellular and, a fortiori, sub-cellular scales, because the interpolation basis of the displacement field cannot capture the complexity of the underlying kinematics, especially when bending or buckling of beams or walls occurs. The objective of this thesis is to develop a DVC technique for measuring displacement fields in cellular materials at the scale of their architecture. The proposed solution assists DVC with a weak elastic regularization based on an automatically generated, image-based mechanical model, introducing a separation of scales above which DVC is dominant and below which it is assisted by image-based modeling. First, a numerical investigation and comparison of different techniques for automatically building a geometric and mechanical model from tomographic images is conducted. Two methods are considered: the finite element method (FEM) and the finite cell method (FCM), a fictitious-domain method that immerses the complex geometry in a high-order structured grid and requires no meshing. While the FCM avoids a delicate meshing step, several of its discretization parameters remain difficult to choose; in this work they are adjusted to obtain (a) the best possible accuracy (bounded by pixelation errors) while (b) ensuring minimal complexity. Concerning the ability of the image-based mechanical models to regularize the correlation, several virtual experiments are performed in two dimensions to analyze the influence of the introduced regularization lengths for different input mechanical behaviors (elastic, elasto-plastic, and geometrically non-linear), with measurement errors quantified against finite element reference solutions. We show that the method can estimate complex local displacement and strain fields from speckle-free, low-definition images, even in non-linear regimes such as local buckling. Finally, a three-dimensional generalization is developed as a DVC framework that takes as input the reconstructed volumes at the different deformation states of the material and automatically constructs the cellular micro-architecture geometry, using either an immersed structured B-spline grid of arbitrary order (FCM) or a finite element mesh (FEM). Experimental evidence of the efficiency and accuracy of the proposed approach is provided by measuring the complex kinematics of a polyurethane foam under compression during an in situ test.
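    As an illustration of the kind of regularized correlation functional described above, here is a minimal 2D NumPy sketch: a grey-level correlation residual plus a weak elastic energy computed with an image-based stiffness operator K. The function name, the single scalar weight lam, and the (2, H, W) layout of the displacement field are assumptions made for the sketch; the thesis works in 3D and controls the regularization through explicit regularization lengths.

        import numpy as np
        from scipy.ndimage import map_coordinates

        def regularized_dic_objective(u, f, g, K, lam):
            """
            J(u) = || f(x) - g(x + u(x)) ||^2  +  lam * u^T K u
            u   : displacement field, shape (2, H, W)
            f,g : reference and deformed images, shape (H, W)
            K   : (sparse) elastic stiffness operator acting on the flattened field
            lam : weight setting the scale below which the model assists the correlation
            """
            grid = np.mgrid[0:f.shape[0], 0:f.shape[1]].astype(float)
            warped = map_coordinates(g, grid + u, order=1, mode='nearest')   # g(x + u(x))
            residual = f - warped                                            # grey-level residual
            u_flat = u.ravel()
            return np.sum(residual ** 2) + lam * float(u_flat @ (K @ u_flat))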

    Quadratic Form Minimisation in X-ray Computed Tomography

    The simplest physical model of X-ray attenuation through matter (i.e. Beer's Law) gives rise to a linear equality between a vector of measured attenuations, m, and the projection of an attenuating volume vector x. In equation form, m = Ax, where A is determined by the tomographic experiment parameters. Due to measurement noise in the vector m, this equality can never be satisfied exactly. In lieu of an exact solution, a quantitative reconstruction chooses and minimises some objective function (explicit or implicit) between m, A and x. In this thesis we develop a systematised approach to constructing and optimising quadratic approximations to objective penalty functions f(Ax) on projected attenuations. We describe how the inversion problem for quadratic approximations can be expressed precisely as x = (A^T S A)^{-1} A^T S c, where c and S are the expansion point and Hessian, respectively, of the quadratic approximation to f(Ax) as a function of Ax. This conceptual framework encapsulates Generalised Least Squares and Weighted Least Squares, is more general, and offers an immediate proof that the objective function is a best possible quadratic-form approximation to f(Ax) in the neighbourhood of a solution. The primary challenge in minimising the quadratic approximations to penalty functions f(Ax) lies in actually applying the operator (A^T S A)^{-1} computationally. We derive some results pertaining to the exact expression (A^T S A)^{-1}, including a computationally practicable version for 2D tomography for arbitrary choices of f(Ax) -- unfortunately we did not find a non-trivial example stable enough for use. The theoretical result is still of interest, if for no other reason than that there will likely exist some choices of f(Ax) for which it is in fact stable. An expression for the exact inverse also serves as a foundation for further study into suitably stable approximations. Approximations to (A^T S A)^{-1} (also known as preconditioners of the gradient descent of f(Ax)) can be more computationally tractable and also more stable than the exact formula; in reconstruction there is a spectrum of trade-offs between exactness (speed) and stability (robustness) of the preconditioner. To this end, we propose a stable preconditioner (A^T S A)^{-1} ~= diag(A^T S A 1)^{-1} (where 1 is a vector of ones), which is the (unique) generalisation of the one used in the seminal SIRT algorithm by Peter Gilbert. We illustrate reconstructions based on this preconditioner (via the Conjugate Gradient Method) in the simulations section of this thesis, as well as the potential gain in reconstruction quality (objective and subjective) from this generalisation. The resulting reconstructions are minimisers of f(Ax), and so they also demonstrate what could be achieved with direct methods if stable expressions for (A^T S A)^{-1} exist and can be found. Thus the problem of exact inversion of A^T S A is of great interest for application to tomographic experiments with significant noise and/or minor nonlinearities, both of which may be incorporated into a quadratic penalty function f(Ax). This thesis is a foray into the theory underlying the inversion of A^T S A, and a brief exhibition of a new series of practical iterative algorithms which may be applied to arbitrary quadratic forms.
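    For illustration, a minimal SciPy sketch of this reconstruction scheme follows: the weighted quadratic form is minimised by conjugate gradients on the normal equations (A^T S A) x = A^T S c, preconditioned by the proposed diag(A^T S A 1)^{-1}. The matrices A and S are assumed to be available as a sparse projection matrix and a diagonal weight (Hessian) matrix; the function name and the numerical safeguard on the diagonal are illustrative, not the thesis implementation.

        import numpy as np
        from scipy.sparse import diags
        from scipy.sparse.linalg import LinearOperator, cg

        def wls_reconstruction(A, S, c):
            """Minimise (Ax - c)^T S (Ax - c) with a SIRT-like diagonal preconditioner."""
            n = A.shape[1]
            normal_op = LinearOperator((n, n), matvec=lambda x: A.T @ (S @ (A @ x)))  # A^T S A
            d = A.T @ (S @ (A @ np.ones(n)))                                          # A^T S A 1
            M = diags(1.0 / np.maximum(d, 1e-12))                                     # diag(A^T S A 1)^{-1}
            x, info = cg(normal_op, A.T @ (S @ c), M=M)
            return x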

    Imaging Sensors and Applications

    In recent decades, various sensor technologies have been used in all areas of our lives, thus improving our quality of life. In particular, imaging sensors have been widely applied in the development of various imaging approaches such as optical imaging, ultrasound imaging, X-ray imaging, and nuclear imaging, and have contributed to achieving high sensitivity, miniaturization, and real-time imaging. These advanced image sensing technologies play an important role not only in the medical field but also in the industrial field. This Special Issue covers broad topics on imaging sensors and applications. Its scope extends from novel imaging sensors to diverse imaging systems, including hardware and software advancements. Additionally, biomedical and nondestructive sensing applications are welcome.

    Ubiquitous Technologies for Emotion Recognition

    Emotions play a very important role in how we think and behave. As such, the emotions we feel every day can compel us to act and influence the decisions and plans we make about our lives. Being able to measure, analyze, and better comprehend how or why our emotions may change is thus of much relevance for understanding human behavior and its consequences. Despite the great efforts made in the past in the study of human emotions, it is only now, with the advent of wearable, mobile, and ubiquitous technologies, that we can aim to sense and recognize emotions, continuously and in real time. This book brings together the latest experiences, findings, and developments regarding ubiquitous sensing, modeling, and the recognition of human emotions.

    Drug development progress in Duchenne muscular dystrophy

    Duchenne muscular dystrophy (DMD) is a severe, progressive, and incurable X-linked disorder caused by mutations in the dystrophin gene. Patients with DMD lack functional dystrophin protein, which results in chronic damage to muscle fibers during contraction, thus leading to deterioration of muscle quality and loss of muscle mass over time. Although there is currently no cure for DMD, improvements in care and management could delay disease progression and improve quality of life, thereby prolonging life expectancy for these patients. Furthermore, active research efforts are ongoing to develop therapeutic strategies that target dystrophin deficiency, such as gene replacement therapies, exon skipping, and readthrough therapy, as well as strategies that target the secondary pathology of DMD, such as novel anti-inflammatory compounds, myostatin inhibitors, and cardioprotective compounds. In addition, longitudinal modeling approaches have been used to characterize the progression of MRI and functional endpoints for predictive purposes to inform Go/No-Go decisions in drug development. This review showcases approved drugs and drug candidates along their development paths and also provides information on primary endpoints and enrollment sizes of phase 2/3 and phase 3 trials in the DMD space.

    Harnessing Neural Dynamics as a Computational Resource

    Researchers study nervous systems at levels of scale spanning several orders of magnitude, both in terms of time and space. While some parts of the brain are well understood at specific levels of description, there are few overarching theories that systematically bridge low-level mechanism and high-level function. The Neural Engineering Framework (NEF) is an attempt at providing such a theory. The NEF enables researchers to systematically map dynamical systems—corresponding to some hypothesised brain function—onto biologically constrained spiking neural networks. In this thesis, we present several extensions to the NEF that broaden both the range of neural resources that can be harnessed for spatiotemporal computation and the range of available biological constraints. Specifically, we suggest a method for harnessing the dynamics inherent in passive dendritic trees for computation, allowing us to construct single-layer spiking neural networks that, for some functions, achieve substantially lower errors than larger multi-layer networks. Furthermore, we suggest “temporal tuning” as a unifying approach to harnessing temporal resources for computation through time. This allows modellers to directly constrain networks to temporal tuning observed in nature, in ways not previously well-supported by the NEF. We then explore specific examples of neurally plausible dynamics using these techniques. In particular, we propose a new “information erasure” technique for constructing LTI systems generating temporal bases. Such LTI systems can be used to establish an optimal basis for spatiotemporal computation. We demonstrate how this captures “time cells” that have been observed throughout the brain. As well, we demonstrate the viability of our extensions by constructing an adaptive filter model of the cerebellum that successfully reproduces key features of eyeblink conditioning observed in neurobiological experiments. Outside the cognitive sciences, our work can help exploit resources available on existing neuromorphic computers, and inform future neuromorphic hardware design. In machine learning, our spatiotemporal NEF populations map cleanly onto the Legendre Memory Unit (LMU), a promising artificial neural network architecture for stream-to-stream processing that outperforms competing approaches. We find that one of our LTI systems derived through “information erasure” may serve as a computationally less expensive alternative to the LTI system commonly used in the LMU
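    For context on the closing remark, the LTI system conventionally used in the LMU is the Legendre Delay Network, whose state approximates a sliding window of its input in a Legendre-polynomial basis. Below is a minimal NumPy sketch of that standard construction together with a forward-Euler simulation; it does not reproduce the thesis's “information erasure” alternative, and the function name is illustrative.

        import numpy as np

        def ldn_system(q, theta):
            """(A, B) of the order-q Legendre Delay Network with window length theta:
            dm/dt = A m + B u, where m holds Legendre coefficients of the last theta seconds of u."""
            Q = np.arange(q)
            scale = (2 * Q + 1) / theta
            i, j = np.meshgrid(Q, Q, indexing="ij")
            A = scale[:, None] * np.where(i < j, -1.0, (-1.0) ** (i - j + 1))
            B = scale * (-1.0) ** Q
            return A, B

        # Forward-Euler simulation with a scalar sinusoidal input
        dt, q, theta = 1e-3, 6, 1.0
        A, B = ldn_system(q, theta)
        m = np.zeros(q)
        for step in range(2000):
            u = np.sin(2 * np.pi * step * dt)
            m = m + dt * (A @ m + B * u)      # m now encodes the recent input window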