
    Patient-specific anisotropic model of human trunk based on MR data

    There are many ways to generate geometrical models for numerical simulation, and most of them start with a segmentation step to extract the boundaries of the regions of interest. This paper presents an algorithm to generate a patient-specific three-dimensional geometric model, based on a tetrahedral mesh, without an initial extraction of contours from the volumetric data. Using the information directly available in the data, such as gray levels, we built a metric to drive a mesh adaptation process. The metric is used to specify the size and orientation of the tetrahedral elements everywhere in the mesh. Our method, which produces anisotropic meshes, gives good results with synthetic and real MRI data. The resulting model quality has been evaluated qualitatively and quantitatively by comparing it with an analytical solution and with a segmentation made by an expert. Results show that, in 90% of the cases, our method gives meshes as good as or better than a similar isotropic method, based on the accuracy of the volume reconstruction for a given mesh size. Moreover, a comparison of the Hausdorff distances between adapted meshes of both methods and ground-truth volumes shows that our method decreases reconstruction errors faster. Copyright © 2015 John Wiley & Sons, Ltd. Funded by the Natural Sciences and Engineering Research Council (NSERC) of Canada and the MEDITIS training program (École Polytechnique de Montréal and NSERC).
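As a rough illustration of the metric-driven idea described above, the sketch below builds a per-pixel anisotropic metric tensor from the Hessian of the gray levels of a 2-D image. The function name, the curvature-to-size mapping, and the clamping bounds are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def anisotropic_metric(gray, eps=1e-3, h_min=1.0, h_max=10.0):
    """Per-pixel 2x2 metric tensor from image gray levels (illustrative).

    Eigenvalues of the intensity Hessian drive target edge sizes:
    small sizes (large metric eigenvalues) where the image curves
    sharply, large sizes in flat regions. Constants are assumptions.
    """
    # Finite-difference first and second derivatives of the intensity.
    d0, d1 = np.gradient(gray)
    d00, d01 = np.gradient(d0)
    _, d11 = np.gradient(d1)
    ny, nx = gray.shape
    metric = np.empty((ny, nx, 2, 2))
    for j in range(ny):
        for i in range(nx):
            hess = np.array([[d00[j, i], d01[j, i]],
                             [d01[j, i], d11[j, i]]])
            w, v = np.linalg.eigh(hess)
            # Target size ~ 1/|curvature|, clamped to [h_min, h_max];
            # metric eigenvalue = 1/size^2 (standard mesh-metric duality).
            h = np.clip(1.0 / (np.abs(w) + eps), h_min, h_max)
            metric[j, i] = v @ np.diag(1.0 / h**2) @ v.T
    return metric
```

A mesh adapter would then equidistribute edge lengths in this metric, producing elements stretched along directions of low intensity curvature.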

    3D tooth surface reconstruction

    Master's thesis (Master of Engineering)

    Three-dimensional model-based analysis of vascular and cardiac images

    This thesis is concerned with the geometrical modeling of organs to perform medical image analysis tasks. The thesis is divided into two main parts, devoted to modeling linear vessel segments and the left ventricle of the heart, respectively. Chapters 2 to 4 present different aspects of a model-based technique for semi-automated quantification of linear vessel segments from 3-D Magnetic Resonance Angiography (MRA). Chapter 2 is concerned with a multiscale filter for the enhancement of vessels in 2-D and 3-D angiograms. Chapter 3 applies the filter developed in Chapter 2 to determine the central vessel axis in 3-D MRA images. This procedure is initialized using an efficient user interaction technique that naturally incorporates the knowledge of the operator about the vessel of interest. Also in this chapter, a linear vessel model is used to recover the position of the vessel wall in order to carry out an accurate quantitative analysis of vascular morphology. Prior knowledge is provided in two main forms: a cylindrical model introduces a shape prior, while prior knowledge of the image acquisition (type of MRA technique) is used to define an appropriate vessel boundary criterion. In Chapter 4 an extensive in vitro and in vivo evaluation of the algorithm introduced in Chapter 3 is described. Chapters 5 to 7 change the focus to 3D cardiac image analysis from Magnetic Resonance Imaging. Chapter 5 presents an extensive survey, a categorization and a critical review of the field of cardiac modeling. Chapter 6 and Chapter 7 present successive refinements of a method for building statistical models of shape variability, with particular emphasis on cardiac modeling. The method is based on an elastic registration method using hierarchical free-form deformations. A 3D shape model of the left and right ventricles of the heart was constructed. This model contains both the average shape of these organs as well as their shape variability.
The methodology presented in the last two chapters could also be applied to other anatomical structures. This has been illustrated in Chapter 6 with examples of geometrical models of the caudate nucleus and the radius.
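Multiscale vessel enhancement of the kind described in Chapter 2 is commonly implemented by examining the eigenvalues of the scale-normalized image Hessian. The 2-D sketch below is one such Hessian-based line filter; the constants `beta` and `c` and the helper names are assumptions, and this is not claimed to be the thesis's exact algorithm.

```python
import numpy as np

def _gauss_deriv_kernel(sigma, order):
    # 1-D Gaussian kernel (order 0) or its first/second derivative.
    radius = int(4 * sigma)
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
    if order == 0:
        return g
    if order == 1:
        return -x / sigma**2 * g
    return (x**2 - sigma**2) / sigma**4 * g

def _sep_filter(img, k_rows, k_cols):
    # Separable convolution: down the rows, then along the columns.
    out = np.apply_along_axis(lambda c: np.convolve(c, k_rows, mode="same"), 0, img)
    return np.apply_along_axis(lambda r: np.convolve(r, k_cols, mode="same"), 1, out)

def vesselness_2d(img, sigma, beta=0.5, c=15.0):
    """Single-scale, Hessian-eigenvalue line filter (bright-on-dark)."""
    g0 = _gauss_deriv_kernel(sigma, 0)
    g1 = _gauss_deriv_kernel(sigma, 1)
    g2 = _gauss_deriv_kernel(sigma, 2)
    s2n = sigma**2                          # scale normalization
    Hxx = _sep_filter(img, g0, g2) * s2n
    Hyy = _sep_filter(img, g2, g0) * s2n
    Hxy = _sep_filter(img, g1, g1) * s2n
    # Closed-form eigenvalues of the 2x2 symmetric Hessian.
    tmp = np.sqrt((Hxx - Hyy) ** 2 + 4 * Hxy**2)
    mu1, mu2 = (Hxx + Hyy + tmp) / 2, (Hxx + Hyy - tmp) / 2
    swap = np.abs(mu1) > np.abs(mu2)
    l1 = np.where(swap, mu2, mu1)           # |l1| <= |l2| pixelwise
    l2 = np.where(swap, mu1, mu2)
    rb2 = (l1 / (l2 + 1e-12)) ** 2          # line-vs-blob ratio, squared
    s2 = l1**2 + l2**2                      # second-order "structure" energy
    v = np.exp(-rb2 / (2 * beta**2)) * (1 - np.exp(-s2 / (2 * c**2)))
    return np.where(l2 < 0, v, 0.0)         # keep bright tubular structures
```

A multiscale version would evaluate this at several `sigma` and take the pixelwise maximum, so that vessels of different radii respond at their matching scale.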

    Entanglement & Correlations in exactly solvable models

    The phenomenon of entanglement is probably the most fundamental characteristic distinguishing the quantum from the classical world. It was one of the first aspects of quantum physics to be studied and discussed, and more than 75 years after the publication of the classic papers by Einstein, Podolsky and Rosen and by Schrödinger, interest in the properties of entanglement is still growing. The quantum nature of entanglement makes any intuitive description difficult, and it is better to consider directly what it implies. Entanglement means that the measurement of an observable of a subsystem may affect drastically and instantaneously the possible outcome of a measurement on another part of the system, no matter how far apart it is spatially. The weird and fascinating aspect is that the first measurement affects the second one with infinite speed. About 30 years after the concept of entanglement appeared, Bell published one of his most famous works, in which he showed that entanglement forbids an explanation of quantum randomness via hidden variables, unraveling the EPR paradox once and for all. Only some 15 years later, when Hawking radiation was related to the entanglement entropy, was it realized that entanglement could provide unexpected information. The interest in understanding the properties of entangled states received an impressive boost with the advent of “quantum information” in the nineties. For quantum information, entanglement is a resource: quantum (non-local) correlations are fundamental for quantum teleportation and for enhancing the efficiency of quantum protocols. The progress made in quantum information in quantifying entanglement has found important applications in the study of extended quantum systems.
    In this context the entanglement entropy becomes an indicator of quantum phase transitions, and its behavior at different subsystem sizes and geometries uncovers universal quantities characterizing the critical points. In comparison with quantum correlation functions, the entanglement entropy measures the fundamental properties of the vicinity of critical points in a “cleaner” way, e.g. through the simple (linear) dependence of the entanglement entropy on the central charge in a conformal system. The first part of the thesis fits into this line of research: the entanglement between a subsystem and the rest in the ground state of a 1D system is investigated. In particular, the dependence of the entanglement entropies on the geometry of the subsystem and on boundary conditions is discussed at length. The second part of the thesis focuses on non-equilibrium dynamics. The issue of equilibration of quantum systems was first posed in a seminal paper by von Neumann in 1929, but for a long time it remained only an academic problem. Indeed, in solid-state physics there are many difficulties in designing experiments in which the system's parameters can be tuned. Moreover, the genuinely quantum features of systems could not be preserved for long enough times, because of dissipation and decoherence. Consequently, research on quantum non-equilibrium problems faded. Only in the last decade has the many-body physics of ultracold atomic gases overcome these problems: these are highly tunable systems, weakly coupled to the environment, so that quantum coherence is preserved for long times. In fact, a unique feature of the many-body physics of cold atoms is the possibility to “simulate” quantum systems in which both the interactions and external potentials can be modified dynamically. In addition, the experimental realization of low-dimensional structures has unveiled the role that dimensionality and conservation laws play in quantum non-equilibrium dynamics.
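For reference, the quantities alluded to here are standard: the entanglement entropy of a subsystem $A$ is the von Neumann entropy of its reduced density matrix, and for an interval of length $\ell$ in a periodic critical chain of length $L$ (lattice cutoff $a$, central charge $c$, non-universal constant $c_1'$) conformal field theory predicts a logarithmic scaling linear in $c$:

```latex
S_A = -\operatorname{Tr}\left(\rho_A \ln \rho_A\right),
\qquad
\rho_A = \operatorname{Tr}_B\, |\Psi\rangle\langle\Psi| ,
```

```latex
S_A = \frac{c}{3}\,
\ln\!\left[\frac{L}{\pi a}\,\sin\!\left(\frac{\pi \ell}{L}\right)\right]
+ c_1' .
```

For $\ell \ll L$ this reduces to the familiar $S_A \simeq (c/3)\ln(\ell/a)$, which is the linear dependence on the central charge mentioned above.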
    These aspects were recently addressed in a fascinating experiment on the time evolution of non-equilibrium Bose gases in one dimension, interpreted as the quantum equivalent of Newton's cradle. One of the most important open problems is the characterization of a system that evolves from a non-equilibrium state prepared by suddenly tuning an external parameter. This is commonly called a quantum quench, and it is the simplest example of out-of-equilibrium dynamics. The time dependence of the various local observables could in principle be calculated from first principles, but in general this task is too hard even for the most powerful computers (incidentally, this is also the reason why quantum computers can be far more effective than classical ones). Insights can be obtained by exploiting the most advanced mathematical techniques for low-dimensional quantum systems to draw very general conclusions about quantum quenches. For example, if at very large times local observables become stationary (even though the entire system never attains equilibrium), one can describe the system by an effective stationary state that can be obtained without solving the full non-equilibrium dynamics. This is an intriguing aspect of quantum quenches, and it has led to vigorous research aimed at clarifying the role played by fundamental features of the system, first of all integrability, that is to say the existence of an infinite number of conservation laws. The common belief is that in non-integrable systems (i.e. those with a finite number of conservation laws) the stationary state can be described by a single parameter, an effective temperature encoding the loss of information about non-local observables. Eventually the state at late times is, to all intents and purposes, equivalent to a thermal one at that temperature.
    This interesting picture opens the way to a quantum interpretation of thermalization as a local effective description in closed systems. When there are many (infinitely many) conserved quantities, as in integrable systems, the effective temperature is not sufficient to describe the system's features at late times. It is widely believed that the behavior of local observables can then be explained by generalizations of the celebrated Gibbs ensemble. In the thesis, this hypothesis is tested and proved for the paradigm of systems undergoing quantum phase transitions: the quantum Ising model. For quenches within the ordered phase of the Ising model, an analytic formula describing the evolution of the equal-time two-point correlation function of the order parameter is obtained.

    New perspectives in statistical mechanics and high-dimensional inference

    The main purpose of this thesis is to go beyond two usual assumptions that accompany theoretical analysis in spin glasses and inference: the i.i.d. (independently and identically distributed) hypothesis on the noise elements and the finite-rank regime. The first has accompanied spin glasses since their birth. The second instead concerns the inference viewpoint. Disordered systems and Bayesian inference have a well-established relation, evidenced by their continuous cross-fertilization. The thesis makes use of techniques coming both from the rigorous mathematical machinery of spin glasses, such as the interpolation scheme, and from statistical physics, such as the replica method. The first chapter contains an introduction to the Sherrington-Kirkpatrick and spiked Wigner models. The first is a mean-field spin glass where the couplings are i.i.d. Gaussian random variables. The second instead amounts to establishing the information-theoretic limits in the reconstruction of a fixed low-rank matrix, the "spike", blurred by additive Gaussian noise. In chapters 2 and 3 the i.i.d. hypothesis on the noise is broken by assuming a noise with an inhomogeneous variance profile. In spin glasses this leads to multi-species models. The inferential counterpart is called spatial coupling. All the previous models are usually studied in the Bayes-optimal setting, where everything is known about the generating process of the data. In chapter 4 instead we study the spiked Wigner model where the prior on the signal to reconstruct is ignored. In chapter 5 we analyze the statistical limits of a spiked Wigner model where the noise is no longer Gaussian, but drawn from a random matrix ensemble, which makes its elements dependent. The thesis ends with chapter 6, where the challenging problem of high-rank probabilistic matrix factorization is tackled. Here we introduce a new procedure called "decimation" and we show that it is theoretically possible to perform matrix factorization through it.
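The rank-one spiked Wigner model mentioned above can be sketched in a few lines under one common normalization (spike eigenvalue `snr`, noise spectrum edge near 2, so the spike is detectable by PCA when `snr > 1`, the BBP transition). Names and scaling below are illustrative, not necessarily the thesis's conventions.

```python
import numpy as np

def spiked_wigner(n, snr, rng):
    """One sample of the rank-one spiked Wigner model:
    Y = (snr/n) * x x^T + W, with x a Rademacher spike and W a
    symmetric Gaussian (GOE-like) noise matrix whose spectrum
    concentrates on [-2, 2] as n grows.
    """
    x = rng.choice([-1.0, 1.0], size=n)
    g = rng.standard_normal((n, n)) / np.sqrt(n)
    w = (g + g.T) / np.sqrt(2)              # symmetric noise matrix
    y = (snr / n) * np.outer(x, x) + w      # planted rank-one signal
    return y, x
```

For `snr > 1` the top eigenvalue of `y` pops out of the bulk (near `snr + 1/snr`) and its eigenvector correlates with the spike; below the threshold, PCA carries no information about `x`.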

    Modeling, estimation and control of ring laser gyroscopes for the accurate estimation of the Earth's rotation

    He–Ne ring laser gyroscopes are, at present, the most precise devices for absolute angular velocity measurements. Limitations to their performance come from the non-linear dynamics of the laser. According to the Lamb semi-classical theory of gas lasers, a model can be applied to a He–Ne ring laser gyroscope to estimate and remove the contribution of the laser dynamics from the rotation measurements. We find a set of critical parameters affecting the long-term stability of the system. We propose a method for estimating the long-term drift of the laser parameters and for filtering out the effects of the laser dynamics, e.g. light backscattering. The intensities of the counter-propagating laser beams exiting one cavity mirror are continuously measured, together with a monitor of the laser population inversion. These quantities, once properly calibrated with a dedicated procedure, allow us to estimate the cold-cavity and active-medium parameters of the Lamb theory. Our identification procedure, based on perturbative solutions of the laser dynamics, allows us to apply Kalman filter theory to the estimation of the angular velocity. The parameter identification and backscattering subtraction procedure has been verified by means of Monte Carlo studies of the system, and then applied to experimental data from the ring lasers G-PISA and G-WETTZELL. After the subtraction of laser dynamics effects by the Kalman filter, the relative systematic error of G-PISA reduces from 50 to 5 parts in 10³, and it can be attributed to the residual uncertainties on the geometrical scale factor and orientation of the ring. We also report that after the backscattering subtraction, the relative systematic errors of G-WETTZELL are reduced too. In addition, in the last decade increasing attention has been drawn to high-precision optical experiments, e.g. ring laser experiments, which combine high sensitivity, accuracy and long-term stability.
    Due to the experimental requirements, the position and orientation of the optical elements and of the laser beams must be controlled at the level of nano-positioning and ultra-precision instrumentation. Existing methods for computing beam directions in resonators, e.g. iterative ray tracing or generalized ray transfer matrices, are either computationally expensive or rely on over-parametrized models of the optical elements. By exploiting Fermat's principle, we develop a novel method to compute the beam directions in resonant optical cavities formed by spherical mirrors, as a function of the mirror positions and curvature radii. The proposed procedure is based on the geometric Newton method on matrix manifolds, a tool with a second-order convergence rate that relies on a second-order model of the cavity optical length. As we avoid coordinates to parametrize the beam positions on the mirror surfaces, the computation of the second-order model does not involve the second derivatives of the parametrization. With the help of numerical tests, we show that the convergence properties of our procedure hold for non-planar polygonal cavities, and we assess the effectiveness of the geometric Newton method in determining their configurations with a high degree of accuracy and negligible computational effort. We also present a method to account for (ring laser) cavity deformations due to mirror displacements, seen as the residual motions of the mirror centers after the removal of rigid-body motions. Given the cavity configuration and the model accounting for mirror movements, the calibration and active control of the optical cavity can be addressed as a control problem. In fact, our results are of some importance not only for the design and simulation of ring laser gyroscopes, but also for the active control of optical cavities.
    In the final part of this work we detail a complete model including the simulation of the physical processes of interest in the operation of a ring laser gyroscope. Simulation results for the application of the model to the ring laser GP2 are presented and discussed.
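The Kalman-filtering step can be illustrated in isolation with a minimal scalar filter for a slowly drifting rate measurement. This is a generic textbook sketch with assumed noise variances `q` and `r`, not the actual G-PISA/G-WETTZELL pipeline (which also identifies Lamb-theory parameters and subtracts backscattering).

```python
import numpy as np

def kalman_1d(z, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter with a random-walk state model.

    z  : sequence of noisy rate measurements
    q  : process-noise variance (how fast the true rate may drift)
    r  : measurement-noise variance
    Returns the filtered estimate at each step.
    """
    x, p = x0, p0
    est = []
    for zk in z:
        p = p + q                    # predict (random-walk dynamics)
        k = p / (p + r)              # Kalman gain
        x = x + k * (zk - x)         # update with the innovation
        p = (1 - k) * p              # posterior variance
        est.append(x)
    return np.array(est)
```

With `q` much smaller than `r`, the steady-state gain is small and the filter behaves like a long exponential average, suppressing the measurement noise while still tracking slow drifts of the underlying rate.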

    Computational and numerical aspects of full waveform seismic inversion

    Full-waveform inversion (FWI) is a nonlinear optimisation procedure, seeking to match synthetically-generated seismograms with those observed in field data by iteratively updating a model of the subsurface seismic parameters, typically compressional wave (P-wave) velocity. Advances in high-performance computing have made FWI of 3-dimensional models feasible, but the low sensitivity of the objective function to deeper, low-wavenumber components of velocity makes these difficult to recover using FWI relative to more traditional, less automated, techniques. While the use of inadequate physics during the synthetic modelling stage is a contributing factor, I propose that this weakness is substantially one of ill-conditioning, and that efforts to remedy it should focus on the development of both more efficient seismic modelling techniques, and more sophisticated preconditioners for the optimisation iterations. I demonstrate that the problem of poor low-wavenumber velocity recovery can be reproduced in an analogous one-dimensional inversion problem, and that in this case it can be remedied by making full use of the available curvature information, in the form of the Hessian matrix. In two or three dimensions, this curvature information is prohibitively expensive to obtain and store as part of an inversion procedure. I obtain the complete Hessian matrices for a realistically-sized, two-dimensional, towed-streamer inversion problem at several stages during the inversion and link properties of these matrices to the behaviour of the inversion. Based on these observations, I propose a method for approximating the action of the Hessian and suggest it as a path forward for more sophisticated preconditioning of the inversion process.
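The role curvature information plays can be seen even in a toy least-squares inversion: below, a Gauss-Newton loop with a finite-difference Jacobian recovers layer velocities from cumulative traveltimes. The forward model and all names are hypothetical stand-ins; real FWI replaces them with wave-equation modelling and adjoint-state gradients, and the Hessian is far too large to form explicitly.

```python
import numpy as np

def misfit_grad_hess(m, d_obs, fwd):
    """Misfit 0.5*||F(m)-d||^2, its gradient, and the Gauss-Newton
    Hessian J^T J, with J built by forward finite differences."""
    r = fwd(m) - d_obs
    eps = 1e-6
    J = np.stack([(fwd(m + eps * e) - fwd(m)) / eps
                  for e in np.eye(len(m))], axis=1)
    return 0.5 * r @ r, J.T @ r, J.T @ J

def gauss_newton(m0, d_obs, fwd, iters=20):
    """Curvature-aware updates: each step solves H dm = -g."""
    m = m0.copy()
    for _ in range(iters):
        _, g, H = misfit_grad_hess(m, d_obs, fwd)
        m -= np.linalg.solve(H + 1e-9 * np.eye(len(m)), g)
    return m

# Hypothetical 3-layer example: data are cumulative vertical traveltimes.
layer_thickness = np.array([1.0, 1.0, 1.0])

def traveltimes(v):
    return np.cumsum(layer_thickness / v)

v_true = np.array([1.5, 2.0, 3.0])
v_est = gauss_newton(np.ones(3), traveltimes(v_true), traveltimes, iters=30)
```

Plain gradient descent on the same misfit updates the deepest layer very slowly, because the data are far less sensitive to it; dividing by the curvature (the `solve` above) rescales each parameter's step, which is the preconditioning role the Hessian plays in the thesis.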