
    From medical images to individualized cardiac mechanics: A Physiome approach

    Cardiac mechanics is the branch of science that deals with the forces, kinematics, and material properties of the heart, and it is valuable for clinical applications and physiological studies. Although anatomical and biomechanical experiments are necessary to provide the fundamental knowledge of cardiac mechanics, the invasive nature of these procedures limits their applicability. Noninvasive alternatives are therefore required, and cardiac images provide an excellent source of subject-specific, in vivo information. Noninvasive, individualized cardiac mechanical studies can be achieved by coupling general physiological models derived from invasive experiments with subject-specific information extracted from medical images. Nevertheless, because data extracted from images are coarse, sparse, or noisy, and generally do not directly provide the information of interest, the couplings between models and measurements are complicated inverse problems in which numerous issues must be carefully considered. The goal of this research is to develop a noninvasive framework for studying individualized cardiac mechanics through systematic coupling between cardiac physiological models and medical images according to their respective merits. More specifically, nonlinear state-space filtering frameworks have been proposed for recovering individualized cardiac deformation and local material parameters of realistic nonlinear constitutive laws. To ensure the physiological meaningfulness, clinical relevance, and computational feasibility of these frameworks, five key issues have to be properly addressed: the cardiac physiological model, the heart representation in the computational environment, the information extraction from cardiac images, the coupling between models and image information, and the computational complexity. For the cardiac physiological model, a cardiac physiome model tailored for cardiac image analysis has been proposed to provide a macroscopic physiological foundation for the study. For the heart representation, a meshfree method has been adopted to facilitate implementation and spatial accuracy refinement. For the information extraction from cardiac images, a registration method based on free-form deformation has been adopted for robust motion tracking. For the coupling between models and images, state-space filtering has been applied to systematically couple the models with the measurements. For the computational complexity, a mode superposition approach has been adopted to project the system into an equivalent mathematical space with far fewer dimensions, making the filtering computationally feasible. Experiments were performed on both synthetic and clinical data to verify the proposed frameworks.
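
    The mode superposition and state-space filtering steps can be illustrated with a deliberately simplified sketch. The code below is not the thesis implementation: it runs a plain linear Kalman filter on a handful of modal amplitudes, with a random matrix Phi standing in for the mode-superposition basis and a diagonal A standing in for the cardiac dynamics; the actual frameworks use nonlinear filters driven by the physiome model.

    ```python
    import numpy as np

    def kalman_step(x, P, y, A, H, Q, R):
        """One predict/update cycle of a linear Kalman filter on modal amplitudes."""
        x_pred = A @ x                        # predict with the (surrogate) dynamics
        P_pred = A @ P @ A.T + Q
        S = H @ P_pred @ H.T + R              # innovation covariance
        K = np.linalg.solve(S, H @ P_pred).T  # Kalman gain
        x_new = x_pred + K @ (y - H @ x_pred)
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new

    rng = np.random.default_rng(0)
    n_nodes, n_modes = 200, 6                        # material points vs. retained modes
    Phi = rng.standard_normal((n_nodes, n_modes))    # hypothetical mode shapes
    A = 0.95 * np.eye(n_modes)                       # placeholder modal dynamics
    H = Phi                                          # modal amplitudes -> nodal displacements
    Q, R = 1e-3 * np.eye(n_modes), 1e-2 * np.eye(n_nodes)

    x, P = np.zeros(n_modes), np.eye(n_modes)
    for _ in range(10):                              # frames of image-derived motion
        y = Phi @ rng.standard_normal(n_modes) + 0.1 * rng.standard_normal(n_nodes)
        x, P = kalman_step(x, P, y, A, H, Q, R)
    print("estimated modal amplitudes:", np.round(x, 3))
    ```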

    Learning the dynamics and time-recursive boundary detection of deformable objects

    We propose a principled framework for recursively segmenting deformable objects across a sequence of frames. We demonstrate the usefulness of this method on left ventricular segmentation across a cardiac cycle. The approach involves a technique for learning the system dynamics, together with particle-based smoothing and non-parametric belief propagation on a loopy graphical model that captures the temporal periodicity of the heart. The dynamic system state is a low-dimensional representation of the boundary, and the boundary estimation involves incorporating curve evolution into recursive state estimation. By formulating the problem as one of state estimation, the segmentation at each time instant is based not only on the data observed at that instant, but also on predictions derived from past and future boundary estimates. Although the paper focuses on left ventricular segmentation, the method generalizes to temporally segmenting any deformable object.
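
    As a rough illustration of the recursive-estimation idea only (not the paper's algorithm, which combines learned dynamics, particle-based smoothing, non-parametric belief propagation, and curve evolution), the sketch below runs a bootstrap particle filter over a hypothetical low-dimensional boundary state; the dynamics and image likelihood are placeholders.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_particles, state_dim = 500, 8          # boundary state: e.g. shape coefficients

    def dynamics(x):
        """Placeholder for the learned dynamic model of the boundary state."""
        return 0.9 * x + 0.05 * rng.standard_normal(x.shape)

    def likelihood(x, observation):
        """Placeholder image likelihood: states near the observation score higher."""
        d = np.linalg.norm(x - observation, axis=1)
        return np.exp(-0.5 * (d / 0.5) ** 2)

    particles = rng.standard_normal((n_particles, state_dim))
    weights = np.full(n_particles, 1.0 / n_particles)

    for t in range(20):                                    # frames in a cardiac cycle
        observation = np.sin(2 * np.pi * t / 20) * np.ones(state_dim)  # synthetic data
        particles = dynamics(particles)                    # predict
        weights *= likelihood(particles, observation)      # weight by the image evidence
        weights /= weights.sum()
        if 1.0 / np.sum(weights ** 2) < n_particles / 2:   # resample if ESS drops
            idx = rng.choice(n_particles, n_particles, p=weights)
            particles = particles[idx]
            weights = np.full(n_particles, 1.0 / n_particles)

    print("posterior-mean boundary state:", np.round(weights @ particles, 2))
    ```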

    Probabilistic and sequential computation of optical flow using temporal coherence


    Advances in automated tongue diagnosis techniques

    This paper reviews recent advances in a significant constituent of traditional oriental medicine: tongue diagnosis. Tongue diagnosis can be an effective, noninvasive method for performing an auxiliary diagnosis anytime and anywhere, supporting a global need in the primary healthcare system. This work surveys the literature on the various aspects of computerized tongue diagnosis, namely preprocessing, tongue detection, segmentation, feature extraction, and tongue analysis, especially in traditional Chinese medicine (TCM). Despite the large volume of work on automated tongue diagnosis (ATD), there is a lack of an adequate survey, especially one that connects it with current diagnostic trends. This paper studies the merits, capabilities, and associated research gaps of current ATD systems. After exploring the algorithms used in tongue diagnosis, and motivated by current trends and global requirements in the health domain, we propose a conceptual framework for an automated tongue diagnostic system on a mobile-enabled platform. This framework will be able to connect tongue diagnosis with future point-of-care health systems.
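
    To make the surveyed pipeline concrete, the skeleton below wires the stages named above (preprocessing, tongue detection, segmentation, feature extraction, analysis) into one function. Every stage body is a trivial placeholder rather than a real ATD method; a mobile point-of-care front end would pass camera frames into such a pipeline.

    ```python
    import numpy as np

    def preprocess(image):             # e.g. color correction, denoising
        return image.astype(np.float32) / 255.0

    def detect_tongue(image):          # e.g. a coarse bounding box around the tongue
        h, w = image.shape[:2]
        return image[h // 4: 3 * h // 4, w // 4: 3 * w // 4]

    def segment(region):               # e.g. thresholding or a learned segmenter
        return region.mean(axis=-1) > 0.5

    def extract_features(region, mask):    # e.g. color, coating, fissure features
        return {"mean_color": region[mask].mean(axis=0) if mask.any() else None}

    def analyze(features):             # e.g. mapping features to TCM syndromes
        return {"finding": "placeholder", "features": features}

    def tongue_diagnosis_pipeline(image):
        region = detect_tongue(preprocess(image))
        mask = segment(region)
        return analyze(extract_features(region, mask))

    # Usage with a dummy RGB frame standing in for a mobile camera image.
    print(tongue_diagnosis_pipeline(np.random.randint(0, 256, (240, 320, 3))))
    ```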

    Curve evolution implementation of the Mumford-Shah functional for image segmentation, denoising, interpolation, and magnification

    DOI: 10.1109/83.935033. In this work, we first address the problem of simultaneous image segmentation and smoothing by approaching the Mumford–Shah paradigm from a curve evolution perspective. In particular, we let a set of deformable contours define the boundaries between regions in an image, where we model the data via piecewise smooth functions and employ a gradient flow to evolve these contours. Each gradient step involves solving an optimal estimation problem for the data within each region, connecting curve evolution and the Mumford–Shah functional with the theory of boundary-value stochastic processes. The resulting active contour model offers a tractable implementation of the original Mumford–Shah model (i.e., without resorting to the elliptic approximations that have traditionally been favored for ease of implementation) to simultaneously segment and smoothly reconstruct the data within a given image in a coupled manner. Various implementations of this algorithm are introduced to increase its speed of convergence. We also outline a hierarchical implementation of this algorithm to handle important image features such as triple points and other multiple junctions. Next, by generalizing the data fidelity term of the original Mumford–Shah functional to incorporate a spatially varying penalty, we extend our method to problems in which data quality varies across the image and to images in which sets of pixel measurements are missing. This more general model leads us to a novel PDE-based approach for simultaneous image magnification, segmentation, and smoothing, thereby extending the traditional applications of the Mumford–Shah functional, which only consider simultaneous segmentation and smoothing.
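
    For reference, the piecewise-smooth Mumford–Shah energy that the curve evolution minimizes can be written with generic weights as below, where g is the observed image, f its piecewise-smooth approximation, and C the segmenting curve; the spatially varying penalty mentioned at the end replaces the constant data-fidelity weight beta with a function over the image domain.

    ```latex
    E(f, C) = \beta \int_{\Omega} (f - g)^2 \, dA
            + \alpha \int_{\Omega \setminus C} \lVert \nabla f \rVert^{2} \, dA
            + \gamma \oint_{C} ds
    ```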

    Structure-aware image denoising, super-resolution, and enhancement methods

    Denoising, super-resolution, and structure enhancement are classical image processing applications. The motivation behind them is to aid the visual analysis of raw digital images. Despite tremendous progress in these fields, certain difficult problems remain open. For example, denoising and super-resolution techniques that possess all of the following properties are very scarce: they must preserve critical structures like corners, be robust to the type of noise distribution, avoid undesirable artefacts, and also be fast. The area of structure enhancement also has an unresolved issue: very little effort has been put into designing models that can tackle anisotropic deformations in the image acquisition process. In this thesis, we design novel methods in the form of partial differential equations, patch-based approaches, and variational models to overcome the aforementioned obstacles. In most cases, our methods outperform the existing approaches in both quality and speed, while being applicable to a broader range of practical situations.
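
    The thesis's own PDE models are not reproduced in this abstract, so the sketch below uses classical Perona–Malik diffusion only as a stand-in for the family of structure-preserving, PDE-based denoising methods it discusses: diffusion is slowed across strong gradients so that edges and corners are smoothed less than homogeneous regions.

    ```python
    import numpy as np

    def perona_malik(image, n_iter=50, kappa=0.3, step=0.2):
        """Classical Perona-Malik diffusion: smooth flat regions, preserve edges."""
        u = image.astype(np.float64).copy()
        for _ in range(n_iter):
            # Differences to the four neighbours (periodic boundaries via roll,
            # kept for brevity).
            dn = np.roll(u, -1, axis=0) - u
            ds = np.roll(u, 1, axis=0) - u
            de = np.roll(u, -1, axis=1) - u
            dw = np.roll(u, 1, axis=1) - u
            g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping function
            u += step * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
        return u

    # Usage on a noisy synthetic step edge.
    clean = np.zeros((64, 64))
    clean[:, 32:] = 1.0
    noisy = clean + 0.2 * np.random.default_rng(2).standard_normal(clean.shape)
    print("residual error before/after:",
          round(float((noisy - clean).std()), 3),
          round(float((perona_malik(noisy) - clean).std()), 3))
    ```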

    Automated volume measurements in echocardiography by utilizing expert knowledge

    Left ventricular (LV) volumes and ejection fraction (EF) are important parameters for diagnosis, prognosis, and treatment planning in patients with heart disease. These parameters are commonly measured by manual tracing in echocardiographic images, a procedure that is time consuming, prone to inter- and intra-observer variability, and requires highly trained operators. This is particularly the case in three-dimensional (3D) echocardiography, where the increased amount of data makes manual tracing impractical. Automated methods for measuring LV volumes and EF can therefore improve the efficiency and accuracy of echocardiographic examinations, giving better diagnoses at a lower cost. The main goal of this thesis was to improve the efficiency and quality of cardiac measurements. More specifically, the goal was to develop rapid and accurate methods that utilize expert knowledge for automated evaluation of cardiac function in echocardiography. The thesis presents several methods for automated volume and EF measurements in echocardiographic data. For two-dimensional (2D) echocardiography, an atlas-based segmentation algorithm is presented in paper A. This method utilizes manually traced endocardial contours in a validated case database to control a snake optimized by dynamic programming. The challenge with this approach is to find the most suitable case in the database. More promising results are achieved in triplane echocardiography using a multi-view and multi-frame extension of the active appearance model (AAM) framework, as demonstrated in paper B. The AAM generalizes better to new patient data and is based on more robust optimization schemes than the atlas-based method. In triplane images, the results of the AAM algorithm may be improved further by integrating a snake algorithm into the AAM framework and by constraining the AAM to manually defined landmarks, as shown in paper C. For 3D echocardiograms, a clinical semi-automated volume measurement tool with expert-selected points is validated in paper D. This tool compares favorably to a reference measurement tool, with good agreement in measured volumes and a significantly lower analysis time. Finally, in paper E, fully automated real-time segmentation in 3D echocardiography is demonstrated using a 3D active shape model (ASM) of the left ventricle in a Kalman filter framework. The main advantage of this approach is its processing performance, allowing for real-time volume and EF estimates. Statistical models such as AAMs and ASMs provide elegant frameworks for incorporating expert knowledge into segmentation algorithms. Expert knowledge can also be utilized directly through manual input to semi-automated methods, allowing for manual initialization and correction of automatically determined volumes. The latter technique is particularly suitable for clinical routine examinations, while the fully automated 3D ASM method can extend the use of echocardiography to new clinical areas such as automated patient monitoring. In this thesis, different methods for utilizing expert knowledge in automated segmentation algorithms for echocardiography have been developed and evaluated. Particularly in 3D echocardiography, these contributions are expected to improve the efficiency and quality of cardiac measurements.
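
    The statistical shape modelling idea behind the AAM/ASM approaches can be sketched, under strong simplifications, as a PCA point-distribution model: aligned training contours define a mean shape and a few principal modes, and candidate contours are projected onto those modes with clipped weights so that only plausible ventricle-like shapes survive. The contours below are synthetic stand-ins for traced endocardial boundaries, not data from the thesis.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_shapes, n_points = 40, 30                       # training contours, landmarks
    theta = np.linspace(0, 2 * np.pi, n_points, endpoint=False)
    # Synthetic stand-in contours: ellipses with randomly varying radii.
    shapes = np.stack([
        np.concatenate([(1.0 + 0.2 * rng.standard_normal()) * np.cos(theta),
                        (1.5 + 0.2 * rng.standard_normal()) * np.sin(theta)])
        for _ in range(n_shapes)
    ])

    mean_shape = shapes.mean(axis=0)
    U, s, Vt = np.linalg.svd(shapes - mean_shape, full_matrices=False)
    n_modes = 3
    P = Vt[:n_modes].T                                # principal shape modes
    eigval = (s[:n_modes] ** 2) / (n_shapes - 1)

    def constrain(shape, limit=3.0):
        """Project a candidate contour onto the model and clip the mode weights."""
        b = P.T @ (shape - mean_shape)
        b = np.clip(b, -limit * np.sqrt(eigval), limit * np.sqrt(eigval))
        return mean_shape + P @ b

    candidate = shapes[0] + 0.3 * rng.standard_normal(2 * n_points)
    plausible = constrain(candidate)
    print("distance to mean before/after constraint:",
          round(float(np.linalg.norm(candidate - mean_shape)), 2),
          round(float(np.linalg.norm(plausible - mean_shape)), 2))
    ```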