
    Geometric description for the anatomy of the mitral valve: A review

    The mitral valve is a complex anatomical structure whose physiological functioning relies on the biomechanical properties and structural integrity of its components. Their compromise can lead to mitral valve dysfunction, associated with morbidity and mortality. A review of the morphometry of the mitral valve is therefore crucial, in particular of the importance of valve dimensions and shape for its function. This review initially provides a brief background on the anatomy and physiology of the mitral valve, followed by an analysis of the morphological information available. Mathematical descriptions of several parts of the valve are characterised, and the impact of changes in dimensions and shape in disease is then outlined. Finally, a section regarding future directions and recommendations for the use of morphometric information in clinical analysis of the mitral valve is presented.

    Model validation for a noninvasive arterial stenosis detection problem

    Copyright © 2013 American Institute of Mathematical Sciences. A current thrust in medical research is the development of a non-invasive method for detection, localization, and characterization of an arterial stenosis (a blockage or partial blockage in an artery). A method has been proposed to detect shear waves in the chest cavity which have been generated by disturbances in the blood flow resulting from a stenosis. In order to develop this methodology further, we use both one-dimensional pressure and shear wave experimental data from novel acoustic phantoms to validate corresponding viscoelastic mathematical models, which were developed in a concept paper [8] and refined herein. We estimate model parameters which give a good fit (in a sense to be precisely defined) to the experimental data, and use asymptotic error theory to provide confidence intervals for parameter estimates. Finally, since a robust error model is necessary for accurate parameter estimates and confidence analysis, we include a comparison of absolute and relative models for measurement error. Supported by the National Institute of Allergy and Infectious Diseases, the Air Force Office of Scientific Research, the Department of Education, and the Engineering and Physical Sciences Research Council (EPSRC).
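
    The estimation workflow described above (fit a forward model to wave data, then use asymptotic error theory for confidence intervals under an absolute or relative model for measurement error) can be sketched in a few lines. The damped-oscillation forward model and all numbers below are hypothetical stand-ins, not the viscoelastic model or data of the paper.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical forward model: a damped oscillation in time. The paper's true
# forward model is a viscoelastic wave model; this is only a stand-in to
# illustrate the fitting and confidence-interval machinery.
def model(theta, t):
    amp, decay, freq = theta
    return amp * np.exp(-decay * t) * np.cos(2 * np.pi * freq * t)

def residuals(theta, t, y, error_model="absolute"):
    r = y - model(theta, t)
    if error_model == "relative":
        # Relative error model: residuals scaled by the model magnitude,
        # corresponding to multiplicative measurement noise.
        r = r / np.maximum(np.abs(model(theta, t)), 1e-8)
    return r

# Synthetic "experimental" data, for illustration only.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
y = model([1.0, 3.0, 5.0], t) + 0.02 * rng.standard_normal(t.size)

# Fit under the absolute error model; rerun with "relative" to compare error models.
fit = least_squares(residuals, x0=[0.5, 1.0, 4.0], args=(t, y, "absolute"))

# Asymptotic covariance estimate sigma^2 * (J^T J)^{-1}, J = Jacobian at the optimum,
# giving approximate 95% confidence half-widths for the parameter estimates.
dof = t.size - fit.x.size
sigma2 = np.sum(fit.fun ** 2) / dof
cov = sigma2 * np.linalg.inv(fit.jac.T @ fit.jac)
ci_halfwidth = 1.96 * np.sqrt(np.diag(cov))
print("estimates:", fit.x, "95% CI half-widths:", ci_halfwidth)
```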

    A Hierarchical Multivariate Two-Part Model for Profiling Providers' Effects on Healthcare Charges

    Procedures for analyzing and comparing healthcare providers' effects on health services delivery and outcomes have been referred to as provider profiling. In a typical profiling procedure, patient-level responses are measured for clusters of patients treated by providers that, in turn, can be regarded as statistically exchangeable. Thus, a hierarchical model naturally represents the structure of the data. When provider effects on multiple responses are profiled, a multivariate model, rather than a series of univariate models, can capture associations among responses at both the provider and patient levels. When responses are in the form of charges for healthcare services and sampled patients include non-users of services, charge variables are a mix of zeros and highly skewed positive values that present a modeling challenge. For analysis of regressor effects on charges for a single service, a frequently used approach is a two-part model (Duan, Manning, Morris, and Newhouse 1983) that combines logistic or probit regression on any use of the service and linear regression on the log of positive charges given use of the service. Here, we extend the two-part model to the case of charges for multiple services, using a log-linear model and a general multivariate log-normal model, and employ the resultant multivariate two-part model as the within-provider component of a hierarchical model. The log-linear likelihood is reparameterized as proposed by Fitzmaurice and Laird (1993), so that regressor effects on any use of each service are marginal with respect to any use of other services. The general multivariate log-normal likelihood is constructed in such a way that the variances of the log of positive charges for each service are provider-specific, but the correlations between the logs of positive charges for different services are uniform across providers. A data augmentation step is included in the Gibbs sampler used to fit the hierarchical model, in order to accommodate the fact that the log of positive charges is undefined for unused services. We apply this hierarchical, multivariate, two-part model to analyze the effects of primary care physicians on their patients' annual charges for two services, primary care and specialty care. Along the way, we also demonstrate an approach for incorporating prior information about the effects of patient morbidity on response variables, to improve the accuracy of provider profiles that are based on patient samples of limited size.
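
    As a rough illustration of the single-service two-part building block described above (before the multivariate, hierarchical, and Bayesian extensions of the paper), a minimal sketch fits a logit model for any use of the service and an ordinary regression for log positive charges among users, then combines the two parts into expected charges. The data, regressor, and coefficient values below are simulated and purely hypothetical.

```python
import numpy as np
import statsmodels.api as sm

# Minimal single-service two-part model in the spirit of Duan et al. (1983):
# part 1 models any use of the service, part 2 models log charges given use.
rng = np.random.default_rng(1)
n = 2000
x = rng.standard_normal(n)                                     # one patient-level regressor
use = rng.binomial(1, 1.0 / (1.0 + np.exp(-(-0.5 + 0.8 * x)))) # any use of the service
log_charge = 5.0 + 0.4 * x + rng.normal(0.0, 0.7, n)           # log charges if the service is used
charge = np.where(use == 1, np.exp(log_charge), 0.0)

X = sm.add_constant(x)

# Part 1: probability of any use (logit link).
part1 = sm.Logit(use, X).fit(disp=False)

# Part 2: linear regression of log positive charges, fitted on users only.
users = charge > 0
part2 = sm.OLS(np.log(charge[users]), X[users]).fit()

# Expected charge combines both parts; Duan's smearing estimator corrects for
# retransforming predictions from the log scale back to the charge scale.
smearing = np.mean(np.exp(part2.resid))
expected_charge = part1.predict(X) * np.exp(part2.predict(X)) * smearing
print(expected_charge[:5])
```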

    A mathematical model for breath gas analysis of volatile organic compounds with special emphasis on acetone

    Recommended standardized procedures for determining exhaled lower respiratory nitric oxide and nasal nitric oxide have been developed by task forces of the European Respiratory Society and the American Thoracic Society. These recommendations have paved the way for the measurement of nitric oxide to become a diagnostic tool for specific clinical applications. It would be desirable to develop similar guidelines for the sampling of other trace gases in exhaled breath, especially volatile organic compounds (VOCs) which reflect ongoing metabolism. The concentrations of water-soluble, blood-borne substances in exhaled breath are influenced by: (i) breathing patterns affecting gas exchange in the conducting airways; (ii) the concentrations in the tracheo-bronchial lining fluid; (iii) the alveolar and systemic concentrations of the compound. The classical Farhi equation takes only the alveolar concentrations into account. Real-time measurements of acetone in end-tidal breath under an ergometer challenge show characteristics which cannot be explained within the Farhi setting. Here we develop a compartment model that reliably captures these profiles and is capable of relating breath to the systemic concentrations of acetone. By comparison with experimental data it is inferred that the major part of variability in breath acetone concentrations (e.g., in response to moderate exercise or altered breathing patterns) can be attributed to airway gas exchange, with minimal changes of the underlying blood and tissue concentrations. Moreover, it is deduced that measured end-tidal breath concentrations of acetone determined during resting conditions and free breathing will be rather poor indicators for endogenous levels. Particularly, the current formulation includes the classical Farhi and the Scheid series inhomogeneity model as special limiting cases.
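
    For reference, the classical Farhi relation invoked above is commonly written as follows, with C_v̄ the mixed venous concentration, λ_b:air the blood:air partition coefficient, V̇_A the alveolar ventilation, and Q̇_c the cardiac output (conventional notation, not copied from the paper); the compartment model developed in the paper reduces to this expression as a limiting case.

```latex
% Classical Farhi equation for an inert, blood-borne trace gas: the alveolar
% (end-tidal) concentration C_A is set by the mixed venous concentration, the
% blood:air partition coefficient, and the ventilation-perfusion ratio.
\begin{equation}
  C_A \;=\; \frac{C_{\bar{v}}}{\lambda_{b:\mathrm{air}} + \dot{V}_A / \dot{Q}_c}
\end{equation}
```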

    The use of the Joint Models to improve the accuracy of prognostication of death in patients with heart failure and reduced ejection fraction (HFrEF)

    The work presented in this thesis was developed during a scholarship at the Scientific Directorate - Unit of Biostatistics of the Galliera Hospital in Genoa, under the supervision of Dr. Matteo Puntoni. The scholarship was partially supported by a grant from the Italian Ministry of Health, "Bando Ricerca Finalizzata - Giovani Ricercatori" (project code: GR-2013-02355479), awarded to Dr. Puntoni for a cancer research study. The main objective of my research was to apply the Joint Model for longitudinal and survival data to improve the dynamic prediction of cardiovascular disease in patients undergoing cancer treatment. These patients are usually followed after the start of therapy through several visits, during which different longitudinal data are collected. These data are usually collected and interpreted by clinicians, but not in a systematic way; the innovation of my project consisted in a more formal use of these data within a statistical model. The Joint Model is essentially based on the simultaneous modelling of a linear mixed model for the longitudinal data and a survival model for the probability of an event. The utility of this model is twofold: on one hand it links changes in a longitudinal measurement to changes in the risk of an event, and on the other hand the survival probabilities predicted by the Joint Model can be updated whenever a new measurement is taken. Unfortunately, the clinical study on cancer therapy for which the project was conceived is still ongoing and its longitudinal data are not yet available, so we applied the developed methods based on the Joint Model to another dataset of similar clinical interest. The case study presented in Chapter 6 of this thesis arose from a meeting between Dr. Puntoni, myself, and Dr. Marco Canepa of the Cardiovascular Disease Unit of the San Martino Hospital in Genoa. His aim was to show that the longitudinal data collected in patients after heart failure could be used to improve the prognostication of death and, more generally, patient management and care through personalized therapy, which could be better calibrated by dynamically updating each patient's prognosis from the longitudinal data collected at every follow-up visit. The Joint Model for longitudinal and survival data addresses both the simultaneous analysis of the biomarkers collected at each follow-up visit and the dynamic updating of the survival probabilities each time new measurements are collected (see Chapter 4). The next step, developed in Chapter 5, was to find a statistical index that is simple to understand and practical for clinicians, but also methodologically adequate to assess whether the longitudinal data improve the prognostication of death. Two indexes seemed most suitable for this purpose: the area under the Receiver Operating Characteristic curve (AUC-ROC), to assess the predictive capability of the Joint Model, and the Net Reclassification Improvement (NRI), to evaluate the improvement in prognostication over other approaches commonly used in clinical studies. In Section 5.3, new definitions of time-dependent AUC-ROC and time-dependent NRI in the Joint Model context are given.
Although a function to derive the AUC after fitting a Joint Model was available in the literature, we needed to reformulate it and implement it in the statistical software R to make it comparable with the index derived from common survival models, such as the Weibull model. Regarding the NRI, no such index was available in the literature: methods and functions have been developed for the binary and survival settings, but none for the Joint Model. A new definition of time-dependent NRI is presented in Section 5.3.2 and used to compare the common Weibull survival model with the Joint Model. This thesis is divided into six chapters. Chapters 1 and 2 are preparatory to the introduction of the Joint Model in Chapter 3: Chapter 1 introduces the analysis of longitudinal data using linear mixed models, while Chapter 2 presents the concepts and models from survival analysis used in the thesis. In Chapter 3 the elements introduced in the first two chapters are combined to define the Joint Model for longitudinal and survival data, following the approach proposed by Rizopoulos (2012). Chapter 4 introduces the main ideas behind dynamic prediction in the Joint Model context. In Chapter 5 the relevant notions of predictive capability are introduced in relation to the AUC and NRI indexes: these two indexes are first presented for a binary outcome, it is then shown how they change when the outcome is the time to an event of interest, and finally the time-dependent AUC and NRI are formulated in the Joint Model context. The case study is presented in Chapter 6, along with the strengths and limitations of using the Joint Model in clinical studies.
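
    A compact way to summarize the shared-parameter formulation referred to above, written here in conventional notation following Rizopoulos (2012) rather than copied from the thesis: a linear mixed submodel describes the error-free biomarker trajectory m_i(t), and a relative-risk submodel lets that trajectory shift the hazard through an association parameter α.

```latex
% Longitudinal submodel: observed biomarker = true trajectory + measurement error.
% Survival submodel: the hazard at time t depends on the current value m_i(t).
\begin{align}
  y_i(t) &= m_i(t) + \varepsilon_i(t)
          = x_i^{\top}(t)\,\beta + z_i^{\top}(t)\,b_i + \varepsilon_i(t),
  \qquad \varepsilon_i(t) \sim N(0, \sigma^2), \\
  h_i(t) &= h_0(t)\, \exp\!\bigl\{ \gamma^{\top} w_i + \alpha\, m_i(t) \bigr\},
\end{align}
```

    where b_i are subject-specific random effects and w_i baseline covariates; dynamic predictions are obtained by updating the distribution of b_i each time a new measurement of y_i is recorded, which is what allows the survival probabilities to be revised at every follow-up visit.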

    Algorithmic Analysis Techniques for Molecular Imaging

    This study addresses image processing techniques for two medical imaging modalities, Positron Emission Tomography (PET) and Magnetic Resonance Imaging (MRI), which can be used to study human body function and anatomy in a non-invasive manner. In PET, the so-called Partial Volume Effect (PVE) is caused by the low spatial resolution of the modality. The efficiency of a set of PVE-correction methods is evaluated in the present study; these methods use information about tissue borders acquired with MRI. In addition, a novel method is proposed for MRI brain image segmentation. A standard approach in brain MRI segmentation is to use spatial prior information. While this works for adults and healthy neonates, the large anatomical variation in premature infants precludes its direct application; the proposed technique can be applied to both healthy and non-healthy premature infant brain MR images. Diffusion Weighted Imaging (DWI) is an MRI-based technique that can be used to create images for measuring physiological properties of cells at the structural level. We optimise the scanning parameters of DWI so that the required acquisition time can be reduced while still maintaining good image quality. In the present work, PVE-correction methods and physiological DWI models are also evaluated in terms of the repeatability of their results, which gives information on the reliability of the measures they produce. The evaluations are done using physical phantom objects, correlation measurements against expert segmentations, computer simulations with realistic noise modelling, and repeated measurements conducted on real patients. In PET, the applicability and selection of a suitable partial volume correction method was found to depend on the target application. For MRI, the data-driven segmentation offers an alternative when using a spatial prior is not feasible. For DWI, the distribution of b-values turns out to be a central factor affecting the time-quality ratio of the acquisition; an optimal b-value distribution was determined, which helps to shorten the imaging time without hampering diagnostic accuracy.
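
    As a minimal illustration of how a chosen b-value distribution enters DWI model fitting, the sketch below fits the simplest mono-exponential signal model S(b) = S0·exp(-b·ADC) by log-linear least squares; the b-values and tissue parameters are invented, and the physiological DWI models evaluated in the thesis are richer than this.

```python
import numpy as np

# Mono-exponential DWI decay: S(b) = S0 * exp(-b * ADC). This only illustrates
# how the b-value distribution feeds into parameter estimation.
b_values = np.array([0, 200, 500, 800, 1000], dtype=float)   # s/mm^2, example distribution
true_S0, true_adc = 1000.0, 1.1e-3                           # hypothetical tissue values
rng = np.random.default_rng(2)
signal = true_S0 * np.exp(-b_values * true_adc) + rng.normal(0.0, 5.0, b_values.size)

# Log-linear least-squares fit of log(S0) and ADC:
# log S(b) = log S0 - b * ADC.
A = np.column_stack([np.ones_like(b_values), -b_values])
coef, *_ = np.linalg.lstsq(A, np.log(np.clip(signal, 1e-6, None)), rcond=None)
print("S0 ~", np.exp(coef[0]), "ADC ~", coef[1])
```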

    Distinguishing cause from effect using observational data: methods and benchmarks

    The discovery of causal relationships from purely observational data is a fundamental problem in science. The most elementary form of such a causal discovery problem is to decide whether X causes Y or, alternatively, Y causes X, given joint observations of two variables X, Y. An example is to decide whether altitude causes temperature, or vice versa, given only joint measurements of both variables. Even under the simplifying assumptions of no confounding, no feedback loops, and no selection bias, such bivariate causal discovery problems are challenging. Nevertheless, several approaches for addressing these problems have been proposed in recent years. We review two families of such methods: Additive Noise Methods (ANM) and Information Geometric Causal Inference (IGCI). We present the benchmark CauseEffectPairs, which consists of data for 100 different cause-effect pairs selected from 37 datasets from various domains (e.g., meteorology, biology, medicine, engineering, and economics), and motivate our decisions regarding the "ground truth" causal directions of all pairs. We evaluate the performance of several bivariate causal discovery methods on these real-world benchmark data and, in addition, on artificially simulated data. Our empirical results on real-world data indicate that certain methods are indeed able to distinguish cause from effect using only purely observational data, although more benchmark data would be needed to obtain statistically significant conclusions. One of the best performing methods overall is the additive-noise method originally proposed by Hoyer et al. (2009), which obtains an accuracy of 63 ± 10% and an AUC of 0.74 ± 0.05 on the real-world benchmark. As the main theoretical contribution of this work, we prove the consistency of that method.
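
    The additive-noise idea behind the best-performing method above can be sketched compactly: regress each variable on the other with a flexible regressor and prefer the direction in which the residuals look more independent of the putative cause. The snippet below is an illustrative reimplementation using kernel ridge regression and a simple biased HSIC statistic, not the authors' code (the original method uses Gaussian-process regression and a calibrated HSIC independence test).

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def rbf_gram(v, sigma=None):
    # RBF Gram matrix of a 1-D sample, bandwidth from the median heuristic.
    d2 = (v[:, None] - v[None, :]) ** 2
    if sigma is None:
        sigma = np.sqrt(0.5 * np.median(d2[d2 > 0]))
    return np.exp(-d2 / (2.0 * sigma ** 2))

def hsic(a, b):
    # Simple biased HSIC statistic: larger values indicate stronger dependence.
    n = a.size
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(rbf_gram(a) @ H @ rbf_gram(b) @ H) / (n - 1) ** 2

def fit_residuals(inp, out):
    # Nonparametric regression of out on inp; kernel ridge stands in for the
    # Gaussian-process regression used in the published method.
    kr = KernelRidge(kernel="rbf", alpha=0.1, gamma=1.0)
    return out - kr.fit(inp.reshape(-1, 1), out).predict(inp.reshape(-1, 1))

def anm_direction(x, y):
    # Prefer the direction whose residuals are less dependent on the input.
    score_xy = hsic(fit_residuals(x, y), x)
    score_yx = hsic(fit_residuals(y, x), y)
    return "X -> Y" if score_xy < score_yx else "Y -> X"

# Toy example: Y is a nonlinear function of X plus independent noise.
rng = np.random.default_rng(3)
x = rng.uniform(-2.0, 2.0, 300)
y = np.tanh(2.0 * x) + 0.1 * rng.standard_normal(300)
print(anm_direction(x, y))   # expected direction: X -> Y
```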