
    A proof of concept study for machine learning application to stenosis detection

    Get PDF
    This proof of concept (PoC) assesses the ability of machine learning (ML) classifiers to predict the presence of a stenosis in a three-vessel arterial system consisting of the abdominal aorta bifurcating into the two common iliacs. A virtual patient database (VPD) is created using a one-dimensional pulse wave propagation model of haemodynamics. Four different ML methods are used to train and test a series of classifiers—both binary and multiclass—to distinguish between healthy and unhealthy virtual patients (VPs) using different combinations of pressure and flow-rate measurements. It is found that the ML classifiers achieve specificities larger than 80% and sensitivities ranging from 50% to 75%. The most balanced classifier also achieves an area under the receiver operating characteristic curve of 0.75, outperforming approximately 20 methods used in clinical practice and thus placing the method as moderately accurate. Other important observations from this study are that (i) a few measurements can provide classification accuracies similar to those obtained when more, or all, of the measurements are used; (ii) some measurements are more informative than others for classification; and (iii) a modification of standard methods can result in detection of not only the presence of a stenosis, but also of the stenosed vessel.
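
The sensitivity and specificity figures quoted above follow directly from a classifier's confusion counts; a minimal sketch (with synthetic placeholder labels, not study data) is:

```python
# Illustrative computation of sensitivity and specificity for a binary
# stenosis classifier (1 = stenosis present, 0 = healthy).
# The label/prediction lists below are synthetic, not VPD data.

def sensitivity_specificity(y_true, y_pred):
    """Return (sensitivity, specificity) for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sens = tp / (tp + fn) if (tp + fn) else float("nan")
    spec = tn / (tn + fp) if (tn + fp) else float("nan")
    return sens, spec

y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
sens, spec = sensitivity_specificity(y_true, y_pred)
print(sens, spec)  # 0.75 and ~0.833, in the range the abstract reports
```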

    A Bayesian constitutive model selection framework for biaxial mechanical testing of planar soft tissues: Application to porcine aortic valves

    Get PDF
    A variety of constitutive models have been developed for soft tissue mechanics. However, there is no established criterion to select a suitable model for a specific application. Although the model that best fits the experimental data can be deemed the most suitable, this practice often proves insufficient given the inter-sample variability of experimental observations. Herein, we present a Bayesian approach to calculate the relative probabilities of constitutive models based on biaxial mechanical testing of tissue samples. Forty-six samples of porcine aortic valve tissue were tested using a biaxial stretching setup. For each sample, seven ratios of stresses along and perpendicular to the fiber direction were applied. The probabilities of eight invariant-based constitutive models were calculated based on the experimental data using the proposed model selection framework. The calculated probabilities showed that, out of the considered models and based on the information available through the utilized experimental dataset, the May–Newman model was the most probable model for the porcine aortic valve data. When the samples were grouped into different cusp types, the May–Newman model remained the most probable for the left- and right-coronary cusps, whereas for non-coronary cusps two models were found to be equally probable: the Lee–Sacks model and the May–Newman model. This difference between cusp types was found to be associated with the first principal component analysis (PCA) mode, where this mode’s amplitudes for the non-coronary and right-coronary cusps were found to be significantly different. Our results show that a PCA-based statistical model can capture significant variations in the mechanical properties of soft tissues. The presented framework is applicable to any tissue type, and has the potential to provide a structured and rational way of making simulations population-based.
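
In Bayesian model selection of this kind, relative model probabilities are typically obtained by normalising per-model evidences under an equal prior over models. A hedged sketch follows; the model names are taken from the abstract, but the log-evidence values are purely illustrative:

```python
# Sketch: converting per-model log-evidences into posterior model
# probabilities under an equal prior, via the log-sum-exp trick.
# The numerical log-evidence values below are hypothetical.

import math

def model_probabilities(log_evidence):
    """Normalise a dict of log-evidences into model probabilities."""
    m = max(log_evidence.values())           # shift for numerical stability
    weights = {k: math.exp(v - m) for k, v in log_evidence.items()}
    total = sum(weights.values())
    return {k: w / total for k, w in weights.items()}

log_ev = {"May-Newman": -120.3, "Lee-Sacks": -121.1, "neo-Hookean": -140.8}
probs = model_probabilities(log_ev)
print(max(probs, key=probs.get))  # May-Newman (highest log-evidence here)
```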

    More on Entanglement and Chaos near Critical Point in Strongly Coupled Gauge Theory

    Full text link
    We perform a holographic study of the high- and low-temperature behaviours of logarithmic negativity (LN) and the entanglement wedge cross section (EWCS) in a large-$N$ strongly coupled thermal field theory with a critical point, having a well-defined gravity dual known as the 1RC black hole. The critical point is defined via the $\xi \to 2$ limit, where $\xi$ is a dimensionless parameter proportional to the charge of the 1RC black hole. We show that the logarithmic negativity in the low and high thermal limits is enhanced with increasing $\xi$. We analytically compute the EWCS in the low and high thermal limits and find agreement with previously reported numerical results. We holographically explore the correlation between two identical copies of the thermal field theory with a critical point forming a thermofield double (TFD) state by computing the thermo-mutual information (TMI). TMI shows an increasing behaviour with respect to the width of the boundary region. Further, we analyze the impact of an early perturbation on the field theory by considering a shock wave that grows exponentially in the dual eternal 1RC black hole, and then estimate the degradation of TMI. The rate of this disruption of TMI slows down as the critical parameter $\xi$ takes higher values. Comment: 41 pages, 13 figures

    Beyond Newton: A New Root-Finding Fixed-Point Iteration for Nonlinear Equations

    Get PDF
    Finding roots of equations is at the heart of most computational science. A well-known and widely used iterative algorithm is Newton’s method. However, its convergence depends heavily on the initial guess, with poor choices often leading to slow convergence or even divergence. In this short note, we seek to enlarge the basin of attraction of the classical Newton’s method. The key idea is to develop a relatively simple multiplicative transform of the original equations, which leads to a reduction in nonlinearity, thereby alleviating the limitation of Newton’s method. Based on this idea, we derive a new class of iterative methods and rediscover Halley’s method as the limit case. We present the application of these methods to several mathematical functions (real, complex, and vector equations). Across all examples, our numerical experiments suggest that the new methods converge for a significantly wider range of initial guesses. For scalar equations, the increase in computational cost per iteration is minimal. For vector functions, more extensive analysis is needed to compare the increase in cost per iteration against the improvement in convergence for specific problems.
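
For reference, the two baseline iterations the abstract builds on, Newton's method and its limit case Halley's method, can be sketched for a scalar equation as follows (the test function f(x) = x³ − 2 is illustrative, not taken from the paper):

```python
# Newton's and Halley's iterations for a scalar equation f(x) = 0,
# with analytically supplied derivatives. Illustrative sketch only.

def newton(f, df, x, iters=50, tol=1e-12):
    for _ in range(iters):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / df(x)                     # Newton step
    return x

def halley(f, df, d2f, x, iters=50, tol=1e-12):
    for _ in range(iters):
        fx = f(x)
        if abs(fx) < tol:
            break
        dfx, d2fx = df(x), d2f(x)
        # Halley step: x - 2 f f' / (2 f'^2 - f f'')
        x -= 2 * fx * dfx / (2 * dfx**2 - fx * d2fx)
    return x

f = lambda x: x**3 - 2
df = lambda x: 3 * x**2
d2f = lambda x: 6 * x

root = 2 ** (1 / 3)
print(abs(newton(f, df, 1.0) - root) < 1e-9)        # True
print(abs(halley(f, df, d2f, 1.0) - root) < 1e-9)   # True
```

Halley's method uses second-derivative information for cubic local convergence, which matches its role above as the limit case of the proposed family.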

    Machine learning for detection of stenoses and aneurysms: application in a physiologically realistic virtual patient database

    Get PDF
    This study presents an application of machine learning (ML) methods for detecting the presence of stenoses and aneurysms in the human arterial system. Four major forms of arterial disease—carotid artery stenosis (CAS), subclavian artery stenosis (SAS), peripheral arterial disease (PAD), and abdominal aortic aneurysms (AAA)—are considered. The ML methods are trained and tested on a physiologically realistic virtual patient database (VPD) containing 28,868 healthy subjects, adapted from the authors' previous work and augmented to include disease. It is found that the tree-based methods of Random Forest and Gradient Boosting outperform other approaches. The performance of the ML methods is quantified through the F1 score and computation of sensitivities and specificities. When using six haemodynamic measurements (pressure in the common carotid, brachial, and radial arteries; and flow-rate in the common carotid, brachial, and femoral arteries), it is found that maximum F1 scores larger than 0.9 are achieved for CAS and PAD, larger than 0.85 for SAS, and larger than 0.98 for both low- and high-severity AAAs. Corresponding sensitivities and specificities are larger than 90% for CAS and PAD, larger than 85% for SAS, and larger than 98% for both low- and high-severity AAAs. When reducing the number of measurements, performance is degraded by less than 5% when three measurements are used, and by less than 10% when only two measurements are used for classification. For AAA, it is shown that F1 scores larger than 0.85 and corresponding sensitivities and specificities larger than 85% are achievable when using only a single measurement. The results are encouraging for pursuing AAA monitoring and screening through wearable devices which can reliably measure pressure or flow-rates.
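
The F1 score used throughout the abstract is the harmonic mean of precision and recall; a minimal sketch (with synthetic placeholder labels, not VPD data) is:

```python
# Minimal F1-score computation for the binary disease/no-disease
# setting described above. Inputs are synthetic placeholders.

def f1_score(y_true, y_pred):
    """F1 = harmonic mean of precision and recall (1 = disease)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

print(f1_score([1, 1, 1, 0, 0], [1, 1, 0, 0, 0]))  # 0.8
```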

    Squeezing function corresponding to polydisk

    Full text link
    In the present article, we define the squeezing function corresponding to the polydisk and study its properties. We investigate the relationship between the squeezing function and the squeezing function corresponding to the polydisk. Comment: Published in Complex Analysis and its Synergies
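
For context, the polydisk variant modifies the classical squeezing function, which (in standard notation, possibly differing slightly from the article's) is defined for a bounded domain $D \subset \mathbb{C}^n$ and a point $z \in D$ by

```latex
% Classical squeezing function of a bounded domain D at z:
s_D(z) \;=\; \sup_{f}\,\bigl\{\, r > 0 \;:\; \mathbb{B}^n(0,r) \subseteq f(D) \,\bigr\},
```

where the supremum runs over injective holomorphic maps $f \colon D \to \mathbb{B}^n$ into the unit ball with $f(z) = 0$; the polydisk version replaces the unit ball $\mathbb{B}^n$ by the unit polydisk as the target domain.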

    Squeezing function for d-balanced domains

    Full text link
    We introduce the notion of the squeezing function corresponding to d-balanced domains, motivated by the concept of the generalized squeezing function given by Rong and Yang. In this work we study some of its properties and its relation with the Fridman invariant. Comment: 11 pages, comments are welcome

    A note on squeezing function and its generalizations

    Full text link
    This note investigates the relation between the squeezing function and its generalizations. Using the relation obtained, we present an alternate method to find the expression of the generalized squeezing function of the unit ball corresponding to the generalized complex ellipsoids. Comment: 9 pages

    Quantifying the efficacy of voltage protocols in characterising ion channel kinetics: A novel information‐theoretic approach

    Get PDF
    Voltage-clamp experiments are commonly utilised to characterise cellular ion channel kinetics. In these experiments, cells are stimulated using a known time-varying voltage, referred to as the voltage protocol, and the resulting cellular response, typically in the form of current, is measured. Parameters of models that describe ion channel kinetics are then estimated by solving an inverse problem which aims to minimise the discrepancy between the predicted response of the model and the actual measured cell response. In this paper, a novel framework to evaluate the information content of voltage-clamp protocols in relation to ion channel model parameters is presented. Additional quantitative information metrics that allow for comparisons among various voltage protocols are proposed. These metrics offer a foundation for future optimal design frameworks to devise novel, information-rich protocols. The efficacy of the proposed framework is evidenced through the analysis of seven voltage protocols from the literature. By comparing known numerical results for inverse problems using these protocols with the information-theoretic metrics, the proposed approach is validated.
The essential steps of the framework are: (i) generate random samples of the parameters from chosen prior distributions; (ii) run the model to generate model output (current) for all samples; (iii) construct reduced-dimensional representations of the time-varying current output using Proper Orthogonal Decomposition (POD); (iv) estimate information-theoretic metrics such as mutual information, entropy equivalent variance, and conditional mutual information using non-parametric methods; (v) interpret the metrics; for example, a higher mutual information between a parameter and the current output suggests the protocol yields greater information about that parameter, resulting in improved identifiability; and (vi) integrate the information-theoretic metrics into a single quantitative criterion, encapsulating the protocol’s efficacy in estimating model parameters.
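
The mutual-information estimation in step (iv) can be illustrated with a simple histogram (plug-in) estimator; the paper uses non-parametric estimators, so this binned version and its synthetic data are only a sketch of the idea, not the authors' method:

```python
# Hedged sketch of step (iv): a plug-in (histogram) estimate of the
# mutual information I(theta; c) between a model parameter and a scalar
# output coefficient (e.g. one POD amplitude). Data below are synthetic.

import math
import random

def mutual_information(x, y, bins=8):
    """Plug-in MI estimate (in nats) from equal-width 2-D histograms."""
    n = len(x)
    def bin_index(v, lo, hi):
        i = int((v - lo) / (hi - lo) * bins)
        return min(i, bins - 1)              # clamp the max value's bin
    xlo, xhi, ylo, yhi = min(x), max(x), min(y), max(y)
    joint = {}
    for xi, yi in zip(x, y):
        key = (bin_index(xi, xlo, xhi), bin_index(yi, ylo, yhi))
        joint[key] = joint.get(key, 0) + 1
    px, py = {}, {}
    for (i, j), c in joint.items():
        px[i] = px.get(i, 0) + c
        py[j] = py.get(j, 0) + c
    mi = 0.0
    for (i, j), c in joint.items():
        pij = c / n
        mi += pij * math.log(pij / ((px[i] / n) * (py[j] / n)))
    return mi

random.seed(0)
theta = [random.gauss(0, 1) for _ in range(5000)]          # parameter samples
c = [t + random.gauss(0, 0.3) for t in theta]              # informative output
c_ind = [random.gauss(0, 1) for _ in range(5000)]          # uninformative output
print(mutual_information(theta, c) > mutual_information(theta, c_ind))  # True
```

As in step (v), the informative output carries a visibly larger MI with the parameter than the independent one, which is the identifiability signal the framework exploits.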