    A location-scale joint model for studying the link between the time-dependent subject-specific variability of blood pressure and competing events

    Given the high incidence of cardiovascular and cerebrovascular diseases (CVD) and their association with morbidity and mortality, their prevention is a major public health issue. A high level of blood pressure is a well-known risk factor for these events, and a growing number of studies suggest that blood pressure variability may also be an independent risk factor. However, these studies suffer from significant methodological weaknesses. In this work we propose a new location-scale joint model for the repeated measures of a marker and competing events. The joint model combines a mixed model, in which the subject-specific and time-dependent residual variance is modeled through random effects, with cause-specific proportional intensity models for the competing events. The risk of an event may depend simultaneously on the current value of the variance as well as on the current value and the current slope of the marker trajectory. The model is estimated by maximizing the likelihood with the Marquardt-Levenberg algorithm; the estimation procedure is implemented in an R package and validated through a simulation study. We apply the model to study the association between blood pressure variability and the risks of CVD and of death from other causes. Using data from a large clinical trial on the secondary prevention of stroke, we find that the current individual variability of blood pressure is associated with the risks of CVD and death. Moreover, a comparison with a model without heterogeneous variance shows the importance of accounting for this variability, both for goodness of fit and for dynamic predictions.
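    As a minimal sketch of the model structure described above (the notation is illustrative; the paper's exact parameterisation may differ), the longitudinal, variance, and event submodels can be written as

    ```latex
    Y_{ij} = \tilde{Y}_i(t_{ij}) + \epsilon_{ij},
    \qquad
    \epsilon_{ij} \sim \mathcal{N}\big(0, \sigma_i^2(t_{ij})\big),
    \]
    \[
    \log \sigma_i(t) = Z_i(t)^\top \mu + W_i(t)^\top \tau_i,
    \]
    \[
    \lambda_{ik}(t) = \lambda_{0k}(t)\,
    \exp\big\{\alpha_{1k}\,\tilde{Y}_i(t) + \alpha_{2k}\,\tilde{Y}_i'(t) + \alpha_{3k}\,\sigma_i(t)\big\},
    \qquad k = 1, \dots, K,
    ```

    where \tilde{Y}_i is the subject-specific mean trajectory from the mixed model, \tau_i are the random effects driving the time-dependent residual variance, and \lambda_{ik} is the cause-specific intensity for competing event k, allowed to depend on the current value, current slope, and current variability of the marker.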

    Bounce-averaged drifts: Equivalent definitions, numerical implementations, and example cases

    In this article we provide various analytical and numerical methods for calculating the average drift of magnetically trapped particles across field lines in complex geometries, and we compare these methods against each other. To evaluate bounce integrals, we introduce a generalisation of the trapezoidal rule that circumvents integrable singularities. We contrast this method with more standard quadrature methods in a parabolic magnetic well and find that the computational cost is significantly lower for the trapezoidal method, though at the cost of accuracy. With numerical routines in place, we next investigate particles that cross the computational boundary and find that the results for such particles can depend on the specific implementation of the calculation. Finally, we investigate the bounce-averaged drifts in the optimized stellarator NCSX. From the drifts, one can readily deduce important properties, such as which subset of particles can drive trapped-particle modes and in which regions radial drifts are most deleterious to the stability of such modes.
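    The abstract does not spell out the generalised rule, but the standard difficulty it addresses is easy to illustrate: bounce integrals carry a factor 1/sqrt(h(l)) whose radicand h = 1 - λB vanishes at the bounce points. Below is a minimal sketch of one way to build a trapezoid-like rule that absorbs this singularity, by treating f and h as linear on each cell so that the cell integral has a closed form; the function names and the per-cell linearity assumption are this sketch's choices, not necessarily the paper's exact scheme.

    ```python
    import numpy as np

    def singular_trapz(f, h, x):
        """Integrate f(x) / sqrt(h(x)) over the grid x, where h >= 0 may
        vanish linearly at the endpoints (an integrable singularity).

        On each cell, f and h are approximated as linear; the substitution
        u = sqrt(h) then yields a closed-form cell integral, so no node
        ever lands on the singular bounce points."""
        total = 0.0
        for x0, x1, f0, f1, h0, h1 in zip(x[:-1], x[1:], f[:-1], f[1:], h[:-1], h[1:]):
            dx = x1 - x0
            d = (h1 - h0) / dx                 # slope of h on this cell
            if abs(d) < 1e-14:                 # h essentially constant: plain trapezoid
                total += 0.5 * (f0 + f1) / np.sqrt(h0) * dx
                continue
            b = (f1 - f0) / dx                 # f(x) = a + b x on this cell
            a = f0 - b * x0
            c = h0 - d * x0                    # h(x) = c + d x on this cell
            u0, u1 = np.sqrt(max(h0, 0.0)), np.sqrt(max(h1, 0.0))
            # With u = sqrt(c + d x):  dx = (2 u / d) du, so
            # ∫ (a + b x) / sqrt(h) dx = (2/d) [ (a - b c/d) u + (b/d) u^3 / 3 ]
            total += (2.0 / d) * ((a - b * c / d) * (u1 - u0)
                                  + (b / d) * (u1**3 - u0**3) / 3.0)
        return total

    # Parabolic well B = 1 + l^2 with pitch lambda = 1/2: h = 1 - B/2 vanishes
    # at the bounce points l = ±1; the exact integral is pi * sqrt(2).
    l = np.linspace(-1.0, 1.0, 201)
    print(singular_trapz(np.ones_like(l), 0.5 * (1.0 - l**2), l))  # ≈ 4.4429
    ```

    The point of the closed-form cell integral is that the 1/sqrt singularity is integrated exactly within each cell, so accuracy near the bounce points does not collapse the way it does for naive quadrature.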

    Globally Adaptive Control Variate for Robust Numerical Integration

    Many methods in computer graphics require the integration of functions on low- to middle-dimensional spaces. However, no available method can handle all possible integrands accurately and rapidly. This paper presents a robust numerical integration method able to handle arbitrary non-singular scalar or vector-valued functions defined on low- to middle-dimensional spaces. Our method combines a control variate, globally adaptive subdivision, and Monte-Carlo estimation to achieve fast and accurate computation of any non-singular integral. The runtime is linear with respect to the standard deviation, while standard Monte-Carlo methods are quadratic. We additionally show through numerical tests that our method is extremely stable in both computation time and memory footprint, assessing its robustness. We demonstrate our method on a participating-media voxelization application, which requires the computation of several million integrals for complex media.
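    A minimal 1D sketch of the general pattern the abstract describes: a control variate g with a known integral (here the linear interpolant of f on each region), Monte-Carlo estimation of the residual f - g, and a global priority queue that always subdivides the region with the largest estimated error. The error heuristic and names below are this sketch's choices, not the paper's.

    ```python
    import heapq
    import numpy as np

    rng = np.random.default_rng(0)

    def integrate_cv(f, lo, hi, n_regions=64, n_samples=32):
        """1D sketch: linear control variate + globally adaptive
        subdivision + Monte-Carlo estimation of the residual."""
        def make_region(a, b):
            fa, fb = f(a), f(b)
            g_int = 0.5 * (fa + fb) * (b - a)           # known integral of g
            xs = rng.uniform(a, b, n_samples)
            g_xs = fa + (fb - fa) * (xs - a) / (b - a)  # g = linear interpolant of f
            resid = f(xs) - g_xs                        # f - g has lower variance than f
            est = g_int + resid.mean() * (b - a)
            err = resid.std(ddof=1) * (b - a) / np.sqrt(n_samples)
            return (-err, a, b, est)                    # max-heap keyed on estimated error
        regions = [make_region(lo, hi)]
        for _ in range(n_regions - 1):
            _, a, b, _ = heapq.heappop(regions)         # split the worst region anywhere
            m = 0.5 * (a + b)
            heapq.heappush(regions, make_region(a, m))
            heapq.heappush(regions, make_region(m, b))
        return sum(r[3] for r in regions)

    print(integrate_cv(np.sin, 0.0, np.pi))             # exact value: 2
    ```

    The "globally adaptive" part is the single heap over all regions: refinement effort flows to wherever the residual variance, and hence the Monte-Carlo error, is largest.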

    A Simultaneous Numerical Integration Routine for the Fast Calculation of Similar Integrations

    In this paper, a fast, simultaneous integration routine tailored to computing the results of multiple similar numerical integrations is introduced. In the routine, the same nodes are used when integrating different functions along the same integration path. Several examples demonstrate that if the integrands of interest are similar on the integration path, then reusing the same nodes decreases the computational cost dramatically. While the method is introduced by updating the popular Gauss-Kronrod quadrature rule, the same steps can be applied to any other numerical integration rule.
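    A minimal sketch of the node-reuse idea, using NumPy's Gauss-Legendre nodes for self-containment (Gauss-Kronrod nodes are not shipped with NumPy); the path, rule order, and integrands below are illustrative:

    ```python
    import numpy as np

    # Compute the nodes and weights once for the integration path [a, b]...
    a, b = 0.0, 2.0
    x, w = np.polynomial.legendre.leggauss(32)   # reference nodes on [-1, 1]
    t = 0.5 * (b - a) * x + 0.5 * (a + b)        # nodes mapped onto [a, b]
    w = 0.5 * (b - a) * w                        # weights scaled accordingly

    # ...then reuse the same evaluation points for every similar integrand:
    integrands = [np.sin, np.cos, lambda s: np.exp(-s) * np.sin(s)]
    results = [w @ g(t) for g in integrands]     # one dot product per integral
    print(results)
    ```

    Because the nodes are fixed once, each additional integral costs only one batch of function evaluations and a dot product, which is where the savings for families of similar integrands come from.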

    High-Order Numerical Integration on Domains Bounded by Intersecting Level Sets

    We present a high-order method that provides numerical integration on volumes, surfaces, and lines defined implicitly by two smooth intersecting level sets. To approximate the integrals, the method maps quadrature rules defined on hypercubes to the curved domains of the integrals. This enables the numerical integration of a wide range of integrands, since integration on hypercubes is a well-known problem. The mappings are constructed by treating the isocontours of the level sets as graphs of height functions. Numerical experiments with smooth integrands indicate a high order of convergence for transformed Gauss quadrature rules on domains defined by polynomial, rational, and trigonometric level sets. We show that the approach combines readily with adaptive quadrature methods. Moreover, we apply the approach to numerically integrate on difficult geometries without requiring a low-order fallback method.
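    A minimal 2D sketch of the height-function construction for a single smooth level set (the paper treats two intersecting level sets and higher-order mappings; the SciPy `brentq` root-finder and the names below are this sketch's choices, not the paper's):

    ```python
    import numpy as np
    from scipy.optimize import brentq

    def quad_below_levelset(f, phi, x0, x1, y0, y1, n=16):
        """Integrate f over {(x, y) in [x0,x1] x [y0,y1] : phi(x, y) < 0},
        assuming the zero isocontour is the graph of a height function
        y = h(x) with phi < 0 below it."""
        xg, xw = np.polynomial.legendre.leggauss(n)
        total = 0.0
        for xi, wi in zip(xg, xw):
            x = 0.5 * (x1 - x0) * xi + 0.5 * (x0 + x1)
            h = brentq(lambda y: phi(x, y), y0, y1)   # isocontour height at this x
            y = 0.5 * (h - y0) * xg + 0.5 * (y0 + h)  # 1D rule mapped onto [y0, h(x)]
            inner = 0.5 * (h - y0) * np.dot(xw, f(x, y))
            total += 0.5 * (x1 - x0) * wi * inner
        return total

    # Quarter disc: phi = x^2 + y^2 - 1 on [0, 1]^2; the exact area is pi/4.
    print(quad_below_levelset(lambda x, y: np.ones_like(y),
                              lambda x, y: x**2 + y**2 - 1.0,
                              0.0, 1.0, 0.0, 1.0))    # ≈ 0.7854
    ```

    The key move is the same as in the paper's description: the curved domain is pulled back to a tensor-product rule on a reference square by letting the level set bound one coordinate as a function of the other.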

    The Cross-entropy of Piecewise Linear Probability Density Functions

    The cross-entropy and its related terms from information theory (e.g. entropy, Kullback–Leibler divergence) are used throughout artificial intelligence and machine learning. This includes many of the major successes, both current and historic, where they commonly appear as the natural objective of an optimisation procedure for learning model parameters or their distributions. This paper presents a novel derivation of the differential cross-entropy between two 1D probability density functions represented as piecewise linear functions. Implementation challenges are resolved and experimental validation is presented, including a rigorous analysis of accuracy and a demonstration of using the presented result as the objective of a neural network. Previously, the cross-entropy would need to be approximated via numerical integration, or equivalent, for which calculating gradients is impractical. Machine learning models with high parameter counts are optimised primarily with gradients, so if piecewise linear density representations are to be used then the presented analytic solution is essential. This paper contributes the necessary theory for the practical optimisation of information-theoretic objectives when dealing with piecewise linear distributions directly. Removing this limitation expands the design space for future algorithms.
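    A minimal sketch of what such an analytic result looks like for two piecewise linear densities on a shared knot vector: on each segment both densities are linear, and the segment integral ∫ (a + bt) ln(c + dt) dt has an elementary antiderivative. The paper's own derivation and edge-case handling may differ; the names and the shared-knots assumption below are this sketch's choices.

    ```python
    import numpy as np

    def _xlnx(u):
        return 0.0 if u == 0.0 else u * np.log(u)   # continuous extension at u = 0

    def _seg(a, b, c, d):
        """Closed form of the segment integral I = ∫_0^1 (a + b t) ln(c + d t) dt,
        assuming c >= 0 and c + d >= 0."""
        if abs(d) < 1e-12 * max(abs(c), 1.0):       # q essentially constant here
            return (a + 0.5 * b) * np.log(c)
        # Substituting u = c + d t gives the antiderivative F below, I = (F(c+d) - F(c)) / d.
        F = lambda u: (a - b * c / d) * (_xlnx(u) - u) \
            + (b / d) * (0.5 * u * _xlnx(u) - 0.25 * u * u)
        return (F(c + d) - F(c)) / d

    def cross_entropy_pwl(x, p, q):
        """H(p, q) = -∫ p(x) ln q(x) dx for piecewise linear densities p, q on the
        shared knots x (q must not vanish on an interval where p > 0)."""
        H = 0.0
        for x0, x1, p0, p1, q0, q1 in zip(x[:-1], x[1:], p[:-1], p[1:], q[:-1], q[1:]):
            H -= (x1 - x0) * _seg(p0, p1 - p0, q0, q1 - q0)
        return H

    # Sanity check: the triangle density on [0, 2] has differential entropy 1/2 nat.
    x = np.array([0.0, 1.0, 2.0]); p = np.array([0.0, 1.0, 0.0])
    print(cross_entropy_pwl(x, p, p))               # H(p, p) = entropy ≈ 0.5
    ```

    Because every step is elementary, the whole expression is differentiable in the knot values, which is what makes gradient-based optimisation of such objectives practical.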

    A hyper-reduction method using adaptivity to cut the assembly costs of reduced order models

    At every iteration or timestep of the online phase of some reduced-order modelling schemes, large linear systems must be assembled and then projected onto a reduced-order basis of small dimension. The projected small linear systems are cheap to solve, but assembly and projection are now the dominant computational cost. In this paper we introduce a new hyper-reduction strategy called reduced assembly (RA) that drastically cuts these costs. RA consists of a triangulation adaptation algorithm that uses a local error indicator to construct a reduced assembly triangulation specially suited to the reduced-order basis. Crucially, this reduced assembly triangulation has fewer cells than the original one, resulting in lower assembly and projection costs. We demonstrate the efficacy of RA on a Galerkin-POD type reduced-order model (RAPOD). We show performance increases of up to five times over the baseline Galerkin-POD method on a non-linear reaction-diffusion problem solved with a semi-implicit time-stepping scheme, and up to seven times for a 3D hyperelasticity problem solved with a continuation Newton-Raphson algorithm. The examples are implemented in the DOLFIN finite element solver using PETSc and SLEPc for linear algebra. Full code and data files to produce the results in this paper are provided as supplementary material.
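    RA itself, with its local error indicator and triangulation adaptation, is specific to the paper, but the cost structure it targets is easy to show schematically: assembly is a sum of per-cell contributions, and the POD projection can be applied cell-locally before summation. A hypothetical sketch (the `local_matrix` callback, weights, and names are illustrative, not the paper's implementation):

    ```python
    import numpy as np

    def assemble_projected(cells, weights, local_matrix, V):
        """Assemble the projected operator V^T A V, where the global matrix
        A = sum_c w_c * P_c^T A_c P_c is a weighted sum of local cell
        contributions scattered by each cell's dof map P_c.

        Passing the full triangulation with unit weights reproduces the
        baseline Galerkin-POD projection; a reduced assembly scheme instead
        passes far fewer cells, so this loop, which dominates the online
        cost, becomes much shorter."""
        r = V.shape[1]
        A_red = np.zeros((r, r))
        for c, w in zip(cells, weights):
            dofs, A_c = local_matrix(c)      # local dof indices and local matrix
            Vc = V[dofs, :]                  # POD basis restricted to this cell
            A_red += w * (Vc.T @ A_c @ Vc)   # project locally, then accumulate
        return A_red
    ```

    The identity V^T P_c^T A_c P_c V = Vc^T A_c Vc is what lets the projection be done cell by cell; the paper's contribution is then the construction of a small, adapted cell set so that far fewer terms enter the sum.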