Computing Individual Risks based on Family History in Genetic Disease in the Presence of Competing Risks
When considering a genetic disease with variable age at onset (e.g., diabetes,
familial amyloid neuropathy, cancers, etc.), computing the individual risk of
the disease based on family history (FH) is of critical interest both for
clinicians and patients. Such a risk is very challenging to compute because: 1)
the genotype X of the individual of interest is in general unknown; 2) the
posterior distribution P(X|FH, T > t) changes with t (T is the age at disease
onset for the targeted individual); 3) the competing risk of death is not
negligible. In this work, we present a model of this problem using a Bayesian network combined with (right-censored) survival outcomes, where hazard rates depend only on the genotype of each individual. We explain how belief propagation can be used to obtain the posterior distribution of genotypes given the FH, and how to obtain a time-dependent posterior hazard rate for any individual
in the pedigree. Finally, we use this posterior hazard rate to compute
individual risk, with or without the competing risk of death. Our method is
illustrated using the Claus-Easton model for breast cancer (BC). This model
assumes an autosomal dominant genetic risk factor such that non-carriers (genotype 00) have a BC hazard rate lambda_0(t) while carriers (genotypes 01, 10 and 11) have a (much greater) hazard rate lambda_1(t). Both hazard rates are assumed to be piecewise constant with known values (cuts at 20, 30, ..., 80 years). The competing risk of death is derived from the national French registry.
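As a rough illustration of the last step only (the mixture posterior hazard and the risk computation, not the belief-propagation part), the sketch below combines the two genotype-specific hazards with an assumed posterior carrier probability; all hazard and mortality values are purely illustrative, not the published Claus-Easton or French-registry figures.

```python
import numpy as np

# Illustrative hazards only -- NOT the published Claus-Easton or registry values.
ages = np.arange(0.0, 81.0)                     # yearly grid, ages 0..80
lam0 = np.where(ages < 40, 1e-4, 1e-3)          # non-carrier BC hazard lambda_0(t)
lam1 = 15.0 * lam0                              # carrier BC hazard lambda_1(t), much greater
mu = 1e-4 * np.exp(ages / 12.0)                 # competing hazard of death (toy Gompertz)

def cause_specific_risk(lam, other, sel):
    """Cumulative incidence of BC over the selected yearly intervals (Delta t = 1 year)."""
    haz = lam[sel] + other
    surv = np.exp(-np.concatenate(([0.0], np.cumsum(haz)[:-1])))  # survival at interval starts
    return np.sum(lam[sel] * surv)

def individual_risk(p_carrier, t0, t, with_death=True):
    """P(BC in (t0, t] | FH, disease-free (and alive) at age t0), mixing over the genotype."""
    sel = (ages >= t0) & (ages < t)
    other = mu[sel] if with_death else 0.0      # include or drop the competing risk of death
    risk_carrier = cause_specific_risk(lam1, other, sel)
    risk_noncarrier = cause_specific_risk(lam0, other, sel)
    return p_carrier * risk_carrier + (1.0 - p_carrier) * risk_noncarrier

# p_carrier stands for the posterior carrier probability at age t0, which in the paper
# comes from belief propagation on the pedigree (not reproduced here).
print(individual_risk(p_carrier=0.3, t0=40, t=70, with_death=True))
print(individual_risk(p_carrier=0.3, t0=40, t=70, with_death=False))
```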
A coupled approximate deconvolution and dynamic mixed scale model for large-eddy simulation
Large-eddy simulations of incompressible Newtonian fluid flows with
approximate deconvolution models based on the van Cittert method are reported.
The Legendre spectral element method is used for the spatial discretization to
solve the filtered Navier--Stokes equations. A novel variant of approximate
deconvolution models blended with a mixed scale model using a dynamic
evaluation of the subgrid-viscosity constant is proposed. This model is
validated by comparing the large-eddy simulation with the direct numerical
simulation of the flow in a lid-driven cubical cavity, performed at a Reynolds
number of 12,000. Subgrid modeling in the case of a flow with coexisting
laminar, transitional and turbulent zones such as the lid-driven cubical cavity
flow represents a challenging problem. Moreover, the coupling with the spectral element method, which has very low numerical dissipation and dispersion, provides a well-suited framework to analyze the efficiency of a subgrid model. First- and second-order statistics obtained using this new model show very good
agreement with the direct numerical simulation. Filtering operations rely on an
invertible filter applied in a modal basis and preserving the C0-continuity
across elements. No clipping of the dynamic parameters was needed to preserve numerical stability.
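A minimal one-dimensional sketch of the van Cittert iteration underlying approximate deconvolution models, using a generic three-point filter rather than the invertible modal spectral-element filter of the paper:

```python
import numpy as np

# 1-D sketch of van Cittert approximate deconvolution: recover an approximation
# of the unfiltered field from its filtered counterpart.
N = 256
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
u = np.sin(3 * x) + 0.3 * np.sin(17 * x)        # "exact" field

def filt(v):
    """Generic discrete three-point filter G (periodic)."""
    return 0.25 * np.roll(v, 1) + 0.5 * v + 0.25 * np.roll(v, -1)

u_bar = filt(u)                                  # filtered field, u_bar = G u

def van_cittert(u_bar, n_iter=5):
    """phi_0 = u_bar;  phi_{k+1} = phi_k + (u_bar - G phi_k)."""
    phi = u_bar.copy()
    for _ in range(n_iter):
        phi = phi + (u_bar - filt(phi))
    return phi

u_star = van_cittert(u_bar)
print("max error, filtered field    :", np.max(np.abs(u_bar - u)))
print("max error, deconvolved field :", np.max(np.abs(u_star - u)))
```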
Shock formation in electron-ion plasmas: mechanism and timing
We analyse the full shock formation process in electron-ion plasmas in theory
and simulations. It is accepted that electromagnetic shocks in initially
unmagnetised relativistic plasmas are triggered by the filamentation
instability. However, the transition from the first unstable phase to the
quasi-steady shock is still missing. We derive a theoretical model for the
shock formation time, taking into account the filament merging in the
non-linear phase of the filamentation instability. This process is much slower
than in electron-positron pair shocks, so that the shock formation time is longer by a factor proportional to sqrt(m_i/m_e) ln(m_i/m_e).
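For reference, evaluating this mass-ratio factor for a proton-electron plasma (m_i/m_e of about 1836) gives a value of roughly a few hundred:

```python
import numpy as np

# Quick evaluation of the quoted scaling factor sqrt(m_i/m_e) * ln(m_i/m_e)
# for a proton-electron plasma.
mass_ratio = 1836.15
factor = np.sqrt(mass_ratio) * np.log(mass_ratio)
print(f"shock formation longer by a factor proportional to ~{factor:.0f}")
```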
Is turbulent mixing a self-convolution process?
Experimental results for the evolution of the probability distribution
function (PDF) of a scalar mixed by a turbulent flow in a channel are presented. The sequence of PDFs from an initial skewed distribution to a sharp Gaussian is found to be non-universal. The route toward homogenization depends on the ratio between the cross sections of the dye injector and the channel. In connection with this observation, the advantages, shortcomings and applicability of models for the PDF evolution based on a self-convolution mechanism are discussed. Comment: 4 pages
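A toy numerical illustration of the simplest self-convolution mechanism (pairwise averaging of concentrations; this is a generic sketch, not one of the specific models assessed in the paper): repeated self-convolution drives an initially skewed PDF toward a sharp Gaussian around the mean.

```python
import numpy as np

# Each mixing step replaces the scalar by the average of two independent samples
# drawn from the current PDF, c -> (c1 + c2) / 2.
rng = np.random.default_rng(0)
c = rng.exponential(scale=1.0, size=200_000)    # initial skewed distribution

for step in range(6):
    print(f"step {step}: mean={c.mean():.3f}  std={c.std():.3f}  "
          f"skew={(((c - c.mean()) / c.std()) ** 3).mean():+.3f}")
    c = 0.5 * (c + rng.permutation(c))          # one self-convolution step
```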
Asymptotic behavior of age-structured and delayed Lotka-Volterra models
In this work we investigate some asymptotic properties of an age-structured
Lotka-Volterra model, where a specific choice of the functional parameters
allows us to formulate it as a delayed problem, for which we prove the
existence of a unique coexistence equilibrium and characterize the existence of
a periodic solution. We also exhibit a Lyapunov functional that enables us to
reduce the attractive set to either the nontrivial equilibrium or a periodic
solution. We then prove the asymptotic stability of the nontrivial equilibrium
where, depending on the existence of the periodic trajectory, we make explicit
the basin of attraction of the equilibrium. Finally, we prove that these
results can be extended to the initial PDE problem. Comment: 29 pages
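A minimal numerical sketch of a delayed Lotka-Volterra system of the kind referred to in the abstract, with illustrative parameters that are not taken from the paper:

```python
import numpy as np

# Delayed Lotka-Volterra predator-prey system,
#   x'(t) = x(t) * (a - b * y(t)),    y'(t) = y(t) * (-c + d * x(t - tau)),
# integrated with an explicit Euler scheme and a constant history on [-tau, 0].
a, b, c, d, tau = 1.0, 1.0, 1.0, 1.0, 0.5
dt, T = 1e-3, 60.0
n, lag = int(T / dt), int(tau / dt)

x = np.full(n, 1.2)          # prey (entries before index `lag` act as the history)
y = np.full(n, 0.8)          # predator
for k in range(lag, n - 1):
    x[k + 1] = x[k] + dt * x[k] * (a - b * y[k])
    y[k + 1] = y[k] + dt * y[k] * (-c + d * x[k - lag])

# the unique coexistence equilibrium is (x*, y*) = (c/d, a/b)
print("final state :", x[-1], y[-1])
print("equilibrium :", c / d, a / b)
```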
Some remarks on quasi-Hermitian operators
A quasi-Hermitian operator is an operator that is similar to its adjoint in
some sense, via a metric operator, i.e., a strictly positive self-adjoint
operator. Whereas those metric operators are in general assumed to be bounded,
we analyze the structure generated by unbounded metric operators in a Hilbert
space. Following our previous work, we introduce several generalizations of the
notion of similarity between operators. Then we explore systematically the
various types of quasi-Hermitian operators, bounded or not. Finally we discuss
their application in so-called pseudo-Hermitian quantum mechanics. Comment: 18 pages
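For reference, the notion recalled in the first sentence can be written as follows (stated here in the bounded case; with unbounded metric operators the relation is only required on suitable domains):

```latex
% A is quasi-Hermitian when a metric operator G (strictly positive, self-adjoint)
% intertwines it with its adjoint:
\[
  G A \;=\; A^{*} G ,
  \qquad\text{i.e.}\qquad
  A^{*} \;=\; G\,A\,G^{-1}\ \text{ whenever } G^{-1} \text{ exists},
\]
% so that A is symmetric with respect to the modified inner product
\[
  \langle f , g \rangle_{G} \;=\; \langle G f , g \rangle .
\]
```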
LISACode : A scientific simulator of LISA
A new LISA simulator (LISACode) is presented. Its ambition is to achieve a
new degree of sophistication, allowing one to map, as closely as possible, the
impact of the different sub-systems on the measurements. LISACode is not a
detailed simulator at the engineering level but rather a tool whose purpose is
to bridge the gap between the basic principles of LISA and a future,
sophisticated end-to-end simulator. This is achieved by introducing, in a
realistic manner, most of the ingredients that will influence LISA's
sensitivity as well as the application of TDI combinations. Many user-defined
parameters allow the code to study different configurations of LISA thus
helping to finalize the definition of the detector. Another important use of
LISACode is in generating time series for data analysis developments.
Exact reconstruction with directional wavelets on the sphere
A new formalism is derived for the analysis and exact reconstruction of
band-limited signals on the sphere with directional wavelets. It represents an
evolution of the wavelet formalism developed by Antoine & Vandergheynst (1999)
and Wiaux et al. (2005). The translations of the wavelets at any point on the
sphere and their proper rotations are still defined through the continuous
three-dimensional rotations. The dilations of the wavelets are directly defined
in harmonic space through a new kernel dilation, which is a modification of an
existing harmonic dilation. A family of factorized steerable functions with
compact harmonic support, suitable for this kernel dilation, is first
identified. A scale discretized wavelet formalism is then derived, relying on
this dilation. The discrete nature of the analysis scales allows the exact
reconstruction of band-limited signals. A corresponding exact multi-resolution
algorithm is finally described and an implementation is tested. The formalism
is of interest notably for the denoising or the deconvolution of signals on the
sphere with a sparse expansion in wavelets. In astrophysics, it finds a
particular application for the identification of localized directional features
in the cosmic microwave background (CMB) data, such as the imprint of
topological defects, in particular cosmic strings, and for their reconstruction
after separation from the other signal components. Comment: 22 pages, 2 figures. Version 2 matches the version accepted for
publication in MNRAS. Version 3 (identical to version 2) posted for code
release announcement - "Steerable scale discretised wavelets on the sphere" -
S2DW code available for download at
http://www.mrao.cam.ac.uk/~jdm57/software.htm
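As a rough illustration of why discrete analysis scales permit exact reconstruction, the following one-dimensional toy analogue builds harmonic-space windows from differences of squared low-pass profiles and checks the resulting tiling (partition-of-unity) condition; the actual S2DW kernel dilation and factorized steerable functions on the sphere differ from this construction.

```python
import numpy as np

# Toy 1-D analogue: scale-discretised windows kappa_j tiling harmonic space,
#   eta(l)^2 + sum_j kappa_j(l)^2 = 1   for all l below the band limit,
# which is the property that makes reconstruction of band-limited signals exact.
L = 128                                        # band limit
ell = np.arange(L)
alpha = 2.0                                    # dilation factor between consecutive scales
J = int(np.ceil(np.log(L) / np.log(alpha))) + 1

def lowpass(ell, cutoff):
    """Smooth profile: 1 for ell <= cutoff/alpha, 0 for ell >= cutoff."""
    t = np.clip((ell / cutoff - 1.0 / alpha) / (1.0 - 1.0 / alpha), 0.0, 1.0)
    return np.cos(0.5 * np.pi * t)

Phi = np.stack([lowpass(ell, alpha ** j) for j in range(J + 1)])  # nested low-pass profiles
eta = Phi[0]                                                      # scaling function
kappa = np.sqrt(np.maximum(Phi[1:] ** 2 - Phi[:-1] ** 2, 0.0))    # wavelet windows per scale

tiling = eta ** 2 + np.sum(kappa ** 2, axis=0)
print("max deviation from exact tiling:", np.max(np.abs(tiling - 1.0)))  # ~0 up to round-off
```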
Constitutively active acetylcholine-dependent potassium current increases atrial defibrillation threshold by favoring post-shock re-initiation
Electrical cardioversion (ECV), a mainstay in atrial fibrillation (AF) treatment, is unsuccessful in up to 10-20% of patients. An important aspect of the remodeling process caused by AF is the constitutive activation of the atrium-specific acetylcholine-dependent potassium current (I_K,ACh -> I_K,ACh-c), which is associated with ECV failure. This study investigated the role of I_K,ACh-c in ECV failure and in setting the atrial defibrillation threshold (aDFT) in optically mapped neonatal rat cardiomyocyte monolayers. AF was induced by burst pacing followed by application of biphasic shocks of 25-100 V to determine aDFT. Blocking I_K,ACh-c by tertiapin significantly decreased aDFT, which correlated with a significant increase in wavelength during reentry. Genetic knockdown experiments, using lentiviral vectors encoding a Kcnj5-specific shRNA to modulate I_K,ACh-c, yielded similar results. Mechanistically, failed ECV was attributed to incomplete phase singularity (PS) removal or to re-emergence of PSs (i.e., re-initiation) through unidirectional propagation of shock-induced action potentials. Re-initiation occurred at significantly higher voltages than incomplete PS removal and was inhibited by I_K,ACh-c blockade. Whole-heart mapping confirmed our findings, showing a 60% increase in ECV success rate after I_K,ACh-c blockade. This study provides new mechanistic insight into failing ECV of AF and identifies I_K,ACh-c as a possible atrium-specific target to increase ECV effectiveness while decreasing its harmfulness.