Multilevel ensemble Kalman filtering for spatio-temporal processes
We design and analyse the performance of a multilevel ensemble Kalman filter
method (MLEnKF) for filtering settings where the underlying state-space model
is an infinite-dimensional spatio-temporal process. We consider underlying
models that need to be simulated by numerical methods, with discretization in
both space and time. The multilevel Monte Carlo (MLMC) sampling strategy,
achieving variance reduction through pairwise coupling of ensemble particles on
neighboring resolutions, is used in the sample-moment step of MLEnKF to produce
an efficient hierarchical filtering method for spatio-temporal models. Under
sufficient regularity, MLEnKF is proven to be more efficient for weak
approximations than EnKF, asymptotically in the large-ensemble and
fine-numerical-resolution limit. Numerical examples support our theoretical
findings.
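For orientation, the variance-reduction mechanism invoked above is the standard MLMC telescoping sum. The sketch below (a minimal illustration, not the paper's MLEnKF) shows it for a plain mean estimate; the hypothetical `simulate_level` stands in for a pairwise-coupled solve of the model at two neighboring resolutions.

```python
import numpy as np

def simulate_level(level, n_samples, rng):
    """Hypothetical stand-in for a coupled simulation of the state-space
    model at resolution `level` and the next-coarser resolution. Both
    outputs are driven by the SAME random input, which is what makes the
    pairwise coupling shrink the variance of the level differences."""
    noise = rng.standard_normal(n_samples)
    fine = np.sin(2.0 ** -level) + noise            # fine-resolution sample
    coarse = np.sin(2.0 ** -(level - 1)) + noise    # coarse sample, same noise
    return fine, coarse

def mlmc_mean(max_level, samples_per_level, rng):
    """Telescoping estimator E[g_L] = E[g_0] + sum_l E[g_l - g_{l-1}]:
    many cheap coarse samples, few expensive fine ones."""
    estimate = 0.0
    for level in range(max_level + 1):
        n = samples_per_level[level]
        fine, coarse = simulate_level(level, n, rng)
        if level == 0:
            estimate += fine.mean()
        else:
            estimate += (fine - coarse).mean()  # small variance when coupled
    return estimate

rng = np.random.default_rng(0)
print(mlmc_mean(4, [4096, 1024, 256, 64, 16], rng))
```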
The Angiotensin Affair: How Great Minds Thinking Alike Came to a Historical Agreement
In 1934, J. C. Fasciolo had to submit a thesis, and Dr. Houssay suggested he investigate nephrogenic hypertension. E. Braun-Menéndez showed interest in helping, and Drs. L. F. Leloir and J. M. Muñoz from the Institute of Physiology joined them in their attempt to isolate and purify the pressor substance. In 1939, they extracted the substance "hypertensin" from the venous blood of ischemic kidneys. They proposed an enzyme-substrate reaction, naming the substrate hypertensinogen and the enzymes that break down hypertensin hypertensinases. Two months following the Argentine publication, the team in the United States, formed by I. H. Page and O. M. Helmer, published findings in agreement with those reported by the Argentine team. By 1940, they had isolated angiotonin, the equivalent of hypertensin, and called the renin substrate hypertensinogen. In 1957, at the conference held in Ann Arbor, Braun-Menéndez and Page agreed on a new nomenclature. As a result, the words angiotensinogen and angiotensin were born from the combination of the names originally set by both teams. The discovery of the renin-angiotensin system is an example that science should follow: value the progress made by colleagues, collaborate side by side, and pursue the ultimate truth.
Fast approximation by periodic kernel-based lattice-point interpolation with application in uncertainty quantification
This paper deals with the kernel-based approximation of a multivariate
periodic function by interpolation at the points of an integration lattice -- a
setting that, as pointed out by Zeng, Leung, Hickernell (MCQMC2004, 2006) and
Zeng, Kritzer, Hickernell (Constr. Approx., 2009), allows fast evaluation by
fast Fourier transform, so avoiding the need for a linear solver. The main
contribution of the paper is the application to the approximation problem for
uncertainty quantification of elliptic partial differential equations, with the
diffusion coefficient given by a random field that is periodic in the
stochastic variables, in the model proposed recently by Kaarnioja, Kuo, Sloan
(SIAM J. Numer. Anal., 2020). The paper gives a full error analysis, and full
details of the construction of lattices needed to ensure a good (but inevitably
not optimal) rate of convergence and an error bound independent of dimension.
Numerical experiments support the theory.
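The FFT mechanism is easiest to see in a one-dimensional analogue: for equispaced points and a translation-invariant periodic kernel, the interpolation matrix is circulant, so the linear system diagonalizes under the discrete Fourier transform. The sketch below illustrates this; the kernel `kappa` is a toy stand-in, not one of the reproducing kernels analysed in the paper, and the multivariate lattice construction is not reproduced.

```python
import numpy as np

n = 64
nodes = np.arange(n) / n   # equispaced points, the 1D analogue of a lattice

def kappa(x):
    # Toy even, 1-periodic kernel given by a truncated cosine series.
    x = np.atleast_1d(x)
    k = np.arange(1, 129)
    return 1.0 + 2.0 * np.sum(np.cos(2.0 * np.pi * np.outer(x, k)) / k**4, axis=1)

f = lambda x: np.exp(np.sin(2.0 * np.pi * x))   # smooth 1-periodic target

# K[i, j] = kappa(nodes[i] - nodes[j]) is circulant, so K c = f(nodes) is
# solved by elementwise division in Fourier space, avoiding a linear solver.
first_col = kappa(nodes)
coeffs = np.fft.ifft(np.fft.fft(f(nodes)) / np.fft.fft(first_col)).real

# Evaluate the interpolant s(x) = sum_j coeffs[j] * kappa(x - nodes[j]).
x = np.linspace(0.0, 1.0, 200, endpoint=False)
s = np.array([coeffs @ kappa(xi - nodes) for xi in x])
print(np.max(np.abs(s - f(x))))   # interpolation error away from the nodes
```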
Smolyak's algorithm: A powerful black box for the acceleration of scientific computations
We provide a general discussion of Smolyak's algorithm for the acceleration
of scientific computations. The algorithm first appeared in Smolyak's work on
multidimensional integration and interpolation. Since then, it has been
generalized in multiple directions and has been associated with the keywords:
sparse grids, hyperbolic cross approximation, combination technique, and
multilevel methods. Variants of Smolyak's algorithm have been employed in the
computation of high-dimensional integrals in finance, chemistry, and physics,
in the numerical solution of partial and stochastic differential equations, and
in uncertainty quantification. Motivated by this broad and ever-increasing
range of applications, we describe a general framework that summarizes
fundamental results and assumptions in a concise application-independent
manner.
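As a minimal concrete instance of the framework, the following sketch implements Smolyak sparse-grid quadrature from nested one-dimensional trapezoidal rules, i.e. the combination of tensorized one-dimensional difference rules over all multi-indices with |l|_1 <= L. It is an unoptimized illustration under these assumptions, not code associated with the paper.

```python
import itertools
import numpy as np

def quad_1d(level):
    """Nested 1D trapezoidal rule on [0, 1]: midpoint at level 0,
    2**level + 1 equispaced points otherwise."""
    if level == 0:
        return np.array([0.5]), np.array([1.0])
    n = 2 ** level + 1
    x = np.linspace(0.0, 1.0, n)
    w = np.full(n, 1.0 / (n - 1))
    w[0] *= 0.5
    w[-1] *= 0.5
    return x, w

def delta_1d(level):
    """Difference rule Delta_l = Q_l - Q_{l-1}, merged into one node set."""
    x, w = quad_1d(level)
    if level == 0:
        return x, w
    xc, wc = quad_1d(level - 1)
    merged = {}
    for xi, wi in zip(x, w):
        merged[xi] = merged.get(xi, 0.0) + wi
    for xi, wi in zip(xc, wc):
        merged[xi] = merged.get(xi, 0.0) - wi
    pts = np.array(sorted(merged))
    return pts, np.array([merged[p] for p in pts])

def smolyak(f, dim, level):
    """Sum the tensorized difference rules over all multi-indices
    with |l|_1 <= level (the classic sparse-grid construction)."""
    total = 0.0
    for l in itertools.product(range(level + 1), repeat=dim):
        if sum(l) > level:
            continue
        rules = [delta_1d(li) for li in l]
        for idx in itertools.product(*[range(len(p)) for p, _ in rules]):
            point = np.array([rules[k][0][i] for k, i in enumerate(idx)])
            weight = np.prod([rules[k][1][i] for k, i in enumerate(idx)])
            total += weight * f(point)
    return total

# Example: integrate exp(x + y + z) over the unit cube; exact value (e - 1)^3.
print(smolyak(lambda p: np.exp(p.sum()), dim=3, level=5), (np.e - 1.0) ** 3)
```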
Characterizing normal crossing hypersurfaces
The objective of this article is to give an effective algebraic
characterization of normal crossing hypersurfaces in complex manifolds. It is
shown that a hypersurface has normal crossings if and only if it is a free
divisor, has a radical Jacobian ideal and a smooth normalization. Using K.
Saito's theory of free divisors, a characterization in terms of
logarithmic differential forms and vector fields is also found, and finally
another one in terms of the logarithmic residue, using recent results of M.
Granger and M. Schulze.
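Two standard plane-curve examples (ours, for illustration; not taken from the article) show the Jacobian-ideal criterion at work:

```latex
% Node: f = xy has normal crossings at the origin. Its Jacobian ideal
% is radical,
f = xy, \qquad J_f = (\partial_x f, \partial_y f) = (y, x) = \sqrt{J_f},
% and its normalization, two disjoint lines, is smooth.
%
% Cusp: f = y^2 - x^3 does not have normal crossings, and indeed
f = y^2 - x^3, \qquad J_f = (x^2, y) \subsetneq \sqrt{J_f} = (x, y),
% so the Jacobian ideal fails to be radical.
```

Since reduced plane curves are free divisors and the cusp's normalization t -> (t^2, t^3) is smooth, the cusp also shows that the radical-Jacobian condition is not implied by the other two conditions.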
On the Importance of Electroweak Corrections for Majorana Dark Matter Indirect Detection
Recent analyses have shown that the inclusion of electroweak corrections can
alter significantly the energy spectra of Standard Model particles originated
from dark matter annihilations. We investigate the important situation where
the radiation of electroweak gauge bosons has a substantial influence: a
Majorana dark matter particle annihilating into two light fermions. This
process proceeds in the p-wave and is hence suppressed by the small relative
velocity of the annihilating particles. The inclusion of electroweak radiation
eludes this suppression and opens up a potentially sizeable s-wave contribution
to the annihilation cross section. We study this effect in detail and explore
its impact on the fluxes of stable particles resulting from the dark matter
annihilations, which are relevant for dark matter indirect searches. We also
discuss the effective field theory approach, pointing out that the opening of
the s-wave is missed at the level of dimension-six operators and only encoded
by higher orders.
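Schematically, the suppression and its lifting can be summarized as follows; this is textbook-level power counting, not the detailed formulas of the paper:

```latex
% 2 -> 2: for Majorana DM annihilating to light fermions, the s-wave is
% helicity suppressed by (m_f/m_chi)^2 and the p-wave by v^2,
\sigma v\big|_{\chi\chi \to f\bar f}
   \sim a\,\frac{m_f^2}{m_\chi^2} + b\,v^2 ,
% 2 -> 3: radiating an electroweak gauge boson opens an unsuppressed
% s-wave channel at the cost of a gauge coupling,
\sigma v\big|_{\chi\chi \to f\bar f\,W/Z}
   \sim \frac{\alpha_W}{\pi}\,\sigma_0 ,
% which can dominate today, since v ~ 10^{-3} in galactic halos.
```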
Photonic quasi-crystal terahertz lasers
Quasi-crystal structures do not possess full spatial periodicity but are nevertheless constructed from deterministic generation rules. When made of different dielectric materials, they often possess fascinating optical properties, which lie between those of periodic photonic crystals and those of a random arrangement of scatterers. Indeed, they can support extended band-like states with pseudogaps in the energy spectrum, but, lacking translational invariance, they also intrinsically feature a pattern of 'defects', which can give rise to critically localized modes confined in space, similar to Anderson modes in random structures. If used as laser resonators, photonic quasi-crystals open up design possibilities that are simply not available in a conventional periodic photonic crystal. In this letter, we exploit the concept of a 2D photonic quasi-crystal in an electrically injected laser; specifically, we pattern the top surface of a terahertz quantum-cascade laser with a Penrose tiling of pentagonal rotational symmetry, reaching 0.1-0.2% wall-plug efficiencies and 65 mW peak output powers with characteristic surface-emitting conical beam profiles, a result of the rich quasi-crystal Fourier spectrum.
Dark blood ischemic LGE segmentation using a deep learning approach
The extent of ischemic scar detected by Cardiac Magnetic Resonance (CMR) with late gadolinium enhancement (LGE) is linked with long-term prognosis, but scar quantification is time-consuming. Deep Learning (DL) approaches appear promising in CMR segmentation. Purpose: To train and apply a deep learning approach to dark blood (DB) CMR-LGE for ischemic scar segmentation, comparing results to a 4-Standard-Deviation (4-SD) semi-automated method. Methods: We trained and validated a dual neural network infrastructure on a dataset of DB-LGE short-axis stacks, acquired at 1.5T from 33 patients with ischemic scar. The DL architectures were an evolution of the U-Net Convolutional Neural Network (CNN), using data augmentation to increase generalization. The CNNs worked together to identify and segment 1) the myocardium and 2) areas of LGE. The first CNN simultaneously cropped the region of interest (RoI) according to the bounding box of the heart and calculated the area of myocardium. The cropped RoI was then processed by the second CNN, which identified the overall LGE area. The extent of scar was calculated as the ratio of the two areas. For comparison, endo- and epicardial borders were manually contoured and scars segmented by a 4-SD technique with validated software. Results: The two U-Net networks were implemented with two free and open-source machine learning software libraries. We performed 5-fold cross-validation over a dataset of 108 and 385 labelled CMR images of the myocardium and scar, respectively. For scar segmentation, we obtained high performance (> ≈0.85) on the training sets, as measured by the Intersection over Union (IoU) metric. For heart recognition, the performance was lower (> ≈0.7), although it improved (≈0.75) when detecting the cardiac area instead of heart boundaries. On the validation set, performance oscillated between 0.8 and 0.85 for scar tissue recognition and dropped to ≈0.7 for myocardium segmentation. We believe that underrepresented samples and noise might be affecting the overall performance, so additional data might be beneficial. Figure 1: examples of heart segmentation (upper left panel: training; upper right panel: validation) and of scar segmentation (lower left panel: training; lower right panel: validation). Conclusion: Our CNNs show promising results in automatically segmenting the LV and quantifying ischemic scars on DB-LGE CMR images. The performance of our method can be further improved by expanding the training data set. If implemented in a clinical routine, this process can speed up CMR analysis and aid clinical decision-making.
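The quantitative core of the pipeline admits a compact summary. The sketch below shows the two-stage inference and the IoU metric; the two trained networks appear as hypothetical callables (`crop_net`, `lge_net`), stand-ins rather than the authors' models.

```python
import numpy as np

def scar_extent(image, crop_net, lge_net):
    """Scar extent = (LGE area) / (myocardium area), in pixels.
    `crop_net` is assumed to return a heart bounding box plus a binary
    myocardium mask; `lge_net` a binary LGE mask on the cropped RoI."""
    (x0, y0, x1, y1), myocardium_mask = crop_net(image)   # stage 1
    roi = image[y0:y1, x0:x1]
    lge_mask = lge_net(roi)                               # stage 2
    myo_area = float(myocardium_mask.sum())
    return float(lge_mask.sum()) / myo_area if myo_area > 0 else float("nan")

def iou(pred, truth):
    """Intersection over Union, the performance metric quoted above."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union > 0 else 1.0
```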
Portrait of Candida albicans Adherence Regulators
Cell-substrate adherence is a fundamental property of microorganisms that enables them to exist in biofilms. Our study focuses on adherence of the fungal pathogen Candida albicans to one substrate, silicone, which is relevant to device-associated infection. We conducted a mutant screen with a quantitative flow-cell assay to identify thirty transcription factors that are required for adherence. We then combined nanoString gene expression profiling with functional analysis to elucidate relationships among these transcription factors, with two major goals: to extend our understanding of transcription factors previously known to govern adherence or biofilm formation, and to gain insight into the many transcription factors we identified that were relatively uncharacterized, particularly in the context of adherence or cell surface biogenesis. With regard to the first goal, we have discovered a role for the biofilm regulator Bcr1 in adherence, and found that the biofilm regulator Ace2 is a major functional target of the chromatin remodeling factor Snf5. In addition, Bcr1 and Ace2 share several target genes, pointing to a new connection between them. With regard to the second goal, our findings reveal the existence of a large regulatory network that connects eleven adherence regulators, the zinc-response regulator Zap1, and approximately one quarter of the predicted cell surface protein genes in this organism. This limited yet sensitive glimpse of mutant gene expression changes has thus defined one of the broadest cell surface regulatory networks in C. albicans.
A posteriori error analysis and adaptive non-intrusive numerical schemes for systems of random conservation laws
In this article we consider one-dimensional random systems of hyperbolic
conservation laws. We first establish existence and uniqueness of random
entropy admissible solutions for initial value problems of conservation laws
which involve random initial data and random flux functions. Based on these
results we present an a posteriori error analysis for a numerical approximation
of the random entropy admissible solution. For the stochastic discretization,
we consider a non-intrusive approach, the Stochastic Collocation method. The
spatio-temporal discretization relies on the Runge-Kutta Discontinuous
Galerkin method. We derive the a posteriori estimator using continuous
reconstructions of the discrete solution. Combined with the relative entropy
stability framework this yields computable error bounds for the entire
space-stochastic discretization error. The estimator admits a splitting into a
stochastic and a deterministic (space-time) part, allowing for a novel
residual-based space-stochastic adaptive mesh refinement algorithm. We conclude
with various numerical examples investigating the scaling properties of the
residuals and illustrating the efficiency of the proposed adaptive algorithm.
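For readers unfamiliar with the non-intrusive setting, Stochastic Collocation wraps an unmodified deterministic solver, as in the minimal sketch below; the toy solver and the uniform random input are assumptions standing in for the Runge-Kutta discontinuous Galerkin scheme and the random flux/initial data of the paper.

```python
import numpy as np

def deterministic_solver(y):
    """Toy black-box solve for one realization y of the random input;
    returns a spatial profile standing in for u_h(., T; y)."""
    x = np.linspace(0.0, 1.0, 200)
    return np.sin(2.0 * np.pi * (x - y))

# Gauss-Legendre nodes and weights for y ~ Uniform(-1, 1), normalized
# to a probability measure.
nodes, weights = np.polynomial.legendre.leggauss(8)
weights = weights / 2.0

# Non-intrusive: one independent deterministic solve per collocation node.
solutions = np.stack([deterministic_solver(y) for y in nodes])
mean = weights @ solutions              # approximates E[u](x)
variance = weights @ solutions**2 - mean**2
print(mean.shape, float(variance.max()))
```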