Approximation and geometric modeling with simplex B-splines associated with irregular triangles
Bivariate quadratic simplicial B-splines, defined by sets of knots derived from a (suboptimal) constrained Delaunay triangulation of the domain, are employed to obtain a C1-smooth surface. The generation of triangle vertices is adjusted to the areal distribution of the data in the domain. We emphasize that the vertices of the triangles initially define the knots of the B-splines and generally do not coincide with the abscissae of the data. Thus, this approach is well suited to processing scattered data.

With each vertex of a given triangle we associate two additional points, which give rise to six configurations of five knots defining six linearly independent bivariate quadratic B-splines supported on the convex hull of the corresponding five knots.

If we consider the vertices of the triangulation as threefold knots, the bivariate quadratic B-splines turn into the well-known bivariate quadratic Bernstein-Bézier polynomials on triangles. Thus we may think of B-splines as smoothed versions of Bernstein-Bézier polynomials with respect to the entire domain. From this degenerate Bernstein-Bézier situation we deduce rules for locating the additional points associated with each vertex so as to establish knot configurations that allow the modeling of discontinuities in the function itself or in any of its directional derivatives. We find that four collinear knots out of the five defining an individual quadratic B-spline generate a discontinuity in the surface along the line they constitute, and analogously that three collinear knots generate a discontinuity in a first derivative.
Finally, the coefficients of the linear combinations of normalized simplicial B-splines are visualized as geometric control points satisfying the convex hull property.
Thus, bivariate quadratic B-splines associated with irregular triangles provide great flexibility in approximating and modeling rapidly changing functions, or even functions with given discontinuities, from scattered data.
An example for least squares approximation with simplex splines is presented
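The degenerate case described above, where triangle vertices are treated as threefold knots and the B-splines reduce to quadratic Bernstein-Bézier polynomials on a triangle, can be sketched directly. The following minimal Python illustration uses an invented triangle, coefficients, and evaluation point, none of which come from the paper:

```python
from math import factorial

def barycentric(p, a, b, c):
    """Barycentric coordinates of point p with respect to triangle (a, b, c)."""
    (x, y), (x1, y1), (x2, y2), (x3, y3) = p, a, b, c
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    l1 = ((y2 - y3) * (x - x3) + (x3 - x2) * (y - y3)) / det
    l2 = ((y3 - y1) * (x - x3) + (x1 - x3) * (y - y3)) / det
    return l1, l2, 1.0 - l1 - l2

def bernstein2(i, j, k, l1, l2, l3):
    """Quadratic Bernstein basis polynomial on a triangle (i + j + k = 2)."""
    coef = factorial(2) // (factorial(i) * factorial(j) * factorial(k))
    return coef * l1**i * l2**j * l3**k

def bb_surface(coeffs, p, tri):
    """Evaluate a quadratic Bernstein-Bezier patch; coeffs keyed by (i, j, k)."""
    l1, l2, l3 = barycentric(p, *tri)
    return sum(c * bernstein2(i, j, k, l1, l2, l3)
               for (i, j, k), c in coeffs.items())

tri = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
# Constant coefficients reproduce a constant surface (partition of unity).
coeffs = {(i, j, k): 1.0 for i in range(3)
          for j in range(3) for k in range(3) if i + j + k == 2}
print(bb_surface(coeffs, (0.2, 0.3), tri))   # 1.0 up to rounding
```

The partition-of-unity check above is the same property that underlies the convex hull behavior of the control points mentioned in the abstract: the surface value is always a convex combination of the coefficients.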
From the help desk: Polynomial distributed lag models
Polynomial distributed lag models (PDLs) are finite-order distributed lag models with the impulse-response function constrained to lie on a polynomial of known degree. You can estimate the parameters of a PDL directly via constrained ordinary least squares, or you can derive a reduced form of the model via a linear transformation of the structural model, estimate the reduced-form parameters, and recover estimates of the structural parameters via an inverse linear transformation of the reduced-form parameter estimates. This article demonstrates both methods using Stata. Copyright 2004 by StataCorp LP. Keywords: polynomial distributed lag, Almon, Lagrangian interpolation polynomials
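The reduced-form route described above can be sketched outside Stata as well. Below is a minimal Python illustration of the Almon transformation; the lag length, polynomial degree, and simulated series are assumptions made for the example, not values from the article:

```python
import numpy as np

rng = np.random.default_rng(0)
q, p, T = 4, 2, 200                # lag length, polynomial degree, sample size (assumed)

x = rng.normal(size=T + q)
# Structural lag weights beta_i constrained to a degree-p polynomial in the lag index i.
a_true = np.array([1.0, 2.0, -0.5])
beta_true = np.array([sum(a_true[j] * i**j for j in range(p + 1))
                      for i in range(q + 1)])

# Lag matrix: column i holds x_{t-i}.
X = np.column_stack([x[q - i: T + q - i] for i in range(q + 1)])
y = X @ beta_true                  # noise-free for clarity

# Reduced form via the Almon transformation: Z = X H with H[i, j] = i**j.
H = np.array([[i**j for j in range(p + 1)] for i in range(q + 1)], dtype=float)
Z = X @ H
a_hat, *_ = np.linalg.lstsq(Z, y, rcond=None)

# Inverse transformation recovers the structural lag weights.
beta_hat = H @ a_hat
print(beta_hat)
```

Because the true lag weights lie exactly on a degree-2 polynomial, the recovered `beta_hat` reproduces them; with noisy data the same two-step estimator yields the constrained least-squares fit.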
A Compressed Sampling and Dictionary Learning Framework for WDM-Based Distributed Fiber Sensing
We propose a compressed sampling and dictionary learning framework for
fiber-optic sensing using wavelength-tunable lasers. A redundant dictionary is
generated from a model for the reflected sensor signal. Imperfect prior
knowledge is considered in terms of uncertain local and global parameters. To
estimate a sparse representation and the dictionary parameters, we present an
alternating minimization algorithm that is equipped with a pre-processing
routine to handle dictionary coherence. The support of the obtained sparse
signal indicates the reflection delays, which can be used to measure
impairments along the sensing fiber. The performance is evaluated by
simulations and experimental data for a fiber sensor system with common core
architecture. Comment: Accepted for publication in Journal of the Optical Society of America A. © 2017 Optical Society of America.
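The sparse-recovery step in this setting can be illustrated with a toy version: a redundant dictionary of delayed reflection pulses and a greedy sparse solver whose support indicates the reflection delays. The sketch below uses plain orthogonal matching pursuit with invented pulse parameters; the paper's alternating-minimization algorithm, which additionally estimates uncertain dictionary parameters and pre-processes for coherence, is not reproduced here:

```python
import numpy as np

n, n_delays = 256, 64
t = np.linspace(0.0, 1.0, n)
delays = np.linspace(0.05, 0.95, n_delays)
width = 0.02                                  # assumed pulse width

# Redundant dictionary: one Gaussian reflection pulse per candidate delay.
D = np.exp(-((t[:, None] - delays[None, :]) ** 2) / (2 * width**2))
D /= np.linalg.norm(D, axis=0)

# Synthetic sensor trace: two reflections at known delay indices.
true_support = [10, 40]
y = D[:, true_support] @ np.array([1.0, 0.6])

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily select k atoms, re-fitting each time."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    return sorted(support)

print(omp(D, y, 2))   # recovers the delay indices [10, 40]
```

The recovered support maps back to physical reflection delays via the `delays` grid, which is the quantity used to localize impairments along the fiber.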
Reconstruction of the Dark Energy equation of state
One of the main challenges of modern cosmology is to investigate the nature
of dark energy in our Universe. The properties of such a component are normally
summarised as a perfect fluid with a (potentially) time-dependent
equation-of-state parameter w(z). We investigate the evolution of this
parameter with redshift by performing a Bayesian analysis of current
cosmological observations. We model the temporal evolution as piecewise linear
in redshift between `nodes', whose w-values and redshifts are allowed to
vary. The optimal number of nodes is chosen by the Bayesian evidence. In this
way, we can both determine the complexity supported by current data and locate
any features present in w(z). We compare this node-based reconstruction with
some previously well-studied parameterisations: the Chevallier-Polarski-Linder
(CPL), the Jassal-Bagla-Padmanabhan (JBP) and the Felice-Nesseris-Tsujikawa
(FNT). By comparing the Bayesian evidence for all of these models we find an
indication towards possible time-dependence in the dark energy
equation-of-state. It is also worth noting that the CPL and JBP models are
strongly disfavoured, whilst the FNT is just significantly disfavoured, when
compared to a simple cosmological constant Λ. We find that our node-based
reconstruction model is slightly disfavoured with respect to the ΛCDM
model. Comment: 17 pages, 5 figures, minor correction
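The two kinds of parameterisation compared above can be sketched in a few lines: a node-based w(z) that is piecewise linear between movable nodes, and the CPL form w(z) = w0 + wa z/(1+z). The node positions and parameter values below are illustrative, not the fitted values from the analysis:

```python
import numpy as np

# Node-based reconstruction: w(z) is piecewise linear between nodes whose
# redshifts and w-values are free parameters (values here are invented).
node_z = np.array([0.0, 0.5, 1.5])
node_w = np.array([-1.0, -0.9, -1.1])

def w_nodes(z):
    """Piecewise-linear interpolation; constant beyond the outer nodes."""
    return np.interp(z, node_z, node_w)

def w_cpl(z, w0=-1.0, wa=0.2):
    """Chevallier-Polarski-Linder parameterisation: w(z) = w0 + wa * z / (1 + z)."""
    return w0 + wa * z / (1.0 + z)

print(w_nodes(0.25))   # -0.95, midway between the first two nodes
```

In the Bayesian analysis the node count, node redshifts, and node w-values are all sampled, so the model complexity itself is selected by the evidence.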
Parametrization of dark energy equation of state Revisited
A comparative study of various parametrizations of the dark energy equation
of state is made. Astrophysical constraints from LSS, CMB and BBN are laid down
to test the physical viability and cosmological compatibility of these
parametrizations. A critical evaluation of the 4-index parametrizations reveals
that Hannestad-Mörtsell as well as Lee parametrizations are simple and
transparent in probing the evolution of the dark energy during the expansion
history of the universe and they satisfy the LSS, CMB and BBN constraints on
the dark energy density parameter for the best fit values. Comment: 11 pages
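A standard ingredient behind such viability checks is how the dark energy density evolves for a given equation of state: rho_DE(z)/rho_DE(0) = exp(3 ∫₀ᶻ (1 + w(z'))/(1 + z') dz'). The sketch below evaluates this for the CPL form with illustrative parameters, checking the numerical integral against the known closed form; it is a generic illustration, not the constraint analysis of the paper:

```python
import numpy as np

w0, wa = -0.9, 0.3                       # illustrative CPL parameters

def w_cpl(z):
    return w0 + wa * z / (1.0 + z)

def rho_de_numeric(z, n=20001):
    """rho_DE(z)/rho_DE(0) via trapezoid-rule integration of (1 + w)/(1 + z')."""
    zp = np.linspace(0.0, z, n)
    f = (1.0 + w_cpl(zp)) / (1.0 + zp)
    integral = (zp[1] - zp[0]) * (f[0] / 2 + f[1:-1].sum() + f[-1] / 2)
    return np.exp(3.0 * integral)

def rho_de_closed(z):
    """Closed form for CPL: (1+z)^(3(1+w0+wa)) * exp(-3 wa z / (1+z))."""
    return (1.0 + z) ** (3.0 * (1.0 + w0 + wa)) * np.exp(-3.0 * wa * z / (1.0 + z))
```

For w0 = -1, wa = 0 this reduces to a constant density, i.e. a cosmological constant, which is why deviations of the density history from a constant probe the time dependence of w.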
Large scale ab-initio simulations of dislocations
We present a novel methodology to compute relaxed dislocation core configurations and their energies in crystalline metallic materials using large-scale ab-initio simulations. The approach is based on MacroDFT, a coarse-grained density functional theory method that accurately computes the electronic structure with sub-linear scaling, resulting in a tremendous reduction in cost. Owing to its real-space implementation, MacroDFT can harness petascale resources to study materials and alloys through accurate ab-initio calculations. Thus, the proposed methodology can be used to investigate dislocation cores and other defects where long-range elastic effects play an important role, such as grain boundaries and regions near precipitates in crystalline materials. We demonstrate the method by computing the relaxed dislocation cores of prismatic dislocation loops and dislocation segments in magnesium (Mg). We also study the interaction energy with a line of aluminum (Al) solutes. Our simulations elucidate the essential coupling between the quantum mechanical aspects of the dislocation core and the long-range elastic fields they generate. In particular, our quantum mechanical simulations are able to describe the logarithmic divergence of the energy in the far field, as is known from classical elasticity theory. To reach such scaling, the number of atoms in the simulation cell has to be exceedingly large, which cannot be achieved with state-of-the-art density functional theory implementations.
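The logarithmic far-field divergence mentioned above follows from classical elasticity, where the elastic energy per unit length of a straight screw dislocation outside a core cutoff r0 is (mu b^2 / 4 pi) ln(R / r0). A minimal sketch with approximate Mg constants (all values assumed for illustration, not taken from the MacroDFT calculations):

```python
import numpy as np

mu = 17e9        # shear modulus of Mg in Pa (approximate, assumed)
b = 3.2e-10      # Burgers vector magnitude in m (approximate, assumed)
r0 = 1e-9        # core cutoff radius in m (assumed)

def screw_energy_per_length(R):
    """Classical elastic energy per unit length of a screw dislocation, outside the core."""
    return mu * b**2 / (4.0 * np.pi) * np.log(R / r0)

# Each doubling of the outer radius R adds the same increment (mu b^2 / 4 pi) ln 2,
# which is the logarithmic divergence the ab-initio far field must reproduce.
for R in (1e-6, 2e-6, 4e-6):
    print(R, screw_energy_per_length(R))
```

Matching this slow logarithmic growth is why the simulation cell must contain very many atoms before the far-field scaling becomes visible.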