
    Interpolating point spread function anisotropy

    Planned wide-field weak lensing surveys are expected to reduce the statistical errors on the shear field to unprecedented levels. In contrast, systematic errors like those induced by the convolution with the point spread function (PSF) will not benefit from that scaling effect and will require very accurate modeling and correction. While numerous methods have been devised to carry out the PSF correction itself, modeling of the PSF shape and its spatial variations across the instrument field of view has, so far, attracted much less attention. This step is nevertheless crucial because the PSF is only known at star positions, while the correction has to be performed at any position on the sky. A reliable interpolation scheme is therefore mandatory, and a popular approach has been to use low-order bivariate polynomials. In the present paper, we evaluate four other classical spatial interpolation methods based on splines (B-splines), inverse distance weighting (IDW), radial basis functions (RBF) and ordinary Kriging (OK). These methods are tested on the Star-challenge part of the GRavitational lEnsing Accuracy Testing 2010 (GREAT10) simulated data and are compared with the classical polynomial fitting (Polyfit). We also test all our interpolation methods independently of the way the PSF is modeled, by interpolating the GREAT10 star fields themselves (i.e., the PSF parameters are known exactly at star positions). We find in that case RBF to be the clear winner, closely followed by the other local methods, IDW and OK. The global methods, Polyfit and B-splines, are largely behind, especially in fields with (ground-based) turbulent PSFs. In fields with non-turbulent PSFs, all interpolators reach a variance on PSF systematics \sigma_{sys}^2 better than the 1\times10^{-7} upper bound expected by future space-based surveys, with the local interpolators performing better than the global ones.
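    As a concrete illustration of the local approach that wins here, the following minimal sketch interpolates a single PSF parameter with radial basis functions. It is not the paper's implementation: the star positions, the toy ellipticity field and the kernel choice are all assumptions, and scipy's RBFInterpolator stands in for whatever RBF code the authors used.

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        rng = np.random.default_rng(0)

        # Hypothetical star catalogue: (x, y) positions and one PSF
        # ellipticity component e1 known there (as in the GREAT10 star
        # fields, where PSF parameters are exact at star positions).
        star_xy = rng.uniform(0.0, 1.0, size=(200, 2))

        def e1_true(xy):
            # Smooth toy spatial variation standing in for the real PSF field.
            return np.sin(3.0 * xy[:, 0]) * np.cos(2.0 * xy[:, 1])

        e1_stars = e1_true(star_xy)

        # Local RBF interpolator: 'neighbors' restricts each evaluation to
        # the nearest stars, which is what distinguishes the local methods
        # (RBF, IDW, OK) from global fits such as low-order polynomials.
        rbf = RBFInterpolator(star_xy, e1_stars, neighbors=20,
                              kernel='thin_plate_spline')

        # Evaluate the PSF model at arbitrary (galaxy) positions and compare
        # against the known truth; this residual plays the role of the
        # sigma_sys^2 diagnostic quoted in the abstract.
        grid = np.mgrid[0:1:50j, 0:1:50j].reshape(2, -1).T
        residual = rbf(grid) - e1_true(grid)
        print("mean squared residual:", np.mean(residual ** 2))

    Restricting each evaluation to a neighborhood of stars is precisely what makes such interpolators robust to the small-scale (e.g. turbulence-induced) structure that defeats a single global polynomial fit.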

    Robust localization methods for passivity enforcement of linear macromodels

    In this paper we solve a non-smooth convex formulation for passivity enforcement of linear macromodels using robust localization-based algorithms such as the ellipsoid and the cutting-plane methods. Unlike existing perturbation-based techniques, we solve the formulation based on direct ℌ∞ norm minimization through perturbation of the state-space model parameters. We provide a systematic way of defining an initial set which is guaranteed to contain the global optimum. We also provide a lower bound on the global minimum that grows tighter at each iteration and hence guarantees η-optimality of the computed solution. We demonstrate the robustness of our implementation by generating accurate passive models for challenging examples on which existing algorithms either failed or exhibited extremely slow convergence.
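    A minimal sketch of the ellipsoid method, one of the two localization algorithms named above, applied to a generic non-smooth convex objective. Everything below is an illustrative assumption: the piecewise-linear objective, the initial ball radius and the stopping tolerance are placeholders for the paper's ℌ∞ formulation, guaranteed initial set and η-optimality certificate.

        import numpy as np

        def f_and_subgrad(x):
            # f(x) = max_i (a_i^T x + b_i) is convex but non-smooth; a row
            # attaining the max is a valid subgradient.
            A = np.array([[1.0, 2.0], [-1.0, 0.5], [0.3, -1.0]])
            b = np.array([-1.0, 0.0, 0.5])
            vals = A @ x + b
            i = np.argmax(vals)
            return vals[i], A[i]

        n = 2
        x = np.zeros(n)        # center of the initial ellipsoid
        P = 100.0 * np.eye(n)  # initial ball, chosen large enough to contain
                               # the global optimum (the paper constructs
                               # such a set systematically)
        best_val, best_lb = np.inf, -np.inf
        for _ in range(200):
            val, g = f_and_subgrad(x)
            best_val = min(best_val, val)
            # For every y in the current ellipsoid, f(y) >= f(x) - sqrt(g'Pg),
            # so the running maximum of this bound certifies near-optimality.
            width = np.sqrt(g @ P @ g)
            best_lb = max(best_lb, val - width)
            if best_val - best_lb < 1e-9:
                break
            gn = g / width  # normalized subgradient (central cut)
            # Standard central-cut ellipsoid update.
            x = x - (P @ gn) / (n + 1)
            P = (n**2 / (n**2 - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(P @ gn, P @ gn))

        print("best value:", best_val, "certified gap:", best_val - best_lb)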

    Actions for signature change

    This is a contribution to the controversy about junction conditions for classical signature change. The central issue in this debate is whether the extrinsic curvature on slices near the hypersurface of signature change has to be continuous ({\it weak} signature change) or to vanish ({\it strong} signature change). Led by a Lagrangian point of view, we write down eight candidate action functionals S_1, \dots, S_8 as possible generalizations of general relativity and investigate to what extent each of these defines a sensible variational problem, and which junction condition is implied. Four of the actions involve an integration over the total manifold. A particular subtlety arises from the precise definition of the Einstein-Hilbert Lagrangian density |g|^{1/2} R[g]. The other four actions are constructed as sums of integrals over single-signature domains. The result is that {\it both} types of junction conditions occur in different models, i.e. are based on different first principles, none of which can be claimed to represent the "correct" one, unless physical predictions are taken into account. From the point of view of naturality dictated by the variational formalism, {\it weak} signature change is slightly favoured over the {\it strong} one, because it requires fewer {\it a priori} restrictions on the class of off-shell metrics. In addition, a proposal for the use of the Lagrangian framework in cosmology is made.
    Comment: 36 pages, LaTeX, no figures; some corrections have been made, several comments and further references are included and a note has been added
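    For orientation, the two classes of candidate actions can be sketched schematically; the LaTeX below is illustrative notation only and does not reproduce the paper's precise definitions of S_1, \dots, S_8.

        % Schematic contrast between the two classes of actions: an
        % integral over the total manifold M versus a sum over the
        % single-signature domains M_+ and M_- separated by the
        % hypersurface of signature change.
        \[
          S_{\rm total}[g] = \int_{M} |g|^{1/2}\, R[g]\; d^4x ,
          \qquad
          S_{\rm split}[g] = \int_{M_+} |g|^{1/2}\, R[g]\; d^4x
                           + \int_{M_-} |g|^{1/2}\, R[g]\; d^4x .
        \]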

    A Comparative Review of Dimension Reduction Methods in Approximate Bayesian Computation

    Approximate Bayesian computation (ABC) methods make use of comparisons between simulated and observed summary statistics to overcome the problem of computationally intractable likelihood functions. As the practical implementation of ABC requires computations based on vectors of summary statistics rather than full data sets, a central question is how to derive low-dimensional summary statistics from the observed data with minimal loss of information. In this article we provide a comprehensive review and comparison of the performance of the principal methods of dimension reduction proposed in the ABC literature. The methods are split into three non-mutually-exclusive classes consisting of best subset selection methods, projection techniques and regularization. In addition, we introduce two new methods of dimension reduction. The first is a best subset selection method based on Akaike and Bayesian information criteria, and the second uses ridge regression as a regularization procedure. We illustrate the performance of these dimension reduction techniques through the analysis of three challenging models and data sets.
    Comment: Published at http://dx.doi.org/10.1214/12-STS406 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org)
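    A minimal sketch of the second new method, ridge regression used as a projection of the summary statistics, embedded in plain ABC rejection. The toy model (a normal distribution with unknown mean), the prior, the raw summaries and the tolerance are all illustrative assumptions, not the article's experimental setup.

        import numpy as np

        rng = np.random.default_rng(1)

        def summaries(data):
            # Deliberately redundant raw summaries; dimension reduction
            # should discover that the mean carries the information.
            return np.array([data.mean(), np.median(data), data.std(),
                             data.min(), data.max()])

        # Observed data from a "true" parameter value.
        theta_true = 2.0
        s_obs = summaries(rng.normal(theta_true, 1.0, size=100))

        # Pilot simulations: draw theta from the prior, simulate, summarize.
        n_sim = 5000
        thetas = rng.uniform(-5.0, 5.0, size=n_sim)
        S = np.stack([summaries(rng.normal(t, 1.0, size=100)) for t in thetas])

        # Ridge regression of theta on the summaries: the fitted linear map
        # projects the 5-d summary vector onto a single 1-d statistic.
        lam = 1.0
        X = np.column_stack([np.ones(n_sim), S])
        beta = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ thetas)
        proj = X @ beta
        proj_obs = np.concatenate([[1.0], s_obs]) @ beta

        # ABC rejection in the projected (1-d) summary space.
        eps = np.quantile(np.abs(proj - proj_obs), 0.01)
        posterior = thetas[np.abs(proj - proj_obs) <= eps]
        print("posterior mean ~", posterior.mean(), "(true:", theta_true, ")")

    Regressing the parameter on the pilot summaries collapses the five redundant statistics to one fitted conditional-mean statistic, which is the only quantity compared in the rejection step; the ridge penalty keeps the projection stable when the raw summaries are strongly correlated.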

    A comparative study of surrogate musculoskeletal models using various neural network configurations

    Title from PDF of title page, viewed on August 13, 2013. Thesis advisor: Reza R. Derakhshani. Vita. Includes bibliographic references (pages 85-88). Thesis (M.S.)--School of Computing and Engineering, University of Missouri--Kansas City, 2013.
    The central idea in musculoskeletal modeling is to be able to predict body-level information (e.g., muscle forces) as well as tissue-level information (stress, strain, etc.). To analyze such models efficiently, surrogate models have been introduced which concurrently predict both body-level and tissue-level information using multi-body and finite-element analysis, respectively. However, this kind of surrogate model is not an optimal solution, as it relies on finite-element models that are computation-intensive and require complex meshing methods, especially during real-time movement simulations. An alternative surrogate modeling method is the use of artificial neural networks in place of finite-element models. The ultimate objective of this research is to predict the tissue-level stresses experienced by the cartilage and ligaments during movement, and to achieve concurrent simulation of muscle force and tissue stress using various surrogate neural network models, with stresses obtained from finite-element models providing the frame of reference. Over the last decade, neural networks have been successfully implemented in several biomechanical modeling applications. Their adaptive ability to learn from examples, simple implementation techniques, and fast simulation times make neural networks versatile and robust compared to other techniques. The neural network models are trained with reaction forces from multi-body models and with stresses from finite-element models obtained at the elements of interest. Several configurations of static and dynamic neural networks are modeled, and accuracies close to 93% were achieved, with the correlation coefficient as the chosen measure of goodness. Using neural networks, simulation time was reduced by a factor of nearly 40,000 compared to the finite-element models. This study also confirms the theoretical expectation that committee configurations--including the average committee, stacked generalization, and negative correlation learning--provide considerably better results than the individual networks themselves.
    Contents: Introduction -- Methods -- Results -- Conclusion -- Future work -- Appendix A. Various linear and non-linear modeling techniques -- Appendix B. Error analysis
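    A minimal sketch of the simplest of those committee configurations, an average committee, using scikit-learn MLPs on synthetic data. The force and stress data, the network sizes and the committee size are assumptions; the thesis trains on multi-body reaction forces and finite-element stresses instead.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(2)

        # Hypothetical dataset: 6 reaction-force channels -> 1 element stress.
        X = rng.normal(size=(2000, 6))
        y = np.sin(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2] + 0.1 * rng.normal(size=2000)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        # Average committee: identical architecture, different random
        # initializations; the members' predictions are simply averaged.
        committee = [MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                                  random_state=seed).fit(X_tr, y_tr)
                     for seed in range(5)]
        pred = np.mean([net.predict(X_te) for net in committee], axis=0)

        # Correlation coefficient as the measure of goodness, as in the thesis.
        r = np.corrcoef(pred, y_te)[0, 1]
        print("committee correlation coefficient:", r)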