
    A Probabilistic Model for LCF

    Fatigue life of components or test specimens often exhibits significant scatter. Furthermore, size effects have a non-negligible influence on the fatigue life of parts with different geometries. We present a new probabilistic model for low-cycle fatigue (LCF) in polycrystalline metal. The model takes size effects and inhomogeneous strain fields into account by means of a Poisson point process (PPP). This approach is based on the assumption of independently occurring LCF cracks and on the Coffin-Manson-Basquin (CMB) equation. Within the probabilistic model, we give a new and more physical interpretation of the CMB parameters, which in the original approach are not material parameters in a strict sense, as they depend on the specimen geometry. Calibration and validation of the proposed model are performed using results of strain-controlled LCF tests on specimens with different surface areas. The test specimens are made of the nickel-base superalloy RENE 80. Comment: 20 pages, 6 figures
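    The size effect described in this abstract can be sketched numerically. Under independently occurring cracks, a PPP gives a survival probability S(n) = exp(-A·ρ(n)) for a surface of area A after n cycles, where ρ is a crack-initiation intensity per unit area. The Weibull-type intensity and its parameters below are illustrative assumptions, not the paper's calibrated CMB-based model.

    ```python
    import numpy as np

    def survival_probability(n_cycles, area, n_ref=1e4, shape=2.0):
        """PPP survival probability: P(no crack by n_cycles on a surface of given area).

        rho is an assumed Weibull-type intensity per unit area; in the paper it
        would instead be derived from the Coffin-Manson-Basquin equation.
        """
        rho = (n_cycles / n_ref) ** shape  # expected cracks per unit area (assumed form)
        return np.exp(-area * rho)

    # Size effect: at the same cycle count, a larger surface is less likely to survive.
    small = survival_probability(5e3, area=1.0)
    large = survival_probability(5e3, area=4.0)
    ```

    The geometry dependence of the classical CMB parameters falls out naturally here: the area A enters the exponent, so a fit that ignores it absorbs geometry into the "material" constants.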

    Improvements in prevalence trend fitting and incidence estimation in EPP 2013

    OBJECTIVE: Describe modifications to the latest version of the Joint United Nations Programme on HIV/AIDS (UNAIDS) Estimation and Projection Package component of Spectrum (EPP 2013) to improve prevalence fitting and incidence trend estimation in national epidemics and global estimates of HIV burden. METHODS: Key changes made under the guidance of the UNAIDS Reference Group on Estimates, Modelling and Projections include: availability of a range of incidence calculation models and guidance for selecting a model; a shift to reporting the Bayesian median instead of the maximum likelihood estimate; procedures for comparison and validation against reported HIV and AIDS data; incorporation of national surveys as an integral part of the fitting and calibration procedure, allowing survey trends to inform the fit; improved antenatal clinic calibration procedures in countries without surveys; adjustment of national antiretroviral therapy reports used in the fitting to include only those aged 15–49 years; better estimates of mortality among people who inject drugs; and enhancements to speed up fitting. RESULTS: The revised models in EPP 2013 allow closer fits to observed prevalence trend data and reflect improving understanding of HIV epidemics and associated data. CONCLUSION: Spectrum and EPP continue to adapt to make better use of existing data sources, incorporate new sources of information in their fitting and validation procedures, and correct for quantifiable biases in inputs as they are identified and understood. These adaptations provide countries with better calibrated estimates of incidence and prevalence, which increase epidemic understanding and provide a solid base for program and policy planning.

    The Jeffreys-Lindley Paradox and Discovery Criteria in High Energy Physics

    The Jeffreys-Lindley paradox displays how the use of a p-value (or number of standard deviations z) in a frequentist hypothesis test can lead to an inference that is radically different from that of a Bayesian hypothesis test in the form advocated by Harold Jeffreys in the 1930s and common today. The setting is the test of a well-specified null hypothesis (such as the Standard Model of elementary particle physics, possibly with "nuisance parameters") versus a composite alternative (such as the Standard Model plus a new force of nature of unknown strength). The p-value, as well as the ratio of the likelihood under the null hypothesis to the maximized likelihood under the alternative, can strongly disfavor the null hypothesis, while the Bayesian posterior probability for the null hypothesis can be arbitrarily large. The academic statistics literature contains many impassioned comments on this paradox, yet there is no consensus either on its relevance to scientific communication or on its correct resolution. The paradox is quite relevant to frontier research in high energy physics. This paper is an attempt to explain the situation to both physicists and statisticians, in the hope that further progress can be made.Comment: v4: Continued editing for clarity. Figure added. v5: Minor fixes to biblio. Same as published version except for minor copy-edits, Synthese (2014). v6: fix typos, and restore garbled sentence at beginning of Sec 4 to v
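    The tension this abstract describes can be reproduced in a few lines. For a Gaussian measurement of a mean μ with H0: μ = 0 against H1: μ ~ N(0, τ²), both marginal likelihoods are Gaussian, so the Bayes factor is available in closed form. The prior width τ = 500 below is an illustrative (deliberately diffuse) choice; with it, a 3σ excess has p ≈ 0.0027 yet the Bayes factor favors the null.

    ```python
    import math

    def two_sided_p(z):
        """Two-sided p-value for an excess of z standard deviations."""
        return math.erfc(z / math.sqrt(2))

    def bayes_factor_01(xbar, se, tau):
        """BF_01 = m(xbar | H0) / m(xbar | H1) for Gaussian measurement and prior.

        H0: xbar ~ N(0, se^2); H1 marginal: xbar ~ N(0, se^2 + tau^2).
        """
        var0, var1 = se ** 2, se ** 2 + tau ** 2
        log_bf = 0.5 * math.log(var1 / var0) - 0.5 * xbar ** 2 * (1 / var0 - 1 / var1)
        return math.exp(log_bf)

    se, z, tau = 1.0, 3.0, 500.0   # tau is an assumed, very wide prior scale
    p = two_sided_p(z)              # ~0.0027: strongly disfavors H0
    bf = bayes_factor_01(z * se, se, tau)  # > 1: favors H0
    ```

    Widening τ further makes BF_01 grow without bound while p stays fixed, which is the heart of the paradox.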

    A Bayesian approach for energy-based estimation of acoustic aberrations in high intensity focused ultrasound treatment

    High intensity focused ultrasound is a non-invasive method for treating diseased tissue that uses a beam of ultrasound to generate heat within a small volume. A common challenge in applying this technique is that heterogeneity of the biological medium can defocus the ultrasound beam. Here we reduce the problem of refocusing the beam to the inverse problem of estimating the acoustic aberration due to the biological tissue from acoustic radiative force imaging data. We solve this inverse problem in a Bayesian framework with a hierarchical prior, using a Metropolis-within-Gibbs algorithm. The framework is tested on both synthetic and experimental datasets. We demonstrate that our approach can estimate the aberrations from small datasets, as few as 32 sonication tests, which can lead to a significant speedup of the treatment process. Furthermore, our approach is compatible with a wide range of sonication tests and can be applied to other energy-based measurement techniques.
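    A Metropolis-within-Gibbs sampler of the kind named above alternates exact conditional (Gibbs) draws for parameters whose conditionals are tractable with Metropolis steps for those that are not. The sketch below runs it on a toy hierarchical model (data yᵢ ~ N(μ, σ²) with a flat prior on μ and an inverse-gamma prior on σ²), not on the paper's aberration model; priors and step size are illustrative assumptions.

    ```python
    import numpy as np

    def metropolis_within_gibbs(y, n_iter=4000, step=0.5, seed=0):
        """Toy Metropolis-within-Gibbs: Gibbs update for sigma^2, Metropolis for mu."""
        rng = np.random.default_rng(seed)
        n = len(y)
        mu, sig2 = 0.0, 1.0
        a0, b0 = 2.0, 1.0  # assumed inverse-gamma hyperparameters
        samples = []
        for _ in range(n_iter):
            # Gibbs step: sigma^2 | mu, y ~ Inv-Gamma(a0 + n/2, b0 + SS/2)
            ss = np.sum((y - mu) ** 2)
            sig2 = 1.0 / rng.gamma(a0 + n / 2, 1.0 / (b0 + ss / 2))
            # Metropolis step: random-walk proposal for mu under a flat prior
            prop = mu + step * rng.standard_normal()
            log_ratio = (np.sum((y - mu) ** 2) - np.sum((y - prop) ** 2)) / (2 * sig2)
            if np.log(rng.uniform()) < log_ratio:
                mu = prop
            samples.append((mu, sig2))
        return np.array(samples)

    y = np.random.default_rng(1).normal(3.0, 1.0, size=200)
    chain = metropolis_within_gibbs(y)
    mu_hat = chain[1000:, 0].mean()  # posterior mean of mu after burn-in
    ```

    Mixing the two update types is what makes the scheme practical when, as in the aberration problem, only some conditionals have a convenient closed form.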

    Sampling-based Motion Planning for Active Multirotor System Identification

    This paper reports on an algorithm for planning trajectories that allow a multirotor micro aerial vehicle (MAV) to quickly identify a set of unknown parameters. In many problems, such as self-calibration or model parameter identification, some states are only observable under specific motions. These motions are often hard to find, especially for inexperienced users. Therefore, we consider system model identification in an active setting, where the vehicle autonomously decides what actions to take in order to quickly identify the model. Our algorithm approximates the belief dynamics of the system around a candidate trajectory using an extended Kalman filter (EKF). It uses sampling-based motion planning to explore the space of possible beliefs and to find a maximally informative trajectory within a user-defined budget. We validate our method in simulation and on a real system, showing the feasibility and repeatability of the proposed approach. Our planner creates trajectories which reduce model parameter convergence time and uncertainty by a factor of four.Comment: Published at ICRA 2017. Video available at https://www.youtube.com/watch?v=xtqrWbgep5
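    The trajectory-scoring idea above can be sketched with a scalar stand-in: propagate the EKF covariance of a constant unknown parameter along each candidate trajectory and prefer the one with the smallest final uncertainty. The "drag coefficient" measurement model and the two trajectories below are illustrative assumptions, not the paper's MAV model.

    ```python
    import numpy as np

    def final_parameter_variance(velocities, prior_var=1.0, meas_var=0.1):
        """Propagate the (scalar) EKF covariance of a constant parameter theta.

        Assumed measurement model: z_t = theta * v_t^2 + noise, so the
        measurement Jacobian is H_t = v_t^2 -- more excitation, more information.
        """
        info = 1.0 / prior_var
        for v in velocities:
            h = v ** 2
            info += h * h / meas_var  # information-filter form of the EKF update
        return 1.0 / info

    # Two candidate trajectories: hovering-like vs strongly exciting.
    gentle = final_parameter_variance(np.full(50, 0.2))
    aggressive = final_parameter_variance(np.linspace(0.0, 2.0, 50))
    # The more exciting trajectory yields the tighter parameter estimate,
    # so a sampling-based planner comparing such scores would select it.
    ```

    In the paper the same comparison is made over full belief-state trajectories sampled by the motion planner, subject to the user-defined budget.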

    Comprehensive Two-Point Analyses of Weak Gravitational Lensing Surveys

    We present a framework for analyzing weak gravitational lensing survey data, including lensing and source-density observables, plus spectroscopic redshift calibration data. All two-point observables are predicted in terms of parameters of a perturbed Robertson-Walker metric, making the framework independent of the models for gravity, dark energy, or galaxy properties. For Gaussian fluctuations the 2-point model determines the survey likelihood function and allows Fisher-matrix forecasting. The framework includes nuisance terms for the major systematic errors: shear measurement errors, magnification bias and redshift calibration errors, intrinsic galaxy alignments, and inaccurate theoretical predictions. We propose flexible parameterizations of the many nuisance parameters related to galaxy bias and intrinsic alignment. For the first time we can integrate many different observables and systematic errors into a single analysis. As a first application of this framework, we demonstrate that: uncertainties in power-spectrum theory cause very minor degradation to cosmological information content; nearly all useful information (excepting baryon oscillations) is extracted with ~3 bins per decade of angular scale; and the rate at which galaxy bias varies with redshift substantially influences the strength of cosmological inference. The framework will permit careful study of the interplay between numerous observables, systematic errors, and spectroscopic calibration data for large weak-lensing surveys.Comment: submitted to Ap
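    The Fisher-matrix forecasting mentioned above works by summing derivative outer products of the model observables over all data points. The sketch below does this for a toy two-parameter band-power model C(ℓ) = A·(ℓ/100)ⁿ with Gaussian errors; the model, multipole range, and error bar are illustrative assumptions, not the paper's lensing observables.

    ```python
    import numpy as np

    def fisher_matrix(ells, sigma, A=1.0, n=-1.0):
        """F_ij = sum_l (dC_l/dp_i)(dC_l/dp_j) / sigma_l^2 for p = (A, n)."""
        x = ells / 100.0
        model = A * x ** n
        dA = x ** n             # derivative of C(l) with respect to amplitude A
        dn = model * np.log(x)  # derivative with respect to slope n
        derivs = np.vstack([dA, dn])
        return (derivs / sigma) @ (derivs / sigma).T

    ells = np.linspace(50, 3000, 30)
    F = fisher_matrix(ells, sigma=0.05)
    # Marginalized 1-sigma forecasts: sqrt of the diagonal of the inverse.
    marg_errors = np.sqrt(np.diag(np.linalg.inv(F)))
    ```

    Nuisance parameters (bias, intrinsic alignments, calibration) enter the same way: extra rows in the derivative matrix, whose marginalization inflates the cosmological errors.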

    The non-Gaussianity of the cosmic shear likelihood - or: How odd is the Chandra Deep Field South?

    (abridged) We study the validity of the approximation of a Gaussian cosmic shear likelihood. We estimate the true likelihood for a fiducial cosmological model from a large set of ray-tracing simulations and investigate the impact of non-Gaussianity on cosmological parameter estimation. We investigate how odd the recently reported very low value of \sigma_8 really is, as derived from the Chandra Deep Field South (CDFS) using cosmic shear, by taking the non-Gaussianity of the likelihood into account, as well as the possibility of biases coming from the way the CDFS was selected. We find that the cosmic shear likelihood is significantly non-Gaussian. This leads both to a shift of the maximum of the posterior distribution and to a significantly smaller credible region compared to the Gaussian case. We re-analyse the CDFS cosmic shear data using the non-Gaussian likelihood. Assuming that the CDFS is a random pointing, we find \sigma_8 = 0.68^{+0.09}_{-0.16} for fixed \Omega_m = 0.25. In a WMAP5-like cosmology, a value equal to or lower than this would be expected in about 5% of cases. Taking into account biases arising from the way the CDFS was selected, which we model as being dependent on the number of haloes in the CDFS, we obtain \sigma_8 = 0.71^{+0.10}_{-0.15}. Combining the CDFS data with the parameter constraints from WMAP5 yields \Omega_m = 0.26^{+0.03}_{-0.02} and \sigma_8 = 0.79^{+0.04}_{-0.03} for a flat universe. Comment: 18 pages, 16 figures, accepted for publication in A&A; new Bayesian treatment of field selection bias
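    The "how odd is this field?" question above reduces to a tail probability: given σ₈ estimates from many simulated random pointings, the oddness of the observed value is the fraction of simulations at or below it. The Gaussian ensemble below is an illustrative stand-in tuned so the tail fraction lands near the quoted ≈5%, not the paper's ray-tracing ensemble.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    # Assumed distribution of sigma_8 estimates over random pointings in a
    # WMAP5-like cosmology (mean and scatter are illustrative choices).
    simulated_sigma8 = rng.normal(0.8, 0.07, size=100_000)

    observed = 0.68  # the CDFS value quoted in the abstract
    # Fraction of random pointings with a value at or below the observed one.
    p_lower = np.mean(simulated_sigma8 <= observed)
    ```

    With the paper's non-Gaussian, simulation-based likelihood, the same fraction is read off the empirical distribution instead of a Gaussian fit.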