
    Normalized Power Prior Bayesian Analysis

    The elicitation of power prior distributions is based on the availability of historical data and is realized by raising the likelihood function of the historical data to a fractional power. However, an arbitrary positive constant placed before the likelihood function of the historical data can change the inferential results when the original power prior is used. This raises the question of which likelihood function should be used: the one based on the raw data or the one based on a sufficient statistic. We propose a normalized power prior that better utilizes the power parameter in quantifying the heterogeneity between current and historical data. Furthermore, when the power parameter is random, the optimality of the normalized power prior is shown in the sense of maximizing Shannon's mutual information. Some comparisons between the original and the normalized power prior approaches are made, and a water-quality monitoring data set is used to show that the normalized power prior is more sensible.
    Keywords: Bayesian analysis, historical data, normalized power prior, power prior, prior elicitation, Shannon's mutual information.
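    As a sketch of the contrast described above (notation chosen here, not taken from the abstract), write L(θ | D0) for the historical-data likelihood, π0(θ) for the initial prior, and π(δ) for the prior on the power parameter δ. The original joint power prior is

        \pi(\theta, \delta \mid D_0) \;\propto\; L(\theta \mid D_0)^{\delta}\, \pi_0(\theta)\, \pi(\delta),

    whereas the normalized power prior divides by the marginal over θ for each value of δ:

        \pi_{\mathrm{NPP}}(\theta, \delta \mid D_0) \;=\; \frac{L(\theta \mid D_0)^{\delta}\, \pi_0(\theta)}{\int L(\theta \mid D_0)^{\delta}\, \pi_0(\theta)\, d\theta}\; \pi(\delta).

    Multiplying the likelihood by an arbitrary constant c > 0 contributes a factor c^δ that does not cancel in the first form when δ is random, but cancels in the second, which is the sensitivity the abstract refers to.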

    Objective Bayesian analysis for the generalized exponential distribution

    In this paper, we consider objective Bayesian inference for the generalized exponential distribution using the independence Jeffreys prior, and we validate the propriety of the posterior distribution under a family of structured priors. We propose an efficient sampling algorithm via the generalized ratio-of-uniforms method to draw samples for posterior inference. We carry out simulation studies to assess the finite-sample performance of the proposed Bayesian approach. Finally, a real-data application is provided for illustrative purposes. Comment: 13 pages, 5 figures, 2 tables.
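    To make the sampling idea concrete, here is a minimal sketch of the basic ratio-of-uniforms rejection sampler applied to the generalized exponential density f(x; α, λ) = αλ(1 − e^{−λx})^{α−1} e^{−λx}. The paper's generalized ratio-of-uniforms algorithm for the posterior of (α, λ) is more elaborate; the function names, grid-based bounding box, and parameter values below are illustrative assumptions.

```python
import numpy as np

def ge_pdf(x, alpha, lam):
    """Generalized exponential density: alpha*lam*(1 - exp(-lam*x))**(alpha - 1) * exp(-lam*x), x > 0."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    pos = x > 0
    z = np.exp(-lam * x[pos])
    out[pos] = alpha * lam * (1.0 - z) ** (alpha - 1.0) * z
    return out

def rou_sample(pdf, n, x_max=50.0, grid=100_000, seed=None):
    """Basic ratio-of-uniforms rejection sampler for a density on (0, x_max).

    The acceptance region is {(u, v): 0 < u <= sqrt(pdf(v/u))}; its bounding box
    is found numerically on a grid, and accepted draws x = v/u follow pdf.
    """
    rng = np.random.default_rng(seed)
    xs = np.linspace(1e-9, x_max, grid)
    root_f = np.sqrt(pdf(xs))
    u_max, v_max = root_f.max(), (xs * root_f).max()   # v_min = 0 on a positive support
    samples = []
    while len(samples) < n:
        u = rng.uniform(0.0, u_max, size=n)
        v = rng.uniform(0.0, v_max, size=n)
        ok = u > 0
        x = v[ok] / u[ok]
        keep = u[ok] ** 2 <= pdf(x)
        samples.extend(x[keep].tolist())
    return np.array(samples[:n])

draws = rou_sample(lambda x: ge_pdf(x, alpha=2.0, lam=1.5), n=5_000, seed=1)
print(draws.mean(), draws.std())
```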

    PPG-based Heart Rate Estimation with Efficient Sensor Sampling and Learning Models

    Recent studies have shown that photoplethysmography (PPG) sensors embedded in wearable devices can estimate heart rate (HR) with high accuracy. However, despite prior research efforts, applying PPG-based HR estimation on embedded devices still faces challenges due to the energy-intensive high-frequency PPG sampling and the resource-intensive machine-learning models. In this work, we aim to explore HR estimation techniques that are more suitable for low-power, resource-constrained embedded devices. More specifically, we seek to design techniques that provide high-accuracy HR estimation with low-frequency PPG sampling, small model size, and fast inference time. First, we show that by combining signal processing and ML, it is possible to reduce the PPG sampling frequency from 125 Hz to only 25 Hz while achieving higher HR estimation accuracy. This combination also reduces the ML model feature size, leading to smaller models. Additionally, we present a comprehensive analysis of different ML models and feature sizes, comparing their accuracy, model size, and inference time. The models explored include decision trees (DT), random forests (RF), K-nearest neighbors (KNN), support vector machines (SVM), and multi-layer perceptrons (MLP). Experiments were conducted using both a widely used dataset and our self-collected dataset. The experimental results show that our method combining signal processing and ML achieved only 5% error for HR estimation using low-frequency PPG data. Moreover, our analysis showed that DT models with 10 to 20 input features usually achieve good accuracy while being several orders of magnitude smaller in model size and faster in inference time.
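    The sketch below illustrates the general "signal processing + small ML model" idea at 25 Hz: band-pass filtering, a handful of spectral features from the plausible HR band, and a shallow decision tree. The feature pipeline, synthetic training windows, and parameter choices are assumptions for illustration, not the authors' pipeline or dataset.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.tree import DecisionTreeRegressor

FS = 25          # low-frequency PPG sampling rate (Hz)
WIN_SEC = 8      # window length in seconds
N_PEAKS = 8      # spectral peaks kept -> 16 input features

def hr_band_features(ppg_window, fs=FS, n_peaks=N_PEAKS):
    """Band-pass the window to the plausible HR band (~42-210 bpm) and return the
    strongest spectral bins (frequency in bpm + normalized magnitude) as features."""
    b, a = butter(2, [0.7, 3.5], btype="band", fs=fs)
    filtered = filtfilt(b, a, ppg_window)
    spectrum = np.abs(np.fft.rfft(filtered * np.hanning(len(filtered))))
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fs)
    band = (freqs >= 0.7) & (freqs <= 3.5)
    mags, f_band = spectrum[band], freqs[band]
    top = np.argsort(mags)[-n_peaks:]
    return np.concatenate([f_band[top] * 60.0, mags[top] / (mags.max() + 1e-12)])

# Synthetic windows stand in for a real PPG dataset in this toy example.
rng = np.random.default_rng(0)
X, y = [], []
for _ in range(500):
    hr = rng.uniform(50, 150)                               # ground-truth HR (bpm)
    t = np.arange(WIN_SEC * FS) / FS
    ppg = np.sin(2 * np.pi * (hr / 60.0) * t) + 0.3 * rng.standard_normal(t.size)
    X.append(hr_band_features(ppg))
    y.append(hr)

model = DecisionTreeRegressor(max_depth=6).fit(np.array(X), np.array(y))
print("predicted:", model.predict(np.array(X[:3])).round(1), "actual:", np.round(y[:3], 1))
```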

    Relations of Change in Plasma Levels of LDL‐C, Non‐HDL‐C and apoB With Risk Reduction From Statin Therapy: A Meta‐Analysis of Randomized Trials

    Background: Identifying the best markers for judging the adequacy of lipid‐lowering treatment is increasingly important for coronary heart disease (CHD) prevention, given that several novel, potent lipid‐lowering therapies are in development. Reductions in LDL‐C, non‐HDL‐C, or apoB can all be used, but which most closely relates to benefit, as defined by the reduction in events on statin treatment, is not established. Methods and Results: We performed random‐effects frequentist and Bayesian meta‐analyses of 7 placebo‐controlled statin trials in which LDL‐C, non‐HDL‐C, and apoB values were available at baseline and at 1‐year follow‐up. Summary‐level data for change in LDL‐C, non‐HDL‐C, and apoB were related to the relative risk reduction from statin therapy in each trial. In the frequentist meta‐analyses, the mean CHD risk reductions (95% CI) per standard deviation decrease in each marker across these 7 trials were 20.1% (15.6%, 24.3%) for LDL‐C, 20.0% (15.2%, 24.7%) for non‐HDL‐C, and 24.4% (19.2%, 29.2%) for apoB. Compared within each trial, the risk reduction per change in apoB averaged 21.6% (12.0%, 31.2%) greater than that per change in LDL‐C (P<0.001) and 24.3% (22.4%, 26.2%) greater than that per change in non‐HDL‐C (P<0.001). Similarly, in Bayesian meta‐analyses using various prior distributions, Bayes factors (BFs) favored the reduction in apoB as more closely related to the risk reduction from statins than either LDL‐C or non‐HDL‐C (BFs ranging from 484 to 2380). Conclusions: Using both frequentist and Bayesian approaches, the relative risk reduction across 7 major placebo‐controlled statin trials was more closely related to reductions in apoB than to reductions in either non‐HDL‐C or LDL‐C.
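    For readers unfamiliar with the frequentist side of such an analysis, the sketch below shows one standard way to pool trial-level effects with a random-effects (DerSimonian-Laird) estimator. The trial-level numbers are placeholders for illustration only, not the summary statistics of the seven trials analyzed in the paper.

```python
import numpy as np

def dersimonian_laird(effects, ses):
    """Random-effects pooling (DerSimonian-Laird) of trial-level effects,
    e.g. log relative risks per 1-SD reduction in a lipid marker."""
    effects, ses = np.asarray(effects, float), np.asarray(ses, float)
    w = 1.0 / ses**2                                  # fixed-effect (inverse-variance) weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)            # Cochran's Q statistic
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)     # between-trial variance
    w_star = 1.0 / (ses**2 + tau2)                    # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    return pooled, np.sqrt(1.0 / np.sum(w_star)), tau2

# Placeholder trial-level summaries (log RR per 1-SD apoB reduction); NOT the published trial values.
log_rr = [-0.25, -0.31, -0.22, -0.28, -0.20, -0.33, -0.27]
se     = [ 0.06,  0.08,  0.05,  0.07,  0.09,  0.06,  0.05]
est, est_se, tau2 = dersimonian_laird(log_rr, se)
print(f"pooled risk reduction: {100 * (1 - np.exp(est)):.1f}% "
      f"(95% CI {100 * (1 - np.exp(est + 1.96 * est_se)):.1f}% to {100 * (1 - np.exp(est - 1.96 * est_se)):.1f}%)")
```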

    Noninformative priors in Bayesian analysis

    The reference priors, introduced by Bernardo (1979) and further developed by Berger and Bernardo (1989a,b,c), are studied in several situations. These include nonlinear regression, sequential problems, and the unbalanced variance components problem. For nonlinear regression problems, there is a long history of difficulties (such as impropriety of the posterior) resulting from common noninformative priors. The new group-ordered reference priors of Berger and Bernardo (1990b) are derived and shown to overcome these difficulties. Bayesian inferences under these priors are compared with each other and with frequentist inference based on the MLE. The results indicate considerable success for the preferred reference prior. In sequential experiments, where a stopping time is used, the Jeffreys noninformative prior for a multidimensional parameter is obtained, as is the reference prior. These noninformative priors depend on the expected stopping time. It is demonstrated that the Jeffreys prior depends on the stopping time in an inappropriate fashion for a multiparameter problem, while the reference prior does not. Some results on the admissibility of the resulting generalized Bayes rules are also developed. Finally, reference priors for the unbalanced variance components problem are derived and studied with respect to risk performance.
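    For reference, the standard definitions behind this comparison (notation chosen here, not taken from the abstract): the Jeffreys prior is π_J(θ) ∝ |I(θ)|^{1/2}, where I(θ) is the Fisher information matrix. For a sequential experiment stopped at time N with per-observation information I_1(θ), the information in the observed data is E_θ[N] I_1(θ), so

        \pi_J(\theta) \;\propto\; \bigl|\, E_\theta[N]\, I_1(\theta) \,\bigr|^{1/2},

    which makes the dependence on the expected stopping time explicit; reference priors are instead constructed by maximizing an asymptotic expected divergence between prior and posterior, one ordered group of parameters at a time.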

    A Bayesian hierarchical approach to dual response surface modelling

    In modern quality engineering, dual response surface methodology is a powerful tool for modelling an industrial process using both the mean and the standard deviation of the measurements as the responses. The least squares method in regression is often used to estimate the coefficients in the mean and standard deviation models, and various decision criteria have been proposed to find the optimal operating conditions. Based on the inherent hierarchical structure of dual response problems, we propose a Bayesian hierarchical approach to modelling dual response surfaces. This approach is compared with two frequentist least squares methods using two real data sets and simulated data.
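    A minimal sketch of one common hierarchical formulation of this idea (notation chosen here; the paper's exact specification may differ): with ȳ_i and s_i the sample mean and standard deviation of n_i replicate measurements at design point x_i,

        \bar{y}_i \mid \beta, \sigma_i \;\sim\; N\!\bigl(x_i^{\top}\beta,\; \sigma_i^{2}/n_i\bigr),
        \qquad
        \log \sigma_i \mid \gamma, \tau^{2} \;\sim\; N\!\bigl(x_i^{\top}\gamma,\; \tau^{2}\bigr),

    with priors placed on β, γ, and τ². The fitted surfaces x^⊤β and x^⊤γ then play the roles of the mean and log-standard-deviation response surfaces, and decision criteria for choosing operating conditions can be applied to their posterior distributions.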