48 research outputs found

    Theory of minimum variance estimation with applications

    Get PDF

    Problems related to efficacy measurement and analyses

    Get PDF
    In clinical research it is very common to compare two treatments on the basis of an efficacy variable. More specifically, if X and Y denote the responses of patients on the two treatments A and B, respectively, the quantity P(Y > X) (which can be called the probabilistic index for the effect size) is of interest in clinical statistics. The objective of this study is to derive an efficacy measure that compares two treatments more informatively and objectively than earlier approaches. Kernel density estimation is a useful nonparametric method that has not been well utilized as an applied statistical tool, mainly because of its computational complexity. The current study shows that this method is robust even under the correlation structures that arise during the computation of all possible differences. Kernel methods can be applied to the estimation of the ROC (Receiver Operating Characteristic) curve as well as to nonparametric ROC regression. The area under the ROC curve (AUC), which is exactly equal to the quantity P(Y > X), is also explored in this dissertation. The methodology used for this study generalizes easily to other areas of application.
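As a sketch of the pairwise-difference idea above, P(Y > X) can be estimated nonparametrically as the fraction of all n·m pairs in which the treatment-B response exceeds the treatment-A response, which equals the empirical AUC (the Mann-Whitney statistic divided by n·m). The data below are simulated for illustration only and are not from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical patient responses on treatments A and B (illustrative data)
x = rng.normal(0.0, 1.0, size=200)   # treatment A
y = rng.normal(0.5, 1.0, size=200)   # treatment B

# P(Y > X) estimated from all n*m pairwise comparisons; this is exactly
# the area under the empirical ROC curve (AUC).
auc = np.mean(y[:, None] > x[None, :])
print(auc)
```

With these simulated samples the true value is Phi(0.5/sqrt(2)) ~ 0.64, so the estimate should land near that; a kernel-smoothed version would replace the indicator with a smooth kernel CDF.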

    On Bayesian Shrinkage Setup for Item Failure Data Under a Family of Life Testing Distribution

    Get PDF
    Properties of the Bayes shrinkage estimator for the parameter of a family of probability density functions are studied when item-failure data are available. Symmetric and asymmetric loss functions are considered under two different prior distributions. In addition, the Bayes estimates of the reliability function and hazard rate are obtained and their properties are studied.
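To illustrate the general shape of a shrinkage estimator for life-testing data, the sketch below pulls the maximum likelihood estimate of an exponential failure rate toward a prior guess. The exponential member of the family, the prior guess theta0, and the shrinkage weight k are all illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical item-failure times from an exponential life distribution
# with rate theta_true; theta0 is a prior point guess (both illustrative).
theta_true, theta0, n = 2.0, 1.5, 30
t = rng.exponential(1.0 / theta_true, size=n)

# Classical MLE of the rate parameter
theta_mle = n / t.sum()

# A generic shrinkage estimator: a convex combination of the prior guess
# and the MLE (the weight k here is arbitrary, chosen for illustration).
k = 0.3
theta_shrink = k * theta0 + (1 - k) * theta_mle
print(theta_mle, theta_shrink)
```

The Bayes shrinkage estimators studied in the paper derive the weight from the prior and the loss function rather than fixing it by hand.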

    A Comparison of Parametric and Coarsened Bayesian Interval Estimation in the Presence of a Known Mean-Variance Relationship

    Get PDF
    While the use of Bayesian methods of analysis has become increasingly common, classical frequentist hypothesis testing still holds sway in medical research, especially clinical trials. One major difference between a standard frequentist approach and the most common Bayesian approaches is that even when a frequentist hypothesis test is derived from parametric models, the interpretation and operating characteristics of the test may be considered in a distribution-free manner. Bayesian inference, on the other hand, is often conducted in a parametric setting where the interpretation of the results depends on the parametric model. Here we consider a Bayesian counterpart to the most standard frequentist approach to inference. Instead of specifying a sampling distribution for the data, we specify an approximate distribution of a summary statistic, thereby resulting in a "coarsening" of the data. This approach is robust in that it provides some protection against model misspecification and allows one to account for the possibility of a specified mean-variance relationship. Notably, the method also allows one to place prior mass directly on the quantity of interest or, alternatively, to employ a noninformative prior, a counterpart to the standard frequentist approach. We explore interval estimation of a population location parameter in the presence of a mean-variance relationship, a problem that is not well addressed by standard nonparametric frequentist methods. We find that the method has performance comparable to the correct parametric model, and performs notably better than some plausible yet incorrect models. Finally, we apply the method to a real data set and compare our results to previously reported ones.
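A minimal sketch of the coarsening idea: rather than modelling the raw data, model only a summary statistic (here the sample mean) through its approximate normal sampling distribution, and combine it with a flat prior. The data-generating choice and the flat prior are illustrative assumptions, not the paper's specific models.

```python
import numpy as np

rng = np.random.default_rng(1)
# Skewed data whose true distribution the analyst does not know
data = rng.exponential(scale=2.0, size=100)

# Coarsen to a summary statistic: approximate the sampling distribution
# of the sample mean as Normal(mu, s^2/n) instead of modelling raw data.
n = data.size
xbar = data.mean()
se = data.std(ddof=1) / np.sqrt(n)

# With a flat (noninformative) prior on mu, the approximate posterior for
# mu is Normal(xbar, se^2); an approximate 95% credible interval is:
lo, hi = xbar - 1.96 * se, xbar + 1.96 * se
print(lo, hi)
```

A specified mean-variance relationship would enter by replacing the plug-in standard error with the variance implied by that relationship at the current mean.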

    Analysis of Repeated Measures Data Under Circular Covariance

    Get PDF
    Circular covariance is important in modelling phenomena in epidemiological, communications, and numerous physical contexts. We introduce and develop a variety of methods which make it a more versatile tool. First, we present two classes of estimators for use in the presence of missing observations. Using simulations, we show that the mean squared errors of the estimators of one of these classes are smaller than those of the Maximum Likelihood (ML) estimators under certain conditions. Next, we propose and discuss a parsimonious, autoregressive type of circular covariance structure which involves only two parameters. We specify ML and other types of estimators of these parameters, and present techniques for selection between various covariance structures related to circular covariance. Finally, we consider estimation assuming that observations on different individuals are correlated in various ways. This model is generalized for use when varying numbers of observations are taken on individuals. In all these contexts, we combine the measurements on individuals with covariates of varying dimensions, and consider estimation of the correlation between the observations and the covariates.
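Under circular covariance, the covariance between two sites depends only on their circular lag min(|i-j|, p-|i-j|), so the covariance matrix is symmetric circulant. The sketch below builds such a matrix with a two-parameter autoregressive-type decay sigma^2 * rho^lag; this particular parametric form is an illustrative assumption in the spirit of, but not necessarily identical to, the structure proposed in the abstract.

```python
import numpy as np

p = 6        # number of equally spaced measurement sites on a circle
sigma2 = 1.0 # variance parameter (illustrative)
rho = 0.4    # decay parameter (illustrative)

# Circular lag between sites i and j: min(|i-j|, p-|i-j|)
diff = np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
lag = np.minimum(diff, p - diff)

# AR-type circular covariance: covariance decays geometrically in the lag
cov = sigma2 * rho ** lag
print(cov)
```

The resulting matrix is circulant, so its eigenvectors are the discrete Fourier basis, which is what makes likelihood computations under circular covariance tractable.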

    Finite mixtures of distributions; the problem of estimating the mixing proportions

    Get PDF
    Constructing estimators for the parameters of a mixture of distributions has attracted many statisticians. Given that the distribution function G of a random variable X is a mixture of known distribution functions with unknown mixing proportions, estimation of the mixing proportions is considered. Different estimation techniques are studied in depth and the properties of the resulting estimators are discussed. The necessary background on mixtures of distributions is first given, and an extension of the method of moments for estimating the mixing proportions is then proposed. The generalized (weighted) least squares method, when the observations are grouped into (m+1) intervals, is considered, and it is shown that the estimators possess certain desired asymptotic properties. A special case is also investigated. Since the set of equations leading to the generalized least squares estimators is not in general solvable in closed form, an iteration process is proposed and shown to produce satisfactory results after even one cycle. Finally, the problem of maximum likelihood estimation of the mixing proportions is considered, and Fisher's scoring method is suggested to solve the likelihood equations. Properties of the first- and second-cycle solutions are derived.
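As a small worked instance of moment-based estimation of a mixing proportion, consider a two-component mixture of known normals: matching the first moment E[X] = pi*mu1 + (1-pi)*mu2 gives a closed-form estimator. The components, sample size, and true proportion below are illustrative assumptions, not from the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)
# Mixture of two KNOWN components, N(0,1) and N(3,1), with unknown
# mixing proportion pi (true value 0.3, hidden from the estimator).
pi_true, n = 0.3, 5000
z = rng.random(n) < pi_true
x = np.where(z, rng.normal(0.0, 1.0, n), rng.normal(3.0, 1.0, n))

# Method of moments: E[X] = pi*mu1 + (1-pi)*mu2, so
#   pi_hat = (mu2 - xbar) / (mu2 - mu1)   with mu1, mu2 known.
mu1, mu2 = 0.0, 3.0
pi_hat = (mu2 - x.mean()) / (mu2 - mu1)
print(pi_hat)
```

With more than two components, one matches as many moments as there are free proportions, which is where the generalized least squares and scoring methods of the thesis take over.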

    Codes and Goals of Neuronal Representations

    Get PDF
    This thesis combines arguments of efficient coding with models and constraints of population coding and population dynamics in order to derive optimal population codes. Starting from the standard model of population coding for the study of optimal tuning widths, diverging conclusions in the literature are resolved by the introduction of a new independent parameter, namely the dynamic range of a tuning function. The difficulty of applying this standard model to neuronal representations of, say, natural images motivates a more exhaustive search for the characteristic features of population codes that are most relevant for coding efficiency. Minimizing the dynamic ranges of the tuning functions turns out to be most important for the maximization of Fisher information. At the same time, however, the optimization of population codes without strong a priori constraints on the shape of tuning functions uncovers severe limitations of Fisher information as a measure of coding efficiency. Direct numerical evaluations of the minimum mean square error are used (for the first time in the literature) to compare the efficiency of characteristic examples of population codes, confirming the advantage of a small dynamic range. The results on optimal population coding in the first part of this thesis are summarized in the proposal of the Bernoulli coding hypothesis. In short, it states that rate coding at physiologically plausible time scales suggests the use of binary coding rather than analog coding. The Bernoulli coding hypothesis is also challenged by criteria other than coding efficiency. In addition to studying the influence of computational constraints on the neuronal readout, the second part of the thesis investigates the robustness of a code and the possibility of faithful signal transmission in spite of the neuronal dynamics. In particular, the latter provides an additional, independent argument for the Bernoulli coding hypothesis.
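For orientation, the standard population-coding model referred to above evaluates a code through the Fisher information of independent Poisson neurons, J(theta) = sum_i f_i'(theta)^2 / f_i(theta), where f_i are the tuning functions. The Gaussian tuning curves and all parameter values below (including the fmin/fmax pair that fixes the dynamic range) are illustrative assumptions, not the thesis's actual settings.

```python
import numpy as np

# Population of Poisson neurons with Gaussian tuning curves over a
# stimulus variable theta; fmin and fmax set the dynamic range.
centers = np.linspace(-2.0, 2.0, 21)   # preferred stimuli (illustrative)
width, fmax, fmin = 0.5, 10.0, 0.1

def f(th):
    """Tuning functions f_i(theta) for all neurons at stimulus th."""
    return fmin + (fmax - fmin) * np.exp(-0.5 * ((th - centers) / width) ** 2)

def fprime(th):
    """Derivatives f_i'(theta) of the Gaussian tuning functions."""
    g = np.exp(-0.5 * ((th - centers) / width) ** 2)
    return (fmax - fmin) * g * (-(th - centers) / width ** 2)

# Fisher information of the population for independent Poisson spike counts
theta = 0.0
J = np.sum(fprime(theta) ** 2 / f(theta))
print(J)
```

The thesis's point is precisely that J alone can be misleading, which is why it turns to direct evaluation of the minimum mean square error.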

    Efficient estimation of choice-based sample methods with the method of moments

    Get PDF

    Estimation; Choice Theory; Mathematical Statistics

    Statistical Foundations of Actuarial Learning and its Applications

    Get PDF
    This open access book discusses the statistical modeling of insurance problems, a process which comprises data collection, data analysis, and statistical model building to forecast insured events that may happen in the future. It presents the mathematical foundations behind these fundamental statistical concepts and how they can be applied in daily actuarial practice. Statistical modeling has a wide range of applications, and, depending on the application, the theoretical aspects may be weighted differently: here the main focus is on prediction rather than explanation. Starting with a presentation of state-of-the-art actuarial models, such as generalized linear models, the book then dives into modern machine learning tools such as neural networks and text recognition to improve predictive modeling with complex features. Providing practitioners with detailed guidance on how to apply machine learning methods to real-world data sets, and how to interpret the results without losing sight of the mathematical assumptions on which these methods are based, the book can serve as a modern basis for an actuarial education syllabus.
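The generalized linear models mentioned above are the workhorse of actuarial pricing; a canonical example is a Poisson GLM with log link for claim counts. The sketch below fits one by Newton-Raphson (equivalently, iteratively reweighted least squares) on simulated data; the covariate, sample size, and coefficients are illustrative, not taken from the book.

```python
import numpy as np

rng = np.random.default_rng(3)
# Simulated claim-count data: one standardized covariate (illustrative)
n = 1000
xcov = rng.normal(size=n)
X = np.column_stack([np.ones(n), xcov])      # design matrix with intercept
beta_true = np.array([-0.5, 0.3])
y = rng.poisson(np.exp(X @ beta_true))       # Poisson counts, log link

# Fit the Poisson GLM by Newton-Raphson on the log-likelihood:
#   gradient  = X' (y - mu)
#   Fisher info = X' diag(mu) X
beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta)
    grad = X.T @ (y - mu)
    info = X.T @ (X * mu[:, None])
    beta = beta + np.linalg.solve(info, grad)

print(beta)
```

In practice an exposure offset is added for policies observed over different durations; the book's machine-learning chapters then replace the linear predictor with richer function classes while keeping the same distributional assumptions.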