
    The β-model—maximum likelihood, Cramér–Rao bounds, and hypothesis testing

    We study the maximum-likelihood estimator in a setting where the dependent variable is a random graph and covariates are available on a graph level. The model generalizes the well-known β-model for random graphs by replacing the constant model parameters with regression functions. Cramér–Rao bounds are derived for special cases of the undirected β-model, the directed β-model, and the covariate-based β-model. The corresponding maximum-likelihood estimators are compared with the bounds by means of simulations. Moreover, examples are given of how to use the presented maximum-likelihood estimators to test for directionality and significance. Finally, the applicability of the model is demonstrated using temporal social network data describing communication among healthcare workers.
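
    For orientation, a common formulation of the undirected β-model (the special case without covariates) treats the edges of a graph on n nodes as independent Bernoulli variables with

        P(A_{ij} = 1) = \frac{\exp(\beta_i + \beta_j)}{1 + \exp(\beta_i + \beta_j)}, \qquad 1 \le i < j \le n,

    so that the log-likelihood is

        \ell(\beta) = \sum_{i} d_i \beta_i - \sum_{i<j} \log\bigl(1 + \exp(\beta_i + \beta_j)\bigr),

    where d_i is the degree of node i. In the covariate-based variant summarized above, the constants β_i are replaced by regression functions of graph-level covariates; the exact parameterization used in the paper may differ from this sketch.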

    Performance Bounds for Parameter Estimation under Misspecified Models: Fundamental findings and applications

    Inferring information from a set of acquired data is the main objective of any signal processing (SP) method. In particular, the common problem of estimating the value of a vector of parameters from a set of noisy measurements is at the core of a plethora of scientific and technological advances in recent decades; for example, wireless communications, radar and sonar, biomedicine, image processing, and seismology, just to name a few. Developing an estimation algorithm often begins by assuming a statistical model for the measured data, i.e., a probability density function (pdf) which, if correct, fully characterizes the behaviour of the collected data/measurements. Experience with real data, however, often exposes the limitations of any assumed data model, since modelling errors at some level are always present. Consequently, the true data model and the model assumed to derive the estimation algorithm could differ. When this happens, the model is said to be mismatched or misspecified. Therefore, understanding the possible performance loss or regret that an estimation algorithm could experience under model misspecification is of crucial importance for any SP practitioner. Further, understanding the limits on the performance of any estimator subject to model misspecification is of practical interest. Motivated by the widespread and practical need to assess the performance of a mismatched estimator, the goal of this paper is, first, to bring attention to the main theoretical findings in estimation theory, and in particular on lower bounds under model misspecification, published in the statistical and econometric literature over the last fifty years. Second, some applications are discussed to illustrate the broad range of areas and problems to which this framework extends, and consequently the numerous opportunities available for SP researchers. Comment: To appear in the IEEE Signal Processing Magazine.
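
    As background, the central object in this line of work is the misspecified Cramér–Rao bound in its "sandwich" form. With data drawn from a true pdf p(x) while the estimator is built from an assumed pdf f(x; θ), and with θ₀ the pseudo-true parameter minimizing the Kullback–Leibler divergence between p and f(·; θ), the covariance of a (suitably defined) misspecified-unbiased estimator satisfies, under regularity conditions,

        \mathrm{Cov}_p(\hat{\theta}) \succeq A(\theta_0)^{-1} B(\theta_0) A(\theta_0)^{-1},

        A(\theta_0) = \mathbb{E}_p\!\left[\nabla_\theta \nabla_\theta^{\mathsf T} \ln f(x;\theta)\right]\big|_{\theta_0},
        \qquad
        B(\theta_0) = \mathbb{E}_p\!\left[\nabla_\theta \ln f(x;\theta)\,\nabla_\theta^{\mathsf T} \ln f(x;\theta)\right]\big|_{\theta_0}.

    When the model is correctly specified, A(θ₀) = −B(θ₀) and the expression reduces to the ordinary Cramér–Rao bound. This is only a schematic statement; the paper should be consulted for the precise unbiasedness and regularity conditions.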

    Maximum Fidelity

    The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data?, (2) Goodness-of-fit: How concordant is this distribution with the observed data?, and (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions is presented, called "maximum fidelity". Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversion to absolute model concordance (p value). Fidelity maximization allows identification of the most concordant model distribution, generating a method for parameter estimation, with neighboring, less concordant distributions providing the "uncertainty" in this estimate. Maximum fidelity provides an optimal approach for parameter estimation (superior to maximum likelihood) and a generally optimal approach for goodness-of-fit assessment of arbitrary models applied to univariate data. Extensions to binary data, binned data, multidimensional data, and classical parametric and nonparametric statistical tests are described. Maximum fidelity provides a philosophically consistent, robust, and seemingly optimal foundation for statistical inference. All findings are presented in an elementary way to be immediately accessible to all researchers utilizing statistical analysis. Comment: 66 pages, 32 figures, 7 tables, submitted.
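
    The fidelity statistic itself is defined in the paper; as a generic illustration of the broader idea of CDF-based model concordance, the Python sketch below scores a candidate distribution by mapping the data through its cumulative distribution function and testing the result for uniformity (a probability-integral-transform check, not the paper's fidelity measure). Function names and parameter handling are illustrative only.

        import numpy as np
        from scipy import stats

        def cdf_concordance(data, dist, params):
            """Rough CDF-based concordance score for a candidate distribution.

            If `dist` with `params` were the true generating distribution, the
            transformed values u = F(x) would be uniform on [0, 1]; a KS test
            against the uniform then gives a crude concordance p-value. This
            ignores the bias from fitting `params` on the same data.
            """
            u = dist.cdf(np.asarray(data), *params)
            return stats.kstest(u, "uniform").pvalue

        # Toy usage: how concordant is a fitted normal with exponential data?
        rng = np.random.default_rng(1)
        x = rng.exponential(size=200)
        print(cdf_concordance(x, stats.norm, stats.norm.fit(x)))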

    Uncertainty and trade-offs in quantum multiparameter estimation

    Uncertainty relations in quantum mechanics express bounds on our ability to simultaneously obtain knowledge about expectation values of non-commuting observables of a quantum system. They quantify trade-offs in accuracy between complementary pieces of information about the system. In quantum multiparameter estimation, such trade-offs occur for the precision achievable for different parameters characterizing a density matrix: an uncertainty relation emerges between the achievable variances of the different estimators. This is in contrast to classical multiparameter estimation, where simultaneous optimal precision is attainable in the asymptotic limit. We study trade-off relations that follow from known tight bounds in quantum multiparameter estimation. We compute trade-off curves and surfaces from Cramér–Rao-type bounds, which provide a compelling graphical representation of the information encoded in such bounds, and argue that bounds on simultaneously achievable precision in quantum multiparameter estimation should be regarded as measurement uncertainty relations. From the state-dependent bounds on the expected cost in parameter estimation, we derive a state-independent uncertainty relation between the parameters of a qubit system.
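
    For reference, the most frequently quoted bound of this kind is the multiparameter quantum Cramér–Rao bound: for a parametrized state ρ_θ and N independent repetitions, any locally unbiased estimator satisfies

        \mathrm{Cov}(\hat{\theta}) \succeq \frac{1}{N} F_Q(\theta)^{-1},
        \qquad
        [F_Q(\theta)]_{ij} = \tfrac{1}{2}\,\mathrm{Tr}\!\left[\rho_\theta \{L_i, L_j\}\right],

    where the symmetric logarithmic derivatives L_i are defined by ∂_i ρ_θ = ½(L_i ρ_θ + ρ_θ L_i). Unlike the single-parameter case, this matrix bound is in general not attainable when the measurements optimal for different parameters are incompatible (roughly, when Tr[ρ_θ [L_i, L_j]] ≠ 0), which is the origin of the trade-offs discussed above; tighter bounds, such as the Holevo bound, exist for exactly this reason. These are standard facts, not a summary of the paper's specific results.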

    Estimation in the group action channel

    We analyze the problem of estimating a signal from multiple measurements on a group action channel that linearly transforms a signal by a random group action followed by a fixed projection and additive Gaussian noise. This channel is motivated by applications such as multi-reference alignment and cryo-electron microscopy. We focus on the large noise regime prevalent in these applications. We give a lower bound on the mean square error (MSE) of any asymptotically unbiased estimator of the signal's orbit in terms of the signal's moment tensors, which implies that the MSE is bounded away from 0 when $N/\sigma^{2d}$ is bounded from above, where $N$ is the number of observations, $\sigma$ is the noise standard deviation, and $d$ is the so-called moment order cutoff. In contrast, the maximum likelihood estimator is shown to be consistent if $N/\sigma^{2d}$ diverges. Comment: 5 pages, conference.
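
    Concretely, the channel described here can be written (in notation common to the multi-reference alignment and cryo-EM literature; the symbols are generic rather than the paper's) as

        y_i = \Pi\,(g_i \cdot x) + \varepsilon_i, \qquad \varepsilon_i \sim \mathcal{N}(0, \sigma^2 I), \quad i = 1, \dots, N,

    where x is the unknown signal, the g_i are i.i.d. random elements of a compact group acting linearly on the signal space (cyclic shifts in multi-reference alignment, rotations in SO(3) in cryo-EM), and Π is a fixed linear projection (the identity in plain alignment, a tomographic projection in cryo-EM). Since the g_i are unobserved, only the orbit of x under the group action is identifiable, which is why the lower bound is stated for estimators of the orbit.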

    Symmetric Normal Mixture GARCH

    Normal mixture (NM) GARCH models are better able to account for leptokurtosis in financial data and offer a more intuitive and tractable framework for risk analysis and option pricing than Student's t-GARCH models. We present a general, symmetric parameterisation for NM-GARCH(1,1) models, derive the analytic derivatives for the maximum likelihood estimation of the model parameters and their standard errors, and compute the moments of the error term. We also formulate specific conditions on the model parameters to ensure positive, finite conditional and unconditional second and fourth moments. Simulations quantify the potential bias and inefficiency of parameter estimates as a function of the mixing law. We show that there is a serious bias in parameter estimates for volatility components having very low weight in the mixing law. An empirical application uses moment specification tests and information criteria to determine the optimal number of normal densities in the mixture. For daily returns on three US Dollar foreign exchange rates (British pound, euro and Japanese yen) we find that, whilst normal GARCH(1,1) models fail the moment tests, a simple mixture of two normal densities is sufficient to capture the conditional excess kurtosis in the data. According to our chosen criteria, and given our simulation results, we conclude that a two-regime symmetric NM-GARCH model, which quantifies volatility corresponding to 'normal' and 'exceptional' market circumstances, is optimal for these exchange rate data. Keywords: volatility regimes, conditional excess kurtosis, normal mixture, heavy tails, exchange rates, conditional heteroscedasticity, GARCH models.
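
    For readers who want to experiment, the Python sketch below simulates a two-component symmetric NM-GARCH(1,1) process of the kind analysed here: each shock is drawn from one of two zero-mean normal components, and each component carries its own GARCH(1,1) variance recursion driven by the common shock. Parameter values and the initialisation are illustrative, not taken from the paper.

        import numpy as np

        def simulate_nm_garch(T, p=(0.9, 0.1),
                              omega=(0.01, 0.05), alpha=(0.05, 0.15), beta=(0.90, 0.80),
                              seed=0):
            """Simulate a symmetric two-component normal-mixture GARCH(1,1) series."""
            rng = np.random.default_rng(seed)
            omega, alpha, beta = map(np.asarray, (omega, alpha, beta))
            # Initialise each component near its long-run variance (an approximation).
            h = omega / (1.0 - alpha - beta)
            eps = np.empty(T)
            for t in range(T):
                k = rng.choice(len(p), p=p)                  # pick a mixture component
                eps[t] = rng.normal(0.0, np.sqrt(h[k]))      # zero-mean shock from that component
                h = omega + alpha * eps[t] ** 2 + beta * h   # update all component variances
            return eps

        r = simulate_nm_garch(5000)
        kurtosis = ((r - r.mean()) ** 4).mean() / r.var() ** 2
        print(r.std(), kurtosis)   # kurtosis well above 3 reflects the conditional excess kurtosis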

    Near-Field Positioning and Attitude Sensing Based on Electromagnetic Propagation Modeling

    Positioning and sensing over wireless networks are imperative for many emerging applications. However, traditional wireless channel models cannot be used for sensing the attitude of the user equipment (UE), since they over-simplify the UE as a point target. In this paper, a comprehensive electromagnetic propagation modeling (EPM) framework based on electromagnetic theory is developed to precisely model the near-field channel. For the noise-free case, the EPM model establishes the non-linear functional dependence of the observed signals on both the position and attitude of the UE. To address the difficulty posed by this non-linear coupling, we first propose dividing the distance domain into three regions, separated by the defined phase ambiguity distance and spacing constraint distance. Then, for each region, we obtain closed-form solutions for joint position and attitude estimation with low complexity. Next, to investigate the impact of random noise on the joint estimation performance, the Ziv–Zakai bound (ZZB) is derived to yield useful insights. The expected Cramér–Rao bound (ECRB) is further provided to obtain simplified closed-form expressions for the performance lower bounds. Our numerical results demonstrate that the derived ZZB provides accurate predictions of estimator performance in all signal-to-noise ratio (SNR) regimes. More importantly, we achieve millimeter-level accuracy in position estimation and 0.1-level accuracy in attitude estimation. Comment: 16 pages, 9 figures. Submitted to JSAC - Special Issue on Positioning and Sensing Over Wireless Networks.
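
    As a reminder of the tool being used, the Ziv–Zakai bound for a scalar random parameter θ with prior density p(θ) reads, in its standard valley-filled form,

        \mathbb{E}\bigl[(\hat{\theta} - \theta)^2\bigr] \;\ge\; \frac{1}{2} \int_0^{\infty} h \, \mathcal{V}\!\left\{ \int_{-\infty}^{\infty} \bigl(p(\varphi) + p(\varphi + h)\bigr)\, P_{\min}(\varphi, \varphi + h)\, \mathrm{d}\varphi \right\} \mathrm{d}h,

    where P_min(φ, φ + h) is the minimum error probability of the binary hypothesis test between θ = φ and θ = φ + h (with priors proportional to p(φ) and p(φ + h)) and 𝒱 denotes the valley-filling operation. The paper derives a vector-parameter version of such a bound for the joint position and attitude of the UE; the scalar form above is shown only to indicate the structure.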

    Characterization of minimal estimation performance for non-standard observation models (Caractérisation des performances minimales d'estimation pour des modèles d'observations non-standards)

    In the context of parametric estimation, estimator performance can be characterized, inter alia, by the mean square error (MSE) and the resolution limit. The first quantifies the accuracy of the estimated values and the second defines the ability of the estimator to correctly resolve distinct parameters. This thesis deals first with the prediction of the "optimal" MSE using lower bounds in the hybrid estimation context (i.e., when the parameter vector contains both random and non-random parameters), second with the extension of Cramér–Rao bounds to non-standard estimation problems, and finally with the characterization of estimator resolution. The manuscript is divided into three parts. First, we fill some gaps in the hybrid lower bounds on the MSE by using two existing Bayesian lower bounds: the Weiss–Weinstein bound and a particular form of the Ziv–Zakai family of lower bounds. We show that these extended lower bounds are tighter than the existing hybrid lower bounds for predicting the optimal MSE. Second, we extend Cramér–Rao lower bounds to less common estimation contexts, namely: (i) when the non-random parameters are subject to equality constraints (linear or nonlinear); (ii) for discrete-time filtering problems in which the state evolution is governed by a Markov chain; and (iii) when the assumed observation model differs from the true data distribution. Finally, we study the resolution of estimators when their probability distributions are known. This approach extends the work of Oh and Kashyap and of Clark to multidimensional parameter estimation problems.
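
    To make item (i) concrete, the constrained Cramér–Rao bound used in this line of work typically takes the following form (Gorman and Hero; Stoica and Ng). For deterministic parameters θ subject to differentiable equality constraints g(θ) = 0, let G(θ) = ∂g(θ)/∂θᵀ be the constraint Jacobian and U(θ) a matrix whose columns span the null space of G(θ), so that G(θ)U(θ) = 0. Then, for any unbiased estimator satisfying the constraints and under the usual regularity conditions,

        \mathrm{Cov}(\hat{\theta}) \succeq U \bigl(U^{\mathsf T} F(\theta)\, U\bigr)^{-1} U^{\mathsf T},

    where F(θ) is the unconstrained Fisher information matrix and UᵀF(θ)U is assumed nonsingular. The thesis should be consulted for its exact assumptions and for the extensions to the filtering and misspecified-model settings in items (ii) and (iii).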