
    A Maximum Entropy Procedure to Solve Likelihood Equations

    In this article, we provide initial findings on the problem of solving likelihood equations by means of a maximum entropy (ME) approach. Unlike standard procedures, which require setting the score function of the maximum likelihood problem to zero, we propose an alternative strategy in which the score is instead used as an external informative constraint on the maximization of the concave Shannon entropy function. The problem involves reparameterizing the score parameters as expected values of discrete probability distributions whose probabilities need to be estimated. This leads to a simpler situation in which parameters are searched over a smaller (hyper)simplex space. We assessed our proposal by means of empirical case studies and a simulation study, the latter involving the most critical case of logistic regression under data separation. The results suggest that the maximum entropy reformulation of the score problem solves the likelihood equation problem. Likewise, when maximum likelihood estimation is difficult, as in logistic regression under separation, the maximum entropy proposal achieved results (numerically) comparable to those obtained by Firth's bias-corrected approach. Overall, these first findings indicate that a maximum entropy solution can be considered an alternative technique for solving the likelihood equations.
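    As a rough illustration of the general idea described above (written with generic symbols, not the authors' exact notation), the score equation is no longer solved by root finding but enters as a constraint on an entropy maximization over a probability simplex:

```latex
% Generic sketch of a maximum-entropy reformulation of a score equation.
% The symbols p_k, z_k, \theta, U are illustrative, not the paper's notation.
\begin{align*}
  \text{standard ML:}\quad & U(\theta) \;=\; \frac{\partial \ell(\theta)}{\partial \theta} \;=\; 0,\\[4pt]
  \text{ME reformulation:}\quad
  & \max_{p_1,\dots,p_K}\; -\sum_{k=1}^{K} p_k \log p_k
    \quad\text{s.t.}\quad
    U\!\Big(\textstyle\sum_{k=1}^{K} p_k z_k\Big) = 0,\;
    \sum_{k=1}^{K} p_k = 1,\;
    p_k \ge 0,
\end{align*}
% The parameter is recovered as the expected value over a fixed support,
% $\hat\theta = \sum_k \hat p_k z_k$, so the search takes place on the
% $(K-1)$-dimensional simplex rather than in the original parameter space.
```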

    The Calculus of M-estimation in R with geex

    M-estimation, or estimating-equation, methods are widely applicable for point estimation and asymptotic inference. In this paper, we present an R package that can find roots and compute the empirical sandwich variance estimator for any set of user-specified, unbiased estimating equations. Examples from the M-estimation primer by Stefanski and Boos (2002) demonstrate use of the software. The package also includes a framework for finite-sample variance corrections and a website with an extensive collection of tutorials.
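    Below is a minimal sketch of the kind of estimating-equation specification the abstract describes, using the classic mean/variance example in the style of the Stefanski and Boos (2002) primer. The m_estimate()/setup_root_control() calls follow geex's documented interface, but the specific arguments and data here are illustrative rather than taken from the paper.

```r
# Minimal sketch: M-estimation of a mean and variance with geex.
library(geex)

# Estimating functions for theta = (mean, variance):
# psi_1 = Y - theta_1,  psi_2 = (Y - theta_1)^2 - theta_2
mean_var_estFUN <- function(data) {
  Y <- data$Y
  function(theta) {
    c(Y - theta[1],
      (Y - theta[1])^2 - theta[2])
  }
}

set.seed(1)
dat <- data.frame(Y = rnorm(100, mean = 5, sd = 2))  # toy data (assumed)

results <- m_estimate(
  estFUN       = mean_var_estFUN,
  data         = dat,
  root_control = setup_root_control(start = c(0, 1))
)

coef(results)  # roots of the estimating equations (point estimates)
vcov(results)  # empirical sandwich variance estimator
```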

    FDD massive MIMO channel spatial covariance conversion using projection methods

    Knowledge of the second-order statistics of channels (e.g., in the form of covariance matrices) is crucial for the acquisition of downlink channel state information (CSI) in massive MIMO systems operating in frequency division duplexing (FDD) mode. Current MIMO systems usually obtain downlink covariance information via feedback of the estimated covariance matrix from the user equipment (UE), but in the massive MIMO regime this approach is infeasible because of the unacceptably high training overhead. This paper instead considers the problem of estimating the downlink channel covariance from uplink measurements. We propose two variants of an algorithm based on projection methods in an infinite-dimensional Hilbert space that exploit channel reciprocity properties in the angular domain. The proposed schemes are evaluated via Monte Carlo simulations and are shown to outperform current state-of-the-art solutions in terms of accuracy and complexity for typical array geometries and duplex gaps.
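    A common way to see why uplink measurements carry downlink covariance information is the angular-domain channel model sketched below (generic notation, not necessarily the one used in the paper): the angular power spectrum is essentially frequency-invariant across the duplex gap, while only the array response changes with the carrier frequency.

```latex
% Illustrative angular-domain covariance model (generic notation).
% The angular power spectrum \rho(\theta) is assumed frequency-invariant,
% so it can be inferred from uplink observations and re-used at the
% downlink carrier to reconstruct the downlink covariance.
\begin{align*}
  \mathbf{R}_{\mathrm{ul}} &= \int \rho(\theta)\,
      \mathbf{a}_{\mathrm{ul}}(\theta)\,\mathbf{a}_{\mathrm{ul}}^{\mathsf{H}}(\theta)\, d\theta,
  &
  \mathbf{R}_{\mathrm{dl}} &= \int \rho(\theta)\,
      \mathbf{a}_{\mathrm{dl}}(\theta)\,\mathbf{a}_{\mathrm{dl}}^{\mathsf{H}}(\theta)\, d\theta,
\end{align*}
% where \mathbf{a}_{\mathrm{ul}}(\theta) and \mathbf{a}_{\mathrm{dl}}(\theta)
% are the array response vectors at the uplink and downlink frequencies.
```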

    Optimal solution error covariance in highly nonlinear problems of variational data assimilation

    The problem of variational data assimilation for a nonlinear evolution model is formulated as an optimal control problem (see, e.g., [1]) to find the initial condition, boundary conditions or model parameters. The input data contain observation and background errors, hence there is an error in the optimal solution. For mildly nonlinear dynamics, the covariance matrix of the optimal solution error can be approximated by the inverse Hessian of the cost functional of an auxiliary data assimilation problem [2, 3]. The relationship between the optimal solution error covariance matrix and the Hessian of the auxiliary control problem is discussed for different degrees of validity of the tangent linear hypothesis. For problems with strongly nonlinear dynamics, a new statistical method based on the computation of a sample of inverse Hessians is suggested. This method relies on the efficient computation of the inverse Hessian by means of iterative methods (Lanczos and quasi-Newton BFGS) with preconditioning. The method allows us to obtain a sensible approximation of the posterior covariance matrix with a small sample size. Numerical examples are presented for a model governed by the Burgers equation with a nonlinear viscous term. The first author acknowledges the funding through the project 09-01-00284 of the Russian Foundation for Basic Research and the FCP program "Kadry".
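    In generic notation (illustrative symbols, not the authors' own), the inverse-Hessian approximation referred to above takes the familiar variational form below; it is exact under the tangent linear hypothesis, and for strongly nonlinear dynamics the abstract proposes averaging over a sample of inverse Hessians instead of relying on a single one.

```latex
% Illustrative notation: V_b and V_o are background and observation error
% covariances, H' is the tangent linear observation operator (including the
% linearized model dynamics), and \delta u is the optimal solution error.
\begin{equation*}
  \mathbb{E}\big[\delta u\, \delta u^{\mathsf{T}}\big] \;\approx\; \mathcal{H}^{-1},
  \qquad
  \mathcal{H} \;=\; V_b^{-1} \;+\; (H')^{\mathsf{T}} V_o^{-1} H'.
\end{equation*}
```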