
    Semiparametrically efficient rank-based inference for shape II. Optimal R-estimation of shape

    A class of R-estimators based on the concepts of multivariate signed ranks and the optimal rank-based tests developed in Hallin and Paindaveine [Ann. Statist. 34 (2006)] is proposed for the estimation of the shape matrix of an elliptical distribution. These R-estimators are root-n consistent under any radial density g, without any moment assumptions, and semiparametrically efficient at some prespecified density f. When based on normal scores, they are uniformly more efficient than the traditional normal-theory estimator based on empirical covariance matrices (the asymptotic normality of which, moreover, requires finite moments of order four), irrespective of the actual underlying elliptical density. They rely on an original rank-based version of Le Cam's one-step methodology which avoids the unpleasant nonparametric estimation of cross-information quantities that is generally required in the context of R-estimation. Although they are not strictly affine-equivariant, they are shown to be equivariant in a weak asymptotic sense. Simulations confirm their feasibility and excellent finite-sample performances.
    Comment: Published at http://dx.doi.org/10.1214/009053606000000948 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).

    Seeing the Wood for the Trees: A Critical Evaluation of Methods to Estimate the Parameters of Stochastic Differential Equations. Working paper #2

    Maximum-likelihood estimates of the parameters of stochastic differential equations are consistent and asymptotically efficient, but unfortunately difficult to obtain if a closed-form expression for the transitional probability density function of the process is not available. As a result, a large number of competing estimation procedures have been proposed. This paper provides a critical evaluation of the various estimation techniques. Special attention is given to the ease of implementation and comparative performance of the procedures when estimating the parameters of the Cox-Ingersoll-Ross and Ornstein-Uhlenbeck equations respectively.
    Keywords: stochastic differential equations; parameter estimation; maximum likelihood; simulation; moments
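    For the Ornstein-Uhlenbeck equation mentioned above, the transition density is available in closed form, so exact maximum likelihood reduces to an AR(1) regression. The sketch below illustrates this special case; all parameter values and the simulation setup are invented for the demonstration and are not taken from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    theta, mu, sigma, dt, n = 1.5, 0.0, 0.5, 0.01, 20000

    # Exact discretization of dX = theta*(mu - X) dt + sigma dW:
    # X_{t+dt} = mu + (X_t - mu) * a + eps,  a = exp(-theta*dt),
    # eps ~ N(0, sigma^2 * (1 - a^2) / (2*theta))
    a = np.exp(-theta * dt)
    sd = sigma * np.sqrt((1 - a**2) / (2 * theta))
    x = np.empty(n)
    x[0] = mu
    for t in range(n - 1):
        x[t + 1] = mu + (x[t] - mu) * a + sd * rng.standard_normal()

    # Exact MLE: OLS of x[t+1] on x[t] in the AR(1) representation,
    # then invert the reparametrization.
    X = np.column_stack([np.ones(n - 1), x[:-1]])
    b_hat, a_hat = np.linalg.lstsq(X, x[1:], rcond=None)[0]
    theta_hat = -np.log(a_hat) / dt
    mu_hat = b_hat / (1 - a_hat)
    resid = x[1:] - X @ np.array([b_hat, a_hat])
    sigma_hat = np.sqrt(resid.var() * 2 * theta_hat / (1 - a_hat**2))
    ```

    For processes such as Cox-Ingersoll-Ross, where no such simple regression form exists, the competing approximate procedures surveyed in the paper become necessary.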

    Efficient Regression in Time Series Partial Linear Models

    This paper studies efficient estimation of partial linear regression in time series models. In particular, it combines two topics that have attracted a good deal of attention in econometrics, viz. spectral regression and partial linear regression, and proposes an efficient frequency domain estimator for partial linear models with serially correlated residuals. A nonparametric treatment of regression errors is permitted so that it is not necessary to be explicit about the dynamic specification of the errors other than to assume stationarity. A new concept of weak dependence is introduced based on regularity conditions on the joint density. Under these and some other regularity conditions, it is shown that the spectral estimator is root-n-consistent, asymptotically normal, and asymptotically efficient.
    Keywords: Efficient estimation; Partial linear regression; Spectral regression; Kernel estimation; Nonparametric; Semiparametric; Weak dependence
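    To make the partial linear setup y = x'beta + g(z) + u concrete, the sketch below shows the classical time-domain double-residual (Robinson-type) estimator: partial the unknown g(z) out of both y and x by kernel regression, then run OLS on the residuals. This is a simpler relative of, not the paper's, frequency-domain estimator; the data-generating process and bandwidth are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 2000
    z = rng.uniform(-2, 2, n)
    x = z**2 + rng.standard_normal(n)                 # regressor correlated with z
    y = 2.0 * x + np.sin(np.pi * z) + 0.5 * rng.standard_normal(n)

    def nw(target, z, h=0.2):
        """Nadaraya-Watson kernel regression of target on z (Gaussian kernel)."""
        w = np.exp(-0.5 * ((z[:, None] - z[None, :]) / h) ** 2)
        return (w @ target) / w.sum(axis=1)

    # Partial out the nonparametric component g(z), then OLS on residuals.
    ey = y - nw(y, z)
    ex = x - nw(x, z)
    beta_hat = (ex @ ey) / (ex @ ex)                  # should be close to 2.0
    ```

    The spectral approach of the paper instead transforms to the frequency domain, which allows efficiency gains under serially correlated errors without specifying their dynamics.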

    HOG, LBP and SVM based Traffic Density Estimation at Intersection

    The growing volume of vehicular traffic on roads is a significant problem. Heavy traffic causes congestion, delays, pollution, financial loss, health issues, accidents, blocked emergency vehicles and traffic violations, all of which reduce productivity. In peak hours the problems become even worse. Traditional traffic management and control systems fail to tackle them: traffic lights at intersections are not adaptive and operate on fixed time delays. An optimized control system is needed to improve the efficiency of traffic flow. Smart traffic systems estimate traffic density and adjust the traffic lights according to the amount of traffic. We propose an efficient way to estimate traffic density at an intersection in real time using image processing and machine learning techniques. The proposed method takes pictures of traffic at the junction and estimates the traffic density using a Histogram of Oriented Gradients (HOG), Local Binary Patterns (LBP) and Support Vector Machine (SVM) based approach. The method is computationally inexpensive and runs efficiently on a Raspberry Pi board. Code is released at https://github.com/DevashishPrasad/Smart-Traffic-Junction.
    Comment: paper accepted at IEEE PuneCon 201
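    A pipeline of this kind (HOG and LBP features feeding a linear SVM) can be sketched with scikit-image and scikit-learn. The images below are synthetic stand-ins (textured "busy" scenes vs. a nearly empty one), not the repository's data, and the feature parameters are illustrative choices rather than the paper's.

    ```python
    import numpy as np
    from skimage.feature import hog, local_binary_pattern
    from sklearn.svm import SVC

    def features(img):
        """Concatenate HOG descriptors with a uniform-LBP histogram."""
        u8 = np.clip(img * 255, 0, 255).astype(np.uint8)
        h = hog(u8, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
        lbp = local_binary_pattern(u8, 8, 1, method="uniform")
        hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        return np.concatenate([h, hist])

    rng = np.random.default_rng(0)

    def scene(n_vehicles):
        """Toy 64x64 road image with n bright rectangles as 'vehicles'."""
        img = np.full((64, 64), 0.2)
        for _ in range(n_vehicles):
            r, c = rng.integers(0, 52, 2)
            img[r:r + 10, c:c + 10] = 0.9
        return img

    # Train a linear SVM to separate dense (label 1) from sparse (label 0) scenes.
    X = np.array([features(scene(8)) for _ in range(20)] +
                 [features(scene(1)) for _ in range(20)])
    y = np.array([1] * 20 + [0] * 20)
    clf = SVC(kernel="linear").fit(X, y)

    # Held-out accuracy on fresh synthetic scenes.
    X_test = np.array([features(scene(8)) for _ in range(5)] +
                      [features(scene(1)) for _ in range(5)])
    acc = clf.score(X_test, np.array([1] * 5 + [0] * 5))
    ```

    Both feature extractors are cheap enough for an embedded board, which is what makes this combination attractive for a Raspberry Pi deployment.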

    Highly efficient Bayesian joint inversion for receiver-based data and its application to lithospheric structure beneath the southern Korean Peninsula

    With the deployment of extensive seismic arrays, systematic and efficient parameter and uncertainty estimation is of increasing importance and can provide reliable, regional models for crustal and upper-mantle structure. We present an efficient Bayesian method for the joint inversion of surface-wave dispersion and receiver-function data that combines trans-dimensional (trans-D) model selection in an optimization phase with subsequent rigorous parameter uncertainty estimation. Parameter and uncertainty estimation depend strongly on the chosen parametrization such that meaningful regional comparison requires quantitative model selection that can be carried out efficiently at several sites. While significant progress has been made for model selection (e.g. trans-D inference) at individual sites, the lack of efficiency can prohibit application to large data volumes or cause questionable results due to lack of convergence. Studies that address large numbers of data sets have mostly ignored model selection in favour of more efficient/simple estimation techniques (i.e. focusing on uncertainty estimation but employing ad hoc model choices). Our approach consists of a two-phase inversion that combines trans-D optimization to select the most probable parametrization with subsequent Bayesian sampling for uncertainty estimation given that parametrization. The trans-D optimization is implemented here by replacing the likelihood function with the Bayesian information criterion (BIC). The BIC provides constraints on model complexity that facilitate the search for an optimal parametrization. Parallel tempering (PT) is applied as an optimization algorithm. After optimization, the optimal model choice is identified by the minimum BIC value from all PT chains. Uncertainty estimation is then carried out in fixed dimension. Data errors are estimated as part of the inference problem by a combination of empirical and hierarchical estimation. 
Data covariance matrices are estimated from data residuals (the difference between prediction and observation) and periodically updated. In addition, a scaling factor for the covariance matrix magnitude is estimated as part of the inversion. The inversion is applied to both simulated and observed data that consist of phase- and group-velocity dispersion curves (Rayleigh wave), and receiver functions. The simulation results show that model complexity and important features are well estimated by the fixed-dimensional posterior probability density. Observed data for stations in different tectonic regions of the southern Korean Peninsula are considered. The results are consistent with published results, but important features are better constrained than in previous regularized inversions and are more consistent across the stations. For example, resolution of crustal and Moho interfaces, and absolute values and gradients of velocities in lower crust and upper mantle, are better constrained.
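    The BIC-guided selection of model complexity described above can be illustrated on a toy fixed-form problem: choosing a polynomial degree for data generated by a quadratic. This is not the seismic inversion itself, only the BIC mechanics; the data and candidate models are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    x = rng.uniform(-1, 1, n)
    y = 1.0 + 2.0 * x - 1.5 * x**2 + 0.3 * rng.standard_normal(n)

    def bic(deg):
        """BIC = k*ln(n) - 2*ln(L); with Gaussian errors this reduces
        (up to a constant) to n*ln(RSS/n) + k*ln(n)."""
        coef = np.polyfit(x, y, deg)
        rss = np.sum((y - np.polyval(coef, x)) ** 2)
        k = deg + 2                      # polynomial coefficients + noise variance
        return n * np.log(rss / n) + k * np.log(n)

    scores = {d: bic(d) for d in range(1, 6)}
    best = min(scores, key=scores.get)   # the ln(n) penalty rejects overfitting
    ```

    In the paper's setting the same penalty acts on the number of layers in the trans-D parametrization, with parallel tempering doing the search instead of an exhaustive loop.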

    Linear Estimating Equations for Exponential Families with Application to Gaussian Linear Concentration Models

    In many families of distributions, maximum likelihood estimation is intractable because the normalization constant for the density which enters into the likelihood function is not easily available. The score matching estimator of Hyv\"arinen (2005) provides an alternative where this normalization constant is not required. The corresponding estimating equations become linear for an exponential family. The score matching estimator is shown to be consistent and asymptotically normally distributed for such models, although not necessarily efficient. Gaussian linear concentration models are examples of such families. For linear concentration models that are also linear in the covariance we show that the score matching estimator is identical to the maximum likelihood estimator, hence in such cases it is also efficient. Gaussian graphical models and graphical models with symmetries form particularly interesting subclasses of linear concentration models and we investigate the potential use of the score matching estimator for this case.
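    The linearity of the estimating equations can be seen in the saturated zero-mean Gaussian case (which is linear in both concentration and covariance): for density proportional to exp(-x'Kx/2), the empirical score matching objective -tr(K) + tr(KSK)/2 is minimized where KS + SK = 2I, a Sylvester equation in K. The sketch below, with an invented tridiagonal concentration matrix, checks the abstract's claim that the solution coincides with the MLE S^{-1} in this case.

    ```python
    import numpy as np
    from scipy.linalg import solve_sylvester

    rng = np.random.default_rng(0)
    p, n = 4, 5000

    # Invented tridiagonal (positive definite) concentration matrix.
    K_true = (np.eye(p)
              + 0.3 * np.diag(np.ones(p - 1), 1)
              + 0.3 * np.diag(np.ones(p - 1), -1))
    x = rng.multivariate_normal(np.zeros(p), np.linalg.inv(K_true), size=n)
    S = x.T @ x / n                      # sample second-moment matrix

    # Score matching estimating equation: K S + S K = 2 I (linear in K).
    K_sm = solve_sylvester(S, S, 2 * np.eye(p))
    K_mle = np.linalg.inv(S)             # MLE for the saturated model
    ```

    For restricted linear concentration models (e.g. graphical models) the estimating equations remain linear but are projected onto the model's linear subspace, which is where the computational appeal over constrained likelihood maximization lies.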

    Estimating the Effects of Large Shareholders Using a Geographic Instrument

    Large shareholders may play an important role for firm policies and performance, but identifying an effect empirically presents a challenge due to the endogeneity of ownership structures. However, unlike other blockholders, individuals tend to hold blocks in corporations that are located close to where they live. Using this fact, we create an instrument – the density of wealthy individuals near a firm’s headquarters – for the presence of a large, non-managerial individual shareholder in a public firm. We show that these shareholders have a large impact on firms. Consistent with theories of large shareholders as monitors, we find that they increase firm profitability, increase dividends, reduce corporate cash holdings, and reduce executive compensation. Consistent with the view that there exist conflicts between large and small owners in public firms, we uncover evidence of substitution toward less tax-efficient forms of distribution (dividends over repurchases). In addition, our analysis shows that large shareholders reduce the liquidity of the firm’s stock.
    Keywords: Large shareholders; blockholders; corporate policies; firm performance; liquidity; instrumental variable estimation