
    Optimal designs for minimising covariances among parameter estimators in a linear model

    We construct approximate optimal designs for minimising absolute covariances between least-squares estimators of the parameters (or linear functions of the parameters) of a linear model, thereby rendering relevant parameter estimators approximately uncorrelated with each other. We first consider the case of the covariance between two linear combinations, and then the case of two such covariances; for the latter we set up a compound optimisation problem, which we transform into one of maximising two functions of the design weights simultaneously. The approaches are formulated for a general regression model and are explored through several examples, including one practical problem arising in chemistry.
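
    As a minimal numerical sketch of the single-covariance case (not the authors' algorithm): under least squares, the covariance between c1'beta-hat and c2'beta-hat for a design with weights w is proportional to c1' M(w)^{-1} c2, where M(w) = sum_i w_i f(x_i) f(x_i)' is the information matrix. The quadratic model, the candidate grid, and the choices of c1 and c2 below are illustrative assumptions.

        import numpy as np
        from scipy.optimize import minimize

        # Illustrative setup: quadratic regression f(x) = (1, x, x^2)' on a grid
        # of candidate points; c1, c2 select the intercept and the x^2 coefficient.
        xs = np.linspace(-1.0, 1.0, 21)
        F = np.vstack([np.ones_like(xs), xs, xs**2]).T   # rows are f(x_i)'
        c1 = np.array([1.0, 0.0, 0.0])
        c2 = np.array([0.0, 0.0, 1.0])

        def sq_cov(w):
            # (c1' M(w)^{-1} c2)^2, squared to keep the objective smooth at zero;
            # pinv guards against near-singular M during the line search.
            M = F.T @ (w[:, None] * F)
            return float(c1 @ (np.linalg.pinv(M) @ c2)) ** 2

        # Optimise over the probability simplex (nonnegative weights summing to one).
        n = len(xs)
        res = minimize(sq_cov, np.full(n, 1.0 / n),
                       bounds=[(0.0, 1.0)] * n,
                       constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
        print(res.x.round(3), sq_cov(res.x))   # design weights with ~zero covariance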

    Discrete spherical means of directional derivatives and Veronese maps

    We describe and study geometric properties of discrete circular and spherical means of directional derivatives of functions, as well as discrete approximations of higher-order differential operators. For arbitrary dimension we present a general construction for obtaining discrete spherical means of directional derivatives, based on Minkowski's existence theorem and Veronese maps. Approximating the directional derivatives by appropriate finite differences yields finite-difference operators with good rotation-invariance properties. In particular, we use discrete circular and spherical means to derive discrete approximations of various linear and nonlinear first- and second-order differential operators, including discrete Laplacians. The practical potential of our approach is demonstrated through applications to nonlinear filtering of digital images and to surface curvature estimation.
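
    The circular-mean route to a discrete Laplacian can be sketched in a few lines (a toy illustration via the standard mean-value expansion, not the paper's Veronese-map construction): in 2-D, the average of f over a circle of radius h equals f + (h^2/4) * Laplacian(f) + O(h^4), so the Laplacian can be read off from a discrete circular mean over k equally spaced directions. The values of h and k below are illustrative.

        import numpy as np

        def circular_mean_laplacian(f, x, y, h=1e-2, k=8):
            # Discrete circular mean of f over k equally spaced directions;
            # mean = f + (h^2/4) * Laplacian(f) + O(h^4), hence the rescaling.
            theta = 2.0 * np.pi * np.arange(k) / k
            mean = np.mean(f(x + h * np.cos(theta), y + h * np.sin(theta)))
            return 4.0 * (mean - f(x, y)) / h**2

        # Sanity check on f(x, y) = x^2 + y^2, whose Laplacian is exactly 4.
        f = lambda x, y: x**2 + y**2
        print(circular_mean_laplacian(f, 0.3, -0.7))   # ~ 4.0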

    Inference Under Convex Cone Alternatives for Correlated Data

    In this research, inferential theory for hypothesis testing under general convex cone alternatives for correlated data is developed. While extensive theory exists for hypothesis testing under smooth cone alternatives with independent observations, the extension to correlated data under general convex cone alternatives has remained an open problem. This long-standing problem is addressed by (1) establishing that a "generalized quasi-score" statistic is asymptotically equivalent to the squared length of the projection of a standard Gaussian vector onto the convex cone and (2) showing that the asymptotic null distribution of the test statistic is a weighted chi-squared distribution, where the weights are "mixed volumes" of the convex cone and its polar cone. Explicit expressions for these weights are derived using the volume-of-tube formula around a convex manifold in the unit sphere. Furthermore, an asymptotic lower bound is constructed for the power of the generalized quasi-score test under a sequence of local alternatives in the convex cone. Applications to testing under order-restricted alternatives for correlated data are illustrated.
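
    The weighted chi-squared (chi-bar-squared) null distribution can be checked by Monte Carlo in the simplest case, where the convex cone is the nonnegative orthant and the mixed-volume weights reduce to binomial probabilities. This is an assumption-laden toy sketch, not the paper's volume-of-tube computation.

        import numpy as np
        from scipy.stats import chi2, binom

        rng = np.random.default_rng(0)
        k = 4                                    # cone C = [0, inf)^k, the orthant
        Z = rng.standard_normal((200_000, k))
        # Projecting onto the orthant is coordinatewise clipping at zero;
        # T is the squared length of the projection of a standard Gaussian onto C.
        T = (np.clip(Z, 0.0, None) ** 2).sum(axis=1)

        t = 3.0
        mc_tail = (T >= t).mean()
        # For the orthant the mixed-volume weights are binomial probabilities.
        weights = binom.pmf(np.arange(k + 1), k, 0.5)
        mix_tail = sum(w * chi2.sf(t, df=i) for i, w in enumerate(weights) if i > 0)
        print(mc_tail, mix_tail)                 # the two tail estimates should agree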

    Imposing Economic Constraints in Nonparametric Regression: Survey, Implementation and Extension

    Economic conditions such as convexity, homogeneity, homotheticity, and monotonicity are important assumptions about, or consequences of assumptions about, the economic functionals to be estimated. Recent research has seen a renewed interest in imposing such constraints in nonparametric regression. We survey the available methods in the literature, discuss the challenges that arise when implementing these methods empirically, and extend an existing method to handle general nonlinear constraints. A heuristic discussion of the empirical implementation of methods that use sequential quadratic programming is provided, and simulated and empirical evidence on the distinction between constrained and unconstrained nonparametric regression surfaces is presented.
    Keywords: identification, concavity, Hessian, constraint weighted bootstrapping, earnings function
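
    A rough sketch of the constraint-weighted-bootstrapping idea named in the keywords (in the spirit of Hall and Huang): replace the uniform observation weights in a kernel smoother with weights chosen, via sequential quadratic programming, to be as close to uniform as possible while the fitted curve satisfies the constraint. The data, bandwidth, grid, and monotonicity (nondecreasing) constraint below are illustrative assumptions.

        import numpy as np
        from scipy.optimize import minimize

        # Toy data and a Nadaraya-Watson smoother with per-observation weights p.
        rng = np.random.default_rng(1)
        x = np.sort(rng.uniform(0, 1, 40))
        y = x + 0.3 * rng.standard_normal(40)
        grid = np.linspace(0.05, 0.95, 30)
        hbw = 0.1                                # illustrative bandwidth

        def fit(p):
            K = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / hbw) ** 2)
            W = K * p[None, :]
            return (W @ y) / W.sum(axis=1)

        # Keep p close to uniform subject to monotonicity of the fit on the grid;
        # scipy's SLSQP is a sequential quadratic programming method.
        n = len(x)
        u = np.full(n, 1.0 / n)
        res = minimize(lambda p: ((p - u) ** 2).sum(), u,
                       bounds=[(0.0, 1.0)] * n,
                       constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1.0},
                                    {"type": "ineq", "fun": lambda p: np.diff(fit(p))}])
        mhat = fit(res.x)                        # nondecreasing fit on the grid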

    Labeling the Features Not the Samples: Efficient Video Classification with Minimal Supervision

    Feature selection is essential for effective visual recognition. We propose an efficient joint classifier-learning and feature-selection method that discovers sparse, compact representations of input features from a vast sea of candidates, with an almost unsupervised formulation. Our method requires only the following knowledge, which we call the "feature sign": whether or not a particular feature has, on average, stronger values over positive samples than over negative ones. We show how this can be estimated using as few as a single labeled training sample per class. Then, using these feature signs, we extend an initial supervised learning problem into an (almost) unsupervised clustering formulation that can incorporate new data without requiring ground-truth labels. Our method works both as a feature selection mechanism and as a fully competitive classifier. It has important properties: low computational cost and excellent accuracy, especially in difficult cases of very limited training data. We experiment on large-scale recognition in video and show speed and performance superior to established feature selection approaches such as AdaBoost, Lasso, and greedy forward-backward selection, and to powerful classifiers such as SVM.
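
    A minimal sketch of the feature-sign estimate described above (illustrative code; the feature matrices, with a single labeled sample per class, are assumed):

        import numpy as np

        def feature_signs(X_pos, X_neg):
            # +1 if a feature's mean over positive samples exceeds its mean over
            # negatives, else -1; per the abstract, even one sample per class works.
            return np.where(X_pos.mean(axis=0) > X_neg.mean(axis=0), 1.0, -1.0)

        # Flipping each feature by its sign makes "larger value" mean "more positive",
        # after which selection or clustering can proceed without further labels.
        rng = np.random.default_rng(0)
        X_pos = rng.normal(1.0, 1.0, (1, 5))     # one labeled positive sample
        X_neg = rng.normal(0.0, 1.0, (1, 5))     # one labeled negative sample
        s = feature_signs(X_pos, X_neg)
        X_aligned = s * np.vstack([X_pos, X_neg])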

    Minimum Divergence, Generalized Empirical Likelihoods, and Higher Order Expansions

    This paper studies the Minimum Divergence (MD) class of estimators for econometric models specified through moment restrictions. We show that MD estimators can be obtained as solutions to a computationally tractable optimization problem. This problem is similar to the one solved by the Generalized Empirical Likelihood estimators of Newey and Smith (2004), but is equivalent to it only for a subclass of divergences. The MD framework provides a coherent testing theory: tests for overidentification and parametric restrictions in this framework can be interpreted as semiparametric versions of Pearson-type goodness-of-fit tests. The higher-order properties of MD estimators are also studied, and it is shown that MD estimators with the same higher-order bias as the Empirical Likelihood (EL) estimator also share the same higher-order mean squared error and are all higher-order efficient. We identify members of the MD class that are not only higher-order efficient but, unlike the EL estimator, also well behaved when the moment restrictions are misspecified.
    Keywords: Minimum divergence; GMM; Generalized empirical likelihood; Higher order efficiency; Misspecified models
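
    As a toy illustration of the simplest member of the EL/MD family, the empirical likelihood statistic for a scalar mean reduces to a one-dimensional problem for the Lagrange multiplier. This sketch covers only the moment restriction E[z - theta] = 0 (with theta assumed inside the sample range) and not the paper's general divergence class.

        import numpy as np
        from scipy.optimize import brentq

        def el_logratio(z, theta):
            # Moment g(z, theta) = z - theta. Solve sum g_i / (1 + lam*g_i) = 0
            # for the multiplier lam; implied probabilities p_i = 1/(n(1 + lam*g_i)).
            g = z - theta
            n = len(g)
            # lam must keep every 1 + lam*g_i positive; bracket accordingly.
            lo = (-1.0 + 1e-8) / g.max()
            hi = (-1.0 + 1e-8) / g.min()
            lam = brentq(lambda l: np.sum(g / (1.0 + l * g)), lo, hi)
            p = 1.0 / (n * (1.0 + lam * g))
            return -2.0 * np.sum(np.log(n * p))  # EL ratio, asymptotically chi2_1

        z = np.random.default_rng(0).standard_normal(200)
        print(el_logratio(z, 0.0))   # small if theta = 0 is consistent with the data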