
    A Question of Empowerment: Information Technology and Civic Engagement in New Haven, Connecticut

    Extravagant claims have been made for the capacity of IT (information technology) to empower citizens and to strengthen civic organizations. This study of IT use by organizations and agencies in New Haven, Connecticut, 1998-2004, tests these claims, finding that the use of IT by nonprofits is selective, tending to serve agencies patronized by community elites rather than populations in need. In addition, the study finds that single-interest groups are far more effective in using IT than more diverse civic and neighborhood groups. This publication is Hauser Center Working Paper No. 30. The Hauser Center Working Paper Series was launched during the summer of 2000. The Series enables the Hauser Center to share with a broad audience important works-in-progress written by Hauser Center scholars and researchers.

    Testing the suitability of polynomial models in errors-in-variables problems

    A low-degree polynomial model for a response curve is commonly used in practice. It generally incorporates a linear or quadratic function of the covariate. In this paper we suggest methods for testing the goodness of fit of a general polynomial model when there are errors in the covariates. There, the true covariates are not directly observed, and conventional bootstrap methods for testing are not applicable. We develop a new approach, in which deconvolution methods are used to estimate the distribution of the covariates under the null hypothesis, and a "wild" or moment-matching bootstrap argument is employed to estimate the distribution of the experimental errors (distinct from the distribution of the errors in covariates). Most of our attention is directed at the case where the distribution of the errors in covariates is known, although we also discuss methods for estimation and testing when the covariate error distribution is estimated. No assumptions are made about the distribution of experimental error, and, in particular, we depart substantially from conventional parametric models for errors-in-variables problems. Comment: Published in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org), http://dx.doi.org/10.1214/009053607000000361.
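
    To make the deconvolution step concrete, the sketch below estimates the density of the unobserved covariate X from contaminated observations W = X + U, assuming, purely for illustration, Gaussian measurement error with known variance and a kernel whose Fourier transform is compactly supported. It is a standard deconvolution kernel density estimator, not the authors' test procedure, and the bandwidth and error scale are made-up values.

        import numpy as np

        def deconvolution_kde(w, x_grid, h, sigma_u):
            """Deconvolution kernel density estimate of the covariate density f_X
            from contaminated data W = X + U, with U ~ N(0, sigma_u^2) assumed known.
            The kernel's Fourier transform (1 - t^2)^3 vanishes outside [-1, 1], so
            the integrand stays away from frequencies where phi_U is tiny."""
            t = np.linspace(-1.0, 1.0, 401)              # scaled frequency grid
            phi_K = (1.0 - t ** 2) ** 3                  # Fourier transform of the kernel
            s = t / h                                    # frequencies on the data scale
            phi_W = np.array([np.mean(np.exp(1j * si * w)) for si in s])   # empirical ch.f. of W
            phi_U = np.exp(-0.5 * (sigma_u * s) ** 2)    # ch.f. of N(0, sigma_u^2)
            common = phi_W * phi_K / phi_U
            ds = s[1] - s[0]
            est = np.array([np.real(np.sum(np.exp(-1j * s * x) * common)) * ds
                            for x in x_grid]) / (2.0 * np.pi)
            return np.maximum(est, 0.0)                  # clip small negative ripples

        # toy usage: X ~ N(0, 1) observed through Gaussian measurement error
        rng = np.random.default_rng(0)
        x_true = rng.normal(size=500)
        w = x_true + rng.normal(scale=0.4, size=500)
        f_hat = deconvolution_kde(w, np.linspace(-3, 3, 121), h=0.45, sigma_u=0.4)

    Under the null hypothesis, an estimate of this kind supplies the covariate distribution from which reference data sets are generated; the "wild" or moment-matching step of the paper handles the experimental errors separately.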

    Wavelet-based estimation with multiple sampling rates

    We suggest an adaptive sampling rule for obtaining information from noisy signals using wavelet methods. The technique involves increasing the sampling rate when relatively high-frequency terms are incorporated into the wavelet estimator, and decreasing it when, again using thresholded terms as an empirical guide, signal complexity is judged to have decreased. By sampling in this way, the algorithm is able to recover relatively complex signals accurately without increasing the long-run average expense of sampling. It achieves this level of performance by exploiting the opportunities for near-real-time sampling that are available if one uses a relatively high primary resolution level when constructing the basic wavelet estimator. In the practical problems that motivate the work, where the signal-to-noise ratio is particularly high and the long-run average sampling rate may be several hundred thousand operations per second, high primary resolution levels are quite feasible. Comment: Published in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org), http://dx.doi.org/10.1214/009053604000000751.
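
    As a rough illustration of a rule of this kind, the sketch below computes the fraction of finest-scale Haar detail coefficients in the most recent block that survive a universal-style threshold, then nudges the sampling rate up or down accordingly. The Haar transform, the threshold constants and the rate multipliers are illustrative choices, not the estimator or tuning used in the paper.

        import numpy as np

        def fraction_retained(block):
            """Fraction of finest-scale Haar detail coefficients exceeding a
            universal-style threshold: a crude proxy for local signal complexity."""
            block = np.asarray(block, dtype=float)
            n = (len(block) // 2) * 2
            detail = (block[0:n:2] - block[1:n:2]) / np.sqrt(2.0)
            sigma = np.median(np.abs(detail)) / 0.6745 + 1e-12     # robust noise scale
            thresh = sigma * np.sqrt(2.0 * np.log(max(len(detail), 2)))
            return float(np.mean(np.abs(detail) > thresh))

        def next_sampling_rate(rate, block, lo=1e3, hi=2e5,
                               up=1.5, down=0.8, busy=0.20, quiet=0.05):
            """Raise the rate when many thresholded high-frequency terms are retained,
            lower it when the current block looks smooth, within fixed bounds."""
            frac = fraction_retained(block)
            if frac > busy:
                rate *= up
            elif frac < quiet:
                rate *= down
            return float(np.clip(rate, lo, hi))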

    Nonparametric estimation of a point-spread function in multivariate problems

    The removal of blur from a signal, in the presence of noise, is readily accomplished if the blur can be described in precise mathematical terms. However, there is growing interest in problems where the extent of blur is known only approximately, for example in terms of a blur function which depends on unknown parameters that must be computed from data. More challenging still is the case where no parametric assumptions are made about the blur function. There has been a limited amount of work in this setting, but it invariably relies on iterative methods, sometimes under assumptions that are mathematically convenient but physically unrealistic (e.g., that the operator defined by the blur function has an integrable inverse). In this paper we suggest a direct, noniterative approach to nonparametric, blind restoration of a signal. Our method is based on a new, ridge-based method for deconvolution, and requires only mild restrictions on the blur function. We show that the convergence rate of the method is close to optimal, from some viewpoints, and demonstrate its practical performance by applying it to real images. Comment: Published in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org), http://dx.doi.org/10.1214/009053606000001442.
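
    The phrase "ridge-based method for deconvolution" can be illustrated by the classical ridge-regularized Fourier filter below, which divides by the blur's transfer function while damping frequencies at which that function is small, instead of assuming the blur operator has an integrable inverse. This sketch takes the point-spread function as given; the paper's contribution is precisely to estimate that function nonparametrically, which the snippet does not attempt.

        import numpy as np

        def ridge_deconvolve(blurred, psf, ridge=1e-3):
            """Ridge-regularized Fourier deconvolution of a 2-D image: a
            Tikhonov-style filter that stays stable where the blur's transfer
            function is close to zero."""
            H = np.fft.fft2(psf, s=blurred.shape)     # transfer function of the blur
            G = np.fft.fft2(blurred)
            F_hat = G * np.conj(H) / (np.abs(H) ** 2 + ridge)
            return np.real(np.fft.ifft2(F_hat))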

    Nonparametric estimation of mean-squared prediction error in nested-error regression models

    Nested-error regression models are widely used for analyzing clustered data. For example, they are often applied to two-stage sample surveys, and in biology and econometrics. Prediction is usually the main goal of such analyses, and mean-squared prediction error is the main way in which prediction performance is measured. In this paper we suggest a new approach to estimating mean-squared prediction error. We introduce a matched-moment, double-bootstrap algorithm, enabling the notorious underestimation of the naive mean-squared error estimator to be substantially reduced. Our approach does not require specific assumptions about the distributions of errors. Additionally, it is simple and easy to apply. This is achieved by using Monte Carlo simulation to develop, implicitly, formulae which, in a more conventional approach, would be derived laboriously by mathematical arguments. Comment: Published in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org), http://dx.doi.org/10.1214/009053606000000579.
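
    The double-bootstrap idea can be sketched generically: a first bootstrap level measures how far a naive estimator falls short on average, and a second level corrects that correction. The function below implements the standard iterated-bootstrap bias adjustment with i.i.d. resampling and a toy variance example; the paper's matched-moment algorithm generates its bootstrap data sets differently and targets mean-squared prediction error rather than a simple bias.

        import numpy as np

        def double_bootstrap_correct(data, estimator, B1=200, B2=40, seed=0):
            """Iterated (double) bootstrap bias correction: a generic sketch of the
            two-level structure, not the matched-moment algorithm of the paper."""
            rng = np.random.default_rng(seed)
            data = np.asarray(data)
            n = len(data)
            theta0 = estimator(data)
            level1, level2 = [], []
            for _ in range(B1):
                d1 = rng.choice(data, size=n, replace=True)
                level1.append(estimator(d1))
                level2.append(np.mean([estimator(rng.choice(d1, size=n, replace=True))
                                       for _ in range(B2)]))
            # once-corrected: 2*theta0 - mean(level1); the second level refines it
            return 3.0 * theta0 - 3.0 * np.mean(level1) + np.mean(level2)

        # toy check on the downward-biased ddof=0 variance estimator
        x = np.random.default_rng(1).normal(size=30)
        print(np.var(x), double_bootstrap_correct(x, np.var))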

    Methodology and theory for partial least squares applied to functional data

    The partial least squares procedure was originally developed to estimate the slope parameter in multivariate parametric models. More recently it has gained popularity in the functional data literature. There, the partial least squares estimator of slope is either used to construct linear predictive models, or as a tool to project the data onto a one-dimensional quantity that is employed for further statistical analysis. Although the partial least squares approach is often viewed as an attractive alternative to projections onto the principal component basis, its properties are less well known than those of the latter, mainly because of its iterative nature. We develop an explicit formulation of partial least squares for functional data, which leads to insightful results and motivates new theory, demonstrating consistency and establishing convergence rates. Comment: Published in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org), http://dx.doi.org/10.1214/11-AOS958.
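
    A minimal sketch of partial least squares applied to discretized functional data follows, using scikit-learn's PLSRegression on curves sampled at a common grid. The toy data and the grid-based rescaling of the slope are illustrative assumptions, not the explicit functional formulation developed in the paper.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(2)
        t = np.linspace(0.0, 1.0, 100)                  # common observation grid
        n = 200
        scores = rng.normal(size=(n, 2))
        X = (scores[:, [0]] * np.sin(2 * np.pi * t)     # rows are discretized curves
             + scores[:, [1]] * np.cos(2 * np.pi * t)
             + 0.1 * rng.normal(size=(n, t.size)))
        beta = np.sin(np.pi * t)                        # smooth slope function
        y = X @ beta / t.size + 0.05 * rng.normal(size=n)   # approximate integral model

        # a small number of PLS components plays the role of the low-dimensional
        # projection discussed in the abstract
        pls = PLSRegression(n_components=2).fit(X, y)
        y_hat = pls.predict(X).ravel()
        slope_hat = np.asarray(pls.coef_).ravel() * t.size   # back to the integral scale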

    Assessing extrema of empirical principal component functions

    The difficulties of estimating and representing the distributions of functional data mean that principal component methods play a substantially greater role in functional data analysis than in more conventional finite-dimensional settings. Local maxima and minima in principal component functions are of direct importance; they indicate places in the domain of a random function where influence on the function value tends to be relatively strong but of opposite sign. We explore statistical properties of the relationship between extrema of empirical principal component functions and their counterparts for the true principal component functions. It is shown that empirical principal component functions have relatively little trouble capturing conventional extrema, but can experience difficulty distinguishing a "shoulder" in a curve from a small bump. For example, when the true principal component function has a shoulder, the probability that the empirical principal component function instead has a bump is approximately equal to 1/2. We suggest and describe the performance of bootstrap methods for assessing the strength of extrema. It is shown that the subsample bootstrap is more effective than the standard bootstrap in this regard. A "bootstrap likelihood" is proposed for measuring extremum strength. Exploratory numerical methods are suggested. Comment: Published in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org), http://dx.doi.org/10.1214/009053606000000371.
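
    The sketch below shows how the question can be examined numerically: compute the first empirical principal component function of a sample of discretized curves, count its interior local maxima, and check how often that count is reproduced across bootstrap resamples. For brevity it uses the ordinary n-out-of-n bootstrap, whereas the paper finds the subsample bootstrap more effective; the sign-fixing rule and resample scheme are illustrative choices.

        import numpy as np

        def first_pc_function(curves):
            """First empirical principal component function of discretized curves
            (rows of `curves`), via an SVD of the centred data matrix."""
            centred = curves - curves.mean(axis=0)
            _, _, vt = np.linalg.svd(centred, full_matrices=False)
            pc = vt[0]
            return pc if pc[np.argmax(np.abs(pc))] > 0 else -pc   # fix the sign

        def interior_maxima(f):
            """Indices of strict interior local maxima of a sampled function."""
            return np.flatnonzero((f[1:-1] > f[:-2]) & (f[1:-1] > f[2:])) + 1

        def bootstrap_agreement(curves, B=200, seed=0):
            """Proportion of bootstrap resamples whose first PC function has the
            same number of interior local maxima as the original sample's."""
            rng = np.random.default_rng(seed)
            n = len(curves)
            k0 = len(interior_maxima(first_pc_function(curves)))
            hits = sum(len(interior_maxima(first_pc_function(
                           curves[rng.integers(0, n, size=n)]))) == k0
                       for _ in range(B))
            return hits / B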

    Defining probability density for a distribution of random functions

    The notion of probability density for a random function is not as straightforward as in finite-dimensional cases. While a probability density function generally does not exist for functional data, we show that it is possible to develop the notion of density when functional data are considered in the space determined by the eigenfunctions of principal component analysis. This leads to a transparent and meaningful surrogate for density, defined in terms of the average value of the logarithms of the densities of the distributions of principal components for a given dimension. This density approximation is readily estimable from data. It accurately represents, in a monotone way, key features of small-ball approximations to density. Our results on estimators of the densities of principal component scores are also of independent interest; they reveal interesting shape differences that have not previously been considered. The statistical implications of these results and properties are identified and discussed, and practical ramifications are illustrated in numerical work. Comment: Published in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org), http://dx.doi.org/10.1214/09-AOS741.
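
    A small sketch of the surrogate described above: project the curves onto their first few empirical principal components and average the logarithms of one-dimensional kernel density estimates of the resulting scores. The fixed dimension p, the dense common grid and the use of scipy's gaussian_kde are assumptions made for illustration; the paper's definition and estimators treat the choice of dimension and the score densities more carefully.

        import numpy as np
        from scipy.stats import gaussian_kde

        def log_density_surrogate(curves, new_curve, p=3):
            """Average log-density of the first p principal component scores of
            `new_curve`, relative to the sample `curves` (rows = discretized
            functions); a rough stand-in for the density surrogate above."""
            mean = curves.mean(axis=0)
            centred = curves - mean
            _, _, vt = np.linalg.svd(centred, full_matrices=False)
            basis = vt[:p]                               # empirical eigenfunctions
            scores = centred @ basis.T                   # n x p matrix of PC scores
            new_scores = (new_curve - mean) @ basis.T    # scores of the new curve
            logs = [np.log(gaussian_kde(scores[:, j])(new_scores[j])[0] + 1e-300)
                    for j in range(p)]
            return float(np.mean(logs))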