
    A sieve M-theorem for bundled parameters in semiparametric models, with application to the efficient estimation in a linear model for censored data

    In many semiparametric models that are parameterized by two types of parameters---a Euclidean parameter of interest and an infinite-dimensional nuisance parameter---the two parameters are bundled together, that is, the nuisance parameter is an unknown function that contains the parameter of interest as part of its argument. For example, in a linear regression model for censored survival data, the unspecified error distribution function involves the regression coefficients. Motivated by developing an efficient estimating method for the regression parameters, we propose a general sieve M-theorem for bundled parameters and apply the theorem to derive the asymptotic theory for sieve maximum likelihood estimation in the linear regression model for censored survival data. The numerical implementation of the proposed estimating method can be achieved through conventional gradient-based search algorithms such as the Newton--Raphson algorithm. We show that the proposed estimator is consistent and asymptotically normal and achieves the semiparametric efficiency bound. Simulation studies demonstrate that the proposed method performs well in practical settings and yields more efficient estimates than existing estimating equation based methods. Illustration with a real data example is also provided. Comment: Published at http://dx.doi.org/10.1214/11-AOS934 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
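The abstract notes that the estimator can be computed with conventional Newton--Raphson iteration. A minimal generic sketch of that search, on a hypothetical toy objective rather than the paper's sieve likelihood:

```python
import numpy as np

def newton_raphson(grad, hess, x0, tol=1e-8, max_iter=100):
    """Generic Newton-Raphson search: repeatedly solve H(x) dx = g(x)
    and update x <- x - dx until the step is negligible."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(hess(x), grad(x))
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

# Toy example (not the paper's objective): minimize
# f(x) = (x0 - 3)^2 + (x1 + 1)^2, whose minimizer is (3, -1).
grad = lambda x: np.array([2 * (x[0] - 3), 2 * (x[1] + 1)])
hess = lambda x: np.diag([2.0, 2.0])
x_hat = newton_raphson(grad, hess, np.zeros(2))  # → approx [3, -1]
```

On a quadratic objective the iteration converges in a single step; in the semiparametric setting the same update would be applied to the sieve log-likelihood in the joint parameter.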

    Distributed Parameter Estimation via Pseudo-likelihood

    Estimating statistical models within sensor networks requires distributed algorithms, in which both data and computation are distributed across the nodes of the network. We propose a general approach for distributed learning based on combining local estimators defined by pseudo-likelihood components, encompassing a number of combination methods, and provide both theoretical and experimental analysis. We show that simple linear combination or max-voting methods, when combined with second-order information, are statistically competitive with more advanced and costly joint optimization. Our algorithms have many attractive properties including low communication and computational cost and "any-time" behavior. Comment: Appears in Proceedings of the 29th International Conference on Machine Learning (ICML 2012).
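The "linear combination with second-order information" idea above can be sketched as a precision-weighted average of node-local estimates; the function name and toy data below are illustrative assumptions, not the paper's code:

```python
import numpy as np

def combine_local_estimates(thetas, infos):
    """Precision-weighted linear combination of node-local estimates.

    thetas: list of d-dim local parameter estimates (one per node)
    infos:  list of d x d local information (curvature) matrices;
            weighting by curvature is the 'second-order information'
            idea -- nodes with sharper local objectives get more weight.
    """
    total_info = sum(infos)
    weighted = sum(I @ t for I, t in zip(infos, thetas))
    return np.linalg.solve(total_info, weighted)

# Toy example: three nodes estimate a scalar with different precisions.
thetas = [np.array([1.0]), np.array([2.0]), np.array([4.0])]
infos = [np.array([[1.0]]), np.array([[1.0]]), np.array([[2.0]])]
theta_hat = combine_local_estimates(thetas, infos)  # (1*1 + 1*2 + 2*4)/4 = 2.75
```

Only the local estimates and small information matrices need to be communicated, which is what keeps the communication cost low relative to joint optimization.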

    Dynamic Graph Stream Algorithms in o(n) Space

    In this paper we study graph problems in the dynamic streaming model, where the input is defined by a sequence of edge insertions and deletions. As many natural problems require Ω(n) space, where n is the number of vertices, existing works mainly focused on designing Õ(n)-space algorithms. Although sublinear in the number of edges for dense graphs, this could still be too large for many applications (e.g., when n is huge or the graph is sparse). In this work, we give single-pass algorithms beating this space barrier for two classes of problems. We present o(n)-space algorithms for estimating the number of connected components with additive error εn and for (1+ε)-approximating the weight of the minimum spanning tree, for any small constant ε>0. The latter improves the previous Õ(n)-space algorithm given by Ahn et al. (SODA 2012) for connected graphs with bounded edge weights. We initiate the study of approximate graph property testing in the dynamic streaming model, where we want to distinguish graphs satisfying a property from graphs that are ε-far from having it. We consider the problems of testing k-edge connectivity, k-vertex connectivity, cycle-freeness, and bipartiteness (of planar graphs), for which we provide algorithms using roughly Õ(n^{1−ε}) space, which is o(n) for any constant ε. To complement our algorithms, we present Ω(n^{1−O(ε)}) space lower bounds for these problems, which show that such a dependence on ε is necessary. Comment: ICALP 201
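The connected-components task above can be illustrated offline (not in the streaming model) by the classic vertex-sampling idea: summing 1/|C(v)| over all vertices v counts the components exactly, and capping the local search keeps each probe cheap at the price of a small additive bias. A hedged sketch with illustrative names:

```python
import random
from collections import deque

def estimate_components(adj, n, samples=500, cap=100, seed=0):
    """Additive approximation of the number of connected components.

    Since sum over all v of 1/|C(v)| equals the component count, we
    average n/|C(v)| over sampled vertices. Capping the BFS at `cap`
    vertices biases each term by at most 1/cap, so the additive error
    is roughly n/cap plus sampling error. Offline illustration only --
    not the paper's dynamic streaming algorithm.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        v = rng.randrange(n)
        seen = {v}
        q = deque([v])
        while q and len(seen) < cap:
            u = q.popleft()
            for w in adj.get(u, []):
                if w not in seen:
                    seen.add(w)
                    q.append(w)
        total += 1.0 / len(seen)
    return n * total / samples

# Toy graph: two triangles' worth of paths on 6 vertices (2 components).
adj = {0: [1], 1: [0, 2], 2: [1], 3: [4], 4: [3, 5], 5: [4]}
est = estimate_components(adj, 6)  # → 2.0 (both components fit under cap)
```

The streaming versions in the paper achieve the analogous guarantee in o(n) space under edge insertions and deletions, which is the hard part.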

    Approximate Span Programs

    Span programs are a model of computation that have been used to design quantum algorithms, mainly in the query model. For any decision problem, there exists a span program that leads to an algorithm with optimal quantum query complexity, but finding such an algorithm is generally challenging. We consider new ways of designing quantum algorithms using span programs. We show how any span program that decides a problem f can also be used to decide "property testing" versions of f, or more generally, to approximate the span program witness size, a property of the input related to f. For example, using our techniques, the span program for OR, which can be used to design an optimal algorithm for the OR function, can also be used to design optimal algorithms for: threshold functions, in which we want to decide whether the Hamming weight of a string is above a threshold or far below it, given the promise that one of these holds; and approximate counting, in which we want to estimate the Hamming weight of the input. We achieve these results by relaxing the requirement that 1-inputs hit some target exactly in the span program, which could make the design of span programs easier. We also give an exposition of span program structure, which increases the understanding of this important model. One implication is alternative algorithms for estimating the witness size when the phase gap of a certain unitary can be lower bounded. We show how to lower bound this phase gap in some cases. As applications, we give the first upper bounds in the adjacency query model on the quantum time complexity of estimating the effective resistance R_{s,t}(G) between s and t, of Õ(ε^{−3/2} n √(R_{s,t}(G))), and, when μ is a lower bound on λ₂(G), by our phase gap lower bound, we can obtain Õ(ε^{−1} n √(R_{s,t}(G)/μ)), both using O(log n) space.
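For context, the effective resistance that the quantum algorithms above estimate has a standard classical formula, R_{s,t}(G) = (e_s − e_t)ᵀ L⁺ (e_s − e_t), where L is the graph Laplacian and L⁺ its Moore--Penrose pseudoinverse. A small classical sketch (not the paper's quantum algorithm):

```python
import numpy as np

def effective_resistance(L, s, t):
    """R_{s,t} = (e_s - e_t)^T L^+ (e_s - e_t), with L the graph
    Laplacian; matches the physical resistance when each edge is a
    unit resistor."""
    n = L.shape[0]
    chi = np.zeros(n)
    chi[s], chi[t] = 1.0, -1.0
    return chi @ np.linalg.pinv(L) @ chi

# Path graph 0-1-2 with unit edges: two unit resistors in series,
# so R_{0,2} should be 2.
L = np.array([[1., -1., 0.],
              [-1., 2., -1.],
              [0., -1., 1.]])
r = effective_resistance(L, 0, 2)  # → 2.0
```

Classically this costs a pseudoinverse (cubic time in n); the point of the quantum results above is sublinear-in-matrix-size query/time bounds with only O(log n) space.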

    A Shuffled Complex Evolution Metropolis algorithm for optimization and uncertainty assessment of hydrologic model parameters

    Markov Chain Monte Carlo (MCMC) methods have become increasingly popular for estimating the posterior probability distribution of parameters in hydrologic models. However, MCMC methods require the a priori definition of a proposal or sampling distribution, which determines the explorative capabilities and efficiency of the sampler and therefore the statistical properties of the Markov Chain and its rate of convergence. In this paper we present an MCMC sampler entitled the Shuffled Complex Evolution Metropolis algorithm (SCEM-UA), which is well suited to infer the posterior distribution of hydrologic model parameters. The SCEM-UA algorithm is a modified version of the original SCE-UA global optimization algorithm developed by Duan et al. [1992]. The SCEM-UA algorithm operates by merging the strengths of the Metropolis algorithm, controlled random search, competitive evolution, and complex shuffling in order to continuously update the proposal distribution and evolve the sampler to the posterior target distribution. Three case studies demonstrate that the adaptive capability of the SCEM-UA algorithm significantly reduces the number of model simulations needed to infer the posterior distribution of the parameters when compared with traditional Metropolis-Hastings samplers.
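SCEM-UA builds on the plain Metropolis algorithm, whose fixed proposal distribution is exactly what the shuffled-complex machinery adapts. A minimal random-walk Metropolis sketch (illustrative only, without the complex shuffling or adaptive proposal that defines SCEM-UA):

```python
import numpy as np

def metropolis(log_post, x0, n_samples=5000, step=0.5, seed=0):
    """Random-walk Metropolis: propose x' = x + step * N(0, I) and
    accept with probability min(1, post(x') / post(x)). The fixed
    `step` is the proposal choice that adaptive samplers tune."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    chain = []
    for _ in range(n_samples):
        prop = x + step * rng.standard_normal(x.shape)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:  # accept/reject
            x, lp = prop, lp_prop
        chain.append(x.copy())
    return np.array(chain)

# Toy target (not a hydrologic posterior): standard normal.
chain = metropolis(lambda x: -0.5 * np.sum(x**2), np.zeros(1))
```

A poorly scaled `step` makes the chain mix slowly; SCEM-UA's contribution is to evolve the proposal from multiple shuffled complexes so the user need not hand-tune it.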