    Optimal change point detection and localization in sparse dynamic networks

    We study the problem of change point localization in dynamic network models. We assume that we observe a sequence of independent adjacency matrices of the same size, each corresponding to a realization of an unknown inhomogeneous Bernoulli model. The underlying distributions of the adjacency matrices are piecewise constant, and may change over a subset of the time points, called change points. We are concerned with recovering the unknown number and positions of the change points. In our model setting, we allow all the model parameters to change with the total number of time points, including the network size, the minimal spacing between consecutive change points, the magnitude of the smallest change and the degree of sparsity of the networks. We first identify a region of impossibility in the space of the model parameters such that no change point estimator is provably consistent if the data are generated according to parameters falling in that region. We then propose a computationally simple algorithm for network change point localization, called network binary segmentation, that relies on weighted averages of the adjacency matrices. We show that network binary segmentation is consistent over a range of the model parameters that nearly covers the complement of the impossibility region, thus demonstrating the existence of a phase transition for the problem at hand. Next, we devise a more sophisticated algorithm based on singular value thresholding, called local refinement, that delivers more accurate estimates of the change point locations. Under appropriate conditions, local refinement guarantees a minimax optimal rate for network change point localization while remaining computationally feasible.
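
A rough, illustrative sketch of the binary-segmentation idea described above is given below. It is not the authors' procedure: the paper relies on weighted averages of the adjacency matrices, whereas this toy version simply thresholds the Frobenius norm of a CUSUM matrix, and the threshold `tau` and the simulation parameters are ad hoc choices.

```python
import numpy as np

def cusum_matrix(A, s, t, e):
    """CUSUM-type contrast of the adjacency matrices A[s:t] against A[t:e]."""
    n1, n2 = t - s, e - t
    left = A[s:t].sum(axis=0) / n1
    right = A[t:e].sum(axis=0) / n2
    return np.sqrt(n1 * n2 / (n1 + n2)) * (left - right)

def network_binary_segmentation(A, s, e, tau, min_len=2, found=None):
    """Recursively split [s, e) at the candidate point maximizing the Frobenius
    norm of the CUSUM matrix; declare a change point whenever that norm
    exceeds the threshold tau."""
    if found is None:
        found = []
    if e - s < 2 * min_len:
        return found
    stats = {t: np.linalg.norm(cusum_matrix(A, s, t, e))
             for t in range(s + min_len, e - min_len + 1)}
    t_star = max(stats, key=stats.get)
    if stats[t_star] > tau:
        found.append(t_star)
        network_binary_segmentation(A, s, t_star, tau, min_len, found)
        network_binary_segmentation(A, t_star, e, tau, min_len, found)
    return sorted(found)

# Toy data: 60 Bernoulli adjacency matrices on 50 nodes, with the edge
# probability jumping from 0.05 to 0.25 at time 30.
rng = np.random.default_rng(0)
T, n = 60, 50
A = np.stack([rng.binomial(1, 0.05 if t < 30 else 0.25, size=(n, n))
              for t in range(T)])
print(network_binary_segmentation(A, 0, T, tau=30.0))  # tau is an ad hoc choice
```

On the toy sequence the recursion should typically report the single change at time 30; in practice the threshold would need to be calibrated to the network size and sparsity, which is part of what the paper's analysis addresses.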

    Univariate Mean Change Point Detection: Penalization, CUSUM and Optimality

    The problem of univariate mean change point detection and localization based on a sequence of $n$ independent observations with piecewise constant means has been intensively studied for more than half a century, and serves as a blueprint for change point problems in more complex settings. We provide a complete characterization of this classical problem in a general framework in which the upper bound $\sigma^2$ on the noise variance, the minimal spacing $\Delta$ between two consecutive change points and the minimal magnitude $\kappa$ of the changes are allowed to vary with $n$. We first show that consistent localization of the change points is impossible when the signal-to-noise ratio satisfies $\frac{\kappa \sqrt{\Delta}}{\sigma} < \sqrt{\log(n)}$. In contrast, when $\frac{\kappa \sqrt{\Delta}}{\sigma}$ diverges with $n$ at a rate of at least $\sqrt{\log(n)}$, we demonstrate that two computationally efficient change point estimators, one based on the solution to an $\ell_0$-penalized least squares problem and the other on the popular wild binary segmentation algorithm, are both consistent and achieve a localization rate of the order $\frac{\sigma^2}{\kappa^2} \log(n)$. We further show that such a rate is minimax optimal, up to a $\log(n)$ term.
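
The sketch below illustrates the CUSUM statistic and a wild-binary-segmentation-style recursion for the univariate mean model. The number of random intervals and the threshold constant are arbitrary illustrative choices, not the tuning prescribed in the paper, and the interval handling is simplified.

```python
import numpy as np

def cusum(x, s, t, e):
    """CUSUM contrast between the sample means of x[s:t] and x[t:e]."""
    n1, n2 = t - s, e - t
    return np.sqrt(n1 * n2 / (n1 + n2)) * abs(x[s:t].mean() - x[t:e].mean())

def wild_binary_segmentation(x, s, e, tau, intervals, found=None):
    """Search the random intervals intersected with [s, e) for the split with
    the largest CUSUM value; if it exceeds tau, record it and recurse."""
    if found is None:
        found = []
    best_stat, best_t = -np.inf, None
    for a, b in intervals:
        a, b = max(a, s), min(b, e)
        if b - a < 2:
            continue
        for t in range(a + 1, b):
            stat = cusum(x, a, t, b)
            if stat > best_stat:
                best_stat, best_t = stat, t
    if best_t is not None and best_stat > tau:
        found.append(best_t)
        wild_binary_segmentation(x, s, best_t, tau, intervals, found)
        wild_binary_segmentation(x, best_t, e, tau, intervals, found)
    return sorted(found)

# Toy example: a single mean change of size 2 at t = 100, unit noise.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(2.0, 1.0, 100)])
intervals = [tuple(sorted(rng.choice(len(x), size=2, replace=False)))
             for _ in range(200)]
tau = 1.5 * np.sqrt(2 * np.log(len(x)))  # threshold of order sqrt(log n)
print(wild_binary_segmentation(x, 0, len(x), tau, intervals))
```

The threshold is deliberately taken of order $\sqrt{\log(n)}$ to mirror the signal-to-noise condition in the abstract; the leading constant here is a guess and would be chosen by theory or simulation in practice.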

    Divide and Conquer Dynamic Programming: An Almost Linear Time Change Point Detection Methodology in High Dimensions

    We develop a novel, general and computationally efficient framework, called Divide and Conquer Dynamic Programming (DCDP), for localizing change points in time series data with high-dimensional features. DCDP deploys a class of greedy algorithms that are applicable to a broad variety of high-dimensional statistical models and enjoy almost linear computational complexity. We investigate the performance of DCDP in three commonly studied change point settings in high dimensions: the mean model, the Gaussian graphical model, and the linear regression model. In all three cases, we derive non-asymptotic bounds for the accuracy of the DCDP change point estimators. We demonstrate that the DCDP procedures consistently estimate the change points with sharp, and in some cases optimal, rates while incurring significantly smaller computational costs than the best available algorithms. Our findings are supported by extensive numerical experiments on both synthetic and real data.
    Comment: 84 pages, 4 figures, 6 tables
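
As a simplified sketch of the coarse-grid, penalized dynamic program underlying the "divide" stage of DCDP, specialized to the high-dimensional mean model: the grid spacing and penalty value below are hypothetical, and the local-refinement ("conquer") second stage described in the paper is omitted.

```python
import numpy as np

def segment_cost(S1, S2, s, e):
    """Residual sum of squares of X[s:e] around its segment mean, computed
    from cumulative sums S1 (of the rows) and S2 (of the squared norms)."""
    seg_sum = S1[e] - S1[s]
    return (S2[e] - S2[s]) - np.dot(seg_sum, seg_sum) / (e - s)

def coarse_grid_dp(X, grid_step, gamma):
    """Penalized least-squares dynamic program restricted to a coarse grid of
    candidate change points, mimicking the 'divide' stage of DCDP for the
    high-dimensional mean model (no local refinement stage here)."""
    T = X.shape[0]
    S1 = np.vstack([np.zeros(X.shape[1]), np.cumsum(X, axis=0)])
    S2 = np.concatenate([[0.0], np.cumsum((X ** 2).sum(axis=1))])
    grid = list(range(0, T, grid_step)) + [T]
    best, argbest = {0: 0.0}, {}
    for e in grid[1:]:
        cand = {s: best[s] + segment_cost(S1, S2, s, e) + gamma
                for s in grid if s < e and s in best}
        s_star = min(cand, key=cand.get)
        best[e], argbest[e] = cand[s_star], s_star
    cps, e = [], T          # backtrack the selected segment boundaries
    while argbest.get(e, 0) != 0:
        e = argbest[e]
        cps.append(e)
    return sorted(cps)

# Toy example: 20-dimensional mean model with changes at t = 100 and t = 200.
rng = np.random.default_rng(2)
means = [np.zeros(20), np.full(20, 1.0), np.full(20, -1.0)]
X = np.vstack([rng.normal(m, 1.0, size=(100, 20)) for m in means])
print(coarse_grid_dp(X, grid_step=10, gamma=80.0))  # gamma is an ad hoc penalty
```

Restricting the dynamic program to a grid of spacing `grid_step` reduces the number of candidate segmentations roughly by a factor of `grid_step` squared, which is the intuition behind the near-linear running time claimed for the full DCDP procedure.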