
    Block-Conditional Missing at Random Models for Missing Data

    Two major ideas in the analysis of missing data are (a) the EM algorithm [Dempster, Laird and Rubin, J. Roy. Statist. Soc. Ser. B 39 (1977) 1--38] for maximum likelihood (ML) estimation, and (b) the formulation of models for the joint distribution of the data Z and missing data indicators M, and the associated "missing at random" (MAR) condition under which a model for M is unnecessary [Rubin, Biometrika 63 (1976) 581--592]. Most previous work has treated Z and M as single blocks, yielding selection or pattern-mixture models depending on how their joint distribution is factorized. This paper explores "block-sequential" models that interleave subsets of the variables and their missing data indicators, and then make parameter restrictions based on assumptions in each block. These include models that are not MAR. We examine a subclass of block-sequential models we call block-conditional MAR (BCMAR) models, and an associated block-monotone reduced likelihood strategy that typically yields consistent estimates by selectively discarding some data. Alternatively, full ML estimation can often be achieved via the EM algorithm. We examine in some detail BCMAR models for the case of two multinomially distributed categorical variables, and a two-block structure where the first block is categorical and the second block arises from a (possibly multivariate) exponential family distribution.

    Comment: Published at http://dx.doi.org/10.1214/10-STS344 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org)
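    For intuition, here is a minimal Python sketch of the classical EM iteration for two multinomially distributed categorical variables when the second is missing at random: the E-step spreads each incomplete case over the missing category in proportion to the current cell probabilities, and the M-step re-normalizes the expected counts. This is the textbook MAR special case only, not the paper's BCMAR or block-monotone procedures, and all names (`em_multinomial`, `pairs`, `x_only`) are hypothetical.

```python
import numpy as np

def em_multinomial(pairs, x_only, n_x, n_y, n_iter=100, seed=0):
    """EM for the joint multinomial p(X, Y) when Y is missing at random.

    pairs  : list of (x, y) fully observed cases
    x_only : list of x values for cases where Y is missing
    (Illustrative sketch; not the paper's BCMAR algorithm.)
    """
    rng = np.random.default_rng(seed)
    theta = rng.dirichlet(np.ones(n_x * n_y)).reshape(n_x, n_y)
    for _ in range(n_iter):
        # E-step: expected cell counts given the current parameters
        counts = np.zeros((n_x, n_y))
        for x, y in pairs:
            counts[x, y] += 1
        for x in x_only:
            # distribute the incomplete case over y via p(y | x)
            counts[x] += theta[x] / theta[x].sum()
        # M-step: re-normalize expected counts into cell probabilities
        theta = counts / counts.sum()
    return theta
```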

    An Expectation Maximization Method to Learn the Group Structure of Deep Neural Network

    Analyzing multivariate time series data is important for many applications such as automated control, sensor fault diagnosis and financial data analysis. One of the key challenges is to learn latent features automatically from dynamically changing multivariate input. Convolutional neural networks (CNNs) have been successful in learning generalized feature extractors with shared parameters over the spatial domain in visual recognition tasks. For high-dimensional multivariate time series, designing an appropriate CNN model structure is challenging because the kernels may need to extend through the full dimension of the input volume. To address this issue, we propose an Expectation Maximization (EM) method to learn the group structure of deep neural networks so that we can process the multiple high-dimensional kernels efficiently. The algorithm groups the kernels for each channel using the EM method and partitions the kernel matrix into a block matrix. The EM method assumes a Gaussian Mixture Model (GMM), and the parameters of the GMM are updated together with the parameters of the deep neural network by end-to-end backpropagation learning.
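    As a rough illustration of the grouping idea, the sketch below clusters flattened convolution kernels with a GMM and derives a block mask from the group labels. It assumes scikit-learn and uses hypothetical names (`group_kernels`, `weight`); the paper itself learns the GMM parameters jointly with the network by backpropagation, whereas this offline version only conveys the block structure.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def group_kernels(weight, n_groups=4, seed=0):
    """Cluster convolution kernels with a GMM and build a block mask.

    weight : array of shape (out_channels, in_channels * kernel_size),
             i.e. each row is one flattened kernel.
    (Simplified offline sketch; the paper trains the GMM end-to-end.)
    """
    gmm = GaussianMixture(n_components=n_groups, random_state=seed)
    labels = gmm.fit_predict(weight)
    # Kernels assigned to the same group share a block of the matrix
    mask = (labels[:, None] == labels[None, :]).astype(float)
    return labels, mask
```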

    Scalable Text and Link Analysis with Mixed-Topic Link Models

    Many data sets contain rich information about objects, as well as pairwise relations between them. For instance, in networks of websites, scientific papers, and other documents, each node has content consisting of a collection of words, as well as hyperlinks or citations to other nodes. In order to perform inference on such data sets, and make predictions and recommendations, it is useful to have models that are able to capture the processes which generate the text at each node and the links between them. In this paper, we combine classic ideas in topic modeling with a variant of the mixed-membership block model recently developed in the statistical physics community. The resulting model has the advantage that its parameters, including the mixture of topics of each document and the resulting overlapping communities, can be inferred with a simple and scalable expectation-maximization algorithm. We test our model on three data sets, performing unsupervised topic classification and link prediction. For both tasks, our model outperforms several existing state-of-the-art methods, achieving higher accuracy with significantly less computation, analyzing a data set with 1.3 million words and 44 thousand links in a few minutes.

    Comment: 11 pages, 4 figures
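    The text half of such a model can be fit with a few lines of EM. Below is a hedged sketch, with hypothetical names (`plsa_em`, `counts`), of EM for a PLSA-style mixed-topic model over a document-term count matrix; the paper's model adds an analogous likelihood term for the links, but the inference follows the same E-step/M-step alternation.

```python
import numpy as np

def plsa_em(counts, n_topics, n_iter=50, seed=0):
    """EM for a PLSA-style topic model (text component only).

    counts : (n_docs, n_words) term-count matrix
    (Illustrative sketch; the paper's model also generates links.)
    """
    rng = np.random.default_rng(seed)
    n_docs, n_words = counts.shape
    theta = rng.dirichlet(np.ones(n_topics), size=n_docs)  # p(z | d)
    phi = rng.dirichlet(np.ones(n_words), size=n_topics)   # p(w | z)
    for _ in range(n_iter):
        # E-step: responsibilities p(z | d, w), shape (d, z, w)
        resp = theta[:, :, None] * phi[None, :, :]
        resp /= resp.sum(axis=1, keepdims=True) + 1e-12
        # M-step: expected counts weighted by responsibilities
        ez = counts[:, None, :] * resp
        theta = ez.sum(axis=2)
        theta /= theta.sum(axis=1, keepdims=True)
        phi = ez.sum(axis=0)
        phi /= phi.sum(axis=1, keepdims=True)
    return theta, phi
```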