
    Periodic subvarieties of a projective variety under the action of a maximal rank abelian group of positive entropy

    We determine positive-dimensional G-periodic proper subvarieties of an n-dimensional normal projective variety X under the action of an abelian group G of maximal rank n-1 and of positive entropy. The motivation of the paper is to understand the obstruction for X to be G-equivariantly birational to the quotient variety of an abelian variety modulo the action of a finite group. Comment: Asian Journal of Mathematics (to appear), special issue on the occasion of Prof. N. Mok's 60th birthday.

    Learning High-Dimensional Markov Forest Distributions: Analysis of Error Rates

    The problem of learning forest-structured discrete graphical models from i.i.d. samples is considered. An algorithm based on pruning of the Chow-Liu tree through adaptive thresholding is proposed. It is shown that this algorithm is both structurally consistent and risk consistent, and that the error probability of structure learning decays faster than any polynomial in the number of samples under a fixed model size. For the high-dimensional scenario, where the size of the model d and the number of edges k scale with the number of samples n, sufficient conditions on (n, d, k) are given for the algorithm to satisfy structural and risk consistency. In addition, the extremal structures for learning are identified: we prove that the independent (resp. tree) model is the hardest (resp. easiest) to learn using the proposed algorithm, in terms of error rates for structure learning. Comment: Accepted to the Journal of Machine Learning Research (Feb 2011).
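    A minimal sketch, in Python, of the forest-learning idea described above, assuming numpy and networkx are available: estimate pairwise mutual information from the samples, build the Chow-Liu tree as a maximum-weight spanning tree, then prune edges whose empirical mutual information falls below a threshold eps. The plug-in MI estimator and the fixed threshold are illustrative stand-ins for the paper's adaptive thresholding rule, not the authors' implementation.

        import itertools
        import numpy as np
        import networkx as nx

        def empirical_mi(x, y):
            """Plug-in mutual information estimate for two discrete sample vectors."""
            mi = 0.0
            for a in np.unique(x):
                for b in np.unique(y):
                    p_ab = np.mean((x == a) & (y == b))
                    if p_ab > 0:
                        mi += p_ab * np.log(p_ab / (np.mean(x == a) * np.mean(y == b)))
            return mi

        def learn_forest(samples, eps):
            """samples: (n, d) array of discrete observations; eps: pruning threshold."""
            d = samples.shape[1]
            g = nx.Graph()
            g.add_nodes_from(range(d))
            for i, j in itertools.combinations(range(d), 2):
                g.add_edge(i, j, weight=empirical_mi(samples[:, i], samples[:, j]))
            # Chow-Liu tree = maximum-weight spanning tree under empirical MI.
            tree = nx.maximum_spanning_tree(g, weight="weight")
            # Pruning step: keep only tree edges whose empirical MI exceeds eps,
            # which yields a forest rather than a tree.
            forest = nx.Graph()
            forest.add_nodes_from(range(d))
            forest.add_edges_from((u, v) for u, v, w in tree.edges(data="weight") if w >= eps)
            return forest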

    High-Dimensional Gaussian Graphical Model Selection: Walk Summability and Local Separation Criterion

    We consider the problem of high-dimensional Gaussian graphical model selection. We identify a set of graphs for which an efficient estimation algorithm exists; the algorithm is based on thresholding of empirical conditional covariances. Under a set of transparent conditions, we establish structural consistency (or sparsistency) for the proposed algorithm when the number of samples n = omega(J_{min}^{-2} log p), where p is the number of variables and J_{min} is the minimum (absolute) edge potential of the graphical model. The sufficient conditions for sparsistency are based on the notion of walk-summability of the model and the presence of sparse local vertex separators in the underlying graph. We also derive novel non-asymptotic necessary conditions on the number of samples required for sparsistency.
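    A hedged sketch of the conditional-covariance thresholding step, assuming Gaussian samples and numpy: for each candidate edge (i, j), the empirical covariance of X_i and X_j conditioned on small subsets of the remaining variables is computed, and the edge is kept only if the smallest such conditional covariance stays above a threshold xi. The separator-size bound eta, the threshold xi, and the brute-force search over all small conditioning sets (rather than the paper's sparse local vertex separators) are simplifying assumptions.

        import itertools
        import numpy as np

        def cond_cov(S_hat, i, j, cond):
            """Empirical covariance of X_i and X_j conditioned on the variables in `cond`."""
            if not cond:
                return S_hat[i, j]
            cond = list(cond)
            A = S_hat[np.ix_([i], cond)]          # Cov(X_i, X_S)
            B = S_hat[np.ix_(cond, cond)]         # Cov(X_S, X_S)
            C = S_hat[np.ix_(cond, [j])]          # Cov(X_S, X_j)
            return S_hat[i, j] - (A @ np.linalg.solve(B, C))[0, 0]

        def select_edges(samples, eta=2, xi=0.1):
            """samples: (n, p) array. Keep edge (i, j) unless some conditioning set
            of size <= eta makes the empirical conditional covariance small."""
            S_hat = np.cov(samples, rowvar=False)
            p = S_hat.shape[0]
            edges = set()
            for i, j in itertools.combinations(range(p), 2):
                others = [k for k in range(p) if k not in (i, j)]
                min_cc = min(
                    abs(cond_cov(S_hat, i, j, cond))
                    for r in range(eta + 1)
                    for cond in itertools.combinations(others, r)
                )
                if min_cc > xi:
                    edges.add((i, j))
            return edges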

    Astrochemical confirmation of the rapid evolution of massive YSOs and explanation for the inferred ages of hot cores

    Aims. To understand the roles of infall and protostellar evolution in the envelopes of massive young stellar objects (YSOs). Methods. The chemical evolution of gas and dust is traced, including infall and realistic source evolution. The temperatures are determined self-consistently. Both adsorption and desorption of ices are included, using recent laboratory temperature-programmed-desorption measurements. Results. The observed water abundance jump near 100 K is reproduced by an evaporation front which moves outward as the luminosity increases. Ion-molecule reactions produce water below 100 K. The age of the source is constrained to t ~ (8 +/- 4) x 10^4 yr since YSO formation. It is shown that the chemical age-dating of hot cores at ~ a few x 10^3 - 10^4 yr and the disappearance of hot cores on a timescale of ~ 10^5 yr are natural consequences of infall in a dynamic envelope and protostellar evolution. Dynamical structures of ~ 350 AU, such as disks, should contain most of the complex second-generation species. The assumed order of desorption kinetics does not affect these results. Comment: Accepted by A&A Letters; 4 pages, 5 figures.
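    A toy illustration, not the paper's self-consistent radiative transfer: assuming optically thin, blackbody-like dust so that T(r) = (L / 16 pi sigma r^2)^(1/4), the radius at which the dust reaches the ~100 K water evaporation temperature grows with luminosity, which is the sense in which the evaporation front moves outward as the protostar brightens.

        import numpy as np

        SIGMA_SB = 5.670374419e-8   # Stefan-Boltzmann constant [W m^-2 K^-4]
        L_SUN = 3.828e26            # solar luminosity [W]
        AU = 1.495978707e11         # astronomical unit [m]

        def evaporation_front_radius(L_star, T_evap=100.0):
            """Radius [AU] where the equilibrium dust temperature equals T_evap,
            under the optically thin blackbody assumption stated above."""
            return np.sqrt(L_star / (16.0 * np.pi * SIGMA_SB * T_evap**4)) / AU

        for L in (1e3, 1e4, 1e5):   # luminosities in L_sun as the YSO brightens
            print(f"L = {L:.0e} L_sun -> r(100 K) ~ {evaporation_front_radius(L * L_SUN):.0f} AU")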

    Learning Latent Tree Graphical Models

    We study the problem of learning a latent tree graphical model where samples are available only from a subset of variables. We propose two consistent and computationally efficient algorithms for learning minimal latent trees, that is, trees without any redundant hidden nodes. Unlike many existing methods, the observed nodes (or variables) are not constrained to be leaf nodes. Our first algorithm, recursive grouping, builds the latent tree recursively by identifying sibling groups using so-called information distances. One of the main contributions of this work is our second algorithm, which we refer to as CLGrouping. CLGrouping starts with a pre-processing procedure in which a tree over the observed variables is constructed. This global step groups the observed nodes that are likely to be close to each other in the true latent tree, thereby guiding subsequent recursive grouping (or equivalent procedures) on much smaller subsets of variables. This results in more accurate and efficient learning of latent trees. We also present regularized versions of our algorithms that learn latent tree approximations of arbitrary distributions. We compare the proposed algorithms to other methods by performing extensive numerical experiments on various latent tree graphical models such as hidden Markov models and star graphs. In addition, we demonstrate the applicability of our methods on real-world datasets by modeling the dependency structure of monthly stock returns in the S&P index and of the words in the 20 newsgroups dataset.
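    A hedged sketch of the information-distance test underlying recursive grouping, assuming jointly Gaussian variables and numpy: with d_ij = -log |rho_ij|, distances are additive along the latent tree, so Phi_ijk = d_ik - d_jk is (nearly) constant over all other nodes k exactly when i and j form a sibling or parent-child pair. The tolerance below and the omitted tree-reconstruction loop are illustrative simplifications, not the paper's full procedure.

        import numpy as np

        def information_distances(samples):
            """Pairwise d_ij = -log |rho_ij| from an (n, p) Gaussian sample matrix."""
            rho = np.corrcoef(samples, rowvar=False)
            np.fill_diagonal(rho, 1.0)
            return -np.log(np.abs(rho))

        def sibling_test(d, i, j, tol=0.1):
            """True if Phi_ijk = d[i, k] - d[j, k] is (nearly) constant over all
            k != i, j, i.e. i and j are candidate siblings or a parent-child pair."""
            phis = [d[i, k] - d[j, k] for k in range(d.shape[0]) if k not in (i, j)]
            return max(phis) - min(phis) < tol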