
    A graph polynomial for independent sets of bipartite graphs

    We introduce a new graph polynomial that encodes interesting properties of graphs, for example the number of matchings and the number of perfect matchings. Most importantly, for bipartite graphs the polynomial encodes the number of independent sets (#BIS). We analyze the complexity of exact evaluation of the polynomial at rational points and show that for most points exact evaluation is #P-hard (assuming the generalized Riemann hypothesis), while for the remaining points exact evaluation is trivial. We conjecture that a natural Markov chain can be used to approximately evaluate the polynomial for a range of parameters. The conjecture, if true, would imply an approximate counting algorithm for #BIS, a problem shown by [Dyer et al. 2004] to be complete (with respect to so-called AP-reductions) for a rich, logically defined subclass of #P. We give mild support for our conjecture by proving that the Markov chain is rapidly mixing on trees. As a by-product, we show that the "single bond flip" Markov chain for the random cluster model is rapidly mixing on graphs of constant tree-width.
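The #BIS quantity at the center of this abstract can be made concrete with a brute-force enumeration. The sketch below (the function name and graph are illustrative, not from the paper) counts all independent sets of a small bipartite graph by exhaustive search; this exponential-time baseline illustrates the counting problem itself, not the Markov chain approach the abstract conjectures about:

```python
from itertools import combinations

def count_independent_sets(vertices, edges):
    """Count all independent sets (including the empty set) by brute force.

    Exponential-time enumeration: check every vertex subset for the
    absence of internal edges. Illustrates the #BIS quantity only.
    """
    edge_set = {frozenset(e) for e in edges}
    count = 0
    for r in range(len(vertices) + 1):
        for subset in combinations(vertices, r):
            if all(frozenset(pair) not in edge_set
                   for pair in combinations(subset, 2)):
                count += 1
    return count

# A 4-cycle is bipartite, with parts {0, 2} and {1, 3}.
c4_count = count_independent_sets([0, 1, 2, 3],
                                  [(0, 1), (1, 2), (2, 3), (3, 0)])
# Independent sets of C4: the empty set, 4 singletons, and {0,2}, {1,3}.
```

The approximate counting algorithm conjectured in the abstract would replace this enumeration with sampling from a rapidly mixing Markov chain.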

    A Chemical Proteomic Probe for Detecting Dehydrogenases: Catechol Rhodanine

    The inherent complexity of the proteome often demands that it be studied as manageable subsets, termed subproteomes. A subproteome can be defined in a number of ways, although a pragmatic approach is to define it based on common features in an active site that lead to binding of a common small molecule ligand (e.g., a cofactor or a cross-reactive drug lead). The subproteome, so defined, can be purified by affinity chromatography, using that common ligand tethered to a resin. Affinity purification of a subproteome is described in the next chapter. That subproteome can then be analyzed using a common ligand probe, such as a fluorescent common ligand that can be used to stain members of the subproteome in a native gel. Here, we describe such a fluorescent probe, based on a catechol rhodanine acetic acid (CRAA) ligand that binds to dehydrogenases. The CRAA ligand is fluorescent and binds to dehydrogenases at pH > 7, and hence can be used effectively to stain dehydrogenases in native gels to identify which subset of proteins in a mixture are dehydrogenases. Furthermore, if one is designing inhibitors to target one or more of these dehydrogenases, the CRAA staining can be performed in a competitive assay format, with or without inhibitor, to assess the selectivity of the inhibitor for the targeted dehydrogenase. Finally, the CRAA probe is a privileged scaffold for dehydrogenases, and hence can easily be modified to increase affinity for a given dehydrogenase.

    A Tensor Approach to Learning Mixed Membership Community Models

    Community detection is the task of detecting hidden communities from observed interactions. Guaranteed community detection has so far been mostly limited to models with non-overlapping communities, such as the stochastic block model. In this paper, we remove this restriction and provide guaranteed community detection for a family of probabilistic network models with overlapping communities, termed the mixed membership Dirichlet model, first introduced by Airoldi et al. This model allows nodes to have fractional memberships in multiple communities and assumes that the community memberships are drawn from a Dirichlet distribution. Moreover, it contains the stochastic block model as a special case. We propose a unified approach to learning these models via a tensor spectral decomposition method. Our estimator is based on the low-order moment tensor of the observed network, consisting of 3-star counts. Our learning method is fast and relies on simple linear algebraic operations, e.g., singular value decomposition and tensor power iterations. We provide guaranteed recovery of community memberships and model parameters and present a careful finite-sample analysis of our learning method. As an important special case, our results match the best known scaling requirements for the (homogeneous) stochastic block model.
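The "3-star counts" underlying the estimator can be sketched as an empirical third-order moment: for each candidate star center, take the outer product of its adjacency pattern restricted to three disjoint node sets and average over centers. This is a simplified reading of that statistic (the paper's actual estimator involves whitening and correction terms omitted here; all names and the toy graph are illustrative):

```python
def three_star_moment(adj, X, A, B, C):
    """Empirical 3-star moment tensor.

    For each 'center' node x in X, accumulate the outer product of x's
    adjacency rows restricted to disjoint node sets A, B, C, then
    average over the centers. Entry (i, j, k) estimates the probability
    that a random center is simultaneously linked to A[i], B[j], C[k].
    """
    T = [[[0.0] * len(C) for _ in range(len(B))] for _ in range(len(A))]
    for x in X:
        for i, a in enumerate(A):
            for j, b in enumerate(B):
                for k, c in enumerate(C):
                    T[i][j][k] += adj[x][a] * adj[x][b] * adj[x][c]
    m = len(X)
    return [[[t / m for t in row] for row in mat] for mat in T]

# Toy graph: nodes 0 and 1 are candidate centers; 2, 3, 4 are leaves.
adj = [[0] * 5 for _ in range(5)]
for a, b in [(0, 2), (0, 3), (0, 4), (1, 2)]:
    adj[a][b] = adj[b][a] = 1

# Node 0 forms a 3-star on (2, 3, 4); node 1 does not.
T = three_star_moment(adj, X=[0, 1], A=[2], B=[3], C=[4])
```

Decomposing such a moment tensor is what recovers the community membership parameters in the abstract's approach.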

    Tensor decompositions for learning latent variable models

    This work considers a computationally and statistically efficient parameter estimation method for a wide class of latent variable models---including Gaussian mixture models, hidden Markov models, and latent Dirichlet allocation---which exploits a certain tensor structure in their low-order observable moments (typically, of second and third order). Specifically, parameter estimation is reduced to the problem of extracting a certain (orthogonal) decomposition of a symmetric tensor derived from the moments; this decomposition can be viewed as a natural generalization of the singular value decomposition for matrices. Although tensor decompositions are generally intractable to compute, the decomposition of these specially structured tensors can be efficiently obtained by a variety of approaches, including power iterations and maximization approaches (similar to the case of matrices). A detailed analysis of a robust tensor power method is provided, establishing an analogue of Wedin's perturbation theorem for the singular vectors of matrices. This implies a robust and computationally tractable estimation approach for several popular latent variable models.
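The core iteration behind the tensor power method is short enough to sketch directly. The minimal version below repeatedly maps a unit vector v to the contraction T(I, v, v) and renormalizes, on a symmetric tensor with a known orthogonal decomposition; the paper's robust variant adds random restarts and deflation, which are omitted here:

```python
import math

def tensor_apply(T, v):
    """Contract a symmetric 3-tensor with v twice: returns T(I, v, v)."""
    n = len(v)
    return [sum(T[i][j][k] * v[j] * v[k]
                for j in range(n) for k in range(n))
            for i in range(n)]

def tensor_power_iteration(T, v0, steps=50):
    """Plain tensor power iteration: v <- T(I, v, v) / ||T(I, v, v)||.

    Converges (for orthogonally decomposable tensors and generic
    starts) to a robust eigenvector; the associated eigenvalue is
    T(v, v, v).
    """
    v = list(v0)
    for _ in range(steps):
        w = tensor_apply(T, v)
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    eigval = sum(tensor_apply(T, v)[i] * v[i] for i in range(len(v)))
    return eigval, v

# Build T = 2 * e1^{(x)3} + 1 * e2^{(x)3}: its robust eigenpairs are
# (2, e1) and (1, e2), so iteration from a generic start finds (2, e1).
n = 2
T = [[[0.0] * n for _ in range(n)] for _ in range(n)]
T[0][0][0] = 2.0
T[1][1][1] = 1.0
eigval, v = tensor_power_iteration(T, [0.6, 0.8])
```

In the estimation pipeline described in the abstract, T is not constructed from known eigenvectors but derived from the model's observed moments, and the recovered eigenpairs yield the latent parameters.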