28 research outputs found

    Track D Social Science, Human Rights and Political Science

    Full text link
    Peer Reviewed
    https://deepblue.lib.umich.edu/bitstream/2027.42/138414/1/jia218442.pd

    Interval Edge-Colorings of Cartesian Products of Graphs I

    No full text
    A proper edge-coloring of a graph $G$ with colors $1, \ldots, t$ is an interval $t$-coloring if all colors are used and the colors of the edges incident to each vertex of $G$ form an interval of integers. A graph $G$ is interval colorable if it has an interval $t$-coloring for some positive integer $t$. Let $\mathfrak{N}$ be the set of all interval colorable graphs. For a graph $G \in \mathfrak{N}$, the least and the greatest values of $t$ for which $G$ has an interval $t$-coloring are denoted by $w(G)$ and $W(G)$, respectively. In this paper we first show that if $G$ is an $r$-regular graph and $G \in \mathfrak{N}$, then $W(G \square P_m) \geq W(G) + W(P_m) + (m-1)r$ ($m \in \mathbb{N}$) and $W(G \square C_{2n}) \geq W(G) + W(C_{2n}) + nr$ ($n \geq 2$). Next, we investigate interval edge-colorings of grids, cylinders and tori. In particular, we prove that if $G \square H$ is planar and both factors have at least 3 vertices, then $G \square H \in \mathfrak{N}$ and $w(G \square H) \leq 6$. Finally, we confirm the first author's conjecture on the $n$-dimensional cube $Q_n$ and show that $Q_n$ has an interval $t$-coloring if and only if $n \leq t \leq \frac{n(n+1)}{2}$.
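    To make the definition above concrete, here is a minimal sketch (illustrative, not from the paper) that checks whether a given edge-coloring is an interval $t$-coloring in the sense of the abstract: every color $1, \ldots, t$ is used, the coloring is proper at each vertex, and the colors at each vertex form a set of consecutive integers. The graph encoding and function name are assumptions made for the example.

# Minimal sketch (illustrative): check whether an edge-coloring is an
# interval t-coloring as defined in the abstract above.
from collections import defaultdict

def is_interval_t_coloring(edges, coloring, t):
    """edges: list of (u, v) pairs; coloring: dict mapping each edge to a color in 1..t."""
    # All colors 1..t must be used.
    if set(coloring.values()) != set(range(1, t + 1)):
        return False
    # Collect the colors incident to each vertex.
    incident = defaultdict(list)
    for (u, v) in edges:
        c = coloring[(u, v)]
        incident[u].append(c)
        incident[v].append(c)
    for colors in incident.values():
        colors = sorted(colors)
        # Proper at this vertex: no repeated color.
        if len(set(colors)) != len(colors):
            return False
        # Interval condition: the colors are consecutive integers.
        if colors[-1] - colors[0] != len(colors) - 1:
            return False
    return True

# Example: Q_2 is the 4-cycle; this 2-coloring is an interval coloring,
# consistent with the bound n <= t <= n(n+1)/2 for n = 2 stated in the abstract.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
coloring = {(0, 1): 1, (1, 2): 2, (2, 3): 1, (3, 0): 2}
print(is_interval_t_coloring(edges, coloring, 2))  # True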

    Improving VAE based molecular representations for compound property prediction

    Full text link
    Collecting labeled data for many important tasks in chemoinformatics is time consuming and requires expensive experiments. In recent years, machine learning has been used to learn rich representations of molecules from large-scale unlabeled molecular datasets and to transfer that knowledge to more challenging tasks with limited data. Variational autoencoders are one of the tools that have been proposed to perform this transfer for both chemical property prediction and molecular generation tasks. In this work we propose a simple method to improve the chemical property prediction performance of machine learning models by incorporating additional information on correlated molecular descriptors into the representations learned by variational autoencoders. We verify the method on three property prediction tasks. We explore the impact of the number of incorporated descriptors, the correlation between the descriptors and the target properties, the sizes of the datasets, and so on. Finally, we show the relation between the performance of property prediction models and the distance between the property prediction dataset and the larger unlabeled dataset in the representation space.
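    The abstract does not spell out how the descriptor information is incorporated; one plausible reading is to train the VAE with an auxiliary head that regresses the correlated descriptors from the latent code, so the learned representation carries descriptor information. The sketch below follows that assumption only; the module and variable names (MolVAE, descriptor_head, the loss weights) are illustrative, and the inputs are stand-in feature vectors rather than a real molecular featurization.

# Hedged sketch (assumed formulation, not the paper's exact model): a VAE whose
# latent code is also trained to predict correlated molecular descriptors, so a
# downstream property-prediction model can reuse the descriptor-aware embedding.
import torch
import torch.nn as nn

class MolVAE(nn.Module):
    def __init__(self, in_dim, latent_dim, n_descriptors):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim))
        # Auxiliary head: predict correlated descriptors from the latent code.
        self.descriptor_head = nn.Linear(latent_dim, n_descriptors)

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.decoder(z), self.descriptor_head(z), mu, logvar

def loss_fn(x, x_rec, d_true, d_pred, mu, logvar, beta=1.0, gamma=1.0):
    rec = nn.functional.mse_loss(x_rec, x)                            # reconstruction
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())    # KL divergence
    aux = nn.functional.mse_loss(d_pred, d_true)                      # descriptor term
    return rec + beta * kld + gamma * aux

# Toy usage with random stand-in features and descriptors.
model = MolVAE(in_dim=128, latent_dim=32, n_descriptors=8)
x = torch.randn(16, 128)   # placeholder molecular feature vectors
d = torch.randn(16, 8)     # placeholder correlated descriptors
x_rec, d_pred, mu, logvar = model(x)
loss = loss_fn(x, x_rec, d, d_pred, mu, logvar)
loss.backward()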