
    Learning Linear Causal Representations from Interventions under General Nonlinear Mixing

    We study the problem of learning causal representations from unknown, latent interventions in a general setting, where the latent distribution is Gaussian but the mixing function is completely general. We prove strong identifiability results given unknown single-node interventions, i.e., without access to the intervention targets. This generalizes prior work, which has focused on weaker classes such as linear maps or paired counterfactual data, and it is the first instance of causal identifiability from non-paired interventions for deep neural network embeddings. Our proof relies on carefully uncovering the high-dimensional geometric structure present in the data distribution after a nonlinear density transformation, which we capture by analyzing quadratic forms of precision matrices of the latent distributions. Finally, we propose a contrastive algorithm to identify the latent variables in practice and evaluate its performance on various tasks. Comment: 38 pages.
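
    A rough sketch of the setting, in our own notation (not taken from the paper): the observations are a nonlinear image of Gaussian latents, and comparing intervened against observational densities isolates quadratic forms of the latent precision matrices.

    ```latex
    % Our notation, assumed for illustration only.
    % Latent causal variables z are Gaussian; we observe x = f(z)
    % for an unknown, fully general nonlinear mixing f:
    \[
      z \sim \mathcal{N}\bigl(\mu^{(0)}, \Sigma^{(0)}\bigr), \qquad x = f(z).
    \]
    % An unknown single-node intervention i produces a new Gaussian
    % N(\mu^{(i)}, \Sigma^{(i)}). Writing \Theta = \Sigma^{-1} for the
    % precision matrices, the log-density ratio between the intervened
    % and observational latent distributions is
    \[
      \log \frac{p_i(z)}{p_0(z)}
        = -\tfrac{1}{2}\, z^\top \bigl(\Theta^{(i)} - \Theta^{(0)}\bigr) z
          + b_i^\top z + c_i
    \]
    % for some vector b_i and constant c_i. For an invertible mixing f
    % the Jacobian factors cancel in this ratio, so the quadratic forms
    % \Theta^{(i)} - \Theta^{(0)} survive the nonlinear transformation
    % and carry geometric structure usable for identification.
    ```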

    Global hypercontractivity and its applications

    The hypercontractive inequality on the discrete cube plays a crucial role in many fundamental results in the Analysis of Boolean functions, such as the KKL theorem, Friedgut's junta theorem and the invariance principle. In these results the cube is equipped with the uniform measure, but it is desirable, particularly for applications to the theory of sharp thresholds, to also obtain such results for general p-biased measures. However, simple examples show that when p = o(1), there is no hypercontractive inequality that is strong enough. In this paper, we establish an effective hypercontractive inequality for general p that applies to 'global functions', i.e. functions that are not significantly affected by a restriction of a small set of coordinates. This class of functions appears naturally, e.g. in Bourgain's sharp threshold theorem, which states that such functions exhibit a sharp threshold. We demonstrate the power of our tool by strengthening Bourgain's theorem, thereby making progress on a conjecture of Kahn and Kalai, and by establishing a p-biased analog of the invariance principle. Our results have significant applications in Extremal Combinatorics. Here we obtain new results on the Turán number of any bounded-degree uniform hypergraph obtained as the expansion of a hypergraph of bounded uniformity. These are asymptotically sharp over an essentially optimal regime for both the uniformity and the number of edges, and they solve a number of open problems in the area. In particular, we give general conditions under which the crosscut parameter asymptotically determines the Turán number, answering a question of Mubayi and Verstraëte. We also apply the Junta Method to refine our asymptotic results and obtain several exact results, including proofs of the Huang–Loh–Sudakov conjecture on cross matchings and the Füredi–Jiang–Seiver conjecture on path expansions. Comment: Subsumes arXiv:1906.0556
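
    For reference, the classical inequality the abstract builds on, for the uniform measure on the discrete cube (a standard statement, with the exponents written r, s to avoid clashing with the bias parameter p):

    ```latex
    % Bonami--Beckner hypercontractivity on \{-1,1\}^n with the uniform
    % measure. T_\rho is the noise operator, acting on the Fourier
    % expansion of f by damping the level-|S| coefficients:
    \[
      T_\rho f \;=\; \sum_{S \subseteq [n]} \rho^{|S|}\, \hat{f}(S)\, \chi_S,
      \qquad
      \|T_\rho f\|_s \;\le\; \|f\|_r
      \quad \text{for } 1 \le r \le s
      \text{ and } \rho \le \sqrt{\tfrac{r-1}{s-1}}.
    \]
    % The abstract's point: for the p-biased measure with p = o(1), no
    % analogue of this inequality is strong enough in general, which is
    % why the new inequality is restricted to ``global'' functions.
    ```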

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Randomised algorithms for counting and generating combinatorial structures

    SIGLE. Available from the British Library Document Supply Centre (BLDSC), shelf mark DSC:D85048. United Kingdom.

    A generative model for latent position graphs

    Recently, there has been an explosion of research into machine learning methods applied to graph data. Most work focuses on node classification or graph classification; however, there is much to be gained by instead learning a generative model for the underlying random graph distribution. We present a novel neural-network-based approach to learning generative models for random graphs. The features used for training are graphlets, i.e. counts of small-order subgraphs, and the loss function is based on a moment estimator for these features. Random graphs are realized by feeding random noise into the network and applying a kernel to the output; in this way, our model is a generalization of the ubiquitous Random Dot Product Graph. Networks produced this way are demonstrated to imitate data from chemistry, medicine, and social networks. The generated graphs are similar enough to the target data to fool discriminator neural networks otherwise capable of separating classes of random graphs. This method is inexpensive, accurate, and readily applied to data-poor problems.
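
    A minimal sketch of the sampling side as we read the abstract (all names, the network shape, and the choice of a sigmoid dot-product kernel are ours, not the authors'):

    ```python
    # Sketch: feed random noise through a small network to obtain latent
    # positions, then apply a kernel to pairs of positions to get edge
    # probabilities, generalizing the Random Dot Product Graph (RDPG).
    import numpy as np

    rng = np.random.default_rng(0)

    def mlp(z, W1, b1, W2, b2):
        """Toy two-layer network mapping noise to latent positions."""
        h = np.tanh(z @ W1 + b1)
        return h @ W2 + b2

    def sample_graph(n=50, noise_dim=8, latent_dim=4):
        # Hypothetical fixed weights; in training these would be learned
        # by matching moment estimates of graphlet (small-subgraph)
        # counts between generated and target graphs.
        W1 = rng.normal(size=(noise_dim, 16)) / np.sqrt(noise_dim)
        b1 = np.zeros(16)
        W2 = rng.normal(size=(16, latent_dim)) / 4.0
        b2 = np.zeros(latent_dim)

        z = rng.normal(size=(n, noise_dim))    # random noise input
        X = mlp(z, W1, b1, W2, b2)             # latent positions
        logits = X @ X.T                       # dot-product kernel
        P = 1.0 / (1.0 + np.exp(-logits))      # edge probabilities
        np.fill_diagonal(P, 0.0)               # no self-loops
        U = rng.uniform(size=(n, n))
        A = np.triu(U < P, k=1)                # sample upper triangle
        return (A | A.T).astype(int)           # symmetric adjacency

    A = sample_graph()
    print("nodes:", A.shape[0], "edges:", A.sum() // 2)
    ```

    With independent latent positions and a pure dot-product kernel this reduces exactly to an RDPG; the learned network and the free choice of kernel are what add expressiveness.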

    Influences of Probability Instruction on Undergraduates' Understanding of Counting Processes

    Historically, students in an introductory finite mathematics course at a major university in the mid-south have struggled most with the counting and probability unit, leading instructors to question whether there was a better way to help students master the material. The purpose of this study was to begin to understand the connections that undergraduate finite mathematics students make between counting and probability. By examining student performance in counting and probability, this study provides insights that inform future instruction in courses covering these topics. Consequently, it lays the groundwork for future inquiries in the field of undergraduate combinatorics education that will further improve student learning and performance in counting and probability.