
    Input and Weight Space Smoothing for Semi-supervised Learning

    We propose regularizing the empirical loss for semi-supervised learning by acting on both the input (data) space and the weight (parameter) space. We show that the two are not equivalent and are in fact complementary: one affects the minimality of the resulting representation, the other its insensitivity to nuisance variability. We propose a method to perform such smoothing, which combines known input-space smoothing with a novel weight-space smoothing based on a min-max (adversarial) optimization. The resulting Adversarial Block Coordinate Descent (ABCD) algorithm performs gradient ascent with a small learning rate on a random subset of the weights, and standard gradient descent on the remaining weights in the same mini-batch. It achieves performance comparable to the state of the art without resorting to heavy data augmentation, using a relatively simple architecture.
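    As a rough illustration of the update rule described above, the sketch below applies one ABCD-style step in PyTorch: a random subset of weight coordinates takes a gradient ascent step with a small learning rate, while the remaining coordinates take a standard descent step computed from the same mini-batch. The per-coordinate Bernoulli mask, the function name abcd_step, and the parameters descent_lr, ascent_lr, and ascent_frac are illustrative assumptions, not the authors' reference implementation.

        import torch

        def abcd_step(model, loss_fn, batch,
                      descent_lr=0.1, ascent_lr=1e-3, ascent_frac=0.05):
            # One ABCD-style step (sketch): gradient ascent with a small
            # learning rate on a random subset of weights, standard gradient
            # descent on the rest, both from the same mini-batch gradient.
            inputs, targets = batch
            loss = loss_fn(model(inputs), targets)
            model.zero_grad()
            loss.backward()
            with torch.no_grad():
                for p in model.parameters():
                    if p.grad is None:
                        continue
                    # Bernoulli mask selecting the coordinates to perturb
                    # adversarially (an assumed selection scheme).
                    mask = (torch.rand_like(p) < ascent_frac).float()
                    p += ascent_lr * mask * p.grad          # ascent on the subset
                    p -= descent_lr * (1 - mask) * p.grad   # descent on the rest
            return loss.item()

    In practice the descent half of the update would likely be delegated to a standard optimizer; the explicit in-place update here only makes the block-coordinate ascent/descent split visible.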

    Program Synthesis Meets Deep Learning for Decoding Regulatory Networks

    With ever-growing data sets spanning DNA sequencing all the way to single-cell transcriptomics, we now face the question of how to turn this vast amount of information into knowledge. How do we integrate these large data sets into a coherent whole to help understand biological programs? The last few years have seen a growing interest in machine learning methods for analysing patterns in high-throughput data sets, and an increasing interest in using program synthesis techniques to reconstruct and analyse executable models of gene regulatory networks. In this review, we discuss the synergies between the two approaches and share our views on how they can be combined to reconstruct executable mechanistic programs directly from large-scale genomic data.