
    Mode Regularized Generative Adversarial Networks

    Although Generative Adversarial Networks achieve state-of-the-art results on a variety of generative tasks, they are regarded as highly unstable and prone to missing modes. We argue that these bad behaviors of GANs are due to the very particular functional shape of the trained discriminators in high-dimensional spaces, which can easily cause training to get stuck or push probability mass in the wrong direction, towards regions of higher concentration than those of the data-generating distribution. We introduce several ways of regularizing the objective, which can dramatically stabilize the training of GAN models. We also show that our regularizers help distribute probability mass fairly across the modes of the data-generating distribution during the early phases of training, thus providing a unified solution to the missing-modes problem. Comment: Published as a conference paper at ICLR 2017.
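
    The abstract does not give the exact form of the regularizers, so the following is only a minimal, hypothetical PyTorch sketch of how a reconstruction-style mode regularizer could be folded into a generator loss; the names G, D, E, lambda1 and lambda2 are illustrative assumptions, not the paper's notation.

    ```python
    import torch
    import torch.nn as nn

    def generator_loss(G, D, E, x_real, z, lambda1=0.2, lambda2=0.2):
        """Non-saturating GAN generator loss plus a reconstruction-based
        regularizer that pulls G(E(x_real)) back towards x_real, encouraging
        the generator to cover the modes present in the real data."""
        bce = nn.BCEWithLogitsLoss()

        # Usual adversarial term: make the discriminator label G(z) as real.
        logits_fake = D(G(z))
        adv_loss = bce(logits_fake, torch.ones_like(logits_fake))

        # Regularizing terms: pass real samples through the encoder E and back
        # through G, then penalize both the reconstruction error and how fake
        # the reconstruction looks to the discriminator.
        recon = G(E(x_real))
        recon_loss = torch.mean((recon - x_real) ** 2)
        logits_recon = D(recon)
        recon_adv_loss = bce(logits_recon, torch.ones_like(logits_recon))

        return adv_loss + lambda1 * recon_loss + lambda2 * recon_adv_loss
    ```

    The intuition, as the abstract suggests, is that the reconstruction term supplies the generator with a gradient signal tied to real data even in regions where the discriminator alone would push probability mass away from under-covered modes.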

    Investigating the Non-Linear Relationships in the Expectancy Theory: The Case of Crowdsourcing Marketplace

    The crowdsourcing marketplace, a new platform through which companies or individuals source ideas or work from the public, has become popular in the contemporary world. A key issue for the sustainability of this type of marketplace is the effort that problem solvers expend on online tasks. However, the predictors of effort investment in the crowdsourcing context are rarely investigated. In this study, building on expectancy theory, which highlights the roles of reward valence, trust, and self-efficacy, we develop a research model of the factors influencing effort. Further, a non-linear relationship between self-efficacy and effort is proposed. Based on a field survey, we found that: (1) reward valence and trust positively influence effort; (2) when task complexity is high, there is a convex relationship between self-efficacy and effort; and (3) when task complexity is low, there is a concave relationship between self-efficacy and effort. Theoretical and practical implications are also discussed.
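
    The abstract does not spell out the model specification, but a curvilinear (convex or concave) effect of self-efficacy is commonly tested with a quadratic term moderated by task complexity; the equation below is therefore only an assumed illustration of how such a hypothesis could be formalized, with SE for self-efficacy and TC for task complexity.

    ```latex
    \[
    \mathrm{Effort} = \beta_0 + \beta_1\,\mathrm{SE} + \beta_2\,\mathrm{SE}^2
        + \beta_3\,\mathrm{TC} + \beta_4\,(\mathrm{SE}\times\mathrm{TC})
        + \beta_5\,(\mathrm{SE}^2\times\mathrm{TC}) + \varepsilon
    \]
    ```

    Under a specification of this kind, the curvature at a given level of task complexity is governed by the sign of the combined quadratic coefficient \(\beta_2 + \beta_5\,\mathrm{TC}\): a positive value yields the convex shape reported for high-complexity tasks, a negative value the concave shape reported for low-complexity tasks.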

    Verifying Concurrent Data Structures Using Data-Expansion

    We present the first thread-modular proof of a highly concurrent binary search tree. This proof tackles the problem of reasoning about complicated thread interferences using only thread-modular invariants. The key tool in this proof is the Data-Expansion Lemma, a novel lemma that allows us to reason about search operations in any given state. We highlight the power of this lemma when combined with our generalized version of the classical Hindsight Lemma, which enables us to prove linearizability by reasoning about the temporal properties of the operations instead of reasoning about linearization points directly. The Data-Expansion Lemma provides an interesting solution to the proof blow-up problem when reasoning about concurrent data structures by separating the verification of effectful and effectless operations. We show that our proof methodology is widely applicable to several published algorithms and argue that many advanced highly concurrent data structures can be surprisingly easy to verify using thread-modular arguments.
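
    The abstract includes no code, so the snippet below is only a hypothetical Python sketch of the kind of effectless (read-only) operation such lemmas target: a lock-free search that may race with concurrent inserts and removes, and whose correctness is argued from which keys were reachable at some moment during the traversal rather than from a single fixed linearization point.

    ```python
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Node:
        key: int
        left: Optional["Node"] = None
        right: Optional["Node"] = None

    def contains(root: Optional[Node], key: int) -> bool:
        """Effectless search: follows child pointers and never writes.

        Under concurrent updates the traversal may observe a mix of old and
        new tree states; a hindsight-style (temporal) argument is what lets
        one conclude the returned answer was correct at some point during
        the call, without pinning down a linearization point in advance.
        """
        node = root
        while node is not None:
            if key == node.key:
                return True
            node = node.left if key < node.key else node.right
        return False
    ```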
