73 research outputs found

    Compelling Intimacies: Domesticity, Sexuality, and Agency

    This introduction highlights what we call Compelling Intimacies: the multiple desires, affects, and affinities that arise at the intersection of institutions, actors, technologies, and ethical discourses to exert persuasive pressures on subjects. Each article animates different facets of the intensities born of intimacy as they operate across social and relational fields. The authors separate agency from intention in their efforts to identify the vitality of human and non-human relations. Together, the articles demonstrate how domesticities arise through diverse sets of circumstances, emerging in multiple incarnations (often in the same household) in such a way as to generate a wide range of affects and affinities. Finally, each author turns attention to the so-called small events that come to affirm or deny life as given form in everyday household arrangements, kin relations, friendships, and institutional settings, thereby suggesting the political stakes evoked by differing forms of care.

    SINVAD: Search-based Image Space Navigation for DNN Image Classifier Test Input Generation

    The testing of Deep Neural Networks (DNNs) has become increasingly important as DNNs are widely adopted by safety-critical systems. While many test adequacy criteria have been suggested, automated test input generation for many types of DNNs remains a challenge because the raw input space is too large to randomly sample or to navigate and search for plausible inputs. Consequently, current testing techniques for DNNs depend on small local perturbations to existing inputs, based on the metamorphic testing principle. We propose new ways to search not over the entire image space, but rather over a plausible input space that resembles the true training distribution. This space is constructed using Variational Autoencoders (VAEs), and navigated through their latent vector space. We show that this space helps efficiently produce test inputs that can reveal information about the robustness of DNNs when dealing with realistic tests, opening the field to meaningful exploration through the space of highly structured images.
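
    The core idea of searching a VAE's latent space rather than raw pixel space can be illustrated with a minimal sketch. The sketch below assumes a trained PyTorch VAE exposing encode()/decode() methods and a trained classifier; the simple random hill-climbing loop is an illustrative stand-in, not necessarily the search algorithm the paper itself uses.

    ```python
    import torch

    def latent_space_search(vae, classifier, seed_image, steps=200, sigma=0.5):
        """Search the VAE latent space for an input the classifier mislabels.

        Assumes vae.encode() returns a single latent vector and vae.decode()
        maps latent vectors back to images (both are illustrative assumptions).
        """
        with torch.no_grad():
            z = vae.encode(seed_image.unsqueeze(0))  # latent code of the seed image
            original_label = classifier(seed_image.unsqueeze(0)).argmax(dim=1)
            for _ in range(steps):
                candidate_z = z + sigma * torch.randn_like(z)  # perturb in latent space
                candidate = vae.decode(candidate_z)            # decode to a plausible image
                pred = classifier(candidate).argmax(dim=1)
                if pred != original_label:
                    # The decoded image stays near the training distribution
                    # yet flips the prediction: a candidate test input.
                    return candidate, pred
        return None, original_label
    ```

    Because every candidate is decoded from a latent vector, the search stays inside the region of image space the VAE has learned to model, which is what makes the generated tests plausible rather than arbitrary pixel noise.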

    Gazing at subculture with Lacan


    On the Challenges of Physical Implementations of RBMs

    Restricted Boltzmann machines (RBMs) are powerful machine learning models, but learning and some kinds of inference in the model require sampling-based approximations, which, in classical digital computers, are implemented using expensive MCMC. Physical computation offers the opportunity to reduce the cost of sampling by building physical systems whose natural dynamics correspond to drawing samples from the desired RBM distribution. Such a system avoids the burn-in and mixing cost of a Markov chain. However, hardware implementations of this variety usually entail limitations such as low precision and a limited range for the parameters, as well as restrictions on the size and topology of the RBM. We conduct software simulations to determine how harmful each of these restrictions is. Our simulations are designed to reproduce aspects of the D-Wave quantum computer, but the issues we investigate arise in most forms of physical computation.
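
    To make the setup concrete, here is a hedged sketch of block Gibbs sampling for a small binary RBM, with weights quantized to a few bits to mimic the kind of low-precision parameters the paper studies. The layer sizes, bit width, and quantization scheme are illustrative assumptions, not the authors' simulation setup.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def quantize(w, bits=4, w_max=1.0):
        """Round parameters onto a coarse grid, as limited-precision hardware would."""
        levels = 2 ** bits - 1
        w = np.clip(w, -w_max, w_max)
        return np.round((w + w_max) / (2 * w_max) * levels) / levels * (2 * w_max) - w_max

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def gibbs_step(v, W, b_v, b_h):
        """One alternating Gibbs update: sample hiddens given visibles, then visibles given hiddens."""
        h = (rng.random(b_h.shape) < sigmoid(v @ W + b_h)).astype(float)
        v = (rng.random(b_v.shape) < sigmoid(h @ W.T + b_v)).astype(float)
        return v, h

    n_v, n_h = 16, 8
    W = quantize(rng.normal(0, 0.1, (n_v, n_h)))   # low-precision weights
    b_v, b_h = np.zeros(n_v), np.zeros(n_h)
    v = rng.integers(0, 2, n_v).astype(float)
    for _ in range(1000):   # the MCMC burn-in a physical sampler would avoid
        v, h = gibbs_step(v, W, b_v, b_h)
    ```

    The long burn-in loop at the end is exactly the cost a physical sampler promises to eliminate, while the quantize() step models the price paid in parameter precision.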

    An Empirical Investigation of Catastrophic Forgetting in Gradient-Based Neural Networks

    Catastrophic forgetting is a problem faced by many machine learning models and algorithms. When trained on one task, then trained on a second task, many machine learning models "forget" how to perform the first task. This is widely believed to be a serious problem for neural networks. Here, we investigate the extent to which the catastrophic forgetting problem occurs for modern neural networks, comparing both established and recent gradient-based training algorithms and activation functions. We also examine the effect of the relationship between the first task and the second task on catastrophic forgetting. We find that it is always best to train using the dropout algorithm: dropout is consistently best at adapting to the new task and at remembering the old task, and it yields the best tradeoff curve between these two extremes. We find that different tasks and relationships between tasks result in very different rankings of activation function performance. This suggests the choice of activation function should always be cross-validated.
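
    The train-on-A-then-train-on-B protocol behind these measurements is easy to state in code. The sketch below is an illustrative assumption of one such setup (a small dropout MLP, echoing the abstract's finding that dropout helps); the datasets, sizes, and hyperparameters are placeholders, not the paper's experimental configuration.

    ```python
    import torch
    import torch.nn as nn

    def make_net(p_drop=0.5):
        # Two-layer MLP with dropout; dimensions are placeholder assumptions.
        return nn.Sequential(nn.Linear(784, 256), nn.ReLU(),
                             nn.Dropout(p_drop), nn.Linear(256, 10))

    def train(net, loader, epochs=1):
        opt = torch.optim.SGD(net.parameters(), lr=0.1)
        loss_fn = nn.CrossEntropyLoss()
        net.train()
        for _ in range(epochs):
            for x, y in loader:
                opt.zero_grad()
                loss_fn(net(x), y).backward()
                opt.step()

    @torch.no_grad()
    def accuracy(net, loader):
        net.eval()
        correct = total = 0
        for x, y in loader:
            correct += (net(x).argmax(dim=1) == y).sum().item()
            total += y.numel()
        return correct / total

    # Protocol: train on task A, record accuracy on A, train on task B,
    # then re-test on A; the drop quantifies forgetting. The loaders below
    # are hypothetical placeholders for two task datasets.
    # net = make_net()
    # train(net, task_a_loader); acc_before = accuracy(net, task_a_loader)
    # train(net, task_b_loader); acc_after = accuracy(net, task_a_loader)
    # forgetting = acc_before - acc_after
    ```

    Repeating this protocol across training algorithms, activation functions, and task pairs is what lets one rank methods by how gracefully they trade new-task accuracy against old-task retention.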

    Challenges in Representation Learning: A report on three machine learning contests

    The ICML 2013 Workshop on Challenges in Representation Learning focused on three challenges: the black box learning challenge, the facial expression recognition challenge, and the multimodal learning challenge. We describe the datasets created for these challenges and summarize the results of the competitions. We provide suggestions for organizers of future challenges and some comments on what kind of knowledge can be gained from machine learning competitions.