11,941 research outputs found

    Adversarial attacks hidden in plain sight

    Convolutional neural networks have achieved a string of successes in recent years, but their lack of interpretability remains a serious issue. Adversarial examples are designed to deliberately fool neural networks into making any desired incorrect classification, potentially with very high confidence. Several defensive approaches increase robustness against adversarial attacks by demanding attacks of greater magnitude, which lead to visible artifacts. By taking human visual perception into account, we develop a technique that allows us to hide such adversarial attacks in regions of high complexity, so that they are imperceptible even to an astute observer. We carry out a user study on classifying adversarially modified images to validate the perceptual quality of our approach, and find significant evidence for its concealment with regard to human visual perception.
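
    The key idea — concentrating the perturbation where local image complexity is high — can be sketched as follows. This is a minimal illustration, not the authors' method: using per-window standard deviation as the complexity measure, and a simple multiplicative mask, are both assumptions.

```python
import numpy as np

def perceptual_mask(image, window=8):
    """Local standard deviation as a crude complexity proxy:
    textured windows get weights near 1, flat windows near 0."""
    h, w = image.shape
    mask = np.zeros((h, w))
    for y in range(0, h, window):
        for x in range(0, w, window):
            patch = image[y:y + window, x:x + window]
            mask[y:y + window, x:x + window] = patch.std()
    return mask / (mask.max() + 1e-8)

def hide_perturbation(image, perturbation, window=8):
    """Scale the perturbation by local complexity before applying it,
    so the attack concentrates where the eye notices it least."""
    return np.clip(image + perturbation * perceptual_mask(image, window), 0.0, 1.0)
```

    On an image whose left half is flat and whose right half is textured, the masked perturbation leaves the flat half essentially untouched while the textured half absorbs the attack.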

    The Second-Person Perspective in the Preface of Nicholas of Cusa’s De Visione Dei

    In the preface of De visione Dei, a multidimensional, embodied experience of the second-person perspective becomes the medium by which Nicholas of Cusa’s audience, the Benedictine brothers of Tegernsee, receive answers to questions regarding whether, and in what sense, mystical theology’s divine term is an object of contemplation, and whether union with God is a matter of knowledge or love. The experience of joint attention described in this text is enigmatic, dynamic, integrative, and transformative. As such, it instantiates the coincidentia oppositorum and docta ignorantia which, for Cusa, alone can give rise to a vision of the infinite.

    From Imitation to Prediction, Data Compression vs Recurrent Neural Networks for Natural Language Processing

    In recent studies [1][13][12], Recurrent Neural Networks were used for generative processes, and their surprising performance can be explained by their ability to make good predictions. Data compression is likewise based on prediction. The problem therefore comes down to whether a data compressor could perform as well as recurrent neural networks on natural language processing tasks and, if so, whether a compression algorithm is even more intelligent than a neural network at specific tasks related to human language. In the course of this work we discovered what we believe is the fundamental difference between a data compression algorithm and a recurrent neural network.
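
    The connection between compression and prediction can be made concrete with a standard trick (not from this paper): using an off-the-shelf compressor as an implicit similarity model via the normalized compression distance, and classifying text by which labelled examples it compresses best with. The label names and examples here are illustrative.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: if x and y share structure,
    compressing their concatenation costs little more than
    compressing the larger of the two alone."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

def classify(text: str, labelled: dict) -> str:
    """Assign text to the label whose examples compress best with it."""
    return min(labelled, key=lambda lbl: min(ncd(text.encode(), ex.encode())
                                             for ex in labelled[lbl]))
```

    No parameters are learned: the compressor's built-in model of redundancy does the work that a trained network's predictions would otherwise do.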

    Information Compression, Intelligence, Computing, and Mathematics

    This paper presents evidence for the idea that much of artificial intelligence, human perception and cognition, mainstream computing, and mathematics may be understood as compression of information via the matching and unification of patterns. This is the basis for the "SP theory of intelligence", outlined in the paper and fully described elsewhere. Relevant evidence may be seen: in empirical support for the SP theory; in some advantages of information compression (IC) in terms of biology and engineering; in our use of shorthands and ordinary words in language; in how we merge successive views of any one thing; in visual recognition; in binocular vision; in visual adaptation; in how we learn lexical and grammatical structures in language; and in perceptual constancies. IC via the matching and unification of patterns may be seen in both computing and mathematics: in IC via equations; in the matching and unification of names; in the reduction or removal of redundancy from unary numbers; in the workings of Post's Canonical System and the transition function in the Universal Turing Machine; in the way computers retrieve information from memory; in systems like Prolog; and in the query-by-example technique for information retrieval. The chunking-with-codes technique for IC may be seen in the use of named functions to avoid repetition of computer code. The schema-plus-correction technique may be seen in functions with parameters and in the use of classes in object-oriented programming. And the run-length coding technique may be seen in multiplication, in division, and in several other devices in mathematics and computing. The SP theory resolves the apparent paradox of "decompression by compression". And computing and cognition as IC is compatible with the uses of redundancy in such things as backup copies to safeguard data and understanding speech in a noisy environment.
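
    One of the techniques the abstract names, run-length coding, is small enough to show directly. This is a generic illustration of the technique, not code from the SP theory: each run of a repeated symbol is replaced by a (symbol, count) pair, the same move the paper sees in multiplication, where "a + a + a" is compressed to "3 × a".

```python
def run_length_encode(s):
    """Compress each run of a repeated symbol into a (symbol, count) pair."""
    runs = []
    for ch in s:
        if runs and runs[-1][0] == ch:
            runs[-1] = (ch, runs[-1][1] + 1)  # extend the current run
        else:
            runs.append((ch, 1))              # start a new run
    return runs

def run_length_decode(runs):
    """Invert the encoding by expanding each pair back into a run."""
    return "".join(ch * n for ch, n in runs)
```

    For example, "aaabcc" encodes to [("a", 3), ("b", 1), ("c", 2)], and decoding recovers the original string exactly.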

    Look, Listen and Learn

    We consider the question: what can be learnt by looking at and listening to a large number of unlabelled videos? There is a valuable, but so far untapped, source of information contained in the video itself -- the correspondence between the visual and the audio streams -- and we introduce a novel "Audio-Visual Correspondence" learning task that makes use of this. Training visual and audio networks from scratch, without any additional supervision other than the raw unconstrained videos themselves, is shown to successfully solve this task and, more interestingly, to result in good visual and audio representations. These features set the new state-of-the-art on two sound classification benchmarks, and perform on par with the state-of-the-art self-supervised approaches on ImageNet classification. We also demonstrate that the network is able to localize objects in both modalities, as well as perform fine-grained recognition tasks. Comment: Appears in: IEEE International Conference on Computer Vision (ICCV) 201
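
    The self-supervision in the correspondence task comes entirely from how training pairs are constructed. The sketch below shows that pairing step only (the function name, the clip representation, and the 50/50 sampling ratio are illustrative assumptions, not details from the paper): positives take the frame and audio from the same clip, negatives pair a frame with audio from a different clip.

```python
import random

def make_avc_pairs(clips, n_pairs, rng=random):
    """Build labelled examples for an audio-visual correspondence task.
    `clips` is a list of (frame, audio) items; labels are free because
    they come from whether the two streams share a source clip."""
    pairs = []
    for _ in range(n_pairs):
        i = rng.randrange(len(clips))
        if rng.random() < 0.5:
            frame, audio = clips[i]
            pairs.append((frame, audio, 1))               # corresponding
        else:
            j = rng.choice([k for k in range(len(clips)) if k != i])
            pairs.append((clips[i][0], clips[j][1], 0))   # mismatched
    return pairs
```

    A binary classifier trained on such pairs never sees a human-provided label, which is what makes the resulting visual and audio representations self-supervised.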

    Meditation Experiences, Self, and Boundaries of Consciousness

    Our experiences of the external world are possible mainly through vision, hearing, taste, touch, and smell, which provide us with a sense of reality. How the brain is able to seamlessly integrate stimuli from our external and internal worlds into our sense of reality has yet to be adequately explained in the literature. We have previously proposed a three-dimensional unified model of consciousness that partly explains the dynamic mechanism. Here we further expand our model and include illustrations to provide a better conception of the ill-defined space within the self, providing insight into a unified mind-body concept. In this article, we propose that our senses “super-impose” on an existing dynamic space within us after a slight, imperceptible delay. This existing space includes the entire intrapersonal space and can also be called “the body’s internal 3D default space”. We provide examples from meditation experiences to help explain how the sense of ‘self’ can be experienced through meditation practice, in association with underlying physiological processes such as the cardio-respiratory synchronization and coherence that develop among areas of the brain. Meditation practice can help keep the body in a parasympathetic-dominant state, allowing an experience of the inner ‘self’. Understanding this physical and functional space could help unlock the mysteries of the function of memory and cognition, allowing clinicians to better recognize and treat disorders of the mind by recommending proven stress-reduction techniques as an adjunct to medication treatment.