3,477 research outputs found

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. No significant difference in performance was seen between this and the standard task [F(1,4)=2.565, p=0.185]. This suggests two things: (i) that Gestalt grouping is not used as a strategy in these tasks, and (ii) that objects may be stored in, and retrieved from, a pre-attentional store during this task.
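
    As a concrete reading of the spoke-shift manipulation described above, a minimal sketch follows; expressing positions in degrees of visual angle relative to fixation and choosing each item's shift direction at random are assumptions the abstract does not spell out.

```python
import math
import random

def spoke_shift(first_positions, shift_deg=1.0):
    """Move each rectangle +/- shift_deg of visual angle along its own
    'spoke' (the imaginary line from central fixation through the item)
    to obtain its position in the second presentation. Positions are
    (x, y) pairs in degrees of visual angle relative to fixation."""
    shifted = []
    for x, y in first_positions:
        r = math.hypot(x, y)                      # item's eccentricity
        theta = math.atan2(y, x)                  # angle of its spoke
        r += random.choice([-shift_deg, shift_deg])
        shifted.append((r * math.cos(theta), r * math.sin(theta)))
    return shifted
```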

    Autoencoding sensory substitution

    Tens of millions of people live blind, and their number is ever increasing. Visual-to-auditory sensory substitution (SS) encompasses a family of cheap, generic solutions to assist the visually impaired by conveying visual information through sound. The required SS training is lengthy: months of effort are necessary to reach a practical level of adaptation. There are two reasons for the tedious training process: the elongated substituting audio signal, and the disregard for the compressive characteristics of the human hearing system. To overcome these obstacles, we developed a novel class of SS methods by training deep recurrent autoencoders for image-to-sound conversion. We successfully trained deep learning models on different datasets to execute visual-to-auditory stimulus conversion. By constraining the visual space, we demonstrated the viability of shortened substituting audio signals, while proposing mechanisms, such as the integration of computational hearing models, to optimally convey visual features in the substituting stimulus as perceptually discernible auditory components. We tested our approach in two separate cases. In the first experiment, the author went blindfolded for 5 days while performing SS training on hand posture discrimination. The second experiment assessed the accuracy of reaching movements towards objects on a table. In both test cases, above-chance-level accuracy was attained after a few hours of training. Our novel SS architecture broadens the horizon of rehabilitation methods engineered for the visually impaired. Further improvements on the proposed model should yield faster rehabilitation of the blind and, as a consequence, wider adoption of SS devices.
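
    A minimal sketch, in PyTorch, of the kind of image-to-sound recurrent model the abstract describes; the layer sizes, the GRU decoder, the 32×32 input, and the spectrogram-frame output are illustrative assumptions, not the authors' published architecture.

```python
import torch
import torch.nn as nn

class ImageToSoundSketch(nn.Module):
    """Encode an image into a compact latent code, then unroll a
    recurrent decoder that emits a short sequence of audio frames
    (e.g., spectrogram columns) as the substituting sound."""

    def __init__(self, latent_dim=64, n_frames=32, n_freq=40):
        super().__init__()
        self.n_frames = n_frames
        self.encoder = nn.Sequential(               # image -> latent code
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, latent_dim),      # assumes 32x32 input
        )
        self.decoder_rnn = nn.GRU(latent_dim, 128, batch_first=True)
        self.to_frame = nn.Linear(128, n_freq)      # one audio frame per step

    def forward(self, images):                      # images: (B, 1, 32, 32)
        z = self.encoder(images)                    # (B, latent_dim)
        steps = z.unsqueeze(1).expand(-1, self.n_frames, -1)
        hidden, _ = self.decoder_rnn(steps)         # (B, n_frames, 128)
        return self.to_frame(hidden)                # (B, n_frames, n_freq)
```

    In this framing, a shortened substituting signal corresponds to a smaller n_frames, and a computational hearing model would enter as a perceptual term in the training loss; both details are left out of the sketch.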

    On the Simultaneous Perception of Sound and Three-Dimensional Objects

    Although examples of work investigating the perceptual relationship and possibilities of sound and image are common, relatively little work has been carried out on multimedia works combining sound and three-dimensional objects. A practice-based investigation into this subject is presented with original artworks and contextual material from sound art, sculpture, moving image and psychology. The project sets out to examine the perception of multimedia work more closely, specifically through the creation and analysis of artworks combining sound and physical objects. It considers three main areas of study: sound's ability to draw attention to, or modify, the existing properties of an object; techniques which encourage sound and object to appear cohesively as part of the same work; and a discussion of cognitive effects that may occur as a result of their simultaneous perception. Using the concept of the search space from evolutionary computing as an example, the case is made that multimedia artworks can present a larger field of creative opportunity than single-media works, due to the enhanced interplay between the two media and the viewer's a priori knowledge. The roles of balance, dynamism and interactivity in multimedia work are also explored. Throughout the thesis, examples of original artworks are given which exemplify the issues raised. The main outcome of the study is a proposed framework for categorising and analysing the perception of multimedia artworks, based on increasing semantic separation between the sensory elements. It is claimed that as the relationship between these elements becomes less obvious, more work is demanded of the viewer's imagination in trying to reconcile the gap, leading to active engagement and the possibility of extra imaginary forms which do not exist in the original material. It is proposed that the framework and ideas in this document will be applicable beyond the sound/object focus of this study, and it is hoped they will inform research into multimedia work in other forms.
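
    As a toy rendering of the search-space argument above (assuming, purely for illustration, that each medium offers a finite set of discrete creative choices), a combined work explores the product of the two spaces rather than their sum:

```python
# Purely illustrative numbers: treat each medium as a finite set of
# discrete creative choices.
n_sound = 100                               # possible sound treatments
n_object = 100                              # possible object treatments
single_media_works = n_sound + n_object     # 200 reachable single-media works
multimedia_works = n_sound * n_object       # 10,000 sound/object combinations
print(single_media_works, multimedia_works)
```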

    Tactile echoes: multisensory augmented reality for the hand


    An object's smell in the multisensory brain : how our senses interact during olfactory object processing

    Object perception is a remarkable and fundamental cognitive ability that allows us to interpret and interact with the world we are living in. In our everyday life, we constantly perceive objects, mostly without being aware of it, through several senses at the same time. Although it might seem that object perception is accomplished without any effort, the underlying neural mechanisms are anything but simple. How we perceive the objects in the world surrounding us is the result of a complex interplay of our senses. The aim of the present thesis was to explore, by means of functional magnetic resonance imaging, how our senses interact when we perceive an object's smell in a multisensory setting where the amount of sensory stimulation increases, as well as in a unisensory setting where we perceive an object's smell in isolation. In Study I, we sought to determine whether and how multisensory object information influences the processing of olfactory object information in the posterior piriform cortex (PPC), a region linked to olfactory object encoding. In Study II, we then expanded our search for integration effects during multisensory object perception to the whole brain, because previous research has demonstrated that multisensory integration is accomplished by a network of early sensory cortices and higher-order multisensory integration sites. We specifically aimed to determine whether there are cortical regions that process multisensory object information regardless of which, and how many, senses the information arises from. In Study III, we then sought to unveil how our senses interact during olfactory object perception in a unisensory setting. Previous studies have shown that even in such unisensory settings, olfactory object processing is not exclusively accomplished by regions within the olfactory system but instead engages a more widespread network of brain regions, such as regions belonging to the visual system. We aimed to determine what this visual engagement represents: that is, whether areas of the brain that are principally concerned with processing visual object information also hold neural representations of olfactory object information, and if so, whether these representations are similar for smells and pictures of the same objects. In Study I we demonstrated that assisting inputs from our senses of vision and hearing increase the processing of olfactory object information in the PPC, and that the more assisting input we receive, the more the processing is enhanced. As this enhancement occurred only for matching inputs, it likely reflects integration of multisensory object information. Study II provided evidence for convergence of multisensory object information in the form of a non-linear response enhancement in the inferior parietal cortex: activation increased for bimodal compared to unimodal stimulation, and increased even further for trimodal compared to bimodal stimulation. As this multisensory response enhancement occurred independently of the congruency of the incoming signals, it likely reflects a process of relating the incoming sensory information streams to each other. Finally, Study III revealed that regions of the ventral visual object stream are engaged in recognition of an object's smell and represent olfactory object information in the form of distinct neural activation patterns. While the visual system encodes information about both visual and olfactory objects, it appears to keep information from the two sensory modalities separate by representing smells and pictures of objects differently. Taken together, the studies included in this thesis reveal that olfactory object perception is a multisensory process that engages a widespread network of early sensory as well as higher-order cortical regions, even if we do not find ourselves in a multisensory setting but perceive an object's smell in isolation.
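
    A minimal sketch of the response-enhancement logic Study II reports, assuming one per-participant activation estimate (e.g., a GLM beta) per condition from a region of interest; the function name, the paired t-tests, and the data layout are illustrative assumptions, not the thesis' exact analysis.

```python
import numpy as np
from scipy import stats

def response_enhancement(unimodal, bimodal, trimodal):
    """Test the pattern described for the inferior parietal cortex:
    activation rises with each added modality (bimodal > unimodal,
    trimodal > bimodal). Each argument holds one activation estimate
    per participant, in matching participant order."""
    uni, bi, tri = (np.asarray(a, dtype=float)
                    for a in (unimodal, bimodal, trimodal))
    return {
        "bimodal_vs_unimodal": stats.ttest_rel(bi, uni),
        "trimodal_vs_bimodal": stats.ttest_rel(tri, bi),
    }
```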

    NASA space station automation: AI-based technology review

    Research and development projects in automation for the Space Station are discussed. Artificial Intelligence (AI) based automation technologies are planned to enhance crew safety through a reduced need for extravehicular activity (EVA), increase crew productivity through the reduction of routine operations, increase Space Station autonomy, and augment Space Station capability through the use of teleoperation and robotics. AI technology will also be developed for the servicing of satellites at the Space Station, system monitoring and diagnosis, space manufacturing, and the assembly of large space structures.