
    Separate cortical stages in amodal completion revealed by functional magnetic resonance adaptation

    Background: Objects in our environment are often partly occluded, yet we effortlessly perceive them as whole and complete. This phenomenon is called visual amodal completion. Psychophysical investigations suggest that the process of completion starts from a representation of the (visible) physical features of the stimulus and ends with a completed representation of the stimulus. The goal of our study was to investigate both stages of the completion process by localizing both the brain regions involved in processing the physical features of the stimulus and the brain regions representing the completed stimulus.
    Results: Using fMRI adaptation, we reveal clearly distinct regions in the human visual cortex involved in the processing of amodal completion: early visual cortex (presumably V1) processes the local contour information of the stimulus, whereas regions in the inferior temporal cortex represent the completed shape. Furthermore, our data suggest that at the level of inferior temporal cortex the original local contour information is not preserved but is replaced by the representation of the amodally completed percept.
    Conclusion: These findings provide neuroimaging evidence for a multiple-step theory of amodal completion and further insights into the neuronal correlates of visual perception.
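
    The logic of the fMRI adaptation design can be illustrated with a small numerical sketch: a region that represents a given stimulus attribute responds less when that attribute repeats than when it changes. The snippet below computes such an adaptation index from hypothetical mean BOLD amplitudes; it is an illustrative example only, not the authors' analysis pipeline, and all region names and values are made up.

```python
import numpy as np

# Illustrative sketch only (not the study's analysis code): in fMRI adaptation,
# a region representing some stimulus attribute shows a reduced response when
# that attribute repeats ("repeat") compared with when it changes ("change").
# Hypothetical mean BOLD amplitudes per region of interest and condition.
roi_responses = {
    "early_visual":      {"repeat": 0.42, "change": 0.81},  # sensitive to local contours
    "inferior_temporal": {"repeat": 0.35, "change": 0.78},  # sensitive to the completed shape
}

def adaptation_index(repeat: float, change: float) -> float:
    """Normalized response reduction for repeated vs. changed stimuli.
    Values near 1 indicate strong adaptation (the region represents the
    repeated attribute); values near 0 indicate no adaptation."""
    return (change - repeat) / (change + repeat)

for roi, r in roi_responses.items():
    print(roi, round(adaptation_index(r["repeat"], r["change"]), 2))
```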

    Unsupervised Multiple Person Tracking using AutoEncoder-Based Lifted Multicuts

    Multiple Object Tracking (MOT) is a long-standing task in computer vision. Current approaches based on the tracking-by-detection paradigm either require some sort of domain knowledge or supervision to associate data correctly into tracks. In this work, we present an unsupervised multiple object tracking approach based on visual features and minimum cost lifted multicuts. Our method relies on straightforward spatio-temporal cues that can be extracted from neighboring frames in an image sequence without supervision. Clustering based on these cues enables us to learn the appearance invariances required for the tracking task at hand and to train an autoencoder to generate suitable latent representations. The resulting latent representations can thus serve as robust appearance cues for tracking, even over large temporal distances where no reliable spatio-temporal features can be extracted. We show that, despite being trained without the provided annotations, our model provides competitive results on the challenging MOT benchmark for pedestrian tracking.
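
    The appearance-cue idea described above can be sketched as follows: train a convolutional autoencoder on person crops and use latent-space similarity to decide which detections belong to the same track. The paper combines such cues with minimum cost lifted multicuts; in this minimal sketch a plain pairwise similarity matrix stands in for the multicut edge costs. It assumes PyTorch, and all class and function names are hypothetical, not the authors' implementation.

```python
# Minimal sketch, assuming PyTorch: an autoencoder whose latent codes act as
# appearance cues for associating detections across frames.
import torch
import torch.nn as nn

class CropAutoencoder(nn.Module):
    def __init__(self, latent_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(                     # 3x64x64 crop -> latent vector
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, latent_dim),
        )
        self.decoder = nn.Sequential(                     # latent vector -> reconstructed crop
            nn.Linear(latent_dim, 32 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def appearance_affinities(crops: torch.Tensor, model: CropAutoencoder) -> torch.Tensor:
    """Cosine similarity between latent codes of detection crops; high values
    suggest two crops show the same person and can be linked into one track."""
    with torch.no_grad():
        _, z = model(crops)
    z = nn.functional.normalize(z, dim=1)
    return z @ z.t()

# Toy usage: random "crops" standing in for detections from neighboring frames.
model = CropAutoencoder()
crops = torch.rand(8, 3, 64, 64)
print(appearance_affinities(crops, model).shape)  # (8, 8) pairwise appearance affinities
```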

    Gazealytics: A Unified and Flexible Visual Toolkit for Exploratory and Comparative Gaze Analysis

    We present a novel, web-based visual eye-tracking analytics tool called Gazealytics. Our open-source toolkit features a unified combination of gaze analytics features that support flexible exploratory analysis, along with annotation of areas of interest (AOIs) and filter options based on multiple criteria, to visually analyse eye-tracking data across time and space. Gazealytics features coordinated views unifying spatiotemporal exploration of fixations and scanpaths for various analytical tasks. A novel matrix representation allows analysis of relationships between such spatial or temporal features. Data can be grouped across samples, user-defined AOIs, or time windows of interest (TWIs) to support aggregate or filtered analysis of gaze activity. This approach exceeds the capabilities of existing systems by supporting flexible comparison between and within subjects, hypothesis generation, data analysis, and communication of insights. We demonstrate in a walkthrough that Gazealytics supports multiple types of eye-tracking datasets and analytical tasks.
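
    One analysis the abstract describes, relating fixations to user-defined AOIs and inspecting their relationships in matrix form, can be sketched in a few lines. The example below assigns fixations to rectangular AOIs and builds an AOI-to-AOI transition matrix; it is an illustrative sketch in Python, not code from the (web-based) Gazealytics toolkit, and all AOI names and coordinates are made up.

```python
# Illustrative sketch: group fixations by AOI and count AOI-to-AOI transitions.
import numpy as np

# AOIs as axis-aligned rectangles: name -> (x_min, y_min, x_max, y_max)
aois = {"header": (0, 0, 800, 100), "menu": (0, 100, 200, 600), "content": (200, 100, 800, 600)}

# One scanpath as (x, y) fixation centroids in screen coordinates.
fixations = [(400, 50), (100, 300), (500, 400), (520, 410), (90, 250), (450, 380)]

def aoi_of(x, y):
    """Return the name of the AOI containing the fixation, or None if outside all AOIs."""
    for name, (x0, y0, x1, y1) in aois.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

names = list(aois)
index = {n: i for i, n in enumerate(names)}
transitions = np.zeros((len(names), len(names)), dtype=int)

labels = [aoi_of(x, y) for x, y in fixations]
for src, dst in zip(labels, labels[1:]):
    if src is not None and dst is not None:
        transitions[index[src], index[dst]] += 1

print(names)
print(transitions)  # row = AOI moved from, column = AOI moved to
```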

    Multi-image Super Resolution of Remotely Sensed Images using Residual Feature Attention Deep Neural Networks

    Convolutional Neural Networks (CNNs) have consistently achieved state-of-the-art results in image Super-Resolution (SR), representing an exceptional opportunity for the remote sensing field to extract further information and knowledge from captured data. However, most of the work published in the literature has so far focused on the Single-Image Super-Resolution problem. At present, satellite-based remote sensing platforms offer huge data availability with high temporal resolution and low spatial resolution. In this context, the presented research proposes a novel residual attention model (RAMS) that efficiently tackles the multi-image super-resolution task, simultaneously exploiting spatial and temporal correlations to combine multiple images. We introduce a visual feature attention mechanism with 3D convolutions in order to obtain attention-aware data fusion and information extraction from the multiple low-resolution images, transcending the limitations of the local region of convolutional operations. Moreover, since the multiple inputs show the same scene, our representation learning network makes extensive use of nestled residual connections to let redundant low-frequency signals flow through and to focus the computation on the more important high-frequency components. Extensive experimentation and evaluation against other available solutions, for either single- or multi-image super-resolution, have demonstrated that the proposed deep learning-based solution can be considered state-of-the-art for Multi-Image Super-Resolution in remote sensing applications.
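
    The combination of residual connections and feature attention over a temporal stack of low-resolution images can be sketched with a single block. The example below is a simplified, assumption-laden stand-in for the kind of residual feature-attention block the abstract describes, not the published RAMS implementation; layer sizes, channel counts, and names are illustrative only, and PyTorch is assumed.

```python
# Minimal sketch: a residual block with a channel-attention gate built from
# 3D convolutions, applied to a stack of low-resolution frames with layout
# (batch, channels, frames, height, width).
import torch
import torch.nn as nn

class ResidualFeatureAttention3D(nn.Module):
    def __init__(self, channels: int = 32, reduction: int = 8):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
        )
        # Feature attention: squeeze the spatial/temporal dimensions, then gate the channels.
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(channels, channels // reduction, kernel_size=1), nn.ReLU(),
            nn.Conv3d(channels // reduction, channels, kernel_size=1), nn.Sigmoid(),
        )

    def forward(self, x):
        features = self.body(x)
        gated = features * self.attention(features)  # re-weight the more informative features
        return x + gated                             # residual path carries low-frequency content forward

# Toy usage: a stack of 9 low-resolution frames with 32 feature channels.
block = ResidualFeatureAttention3D(channels=32)
stack = torch.rand(1, 32, 9, 32, 32)                 # (batch, channels, frames, H, W)
print(block(stack).shape)                            # same shape as the input
```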