    LFP beta amplitude is predictive of mesoscopic spatio-temporal phase patterns

    Beta oscillations observed in motor cortical local field potentials (LFPs) recorded on separate electrodes of a multi-electrode array have been shown to exhibit non-zero phase shifts that organize into a planar wave propagation. Here, we generalize this concept by introducing additional classes of patterns that fully describe the spatial organization of beta oscillations. During a delayed reach-to-grasp task in monkey primary motor and dorsal premotor cortices, we distinguish planar, synchronized, random, circular, and radial phase patterns. We observe that specific patterns correlate with the beta amplitude (envelope). In particular, wave propagation accelerates with growing amplitude and culminates at maximum amplitude in a synchronized pattern. Furthermore, the occurrence probability of a particular pattern is modulated with behavioral epochs: planar waves and synchronized patterns are more prevalent during movement preparation, where beta amplitudes are large, whereas random phase patterns dominate during movement execution, where beta amplitudes are small.
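
    To make the analysis concrete, the sketch below extracts beta-band phase and amplitude from array LFPs with a Hilbert transform and labels a single time point as synchronized, planar, or random (the circular and radial classes are omitted). This is a minimal sketch, not the authors' pipeline: the band edges, the snapshot-level classification rules, and the thresholds are assumptions.

        # Minimal sketch: beta phase/envelope extraction and a crude snapshot classifier.
        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def beta_phase_amplitude(lfp, fs, band=(13.0, 30.0)):
            """lfp: (n_electrodes, n_samples). Returns instantaneous phase and envelope."""
            b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
            analytic = hilbert(filtfilt(b, a, lfp, axis=1), axis=1)
            return np.angle(analytic), np.abs(analytic)

        def classify_snapshot(phases, xy, sync_thr=0.9, fit_thr=0.5):
            """phases: (n_electrodes,) at one time point; xy: (n_electrodes, 2) positions."""
            # Phase concentration across the array: resultant length near 1 => synchronized.
            if np.abs(np.mean(np.exp(1j * phases))) > sync_thr:
                return "synchronized"
            # Fit a planar phase gradient phase ~ a*x + b*y + c on (crudely) wrapped
            # relative phases; the fitted gradient would give wave direction and speed.
            rel = np.angle(np.exp(1j * (phases - phases.mean())))
            A = np.column_stack([xy, np.ones(len(xy))])
            grad, res, *_ = np.linalg.lstsq(A, rel, rcond=None)
            r2 = 1.0 - res[0] / np.sum((rel - rel.mean()) ** 2) if res.size else 0.0
            return "planar" if r2 > fit_thr else "random"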

    A semantic and language-based representation of an environmental scene

    The modeling of a landscape environment is a cognitive activity that requires appropriate spatial representations. The research presented in this paper introduces a structural and semantic categorization of a landscape view based on panoramic photographs that act as a substitute for a given natural environment. Verbal descriptions of a landscape scene provide the modeling input of our approach. This structure-based model identifies the spatial, relational, and semantic constructs that emerge from these descriptions. Concepts in the environment are characterized by a semantic classification, their proximity and direction relative to the observer, and the spatial relations that qualify them. The resulting model is represented in a way that constitutes a modeling support for the study of environmental scenes, and a contribution to further research oriented toward mapping verbal descriptions onto a geographical information system-based representation.
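
    A minimal sketch of the kind of structure such a model might populate is shown below; the category names, the proximity and direction vocabularies, and the relation labels are illustrative assumptions, not the paper's actual ontology.

        # Hypothetical scene-concept structure; all vocabularies are illustrative.
        from dataclasses import dataclass, field
        from enum import Enum

        class Proximity(Enum):
            NEAR = "near"; MEDIUM = "medium"; FAR = "far"

        class Direction(Enum):
            FRONT = "front"; LEFT = "left"; RIGHT = "right"; BEHIND = "behind"

        @dataclass
        class SpatialRelation:
            relation: str            # e.g. "in_front_of", "along", "between"
            target: str              # name of the related concept

        @dataclass
        class SceneConcept:
            name: str                # e.g. "mountain", "river"
            semantic_class: str      # e.g. "landform", "water body"
            proximity: Proximity     # distance zone relative to the observer
            direction: Direction     # bearing relative to the observer
            relations: list[SpatialRelation] = field(default_factory=list)

        # "A river runs in front of the mountains on the left":
        scene = [
            SceneConcept("mountain", "landform", Proximity.FAR, Direction.LEFT),
            SceneConcept("river", "water body", Proximity.NEAR, Direction.FRONT,
                         [SpatialRelation("in_front_of", "mountain")]),
        ]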

    Data-driven depth and 3D architectural layout estimation of an interior environment from monocular panoramic input

    Recent years have seen significant interest in the automatic 3D reconstruction of indoor scenes, leading to a distinct and very active sub-field within 3D reconstruction. The main objective is to convert rapidly measured data representing real-world indoor environments into models encompassing geometric, structural, and visual abstractions. This thesis focuses on the particular subject of extracting geometric information from single panoramic images, using either visual data alone or sparse registered depth information. The appeal of this setup lies in the efficiency and cost-effectiveness of data acquisition using 360° images. The challenge, however, is that creating a comprehensive model from mostly visual input is extremely difficult, due to noise, missing data, and clutter. My research has concentrated on leveraging prior information, in the form of architectural and data-driven priors derived from large annotated datasets, to develop end-to-end deep learning solutions for specific tasks in the structured reconstruction pipeline. My first contribution is a deep neural network architecture for estimating a depth map from a single monocular indoor panorama, operating directly on the equirectangular projection. Leveraging the characteristics of indoor 360° images and recognizing the impact of gravity on indoor scene design, the network efficiently encodes the scene into vertical spherical slices. By exploiting long- and short-term relationships among these slices, it recovers an equirectangular depth map directly from the corresponding RGB image. My second contribution generalizes the approach to handle multimodal input, also covering the situation in which the equirectangular input image is paired with a sparse depth map, as provided by common capture setups. Depth is inferred using an efficient single-branch network with a dynamic gating system, processing both dense visual data and sparse geometric data. Additionally, a new augmentation strategy enhances the model's robustness to various types of sparsity, including those from structured-light sensors and LiDAR setups. While the first two contributions focus on per-pixel geometric information, my third contribution addresses the recovery of the 3D shape of permanent room surfaces from a single panoramic image. Unlike previous methods, this approach tackles the problem in 3D, expanding the reconstruction space. It employs a graph convolutional network to directly infer the room structure as a 3D mesh, deforming a graph-encoded tessellated sphere mapped to the spherical panorama. Gravity-aligned features are actively incorporated using a projection layer with multi-head self-attention, and specialized losses guide plausible solutions in the presence of clutter and occlusions. Benchmarks on publicly available data show that all three methods provide significant improvements over the state of the art.
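
    The vertical-slice idea behind the first contribution can be sketched in PyTorch as follows; the channel sizes, the output resolution, and the use of a bidirectional LSTM to relate slices are assumptions here, and the thesis' actual architecture is more elaborate.

        # Sketch: encode an equirectangular panorama as vertical slices, relate the
        # slices along the horizontal axis, and decode one depth column per slice.
        import torch
        import torch.nn as nn

        class SliceDepthNet(nn.Module):
            def __init__(self, feat=64, out_h=256):
                super().__init__()
                # Per-slice encoder: convolve only along the vertical (gravity) axis,
                # then collapse the height dimension into a feature vector per column.
                self.slice_enc = nn.Sequential(
                    nn.Conv2d(3, feat, kernel_size=(7, 1), stride=(2, 1), padding=(3, 0)),
                    nn.ReLU(),
                    nn.Conv2d(feat, feat, kernel_size=(7, 1), stride=(2, 1), padding=(3, 0)),
                    nn.ReLU(),
                    nn.AdaptiveAvgPool2d((1, None)),        # -> (B, feat, 1, W)
                )
                # Long- and short-range relationships among slices.
                self.across = nn.LSTM(feat, feat, bidirectional=True, batch_first=True)
                self.to_depth = nn.Linear(2 * feat, out_h)  # one depth column per slice

            def forward(self, rgb):                         # rgb: (B, 3, H, W)
                f = self.slice_enc(rgb).squeeze(2)          # (B, feat, W)
                f, _ = self.across(f.transpose(1, 2))       # (B, W, 2*feat)
                depth = self.to_depth(f).transpose(1, 2)    # (B, out_h, W)
                return depth.unsqueeze(1)                   # (B, 1, out_h, W)

        # SliceDepthNet()(torch.rand(1, 3, 512, 1024)).shape -> (1, 1, 256, 1024)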

    Cognitively plausible representations for the alignment of sketch and geo-referenced maps

    In many geo-spatial applications, freehand sketch maps are considered an intuitive way to collect user-generated spatial information. The task of automatically mapping information from such hand-drawn sketch maps to geo-referenced maps is known as the alignment task. Researchers have proposed various qualitative representations to capture the distorted and generalized spatial information in sketch maps; however, the effectiveness of these representations has thus far not been evaluated in the context of an alignment task. This paper empirically evaluates a set of cognitively plausible representations for alignment using real sketch maps, collected from two different study areas, together with the corresponding geo-referenced maps. First, the representations are evaluated in a single-aspect alignment approach by demonstrating the alignment of maps for each individual sketch aspect. Second, the representations are evaluated across multiple sketch aspects, using more than one representation in the alignment task. The evaluations demonstrate the suitability of the chosen representations for aligning user-generated content with geo-referenced maps in a real-world scenario.
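
    As an illustration, the sketch below encodes one plausible qualitative representation (pairwise cardinal-direction relations among named landmarks) and scores a candidate sketch-to-geo correspondence by the fraction of relations it preserves; this particular relation vocabulary and scoring scheme are assumptions, not the representations evaluated in the paper. Coarse relations of this kind tend to survive the metric distortions of freehand sketching, which is what makes them candidates for alignment.

        # Sketch: qualitative direction relations and a relation-agreement score.
        import math
        from itertools import permutations

        def cardinal(p, q):
            """Coarse direction from landmark p to q: one of 8 sectors."""
            ang = math.degrees(math.atan2(q[1] - p[1], q[0] - p[0])) % 360
            return ["E", "NE", "N", "NW", "W", "SW", "S", "SE"][int(((ang + 22.5) % 360) // 45)]

        def relations(points):
            """points: {name: (x, y)}. Direction relation for every ordered pair."""
            return {(a, b): cardinal(points[a], points[b])
                    for a, b in permutations(points, 2)}

        def alignment_score(sketch_pts, geo_pts, mapping):
            """Fraction of sketch relations preserved under a sketch->geo name mapping."""
            rs, rg = relations(sketch_pts), relations(geo_pts)
            pairs = [(a, b) for a, b in rs
                     if mapping.get(a) in geo_pts and mapping.get(b) in geo_pts
                     and mapping[a] != mapping[b]]
            hits = sum(rs[(a, b)] == rg[(mapping[a], mapping[b])] for a, b in pairs)
            return hits / len(pairs) if pairs else 0.0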

    Class-Based Feature Matching Across Unrestricted Transformations

    We develop a novel method for class-based feature matching across large changes in viewing conditions. The method is based on the property that when objects share a similar part, the similarity is preserved across viewing conditions. Given a feature and a training set of object images, we first identify the subset of objects that share this feature. The transformation of the feature's appearance across viewing conditions is determined mainly by properties of the feature, rather than of the object in which it is embedded. Therefore, the transformed feature will be shared by approximately the same set of objects. Based on this consistency requirement, corresponding features can be reliably identified from a set of candidate matches. Unlike previous approaches, the proposed scheme compares feature appearances only in similar viewing conditions, rather than across different viewing conditions. As a result, the scheme is not restricted to locally planar objects or affine transformations. The approach also does not require examples of correct matches. We show that by using the proposed method, a dense set of accurate correspondences can be obtained. Experimental comparisons demonstrate that matching accuracy is significantly improved over previous schemes. Finally, we show that the scheme can be successfully used for invariant object recognition.
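
    The consistency requirement lends itself to a very small sketch: characterize each feature by the set of training objects that share it, and keep a candidate match only if its supporting object set is approximately the same as the source feature's. The set-overlap measure and threshold below are assumptions, and the detection step (which objects contain a given feature) is abstracted away.

        # Sketch: accept candidate matches whose supporting object sets agree.
        def jaccard(a, b):
            return len(a & b) / len(a | b) if (a | b) else 0.0

        def consistent_matches(src_support, candidates, tau=0.6):
            """src_support: objects sharing the source feature (a set).
            candidates: {feature_id: set of objects sharing that candidate}."""
            return [fid for fid, support in candidates.items()
                    if jaccard(src_support, support) >= tau]

        # The source feature appears in objects {1, 2, 3, 5}; a correct match under
        # new viewing conditions should be supported by roughly the same objects.
        print(consistent_matches({1, 2, 3, 5},
                                 {"f_a": {1, 2, 3, 4}, "f_b": {6, 7, 8}}))  # ['f_a']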