LFP beta amplitude is predictive of mesoscopic spatio-temporal phase patterns
Beta oscillations observed in motor cortical local field potentials (LFPs)
recorded on separate electrodes of a multi-electrode array have been shown to
exhibit non-zero phase shifts that organize into a planar wave propagation.
Here, we generalize this concept by introducing additional classes of patterns
that fully describe the spatial organization of beta oscillations. During a
delayed reach-to-grasp task in monkey primary motor and dorsal premotor
cortices, we distinguish planar, synchronized, random, circular, and radial
phase patterns. We observe that specific patterns correlate with the beta
amplitude (envelope). In particular, wave propagation accelerates with growing
amplitude, and culminates at maximum amplitude in a synchronized pattern.
Furthermore, the occurrence probability of a particular pattern is modulated
across behavioral epochs: planar waves and synchronized patterns are more
prevalent during movement preparation, where beta amplitudes are large, whereas
random phase patterns dominate during movement execution, where beta amplitudes
are small.
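The pattern taxonomy above can be illustrated with a toy classifier. The sketch below (hypothetical measures and thresholds, not the authors' actual analysis pipeline) extracts instantaneous phases with a Hilbert transform and separates three of the five classes using phase locking across electrodes and the spatial alignment of local phase gradients; circular and radial patterns would need additional tests.

```python
import numpy as np
from scipy.signal import hilbert

def classify_phase_pattern(lfp_grid, sync_thresh=0.85, planar_thresh=0.6):
    """Label the spatial phase pattern of beta-band-filtered LFPs per time step.

    lfp_grid : (rows, cols, time) array, one beta-filtered trace per electrode.
    Returns one label per time sample: 'synchronized', 'planar', or 'random'.
    Thresholds are illustrative.
    """
    phase = np.angle(hilbert(lfp_grid, axis=-1))          # instantaneous phase
    # Phase locking across the array: length of the mean resultant vector
    plv = np.abs(np.exp(1j * phase).mean(axis=(0, 1)))
    # Wrapped spatial phase differences along columns and rows
    dx = np.angle(np.exp(1j * np.diff(phase, axis=1)))[:-1, :, :]
    dy = np.angle(np.exp(1j * np.diff(phase, axis=0)))[:, :-1, :]
    # A planar wave has locally consistent phase-gradient directions
    grad_dir = np.arctan2(dy, dx)
    grad_align = np.abs(np.exp(1j * grad_dir).mean(axis=(0, 1)))
    return np.where(plv > sync_thresh, 'synchronized',
           np.where(grad_align > planar_thresh, 'planar', 'random'))
```

On synthetic data, a uniform phase gradient across the grid yields 'planar', identical traces yield 'synchronized', and independent random phase offsets yield 'random'.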
A semantic and language-based representation of an environmental scene
The modeling of a landscape environment is a cognitive activity that requires appropriate spatial representations. The research presented in this paper introduces a structural and semantic categorization of a landscape view based on panoramic photographs that act as a substitute for a given natural environment. Verbal descriptions of a landscape scene provide the modeling input of our approach. This structure-based model identifies the spatial, relational, and semantic constructs that emerge from these descriptions. Concepts in the environment are qualified according to a semantic classification, their proximity and direction to the observer, and the spatial relations that qualify them. The resulting model is represented in a way that constitutes a modeling support for the study of environmental scenes, and a contribution to further research oriented toward mapping a verbal description onto a geographical information system-based representation.
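A minimal data structure in this spirit might look as follows; the field names and qualitative values are illustrative, not the paper's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class SceneConcept:
    """One concept extracted from a verbal landscape description."""
    name: str        # e.g. "lake"
    category: str    # semantic class, e.g. "water", "relief", "vegetation"
    proximity: str   # qualitative distance to the observer: "close"/"middle"/"far"
    direction: str   # direction relative to the observer, e.g. "front-left"

@dataclass
class SceneDescription:
    """Concepts plus the spatial relations that hold between them."""
    concepts: list = field(default_factory=list)
    relations: list = field(default_factory=list)  # (subject, relation, object)

    def add(self, concept: SceneConcept):
        self.concepts.append(concept)

    def relate(self, subject: str, relation: str, obj: str):
        self.relations.append((subject, relation, obj))
```

A description such as "a mountain behind the lake" would then populate two concepts and one relation triple, ready for mapping onto a GIS-based representation.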
Introduction: Creating new worlds out of old texts
Despite initial expectations that globalization would eradicate the need for geographical space and distance, "maps matter" today in ways that were unimaginable a mere two decades ago. Technological advances have brought to the fore an entirely new set of methods for representing and interacting with spatial formations, while the ever-increasing mobility of ideas, capital, and people has created a world in which urban and regional inequalities are being heightened at an accelerating pace. As a result, the ability of any given place to reap the benefits of global socio-technical flows mainly hinges on the forging of connections that can transcend the limits of its material location. In contrast to the traditional "topographic" perspective, the territorial extent of economic and political realms is being increasingly conceived through a "topological" lens: as a set of overlapping reticulations in which the nature and frequency of links among different sites matter more than the physical distances between them.
At the same time, a parallel stream of innovation has revolutionized the understanding of space in disciplines such as history, archaeology, classics, and linguistics. Much of this work has been concentrated in the burgeoning field of the "digital humanities", which has been persistently breaking new ground in the conceptualization of past and present places. When seen in the context of globalization-induced dynamics, such developments emphasize the need for developing cartographic approaches that can bring out the inherently networked structure of social space via a lens that is both theoretically integrative and heuristically sharp.
We have decided to respond to these analytical and methodological challenges by focusing on ancient Greek literature: a corpus of work that has often been characterized as being free of the constraints imposed by post-Enlightenment cartography, despite setting the foundations of many contemporary map-making methods. In the 12 chapters that follow, we highlight the rich array of representational devices employed by authors from this era, whose narrative depictions of spatial relations defy the logic of images and surfaces that dominates contemporary cartographic thought. There is a particular focus on Herodotus' Histories - a text that is increasingly taken up by classicists as the example of how ancient perceptions of space may have been rather different to the cartographic view that we tend to assume. But this volume also considers the spatial imaginary through the lens of other authors (e.g. Aristotle), genres (e.g. hymns), cultural contexts (e.g. Babylon), and disciplines (e.g. archaeology), with a view to stimulating a broad-based discussion among readers and critics of Herodotus and ancient Greek literature and culture more generally.
In fact, many of the disciplinary and conceptual perspectives explored here are in their early stages, and have a more general relevance for the wider community of humanities and social science researchers interested in novel mapping techniques. The resulting juxtaposition of more "traditional", philological discussions of space with chapters dedicated to the exploration of new technologies may jar or appear uneven, especially since we have not set out to privilege one method over another. But it is through viewing these different approaches in the round and reading them alongside each other that, we maintain, we can best disrupt customary ways of thinking (and writing) about space and catch a glimpse of new possibilities.
Data-driven depth and 3D architectural layout estimation of an interior environment from monocular panoramic input
Recent years have seen significant interest in the automatic 3D reconstruction of indoor scenes, leading to a distinct and very active sub-field within 3D reconstruction. The main objective is to convert rapidly measured data representing real-world indoor environments into models encompassing geometric, structural, and visual abstractions. This thesis focuses on the particular subject of extracting geometric information from single panoramic images, using either visual data alone or sparse registered depth information. The appeal of this setup lies in the efficiency and cost-effectiveness of data acquisition using 360° images. The challenge, however, is that creating a comprehensive model from mostly visual input is extremely difficult, due to noise, missing data, and clutter.
My research has concentrated on leveraging prior information, in the form of architectural and data-driven priors derived from large annotated datasets, to develop end-to-end deep learning solutions for specific tasks in the structured reconstruction pipeline.
My first contribution consists in a deep neural network architecture for estimating a depth map from a single monocular indoor panorama, operating directly on the equirectangular projection. Leveraging the characteristics of indoor 360-degree images and recognizing the impact of gravity on indoor scene design, the network efficiently encodes the scene into vertical spherical slices. By exploiting long- and short-term relationships among these slices, it recovers an equirectangular depth map directly from the corresponding RGB image.
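A core ingredient of any method operating on the equirectangular projection is the mapping from pixels to view directions on the unit sphere, which also lifts a predicted depth map to a 3D point cloud. A minimal helper is sketched below; the longitude/latitude conventions are assumed and vary between datasets.

```python
import numpy as np

def equirect_to_rays(h, w):
    """Unit-sphere view directions for each pixel of an h x w equirectangular image.

    Assumed convention: longitude spans [-pi, pi) left to right, latitude
    [pi/2, -pi/2] top to bottom, pixel centers at half-integer coordinates.
    Returns an (h, w, 3) array of unit vectors (x, y, z), z pointing up.
    """
    lon = (np.arange(w) + 0.5) / w * 2 * np.pi - np.pi
    lat = np.pi / 2 - (np.arange(h) + 0.5) / h * np.pi
    lat, lon = np.meshgrid(lat, lon, indexing="ij")
    x = np.cos(lat) * np.cos(lon)
    y = np.cos(lat) * np.sin(lon)
    z = np.sin(lat)
    return np.stack([x, y, z], axis=-1)

def depth_to_points(depth, rays):
    """Lift an (h, w) per-pixel depth map to an (h, w, 3) point cloud."""
    return depth[..., None] * rays
```

The same pixel-to-direction mapping underlies both per-pixel depth estimation and the spherical projection layers used later in the thesis.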
My second contribution generalizes the approach to handle multimodal input, also covering the situation in which the equirectangular input image is paired with a sparse depth map, as provided by common capture setups. Depth is inferred using an efficient single-branch network with a dynamic gating system, processing both dense visual data and sparse geometric data. Additionally, a new augmentation strategy enhances the model's robustness to various types of sparsity, including those from structured light sensors and LiDAR setups.
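The sparsity-augmentation idea can be sketched as follows: starting from a dense ground-truth depth map, different acquisition patterns are simulated during training. The pattern names and parameters below are illustrative, not the thesis's actual taxonomy.

```python
import numpy as np

def sparsify(depth, pattern="lidar", rng=None, keep=0.05, n_lines=16):
    """Simulate a sparse depth input from a dense map for training augmentation.

    'random' : keep a random fraction of pixels (dropout-like sparsity)
    'lidar'  : keep n_lines evenly spaced horizontal scanlines
    Returns (sparse_depth, valid_mask); invalid pixels are set to 0.
    """
    rng = np.random.default_rng(rng)
    mask = np.zeros(depth.shape, dtype=bool)
    if pattern == "random":
        mask |= rng.random(depth.shape) < keep
    elif pattern == "lidar":
        rows = np.linspace(0, depth.shape[0] - 1, n_lines).astype(int)
        mask[rows, :] = True
    else:
        raise ValueError(f"unknown pattern: {pattern}")
    return np.where(mask, depth, 0.0), mask
```

Training against several such patterns is what gives the model its robustness to the sparsity produced by different sensors.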
While the first two contributions focus on per-pixel geometric information, my third contribution addresses the recovery of the 3D shape of permanent room surfaces from a single panoramic image. Unlike previous methods, this approach tackles the problem in 3D, expanding the reconstruction space. It employs a graph convolutional network to directly infer the room structure as a 3D mesh, deforming a graph-encoded tessellated sphere mapped to the spherical panorama. Gravity-aligned features are actively incorporated using a projection layer with multi-head self-attention, and specialized losses guide plausible solutions in the presence of clutter and occlusions.
Benchmarks on publicly available data show that all three methods provide significant improvements over the state of the art.
Cognitively plausible representations for the alignment of sketch and geo-referenced maps
In many geo-spatial applications, freehand sketch maps are considered an intuitive way to collect user-generated spatial information. The task of automatically mapping information from such hand-drawn sketch maps to geo-referenced maps is known as the alignment task. Researchers have proposed various qualitative representations to capture the distorted and generalized spatial information in sketch maps; however, thus far the effectiveness of these representations has not been evaluated in the context of an alignment task. This paper empirically evaluates a set of cognitively plausible representations for alignment using real sketch maps collected from two different study areas, together with the corresponding geo-referenced maps. First, the representations are evaluated in a single-aspect alignment approach by demonstrating the alignment of maps for each individual sketch aspect. Second, the representations are evaluated across multiple sketch aspects, using more than one representation in the alignment task. The evaluations demonstrate the suitability of the chosen representations for aligning user-generated content with geo-referenced maps in a real-world scenario.
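As a toy example of one family of qualitative representations, the sketch below scores alignment by checking whether cone-based direction relations between landmark pairs agree across the two maps. It assumes landmark correspondences are already known; the paper's actual representations and matching procedure are considerably richer.

```python
import numpy as np

SECTORS = ["E", "NE", "N", "NW", "W", "SW", "S", "SE"]

def direction_relation(a, b):
    """Cone-based qualitative direction from point a to point b (8 sectors)."""
    ang = np.arctan2(b[1] - a[1], b[0] - a[0]) % (2 * np.pi)
    return SECTORS[int((ang + np.pi / 8) // (np.pi / 4)) % 8]

def alignment_score(sketch_pts, geo_pts):
    """Fraction of ordered landmark pairs whose qualitative direction relation
    agrees between the sketch map and the geo-referenced map (same landmark
    order in both lists)."""
    n = len(sketch_pts)
    pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
    agree = sum(direction_relation(sketch_pts[i], sketch_pts[j]) ==
                direction_relation(geo_pts[i], geo_pts[j]) for i, j in pairs)
    return agree / len(pairs)
```

Because the representation is qualitative, it tolerates the metric distortion typical of hand-drawn sketches: only the coarse direction between landmarks has to survive.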
Class-Based Feature Matching Across Unrestricted Transformations
We develop a novel method for class-based feature matching across large changes in viewing conditions. The method is based on the property that when objects share a similar part, the similarity is preserved across viewing conditions. Given a feature and a training set of object images, we first identify the subset of objects that share this feature. The transformation of the feature's appearance across viewing conditions is determined mainly by properties of the feature, rather than of the object in which it is embedded. Therefore, the transformed feature will be shared by approximately the same set of objects. Based on this consistency requirement, corresponding features can be reliably identified from a set of candidate matches. Unlike previous approaches, the proposed scheme compares feature appearances only in similar viewing conditions, rather than across different viewing conditions. As a result, the scheme is not restricted to locally planar objects or affine transformations. The approach also does not require examples of correct matches. We show that by using the proposed method, a dense set of accurate correspondences can be obtained. Experimental comparisons demonstrate that matching accuracy is significantly improved over previous schemes. Finally, we show that the scheme can be successfully used for invariant object recognition.
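The consistency requirement can be sketched in a few lines: given the set of objects sharing the source feature in one viewing condition, the best candidate match in the other condition is the feature whose supporting object set overlaps most. Jaccard similarity is used here as a plausible stand-in for the paper's actual consistency measure.

```python
def best_match(source_objects, candidates):
    """Pick the candidate feature whose supporting object set is most consistent.

    source_objects : set of object ids sharing the source feature (condition A)
    candidates     : dict mapping feature_id -> set of object ids sharing that
                     feature in condition B
    Returns (feature_id, score) of the most consistent candidate.
    """
    def jaccard(a, b):
        union = a | b
        return len(a & b) / len(union) if union else 0.0
    return max(((f, jaccard(source_objects, objs))
                for f, objs in candidates.items()),
               key=lambda t: t[1])
```

Note that the score never compares feature appearances across viewing conditions, only the object sets that support them, which is what frees the method from planarity or affine assumptions.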