Neurocognitive Informatics Manifesto.
Informatics studies all aspects of the structure of natural and artificial information systems. Theoretical and abstract approaches to information have made great advances, but human information processing is still unmatched in many areas, including information management, representation, and understanding. Neurocognitive informatics is a new, emerging field that should help to improve the matching of artificial and natural systems, and inspire better computational algorithms to solve problems that are still beyond the reach of machines. This position paper gives examples of neurocognitive inspirations and promising directions in this area.
The Emotional and Chromatic Layers of Urban Smells
People are able to detect up to 1 trillion odors. Yet, city planning is
concerned only with a few bad odors, mainly because odors are currently
captured only through complaints made by urban dwellers. To capture both good
and bad odors, we resort to a recently proposed methodology that relies on
the tagging information of geo-referenced pictures. In doing so for the
cities of London and Barcelona, this work makes three new contributions. We
study 1) how the urban smellscape changes in time and space; 2) which emotions
people share at places with specific smells; and 3) what the color of a
smell is, if it exists. Without social media data, insights about these three
aspects would have been difficult to produce, further delaying the creation
of urban restorative experiences.
Comment: 11 pages, 18 figures, final version published in the Proceedings of
the Tenth International Conference on Web and Social Media (ICWSM 2016).
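The tag-based methodology the abstract describes can be illustrated with a minimal sketch: photo tags are matched against a smell lexicon to build a per-category smell profile. The lexicon, category names, and tags below are invented for illustration; they are not the paper's actual smell dictionary.

```python
# Hypothetical sketch: classify geo-referenced photo tags into smell
# categories using a small illustrative lexicon (categories and words
# are invented, not the paper's actual smell dictionary).
from collections import Counter

SMELL_LEXICON = {
    "nature": {"grass", "flowers", "pine", "lavender"},
    "food": {"coffee", "bakery", "chocolate", "curry"},
    "emissions": {"exhaust", "smoke", "fumes", "diesel"},
    "waste": {"garbage", "sewage", "trash"},
}

def smell_profile(photo_tags):
    """Count, per smell category, how many photos carry a matching tag."""
    counts = Counter()
    for tags in photo_tags:
        for category, words in SMELL_LEXICON.items():
            if set(tags) & words:
                counts[category] += 1
    return counts

# Toy stand-ins for the tag lists of three geo-referenced photos.
photos = [
    ["street", "exhaust", "bus"],
    ["park", "grass", "flowers"],
    ["market", "coffee", "bakery"],
]
profile = smell_profile(photos)
```

In a real pipeline each photo would also carry coordinates and a timestamp, so the same counts could be aggregated per street segment and per hour to map the smellscape in space and time.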
Preparing Laboratory and Real-World EEG Data for Large-Scale Analysis: A Containerized Approach.
Large-scale analysis of EEG and other physiological measures promises new insights into brain processes and more accurate and robust brain-computer interface models. However, the absence of standardized vocabularies for annotating events in a machine-understandable manner, the welter of collection-specific data organizations, the difficulty of moving data across processing platforms, and the unavailability of agreed-upon standards for preprocessing have prevented large-scale analyses of EEG. Here we describe a "containerized" approach and freely available tools we have developed to facilitate the process of annotating, packaging, and preprocessing EEG data collections to enable data sharing, archiving, large-scale machine learning/data mining, and (meta-)analysis. The EEG Study Schema (ESS) comprises three data "Levels," each with its own XML-document schema and file/folder convention, plus a standardized (PREP) pipeline to move raw (Data Level 1) data to a basic preprocessed state (Data Level 2) suitable for application of a large class of EEG analysis methods. Researchers can ship a study as a single unit and operate on its data using a standardized interface. ESS does not require a central database and provides all the metadata necessary to execute a wide variety of EEG processing pipelines. The primary focus of ESS is automated in-depth analysis and meta-analysis of EEG studies. However, ESS can also encapsulate meta-information for other modalities, such as eye tracking, that are increasingly used in both laboratory and real-world neuroimaging. ESS schema and tools are freely available at www.eegstudy.org, and a central catalog of over 850 GB of existing data in ESS format is available at studycatalog.org. These tools and resources are part of a larger effort to enable data sharing at sufficient scale for researchers to engage in truly large-scale EEG analysis and data mining (BigEEG.org).
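The abstract's core idea, a study shipped as a single self-describing container with an XML description, can be sketched as follows. The folder layout, file name, and XML element names here are assumptions made for illustration, not the actual ESS schema; consult www.eegstudy.org for the real specification.

```python
# Hypothetical sketch of reading an ESS-style study container: a folder
# with a study-level XML description listing its recordings. Element and
# file names are assumptions, NOT the real ESS schema.
import os
import tempfile
import xml.etree.ElementTree as ET

def list_recordings(study_root):
    """Parse the study-level XML description and list its recordings."""
    desc = ET.parse(os.path.join(study_root, "study_description.xml"))
    recordings = []
    for rec in desc.getroot().iter("recording"):
        recordings.append({
            "file": rec.findtext("filename"),
            "subject": rec.findtext("subject"),
        })
    return recordings

# Build a tiny example container to exercise the reader.
root = tempfile.mkdtemp()
with open(os.path.join(root, "study_description.xml"), "w") as f:
    f.write("""<study>
  <recording><filename>s01.set</filename><subject>01</subject></recording>
  <recording><filename>s02.set</filename><subject>02</subject></recording>
</study>""")
recordings = list_recordings(root)
```

Because everything a pipeline needs travels inside the folder, a tool built against the schema can operate on any such study without a central database, which is the property the abstract emphasizes.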
Understanding Heterogeneous EO Datasets: A Framework for Semantic Representations
Earth observation (EO) has become a valuable source of comprehensive, reliable, and persistent
information for a wide range of applications. However, dealing with the complexity of land cover is
sometimes difficult, as the variety of EO sensors is reflected in the multitude of details recorded in
several types of image data. Their properties dictate the category and nature of the perceptible land
structures. This data heterogeneity hampers proper understanding, preventing the definition of universal
procedures for content exploitation. The main shortcomings are due to the differences between human and
sensor perception of objects, as well as to the lack of coincidence between visual elements and
similarities obtained by computation. In order to bridge these sensory and semantic gaps, the paper
presents a compound framework for EO image information extraction. The proposed approach acts as a
common ground between the user's understanding, which is limited to the visible domain, and the
machine's numerical interpretation of a much wider range of information. A hierarchical data
representation is considered. At first, basic elements are automatically computed. Then, users can
apply their judgment to the data processing results until semantic structures are revealed. This
procedure completes a user-machine knowledge transfer. The interaction is formalized as a dialogue,
where communication is determined by a set of parameters guiding the computational process at each
level of representation. The purpose is to keep the data-driven observables connected to the level of
semantics and to human awareness. The proposed concept offers flexibility and interoperability to
users, allowing them to generate the results that best fit their application scenario. Experiments
performed on different satellite images demonstrate that semantic annotation performance can be
improved by adjusting a set of parameters to the particularities of the analyzed data.
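The parameter-guided dialogue described above can be sketched generically: the machine groups image features into basic elements, and the user steers a parameter (here, the number of clusters) and attaches semantic labels once the grouping matches their interpretation. The clustering method (plain 1-D k-means with deterministic initialization) and all names are stand-ins for illustration, not the paper's actual algorithm.

```python
# Generic stand-in for the user-machine dialogue: the machine clusters
# feature values into "basic elements"; the user adjusts k and then
# assigns semantic labels to the resulting clusters.
def kmeans_1d(features, k, iters=20):
    """Plain k-means on 1-D feature values (toy proxy for image features)."""
    lo, hi = min(features), max(features)
    # Deterministic, evenly spaced initial centers (no random init).
    centers = [lo + (i + 0.5) * (hi - lo) / k for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in features:
            nearest = min(range(k), key=lambda i: abs(x - centers[i]))
            groups[nearest].append(x)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return sorted(centers)

features = [0.1, 0.15, 0.5, 0.55, 0.9, 0.95]  # toy "pixel" features
# The user tries different k until the grouping matches their reading of
# the scene, then names the clusters (invented labels for illustration).
centers = kmeans_1d(features, k=3)
labels = dict(zip(["water", "vegetation", "urban"], centers))
```

The "dialogue" in the framework is richer than a single parameter, but the shape is the same: a computed representation, a user-tunable knob at each level, and a semantic label attached only when the user accepts the result.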