
    On Quantifying Qualitative Geospatial Data: A Probabilistic Approach

    Living in the era of data deluge, we have witnessed an explosion of web content, largely due to the massive availability of User-Generated Content (UGC). In this work, we consider the problem of geospatial information extraction and representation, where one can exploit diverse sources of information (such as image, audio, and text data), going beyond traditional volunteered geographic information. Our ambition is to include available narrative information in an effort to better explain geospatial relationships: with spatial reasoning being a basic form of human cognition, narratives expressing such experiences typically contain qualitative spatial data, i.e., spatial objects and spatial relationships. To this end, we formulate a quantitative approach for representing qualitative spatial relations extracted from UGC in the form of texts. The proposed method quantifies such relations based on multiple text observations. These observations provide distance and orientation features, which are used by a greedy Expectation Maximization (EM) based algorithm to infer a probability distribution over predefined spatial relationships; the latter represent the quantified relationships under user-defined probabilistic assumptions. We evaluate the applicability and quality of the proposed approach using real UGC data originating from an actual travel blog text corpus. To verify the quality of the result, we generate grid-based maps visualizing the spatial extent of the various relations.
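    The following is a minimal sketch of the quantification step described above, assuming each predefined spatial relation can be modeled as a fixed Gaussian over (distance, orientation) features; EM then re-estimates only the mixture weights, i.e., the probability distribution over relations. The relation names, parameters, and observations are illustrative assumptions, and plain EM over mixture weights is used here rather than the authors' greedy variant.

    # Sketch: infer P(relation | text observations) by EM over mixture weights.
    import numpy as np
    from scipy.stats import multivariate_normal

    # Hypothetical predefined relations with fixed (mean, covariance) over
    # the features [distance_km, orientation_rad].
    RELATIONS = {
        "near":     ([1.0, 0.0],  [[0.5, 0.0], [0.0, 3.0]]),
        "north_of": ([5.0, 1.57], [[4.0, 0.0], [0.0, 0.1]]),
        "east_of":  ([5.0, 0.0],  [[4.0, 0.0], [0.0, 0.1]]),
    }

    def infer_relation_distribution(observations, n_iter=50):
        """EM with fixed components: only the relation weights are learned."""
        comps = [multivariate_normal(m, c) for m, c in RELATIONS.values()]
        weights = np.full(len(comps), 1.0 / len(comps))  # uniform prior
        # Likelihood of each observation under each relation (fixed).
        likel = np.column_stack([c.pdf(observations) for c in comps])
        for _ in range(n_iter):
            # E-step: responsibility of each relation for each observation.
            resp = likel * weights
            resp /= resp.sum(axis=1, keepdims=True)
            # M-step: new weights are the average responsibilities.
            weights = resp.mean(axis=0)
        return dict(zip(RELATIONS, weights))

    # Features extracted from narrative text, e.g. "a short walk north".
    obs = np.array([[1.2, 0.2], [0.8, -0.1], [4.5, 1.5]])
    print(infer_relation_distribution(obs))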

    Training of Crisis Mappers and Map Production from Multi-sensor Data: Vernazza Case Study (Cinque Terre National Park, Italy)

    The aim of this paper is to present the development of a multidisciplinary project carried out in cooperation between Politecnico di Torino and ITHACA (Information Technology for Humanitarian Assistance, Cooperation and Action). The goal of the project was to train students attending Architecture and Engineering courses in geospatial data acquisition and processing, in order to start up a team of "volunteer mappers". Specifically, the project aims to document environmental and built heritage subject to disaster; the purpose is to improve the capabilities of the actors involved in geospatial data collection, integration, and sharing. The proposed area for testing the training activities is the Cinque Terre National Park, registered on the World Heritage List since 1997 and affected by a flood on 25 October 2011. In line with other international experiences, the group is expected to be active after emergencies in order to update maps, using data acquired by typical geomatic methods and techniques such as terrestrial and aerial LiDAR, close-range and aerial photogrammetry, and topographic and GNSS instruments, or by non-conventional systems and instruments such as UAVs and mobile mapping. The ultimate goal is to implement a WebGIS platform to share all the collected data with local authorities and the Civil Protection.

    Interpretation of complex situations in a semantic-based surveillance framework

    The integration of cognitive capabilities into computer vision systems requires both high semantic expressiveness and the ability to cope with the high computational costs of analyzing large amounts of data. This contribution describes a cognitive vision system conceived to automatically provide high-level interpretations of complex real-time situations in outdoor and indoor scenarios, and eventually to maintain communication with casual end users in multiple languages. The main contributions are: (i) the design of an integrative multilevel architecture for cognitive surveillance purposes; (ii) the proposal of a coherent taxonomy of knowledge to guide the process of interpretation, which leads to the conception of a situation-based ontology; (iii) the use of situational analysis for content detection and progressive interpretation of semantically rich scenes, by managing incomplete or uncertain knowledge; and (iv) the use of such an ontological background to enable multilingual capabilities and advanced end-user interfaces. Experimental results are provided to show the feasibility of the proposed approach. This work was supported by the project 'CONSOLIDER-INGENIO 2010 Multimodal interaction in pattern recognition and computer vision' (V-00069), by EC Grants IST-027110 for the HERMES project and IST-045547 for the VIDI-video project, and by the Spanish MEC under Projects TIN2006-14606 and CONSOLIDER-INGENIO 2010 (CSD2007-00018). Jordi Gonzàlez also acknowledges the support of a Juan de la Cierva Postdoctoral fellowship from the Spanish MEC. Peer Reviewed
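    A minimal sketch of ontology-guided situation interpretation under uncertain knowledge, in the spirit of the framework described above: low-level detections are matched against a situation taxonomy, and a situation is asserted only when all of its constituent events are observed with sufficient confidence. The taxonomy, event labels, and fusion rule are illustrative assumptions, not the system's actual ontology.

    # Sketch: map uncertain low-level detections to high-level situations.
    from dataclasses import dataclass

    @dataclass
    class Event:
        label: str         # low-level detection, e.g. "person_running"
        confidence: float  # detector confidence in [0, 1]

    # Hypothetical taxonomy: a situation requires all of its constituent
    # events; its confidence is that of the weakest supporting event.
    SITUATION_ONTOLOGY = {
        "abandoned_object": {"person_leaves", "object_static"},
        "chase":            {"person_running", "person_running_behind"},
    }

    def interpret(events, threshold=0.5):
        """Return situations whose supporting events all pass the threshold."""
        conf = {e.label: e.confidence for e in events}
        situations = {}
        for situation, required in SITUATION_ONTOLOGY.items():
            if required <= conf.keys():                  # all events observed
                score = min(conf[r] for r in required)   # conservative fusion
                if score >= threshold:
                    situations[situation] = score
        return situations

    detections = [Event("person_leaves", 0.9), Event("object_static", 0.7)]
    print(interpret(detections))  # {'abandoned_object': 0.7}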

    The ability model of emotional intelligence: Principles and updates

    This article presents seven principles that have guided our thinking about emotional intelligence, some of them new. Guided by these principles, we have reformulated our original ability model, clarified earlier statements of the model that were unclear, and revised portions of it in response to current research. In this revision, we also positioned emotional intelligence amidst other hot intelligences, including the personal and social intelligences, and examined the implications of these changes to the model. We discuss the present and future of the concept of emotional intelligence as a mental ability.

    Special Issue on Smart Data and Semantics in a Sensor World

    Introduction
    Since its inception in 2001, the Semantic Web [1, 2] has made extensive use of ontologies [3–5], reasoning, and semantics in diverse fields such as information integration, software engineering, bioinformatics, eGovernment, eHealth, and social networks. This widespread use of ontologies has led to remarkable advances in techniques to manipulate, share, reuse, and integrate information across heterogeneous data sources. In recent years, the growth of the Internet of Things (IoT) has required facing the challenges of “Big Data” [6–10]. The cost of sensors is decreasing while their use is expanding; moreover, the use of multiple personal smart devices is an emerging trend, and all of them can embed sensors to monitor the surrounding environment. The number of available sensors is therefore exploding. On the one hand, the flows of sensor data are massive and continuous, and the data can be obtained in real time or with a delay of just a few seconds, so the volume of sensor data grows continuously every day. On the other hand, the variety of the data being generated is also increasing, owing to the plethora of different devices and different measures to record: there are many kinds of structured and unstructured sensor data in diverse formats. Moreover, data veracity, i.e., the degree of accuracy or truthfulness of a data set, is an important aspect to consider; in the context of sensor data, it represents the trustworthiness of the data source and of the processing of the data. The need for more accurate and reliable data has always been declared, but is often overlooked for the sake of larger and cheaper…
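    A minimal sketch of how the veracity notion above might be operationalized: each raw reading is wrapped in a record whose veracity score combines a per-source trust value with a crude plausibility check. The source names, trust values, and plausible ranges are illustrative assumptions.

    # Sketch: attach a veracity score to heterogeneous sensor readings.
    from dataclasses import dataclass

    # Hypothetical per-source trust, e.g. calibrated station vs. phone sensor.
    SOURCE_TRUST = {"weather_station": 0.95, "smartphone": 0.6}

    # Hypothetical plausible ranges per measure, used as a sanity check.
    PLAUSIBLE = {"temperature_c": (-60.0, 60.0), "humidity_pct": (0.0, 100.0)}

    @dataclass
    class Reading:
        source: str
        measure: str
        value: float
        veracity: float  # in [0, 1]

    def ingest(source, measure, value):
        """Wrap a raw sensor value in a record with a veracity score."""
        lo, hi = PLAUSIBLE[measure]
        plausible = 1.0 if lo <= value <= hi else 0.0
        trust = SOURCE_TRUST.get(source, 0.5)  # default for unknown sources
        return Reading(source, measure, value, trust * plausible)

    print(ingest("smartphone", "temperature_c", 21.5))   # veracity = 0.6
    print(ingest("smartphone", "temperature_c", 180.0))  # veracity = 0.0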