
    Support Vector Methods for Higher-Level Event Extraction in Point Data

    Phenomena occur both in space and time. Correspondingly, the ability to model spatiotemporal behavior translates into the ability to model phenomena as they occur in reality. Given the complexity inherent in integrating spatial and temporal dimensions, however, the establishment of computational methods for spatiotemporal analysis has proven relatively elusive. Nonetheless, one method, the spatiotemporal helix, has emerged from the field of video processing. Designed to efficiently summarize and query the deformation and movement of spatiotemporal events, the spatiotemporal helix has been demonstrated to be capable of describing and differentiating the evolution of hurricanes from sequences of images. Being derived from image data, the representations of events for which the spatiotemporal helix was originally created appear in areal form (e.g., a hurricane covering several square miles is represented by groups of pixels). Many sources of spatiotemporal data, however, are not in areal form and instead appear as points. Examples of spatiotemporal point data include the records of an epidemiologist noting the time and location of cases of disease and environmental observations collected by a geosensor at the point of its location. As points, these data cannot be directly incorporated into the spatiotemporal helix for analysis. Moreover, because the analytic potential of raw clouds of point data is limited, phenomena represented by point data are often described in terms of events. Defined as change units localized in space and time, events allow for analysis at multiple levels. For instance, lower-level events refer to occurrences of interest described by single data streams at point locations (e.g., an individual case of a certain disease or a significant change in chemical concentration in the environment), while higher-level events describe occurrences of interest derived from aggregations of lower-level events and are frequently described in areal form (e.g., a disease cluster or a pollution cloud). Because these higher-level events appear in areal form, they could potentially be incorporated into the spatiotemporal helix. With deformation being an important element of spatiotemporal analysis, however, the crux of a point-data-based process for spatiotemporal analysis is the accurate translation of lower-level event points into representations of higher-level areal events. A limitation of current techniques for the derivation of higher-level events is that they introduce a priori bias regarding the shape of higher-level events (e.g., elliptical, convex, linear), which could limit the description of their deformation over time. The objective of this research is to propose two newly developed kernel methods, support vector clustering (SVC) and support vector machines (SVMs), as means for translating lower-level event points into higher-level event areas that follow the distribution of lower-level points. SVC is suggested for the derivation of higher-level events arising in point process data, while SVMs are explored for their potential with scalar field data (i.e., spatially continuous real-valued data). Developed in the field of machine learning to solve complex non-linear problems, both of these methods are capable of producing highly non-linear representations of higher-level events that may be more suitable than existing methods for spatiotemporal analysis of deformation.
To introduce these methods, the thesis first establishes their context through a description of existing techniques. This discussion leads to a technical explanation of the mechanics of SVC and SVMs and to the implementation of each kernel method on simulated datasets. Results from these simulations inform discussion of the application potential of SVC and SVMs.
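
    As a purely illustrative sketch of the kind of boundary these kernel methods can produce (not code from the thesis), the following Python snippet uses scikit-learn's one-class SVM with an RBF kernel as a stand-in for SVC: simulated lower-level event points are enclosed by the zero level set of the decision function, yielding a non-convex higher-level event area that follows the distribution of the points. The parameter values nu and gamma are assumptions chosen only for demonstration.

        # Minimal sketch: deriving a non-convex "higher-level event" boundary
        # around simulated lower-level event points with an RBF-kernel one-class SVM.
        # Parameter values (nu, gamma) are illustrative assumptions, not from the thesis.
        import numpy as np
        from sklearn.svm import OneClassSVM

        rng = np.random.default_rng(0)
        # Simulated lower-level event points: two overlapping clusters in space
        points = np.vstack([
            rng.normal(loc=[0.0, 0.0], scale=0.5, size=(150, 2)),
            rng.normal(loc=[2.0, 1.0], scale=0.4, size=(100, 2)),
        ])

        # nu bounds the fraction of points left outside; gamma controls boundary tightness
        model = OneClassSVM(kernel="rbf", nu=0.05, gamma=0.8).fit(points)

        # Evaluate the decision function on a grid; the zero level set outlines
        # the higher-level event area that follows the distribution of the points
        xx, yy = np.meshgrid(np.linspace(-2, 4, 200), np.linspace(-2, 3, 200))
        grid = np.column_stack([xx.ravel(), yy.ravel()])
        inside = (model.decision_function(grid) >= 0).reshape(xx.shape)
        print(f"Grid cells inside the derived event area: {inside.sum()}")

    Tightening gamma makes the derived area hug the points more closely, which is the freedom from a priori shape assumptions that the thesis argues existing techniques lack.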

    SEI+II Information Integration Through Events

    Many environmental observations are collected at different space and time scales that preclude easy integration of the data and hinder broader understanding of ecosystem dynamics. Ocean Observing Systems provide a specific example of multi-sensor systems observing several variables in different space-time regimes. This project integrates diverse space-time environmental sensor streams based on the conversion of their information content to a common higher-level abstraction: a space-time event data type. The space-time event data type normalizes across the diversity of observation-level data to produce a common data type for exploration and analysis. Gulf of Maine Ocean Observing System (GOMOOS) data provide the multivariate time and space-time series from which space-time events are detected and assembled. Event detection employs a combined top-down and bottom-up approach. The top-down component specifies an event ontology, while the bottom-up component is based on the extraction of primitive events (e.g., decreasing, increasing, local maximum and local minimum sequences) from time and space-time series. Exploration and analysis of the extracted events employ a graphic exploratory environment based on a graphic primitive called an event band and its composition into event band stacks and panels that support investigation of various space-time patterns. The project contributes a new information integration approach based on the concept of an event that can be extended to many domains, including socio-economic, financial, legislative, surveillance, and health-related information. The project will contribute new data mining strategies for event detection in time and space-time series and a set of flexible exploratory tools for examination and development of hypotheses on space-time event patterns and interactions.
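
    A hedged sketch of the bottom-up component described above, extracting primitive events (increasing and decreasing runs, local maxima and minima) from a univariate time series; the function name, event labels, and minimum run length are illustrative assumptions rather than the project's actual implementation.

        # Minimal sketch: extracting primitive events (increasing/decreasing runs,
        # local maxima and minima) from a univariate time series.
        # The event labels and minimum run length are illustrative assumptions.
        from typing import List, Tuple
        import numpy as np

        def primitive_events(series: np.ndarray, min_run: int = 3) -> List[Tuple[str, int, int]]:
            """Return (label, start_index, end_index) tuples for primitive events."""
            events = []
            diffs = np.sign(np.diff(series))

            # Increasing / decreasing runs of at least `min_run` consecutive steps
            start = 0
            for i in range(1, len(diffs) + 1):
                if i == len(diffs) or diffs[i] != diffs[start]:
                    if i - start >= min_run and diffs[start] != 0:
                        label = "increasing" if diffs[start] > 0 else "decreasing"
                        events.append((label, start, i))
                    start = i

            # Local maxima and minima as single-point primitive events
            for i in range(1, len(series) - 1):
                if series[i] > series[i - 1] and series[i] > series[i + 1]:
                    events.append(("local_maximum", i, i))
                elif series[i] < series[i - 1] and series[i] < series[i + 1]:
                    events.append(("local_minimum", i, i))
            return events

        # Example: a noisy sinusoid standing in for an ocean-observing sensor stream
        t = np.linspace(0, 4 * np.pi, 200)
        print(primitive_events(np.sin(t) + 0.05 * np.random.randn(200))[:5])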

    Development of a GIS-based method for sensor network deployment and coverage optimization

    In recent years, sensor networks have been increasingly used in applications ranging from environmental monitoring and tracking of moving objects to the development of smart cities and intelligent transportation systems. A sensor network usually consists of numerous wireless devices deployed in a region of interest. A fundamental issue in a sensor network is the optimization of its spatial coverage. The complexity of the sensing environment, with the presence of diverse obstacles, results in several uncovered areas. Consequently, sensor placement affects how well a region is covered as well as the cost of constructing the network. For efficient deployment of a sensor network, several optimization algorithms have been developed and applied in recent years. Most of these algorithms rely on oversimplified sensor and network models. In addition, they do not consider spatial environmental information, such as terrain models, human-built infrastructure, and the presence of diverse obstacles, in the optimization process. The overall objective of this thesis is to improve sensor deployment processes by integrating geospatial information and knowledge into optimization algorithms. To achieve this objective, three specific objectives are defined. First, a conceptual framework is developed for the integration of contextual information in sensor network deployment processes. Then, based on the proposed framework, a local context-aware optimization algorithm is developed. The extended approach is a generic local algorithm for sensor deployment that can take spatial, temporal, and thematic contextual information into account in different application contexts. Next, an accuracy assessment and error propagation analysis is conducted to determine the impact of the accuracy of contextual information on the proposed sensor network optimization method. In this thesis, contextual information has been integrated into local optimization methods for sensor network deployment. The extended algorithm is based on the point Voronoi diagram for modeling and representing the geometric structure of sensor networks. In the proposed approach, sensors change their locations based on local contextual information (the physical environment, network information, and sensor characteristics) with the aim of improving network coverage. The proposed method is implemented in MATLAB and tested with several datasets obtained from the Quebec City spatial database. Results obtained from different case studies show the effectiveness of our approach.
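
    To make the Voronoi-based local optimization concrete, here is a minimal Python sketch (the thesis implementation itself is in MATLAB and incorporates contextual information): the region of interest is discretized, each grid point is assigned to its nearest sensor, which yields a discrete Voronoi partition, and every sensor moves a fraction of the way toward the centroid of its cell. Region size, grid resolution, and step size are illustrative assumptions.

        # Minimal sketch of one local, Voronoi-style coverage adjustment step:
        # the region is discretized, each grid point is assigned to its nearest
        # sensor (a discrete Voronoi partition), and every sensor moves a fraction
        # of the way toward the centroid of its cell. Step size and grid resolution
        # are illustrative assumptions, not values from the thesis.
        import numpy as np
        from scipy.spatial import cKDTree

        def lloyd_step(sensors: np.ndarray, region_size=(100.0, 100.0),
                       resolution=200, step=0.5) -> np.ndarray:
            xs = np.linspace(0, region_size[0], resolution)
            ys = np.linspace(0, region_size[1], resolution)
            grid = np.column_stack([g.ravel() for g in np.meshgrid(xs, ys)])

            # Discrete Voronoi partition: nearest sensor for every grid point
            _, owner = cKDTree(sensors).query(grid)

            new_positions = sensors.copy()
            for i in range(len(sensors)):
                cell = grid[owner == i]
                if len(cell):
                    # Move part of the way toward the centroid of the sensor's cell
                    new_positions[i] += step * (cell.mean(axis=0) - sensors[i])
            return new_positions

        rng = np.random.default_rng(1)
        sensors = rng.uniform(0, 100, size=(20, 2))
        for _ in range(10):                      # a few local adjustment iterations
            sensors = lloyd_step(sensors)
        print(sensors[:3])

    A context-aware variant would weight or mask the grid cells according to obstacles, terrain, and sensing characteristics before computing the centroids.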

    New Generation Sensor Web Enablement

    Many sensor networks have been deployed to monitor Earth's environment, and more will follow in the future. Environmental sensors have improved continuously by becoming smaller, cheaper, and more intelligent. Due to the large number of sensor manufacturers and differing accompanying protocols, integrating diverse sensors into observation systems is not straightforward. A coherent infrastructure is needed to treat sensors in an interoperable, platform-independent, and uniform way. The concept of the Sensor Web reflects such an infrastructure for sharing, finding, and accessing sensors and their data across different applications. It hides the heterogeneous sensor hardware and communication protocols from the applications built on top of it. The Sensor Web Enablement initiative of the Open Geospatial Consortium standardizes web service interfaces and data encodings that can be used as building blocks for a Sensor Web. This article illustrates and analyzes the recent developments of the new generation of the Sensor Web Enablement specification framework. Further, we relate the Sensor Web to other emerging concepts such as the Web of Things and point out challenges and resulting future work topics for research on Sensor Web Enablement.
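
    As a small, hedged illustration of how such standardized interfaces are used in practice, the sketch below issues a KVP GetCapabilities request to a Sensor Observation Service, one of the SWE service interfaces, and lists the operations it advertises; the endpoint URL is a placeholder, not a real service.

        # Minimal sketch: discovering a Sensor Observation Service (SOS), one of the
        # OGC Sensor Web Enablement service interfaces, via its KVP GetCapabilities
        # operation. The endpoint URL is a placeholder assumption.
        import requests
        import xml.etree.ElementTree as ET

        SOS_ENDPOINT = "https://example.org/sos"   # hypothetical SOS endpoint

        params = {
            "service": "SOS",
            "request": "GetCapabilities",
            "acceptVersions": "2.0.0",
        }
        response = requests.get(SOS_ENDPOINT, params=params, timeout=30)
        response.raise_for_status()

        # List the operations the service advertises in its capabilities document
        root = ET.fromstring(response.content)
        ows = "{http://www.opengis.net/ows/1.1}"
        for op in root.iter(f"{ows}Operation"):
            print(op.attrib.get("name"))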

    A semantic sensor web for environmental decision support applications

    Sensing devices are increasingly being deployed to monitor the physical world around us. One class of application for which sensor data is pertinent is environmental decision support systems, e.g., flood emergency response. For these applications, the sensor readings need to be put in context by integrating them with other sources of data about the surrounding environment. Traditional systems for predicting and detecting floods rely on methods that need significant human resources. In this paper, we describe a semantic sensor web architecture for integrating multiple heterogeneous datasets, including live and historic sensor data, databases, and map layers. The architecture provides mechanisms for discovering datasets, defining integrated views over them, continuously receiving data in real time, and visualising and interacting with the data on screen. Our approach makes extensive use of web service standards for querying and accessing data, and of semantic technologies to discover and integrate datasets. We demonstrate the use of our semantic sensor web architecture in the context of a flood response planning web application that uses data from sensor networks monitoring the sea state around the coast of England.
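
    The following sketch is a hypothetical illustration of the semantic side of such an architecture: sensor observations expressed in RDF (here using the W3C SOSA vocabulary as a stand-in for the ontologies an implementation might adopt) are queried with SPARQL to pull out readings relevant to flood response. The sample triples and the threshold are invented for illustration.

        # Minimal sketch: integrating and querying sensor observations as RDF, here
        # with the W3C SOSA vocabulary standing in for the ontologies such an
        # architecture would use. The sample readings are invented for illustration.
        from rdflib import Graph

        TURTLE = """
        @prefix sosa: <http://www.w3.org/ns/sosa/> .
        @prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
        @prefix ex:   <http://example.org/coastal#> .

        ex:obs1 a sosa:Observation ;
            sosa:madeBySensor ex:waveBuoy7 ;
            sosa:observedProperty ex:significantWaveHeight ;
            sosa:hasSimpleResult "4.2"^^xsd:double ;
            sosa:resultTime "2024-01-15T06:00:00Z"^^xsd:dateTime .
        """

        QUERY = """
        PREFIX sosa: <http://www.w3.org/ns/sosa/>
        SELECT ?sensor ?value ?time WHERE {
            ?obs a sosa:Observation ;
                 sosa:madeBySensor ?sensor ;
                 sosa:hasSimpleResult ?value ;
                 sosa:resultTime ?time .
            FILTER (?value > 3.0)            # e.g. sea states relevant to flooding
        }
        """

        g = Graph().parse(data=TURTLE, format="turtle")
        for sensor, value, time in g.query(QUERY):
            print(sensor, value, time)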

    Object Tracking in Distributed Video Networks Using Multi-Dimensional Signatures

    From being an expensive toy in the hands of governmental agencies, computers have come a long way, from huge vacuum-tube-based machines to today's small but more than a thousand times more powerful personal computers. Computers have long been investigated as the foundation for an artificial vision system. The computer vision discipline has seen rapid development over the past few decades, from rudimentary motion detection systems to complex model-based object motion analysis algorithms. Our work is one such improvement over previous algorithms developed for object motion analysis in video feeds. It is based on the principle of multi-dimensional object signatures. Object signatures are constructed from individual attributes extracted through video processing. While past work has proceeded along similar lines, the lack of a comprehensive object definition model severely restricts the application of such algorithms to controlled situations. In conditions with varying external factors, such algorithms perform less efficiently due to inherent assumptions of constancy of attribute values. Our approach assumes a variable environment in which the attribute values recorded for an object are prone to variability. Variations in the accuracy of object attribute values are addressed by incorporating weights for each attribute that vary according to local conditions at a sensor location. This ensures that attribute values with higher accuracy can be accorded more credibility in the object matching process. Variations in attribute values (such as the surface color of the object) were also addressed by applying error corrections such as shadow elimination in the detected object profile. Experiments were conducted to verify our hypothesis. The results established the validity of our approach, as higher matching accuracy was obtained with our multi-dimensional approach than with a single-attribute comparison.
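
    A minimal, hypothetical sketch of weighted multi-dimensional signature matching: each signature is a vector of normalized attributes, and per-attribute weights supplied by a sensor node reflect how trustworthy each attribute is under local conditions. Attribute names and weight values are assumptions, not those used in the work.

        # Minimal sketch of weighted signature matching: each object signature is a
        # vector of attributes (e.g. hue, size, speed), and each camera/sensor node
        # supplies per-attribute confidence weights reflecting its local conditions.
        # Attribute names and weight values are illustrative assumptions.
        import numpy as np

        def signature_similarity(sig_a: np.ndarray, sig_b: np.ndarray,
                                 weights: np.ndarray) -> float:
            """Weighted similarity in [0, 1]; attributes are assumed pre-normalized."""
            w = weights / weights.sum()
            distance = np.sqrt(np.sum(w * (sig_a - sig_b) ** 2))
            return 1.0 - min(distance, 1.0)

        # Signatures: [hue, relative size, speed], each scaled to [0, 1]
        tracked  = np.array([0.62, 0.40, 0.55])
        observed = np.array([0.70, 0.42, 0.50])

        # Poor lighting at this node: trust color less than size and speed
        weights = np.array([0.2, 0.4, 0.4])
        print(f"match score: {signature_similarity(tracked, observed, weights):.3f}")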

    SST: Integrated Fluorocarbon Microsensor System Using Catalytic Modification

    Selective, sensitive, and reliable sensors are urgently needed to detect airborne halogenated volatile organic compounds (VOCs). This broad class of compounds includes chlorine-, fluorine-, bromine-, and iodine-containing hydrocarbons used as solvents, refrigerants, herbicides, and, more recently, as chemical warfare agents (CWAs). It is important to be able to detect very low concentrations of halocarbon solvents and insecticides because of their acute health effects even at very low concentrations. For instance, the nerve agent sarin (isopropyl methylphosphonofluoridate), first developed as an insecticide by German chemists in 1938, is so toxic that a ten-minute exposure at an airborne concentration of only 65 parts per billion (ppb) can be fatal. Sarin became a household term when religious cult members poisoned over 5,500 people on Tokyo subway trains, killing 12. Sarin and other CWAs remain a significant threat to the health and safety of the general public. The goal of this project is to design a sensor system to detect and identify the composition and concentration of fluorinated VOCs. The system should be small, robust, compatible with metal oxide semiconductor (MOS) technology, cheap if produced at large scale, and versatile in terms of low power consumption, detection of other gases, and integration into a portable system. The proposed VOC sensor system has three major elements that will be integrated into a microreactor flow cell: a temperature-programmable microhotplate array/reactor system, which serves as the basic sensor platform; an innovative acoustic wave sensor, which detects material removal (instead of deposition) to verify and quantify the presence of fluorine; and an intelligent method, support vector machines, that will analyze the complex, high-dimensional data furnished by the sensor system. The superior and complementary aspects of the three elements will be carefully integrated to create a system that is more sensitive and selective than other CWA detection systems that are commercially available or described in the research literature. While our sensor system will be developed to detect fluorinated VOCs, it can be adapted for other applications in which a target analyte can be catalytically converted for selective detection. Therefore, this investigation will examine the relationships between individual sensor element performance and joint sensor platform performance, integrated with state-of-the-art data analysis techniques. During development of the sensor system, the investigators will consider traditional reactor design concepts such as mass transfer and residence time effects, and will apply them to the emerging field of microsystems. The proposed research will provide the fundamental basis and understanding for examining multifunctional sensor platforms designed to provide extreme selectivity to targeted molecules. The project will involve interdisciplinary researchers and students and will connect to K-12 and RET programs for underrepresented students from rural areas.
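
    As a hedged illustration of the third element, the data-analysis method, the sketch below trains a support vector machine on synthetic sensor-array responses to separate fluorinated-VOC exposures from background; the feature layout, class labels, and hyperparameters are assumptions for demonstration only.

        # Minimal sketch of the data-analysis element: a support vector machine
        # classifying synthetic sensor-array responses into analyte classes.
        # The feature layout and class labels are illustrative assumptions.
        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        # Rows: measurements; columns: responses of the microhotplate array elements
        # and the acoustic-wave channel at several operating temperatures
        n_per_class, n_features = 80, 12
        fluorinated = rng.normal(1.0, 0.3, size=(n_per_class, n_features))
        background  = rng.normal(0.2, 0.3, size=(n_per_class, n_features))
        X = np.vstack([fluorinated, background])
        y = np.array([1] * n_per_class + [0] * n_per_class)   # 1 = fluorinated VOC

        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.25, random_state=0, stratify=y)

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
        clf.fit(X_train, y_train)
        print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")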

    Rapid visual presentation to support geospatial big data processing

    Given the limited number of human GIS/image analysts at any organization, efficient use of their time and organizational resources is important, especially in Big Data scenarios where organizations may be overwhelmed with vast amounts of geospatial data. This manuscript describes experimental research outlining the concept of Human-Computer Symbiosis, in which computers perform tasks such as classification on a large image dataset and, in sequence, humans use Brain-Computer Interfaces (BCIs) to classify those images that machine learning had difficulty with. The BCI analysis is added to exploit the brain's ability to better answer questions like: is the object in this image the object being sought? To determine the feasibility of such a system, a supervised multi-layer convolutional neural network (CNN) was trained to distinguish 'ships' from 'no ships' in satellite imagery. A prediction layer was then added to the trained model to output the probability that a given image belongs to each of those two classes. If the probabilities fell within one standard deviation of the mean of a Gaussian distribution centered at 0.5, the images were stored in a separate dataset for Rapid Serial Visual Presentation (RSVP), implemented with PsychoPy, to a human analyst wearing a low-cost EMOTIV Insight EEG BCI headset. During the RSVP phase, hundreds of images per minute can be shown sequentially. At such a pace, human analysts cannot make conscious decisions about what is in each image; however, the subliminal 'aha' moment can still be detected by the headset. These moments are identified by detecting Event-Related Potentials (ERPs), specifically the P300 ERP. If a P300 ERP is generated for the detection of a ship, the relevant image is moved to its rightful dataset; otherwise, if the image classification is still unclear, it is set aside for another RSVP iteration in which the time afforded to the analyst for observing each image is increased. If classification remains uncertain after a reasonable number of RSVP iterations, the images in question are located within the grid matrix of their larger image scene, and the images adjacent to those of interest on the grid are added to the presentation to give the analyst more contextual information via an expanded field of view. If classification is still uncertain, one final expansion of the field of view is afforded. Lastly, if the classification of the image remains indeterminable, the image is stored in an archive dataset.
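
    The triage rule described above can be sketched as follows (a simplified illustration, not the study's code): images whose CNN 'ship' probability lies within one standard deviation of 0.5 are queued for RSVP/BCI review, while confident predictions are labeled automatically. The value used for the standard deviation is an assumption.

        # Minimal sketch of the triage rule: images whose CNN "ship" probability
        # falls within one standard deviation of 0.5 are routed to the RSVP/BCI
        # queue; confident predictions are labeled automatically.
        # The standard deviation value is an illustrative assumption.
        import numpy as np

        def route_predictions(probs: np.ndarray, sigma: float = 0.15):
            """Split image indices into (ship, no_ship, rsvp_queue) by confidence."""
            uncertain = np.abs(probs - 0.5) <= sigma
            ship      = np.where(~uncertain & (probs >  0.5))[0]
            no_ship   = np.where(~uncertain & (probs <= 0.5))[0]
            rsvp      = np.where(uncertain)[0]
            return ship, no_ship, rsvp

        probs = np.array([0.97, 0.62, 0.49, 0.08, 0.55, 0.91, 0.41])
        ship, no_ship, rsvp = route_predictions(probs)
        print("auto ship:", ship, "auto no-ship:", no_ship, "to RSVP/BCI:", rsvp)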

    Spatiotemporal Wireless Sensor Network Field Approximation with Multilayer Perceptron Artificial Neural Network Models

    As sensors become increasingly compact and dependable in natural environments, spatially distributed heterogeneous sensor network systems steadily become more pervasive. However, any environmental monitoring system must account for potential data loss due to a variety of natural and technological causes. Modeling a natural spatial region can be problematic due to spatial nonstationarities in environmental variables and because particular regions may be subject to specific influences at different spatial scales. Relationships between processes within these regions are often ephemeral, so models designed to represent them cannot remain static. Integrating temporal factors into these models engenders further complexity. This dissertation evaluates the use of multilayer perceptron neural network models in the context of sensor networks as a possible solution to many of these problems, given their data-driven nature, representational flexibility, and straightforward fitting process. The relative importance of parameters is determined via an adaptive backpropagation training process, which converges to a best-fit model for sensing platforms to validate collected data or approximate missing readings. As conditions evolve over time such that the model can no longer adapt to changes, new models are trained to replace the old. We demonstrate accuracy results for the MLP generally on par with those of spatial kriging, but able to integrate additional physical and temporal parameters, enabling its application to any region with a collection of available data streams. Potential uses of this model include not only approximating missing data in the sensor field but also flagging potentially incorrect, unusual, or atypical data returned by the sensor network. Given the potential for spatial heterogeneity in a monitored phenomenon, this dissertation further explores the benefits of partitioning a space and applying individual MLP models to these partitions. A system of neural models using both spatial and temporal parameters can be envisioned such that a spatiotemporal space partitioned by k-means is modeled by k neural models, with internal weightings varying individually according to the dominant processes within the region assigned to each. Evaluated on simulated and real data on surface currents of the Gulf of Maine, partitioned models show significantly improved results over single global models.
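
    A minimal sketch of the partitioned modeling idea, with assumed synthetic data and hyperparameters standing in for the dissertation's Gulf of Maine configuration: k-means partitions the spatiotemporal domain, and a separate multilayer perceptron is fit to the observations in each partition.

        # Minimal sketch of the partitioned approach: k-means splits the
        # spatiotemporal domain, and a separate multilayer perceptron is fit to
        # the observations in each partition. The synthetic field and
        # hyperparameters are illustrative assumptions.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        X = rng.uniform(0, 10, size=(1500, 3))            # (x, y, time)
        # Synthetic "surface current" field with spatially varying behaviour
        y = np.where(X[:, 0] < 5,
                     np.sin(X[:, 1]) + 0.1 * X[:, 2],
                     np.cos(X[:, 1]) - 0.05 * X[:, 2]) + rng.normal(0, 0.05, len(X))

        k = 4
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)

        models = []
        for i in range(k):                                # one MLP per partition
            idx = labels == i
            mlp = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                               random_state=0).fit(X[idx], y[idx])
            models.append(mlp)
            print(f"partition {i}: n={idx.sum()}, R^2={mlp.score(X[idx], y[idx]):.3f}")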

    Internet of things

    Manual of Digital Earth / Editors: Huadong Guo, Michael F. Goodchild, Alessandro Annoni. Springer, 2020. ISBN: 978-981-32-9915-3. Digital Earth was born with the aim of replicating the real world within the digital world. Many efforts have been made to observe and sense the Earth, both from space (remote sensing) and by using in situ sensors. Focusing on the latter, advances in Digital Earth have established vital bridges to exploit these sensors and their networks by taking location as a key element. The current era of connectivity envisions that everything is connected to everything. The concept of the Internet of Things (IoT) emerged as a holistic proposal to enable an ecosystem of varied, heterogeneous networked objects and devices to speak to and interact with each other. To make the IoT ecosystem a reality, it is necessary to understand the electronic components, communication protocols, real-time analysis techniques, and the location of the objects and devices. The IoT ecosystem and the Digital Earth (DE) jointly form interrelated infrastructures for addressing today's pressing issues and complex challenges. In this chapter, we explore the synergies and frictions in establishing an efficient and permanent collaboration between the two infrastructures, in order to adequately address multidisciplinary and increasingly complex real-world problems. Although there are still some pending issues, the identified synergies generate optimism for a true collaboration between the Internet of Things and the Digital Earth.