    Data modelling for emergency response

    Emergency response is one of the most demanding phases in disaster management. The fire brigade, paramedics, police and municipality are the organisations involved in the first response to an incident. They coordinate their work based on well-defined policies and procedures, but they also need the most complete and up-to-date information about the incident, which allows reliable decision-making. There is a variety of systems answering the needs of different emergency responders, but they have many drawbacks: the systems are developed for a specific sector; it is difficult to exchange information between systems; the systems offer too much or too little information, etc. Several systems have been developed to share information during emergencies, but they usually maintain the information coming from field operations in an unstructured way. This report presents a data model for the organisation of dynamic data (operational and situational data) for emergency response. The model was developed within the RGI-239 project ‘Geographical Data Infrastructure for Disaster Management’ (GDI4DM).
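
    As an illustration of what such a model for dynamic incident data might look like, the sketch below uses Python dataclasses; the class and field names are hypothetical and do not reproduce the GDI4DM model.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Tuple

# Hypothetical illustration only: classes and field names are invented and
# are not the actual GDI4DM data model.

@dataclass
class Observation:
    """One situational report from the field (e.g. smoke sighted, road blocked)."""
    reported_at: datetime
    reporter_org: str              # e.g. "fire brigade", "police", "municipality"
    location: Tuple[float, float]  # (longitude, latitude)
    description: str

@dataclass
class Incident:
    """Operational container shared by all responding organisations."""
    incident_id: str
    incident_type: str             # e.g. "fire", "flood"
    started_at: datetime
    observations: List[Observation] = field(default_factory=list)

    def latest(self) -> Observation:
        """Most recent situational update, to support decision-making."""
        return max(self.observations, key=lambda o: o.reported_at)
```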

    ITR/IM: Enabling the Creation and Use of GeoGrids for Next Generation Geospatial Information

    The objective of this project is to advance science in information management, focusing in particular on geospatial information. It addresses the development of concepts, algorithms, and system architectures to enable users on a grid to query, analyze, and contribute to multivariate, quality-aware geospatial information. The approach consists of three complementary research areas: (1) establishing a statistical framework for assessing geospatial data quality; (2) developing uncertainty-based query processing capabilities; and (3) supporting the development of space- and accuracy-aware adaptive systems for geospatial datasets. The results of this project will support the extension of the concept of the computational grid to facilitate ubiquitous access, interaction, and contributions of quality-aware next-generation geospatial information. By developing novel query processes as well as quality and similarity metrics, the project aims to enable the integration and use of large collections of dispersed information of varying quality and accuracy. This supports the evolution of a novel geocomputational paradigm, moving away from current standards-driven approaches to an inclusive, adaptive system, with example potential applications in mobile computing, bioinformatics, and geographic information systems. This experimental research is linked to educational activities in three different academic programs among the three participating sites. The outreach activities of this project include collaboration with U.S. federal agencies involved in geospatial data collection, an international partner (Brazil's National Institute for Space Research), and the organization of a 2-day workshop with the participation of U.S. and international experts.
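
    The sketch below illustrates the general idea of a quality-aware query, filtering geospatial features by both a spatial bound and a positional-accuracy threshold; the API and field names are assumptions for illustration, not the project's design.

```python
# Hypothetical sketch of a quality-aware query over geospatial features:
# results are restricted to a bounding box and to a caller-supplied accuracy
# threshold. Names and fields are invented, not the project's actual API.
from dataclasses import dataclass
from typing import Iterable, Iterator, Tuple

@dataclass
class GeoFeature:
    feature_id: str
    lon: float
    lat: float
    horizontal_accuracy_m: float   # estimated positional error in metres

def quality_aware_query(features: Iterable[GeoFeature],
                        bbox: Tuple[float, float, float, float],
                        max_error_m: float) -> Iterator[GeoFeature]:
    """Yield features inside bbox whose reported accuracy meets the threshold."""
    min_lon, min_lat, max_lon, max_lat = bbox
    for f in features:
        if (min_lon <= f.lon <= max_lon
                and min_lat <= f.lat <= max_lat
                and f.horizontal_accuracy_m <= max_error_m):
            yield f
```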

    Digital Government: Knowledge Management Over Time-Varying Geospatial Datasets

    Spatially-related data is collected by many government agencies in various formats and for various uses. This project seeks to facilitate the integration of these data, thus enabling new uses. This will require the development of a knowledge management framework to provide syntax, context, and semantics, as well as exploring the introduction of time-varying data into the framework. Education and outreach will be part of the project through the development of on-line short courses related to data integration in the area of geographical information systems. The grantees will be working with government partners (the National Imagery and Mapping Agency, the National Agricultural Statistics Service, and the US Army Topographic Engineering Center), as well as an industrial organization, Base Systems, and the non-profit OpenGIS Consortium, which works closely with vendors of GIS products.
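
    A minimal sketch of how time-varying geospatial records could be represented, assuming a simple valid-time interval per feature version; the names and structure are hypothetical, not the framework developed in this project.

```python
# Hypothetical sketch: each feature version carries a valid-time interval, and
# a query can ask for the state of the dataset "as of" a given date.
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class FeatureVersion:
    feature_id: str
    geometry_wkt: str            # e.g. "POLYGON ((...))"
    attributes: dict
    valid_from: date
    valid_to: Optional[date]     # None means still current

def as_of(versions: List[FeatureVersion], when: date) -> List[FeatureVersion]:
    """Return the feature versions that were valid on the given date."""
    return [v for v in versions
            if v.valid_from <= when and (v.valid_to is None or when < v.valid_to)]
```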

    Multi-Modality Human Action Recognition

    Human action recognition is useful in many applications and areas, e.g. video surveillance, human-computer interaction (HCI), video retrieval, gaming and security. It has recently become an active research topic in computer vision and pattern recognition, and a number of action recognition approaches have been proposed. However, most of these approaches are designed for RGB image sequences, where the action data are collected by an RGB/intensity camera, so recognition performance usually depends on the occlusion, background, and lighting conditions of the image sequences. If more information is available along with the image sequences, so that data sources other than RGB video can be utilized, human actions could be better represented and recognized by the designed computer vision system.

    In this dissertation, multi-modality human action recognition is studied. On one hand, we introduce the study of multi-spectral action recognition, which involves information from spectra beyond the visible, e.g. infrared and near infrared. Action recognition in individual spectra is explored and new methods are proposed; cross-spectral action recognition is also investigated and novel approaches are proposed in our work. On the other hand, depth imaging technology has made significant progress recently, and depth information can be captured simultaneously with RGB video, so depth-based human action recognition is also investigated. I first propose a method combining different types of depth data to recognize human actions. Then a thorough evaluation is conducted on spatiotemporal interest point (STIP) based features for depth-based action recognition. Finally, I advocate the study of fusing different features for depth-based action analysis. Moreover, human depression recognition is studied by combining a facial appearance model with a facial dynamic model.
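
    A minimal sketch of one common way to combine modalities, score-level (late) fusion of per-class classifier outputs; the weights and example values are hypothetical and do not reproduce the methods proposed in the dissertation.

```python
# Hypothetical sketch of score-level fusion across modalities (e.g. RGB,
# depth, near infrared): per-class scores from modality-specific classifiers
# are averaged with weights. The classifiers and weights are placeholders.
import numpy as np

def fuse_scores(score_lists, weights=None):
    """Weighted average of per-class score vectors; returns (label, fused scores)."""
    scores = np.stack(score_lists)                     # (n_modalities, n_classes)
    if weights is None:
        weights = np.ones(len(score_lists)) / len(score_lists)
    fused = np.average(scores, axis=0, weights=weights)
    return int(np.argmax(fused)), fused

# Example with three modalities and four action classes
rgb   = np.array([0.1, 0.6, 0.2, 0.1])
depth = np.array([0.2, 0.5, 0.2, 0.1])
nir   = np.array([0.3, 0.3, 0.3, 0.1])
label, fused = fuse_scores([rgb, depth, nir])          # label == 1 here
```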

    SEI+II Information Integration Through Events

    Many environmental observations are collected at different space and time scales, which precludes easy integration of the data and hinders broader understanding of ecosystem dynamics. Ocean observing systems provide a specific example of multi-sensor systems observing several variables in different space-time regimes. This project integrates diverse space-time environmental sensor streams based on the conversion of their information content to a common higher-level abstraction: a space-time event data type. The space-time event data type normalizes across the diversity of observation-level data to produce a common data type for exploration and analysis. Gulf of Maine Ocean Observing System (GOMOOS) data provide the multivariate time and space-time series from which space-time events are detected and assembled. Event detection employs a combined top-down/bottom-up approach: the top-down component specifies an event ontology, while the bottom-up component is based on extraction of primitive events (e.g. decreasing, increasing, local maxima and minima sequences) from time and space-time series. Exploration and analysis of the extracted events employ a graphic exploratory environment based on a graphic primitive called an event band and its composition into event band stacks and panels that support investigation of various space-time patterns. The project contributes a new information integration approach based on the concept of an event that can be extended to many domains, including socio-economic, financial, legislative, surveillance and health-related information. The project will also contribute new data mining strategies for event detection in time and space-time series and a set of flexible exploratory tools for examination and development of hypotheses on space-time event patterns and interactions.
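
    The sketch below illustrates the bottom-up step in spirit, extracting primitive increasing/decreasing/flat runs from a univariate series; the event types and output format are assumptions for illustration, not the project's actual detectors.

```python
# Hypothetical sketch of primitive-event extraction from a univariate series:
# the series is segmented into maximal increasing / decreasing / flat runs,
# each reported as (event_type, start_index, end_index).
def primitive_events(values):
    """Segment a series into maximal increasing / decreasing / flat runs."""
    if len(values) < 2:
        return []
    def trend(a, b):
        return "increasing" if b > a else "decreasing" if b < a else "flat"
    events = []
    start, current = 0, trend(values[0], values[1])
    for i in range(1, len(values) - 1):
        t = trend(values[i], values[i + 1])
        if t != current:
            events.append((current, start, i))
            start, current = i, t
    events.append((current, start, len(values) - 1))
    return events

# Example: a short sequence of sensor readings
print(primitive_events([1, 2, 3, 2, 2, 4]))
# [('increasing', 0, 2), ('decreasing', 2, 3), ('flat', 3, 4), ('increasing', 4, 5)]
```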

    CAREER/EPSCoR: Geospatial Database-Driven Extraction of Information from Digital Aerial Imagery

    This project aims to advance the ability to extract spatial information from digital aerial imagery by taking advantage of geospatial databases to support and guide object extraction operations. It deals with aerial images representing scenes for which prior or complementary information already exists; examples of such information are pre-existing digital maps, digital terrain models, and spatial information systems in general. The research plan involves: (1) matching images to existing databases for change detection; (2) analysis of scale differences between database information and imagery; and (3) developing metadata structures to convey accuracy information for objects contained in geospatial databases. By embedding object extraction processes within the framework of spatial information systems, digital image analysis will be able to exploit the advantages offered by the availability of spatial data from various sources and in diverse formats, and, in turn, contribute to the improvement of the temporal and quantitative quality and completeness of the data contained in those sources. The educational aspect of the project includes initiatives designed to take advantage of and incorporate the project's research advancements in the graduate and undergraduate curriculum, as well as in the high-school outreach program of the Department of Spatial Information Engineering. A 3-day workshop, with the participation of U.S. and international experts, is also planned. Combined, the issues addressed in this project will substantially advance science in digital image processing and analysis, and will complement parallel advancements in a variety of related disciplines, most notably digital libraries, geographic information systems, and remote sensor technology.

    Data Mining

    Human and Animal Behavior Understanding

    Human and animal behavior understanding is an important yet challenging task in computer vision. It has a variety of real-world applications, including human-computer interaction (HCI), video surveillance, pharmacology, and genetics. We first present an evaluation of spatiotemporal interest point (STIP) features for depth-based human action recognition, then propose a framework called TriViews for 3D human action recognition with RGB-D data. Finally, we investigate a new approach for animal behavior recognition based on tracking, video content extraction and data fusion.

    STIP features are widely used with good performance for action recognition in visible-light videos. Recently, with the advance of depth imaging technology, a new modality has appeared for human action recognition. It is important to assess the performance and usefulness of STIP features for action analysis on the new modality of 3D depth maps. Three detectors and six descriptors are combined to form various STIP features in this thesis, and experiments are conducted on four challenging depth datasets.

    We present an effective framework called TriViews that utilizes 3D information for human action recognition. It projects the 3D depth maps into three views, i.e., front, side, and top views. Under this framework, five features are extracted from each view separately, and the three views are then combined to derive a complete description of the 3D data. The five features characterize action patterns from different aspects, among which the top three best features are selected and fused based on a probabilistic fusion approach (PFA). We evaluate the proposed framework on three challenging depth action datasets. The experimental results show that the proposed TriViews framework achieves the most accurate results for depth-based action recognition, better than the state-of-the-art methods on all three databases.

    Compared to human actions, animal behaviors exhibit some different characteristics. For example, the animal body is much less expressive than the human body, so some visual features and frameworks that are widely used for human action representation cannot work well for animals. We investigate two features for mice behavior recognition, i.e., sparse and dense trajectory features. The sparse trajectory feature relies heavily on tracking; if tracking fails, its performance may deteriorate. In contrast, dense trajectory features are much more robust because they do not rely on tracking, so the integration of these two features could be of practical significance. A fusion approach is proposed for mice behavior recognition. Experimental results on two public databases show that the integration of sparse and dense trajectory features can improve the recognition performance.
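
    A minimal sketch of projecting a single depth frame onto front, side and top views, in the spirit of the TriViews idea; the resolutions, quantisation and occupancy encoding are illustrative assumptions, not the thesis implementation.

```python
# Hypothetical sketch: project one depth frame onto three orthogonal planes.
# Assumes the frame contains at least one valid (non-zero) depth measurement.
import numpy as np

def tri_views(depth, z_bins=256):
    """depth: (H, W) array of depth values (0 = no measurement).
    Returns front (H, W), side (H, z_bins) and top (z_bins, W) occupancy maps."""
    h, w = depth.shape
    front = depth.astype(np.float32)                 # front view: the map itself

    ys, xs = np.nonzero(depth > 0)                   # pixels with valid depth
    z = depth[ys, xs].astype(np.float64)
    # Quantise depth into z_bins slices along the viewing axis.
    z_idx = np.clip((z / (z.max() + 1e-9) * (z_bins - 1)).astype(int), 0, z_bins - 1)

    side = np.zeros((h, z_bins), dtype=np.float32)   # (y, z) plane
    top = np.zeros((z_bins, w), dtype=np.float32)    # (z, x) plane
    np.add.at(side, (ys, z_idx), 1.0)                # accumulate occupancy counts
    np.add.at(top, (z_idx, xs), 1.0)
    return front, side, top
```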