
    Application of the Coastal and Marine Ecological Classification Standard (CMECS) to Remotely Operated Vehicle (ROV) Video Data for Enhanced Geospatial Analysis of Deep Sea Environments

    The Coastal and Marine Ecological Classification Standard (CMECS) provides a comprehensive framework of common terminology for organizing physical, chemical, biological, and geological information about marine ecosystems. CMECS is federally endorsed as a dynamic content standard, and all federally funded data must be compliant with it by 2018; however, the application of CMECS to deep sea datasets and underwater video has not been extensively examined. The presented research demonstrates the extent to which CMECS can be applied to deep sea benthic habitats, assesses the feasibility of applying CMECS to remotely operated vehicle (ROV) video data in near-real-time, and establishes best practices for mapping environmental aspects and observed deep sea habitats as viewed by the ROV's forward-facing camera. All data were collected during 2014 in the Northern Gulf of Mexico by the National Oceanic and Atmospheric Administration's (NOAA) ROV Deep Discoverer and ship Okeanos Explorer.
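
    As a hedged illustration of the near-real-time workflow the abstract describes, the sketch below attaches CMECS-style component labels to time-stamped video annotations and joins them to ROV navigation fixes for geospatial output. The field names and the tiny term vocabulary are invented for illustration; they are not the full CMECS catalog or the thesis's actual tooling.

```python
# Hypothetical sketch: georeferencing CMECS-style annotations of ROV video.
# The vocabulary below is a toy stand-in for CMECS component terms.
from dataclasses import dataclass
from bisect import bisect_left

SUBSTRATE = {"rock", "boulder", "cobble", "sand", "mud"}
BIOTIC = {"coral aggregation", "sponge aggregation", "bare"}

@dataclass
class NavFix:
    t: float      # seconds since dive start
    lat: float
    lon: float
    depth_m: float

def nearest_fix(fixes: list[NavFix], t: float) -> NavFix:
    """Return the navigation fix closest in time to an annotation."""
    times = [f.t for f in fixes]
    i = bisect_left(times, t)
    candidates = fixes[max(0, i - 1):i + 1]
    return min(candidates, key=lambda f: abs(f.t - t))

def georeference(annotations, fixes):
    """Yield (lat, lon, depth, substrate, biotic) rows for GIS import."""
    for t, substrate, biotic in annotations:
        if substrate not in SUBSTRATE or biotic not in BIOTIC:
            raise ValueError(f"unrecognized term at t={t}")
        fix = nearest_fix(fixes, t)
        yield (fix.lat, fix.lon, fix.depth_m, substrate, biotic)

fixes = [NavFix(0, 27.1, -88.5, 1502.0), NavFix(10, 27.1001, -88.5002, 1503.5)]
print(list(georeference([(4.2, "rock", "coral aggregation")], fixes)))
```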

    Big Data decision support system

    Includes bibliographical references. 2022 Fall. Each day, the amount of data produced by sensors, social and digital media, and the Internet of Things is rapidly increasing. The volume of digital data is expected to double within the next three years. At some point, it might not be financially feasible to store all the data that is received; hence, if data is not analyzed as it arrives, the information collected could be lost forever. Actionable Intelligence is the next level of Big Data analysis, where data is used for decision making. This thesis describes my scientific contributions to Big Data Actionable Intelligence generation. Chapter 1 presents my colleagues' and my contribution to Big Data Actionable Intelligence architecture. The architecture has been demonstrated to support real-time actionable intelligence generation using disparate data sources (e.g., social media, satellite, newsfeeds). This work has been published in the Journal of Big Data. Chapter 2 presents my original method for performing real-time detection of moving targets using Remote Sensing Big Data. This work has also been published in the Journal of Big Data and has been issued a U.S. patent. As the Field-of-View (FOV) in remote sensing continues to expand, the number of targets observed by each sensor continues to increase, and the ability to track large quantities of targets in real time poses a significant challenge. Chapter 3 describes my colleague's and my contribution to the multi-target tracking domain: we demonstrated that real-time tracking challenges can be overcome even when there is a large number of targets. Our work was published in the Journal of Sensors.
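
    As a hedged sketch of the general problem class behind Chapter 2 (not the patented method, which the abstract does not detail), the snippet below detects moving targets by temporal differencing of two co-registered frames and thresholding the change:

```python
# Generic moving-target detection by frame differencing; illustrative only.
import numpy as np

def moving_target_mask(prev: np.ndarray, curr: np.ndarray, thresh: float = 30.0):
    """Boolean mask of pixels whose intensity changed by more than `thresh`."""
    if prev.shape != curr.shape:
        raise ValueError("frames must be co-registered and equally sized")
    return np.abs(curr.astype(float) - prev.astype(float)) > thresh

rng = np.random.default_rng(1)
prev = rng.integers(0, 20, size=(64, 64)).astype(np.uint8)
curr = prev.copy()
curr[30:34, 40:44] += 120                 # simulate a moving object
mask = moving_target_mask(prev, curr)
ys, xs = np.nonzero(mask)
print(f"{mask.sum()} changed pixels near ({ys.mean():.0f}, {xs.mean():.0f})")
```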

    MANAGING CO2 EMISSIONS REGIONALLY USING GEOGRAPHICAL INFORMATION SYSTEM (GIS) SPATIAL MODELING AND PINCH ANALYSIS

    Climate change has become the major global challenge of sustainability; among the various anthropogenic sources of carbon dioxide (CO2) emissions, the burning of fossil fuels for energy to support the commercial, residential, municipal, and industrial sectors is considered the primary cause of rising CO2 levels. Because climate change is regionally driven with global consequences, energy planning techniques for analyzing emissions data must be simple, replicable, and optimized for maximum benefit. Climate scenarios are continually derived from global models despite these models containing little to no regional or local specificity. Place-based research, well grounded in local experience, offers a more tractable alternative for defining the complex interactions among the environmental, economic, and social processes that drive greenhouse gas emissions. The focus of this study is the development of a balanced energy supply and demand model under carbon constraints for the Southern Illinois energy sector; this sector provides the local specificity needed to build a carbon emissions pinch analysis model at the local level. The project formulates a robust methodology for constructing a Geographic Database Management System using a bottom-up approach to CO2 emissions modeling; the resulting database can serve as the foundation for an environmental applications model employing pinch analysis techniques to address the allocation of energy resources and technologies to reduce CO2 emissions.
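
    To make the pinch idea concrete, here is a minimal sketch in the spirit of carbon emissions pinch analysis: given a regional energy demand, an emissions cap, and two supply options ranked by emission factor, it solves the energy and carbon balances for the smallest low-carbon supply that satisfies the cap. All figures are invented for illustration and are not results from the study.

```python
# Minimal carbon-emissions pinch sketch; numbers are illustrative only.

def min_clean_energy(demand_twh: float, cap_mt: float,
                     fossil_factor: float, clean_factor: float) -> float:
    """
    Energy balance:  demand = fossil + clean
    Carbon balance:  cap   >= fossil * f_fossil + clean * f_clean
    Solving both as equalities gives the pinch point: the smallest
    low-carbon share that satisfies the cap.
    """
    if clean_factor >= fossil_factor:
        raise ValueError("the low-carbon source must have the lower factor")
    clean = (demand_twh * fossil_factor - cap_mt) / (fossil_factor - clean_factor)
    return max(0.0, min(demand_twh, clean))

# Illustrative: 50 TWh regional demand, 30 Mt CO2 cap,
# coal-heavy mix at 0.9 Mt/TWh, wind at ~0.01 Mt/TWh.
clean = min_clean_energy(50.0, 30.0, 0.9, 0.01)
print(f"minimum low-carbon supply: {clean:.1f} TWh of 50 TWh")
```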

    Automated mapping of oblique imagery collected with unmanned vehicles in coastal and marine environments

    Recent technological advances in unmanned observational platforms, including remotely operated vehicles (ROVs) and small unmanned aerial systems (sUAS), have made them highly effective tools for research and monitoring within marine and coastal environments. One of the primary types of data collected by these systems is video imagery, which is often captured at an angle oblique to the Earth's surface rather than normal to it (i.e., downward looking). This thesis presents a newly developed suite of tools designed to digitally map oblique imagery data collected with ROVs and sUAS in coastal and marine environments and quantitatively evaluates the accuracy of the resultant maps. Results indicate that maps generated from oblique imagery collected with unmanned vehicles vary widely in accuracy relative to maps produced from conventional mapping platforms. They have the potential to match or even surpass the accuracy of conventionally derived maps, but realizing that potential depends largely on careful survey design.
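
    A common way to quantify the kind of map accuracy discussed above is the horizontal root-mean-square error (RMSE) between map-derived coordinates and independently surveyed check points. The minimal sketch below assumes paired points in a projected coordinate system in metres; the sample coordinates are invented:

```python
# Horizontal RMSE over paired check points; coordinates are illustrative.
import math

def horizontal_rmse(mapped, surveyed):
    """RMSE of horizontal offsets between paired (x, y) coordinates."""
    if len(mapped) != len(surveyed):
        raise ValueError("point lists must be paired")
    sq = [(mx - sx) ** 2 + (my - sy) ** 2
          for (mx, my), (sx, sy) in zip(mapped, surveyed)]
    return math.sqrt(sum(sq) / len(sq))

mapped   = [(500010.2, 3200001.1), (500052.7, 3200040.3), (500098.9, 3200079.6)]
surveyed = [(500010.0, 3200001.5), (500052.0, 3200041.0), (500099.5, 3200080.0)]
print(f"horizontal RMSE: {horizontal_rmse(mapped, surveyed):.2f} m")
```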

    Trying to break new ground in aerial archaeology

    Aerial reconnaissance continues to be a vital tool for landscape-oriented archaeological research. Although a variety of remote sensing platforms operate within the Earth's atmosphere, the majority of aerial archaeological information is still derived from oblique photographs collected during observer-directed reconnaissance flights, a prospection approach which has dominated archaeological aerial survey for the past century. The resulting highly biased imagery is generally catalogued in sub-optimal (spatial) databases, if at all, after which a small selection of images is orthorectified and interpreted. For decades, this has been the standard approach. Although many innovations, including digital cameras, inertial units, photogrammetry and computer vision algorithms, geographic(al) information systems and computing power, have emerged, their potential has not yet been fully exploited in order to re-invent and highly optimise this crucial branch of landscape archaeology. The authors argue that a fundamental change is needed to transform the way aerial archaeologists approach data acquisition and image processing. By addressing the very core concepts of geographically biased aerial archaeological photographs and proposing new imaging technologies, data handling methods and processing procedures, this paper gives a personal opinion on how the methodological components of aerial archaeology, and specifically aerial archaeological photography, should evolve during the next decade if developing a more reliable record of our past is to be our central aim. A possible practical solution is illustrated by outlining a turnkey aerial prospection system for total coverage survey, together with a semi-automated back-end pipeline that takes care of photograph correction and image enhancement as well as the management and interpretative mapping of the resulting data products. In this way, the proposed system addresses one of many bias issues in archaeological research: the bias we impart to the visual record as a result of selective coverage. While the total coverage approach outlined here may not altogether eliminate survey bias, it can vastly increase the amount of useful information captured during a single reconnaissance flight while mitigating the discriminating effects of observer-based, on-the-fly target selection. Furthermore, the information contained in this paper should make it clear that with current technology it is feasible to do so. This can radically alter the basis for aerial prospection and move landscape archaeology forward, beyond the inherently biased patterns that are currently created by airborne archaeological prospection.

    Spatiotemporal anomaly detection: streaming architecture and algorithms

    Includes bibliographical references. 2020 Summer. Anomaly detection is the science of identifying one or more rare or unexplainable samples or events in a dataset or data stream. The field has been extensively studied by mathematicians, statisticians, economists, engineers, and computer scientists. One open research question remains the design of distributed cloud-based architectures and algorithms that can accurately identify anomalies in previously unseen, unlabeled, streaming, multivariate spatiotemporal data. With streaming data, time is of the essence, and insights are perishable. Real-world streaming spatiotemporal data originate from many sources, including mobile phones, supervisory control and data acquisition (SCADA) devices, the Internet of Things (IoT), distributed sensor networks, and social media. Baseline experiments are performed on four (4) non-streaming, static, multivariate anomaly detection datasets using unsupervised offline traditional machine learning (TML) and unsupervised neural network techniques. Multiple architectures, including autoencoders, generative adversarial networks, convolutional networks, and recurrent networks, are adapted for experimentation. Extensive experimentation demonstrates that neural networks produce superior detection accuracy over TML techniques. These same neural network architectures can be extended to process unlabeled spatiotemporal streaming data using online learning. Space and time relationships are further exploited to provide additional insights and increased anomaly detection accuracy. A novel domain-independent architecture and set of algorithms called the Spatiotemporal Anomaly Detection Environment (STADE) is formulated. STADE is based on a federated learning architecture. STADE's streaming algorithms are based on geographically distinct, persistently executing neural networks trained with online stochastic gradient descent (SGD). STADE is designed to be pluggable, meaning that alternative algorithms may be substituted or combined to form an ensemble. STADE incorporates a Stream Anomaly Detector (SAD) and a Federated Anomaly Detector (FAD). The SAD executes at multiple locations on streaming data, while the FAD executes at a single server and identifies global patterns and relationships among the site anomalies. Each STADE site streams anomaly scores to the centralized FAD server for further spatiotemporal dependency analysis and logging. The FAD is based on recent advances in DNN-based federated learning. A STADE testbed is implemented to facilitate globally distributed experimentation using low-cost, commercial cloud infrastructure provided by Microsoft™. STADE testbed sites are situated in the cloud within each continent: Africa, Asia, Australia, Europe, North America, and South America. Communication occurs over the commercial internet. Three STADE case studies are investigated: the first processes commercial air traffic flows, the second processes global earthquake measurements, and the third processes social media (i.e., Twitter™) feeds. These case studies confirm that STADE is a viable architecture for the near real-time identification of anomalies in streaming data originating from (possibly) computationally disadvantaged, geographically dispersed sites. Moreover, the addition of the FAD provides enhanced anomaly detection capability. Since STADE is domain-independent, these findings can be easily extended to additional application domains and use cases.
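
    As a hedged sketch of the site-level idea behind a detector like the SAD (not the thesis's actual models or settings), the snippet below scores each arriving sample by the reconstruction error of a tiny linear autoencoder and takes one online SGD step per non-anomalous sample, so flagged outliers do not pull the model toward themselves:

```python
# Online anomaly scoring with a tiny linear autoencoder; illustrative settings.
import numpy as np

rng = np.random.default_rng(0)
d, k, lr, thresh = 8, 3, 0.01, 50.0          # invented dimensions and threshold
W_enc = rng.normal(scale=0.1, size=(k, d))   # encoder weights
W_dec = rng.normal(scale=0.1, size=(d, k))   # decoder weights

def score_and_update(x):
    """Score x by squared reconstruction error; SGD-update on normal samples."""
    global W_enc, W_dec
    z = W_enc @ x                        # encode
    err = W_dec @ z - x                  # reconstruction residual
    score = float(err @ err)
    if score > thresh:
        return score                     # skip the update on flagged outliers
    # Gradients of ||W_dec @ W_enc @ x - x||^2 (constant factor folded into lr).
    g_dec = np.outer(err, z)
    g_enc = np.outer(W_dec.T @ err, x)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc
    return score

for t in range(1000):                    # simulated stream of "normal" data
    x = rng.normal(size=d)
    if t == 500:
        x += 8.0                         # inject one obvious anomaly
    if (s := score_and_update(x)) > thresh:
        print(f"t={t}: anomaly, score {s:.1f}")
```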

    A shape-based heuristic for the detection of urban block artifacts in street networks

    Street networks are ubiquitous components of cities, guiding their development and enabling movement from place to place; street networks are also critical components of many urban analytical methods. However, their graph representation is often designed primarily for transportation purposes. This representation is less suitable for other use cases, where transportation networks need to be simplified as a mandatory pre-processing step, e.g., in the case of morphological analysis, visual navigation, or drone flight routing. While the urgent demand for automated pre-processing methods comes from various fields, it is still an unsolved challenge. In this article, we tackle this challenge by proposing a cheap computational heuristic for the identification of "face artifacts", i.e., geometries that are enclosed by transportation edges but do not represent urban blocks. The heuristic is based on combining the frequency distributions of shape compactness metrics and area measurements of street network face polygons. We test our method on 131 globally sampled large cities and show that it successfully identifies face artifacts in 89% of the analyzed cities. Our heuristic for detecting artifacts caused by data being collected for another purpose is the first step towards an automated street network simplification workflow. Moreover, the proposed face artifact index uncovers differences in the structural rules guiding the development of cities in different world regions. Code and data: Zenodo: https://doi.org/10.5281/zenodo.8300730; GitHub: https://github.com/martinfleis/urban-block-artifact.
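
    To make the heuristic concrete, here is a minimal sketch in its spirit, not the paper's exact face artifact index: each face polygon is scored by circular compactness (4πA/P²) weighted by log area, and faces under an invented fixed cut-off are flagged. The real method derives its threshold from the frequency distributions instead; see the linked repository for the actual implementation.

```python
# Toy face-artifact flagging via compactness and area; polygons are invented.
import math
from shapely.geometry import Polygon

def face_artifact_score(poly: Polygon) -> float:
    """Compactness times log-area: low for small, elongated slivers."""
    compactness = 4 * math.pi * poly.area / poly.length ** 2
    return compactness * math.log10(poly.area)

block = Polygon([(0, 0), (100, 0), (100, 80), (0, 80)])    # plausible urban block
sliver = Polygon([(0, 0), (60, 0), (60, 1.5), (0, 1.5)])   # dual-carriageway gap

for name, poly in [("block", block), ("sliver", sliver)]:
    s = face_artifact_score(poly)
    print(f"{name}: score={s:.2f} -> {'artifact' if s < 1.0 else 'urban block'}")
```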

    Quality of Service Aware Data Stream Processing for Highly Dynamic and Scalable Applications

    Huge amounts of georeferenced data streams arrive daily at data stream management systems deployed to serve highly scalable and dynamic applications. There are innumerable ways in which those loads can be exploited to gain deep insights in various domains. Decision makers require interactive visualization of such data in the form of maps and dashboards for decision making and strategic planning. Data streams normally exhibit fluctuation and oscillation in arrival rates and skewness; these are the two predominant factors that greatly impact the overall quality of service. Data stream management systems must therefore be attuned to those factors, in addition to the spatial shape of the data, which may exaggerate their negative impact. Current systems do not natively support services with quality guarantees for dynamic scenarios, leaving the handling of those logistics to the user, which is challenging and cumbersome. Three workloads are predominant for any data stream: batch processing, scalable storage, and stream processing. In this thesis, we have designed a quality-of-service-aware system, SpatialDSMS, that comprises several subsystems covering those loads and any mixed load that results from intermixing them. Most importantly, we have natively incorporated quality of service optimizations for processing avalanches of geo-referenced data streams in highly dynamic application scenarios. This has been achieved transparently on top of the codebases of emerging de facto standard, best-in-class representatives, relieving users in the presentation layer from having to reason about those services. Instead, users express their queries with quality goals, and our system optimizer compiles these down into query plans with embedded quality guarantees, leaving logistic handling to the underlying layers. We have developed standards-compliant prototypes for all the subsystems that constitute SpatialDSMS.
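
    As a hedged, generic illustration of the quality-goal idea (not SpatialDSMS's actual optimizer or query language), the sketch below shows a windowed stream operator that monitors its own processing latency against a user-declared target and sheds load by sampling when the target is missed:

```python
# Generic QoS-aware stream operator with feedback-driven load shedding.
import random
import time

class QoSWindowOperator:
    def __init__(self, target_latency_s: float, window: int = 1000):
        self.target = target_latency_s
        self.window = window
        self.sample_rate = 1.0           # fraction of tuples actually processed
        self.buffer = []

    def on_tuple(self, tup):
        if random.random() > self.sample_rate:
            return                       # shed this tuple
        self.buffer.append(tup)
        if len(self.buffer) >= self.window:
            self._flush()

    def _flush(self):
        start = time.perf_counter()
        _ = sorted(self.buffer)          # stand-in for real window processing
        latency = time.perf_counter() - start
        self.buffer.clear()
        # Feedback controller: shed more when over target, recover when under.
        if latency > self.target:
            self.sample_rate = max(0.1, self.sample_rate * 0.8)
        else:
            self.sample_rate = min(1.0, self.sample_rate * 1.05)

op = QoSWindowOperator(target_latency_s=0.001)
for i in range(10_000):
    op.on_tuple(random.random())
print(f"steady-state sample rate: {op.sample_rate:.2f}")
```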

    Remote Sensing for Land Administration
