197 research outputs found

    Granite: A scientific database model and implementation

    The principal goal of this research was to develop a formal comprehensive model for representing highly complex scientific data. An effective model should provide a conceptually uniform way to represent data and it should serve as a framework for the implementation of an efficient and easy-to-use software environment that implements the model. The dissertation work presented here describes such a model and its contributions to the field of scientific databases. In particular, the Granite model encompasses a wide variety of datatypes used across many disciplines of science and engineering today. It is unique in that it defines dataset geometry and topology as separate conceptual components of a scientific dataset. We provide a novel classification of geometries and topologies that has important practical implications for a scientific database implementation. The Granite model also offers integrated support for multiresolution and adaptive resolution data. Many of these ideas have been addressed by others, but no one has tried to bring them all together in a single comprehensive model. The datasource portion of the Granite model offers several further contributions. In addition to providing a convenient conceptual view of rectilinear data, it also supports multisource data. Data can be taken from various sources and combined into a unified view. The rod storage model is an abstraction for file storage that has proven an effective platform upon which to develop efficient access to storage. Our spatial prefetching technique is built upon the rod storage model, and demonstrates very significant improvement in access to scientific datasets, and also allows machines to access data that is far too large to fit in main memory. These improvements bring the extremely large datasets now being generated in many scientific fields into the realm of tractability for the ordinary researcher. 
    We validated the feasibility and viability of the model by implementing a significant portion of it in the Granite system. Extensive performance evaluations of the implementation indicate that the features of the model can be provided in a user-friendly manner with an efficiency that is competitive with more ad hoc systems and more specialized, application-specific solutions.
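The rod storage and spatial prefetching ideas above can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the class name `RodStore`, the in-memory array standing in for file storage, and the one-rod-ahead prefetch policy are invented, not Granite's actual API; a "rod" is taken to be a contiguous run of samples along the fastest-varying axis.

```python
import numpy as np

# Hypothetical sketch of rod-based storage with spatial prefetching.
# A "rod" is a contiguous run of samples along the fastest-varying axis;
# the array stands in for out-of-core file storage.

class RodStore:
    def __init__(self, data, rod_len=4):
        self.data = data          # full dataset (stands in for a file)
        self.rod_len = rod_len
        self.cache = {}           # (row, rod start) -> rod contents
        self.reads = 0            # count of simulated "disk" reads

    def _load_rod(self, row, start):
        key = (row, start)
        if key not in self.cache:
            self.reads += 1
            self.cache[key] = self.data[row, start:start + self.rod_len]
        return self.cache[key]

    def sample(self, row, col, prefetch=True):
        start = (col // self.rod_len) * self.rod_len
        rod = self._load_rod(row, start)
        if prefetch:
            # Spatially adjacent rods are likely to be touched next,
            # so pull the neighbouring rod in before it is requested.
            nxt = start + self.rod_len
            if nxt < self.data.shape[1]:
                self._load_rod(row, nxt)
        return rod[col - start]
```

A sequential scan over a row then finds most rods already cached, which is the effect the abstract attributes to spatial prefetching: access cost is paid per rod, not per sample, and data larger than main memory can be traversed one rod at a time.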

    Multisource and Multitemporal Data Fusion in Remote Sensing

    The sharp and recent increase in the availability of data captured by different sensors, combined with their considerably heterogeneous natures, poses a serious challenge for the effective and efficient processing of remotely sensed data. Such an increase in remote sensing and ancillary datasets, however, opens up the possibility of utilizing multimodal datasets jointly to further improve the performance of processing approaches for the application at hand. Multisource data fusion has, therefore, received enormous attention from researchers worldwide for a wide variety of applications. Moreover, thanks to the revisit capability of several spaceborne sensors, the integration of temporal information with the spatial and/or spectral/backscattering information of the remotely sensed data is possible, and helps to move from a representation of 2D/3D data to 4D data structures, where the time variable adds new information as well as challenges for the information extraction algorithms. There is a huge number of research works dedicated to multisource and multitemporal data fusion, but the methods for fusing different modalities have developed along different paths within each research community. This paper brings together the advances of multisource and multitemporal data fusion approaches across different research communities and provides a thorough, discipline-specific starting point for researchers at different levels (i.e., students, researchers, and senior researchers) willing to conduct novel investigations on this challenging topic, supplying sufficient detail and references.
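The move from 2D/3D imagery to a 4D data structure described above can be sketched in a few lines. This is a generic illustration, not the survey's method: the function names and the (time, band, row, col) axis order are assumptions, and real scenes would first need co-registration and resampling to a common grid.

```python
import numpy as np

# Illustrative sketch: stack co-registered multitemporal, multiband
# rasters into a single 4D cube with axes (time, band, row, col).

def build_cube(scenes):
    """scenes: list of (band, row, col) arrays, one per acquisition date."""
    return np.stack(scenes, axis=0)   # -> (time, band, row, col)

# A trivial example of exploiting the added time axis: the per-pixel
# temporal mean of one band.
def temporal_mean(cube, band):
    return cube[:, band].mean(axis=0)
```

Once the time axis exists, temporal statistics, trends, and change tests become ordinary reductions along axis 0, which is what makes the 4D representation attractive for the fusion methods the survey reviews.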

    ARKTOS: An Intelligent System for Satellite Sea Ice Image Analysis

    We present an intelligent system for satellite sea ice image analysis named ARKTOS (Advanced Reasoning using Knowledge for Typing Of Sea ice). The underlying methodology of ARKTOS is to perform fully automated analysis of sea ice images by mimicking the reasoning process of sea ice experts and photo-interpreters. Hence, our approach is feature-based, rule-based classification supported by multisource data fusion and knowledge bases. A feature can be an ice floe, for example. ARKTOS computes a host of descriptors for that feature and then applies expert rules to classify the floe into one of several ice classes. ARKTOS also incorporates information derived from other sources, fusing different data towards more accurate classification. This modular, flexible, and extensible approach allows ARKTOS to be refined and evaluated by expert users. As a software package, ARKTOS comprises components for image processing, rule-based classification, multisource data fusion, and GUI-based knowledge engineering and modification. As a research project over the past 10 years, ARKTOS has undergone phases of knowledge acquisition, prototyping, refinement, evaluation and deployment, and finally operationalization at the National Ice Center (NIC). In this paper, we focus on the methodology of ARKTOS.
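The feature-based, rule-based classification described above follows a simple pattern: compute descriptors for a feature, then run an ordered list of expert rules until one fires. The sketch below is hypothetical throughout; the descriptor names, thresholds, and ice classes are invented stand-ins, not ARKTOS's actual knowledge base.

```python
# Hypothetical sketch of feature-based, rule-based classification in the
# style described above. Descriptors and thresholds are invented.

def classify_floe(feature):
    """feature: dict of descriptors computed for one ice floe."""
    rules = [
        # (predicate over descriptors, ice class) -- first match wins
        (lambda f: f["brightness"] > 0.8 and f["area_km2"] > 10.0, "multiyear ice"),
        (lambda f: f["brightness"] > 0.8, "first-year ice"),
        (lambda f: f["brightness"] > 0.4, "young ice"),
    ]
    for predicate, ice_class in rules:
        if predicate(feature):
            return ice_class
    return "open water"
```

Keeping the rules as data rather than hard-coded branches is what makes a system of this shape refinable by expert users: a GUI can add, reorder, or retune rules without touching the classifier loop.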

    Evaluation of a Change Detection Methodology by Means of Binary Thresholding Algorithms and Informational Fusion Processes

    Landcover is subject to continuous changes on a wide variety of temporal and spatial scales. Those changes produce significant effects in human and natural activities. Maintaining an updated spatial database of the occurred changes allows better monitoring of the Earth’s resources and management of the environment. Change detection (CD) techniques using images from different sensors, such as satellite imagery, aerial photographs, etc., have proven to be suitable and secure data sources from which updated information can be extracted efficiently, so that changes can also be inventoried and monitored. In this paper, a multisource CD methodology for multiresolution datasets is applied. First, different change indices are processed; then, different change/no_change thresholding algorithms are applied to these indices in order to better estimate the statistical parameters of these categories; finally, the indices are integrated into a multisource CD fusion process, which generates a single CD result from several combinations of indices. This methodology has been applied to datasets with different spectral and spatial resolution properties. The obtained results are then evaluated by means of a quality control analysis, as well as with complementary graphical representations. The suggested methodology has also proven efficient for identifying the change detection index with the highest contribution.
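The three-stage pipeline above (indices, binary thresholding, fusion) can be sketched minimally as follows. This is a simplified stand-in, not the paper's algorithms: the two indices (absolute difference and log-ratio), the fixed threshold, and the majority-vote fusion rule are assumptions chosen for illustration.

```python
import numpy as np

# Minimal sketch of the pipeline: compute change indices, binarize each
# into a change/no_change mask, then fuse the masks by majority vote.

def change_indices(img1, img2):
    diff = np.abs(img1 - img2)                              # differencing
    ratio = np.abs(np.log((img1 + 1e-6) / (img2 + 1e-6)))   # log-ratio
    return [diff, ratio]

def binarize(index, threshold):
    return index > threshold          # True = change

def fuse(masks):
    votes = np.sum(masks, axis=0)
    return votes > len(masks) / 2     # majority vote across indices
```

In the paper's setting the fixed threshold would be replaced by a thresholding algorithm estimated per index, and the vote by an informational fusion process, but the data flow is the same: several index maps in, one CD map out.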

    Error-driven adaptive resolutions for large scientific data sets

    The process of making observations and drawing conclusions from large data sets is an essential part of modern scientific research. However, the size of these data sets can easily exceed the available resources of a typical workstation, making visualization and analysis a formidable challenge. Many solutions, including multiresolution and adaptive resolution representations, have been proposed and implemented to address these problems. This thesis describes an error model for calculating and representing localized error from data reduction, and a process for constructing error-driven adaptive resolutions from this data, allowing fully renderable, error-driven adaptive resolutions to be constructed from a single high-resolution data set. We evaluated the performance of adaptive resolutions generated with various parameters against the original data set. We found that adaptive resolutions generated with reasonable subdomain sizes and error tolerances show improved performance during visualization.
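The core idea above, keeping reduced data only where the localized reduction error is tolerable, can be sketched on a 1D signal. Everything concrete here is an assumption: the stride-2 downsampling, the nearest-neighbour reconstruction, the RMS error metric, and the subdomain size are illustrative choices, not the thesis's error model.

```python
import numpy as np

# Sketch of error-driven adaptive resolution: split a 1D signal into
# fixed-size subdomains, downsample each by 2, and keep the reduced
# version only where the RMS reconstruction error stays below tol.

def adapt(signal, sub=8, tol=0.1):
    out = []
    for start in range(0, len(signal), sub):
        block = signal[start:start + sub]
        coarse = block[::2]                           # data reduction
        recon = np.repeat(coarse, 2)[:len(block)]     # reconstruction
        rms = np.sqrt(np.mean((block - recon) ** 2))  # localized error
        out.append(coarse if rms <= tol else block)   # keep detail only where needed
    return out
```

Smooth subdomains end up stored at half resolution while busy ones stay at full resolution, which is exactly the trade that lets the adaptive representation render faster than the original without exceeding the stated error tolerance.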

    Out-of-core visualization using iterator-aware multidimensional prefetching


    Toward utilizing multitemporal multispectral airborne laser scanning, Sentinel-2, and mobile laser scanning in map updating

    The rapid development of remote sensing technologies provides interesting possibilities for the further development of nationwide mapping procedures that are currently based mainly on passive aerial images. In particular, we assume that there is a large undiscovered potential in multitemporal airborne laser scanning (ALS) for topographic mapping. In this study, automated change detection from multitemporal multispectral ALS data was tested for the first time. The results showed that direct comparisons between height and intensity data from different dates reveal even small changes related to the development of a suburban area. A major challenge in future work is to link the changes with objects that are interesting in map production. In order to effectively utilize multisource remotely sensed data in mapping in the future, we also investigated the potential of satellite images and ground-based data to complement multispectral ALS. A method for continuous change monitoring from a time series of Sentinel-2 satellite images was developed and tested. Finally, a high-density point cloud was acquired with terrestrial mobile laser scanning and automatically classified into four classes. The results were compared with the ALS data, and the possible roles of the different data sources in a future map updating process were discussed.
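The "direct comparison between height data from different dates" mentioned above reduces, in its simplest form, to differencing two co-registered rasters and thresholding the difference. The sketch below is a generic illustration of that idea, not the study's method; the function name and the 2 m threshold are invented.

```python
import numpy as np

# Illustrative sketch: difference co-registered surface-height rasters
# from two ALS acquisition dates and flag cells whose height change
# exceeds a threshold (e.g. new buildings, felled trees).

def height_change(dsm_t1, dsm_t2, threshold=2.0):
    delta = dsm_t2 - dsm_t1
    changed = np.abs(delta) > threshold
    return delta, changed
```

The sign of `delta` already separates growth from removal, which is the kind of cue a map-updating process would use to decide which detected changes are worth forwarding to a cartographer.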