
    Impliance: A Next Generation Information Management Appliance

    "…ably successful in building a large market and adapting to the changes of the last three decades, its impact on the broader market of information management is surprisingly limited. If we were to design an information management system from scratch, based upon today's requirements and hardware capabilities, would it look anything like today's database systems?" In this paper, we introduce Impliance, a next-generation information management system consisting of hardware and software components integrated to form an easy-to-administer appliance that can store, retrieve, and analyze all types of structured, semi-structured, and unstructured information. We first summarize the trends that will shape information management for the foreseeable future. Those trends imply three major requirements for Impliance: (1) to be able to store, manage, and uniformly query all data, not just structured records; (2) to be able to scale out as the volume of this data grows; and (3) to be simple and robust in operation. We then describe four key ideas that are uniquely combined in Impliance to address these requirements, namely: (a) integrating software and off-the-shelf hardware into a generic information appliance; (b) automatically discovering, organizing, and managing all data - unstructured as well as structured - in a uniform way; (c) achieving scale-out by exploiting simple, massively parallel processing; and (d) virtualizing compute and storage resources to unify, simplify, and streamline the management of Impliance. Impliance is an ambitious, long-term effort to define simpler, more robust, and more scalable information systems for tomorrow's enterprises.
    Comment: This article is published under a Creative Commons License Agreement (http://creativecommons.org/licenses/by/2.5/). You may copy, distribute, display, and perform the work, make derivative works, and make commercial use of the work, but you must attribute the work to the author and CIDR 2007 (3rd Biennial Conference on Innovative Data Systems Research, January 7-10, 2007, Asilomar, California, USA).
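
    The abstract does not spell out Impliance's internals, so as a rough illustration only, the sketch below shows what requirement (1), uniformly querying structured records and unstructured text, might look like behind a single interface. Everything in it, class and function names included, is a hypothetical construction, not Impliance's actual design.

        # Hypothetical sketch, not Impliance's actual interface: one inverted
        # index over structured field values and unstructured text, queried
        # uniformly by keyword.
        from collections import defaultdict

        class UniformStore:
            def __init__(self):
                self.docs = []                 # each doc: dict of fields and/or free text
                self.index = defaultdict(set)  # term -> ids of docs containing it

            def add(self, doc):
                doc_id = len(self.docs)
                self.docs.append(doc)
                for value in doc.values():     # field values and text indexed alike
                    for term in str(value).lower().split():
                        self.index[term].add(doc_id)
                return doc_id

            def query(self, term):
                return [self.docs[i] for i in sorted(self.index[term.lower()])]

        store = UniformStore()
        store.add({"name": "Q3 report", "owner": "finance"})              # structured record
        store.add({"text": "email thread about the Q3 finance report"})  # unstructured text
        print(store.query("finance"))  # matches both the record and the text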

    Conceptual and application issues in the implementation of object-oriented GIS

    The adoption of object-oriented technology for spatial data modeling is becoming a significant trend in GIS. This research explores the concepts of Object-Oriented GIS (OOGIS) and illustrates its versatility in two case studies. OOGIS provides a feature-based, intuitive representation of real-world features. The study emphasizes the fundamental OOGIS concepts of inheritance, polymorphism, and encapsulation, and explores schema design, long transactions, and versioning. Further, the study discusses the advantages of OOGIS in the management and analysis of geospatial data. The case studies demonstrate both the conceptual basis of OOGIS and specific functionality, including behavior, methods, versioning, long transactions, and data locking. OOGIS demonstrates many advantages over the traditional entity-relationship model in database maintenance and functionality.
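
    The inheritance, polymorphism, and encapsulation ideas the study emphasizes map naturally onto a class hierarchy of spatial features. A minimal sketch follows; the class and method names are illustrative inventions, not taken from the study.

        # Minimal OOGIS-style sketch: spatial features as a class hierarchy.
        # Names are illustrative, not from the study.
        from abc import ABC, abstractmethod

        class Feature(ABC):                    # encapsulation: geometry is internal state
            def __init__(self, fid, geometry):
                self._fid = fid
                self._geometry = geometry      # list of (x, y) vertices

            @abstractmethod
            def describe(self):                # polymorphism: subclasses answer differently
                ...

        class Road(Feature):                   # inheritance: a Road is-a Feature
            def describe(self):
                return f"Road {self._fid} with {len(self._geometry)} vertices"

        class Parcel(Feature):
            def describe(self):
                return f"Parcel {self._fid} bounded by {len(self._geometry)} points"

        features = [Road(1, [(0, 0), (1, 1)]), Parcel(2, [(0, 0), (0, 1), (1, 1)])]
        for f in features:
            print(f.describe())                # one call site, feature-specific behavior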

    Tracking decision-making during architectural design

    There is a powerful cocktail of circumstances governing the way decisions are made during the architectural design process of a building project. There is considerable potential for misunderstandings, inappropriate changes, changes that give rise to unforeseen difficulties, decisions that are not communicated to all interested parties, and many other similar problems. This paper presents research conducted within the framework of the EPSRC-funded ADS project, which aims to address the problems linked to the evolving and changing environment of project information in order to support better decision-making. The paper presents the conceptual framework as well as the software environment that has been developed to support decision-making during building projects, and reports on work carried out in applying the approach to the architectural design stage. This decision-tracking environment has been evaluated and validated by professionals and practitioners from industry using several instruments, as described in the paper.
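
    The abstract does not detail the ADS data model, but the kind of decision record such an environment tracks can be pictured with a small sketch. The field names below are assumptions for illustration, not the ADS project's actual schema.

        # Hypothetical decision-tracking record; field names are assumptions
        # for illustration, not the ADS project's actual schema.
        from dataclasses import dataclass, field
        from datetime import datetime
        from typing import Optional

        @dataclass
        class Decision:
            subject: str
            rationale: str
            decided_by: str
            notified: list = field(default_factory=list)  # interested parties informed
            supersedes: Optional[int] = None              # index of the decision revised
            made_at: datetime = field(default_factory=datetime.now)

        log = []
        log.append(Decision("Atrium glazing", "daylighting study", "architect",
                            notified=["structural", "services"]))
        # A later change records its lineage instead of silently overwriting:
        log.append(Decision("Atrium glazing", "cost revision", "quantity surveyor",
                            notified=["architect"], supersedes=0))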

    Securing Data in Storage: A Review of Current Research

    Protecting data from malicious computer users continues to grow in importance. Whether preventing unauthorized access to personal photographs, ensuring compliance with federal regulations, or ensuring the integrity of corporate secrets, all applications require increased security to protect data from talented intruders. Specifically, as more and more files are preserved on disk, the requirement to provide secure storage has increased in importance. This paper presents a survey of techniques for securely storing data, including theoretical approaches, prototype systems, and systems currently available. Given the wide variety of potential solutions and of techniques for arriving at a particular solution, it is important to review the entire field before selecting an implementation that satisfies particular requirements. This paper provides an overview of the prominent characteristics of several systems to provide a foundation for making an informed decision. Initially, the paper establishes a set of criteria for evaluating a storage solution based on confidentiality, integrity, availability, and performance. Then, using these criteria, the paper explains the relevant characteristics of select storage systems and provides a comparison of the major differences.
    Comment: 22 pages, 4 figures, 3 tables
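
    As a concrete instance of the confidentiality and integrity criteria, the sketch below encrypts data at rest with authenticated symmetric encryption, so that tampered ciphertext fails to decrypt. It uses the third-party Python cryptography package; this is one illustrative tool, not a system the survey prescribes.

        # Authenticated encryption at rest with the `cryptography` package
        # (pip install cryptography); illustrative only, not a surveyed system.
        from cryptography.fernet import Fernet, InvalidToken

        key = Fernet.generate_key()        # in practice, keep this in a key manager
        f = Fernet(key)

        plaintext = b"corporate secret"
        token = f.encrypt(plaintext)       # confidentiality plus an integrity tag

        assert f.decrypt(token) == plaintext
        tampered = token[:-1] + (b"A" if token[-1:] != b"A" else b"B")
        try:
            f.decrypt(tampered)            # a flipped byte breaks the integrity check
        except InvalidToken:
            print("tampering detected")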

    ArrayBridge: Interweaving declarative array processing with high-performance computing

    Scientists are increasingly turning to datacenter-scale computers to produce and analyze massive arrays. Despite decades of database research that extols the virtues of declarative query processing, scientists still write, debug, and parallelize imperative HPC kernels even for the most mundane queries. This impedance mismatch has been partly attributed to the cumbersome data loading process; in response, the database community has proposed in situ mechanisms to access data in scientific file formats. Scientists, however, desire more than a passive access method that reads arrays from files. This paper describes ArrayBridge, a bi-directional array view mechanism for scientific file formats that aims to make declarative array manipulations interoperable with imperative file-centric analyses. Our prototype implementation of ArrayBridge uses HDF5 as the underlying array storage library and seamlessly integrates into the SciDB open-source array database system. In addition to fast querying over external array objects, ArrayBridge produces arrays in the HDF5 file format just as easily as it can read from it. ArrayBridge also supports time travel queries from imperative kernels through the unmodified HDF5 API, and automatically deduplicates between array versions for space efficiency. Our extensive performance evaluation in NERSC, a large-scale scientific computing facility, shows that ArrayBridge exhibits performance and I/O scalability statistically indistinguishable from the native SciDB storage engine.
    Comment: 12 pages, 13 figures
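
    ArrayBridge itself is a research prototype, but the file-centric side it interoperates with is plain HDF5. A minimal sketch of writing and slicing such an array follows, using the h5py binding; h5py, the file name, and the dataset name are assumptions for illustration, not ArrayBridge's interface.

        # Writing and partially reading an HDF5 array with h5py
        # (pip install h5py numpy); shows the plain file-centric HDF5 API
        # that ArrayBridge interoperates with, not ArrayBridge itself.
        import numpy as np
        import h5py

        data = np.arange(12, dtype=np.float64).reshape(3, 4)

        with h5py.File("simulation.h5", "w") as f:
            f.create_dataset("timestep_0", data=data, chunks=(3, 2))  # chunked layout

        with h5py.File("simulation.h5", "r") as f:
            slab = f["timestep_0"][:, :2]   # read only the needed slab, in situ style
            print(slab.shape)               # (3, 2)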