
    Regional Data Archiving and Management for Northeast Illinois

    This project studies the feasibility and implementation options for establishing a regional data archiving system to help monitor and manage traffic operations and planning for the northeastern Illinois region. It aims to provide clear guidance to the regional transportation agencies, from both technical and business perspectives, about building such a comprehensive transportation information system. Several implementation alternatives are identified and analyzed. The research is carried out in three phases. In the first phase, existing documents related to ITS deployments in the broader Chicago area are summarized, and a thorough review is conducted of similar systems across the country. Various stakeholders are interviewed to collect information on all data elements that they store, including the format, system, and granularity. Their perception of a data archive system, such as potential benefits and costs, is also surveyed. In the second phase, a conceptual design of the database is developed, covering the system architecture, functional modules, user interfaces, and examples of usage. In the last phase, possible business models for the archive system to sustain itself are reviewed. We estimate initial capital and recurring operational/maintenance costs for the system based on realistic information on the hardware, software, labor, and resource requirements. We also identify possible revenue opportunities. Four implementation options for the archive system are summarized in this report:
    1. System hosted by a partnering agency
    2. System contracted to a university
    3. System contracted to a national laboratory
    4. System outsourced to a service provider
    The costs, advantages, and disadvantages of each of these recommended options are also provided. (Report ICT-R27-22)

    A unified view of data-intensive flows in business intelligence systems: a survey

    Data-intensive flows are central processes in today’s business intelligence (BI) systems, deploying different technologies to deliver data, from a multitude of data sources, in user-preferred and analysis-ready formats. To meet the complex requirements of next-generation BI systems, we often need an effective combination of the traditionally batched extract-transform-load (ETL) processes that populate a data warehouse (DW) from integrated data sources with more real-time, operational data flows that integrate source data at runtime. Both academia and industry thus need a clear understanding of the foundations of data-intensive flows and the challenges of moving towards next-generation BI environments. In this paper we present a survey of today’s research on data-intensive flows and the related fundamental fields of database theory. The study is based on a proposed set of dimensions describing the important challenges of data-intensive flows in the next-generation BI setting. As a result of this survey, we envision an architecture of a system for managing the lifecycle of data-intensive flows. The results further provide a comprehensive understanding of data-intensive flows, recognizing challenges that still need to be addressed and showing how current solutions can be applied to address them.
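    The abstract’s key contrast between batched ETL and runtime integration can be made concrete with a small sketch. This is a hedged illustration, not code from the survey; the table, field names, and the `batch_etl`/`on_demand_flow` helpers are all invented for the example.

```python
# Minimal sketch contrasting a batched ETL step, which materializes
# transformed rows into a warehouse table, with an operational flow that
# integrates source data at request time. Assumed names throughout.
import sqlite3

def batch_etl(source_rows, warehouse):
    """Extract-transform-load: cleanse rows, then persist them for later analysis."""
    cleaned = [(r["id"], float(r["amount"])) for r in source_rows if r["amount"] is not None]
    warehouse.executemany("INSERT INTO sales VALUES (?, ?)", cleaned)
    warehouse.commit()

def on_demand_flow(source_rows):
    """Operational flow: transform and deliver at query time, with no materialized copy."""
    return [float(r["amount"]) for r in source_rows if r["amount"] is not None]

wh = sqlite3.connect(":memory:")
wh.execute("CREATE TABLE sales (id INTEGER, amount REAL)")
rows = [{"id": 1, "amount": 10.0}, {"id": 2, "amount": None}, {"id": 3, "amount": 5.5}]
batch_etl(rows, wh)                                   # batched: query the warehouse later
print(wh.execute("SELECT SUM(amount) FROM sales").fetchone()[0])
print(sum(on_demand_flow(rows)))                      # runtime: compute straight from the source
```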

    K2/Kleisli and GUS: Experiments in Integrated Access to Genomic Data Sources

    The integration of heterogeneous data sources and software systems is a major issue in the biomedical community, and several approaches have been explored: linking databases, on-the-fly integration through views, and integration through warehousing. In this paper we report on our experiences with two systems that were developed at the University of Pennsylvania: an integration system called K2, which has primarily been used to provide views over multiple external data sources and software systems; and a data warehouse called GUS, which downloads, cleans, integrates and annotates data from multiple external data sources. Although the view and warehouse approaches each have their advantages, there is no clear winner. Therefore, users must consider how the data is to be used, what the performance guarantees must be, and how much programmer time and expertise is available in order to choose the best strategy for a particular application.
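    The view-versus-warehouse trade-off the authors describe can be sketched in a few lines of SQL run from Python. This is an illustrative toy, not K2 or GUS code; the source tables, the cleaning rule, and all names are invented.

```python
# Toy contrast of the two integration styles: a view answers queries against
# the live sources at runtime, while a warehouse copies and cleans the data
# ahead of time. Schemas and the cleaning rule are invented for the example.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE source_a (gene TEXT, organism TEXT);
    CREATE TABLE source_b (gene TEXT, annotation TEXT);
    INSERT INTO source_a VALUES ('BRCA1', 'human');
    INSERT INTO source_b VALUES ('brca1 ', 'DNA repair');

    -- View-based integration: no copy; always reflects the current sources.
    CREATE VIEW integrated_view AS
        SELECT a.gene, a.organism, b.annotation
        FROM source_a a JOIN source_b b ON a.gene = UPPER(TRIM(b.gene));

    -- Warehouse-style integration: clean and materialize once, query many times.
    CREATE TABLE warehouse AS SELECT * FROM integrated_view;
""")
print(db.execute("SELECT * FROM integrated_view").fetchall())  # live join
print(db.execute("SELECT * FROM warehouse").fetchall())        # materialized copy
```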

    Remote maintenance of control system performance over the internet

    The Internet provides a significant benefit for the remote maintenance and fault diagnosis of various devices and plants. One such example is the UK’s distributed aircraft maintenance environment (DAME) (www.cs.york.ac.uk/dame), which provides a generic test bed for distributed diagnostics based on Grid-enabled technologies. This paper focuses on developing a systematic method for the design of such remote maintenance systems, specifically for process control. Design issues for Internet-based remote maintenance systems for process control, such as the one proposed here, include control performance assessment, fault detection, control performance maintenance, and heterogeneous data transfer over the Internet. A back-end and front-end architecture is proposed, in which all the heavy calculations are carried out locally. Light data and the characteristics of any heavy data are sent to the front-end located on the remote server for consideration by remote experts. The remote maintenance system is illustrated by reference to its implementation in a process control rig.
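    The back-end/front-end split is the part most easily shown in code. The sketch below is an assumption-laden illustration of that division of labour, not the paper’s implementation: the performance measure (output variance), the summary fields, and the JSON transfer are all stand-ins.

```python
# Sketch of the proposed split: the heavy calculation over raw control-loop
# samples runs locally at the back-end, and only a light summary is
# serialized for the remote front-end. All names and fields are illustrative.
import json
import statistics

def backend_assess(raw_samples):
    """Heavy local calculation over the full raw data set."""
    return {"loop": "rig-1",
            "samples": len(raw_samples),
            "output_variance": statistics.pvariance(raw_samples)}  # light characteristics only

def send_to_frontend(summary):
    """Stand-in for Internet transfer: ship the light summary, never the raw data."""
    return json.dumps(summary)

raw = [0.1, -0.2, 0.15, 0.05, -0.1] * 1000            # raw samples never leave the plant
print(send_to_frontend(backend_assess(raw)))
```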

    Storing XML Documents in Databases

    The authors introduce concepts for loading large amounts of XML documents into databases where the documents are stored and maintained. The goal is to make XML databases as unobtrusive in multi-tier systems as possible and at the same time provide as many services defined by the XML standards as possible. The ubiquity of XML has sparked great interest in deploying concepts known from Relational Database Management Systems, such as declarative query languages, transactions, indexes and integrity constraints. This chapter presents how bulkloading is done in Monet XML, a main-memory XML database system, and evaluates the cost of bulkloading and bulk deletion relative to strategies based on insertion and deletion of individual nodes. Additionally, we survey the applicability of the techniques to a wider class of XML storage schemas.
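    The core idea of bulkloading, shredding a whole document into tuples in one pass and inserting them with a single set-oriented operation rather than node by node, can be sketched briefly. This is a generic illustration, not the Monet XML algorithm; the node-table layout is an assumption.

```python
# One parse pass shreds a document into (node_id, parent_id, tag, text)
# tuples, which are then inserted with a single bulk operation instead of
# one INSERT per node. The relational layout is an assumed example.
import sqlite3
import xml.etree.ElementTree as ET

def shred(xml_text):
    root = ET.fromstring(xml_text)
    tuples, next_id, stack = [], 0, [(root, None)]
    while stack:                                   # depth-first traversal
        node, parent_id = stack.pop()
        node_id, next_id = next_id, next_id + 1
        tuples.append((node_id, parent_id, node.tag, (node.text or "").strip()))
        stack.extend((child, node_id) for child in node)
    return tuples

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE nodes (id INTEGER, parent INTEGER, tag TEXT, text TEXT)")
db.executemany("INSERT INTO nodes VALUES (?, ?, ?, ?)",        # one bulk insert
               shred("<doc><a>x</a><b><c>y</c></b></doc>"))
print(db.execute("SELECT * FROM nodes ORDER BY id").fetchall())
```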

    Bulkloading and Maintaining XML Documents

    The popularity of XML as an exchange and storage format brings about massive amounts of documents to be stored, maintained and analyzed, a challenge that traditionally has been tackled with Database Management Systems (DBMS). To open up the content of XML documents to analysis with declarative query languages, efficient bulk loading techniques are necessary. Database technology has traditionally offered support for these tasks, yet falls short of providing efficient automation techniques for the challenges that large collections of XML data raise. As storage back-end, many applications rely on relational databases, which are designed for large data volumes. This paper studies bulk load and update algorithms for XML data stored in relational format and outlines opportunities and problems. We investigate both (1) bulk insertion and deletion and (2) updates in the form of edit scripts, which rely heavily on pointer-chasing techniques that are often considered orthogonal to the algebraic operations relational databases are optimized for. To get the most out of relational database systems, we show that one should make careful use of edit scripts and replace them with bulk operations if more than a very small portion of the database is updated. We implemented our ideas on top of the Monet Database System and benchmarked their performance.
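    The paper’s practical rule, use edit scripts only for small changes and switch to bulk operations otherwise, reduces to a simple threshold test. The sketch below is a schematic of that decision with a placeholder 5% cut-off; the benchmark’s actual crossover point is not reproduced here.

```python
# Schematic of the edit-script-versus-bulk decision. The 5% threshold is an
# assumed placeholder, not a number taken from the paper's benchmarks.
BULK_THRESHOLD = 0.05

def apply_updates(db_size, edit_script, apply_edit, bulk_reload):
    if len(edit_script) / db_size <= BULK_THRESHOLD:
        for edit in edit_script:      # pointer-chasing: one node at a time
            apply_edit(edit)
    else:
        bulk_reload()                 # set-oriented work the RDBMS is built for

# Usage with stub operations:
log = []
apply_updates(1000, ["upd"] * 10, log.append, lambda: log.append("BULK"))
print(log[:3])                        # small script: individual edits applied
log.clear()
apply_updates(1000, ["upd"] * 200, log.append, lambda: log.append("BULK"))
print(log)                            # large script: one bulk reload instead
```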

    Data Warehouse Technology and Application in Data Centre Design for E-government


    Integrating data warehouses with web data : a survey

    This paper surveys the most relevant research on combining Data Warehouse (DW) and Web data. It studies the XML technologies that are currently being used to integrate, store, query, and retrieve Web data and their application to DWs. The paper reviews different DW distributed architectures and the use of XML languages as an integration tool in these systems. It also introduces the problem of dealing with semistructured data in a DW. It studies Web data repositories, the design of multidimensional databases for XML data sources, and the XML extensions of OnLine Analytical Processing techniques. The paper addresses the application of information retrieval technology in a DW to exploit text-rich document collections. The authors hope that the paper will help to discover the main limitations and opportunities that the combination of the DW and Web fields offers, as well as to identify open research lines.
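    One recurring theme in the survey, designing multidimensional databases over XML data sources, amounts to extracting facts from semistructured feeds into a star-schema fact table. The fragment below is a deliberately simple, invented example of that step; the feed format, table, and column names are not taken from any surveyed system.

```python
# Invented example of feeding a semistructured Web source into a
# multidimensional model: facts are extracted from XML into a fact table
# that OLAP-style queries can then aggregate.
import sqlite3
import xml.etree.ElementTree as ET

xml_feed = "<orders><order city='Oslo' amount='120'/><order city='Lyon' amount='80'/></orders>"

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE fact_sales (city TEXT, amount REAL)")
db.executemany("INSERT INTO fact_sales VALUES (?, ?)",
               [(o.get("city"), float(o.get("amount")))
                for o in ET.fromstring(xml_feed).iter("order")])
# A simple roll-up over the city dimension:
print(db.execute("SELECT city, SUM(amount) FROM fact_sales GROUP BY city").fetchall())
```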