
    Regional Data Archiving and Management for Northeast Illinois

    This project studies the feasibility and implementation options for establishing a regional data archiving system to help monitor and manage traffic operations and planning for the northeastern Illinois region. It aims to provide clear guidance to the regional transportation agencies, from both technical and business perspectives, on building such a comprehensive transportation information system. Several implementation alternatives are identified and analyzed. The research is carried out in three phases. In the first phase, existing documents related to ITS deployments in the broader Chicago area are summarized, and a thorough review is conducted of similar systems across the country. Various stakeholders are interviewed to collect information on all data elements that they store, including format, system, and granularity. Their perception of a data archive system, including potential benefits and costs, is also surveyed. In the second phase, a conceptual design of the database is developed, covering system architecture, functional modules, user interfaces, and examples of usage. In the last phase, possible business models for the archive system to sustain itself are reviewed. We estimate initial capital and recurring operational/maintenance costs for the system based on realistic information on the hardware, software, labor, and resource requirements. We also identify possible revenue opportunities. Four implementation options for the archive system are summarized in this report:
    1. System hosted by a partnering agency
    2. System contracted to a university
    3. System contracted to a national laboratory
    4. System outsourced to a service provider
    The costs, advantages, and disadvantages of each of these recommended options are also provided. (Report ICT-R27-22; peer reviewed.)

    Fiat: Deductive Synthesis of Abstract Data Types in a Proof Assistant

    We present Fiat, a library for the Coq proof assistant supporting refinement of declarative specifications into efficient functional programs with a high degree of automation. Each refinement process leaves a proof trail, checkable by the normal Coq kernel, justifying its soundness. We focus on the synthesis of abstract data types that package methods with private data. We demonstrate the utility of our framework by applying it to the synthesis of query structures--abstract data types with SQL-like query and insert operations. Fiat includes a library for writing specifications of query structures in SQL-inspired notation, expressing operations over relations (tables) in terms of mathematical sets. This library includes a suite of tactics for automating the refinement of specifications into efficient, correct-by-construction OCaml code. Using these tactics, a programmer can generate such an implementation completely automatically by only specifying the equivalent of SQL indexes, data structures capturing useful views of the abstract data. Throughout we speculate on the new programming modularity possibilities enabled by an automated refinement system with proved-correct rules. "Every block of stone has a statue inside it and it is the task of the sculptor to discover it."--Michelangelo (National Science Foundation (U.S.) grant CCF-1253229; United States Defense Advanced Research Projects Agency (DARPA) agreement FA8750-12-2-0293)

    Metadata-driven Data Migration from Object-relational Database to NoSQL Document-oriented Database

    Object-relational databases (ORDB) are powerful for managing complex data, but they suffer from problems of scalability and of managing large-scale data. The importance of migrating an ORDB to NoSQL therefore derives from the fact that large volumes of data can be handled best with high scalability and availability. This paper reports our metadata-driven approach for the migration of an ORDB to a document-oriented NoSQL database. Our data migration approach involves three major stages: a preprocessing stage, to extract the data and the schema components; a processing stage, to perform the data transformation; and a post-processing stage, to store the migrated data as BSON documents. The approach maintains the benefits of the Oracle ORDB in NoSQL MongoDB by supporting integrity constraint checking. To validate our approach, we developed the OR2DOD (Object Relational to Document-Oriented Databases) system, and the experimental results confirm the effectiveness of our proposal.
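    The core of such a migration, flattening a parent row and its joined child rows into one embedded document, can be illustrated with a minimal sketch. The `schema` metadata structure, the function name, and all field names below are hypothetical stand-ins, not the OR2DOD implementation:

```python
def migrate_row(row, schema, child_rows):
    """Metadata-driven mapping of one object-relational row to a
    document (a plain dict, ready to be serialized as BSON).

    row:        dict of column -> value for the parent table
    schema:     metadata naming the parent columns to keep, the key under
                which the child rows are embedded, and the child columns
    child_rows: list of dicts from the child table joined to this row
    """
    # Copy only the columns the metadata declares for the parent.
    doc = {col: row[col] for col in schema["columns"]}
    # Embed the child rows as an array of sub-documents, replacing the
    # foreign-key join of the relational model with nesting.
    doc[schema["nested_as"]] = [
        {c: r[c] for c in schema["nested_columns"]} for r in child_rows
    ]
    return doc
```

    A post-processing stage would then insert each `doc` into a MongoDB collection; integrity checks could be run on the dict before insertion.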

    An Egocentric Spatial Data Model for Intelligent Mobile Geographic Information Systems

    Individuals in unknown locations, such as utility workers in the field, soldiers on a mission, or sightseeing tourists, share the need for an answer to two basic questions: Where am I? and What is in front of me? Because such information is not readily available in foreign locations, aids in the form of paper maps or mobile GISs, which give individuals an all-inclusive view of the environment, are often used. This panoptic view may hinder the positioning and orienteering process, since people perceive their surroundings perspectively from their current position. In this thesis, I describe a novel framework that resolves this problem by applying sensors that gather the individual's spatial frame of reference. This spatial frame of reference, in combination with an egocentric spatial data model, enables an injective mapping between the real world and the data frame of reference, hence alleviating the individual's cognitive workload. Furthermore, our egocentric spatial data model allows intelligent mobile Geographic Information Systems to capture the notions of here and there and, consequently, provides insight into the individual's surroundings. Finally, our framework, in conjunction with the context given by the task to be performed, enables intelligent mobile Geographic Information Systems to implicitly answer questions with respect to where, what, and how.
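    The key idea, re-expressing world coordinates relative to the individual's position and heading, can be sketched as a simple 2D change of frame. This is a minimal illustration of an egocentric mapping, not the thesis's data model; the function and its signature are hypothetical:

```python
import math

def to_egocentric(observer, heading_deg, landmark):
    """Convert a world coordinate into the observer's egocentric frame.

    observer, landmark: (x, y) positions in world coordinates
    heading_deg:        observer's heading, degrees CCW from the +x axis
    Returns (forward, left): how far ahead of and to the left of the
    observer the landmark lies -- the "here/there, in front of me" view.
    """
    dx = landmark[0] - observer[0]
    dy = landmark[1] - observer[1]
    h = math.radians(heading_deg)
    # Rotate the world-frame offset by -heading so that "forward"
    # aligns with the direction the observer is facing.
    forward = dx * math.cos(h) + dy * math.sin(h)
    left = -dx * math.sin(h) + dy * math.cos(h)
    return forward, left
```

    With the observer's pose supplied by sensors, a query such as "what is in front of me?" reduces to filtering features whose `forward` value is positive and whose `left` value is small.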

    User Defined Types and Nested Tables in Object Relational Databases

    Bernadette Byrne, Mary Garvey, 'User Defined Types and Nested Tables in Object Relational Databases', paper presented at the United Kingdom Academy for Information Systems 2006: Putting Theory into Practice, Cheltenham, UK, 5-7 June 2006. There has been much research and work into incorporating objects into databases, with a number of object databases being developed in the 1980s and 1990s. During the 1990s the concept of object-relational databases became popular, with object extensions to the relational model. As a result, several relational databases have added such extensions. There has been little in the way of formal evaluation of object-relational extensions to commercial database systems. In this work an airline flight logging system, a real-world database application, was taken and a database developed using a regular relational database and again using object-relational extensions, allowing the evaluation of the relational extensions. Peer reviewed.

    SAP Core Data Services (CDS)

    SAP Core Data Services (CDS), introduced with SAP HANA, is regarded as the next generation for defining and accessing SAP data. Based on SQL, this new infrastructure offers both improvements to existing functions and expressions and entirely new capabilities. The goal of this work is to become familiar with the concept, to develop a prototype showcasing the various features of CDS, and then to determine whether it is more advantageous to move to SAP CDS or to stay with the ABAP Dictionary.

    Srql: Sorted relational query language

    A relation is an unordered collection of records. Often, however, there is an underlying order (e.g., a sequence of stock prices), and users want to pose queries that reflect this order (e.g., find a weekly moving average). SQL provides no support for posing such queries. In this paper, we show how a rich class of queries reflecting sort order can be naturally expressed and efficiently executed with simple extensions to SQL.
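    The paper's motivating example, a moving average over an ordered sequence, can be sketched in plain Python to show the kind of order-dependent computation that an unordered relational model cannot express directly (the function name and window semantics here are illustrative, not SRQL syntax):

```python
def moving_average(values, window):
    """Sliding-window average over an already-sorted sequence.

    values: list of numbers in their underlying order (e.g., daily
            stock prices sorted by date)
    window: window size (e.g., 7 for a weekly average of daily data)
    Returns one average per full window, preserving the input order.
    """
    return [
        sum(values[i:i + window]) / window
        for i in range(len(values) - window + 1)
    ]
```

    The point of SRQL is that such a query can be stated declaratively over a sorted relation instead of being pushed out into application code like this.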

    PARTE : automatic program partitioning for efficient computation over encrypted data

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2013. Cataloged from PDF version of thesis. Includes bibliographical references (p. 45-47). Many modern applications outsource their data storage and computation needs to third parties. Although this lifts many infrastructure burdens from the application developer, he must deal with an increased risk of data leakage (i.e., there are more distributed copies of the data, and the third party may be insecure and/or untrustworthy). Oftentimes, the most practical option is to tolerate this risk. This is far from ideal, and in the case of highly sensitive data (e.g., medical records, location history) it is unacceptable. We present PARTE, a tool to aid application developers in lowering the risk of data leakage. PARTE statically analyzes a program's source, annotated to indicate types which will hold sensitive data (i.e., data that should not be leaked), and outputs a partitioned version of the source. One partition operates only on encrypted copies of sensitive data to lower the risk of data leakage and can safely be run by a third party or in an otherwise untrusted environment. The second partition must have plaintext access to sensitive data and therefore should be run in a trusted environment. Program execution flows between the partitions, leveraging third-party resources when data leakage risk is low. Further, we identify operations which, if efficiently supported by some encryption scheme, would improve the performance of partitioned execution. To demonstrate the feasibility of these ideas, we implement PARTE in Haskell and run it on a web application, hpaste, which allows users to upload and share text snippets. The partitioned hpaste serves web requests 1.2-2.5x slower than the original hpaste. We find this overhead to be moderately high. Moreover, the partitioning does not allow much code to run on encrypted data. We discuss why we feel our techniques did not produce an attractive partitioning and offer insight on new research directions which could yield better results. By Meelap Shah. S.M.
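    The partitioning criterion, splitting code by whether it needs plaintext access to annotated sensitive types, can be caricatured in a toy sketch. PARTE's actual analysis works on annotated Haskell source; the type names, data representation, and function below are all hypothetical:

```python
# Hypothetical sensitivity annotations: types whose plaintext must stay
# inside the trusted environment.
SENSITIVE_TYPES = {"MedicalRecord", "LocationHistory"}

def partition(functions):
    """Split functions into a trusted partition (needs plaintext access
    to a sensitive type) and an untrusted one (safe to outsource).

    functions: list of (name, types_touched) pairs -- a stand-in for the
    type information a static analysis would extract from the source.
    Returns (trusted, untrusted) lists of function names.
    """
    trusted, untrusted = [], []
    for name, types in functions:
        # Any contact with a sensitive type pins the function to the
        # trusted side; everything else may run on encrypted data.
        (trusted if types & SENSITIVE_TYPES else untrusted).append(name)
    return trusted, untrusted
```

    At run time, execution would flow between the two partitions, crossing back to the trusted side only when a sensitive type must be decrypted.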