42 research outputs found

    Heterogeneous Distributed Data Base Management Systems

    This work analyzes the design and implementation issues of Heterogeneous Distributed Data Base Management Systems (HDDBMS). To date, HDDBMS research projects and implementations have been limited. The few such systems which have been constructed provide valuable insight into the nature of the problems posed by heterogeneity. Some of these systems (SIRIUS-DELTA, MULTIBASE, AIDA) are presented in order to examine their solutions to the problems. The major issues described in the thesis are: the architecture of the distributed system; query translation; schema mapping; and integration of the schemata within the heterogeneous distributed database. In seeking solutions to these issues, an architecture for an HDDBMS is proposed.
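    To make the schema-mapping and query-translation issues concrete, here is a minimal Python sketch of one global query being rewritten against a single site's local schema. The `SiteMapping` and `translate` names and the example schemas are hypothetical illustrations, not the architecture proposed in the thesis.

```python
# A minimal sketch, assuming a hypothetical global schema and one site's
# local schema; not the thesis's proposed architecture.
from dataclasses import dataclass


@dataclass
class SiteMapping:
    """Maps global relation and attribute names onto one site's schema."""
    relation: dict   # global relation name -> local table name
    attribute: dict  # global attribute name -> local column name


def translate(global_query: dict, mapping: SiteMapping) -> str:
    """Rewrite a simple global selection/projection into site-local SQL."""
    table = mapping.relation[global_query["relation"]]
    cols = ", ".join(mapping.attribute[a] for a in global_query["project"])
    pred = " AND ".join(
        f"{mapping.attribute[a]} = '{v}'"
        for a, v in global_query["select"].items()
    )
    return f"SELECT {cols} FROM {table} WHERE {pred}"


# The same global query can be fanned out to differently named local schemas.
query = {"relation": "EMPLOYEE",
         "project": ["name", "dept"],
         "select": {"dept": "AVIONICS"}}

site_a = SiteMapping(relation={"EMPLOYEE": "emp_master"},
                     attribute={"name": "emp_name", "dept": "dept_code"})

print(translate(query, site_a))
# -> SELECT emp_name, dept_code FROM emp_master WHERE dept_code = 'AVIONICS'
```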

    The application of artificial intelligence techniques to large distributed networks

    Data accessibility and information transfer efforts, including the land resources information system pilot, are structured as large computer information networks. The goals of these pilot efforts include reducing the difficulty of finding and using data, reducing processing costs, and minimizing incompatibility between data sources. Artificial Intelligence (AI) techniques were suggested to achieve these goals. The applicability of certain AI techniques is explored in the context of distributed problem solving systems and the pilot land data system (PLDS). The topics discussed include: PLDS and its data processing requirements, expert systems and PLDS, distributed problem solving systems, AI problem solving paradigms, query processing, and distributed data bases.
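    As a toy illustration of the distributed query processing mentioned above, the following Python sketch has a coordinator merge partial answers from the nodes that hold each data source. The node contents and the `distributed_lookup` helper are hypothetical, not part of PLDS.

```python
# A toy sketch of distributed query processing: a coordinator asks every
# node that holds a data source and merges the fragments. The nodes and
# their contents are hypothetical, not part of PLDS.

DATA_NODES = {
    "soils": {"plot_17": {"soil_type": "loam"}},
    "imagery": {"plot_17": {"cover": "cropland"}},
}


def distributed_lookup(key: str) -> dict:
    """Merge what each node knows about `key` into one answer."""
    result = {}
    for store in DATA_NODES.values():
        fragment = store.get(key)
        if fragment:
            result.update(fragment)
    return result


print(distributed_lookup("plot_17"))
# -> {'soil_type': 'loam', 'cover': 'cropland'}
```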

    Multidatabase concurrency control


    IPAD 2: Advances in Distributed Data Base Management for CAD/CAM

    The Integrated Programs for Aerospace-Vehicle Design (IPAD) Project objective is to improve engineering productivity through better use of computer-aided design and manufacturing (CAD/CAM) technology. The focus is on development of technology and associated software for integrated company-wide management of engineering information. The objectives of this conference are as follows: to provide a greater awareness of the critical need by U.S. industry for advancements in distributed CAD/CAM data management capability; to present industry experiences and current and planned research in distributed data base management; and to summarize IPAD data management contributions and their impact on U.S. industry and computer hardware and software vendors.

    PureData Systems for Analytics: Concurrency and Workload Management

    PureData™ System for Analytics, also called Netezza, is a data warehouse appliance for analytic workloads, capable of providing throughput 1000 times greater than that of traditional database servers. Impressively, it requires minimal system tuning, delivering high-end performance along with a low total cost of ownership (TCO). Database performance is directly linked to the allocation of system resources on a database management system. The heart of the Netezza appliance, the Field-Programmable Gate Array (FPGA), plays a key role in boosting the overall performance of the server. I/O operations are always a bottleneck in any database server, and it is the FPGA that eradicates the I/O problem in Netezza by filtering the data at each snippet processing unit (SPU), processing and running queries faster and thereby greatly improving the server's performance. This paper describes the problems companies currently face in a "big data" environment, including concurrency handling and query performance. Various factors affect a query's performance, including bad data distribution, stale statistics, server load, and uneven system resources. Since this paper is restricted to system resources, an in-depth analysis of system resources and their components is presented in this research. Workload Management (WLM) and each of its features are described, giving the reader a clear notion of how a query's performance is altered using various mechanisms. The paper describes the performance problems that exist on traditional database servers and how the Workload Management components can be tweaked, along with the predefined system configurations, to make a query run faster on a Netezza machine.
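    As a rough illustration of the workload-management idea, the following Python sketch admits queries from a priority-ordered wait queue into a fixed number of concurrency slots. The class names, priorities, and slot count are hypothetical simplifications, not Netezza's actual WLM implementation.

```python
# A rough sketch of priority-based workload management: queries wait in a
# priority-ordered queue and are dispatched into a fixed number of
# concurrency slots. All names and numbers here are hypothetical.

import heapq
from dataclasses import dataclass, field


@dataclass(order=True)
class Query:
    priority: int                      # lower value = more urgent
    sql: str = field(compare=False)


class WorkloadManager:
    def __init__(self, max_concurrent: int):
        self.max_concurrent = max_concurrent
        self.running = []
        self.waiting = []              # heap ordered by priority

    def submit(self, query: Query) -> None:
        heapq.heappush(self.waiting, query)

    def dispatch(self) -> None:
        # Move the most urgent waiting queries into free slots.
        while self.waiting and len(self.running) < self.max_concurrent:
            self.running.append(heapq.heappop(self.waiting))

    def finish(self, query: Query) -> None:
        self.running.remove(query)
        self.dispatch()                # a freed slot admits the next query


wlm = WorkloadManager(max_concurrent=2)
for prio, sql in [(3, "-- nightly report"), (1, "-- dashboard"),
                  (2, "-- etl load")]:
    wlm.submit(Query(prio, sql))
wlm.dispatch()

print([q.sql for q in wlm.running])   # ['-- dashboard', '-- etl load']
print([q.sql for q in wlm.waiting])   # ['-- nightly report']
```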

    Space station data system analysis/architecture study. Task 2: Options development, DR-5. Volume 2: Design options

    The primary objective of Task 2 is the development of an information base that will support the conduct of trade studies and provide sufficient data to make key design/programmatic decisions. This includes: (1) the establishment of option categories that are most likely to influence Space Station Data System (SSDS) definition; (2) the identification of preferred options in each category; and (3) the characterization of these options with respect to performance attributes, constraints, cost and risk. This volume contains the options development for the design category. This category comprises alternative structures, configurations and techniques that can be used to develop designs that are responsive to the SSDS requirements. The specific areas discussed are software, including data base management and distributed operating systems; system architecture, including fault tolerance and system growth/automation/autonomy and system interfaces; time management; and system security/privacy. Also discussed are space communications and local area networking.

    Secure and efficient processing of outsourced data structures using trusted execution environments

In recent years, more and more companies make use of cloud computing; in other words, they outsource data storage and data processing to a third party, the cloud provider. From cloud computing, the companies expect, for example, cost reductions, fast deployment times, and improved security. However, security also presents a significant challenge, as demonstrated by many cloud computing–related data breaches. Whether due to failing security measures, government interventions, or internal attackers, data leakages can have severe consequences, e.g., revenue loss, damage to brand reputation, and loss of intellectual property. A valid strategy to mitigate these consequences is data encryption during storage, transport, and processing. Nevertheless, outsourced data processing should combine three properties: strong security, high efficiency, and arbitrary processing capabilities.

Many approaches for outsourced data processing are based purely on cryptography, for instance, encrypted storage of outsourced data, property-preserving encryption, fully homomorphic encryption, searchable encryption, and functional encryption. However, all of these approaches fail in at least one of the three mentioned properties. Besides purely cryptographic approaches, some approaches use a trusted execution environment (TEE) to process data at a cloud provider. TEEs provide an isolated processing environment for user-defined code and data; i.e., the confidentiality and integrity of code and data processed in this environment are protected against other software and against physical access. Additionally, TEEs promise efficient data processing.

Various research papers use TEEs to protect objects at different levels of granularity. At one end of the range, TEEs can protect entire (legacy) applications. This approach reduces the development effort for protected applications, as it requires only minor changes. However, its downsides are that the attack surface is large, it is difficult to capture the exact leakage, and it might not even be possible, as the isolated environment of commercially available TEEs is limited. At the other end of the range, TEEs can protect individual, stateless operations, which are called from otherwise unchanged applications. This approach does not suffer from the problems stated before, but it leaks the (encrypted) result of each operation and the detailed control flow through the application. The leakage of this approach is difficult to capture, because it depends on the processed operation and the operation's location in the code.

In this dissertation, we propose a trade-off between both approaches: the TEE-based processing of data structures. In this approach, otherwise unchanged applications call a TEE for self-contained data structure operations and receive encrypted results. We examine three data structures: TEE-protected B+-trees, TEE-protected database dictionaries, and TEE-protected file systems. Using these data structures, we design three secure and efficient systems: an outsourced system for index searches; an outsourced, dictionary-encoding–based, column-oriented, in-memory database supporting analytic queries on large datasets; and an outsourced system for group file sharing supporting large and dynamic groups. Due to our approach, the systems have a small attack surface, a low likelihood of security-relevant bugs, and a data owner can easily perform a (formal) code verification of the sensitive code. At the same time, we prevent low-level leakage of individual operation results. For all systems, we present a thorough security evaluation showing lower bounds of security. Additionally, we use prototype implementations to present upper bounds on performance. For our implementations, we use a widely available TEE that has a limited isolated environment: Intel Software Guard Extensions. By comparing our systems to related work, we show that they provide a favorable trade-off between security and efficiency.
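The following Python sketch illustrates the dissertation's central trade-off in miniature: the untrusted host calls a trusted component only for self-contained dictionary operations and only ever handles ciphertexts. The `Enclave` class is a plain-Python stand-in for a real TEE such as an SGX enclave, and the XOR "cipher" stands in for real authenticated encryption; none of this is the dissertation's actual implementation.

```python
# A miniature sketch of TEE-protected data-structure operations: the host
# calls the trusted component for self-contained dictionary operations and
# sees only ciphertexts. `Enclave` is a plain-Python stand-in for a real
# TEE; the XOR "cipher" stands in for real authenticated encryption.

import os
from typing import Optional


def toy_cipher(key: bytes, data: bytes) -> bytes:
    """Stand-in for real encryption: XOR with a repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))


class Enclave:
    """Models the isolated environment: key and plaintext stay inside."""

    def __init__(self):
        self._key = os.urandom(16)             # never leaves the enclave
        self._dictionary = {}                  # plaintext state, protected

    def ecall_insert(self, enc_key: bytes, enc_value: bytes) -> None:
        # Decrypt inside the protected boundary, then operate on plaintext.
        k = toy_cipher(self._key, enc_key)
        self._dictionary[k] = toy_cipher(self._key, enc_value)

    def ecall_lookup(self, enc_key: bytes) -> Optional[bytes]:
        # The result is re-encrypted before it crosses back to the host.
        k = toy_cipher(self._key, enc_key)
        value = self._dictionary.get(k)
        return toy_cipher(self._key, value) if value is not None else None

    def owner_encrypt(self, data: bytes) -> bytes:
        """Models the data owner, who provisioned the key out of band."""
        return toy_cipher(self._key, data)


enclave = Enclave()
enclave.ecall_insert(enclave.owner_encrypt(b"record_42"),
                     enclave.owner_encrypt(b"sensitive value"))
result = enclave.ecall_lookup(enclave.owner_encrypt(b"record_42"))
print(result)  # ciphertext only: the host never sees the plaintext value
```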