    A generalized system performance model for object-oriented database applications

    Although relational database systems have met many needs in traditional business applications, such technology is inadequate for non-traditional applications such as computer-aided design, computer-aided software engineering, and knowledge bases. Object-oriented database systems (OODB) enhance the data modeling power and performance of database management systems for these applications. Response time is an important issue facing OODB; however, standard measures of on-line transaction processing are irrelevant for OODB. Benchmarks compare alternative implementations of OODB system software running a constant application workload, but few attempts have been made to characterize the performance implications of OODB application design, given a fixed OODB and operating system platform. In this study, design features of the OO7 Benchmark database application (Carey, DeWitt, and Naughton, 1993) were varied to explore their impact on the response time of database operations. Sensitivity to the degree of aggregation and to the degree of inheritance in the application was measured. Variability in response times was also measured, using a sequence of database operations to simulate a user transaction workload. Degree of aggregation was defined as the number of relationship objects processed during a database operation. Response time was linear in the degree of aggregation; the size of the database segment processed, relative to the size of available memory, affected the coefficients of the regression line. Degree of inheritance was defined as the Number of Children (Chidamber and Kemerer, 1994) in the application class definitions, and as the extent to which run-time polymorphism was implemented. In this study, increased inheritance caused a statistically significant increase in response time for OO7 Traversal 1 only, although this difference was not meaningful. In the simulated transaction workload of nine OO7 operations, response times were highly variable. Response time per operation depended on the number of objects processed and on the effect of preceding operations on memory contents. Operations that used disparate physical segments or had working sets large relative to the size of memory caused large increases in response time. Average response times and variability were reduced by removing these operations from the sequence (equivalent to scheduling these transactions at a time when the impact would be minimized).
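
    As an illustration of the regression analysis described above (a sketch, not code from the study; the measurements are invented), ordinary least squares recovers the linear relationship between degree of aggregation and response time:

        # Hypothetical measurements: (degree of aggregation, response time in ms).
        # The linear relationship mirrors the study's finding; the values are invented.
        samples = [(100, 52.0), (200, 98.5), (400, 201.0), (800, 395.0)]

        n = len(samples)
        sum_x = sum(x for x, _ in samples)
        sum_y = sum(y for _, y in samples)
        sum_xy = sum(x * y for x, y in samples)
        sum_xx = sum(x * x for x, _ in samples)

        # Ordinary least squares for: response_time = intercept + slope * aggregation.
        slope = (n * sum_xy - sum_x * sum_y) / (n * sum_xx - sum_x * sum_x)
        intercept = (sum_y - slope * sum_x) / n
        print(f"response_time = {intercept:.2f} + {slope:.4f} * objects_processed")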

    Storing RDF as a Graph

    RDF is the first W3C standard for enriching information resources of the Web with detailed metadata. The semantics of RDF data is defined using an RDF schema. The most expressive language for querying RDF is RQL, which enables querying of semantics. In order to support RQL, an RDF storage system has to map the RDF graph model onto its storage structure. Several storage systems for RDF data have been developed that store the RDF data as triples in a relational database. To evaluate an RQL query on those triple structures, the graph model has to be rebuilt from the triples. In this paper, we present a new approach that stores RDF data as a graph in an object-oriented database. Our approach avoids the costly rebuilding of the graph and queries the storage structure directly and efficiently. The advantages of our approach are shown by performance tests on our prototype implementation, OO-Store.
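
    To make the contrast concrete (a minimal sketch, not the OO-Store implementation; all identifiers are invented), triple storage forces repeated lookups over rows, while a graph of node objects supports direct pointer navigation:

        # Triple-based storage: every step of a path query re-scans (or re-joins) rows.
        triples = [
            ("doc1", "author", "alice"),
            ("alice", "memberOf", "w3c"),
        ]

        def objects_of(subject, predicate):
            return [o for s, p, o in triples if s == subject and p == predicate]

        print(objects_of("doc1", "author"))  # ['alice']

        # Graph-based storage: nodes hold direct references, so a path query
        # is plain pointer navigation, as in an OODB.
        class Node:
            def __init__(self, label):
                self.label = label
                self.edges = {}  # predicate -> list of target Nodes

        doc1, alice, w3c = Node("doc1"), Node("alice"), Node("w3c")
        doc1.edges["author"] = [alice]
        alice.edges["memberOf"] = [w3c]

        # Two-step path without rebuilding a graph from triples:
        print([g.label for a in doc1.edges["author"] for g in a.edges["memberOf"]])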

    Compensation methods to support generic graph editing: A case study in automated verification of schema requirements for an advanced transaction model

    Compensation plays an important role in advanced transaction models, cooperative work, and workflow systems. However, compensation operations are often simply written as a^−1 in the transaction model literature. This notation ignores any operation parameters, results, and side effects. A schema designer intending to use an advanced transaction model is expected (required) to write correct method code. However, in the days of cut-and-paste, this is much easier said than done. In this paper, we demonstrate the feasibility of using an off-the-shelf theorem prover (also called a proof assistant) to perform automated verification of compensation requirements for an OODB schema. We report on the results of a case study in verification for a particular advanced transaction model that supports cooperative applications. The case study is based on an OODB schema that provides generic graph editing functionality for the creation, insertion, and manipulation of nodes and links.
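
    A minimal sketch of the point about compensation (hypothetical names, not the paper's schema): a compensation must consume the forward operation's parameters and results, and some side effects, such as an identifier counter, are not undone at all, which the a^−1 notation hides:

        class GraphSchema:
            def __init__(self):
                self.nodes = {}
                self.next_id = 0

            def insert_node(self, label):
                """Forward operation: returns a result the compensation needs."""
                node_id = self.next_id
                self.next_id += 1
                self.nodes[node_id] = label
                return node_id

            def compensate_insert_node(self, node_id):
                """Compensation: parameterized by the forward result; note the
                id counter is deliberately not rolled back (a side effect)."""
                del self.nodes[node_id]

        g = GraphSchema()
        nid = g.insert_node("A")
        g.compensate_insert_node(nid)
        assert nid not in g.nodes and g.next_id == 1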

    Database independent Migration of Objects into an Object-Relational Database

    This paper reports on the CERN-based WISDOM project, which is studying the serialisation and deserialisation of data to/from an object database (Objectivity) and Oracle 9i.
    Comment: 26 pages, 18 figures; CMS CERN Conference Report cr02_01
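
    A sketch of database-independent (de)serialisation under invented names (this is not the WISDOM API): objects are flattened to a neutral record that backend-specific adapters could write to either store:

        # Illustrative domain class; names are hypothetical.
        class Event:
            def __init__(self, run, energy):
                self.run, self.energy = run, energy

        def serialise(event):
            # Flatten to a backend-neutral record.
            return {"run": event.run, "energy": event.energy}

        def deserialise(record):
            return Event(record["run"], record["energy"])

        record = serialise(Event(42, 13.6))
        # The same record could be bound to an INSERT for Oracle 9i or written
        # into an Objectivity container by backend-specific adapters.
        restored = deserialise(record)
        assert (restored.run, restored.energy) == (42, 13.6)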

    Migrating relational data to an OODB: strategies and lessons from a molecular biology experience

    The growing maturity of OODB technology is causing many enterprises to consider migrating relational databases to OODBs. While data remapping is relatively straightforward, greater challenges lie in economically and non-invasively adapting legacy application software. We report on a genetics laboratory database migration experiment, which was facilitated both by the organization of the relational data in object-like form and by a C++ framework designed to insulate application code from relational artifacts. To our surprise, the framework failed to encapsulate three subtle aspects of the relational implementation, thereby "contaminating" application code. We describe the underlying issues and offer cautionary guidance to future migrators.
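
    The insulation idea can be sketched as follows (in Python rather than the paper's C++, with invented names): application code sees an object facade, while relational artifacts such as SQL NULL can still leak through:

        # A row fetched from the legacy relational schema (invented columns).
        row = {"gene_id": 7, "symbol": "BRCA1", "locus": None}

        class Gene:
            """Object facade over a relational row, insulating callers from
            column names and, imperfectly, from artifacts such as SQL NULL."""
            def __init__(self, row):
                self._row = row

            @property
            def symbol(self):
                return self._row["symbol"]

            @property
            def locus(self):
                # Relational NULL leaks through as None, one of the subtle
                # artifacts such wrappers can fail to fully encapsulate.
                return self._row["locus"]

        g = Gene(row)
        print(g.symbol, g.locus)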

    Multi-Layer Distributed Storage of LHD Plasma Diagnostic Database

    At the end of the LHD experimental campaign in 2003, the amount of whole plasma diagnostic raw data had reached 3.16 GB in a long-pulse experiment. This is a new world record in fusion plasma experiments, far beyond the previous value of 1.5 GB/shot. The total size of the LHD diagnostic data is about 21.6 TB for the whole six years of experiments, and it continues to grow at an increasing rate. The LHD diagnostic database and storage system, i.e. the LABCOM system, has a completely distributed architecture, making it sufficiently flexible and easily expandable while maintaining the integrity of the total data set. It has three categories of storage layers: OODBMS volumes in data acquisition servers, RAID servers, and mass storage systems such as MO jukeboxes and DVD-R changers. These are equally accessible through the network, and by migrating data between them they can be treated as a virtual OODB extension area. Their data contents are listed in a "facilitator" PostgreSQL RDBMS, which now contains about 6.2 million entries and informs clients requesting data of the optimized retrieval priority. Using the "glib" compression for all of the binary data and applying the three-tier application model for OODB data transfer/retrieval, an optimized OODB read-out rate of 1.7 MB/s and an effective client access speed of 3–25 MB/s have been achieved. As a result, the LABCOM data system has succeeded in combining RDBMS, OODBMS, RAID, and MSS to provide a virtual, always expandable storage volume together with rapid data access.
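
    A sketch of the facilitator pattern described above (layer names and catalog contents are invented, not LABCOM's actual values): a catalog records which storage layers hold each entry, and clients read from the highest-priority layer available:

        # Fastest first: acquisition-server OODB, then RAID, then mass storage.
        LAYER_PRIORITY = ["acquisition_oodb", "raid_server", "mass_storage"]

        # Catalog mapping (shot, diagnostic) -> layers currently holding the data.
        catalog = {
            ("shot_41234", "bolometer"): {"raid_server", "mass_storage"},
            ("shot_41234", "thomson"): {"mass_storage"},
        }

        def best_location(shot, diagnostic):
            layers = catalog[(shot, diagnostic)]
            # Return the highest-priority layer that actually holds the data.
            return next(l for l in LAYER_PRIORITY if l in layers)

        print(best_location("shot_41234", "bolometer"))  # raid_server
        print(best_location("shot_41234", "thomson"))    # mass_storage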