6 research outputs found
A Systematic Performance Study of Object Database Management Systems
Many previous performance benchmarks for Object Database Management Systems (ODBMSs) have typically used arbitrary sets of tests based on what their designers felt were the characteristics of Engineering applications. Increasingly, however, ODBMSs are being used in non-engineering domains, such as Financial Trading, Clinical Healthcare, Telecommunications Network Management, etc. Part of the reason for this is that the technology has matured over the past few years and has become a less risky choice for organisations looking for better ways to manage complex data. However, the development of suitable application- or industry-specific benchmarks, based on actual performance studies, has not paralleled this growth.
The research reported here approaches performance evaluation of ODBMSs pragmatically. It uses a combination of case studies and benchmark experiments to investigate the performance characteristics of ODBMSs for particular applications, following the successful use of this approach by Youssef [Youss93] for studying the performance of On- Line Transaction Processing (OLTP) applications for Relational Database Management Systems (RDBMSs).
Six case studies at five organisations show that organisations consider a wide range of factors when undertaking their own performance studies or benchmarks. Furthermore, none of the studied organisations considered using any public benchmarks. Six current and derived benchmarks also highlight statistically significant performance differences between three major commercial products: Objectivity/DB, ObjectStore and UniSQL. These benchmarks indicate the suitability of the products tested for particular application domains.
The research could not find any evidence at this time to support the concept of a generic or canonical performance workload for ODBMSs. This is demonstrated by the case studies and supported by the benchmark experiments. However, the research shows that performance benchmarks serve a very useful role in ODBMS evaluations and can help identify architectural and quality problems with products that would not otherwise be observed until significant application or system development was already in progress.
The Integration of Product Data with Workflow Management Systems Through a Common Data Model
Traditionally, product models and their definitions have been handled separately from process models and their definitions. In industry, each has been managed by database systems defined for their specific domain, e.g. Product Data Management (PDM) for product definitions and Workflow Management Systems (WfM) for process definitions. There is little or no overlap between these two views of systems, even though product and process information interact over the complete life cycle from design to production.
Managing the consistency of distributed documents
Many businesses produce documents as part of their daily activities: software engineers
produce requirements specifications, design models, source code, build scripts and more;
business analysts produce glossaries, use cases, organisation charts, and domain ontology
models; service providers and retailers produce catalogues, customer data, purchase orders,
invoices and web pages.
What these examples have in common is that the content of documents is often semantically
related: source code should be consistent with the design model, a domain ontology
may refer to employees in an organisation chart, and invoices to customers should be consistent
with stored customer data and purchase orders. As businesses grow and documents
are added, it becomes difficult to manually track and check the increasingly complex relationships
between documents. The problem is compounded by current trends towards
distributed working, either over the Internet or over a global corporate network in large
organisations. This adds complexity as related information is not only scattered over
a number of documents, but the documents themselves are distributed across multiple
physical locations.
This thesis addresses the problem of managing the consistency of distributed and possibly
heterogeneous documents. "Documents" is used here as an abstract term, and does not
necessarily refer to a human readable textual representation. We use the word to stand
for a file or data source holding structured information, like a database table, or some
source of semi-structured information, like a file of comma-separated values or a document
represented in a markup language like XML [Bray et al., 2000]. Document
heterogeneity comes into play when data with similar semantics is represented in different
ways: for example, a design model may store a class as a rectangle in a diagram whereas
a source code file will embed it as a textual string; and an invoice may contain an invoice
identifier that is composed of a customer name and date, both of which may be recorded
and managed separately.
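As a minimal illustration of such heterogeneity, the sketch below (with hypothetical file formats and names, not taken from the thesis) extracts the "same" fact, the set of declared class names, from an XML design model and from Java-like source text; comparison only becomes possible once both representations are normalised into a common form:

```python
import re
import xml.etree.ElementTree as ET

def class_names_from_model(xml_text: str) -> set:
    """Extract class names stored as attributes in a (hypothetical) XML design model."""
    root = ET.fromstring(xml_text)
    return {el.get("name") for el in root.iter("class")}

def class_names_from_source(source: str) -> set:
    """Extract class names embedded as text in Java-like source code."""
    return set(re.findall(r"\bclass\s+(\w+)", source))

model = '<model><class name="Invoice"/><class name="Customer"/></model>'
code = "public class Invoice { }\nclass Order { }"

# Only after normalisation (plain sets of names) can the two views be compared.
print(class_names_from_model(model) & class_names_from_source(code))  # {'Invoice'}
```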
Consistency management in this setting encompasses a number of steps. Firstly, checks
must be executed in order to determine the consistency status of documents. Documents
are inconsistent if their internal elements hold values that do not meet the properties
expected in the application domain or if there are conflicts between the values of elements
in multiple documents. The results of a consistency check have to be accumulated and
reported back to the user. And finally, the user may choose to change the documents to
bring them into a consistent state.
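The three steps just described can be sketched as follows; the rule and document names are illustrative assumptions, not the thesis's own API. Checks are executed over a set of documents, diagnostics are accumulated and reported, and repairing the documents is left to the user:

```python
from dataclasses import dataclass

@dataclass
class Diagnostic:
    rule: str       # name of the violated consistency rule
    document: str   # document in which the conflict was detected
    message: str    # human-readable report for the user

def check_consistency(documents, rules):
    """Step 1: execute every rule; step 2: accumulate the resulting diagnostics."""
    findings = []
    for rule in rules:
        findings.extend(rule(documents))
    return findings

# An example inter-document rule: every invoice must reference a stored customer.
def invoices_reference_customers(docs):
    known = {c["id"] for c in docs["customers"]["records"]}
    return [Diagnostic("invoice-customer", "invoices",
                       "unknown customer %r" % inv["customer"])
            for inv in docs["invoices"]["records"]
            if inv["customer"] not in known]

docs = {
    "customers": {"records": [{"id": "C1"}]},
    "invoices": {"records": [{"customer": "C1"}, {"customer": "C9"}]},
}
report = check_consistency(docs, [invoices_reference_customers])
for d in report:  # step 3 is up to the user: edit the documents and re-check
    print("%s: %s: %s" % (d.rule, d.document, d.message))
```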
The current generation of tools and techniques is not always sufficiently equipped to deal
with this problem. Consistency checking is typically integrated tightly into, or hardcoded within, tools, leading to problems with extensibility with respect to new types of documents.
Many tools do not support checks of distributed data, insisting instead on accumulating
everything in a centralized repository. This may not always be possible, due to organisational
or time constraints, and can represent excessive overhead if the only purpose of
integration is to improve data consistency rather than deriving any additional benefit.
This thesis investigates the theoretical background and practical support necessary to
support consistency management of distributed documents. It makes a number of contributions
to the state of the art, and the overall approach is validated in significant case
studies that provide evidence of its practicality and usefulness.
Sixth Goddard Conference on Mass Storage Systems and Technologies Held in Cooperation with the Fifteenth IEEE Symposium on Mass Storage Systems
This document contains copies of those technical papers received in time for publication prior to the Sixth Goddard Conference on Mass Storage Systems and Technologies, which is being held in cooperation with the Fifteenth IEEE Symposium on Mass Storage Systems at the University of Maryland-University College Inn and Conference Center, March 23-26, 1998. As one of an ongoing series, this Conference continues to provide a forum for discussion of issues relevant to the management of large volumes of data. The Conference encourages all interested organizations to discuss long term mass storage requirements and experiences in fielding solutions. Emphasis is on current and future practical solutions addressing issues in data management, storage systems and media, data acquisition, long term retention of data, and data distribution. This year's discussion topics include architecture, tape optimization, new technology, performance, standards, site reports, and vendor solutions. Tutorials will be available on shared file systems, file system backups, data mining, and the dynamics of obsolescence.
CORBA and ODBMSs in Viewpoint Development Environment Architectures
Viewpoints are reflections of software systems from multiple perspectives. A number of consistency conditions apply to viewpoints, and developers require a tool for each type of viewpoint. These tools need to support consistency management. Inter-viewpoint consistency can only be checked when tools are integrated into a viewpoint development environment. We briefly outline the functionality developers require from these environments. We discuss the suitability of abstract syntax graphs as a common viewpoint representation scheme. The main purpose of the paper is to present an object-oriented architecture for viewpoint-based environments. The architecture benefits from the integration of object database management systems and object request brokers. 1 Introduction The production process of modern software systems passes through many stages, such as requirements analysis, architectural design, detailed design, coding, and testing. During these stages, the system is considered from multiple per..