
    Operator Performance Support System (OPSS)

    In the complex, fast-reaction world of military operations, present technologies, combined with tactical situations, have flooded the operator with assorted information that he is expected to process instantly. As technologies progress, this flow of data and information has both guided and overwhelmed the operator. However, the very technologies that confound many operators today can also be used to assist them -- thus the Operator Performance Support System (OPSS). In this paper we propose an operator support station that incorporates the elements of video and image databases, productivity software, interactive computer-based training, hypertext/hypermedia databases, expert programs, and human factors engineering. The Operator Performance Support System will provide the operator with an integrated on-line information/knowledge system that guides expert and novice alike to correct system operation. Although the OPSS is being developed for the Navy, workforce performance is also a major concern in today's competitive industry. The concepts presented in this paper, which address ASW systems software design issues, are directly applicable to industry as well: the OPSS suggests practical ways to align technical knowledge more closely with equipment operator performance.

    Security in heterogeneous interoperable database environments

    The paper deals with the security of interoperable heterogeneous database environments. It contains a general discussion of the issues involved as well as a description of our experiences gained during the development and implementation of the security module of IRO-DB, a European ESPRIT III-funded project with the goal of developing interoperable access between relational and object-oriented databases.
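
    The federation-level access control such a security module performs can be pictured with a small sketch: check a global role against a source's local authorizations before forwarding a query. This is not IRO-DB's actual API; the names below (SourceACL, mayRead) are hypothetical stand-ins for the idea.

        // Hypothetical sketch (not IRO-DB's API): a federation-level check
        // that consults a source's local ACL before a query is forwarded.
        #include <iostream>
        #include <map>
        #include <set>
        #include <string>

        // Per-source access-control list: role -> readable objects.
        struct SourceACL {
            std::map<std::string, std::set<std::string>> readableBy;
        };

        bool mayRead(const SourceACL& acl, const std::string& role,
                     const std::string& object) {
            auto it = acl.readableBy.find(role);
            return it != acl.readableBy.end() && it->second.count(object) > 0;
        }

        int main() {
            SourceACL relationalSource;
            relationalSource.readableBy["analyst"] = {"orders", "customers"};

            // The federation layer checks the global role before delegating.
            std::cout << std::boolalpha
                      << mayRead(relationalSource, "analyst", "orders") << '\n'  // true
                      << mayRead(relationalSource, "guest", "orders") << '\n';   // false
        }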

    Object Persistency for HEP data using an Object-Relational Database

    We present an initial study of the object features of Oracle 9i, the first of the market-leading object-relational database systems to support a true object model on the server side as well as an ODMG-style C++ language binding on the client side. We discuss how these features can be used to provide persistent object storage in the HEP environment.
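
    As a toy illustration of the ODMG-style reference navigation such a client binding offers, the self-contained sketch below defines a minimal d_Ref stand-in and hypothetical HEP classes (Event, RawData). The real ODMG binding, and Oracle's variant of it, are far richer; with a real object-relational back end, dereferencing would trigger a database fetch.

        // Toy illustration of ODMG-style object navigation for HEP data.
        // d_Ref is a minimal local stand-in, not the real binding.
        #include <iostream>
        #include <memory>
        #include <vector>

        template <typename T>
        class d_Ref {                          // stand-in for an ODMG object reference
        public:
            explicit d_Ref(std::shared_ptr<T> p = nullptr) : p_(std::move(p)) {}
            T* operator->() const { return p_.get(); }
            bool isNull() const { return p_ == nullptr; }
        private:
            std::shared_ptr<T> p_;
        };

        struct RawData { std::vector<double> adcCounts; };  // hypothetical HEP payload

        struct Event {                                      // hypothetical event object
            long runNumber = 0;
            long eventNumber = 0;
            d_Ref<RawData> raw;                             // navigable association
        };

        int main() {
            auto raw = std::make_shared<RawData>(RawData{{1.0, 2.5, 3.7}});
            Event ev{42, 1001, d_Ref<RawData>(raw)};

            // Client code navigates references transparently.
            if (!ev.raw.isNull())
                std::cout << "event " << ev.eventNumber << " has "
                          << ev.raw->adcCounts.size() << " ADC samples\n";
        }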

    Building Distributed Systems for the Pragmatic Object Web

    We review the growing power and capability of commodity computing and communication technologies, largely driven by commercial distributed information systems. These systems are built from CORBA, Microsoft's COM, JavaBeans, and rapidly advancing Web approaches. One can abstract these to a three-tier model with largely independent clients connected to a distributed network of servers. The latter host various services, including object and relational databases and, of course, parallel and sequential computing. High performance can be obtained by combining concurrency at the middle server tier with optimized parallel back-end services. The resultant system combines the performance needed for large-scale HPCC applications with the rich functionality of commodity systems. Further, the architecture, with distinct interface, server, and specialized service implementation layers, naturally allows advances in each area to be incorporated easily. We illustrate how performance can be obtained within a commodity architecture, and we propose a middleware integration approach based on JWORB (Java Web Object Broker) multi-protocol server technology. We illustrate our approach on a set of prototype applications in areas such as collaborative systems, support of multidisciplinary interactions, WebFlow-based visual metacomputing, WebFlow over Globus, Quantum Monte Carlo, and distributed interactive simulations.
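
    A key enabler of a multi-protocol server of this kind is that each protocol is self-identifying from the first bytes on the wire: CORBA IIOP messages begin with the four-byte GIOP magic, while HTTP requests begin with a method token. The sketch below shows only this dispatch idea; it is not JWORB's actual code.

        // Sketch of multi-protocol dispatch: inspect the first bytes of an
        // incoming request and route it to the matching protocol handler.
        #include <cstddef>
        #include <cstring>
        #include <iostream>

        enum class Protocol { Http, Iiop, Unknown };

        Protocol classify(const char* buf, std::size_t len) {
            // CORBA IIOP messages carry the GIOP magic in their first 4 bytes.
            if (len >= 4 && std::memcmp(buf, "GIOP", 4) == 0)
                return Protocol::Iiop;
            // HTTP requests start with a method token such as GET or POST.
            static const char* methods[] = {"GET ", "POST", "HEAD", "PUT "};
            for (const char* m : methods)
                if (len >= std::strlen(m) && std::memcmp(buf, m, std::strlen(m)) == 0)
                    return Protocol::Http;
            return Protocol::Unknown;
        }

        int main() {
            const char* req = "GET /index.html HTTP/1.0\r\n";
            const char giop[] = {'G', 'I', 'O', 'P', 1, 0, 0, 0};
            std::cout << (classify(req, std::strlen(req)) == Protocol::Http) << '\n'  // 1
                      << (classify(giop, sizeof giop) == Protocol::Iiop) << '\n';     // 1
        }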

    Scaling Heterogeneous Databases and the Design of Disco

    Access to large numbers of data sources introduces new problems for users of heterogeneous distributed databases. End users and application programmers must deal with unavailable data sources. Database administrators must deal with incorporating new sources into the model. Database implementors must deal with the translation of queries between query languages and schemas. The Distributed Information Search COmponent (Disco) addresses these problems. Query processing semantics are developed to process queries over data sources that do not return answers. Data modeling techniques manage connections to data sources. The component interface to data sources flexibly handles different query languages and translates queries. This paper describes (a) the distributed mediator architecture of Disco, (b) its query processing semantics, (c) the data model and its modeling of data source connections, and (d) the interface to underlying data sources.
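
    The semantics for "queries over data sources that do not return answers" can be sketched as a partial-answer evaluator: gather what is available now and report the rest as pending, rather than failing the whole query. The types below are hypothetical illustrations, not Disco's.

        // Sketch of partial-answer query semantics in the spirit of Disco.
        #include <iostream>
        #include <string>
        #include <vector>

        struct Source {
            std::string name;
            bool available;
            std::vector<std::string> tuples;   // stand-in for real result rows
        };

        struct PartialAnswer {
            std::vector<std::string> tuples;   // what could be computed now
            std::vector<std::string> pending;  // sources to retry later
        };

        PartialAnswer evaluate(const std::vector<Source>& sources) {
            PartialAnswer ans;
            for (const auto& s : sources) {
                if (s.available)
                    ans.tuples.insert(ans.tuples.end(), s.tuples.begin(), s.tuples.end());
                else
                    ans.pending.push_back(s.name);   // do not abort the whole query
            }
            return ans;
        }

        int main() {
            std::vector<Source> srcs = {
                {"oracle_site", true,  {"a", "b"}},
                {"legacy_site", false, {}},
            };
            PartialAnswer pa = evaluate(srcs);
            std::cout << pa.tuples.size() << " tuples, "
                      << pa.pending.size() << " source(s) pending\n";  // 2 tuples, 1 pending
        }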

    Mediators Metadata Management Services: An Implementation Using GOA++ System

    The main contribution of this work is the development of a Metadata Manager to interconnect heterogeneous and autonomous information sources in a flexible, expandable, and transparent way. Interoperability at the semantic level is achieved using an integration layer, structured hierarchically, based on the concept of Mediators. The services of a Mediator Metadata Manager (MMM) are specified and implemented using functions based on the outlines of GOA++. The MMM services are available in the form of a GOA++ API and can be accessed remotely via CORBA or through local API calls. (Sociedad Argentina de Informática e Investigación Operativa)
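
    One way to picture the two access paths (remote via CORBA, or local API calls) is a single abstract interface with interchangeable implementations. The sketch below is hypothetical and far simpler than GOA++; a CORBA client stub could implement the same interface in place of the local catalog.

        // Hypothetical sketch of the dual access paths: one metadata-manager
        // interface, served locally here; a remote CORBA proxy could stand in.
        #include <iostream>
        #include <map>
        #include <string>

        class MetadataManager {                      // interface seen by mediators
        public:
            virtual ~MetadataManager() = default;
            virtual std::string describeSource(const std::string& source) = 0;
        };

        class LocalMetadataManager : public MetadataManager {
        public:
            void registerSource(const std::string& name, const std::string& schema) {
                catalog_[name] = schema;
            }
            std::string describeSource(const std::string& source) override {
                auto it = catalog_.find(source);
                return it != catalog_.end() ? it->second : "<unknown source>";
            }
        private:
            std::map<std::string, std::string> catalog_;
        };

        int main() {
            LocalMetadataManager mmm;
            mmm.registerSource("sales_db", "relation orders(id, total)");
            std::cout << mmm.describeSource("sales_db") << '\n';
        }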

    A generalized system performance model for object-oriented database applications

    Although relational database systems have met many needs in traditional business applications, such technology is inadequate for non-traditional applications such as computer-aided design, computer-aided software engineering, and knowledge bases. Object-oriented database systems (OODB) enhance the data modeling power and performance of database management systems for these applications. Response time is an important issue facing OODB; however, standard measures of on-line transaction processing are irrelevant for OODB. Benchmarks compare alternative implementations of OODB system software running a constant application workload, but few attempts have been made to characterize the performance implications of OODB application design given a fixed OODB and operating system platform. In this study, design features of the OO7 Benchmark database application (Carey, DeWitt, and Naughton, 1993) were varied to explore the impact on response time of database operations. Sensitivity to the degree of aggregation and to the degree of inheritance in the application was measured. Variability in response times was also measured, using a sequence of database operations to simulate a user transaction workload. Degree of aggregation was defined as the number of relationship objects processed during a database operation. Response time was linear in the degree of aggregation; the size of the database segment processed, relative to the size of available memory, affected the coefficients of the regression line. Degree of inheritance was defined as the Number of Children (Chidamber and Kemerer, 1994) in the application class definitions, and as the extent to which run-time polymorphism was implemented. In this study, increased inheritance caused a statistically significant increase in response time only for the OO7 Traversal 1, and the difference was not practically meaningful. In the simulated transaction workload of nine OO7 operations, response times were highly variable. Response time per operation depended on the number of objects processed and on the effect of preceding operations on memory contents. Operations that used disparate physical segments or had working sets large relative to the size of memory caused large increases in response time. Average response times and variability were reduced by removing these operations from the sequence (equivalent to scheduling these transactions at a time when their impact would be minimized).
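
    The aggregation finding amounts to a simple linear response-time model. The notation below is ours for illustration, not the paper's (d: degree of aggregation, s: size of the database segment processed, m: available memory):

        \[
          T(d) \;=\; \beta_0 + \beta_1\, d,
          \qquad (\beta_0, \beta_1) \;=\; f\!\left(\frac{s}{m}\right)
        \]

    That is, response time T grows linearly in d, while the regression coefficients shift with how much of the processed segment fits in memory.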

    Current Metadata Process Practices

    An effective and easily maintainable information organization system is vital to the success of any enterprise, be it in the public or private sector, and the proper use of metadata is a key means to this end. Applying metadata, however, is not a static task. Rather, it is a process by which metadata is created and manipulated by any number of people, using a variety of tools, at different stages of the lifecycle of a digital object. Information on these components of the metadata process was gathered via a survey of metadata practitioners. The study found that the metadata process is quite complex, with substantial variability in all components. A large variety of software applications are used for manipulating metadata; this result, and the fact that in-house-developed tools were most common, suggest there is no single turn-key solution for the metadata process. In conclusion, careful planning of the metadata process, including consideration of all sources of an enterprise's information, is essential to its successful implementation.

    J-Schemas Integrator: a tool for the integration of heterogeneous database schemas

    Advisor: Marcos Sfair Sunye. Master's dissertation, Universidade Federal do Paraná, Setor de Ciências Exatas. Abstract: In today's organizations it is common to find several different databases being used to carry out operational data management functions. These numerous heterogeneous database systems were designed to run in isolation and do not cooperate with each other. Providing interoperability among these databases is important to the successful operation of the organization: productivity gains will be obtained if the systems can be integrated - that is, made to cooperate with each other - to support global applications accessing multiple databases. Heterogeneous database schema integration can be defined as the process that, given a set of database schemas as input, produces as output a unified description of the initial schemas, called the integrated schema, together with the mapping information that supports access, from the integrated schema, to the data stored under the initial schemas. This dissertation consists of the implementation of a visual tool whose goal is to assist and simplify this process: the tool imports schemas from databases, facilitates the identification of conflicting objects between schemas, and carries out the integration, generating the integrated schema and the mapping information between the integrated schema and the initial schemas.
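
    The process definition above (schemas in; integrated schema plus mapping out) can be sketched with a naive name-based merge, where objects with equal names are assumed equivalent; a real tool, like the one described, instead lets the user identify and resolve conflicts. All types below are hypothetical.

        // Hypothetical sketch of schema integration inputs and outputs:
        // source schemas in; a unified schema plus per-object mappings out.
        #include <iostream>
        #include <map>
        #include <set>
        #include <string>
        #include <vector>

        struct Schema {
            std::string name;
            std::set<std::string> objects;   // table/class names, simplified
        };

        struct IntegrationResult {
            std::set<std::string> integratedObjects;                  // unified schema
            std::map<std::string, std::vector<std::string>> mapping;  // object -> sources
        };

        // Naive merge: identically named objects are assumed equivalent.
        IntegrationResult integrate(const std::vector<Schema>& schemas) {
            IntegrationResult r;
            for (const auto& s : schemas)
                for (const auto& obj : s.objects) {
                    r.integratedObjects.insert(obj);
                    r.mapping[obj].push_back(s.name);
                }
            return r;
        }

        int main() {
            IntegrationResult r = integrate({
                {"hr_db",  {"employee", "department"}},
                {"legacy", {"employee", "payroll"}},
            });
            std::cout << r.integratedObjects.size() << " integrated objects; "
                      << "employee maps to " << r.mapping["employee"].size()
                      << " sources\n";   // 3 integrated objects; employee maps to 2 sources
        }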