
    Mediators Metadata Management Services: An Implementation Using GOA++ System

    Get PDF
    The main contribution of this work is the development of a Metadata Manager to interconnect heterogeneous and autonomous information sources in a flexible, expandable and transparent way. Interoperability at the semantic level is achieved through an integration layer, structured hierarchically and based on the concept of Mediators. The services of the Mediator Metadata Manager (MMM) are specified and implemented using functions based on the Outlines of GOA++. The MMM services are available in the form of a GOA++ API and can be accessed remotely via CORBA or through local API calls. Sociedad Argentina de Informática e Investigación Operativa
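    A minimal sketch of what such a metadata-service interface might look like, with all names invented for illustration (the actual GOA++ API and its Outlines-based functions are not shown in the abstract):

    // Hypothetical Java sketch of a mediator metadata-service interface in the
    // spirit of the MMM; names are illustrative, not the actual GOA++ API.
    public interface MediatorMetadataManager {
        // Register an autonomous source's exported schema with the integration layer.
        void registerSource(String sourceId, String exportedSchema);

        // Return the integrated (mediated) schema maintained by this mediator.
        String getMediatedSchema();

        // Map a term of the mediated schema to its representation at one source;
        // this is where semantic-level interoperability is resolved.
        String resolveTerm(String mediatedTerm, String sourceId);
    }

    In the system described, equivalent services would be reachable either through local API calls or remotely through a CORBA binding of such an interface.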

    Databases in High Energy Physics: a critical review

    Get PDF
    The year 2000 is marked by a plethora of significant milestones in the history of High Energy Physics. Not only the true numerical end to the second millennium, this watershed year saw the final run of CERN's Large Electron-Positron collider (LEP) - the world-class machine that had been the focus of the lives of many of us for such a long time. It is also closely related to the subject of this chapter in the following respects:
    - Classified as a nuclear installation, information on the LEP machine must be retained indefinitely. This represents a challenge to the database community that is almost beyond discussion: archiving data for a relatively small number of years is indeed feasible, but retaining it for centuries, millennia or more is a very different issue;
    - There are strong scientific arguments as to why the data from the LEP machine should be retained for a short period. However, the complexity of the data itself, the associated metadata and the programs that manipulate it make even this a huge challenge;
    - The story of databases in HEP is closely linked to that of LEP itself: what were the basic requirements identified in the early years of LEP preparation? How well have they been satisfied? What are the remaining issues and key messages?
    - Finally, the year 2000 also marked the entry of Grid architectures onto the central stage of HEP computing. How has the Grid affected the requirements on databases or the manner in which they are deployed? Furthermore, as the LEP tunnel and even parts of the detectors that it housed are readied for re-use for the Large Hadron Collider (LHC), how have our requirements on databases evolved at this new scale of computing?
    A number of the key players in the field of databases - as can be seen from the author lists of the various publications - have since retired from the field or else this world. Given the fallibility of human memory, a record of the use of databases for physics data processing is clearly needed before memories fade completely and the story is lost forever. The account is necessarily somewhat CERN-centric, although an effort has been made to cover important developments and events elsewhere. Frequent reference is made to the Computing in High Energy Physics (CHEP) conference series - the most accessible and consistent record of this field.

    An extensible view system for supporting the integration and interoperation of heterogeneous, autonomous, and distributed database management systems

    Get PDF
    In this thesis the problem of integrating heterogeneous, autonomous and distributed database management systems (DBMSs) is addressed. To provide a solution, we have developed an approach, a design method, and a view system. Our approach is based on the invention of abstract view constructs that have uniform and stable representations for supporting semantic relativism and distributed abstraction modeling. Our design method applies object-oriented techniques and software engineering concepts to manage the system complexity. Our view system has been constructed upon established experience with the development of large-scale distributed systems in a distributed object infrastructure provided by the Common Object Request Broker Architecture (CORBA). The scope of our research identifies the goals of Project Zeus, in which we have created the Zeus View Mechanism (ZVM) as the theoretical foundation of our approach. The notion of frameworks has been introduced as part of our design methodology to promote code/design reuse and enhance the portability/extensibility of the architectural design. A multidatabase system, the Zeus Multidatabase System (ZMS), has provided a test bed for our concept. Project Zeus has exciting prospects. The foundation established in this research has created new directions in multidatabase research and will have a significant impact on future integration and interoperation technologies.
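    As a rough illustration of the "uniform and stable representation" idea, the sketch below models view constructs whose concrete variants all expose one materialization interface, so higher layers can compose views without knowing the member DBMSs. The classes are invented for illustration and are not the ZVM design itself:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;

    // Every view construct, however it is derived, presents the same interface.
    abstract class AbstractView {
        final String name;
        AbstractView(String name) { this.name = name; }
        abstract List<Map<String, Object>> materialize();
    }

    // A derived view merging two member-DBMS views under one schema.
    class UnionView extends AbstractView {
        private final AbstractView left, right;
        UnionView(String name, AbstractView left, AbstractView right) {
            super(name);
            this.left = left;
            this.right = right;
        }
        @Override
        List<Map<String, Object>> materialize() {
            List<Map<String, Object>> rows = new ArrayList<>(left.materialize());
            rows.addAll(right.materialize());  // naive merge; real systems reconcile semantics
            return rows;
        }
    }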

    Advances in Database Technology

    Get PDF

    An object query language for multimedia federations

    Get PDF
    The Fischlar system provides a large centralised repository of multimedia files. As expansion is difficult in centralised systems and as different user groups have a requirement to define their own schemas, the EGTV (Efficient Global Transactions for Video) project was established to examine how the distribution of this database could be managed. The federated database approach is advocated, where the global schema is designed in a top-down approach, while all multimedia and textual data is stored in object-oriented (O-O) and object-relational (O-R) compliant databases. This thesis investigates queries and updates on large multimedia collections organised in a database federation. The goal of this research is to provide a generic query language capable of interrogating global and local multimedia database schemas. Therefore, a new query language, EQL, is defined to facilitate the querying of object-oriented and object-relational database schemas in a database- and platform-independent manner, and acts as a canonical language for database federations. A new canonical language was required as the existing query language standards (SQL:1999 and OQL) are generally incompatible and translation between them is not trivial. EQL is supported by a formally defined object algebra and specified semantics for query evaluation. The ability to capture and store metadata of multiple database schemas is essential when constructing and querying a federated schema. Therefore we also present a new platform-independent metamodel for specifying multimedia schemas stored in both object-oriented and object-relational databases. This metadata information is later used for the construction of global schemas, and during the evaluation of local and global queries. Another important feature of any federated system is the ability to unambiguously define database schemas. The schema definition language for an EGTV database federation must be capable of specifying both object-oriented and object-relational schemas in a database-independent format. As XML represents a standard for encoding and distributing data across various platforms, a language based upon XML has been developed as part of our research. The ODLx (Object Definition Language XML) language specifies a set of XML-based structures for defining complex database schemas capable of representing different multimedia types. The language is fully integrated with the EGTV metamodel, through which ODLx schemas can be mapped to O-O and O-R databases.
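    The canonical-language idea can be pictured with a toy example: one query representation rendered into either backend dialect. This is a sketch under assumed names (CanonicalQuery, a Videos extent), not the actual EQL grammar or its object algebra:

    // A tiny canonical query that can be rendered as SQL:1999 or as OQL.
    public class CanonicalQuery {
        private final String target;     // class extent / table being queried
        private final String predicate;  // boolean condition over attributes

        public CanonicalQuery(String target, String predicate) {
            this.target = target;
            this.predicate = predicate;
        }

        // Rendering for an object-relational (SQL:1999) backend.
        public String toSql() {
            return "SELECT * FROM " + target + " v WHERE " + predicate;
        }

        // Rendering for an object-oriented (OQL) backend.
        public String toOql() {
            return "select v from v in " + target + " where " + predicate;
        }

        public static void main(String[] args) {
            CanonicalQuery q = new CanonicalQuery("Videos", "v.duration > 3600");
            System.out.println(q.toSql());
            System.out.println(q.toOql());
        }
    }

    Real translation is considerably harder than this string assembly suggests (path expressions, methods, and collection semantics differ between the standards), which is the incompatibility the abstract refers to.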

    An approach to building a secure and persistent distributed object management system

    Full text link
    The Common Object Request Broker Architecture (CORBA) proposed by the Object Management Group (OMG) is a widely accepted standard that provides a system-level framework for the design and implementation of distributed objects. The core of the Object Management Architecture (OMA) is the Object Request Broker (ORB), which provides transparency of object location, activation, and communication. However, the specification provided by the OMG is not sufficient. For instance, there are no security specifications for handling object requests through the ORBs. The lack of such a security service prevents the use of CORBA for handling sensitive data such as personal and corporate financial information. In view of the above, this thesis identifies, explores, and provides an approach to handling secure objects in a distributed environment, along with a persistent object service, using the CORBA specification. The research specifically involves the design and implementation of a secured distributed object service. This object service requires a persistent service and object storage for storing and retrieving security-specific information. To provide a secure distributed object environment, a secure object service using the specifications provided by the OMG has been designed and implemented. In addition, to preserve the persistence of secure information, an object service has been implemented to provide a persistent data store. The secure object service can provide a framework for handling distributed objects in applications requiring security clearance, such as distributed banking, online stock trading, internet shopping, and geographic and medical information systems.
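    A conceptual sketch of the clearance check such a secure object service performs before a request reaches the target object. This is plain Java with invented names, not the OMG Security Service or any real ORB API; the ACL would be held in the persistent store the thesis describes:

    import java.util.Map;
    import java.util.Set;
    import java.util.function.BiFunction;

    // Guards a distributed object: a request is dispatched only when the
    // calling principal holds clearance for the requested operation.
    class SecureObjectGuard {
        private final BiFunction<String, Object[], Object> target;  // the protected object
        private final Map<String, Set<String>> aclByOperation;      // operation -> cleared principals

        SecureObjectGuard(BiFunction<String, Object[], Object> target,
                          Map<String, Set<String>> aclByOperation) {
            this.target = target;
            this.aclByOperation = aclByOperation;
        }

        Object invoke(String principal, String operation, Object[] args) {
            Set<String> allowed = aclByOperation.get(operation);
            if (allowed == null || !allowed.contains(principal)) {
                throw new SecurityException(principal + " lacks clearance for " + operation);
            }
            return target.apply(operation, args);  // forward to the real object
        }
    }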

    Critical evaluation of the JDO API for the persistence and portability requirements of complex biological databases

    Get PDF
    BACKGROUND: Complex biological database systems have become key computational tools used daily by scientists and researchers. Many of these systems must be capable of executing on multiple different hardware and software configurations and are also often made available to users via the Internet. We have used the Java Data Objects (JDO) persistence technology to develop the database layer of such a system, known as the SigPath information management system. SigPath is an example of a complex biological database that needs to store various types of information connected by many relationships. RESULTS: Using this system as an example, we perform a critical evaluation of current JDO technology and discuss the suitability of the JDO standard for achieving portability, scalability and performance. We show that JDO supports portability of the SigPath system from a relational database backend to an object database backend and achieves acceptable scalability. To answer the performance question, we have created the SigPath JDO application benchmark, which we distribute under the GNU General Public License. This benchmark can be used as an example of using JDO technology to create a complex biological database and makes it possible for vendors and users of the technology to evaluate the performance of other JDO implementations for similar applications. CONCLUSIONS: The SigPath JDO benchmark and our discussion of JDO technology in the context of biological databases will be useful to bioinformaticians who design new complex biological databases and aim to create systems that can be ported easily to a variety of database backends.
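    For readers unfamiliar with JDO, the shape of the API at issue looks roughly like this. Pathway is a hypothetical persistent class (SigPath's real classes are not shown in the abstract), and jdo.properties is assumed to name the chosen backend's JDO implementation; the backend swap the paper evaluates happens in that file, not in the code:

    import java.util.Collection;
    import java.util.Properties;
    import javax.jdo.JDOHelper;
    import javax.jdo.PersistenceManager;
    import javax.jdo.PersistenceManagerFactory;
    import javax.jdo.Query;
    import javax.jdo.Transaction;

    public class JdoExample {
        public static void main(String[] args) throws Exception {
            // The properties file selects the JDO implementation and datastore.
            Properties props = new Properties();
            props.load(JdoExample.class.getResourceAsStream("/jdo.properties"));
            PersistenceManagerFactory pmf = JDOHelper.getPersistenceManagerFactory(props);
            PersistenceManager pm = pmf.getPersistenceManager();
            Transaction tx = pm.currentTransaction();
            try {
                tx.begin();
                pm.makePersistent(new Pathway("MAPK signaling"));  // store an object
                // Query by a field; the same code runs against either backend.
                Query q = pm.newQuery(Pathway.class, "name == n");
                q.declareParameters("String n");
                Collection results = (Collection) q.execute("MAPK signaling");
                System.out.println(results.size() + " match(es)");
                tx.commit();
            } finally {
                if (tx.isActive()) tx.rollback();
                pm.close();
            }
        }
    }

    // Hypothetical persistent class; in practice it would carry JDO metadata
    // and be processed by the implementation's bytecode enhancer.
    class Pathway {
        private String name;
        Pathway(String name) { this.name = name; }
    }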

    The mediated data integration (MeDInt): An approach to the integration of database and legacy systems

    Get PDF
    The information required for decision making by executives in organisations is normally scattered across disparate data sources, including databases and legacy systems. To gain a competitive advantage, it is extremely important for executives to be able to obtain a single unified view of information in an accurate and timely manner. To do this, it is necessary to interoperate multiple data sources, which differ structurally and semantically. Particular problems occur when applying traditional integration approaches; for example, the global schema needs to be recreated whenever a component schema is modified. This research investigates the following heterogeneities between heterogeneous data sources: Data Model Heterogeneities, Schematic Heterogeneities and Semantic Heterogeneities. The problems of existing integration approaches are reviewed and solved by introducing and designing a new integration approach to logically interoperate heterogeneous data sources and to resolve the three previously classified heterogeneities. The research attempts to reduce the complexity of the integration process by maximising the degree of automation. Mediation and wrapping techniques are employed in this research. The Mediated Data Integration (MeDint) architecture has been introduced to integrate heterogeneous data sources. Three major elements, the MeDint Mediator, wrappers, and the Mediated Data Model (MDM), play important roles in the integration of heterogeneous data sources. The MeDint Mediator acts as an intermediate layer, transforming queries into sub-queries, resolving conflicts, and consolidating conflict-resolved results. Wrappers serve as translators between the MeDint Mediator and the data sources. Both the mediator and the wrappers are well supported by the MDM, a semantically rich data model which can describe or represent heterogeneous data schematically and semantically. Some organisational information systems have been tested and evaluated using the MeDint architecture. The results have addressed all the research questions regarding the interoperability of heterogeneous data sources. In addition, the results confirm that the MeDint architecture is able to provide integration that is transparent to users and that schema evolution does not affect the integration.
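    The division of labour between mediator and wrappers can be sketched as follows; the interfaces and names are invented for illustration and are not the actual MeDint API:

    import java.util.ArrayList;
    import java.util.List;

    // A wrapper translates between the mediator's data model and one source.
    interface Wrapper {
        List<String> execute(String subQuery);  // results expressed in the mediated data model
    }

    class MedintStyleMediator {
        private final List<Wrapper> wrappers = new ArrayList<>();
        void register(Wrapper w) { wrappers.add(w); }

        // Decompose the global query, ship sub-queries to wrappers, consolidate.
        List<String> query(String globalQuery) {
            List<String> consolidated = new ArrayList<>();
            for (Wrapper w : wrappers) {
                String subQuery = rewriteFor(w, globalQuery);  // schema/semantic mapping
                consolidated.addAll(w.execute(subQuery));
            }
            return consolidated;  // conflict resolution and merging would happen here
        }

        private String rewriteFor(Wrapper w, String q) {
            return q;  // placeholder: real rewriting uses the MDM's schema mappings
        }
    }

    Because the wrappers absorb source-level change, a component schema can evolve behind its wrapper without the global layer being rebuilt, which is the transparency claim the abstract makes.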