
    Distribution of the Object Oriented Databases. A Viewpoint of the MVDB Model's Methodology and Architecture

    In databases, much work has been done on extending models with advanced tools such as view technology, schema evolution support, multiple classification, role modeling, and viewpoints. Over the past years, most research dealing with multiple object representation and evolution has proposed to enrich the monolithic vision of the classical object approach, in which an object belongs to a single class hierarchy. In particular, integrating the viewpoint mechanism into the conventional object-oriented data model gives it flexibility and improves the modeling power of objects. The viewpoint paradigm refers to the multiple descriptions, the distribution, and the evolution of objects. It can also make a significant contribution to the distributed design of complex databases. The motivation of this paper is to define an object data model integrating viewpoints in databases and to present a federated database architecture integrating multiple viewpoint sources following a local-as-extended-view (LAEV) data integration approach.
    Keywords: object-oriented data model, OQL language, LAEV data integration approach, MVDB model, federated databases, local-as-view strategy.
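
    To make the viewpoint idea concrete, the following sketch (illustrative Python only; the MVDB model itself is defined over an object-oriented data model with OQL, and all names here are hypothetical) shows a single object carrying several partial descriptions, each owned by a different viewpoint:

        class Viewpoint:
            """A named partial description of an object (e.g. 'design', 'sales')."""
            def __init__(self, name, **attributes):
                self.name = name
                self.attributes = attributes

        class MultiViewObject:
            """An object whose state is distributed over several viewpoints
            instead of living in a single monolithic class hierarchy."""
            def __init__(self, oid):
                self.oid = oid
                self.viewpoints = {}

            def add_viewpoint(self, vp):
                self.viewpoints[vp.name] = vp            # multiple descriptions coexist

            def as_seen_by(self, name):
                return self.viewpoints[name].attributes  # one representation of the object

        # The same part described differently by two viewpoints:
        part = MultiViewObject(oid=42)
        part.add_viewpoint(Viewpoint("design", material="steel", cad_ref="P-42"))
        part.add_viewpoint(Viewpoint("sales", price=19.99, stock=120))
        print(part.as_seen_by("design"))   # {'material': 'steel', 'cad_ref': 'P-42'}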

    Environmental modeling and recognition for an autonomous land vehicle

    An architecture for object modeling and recognition for an autonomous land vehicle is presented. Examples of objects of interest include terrain features, fields, roads, horizon features, and trees. The architecture is organized around a set of databases for generic object models and perceptual structures, a temporary memory for the instantiation of object and relational hypotheses, and a long-term memory for storing stable hypotheses that are affixed to the terrain representation. Multiple inference processes operate over these databases. The researchers describe the following components: the perceptual structure database, the grouping processes that operate over it, schemas, and the long-term terrain database. A processing example is given that matches predictions from the long-term terrain model to imagery, extracts significant perceptual structures for consideration as potential landmarks, and extracts a relational structure to update the long-term terrain database.
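
    One plausible reading of the memory organization described above is sketched below (hypothetical Python; the original system was not implemented this way, and every name is illustrative): hypotheses accumulate in temporary memory and are promoted to the long-term terrain representation once they are stable.

        from dataclasses import dataclass

        @dataclass
        class Hypothesis:
            """An instantiated object or relational hypothesis with a support score."""
            label: str            # e.g. "road", "tree line", "horizon feature"
            evidence: list        # perceptual structures supporting the hypothesis
            confidence: float

        class RecognitionMemory:
            """Generic models feed hypotheses into temporary memory; stable
            hypotheses are affixed to the long-term terrain representation."""
            def __init__(self, stability_threshold=0.8):
                self.generic_models = {}       # database of generic object models
                self.temporary = []            # short-lived hypotheses
                self.long_term_terrain = []    # stable, terrain-anchored hypotheses
                self.threshold = stability_threshold

            def propose(self, hyp):
                self.temporary.append(hyp)

            def consolidate(self):
                # Promote hypotheses that have accumulated enough support.
                stable = [h for h in self.temporary if h.confidence >= self.threshold]
                self.long_term_terrain.extend(stable)
                self.temporary = [h for h in self.temporary if h.confidence < self.threshold]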

    NOSQL design for analytical workloads: Variability matters

    Big Data has recently gained popularity and has strongly questioned relational databases as universal storage systems, especially in the presence of analytical workloads. As a result, co-relational alternatives, commonly known as NOSQL (Not Only SQL) databases, are extensively used for Big Data. As the primary focus of NOSQL is on performance, NOSQL databases are designed directly at the physical level, and consequently the resulting schema is tailored to the dataset and access patterns of the problem at hand. However, we believe that NOSQL design can also benefit from traditional design approaches. In this paper we present a method to design databases for analytical workloads. Starting from the conceptual model and adopting the classical three-phase design used for relational databases, we propose a novel design method considering the new features brought by NOSQL and encompassing relational and co-relational design altogether.
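
    As a toy illustration of designing at the physical level for a known access pattern (not taken from the paper; names and data are invented), the sketch below denormalizes a conceptual one-to-many relationship into nested documents, because the analytical workload always reads orders grouped by customer:

        # Conceptual level: two entity sets linked by a one-to-many relationship.
        customers = [{"id": 1, "name": "Acme"}]
        orders = [{"id": 10, "customer_id": 1, "total": 250.0},
                  {"id": 11, "customer_id": 1, "total": 99.5}]

        def to_document_schema(customers, orders):
            """Physical design tailored to a workload that always reads a
            customer together with all of its orders: nest the orders inside
            the customer document rather than keeping them normalized."""
            by_customer = {}
            for o in orders:
                by_customer.setdefault(o["customer_id"], []).append(
                    {"id": o["id"], "total": o["total"]})
            return [{**c, "orders": by_customer.get(c["id"], [])} for c in customers]

        print(to_document_schema(customers, orders))
        # [{'id': 1, 'name': 'Acme', 'orders': [{'id': 10, ...}, {'id': 11, ...}]}]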

    Data integration through service-based mediation for web-enabled information systems

    The Web and its underlying platform technologies have often been used to integrate existing software and information systems. Traditional techniques for data representation and for transformations between documents are not sufficient to support a flexible and maintainable data integration solution that meets the requirements of modern, complex Web-enabled software and information systems. The difficulty arises from the high degree of complexity of data structures, for example in business and technology applications, and from the constant change of data and its representation. In the Web context, where the Web platform is used to integrate different organisations or software systems, the problem of heterogeneity arises in addition. We introduce a specific data integration solution for Web applications such as Web-enabled information systems. Our contribution is an integration technology framework for Web-enabled information systems comprising, firstly, a data integration technique based on the declarative specification of transformation rules and the construction of connectors that handle the integration and, secondly, a mediator architecture based on information services and the constructed connectors to handle the integration process.
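
    A minimal sketch of the two ingredients named above, declarative transformation rules and a connector that applies them, might look as follows (illustrative Python; the paper's framework is far richer and these names are invented):

        from dataclasses import dataclass
        from typing import Callable

        @dataclass
        class TransformationRule:
            """A declarative mapping from a source field to a target field."""
            source_path: str
            target_path: str
            convert: Callable = lambda v: v

        class Connector:
            """Applies a rule set to translate one representation into another;
            a mediator would compose such connectors to answer integrated queries."""
            def __init__(self, rules):
                self.rules = rules

            def translate(self, record):
                out = {}
                for rule in self.rules:
                    if rule.source_path in record:
                        out[rule.target_path] = rule.convert(record[rule.source_path])
                return out

        # One source reports prices in cents; the integrated view expects euros.
        rules = [TransformationRule("title", "name"),
                 TransformationRule("price_cents", "price_eur", lambda c: c / 100)]
        print(Connector(rules).translate({"title": "Widget", "price_cents": 995}))
        # {'name': 'Widget', 'price_eur': 9.95}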

    SBML models and MathSBML

    MathSBML is an open-source, freely downloadable Mathematica package that facilitates working with Systems Biology Markup Language (SBML) models. SBML is a tool-neutral, computer-readable format for representing models of biochemical reaction networks, applicable to metabolic networks, cell-signaling pathways, genomic regulatory networks, and other modeling problems in systems biology, and is widely supported by the systems biology community. SBML is based on XML, a standard medium for representing and transporting data that is widely supported on the internet as well as in computational biology and bioinformatics. Because SBML is tool-independent, it enables model transportability, reuse, publication, and survival. In addition to MathSBML, a number of other tools that support SBML model examination and manipulation are provided on the sbml.org website, including libSBML, a C/C++ library for reading SBML models; an SBML Toolbox for MATLAB; file conversion programs; an SBML model validator and visualizer; and SBML specifications and schemas. MathSBML enables SBML file import to and export from Mathematica, as well as providing an API for model manipulation and simulation.
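
    As a small taste of the libSBML library mentioned above, its Python bindings can read and check a model roughly like this (a minimal sketch assuming the python-libsbml package; the paper itself focuses on the Mathematica interface):

        import libsbml  # Python bindings of libSBML (pip install python-libsbml)

        # A tiny SBML Level 2 model with one compartment and one species.
        sbml = """<?xml version="1.0" encoding="UTF-8"?>
        <sbml xmlns="http://www.sbml.org/sbml/level2" level="2" version="1">
          <model id="example">
            <listOfCompartments><compartment id="cell"/></listOfCompartments>
            <listOfSpecies>
              <species id="ATP" compartment="cell" initialAmount="1"/>
            </listOfSpecies>
          </model>
        </sbml>"""

        doc = libsbml.readSBMLFromString(sbml)
        if doc.getNumErrors() > 0:      # libSBML doubles as a validator
            doc.printErrors()
        else:
            model = doc.getModel()
            print(model.getId(), model.getNumSpecies())   # example 1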

    Variadic genericity through linguistic reflection: a performance evaluation

    This work is partially supported by the EPSRC through Grant GR/L32699 “Compliant System Architecture” and by ESPRIT through Working Group EP22552 “PASTEL”.
    The use of variadic genericity within schema definitions increases the variety of databases that may be captured by a single specification. For example, a class of databases of engineering part objects, in which each database instance varies in the types of the parts and the number of part types, should lend itself to a single definition. However, precise specification of such a schema is beyond the capability of polymorphic type systems and schema definition languages. It is possible to capture such generality by introducing a level of interpretation, in which the variation in types and in the number of fields is encoded in a general data structure. Queries that interpret the encoded information can be written against this general data structure. An alternative approach to supporting such variadic genericity is to generate a precise database containing tailored data structures and queries for each different instance of the virtual schema. This involves source code generation and dynamic compilation, a process known as linguistic reflection. The motivation is that, once generated, the specific queries may execute more efficiently than their generic counterparts, since the generic code is “compiled away”. This paper compares the two approaches and gives performance measurements for an example using the persistent languages Napier88 and PJama.
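
    The contrast between interpreting an encoded schema and generating tailored code can be imitated in a few lines of Python (an illustrative sketch only; the paper's measurements use the persistent languages Napier88 and PJama, not Python):

        # Generic approach: every access interprets the encoded field layout.
        def generic_total(record, fields):
            return sum(record[f] for f in fields)

        # Reflective approach: generate source tailored to this exact schema
        # instance and compile it at run time, "compiling away" the interpretation.
        def generate_total(fields):
            body = " + ".join(f"record[{f!r}]" for f in fields)
            src = f"def tailored_total(record):\n    return {body}\n"
            namespace = {}
            exec(compile(src, "<generated>", "exec"), namespace)
            return namespace["tailored_total"]

        record = {"bolts": 3, "nuts": 5, "washers": 2}
        fields = list(record)
        tailored_total = generate_total(fields)
        assert tailored_total(record) == generic_total(record, fields) == 10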

    Grid infrastructures for secure access to and use of bioinformatics data: experiences from the BRIDGES project

    The BRIDGES project was funded by the UK Department of Trade and Industry (DTI) to address the needs of cardiovascular research scientists investigating the genetic causes of hypertension as part of the Wellcome Trust funded (£4.34M) cardiovascular functional genomics (CFG) project. Security was at the heart of the BRIDGES project, and an advanced data and compute grid infrastructure incorporating the latest grid authorisation technologies was developed and delivered to the scientists. We outline these grid infrastructures and describe the security requirements perceived at the start of the project, including data classifications, and how these evolved throughout the lifetime of the project. The uptake and adoption of the project results are also presented, along with the challenges that must be overcome to support the secure exchange of life science data sets. We also describe how we will use the BRIDGES experiences in future projects at the National e-Science Centre.