
    An object query language for multimedia federations

    The Fischlar system provides a large centralised repository of multimedia files. As expansion is difficult in centralised systems and as different user groups need to define their own schemas, the EGTV (Efficient Global Transactions for Video) project was established to examine how the distribution of this database could be managed. The federated database approach is advocated, in which the global schema is designed top-down while all multimedia and textual data is stored in object-oriented (O-O) and object-relational (O-R) compliant databases. This thesis investigates queries and updates on large multimedia collections organised in a database federation. The goal of this research is to provide a generic query language capable of interrogating global and local multimedia database schemas. Therefore, a new query language, EQL, is defined to facilitate the querying of object-oriented and object-relational database schemas in a database- and platform-independent manner, and acts as the canonical language for database federations. A new canonical language was required because the existing query language standards (SQL:1999 and OQL) are generally incompatible and translation between them is not trivial. EQL is supported by a formally defined object algebra and specified semantics for query evaluation. The ability to capture and store metadata of multiple database schemas is essential when constructing and querying a federated schema. Therefore, we also present a new platform-independent metamodel for specifying multimedia schemas stored in both object-oriented and object-relational databases. This metadata is later used for the construction of a global schema, and during the evaluation of local and global queries. Another important feature of any federated system is the ability to unambiguously define database schemas.
The schema definition language for an EGTV database federation must be capable of specifying both object-oriented and object-relational schemas in a database-independent format. As XML is a standard for encoding and distributing data across various platforms, a language based upon XML has been developed as part of our research. The ODLx (Object Definition Language XML) language specifies a set of XML-based structures for defining complex database schemas capable of representing different multimedia types. The language is fully integrated with the EGTV metamodel, through which ODLx schemas can be mapped to O-O and O-R databases.
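The role of a canonical language sitting between SQL:1999 and OQL can be illustrated with a toy translator. The canonical query shape and both rendering rules below are hypothetical simplifications, not the actual EQL grammar; they only show why one intermediate form can target two incompatible query languages.

```python
# Toy sketch: one canonical query rendered to two target languages.
# The dict shape and render rules are invented for illustration only.

def render_sql(q):
    # SQL:1999-style rendering: columns over a table
    return f"SELECT {', '.join(q['project'])} FROM {q['extent']} WHERE {q['filter']}"

def render_oql(q):
    # OQL-style rendering: explicit iterator variable over an extent
    return (f"SELECT {', '.join('v.' + p for p in q['project'])} "
            f"FROM v IN {q['extent']} WHERE v.{q['filter']}")

canonical = {"project": ["title"], "extent": "Videos", "filter": "duration > 60"}
print(render_sql(canonical))  # SELECT title FROM Videos WHERE duration > 60
print(render_oql(canonical))  # SELECT v.title FROM v IN Videos WHERE v.duration > 60
```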

    Security in heterogeneous interoperable database environments

    The paper deals with the security of interoperable heterogeneous database environments. It contains a general discussion of the issues involved, as well as a description of our experiences gained during the development and implementation of the security module of IRO-DB, a European ESPRIT III funded project with the goal of developing interoperable access between relational and object-oriented databases.

    Scaling Heterogeneous Databases and the Design of Disco

    Access to large numbers of data sources introduces new problems for users of heterogeneous distributed databases. End users and application programmers must deal with unavailable data sources. Database administrators must deal with incorporating new sources into the model. Database implementors must deal with the translation of queries between query languages and schemas. The Distributed Information Search COmponent (Disco) addresses these problems. Query processing semantics are developed to process queries over data sources that do not return answers. Data modeling techniques manage connections to data sources. The component interface to data sources flexibly handles different query languages and translates queries. This paper describes (a) the distributed mediator architecture of Disco, (b) its query processing semantics, (c) the data model and its modeling of data source connections, and (d) the interface to underlying data sources.

    Towards an effective processing of XML keyword query

    Ph.D. thesis (Doctor of Philosophy)

    Data processing/display design for the space shuttle/spacelab Electromagnetic Environment Experiment (EEE)

    Methods for data analysis, data compression (including universal coding), storage and retrieval on random access storage devices, and display were developed and implemented on the GSFC Interdata computer. The original 64-bit-per-frequency-band representation was reduced to 10 bits through source coding/universal coding, a compression ratio of 6.4, prior to storage. Rapid encoding/decoding was achieved by the algorithms used, so that rapid random access is retained.
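The figures quoted above fit together directly: 64 bits per frequency band reduced to 10 bits gives the stated 6.4 compression ratio. The Elias gamma code below is one example of a universal code; the abstract does not name the specific code used, so it serves only as an illustration of the idea.

```python
# Compression ratio quoted in the abstract: 64 bits reduced to 10.
original_bits, coded_bits = 64, 10
print(original_bits / coded_bits)  # 6.4

# Universal codes assign shorter codewords to smaller values without
# knowing the source statistics in advance. Elias gamma is one such
# code (an illustrative choice, not necessarily the one used here).
def elias_gamma(n: int) -> str:
    b = format(n, "b")              # binary representation of n >= 1
    return "0" * (len(b) - 1) + b   # len-1 leading zeros, then n itself

print(elias_gamma(1), elias_gamma(6))  # 1 00110
```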

    Query processing and optimisation strategy in a federated database manager

    Based on BLOOM (BarceLona Object-Oriented Model), we propose a global query processing strategy for the Federated Query Manager. First, a tree is constructed whose nodes initially represent explicit joins between federated classes; these are decomposed into implicit joins between classes in the component schemas. Then, different heuristic techniques are applied to optimise the decomposition process, generating one or more execution plans (EPs). The EPs are then analysed to select the optimal one. The objective function of this strategy is to choose an execution plan with the least total resource usage and the best response time. Finally, the consolidation of partial results is carried out, maintaining the federated result in the root node.
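The decomposition step described above can be sketched with a toy structure: an explicit join between two federated classes is expanded into implicit joins between the component-schema classes they map onto. The mapping table and expansion rule below are hypothetical simplifications, not the BLOOM algorithm itself.

```python
# Hypothetical mapping from federated classes to component-schema
# classes; in a real federation this comes from the schema metadata.
COMPONENTS = {
    "FedCustomer": ["DB1.Customer", "DB2.Client"],
    "FedOrder": ["DB1.Order"],
}

def decompose(explicit_join):
    # Expand one explicit federated join into the implicit joins
    # between every pair of underlying component classes.
    left, right = explicit_join
    return [(l, r) for l in COMPONENTS[left] for r in COMPONENTS[right]]

plans = decompose(("FedCustomer", "FedOrder"))
print(plans)  # [('DB1.Customer', 'DB1.Order'), ('DB2.Client', 'DB1.Order')]
```

Each pair in the result is a candidate implicit join that the heuristics would then cost and prune when building execution plans.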

    Managing IoT Data on Hyperledger Blockchain

    Blockchain is a rapidly evolving technology known for its security, immutability and decentralized nature. At its heart, it is used for storing various kinds of data, such as transactions, but it is not limited to transactions or cryptocurrency: it can also store many other things, such as assets, IoT data, or even multimedia data like songs, pictures, and videos. The number of IoT devices connected to the internet is increasing day by day; in fact, Gartner (an analyst firm) predicts there will be 20.4 billion IoT devices by the end of 2020 [IOTb]. With the increase in the number of IoT devices, there will be an increase in the amount of data they generate. Managing this huge volume of data efficiently, so that it is available to every authorized user without any integrity loss, will be pivotal in the near future. Hyperledger is an open-source project hosted by the Linux Foundation. Several sub-projects come under the umbrella of the Hyperledger consortium, such as Hyperledger Fabric, Indy, Composer, and many more. Hyperledger Fabric, one of these projects, was initially developed by IBM and later contributed to Hyperledger. It allows us to develop private, permissioned blockchains following industry-standard algorithms and best practices. In this thesis, we manage IoT data on the Hyperledger Fabric blockchain. We collect data from IoT sensors and securely transmit it to our node running the Hyperledger blockchain using the MQTT protocol. After receiving data from the sensors, we process it and add it to our ledger. We also evaluate the performance of our network, taking various parameters such as batch timeout, batch size, and message count into consideration.
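The "process the data and add it to our ledger" step can be sketched with a minimal hash-linked append, a stand-in for the append-only property the thesis gets from Hyperledger Fabric's ledger. The payload fields and the linking scheme here are invented for illustration; they are not Fabric's actual data structures or chaincode API.

```python
import hashlib
import json

def append_block(ledger, sensor_payload):
    # Link each record to the previous one by hash, mimicking the
    # append-only, tamper-evident property of a blockchain ledger
    # (a simplified stand-in, not Hyperledger Fabric itself).
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    record = {"payload": sensor_payload, "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    ledger.append(record)
    return record

# Hypothetical sensor readings as they might arrive over MQTT.
ledger = []
append_block(ledger, {"sensor": "temp-01", "value": 22.5})
append_block(ledger, {"sensor": "temp-01", "value": 22.7})
print(ledger[1]["prev"] == ledger[0]["hash"])  # True
```

Altering an earlier payload would change its hash and break the `prev` link of every later record, which is the integrity guarantee the abstract refers to.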