26,618 research outputs found

    A High Performance XML Querying Architecture

    Get PDF
    Data exchange on the Internet plays an essential role in electronic business (e-business). A recent trend in e-business is to create distributed databases to facilitate data exchange. In most cases, the distributed databases are developed by integrating existing systems, which may use different database models and run on different hardware and/or software platforms. This heterogeneity may cause many difficulties; a solution to these difficulties is XML (the Extensible Markup Language), which is becoming the dominant language for exchanging data on the Internet. To develop XML systems for practical applications, developers have to address performance issues. In this paper, we describe a new XML querying architecture that can be used to build high-performance systems. Experiments indicate that the architecture performs better than Oracle XML DB, one of the most commonly used commercial DBMSs for XML.
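    As a rough, purely illustrative sketch (the paper does not publish its API; the element names and values below are hypothetical), the snippet shows the kind of document-filtering query an XML querying architecture must evaluate efficiently, here using Python's standard xml.etree.ElementTree:

    import xml.etree.ElementTree as ET

    # Hypothetical order documents; a real workload would be far larger.
    catalog = ET.fromstring("""
    <orders>
      <order id="1001"><customer>Acme</customer><total>250.00</total></order>
      <order id="1002"><customer>Globex</customer><total>99.50</total></order>
    </orders>
    """)

    # ElementTree supports only limited XPath, so the value filter is done in Python.
    for order in catalog.findall("./order"):
        if float(order.findtext("total")) > 100:
            print(order.get("id"), order.findtext("customer"))   # -> 1001 Acme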

    Web Services as a Data Integration Solution for the Academic Information System of Universitas Bina Darma

    Full text link
    Web services are a new paradigm for implementing distributed systems over the web based on XML technology; XML is a markup language for representing documents exchanged via the Internet. With an exact structure and definition, XML can be used to represent and communicate distributed relational databases. This research focuses on database representation and on synchronization between relational databases. Bina Darma University's education offices are spread out and separated by distance, which makes the distribution of student data ineffective and inefficient. Online distribution alone also does not help, because one still has to move from one site to another to retrieve the data. This study aims to build a web services technology capable of integrating the data of Bina Darma University. By taking advantage of XML, data from Bina Darma University's different databases can be integrated. Keywords: Web Services, Data Integration, XML
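    As a minimal sketch of the underlying idea (the table and column names below are illustrative assumptions, not taken from the paper), relational rows can be serialized to XML so that the databases of different units can exchange them through a web service:

    import xml.etree.ElementTree as ET

    # Hypothetical student rows as they might come from one site's relational database.
    students = [
        {"nim": "09142001", "name": "Siti", "faculty": "Computer Science"},
        {"nim": "09142002", "name": "Budi", "faculty": "Economics"},
    ]

    # Wrap each row in a <student> element; column names become child elements.
    root = ET.Element("students")
    for row in students:
        record = ET.SubElement(root, "student")
        for column, value in row.items():
            ET.SubElement(record, column).text = value

    # This XML payload is what the web service would return to the consuming site.
    print(ET.tostring(root, encoding="unicode"))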

    Towards Full Integration Of XML And Advanced Database Concepts

    Get PDF
    Most advanced database systems courses focus on core aspects of relational design, data modeling, transaction processing, and distributed database issues. Given the ever-increasing importance of web-enabled databases generally, and the influence of XML (eXtensible Markup Language) in particular, an alternative approach would be to teach the traditional core principles while integrating an XML module into the course. The focus of this paper is to elaborate on how such an integration would be accomplished in an advanced database course.

    Integrating uncertain XML data from different sources.

    Get PDF
    Data integration has become increasingly important with today's rapid growth of information available on the web and in electronic form. In the past several years, extensive work has been done to make use of the available data from different sources, particularly in the scientific and medical fields. In our work, we are interested in integrating data from different uncertain sources in which data are stored in semistructured databases, notably XML-based data. This interest in XML-based databases comes from the flexibility they provide for storing and exchanging data. Furthermore, we are concerned with the reliability of different query answers from various sources and with specifying the source the data came from (the provenance). In essence, our work lies among three areas of interest: data integration, uncertain databases, and lineage or provenance in databases. This thesis extends previous work on information integration to accommodate the integration of uncertain data from multiple sources.
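    A toy sketch of the general idea (not the thesis's actual model; the element names, probabilities, and source files are illustrative assumptions) shows how alternative values integrated from two sources can carry both a probability and their provenance:

    import xml.etree.ElementTree as ET

    # Two sources disagree on a value; each alternative keeps its reliability and origin.
    doc = ET.fromstring("""
    <patient id="42">
      <blood_type>
        <alternative value="A+" prob="0.7" source="hospital_a.xml"/>
        <alternative value="A-" prob="0.3" source="clinic_b.xml"/>
      </blood_type>
    </patient>
    """)

    # A query answer can then report its reliability and where it came from.
    for alt in doc.findall("./blood_type/alternative"):
        print(alt.get("value"), float(alt.get("prob")), alt.get("source"))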

    Interoperability of Information Systems and Heterogenous Databases Using XML

    Get PDF
    Interoperability of information systems is the most critical issue facing businesses that need to access information from multiple information systems on different environments and diverse platforms. Interoperability has been a basic requirement for modern information systems in a competitive and volatile business environment, particularly with the advent of distributed network systems and the growing relevance of inter-network communications. Our objective in this paper is to develop a comprehensive framework to facilitate interoperability among distributed and heterogeneous information systems and to develop prototype software to validate the application of XML in the interoperability of information systems and databases.

    Information Integration - the process of integration, evolution and versioning

    Get PDF
    At present, many information sources are available wherever you are. Most of the time, the information needed is spread across several of those information sources, and gathering it is a tedious and time-consuming job. Automating this process would assist the user in this task. Integration of the information sources provides a global information source with all of the information needed present. All of these information sources also change over time, and with each change of an information source its schema can change as well. The data contained in the information source, however, cannot be converted every time, due to the huge amount of data that would have to be transformed to conform to the most recent schema. In this report we describe the current methods for information integration, evolution, and versioning. We distinguish between integration of schemas and integration of the actual data, and we also show some key issues when integrating XML data sources.

    Taming Data Explosion in Probabilistic Information Integration

    Get PDF
    Data integration has been a challenging problem for decades. In an ambient environment, where many autonomous devices have their own information sources and network connectivity is ad hoc and peer-to-peer, it becomes an even more serious bottleneck. To enable devices to exchange information without the need for interaction with a user at data integration time and without the need for extensive semantic annotations, a probabilistic approach seems rather promising: it simply teaches the device how to cope with the uncertainty that occurs during data integration. Unfortunately, without any kind of world knowledge, almost everything becomes uncertain, so maintaining all possibilities produces huge integrated information sources. In this paper, we claim that only very simple and generic rules are enough world knowledge to drastically reduce the amount of uncertainty and hence tame the data explosion to a manageable size.
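    A toy sketch of this claim (the data and the compatibility rule are illustrative assumptions, not the paper's rules): without any world knowledge, every pairing of elements from two sources is a possible match, but a single generic rule already prunes most candidates:

    from itertools import product

    source_a = [("title", "Yesterday"), ("artist", "The Beatles"), ("year", "1965")]
    source_b = [("song", "Yesterday"), ("performer", "Beatles, The"), ("released", "1965")]

    # With no knowledge at all, every pairing is a possible match: 3 x 3 = 9 candidates.
    all_pairs = list(product(source_a, source_b))

    # One simple, generic rule: only elements whose labels play a compatible role can match.
    compatible = {("title", "song"), ("artist", "performer"), ("year", "released")}
    pruned = [(a, b) for a, b in all_pairs if (a[0], b[0]) in compatible]

    print(len(all_pairs), "candidate matches without world knowledge")  # 9
    print(len(pruned), "after one generic compatibility rule")          # 3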