12,853 research outputs found

    Web Services as a Data Integration Solution for the Academic Information System of Bina Darma University

    Web services are a new paradigm for implementing distributed systems over the web on the basis of XML technology. XML is a markup language that represents documents exchanged via the internet; with a precise structure and definition, it can be used to represent and communicate distributed relational databases. This research focuses on database representation and synchronization between relational databases. Bina Darma University's offices are spread out and separated by distance, which makes the distribution of student data ineffective and inefficient. Online distribution does not help either, because data must be moved from one site to another to be retrieved. This study aims to build web services technology capable of integrating the data at Bina Darma University. By taking advantage of XML, data from Bina Darma University's different databases can be integrated. Keywords: Web Services, Data Integration, XML
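
    As a rough illustration of the approach described above, the sketch below exposes a single relational record as XML over HTTP using only the Python standard library. The `student` table, its columns, and the endpoint are illustrative assumptions, not the system built in the paper.

```python
# Minimal sketch (not the paper's implementation): exposing a relational
# "student" record as XML over HTTP so another campus site can consume it.
# Table and field names here are illustrative assumptions.
import sqlite3
import xml.etree.ElementTree as ET
from http.server import BaseHTTPRequestHandler, HTTPServer

conn = sqlite3.connect(":memory:", check_same_thread=False)
conn.execute("CREATE TABLE student (nim TEXT, name TEXT, faculty TEXT)")
conn.execute("INSERT INTO student VALUES ('2021001', 'Siti', 'Computer Science')")

def student_to_xml(nim):
    """Serialize one student row as an XML document."""
    row = conn.execute(
        "SELECT nim, name, faculty FROM student WHERE nim = ?", (nim,)
    ).fetchone()
    root = ET.Element("student")
    if row:
        for tag, value in zip(("nim", "name", "faculty"), row):
            ET.SubElement(root, tag).text = value
    return ET.tostring(root, encoding="utf-8")

class StudentService(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /student?nim=2021001
        nim = self.path.rsplit("=", 1)[-1]
        body = student_to_xml(nim)
        self.send_response(200)
        self.send_header("Content-Type", "application/xml")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), StudentService).serve_forever()
```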

    X-Databases - The Integration of XML into Enterprise Database Management Systems

    This paper examines how the eXtensible Markup Language (XML) and database management systems (DBMSs) fit together, and surveys current approaches to providing database technologies that support XML. An analysis of how XML is being deployed in four classes of XML Database (X-Database) applications provides a basis for understanding the direction of X-Database technology and the associated standards. In a simple implementation, an XML Document Type Definition (DTD) is mapped to relational structures, and XML data are stored in a DBMS (Oracle8i). Sample queries are presented to retrieve XML from the database, and a middleware tool (the XSQL Java Servlet) is used to transform query results into records on a Web page. The results demonstrate that relational databases require data to be rigidly mapped to relational structures. The paper concludes by exploring future challenges to integrating XML and DTDs with X-Databases, which establishes the need for a more "native" integration approach.
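
    The shredding approach summarized above can be illustrated with a small sketch: a simple XML document is mapped onto a relational table, and a query result is re-serialized as XML, the role played by middleware such as the XSQL servlet in the paper. The <order>/<item> structure and the SQLite store are assumptions for illustration, not the paper's Oracle8i setup.

```python
# Illustrative sketch of mapping XML onto relational structures and back.
import sqlite3
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<order id="42">
  <item sku="A1" qty="2"/>
  <item sku="B7" qty="1"/>
</order>""")

# "Shred" the document into a flat relational table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE item (order_id INTEGER, sku TEXT, qty INTEGER)")
for item in doc.findall("item"):
    db.execute("INSERT INTO item VALUES (?, ?, ?)",
               (int(doc.get("id")), item.get("sku"), int(item.get("qty"))))

# Query the relational store and rebuild an XML fragment from the rows.
result = ET.Element("rowset")
for order_id, sku, qty in db.execute("SELECT * FROM item WHERE qty >= 1"):
    ET.SubElement(result, "row", order=str(order_id), sku=sku, qty=str(qty))
print(ET.tostring(result, encoding="unicode"))
```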

    Information Integration - the process of integration, evolution and versioning

    At present, many information sources are available wherever you are. Most of the time, the information needed is spread across several of those sources, and gathering it is a tedious and time-consuming job; automating this process would assist the user in this task. Integration of the information sources provides a global information source in which all the information needed is present. These information sources also change over time, and with each change of an information source its schema can change as well. The data contained in the information source, however, cannot be converted with every change, due to the huge amount of data that would have to be transformed to conform to the most recent schema. In this report we describe current methods for information integration, evolution, and versioning. We distinguish between the integration of schemas and the integration of the actual data, and we point out some key issues that arise when integrating XML data sources.
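
    A minimal sketch of the schema-versus-data distinction discussed above, under assumed source names and fields: two source schemas are integrated through declarative field mappings, and records kept under an older schema version are rewritten lazily, on access, instead of converting the whole data set up front.

```python
# Toy sketch: schema-level integration via per-source field mappings.
# All source names, fields, and records are illustrative assumptions.
SOURCE_MAPPINGS = {
    "library_v1": {"title": "book_title", "year": "pub_year"},
    "library_v2": {"title": "title", "year": "year_published"},
}

def to_global(record, source):
    """Rewrite one source record into the global schema {title, year}."""
    mapping = SOURCE_MAPPINGS[source]
    return {global_field: record[src_field]
            for global_field, src_field in mapping.items()}

# Records stay in their original (versioned) form and are converted on access.
v1_record = {"book_title": "XML Data Management", "pub_year": 2003}
v2_record = {"title": "Schema Evolution", "year_published": 2007}

print(to_global(v1_record, "library_v1"))
print(to_global(v2_record, "library_v2"))
```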

    A Framework for XML-based Integration of Data, Visualization and Analysis in a Biomedical Domain

    Biomedical data are becoming increasingly complex and heterogeneous in nature. The data are stored in distributed information systems, using a variety of data models, and are processed by increasingly complex tools that analyze and visualize them. In this paper we present our framework for integrating biomedical research data and tools into a single Web front end, applied to the University of Washington's Human Brain Project. Specifically, we present solutions to four integration tasks: definition of complex mappings from relational sources to XML, distributed XQuery processing, generation of heterogeneous output formats, and the integration of heterogeneous data visualization and analysis tools.
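
    One of the four tasks, generation of heterogeneous output formats, can be sketched as follows: the same XML query result is rendered both as CSV and as an HTML table. The <subject> records are invented examples, not the Human Brain Project schema.

```python
# Sketch: rendering one XML query result in two output formats.
import csv
import io
import xml.etree.ElementTree as ET

result = ET.fromstring("""
<results>
  <subject id="P1" region="Broca" activation="0.82"/>
  <subject id="P2" region="Wernicke" activation="0.67"/>
</results>""")

rows = [s.attrib for s in result.findall("subject")]

# CSV rendering of the result set.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["id", "region", "activation"])
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())

# HTML rendering of the same rows, as a Web front end might display them.
cells = "".join(
    "<tr><td>{id}</td><td>{region}</td><td>{activation}</td></tr>".format(**r)
    for r in rows)
print("<table>{}</table>".format(cells))
```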

    Impliance: A Next Generation Information Management Appliance

    ably successful in building a large market and adapting to the changes of the last three decades, its impact on the broader market of information management is surprisingly limited. If we were to design an information management system from scratch, based upon today's requirements and hardware capabilities, would it look anything like today's database systems?" In this paper, we introduce Impliance, a next-generation information management system consisting of hardware and software components integrated to form an easy-to-administer appliance that can store, retrieve, and analyze all types of structured, semi-structured, and unstructured information. We first summarize the trends that will shape information management for the foreseeable future. Those trends imply three major requirements for Impliance: (1) to be able to store, manage, and uniformly query all data, not just structured records; (2) to be able to scale out as the volume of this data grows; and (3) to be simple and robust in operation. We then describe four key ideas that are uniquely combined in Impliance to address these requirements, namely: (a) integrating software and off-the-shelf hardware into a generic information appliance; (b) automatically discovering, organizing, and managing all data, unstructured as well as structured, in a uniform way; (c) achieving scale-out by exploiting simple, massive parallel processing; and (d) virtualizing compute and storage resources to unify, simplify, and streamline the management of Impliance. Impliance is an ambitious, long-term effort to define simpler, more robust, and more scalable information systems for tomorrow's enterprises. Comment: This article is published under a Creative Commons License Agreement (http://creativecommons.org/licenses/by/2.5/). You may copy, distribute, display, and perform the work, make derivative works, and make commercial use of the work, but you must attribute the work to the author and CIDR 2007. 3rd Biennial Conference on Innovative Data Systems Research (CIDR), January 7-10, 2007, Asilomar, California, USA
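
    Idea (c), scale-out through simple, massive parallel processing, can be illustrated with a toy sketch: a collection of mixed structured and unstructured records is partitioned, each partition is scanned by a separate worker, and the partial results are merged. This is only a sketch of the principle, with invented records, not Impliance itself.

```python
# Toy sketch: partitioned parallel scan over mixed records with merged counts.
from multiprocessing import Pool

RECORDS = [
    {"kind": "row", "text": "customer 42 ordered 3 widgets"},
    {"kind": "email", "text": "please ship the widgets by Friday"},
    {"kind": "log", "text": "widgets out of stock"},
    {"kind": "row", "text": "customer 7 ordered 1 gadget"},
]

def count_term(partition, term="widgets"):
    """Scan one partition uniformly, whatever the record kind."""
    return sum(record["text"].count(term) for record in partition)

if __name__ == "__main__":
    partitions = [RECORDS[i::2] for i in range(2)]   # 2-way partitioning
    with Pool(2) as pool:
        partials = pool.map(count_term, partitions)  # one worker per partition
    print(sum(partials))  # merged result: 3
```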

    Structurally Tractable Uncertain Data

    Many data management applications must deal with data which is uncertain, incomplete, or noisy. However, on existing uncertain data representations, we cannot tractably perform the important query evaluation tasks of determining query possibility, certainty, or probability: these problems are hard on arbitrary uncertain input instances. We thus ask whether we could restrict the structure of uncertain data so as to guarantee the tractability of exact query evaluation. We present our tractability results for tree and tree-like uncertain data, and a vision for probabilistic rule reasoning. We also study uncertainty about order, proposing a suitable representation, and study uncertain data conditioned by additional observations. Comment: 11 pages, 1 figure, 1 table. To appear in SIGMOD/PODS PhD Symposium 201
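
    The query-probability problem mentioned above can be made concrete with a small worked example over a tuple-independent representation: each tuple (here, a graph edge) is present independently with its own probability, and the probability of a Boolean query is the total weight of the possible worlds that satisfy it. The brute-force enumeration below is exponential in the number of tuples, which is exactly why the paper looks for structural restrictions, such as tree-like data, that make exact evaluation tractable. The graph and probabilities are invented for illustration.

```python
# Worked example: query probability on a tiny tuple-independent database
# by brute-force enumeration of all possible worlds.
from itertools import product

# Edge tuples of a tiny graph, each with an independent presence probability.
EDGES = [(("a", "b"), 0.9), (("b", "c"), 0.5), (("a", "c"), 0.2)]

def query(present_edges):
    """Boolean query: is there a path from 'a' to 'c' (directly or via 'b')?"""
    return ("a", "c") in present_edges or (
        ("a", "b") in present_edges and ("b", "c") in present_edges)

probability = 0.0
for world in product([True, False], repeat=len(EDGES)):
    weight = 1.0
    present = set()
    for (edge, p), kept in zip(EDGES, world):
        weight *= p if kept else 1 - p
        if kept:
            present.add(edge)
    if query(present):
        probability += weight

print(round(probability, 4))  # 0.56 = 1 - (1 - 0.9*0.5) * (1 - 0.2)
```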