
    The advantages and cost effectiveness of database improvement methods

    Relational databases have proved inadequate for supporting new classes of applications, and as a consequence a number of new approaches have been taken (Blaha 1998), (Harrington 2000). The most salient alternatives are denormalisation and conversion to an object-oriented database (Douglas 1997). Denormalisation can provide better performance but has deficiencies with respect to data modelling. Object-oriented databases can provide increased performance efficiency without the deficiencies in data modelling (Blaha 2000). Although various benchmark tests have been reported, none of them has compared normalised, object-oriented and denormalised databases. This research shows that a non-normalised database holding data containing type code complexity would be normalised in the process of conversion to an object-oriented database. This helps to correct badly organised data and so gives the performance benefits of denormalisation while improving data modelling. The costs of conversion from relational databases to object-oriented databases were also examined. Costs were based on published benchmark tests, a benchmark carried out during this study, and case studies. The benchmark tests were based on an engineering database benchmark. Engineering problems such as computer-aided design and manufacturing have much to gain from conversion to object-oriented databases. Costs were calculated for coding and development, and also for operation. It was found that conversion to an object-oriented database was not usually cost effective, as many of the performance benefits could be achieved by the far cheaper process of denormalisation, by using the performance-improving facilities provided by many relational database systems such as indexing or partitioning, or by simply upgrading the system hardware. It is concluded, therefore, that while object-oriented databases are a better alternative for databases built from scratch, the conversion of a legacy relational database to an object-oriented database is not necessarily cost effective.
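    As a minimal sketch of the trade-off described above, the Python/SQLite snippet below contrasts a denormalised table whose rows depend on a type code with the normalised structure that an object-oriented conversion would imply; the table and column names are hypothetical and are not taken from the study.

    import sqlite3

    # Hypothetical illustration: the same part data held denormalised (one wide
    # table interpreted through a type code) and normalised (one table per sub-type).
    con = sqlite3.connect(":memory:")

    # Denormalised: reads need no join, but many columns are NULL for any row,
    # and the meaning of a row depends on interpreting type_code correctly.
    con.execute("""CREATE TABLE part_denorm (
        part_id INTEGER PRIMARY KEY,
        type_code TEXT,            -- 'BOLT' or 'GEAR'
        thread_pitch REAL,         -- meaningful only for bolts
        tooth_count INTEGER        -- meaningful only for gears
    )""")

    # Normalised: type-specific attributes move to their own tables, which is
    # also the structure an object-oriented subclass hierarchy would imply.
    con.execute("CREATE TABLE part (part_id INTEGER PRIMARY KEY, type_code TEXT)")
    con.execute("CREATE TABLE bolt (part_id INTEGER PRIMARY KEY REFERENCES part, thread_pitch REAL)")
    con.execute("CREATE TABLE gear (part_id INTEGER PRIMARY KEY REFERENCES part, tooth_count INTEGER)")

    # The relational remedies mentioned above can be applied without any
    # conversion at all, e.g. an index on the column used for joins and filters.
    con.execute("CREATE INDEX idx_part_type ON part (type_code)")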

    Design and construction of maintainable knowledge bases through effective use of entity-relationship modeling techniques

    The use of an accepted logical database design tool, Entity-Relationship Diagrams (E-RDs), is explored as a method by which conceptual and pseudo-conceptual knowledge bases may be designed. Extensions to Peter Chen's classic E-RD method which can model the knowledge structures used by knowledge-based applications are explored. The use of E-RDs to design knowledge bases is proposed as a two-stage process. In the first stage, an E-RD, termed the Essential E-RD, is developed for the realm of the problem or enterprise being modeled. The Essential E-RD is completely independent of any knowledge representation model (KRM) and is intended for understanding the underlying conceptual entities and relationships in the domain of interest. The second stage of the proposed design process consists of expanding the Essential E-RD. The resulting E-RD, termed the Implementation E-RD, is a network of E-RD-modeled KRM constructs and provides a method by which the proper KRM may be chosen and the knowledge base may be maintained. In some cases, the constructs of the Implementation E-RD may be mapped directly to a physical knowledge base. Using the proposed design tool will aid in both the development of the knowledge base and its maintenance. The need for building maintainable knowledge bases and problems often encountered during knowledge base construction are explored. A case study is presented in which this tool is used to design a knowledge base. Problems avoided by the use of this method are highlighted, as are advantages the method presents to the maintenance of the knowledge base. Finally, a critique of the ramifications of this research is presented, as well as needs for future research.
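    The two-stage process can be pictured with a small Python sketch: an Essential E-RD is recorded independently of any KRM and then expanded into an Implementation E-RD of frame-style constructs. The entity names and the frame mapping below are illustrative assumptions, not the notation used in the paper.

    from dataclasses import dataclass, field

    @dataclass
    class Entity:
        name: str
        attributes: list = field(default_factory=list)

    @dataclass
    class Relationship:
        name: str
        participants: tuple

    # Stage 1: the Essential E-RD, independent of any knowledge representation
    # model (KRM), records only domain entities and relationships.
    essential = {
        "entities": [Entity("Fault", ["symptom"]), Entity("Repair", ["action"])],
        "relationships": [Relationship("resolved_by", ("Fault", "Repair"))],
    }

    # Stage 2: the Implementation E-RD expands the same structure into
    # frame-style KRM constructs from which a knowledge base could be built.
    implementation = {
        e.name: {"frame": e.name, "slots": list(e.attributes)}
        for e in essential["entities"]
    }
    for r in essential["relationships"]:
        implementation[r.participants[0]]["slots"].append(r.name)

    print(implementation)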

    Electronic Medical Records

    The Electronic Medical Record (EMR) relational database is considered a major component of any medical care information system. A major problem for researchers in medical informatics is finding the best way to use these databases to extract valuable information about patients' diseases and treatments. Integrating different EMR databases is a major achievement that will improve health care systems. This paper presents an AI approach to extracting generic EMRs from different sources and transforming them into clinical cases. The approach is based on retrieving the relationships between a patient's different data tables (files) and automatically generating EMRs in XML format, then building frame-based medical cases to form a case repository that can be used in medical diagnostic systems.
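    A minimal sketch of the described pipeline, assuming hypothetical patient tables and XML tag names: related rows are read from a relational source, a generic EMR is emitted as XML, and a frame-based case is derived from it for the case repository.

    import sqlite3
    import xml.etree.ElementTree as ET

    # Illustrative patient tables standing in for a real EMR source.
    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE patient   (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE diagnosis (patient_id INTEGER REFERENCES patient,
                                code TEXT, treatment TEXT);
        INSERT INTO patient   VALUES (1, 'J. Doe');
        INSERT INTO diagnosis VALUES (1, 'E11', 'metformin');
    """)

    # Generate a generic EMR in XML from the related rows of one patient.
    emr = ET.Element("EMR", patientId="1")
    for code, treatment in con.execute(
            "SELECT code, treatment FROM diagnosis WHERE patient_id = 1"):
        encounter = ET.SubElement(emr, "Encounter")
        ET.SubElement(encounter, "Diagnosis").text = code
        ET.SubElement(encounter, "Treatment").text = treatment

    # Build a frame-based case from the XML for a case repository.
    case = {"problem": [d.text for d in emr.iter("Diagnosis")],
            "solution": [t.text for t in emr.iter("Treatment")]}
    print(ET.tostring(emr, encoding="unicode"))
    print(case)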

    Converting UML Class Diagrams into Temporal Object Relational DataBase

    A number of active researchers and experts are engaged in developing and implementing new mechanisms and features in time-varying database management systems (TVDBMS) in response to the recommendations of the modern business environment. Time-varying data management has received much attention, using either attribute or tuple timestamping schemas. Our main approach is to offer a better solution to the limitations of existing work: to provide non-procedural data definitions and queries of temporal data, together with a technical conversion that is as complete as possible and allows the conceptual details of UML class specifications to be easily realised and shared, from a conception and design point of view. This paper contributes a logical design schema represented by UML class diagrams, which are annotated with stereotypes to express a temporal object relational database with attribute timestamping.
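    Attribute timestamping can be illustrated with a short Python sketch in which each time-varying attribute of a class stereotyped as temporal keeps its own history of (value, valid_from, valid_to) versions, rather than timestamping whole tuples. The Employee/salary example and the method names are assumptions for illustration only.

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class TemporalAttribute:
        """One time-varying attribute, stored as (value, valid_from, valid_to)."""
        history: list = field(default_factory=list)

        def set(self, value, valid_from: date) -> None:
            if self.history and self.history[-1][2] is None:
                # Close the currently valid version before opening a new one.
                old_value, old_from, _ = self.history[-1]
                self.history[-1] = (old_value, old_from, valid_from)
            self.history.append((value, valid_from, None))

        def value_at(self, when: date):
            for value, start, end in self.history:
                if start <= when and (end is None or when < end):
                    return value
            return None

    # A hypothetical Employee class whose salary carries a <<temporal>>
    # stereotype in the diagram maps that attribute to its own history.
    salary = TemporalAttribute()
    salary.set(30000, date(2020, 1, 1))
    salary.set(34000, date(2022, 1, 1))
    print(salary.value_at(date(2021, 6, 1)))   # -> 30000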

    Migrating relational databases into object-based and XML databases

    Rapid changes in information technology, the emergence of object-based and WWW applications, and the interest of organisations in securing benefits from new technologies have made information systems re-engineering in general, and database migration in particular, an active research area. In order to improve the functionality and performance of existing systems, the re-engineering process requires identifying and understanding all of the components of such systems. An underlying database is one of the most important components of an information system. A considerable body of data is stored in relational databases (RDBs), yet they have limitations in supporting the complex structures and user-defined data types provided by relatively recent databases such as object-based and XML databases. Instead of throwing away the large amount of data stored in RDBs, it is more appropriate to enrich and convert such data to be used by new systems. Most research into the migration of RDBs into object-based/XML databases has concentrated on schema translation and on accessing and publishing RDB data using newer technology, while few have paid attention to the conversion of data and the preservation of data semantics, e.g., inheritance and integrity constraints. In addition, existing work does not appear to provide a solution for more than one target database. Thus, research on the migration of RDBs is not fully developed. We propose a solution that offers automatic migration of an RDB as a source into the recent database technologies as targets, based on available standards such as ODMG 3.0, SQL4 and XML Schema. A canonical data model (CDM) is proposed to bridge the semantic gap between an RDB and the target databases. The CDM preserves and enhances the metadata of existing RDBs to fit in with the essential characteristics of the target databases. The adoption of standards is essential for increased portability, flexibility and constraint preservation. This thesis contributes a solution for migrating RDBs into object-based and XML databases. The solution takes an existing RDB as input, enriches its metadata representation with the required explicit semantics, and constructs an enhanced relational schema representation (RSR). Based on the RSR, a CDM is generated which is enriched with the RDB's constraints and data semantics that may not have been explicitly expressed in the RDB metadata. The CDM so obtained facilitates both schema translation and data conversion. We design sets of rules for translating the CDM into each of the three target schemas, and provide algorithms for converting RDB data into the target formats based on the CDM. A prototype of the solution has been implemented, which generates the three target databases. An experimental study has been conducted to evaluate the prototype. The experimental results show that the target schemas resulting from the prototype and those generated by existing manual mapping techniques were comparable. We have also shown that the source and target databases were equivalent, and demonstrated that the solution, conceptually and practically, is feasible, efficient and correct.
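    As a rough sketch of the schema-translation step, the snippet below models a single CDM class enriched with a reference (derived from a foreign key in the RSR) and applies one illustrative rule that emits an XML Schema complex type. The class, the type mapping and the rule are assumptions for illustration, not the thesis's actual rule sets.

    from dataclasses import dataclass, field

    @dataclass
    class CdmClass:
        """One class of the canonical data model, enriched with references."""
        name: str
        attributes: dict                                  # attribute name -> SQL type
        references: dict = field(default_factory=dict)    # attribute name -> target class

    SQL_TO_XSD = {"INTEGER": "xs:integer", "TEXT": "xs:string", "REAL": "xs:decimal"}

    def cdm_to_xsd(cls: CdmClass) -> str:
        """One illustrative translation rule: CDM class -> xs:complexType."""
        lines = [f'<xs:complexType name="{cls.name}"><xs:sequence>']
        for attr, sql_type in cls.attributes.items():
            lines.append(f'  <xs:element name="{attr}" type="{SQL_TO_XSD[sql_type]}"/>')
        for attr, target in cls.references.items():
            # A foreign key in the RSR becomes a reference to the target type.
            lines.append(f'  <xs:element name="{attr}" type="{target}Ref"/>')
        lines.append("</xs:sequence></xs:complexType>")
        return "\n".join(lines)

    order = CdmClass("Order", {"order_id": "INTEGER", "total": "REAL"},
                     {"customer": "Customer"})
    print(cdm_to_xsd(order))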

    The mediated data integration (MeDInt): An approach to the integration of database and legacy systems

    The information required for decision making by executives in organizations is normally scattered across disparate data sources, including databases and legacy systems. To gain a competitive advantage, it is extremely important for executives to be able to obtain one unique view of information in an accurate and timely manner. To do this, it is necessary to interoperate multiple data sources, which differ structurally and semantically. Particular problems occur when applying traditional integration approaches; for example, the global schema needs to be recreated whenever a component schema has been modified. This research investigates the following heterogeneities between heterogeneous data sources: Data Model Heterogeneities, Schematic Heterogeneities and Semantic Heterogeneities. The problems of existing integration approaches are reviewed and solved by introducing and designing a new integration approach to logically interoperate heterogeneous data sources and to resolve the three previously classified heterogeneities. The research attempts to reduce the complexity of the integration process by maximising the degree of automation. Mediation and wrapping techniques are employed in this research. The Mediated Data Integration (MeDInt) architecture has been introduced to integrate heterogeneous data sources. Three major elements, the MeDInt Mediator, wrappers, and the Mediated Data Model (MDM), play important roles in the integration of heterogeneous data sources. The MeDInt Mediator acts as an intermediate layer transforming queries to sub-queries, resolving conflicts, and consolidating conflict-resolved results. Wrappers serve as translators between the MeDInt Mediator and the data sources. Both the mediator and the wrappers are well supported by the MDM, a semantically rich data model which can describe or represent heterogeneous data schematically and semantically. Some organisational information systems have been tested and evaluated using the MeDInt architecture. The results have addressed all the research questions regarding the interoperability of heterogeneous data sources. In addition, the results also confirm that the MeDInt architecture is able to provide integration that is transparent to users and that schema evolution does not affect the integration.
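    The mediator/wrapper division of labour can be sketched as follows: each wrapper maps global field names onto its source's local names, while the mediator decomposes a global query into sub-queries and consolidates the conflict-resolved results. The classes, field maps and sample sources below are illustrative assumptions, not the MeDInt implementation.

    class Wrapper:
        """Translates mediator sub-queries into one source's local terms."""
        def __init__(self, rows, field_map):
            self.rows = rows              # stand-in for a real data source
            self.field_map = field_map    # global field name -> local name

        def query(self, fields):
            return [{f: row[self.field_map[f]] for f in fields}
                    for row in self.rows]

    class Mediator:
        """Decomposes a global query and consolidates conflict-resolved results."""
        def __init__(self, wrappers):
            self.wrappers = wrappers

        def query(self, fields):
            results = []
            for wrapper in self.wrappers:
                results.extend(wrapper.query(fields))   # one sub-query per source
            return results

    legacy = Wrapper([{"CUSTNM": "Acme", "BAL": 120}],
                     {"customer": "CUSTNM", "balance": "BAL"})
    modern = Wrapper([{"name": "Binary Ltd", "balance": 75}],
                     {"customer": "name", "balance": "balance"})
    print(Mediator([legacy, modern]).query(["customer", "balance"]))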