
    Transforming N-ary relationships to database schemas: an old and forgotten problem

    N-ary relationships have traditionally been a source of confusion, and they still are. One important source of confusion is that the term cardinality in a relationship has several interpretations, two of which are very popular. However, neither of the two approaches, nor the two together, allows us to express all the possible cardinality patterns. The transformations from all the possible relationships to database schemas have never been fully described in the existing literature. Using the 14 ternary patterns as an example, we discuss these transformations, particularly the transformations from the patterns ignored in the literature.
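    The 14 ternary patterns themselves are in the paper; as a flavor of what such schema transformations look like, here is a minimal sketch in Python/SQLite of two classic mapping options for a ternary relationship, using the familiar supplier-part-project example. All table and column names are illustrative and not taken from the paper.

```python
# A minimal sketch (not from the paper) of one common transformation:
# the ternary relationship Supplies(Supplier, Part, Project) mapped to
# its own relation. The choice of key encodes the cardinality pattern:
# with no functional constraint the key is the full triple; if each
# (supplier, part) pair determines at most one project, the key shrinks.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE supplier (sid INTEGER PRIMARY KEY);
CREATE TABLE part     (pid INTEGER PRIMARY KEY);
CREATE TABLE project  (jid INTEGER PRIMARY KEY);

-- General many-many-many pattern: key = all three participants.
CREATE TABLE supplies (
    sid INTEGER REFERENCES supplier,
    pid INTEGER REFERENCES part,
    jid INTEGER REFERENCES project,
    PRIMARY KEY (sid, pid, jid)
);

-- A more constrained pattern: (sid, pid) -> jid, i.e. a supplier
-- ships a given part to at most one project, so the key drops jid.
CREATE TABLE supplies_constrained (
    sid INTEGER REFERENCES supplier,
    pid INTEGER REFERENCES part,
    jid INTEGER NOT NULL REFERENCES project,
    PRIMARY KEY (sid, pid)
);
""")
conn.close()
```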

    A Logical Design Methodology for Relational Databases Using the Extended ER Model

    Full text link
    https://deepblue.lib.umich.edu/bitstream/2027.42/154152/1/39015099114723.pd

    Web and Semantic Web Query Languages

    A number of techniques have been developed to facilitate powerful data retrieval on the Web and Semantic Web. Three categories of Web query languages can be distinguished according to the format of the data they can retrieve: XML, RDF, and Topic Maps. This article introduces the spectrum of languages falling into these categories and summarises their salient aspects. The languages are introduced using common sample data and query types. Key aspects of the query languages considered are stressed in a conclusion.
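    As a small illustration of the RDF category discussed above, the following sketch runs a SPARQL query over a tiny in-memory graph with the rdflib Python library. The vocabulary and data are invented for the example and do not come from the article.

```python
# A minimal sketch (illustrative, not from the article) of one language
# in the RDF category: a SPARQL query run over a tiny in-memory graph
# with the rdflib library. The ex: vocabulary and data are made up.
from rdflib import Graph

TURTLE = """
@prefix ex: <http://example.org/> .
ex:alice ex:knows ex:bob .
ex:bob   ex:knows ex:carol .
"""

g = Graph()
g.parse(data=TURTLE, format="turtle")

# "Who does alice know, directly or transitively?" -- a SPARQL 1.1
# property path makes this a one-line graph pattern.
q = """
PREFIX ex: <http://example.org/>
SELECT ?person WHERE { ex:alice ex:knows+ ?person . }
"""
for row in g.query(q):
    print(row.person)  # ex:bob, then ex:carol
```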

    The mediated data integration (MeDInt): An approach to the integration of database and legacy systems

    The information required for decision making by executives in organizations is normally scattered across disparate data sources, including databases and legacy systems. To gain a competitive advantage, it is extremely important for executives to be able to obtain one unique view of information in an accurate and timely manner. To do this, it is necessary to interoperate multiple data sources which differ structurally and semantically. Particular problems occur when applying traditional integration approaches; for example, the global schema needs to be recreated whenever a component schema has been modified. This research investigates the following heterogeneities between heterogeneous data sources: Data Model Heterogeneities, Schematic Heterogeneities and Semantic Heterogeneities. The problems of existing integration approaches are reviewed and solved by introducing and designing a new integration approach to logically interoperate heterogeneous data sources and to resolve the three previously classified heterogeneities. The research attempts to reduce the complexity of the integration process by maximising the degree of automation. Mediation and wrapping techniques are employed in this research. The Mediated Data Integration (MeDInt) architecture has been introduced to integrate heterogeneous data sources. Three major elements, the MeDInt Mediator, wrappers, and the Mediated Data Model (MDM), play important roles in the integration of heterogeneous data sources. The MeDInt Mediator acts as an intermediate layer transforming queries into sub-queries, resolving conflicts, and consolidating conflict-resolved results. Wrappers serve as translators between the MeDInt Mediator and the data sources. Both the mediator and the wrappers are well supported by the MDM, a semantically rich data model which can describe and represent heterogeneous data schematically and semantically. Some organisational information systems have been tested and evaluated using the MeDInt architecture. The results have addressed all the research questions regarding the interoperability of heterogeneous data sources. In addition, the results confirm that the MeDInt architecture is able to provide integration that is transparent to users and that schema evolution does not affect the integration.
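    The sketch below illustrates the mediator/wrapper split described above in schematic Python. All class and method names are hypothetical, and the conflict-resolution and MDM machinery of the real MeDInt architecture is elided.

```python
# A schematic sketch of the mediator/wrapper pattern described above.
# All names here are hypothetical; the real MeDInt mediator also
# resolves conflicts over the Mediated Data Model, elided here.
from typing import Protocol

class Wrapper(Protocol):
    """Translates between the mediator's model and one data source."""
    def translate(self, subquery: str) -> str: ...
    def execute(self, native_query: str) -> list[dict]: ...

class SqlWrapper:
    def __init__(self, rows: list[dict]):
        self._rows = rows                    # stand-in for a real DBMS
    def translate(self, subquery: str) -> str:
        return f"SELECT * /* {subquery} */"  # toy translation step
    def execute(self, native_query: str) -> list[dict]:
        return self._rows

class Mediator:
    def __init__(self, wrappers: dict[str, Wrapper]):
        self._wrappers = wrappers
    def query(self, q: str) -> list[dict]:
        results: list[dict] = []
        for name, w in self._wrappers.items():
            sub = f"{q} restricted to {name}"  # decompose into sub-queries
            results.extend(w.execute(w.translate(sub)))
        return results                         # consolidation point

m = Mediator({"legacy": SqlWrapper([{"id": 1}]), "erp": SqlWrapper([{"id": 2}])})
print(m.query("all customers"))  # [{'id': 1}, {'id': 2}]
```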

    A Methodology for Reengineering Relational Databases to an Object-Oriented Database

    This research proposes and evaluates a methodology for reengineering a relational database to an object-oriented database. We applied this methodology to reengineering the Air Force Institute of Technology Student Information System (AFITSIS) as our test case. With this test case, we could verify the applicability of the proposed methodology, especially because AFITSIS comes from an old version of Oracle RDBMS. We had the opportunity to implement part of the object model using an object-oriented database, and we present some peculiarities encountered during this implementation. The most important result of this research is that it demonstrated that the proposed methodology can be used for reengineering an arbitrarily selected relational database to an object-oriented database. It appears that this approach can be applied to any relational database.
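    As a toy illustration of the core reengineering step (not the thesis's actual procedure), the sketch below shows a relational foreign key becoming a direct object reference in an object model. The table and class names are invented.

```python
# A toy illustration (not the thesis's actual methodology) of the core
# reengineering step: deriving an object class from a relational table,
# turning a foreign key into an object reference.
from dataclasses import dataclass

@dataclass
class Department:           # from table DEPARTMENT(dept_id PK, name)
    dept_id: int
    name: str

@dataclass
class Student:              # from table STUDENT(student_id PK, name, dept_id FK)
    student_id: int
    name: str
    department: Department  # FK dept_id becomes a direct object reference

cs = Department(1, "Computer Science")
s = Student(42, "Ada", cs)
print(s.department.name)    # navigate the reference instead of joining
```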

    An Ontology-Based Assistant For Analyzing Agents' Activities

    This thesis reports on work in progress on software that helps an analyst identify and analyze activities of actors (such as vehicles) in an intelligence-relevant scenario. A system, IAGOA (Intelligence Analyst's Geospatial and Ontological Assistant), is being developed to aid intelligence analysts. Analysis may be accomplished by retrieving simulated satellite data of ground vehicles and interacting with software modules that allow the analyst to conjecture the activities in which the actor is engaged, along with the (largely geospatial and temporal) features of the area of operation relevant to the natures of those activities. Activities are conceptualized by ontologies. The research relies on natural language components (semantic frames) gathered from the FrameNet lexical database, which captures the semantics of lexical items with an ontology using OWL. The software has two components: one for the analyst, and one for a modeler who produces HTML and parameterized KML documents used by the analyst. The most significant input to the modeler software is the FrameNet OWL file, and the interface for the analyst, and to some extent the modeler, is provided by the Google Earth API.
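    As a rough illustration of what a parameterized KML document might look like, the sketch below fills a template per scenario. Only the KML structure itself is standard; the parameter names are invented and not taken from the thesis.

```python
# A minimal sketch of "parameterized KML": a template whose
# placeholders the modeler fills per scenario. The KML elements are
# standard; the parameter names ($actor, $activity, ...) are made up.
from string import Template

KML_TEMPLATE = Template("""\
<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>$actor</name>
    <description>$activity</description>
    <Point><coordinates>$lon,$lat,0</coordinates></Point>
  </Placemark>
</kml>
""")

doc = KML_TEMPLATE.substitute(actor="vehicle-7", activity="resupply run",
                              lon=-83.74, lat=42.28)
print(doc)  # ready for display through the Google Earth API
```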

    Introduction to Database Design (5th Edition)


    Ontological approach for database integration

    Database integration is one of the research areas that has gained a lot of attention from researchers. Its goal is to represent the data from different database sources in one unified form. To reach database integration we have to face two obstacles: the first is the distribution of data, and the second is heterogeneity. The Web addresses the distribution problem, and for heterogeneity there are many approaches that can be used to solve the database integration problem, such as data warehouses and federated databases. The problem with these two approaches is the lack of semantics. Therefore, our approach exploits the Semantic Web methodology. The hybrid ontology method can be applied to solve the database integration problem. In this method two elements are available, the source (database) and the domain ontology; however, the local ontology is missing. In fact, to ensure the success of this method the local ontologies should be produced. Our approach obtains the semantics from the logical model of the database to generate a local ontology; validation and enhancement are then acquired from the semantics obtained from the conceptual model of the database. Our approach thus comprises a generation phase and a validation-enrichment phase. In the generation phase, we utilise reverse engineering techniques in order to capture the semantics hidden in the SQL language. The approach then reproduces the logical model of the database. Finally, our transformation system is applied to generate an ontology. In our transformation system, all the concepts of classes, relationships and axioms are generated. Firstly, the process of class creation contains many rules participating together to produce classes. Our rules succeed in solving problems such as fragmentation and hierarchy; they also eliminate the superfluous classes of multi-valued attribute relations and take care of neglected cases such as relationships with additional attributes. The final class-creation rule handles generic relation cases. The rules for relationships between concepts are generated while eliminating the relationships between integrated concepts. Finally, there are many rules that consider the relationship and attribute constraints, which should be transformed to axioms in the ontological model. The formal rules of our approach are domain independent; they also produce a generic ontology that is not restricted to a specific ontology language. The rules consider the gap between the database model and the ontological model; therefore, some database constructs have no equivalent in the ontological model. The second phase consists of the validation and enrichment processes. The best way to validate the transformation result is to use the semantics obtained from the conceptual model of the database. In the validation phase, the domain expert captures the missing or superfluous concepts (classes or relationships). In the enrichment phase, the generalisation method can be applied to classes that share common attributes; also, complex or composite attributes can be represented as classes. We implement the transformation system in a tool called SQL2OWL in order to show the correctness and functionality of our approach. The evaluation of our system showed the success of the proposed approach. The evaluation goes through several techniques. Firstly, a comparative study is held between the results produced by our approach and those of similar approaches. The second evaluation technique is a weighting score system which specifies the criteria that affect the transformation system. The final evaluation technique is a score scheme. We consider the quality of the transformation system by applying the compliance measure in order to show the strength of our approach compared to the existing approaches. Finally, the measures of success that our approach considered are system scalability and completeness.
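    To give a flavor of the kind of rule such a transformation system applies, the following simplified Python sketch maps toy table metadata to OWL-style classes, properties, and a cardinality axiom. This is an assumption-laden illustration, not the actual SQL2OWL rule set; the table names, metadata layout, and Turtle-like output are all invented.

```python
# A simplified sketch of the flavor of rule SQL2OWL-style systems apply
# (the thesis's actual rule set is far richer): each table becomes a
# class, plain columns become datatype properties, foreign keys become
# object properties, and a NOT NULL column yields a cardinality axiom.
# Output is Turtle-like text rather than a specific OWL API.

tables = {
    "Student": {
        "columns": {"name": "string", "dept_id": "integer"},
        "not_null": {"name"},
        "foreign_keys": {"dept_id": "Department"},
    },
    "Department": {"columns": {"name": "string"}, "not_null": set(),
                   "foreign_keys": {}},
}

for table, meta in tables.items():
    print(f":{table} a owl:Class .")
    for col, typ in meta["columns"].items():
        if col in meta["foreign_keys"]:
            target = meta["foreign_keys"][col]
            print(f":{table}_{col} a owl:ObjectProperty ; "
                  f"rdfs:domain :{table} ; rdfs:range :{target} .")
        else:
            print(f":{table}_{col} a owl:DatatypeProperty ; "
                  f"rdfs:domain :{table} ; rdfs:range xsd:{typ} .")
    for col in meta["not_null"]:
        print(f":{table} rdfs:subClassOf "
              f"[ a owl:Restriction ; owl:onProperty :{table}_{col} ; "
              f"owl:minCardinality 1 ] .")
```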

    An ontology-driven unifying metamodel of UML Class Diagrams, EER, and ORM2

    Software interoperability and application integration can be realized through using their respective conceptual data models, which may be represented in different conceptual data modeling languages. Such modeling languages seem similar, yet are known to be distinct. Several translations between subsets of the languages' features exist, but there is no unifying framework that respects most language features of the static structural components and constraints. We aim to fill this gap. To this end, we designed a common, unified, ontology-driven metamodel of the static structural components and constraints that unifies ER, EER, UML Class Diagrams v2.4.1, and ORM and ORM2, such that each one is a proper fragment of the consistent metamodel. The paper also presents some notable insights into the relatively few common entities and constraints, an analysis of roles, relationships, and attributes, and a discussion of other modeling motivations. We describe two practical use cases of the metamodel: a quantitative assessment of the entities of 30 models in ER/EER, UML, and ORM/ORM2, and a qualitative evaluation of inter-model assertions.
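    As a small illustration of the unification idea (a fragment far smaller than the paper's metamodel, with invented names), one shared metamodel construct can carry its per-language surface terms, so a UML Association, an EER Relationship, and an ORM2 Fact Type are views of the same thing.

```python
# An illustrative fragment (not the paper's metamodel) of the
# unification idea: one shared construct with the name it takes in
# each modeling language. The class name is invented for this sketch.
from dataclasses import dataclass, field

@dataclass
class MetamodelEntity:
    name: str
    # how the shared construct surfaces in each modeling language
    surface_names: dict[str, str] = field(default_factory=dict)

relationship = MetamodelEntity(
    "Relationship",
    {"UML": "Association", "EER": "Relationship", "ORM2": "Fact Type"},
)
for lang, term in relationship.surface_names.items():
    print(f"{lang}: {term}")
```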