    On the applicability of schema integration techniques to database interoperation

    We discuss the applicability of schema integration techniques developed for tightly-coupled database interoperation to the interoperation of databases stemming from different modelling contexts. We illustrate that in such an environment it is typically quite difficult to infer the real-world semantics of remote classes from their definitions in remote databases. However, defining relationships between the real-world semantics of schema elements is essential in existing schema integration techniques. We propose to base database interoperation in such environments on instance-level semantic relationships, to be defined using what we call object comparison rules. Both the local and the remote classifications of the appropriately merged instances are maintained, allowing for the derivation of a global class hierarchy if desired.
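
    The abstract does not spell out the rule syntax; the sketch below illustrates, under our own assumptions, how an instance-level object comparison rule and a merge that retains both classifications might look. All class, attribute, and function names (same_person, merge, Employee, Author) are hypothetical.

```python
# Minimal sketch of instance-level object comparison rules (hypothetical names).
# A rule decides whether a local and a remote instance denote the same
# real-world object; merged instances keep both classifications.

def same_person(local, remote):
    """Hypothetical comparison rule: match on normalized name and birth year."""
    return (local["name"].strip().lower() == remote["full_name"].strip().lower()
            and local["born"] == remote["birth_year"])

def merge(local, remote, local_class, remote_class):
    """Merge matched instances, retaining local and remote classifications."""
    merged = {**remote, **local}                      # local values take precedence
    merged["classifications"] = {local_class, remote_class}
    return merged

local_db = [{"name": "Ada Lovelace", "born": 1815}]
remote_db = [{"full_name": "ada lovelace", "birth_year": 1815, "affiliation": "London"}]

for l in local_db:
    for r in remote_db:
        if same_person(l, r):
            print(merge(l, r, "Employee", "Author"))
```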

    Microarray Data Management. An Enterprise Information Approach: Implementations and Challenges

    The extraction of information from high-throughput experiments is a key aspect of modern biology. Early in the development of microarray technology, researchers recognized that the size of the datasets and the limitations of both computational and visualization techniques restricted their ability to find the biological meaning hidden in the data. In addition, most researchers wanted to make their datasets accessible to others. This resulted in the development of new and advanced data storage, analysis, and visualization tools enabling the cross-platform validation of experiments and the identification of previously undetected patterns. To reap the benefits of these microarray data, researchers have needed to implement database management systems that integrate different experiments and data types. Moreover, it was necessary to standardize the basic data structures and experimental techniques across microarray platforms. In this chapter, we introduce the reader to the major concepts related to the use of controlled vocabularies (ontologies) and the definition of the Minimum Information About a Microarray Experiment (MIAME), and provide an overview of different microarray data management strategies in use today. We summarize the main characteristics of microarray data storage and sharing strategies, including warehouses, datamarts, and federations. The fundamental challenges involved in the distribution and retrieval of microarray data are presented, along with an overview of some emerging technologies.
    Comment: 10 pages, 12 figures. To appear in: Database Modeling in Biology: Practices and Challenges. Ma, Zongmin; Chen, Jake (Eds.), Springer Science+Business Media, Inc., New York, USA (2006). ISBN: 0-387-30238-
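
    As a rough illustration of the MIAME and controlled-vocabulary concepts mentioned above, a minimal experiment annotation might look like the sketch below; the field names are illustrative shorthand, not the normative MIAME checklist.

```python
# Illustrative (not normative) MIAME-style annotation for one microarray
# experiment. MIAME asks for experiment design, array design, samples,
# hybridizations, measurements, and normalization; ontology terms pin down
# otherwise free-text fields.

experiment = {
    "design": "dose response",                      # experiment design
    "array_platform": "Affymetrix HG-U133A",        # array design reference
    "samples": [
        {"organism": "Homo sapiens",                # controlled vocabulary term
         "tissue": "liver",
         "ontology_ref": "NCBITaxon:9606"},         # ontology accession
    ],
    "hybridizations": 12,
    "normalization": "quantile",
    "raw_data_uri": "ftp://example.org/raw/exp42",  # hypothetical location
}

# A federation or warehouse would validate such records against a shared
# schema before integrating them with other experiments.
print(experiment["samples"][0]["ontology_ref"])
```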

    Geospatial data harmonization from regional level to European level: a use case in forest fire data

    Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
    Geospatial data harmonization is becoming increasingly important for the interoperability of heterogeneous data derived from various sources in spatial data infrastructures. To address this harmonization issue, we present the current status of data availability across different communities, languages, and administrative scales, from the regional to the national and European levels. With a use case in forest data models in Europe, the interoperability of burned area data from Europe and from the Valencia Community in Spain was tested and analyzed at the syntactic, schematic, and semantic levels. We suggest approaches for achieving a higher chance of data interoperability to guide forest domain experts in forest fire analysis. To test syntactic interoperability, a common platform in the context of formats and web services was examined. We found that establishing OGC-standard web services, in combination with GIS software applications that support various formats and web services, can increase the chance of achieving syntactic interoperability between geospatial data derived from different sources. To test schematic and semantic interoperability, an ontology-based schema mapping approach was taken to transform a regional data model to a European data model at the conceptual level. The Feature Manipulation Engine enabled various types of data transformation from source to target attributes to achieve schematic interoperability. Ontological modelling in Protégé helped identify common concepts between the source and target data models, especially where matching attributes were not found at the schematic level. Establishing a domain ontology was explored as a way to reach common ground between application ontologies and achieve a higher level of semantic interoperability.
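
    The thesis performed the actual transformations with the Feature Manipulation Engine; purely as an illustration of schematic mapping, renaming attributes and recoding domain values from a regional burned-area record into a European-level schema could look like the following sketch, in which every attribute name and code is hypothetical.

```python
# Hypothetical schematic mapping from a regional burned-area record to a
# European-level target schema: rename attributes and recode domain values.

ATTRIBUTE_MAP = {            # source attribute -> target attribute
    "sup_quemada_ha": "burned_area_ha",
    "fecha_inicio": "fire_start_date",
    "municipio": "admin_unit",
}

VALUE_RECODING = {           # source attribute -> (target attribute, code map)
    "causa": ("fire_cause", {"rayo": "natural", "negligencia": "accidental"}),
}

def transform(record):
    out = {ATTRIBUTE_MAP[k]: v for k, v in record.items() if k in ATTRIBUTE_MAP}
    for src_attr, (tgt_attr, codes) in VALUE_RECODING.items():
        if src_attr in record:
            out[tgt_attr] = codes.get(record[src_attr], "unknown")
    return out

regional = {"sup_quemada_ha": 12.4, "fecha_inicio": "2012-07-01",
            "municipio": "Valencia", "causa": "rayo"}
print(transform(regional))
# {'burned_area_ha': 12.4, 'fire_start_date': '2012-07-01',
#  'admin_unit': 'Valencia', 'fire_cause': 'natural'}
```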

    Challenge of Design Data Exchange between heterogeneous Database Schema

    The development of complex systems is becoming increasingly difficult. The diversity and number of tools necessary to develop such systems is extensive. One solution for exchanging engineering data between these tools is the Standard for Product Data Exchange (STEP), ISO 10303. It offers domain-specific database schemas called application protocols. This paper analyses the problems of data exchange between tools via an application protocol and proposes a Transformation Report, which records the individual transformation steps during the exchange between two different tools. The Transformation Report supports understanding of the data exchange process and helps correct incomplete or incorrectly transformed data.
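
    The paper does not reproduce the report format, so the following is a minimal sketch of the Transformation Report idea under our own assumptions: each entry records one mapping step, and lossy steps can be listed for review when correcting incomplete or incorrect data. All field and tool names are hypothetical.

```python
# Minimal sketch of a Transformation Report: an audit trail of the individual
# steps applied while exchanging design data between two tools via a STEP
# application protocol. Field names are hypothetical.

from dataclasses import dataclass, field
from typing import List

@dataclass
class TransformationStep:
    source_entity: str        # entity in the sending tool's schema
    target_entity: str        # entity in the receiving tool's schema
    rule: str                 # mapping rule that was applied
    lossy: bool = False       # flag steps that dropped or approximated data

@dataclass
class TransformationReport:
    source_tool: str
    target_tool: str
    steps: List[TransformationStep] = field(default_factory=list)

    def log(self, step: TransformationStep):
        self.steps.append(step)

    def problems(self):
        """Steps worth reviewing when correcting an incomplete transfer."""
        return [s for s in self.steps if s.lossy]

report = TransformationReport("CAD-A", "CAD-B")
report.log(TransformationStep("ap214_part", "part_master", "direct map"))
report.log(TransformationStep("ap214_tolerance", "-", "unsupported, dropped", lossy=True))
print([s.source_entity for s in report.problems()])   # ['ap214_tolerance']
```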

    A cooperative framework for molecular biology database integration using image object selection

    The concept of 'Molecular Biology Database Integration' and the problems associated with it initiated the idea for this Ph.D. research. The available technologies make it possible to analyse data independently and discretely, but they fail to integrate the data resources into more meaningful information. This, together with the integration issues, defined the scope of this Ph.D. research. The research has reviewed the 'database interoperability' problems and suggested a framework for integrating molecular biology databases. The framework proposes a cooperative environment in which molecular biology databases share information on the basis of common purpose. The research has also reviewed other implementation and interoperability issues for laboratory-based, dedicated, and target-specific databases. The research addresses the following issues: the diversity of molecular biology database schemas, schema constructs, and schema implementations; multi-database query using image object keying; database integration technologies using context graphs; and automated navigation among these databases. This thesis introduces a new approach for database implementation: an interoperable component database concept for initiating multidatabase queries on gene mutation data. A number of data models are proposed for gene mutation data, which form the basis for integrating the target-specific component database with the federated information system. The proposed data models cover genetic trait analysis, the classification of gene mutation data, pathological lesion data, and laboratory data. The main feature of this component database is its non-overlapping attributes, and it follows the non-redundant integration approach explained in the thesis: it stores only attributes that do not occur, whether in union or intersection, among the attributes of public-domain molecular biology databases. Unlike data warehousing techniques, this feature is novel. The component database is integrated with other biological data sources for sharing information in a cooperative environment. This involves developing new tools, whose roles the thesis explains: a metadata extractor, a mapping linker, a query generator, and a result interpreter. These tools are used for transparent integration without creating any global schema of the participating databases. The thesis also establishes the concept of image object keying for multidatabase query and proposes an algorithm for matching protein spots in gel electrophoresis images. A spot selected by the user in a gel electrophoresis image initiates the query, which matches the selected spot against similar spots in other resource databases. This image object keying method is an alternative to conventional multidatabase query, which requires writing complex SQL scripts, and it also resolves the semantic conflicts that exist among molecular biology databases.
    The research proposes a new framework, based on the context of web data, for interactions with different biological data resources. A formal description of the resource context is given in the thesis. Implementing the context in the Resource Description Framework (RDF) increases interoperability by providing descriptions of the resources and a navigation plan for accessing the web-based databases. A higher-level construct (has, provide, and access) is developed to implement the context in RDF for web interactions. Interactions within the resources are achieved by utilising an integration domain to extract the required information in a single instance and without writing any query scripts. The integration domain makes it possible to navigate and to execute the query plan within the resource databases. An extractor module collects elements from different target web sources and unifies them as a whole object in a single page. The proposed framework was tested by finding specific information, e.g. on Alzheimer's disease, in public-domain biology resources such as the Protein Data Bank, the Genome Data Bank, Online Mendelian Inheritance in Man, and a local database. Finally, the thesis sets out further propositions and plans for future work.
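
    The thesis' higher-level construct (has, provide, access) is only named in the abstract; below is a toy sketch of how such resource-context triples and a navigation plan might be queried, with made-up resource and information names.

```python
# Toy sketch of a resource context expressed as (subject, construct, object)
# triples using the has / provide / access constructs named in the thesis.
# Resource names and the navigation logic are illustrative only.

triples = [
    ("ProteinDataBank", "has", "protein_structure"),
    ("ProteinDataBank", "provide", "pdb_entry_page"),
    ("GenomeDataBank", "has", "gene_sequence"),
    ("OMIM", "has", "disease_gene_association"),
    ("LocalDB", "access", "ProteinDataBank"),   # navigation link between resources
    ("LocalDB", "access", "OMIM"),
]

def resources_with(information):
    """Which resources hold a given kind of information?"""
    return [s for s, p, o in triples if p == "has" and o == information]

def navigation_plan(start):
    """Resources reachable from a starting database via 'access' links."""
    return [o for s, p, o in triples if p == "access" and s == start]

print(resources_with("disease_gene_association"))   # ['OMIM']
print(navigation_plan("LocalDB"))                   # ['ProteinDataBank', 'OMIM']
```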

    Exploring Semantic Interoperability in e-Government Interoperability Frameworks for intra-African collaboration: A Systematic Literature Review

    While many African countries have called for ICT-based intra-African collaboration, services, and trade, it is not known whether this call is technically feasible. Such intra-African collaboration would require semantic interoperability between the national e-government systems. This paper reviewed the e-government interoperability frameworks (e-GIFs) of English- and Arabic-speaking African countries to identify the evidence for, and conflicting approaches to, semantic interoperability. The results suggest that only seven African countries have e-GIFs, which have mainly been adopted from the UK's e-Government Metadata Standard (eGMS) and from Dublin Core metadata (DC). However, many of the e-GIFs, with the exception of Nigeria's, have not been contextualized to local needs. The paper therefore concludes that more effort needs to be placed on developing e-GIFs in Africa, with particular emphasis on semantic interoperability, if the dream of intra-African collaboration is to be achieved.

    Ontology-Based Resolution of Cloud Data Lock-in Problem

    Cloud computing is becoming a popular paradigm for the provision of computing infrastructure that enables organizations to achieve financial savings. On the other hand, there are some known obstacles, among which vendor lock-in stands out. Furthermore, due to missing standards and the heterogeneity of cloud storage systems, the migration of data to alternative cloud providers is expensive and time-consuming. We propose an approach based on Semantic Web services and AI planning to tackle the cloud vendor data lock-in problem. To this end, data structures and data-type mapping rules between different types of cloud storage systems are defined. The migration of data among different platform-as-a-service providers is presented to demonstrate the practical applicability of the proposed approach. Additionally, the concept was applied to the software-as-a-service model of cloud computing to perform a one-shot data migration from Zoho CRM to Salesforce CRM.
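
    The paper's mapping rules are ontology-based and not given in the abstract; as a loose illustration of the data-type mapping idea only, migrating a record between two hypothetical cloud storage schemas could look like this sketch (all type names and converters are invented).

```python
# Loose illustration of data-type mapping between two hypothetical cloud
# storage systems during migration; not the paper's ontology-based rules.

TYPE_MAP = {                      # source type -> (target type, converter)
    "autonumber": ("id", str),
    "currency": ("decimal", float),
    "picklist": ("enum", lambda v: v.upper()),
}

def migrate_field(name, source_type, value):
    target_type, convert = TYPE_MAP[source_type]
    return name, target_type, convert(value)

source_record = [("account_id", "autonumber", 1042),
                 ("deal_size", "currency", "9500.50"),
                 ("stage", "picklist", "negotiation")]

migrated = [migrate_field(*f) for f in source_record]
print(migrated)
# [('account_id', 'id', '1042'), ('deal_size', 'decimal', 9500.5),
#  ('stage', 'enum', 'NEGOTIATION')]
```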

    Towards Conceptual Modelling Interoperability in a Web Tool for Ontology Engineering

    The definition of suitable visual paradigms for ontology modelling is still an open issue. Despite obvious differences between the expressiveness of conceptual modelling (CM) languages and ontologies, many proposed tools have been based on UML, EER, and ORM. Moreover, each of these tools supports only one CM language as its visual language, further reducing its modelling capabilities. In previous works, we presented crowd, a Web architecture for graphical ontology design in UML with logical reasoning to verify the relevant properties of the resulting models. The aim of this tool is to extend reasoning capabilities on top of visual representations as far as possible. In this paper, we present an extended crowd architecture and a new prototype built around an ontology-driven metamodel that enables different CM visual languages for ontology modelling, thus facilitating inter-model assertions across models represented in different languages, conversion between modelling languages, and reasoning over them. Finally, we detail the new architecture and demonstrate the prototype with simple examples.
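
    crowd's ontology-driven metamodel is not reproduced in the abstract; the toy sketch below shows the general idea of metamodel-mediated conversion, mapping constructs of two visual languages through shared metamodel concepts. The construct names are illustrative, not crowd's actual metamodel.

```python
# Toy sketch of metamodel-mediated conversion between conceptual modelling
# languages: each language's construct is mapped to a shared metamodel
# concept, so a model can be re-expressed in another language.

TO_METAMODEL = {
    "uml": {"class": "ObjectType", "association": "Relationship"},
    "orm": {"entity type": "ObjectType", "fact type": "Relationship"},
}
FROM_METAMODEL = {lang: {v: k for k, v in m.items()}
                  for lang, m in TO_METAMODEL.items()}

def convert(model, source_lang, target_lang):
    """Translate each element via its shared metamodel concept."""
    return [(name, FROM_METAMODEL[target_lang][TO_METAMODEL[source_lang][kind]])
            for name, kind in model]

uml_model = [("Person", "class"), ("worksFor", "association")]
print(convert(uml_model, "uml", "orm"))
# [('Person', 'entity type'), ('worksFor', 'fact type')]
```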

    Urban information model for city planning

    City planning is a complex task and therefore needs to consider the interplay between multiple aspects of a city, for example transport, pollution, and crime. A city model is important for representing urban issues clearly to the relevant stakeholders. Although some city models have been used in the planning process, they are often based on narrow data sets. When sustainability and the quality of urban life more generally are considered, a more holistic analysis of city issues during the planning process is needed, which calls for city models based on integrated data sets. The paper describes the concept of, and challenges for, an nD urban information model, and presents research on how to develop an nD urban information model that accommodates data sets relevant to the different aspects of city planning.
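
    As a rough illustration of what an nD urban information model integrates (the paper's actual model is far richer), the sketch below attaches thematic data sets to shared spatial units so that a planning query can cut across aspects; all names and figures are invented.

```python
# Rough illustration of an nD urban information model: several thematic data
# sets (transport, pollution, crime) attached to shared spatial units so a
# planning query can cut across aspects. All figures are invented.

city = {
    "ward-01": {"transport": {"bus_routes": 7},
                "pollution": {"no2_ugm3": 41.0},
                "crime": {"incidents_per_1000": 12.3}},
    "ward-02": {"transport": {"bus_routes": 2},
                "pollution": {"no2_ugm3": 18.5},
                "crime": {"incidents_per_1000": 6.1}},
}

def holistic_view(ward):
    """Cross-aspect summary of one spatial unit for planners."""
    return {k: v for aspect in city[ward].values() for k, v in aspect.items()}

print(holistic_view("ward-01"))
# {'bus_routes': 7, 'no2_ugm3': 41.0, 'incidents_per_1000': 12.3}
```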