
    Metadata-driven Data Migration from Object-relational Database to NoSQL Document-oriented Database

    Object-relational databases (ORDBs) are powerful for managing complex data, but they suffer from scalability problems when handling large-scale data. The importance of migrating an ORDB to NoSQL therefore derives from the fact that large volumes of data can be handled with high scalability and availability. This paper reports our metadata-driven approach for migrating an ORDB to a document-oriented NoSQL database. Our data migration approach involves three major stages: a preprocessing stage, to extract the data and the schema components; a processing stage, to perform the data transformation; and a post-processing stage, to store the migrated data as BSON documents. The approach maintains the benefits of the Oracle ORDB in NoSQL MongoDB by supporting integrity constraint checking. To validate our approach, we developed the OR2DOD (Object Relational to Document-Oriented Databases) system, and the experimental results confirm the effectiveness of our proposal.
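    The OR2DOD implementation itself is not reproduced here; the following minimal Python sketch only illustrates the three-stage shape described above, with a hard-coded stand-in for the ORDB extraction step and pymongo handling the BSON storage. All table, column, and constraint names are invented examples.

```python
"""Illustrative three-stage migration sketch (not the OR2DOD implementation).

Assumptions: `extract_rows_and_schema` stands in for the Oracle ORDB
preprocessing stage, and a local MongoDB instance is reachable on the
default port.
"""
from pymongo import MongoClient


def extract_rows_and_schema():
    # Preprocessing stage (hypothetical): pull rows plus schema metadata
    # from the source ORDB. Hard-coded here for illustration only.
    schema = {"table": "customer", "not_null": ["id", "name"]}
    rows = [
        {"id": 1, "name": "Alice", "address": {"city": "Fes", "zip": "30000"}},
        {"id": 2, "name": "Bob", "address": None},
    ]
    return schema, rows


def transform(row, schema):
    # Processing stage: enforce integrity constraints carried over from
    # the ORDB schema, then reshape the row into a nested document.
    for column in schema["not_null"]:
        if row.get(column) is None:
            raise ValueError(f"NOT NULL constraint violated on '{column}'")
    return {key: value for key, value in row.items() if value is not None}


def migrate():
    schema, rows = extract_rows_and_schema()
    client = MongoClient("mongodb://localhost:27017")
    collection = client["migrated"][schema["table"]]
    # Post-processing stage: pymongo serializes each dict to BSON on insert.
    collection.insert_many(transform(row, schema) for row in rows)


if __name__ == "__main__":
    migrate()
```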

    A data transformation model for relational and non-relational data

    The information systems that support small, medium, and large organisations need data transformation solutions that draw on multiple data sources to fulfil the requirements of new applications and of decision-making, and to stay competitive. Relational data is the foundation of the majority of existing application programs, whereas non-relational data is the foundation of the majority of newly produced applications. The relational model is the most elegant one; nonetheless, this kind of database has a drawback when it comes to managing very large volumes of data. Because they can handle massive volumes of data, non-relational databases have evolved into substitutes for relational databases. The key issue is that the rules for data transformation processes across different data types are becoming less well-defined, leading to a steady decline in data quality. Therefore, an empirical model in this domain is required to handle relational and non-relational data while satisfying data quality requirements. This study develops a data transformation model for different data sources that satisfies data quality requirements, particularly for transformation processes between the relational and non-relational models, named Data Transformation with Two ETL Phases and Central-Library (DTTEPC). The stages and methods of the developed model transform metadata information and stored data from relational to non-relational systems, and vice versa. The model was developed and validated through expert review, and a prototype based on the final version was employed in two case studies: education and healthcare. The results of the usability test demonstrate that the developed model is capable of transforming metadata and stored data across systems, thereby enhancing the information systems of various organizations through data transformation solutions. The DTTEPC model improved the integrity and completeness of the data transformation processes. Moreover, it supports decision-makers by utilizing information from various sources and systems on demand in real time.
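    As a rough illustration of the two-phase idea, and not the published DTTEPC model, the sketch below keeps table metadata in a central library and reuses it to move stored data in both directions; the table and field names are invented.

```python
"""Minimal sketch of a two-phase, metadata-driven transformation in the
spirit of DTTEPC (not the published model). The "central library" is
modelled as a plain dict; table and field names are invented examples."""

# Phase 1 (ETL over metadata): describe the relational source once and
# keep the description in a central library shared by both directions.
CENTRAL_LIBRARY = {
    "patient": {
        "key": "patient_id",
        "fields": ["patient_id", "name", "ward"],
    }
}


def relational_to_document(table, rows):
    # Phase 2 (ETL over stored data): use the registered metadata to emit
    # one document per relational row.
    meta = CENTRAL_LIBRARY[table]
    return [{field: row[field] for field in meta["fields"]} for row in rows]


def document_to_relational(table, documents):
    # Reverse direction: project documents back onto the declared columns,
    # filling absent fields with None so every row has the same shape.
    meta = CENTRAL_LIBRARY[table]
    return [tuple(doc.get(field) for field in meta["fields"]) for doc in documents]


if __name__ == "__main__":
    rows = [{"patient_id": 1, "name": "Ada", "ward": "A3"}]
    docs = relational_to_document("patient", rows)
    print(docs)
    print(document_to_relational("patient", docs))
```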

    NoSQL Database Modeling and Management: A Systematic Literature Review

    The NoSQL databases that emerged this century were created to overcome the limitations of relational database systems in the face of the new types of data that have appeared for information processing. In this paper, we present the results of a secondary study carried out to find and synthesize the research conducted so far on modeling processes, the characteristics of the data types used, and management tools for NoSQL databases. Currently, four types are recognized and classified according to the data model they use: key-value, document-oriented, column-based, and graph-based. With this study, it was possible to identify that the most frequently used type of NoSQL database model is the document-oriented one, because it offers greater flexibility and versatility compared to the other three models, although it involves more complex search methods; in terms of data, column and document schemas are the ones that usually describe their characteristics. It was also possible to observe a trend towards the column-oriented and document-oriented models in the management tools and, although they all provide the basic functionalities, the differences lie in the way the information is stored and the way it can be accessed.
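    The flexibility attributed to the document-oriented model can be illustrated with a toy example: records in the same collection may carry different fields, something a single relational schema would only accommodate through NULL columns or schema changes, at the cost of queries that must tolerate missing fields.

```python
# Toy illustration of the schema flexibility attributed to the
# document-oriented model: two records in the same collection carry
# different fields, unlike rows constrained by a single relational schema.
books = [
    {"_id": 1, "title": "NoSQL Distilled", "authors": ["Sadalage", "Fowler"]},
    {"_id": 2, "title": "Graph Databases", "edition": 2, "topics": ["graphs"]},
]

# Queries must therefore tolerate missing fields, which is part of the
# more complex search behaviour noted above.
with_edition = [book for book in books if "edition" in book]
print(with_edition)
```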

    An evaluation of the challenges of Multilingualism in Data Warehouse development

    In this paper we discuss Business Intelligence and define what is meant by support for Multilingualism in a Business Intelligence reporting context. We identify support for Multilingualism as a challenging issue with implications for data warehouse design and reporting performance. Data warehouses are a core component of most Business Intelligence systems, and the star schema is the approach most widely used to develop data warehouses and dimensional Data Marts. We discuss the ways in which Multilingualism can be supported in the star schema and identify that current approaches have serious limitations, including data redundancy as well as data manipulation, performance, and maintenance issues. We propose a new approach to enable the optimal application of Multilingualism in Business Intelligence. The proposed approach was found to produce satisfactory results when used in a proof-of-concept environment. Future work will include testing the approach in an enterprise environment.

    Disaster Data Management in Cloud Environments

    Facilitating decision-making in a vital discipline such as disaster management requires information gathering, sharing, and integration on a global scale and across governments, industries, communities, and academia. A large quantity of immensely heterogeneous disaster-related data is available; however, current data management solutions offer few or no integration capabilities and limited potential for collaboration. Moreover, recent advances in cloud computing, Big Data, and NoSQL have opened the door for new solutions in disaster data management. In this thesis, a Knowledge as a Service (KaaS) framework is proposed for disaster cloud data management (Disaster-CDM) with the objectives of 1) facilitating information gathering and sharing, 2) storing large amounts of disaster-related data from diverse sources, and 3) facilitating search and supporting interoperability and integration. Data are stored in a cloud environment, taking advantage of NoSQL data stores. The proposed framework is generic, but this thesis focuses on the disaster management domain and the data formats commonly present in that domain, i.e., file-style formats such as PDF, text, MS Office files, and images. The framework component responsible for addressing simulation models is SimOnto. SimOnto, as proposed in this work, transforms domain simulation models into an ontology-based representation with the goal of facilitating integration with other data sources, supporting simulation model querying, and enabling rule and constraint validation. Two case studies presented in this thesis illustrate the use of Disaster-CDM on the data collected during the Disaster Response Network Enabled Platform (DR-NEP) project. The first case study demonstrates Disaster-CDM's integration capabilities through full-text search and querying services. In contrast to direct full-text search, Disaster-CDM full-text search also covers simulation model files as well as text contained in image files. Moreover, Disaster-CDM provides querying capabilities, and this case study demonstrates how file-style data can be queried by taking advantage of a NoSQL document data store. The second case study focuses on simulation models and uses SimOnto to transform proprietary simulation models into ontology-based models, which are then stored in a graph database. This case study demonstrates Disaster-CDM's benefits by showing how simulation models can be queried and how model compliance with rules and constraints can be validated.
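    The thesis's storage design is not reproduced here, but the following sketch suggests how file-derived text could be indexed and searched in a NoSQL document store, using MongoDB's text index as one possible backend; the file names and extracted snippets are invented examples.

```python
"""Sketch of the document-store side of a Disaster-CDM-style full-text
search (not the thesis implementation). Assumes a local MongoDB instance;
the extracted text snippets are invented examples standing in for PDF,
image-OCR, and simulation-model content."""
from pymongo import MongoClient, TEXT

client = MongoClient("mongodb://localhost:27017")
files = client["disaster_cdm"]["files"]

# Each stored document keeps the source file's name, its format, and the
# text extracted from it (e.g. by OCR for images).
files.insert_many([
    {"file": "bridge_report.pdf", "format": "pdf",
     "content": "Flood damage assessment for the north bridge."},
    {"file": "site_photo.png", "format": "image",
     "content": "Road closed due to flood water."},  # OCR output
    {"file": "power_grid_model.xml", "format": "simulation",
     "content": "Substation S4 depends on feeder line F2."},
])

# A text index makes all file-derived content searchable in one query,
# including text that originated in images and simulation models.
files.create_index([("content", TEXT)])
for hit in files.find({"$text": {"$search": "flood"}}):
    print(hit["file"])
```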

    An Environment for the Analysis and Quality Assessment of Big and Linked Data

    Linking and publishing data in the Linked Open Data format increases the interoperability and discoverability of resources over the Web. To accomplish this, the process comprises several design decisions based on the Linked Data principles, which on one hand recommend the use of standards for the representation of and access to data on the Web, and on the other hand the setting of hyperlinks between data from different sources. Despite the efforts of the World Wide Web Consortium (W3C), the main international standards organization for the World Wide Web, there is no single tailored formula for publishing data as Linked Data. In addition, the quality of the published Linked Open Data (LOD) is a fundamental issue, and it has yet to be thoroughly managed and considered. The main objective of this doctoral thesis is to design and implement a novel framework for selecting, analyzing, converting, interlinking, and publishing data from diverse sources, while paying close attention to quality assessment throughout all steps and modules of the framework. The goal is to examine whether, and to what extent, Semantic Web technologies are applicable for merging data from different sources and enabling end users to obtain additional information that was not available in the individual datasets, in addition to integration into the Semantic Web community space. Additionally, the thesis validates the applicability of the process in a specific and demanding use case, namely the creation and publication of an Arabic Linked Drug Dataset based on open drug datasets from selected Arabic countries, and discusses the quality issues observed across the linked data life-cycle. To that end, a Semantic Data Lake was established in the pharmaceutical domain that allows further integration and the development of different business services on top of the integrated data sources. Through data representation in an open, machine-readable format, the approach offers an optimal solution for information and data dissemination for building domain-specific applications, and for enriching and gaining value from the original dataset. This thesis showcases how the pharmaceutical domain benefits from evolving research trends for building competitive advantages. However, as elaborated in this thesis, a better understanding of the specifics of the Arabic language is required to extend the use of linked data technologies in the targeted Arabic organizations.
    Linking and publishing data in the Linked Open Data format increases interoperability and the discoverability of resources over the Web. The process is based on the Linked Data principles (W3C, 2006), which on one hand elaborate standards for representing and accessing data on the Web (RDF, OWL, SPARQL) and on the other hand suggest the use of hyperlinks between data from different sources. Despite the efforts of the W3C consortium (the main international standards organization for the Web), there is no single formula for implementing the process of publishing data in the Linked Data format. Considering that the quality of published Linked Open Data is decisive for the further development of the Web, the main objectives of this doctoral dissertation are (1) the design and implementation of an innovative framework for selecting, analyzing, converting, interlinking, and publishing data from different sources, and (2) an analysis of the application of this approach in the pharmaceutical domain. The dissertation investigates in detail the question of the quality of big and linked data ecosystems (Linked Data Ecosystems), taking into account the possibility of reusing open data. The work is motivated by the need to enable researchers from Arab countries to use Semantic Web technologies to link their data with open data such as DBpedia. The goal is to examine whether open data from Arab countries enable end users to obtain additional information that is not available in the individual datasets, in addition to integration into the Semantic Web space. The dissertation proposes a methodology for developing a Linked Data application and implements a software solution that enables querying of a consolidated dataset of drugs from selected Arab countries. The consolidated dataset is implemented in the form of a Semantic Data Lake. This thesis shows how the pharmaceutical industry benefits from the application of innovative technologies and research trends in the field of semantic technologies. However, as elaborated in this thesis, a better understanding of the specifics of the Arabic language is required to implement Linked Data tools and apply them to data from Arab countries.
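    A minimal sketch of the interlinking and standards-based access steps described above, using rdflib; the drug resource, its label, and the DBpedia target are illustrative examples rather than entries from the Arabic Linked Drug Dataset.

```python
"""Minimal interlinking sketch with rdflib (illustrative data only)."""
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import OWL, RDFS

EX = Namespace("http://example.org/drug/")

g = Graph()
g.bind("owl", OWL)

drug = EX["paracetamol"]
g.add((drug, RDFS.label, Literal("Paracetamol", lang="en")))
# Hyperlink between sources: assert equivalence with the DBpedia resource.
g.add((drug, OWL.sameAs, URIRef("http://dbpedia.org/resource/Paracetamol")))

# Standards-based access to the merged data: a SPARQL query over the graph.
results = g.query("""
    PREFIX owl: <http://www.w3.org/2002/07/owl#>
    SELECT ?drug ?same WHERE {
        ?drug owl:sameAs ?same .
    }
""")
for row in results:
    print(row.drug, row.same)
```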

    24th International Conference on Information Modelling and Knowledge Bases

    In the last three decades, information modelling and knowledge bases have become essential subjects, not only in academic communities related to information systems and computer science but also in the business areas where information technology is applied. The series of European-Japanese Conferences on Information Modelling and Knowledge Bases (EJC) originally started as a co-operation initiative between Japan and Finland in 1982. The practical operations were then organised by Professor Ohsuga in Japan and Professors Hannu Kangassalo and Hannu Jaakkola in Finland (Nordic countries). The geographical scope has since expanded to cover Europe and other countries. The workshop character of the conference is typical: discussion, ample time for presentations, and a limited number of participants (50) and papers (30). Suggested topics include, but are not limited to:
    1. Conceptual modelling: Modelling and specification languages; Domain-specific conceptual modelling; Concepts, concept theories and ontologies; Conceptual modelling of large and heterogeneous systems; Conceptual modelling of spatial, temporal and biological data; Methods for developing, validating and communicating conceptual models.
    2. Knowledge and information modelling and discovery: Knowledge discovery, knowledge representation and knowledge management; Advanced data mining and analysis methods; Conceptions of knowledge and information; Modelling information requirements; Intelligent information systems; Information recognition and information modelling.
    3. Linguistic modelling: Models of HCI; Information delivery to users; Intelligent informal querying; Linguistic foundations of information and knowledge; Fuzzy linguistic models; Philosophical and linguistic foundations of conceptual models.
    4. Cross-cultural communication and social computing: Cross-cultural support systems; Integration, evolution and migration of systems; Collaborative societies; Multicultural web-based software systems; Intercultural collaboration and support systems; Social computing, behavioral modeling and prediction.
    5. Environmental modelling and engineering: Environmental information systems (architecture); Spatial, temporal and observational information systems; Large-scale environmental systems; Collaborative knowledge base systems; Agent concepts and conceptualisation; Hazard prediction, prevention and steering systems.
    6. Multimedia data modelling and systems: Modelling multimedia information and knowledge; Content-based multimedia data management; Content-based multimedia retrieval; Privacy and context-enhancing technologies; Semantics and pragmatics of multimedia data; Metadata for multimedia information systems.
    Overall we received 56 submissions. After careful evaluation, 16 papers were selected as long papers, 17 as short papers, 5 as position papers, and 3 for presentation of perspective challenges. We thank all colleagues for their support of this issue of the EJC conference, especially the program committee, the organising committee, and the programme coordination team. The long and short papers presented at the conference are revised after the conference and published in the series "Frontiers in Artificial Intelligence" by IOS Press (Amsterdam). The books "Information Modelling and Knowledge Bases" are edited by the Editing Committee of the conference. We believe that the conference will be productive and fruitful in advancing research and the application of information modelling and knowledge bases. Bernhard Thalheim, Hannu Jaakkola, Yasushi Kiyoki

    Efficient Management of Large Models via Static Analysis

    As the size of software and system models grows, scalability issues in the current generation of model management languages (e.g. transformation, validation) and their supporting tooling become more prominent. With the growing popularity of MDE in larger projects, the efficient management and processing of large models have become critical considerations. To address this challenge, execution engines of model management programs need to become more efficient in their use of system resources. Effective resource management is essential not only for minimizing execution costs but also for optimizing resource usage, particularly in scenarios where resources are billed based on usage patterns. This thesis addresses this challenge by presenting an approach to enhance the efficiency of model management programs, which play a pivotal role in querying and manipulating models. This approach focuses on enabling execution engines to load only the necessary parts of models, minimizing the overhead associated with loading unnecessary model elements into memory. Through the utilization of in-advance knowledge obtained from static analysis of model management programs, execution engines can identify, load, and process only the model elements essential for execution. Furthermore, the approach ensures that elements are disposed of from memory when no longer needed, optimizing both memory utilization and processing time. Experimental evaluations demonstrate that our approach empowers model management programs to process larger models faster with a reduced memory footprint compared to current state-of-the-art approaches
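    The following toy sketch conveys the general idea rather than the thesis's actual engine: a footprint assumed to come from static analysis restricts which element types and features are materialized, so unused elements never reach memory and used ones can be discarded immediately after processing.

```python
"""Toy sketch of analysis-driven partial loading (not the thesis's engine).
`ACCESSED` stands in for the footprint a static analysis would extract
from a model management program; the model source is an invented list."""

# Footprint assumed to be produced by static analysis of the program:
# only 'Task' elements, and only their 'name' and 'effort' features.
ACCESSED = {"Task": {"name", "effort"}}

MODEL_SOURCE = [  # stand-in for a large persisted model
    {"type": "Task", "name": "design", "effort": 5, "notes": "..."},
    {"type": "Comment", "text": "ignore me"},
    {"type": "Task", "name": "build", "effort": 8, "notes": "..."},
]


def load_partial(source, footprint):
    """Yield only the elements and features the program will actually use,
    so unneeded elements never occupy memory."""
    for element in source:
        features = footprint.get(element["type"])
        if features is None:
            continue  # element type never touched by the program
        yield {key: element[key] for key in features}


total_effort = 0
for task in load_partial(MODEL_SOURCE, ACCESSED):
    # Each partially loaded element becomes garbage right after use,
    # mirroring the disposal step described above.
    total_effort += task["effort"]
print(total_effort)
```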

    Toward building RDB to HBase conversion rules

    Cloud data stores that can handle very large amounts of data, such as Apache HBase, have accelerated the use of non-relational databases (coined NoSQL databases) as a way of addressing relational database (RDB) limitations with regard to the scalability and performance of existing systems. Converting existing systems and their databases to this technology is not trivial, given that RDB practitioners tend to apply the relational design mindset when converting existing databases. This can result in inefficient NoSQL designs, leading to suboptimal query speed or an inefficient database schema. This paper reports on a two-phase experiment: (1) a conversion from an RDB to an HBase database, a NoSQL type of database, without the use of conversion rules and based on a heuristic approach, and (2) an experiment with different schema designs to uncover and extract conversion rules that could serve as useful conversion guidelines for the industry.
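    One commonly used denormalization pattern, which the extracted conversion rules may or may not coincide with, is to fold a one-to-many relation into a single HBase row keyed by the parent's primary key; the sketch below assumes a local Thrift server, an existing table, and invented table and column names.

```python
"""Illustrative RDB-to-HBase denormalization (one common pattern, not the
conversion rules derived in the paper). Assumes a Thrift server on
localhost and an existing 'customer_orders' table with column families
'info' and 'orders'; table and column names are invented examples."""
import happybase

# Relational source: a customer table and an order table joined by a
# foreign key (hard-coded here instead of querying an RDB).
customers = [{"id": "c1", "name": "Acme Ltd"}]
orders = [
    {"id": "o1", "customer_id": "c1", "total": "120.00"},
    {"id": "o2", "customer_id": "c1", "total": "80.50"},
]

connection = happybase.Connection("localhost")
table = connection.table("customer_orders")

for customer in customers:
    row_key = customer["id"].encode()
    # Parent attributes go into one column family...
    cells = {b"info:name": customer["name"].encode()}
    # ...and the one-to-many side is folded into the same row, using the
    # order id as the column qualifier, so a customer's orders are read
    # back with a single row lookup instead of a join.
    for order in orders:
        if order["customer_id"] == customer["id"]:
            cells[b"orders:" + order["id"].encode()] = order["total"].encode()
    table.put(row_key, cells)
```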

    Semantic Systems. The Power of AI and Knowledge Graphs

    This open access book constitutes the refereed proceedings of the 15th International Conference on Semantic Systems, SEMANTiCS 2019, held in Karlsruhe, Germany, in September 2019. The 20 full papers and 8 short papers presented in this volume were carefully reviewed and selected from 88 submissions. They cover topics such as: web semantics and linked (open) data; machine learning and deep learning techniques; semantic information management and knowledge integration; terminology, thesaurus and ontology management; data mining and knowledge discovery; semantics in blockchain and distributed ledger technologies