10 research outputs found

    Comparative analysis of non-relational and relational databases

    Relational databases (DBs) provide good support for data with a predetermined structure. However, recent trends in the IT field have shown a need for tools that can work with massive volumes of data (Big Data) that come in varying structures. Non-relational databases (NoSQL DBs) emerged to meet this need. One of the key characteristics of NoSQL DBs is that they can work with a huge range of different data structures. The article compares the most common relational and non-relational databases
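
    To make the contrast concrete, here is a minimal sketch (in Python, with SQLite standing in for a relational engine and a plain list of dicts standing in for a schemaless document store; the table and field names are illustrative, not taken from the article):

        import sqlite3, json

        # Relational side: the structure must be declared before any data is stored.
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
        conn.execute("INSERT INTO users (name, email) VALUES (?, ?)", ("Alice", "alice@example.com"))

        # Document (NoSQL) side: each record may carry a different structure.
        documents = [
            {"name": "Alice", "email": "alice@example.com"},
            {"name": "Bob", "devices": [{"type": "phone", "os": "Android"}], "last_login": "2024-01-01"},
        ]
        print(json.dumps(documents, indent=2))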

    Knowledge as a Service Framework for Disaster Data Management

    Each year, a number of natural disasters strike across the globe, killing hundreds and causing billions of dollars in property and infrastructure damage. Minimizing the impact of disasters is imperative in today’s society. As the capabilities of software and hardware evolve, so does the role of information and communication technology in disaster mitigation, preparation, response, and recovery. A large quantity of disaster-related data is available, including response plans, records of previous incidents, simulation data, social media data, and Web sites. However, current data management solutions offer few or no integration capabilities. Moreover, recent advances in cloud computing, big data, and NoSQL open the door for new solutions in disaster data management. In this paper, a Knowledge as a Service (KaaS) framework is proposed for disaster cloud data management (Disaster-CDM), with the objectives of 1) storing large amounts of disaster-related data from diverse sources, 2) facilitating search, and 3) supporting their interoperability and integration. Data are stored in a cloud environment using a combination of relational and NoSQL databases. The case study presented in this paper illustrates the use of Disaster-CDM on an example of simulation models
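
    A rough sketch of the storage idea, assuming a hypothetical ingestion router that sends fixed-schema records to a relational store and heterogeneous records (e.g. social media posts) to a document store; the names and routing rule are illustrative, not the paper's implementation:

        import sqlite3

        relational = sqlite3.connect(":memory:")  # stand-in for the relational database
        relational.execute("CREATE TABLE incidents (id INTEGER PRIMARY KEY, kind TEXT, region TEXT)")
        document_store = []                       # stand-in for the NoSQL document store

        def ingest(record):
            if set(record) == {"kind", "region"}:   # fixed schema -> relational store
                relational.execute("INSERT INTO incidents (kind, region) VALUES (?, ?)",
                                   (record["kind"], record["region"]))
            else:                                   # heterogeneous -> document store
                document_store.append(record)

        ingest({"kind": "flood", "region": "Fraser Valley"})
        ingest({"source": "twitter", "text": "Bridge closed near downtown", "geo": [49.28, -123.12]})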

    Database Security Issues and Challenges in Cloud Computing

    The majority of enterprises have recently enthusiastically embraced cloud computing, and at the same time, the database has moved to the cloud. This cloud database paradigm can lower data administration expenses and free up businesses to concentrate on the product being delivered. Furthermore, issues with scalability, flexibility, performance, availability, and affordability can be resolved with cloud computing. Security, however, has been noted as posing a serious risk to cloud databases, and addressing it is essential to fostering public acceptance of cloud computing. Several security factors should be taken into account before implementing any cloud database management system. These factors comprise, but are not restricted to, data privacy, data isolation, data availability, data integrity, confidentiality, and defense against insider threats. In this paper, we discuss the most recent research on the security risks and problems associated with adopting cloud databases. To better comprehend these problems and how they affect cloud databases, we also provide a conceptual model. Additionally, we examine these problems where relevant and present two examples of vendors and the security features used for their cloud-based databases. Finally, we provide an overview of the security risks associated with open cloud databases and suggest possible future paths
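
    As a concrete illustration of one of the listed factors (confidentiality), a minimal sketch of client-side encryption, so that the cloud database only ever stores ciphertext; this is a generic pattern, not a feature of any specific vendor discussed in the paper:

        from cryptography.fernet import Fernet   # pip install cryptography

        key = Fernet.generate_key()              # kept by the tenant, never shipped to the provider
        cipher = Fernet(key)

        plaintext = b"patient_id=4711;diagnosis=..."
        ciphertext = cipher.encrypt(plaintext)   # the value actually written to the cloud database
        assert cipher.decrypt(ciphertext) == plaintext   # only the key holder can read it back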

    Collaborative knowledge as a service applied to the disaster management domain

    Cloud computing offers services which promise to meet continuously increasing computing demands by using a large number of networked resources. However, data heterogeneity remains a major hurdle for data interoperability and data integration. In this context, a Knowledge as a Service (KaaS) approach has been proposed with the aim of generating knowledge from heterogeneous data and making it available as a service. In this paper, a Collaborative Knowledge as a Service (CKaaS) architecture is proposed, with the objective of satisfying consumer knowledge needs by integrating disparate cloud knowledge through collaboration among distributed KaaS entities. The NIST cloud computing reference architecture is extended by adding a KaaS layer that integrates diverse sources of data stored in a cloud environment. CKaaS implementation is domain-specific; therefore, this paper presents its application to the disaster management domain. A use case demonstrates collaboration of knowledge providers and shows how CKaaS operates with simulation models
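
    A hypothetical sketch of the collaboration idea: a consumer query is forwarded to several KaaS providers and their partial answers are merged into a single response. The provider names, data, and merge rule are illustrative, not taken from the paper:

        class KaaSProvider:
            def __init__(self, name, knowledge):
                self.name = name
                self.knowledge = knowledge              # e.g. facts derived from one cloud data source

            def answer(self, topic):
                return self.knowledge.get(topic, [])

        def collaborative_answer(providers, topic):
            merged = []
            for provider in providers:
                merged.extend((provider.name, fact) for fact in provider.answer(topic))
            return merged

        providers = [
            KaaSProvider("flood-sim", {"evacuation": ["route A passable"]}),
            KaaSProvider("social-media", {"evacuation": ["bridge B reported closed"]}),
        ]
        print(collaborative_answer(providers, "evacuation"))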

    Proliferating Cloud Density through Big Data Ecosystem, Novel XCLOUDX Classification and Emergence of as-a-Service Era

    Big Data is permeating the broader aspects of human life for scientific and commercial dependencies, especially for massive-scale data analytics beyond the exabyte magnitude. As the footprint of Big Data applications continuously expands, reliance on cloud environments is also increasing to obtain appropriate, robust, and affordable services for dealing with Big Data challenges. Cloud computing avoids any need to locally maintain an overly scaled computing infrastructure, which includes not only dedicated space but also expensive hardware and software. Several data models to process Big Data have already been developed and a number of such models are still emerging, potentially relying on heterogeneous underlying storage technologies, including cloud computing. In this paper, we investigate the growing role of cloud computing in the Big Data ecosystem. We also propose a novel XCLOUDX {XCloudX, X…X} classification to gauge the intuitiveness of the scientific names of cloud-assisted NoSQL Big Data models and analyze whether XCloudX always uses cloud computing underneath, or vice versa. XCloudX symbolizes those NoSQL Big Data models that embody the term “cloud” in their name, where X is any alphanumeric variable. The discussion is strengthened by a set of important case studies. Furthermore, we study the emergence of the as-a-Service era, motivated by the cloud computing drive, and explore the new members beyond the traditional cloud computing stack that have been developed over the last few years
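
    A toy illustration of the XCloudX naming test described above: does a model's name embody the term "cloud" surrounded by arbitrary alphanumeric text? The names below are examples only:

        import re

        XCLOUDX = re.compile(r"^[A-Za-z0-9]*cloud[A-Za-z0-9]*$", re.IGNORECASE)

        for name in ["Cloudant", "Cloudata", "MongoDB", "HBase"]:
            print(name, "->", bool(XCLOUDX.match(name)))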

    Understanding the Impact of Databases on the Energy Efficiency of Cloud Applications

    Cloud-based applications are used in nearly every industry, from finance, retail, education, and communication to manufacturing, utilities, and transportation. Despite their popularity and wide adoption, little is known about the energy footprint of these applications and, in particular, of their databases, which are the backbone of cloud-based applications. Reducing the energy consumption of applications is a major objective for society and will continue to be so for the foreseeable future. Two families of databases are currently used in cloud-based applications: relational and non-relational databases. Consequently, in this thesis, we study the energy consumption of three databases used by cloud-based applications: MySQL, PostgreSQL, and MongoDB, which are respectively relational, relational, and non-relational. We devise a series of experiments with three cloud-based applications (a RESTful multi-threaded application, DVD Store, and JPetStore). We also study the impact of cloud patterns on energy consumption, because databases in cloud-based applications are often implemented in conjunction with patterns like Local Database Proxy, Local Sharding-Based Router, and Priority Message Queue. We measure the energy consumption using the Power-API tool, which tracks the energy consumed at the process level by the variants of the cloud-based applications; a process-level estimate is more precise than an estimate for the software as a whole. We also measure the response time of the cloud-based applications to contrast it with energy efficiency, so that developers are aware of the trade-offs between these two quality indicators when selecting a database for their application. We report that the choice of database can reduce the energy consumption of a cloud-based application regardless of which of the three studied cloud patterns is implemented. We show that MySQL is the least energy consuming but the slowest of the three databases; PostgreSQL is the most energy consuming, faster than MySQL but slower than MongoDB; and MongoDB consumes more energy than MySQL but less than PostgreSQL and is the fastest of the three
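
    A minimal sketch of the response-time side of that trade-off: time an identical workload against each candidate database and compare. Energy would be sampled separately at the process level (the thesis uses Power-API for that); here SQLite stands in for a real database driver, and the workload is illustrative:

        import sqlite3, time

        def run_workload(conn, n=10_000):
            start = time.perf_counter()
            for i in range(n):
                conn.execute("INSERT INTO kv (k, v) VALUES (?, ?)", (i, str(i)))
            conn.commit()
            return time.perf_counter() - start

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE kv (k INTEGER, v TEXT)")
        print(f"workload completed in {run_workload(conn):.3f} s")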

    Disaster Data Management in Cloud Environments

    Facilitating decision-making in a vital discipline such as disaster management requires information gathering, sharing, and integration on a global scale and across governments, industries, communities, and academia. A large quantity of immensely heterogeneous disaster-related data is available; however, current data management solutions offer few or no integration capabilities and limited potential for collaboration. Moreover, recent advances in cloud computing, Big Data, and NoSQL have opened the door for new solutions in disaster data management. In this thesis, a Knowledge as a Service (KaaS) framework is proposed for disaster cloud data management (Disaster-CDM) with the objectives of 1) facilitating information gathering and sharing, 2) storing large amounts of disaster-related data from diverse sources, and 3) facilitating search and supporting interoperability and integration. Data are stored in a cloud environment taking advantage of NoSQL data stores. The proposed framework is generic, but this thesis focuses on the disaster management domain and data formats commonly present in that domain, i.e., file-style formats such as PDF, text, MS Office files, and images. The framework component responsible for addressing simulation models is SimOnto. SimOnto, as proposed in this work, transforms domain simulation models into an ontology-based representation with the goal of facilitating integration with other data sources, supporting simulation model querying, and enabling rule and constraint validation. Two case studies presented in this thesis illustrate the use of Disaster-CDM on the data collected during the Disaster Response Network Enabled Platform (DR-NEP) project. The first case study demonstrates Disaster-CDM integration capabilities by full-text search and querying services. In contrast to direct full-text search, Disaster-CDM full-text search also includes simulation model files as well as text contained in image files. Moreover, Disaster-CDM provides querying capabilities and this case study demonstrates how file-style data can be queried by taking advantage of a NoSQL document data store. The second case study focuses on simulation models and uses SimOnto to transform proprietary simulation models into ontology-based models which are then stored in a graph database. This case study demonstrates Disaster-CDM benefits by showing how simulation models can be queried and how model compliance with rules and constraints can be validated
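
    A rough sketch of the full-text search idea: file-style sources (PDF text, OCR output from images, simulation model exports) are stored as documents with an extracted text field, and a query scans that field. The field names are illustrative, and the thesis uses a NoSQL document data store rather than an in-memory list:

        documents = [
            {"source": "response_plan.pdf", "text": "evacuation routes for the downtown core"},
            {"source": "map_photo.png",     "text": "OCR: pumping station 3 offline"},
            {"source": "flood_model.xml",   "text": "simulation of river discharge at node N12"},
        ]

        def full_text_search(term):
            return [d["source"] for d in documents if term.lower() in d["text"].lower()]

        print(full_text_search("evacuation"))   # -> ['response_plan.pdf']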

    A Framework for Spatio-Temporal Trajectory Data Segmentation and Query

    Trajectory segmentation is a technique for dividing sequential trajectory data into segments. These segments are the building blocks of various applications for big trajectory data, so a system framework is essential to support trajectory segment indexing, storage, and querying. When the volume of segments exceeds the computing capacity of a single processing node, a distributed solution is needed. In this thesis, a distributed trajectory segmentation framework that includes a greedy-split segmentation method is created. The framework combines distributed in-memory processing with a graph-storage cluster. For fast trajectory queries, a distributed spatial R-tree index over the trajectory segments is applied. Using these indexes, the framework answers segment queries both from in-memory processing and from the graph storage. Based on this segmentation framework, two metrics measuring trajectory similarity and chance of collision are defined; these metrics are further applied to identify moving groups of trajectories. The study quantitatively evaluates the effects of data partitioning, parallelism, and data size on the system, identifies the bottleneck factors at the data-partitioning stage, and validates two mitigation solutions. The evaluation demonstrates that the distributed segmentation method and the system framework scale with the growth of the workload and the size of the parallel cluster
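
    To illustrate the general idea of greedy splitting (a generic error-threshold variant, not necessarily the thesis's exact greedy-split method): grow a segment point by point and close it when any interior point deviates too far from the chord between the segment's first and current points:

        import math

        def point_line_distance(p, a, b):
            (px, py), (ax, ay), (bx, by) = p, a, b
            dx, dy = bx - ax, by - ay
            if dx == dy == 0:
                return math.hypot(px - ax, py - ay)
            return abs(dy * px - dx * py + bx * ay - by * ax) / math.hypot(dx, dy)

        def greedy_split(trajectory, tolerance=1.0):
            segments, start = [], 0
            for end in range(2, len(trajectory)):
                deviation = max(point_line_distance(trajectory[i], trajectory[start], trajectory[end])
                                for i in range(start + 1, end))
                if deviation > tolerance:        # the chord no longer fits: close the segment
                    segments.append(trajectory[start:end])
                    start = end - 1
            segments.append(trajectory[start:])
            return segments

        track = [(0, 0), (1, 0.1), (2, 0.0), (3, 2.0), (4, 4.1), (5, 6.0)]
        print(greedy_split(track, tolerance=0.5))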

    Development of an Ontology-based Framework and Tool for Employer Information Requirements (OntEIR)

    The identification of proper requirements is a key factor in a successful construction project. Many attempts in the form of frameworks, models, and tools have been put forward to assist in identifying those requirements. In projects using Building Information Modelling (BIM), the Employer Information Requirements (EIR) are a fundamental ingredient in achieving a successful BIM project. As of April 2016, BIM was mandated for all UK government projects as part of the Government Construction Strategy, meaning that all central Government departments must only tender with suppliers that can demonstrate their capability of working with Level-2 BIM. One of the fundamental ingredients of achieving BIM Level-2 is the provision of full and clear Employer Information Requirements. As defined by PAS 1192-2, the EIR is a “pre-tender document that sets out the information to be delivered and the standards and processes to be adopted by the supplier as part of the project delivery process”; it also notes that the “EIR should be incorporated into tender documentation to enable suppliers to produce an initial BIM Execution Plan (BEP)”. Effective definition of EIRs can contribute to better productivity within the budget and time limits set and improve the quality of the built facility. EIRs also contribute to the information clients receive at the end of the project, enabling effective management and operation of the asset at lower cost, in an industry where typically 60% of the cost goes towards maintenance and operation. The aim of this research is to develop a better approach for producing a full and complete set of EIRs, one that ensures the client's information needs for the final model delivered by BIM are clearly defined from the very beginning of the BIM process. It also manages the collaboration between the different stakeholders of the project, allowing them to communicate and deliver to the client's requirements; in other words, an EIR that manages the whole BIM process, the information delivered throughout its lifecycle, and the standards to be adopted by the suppliers as an essential ingredient for the success of a BIM project. To achieve the aims and objectives set, a detailed and critical review of related work and issues was first conducted. The initial design of the OntEIR framework was then presented, introducing a new categorisation system for the information requirements and the elicitation of requirements from high-level needs using ontology. A research prototype of an online tool was developed as a proof of concept to implement and operationalise the research framework. The evaluation of the framework and prototype tool, via interviews and questionnaires, was conducted with both industry experts and inexperienced stakeholders. The findings indicate that the adoption of the framework and tool, in addition to the new categorisation system, could contribute towards the effective and efficient development of EIRs that provide a better understanding of the information requirements requested by BIM and support the production of a complete BIM Execution Plan (BEP) and a Master Information Delivery Plan (MIDP)
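
    A hypothetical fragment showing how requirement categories and the elicitation of a deliverable from a high-level need could be captured as an ontology (using rdflib); the class and property names are illustrative and not the actual OntEIR categorisation:

        from rdflib import Graph, Namespace, RDF, RDFS, Literal   # pip install rdflib

        ONT = Namespace("http://example.org/onteir#")
        g = Graph()

        g.add((ONT.InformationRequirement, RDF.type, RDFS.Class))
        g.add((ONT.TechnicalRequirement, RDFS.subClassOf, ONT.InformationRequirement))
        g.add((ONT.ManagementRequirement, RDFS.subClassOf, ONT.InformationRequirement))

        g.add((ONT.ReduceOperatingCost, RDF.type, ONT.HighLevelNeed))
        g.add((ONT.ReduceOperatingCost, ONT.elicits, ONT.AssetDataDeliverable))
        g.add((ONT.AssetDataDeliverable, RDFS.label, Literal("As-built asset data for handover")))

        for requirement, _, _ in g.triples((None, RDFS.subClassOf, ONT.InformationRequirement)):
            print(requirement)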